Chatbots are Poisoned

It’s getting harder to trust chatbots. Whether a given answer is truthful or a hallucination, nobody knows, and fixing the problem is difficult because nobody fully understands how these models work. Last month, Google CEO Sundar Pichai admitted that Google doesn't fully understand how its AI chatbot Bard arrives at certain responses.

Now imagine how much harder the hallucination problem becomes when the datasets that AI chatbots are trained on are themselves under attack. The rise of data poisoning, in which malicious actors inject false information into training datasets, has aggravated the issue.

There are a few ways to do it. One is to target expired domains: many web-scale datasets store only image URLs, and when a domain hosting those images expires, a malicious actor can buy it and substitute the original images with malicious content, thereby tainting the training dataset. This method is called split-view poisoning.
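A natural defence against split-view poisoning, sketched here as an illustrative idea rather than a production recipe, is to record a cryptographic hash of each image when its URL is first collected and to discard anything whose content no longer matches at download time. The short Python sketch below shows that check; the dataset entry format, field names, URL, and hash value are all hypothetical.

```python
import hashlib
import urllib.request

# Hypothetical dataset entries: each records the SHA-256 hash of the image
# as it looked when the URL was first collected.
dataset_entries = [
    {
        "url": "https://example.com/cat.jpg",  # illustrative URL
        "sha256": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # illustrative hash
    },
]

def fetch_if_unmodified(entry, timeout=10):
    """Download an image and keep it only if its hash still matches the
    hash recorded when the dataset was built."""
    try:
        with urllib.request.urlopen(entry["url"], timeout=timeout) as resp:
            data = resp.read()
    except OSError:
        return None  # unreachable URL: drop it rather than trust it
    if hashlib.sha256(data).hexdigest() != entry["sha256"]:
        return None  # content changed since collection: possible split-view poisoning
    return data

# Keep only images that still match their recorded hashes.
clean_images = [img for e in dataset_entries
                if (img := fetch_if_unmodified(e)) is not None]
```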

There’s another method called front-running poisoning, best understood through Wikipedia. Wikipedia is a major data source for chatbots, and malicious actors can target it by inserting corrupt edits. Those edits survive only briefly, as moderators keep rejecting non-verified information, but if an edit lands in the window just before a dataset snapshot is taken, it still makes it into the training data.
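One countermeasure often suggested for front-running poisoning is to ignore revisions made too close to the snapshot time, since a fresh malicious edit has not yet had a chance to be reverted by moderators. The Python sketch below illustrates the idea; the revision records, timestamps, and 24-hour settling window are hypothetical assumptions, not parameters from the research.

```python
from datetime import datetime, timedelta

# Hypothetical revision history: (timestamp of edit, article text at that revision).
revisions = [
    (datetime(2023, 5, 1, 9, 0), "Well-sourced article text."),
    (datetime(2023, 5, 20, 23, 55), "Malicious edit slipped in minutes before the dump."),
]

snapshot_time = datetime(2023, 5, 21, 0, 0)
settle_window = timedelta(hours=24)  # assumed time moderators need to catch bad edits

def latest_settled_revision(revs, snapshot, window):
    """Return the newest revision that is at least `window` old at snapshot time,
    i.e. one that moderators have had a chance to review and revert."""
    settled = [(ts, text) for ts, text in revs if snapshot - ts >= window]
    return max(settled, default=None)

print(latest_settled_revision(revisions, snapshot_time, settle_window))
# -> keeps the May 1 revision and skips the last-minute edit
```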

Recently, researchers published a report finding that as few as 100 poisoned examples are enough to manipulate model behaviour across a range of textual contexts, producing consistently negative sentiment and disrupting outputs on numerous unrelated tasks.

The report also raises concerns about defence mechanisms that rely on data filtering or reducing model capacity: these approaches provide only limited protection and often lead to decreased accuracy at test time.

This raises a pertinent question: as long as this threat exists, will chatbots ever be foolproof? And will data poisoning become the next big threat to language models?

Read the full story here.


Star Wars Inspires

From humanoid robots to hovering speeders, elements once confined to the realm of imagination are edging closer to reality, and much of the inspiration traces back to Star Wars.

Robotics and AI advancements, as showcased by Elon Musk's 'Optimus' and Boston Dynamics' agile Atlas, demonstrate remarkable progress. California-based Aerofex's Aero-X hovercraft and Malloy Aeronautics' speedy Hoverbike evoke the thrill of cinematic speeders. Innovative initiatives like Hungary's Flike tricopter and bionic prosthetics, including the groundbreaking Luke arm, highlight the quest for eco-friendly transport and enhanced mobility. The bridging of sci-fi and reality unfolds, reminding us that in our journey forward, as Yoda wisely said, "Do or do not. There is no try."

Read the full story here.


Losses Don’t Matter

OpenAI's losses have doubled to $540 million since it began developing ChatGPT and similar products. The company has invested substantial amounts in computation, product development, and retaining top talent. At first glance, this may look like a colossal failure of OpenAI's AI endeavours. However, that perspective does not capture the whole picture.

Economists argue that it typically takes 15-20 years for a general-purpose technology to have a significant impact on productivity. Venture capitalists (VCs) are well aware of this fact and continue to invest heavily in the AI market, despite potential regulatory challenges that could hinder its rapid progress.

Read the full story here.


Bluesky Goes Crazy

Jack Dorsey's new decentralised social media platform, Bluesky, currently accessible through an invite-only system, has run into significant issues. Unlike Twitter, the platform has no centralised moderation, which has led to some peculiar occurrences; instead, users themselves are given the ability to filter content, including explicit material, violent content, and politically charged hate groups, from their feeds.

The unconventional activities taking place on Bluesky have sparked a fear of missing out (FOMO) among prospective users, resulting in some being willing to pay up to $400 for a platform membership. Furthermore, concerns have been raised regarding the concept of decentralisation in social media and the potential negative consequences that can arise without adequate regulation.
