Cheese Sticking, AI Knows?

Google AI Overview’s latest reply to ‘cheese not sticking to pizza’, suggesting users add glue to the sauce, has taken the internet by storm, making users question the capabilities of Google’s generative AI search experience.

Off to a rocky start

This has happened before. During its initial lab-testing phase, when the feature was known as the Search Generative Experience (SGE), it told users to ‘drink a couple of litres of light-coloured urine in order to pass kidney stones’ and claimed that ‘geologists recommend humans eat one rock per day’.

A few months ago, Google also upset Indian IT minister Rajeev Chandrasekhar after Gemini expressed a biased opinion about Prime Minister Narendra Modi.

Image problem: The tech giant also had to temporarily suspend Gemini’s image-generation feature after it depicted people of colour in Nazi-era uniforms, producing historically inaccurate and insensitive images.

Who is to blame?

German cognitive scientist and AI researcher Joscha Bach believes this bias wasn't hardcoded but inferred through the system's interactions and prompts.

He said that Gemini's behaviour reflects the social processes and prompts fed into it rather than being purely algorithmic. According to him, the model developed opinions and biases based on the input it received, even generating arguments to support its stance on issues like meat-eating or antinatalism.

He highlighted the potential of such models for sociological study, as they possess a vast understanding of internet opinions. Instead of focusing solely on cultural conflicts, he suggested viewing these AI behaviours as mirrors of society, urging a deeper understanding of our societal condition.

In most scenarios, the responsibility for misinformation generated by AI largely falls on content creators, media platforms, and users rather than the technology itself. Essentially, human actors play a significant role in perpetuating misinformation, whether through its creation, dissemination, or a failure to verify its accuracy.

This is precisely what we pointed out a few months ago in our article ‘You Are to be Blamed for ChatGPT’s Flaws’, which stressed that the cycle of misinformation is mostly driven by human inputs and interactions, not just AI capabilities.

That also explains why OpenAI has been busy partnering with news agencies.

Using AI hallucination as a feature?

At the same time, experts like Yann LeCun believe this is an inherent feature of auto-regressive large language models (LLMs).

As long as LLMs exist, these behaviours will persist until companies come up with creative solutions, such as citing original sources directly to ensure accuracy. Perplexity AI employs this strategy, and its founder, Aravind Srinivas, has openly mocked Google for its mishaps.
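For illustration, here is a minimal sketch of what source-grounded answering can look like: retrieve passages first, then confine the model to those passages and have it cite them inline. The `search_index` and `llm_complete` callables and the prompt format are hypothetical stand-ins, not Perplexity's actual pipeline.

```python
# Minimal sketch of citation-grounded answering (retrieval-augmented generation).
# `search_index` and `llm_complete` are hypothetical stand-ins, not a real API.

def answer_with_citations(question, search_index, llm_complete):
    # 1. Retrieve a handful of source passages relevant to the question.
    passages = search_index.top_k(question, k=5)  # -> [(url, text), ...]

    # 2. Build a prompt that confines the model to the retrieved text.
    sources = "\n".join(
        f"[{i}] {url}\n{text}" for i, (url, text) in enumerate(passages, start=1)
    )
    prompt = (
        "Answer the question using ONLY the numbered sources below. "
        "Cite each claim like [1]. If the sources do not contain the answer, "
        "say you don't know.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}\nAnswer:"
    )

    # 3. The output now carries inline citations a reader can check against
    #    the original pages, which is the accuracy mechanism described above.
    return llm_complete(prompt)
```

Grounding the answer in retrieved text doesn't eliminate hallucination, but it gives readers a direct path back to the original source for verification.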

“That’s not a huge problem if you use them as writing aids or for entertainment purposes. Making them factual and controllable will require a major redesign,” shared LeCun.
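One way to formalise this view (a sketch of the error-compounding argument LeCun has made publicly, with e treated as a fixed, independent per-token error probability, both simplifying assumptions): if an autoregressive model can drift irrecoverably off-track with probability e at each token it emits, the chance that an n-token answer stays entirely on-track is

```latex
P(\text{on-track}) = (1 - e)^n
```

which decays exponentially with answer length, so even a small per-token error rate makes long, unconstrained generations unreliable.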

Some, like OpenAI chief Sam Altman, see AI hallucination as “creativity”, while others believe hallucinations might even help drive new scientific discoveries.

The same goes for Grok: Elon Musk is looking to include a ‘Fun Mode’ that offers users a humorous take on the news.


Generative AI is democratising observability

In our latest episode of Tech Talks, AIM got in touch with New Relic CEO Ashan Willy to discuss how generative AI is transforming the observability space.


Microsoft Now Has Both Kevin and Devin

Microsoft already has a strong presence in the developer ecosystem, from Visual Studio Code to GitHub Copilot Workspace. Now, through its recent partnership with Cognition Labs, the tech giant is bringing Devin, the autonomous AI software agent, to enterprise customers, turning Kevin-like developers into ‘Super Kevin’.

Read the full story here.

Mitch Mitchem

Top Requested Speaker on AI for Personal & Business | CEO of HIVE Interactive | AI and Human Augmented Intelligence Expert. Learning, Tech and Entertainment Disruptor with a Focus on The Future of Humanity.

4 months

All of this is based on fake posts about fake responses from Gemini. It's a completely fake story that the mainstream media picked up on; now that's funny. It was fact-checked by multiple outlets. "The story about Gemini, an AI search feature, advising people to put glue on their pizza is indeed a fabricated and humorous anecdote that has circulated online. The claim gained traction as an example of AI errors, often cited in discussions about the potential pitfalls and unexpected outputs of AI systems. In reality, no AI or Gemini feature officially recommended such an action. The story appears to be a satirical take on the challenges and unexpected results that can sometimes arise from AI-generated content. It serves as a reminder of the importance of context and critical evaluation of AI outputs, particularly when dealing with sensitive or potentially dangerous advice. The widespread sharing of this fake story underscores how easily misinformation or jokes can be mistaken for truth, especially in the realm of AI and technology."
