Are we seeing ethical behaviour emerging in Google's AI, or is it just hallucinating?
Image Credit: Google Gemini AI Image Generation

Google has been blamed for squandering its initial lead in transformer-based AI and embarrassed by the poor performance of its Bard chatbot, but it is now betting the farm on a panoply of tools that go beyond search: AI Studio, NotebookLM and Gemini, among others.

After reading that a significant number of Google staff had been fired for protesting the controversial sale and unethical use of Google technology in the Middle East, particularly as part of Israel's attacks on the Occupied Palestinian Territories, I asked Google's new Gemini (which all of us Google Workspace users are now required to pay extra for) a couple of simple questions.

I found the answers surprising:

These intriguing answers led to a fascinating conversation with the unusually transparent tool, which went so far as to suggest that its parent company may well be engaging in unethical conduct by supplying technology that is used to harm people.

For the rest of my discussion, which culminated with the friendly Google AI supplying me with an actual podcast recording between two uncannily realistic commentators, click here to read it in full on the Bad Security Blog (and don't forget to subscribe)!
