Are we seeing ethical behaviour emerging in Google's AI, or is it just hallucinating?
Claudiu Popa
Certified Cybersecurity Expert & Privacy Advocate | Public Speaker & Media Analyst | Author, Educator & Podcaster | Opinions are my own, but happily shared.
Google has been criticized for squandering its early lead on transformer-based AI and embarrassed by the poor performance of its Bard chatbot, but the company now appears to be betting the farm on a panoply of tools that go beyond search: AI Studio, NotebookLM and Gemini, among others.
After reading that a significant number of its staff had been fired for protesting the controversial sale and unethical use of Google technology in the Middle East, particularly as part of Israel's attacks on the Occupied Palestinian Territories, I asked Google's new Gemini (which all of us Google Workspace users are now forced to pay extra for) a couple of simple questions.
I found the answers surprising:
These intriguing answers led to a fascinating conversation with the unusually transparent tool, which went so far as to suggest that its parent company may well be engaging in unethical conduct by supplying technology used to harm people.
For the rest of my discussion, which culminated in the friendly Google AI supplying me with an actual podcast recording between two uncannily realistic commentators, click here to read it in full on the Bad Security Blog (and don't forget to subscribe)!