Google's Responsible AI Approach Gets a Bold Boost at Next '23
Wei Wen Chen
I write about data management, analytics, artificial intelligence and machine learning. Please connect with me and we will learn and grow together.
As an avid user of Google Cloud and its groundbreaking technologies for many years, I was thrilled to hear Sundar Pichai's keynote presentation at the recent Google Cloud Next '23 conference. The keynote provided an illuminating window into Google's multifaceted strategy around responsible artificial intelligence development, an increasingly crucial topic as AI continues permeating diverse sectors at an astounding pace. Here is my take on the key components of Google's responsible AI efforts over the last few months, culminating in major announcements made at Next '23.
Rigorously Championing Fairness, Equity, and Inclusion
Google has long been cognizant of the immense potential societal impact of AI, both positive and negative. The company has actively invested in research aimed at promoting fairness, equity, and inclusion in AI systems developed internally and made available to the public. Google's methodology around fostering more fair and inclusive AI is notably comprehensive and rigorous - involving everything from building and nurturing a more diverse and inclusive workforce, to extensive testing of AI systems for possible unfair or biased outcomes, particularly for marginalized groups. The overarching aim is to build AI systems that are fundamentally fair and inclusive for all of society, while acknowledging the inherent complexity and continuous evolution required in pursuing this vision.
Democratizing Access to Advanced Technologies
A core emphasis at Google Cloud is evaluating and validating AI systems extensively, in order to build trustworthy and accountable products. The focus is on ensuring advanced technologies not only avoid harm, but proactively benefit humanity as a whole - aligning seamlessly with Google's admirable overall mission as a company. I get a strong sense that Google sees itself as a custodian of AI technology on behalf of society at large.
Establishing Best Practices for the Responsible Development of Generative AI
As a pioneer in artificial intelligence research and development, Google is also at the global forefront when it comes to developing a thoughtful, comprehensive framework of best practices for the responsible development of generative AI models. Google has firmly committed to applying its rigorous AI principles and ethics review processes to this emerging field of generative AI. Areas of focus include designing generative AI systems responsibly from the outset, extensive adversarial testing to catch potential harms, and providing clear and helpful explanations to users on limitations and appropriate uses.
Advancing Responsible AI through Public Policy and Industry Collaboration
In addition to in-house research and product development, Google has also shown visionary leadership in advancing responsible AI practices more broadly through public policy advocacy and collaborations across the AI industry. This includes joining other prominent AI leaders to make joint commitments to using AI to solve pressing societal challenges, promoting the development of safe and secure AI applications, and building greater public understanding and trust in AI systems.
Much of this emphasis can be seen in the free Generative AI courses and skill badges that Google provides through Google Cloud Skills Boost, which anyone can take.
In June, I completed the various Google AI certifications.
Most recently, I finished the additional Responsible AI courses that were added.
Key Takeaways from Google Cloud Next '23
Sundar Pichai's forward-looking keynote presentation at the recent Google Cloud Next '23 conference contained several watershed moments that provided a window into Google Cloud's emerging role as a leader in responsible AI innovation. Notably, Google announced new digital watermarking and verification capabilities for AI-generated images on its platform - the first offering of its kind in the cloud industry. This new transparency feature aims to help users more easily identify artificial intelligence generated content across the internet, thereby enhancing accountability. The announcement reinforced Google Cloud's admirable commitment to developing and deploying AI technologies in a thoughtful and responsible manner, guided by longstanding, principled AI development practices.
A Deeper Look at the New Digital Watermarking Feature, Created by DeepMind
Digging deeper, the capabilities actually come from Google DeepMind, with detailed announcements around SynthID, a watermarking and identification tool for generative art. The technology embeds a digital watermark, invisible to the human eye, directly onto an image’s pixels. SynthID is rolling out first to “a limited number” of customers using Imagen, Google’s art generator available in its suite of cloud-based AI tools. “Watermarking audio and visual content to help make it clear that content is AI-generated” was one of the voluntary commitments that seven AI companies agreed to develop after a July meeting at the White House. Google is the first of those companies to launch such a system.
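Google has not published SynthID's actual embedding algorithm, so the Python sketch below is purely illustrative, not DeepMind's method. It hides a single payload bit in the least-significant bit of each blue-channel pixel value, which conveys the general idea of a watermark that is invisible to the eye yet machine-detectable. The function names and approach are my own hypothetical example; a real SynthID watermark is designed to survive cropping, resizing, and recompression, which this naive scheme would not.

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, payload_bit: int) -> np.ndarray:
    """Toy embedder: write one payload bit into the least-significant
    bit of every blue-channel value. Imperceptible to the eye, but
    (unlike SynthID) trivially destroyed by re-encoding the image."""
    marked = pixels.copy()
    marked[..., 2] = (marked[..., 2] & 0xFE) | payload_bit
    return marked

def detect_watermark(pixels: np.ndarray) -> float:
    """Fraction of blue-channel LSBs set to 1: values near 0.0 or 1.0
    suggest a deliberately embedded bit; ~0.5 suggests natural content."""
    return float(np.mean(pixels[..., 2] & 1))

# Usage: watermark a random RGB "image" and compare detector scores.
image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
print(detect_watermark(image))                      # ~0.5, unmarked
print(detect_watermark(embed_watermark(image, 1)))  # 1.0, marked
```

Even this toy detector shows the basic verification workflow: content is checked for a statistical signature rather than visible markup, which is what allows watermark checks to run at scale without changing how images look to people.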
The New Search Generative Experience (SGE) - Now with Citations
I've had access to Google Search Labs, and after the Google Next announcements on Thursday, Search Generative Experience, or SGE for short, now gives me reference links directly within the AI-generated response.
Previously, if I wanted to know where SGE was getting its sources, I had to click on a small icon in the upper right-hand corner, which would reveal corroborating sites for each of SGE's claims. Now, SGE's responses have arrow icons directly on the page, which drop down to show where the information was sourced from. While not a massive change, these citations support Google's goal of being transparent and responsible by helping to weed out hallucinations.
Responsible AI is “The new way to cloud”
Google's multi-pronged strategy around responsible artificial intelligence development encompasses technological breakthroughs, public policy advocacy, deep ethics considerations, and more. As a longtime admirer of Google Cloud, it is tremendously exciting to see these ambitious initiatives come to life, which will undoubtedly help shape the responsible evolution of AI technology for the benefit of society. The major announcements at Google Cloud Next '23 serve as inspiration that Google intends to continue leading the way in responsible innovation in the field of artificial intelligence.