Generative AI hype has reached a fever pitch in the news cycle over the past couple of months. Companies of all kinds are asking the important questions: What is it? How does it work? And, perhaps most importantly, what can it do for my business? The hype comes with a touch of confusion, as conflicting notions about what these new tools can do compete for attention. In today’s newsletter, Philip Moyer, Google Cloud’s global vice president of artificial intelligence, breaks down some common misconceptions about generative AI and how you can use this exciting new technology to accomplish your goals.
Generative AI misconceptions with Philip Moyer, Google Cloud’s VP of AI
With more than three decades of experience in technology leadership, including at Microsoft and AWS, Philip Moyer has been driving important conversations around innovation for some time. In his latest role as global vice president for AI business and commercialization at Google Cloud, he is at the forefront of how generative AI, and AI of all kinds, is transforming organizations today.
I’ve been in a lot of meetings this year with Fortune 500 companies discussing how they want to bring generative AI into their business. It’s hard to think of a time when enterprises have shown so much interest in a new technology so quickly.
But matching this excitement is a lot of hype, so I’ve had to debunk as many ideas as I’ve put forward. I want to share some of these misconceptions to help demystify this fast-evolving technology.
Misconception 1: One model to rule them all
- It is a myth that a single defining algorithm captures all use cases.
- The nature of generative AI, especially for enterprises, suggests we will be looking at thousands of models or more. Some models are good at summarization, others are good at bulleted lists, others are good at reasoning, and so on. Industries, lines of business, and companies have very different editorial tones for how they express knowledge. All of these factors should be considered when choosing your models.
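The idea of matching models to tasks can be pictured as a simple routing table: given the kind of work a request needs, pick the model best suited to it. The sketch below is purely illustrative; the model names and the `route_request` helper are assumptions for this example, not Google Cloud APIs.

```python
# Illustrative model router: different tasks go to different models.
# All model names here are hypothetical placeholders.

TASK_MODELS = {
    "summarize": "summarization-model",
    "bullets": "list-generation-model",
    "reason": "reasoning-model",
}

def route_request(task: str, default: str = "general-model") -> str:
    """Pick a model suited to the task; fall back to a general model."""
    return TASK_MODELS.get(task, default)
```

In practice the routing key could also encode industry or line-of-business tone, which is exactly why a single model rarely covers every use case.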
Misconception 2: Bigger is better
- Generative AI models consume large amounts of computing resources — the larger the model, the more it costs to query.
- Your enterprise model doesn’t need to know the words to every Taylor Swift song to generate a summary report on next quarter’s sales goals by region. Context is king, and you need to be selective about just how much IQ a model requires for your use case.
Misconception 3: Just me and my bot
- Just as past “bring your own device” and “bring your own app” movements raised “shadow IT” concerns, some financial institutions I work with have shut down access to publicly available generative AI for fear that models could leak proprietary information.
- Let’s say a bank is exploring a merger for a large industrial client, and someone in the M&A department queries a public model asking, “What are some good takeover targets for XYZ Industries?” The model is now trained to answer this question... for anyone. Most customers I speak to are worried about the security of both the questions they ask of generative AI and the output a model produces. And they probably should be.
Misconception 4: No questions asked
- The accuracy and reliability of generative AI have been among the biggest topics around the new technology. These algorithms are designed to give an answer no matter what, so sometimes they give one that isn’t true.
- It’s essential for enterprise customers to use models and a technology architecture grounded in the factuality of their own data. Most generative AI models punt on this enterprise data problem. It is essential, especially in regulated industries, not to punt.
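One common pattern for grounding a model in your own data is retrieval augmentation: fetch the relevant enterprise documents first, then instruct the model to answer only from them. The sketch below is a minimal, assumed illustration of that pattern; naive keyword overlap stands in for a real embedding-based search, and no actual model is called.

```python
# Hedged sketch of retrieval-augmented grounding.
# Retrieval here is naive word overlap; production systems would use
# embeddings and send the prompt to a real model.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_grounded_prompt(query: str, documents: list[str]) -> str:
    """Instruct the model to answer only from the retrieved context."""
    context = "\n".join(retrieve(query, documents))
    return ("Answer using ONLY the context below; if the answer is not "
            "in the context, say you don't know.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")
```

The key point is the instruction wrapped around the retrieved context: the model is asked to decline rather than invent an answer, which is what "grounded in the factuality of their own data" means in practice.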
Misconception 5: Ask me any question
- Enterprise customers have many sources of information: pricing, HR, legal, finance, etc. I don’t know any company that allows open access to all of this information.
- Even so, there is a growing desire among some executives to build all information into a large language model so it can conceivably answer any question, whether at the organizational level or the global level. As a company’s leaders think through how to keep their information private and factual, they quickly realize the next step: How do I manage who can ask questions of my models, and at what level?
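Managing who can ask what often comes down to mapping roles to the corpora a question is allowed to touch, and checking that mapping before any retrieval or generation happens. The roles and corpus names below are assumptions invented for this sketch, not a real product's API.

```python
# Illustrative access gate: check a user's role against the corpus
# their question would query, before the model ever sees the question.
# Role and corpus names are hypothetical.

ROLE_CORPORA = {
    "hr_analyst": {"hr"},
    "finance_analyst": {"finance", "pricing"},
    "executive": {"hr", "finance", "pricing", "legal"},
}

def allowed_corpora(role: str) -> set[str]:
    """Corpora a role may query; unknown roles get nothing."""
    return ROLE_CORPORA.get(role, set())

def authorize_query(role: str, corpus: str) -> bool:
    """Return True only if the role may query the requested corpus."""
    return corpus in allowed_corpora(role)
```

Denying by default for unknown roles mirrors how most companies already treat their pricing, HR, legal, and finance systems.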
Macy’s is one of many retailers eagerly exploring the potential for generative AI to enhance the shopping experience for customers. Bennett Fox-Glassman, senior vice president for customer journey at Macy’s, described his excitement about this generative AI technology:
“Customers have been shopping at Macy’s for generations. Being able to deliver 360-degree personalization and contextual recommendations will help ensure that Macy’s is still providing future generations of shoppers with a seamless, exceptional experience. We’ve already realized an increase in revenue per visit and conversion rates, have had great success using Google Cloud’s AI technology, and are looking forward to exploring how these latest announcements bring together natural language processing and generative AI capabilities to deliver next-gen search and conversational experiences for our customers.”
This perfectly sums up someone using generative AI without hype: looking for the lowest-cost model, with the best facts, and delivering just what the user intends.
For more insights on generative AI, see the full version of this article here.
This week in AI at Google
- Garden of AI-den: Among our recent AI announcements, few have been more exciting to developers than the reveal of Model Garden and Gen AI Studio for Vertex AI. Think of these as your one-stop shop for browsing, tweaking, and deploying foundation models, from Google or the larger AI ecosystem — all within your established Vertex AI workflow.
- Model citizens: If BigQuery is your insight-wrangling tool of choice, you’ll be glad to know it’s getting an AI-flavored upgrade in the form of a brand-new inference engine. Use our state-of-the-art pretrained models or import your own from your secret modeling lair, right to where your data lives.
- Chips ahoy: One last bit from the Data Cloud & AI Summit: we’ve announced an expanded partnership with NVIDIA that includes being the first cloud to offer their AI-optimized L4 Tensor Core GPUs, supported by new open-source integrations, Google Cloud Marketplace offerings, and even sustainability assistance. It’s a lot, and you can read all about it here.
- Contact lens: One of the most immediately useful enterprise applications for AI is customer support, which is why Google Cloud has been hard at work for a while on Contact Center AI. This post explains how you can use the new Gen App Builder product to help build even better CCAI experiences.
- Revving up devs: Google Cloud announced a new AI-focused partnership with developer tooling company Replit. Replit’s AI coding assistant tools will run on Google Cloud services and foundation models to level up the productivity of more than 20 million developers. That’s pretty darn cool.
- Chatter box: Each week the Google Cloud team creates lots of additional content to give context around the fast-moving AI space. If you have limited time this week, check out this Twitter space discussing our recent Generative AI announcements, Stephanie Wong’s latest short video on “gen apps,” and Phil Venables explaining our approach to AI in cybersecurity.