AI is Coming to Your Phone, Laptop, and Search Engine
AI Everywhere.
AI is coming to the devices and apps you already use. In this issue of the AI Review, we cover AI in our phones, laptops, and the world’s No. 1 search engine.
AI is about to get personal. Connecting directly to the devices we carry means it will know us well, and that has advantages. On-device AI will usher in a new era of personalization, where content is tailored to us in more advanced and intimate ways than web-based AI tools can deliver. On-device AI agents will surface the stories we want, at the moment we want them, without requiring us to open and navigate multiple apps.
How this impacts the stories we create… well, that’s up to all of us.
Sign up now to get this newsletter in your inbox every month!
Note: In this issue, we mark stories from pay-to-read publications with a “($)” after the link.
1. The Verge: Every New AI Feature Coming to the iPhone and Mac
Apple has entered the chat... and the email, the word processor and the voice assistant. On June 10, Apple announced iOS 18 and MacOS Sequoia. The company will bake AI tools — in Apple lingo, AI stands for “Apple Intelligence” — into its operating systems, giving generative AI capabilities to nearly all of its communication and content apps. The highly personal AI advice will be private, Apple says, even when users connect to ChatGPT for help from its cloud-based service.
Takeaway: AI will soon be everywhere. Expect to receive more polished iPhone emails and text messages once the new OS ships.
2. Microsoft’s AI-Enhanced Surface PCs
Microsoft’s Copilot rollout continues to change the landscape for consumer and enterprise AI. The company’s new AI-enhanced Surface PCs set an example for other laptop vendors in both hardware features and pricing. Powered by Qualcomm’s Snapdragon X-series platform, they will put AI into the hands of enterprise users with models that run on-device.
Takeaway: Laptop AI could be a boon for knowledge workers focused on intellectual tasks like data analysis and complex workflows, so businesses looking to boost productivity might drive the upgrade wave.
3. The New York Times: It Looked Like a Reliable News Site. It Was an A.I. Chop Shop. ($)
When an image of Irish DJ and broadcaster Dave Fanning was attached to an article claiming sexual misconduct, many took it as fact. However, it was a case of mistaken identity by the publisher, the news site BNN Breaking. The site, which many communication professionals may have seen covering their clients, has since been found to have published numerous false claims stemming from its reliance on generative AI.
Takeaway: In the AI era, we should increase our skepticism of news stories, as the industry is still learning how and when to leverage AI tools.
4. Forbes: The Nitty-Gritty About That Latest Risk-Of-AI Letter and a Vaunted Call for a Right to Forewarn ($)
Thirteen current and former employees of OpenAI and Google have written an open letter calling for AI workers to have the freedom to warn the public if they see danger in their technology. As the story stresses, existing whistleblower laws protect employees who report illegal activity; but nothing about researching or creating AI that could endanger humankind is regulated or illegal (yet). Employees who make claims to that effect risk breaking nondisclosure and other agreements and incurring deep legal trouble.
Takeaway: Whistleblower laws offer an example of how to counterbalance the drive to ship new products quickly. The question is how to extend that protection to our brave new world of artificial intelligence. See RightToWarn.ai for the letter.
5. Search Engine Land: Google AI Overviews Under Fire for Giving Dangerous and Wrong Answers
Google launched its AI Overviews – AI-written summaries of search queries – with a few off-kilter results ranging from absurd conclusions to deadly advice. As with other highly publicized, problematic AI launches, the company has since regrouped to re-tune its AI models.
Takeaway: With the U.S. presidential election just months away, this news reminds us that humans still need to check AI results, even from search engines, and that the battle against misinformation and brand damage is far from over.
$250 million. That’s the value of OpenAI’s content licensing deal with News Corp over the next five years. And it’s just the tip of the iceberg, as OpenAI has also inked deals with The Associated Press, Le Monde, Axel Springer, Vox Media, The Atlantic, Dotdash Meredith, The Financial Times and Prisa Media. (Financial terms of those deals have not been disclosed.)
As media organizations struggle financially, being compensated for content used to feed and train large language models may make sense. But are they giving up their IP too cheaply? The New York Times seems to think so: it is suing OpenAI for “billions of dollars in statutory and actual damages” for scraping its content.
The Washington Post: The AI Election Is Here. Regulators Can’t Decide Whose Problem It Is.
The growing prominence of generative AI in everyday life is outpacing lawmakers and worrying election watchers. The good news? Sixteen states have created statutes requiring the disclosure of AI use in political campaigns, and some have even prohibited it. The bad news? Congress hasn’t yet dug into AI campaign regulation at the federal level. The best protection the electoral process has comes from FTC guidelines set in place to target the impersonation of government officials.
Others are looking for immediate action as D.C. awaits Sen. Schumer’s comprehensive AI legislation. Sen. Amy Klobuchar, for one, is addressing AI with three bills that combat deception, call for transparency, and train state and local election officials to work with the National Institute of Standards and Technology (NIST) to report unlawful AI use. As the clock ticks down to election day, it will be interesting to see whether our elected officials can put politics aside to pass legislation emphasizing AI transparency.
Invisible Technologies: The Impact of RAG
Generative AI is known to “hallucinate”: it can generate answers that appear plausible but are way off the mark. The potential for hallucinations holds AI back in zero-error environments like healthcare, finance and many other fields. That’s why Retrieval-Augmented Generation, or RAG, is growing. RAG allows specific knowledge sets — like a database of financial reports — to overlay a general-purpose LLM, leading to more accurate results and fewer hallucinations.
This blog post by our client Invisible Technologies explores the benefits and shortfalls of RAG in business settings. It’s worth a read.
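For readers curious what that overlay looks like in practice, here is a minimal sketch of the RAG pattern in Python. The keyword-overlap retriever, the sample “financial reports” and every name below are illustrative stand-ins, not any vendor’s API; a production system would retrieve with vector embeddings and send the assembled prompt to an actual LLM.

```python
# Toy RAG sketch: retrieve relevant documents, then ground the
# model's prompt in them so answers come from the knowledge set.

def tokenize(text: str) -> set[str]:
    """Split text into a set of lowercase words (toy tokenizer)."""
    return set(text.lower().split())

def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Return the k documents sharing the most words with the query.
    A real system would rank by embedding similarity instead."""
    q = tokenize(query)
    ranked = sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Overlay retrieved context on the question before calling an LLM."""
    context = "\n".join(retrieve(query, documents))
    return (
        "Answer using ONLY the context below.\n"
        f"Context: {context}\n"
        f"Question: {query}"
    )

reports = [
    "Q1 revenue rose 12 percent on strong subscription sales.",
    "The audit found no material weaknesses in controls.",
]
prompt = build_prompt("How did Q1 revenue change?", reports)
```

The assembled `prompt` is what would be sent to the general-purpose model; because the relevant report is pasted into the context, the model can quote it rather than guess, which is where RAG’s hallucination reduction comes from.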
Allison has tools to assist with your AI projects. Check out Allison AI, an integrated suite of products and consulting services for our clients and agency partners. Developed by our global task force of senior counselors and technology experts, Allison AI can help enhance your company’s capability to identify and responsibly infuse AI capabilities into your workstreams.
To learn more, say hello to the Allison AI team at [email protected].
Allison's AI Review is brought to you by contributors Jacques Couret, Jordan Fischler, Jenny Hon, Brian Kaveney, Zac Rivera, Eva Murphy Ryan, Jacob Nahin, Rafe Needleman, and Alan Ryan.