GenAI Weekly — Edition 14
Your Weekly Dose of Gen AI: News, Trends, and Breakthroughs
Stay at the forefront of the Gen AI revolution with Gen AI Weekly! Each week, we curate the most noteworthy news, insights, and breakthroughs in the field, equipping you with the knowledge you need to stay ahead of the curve.
Microsoft introduces the Copilot+ PCs
Copilot+ PCs have a few new tricks on offer to set them apart as AI PCs. Their primary accomplishment is running Microsoft's Copilot AI assistant locally — all other currently available PCs with Copilot run it via the cloud. The AI is activated with the Copilot key, which can only be re-mapped with third-party programs. Copilot+ brings with it a host of interesting features, including Recall and Cocreate.
Recall is a snapshot feature for Copilot+ PCs that remembers your work as you go, taking snapshots of applications and screens and remembering everything you've seen in case you forget where you saw it. Users can scrub through a timeline of the PC's recorded history, or search for keywords to find lost information or files. For safety, Recall can be disabled or paused for certain applications and programs, but likely not disabled completely, as it's integrated into the OS on Copilot+ devices.
Cocreate is an AI imaging tool that attempts to AI-upgrade your art as you draw it, with varying levels of "imagination." From subtly adding shadows or reflections to a beach scene drawn in Paint, to fully Van Gogh-ifying your hand-drawn giraffe, Cocreate attempts to help out artists or artist-wannabes. With much fervor around AI art in particular, adding this tool is an interesting statement by Microsoft.
Windows Studio Effects is a series of webcam filters that allow you to blur your background or add special effects to any program that accesses your camera. This is not unlike Nvidia Broadcast or XSplit Vcam — except it will leverage your laptop NPU to do the heavy lifting.
See also: Introducing Copilot+ PCs from the Microsoft blog, Windows Recall sounds like a privacy nightmare
Small Language Models (SLMs) like Microsoft's Phi-3 can run locally on phones or PCs without big compromises
When ChatGPT was released in November 2022, it could only be accessed through the cloud because the model behind it was downright enormous.
Today I am running a similarly capable AI program on a Macbook Air, and it isn’t even warm. The shrinkage shows how rapidly researchers are refining AI models to make them leaner and more efficient. It also shows how going to ever larger scales isn’t the only way to make machines significantly smarter.
The model with ChatGPT-like wit and wisdom is called Phi-3-mini. It's part of a family of smaller AI models recently released by researchers at Microsoft. Although it's compact enough to run on a smartphone, I tested it by running it on a laptop and accessing it from an iPhone through an app called Enchanted that provides a chat interface similar to the official ChatGPT app.
In a paper describing the Phi-3 family of models, Microsoft’s researchers say the model I used measures up favorably to GPT-3.5, the OpenAI model behind the first release of ChatGPT. That claim is based on measuring its performance on several standard AI benchmarks designed to measure common sense and reasoning. In my own testing, it certainly seems just as capable.
These models are a big win for privacy. Just when we thought mobile hardware was good to run for years to come, the need for built-in NPUs (neural processing units) has arrived.
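Why a model of this class fits on a phone comes down to simple arithmetic: weight storage is roughly parameter count times bytes per parameter, and quantization shrinks the latter. The sketch below uses Phi-3-mini's 3.8-billion-parameter count from Microsoft's paper; the precision levels are illustrative, and real runtimes add overhead for activations and the KV cache on top of this.

```python
# Back-of-the-envelope weight-footprint estimates for a
# 3.8B-parameter model (Phi-3-mini's size) at common precisions.
# Illustrative only; runtimes add activation and KV-cache overhead.

PARAMS = 3.8e9  # Phi-3-mini parameter count, per the Phi-3 paper

BYTES_PER_PARAM = {
    "fp16": 2.0,  # half-precision weights
    "int8": 1.0,  # 8-bit quantization
    "q4": 0.5,    # 4-bit quantization, common for on-device runtimes
}

def weight_footprint_gb(params: float, bytes_per_param: float) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return params * bytes_per_param / 1e9

for name, bpp in BYTES_PER_PARAM.items():
    print(f"{name}: ~{weight_footprint_gb(PARAMS, bpp):.1f} GB")
```

At 4-bit precision the weights land under 2 GB, which is why a phone or a fanless MacBook Air can hold the whole model in memory.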
Sounds like Scarlett
Johansson said she had been contacted by OpenAI CEO Sam Altman in September 2023 about the company hiring her to provide the voice for ChatGPT 4.0. She said she declined for “personal reasons.”
“When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference,” Johansson said. “Mr. Altman even insinuated that the similarity was intentional, tweeting a single word ‘her’ — a reference to the film in which I voiced a chat system, Samantha, who forms an intimate relationship with a human.”
According to Johansson, two days before OpenAI staged the ChatGPT 4.0 demo, Altman contacted her agent, “asking me to reconsider. Before we could connect, the system was out there.”
Also read: Sam Altman Is Showing Us Who He Really Is, Leaked OpenAI documents reveal aggressive tactics toward former employees
However, it seems as though OpenAI did not deliberately try to get a knock-off of Johansson's voice, based on a timeline they've published. You can also read more on the subject in this article by Mark Wilson writing for Tech Radar:
OpenAI's high-profile run-in with Scarlett Johansson is turning into a sci-fi story to rival the movie Her, and now it's taken another turn, with OpenAI sharing documents and an updated blog post suggesting that its 'Sky' voice in the ChatGPT app wasn't a deliberate attempt to copy the actress's voice.
OpenAI preemptively pulled its 'Sky' voice option in the ChatGPT app on May 19, just before Scarlett Johansson publicly expressed her "disbelief" at how "eerily similar" it sounded to her own (in a statement shared with NPR). The actress also revealed that OpenAI CEO Sam Altman had previously approached her twice to license her voice for the app, and that she'd declined on both occasions.
But now OpenAI is on the defensive, sharing documents with The Washington Post suggesting that its casting process for the various voices in the ChatGPT app was kept entirely separate from its reported approaches to Johansson.
The documents, recordings and interviews with people involved in the process suggest that "an actress was hired to create the Sky voice months before Altman contacted Johansson", according to The Washington Post.
The agent of the actress chosen for the Sky voice also apparently confirmed that "neither Johansson nor the movie “Her” were ever mentioned by OpenAI" during the process, nor was the actress's natural speaking voice tweaked to sound more like Johansson.
OpenAI's lead for AI model behavior, Joanne Jang, also shared more details with The Washington Post on how the voices were cast. Jang stated that she "kept a tight tent" around the AI voices project and that Altman was "not intimately involved" in the decision-making process, as he was "on his world tour during much of the casting process".
Systematically Improving Your RAG
By the end of this post, you'll have a clear understanding of my systematic approach to improving RAG applications for the companies I work with, and the key areas it covers.
RAG isn't just one single technique — it's a collection of techniques that is still evolving. This is a good list of things one should be doing for a decent-quality system.
Instagram co-founder joins Anthropic as Chief Product Officer
We're excited to announce that Mike Krieger has joined Anthropic as our Chief Product Officer. Mike will oversee Anthropic's product engineering, product management, and product design efforts as we work to expand our suite of enterprise applications and bring Claude to a wider audience.
Mike brings deep experience building and scaling innovative products and user experiences, most notably as the co-founder and CTO of Instagram. During his tenure, he grew the engineering team to over 450 people and helped scale the platform to more than a billion users.
More recently, Mike spent the past three years building Artifact, a personalized news app, prior to its acquisition by Yahoo. With deep expertise across the product development lifecycle, from hands-on coding to product vision and leadership, Mike is uniquely suited to take Anthropic's product efforts to the next level as the company continues its rapid growth.
We've discussed here how chatbots aren't the best user interface in the world. It'll be interesting to see how this hiring coup helps Anthropic.
Google Search Is Now a Giant Hallucination
Google tested out AI overviews for months before releasing them nationwide last week, but clearly, that wasn’t enough time. The AI is hallucinating answers to several user queries, creating a less-than-trustworthy experience across Google’s flagship product. In the last week, Gizmodo received AI overviews from Google that reference glue-topped pizza and suggest Barack Obama was Muslim.
The hallucinations are concerning, but not entirely surprising. As we've seen before with AI chatbots, this technology seems to confuse satire with journalism — several of the incorrect AI overviews we found seem to reference The Onion. The problem is that this AI offers an authoritative answer to millions of people who turn to Google Search daily to just look something up. Now, at least some of these people will be presented with hallucinated answers.
"The vast majority of AI Overviews provide high quality information, with links to dig deeper on the web," said a Google spokesperson in an emailed statement to Gizmodo, noting many of the examples the company has seen have been from uncommon queries. "We're taking swift action where appropriate under our content policies, and using these examples to develop broader improvements to our systems, some of which have already started to roll out."
There are a lot of GenAI projects out there released to users without a legible "alpha" tag. See also: Google scrambles to manually remove weird AI answers in search and Google just updated its algorithm. The Internet will never be the same.
For the extra curious