Brands are growing more concerned about how they are perceived by ChatGPT
Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. I’m Mark Sullivan, a senior writer at Fast Company, covering emerging tech, AI, and tech policy.
This week, I’m focusing on brands’ efforts to learn how their products are perceived by large language models. Also, new research from Apple could let Siri “see” what’s on your iPhone screen, and OpenAI will no longer require sign-up to use ChatGPT.
Sign up to receive this newsletter every week via email here. And if you have comments on this issue and/or ideas for future ones, drop me a line at [email protected], and follow me on X (formerly Twitter) @thesullivan.
Brands are getting curious about how they’re seen by AI models
People are increasingly turning to large language models (LLMs) to search for product information (see: chatbots like ChatGPT or AI search tools like Perplexity). Little surprise, then, that companies are growing more concerned about how these LLMs perceive their brand.
Jack Smyth, chief solutions officer at digital marketing firm Jellyfish, has been conducting research to figure out why certain user prompts will cause LLMs to mention brands. LLMs organize words and phrases, including brand names, within a huge vector space according to their meaning and the contexts within which they’re often used. So, for example, an LLM might associate the name of a lotion product with the term “gentle.”
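To make that intuition a bit more concrete, here is a minimal sketch of how one might probe brand-attribute associations with off-the-shelf sentence embeddings. It assumes the open-source sentence-transformers library, and the brand name and attribute list are invented for illustration; it shows the underlying idea of proximity in a vector space, not Jellyfish’s actual tooling.

```python
# Rough probe of brand-attribute association in embedding space.
# Assumes: pip install sentence-transformers numpy
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, general-purpose embedding model

brand = "GlowSoft daily lotion"                  # hypothetical brand name
attributes = ["gentle", "harsh", "luxurious", "cheap"]

# Encode the brand and each candidate attribute into the same vector space.
vectors = model.encode([brand] + attributes, normalize_embeddings=True)
brand_vec, attr_vecs = vectors[0], vectors[1:]

# With normalized vectors, the dot product equals cosine similarity.
for attribute, vec in zip(attributes, attr_vecs):
    print(f"{attribute:>10}: {float(np.dot(brand_vec, vec)):.3f}")
```

Higher cosine scores mean the embedding model places the brand closer to that attribute in its vector space; a real chatbot’s answers depend on far more than raw embedding distance, but simple probes like this are one way to get a first read on those associations.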
But what if the model fails to associate a certain term with the product? Or, worse yet, what if the LLM associates the product name with some negative term? Smyth says he’s talked to financial firms that are trying to make sure LLMs aren’t associating their brand names with anything related to “ESG” or “woke” during an election year.
While companies can’t change what the models already “know” about the brand, they can put new content into the information space in hopes that it’ll reach, and sway, AI models. Right now, that exposure happens mainly when a model is trained using a huge compressed version of all the content on the internet, but that might change. “As these models become connected to the open web by default, which is the best way to make sure they’re as useful as possible, they’re going to be ingesting more and more topical or recent content,” Smyth tells me.
“It's almost like model surgery or adversarial optimization, and that's where it gets really fun because we have to figure out what type of content is going to have the biggest impact on that model,” he says. Smyth believes that LLMs will increasingly be trained by watching web videos. “Our working hypothesis is that video—just because it’s a richer format or it might get more eyeballs on it—is likely going to be pretty significant.” He says he’s also advised brands to look at the entire body of media they’ve published over the years, and to eliminate anything that may have sent the wrong messages.
New Apple LLM research reveals a strategy for making Siri great
Apple wants to enable a compact language model on your iPhone to “see” content from applications and websites you have open on your screen or running in the background. In a new paper, the company’s AI researchers propose a method of creating a completely textual representation of such content (and its place on the screen) so that the language model can understand it and use it in conversations with the user. “To the best of our knowledge, this is the first work using a large language model that aims to encode context from a screen,” the researchers write in the open-access repository arXiv.
For example, if a user is looking at a list of nearby businesses within the Maps app, they could simply ask “what are the hours for the last one on the list?” without having to name the business. Or, if a user is looking at a list of phone numbers on the Contact Us page of a website, they might tell Siri to “call the business number.”
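The paper’s core move is turning what is on the screen into text a language model can reason over. As a rough, hypothetical illustration of that general idea (not Apple’s actual implementation), on-screen UI elements and their positions might be serialized into a plain-text context that is prepended to the user’s request:

```python
# Hypothetical sketch: flattening on-screen UI elements into a textual context
# an on-device language model could read. Field names and element types are
# invented for illustration; the paper's actual encoding differs.
from dataclasses import dataclass

@dataclass
class ScreenElement:
    kind: str   # e.g. "list_item", "phone_number", "button"
    text: str   # the element's visible text
    x: int      # rough horizontal position on screen
    y: int      # rough vertical position on screen

def screen_to_text(elements: list[ScreenElement]) -> str:
    # Order top-to-bottom, then left-to-right, so "the last one on the list"
    # has a stable meaning in the serialized context.
    ordered = sorted(elements, key=lambda e: (e.y, e.x))
    lines = [f"[{i}] {e.kind}: {e.text}" for i, e in enumerate(ordered, start=1)]
    return "On-screen content:\n" + "\n".join(lines)

# Example: a Maps-style list of nearby businesses.
elements = [
    ScreenElement("list_item", "Blue Bottle Coffee - 0.3 mi", 20, 100),
    ScreenElement("list_item", "Ritual Coffee Roasters - 0.6 mi", 20, 160),
    ScreenElement("list_item", "Sightglass Coffee - 1.1 mi", 20, 220),
]

prompt = screen_to_text(elements) + "\n\nUser: What are the hours for the last one on the list?"
print(prompt)
```

Because the elements are ordered top-to-bottom, a reference like “the last one on the list” resolves naturally against the serialized context.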
One of the coolest storylines in generative AI is the expanding set of data types that models can access, reference, and learn from. The first LLMs could only process text from the internet, but new multimodal models can understand audio and video. A future iPhone may be able to “see” app content on your phone, but Apple’s great opportunity is to inform Siri with more kinds of data that the iPhone can already collect, such as the audio environment collected by the microphones, the world as seen through the camera, and the motion and movement detected by the sensors.
Why OpenAI waived user sign-in for ChatGPT
OpenAI is no longer requiring people to set up an account before using ChatGPT. Users can simply open the app or website and start prompting. The company says it’s rolling out the sign-up-free ChatGPT slowly, with the aim to “make AI accessible to anyone curious about its capabilities.”
The move makes sense if you think about why OpenAI opened its chatbot to the public in the first place. Many of its researchers came to the company because they wanted to expose AI to real users instead of just writing research papers about it. The idea is that they can learn a lot about how the chatbot can be used, and misused, from people in everyday situations.
The company also wants to use the dialogues that users have with ChatGPT to train its AI models. User-generated content may be as valuable to AI companies as it is to social networks. By nixing the sign-up, OpenAI is removing a barrier to collecting training data. (The company points out, however, that users can opt out of having their conversations used for training.)
Now that just anybody can use ChatGPT, the risk of someone using the tool for harmful purposes may increase. OpenAI says it’s putting additional content safeguards in place for users without accounts and will block a wider range of prompts.
OpenAI’s move may also be an attempt to get new users to try ChatGPT rather than one of a growing set of increasingly capable alternatives, such as Google’s Gemini. SimilarWeb data shows that February visits to the ChatGPT service on mobile and desktop were down 2.7% from January, and down 11% from the peak of its popularity in May 2023.
More AI coverage from Fast Company:
Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.