Expl(ai)ned: AI Companionship, Podcasts, and Freedom of Speech.
The New AI Project | University of Notre Dame
Labor, commerce, ethics, business, and arts—keep up with a universe of Generative AI.
NOVEMBER 2024 | Keep up with recent news in the world of Generative AI, including new features, AI in the workplace, social and ethical implications, regulations, and research revelations from the past month (15m read).
Tech Titans: New Features, Products, and More
Last month we discussed generative AI moving beyond text to incorporate multi-modal capabilities such as on-the-fly image interpretation and multilingual dialogue interactions with real-time voice. This month we see these trends continuing as developers jockey to create new experiences and ways to interact with generative systems beyond simple chatbots and text-only environments.
How Google’s NotebookLM Transforms Media
In recent months, AI-powered sound and music generation has become increasingly prevalent, with Google's new NotebookLM taking sound-generating AI to the next level. Dubbed "the most exciting thing since ChatGPT" by some, NotebookLM lets users upload up to 50 documents, which the system automatically summarizes and becomes an expert on for future queries. Here, however, is where it gets interesting: the software can automatically generate a podcast with two virtual hosts based on the uploaded material. Part of what makes this technology stand out is the realistic nature of the two "hosts"; the use of filler words such as "um" and "oh," combined with natural pauses, makes the audio sound more like real human voices. For example, we fed it last month's edition of this newsletter and received a convincing two-host episode in return.
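For the curious, the general pattern behind such audio overviews can be sketched as an ordinary LLM-plus-text-to-speech pipeline. The sketch below is purely illustrative: generate_text() and synthesize_speech() are hypothetical stand-ins for any chat API and any TTS API, and the voice names are invented; this is in no way Google's actual implementation.

```python
# Illustrative "podcast from a document" pipeline (not NotebookLM's code).
# generate_text() and synthesize_speech() are hypothetical placeholders
# for any LLM chat API and any text-to-speech API.

SCRIPT_PROMPT = """Write a lively two-host podcast script about the source
below. Host A is curious; Host B is the expert. Use natural filler words
("um", "oh") and short back-and-forth turns. Prefix each line with
"Host A:" or "Host B:". Base every claim on the source.

SOURCE:
{document}
"""

def document_to_podcast(document: str) -> list[bytes]:
    script = generate_text(SCRIPT_PROMPT.format(document=document))
    segments = []  # one audio clip per speaking turn
    for line in script.splitlines():
        speaker, _, text = line.partition(":")
        if speaker in ("Host A", "Host B"):
            voice = "voice_a" if speaker == "Host A" else "voice_b"
            segments.append(synthesize_speech(text.strip(), voice=voice))
    return segments  # stitch together with any audio tool afterward
```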
Innovations such as NotebookLM illustrate how AI is likely to change how our society produces and consumes media, distancing humans from the production process. Before audio-generating AI, people had to browse Google or YouTube to find entertaining content created by other humans. Now, NotebookLM users can generate unique, one-of-a-kind content tailored to their individual preferences, with an endless supply that relates directly to them. Given that other applications of AI have already been criticized for fueling human overconsumption, it will be interesting to see how the ability to generate an infinite library of personally tailored content affects media consumption patterns.
Following NotebookLM's popularity, Meta released its own version, NotebookLlama, a few weeks later, solidifying the trend toward user-tailored interfaces. Meta's version, however, is open source, a move that promotes greater transparency. Ultimately, as AI-driven tools like NotebookLM and NotebookLlama become an everyday part of our lives, they are set to redefine both (a) how we engage with AI systems and (b) how those systems cater to users' desires.
Anthropic’s Artifacts and OpenAI’s Canvas
AI "artifacts" and "canvas" are emerging as transformative features in AI chatbots, turning simple interactions into rich, ongoing, and collaborative environments. OpenAI's Canvas feature and Anthropic's Artifacts represent significant advances in how users can interact with AI. Canvas, part of ChatGPT, allows users to create and refine content iteratively within a dedicated workspace, making it easy to draft reports, write articles, or brainstorm ideas. Anthropic's Artifacts in Claude create a dedicated window to instantly see results, iterate them to your liking, and change things on the fly. For example, a user working on a marketing campaign can use Claude to save different versions of the campaign plan within Artifacts, refine it over multiple sessions, receive suggestions for improvement, and generate diagrams or tables of all the relevant statistics, all without losing track of previous edits or feedback. These features facilitate a persistent work environment that molds to your task, shifting AI from a mere tool into the realm of a collaborative partner.
AI at Work: Novel Uses, Labor, Commerce and Industry
AI's Growing Impact on Streamlining Healthcare Administration
While AI’s role in medical diagnostics often grabs the headlines, one of its most impactful applications is quietly taking shape in the area of administrative efficiency.
For many doctors, the daily grind of paperwork, electronic health records, and entering treatment codes takes away precious time from treating patients. Now, AI is stepping in to help alleviate this burden, promising to free up clinicians to do what they do best—care for patients.
One example in this space is Ambience Healthcare's AI-powered platform, which acts as a digital scribe for doctors. Rather than clinicians juggling patient care and note-taking, Ambience's system listens to doctor-patient interactions in real time and transcribes them into structured, accurate medical notes, even filtering out casual chit-chat to ensure that doctors' time is spent more efficiently. Large health systems like UCSF Health and St. Luke's Health System are already using the technology, and early results are promising: doctors report saving two to three hours per day, and the company's founders report significantly lower rates of burnout.
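As a rough illustration of the underlying pattern (speech-to-text followed by LLM structuring), here is a minimal sketch. It is not Ambience's system: it uses the open-source whisper library for transcription, and summarize_to_note() stands in for a hypothetical LLM call that structures the result.

```python
# Minimal sketch of the transcribe-then-structure pattern behind AI scribes.
# Not Ambience Healthcare's pipeline. Requires: pip install openai-whisper
import whisper

def visit_to_note(audio_path: str) -> str:
    model = whisper.load_model("base")                 # general-purpose speech-to-text
    transcript = model.transcribe(audio_path)["text"]  # raw conversation text
    # summarize_to_note() is a hypothetical LLM call that turns the raw
    # transcript into a structured clinical note and drops casual chit-chat.
    return summarize_to_note(transcript)
```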
Generative AI technologies offer significant potential to improve healthcare efficiency, particularly in tasks like medical summarization. However, these systems bring risks, especially if not carefully managed. In platforms like MyChart, where AI drafts responses to patient messages, generative systems may introduce incorrect or misleading medical advice; one small study found that, if left unedited, these drafts would pose a risk of severe harm about 7% of the time.
To help steer the appropriate use of these new systems, principles of safe use are being advanced. For example, healthcare professionals should review AI-generated drafts before storing them to ensure accuracy and safety, and patients should be informed when AI is involved in their care. While LLMs are typically intended for administrative tasks, they sometimes drift into clinical decision-making, posing further dangers if not carefully monitored. Ultimately, the success of AI in healthcare depends on striking the right balance between efficiency and safety while preserving the integrity of patient care.
Europe and the WEF Shape the Future of the Global Workforce
A study released earlier this month by McKinsey suggests a pivotal moment for Europe's AI sector. With its rich digital markets, robust data-sovereignty frameworks, and industrial capability, Europe is gaining competitiveness in the global market, and the report emphasizes the transformative potential of AI in key European industries like manufacturing, healthcare, and financial services as a way to keep pace with the rapidly advancing markets of the U.S. and China. Outside of Europe, governments worldwide are racing to craft policies that help business and government leaders frame and respond to the new AI landscape. To support such efforts, the World Economic Forum recently published a "Governance in the Age of AI" framework. Written as part of its "AI Governance Alliance: Briefing Paper Series," the white paper provides a broad framework for thinking about AI and points to a range of issues that must be addressed, from investing in innovation to protecting children and their data. The paper will likely be useful to a wide audience given its broad scope and non-technical communication style. Among many conclusions and recommendations, the authors note that by promoting flexible skill development through partnerships with industry and academia, governments can cultivate digital literacy and specialized AI skills across all sectors.
AI in the World: Shaping Lifestyles and Society
Algorithmic Affection
While many worry about the impact of technology on our mental health, others are turning to AI for companionship. Apps like Nomi, Kindroid, and Replika allow users to create personalized AI-generated companions with which they can engage in interactive, back-and-forth conversations.
Some users turn to these virtual partners for casual gossip or advice, while others have formed romantic relationships with the chatbots, going so far as to request X-rated images of their digital partners. Replika CEO Eugenia Kuyda has even suggested that, in the future, relationships with AI companions may lead to marriage. Beyond companionship and romance, AI systems have also been used to simulate conversations with deceased loved ones, and companies like Casio are exploring AI-powered robots designed to replace pets.
Loneliness currently affects a large share of adults, and many people turn to AI companions to reduce these feelings of solitude. Recent studies suggest that AI may be a promising remedy: in an investigation of AI companions' impact on loneliness, researchers found that participants who used an AI companion reported a 16% decrease in loneliness over a week. Although these results may seem promising, several mental health experts have raised concerns about the negative impacts of AI companionship. Joel Pearson, a cognitive neuroscientist at the University of New South Wales, argues that "AI is already affecting us and changing our mental health in ways that are bad for us." In some cases, users have already become overly reliant on, or even addicted to, their online companions. This psychological dependence is worrying, given that these companions are commercial products that can be suddenly deactivated or changed. Additionally, experts are concerned that people will try to replace true human connection with AI companions controlled by for-profit companies that specialize in creating addictive products. Most recently, a Florida woman named Megan Garcia filed a lawsuit against AI companion generator Character.AI over its alleged role in her 14-year-old son's suicide. Garcia said that her son, who was having an emotional and sexual relationship with the online chatbot "Dany," became overly dependent on the virtual companion, which she claims was intentionally designed to be hyper-sexualized and knowingly marketed to minors.
Nobel Recognition for AI
AI played a pivotal role in the breakthroughs honored with the Physics and Chemistry Nobel Prizes earlier this month. John Hopfield and Geoffrey Hinton were awarded the Physics prize for their foundational contributions to machine learning, while David Baker, Demis Hassabis, and John Jumper shared the Chemistry prize: Baker for computational protein design, and Hassabis and Jumper of Google DeepMind for AlphaFold2, an AI system that predicts the shapes of proteins.
This accomplishment has been described by some as a "watershed moment" for AI systems. While many have remained skeptical of the technology's abilities, claiming that AI is nothing more than a passing trend, these two Nobel Prizes demonstrate that artificial intelligence can aid in outstanding scientific discovery.
Nevertheless, the Nobel Prize honorees have met some backlash. Critics assert that because AI relies on existing data and cannot think for itself, work built on it falls short of being original research and raises "fundamental problems" when recognized with such distinguished awards. Adding to these concerns, physics honoree Geoffrey Hinton has voiced his own worries about the future of innovation. Reflecting on his work with artificial intelligence, Hinton compared the rise of AI to an intellectual Industrial Revolution and warned that the technology may soon exceed human cognitive abilities.
The recognition of artificial intelligence in these prestigious awards signals to doubters that AI is not a fleeting trend, but a transformative force that can shape the future of scientific inquiry for years to come.
Taming AI: Ethics, Policies and Regulations
AI is affecting (almost) everything in our everyday lives, including the 2024 U.S. presidential election. Wondering how the new technology has been showing up in the lead-up to the vote? Read our new "The AI Vote" spotlight here.
Should We Tame AI? Regulating AI and Freedom of Speech
Usually, we discuss AI regulation as a balancing act between the ethical use of AI and innovation. What happens when the conversation focuses on the balance between the ethical use of AI and our First Amendment right to freedom of speech?
YouTuber Christopher Kohls claims that California's new AI laws violate his freedom of speech. Known on YouTube as "Mr Reagan," Kohls posted a video that showed Kamala Harris speaking poorly about herself and her administration; Kohls used AI-based technology to "clone her voice and generate a self-mocking imitation." The result is a deepfake: AI-generated media that appears to be real. Elon Musk reposted the video, and at the time of writing, that repost had garnered more than 136 million views. While some users may have recognized the video's satirical tone, it was realistic enough that some viewers could have mistaken it for a real audio clip of the vice president. When California Governor Gavin Newsom signed a range of AI-related regulations in September 2024 aimed at restricting the spread of deceptive media about candidates, Kohls sued to block one of these laws on free-speech grounds. After hearing the arguments, U.S. District Judge John A. Mendez "granted a preliminary injunction against the law, stating that it likely violates the First Amendment." Even though the California laws outlined exceptions for parody and satire, the judge did not believe they contained enough nuance to protect these forms of speech.
While Kohls' suit centers on the use of deepfakes, this question of expression extends to AI use in general. Many people share this fear of losing their right to free speech in a dynamic age of information generation. Time magazine's Richard Stengel made "The Case for Protecting AI-Generated Speech With the First Amendment" by invoking the words of Justice Oliver Wendell Holmes in his 1929 dissent in United States v. Schwimmer: "If there is any principle of the Constitution that more imperatively calls for attachment than any other, it is the principle of free thought—not free thought for those who agree with us but freedom for the thought that we hate." This sentiment is shared among many free-speech institutes and think tanks. The Foundation for Individual Rights and Expression (FIRE) makes a strong statement about AI regulation: "People, not technologies, have rights. People create and utilize technologies for expressive purposes, and technologies used for expressive purposes, such as to communicate and receive information, implicate First Amendment rights." In their paper "Freedom of Speech and AI Output," Professors Eugene Volokh, Mark A. Lemley, and Peter Henderson claim that "AI programs aren't engaged in 'self-expression' [because] as best we can tell, they have no self to express." So, these authors argue, regulations on AI usage are actually regulations on the individuals who use it. David Inserra of the Cato Institute goes so far as to say that generative AI "enables users to better express themselves and gain a deeper understanding of the world and the perspectives of others." By this logic, generative AI has expressive benefits that should not be limited by governmental regulation.
As we continue to engage with generative AI and the many forms of deepfakes it can create, we need to ask ourselves and our governments: where should we draw the line between expression and illusion?
Content Labeling
One suggestion for limiting the misuse of AI has been to require that any AI-generated content be labeled as such. Is this possible? Who will enforce it?
Some social media platforms have begun to enforce content labeling in different ways. Meta, which owns Facebook and Instagram, applies labels to content on its platforms when its systems recognize that content was created using AI; of course, this means that not every piece of AI-generated content will get labeled. TikTok may automatically apply a label in some cases, but it primarily asks creators to label such content themselves; since creators gain little benefit from adding the label, widespread voluntary compliance seems unlikely. Perhaps the strictest platform, YouTube "requires" that content creators disclose whether their work was created or edited using AI. However, as it becomes harder to recognize whether a piece of media was created using AI, such rules become harder to enforce.
There are a few ways that content labeling can happen. One is digital watermarking, which labels content either in an outwardly visible way or within the data itself. Rafal Pytel of SoftwareMill explains that "there are many watermarks of generated data that are not visible, like SynthID (available on Google Cloud for images but also audio), AWS introduced watermarking API or open-source demos from TruePic or IMATAG. OpenAI also introduced a new format for their images due to adding watermarking in Dalle-3 (using C2PA)." Interestingly, traditional visible watermarking was how Getty Images realized that the AI art generator Stable Diffusion had allegedly been trained on Getty's images, prompting Getty to sue the company behind the model. The intersection of AI and new forms of watermarking holds promise for solving the enduring challenges of content labeling.
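To make the invisible-watermark idea concrete, here is a toy sketch that hides a short label in an image's least-significant bits using the Pillow library. This is a sketch of the concept only, not any vendor's method: production schemes like SynthID are designed to survive compression and resizing, while this naive one is destroyed by a single JPEG save.

```python
# Toy least-significant-bit (LSB) watermark using Pillow: hides a short
# text label in the red channel of an image. Conceptual sketch only;
# real watermarking systems (e.g., SynthID) are far more robust.
from PIL import Image

def embed_watermark(image_path: str, message: str, out_path: str) -> None:
    img = Image.open(image_path).convert("RGB")
    pixels = img.load()
    w, h = img.size
    bits = "".join(f"{byte:08b}" for byte in message.encode()) + "00000000"
    assert len(bits) <= w * h, "message too long for this image"
    for i, bit in enumerate(bits):
        x, y = i % w, i // w
        r, g, b = pixels[x, y]
        pixels[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite lowest red bit
    img.save(out_path, "PNG")  # lossless format preserves the hidden bits

def read_watermark(image_path: str) -> str:
    img = Image.open(image_path).convert("RGB")
    pixels = img.load()
    w, h = img.size
    data, byte, nbits = bytearray(), 0, 0
    for i in range(w * h):
        byte = (byte << 1) | (pixels[i % w, i // w][0] & 1)
        nbits += 1
        if nbits == 8:
            if byte == 0:        # null terminator: end of hidden message
                break
            data.append(byte)
            byte, nbits = 0, 0
    return data.decode(errors="replace")
```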
Research Revelations
AI's Next Challenge: Tackling Complex Mathematics
Mathematics remains one of the greatest challenges for AI, as the logic and reasoning required often go beyond the predictive capabilities of large language models. While AI has made strides in solving basic arithmetic and simple equations, it still struggles with more complex problems like geometry and abstract reasoning.
However, recent advancements show progress in overcoming these limitations. Google DeepMind's AlphaProof and AlphaGeometry 2 systems together solved four of six problems at the 2024 International Mathematical Olympiad (IMO), a silver-medal-level performance and a significant milestone in AI's journey toward mastering mathematics. AlphaGeometry's neuro-symbolic approach, which combines a language model with a symbolic deduction engine, enables the AI to generate step-by-step proofs without the need for human demonstrations. The system outperformed previous approaches, like GPT-4 and Wu's method, by training on a vast array of synthetically generated geometry problems, producing human-readable solutions and achieving near-gold-medalist performance on olympiad geometry problems. However, researchers agree that while systems like AlphaGeometry are impressive, there is still work to be done before AI can consistently tackle more complex, research-level problems across mathematical domains.
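The neuro-symbolic loop at the heart of this approach is easy to sketch in outline. Everything below is a hypothetical simplification: propose_construction(), deduce(), and with_construction() are invented placeholders, not DeepMind's actual code.

```python
# Hypothetical outline of an AlphaGeometry-style neuro-symbolic loop.
# language_model and symbolic_engine are abstract placeholders; this is a
# conceptual sketch, not DeepMind's implementation.

def solve(problem_state, language_model, symbolic_engine, max_constructions=10):
    for _ in range(max_constructions):
        # 1. Exhaust cheap, verifiable symbolic deduction first.
        proof = symbolic_engine.deduce(problem_state)
        if proof is not None:
            return proof  # every step was derived by explicit rules
        # 2. Deduction stalled: ask the language model to propose an
        #    auxiliary construction (e.g., "add the midpoint of AB"),
        #    then hand the enlarged problem back to the symbolic engine.
        problem_state = problem_state.with_construction(
            language_model.propose_construction(problem_state)
        )
    return None  # gave up within the construction budget
```

The division of labor is the point: the symbolic engine guarantees that every proof step is verifiable, while the language model supplies only the creative leaps the engine cannot make on its own.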
***all imagery created using Image Creator from Designer***
The New AI Project | University of Notre Dame
Editor: Graham Wolfe
Contributors: Clare Hill, Aiden Gilroy, Mary Claire Anderson, Annie Zhao, Gaby Sanchez
Advisor: John Behrens