Expl(ai)ned: ChatGPT Turns 2, Advertising, and the Trump Presidency
The New AI Project | University of Notre Dame
Labor, commerce, ethics, business, and arts—keep up with a universe of Generative AI.
DECEMBER 2024: Keep up with recent news in the world of Generative AI, including new features, AI in the workplace, social and ethical implications, regulations, and research revelations from the past month (10m read).
Tech Titans: New Features, Products, and More
Anthropic’s Computer Use: Learning to do your work by watching you
Anthropic has unveiled a groundbreaking AI tool, part of its Claude 3.5 Sonnet release, that learns to operate a computer much as a human would. It takes screenshots of the user’s screen, maps out the steps an activity requires (clicks, app switches, searches), and then executes those steps to achieve the desired outcome. Essentially, it views the screen, moves the mouse, and types—just like a person. This innovation could revolutionize how we use AI. Imagine planning a trip: instead of spending hours researching hotels, flights, and budgets, the AI handles the entire process. While still in beta and not yet reliable at tasks like scrolling or zooming, this development marks a significant step toward seamless human-AI collaboration.
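For readers curious what this looks like in practice, below is a minimal sketch of calling the computer-use beta through Anthropic’s Python SDK, based on the beta API as published in October 2024 (names and parameters may have changed since); the display size and the user task are our own illustrative choices.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Describe the virtual display the model will "see" and control.
computer_tool = {
    "type": "computer_20241022",
    "name": "computer",
    "display_width_px": 1024,
    "display_height_px": 768,
}

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=[computer_tool],
    messages=[{"role": "user",
               "content": "Open the calendar app and find my next free Friday."}],
    betas=["computer-use-2024-10-22"],
)

# The model replies with tool calls such as {"action": "screenshot"} or
# {"action": "left_click", "coordinate": [x, y]}. A real harness executes
# each requested action on the machine, returns the resulting screenshot
# as a tool_result, and loops until the model stops requesting actions.
for block in response.content:
    if block.type == "tool_use":
        print("Requested action:", block.input)
```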
ChatGPT & the Generative Revolution Turn 2
Since its launch on November 30, 2022, OpenAI’s ChatGPT has sent shockwaves into classrooms, boardrooms, and living rooms, amassing over 200 million weekly users by August 2024. It has transcended its humble beginnings as an industry-leading chatbot and become a cultural phenomenon synonymous with disruptive technology. Its integration into sectors like education and business has transformed operations, with 75% of global knowledge workers utilizing generative AI tools. This time two years ago, writers here at The New AI Project could reasonably read just about every word published in a week about novel artificial intelligence, and nowadays… let’s just say times have changed. The ChatGPT launch set off an explosion in the AI landscape with new products hitting the market, tech titans pivoting their strategies (see The End of Search? below), and the emergence of many new household names (see Anthropic above). As ChatGPT enters its third year, with ethical and legal challenges looming larger in the consumer consciousness, it continues to shape the future of AI, and the future of our world in tandem.
The End of Search?
After a 26-year run as the dominant force in internet search, during which it became both a proper noun and a verb, Google is being challenged by generative AI darlings OpenAI and Perplexity, which are trying to combine the best of language-model chatbots and internet search.
In the chat-centered approach, a system answers by predicting how a human would respond to the question, based on having been fed billions of paragraphs of text, mostly from the internet. The upside is that the system has already integrated the web pages for you, eliminating the need to follow links. The downside is that the quality of results varies and the system may be out of date. Hybrid systems seek to bridge these gaps, but it is sometimes hard to know whether a result comes from the “creative” language model (which may or may not be correct), from a recent article on the web (which also may or may not be correct), or from some opaque combination that is hard to assess.
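To make that distinction concrete, here is a hypothetical sketch of how a hybrid system might keep provenance attached to each part of an answer. The web_search and llm_synthesize helpers below are illustrative stubs, not any vendor’s actual API; the point is simply that retrieved material carries a URL while model-generated prose is labeled as such.

```python
from dataclasses import dataclass

@dataclass
class Snippet:
    url: str
    text: str

def web_search(question: str) -> list[Snippet]:
    # Illustrative stub standing in for a real web-search API.
    return [Snippet(url="https://example.com/article", text="A recent, linkable fact.")]

def llm_synthesize(question: str, sources: list[Snippet]) -> str:
    # Illustrative stub standing in for a call to a language model.
    joined = " ".join(s.text for s in sources)
    return f"Synthesized answer to '{question}', drawing on: {joined}"

def hybrid_answer(question: str) -> None:
    """Answer a question while labeling where each piece came from."""
    sources = web_search(question)               # fresh, attributable web content
    summary = llm_synthesize(question, sources)  # model-generated synthesis

    print("[model]", summary)                    # may be imperfect or out of date
    for s in sources:                            # verifiable, linked material
        print(f"[web: {s.url}]", s.text)

hybrid_answer("Who is on my district's ballot?")
```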
Perplexity released its AI search engine in 2023, positioning itself as a “trustworthy guide to the World Wide Web,” in contrast to what it calls Google’s role as an “auctioneer of users’ attention.” It sought to cut out the middleman and directly answer a user’s question without the need to sift through links. In fact, during the 2024 election cycle, given just a voter’s zip code, Perplexity’s system could compile the relevant sources and an instructive summary of all the candidates on a given district’s ballot in seconds. The system even offers the user potential follow-up questions based on the original query.
OpenAI released SearchGPT late last month as a new function of its already successful generative AI model. The function works by automatically accessing content on the internet when the prompt requires it, while also letting users invoke the feature manually if desired. This allows ChatGPT to combine its pre-trained models, limited to information from before December 2023, with some of the latest information available on the web. As noted above, users should stay alert to when responses are generated by the language model and when they come from “real” search. Both of these AI search engines attach links to their automated responses and let the user follow up with additional questions without leaving the chat interface.
Google has not remained a bystander. It released “AI Overviews” in May as an optional add-on to its search engine. When turned on, the feature responds to a typical search with an AI summary drawn from a few relevant sources, placed directly above Google’s usual links-based results. This approach remains search-first while adding AI-based summarization to help manage the results. What comes from the web and what comes from the AI summarizer is clearly delineated, so the behavior is predictable.
These are still early days in the evolution of research and information gathering in this new world of synthesized data shared through chatbots and raw data accessed from pages around the web. In the meantime, the key point for users is to understand when a system is summarizing (and what it is summarizing), and when it is generating novel, and perhaps imperfect, “information.”
Read more about…
AI at Work: Novel Uses, Recommendations, and Impact on Labor
AI in Advertising: Innovation Meets Consumer Skepticism
Artificial intelligence has revolutionized digital advertising through its ability to predict trends, optimize ad campaigns, and target specific audiences. Recently, however, companies have begun pushing AI’s capabilities even further, leveraging the technology to create images and videos for advertising campaigns. Despite the potential of AI in advertising, ads that use AI-generated images and videos have been met with pushback and skepticism. Even the highest-quality image-generating AI tends to make telltale mistakes or create unrealistic silhouettes, and consumers who can spot these flaws have made their disdain for ads with an “AI look” very well known.
Most recently, Coca-Cola was slammed for its AI-generated Christmas promotional video, criticized as “soulless” and “devoid of any actual creativity” despite being intended as a tribute to the company’s 1995 commercial “Holidays Are Coming.” The original commercial was very similar to the AI-generated one, except that it featured real imagery and human actors. AI use in creative domains is already a highly contentious topic, and many argue that using artificial intelligence to generate advertisements compromises the role of human artists. Earlier this year, Lego was criticized for using artificial intelligence to create advertisements featuring its figures, despite its policy against using AI-generated content. In its official apology, Lego formally remarked that, despite the “interesting opportunities” AI presents, the company “will continue to encourage and celebrate the talented artists who help bring our brand and characters to life.”
AI ads have also been criticized for spreading misinformation and giving consumers false views of products they are considering purchasing. When fashion company Mango used AI-generated models to showcase its clothing, it faced backlash from shoppers who called the practice “false advertising.” Shoppers lamented that, when they order clothes online, they expect to see real human models showcasing how the pieces fit, something that AI-generated models simply cannot deliver.
There are other known costs of using AI models to generate images, especially as it pertains to copyright infringement. Image-generating AI models are usually trained on data lakes with thousands, or sometimes even millions, of unlicensed works. Because of this, some AI image-generating tools have faced significant legal issues; for example, in a case that saw artists taking on AI image-generating platforms like Stable Diffusion in court, a judge’s ruling suggested that these AI models may infringe copyrights “by design” through their operation. Additionally, AI-generated works cannot be protected by copyright. These largely unresolved and extremely significant legal challenges have profound impacts on businesses; as companies are deploying AI image generators to create things like advertisements, they need to fully understand the legal risks and implications of using technology that was trained on material the company did not create.
While AI has already transformed advertising, it faces significant hurdles as it continues to make its way into the creative sphere. Consumer responses to AI-generated ads show that people value authenticity, accuracy, and the creative contributions traditionally provided by human artists. How that balance fares in the long run, as humans do less of the hands-on creation and more of the design and direction, is yet to be discovered.
Read more about…
AI in the World: Shaping Lifestyles and Society
Parenting in the AI Era: Closing the AI Knowledge Gap
Artificial intelligence is becoming an increasingly prevalent part of teenagers’ daily lives, with a recent poll finding that 70% of U.S. teens have used generative AI for purposes such as homework assignments and image generation. Despite this staggering percentage, much of this use occurs without parental knowledge: of the teens who reported using AI, only 37% had parents who were aware of the usage, and 25% of parents believed their kids had never interacted with the technology at all. This gap exposes a disconnect between teens’ use of AI and parental understanding of how these tools are being used.
This isn’t to say that parents are completely unaware of the potential capabilities of the technology; a study reported that 88% of Generation Z and Generation Alpha parents believe that AI will play a crucial role in their kids’ future success. Despite this acknowledgment, however, the knowledge gap between children and their parents persists, with only 17% of parents agreeing with the statement “I actively seek out information and resources to better understand AI technologies.” This leaves many parents unprepared to help their children navigate the implications of the technology.
This disconnect is particularly frightening as teenagers begin to increase their reliance on AI tools, often for the wrong reasons. Some teens reported AI being used to “create voices and images that can be used to enhance bullying in and out of school,” while others had stories of AI-generated deepfakes being used against them. From an educational standpoint, most teens who reported using AI said that they leveraged the technology for help on homework or class assignments. However, given the lack of uniform AI policies in most schools, this use has blurred the lines of what is considered academic dishonesty.
Given the potential for harmful use of the technology, many experts have encouraged parents and educators to explore resources about generative AI and to engage in open conversations with their children about the dangers of the technology. By taking more proactive steps in the age of AI, parents can not only narrow the knowledge gap that exists between them and their kids but also ensure that their children are better prepared to navigate a future shaped by artificial intelligence.
Read more about…
Taming AI: Ethics, Policies, and Regulations
Looking Forward: How a Trump Presidency Is Expected to Impact AI Regulations
On November 6, 2024, the U.S. presidential election was called for Donald Trump. A new administration means a new executive attitude toward AI. How should we expect a Trump presidency to change the AI landscape of the U.S.?
President-elect Trump spoke in December 2023 about wanting to get rid of President Biden’s “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The order encouraged AI development but with a strong emphasis on ethics and governance. Some political leaders, including Trump, believe the order inhibits America’s competitive advantage in the AI space. As Harry Booth and Tharin Pillay of Time point out, “This position was reaffirmed in the Republican Party platform in July, which criticized the executive order for hindering innovation and imposing ‘radical leftwing ideas’ on the technology’s development.” Trump is therefore likely to undo this executive order as the first of many steps to shift American policy away from Biden’s AI regulation plan. He is also recruiting officials to help with this goal: on November 12, 2024, President-elect Trump announced that Elon Musk and Vivek Ramaswamy will lead the “Department of Government Efficiency,” which will focus on cutting excess regulations. According to Haileleol Tibebu of Tech Policy Press, on the topic of AI, Musk has demonstrated a “dual stance—advocating both for regulatory freedom to accelerate technology and for caution in specific high-risk areas— [which] adds an unpredictable dynamic to President-elect Trump’s deregulation agenda, raising questions about how AI policy may ultimately take shape under this administration.”
In addition to pushing innovation for the sake of technological development, there is evidence that Trump and his administration view looser AI regulations as “essential to compete in the escalating AI race with China,” according to Forbes. In an interview with Logan Paul on his podcast “Impaulsive,” Trump said, “We have to take the lead over China” (48:50). Some commentators, such as Arthur Herman, go so far as to label the race to AI innovation a new Cold War. Herman gives many reasons for concern over Chinese developments, but says that “the most striking and notorious developments within the Chinese AI monolith today are AI’s applications for the total surveillance state.” One thing is for sure: a desire to outpace the Chinese government will be a driving factor in a new Trump-era AI policy.
In the energy policy domain, Adam Thierer of the R Street Institute expects “a major nexus between AI policy and energy policy priorities,” in the sense that “AI-related priorities [will be used] to advance permitting reforms and regulatory relaxation of various energy and environmental restrictions to ensure the development of more abundant energy options—especially nuclear power.” The link between AI and nuclear energy may not be immediately apparent, but it is not unfounded, as evidenced by Oklo, “one of the nuclear startups backed by Sam Altman, the CEO of OpenAI who has described AI and cheap, green energy as mutually reinforcing essentials to achieving a future marked by ‘abundance.’” As AI continues to touch almost every area of life and policy, we should watch for unexpected crossovers between the technology and other hot-button issues.
Informed Consent in Healthcare
Healthcare is one industry that has had to immediately consider the consequences of its AI uses. As of December 2023, Los Angeles Pacific University reported that AI was already being used to transcribe medical documents, develop drugs, and even diagnose patients. With this technological augmentation already underway, what ethical responsibility do healthcare workers have to disclose AI use in their treatment? Is it fundamentally different from the use of any other technology in healthcare? Hai Jin Park found that “the use of an AI tool increases the perceived importance of information related to its use, compared with when a physician consults with a human radiologist.” In other words, patients perceive information involving AI differently than information from a human being. Doctors Susannah L. Rose and Devora Shapiro propose a framework for determining, case by case, how much AI should change the informed consent process. They suggest that use cases be evaluated by the following criteria: “(1) AI model autonomy, (2) departure from standards of practice, (3) whether the AI model is patient-facing, (4) clinical risk introduced by the model, and (5) administrative burdens.” Their model creates a score for each use case based on these criteria, and the healthcare professional then uses that score to determine how to proceed. This is just one proposed way of handling informed consent in healthcare; many alternative methods may be tried before a standardized system is accepted. What matters is that professionals in every industry continue to think critically about the ways their AI use will impact the people they serve.
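To illustrate how such a rubric might operate, here is a hypothetical sketch that turns Rose and Shapiro’s five criteria into a disclosure recommendation. The 0–3 rating scale, equal weighting, and thresholds are inventions of ours for illustration; the authors’ actual scoring model may differ.

```python
# Hypothetical rubric loosely modeled on Rose and Shapiro's five criteria.
# The 0-3 scale, equal weights, and thresholds below are invented for
# illustration and are not the authors' actual scoring model.

CRITERIA = [
    "model_autonomy",            # how independently the AI acts
    "departure_from_standards",  # deviation from standard practice
    "patient_facing",            # whether patients interact with it directly
    "clinical_risk",             # risk the model introduces
    "administrative_burden",     # burden that disclosure would impose
]

def disclosure_level(ratings: dict[str, int]) -> str:
    """Map per-criterion ratings (0 = low, 3 = high) to a disclosure tier."""
    total = sum(ratings[c] for c in CRITERIA)
    if total >= 10:
        return "full informed-consent discussion"
    if total >= 5:
        return "brief verbal disclosure"
    return "general notice (e.g., a posted policy)"

# Example: an autonomous, patient-facing diagnostic tool.
print(disclosure_level({
    "model_autonomy": 3,
    "departure_from_standards": 2,
    "patient_facing": 3,
    "clinical_risk": 2,
    "administrative_burden": 1,
}))  # -> full informed-consent discussion
```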
Read more about…
Research Revelations
Are Language Models Maxing Out?
It’s no secret that AI has made remarkable strides in recent years, with models like OpenAI’s ChatGPT markedly raising the field’s ceiling of performance. However, the traditional approach of scaling these models up by feeding them more data and increasing computational power may itself be hitting a ceiling. A recent report by The Information revealed that while Orion, reported to be GPT-5, surpasses GPT-4 in performance, the improvements fall far short of expectations: the jump in quality is far smaller than the one between GPT-3 and GPT-4.
Training large AI models involves “training runs” that can cost tens of millions of dollars, since hundreds of chips must run simultaneously, often for months, and researchers don’t know how well a model will perform until training is complete. The complexity of these systems also increases the risk of hardware failures. In addition, the demand for energy, water, and other resources is projected to increase sixfold over the next decade. This has driven Microsoft to consider restarting facilities such as Three Mile Island, AWS to acquire a 960 MW power plant, and Google to secure the output of seven nuclear reactors—all in an effort to meet the massive power requirements of their expanding AI data centers.
Computational power aside, the scarcity of high-quality training data presents another critical challenge for AI development. Large language models have already consumed most of the readily available and accessible data, leaving little room for improvement through traditional data-driven scaling. Recent research suggests that the volume of text data used to train AI models is growing by about 2.5 times per year, while training compute is growing even faster, at roughly four times per year. At those rates, companies will exhaust the supply of publicly available training data for AI language models sometime between 2026 and 2032.
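To see why exhaustion estimates land where they do, here is a rough, illustrative calculation. The token counts are assumptions of ours for the sake of the arithmetic, not figures from the research cited above; plugging in different estimates of the data stock is exactly what spreads the projection across 2026 to 2032.

```python
import math

# Illustrative back-of-the-envelope calculation. The token counts are our
# own assumptions: suppose frontier training sets use ~15 trillion tokens
# today and the stock of usable public text is ~300 trillion tokens.
current_use = 15e12   # tokens used by today's largest training runs (assumed)
total_stock = 300e12  # assumed stock of usable public text
growth = 2.5          # data use growing ~2.5x per year (per the report)

years = math.log(total_stock / current_use) / math.log(growth)
print(f"Stock exhausted in about {years:.1f} years")  # ~3.3 years here
```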
To work around these limitations, researchers have begun exploring strategies for advancing AI without relying solely on scale. Techniques like “test-time compute” spend additional computation at inference time to make outputs more accurate, without requiring additional training data. Others are focusing on specialized models tailored to specific tasks rather than generalized systems, which demand vast resources. OpenAI, for instance, is experimenting with synthetic data generation, though experts warn of risks like “model collapse,” where AI trained on its own outputs loses quality over time. Meanwhile, efforts to scale continue: companies like Google and OpenAI are cutting deals with high-traffic sites such as Stack Overflow and Reddit (which has roughly 80 million daily users) to use their content in model training.
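For a flavor of what “test-time compute” can mean in its simplest form, here is a hypothetical best-of-n sampling sketch: query the model several times and let a scoring function pick the best candidate, spending extra inference compute instead of extra training data. The generate and score helpers are illustrative stubs, not any vendor’s API, and production systems use far more sophisticated search and verification.

```python
import random

# Illustrative stubs: in a real system, generate() would call a language
# model API and score() would be a learned verifier or reward model.
def generate(prompt: str) -> str:
    return f"candidate answer #{random.randint(0, 999)} to: {prompt}"

def score(prompt: str, answer: str) -> float:
    return random.random()  # stand-in for a verifier's quality estimate

def best_of_n(prompt: str, n: int = 8) -> str:
    """Trade extra inference compute for quality: sample n, keep the best."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda a: score(prompt, a))

print(best_of_n("Summarize this month's AI news."))
```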
***all imagery created using Image Creator from Designer***
The New AI Project | University of Notre Dame
Editor: Graham Wolfe
Contributors: Clare Hill, Aiden Gilroy, Mary Claire Anderson, Annie Zhao, Gaby Sanchez
Advisor: John Behrens