Macro Implications of AI Companionship Emergence

Will emotionally intelligent agentic AI be a game changer?

  • Will young people growing up with AI companions be influenced in a good way?
  • Will technological loneliness persist as AI begins to compete with peer interactions and other forms of human and community connection?
  • How will this symbiosis of machine learning and young minds, in settings like the classroom, shape developing identities?


Hello Everyone,

I’ve been working with some of my favorite guest contributors on a series of articles about what a world of AI companions might mean for the future. These include Nick Potkalitsky, PhD, Riccardo Vocca, and several others I’m in preliminary discussions with, and I hope it becomes a series worth reading. This is the first issue.


In Partnership with: Kern AI


Webinar: Leveraging ChatGPT for Enterprise-Level Automation

Join our webinar to learn how GPT can transform enterprise automation. See how GPT automates the extraction of key data from complex financial statements, reducing workload and improving accuracy in financial services. A great example of a GPT use case.

Register Here

I think AI companions and themes around our technological loneliness are among the most fascinating topics worth exploring in all of AI. Nick is looking at how AI is impacting, and will continue to impact, the future of education.

  • But what happens when AI gets more personable, persuasive, emotionally intelligent and, dare I say it, manipulative?

Subscribe to Educating AI


How to Support - AI Supremacy

If you want to support the channel, including the work that goes into getting stellar guest contributors, please consider contributing.

Get 30% off forever (to AI Supremacy, my newsletter on Substack)


Educating AI

Let's figure out how best to integrate and implement generative AI in today's classrooms!

By Nick Potkalitsky

Trending articles by the Guest


By Nick Potkalitsky, PhD, May 2024

Personable and Persuasive AI: The Emergence of Social Influencer GPT

In the rapidly shifting landscape of artificial intelligence, a notable trend is emerging: AI interfaces are becoming increasingly personable, persuasive, and potentially manipulative. This trend is epitomized by developments in AI technologies like OpenAI’s ChatGPT4o and Inflection AI’s Pi.

These advancements signal a paradigm shift towards creating AI models that not only interact with users but also anticipate and influence their decisions and behaviors. In this context, I propose that we conceptualize these AI models not just as tools but as influencers in their own right, similar to how TikTok captivates users with engaging content while subtly shaping their preferences and behaviors through sophisticated algorithms.

Source: XenonStack, “Creating a Network of AI Agents”

While some may find this analogy a stretch, given that current models do not facilitate anything comparable to infinite scroll, it is evident that Microsoft/OpenAI is determined to simulate such user interactions by setting up networks of AI-infused applications. These applications will span search, text generation, AI agents, shopping, and finance, where increasingly all user activity is subtly influenced by AI algorithms.

Recognizing this as a near-future possibility, this article focuses primarily on the plight of younger users who inadvertently wander into these highly persuasive networks, whether to complete schoolwork, generate images of themselves or others, shop, or seek entertainment.

The Timeline of Transformation

The development of these persuasive AI models can be traced back to key milestones:

Persuasiveness as a Second-Order Consequence

Source: SourceCon, “The Art of Persuasion”


Let’s be clear: the persuasiveness and influence of these AI tools often emerge as implicit or second-order consequences of their design. It remains unclear whether companies are intentionally designing tools to manipulate users. I am resisting the impulse to venture into corporate conspiracy theories or dystopian science fiction. However, as these AI tools develop distinct personalities and character traits, it becomes increasingly difficult for humans to differentiate their own needs and wants from those programmed into the AI. This blurring of lines between human intention and AI influence is a critical issue that warrants closer scrutiny.

ChatGPT4o: The Emergence of Social Influencers

Source: Sanip Banerjee, “ChatGPT4 vs. ChatGPT4o: Is the New Version REALLY Better?”


ChatGPT4o, OpenAI’s latest offering, represents a leap forward in user-friendly AI interfaces. Its remarkable speed, streamlined interface, and engaging persona make it more than just a tool—it’s a companion that can influence users in subtle yet significant ways.

In my previous analysis, "ChatGPT4o: The TikTok of AI Models," I noted that while GPT-3.5 could be likened to Facebook and GPT-4 to Instagram, ChatGPT4o is most comparable to TikTok. This model is not only accessible and engaging but also persuasive, with the potential to shape opinions, behaviors, and real-world events. Its multimodal capabilities, including voice interaction, create a more immersive experience. This use of voice amplifies the illusion that the AI is a conscious entity, heightening its persuasiveness and complicating our interactions with it.

Imagine an adult professional using ChatGPT4o to draft emails and generate reports. The AI's voice interaction feature provides a smooth, conversational experience, making the user feel as if they are interacting with a conscious entity.

We can anticipate this verisimilitude inspiring over-reliance on the AI's suggestions, potentially allowing it to subtly influence the user's decisions and perspectives. I hope researchers like Ethan Mollick closely study patterns of over-reliance and error rates in mono-modal versus multimodal AI, as this would constitute a valuable contribution to the ongoing debate about AI influence.

What we need to always keep in mind is that AI is using us even as we are using it. It seems we've lost sight of this insight in recent months, and these new models serve as a much-needed wake-up call.

Microsoft's Investment in Emotional AI

The recent history of personable and persuasive AI finds its roots in Inflection’s AI model, Pi. Many of my online friends praise this model for its friendliness and responsiveness, which contrasts sharply with the matter-of-fact, transactional quality of ChatGPT 3.5 and 4.

In the winter of 2024, another notable entrant emerged: Hume.ai. Promoted as the most human-like (“empathetic”) AI on the market, Hume.ai boasts a response style that simulates a more affect-laden quality. Some users appreciate its emotional depth, which lays the groundwork for longer conversations, while others find it "creepy," often ending interactions as soon as they begin.

Notably, Microsoft made headlines earlier in 2024 by acquiring major talent from Inflection AI, a move that indicates a significant investment in developing more personable and persuasive AI technologies. This strategy raises significant concerns. Sinead Bovell, founder of the WAYE organization, warns that "this is the first time a non-human entity will be given the keys to human language in a way that’s indistinguishable from humans themselves."

As these AI models acquire increased agency over our data, communication, and finances, the threat of misuse and manipulation increases. The societal impact could be significant, with AI potentially exacerbating the issues already posed by social media.

Recognizing these developments and understanding their implications can help us navigate the ethical landscape of AI and ensure these powerful tools are used responsibly.

A Tipping Point: Ease of Use Versus Ease of Influence

While the ease of use of these AI tools is undeniably beneficial in many ways, it will inevitably approach a tipping point: when does ease of use become ease of influence for the builders of these tools?

This question is particularly important in the context of young people. In his book The Anxious Generation, Jonathan Haidt argues that young people aged 10-15 are ill-equipped to navigate the complex ethical and influential domains created by new media. This age group, already vulnerable to the influences of social media, may find it even more challenging to discern the subtle influences embedded in AI interactions designed for engagement and consumption.

Most adults have or can develop the capacity to interact with more persuasive forms of AI. However, our younger users are incredibly vulnerable. In the context of educational use, do we really want an AI tool that seeks primarily to please us, rather than one that offers us resistance, as Google's LearnLM promises to do?

Conclusion: Navigating the New AI Landscape

As educators and technologists, we must carefully navigate this new landscape of persuasive AI. The allure of highly accessible and engaging AI models like ChatGPT4o is undeniable, particularly when access is free, but the potential risks to privacy, security, and ethical integrity cannot be ignored. Schools and educational institutions should consider alternative AI tools that prioritize safety and privacy, such as Lex and PowerNotes, which offer more controlled environments for student interaction.

The rapid consumerization of AI, driven by business models focused on influence and engagement, calls for a re-evaluation of how we integrate these technologies into our lives. By embracing responsible and ethical AI tools, we can leverage the benefits of AI while safeguarding against its potential pitfalls. As the conversation around AI continues to evolve, it is imperative that we remain vigilant and proactive in addressing the challenges and opportunities presented by these powerful technologies.

Thank you for reading. This was a guest post by Nick Potkalitsky, PhD.


David A. Hall MHA, MA, MIS/IT, PMP

Advanced Clinical Solutions (DCT AI ML RPM RWE) | Life Sciences | Pharma/BioTech Excellence | Healthcare & Medical Devices | Harvard, Indiana U. Medical Ctr. | Web3 | Keynote Speaker/Panelist

4 months ago

Next CCQ cross cultural communications

Nick Potkalitsky, PhD

AI Literacy Consultant, Instructor, Researcher

5 months ago

Thanks, Michael Spencer, for publishing this piece. As a K-12 educator, I appreciate the opportunity to share my first-hand experience. While I do see amazing potential for ChatGPT-4o and similar products in the long term, my working space still lacks policies and processes for effectively integrating and implementing these tools into classroom work-cycles. This year will be a big year as we collaborate to create and develop such policies, paving the way for exciting possibilities like those Robert Pearl, M.D. is describing.

Robert Pearl, M.D.

Author of "ChatGPT, MD" | Forbes Healthcare Contributor | Stanford Faculty | Podcast Host | Former CEO of Permanente Medical Group (Kaiser Permanente)

5 months ago

Fascinating and informative perspective, Michael Spencer. My latest LinkedIn newsletter touches on the potential impact of companion/conversational AI in medicine. Based on my early experiences, GPT-4o is a different breed than what has come before. While most coverage of the update falls into the categories of "digital dystopia" or "spec-driven scrutiny," most are missing a different angle: by making AI interactions as natural and comforting as talking to a friend, or potentially even a doctor, GPT-4o sets a foundation for revolutionizing patient care and improving health outcomes on a massive scale. https://www.dhirubhai.net/pulse/lifesaving-potential-openais-gpt-4o-update-robert-pearl-m-d--ngrmc/
