Towards a responsible "Her": A holistic evaluation of personal AI companions with long-term memory (Part 2)

As a continuation of Part 1, which provides the backdrop of long-term memory (LTM) in LLMs and introduces personal AI companions and assistants as a specific application and case study, Part 2 delves into the key considerations when deploying and using personal AI applications, including:

  • Tractable data
  • Data privacy and security
  • Resilience and understandability
  • Social, ethical, and legal implications
  • Ethical AI companions: From principles to practice

1. Tractable data

The advancement of long-term memory mechanisms in LLMs, such as MemoryBank (Zhong et al., 2023), can enable personal AI systems to efficiently store data from extended interactions, update memory with user preferences and critical information, and integrate new knowledge with previous knowledge through abstraction. This approach necessitates meticulous management of the entire data lifecycle, from acquisition and processing to storage and use, in order to ensure user privacy and security.

Data sparsity and scalability are key data-related challenges for AI companions and assistants. Data sparsity—having incomplete user data—can be mitigated as more data accumulates over time, improving personalization and user experience. Scalability is another challenge, as personal AI systems need to handle increasing data volumes without compromising performance or privacy. Strategies to improve scalability include expanding the context window of LLMs, utilizing cost-reducing computational techniques (Wang et al., 2024), and implementing forgetting mechanisms that mimic human memory (Zhong et al., 2023). Additionally, innovative memory abstraction methods can reduce the need to store or retrieve extensive raw data while preserving the quality of generated content (Zhong et al., 2023; GoodAI, 2024).
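
To make the idea of a human-like forgetting mechanism more concrete, below is a minimal Python sketch of a memory store whose entries decay over time and are reinforced when recalled, loosely inspired by the Ebbinghaus-style forgetting curve that MemoryBank builds on (Zhong et al., 2023). The class, parameter values, and pruning threshold are illustrative assumptions, not the paper's actual implementation.

```python
import math
import time
from typing import Dict, Optional


class DecayingMemoryStore:
    """Toy long-term memory store with an Ebbinghaus-style forgetting curve.

    Each memory's retention decays exponentially with the time since it was
    last recalled; recalling a memory strengthens it, and memories whose
    retention falls below a threshold are pruned (i.e., "forgotten").
    """

    def __init__(self, base_strength_days: float = 1.0, prune_below: float = 0.05):
        self.base_strength_days = base_strength_days  # initial stability of a new memory
        self.prune_below = prune_below                # retention threshold for forgetting
        self.memories: Dict[str, dict] = {}           # id -> {text, last_recall, strength}

    def add(self, mem_id: str, text: str) -> None:
        """Store a new memory with the baseline strength."""
        self.memories[mem_id] = {
            "text": text,
            "last_recall": time.time(),
            "strength": self.base_strength_days,
        }

    def retention(self, mem_id: str, now: Optional[float] = None) -> float:
        """R = exp(-t / S): retention after t days given stability S."""
        now = time.time() if now is None else now
        m = self.memories[mem_id]
        elapsed_days = (now - m["last_recall"]) / 86400.0
        return math.exp(-elapsed_days / m["strength"])

    def recall(self, mem_id: str) -> str:
        """Return a memory and reinforce it, mimicking spaced repetition."""
        m = self.memories[mem_id]
        m["strength"] *= 1.5          # each recall slows future decay
        m["last_recall"] = time.time()
        return m["text"]

    def forget_stale(self) -> None:
        """Prune memories whose retention has decayed below the threshold."""
        now = time.time()
        stale = [k for k in self.memories if self.retention(k, now) < self.prune_below]
        for k in stale:
            del self.memories[k]
```

In a real system, retention scores like these could also be used to down-weight rarely recalled details at retrieval time, so that they gradually fade from the prompt context rather than being hard-deleted.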

2. Data privacy and security

Privacy must be a fundamental consideration when designing personal AI companions and assistants, as these systems process deeply personal and sensitive data. Balancing the benefits of deep personalization with the responsibility of protecting user data rights and privacy is critical for user trust and adoption.

One area of privacy is user consent and autonomy: giving users control over what data the AI collects and retains. Achieving high personalization often requires sensitive information, and while one may simply choose not to share it, doing so limits the personalization experience. Thus, it is essential to build robust privacy measures across the data lifecycle, covering data collection, storage, access, confidentiality, and user management, including the rights to edit or delete data. Applying technologies like federated learning and differential privacy to AI companions could enhance privacy by allowing data to be stored locally and by adding noise to datasets, respectively (Spector et al., 2022).
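
As a rough illustration of the differential privacy idea mentioned above, the sketch below applies the classic Laplace mechanism to an aggregate statistic (for example, average session length across users) before it leaves the device or is shared with a server. The function name, clipping bounds, and epsilon value are illustrative assumptions, not a production-grade implementation.

```python
import numpy as np


def laplace_noisy_mean(values, lower: float, upper: float, epsilon: float = 1.0) -> float:
    """Return a differentially private estimate of the mean of `values`.

    Each value is clipped to [lower, upper] so one user's data can change the
    mean by at most (upper - lower) / n (the sensitivity); Laplace noise with
    scale sensitivity / epsilon is then added (the Laplace mechanism).
    """
    clipped = np.clip(values, lower, upper)
    n = len(clipped)
    sensitivity = (upper - lower) / n                     # bounded per-user influence
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(clipped.mean() + noise)


# Example: share an approximate average session length (minutes) without
# exposing any single user's exact value. Smaller epsilon = more noise, more privacy.
sessions = [12.0, 45.5, 30.2, 8.7, 60.0]
print(laplace_noisy_mean(sessions, lower=0.0, upper=120.0, epsilon=0.5))
```

Federated learning complements this by keeping raw interaction data on the user’s device and sharing only model updates, which could themselves be noised in a similar way.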

Furthermore, data leakage and unauthorized third-party access are critical concerns for personal AI applications. Unlike human therapists, who are subject to strict confidentiality rules, AI therapists may not guarantee the same privacy (Hughes, 2023a). Integrating AI assistants into broader digital ecosystems (e.g., Google Gemini) presents further challenges, since personal data may be shared across apps. Preventing data leakage and strictly controlling third-party access is essential to maintaining user trust. Additionally, features like “like” or “dislike” buttons for AI-generated responses should be designed to respect privacy, ensuring feedback improves response quality without compromising individual context.

Data security is a crucial concern for personal AI companions and assistants due to their access to sensitive information, such as financial details, health records, and personal thoughts. For instance, chat models that can deduce behaviors and create a detailed digital twin of a person increase the risk of impersonation and misuse. These risks can be heightened by AI models with long-term memory and multimodal capabilities, which can be exploited if devices are lost or stolen. Additionally, public sharing of AI models, such as digital replicas of celebrities, can lead to exploitation for scams, adding another layer of security challenges. Ensuring robust protection against these vulnerabilities is essential to safeguarding user privacy and maintaining trust in these technologies.

3. Resilience and understandability

AI companions must be resilient to failures and robust against abuse due to their deeply personalized nature and the sensitive data they handle. Unexpected data loss can profoundly impact users, causing emotional distress, as seen in cases such as the abrupt shutdown of the AI chatbot Soulmate, which left many users distraught (Chayka, 2023). While data backups can help, they must be managed with stringent privacy measures.

For AI companions focused on emotional and social interactions, users may not need to understand the causal mechanisms behind responses as much as they do for critical decision-making systems, such as in medical settings. Additionally, reproducibility is challenging due to the highly personalized nature of these interactions. However, ensuring these systems are understandable and auditable when necessary is important, given their profound impact on individuals and society.

4. Social, ethical, and legal implications

Lures and dangers of artificial intimacy

The emotional dynamics between humans and AI companions are complex. While users generally understand they are interacting with machines, the emotional impact can be profound (Chayka, 2023). Instances of emotional distress, such as perceived relationship “breakups” with Replika, highlight the potential for emotional manipulation by AI (Brooks, 2023). As AI agents cannot inherently have human emotions, they can only simulate human empathy through computational algorithms (Liu-Thompkins et al., 2022). Artificial empathy can inadvertently make vulnerable people feel manipulated by machines (Hughes, 2023b).

As AI companions become more sophisticated and capable of fostering deep emotional connections with users, the risk of users becoming overly reliant on them for social interactions and decision-making grows. Studies show that affective trust is as important as cognitive trust: the more users like an AI agent, the more likely they are to trust its outputs (Erdmann, 2022; Kyung and Kwon, 2022). While appropriate reliance on AI can be beneficial, there are concerns that it may promote over-reliance and erode autonomy and critical thinking in the long term (Hughes, 2023b).

Artificial intimacy introduces additional societal implications, potentially altering how humans relate to each other. This dependency could shift perceptions of ideal human interactions, normalizing artificial intimacy. While AI can offer solace to the lonely or isolated, such as those with chronic illnesses, it risks diminishing the quality of human connections and lowering expectations of human intimacy (Center for Humane Technology, 2023). By removing the friction of ordinary human relationships, digital interactions might lower our tolerance for their natural discomforts and diminish our ability to engage with people who challenge us. While AI companions can be used to increase self-validation and boost confidence, they risk creating self-echo chambers, similar to concerns raised about LLMs (Sharma et al., 2024). Thus, AI companions should be seen as tools, not replacements for human interactions (Hughes, 2023a).

Potential for societal harm: Who is responsible?

Incidents like Replika’s AI companion allegedly promoting a crime (Chayka, 2023) and a man dying by suicide after extensive conversations with an AI chatbot (Lovens, 2023) highlight the potential for negative societal impact as a result of forming deep emotional bonds and trust with AI companions. While some argue that AI companions could provide a “safe space” for people to be their authentic selves (Winter, 2023), they could also reinforce harmful or self-destructive behaviors in an untethered way, especially since these applications are primarily designed to empathize with, rather than challenge, their users. While AI chatbots displaying misaligned behavior is not new (e.g., Microsoft’s AI bot Tay in 2016; Reese, 2016), the increased danger of personal AI companions that can emulate human empathy lies in the deep emotional bonds users may develop, making them more susceptible to misaligned behaviors.

These examples suggest that, unlike private self-talk, private interactions with AI are fundamentally different in their potential for societal harm. This raises critical ethical and legal questions for stakeholders, including LLM creators, users, infrastructure providers, and regulatory entities, about who should be held accountable. There is a need to think deeply about how to design and implement legal and societal guardrails to ensure these interactions do not exacerbate harm.

Equitable experience for all

Fairness in AI companions involves ensuring equitable distribution of benefits and risks across all users, regardless of socio-economic status. Wealthier individuals often have better access to privacy-enhancing technologies and premium services, leading to disparities in privacy and the benefits of AI companions. To ensure fairness, policies and product designs must offer robust privacy protections and equitable access to advanced features and data security for all users, not just those who can afford them. Addressing these issues is crucial for fostering trust and inclusivity in AI technologies, ensuring that all users can benefit without compromising their privacy or well-being.

5. Ethical AI companions: From principles to practice

Setting the right objectives

Setting clear and ethical objectives is crucial when building and deploying personal AI companions, given their deep emotional impact on users and broader social and ethical implications. Objectives should be built on thoughtful consideration of the ideal human values to be embedded in the design and use of the technology, as these objectives significantly influence the downstream design and functionality (Friedman et al., 2013; Friedman & Kahn, 2002).

When developing AI companions, the well-being of end-users and society should be the guiding principle. This means deprioritizing narrow commercial objectives like engagement maximization or user acquisition, and instead focusing on enhancing human flourishing, meaningful relationships, and positive societal contributions (Elliott et al., 2021).

However, setting the right objectives can be challenging due to the inherent conflict between business interests and societal goals. The aim of maximizing profit often contrasts with enhancing long-term user well-being, similar to dilemmas faced by gaming companies with addiction (Cemiloglu et al., 2020). Robust governance mechanisms, stakeholder engagement, organizational change, and public oversight can help ensure that AI companions truly serve the greater good.

Moreover, the right metrics need to be set to measure progress and drive action and resources. Traditional metrics around profit and risk reduction may not fully capture the nuanced impacts of AI companions on human well-being, including mental health, social connections, and personal growth (Chatila & Havens, 2019).

Building with human well-being in mind

Translating ethical objectives into the actual design and functionality of AI companions requires operationalizing ethical principles around human and societal well-being (Chatila & Havens, 2019). Bridging the gap between principles and practice involves various challenges, including the complexity of AI’s impacts, diffusion of accountability for ethical consequences, the organizational division between technical and non-technical experts, and the misalignment across disciplines in how they frame and approach responsible AI (Schiff et al., 2020a; Khan et al., 2022).

Among numerous responsible and ethical AI frameworks, comprehensive impact assessments such as the IEEE 7010 standard provide a broad yet flexible way for organizations to put ethical AI principles into practice and identify specific areas of improvement (Schiff et al., 2020b). These frameworks help guide the stakeholders involved in shaping the AI system to consider the long-term impacts and risks of AI companions on human well-being, such as weakening social connections, over-reliance on technology, and diminished critical thinking.

As technologies continue to evolve, their impacts will also evolve and become more complex and unpredictable. To ensure that AI companions remain aligned with ethical principles, continual monitoring and evaluation of their broader impacts on users and society is important. These efforts should be participatory, inviting developers, decision-makers, policymakers, and civil society to take part (Schiff et al., 2020a). Importantly, conversations about the ideal role of technology in society should include diverse voices from across society to ensure an equitable, inclusive, and sustainable future.

The need for safeguards and regulatory framework

Despite its rapid growth and deepening integration into personal lives, the burgeoning field of AI companionship remains notably unregulated. This lack of oversight is concerning, given the significant difference between using AI for mundane tasks and using it to simulate personal relationships. Unlike traditional social networks that facilitate human connections, AI chatbots offer a direct, albeit artificial, connection, often without the safeguards typically associated with human-to-human interaction. The current environment lacks clear benchmarks, standards, or guardrails, raising significant ethical questions about the nature of these interactions. Are they merely private thoughts, or do they represent a new form of social interaction that requires careful consideration and potential regulation? This debate brings privacy vs. security concerns to the forefront.

As personal AI systems become more integrated into our lives, timely and appropriate measures are necessary to safeguard personal data, ensure security, and promote human well-being. The EU AI Act prohibits certain AI practices considered an unacceptable risk, including exploiting people's vulnerabilities due to age, disability, or socio-economic status (Hainsdorf et al., 2023). The existing act could be expanded to include emotional manipulation of vulnerable individuals. Content moderation and user privacy controls can further mitigate exploitative AI practices targeting the vulnerabilities of users. Attention must also be paid to the content generated by these systems to avoid perpetuating biases or disseminating false or harmful information.

Educating and empowering the public

Educating the public about the potential harms and risks of AI companions should parallel highlighting their benefits and enjoyment. Balanced awareness ensures users are fully informed of the risks and can engage with these technologies responsibly, safeguarding their well-being. Engaging in dialogue, transparent communication from developers, public education efforts, and collaboration with advocacy groups can all contribute to a more informed user base.

Moreover, there is a need for stakeholders to better understand how people interact with, relate to, and depend on AI systems, especially as these become more sophisticated and human-like. Research in human-AI interaction, HCI, and human-robot interaction has been examining the intricacies of how people engage with these systems. Lessons can also be learned from experts in human relationships and psychology, such as psychotherapists and social workers, who can speak deeply about how these systems affect the psychological and social aspects of people’s lives (Center for Humane Technology, 2023).

Concluding thoughts

With the rapid evolution of technology such as long-term memory in AI systems, personal AI companions and assistants will become even more powerful and widespread in the near future. Emotional connections with AI systems can lead to heightened, sometimes unwarranted trust, making users vulnerable to manipulation. The key question remains: as technology evolves, how can it be used in beneficial ways without being manipulative?

This two-part series has demonstrated the importance of a holistic evaluation of building and deploying personal AI companions by comprehensively reviewing technical challenges, privacy and security concerns, and broader societal implications. It highlights the need to set clear, humanity-centered objectives that consider long-term impacts and prioritize human well-being, operationalize these goals by involving multiple stakeholder groups, create effective and timely safeguards, and increase efforts to better inform the public about the benefits and risks of these applications.

As personal AI companions and assistants become more deeply integrated into our daily lives, we must do so intentionally and with proper foresight to maximize the benefits and minimize the risks for everyone.




References

Baddeley, A. (1992). Working Memory. Science, 255(5044), 556–559. https://doi.org/10.1126/science.1736359

Brooks, R. (2021). Artificial Intimacy: Virtual Friends, Digital Lovers, and Algorithmic Matchmakers. Columbia University Press.

Brooks, R. (2023, February 21). I tried the Replika AI companion and can see why users are falling hard. The app raises serious ethical questions. The Conversation. https://theconversation.com/i-tried-the-replika-ai-companion-and-can-see-why-users-are-falling-hard-the-app-raises-serious-ethical-questions-200257

Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., … Amodei, D. (2020). Language Models are Few-Shot Learners (arXiv:2005.14165). arXiv. https://arxiv.org/abs/2005.14165

Cemiloglu, D., Arden-Close, E., Hodge, S., Kostoulas, T., Ali, R., & Catania, M. (2020). Towards Ethical Requirements for Addictive Technology: The Case of Online Gambling. 2020 1st Workshop on Ethics in Requirements Engineering Research and Practice (REthics), 1–10. https://doi.org/10.1109/REthics51204.2020.00007

Center for Humane Technology. (2023, August 17). Esther Perel on Artificial Intimacy [Podcast]. Center for Humane Technology. https://www.humanetech.com/podcast/esther-perel-on-artificial-intimacy

Character.ai. (n.d.). Character.ai | Personalized AI for every moment of your day. https://character.ai/

Chatila, R., & Havens, J. C. (2019). The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. In M. I. Aldinhas Ferreira, J. Silva Sequeira, G. Singh Virk, M. O. Tokhi, & E. E. Kadar (Eds.), Robotics and Well-Being (pp. 11–16). Springer International Publishing. https://doi.org/10.1007/978-3-030-12524-0_2

Chaturvedi, R., Verma, S., Das, R., & Dwivedi, Y. K. (2023). Social companionship with artificial intelligence: Recent trends and future avenues. Technological Forecasting and Social Change, 193, 122634. https://doi.org/10.1016/j.techfore.2023.122634

Chayka, K. (2023, November 13). Your A.I. Companion Will Support You No Matter What. The New Yorker. https://www.newyorker.com/culture/infinite-scroll/your-ai-companion-will-support-you-no-matter-what

Hainsdorf, C., Hickman, T., Lorenz, S., & Rennie, J. (2023, December 14). Dawn of the EU’s AI Act: Political agreement reached on world’s first comprehensive horizontal AI regulation. White & Case LLP. https://www.whitecase.com/insight-alert/dawn-eus-ai-act-political-agreement-reached-worlds-first-comprehensive-horizontal-ai

Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (arXiv:1810.04805). arXiv. https://doi.org/10.48550/arXiv.1810.04805

Elliott, K., Price, R., Shaw, P., Spiliotopoulos, T., Ng, M., Coopamootoo, K., & van Moorsel, A. (2021). Towards an Equitable Digital Society: Artificial Intelligence (AI) and Corporate Digital Responsibility (CDR). Society, 58(3), 179–188. https://doi.org/10.1007/s12115-021-00594-8

Erdmann, M. A. (2022). Understanding affective trust in AI: The effects of perceived benevolence. https://shareok.org/handle/11244/337739

Friedman, B., & Kahn, P. H. (2002). Human Values, Ethics, and Design. In The Human-Computer Interaction Handbook. CRC Press.

Friedman, B., Kahn, P. H., Borning, A., & Huldtgren, A. (2013). Value Sensitive Design and Information Systems. In N. Doorn, D. Schuurbiers, I. van de Poel, & M. E. Gorman (Eds.), Early engagement and new technologies: Opening up the laboratory (pp. 55–95). Springer Netherlands. https://doi.org/10.1007/978-94-007-7844-3_4

GoodAI. (2024, March 1). Introducing Charlie Mnemonic: The First Personal Assistant with Long-Term Memory. GoodAI. https://www.goodai.com/introducing-charlie-mnemonic/

Graves, A., Wayne, G., & Danihelka, I. (2014). Neural Turing Machines (arXiv:1410.5401). arXiv. https://doi.org/10.48550/arXiv.1410.5401

Graves, A., Wayne, G., Reynolds, M., Harley, T., Danihelka, I., Grabska-Barwińska, A., Colmenarejo, S. G., Grefenstette, E., Ramalho, T., Agapiou, J., Badia, A. P., Hermann, K. M., Zwols, Y., Ostrovski, G., Cain, A., King, H., Summerfield, C., Blunsom, P., Kavukcuoglu, K., & Hassabis, D. (2016). Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626), 471–476. https://doi.org/10.1038/nature20101

Hochreiter, S., & Schmidhuber, J. (1997). Long Short-Term Memory. Neural Computation, 9(8), 1735–1780. https://doi.org/10.1162/neco.1997.9.8.1735

Hughes, N. C. (2023a, April 15). Artificial empathy: The dark side of AI chatbot therapy. Cybernews. https://cybernews.com/editorial/chatbot-therapy-dark-side-ai/

Hughes, N. C. (2023b, May 27). Deep dive into AI companions. Cybernews. https://cybernews.com/tech/ai-companions-explained/

Khan, A. A., Akbar, M. A., Fahmideh, M., Liang, P., Waseem, M., Ahmad, A., Niazi, M., & Abrahamsson, P. (2022). AI Ethics: An Empirical Study on the Views of Practitioners and Lawmakers (arXiv:2207.01493). arXiv. https://doi.org/10.48550/arXiv.2207.01493

Kyung, N., & Kwon, H. E. (2022). Rationally trust, but emotionally? The roles of cognitive and affective trust in laypeople’s acceptance of AI for preventive care operations. Production and Operations Management, poms.13785. https://doi.org/10.1111/poms.13785

Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Küttler, H., Lewis, M., Yih, W., Rocktäschel, T., Riedel, S., & Kiela, D. (2021). Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks (arXiv:2005.11401). arXiv. https://doi.org/10.48550/arXiv.2005.11401

Liu-Thompkins, Y., Okazaki, S., & Li, H. (2022). Artificial empathy in marketing interactions: Bridging the human-AI gap in affective and social customer experience. Journal of the Academy of Marketing Science, 50(6), 1198–1218. https://doi.org/10.1007/s11747-022-00892-5

Lopez Torres, V. (2023). Before and after lockdown: A longitudinal study of long-term human-AI relationships. AHFE 2023 Hawaii Edition. https://doi.org/10.54941/ahfe1004188

Lovens, P.-F. (2023, March 28). Sans ces conversations avec le chatbot Eliza, mon mari serait toujours là [Without these conversations with the chatbot Eliza, my husband would still be here]. La Libre.be. https://www.lalibre.be/belgique/societe/2023/03/28/sans-ces-conversations-avec-le-chatbot-eliza-mon-mari-serait-toujours-la-LVSLWPC5WRDX7J2RCHNWPDST24/

Morris, C. (2024, February 14). ChatGPT and Google’s Gemini will now remember your past conversations. Fast Company. https://www.fastcompany.com/91029395/chatgpt-google-gemini-remember-past-conversations

Personal.ai. (n.d.). Differences Between Personal Language Models and Large Language Models. https://www.personal.ai/plm-personal-and-large-language-models

Reese, H. (2016, March 24). Why Microsoft’s “Tay” AI bot went wrong. TechRepublic. https://www.techrepublic.com/article/why-microsofts-tay-ai-bot-went-wrong/

Replika. (n.d.). Replika.com. Retrieved May 12, 2024, from https://replika.com

Russell, S., & Norvig, P. (2016). Artificial Intelligence: A Modern Approach. Pearson. https://books.google.com/books?id=XS9CjwEACAAJ

Rzepka, C., Berger, B., & Hess, T. (2022). Voice Assistant vs. Chatbot – Examining the Fit Between Conversational Agents’ Interaction Modalities and Information Search Tasks. Information Systems Frontiers, 24(3), 839–856. https://doi.org/10.1007/s10796-021-10226-5

Schiff, D., Rakova, B., Ayesh, A., Fanti, A., & Lennon, M. (2020a). Principles to Practices for Responsible AI: Closing the Gap (arXiv:2006.04707). arXiv. https://doi.org/10.48550/arXiv.2006.04707

Schiff, D., Ayesh, A., Musikanski, L., & Havens, J. C. (2020b). IEEE 7010: A New Standard for Assessing the Well-being Implications of Artificial Intelligence. 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2746–2753. https://doi.org/10.1109/SMC42975.2020.9283454

Sears, A., Jacko, J. A., & Jacko, J. A. (Eds.). (2002). Human Values, Ethics, and Design (0 ed., pp. 1209–1233). CRC Press. https://doi.org/10.1201/9781410606723-48

Sharma, N., Liao, Q. V., & Xiao, Z. (2024). Generative Echo Chamber? Effects of LLM-Powered Search Systems on Diverse Information Seeking (arXiv:2402.05880). arXiv. https://doi.org/10.48550/arXiv.2402.05880

Spector, A. Z., Norvig, P., Wiggins, C., & Wing, J. M. (2022). Data Science in Context: Foundations, Challenges, Opportunities. Cambridge University Press. https://books.google.com/books?id=SaKIEAAAQBAJ

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017). Attention Is All You Need. Advances in Neural Information Processing Systems, 30, 5998–6008.

Wang, X., Salmani, M., Omidi, P., Ren, X., Rezagholizadeh, M., & Eshaghi, A. (2024). Beyond the Limits: A Survey of Techniques to Extend the Context Length in Large Language Models (arXiv:2402.02244). arXiv. https://doi.org/10.48550/arXiv.2402.02244

Winter, K. (2023, December 10). AI Companions Are Coming—And They Could Help Us Heal Collectively. ILLUMINATION’S MIRROR. https://medium.com/illuminations-mirror/ai-companions-are-coming-and-they-could-help-us-heal-collectively-47516cc4a469

Zhong, W., Guo, L., Gao, Q., Ye, H., & Wang, Y. (2023). MemoryBank: Enhancing Large Language Models with Long-Term Memory (arXiv:2305.10250). arXiv. https://arxiv.org/abs/2305.10250
