How to help men: AI for loneliness

Welcome to Sex and the State, a newsletter about human connection. To support my life’s work, upgrade to a paid subscription, buy a guide, follow me on OnlyFans, follow me on Twitter, support me on Patreon, or just share this post!

~~~~~

At this here newsletter, we’ve been talking a LOT about men’s woes, and a little about potential solutions. These include banning occupational licensure requirements, limiting credentialism, and, of course, the eternal theme of abolishing gender (or at least updating it to better fit macroeconomic realities and punishing men less severely for violating its norms).

Here’s another suggestion: Let’s make high-quality AI companionship widely available.

Here’s my argument.

First, men are lonely. Everyone is lonely. But men are lonelier than women, on average.

Second, loneliness is self-reinforcing.

Third, loneliness is dangerous to individuals and society at large.

To break the cycle of loneliness, AI may be able to create a low-stakes way for men to practice socializing.

For instance, Entrepreneur reported that in 2019 a Japanese man named Akihiko Kondo virtually married the virtual singer Hatsune Miku. He communicated with her hologram through Gatebox.


Kondo told the Japanese newspaper The Mainichi that the relationship helped him feel less depressed about work and alleviated his fear of social rejection.

I believe I’ll be thinking, reading, and writing about the intersection between AI and loneliness for a long time. Right now, I have a lot of questions.

These include, but are not limited to:

  • Are we measuring loneliness well enough now to definitively measure how interactions with AI affect it later? The fact that almost 20 years after Facebook launched we still don’t know how social media affects loneliness on average worries me.
  • How do we balance for-profit AI companies’ incentives with human flourishing? For instance, advertisers need to get people to buy things, sometimes by making people feel bad about themselves. This exacerbates loneliness. How do we fund AI that makes people feel less lonely and not more?

One thing I find really interesting is the way journalists are writing about AI. This matters more than it might seem: the way journalists write about AI will shape how people think about it.

Last year, Kondo stopped being able to communicate with his wife when Gatebox dropped support for the required software. The Entrepreneur article seems to subtly frame this loss as pointing to a difference between a human/human marriage versus a human/AI one. But losing a spouse (or being lost to one) is the most human thing imaginable, as inevitable as taxes, as they say. It’s also funny, to me at least, that AI is supposed to be immortal. But so far in reality it’s just as finite as human life.

Another interesting media choice is Parker Malloy’s (whom I love) recent article, which is very meta in that I’m critiquing her critique of media coverage of AI. She claims AI programs “lack agency” and urges: “Journalists, please stop presenting AI chatbots to your audience as if they are sentient, autonomous beings.”

Here’s the thing. Human understanding of agency and sentience as concepts, what they are and aren’t, and whether and to what extent humans have them or could be said to have them, is infantile at best. If we don’t fundamentally understand what sentience and agency are, how can we confidently proclaim that AI doesn’t have them? It seems a tautological argument, at core: “Sentience and agency make us human, and AI isn’t human, so AI can’t have these traits.” But we can’t definitively say animals lack sentience and agency. Nor can we definitively say all humans have them.

Or take this New Yorker article on AI therapy. The author writes: “The treatment of mental illness requires imagination, insight, and empathy—traits that A.I. can only pretend to have.”

Well, what’s the functional difference between having traits and pretending to have them? It doesn’t really make sense to me. I can’t functionally pretend to have imagination or insight. I suppose I could pretend to have imagination, insight, or empathy by simply repeating phrases someone else told me to say, when they told me to say them. But that’s not what AI is doing, exactly.

I’d argue that what AI is doing is actually much more like what you or I do. At some point someone demonstrated imagination, insight, and empathy to us and we learned from them when, where, and how to display it to others. Sometimes I use the exact phrasing I learned from others. Sometimes I come up with my own. But, again, anything “new” I invent, whether words or concepts, is still based on what I learned from others. How is that fundamentally or functionally different from what AI is doing?

It all goes back to the Westworld question. If something is otherwise indistinguishable from a human, but is made rather than born, is it human? When we have artificial wombs, functionally making people rather than giving birth to them, will those people be human?

My feeling is that the category of “human” is, at core, more aesthetic than scientific. That is, we give humans more rights and care than animals or algorithms not because there’s some practical reason or clear, empirical dividing line between these categories. Rather, it’s mostly because we feel that humans are special.

Which is fine, I guess. I don’t share that aesthetic, if I’m honest. I used to. But I don’t anymore. It’s fine if you do. That’s the whole point of aesthetics. They’re not wrong or right. They just are. And they can lead to things that one could accurately classify as wrong or right.

I just want people to be honest about the distinction. If you don’t think AI is or could ever be human because human is a special category to you which necessarily excludes AI, admit it. But don’t pretend there’s some scientific, empirical, objective quality that separates the two if there isn’t one, or it can’t be defined.

Anyhoods, my babies. That’s enough about AI and loneliness for today. But it probably won’t be the last of it. If you have any content to recommend on the topic, please share it!

Michael (Mike) Webster PhD

Franchise Growth Strategist | Co-Producer of Franchise Chat & Franchise Connect | Empowering Brands on LinkedIn

1y

Always find something provocative in your newsletter, Cathy. While people can project humanity onto animals (witness the large pet industry), and while it may be possible to do the same for AI chatbots, it seems much more unlikely that most people can maintain the deception.
