The Surrender of Autonomy

Autonomy in the Age of AI

There are dozens, or, when atomized into their constituent parts, hundreds of risks posed by AI. Google’s DeepMind helpfully identified many of them: social stereotypes and unfair discrimination, toxic language, compromising privacy by leaking private information, leading users to perform unethical or illegal actions, disseminating false or misleading information, and facilitating fraud and ever more targeted manipulation. Each of these could cause tremendous harm.

Take the risk of disinformation, for example. It can manipulate recipients, preying not only on our ignorance, fears, and insecurities but also on our love for others. Using AI to spread disinformation broadly could undermine trust in one another, in news sources, in the government, and, ultimately, in society. The inability to trust information would add friction to social relationships, business transactions, and our capacity to self-govern. Without a shared source of truth at some foundational level, anyone who wants to badly enough could find information that lets them comfortably believe whatever they wish, with minimal cognitive dissonance. This would be like social media filter bubbles on steroids, guiding us into the deepest depths of confirmation bias or driving us further apart through an overwhelming propaganda system.

Another harm could be AI reinforcing the biases embedded in its training data. The AI could disproportionately and unfairly target people who are poor or members of racial minorities. It could uniformly suggest one gender for promotions even when the suggested candidates are less qualified than others. It could spread unfounded information about certain religious or ethnic groups, scapegoating them for the problems suffered by the majority. Each of these scenarios could degrade society by artificially suppressing our potential to flourish.

A feature these and most harms share, however, is that they would only work at scale if deployed at scale, and the only probable way of doing so is through common platforms used by at least tens of thousands or, more likely, hundreds of millions or even billions of people. Importantly, the term “platforms,” as used here, includes not just social media but also Microsoft Azure and 365, Cloudflare’s internet infrastructure, Amazon Web Services, and more. These necessary vectors, disreputable as some of them may be in certain regards, including their privacy practices and effects on market competition, share a virtue: they can efficiently police the content that flows through their platforms if they choose to do so.

The platforms are generally highly effective at this screening. It’s shocking to see porn or gruesome images on Facebook precisely because the company is good at removing such content before it spreads widely. Granted, the platforms are imperfect, but perfection wouldn’t be required to stop mass disinformation, propaganda, or insidious biases. Even tools used at significantly smaller scales, as is the case with human resources programs responsible for hiring and promotions, can be monitored and corrected. Indeed, laws in some states already require such vigilance and impose liability on those who use AI to discriminate illegally. Regulation, whether self-imposed or mandated by government, can reduce or prevent almost all of the risks DeepMind identified by limiting access to necessary materials, monitoring the use of water and electricity (necessary inputs for vast computation), and dishing out considerable fines and other punishments, just as self-conscious companies, the Federal Trade Commission, the European Union, and state laws are beginning to do today.

But there is one harm that cannot be regulated away. It can’t be monitored and remediated in any meaningful manner. It can’t be rolled back any more than we could roll back people wearing shoes or owning vehicles. Worse still, we often don’t even classify it as a risk because we tend to treat it as a desirable outcome. This is, of course, the concession of autonomy: the willful handing over of human agency to artificial intelligence.

This submission to AI is not new. Humans have gradually and gladly given up agency for decades. We happily let ATMs handle simple financial transactions beginning in the 1970s. We welcomed spell check on personal computers in the 1980s. We asked MapQuest in the 1990s, and then Google Maps in the early aughts, to guide our routes rather than navigating with only an atlas and a keen eye for road signs. Our search engines sort through billions of web pages in a fraction of a second to serve the links we’re most likely to find useful rather than requiring us to comb the internet manually. Our photo apps automatically edit our photos, so we don’t have to rotate them or adjust brightness and saturation one by one. The list of activities has only grown, and with greater rapidity over time: translation, driving, shopping, fitness tracking, relationship pairing, course and career selection, financial investments, logging into our devices with our faces and thumbprints, and much, much more.[1]

We are now even beginning to outsource playtime to AI. Grok, an AI-powered stuffed animal, can interact and “converse” with children. Toys like this may end up replacing significant chunks of time that would otherwise have been spent playing with parents or friends. The toys are designed to be engaging. A baseball bat is inert; a toy like this is more like social media, triggering the responses in your brain that keep you coming back and pulling you away from alternative activities. A bat doesn’t beckon you with anthropomorphism. It doesn’t smile, speak, or gesture. It cannot manipulate you. It requires the human to provide the motivation and create the fun.

At each step, we were introduced to AI – at first basic symbolic software that simply followed a long chain of if-this-then-that rules, now deep learning with neural networks – and we tested it out. Once convinced it performed better than we did at a given task, we happily handed all subsequent tasks of the same sort to the AI. It’s a rare occasion when we ignore the route chosen by Google Maps or reject a suggested spelling correction in Microsoft Word. Any counterexamples that pop into your mind likely stand out precisely because they’re so uncommon; otherwise, you wouldn’t use those applications so frequently. For every time you choose an alternative route to meet up with a friend, there are countless others where you simply follow the route Google suggested.

This is all entirely rational. In fact, for most people, almost all the time, it would be foolish not to follow the AI’s suggestions. The interesting part is that the AI doesn’t even have to be substantially better than we are at the same task – it only needs to be a slight improvement over our typical attempts for deferring to it to be the logical choice. If you choose the best route 80% of the time on the first try and the AI chooses it 85% of the time, then it would make little sense not to defer to the AI on almost all occasions.
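
To make that arithmetic concrete, here is a minimal sketch in Python. The 80% and 85% hit rates are the illustrative numbers from the paragraph above, not measurements, and the simulation itself is just a toy model of repeated route choices:

```python
import random

TRIPS = 10_000            # number of simulated route decisions
HUMAN_HIT_RATE = 0.80     # you pick the best route 80% of the time
AI_HIT_RATE = 0.85        # the AI picks it 85% of the time

def count_best_routes(hit_rate: float) -> int:
    """Count how often the best route is chosen across all trips."""
    return sum(random.random() < hit_rate for _ in range(TRIPS))

human_wins = count_best_routes(HUMAN_HIT_RATE)
ai_wins = count_best_routes(AI_HIT_RATE)

print(f"Human picks the best route on ~{human_wins} of {TRIPS} trips")
print(f"AI picks the best route on ~{ai_wins} of {TRIPS} trips")
# Over 10,000 trips, the five-point edge yields roughly 500 more
# best-route outcomes for the AI.
```

Run over thousands of trips, even that small edge compounds into hundreds of better outcomes, which is why deferring by default, rather than deliberating case by case, becomes the rational habit.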

But what does it mean to be human if our decisions are increasingly made at the instruction of AI? There are myriad ways AI could further pervade our lives, and we’d be grateful: identifying the precise nutrition plan we need to get the results we’re looking for, handing us the optimal opening line for a conversation with the person we’re most likely to be compatible with, revising the lyrics in our songs and poems to make them catchier or more clever or more moving, touching up artwork to add depth or pizzazz or to correct an errant brushstroke, allocating our time so that we feel productive rather than listless, burnt out, or like a mere cog. Importantly, just as with many AIs today (consider, for example, recommendation algorithms), the suggestions AI offers would be custom-tailored to you, and they’d improve over time.

For many luminaries in the AI field, AI will create a utopia: it will generate so much money and produce such a robust economy that humans will no longer need to work. Instead, we can focus on the activities that bring us the most joy or meaning. But AI is unlikely to sit out those same activities. Human creation, recreation, and leisure are as likely to succumb to the profusion of AI as any other pursuit. At some point, humans may cease to be the entity with the majority of the agency in the relationship and instead become the meat puppet of the AI, doing what the AI suggests for the simple reason that the AI is better at making such decisions. AI could know what will make us happiest, what will fulfill us, and what will fill us with love, thrill, curiosity, and calm better than we ever could on our own.

The danger isn’t the world depicted in WALL-E, where humans are essentially lazy, obese babies who can’t be inconvenienced. It’s that we will continue to believe we’re the ones in charge long after we’re not. We may, in fact, be healthier, happier, more attractive, more successful, and more intelligent as a result of following AI’s guidance. But our autonomy would be long gone. We may continue to believe we are ultimately the deciders of our fates, but automation bias will only grow stronger as AI becomes more powerful and accurate, making us less and less likely to override its suggestions. Technically, we would retain the ability to ignore the AI, but if ignoring it would put us at a disadvantage compared to others and decrease the quality of our lives by our own values, why would we?

The implication is that every entity creating AI today needs to think more deeply about the dependence its products could create in users and the isolation that may follow. What happens when my son doesn’t need to ask me how to adjust his bike or reset a circuit breaker? Perhaps we’ll become self-isolating precisely because we’ve become overly reliant on AI, mistaking that reliance for self-reliance. Why ask a human, even a parent, for help when AI will always be available, will usually be cheaper, and will be correct more often?

Superintelligence isn’t required for humans to become the minority shareholder in the agency relationship. The AI of today, or at least the AI of this decade, with all its faults, may be sufficiently advanced to outperform us at the tasks we believe make us human. And no nefarious intent, carelessness, or greed-warped motive is needed to diminish our autonomy. A perfectly free, safe, secure, ethically developed, open-source, helpful AI would lead to the same outcome. It might even get there faster, because people would adopt it more rapidly.

Is it romanticism to think there is something to the struggles of life? That we, as a species, don’t really want to snap our fingers and arrive at the top of a mountain, that we want the struggle of the climb? Perhaps in the short term, we don’t. The biggest harm of AI may not be recognized as a harm at all until it’s too late. We may willfully relinquish what it means to be human in any important way. We may, in fact, embrace the surrender. And there may be no policy, law, or regulation that can stop it.


[1] People already outsource to AI their wedding vows, graduation toasts, gift ideas, and more.

This post originally appeared at: https://intersectingai.substack.com/p/the-surrender-of-autonomy
