AI's Many Potentials


Part 2 of our bite-sized overview of GenAI and how it may impact the workforce.


MIT professor Daron Acemoglu notes in his statement to the AI Workforce Forum that it’s not inevitable that AI will displace workers.[1]

Many of the routine tasks that workers previously performed have already been automated, so a large fraction of current jobs requires problem-solving and decision-making tasks. Empowering workers to perform these tasks more effectively, and to accomplish even more sophisticated decision-making tasks, will necessitate providing workers with better information and decision-support tools. Generative AI is particularly well-suited to this type of information provision. An irony of our current digital era is that information is abundant, but useful information is scarce. Generative AI can help by recognizing the relevant context and presenting information that is useful for problem-solving, human decision-making, and performance in new, more complex tasks. For example, an electrician can much more effectively diagnose rare problems and handle complex tasks when empowered with AI tools that present information and recommendations on the basis of the accumulated knowledge from similar cases in the past. In essence, AI holds great potential for training and retraining expert workers, such as educators, medical personnel, software developers, and other “skilled craft workers” such as electricians and plumbers.

However, several factors currently in place make worker displacement the likelier outcome absent regulatory and legislative intervention:

First, many US corporations are focused on cost cutting due to the pressures of competition or short-run performance metrics. This often means that increasing the contribution of employees to long-run performance does not receive as much attention as it deserves.

Dr. Acemoglu goes on to describe what he sees as the likelier outcome of GenAI:

All of these factors push us towards more automation, rather than the pro-worker path. They are also amplified by claims that the next stage of automation itself can be inequality-reducing and projections that generative AI will lead to a huge productivity boom (thus obviating the need to make workers more productive). Both of these claims are unsupported. The idea that automation of well-paid office jobs will reduce inequality is not convincing in light of existing evidence; previous office software systems and early AI have not done this, and even if some high-skill tasks, such as accounting or financial analysis, are automated, workers previously performing these tasks will then compete with less-skilled workers for jobs. This will still transfer some of the burden of automation onto lower-skill Americans (as demonstrated by previous waves of automation). Moreover, while generative AI has tremendous potential, massive productivity benefits from AI-based automation are unlikely for at least two key reasons: first, because these tools can only automate a subset of tasks that humans perform; and second, because some of the tasks that will be automated—especially those involving social skills—are already performed quite productively by existing workers, which limits any opportunity for revolutionary productivity improvements. Even if there were significant productivity gains from automation, these gains would not accrue directly to workers, and building shared prosperity by relying mainly on redistributive policies would be difficult. For example, higher minimum wages could encourage even more automation in an environment where generative AI is providing additional automation tools.

The Role of Unions

I once asked an economics professor whether he thought the benefits of LLMs outweighed the potential harms to copyright holders. He said, in general, that he trusts the markets to sort things out and used the successful union strikes of 2023 (actors, screenwriters, and auto workers) as an example.

However, it’s not obvious that the invisible hand will be enough. Those strikes only happened after the harms occurred. Imagine if the government didn’t pass regulations against chemical companies dumping toxic sludge into public waterways. Would that be acceptable? Or do we instead want the government to anticipate and head off foreseeable harms? In many instances (low pay, being treated like a meat puppet by Amazon’s and Uber’s bossware algorithms, etc.), the harms went on for many years and affected hundreds of thousands of people. Society must decide which tradeoffs it is willing to accept between encouraging thriving businesses and preserving and supporting human dignity.

Another argument is that “the market” seems to help only organized labor (actors, screenwriters, teachers, nurses, and other unions and associations). There is no comparable union for visual concept artists, who mostly perform work-for-hire (meaning everything they create belongs to the entity paying them) and have no organized labor representatives. How will the market adjust for them? What effect might a single artist, or many disorganized individuals, have? The benefit of a union is that it can coordinate all its members to go on strike, but when people work as individuals, as most artists do, going on strike just means one person forgoes what little income they might have earned while another artist accepts the contract. The bargaining power is essentially nonexistent. It seems extremely unlikely that companies will voluntarily give artists protections similar to those of actors absent government intervention. A laissez-faire approach to AI’s impact on the workforce may be ill-advised.

Don’t Blame AI

AI can be the “most-good” thing ever invented. But it won’t be the best thing since the chisel unless thoughtful and forward-looking policies are implemented. That is, AI likely won’t be a boon to everyone all by itself; it isn’t inherently an unmitigated pro for humanity. Furthermore, there is no reason to believe the government will implement the necessary thoughtful policies. The US government still hasn’t passed meaningful laws regarding social media, which has been around for decades. It seems at least as likely that the government will yield decisions to capitalism in a libertarian fashion, as requested by the very companies that stand to benefit most from AI.

If we want a better world, we don’t only need more innovation and better technology; we need excellent governance and well-informed leaders with the conviction to steward thoughtful, nuanced policy through the halls of Congress and to the President’s desk, as well as a Supreme Court more interested in the well-being of all Americans than in the harms to a relative handful of corporations. We need those same individuals to also consider the well-being of people in other nations. And we likely need all of this to happen within the next 10 to 20 years. Feeling optimistic?

However advanced AI’s capabilities become, we should not blame the technology itself for harms. Decisions about how AI is developed and deployed come down to our (human) choices and actions. Rather than act as if AI is self-aware, we should be concerned with how our own actions shape the use of generative AI. It’s easy to blame politicians and their appointees, but it’s citizens who elect those politicians. The capability of our government is a reflection of the critical thinking of its voters.


[1] https://www.schumer.senate.gov/imo/media/doc/Daron%20Acemoglu%20-%20Statement.pdf

The following students from the University of Texas at Austin contributed to the editing and writing of the content of LEAI: Carter E. Moxley, Brian Villamar, Ananya Venkataramaiah, Parth Mehta, Lou Kahn, Vishal Rachpaudi, Chibudom Okereke, Isaac Lerma, Colton Clements, Catalina Mollai, Thaddeus Kvietok, Maria Carmona, Mikayla Francisco, Aaliyah Mcfarlin
