Will generative AI help or hurt jobs?

As a cyborg anthropologist, I have seen this 'point/counterpoint' play out repeatedly. In the past, it was mostly theoretical. Now it's quite real. The question is not whether generative AI like ChatGPT will affect jobs and productivity; it's for whom, and how equitably.

Of course, jobs will be affected: many will go away, and yes, some others will be created. At this precise moment in history, I don't see much indication that companies are investing in upskilling or cross-skilling the workers they will make redundant, and unemployment programs designed for short gaps are not enough to truly support people in shifting careers.

Why? We have a new workforce entering the marketplace: machine coworkers. I wrote a piece on this not long ago, and it already needs an update; the advent of generative AI has accelerated the conversation.

I often think of AI as the shift from digital tools to machine coworkers. It's not enough to think of machines as something we turn on and off when convenient; they now work alongside us in nearly every kind of knowledge work.

The almost-certain, predictable future in this kind of scenario is the continued concentration of value and wealth in the hands of very few unless we commit to structural changes in how we view human workers and how much work we expect of real humans. Because this kind of 'intelligence' being released is both unprecedented and happening seemingly overnight, I'm basing this assertion not on a meta-analysis of historic tech advances but on awareness of our capitalist structure: the majority of corporations are required to pursue profits for shareholders, and many only measure that potential return on investment in the near term. Short-termism is antithetical to long-term investment in career growth unless the company has convinced shareholders to play a long game, is privately held, or has bylaws (like a B Corporation) which incentivize it to take a longer view.

Net-new productivity vs. human replacement

On the one hand, generative AI could help entry-level workers get promoted into more senior knowledge worker roles (e.g., making a person with good ideas but so-so writing skills a much better writer), level the playing field for small businesses (e.g., getting better at marketing), and revolutionize information access (e.g., asking for what you want vs. searching through tons of only loosely relevant pages). I personally would love to see this future, because members of my LGBTQ community, people of color, people with disabilities, women in leadership roles, and many other marginalized groups can use any and all help 'privilege-hacking' their way through a difficult system with the odds stacked against them.

On the other hand, knowledge workers will probably lose a lot of jobs, and it's unclear how many new ones will emerge in time to avoid economic ruin for vast numbers of people, especially those who are already marginalized. Robotics came to auto manufacturing over decades. But generative AI is coming for both rote and creative knowledge jobs in what seems like a matter of days: a skilled knowledge worker like an educator or author can now easily produce ten times as much content (or more) as they could before generative AI entered the mainstream over the past few months.

Sometimes those productivity gains go toward work that simply wasn't happening before, like a teacher's newfound capacity to map a summary of today's news stories back to a core curriculum, or research pieces on niche communities that weren't profitable enough to merit a publisher's editing budget.

However, when that machine productivity is applied inside an existing organization to work people were already doing and being valued for, like copyediting, research assistance, or documentation, will that company re-invest the savings into re-employing those people in new roles?

Will the job get improved, shifted, or just cut?

A few years ago, AI transcription tools got good enough that I no longer had to set aside funds to pay someone to take notes, especially live notes. It was a huge boost in the quality of (and follow-up on) our internal and client meetings. It freed us to reallocate that person's time while allowing us to capture notes on nearly every call and stay focused and present. The young woman doing that work went on to more fulfilling creative work within our company, in design and web development.

Yet in a corporate context, the expenditure for a $10 seat license of a transcription tool will motivate someone, somewhere, to simply cut the job of the person who was partly serving as a transcriptionist, without pausing to talk to them, find out what else they were doing, and re-train and re-assign them. A lack of AI strategy and vision will tank employee trust and morale and devastate freelancers and the small vendors that serve big firms. Publicly traded companies are especially bad at this kind of transition because there aren't a lot of benefits to show in the short term for re-educating someone. If they do keep that person, they will often backfill the newly freed schedule with complex new tasks, now that so much rote work is gone, instead of recalibrating to research-backed approaches of working fewer, better-focused hours.

In the near term, I anticipate unemployment insurance premiums will go up, the period of unemployment insurance will extend via emergency bills, and a lot of hand-wringing and deer-in-the-headlights moments will occur in upcoming elections when citizens ask legislators about disappearing middle-class jobs.

Over the next three to ten years, I expect we will see more focus on creatively reworded social programs designed to acknowledge the reality that power has concentrated in AI-savvy players. MIT Technology Review just released a great article on this, and it reflects the haziness of our view of the future.

OpenAI, the maker of ChatGPT, has been quite open about these changes, publishing a paper titled "GPTs are GPTs" (meaning generative pre-trained transformers, the underlying tech of ChatGPT, are what economists call general purpose technologies: a class of technology, like personal computers or the internet, so broadly applicable that it can affect entire economies).


Raising Digital (and AI) Fluency

Legislators, investors, and everyday people all need to understand what AI is, and also what is 'upstream' from it (data) and downstream from it (job impacts, economic implications, changes in the nature of work, how 'truth' is perceived, and any number of existential questions). I think of this as Digital Fluency—a deeper understanding of the thinking, skills, tools, data, and business models which make up the digital transformation companies have been working towards for a while.

AI fluency is a specific subset of that Digital Fluency. AI fluency requires a certain baseline level of Digital Fluency (like understanding how data moves from system to system). AI-fluent people understand topics like:

  • How an algorithm works (not necessarily mathematically, but how to break down a problem into simpler steps for a machine to understand)
  • What analytical AI is (like machine learning applied to data from the past to tease out helpful information; see the sketch after this list)
  • What generative AI is (like ChatGPT or DALL-E), and how it uses models of our language—however imperfect—and existing source data to synthesize responses to prompts
  • Strong general communication skills (for 'prompt engineering' to get the results we want from generative AI)—this is akin to people today who are very good at getting the best results from a search engine by forming their queries well
  • The impact of machine-made decisions on real people (the impact of credit reports and scores, for example)
  • The importance of sound source data (both for solid results but also for purposes of attending to bias)
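
To make the 'analytical AI' item above concrete, here is a minimal sketch in Python using scikit-learn (the sales figures are invented purely for illustration): machine learning applied to past data to tease out a trend.

```python
# Analytical AI in miniature: learn a trend from historical data, then apply it
# to the future. The numbers below are made up for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

# Past data: month number (feature) and units sold (outcome)
months = np.array([[1], [2], [3], [4], [5], [6]])
units_sold = np.array([120, 135, 150, 160, 178, 190])

model = LinearRegression()
model.fit(months, units_sold)            # learn the relationship from the past
forecast = model.predict([[7], [8]])     # use it to project the near future

print("Projected units for months 7 and 8:", forecast.round(0))
```

Generative AI differs in that, rather than projecting a number from past data, it synthesizes new text or images in response to a prompt.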

I've written a lot about Digital Fluency elsewhere, like the Field Guide to Digital Fluency.

Preparing organizations for AI

Organizations need to attend to several levels of change. At the individual level, they must raise their team members' Digital (and AI) Fluency. This is for a couple of reasons.

Morale: prepare and invest in people

If people struggle with stress and anxiety about their jobs, their performance suffers. Doctor's office chain One Medical surveyed a broad group, and 45% of employees struggling with mental health reported losing at least 5 hours of productivity a week.

To do right by employees, organizations need to prepare and invest in them. This can be through traditional learning programs, mental health support, or other known strategies. But it can also mean de-mystifying tech through practice and experimentation. One company we were working with on Digital Fluency created a program for people to build their own 'bots' in Microsoft Office's built-in tools. It made the tools less scary, helped employees see a benefit for themselves, and gave them plenty of opportunities to explore advanced tech and pose candid questions to leaders (including 'dumb' ones that, it turned out, execs had too).

Strategy: every company needs an AI strategy

I've heard many people, such as VC Sarah Guo, use the buzz about ChatGPT to get leaders to start engaging seriously with AI strategy. A good AI strategy will include at least:

  • The shifts in thinking a company needs to understand AI and its impact on their industry
  • Rapid re-evaluation of their business model and business model environment, especially as they relate to market and supplier forces
  • Changes to the value models they will use to serve users, customers, and other stakeholders
  • Technology and data upgrades needed (including candid discussion of 'technology debt' from under-maintained or out-of-date systems)
  • Use case identification for incremental (optimization) needs and exponential (net-new value) opportunities
  • Prioritization of technology strategies like 'digital factories' and DevOps to prepare for an explosion of new use cases
  • First- and second-order impacts of AI, identifying what goes away, what happens first, and what that could create as a secondary impact (for example, a lot of grammar-checking tasks basically go away, which frees up new capacity for writing, and then there is an explosion of 'good-enough' content that might disrupt publishers and journalists)

(Guo has shared her deck and key takeaways publicly.)


Runway: individuals and teams need time to adapt

In some industries, market forces are causing budget cuts, hiring freezes, and layoffs. Anything leaders can do to create a runway to adapt will help morale and lessen the likelihood of even-more-disruptive layoffs in the future.

Professionals can attempt to buy themselves time through common-sense budgeting and by connecting as many of their projects as possible (where realistic) to AI components and strategies, training themselves while continuing to do their everyday work.

For example, a marketing professional could allocate a bit of their time in an upcoming campaign to practice using AI tools like Writer.com or Jasper so that they improve their prompt engineering; or a learning & development professional could expand curriculum ideas with a (well-cited) discussion with ChatGPT. If a company has Microsoft Office 365, they likely have access to at least some AI tools through Power Automate.
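
To make the 'prompt engineering' practice concrete, here is a hedged sketch (the helper function and the prompt wording are hypothetical, not tied to any particular vendor's API) of moving from a vague request to a structured one:

```python
# Hypothetical helper for practicing prompt engineering: the structured prompt
# gives the model a role, context, and constraints instead of a vague ask.

def build_campaign_prompt(product: str, audience: str, tone: str, word_limit: int) -> str:
    """Assemble a structured marketing prompt with role, context, and constraints."""
    return (
        "You are a marketing copywriter.\n"
        f"Product: {product}\n"
        f"Audience: {audience}\n"
        f"Tone: {tone}\n"
        f"Task: write one email subject line and a body of at most {word_limit} words."
    )

vague_prompt = "Write marketing copy for our new app."

structured_prompt = build_campaign_prompt(
    product="a budgeting app for freelancers",
    audience="self-employed designers and writers",
    tone="friendly and practical",
    word_limit=120,
)

print(structured_prompt)  # paste into whichever generative AI tool you're practicing with
```

The point is less the specific wording than the habit: naming the role, the audience, and the constraints tends to get far better results than a one-line ask, just as well-formed queries get better results from a search engine.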

Teams need to start exploring what the impact on their collaboration, functional purpose, and budgets could be, both positive and negative. (I'll expand on this in future articles.)

Now what?

A quick recap:

  • AI has already changed many areas of our lives.
  • Generative AI is a new class of AI that will affect work that previously could only have been done by humans.
  • It's not entirely clear how big an impact generative AI will have in the short term, but it's a near-certain change in the long term due to its broad applicability.
  • Individuals should raise their Digital Fluency and AI Fluency to stay competitive (or find more exciting roles with their increased powers).
  • Companies must set clear AI strategies, help their employees adapt, and buy time for themselves and their teams.

In the coming weeks, I plan to release a series of pieces on these themes, along the lines of the webinars we have hosted recently, and covering many new topics as well. Stay tuned—I'll try to balance optimism with pragmatic next steps as we all adapt!

I'm a cyborg anthropologist whose practice is making exponential technologies more understandable and applicable, and whose purpose is to help humans be more meaningfully connected and self-expressed through tech, rather than less.

I work through Causeit, Inc. to deliver Digital Fluency and AI Fluency programs, get hired for keynotes and live sessions with change-making organizations, and publish the Causeit Field Guide to Digital Fluency. I sit on the Accenture Technology Vision Board, am on the faculty at Singularity University, and have a part-time unpaid job exploring all the food carts on the West Coast.

Watch our webinar on the topic from March 30, 2023.

Follow MJ Petroni and Causeit, Inc. on LinkedIn to get insights, Words of the Day, sightings from the field and free AI & Digital Fluency event access.
