A mental model for thinking about AI

Or, thinking concretely about how AI changes your job

In an article that reads suspiciously like it was written by AI, a commonly cited statistic appears: by 2030, 300 million jobs will be replaced by AI. The number is based on a study by Goldman Sachs. A similar study by McKinsey estimates that some 400 million workers could be replaced “by automation” over the next seven years.

While McKinsey doesn’t reveal the full methodology behind the report because, well, it’s McKinsey, the estimate derives at least in part from their analysis of 2,000 work activities across 800 occupations and how automatable each one is.

About half the activities (actually somewhere between 30% and 60%, a range also known as the “consultancy buffer”) could be automated in some way. Only 5% of jobs could be fully automated, a number familiar to typesetters and weavers.

The Goldman Sachs study also analyzed “occupational tasks”–in their case, looking at 900 occupations–to determine which were automatable. Their actual finding was that advances in AI “could expose the equivalent of 300 million full-time jobs to automation” and that generative AI has the potential to add as much as 7% to global GDP.

Notice that we are talking about automating tasks, not jobs. This is a nuanced distinction but an important one.

Headline writers aren’t a nuanced bunch. Story after story conflates task automation with job loss. A delicious irony, then, to find that headline writing is one of those obviously automatable tasks. These sensationalist stories follow the ‘if it bleeds, it leads’ principle while ignoring AI’s potential to increase worker productivity and ADD as many as 890 million new jobs.

These studies tell us that AI is changing jobs, for example, by automating rote, repetitive tasks or those that require quick comparison and analysis of large datasets. AI’s capacity to automate tasks frees us to work on interesting, enigmatic problems.

During the hiring process, I ask candidates to describe a task they are good at but hate doing. That task is one you should ask AI to automate for you.
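
To make that concrete, here is a minimal sketch of handing one such rote task to a model: tallying failed logins from an exported log file and asking the model to summarize anything suspicious. The call_model helper, the file name, and the column names are assumptions, stand-ins for whichever model API and log format you actually work with.

```python
# Minimal sketch: automate a rote task (triaging failed logins from a log export).
# call_model is a hypothetical stand-in for your AI provider's API; the CSV
# columns (timestamp, user, event) are assumed for illustration.
import csv
from collections import Counter

def call_model(prompt: str) -> str:
    """Hypothetical wrapper around whichever model API you actually use."""
    raise NotImplementedError("Wire this up to your AI provider of choice.")

def summarize_failed_logins(log_path: str) -> str:
    failures = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):          # assumed columns: timestamp, user, event
            if row["event"] == "login_failed":
                failures[row["user"]] += 1

    top = failures.most_common(10)
    prompt = (
        "You are a security analyst. Summarize these failed-login counts, "
        f"flag anything suspicious, and suggest next steps: {top}"
    )
    return call_model(prompt)

# Usage: print(summarize_failed_logins("auth_events.csv"))
```

The point isn’t the ten lines of counting; it’s that your role shifts from eyeballing rows to judging whether the model’s summary and suggested next steps hold up.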

History supplies us with a wealth of examples of jobs changing to adapt to technological changes. We need a mental model to help us move beyond abstract speculation and into specific actions we can take.

Let’s take a closer look.

A mental model

I’ve been developing a mental model to help me think through AI’s very specific impact on industries and jobs.

The first part of the model identifies what happens to your job when AI shifts the problem your job solves. Your skill set matches the new problem well, but the tasks you perform change completely. I call this phenomenon the Farrier’s Conundrum. The purpose your job satisfies before and after the technological shift is the same: how to keep the transportation system flowing smoothly. How you fulfill that purpose–shoeing horses versus changing tires–is miles different. Some people don’t want to work on inanimate objects when their passion is caring for living, breathing animals.

The second part involves considering AI as a utility, like SaaS and cloud infrastructure. Developing utilities to create and deliver power, for example, shifted the responsibility from individual companies to centralized utility companies. People who worked on solving the power, SaaS, and cloud infrastructure problems at individual companies still had jobs because the centralized utilities needed their skill sets. Your job doesn’t go away, but where and for whom you perform it changes.

The third part examines how the core AI business model changes as the consumption of AI models matures. AI companies are currently building generic foundation models and don’t really concern themselves with exactly how those models are applied to industry-specific problems. A bank or pharmaceutical company is left to figure out the best way to use AI. This makes sense because those companies have domain expertise that AI companies lack, but it’s not always clear where to start. And since no one really knows how these models work–not even the AI companies–it’s extremely hard to know when the technology has been appropriately applied.

Finally, you must identify whether your job and the problems it was created to solve are in a primary market or the secondary market that serves it. Jobs in the primary market might change, and those changes might very well make the jobs in the secondary market disappear.

To start applying this model, allow me a moment of whimsy. Let’s borrow from a past technological shift: imagine you are a farrier in New York City at the turn of the 20th century.

The Farrier’s Conundrum


In 1905, John D. Hurley ran one of the most successful and well-regarded farrier businesses in Manhattan. Over the next ten years, as cars outpaced horses as the predominant mode of ground transportation in New York City, Hurley’s business steadily dwindled. In 1917, the last of the horsecar trolleys were taken out of service, and Hurley faced a grim reality: cars now dominated transportation.

The following year, Hurley succumbed and closed his business. There simply weren’t enough horses left to sustain a horseshoeing enterprise, and the farrier businesses that survived were mostly in rural areas where people kept horses for sport or pleasure. He was not alone. Such failure was a common fate for many farriers during this period as the need for horseshoes and related services plummeted.

In contrast, George T. Buchanan capitalized on the burgeoning automotive industry. In 1912, he opened Buchanan Auto Repair, one of the first auto repair shops in New York City. His shop performed general automotive repairs, but Buchanan seized on wheel and tire maintenance and distribution as a prime opportunity. His business thrived, and then in 1915, John D. Hertz started the Yellow Cab Company. The two formed a partnership, and by 1920, Buchanan had expanded to multiple locations as his business boomed.

Between 1905 and 1920, the number of farriers in New York City decreased by over 70%, with many either closing their businesses or transitioning to other forms of labor. Conversely, the number of auto repair shops surged, increasing by an estimated 300% over the same period. By 1920, there were approximately 2,500 automobile-related service shops, compared to fewer than 800 in 1905.

A prescient farrier in the late 1800s and early 1900s might have anticipated this technological shift and retooled his business from shoeing horses to “shoeing” cars. Imagine the cognitive load required to upend a successful hundred-year-old business, step out of your area of expertise, and learn a new trade.

Generative AI has placed a similar cognitive load on us.

If your job involves analyzing datasets sourced from log files, databases, or Excel spreadsheets–Security Operations Center (SOC) analysts, business analysts, or financial analysts–do you really believe that is the same job you’ll be doing two years (or even two months) from now? What about manually configuring complex SaaS applications and cloud platforms?

More likely, SOC analysts will spend time making sure that the AI models analyzing a company’s security posture haven’t been tampered with. There will be tools to learn, and those AI models will need guardrails to control their behavior and decisions.
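
What does “making sure the models haven’t been tampered with” look like in practice? One small, concrete guardrail is verifying model artifacts against a known-good checksum manifest before anything loads them. This is only a sketch under assumptions: the manifest format and file names below are invented for illustration.

```python
# Minimal sketch of one guardrail a SOC analyst might run before a model loads:
# verify each model artifact against a known-good SHA-256 manifest.
# The manifest format and file paths are assumptions for illustration.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest()

def find_tampered_artifacts(manifest_path: str) -> list[str]:
    """Return the artifacts whose hashes no longer match the manifest."""
    manifest = json.loads(Path(manifest_path).read_text())  # e.g. {"model.bin": "<sha256>", ...}
    return [
        name for name, expected in manifest.items()
        if sha256_of(Path(name)) != expected
    ]

# Usage:
# tampered = find_tampered_artifacts("model_manifest.json")
# if tampered:
#     raise RuntimeError(f"Refusing to load tampered artifacts: {tampered}")
```

Checksums are only one layer, of course; the larger point is that verifying and constraining the models becomes part of the analyst’s job rather than staring at the logs the models now read.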

The Utility Effect

Foundational generative AI models are utilities, meaning they deliver fundamental capabilities–in this case, algorithms–and you use those algorithms to solve specific problems.

Could you build the algorithms, source the data, and create the infrastructure required to train the model? Of course, you could...
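
In the utility model, though, you consume that capability over an API much as you draw electricity from the grid. Here is a minimal sketch of the consumption side; the endpoint URL, payload shape, and response field are assumptions, so substitute your provider’s documented API.

```python
# Minimal sketch of consuming a foundation model as a utility over HTTP,
# rather than building and training one yourself. The endpoint, payload,
# and response field are assumptions; use your provider's documented API.
import json
import os
import urllib.request

def ask_model(prompt: str) -> str:
    request = urllib.request.Request(
        url="https://api.example-model-provider.com/v1/generate",  # hypothetical endpoint
        data=json.dumps({"prompt": prompt, "max_tokens": 200}).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + os.environ["MODEL_API_KEY"],
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    return body.get("text", "")

# Usage: print(ask_model("Summarize last quarter's incident reports."))
```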

Read the rest of this article on Substack.

