The Last Decade of The Interface Worker

If you are reading this, you’re likely a knowledge worker. You work from an office, from home, or perhaps a blend of both. You spend most of your time at a computer, often online. Maybe you write content or software, review materials, make decisions, conduct research, or spend much of your time meeting people, trying to understand their problems and helping them find solutions. There are many ways to be a knowledge worker and just as many ways to generate value in this role. This is going to change.

In this essay, I argue that a fundamental wave of disruption is approaching knowledge work. This wave, which is currently unfolding before our eyes, is driven by breakthroughs in AI and generative AI. I can’t predict the future, but I am convinced that we are witnessing one of the most significant transitions in the history of human civilization. The world of our children will be very different from ours. Even if we don’t achieve artificial general intelligence anytime soon, I believe that in 5, 10, or maybe 20 years, we will witness major societal shifts, comparable to those following the introduction of the alphabet, electricity, or general education. I am excited but also slightly scared.

Predicting the future is impossible, but we can and should think about the present. My goal here is to present a simple mental model to dissect these complexities and help you think about both the details and the bigger picture. I use this model myself to make decisions and navigate these turbulent times. Perhaps it will be useful for you as well.

The Coming Wave

At the beginning of 2023, one of my New Year's resolutions was to earn the AWS Solutions Architect certification. It seemed like a challenging but relevant way to refresh fundamental knowledge that I hadn't practiced hands-on for years. I spent many hours preparing for the exam: taking online courses, reviewing documentation, and attempting sample tests. Around this time, I began using GPT extensively, so it was natural to seek guidance, even coaching, from ChatGPT. And what coaching it was! GPT helped me understand concepts more deeply, connect the dots more quickly, and effortlessly map complicated business problems onto technical concepts. I started using it to answer questions from sample exams and even prompted it to create new questions that tested my knowledge and readiness. GPT almost never failed me. Every time I asked, it answered.

And that’s when I began to question the point of getting this certification. What did it really prove? That I could answer some questions and solve problems when asked? But GPT could do that too – faster, better, and available 24/7. What I was certifying for now seemed like an obsolete job, and suddenly, paying $150 for the exam felt utterly pointless.

When I finally decided not to take the exam, it struck me as something profound, a deep insight into the future we are heading towards. I didn’t fully grasp it then, but this realization greatly influenced many of my decisions thereafter.

I’m writing this in December 2023. It’s been only a year since ChatGPT was introduced, yet its impact has been monumental. What seemed impossible – creating beautiful images, automating publishing, or simply talking to a computer – has become routine. It is possible that foundation models like GPT or Claude will become as commoditized and invisible as transistors or operating systems, embedded into our society. The impact of this is hard to overstate.

It’s evident that a lot of cognitive work is going to be automated. This won’t be a walk in the park, and few people seem to realize this. Even the term “automation” doesn’t quite do justice here; it implies something mechanical, laborious, and repetitive – tasks we’d happily outsource to robots. But the automation enabled by AI and large language models goes far beyond this.

For certain tasks, AI can now outperform humans, whether they are writers, illustrators, developers, or content moderators. And it’s only getting better. The range of tasks where AI outperforms humans will continue to expand. These aren’t mundane or boring jobs; many people find meaning and identity in them. Are we the last generation of knowledge workers who will actually get paid for their labor? Should we start working on a global UBI program to hedge our bets? Perhaps. Or perhaps there’s a more nuanced way to think about it.

Talking Machines

The Turing Test is one of the simplest yet most fundamental concepts in computer science. It goes like this: imagine you’re having a text-based chat with someone. You can’t see or hear them, only read their responses. You’re free to ask anything and continue the conversation as long as you want. How would you know if you’re talking to a human? What questions would you ask? Are there specific questions, or a series of queries, that would be easy for humans but would stump a computer? If, after a lengthy chat, you can’t decide whether you’re conversing with a person or a machine, then the program is considered to have passed the Turing Test, or the imitation game, as Alan Turing described it in 1950.
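
At its core, the imitation game is a simple protocol, and it can be sketched in a few lines of Python. This is purely my own illustrative toy, not anything from Turing's paper: the function names, the coin-flip slot assignment, and the passing criterion are all assumptions made for the sketch.

```python
import random

def imitation_game(judge, human, machine, n_questions=5):
    """Toy sketch of Turing's imitation game.

    The judge chats with two hidden respondents (one human, one machine),
    then guesses which slot holds the human. Returns True if the machine
    'passes', i.e. the judge mistakes the machine for the human.
    """
    # Hide the respondents behind anonymous slots A and B.
    slots = {"A": human, "B": machine}
    if random.random() < 0.5:
        slots = {"A": machine, "B": human}

    # The judge may ask anything; here the questions are just numbered.
    transcript = [
        (q, slots["A"](q), slots["B"](q))
        for q in (f"question {i}" for i in range(n_questions))
    ]

    guess = judge(transcript)  # "A" or "B": the slot the judge believes is human
    return slots[guess] is machine
```

A judge who cannot tell the transcripts apart can only guess, so the machine would pass about half the time, which captures the "no better than chance" spirit of the test.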

We’ve seen seven decades of failures. Even famous programs like ELIZA, cute attempts as they were, revealed more about human gullibility than about the capabilities of computers. Seven decades, and then, quite suddenly, in the early 2020s, computers started talking. We had to introduce guardrails to prevent humans from mistaking them for other humans. Without these safety measures, LLMs would pass a Turing test against an average person fairly easily. With the safeguards in place, it’s somewhat amusing to watch them struggle.

Large Language Models have essentially solved language as a problem. They understand humans and can imitate human conversation at an almost superhuman level. They process what we say, engage with us thoughtfully and empathetically, and even help us to solve important problems. They translate concepts, styles, or ideas, assisting us in connecting the dots. This is both remarkable and quite unbelievable.

Modern LLMs have learned about our world and culture by reading virtually everything we’ve ever written. They can be thought of as enormous zip files that have compressed all our written knowledge into a neural network. The text they’ve absorbed and understand is a projection of our world, and some, like Ilya Sutskever, claim that these models can reconstruct our world from that projection.

These two capabilities – the ability to understand and speak, combined with effortless access to our knowledge and culture – have profound implications for knowledge work. I argue that many knowledge workers actually serve as translators, human interfaces for our vast, interconnected brain of facts, skills, and insights. If machines can now understand our questions and access the same knowledge within seconds, these human translators are no longer needed.

Translators and Interfaces

Think of the last time you visited a doctor. Perhaps you were feeling unwell and needed advice, a prescription, or simply reassurance. Your consultation might have lasted 15 minutes or less, during which you answered questions and received a diagnosis. Maybe you even underwent some tests, but the technician refused to interpret them for you, deferring to the doctor’s verdict.

You can’t know everything, so you consult a specialist. Someone who has the knowledge and skills to translate your problem into a solution that humanity has discovered. This has given rise to a whole class of knowledge workers. They work as translators, bridging the gap between what you present to them – be it physiological symptoms, a hardware issue, or a complex legal case – and a potential solution grounded in our cultural knowledge and expertise. Often, they perform this task mechanically or routinely. They act as an interface between your problem, expressed in natural language or other simple means, and a solution articulated in domain-specific expertise and jargon. I refer to them as interface workers.

Becoming a proficient interface worker in some fields requires significant time and effort. That’s why we compensate them well. Think of a doctor, lawyer, or highly qualified accountant. In fact, much of our education system is predicated on producing experts certified to translate specific knowledge domains. Others, like content moderators, call center operators, or customer service representatives, are less costly. Learning to follow guidelines and apply them diligently doesn’t require extensive training. But when machines take over the role of translator, these jobs, despite their differences, start to resemble each other. To a large language model, they are just an unimaginable number of simple mathematical operations performed on massive computing clusters.

In a certain way, a doctor, lawyer, or accountant is like a modern elevator operator. Being an effective operator required a range of skills. Manual elevators were often controlled by a large lever, necessitating a keen sense of timing to consistently align the elevator with each floor. But elevator operators are no longer needed because we figured out better, cheaper and safer ways to handle this.

There are many interface workers because translating problems into solutions is a crucial task in our complex world. Doctors, lawyers, accountants, architects, consultants, content creators, software engineers, graphic designers, ghostwriters, customer service representatives – to name a few. Of course, reality is more nuanced. Many jobs also involve creativity, relationships, storytelling or empathy. But translation is often a significant part, and if it is the majority of what you do, you are probably an interface worker.

I’m oversimplifying to make a point. Machines aren’t perfect yet. They are often unreliable, hallucinate, or struggle with seemingly simple tasks. But they are improving, and our reliance on them as translators will only grow.

About a century ago and earlier, the term ‘computer’ actually referred to people who performed calculations. Now it feels archaic and strange to think of a computer as anything other than a device that assists us in various tasks. My bet is that this will be the fate of many modern interface jobs, even those now held in high esteem. In the future, perhaps quite a close future, we might not need interface workers because we won’t need human translators. We will have assistants – instant, accessible, and affordable – who will perform these translations for us. We might still call them “doctors,” “lawyers,” or “AWS architects,” but they won’t be human like us. How long will it take? And how will we get there?

The Path to The Future

I have no idea what the future will look like but I am certain that large language models are a very important part of it, and interface workers – human interface workers – are not. The mental model described above guides me in making decisions today and navigating the complex situation we find ourselves in.

To conclude this piece, I want to share two observations that I find particularly relevant.

First, there is the lump of labor fallacy: the mistaken belief that there is a fixed amount of work available in the economy, and that increasing the number of workers (or introducing automation) reduces the amount of work available for everyone else, and vice versa. The world doesn’t operate this way, but delving into the reasons is beyond the scope of this already lengthy article; if you’re curious, I recommend a great essay by Ben Evans. Generally, the lump of labor fallacy is good news for us – history has shown that gains in productivity typically lead to increased wealth and a better world for everyone, including more jobs. Would you rather live in 1823 or 2023? Yeah, me too.

My second observation is more intuitive, and a hopeful one. I believe that the automation of interface work might actually make us more connected and create space for things we should value more. After all, intelligence is just one aspect of being human, and maybe not the most important one. We are all unique, each with our own story, and we’re here together for a brief moment, living our lives unprepared. I hope (hope!) we are entering a new renaissance of empathy and compassion, where we value stories, connections, and relationships as much as, if not more than, hard skills like problem-solving or intelligence. I don’t know if it’s possible to build the whole world economy around these values, but it certainly sounds like a project worth trying. Superintelligent assistants seem like a part of this project, just as much as ordinary humans are.


All images created with DALL·E and Midjourney.


Paweł Polewicz

Backend Developers LTD

1y

When my father studied at Warsaw University of Technology, calculators were strictly forbidden during exams... When I studied there 25 years later, having a (simple) calculator was mandatory - our professor didn't want us to waste exam time or paper on performing addition or multiplication by hand. Today LLMs are forbidden on exams... Imagine a world where every 7-year-old can outsmart any human using a piece of plastic he carries in his pocket; add voice recognition, and a 5-year-old with a smartwatch could answer any question from "Who Wants to Be a Millionaire". A few years back I watched a sci-fi TV series in which the invention of a matter synthesizer reshaped society, and "reputation" became the currency people strived for. I don't know what's going to happen, but if you could feed your test results into an FDA-approved LLM for $0.01 or ask a human doctor to interpret them for you, would you pay a ~3-million-percent premium for that human interaction, knowing it's 2, 5, 10, 100 times more likely to make a mistake? I'm not sure I should encourage my children to spend half their childhood studying things LLMs already outperform them at. There might be a better way.

I admire your optimism. I don’t share it, at least in the medium term. For now, I see a race where there’s no human development in the equation. Rising inequality leads to rising social tensions. And those to world instability and…

LLM output doesn't fit at all with what I consider "automation", and you repeat that word a lot. LLM output is most often of superficial quality: when you know nothing about the topic you asked about, the output seems great! But when you don't take it at face value, holes appear pretty quickly, and what is even worse, they are indistinguishable from the truth. Like any great liar, LLMs mix in truths and falsehoods. This is the opposite of automation in my mind, which is based on reliability - whether it's a physical factory machine or a software CI process. In neither context does anyone call something that blows up so often "automation". I guess the only context where this is OK is where you know humans will actively verify whatever these magical black boxes spit out, e.g. automatically flagging a post as offensive on a social network, where whoever gets flagged is very likely to contest it if they feel it was unjustified. Disclaimer: I also use LLMs for everyday work (ChatGPT, GitHub Copilot).

Maciej Cielecki

Executive Chairman & Chief AI Officer @ 10Clouds

1y

There is also another possible path: people may still want interface workers because they want to interact with people who interact with AI. And due to the lump of labor argument you mentioned, it might be that interface workers who use AI will not lose their jobs. What do you think?
