AI Won't Replace You, But a Philosopher Will
[Image: Socrates stares blankly at a computer screen.]


Getting "Left Behind"

The narrative that AI is coming for your job has now been replaced by the mantra that AI won't replace you, but a person using AI will. The main difference between these two rhetorical approaches is how they mobilize fear. The first is fatalistic about the prospect that everyone (but especially you) is going to be replaced by AI. Tough. The second uses fear as a motivator, urging people to become one of the few survivors of the coming AI apocalypse.

"Don't get left behind!" is certainly an effective marketing slogan if you're one of the new "AI influencers" on social media. But the truth is: You already are a person using AI. If you've ever used Alexa or Siri or "Hey, Google," you're a person using AI. If you have spam filters on your email, you're a person using AI. If you've ever had a fraud alert put on your credit card, you're a person using AI. If you've ever used Netflix, or Spotify, or any sort of social media, you're a person using AI.

Here is a video of a user struggling with an AI assistant until a "prompt engineer" steps in to help.

What's compelling about this example is that the toddler already knows what she wants the AI to do, and she also knows that the AI is capable of performing this action—she just can't quite get the AI to actually do it. While Mom eventually steps in to get Alexa to play "Baby Shark," it's important to recognize that the child wasn't doing anything wrong. She was using the exact command the AI required in order to play the right song. Only, the AI's voice recognition model couldn't understand her, probably because it wasn't trained on young voices like hers.



What I Want vs. What I Get

As with any technology, there is an element of technical skill or proficiency required to actually get AI to give you what you want. But the rhetoric around getting "left behind" assumes that the gap between knowing and getting what you want from AI ought to be filled by users, not tech companies. This is the central fiction behind the concept of "prompt engineering," which presupposes that interacting with new Large Language Models like ChatGPT and Claude 2 is so unintuitive that it should be treated like a foreign language.

But you shouldn't have to learn an entirely new language to use any technology. As many AI experts have already pointed out, prompt engineering is not a durable skill everyone needs to learn; rather, it's a temporary state of affairs in which humans compensate for AI's shortcomings in interpreting natural language. In the "Baby Shark" case, the child's mom serves as the "prompt engineer" by enunciating in a way the AI could understand. This doesn't mark any special achievement or skill; it mostly exposes the AI's inadequate training.

Sure, there are some fun edge cases that prompt engineers continue to tinker with: for example, enhancing LLMs' ability to solve multistep logic puzzles with Chain-of-Thought reasoning, or eliciting more accurate citations by adding "according to" phrasing. But this sort of specialized experimentation isn't a skill everyone needs to learn for fear of getting "left behind" or "replaced" by AI.
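To make those two tricks concrete, here is a minimal sketch in Python. The `ask_llm` helper is a placeholder for whichever chat-completion client you happen to use (it is not a real library call), and the puzzle and source are invented for illustration; the point is only how the prompts are phrased.

```python
# Minimal sketch of two prompting tricks mentioned above.
# `ask_llm` is a stand-in for whatever LLM client you use; it is not a real API.

def ask_llm(prompt: str) -> str:
    """Send `prompt` to your model of choice and return its text reply."""
    raise NotImplementedError("Wire this up to your own chat-completion client.")

puzzle = (
    "Alice is older than Bob. Bob is older than Carol. "
    "Carol is older than Dana. Who is second-oldest?"
)

# 1. Chain-of-Thought: ask the model to reason step by step before answering,
#    which tends to help on multistep logic puzzles.
cot_prompt = f"{puzzle}\n\nThink through the problem step by step, then state your final answer."

# 2. "According to" grounding: for factual questions, naming a source nudges
#    the model toward quoting recallable text instead of inventing citations.
grounded_prompt = (
    "When did the Berlin Wall fall? "
    "Respond according to the Wikipedia article on the Berlin Wall."
)

# answer = ask_llm(cot_prompt)
```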



Leaving ChatGPT Behind

The cultural obsession with prompt engineering is not only a byproduct of still-developing language models. It's also a consequence of our temporary overreliance on direct interaction with those models. Most developers have not yet had time to make sophisticated use of APIs to create specialized applications powered by these models, so for now a great deal of AI use still happens in OpenAI's chat interface.

The OpenAI Playground, for instance, was originally intended as a place for developers to test their prompts before deployment. While the ChatGPT interface may be in vogue for now, it will increasingly be displaced by third-party applications designed for specific use cases. The companies building these applications will consult with prompt engineers behind the scenes, so the majority of AI users will do far less direct prompting themselves. Most of the knowledge you gained in your expensive prompt engineering training will fall by the wayside. Soon ChatGPT will get left behind, not you.
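As a rough illustration (not any real product's code), here is how such a specialized application might bury the prompt engineering inside an API call so the end user never writes a prompt at all. The template, field names, and `llm_call` parameter are all hypothetical.

```python
# Hypothetical third-party app built on an LLM API: the prompt template is
# written once by the developer (perhaps with a prompt engineer's help), and
# end users only supply plain-language inputs through the app's interface.

LESSON_PLAN_TEMPLATE = """You are an experienced {grade_level} teacher.
Write a 45-minute lesson plan on "{topic}".
Include learning objectives, a warm-up activity, direct instruction,
guided practice, and a short formative assessment."""

def generate_lesson_plan(topic: str, grade_level: str, llm_call) -> str:
    """Fill in the template and pass it to `llm_call`, a placeholder for
    whatever API client the application actually uses."""
    prompt = LESSON_PLAN_TEMPLATE.format(topic=topic, grade_level=grade_level)
    return llm_call(prompt)

# The end user just picks a topic and grade level; the prompt stays invisible.
# plan = generate_lesson_plan("photosynthesis", "7th-grade science", llm_call=my_llm_client)
```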


Users will continue to migrate towards specialized applications that access LLMs via APIs because actually getting the AI to do what you want (via prompt engineering) is not the biggest barrier to taking advantage of what LLMs have to offer. Especially as AI models continue to improve, users will be stumped much more by the following questions than by their ability to "speak the language" of the model:

  1. What can I use AI for, and which tool is best suited to which use case? What are the major limitations of each tool, and what sorts of errors should I be looking out for?
  2. What do I want the AI to produce? What would a model of the ideal response look like? What features, elements, or components would it have?

Put simply, the hardest part of using LLMs isn't getting what I want, it's knowing what I want. In the "Baby Shark" video, the child already knew the answer to these two essential questions. By comparison, the mom's contribution was trivial.




Thinking Like a Philosopher

Philosophers have never stolen anyone's job, and they're not going to steal yours. But philosophers are trained to articulate problems using precise, fine-grained methods that are critical to using LLMs well. Of paramount importance is the ability to describe exactly what an "ideal" or "good" output would look like. In fact, replacing even the best prompt with an example output and telling the model to "do another one like this" often works as well as, if not better than, the fully fleshed-out prompt without an example. This is the power of single-shot and multi-shot prompting, and it also underscores the potential for third-party applications to use models fine-tuned to their specific use case.
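A quick sketch of what that looks like in practice, using an invented flashcard example; the exact wording doesn't matter, only that the example output does most of the specification work.

```python
# Single-shot prompting: show the model one worked example of the output you
# want, then ask for "another one like this." Multi-shot just adds more examples.

example_flashcard = (
    "Q: What does 'photosynthesis' mean?\n"
    "A: The process by which plants convert sunlight, water, and carbon dioxide "
    "into glucose and oxygen."
)

single_shot_prompt = (
    "Here is a flashcard in exactly the format I want:\n\n"
    f"{example_flashcard}\n\n"
    "Now write another flashcard, in the same format, for the term 'mitosis'."
)

# With two or three examples instead of one (multi-shot), the format, length,
# and tone get pinned down even more tightly than a long written description could.
```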

In an article for Harvard Business Review, Oguz A. Acar writes that "without a well-formulated problem, even the most sophisticated prompts will fall short." He argues that problem formulation, as opposed to prompt engineering, will serve as a "more enduring and adaptable skill that will keep enabling us to harness the potential of generative AI." He identifies four key components of effective problem formulation, all of which require sharp analytical skills:

  1. Problem Diagnosis, or the ability to define the primary objective you want the AI to accomplish.
  2. Problem Decomposition, or the ability to break the primary objective down into smaller components or subtasks.
  3. Problem Reframing, or the ability to view the primary objective through multiple lenses and tackle it from multiple angles.
  4. Problem Constraint Design, or the ability to delineate the boundaries and scope of a problem.

All of these steps are intensely demanding and, importantly, can only be undertaken well by an expert in a given field. If I want an LLM to help me with legal advice, getting an experienced lawyer to help diagnose, decompose, reframe, and scope the problem is going to be much more impactful than getting a prompt engineer to tell me how to format my request.
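To see how the four components might translate into an actual request, here is a hedged sketch built around the legal example above. The clause, the jurisdictional subtasks, and the constraints are all invented for illustration; in practice a lawyer, not a prompt engineer, would supply this content, and `ask_llm` remains a placeholder client.

```python
# Sketch: Acar's four problem-formulation steps expressed as a structured prompt.
# All legal content here is invented; a domain expert would supply it in practice.

problem = {
    "diagnosis": "Assess whether the non-compete clause in this freelance contract is enforceable.",
    "decomposition": [
        "Identify the governing jurisdiction named in the contract.",
        "Summarize that jurisdiction's standard for reasonable scope and duration.",
        "Compare the clause's scope and duration against that standard.",
    ],
    "reframing": "Also consider the employer's strongest counterargument.",
    "constraints": "Cite only authorities you can actually name, and flag uncertainty rather than guessing.",
}

subtasks = "\n".join(f"- {step}" for step in problem["decomposition"])
prompt = (
    f"Objective: {problem['diagnosis']}\n\n"
    f"Work through these subtasks in order:\n{subtasks}\n\n"
    f"Alternative framing: {problem['reframing']}\n"
    f"Constraints: {problem['constraints']}"
)

# analysis = ask_llm(prompt)  # placeholder call, as in the earlier sketches
```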

In early experiments with AI essay writing, many educators noted that these models were quite capable of generating C papers but struggled to generate A papers. Over time, many have realized that you get C-quality papers when you input C-quality prompts. To really take advantage of the depth of sophistication an LLM has to offer, you have to be able to describe your desired output in detail, not just ask for a three-page essay on some topic. How should it start? What key words should it use? How should it integrate examples and evidence? How should it situate the purpose of the essay? Whose voice should it write in? What topics or examples should it avoid? What is my thesis statement or primary argument? Answering these questions is what turns a vague request into a specification the model can actually deliver on.
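As a purely illustrative contrast (the course, word count, and thesis are invented), compare a "C-quality" prompt with one that answers the questions above before the model ever sees it:

```python
# A "C-quality" prompt versus a fully specified one. Everything here is
# invented for illustration; the point is the difference in specificity.

c_prompt = "Write a 3-page essay on the causes of World War I."

a_prompt = """Write a roughly 900-word argumentative essay for an undergraduate
history seminar, in a formal but first-person analytical voice.

Thesis: the alliance system mattered less than domestic political pressures
in turning a regional crisis into a general war.

Open with a concrete moment from the July Crisis rather than a definition.
Integrate at least two primary-source examples and explain how each supports
the thesis. Avoid the 'powder keg' cliche and do not discuss the war's aftermath."""
```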

There are two main barriers to this kind of nuanced problem formulation:

  1. The user doesn't know what they're looking for. Just because a user has an in-depth understanding of a problem doesn't mean they have an in-depth understanding of the solution. Just as it's possible to know how to get from point A to point B without necessarily knowing how to draw a map that would allow someone else to get there, we often have trouble articulating what a good solution would look like. This requires an added layer of expertise the user often lacks. This can become particularly problematic due to automation bias, which leads users to think an output is better than it actually is just because a computer produced it.
  2. It takes too long to formulate a problem in detail. The time it takes to formulate a problem well often exceeds the amount of time it would've taken to solve it without the use of an LLM, defeating the purpose of the efficiency gains promised by AI. This is one reason for rampant prompt sharing among users who simply don't have time to do the required analytical work.

Third-party applications developed by subject-matter experts are well positioned to address both of these barriers. First, they are able to rigorously define the most salient problems in a given field using their domain knowledge. Second, they are able to spend the time it takes to develop a prompt that can then be reused by others, allowing for efficiency gains through scale. Rather than simply sharing prompts that users copy/paste into ChatGPT or other platforms, the best solutions will design a comprehensive user interface with features that complement the text output received from the LLM.

Drivers weren't "left behind" when the airplane was invented. That's because you don't need to know how to fly a plane in order to travel by air. The same goes for both prompt engineering and problem formulation. In the near future, the development of third-party applications powered by LLMs will vastly reduce the amount of specialized training and skills required to benefit from AI tools. End users won't have to learn an entirely new language in order to remain relevant, but the tech tools most likely to succeed in this new environment will be the ones best able to formulate critical problems within a specific field.



Written by Michael McCreary
