What OpenAI’s o1 Means for Learning AI

Another bitter lesson?


OpenAI’s latest model, o1, has troubling implications for those learning to use AI: what we learned last month may no longer be useful.

Learning prompt engineering

One of the new skills that came to prominence with the launch of ChatGPT is prompt engineering. Essentially that’s the skill of asking good questions (“prompts”) to the AI to get a good answer.

You can think of a Large Language Model (LLM) as a giant database of human knowledge: ask a simple question and you’ll merely get a routine answer. Ask, “What is a good dance song for a party?” and you’ll get a top-of-the-head answer such as “Uptown Funk” by Mark Ronson ft. Bruno Mars. If you want the LLM to explore other parts of its knowledge base, you might ask, “What is a good dance song for a party of elderly people who like Asia?” You’ll get a much more interesting answer, such as “Ue o Muite Arukō” (also known as “Sukiyaki”) by Kyu Sakamoto.

So if we want to learn to use AI, we should learn effective questioning strategies; that is, we should learn prompt engineering. Perhaps you’ve already offered this training.

Advanced prompt engineering and OpenAI’s o1

One of the most advanced prompt engineering methods you could have trained yourself in is “chain-of-thought”. If you ask, “How do I build a bridge across a wide river?” an LLM will give an answer off the top of its head, which may or may not be a good design. Instead of jumping in directly with the question, you could take the LLM through the problem the same way you would guide a human student. For example, you might say, “We are going to talk about bridge building, and I want you to go through this step by step”, then “First, what are the main design considerations for this kind of project?” and so on. This approach gives better answers to tough questions.
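For the technically inclined, the staging described above can be sketched in code. This is only an illustrative sketch: the `messages` format follows the common chat-completion API convention, the helper function is hypothetical, and no actual model call is made — in practice each staged question would be sent to the model and its reply appended before the next question.

```python
# Sketch of manual chain-of-thought prompting: instead of asking the
# final question in one shot, we stage a sequence of guiding prompts.
# The dict format mirrors the common chat-completion "messages" convention.

def chain_of_thought_prompts(topic: str, steps: list[str]) -> list[dict]:
    """Build a staged list of user messages that walk a model
    through a problem step by step (no API call is made here)."""
    messages = [
        {"role": "user",
         "content": f"We are going to talk about {topic}, "
                    "and I want you to go through this step by step."}
    ]
    for step in steps:
        messages.append({"role": "user", "content": step})
    return messages

prompts = chain_of_thought_prompts(
    "bridge building",
    ["First, what are the main design considerations for this kind of project?",
     "Next, which bridge types suit a crossing of a wide river?",
     "Finally, recommend a design and explain the trade-offs."],
)
for p in prompts:
    print(p["content"])
```

The point of the sketch is simply that the guidance lives in the prompts themselves — which is exactly the work o1 now does internally, making this hand-staging unnecessary.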

Here's the problem with learning to do advanced chain-of-thought prompting: OpenAI’s o1 already has that built in. Your advanced prompt engineering skill is already obsolete.

What we’re up against and what to do about it

Here’s what we are up against: AI is advancing so quickly that by the time we’ve figured out what to do, such as learning prompt engineering or implementing safety guidelines, the technology has already moved on. We can’t do nothing, but it often turns out that when we do act, the effort proves to be a waste of time because the technology changes faster than our organizations can react.

The solution, insofar as there is one, is to ensure that you have advisors who can warn you if what you are intending to do will still be relevant by the time you do it. This is particularly true for anything expensive or slow to implement. In fact, I’d suggest that the average organization shouldn’t do anything particularly expensive with AI. Just let individuals use the off-the-shelf products to improve productivity or quality or to reduce mental fatigue. You can use products that have AI built in, such as various recruiting applications, but be sure that the vendor can keep the product up to date, or that the AI component isn’t so crucial that it needs constant updating.

Is this a sign of things to come?

We’ve long had to deal with fast-moving technologies where things like the price of compute or battery storage or bandwidth or smartphone capability made significant advances over the course of a few years. The question is whether AI is moving at an even faster pace than we are used to. If we are really in a world where we see technology make a leap every 6-12 months then organizations will have to find ways to move much more quickly than they are currently designed to do.

We wish for rapid advances in AI. Sadly, if those rapid advances leave us breathless and constantly unsure of what’s next then we may face a bitter lesson about the downside of getting what we wished for.

Note: AI fans will recognize my allusion to one of the most famous papers in AI, “The Bitter Lesson” by Rich Sutton in 2019.
