Asking For Context

Here’s a question: a man and a goat are on one side of a river. They have a boat. How can they get across?


[Image: A man, a goat, and a boat]

If you’re anything like Google Gemini or my dad, you might answer something like, “The man has to row across by himself first, and then he can go back for the goat.” To which I might ask: why can’t the man just row the goat across himself? You’d probably be confused. You’d ask whether there’s a weight limit, or a wolf, preventing you from using the simple, obvious answer.

Sometimes the easiest questions are the most difficult, because humans (and LLMs) look for context to find the right answer. We assume that if someone asks what seems like a very obvious hypothetical, it must be a riddle or trick of some kind. For example, if I ask, “I have a piece of paper. How do I take notes?”, you’re likely to ask whether I need a pen, or whether I’m having trouble understanding the subject well enough to condense it into notes. The context implies that I’m not asking for the simple instruction, “You write your notes on your paper.”

This is the major issue with context: it comes in myriad forms, and it is so second nature in daily life that in many situations we don’t actually know what context we ought to give. When asking the question about the man and the goat, I might want to clarify that I’m looking for a straightforward answer, not posing a riddle. But different situations require different context. A person who doesn’t know what a goat is, and therefore doesn’t know whether it’s an animal that swims or flies, might legitimately be asking if the goat has to go in the boat to cross the river, and would be very confused by an answer about a nonexistent wolf. One of the hardest parts of prompt engineering is learning what context is necessary to get the answer you’re looking for.

Below is an example that illustrates some major problems with context that I encounter in day-to-day usage. It happens a lot when I’m trying to write a little app or program in a language I’m not great at (most of them). As programs do, mine needs to turn an input into an output. I usually start with a particular use case and progress it through the steps until I get the output I’m looking for. This causes no end of confusion when I hit an error message like the dreaded “Object reference not set to an instance of an object”. In this situation, I’d ask ChatGPT how to fix the code. It’ll give me a fix that goes something like, “You queried the definition of object ‘word’, but that object doesn’t exist yet. You should write a check to make sure that ‘word’ is populated before checking for a definition.” This is a great suggestion, except that I’m using a test case that I know should succeed. ChatGPT is answering the question “How do I handle this error?” when I’m asking the question “Why is function A not passing the data correctly to function B?”
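To make the mismatch concrete, here’s a minimal sketch of that situation in Python (the error quoted above is from C#/.NET, but the pattern is language-agnostic). The function names and the dictionary are hypothetical, invented for illustration; this is not the actual code from my app.

    # Hypothetical sketch: a dictionary-lookup app where function A
    # (find_entry) silently hands function B (get_definition) a None
    # instead of the data it expected.

    DICTIONARY = {"goat": "a hardy domesticated ruminant"}

    def find_entry(word):
        # BUG: the keys are lowercase, but the input is never normalized,
        # so a known-good test case like "Goat" silently returns None.
        return DICTIONARY.get(word)

    def get_definition(word):
        entry = find_entry(word)
        # Crashes here with the Python equivalent of the error above:
        # AttributeError: 'NoneType' object has no attribute 'capitalize'
        return entry.capitalize()

    def get_definition_guarded(word):
        # ChatGPT's suggested fix: check that 'entry' is populated first.
        entry = find_entry(word)
        if entry is None:
            return "No definition found."
        return entry.capitalize()

    # The guard stops the crash, but the test case still fails:
    print(get_definition_guarded("Goat"))  # -> "No definition found."
    # The real fix is in find_entry: DICTIONARY.get(word.lower())

The guard answers “How do I handle this error?”; the missing .lower() answers “Why is function A not passing the data correctly to function B?” Without the extra context that my test case should succeed, the model has no way to know which question I’m actually asking.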

I think the greatest advancement in GPT technology would be the ability to ask for additional context when needed. This is possible for small language models (SLMs), in which prompts are heavily restricted so that users can only ask about specific topics. I don’t think it is yet possible for LLMs like ChatGPT. It takes a level of expertise to be able to say, “Your question is vague. Do you mean this or that?”, and I don’t believe we’re close to that level in a general-knowledge GPT tool. I don’t believe that even we humans are all that great at understanding the context we’re given, or at recognizing the situations in which we need more context before giving a correct answer.
