Many companies are hiring “prompt engineers” to tickle #ChatGPT in the right spot to cough up the results they need. To me, this is a flashing warning signal of bad #usability in current generative #AI tools. It’s similar to the way we used to need specially-trained query specialists to find anything in big databases of, say, medical research or legal cases. Enter Google, and anybody could search. The same level of usability leapfrogging is needed with these new tools: better usability of AI should be a major competitive advantage. And don’t count on a long-lasting career in prompt engineering. #userexperience https://lnkd.in/gHJREVeh
I have doubts about this statement: “many companies hire prompt engineers”. Can you share statistics or any other evidence to support this? It would also be interesting to look at the skills required. It might not be a problem of usability; it might be a problem of lacking the domain knowledge to solve a specific problem. ChatGPT has become a hype only because of its UI and UX; the technology behind it is not new and was available for years only to a limited public due to high barriers to access. I believe we will see improvements in UI and UX pretty soon, because they are a key element of the success.
I don’t disagree, but you could argue that to find relevant results in Google you would need to learn the different Google search operators?!?
Enter Google and anybody could search. True, but this doesn’t mean that everybody can find the right answer without some struggle. The same applies to prompts, for the time being. Nevertheless, I agree with the idea that the days of ‘prompt engineers’ are numbered.
I guess it is because it is not about interacting with ChatGPT; it is about building instructions to create effective and creative conversations with it. It is closer to programming, or to writing SQL statements. That’s why some are hiring prompt engineers, just as we once required programming engineers. But yes, it might not last for long as we build better tools for conversations with generative AI.
When prompted, ChatGPT confidently scores its own usability at an impressive 7-8 out of 10.
ChatGPT and Midjourney already have a much more accessible UI: natural language. What’s actually in those “prompt engineering guides” is a lot of domain knowledge, not knowledge of how to use the chat UI. If you have little knowledge about image composition, you don’t get the same results out of Midjourney as somebody who has that kind of training. It reminds me of early DTP: suddenly everyone believed they could do fantastic layouts, because now there was a tool where they could just press buttons and drag graphical elements around the screen. But that UI didn’t turn everyone into a designer. The same is happening now.
Look at all of the Structured AI UX that is appearing. Companies like Rationale, Forge, Kittl, and DreamScape are all creating point-and-click experiences because users don't know all that they can ask the system or how to ask the system for what they want in a way it responds meaningfully. And on the output side, users want to use the results and so spreadsheets, slides, and structured layouts are more valuable than a text dump.
Example: a door. Assume this is the very first door in your life. Assume further that you observe others operating the door effortlessly, while you struggle. An idea might pop up in your mind: let’s hire a “door operating engineer”. However, with some experience with doors you will notice some basic concepts:
- A door may prevent or allow moving from one place to another.
- The door itself moves to change its state from “allow” to “prevent” and back.
- The movement might come in different forms; examples include towards you, away from you, to the side, up.
- Sometimes a door seems to move magically, allowing you to pass; sometimes you need to push or pull it in a specific direction.
- Sometimes a door doesn’t move no matter what. It seems to be locked in the state where it prevents you from passing.
- With some experience you learn how to guess which way of operating the door changes its state: signals tell you that the door needs to be pushed or pulled in a specific direction.
- You also learn that signals can be misleading, and the door is actually operated differently. It’s only a UI flaw if the signals are misleading or not present at all.
Naturally, you will mix your own experimentation with observing what other people do.
The direction is certainly this: an ever greater “understanding” of natural language and ever better performance. Just look at the progress from completion models to chat models. For example, newer ones almost independently apply techniques such as chain-of-thought. But how do we search Google today? And how long have we been saying that query understanding will improve by becoming natural? There will be progress, but not immediate. Many limitations also arise from the nature of language models themselves.
Learning Program Director: AI Education
"And don’t count on a long-lasting career in prompt engineering." - As a role with a short shelf life, I mostly agree here. I do think an AI Literacy Educator will be a longer-lasting and more far-reaching role in companies, though. Knowing the potential of AI-enhanced tools, their limitations, their potential for baked-in harmful bias, and how something like a Microsoft Copilot integration has been customized for one's business will be key for associates to know. Plus, company and government oversight policies are ever evolving - so that will give an AI Educator or Consultant role a longer shelf life too.