Better questions or better answers?
Justin Ho
I craft strategic mental health brands that stand out in the market, build trust, and empower your clients.
In the age of AI-everything, are we looking for the right things?
A simple Google search for “writing better ChatGPT prompts” returns an astonishing 35,300,000 results!
That number is practically meaningless, even without the context of AI. Most users of any search platform don’t go beyond a handful of result pages, if that! ChatGPT and the like seem to solve that problem. But how do you know:
• The quality of its response?
• The accuracy of its data?
• The authenticity of its feedback?
• The originality of its creation?
Most text-based AI tools output their best guesses at what you ask. If your question or prompt carries more detailed context, you get a more detailed response. Companies are now hiring “AI prompt engineers,” and a whole new market has sprung up to teach people how to write better prompts. These discussions would have felt like a sci-fi movie just a handful of years ago.
In the current AI-tech landscape, this all rests on the assumption that users:
• Know exactly what they are searching for
• Can effectively articulate their ask, at least through iterative attempts
• Can ask the “right” questions that will get them the best output for what they need
• Can think critically about the responses
• Fully understand their own cognitive challenges with unfamiliar tools
Aside from the simple direct command of asking for what you need (e.g., “write a cover letter for me using my resume”), a common prompt method that’s been popularized works like a movie director describing a scene:
• Set the STAGE
• Set the PERIMETER
• Assign the ACTOR
• Describe the ACTIONS
• Describe how you want it to BEHAVE
An example of this was a prompt on a recent project of mine, built through multiple iterations and rounds of refined conversation to get the final output.
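The director-style structure above can be sketched as a simple template. Everything in this sketch is hypothetical: the function name, the labels, and the project details are illustrative placeholders, not the actual prompt from my project.

```python
# A minimal sketch of the "director" prompt structure: STAGE, PERIMETER,
# ACTOR, ACTIONS, BEHAVE. All project details below are hypothetical.

def director_prompt(stage, perimeter, actor, actions, behavior):
    """Assemble the five director-style elements into one prompt string."""
    return "\n".join([
        f"Context: {stage}",       # STAGE: the scene / background
        f"Scope: {perimeter}",     # PERIMETER: boundaries and constraints
        f"Role: {actor}",          # ACTOR: who the model should play
        f"Task: {actions}",        # ACTIONS: what it should do
        f"Style: {behavior}",      # BEHAVE: how it should respond
    ])

prompt = director_prompt(
    stage="I'm rebranding a small mental health practice.",
    perimeter="Keep suggestions under 200 words and make no medical claims.",
    actor="You are a brand strategist.",
    actions="Propose three taglines for the practice.",
    behavior="Explain the reasoning behind each tagline in one sentence.",
)
print(prompt)
```

Pasting a filled-in template like this into ChatGPT gives it the scene, the boundaries, and the role in one shot, which is usually the point where the iterative refinement starts.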
My conundrum recently has been whether I want my experiences with ChatGPT to give me better answers or to ask me better questions. The gap I see in how most people use ChatGPT is that it’s a one-way conversation. You ask it what you want, maybe through multiple rounds of refined prompts, and await a magic response. It’s a demand-based approach to using AI. Nothing wrong with that. But I wonder if there could be a better approach, with more depth, that creates a user experience beyond one-stop responses. An example of this is a learner’s approach: prompting the model to ask me questions instead of just answering mine.
This approach gives me a deeper understanding of not only what to look for, but also how to start formulating my research approach. I can now either:
• Continue this exercise to fine-tune it,
• Start a new session by asking (some of) the questions it gave me, or
• Answer its questions and let it generate a detailed research plan that fits my queries
In the last case, it’s asking me questions I might not have thought of before. This approach gives me insights and challenges me to think beyond my original assumptions, which is more than a straight, one-stop communication. I can still go back to the “director” approach afterward to dig deeper for more information.
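The learner’s approach can be sketched the same way. This is a hypothetical template, not my actual prompt; the topic and question count are illustrative placeholders.

```python
# A minimal sketch of the "learner's" approach: instead of demanding an
# answer, the prompt asks the model to interview *us* first, then build
# a plan from our answers. The topic below is a hypothetical placeholder.

def learner_prompt(topic, n_questions=5):
    """Build a prompt that asks the model for clarifying questions first."""
    return (
        f"I want to research {topic}, but I'm not sure where to start. "
        f"Before giving me any answers, ask me {n_questions} questions "
        "that would help you understand what I actually need. "
        "After I answer them, generate a detailed research plan."
    )

request = learner_prompt("how therapy clients choose a provider")
print(request)
```

The flow flips: the first response you get back is a set of questions, and the conversation deepens from your answers rather than from another rewrite of your original prompt.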
AI tools like ChatGPT are incredible, and I feel like we’re using them like a simple calculator instead of a supercomputer.