Useful AI tip of the week
Don’t tell an AI what not to do
Amsterdam was becoming too popular among young British men looking for sex, drugs, and rock and roll. To discourage them from coming, the city launched an online "Stay away" campaign, which contributed to a 22% drop in UK visitors compared to 2019. Why am I telling you this? Because it's better to tell people what to do, like "Stay away," than what not to do, like "Don't visit Amsterdam."
This lesson also applies when talking to large language models. They respond better to direct instructions than to phrases telling them what not to do, especially when generating images. See the example below.
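As a sketch of what this looks like in practice, here is how the two phrasings might be sent to an image model. This assumes the OpenAI Python SDK and the "dall-e-3" model name purely for illustration; the principle is the same for any image generator.

```python
# Minimal sketch: positive vs. negative phrasing in an image prompt.
# Assumes the OpenAI Python SDK (`pip install openai`) and an
# OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

# How not to do it: "airplane" appears in the prompt, so the model
# may well draw one anyway.
negative_prompt = "A beach at sunset. Do not include an airplane in the sky."

# How to do it: describe only what you want to see.
positive_prompt = "A beach at sunset under a clear, empty sky."

result = client.images.generate(
    model="dall-e-3",  # illustrative model name
    prompt=positive_prompt,
    size="1024x1024",
)
print(result.data[0].url)  # URL of the generated image
```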
So to get an image without an airplane, avoid mentioning "airplane" in your prompt altogether. The same goes for text: to get the desired output, be as specific and direct in your prompts as possible. Let's say you are looking for a comprehensive definition of loss for a contract you are drafting.
This is how not to do it:
Give me a definition of loss. Don’t make it too simplistic.
This is how to do it:
Generate a detailed definition of loss for a contract I am drafting. The definition should be accurate and complete, and should include direct damages, consequential damages, and incidental damages.
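To make the difference easy to test side by side, here is a minimal sketch of both prompts sent through a chat-completion API. The OpenAI Python SDK and the "gpt-4o" model name are assumptions for illustration only; any chat model works the same way.

```python
# Minimal sketch: a vague "what not to do" prompt vs. a direct,
# specific one. Assumes the OpenAI Python SDK and an OPENAI_API_KEY
# in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

# How not to do it: vague, framed around what to avoid.
vague_prompt = "Give me a definition of loss. Don't make it too simplistic."

# How to do it: direct and specific about what the output must contain.
direct_prompt = (
    "Generate a detailed definition of loss for a contract I am drafting. "
    "The definition should be accurate and complete, and should include "
    "direct damages, consequential damages, and incidental damages."
)

for label, prompt in [("Vague", vague_prompt), ("Direct", direct_prompt)]:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} prompt ---")
    print(response.choices[0].message.content)
```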
The more specific and direct you are in your prompts, the higher your chance of getting the output you are looking for.