Explaining AI to My Mother: A Generative AI Explainer
By Mark Zaynard - No Generative AI was used in generating this document.
Artificial Intelligence (AI) is grabbing headlines these days: Type a few words into a prompt, and it spits out a paper, or a picture, or even a video. Some people are lauding its benefits (it will reduce our workload!), while others are shouting about its dangers (it will take our jobs!), but only a few are looking at AI for what it is: a tool, which people can use for good or for ill.
What even is AI?
The first question is "What is artificial intelligence?" By its nature, AI is a broad term that covers esoteric things like neural nets, machine learning, and large language models (LLMs). The working definition is usually confined to "programming that allows a computer to act like a human," which is how we'll use it here. Although AI is not true intelligence, people can interact with it in a natural manner, much as they would with another person; depending on the particular AI, it can ask for clarification or even anticipate what it is being asked for when the request isn’t exactly clear.
Primed by modern media, many people are worried about AI taking over, but in truth, AI has already become a big part of our lives. AI is in the algorithms that power your video service's recommendations, in virtual assistants like Siri and Alexa, and even in the spell check on your phone. Those are all examples of Narrow AI: complex and somewhat human-like, good at what they have been trained to do, and helpful in our personal and professional lives, but ultimately still recognizably computers.
What is Generative AI?
Then there is Generative AI, which is getting all the headlines today: programs like ChatGPT or DALL-E, which can do things we thought were beyond the ability of computers—creating text, images, and even audio and video with little human effort. That's what has people concerned: If you can create a piece of writing or art by just entering a few things into a prompt, why bother developing those skills in the first place, or paying someone who has?
Generative AI is a step beyond the Narrow AI already integrated in our lives; it fuses Big Code with Big Data to create incredibly complex models of intelligence that can do surprising things, like mimic creativity. All AIs are trained on big sets of data, but the models used for Generative AI require much more data (orders of magnitude more) to train on than previous generations of AI. That requirement (and its cost) is the source of several problems that we’ll discuss later.
The most recognizable Generative AI names are text generators like ChatGPT or image generators like DALL-E and Midjourney, but tools are springing up all over the place to accomplish all sorts of tasks. After all, there is a tremendous benefit to being able to have a short discussion about what you need and end up with a product that fits the bill. Generative AI tools are collage artists, taking tiny pieces of the data they were trained on and sticking those pieces together into what looks like new content. Sometimes that new content is very good; most of the time it is merely mediocre; and sometimes it is very, very bad.
A cautionary tale of court and ChatGPT
In June 2023, the Southern District of New York held a hearing; it wanted a lawyer to explain why it should not sanction him for a pleading he filed earlier that year. The Court's primary objection was that the lawyer cited nine cases that did not seem to exist in LexisNexis, Westlaw, or any other legal database. It came out that those opinions had been fabricated by ChatGPT.
The lawyer had used ChatGPT, a Generative AI, to help him write the pleading in question; when initially challenged by the court, he had gone back and asked ChatGPT to find the cases, and ChatGPT had obliged. By the time the hearing occurred, the lawyer had realized his mistake: ChatGPT is not a search engine, but rather a storyteller. Its focus is not accuracy, but crafting an answer that looks good.
That is one of the biggest risks of Generative AI: It will always answer your question, even if it must make up facts to do so. This phenomenon is known as “hallucination,” and it occurs because Generative AI is meant to be creative, not accurate. Added to that, AI training (like human training) is only as good as its training materials, and Generative AIs need a tremendous volume of data to train on. Buying data is expensive, so programmers have turned to the Internet, which is teeming with information but not known as a bastion of factual accuracy. Generative AI is shaped by the data it is trained on, so, just like a child, it will reflect the values and misapprehensions contained in that data.
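To make the "storyteller, not search engine" idea concrete, here is a toy Python sketch. It is nothing like how ChatGPT actually works under the hood; it simply illustrates the behavior: it assembles plausible-looking legal citations from fragments without ever consulting a database, so every "case" it produces is confident-sounding fiction. All names and numbers below are invented for illustration.

```python
# A toy "storyteller": it stitches citation-shaped text together from fragments,
# with no lookup step to check whether the result refers to anything real.
# That gap between "sounds plausible" and "is verified" is what hallucination is.
import random

PLAINTIFFS = ["Smith", "Alvarez", "Okafor", "Nguyen"]
DEFENDANTS = ["Acme Corp.", "Consolidated Widgets", "Blue Harbor Airlines"]
REPORTERS = ["F.3d", "F. Supp. 2d", "N.Y.S.2d"]

def invent_citation() -> str:
    """Return something that looks like a case citation but refers to nothing."""
    return (f"{random.choice(PLAINTIFFS)} v. {random.choice(DEFENDANTS)}, "
            f"{random.randint(100, 999)} {random.choice(REPORTERS)} "
            f"{random.randint(1, 999)} ({random.randint(1995, 2022)})")

if __name__ == "__main__":
    # Three confident-looking, completely fictional citations.
    for _ in range(3):
        print(invent_citation())
```

The remedy is the same one the court expected of the lawyer: treat anything that looks like a fact as unverified until you have checked it against a real source.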
What are the risks of Generative AI?
There are four big risks for most people, and we've already covered one: accuracy and hallucination. AIs don't tell you when they’re guessing, and when they are wrong, they are very confidently wrong. That's a problem when the AI is telling you to take a non-existent street to get to your destination, but it is a much bigger problem when an AI is telling you whether a particular piece of tissue is cancerous. Because of the way Generative AIs are trained, they can also fall into self-referential cycles that end up magnifying errors significantly.
Related to accuracy is the issue of bias, which can often be traced back to the materials used for training. Bias is a problem with any kind of AI, not just Generative AI: Since AI is trained on historical data, it will tend to reinforce historical trends, which is a problem if those trends are what you are trying to change. AI bias is also harder to spot, not because it is more subtle, but because people tend to give a machine the benefit of the doubt. After all, machines don't have biases, right? And if the AI adds the results it generates to its training data, it can, again, magnify errors, just as noted in the discussion of accuracy.
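That self-referential cycle is easier to see with a toy simulation. The sketch below is not how real models are trained; it is a minimal illustration of the mechanism, in which a modest skew in the data is re-learned and re-generated with a small, assumed amplification at each step. The 5% amplification factor is an arbitrary assumption chosen purely for illustration.

```python
# A toy feedback loop: a "model" estimates how often a trait appears in its
# training data, generates new examples at (slightly more than) that rate, and
# feeds its own output back into the training set. Watch the skew grow.
import random

def train(data: list[int]) -> float:
    """'Training' here is just measuring how often the trait (1) appears."""
    return sum(data) / len(data)

def generate(rate: float, n: int) -> list[int]:
    """Generate n examples, over-producing the majority pattern by an assumed 5%."""
    amplified = min(1.0, rate * 1.05)
    return [1 if random.random() < amplified else 0 for _ in range(n)]

data = [1] * 55 + [0] * 45              # start only mildly skewed: 55% vs. 45%
for generation in range(10):
    rate = train(data)
    print(f"generation {generation}: trait appears in {rate:.0%} of the data")
    data += generate(rate, len(data))   # model output becomes training data
```

Run it a few times: the starting 55/45 split drifts steadily upward, which is the point. A system that consumes its own output does not stay where it started.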
There are other things to look out for as well. For instance, who owns the piece created by the AI? That seems like a silly question, but there are multiple angles to consider it from. On one hand, what happens if you use AI to write a defamatory piece? Are you responsible for it? Are you liable for the libel? There are cases working their way through the legal system right now to answer that question, but you probably are. On the other hand, Generative AI is not actually creating new work, just reusing bits of other works and rearranging them. Does the AI own that new work? Do you? Do any of the original creators’ rights attach, transfer, or interfere? For that matter, do you have any rights as a creator at all? How can you determine if Generative AI is truly creative, or just creatively larcenous? Those questions are being hotly contested right now, with no clear answer in sight. Right now, it’s a safe assumption that although using Generative AI opens you up to all the liability of creation, it probably doesn’t convey the benefits.
The last of the big four issues is preservation of confidentiality, which many people don't even think about. When you enter confidential information into a prompt, the AI tool files that away, adding it to its training data, and you don’t know when, where, or how it might surface again. Certainly, when a few Samsung engineers fed their proprietary designs into an AI to look for design mistakes, they didn’t think that they were releasing trade secrets into the wild. They had no idea that Generative AIs take anything fed into them and add it to their training data, reusing and redistributing that information in other works whenever the AI deems it appropriate. In exchange for saving a few hours’ work, they endangered a huge chunk of their company's intellectual property portfolio. In a litigation situation, you can often claw back information that was improperly released to the other side, but that's not possible with Generative AI; there's no unringing the bell, as they say. It’s not just an issue for intellectual property, either: credit card numbers, health information, and anything else that should remain confidential is at risk when entered into a Generative AI prompt.
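One practical precaution, sketched below in Python, is to scrub obviously sensitive patterns out of text before it ever goes into a prompt. The patterns and placeholders here are illustrative assumptions: a simple filter like this only catches well-formed card numbers, email addresses, and US Social Security numbers, and it is no substitute for judgment or a real confidentiality policy.

```python
# A minimal sketch of one precaution: replace obviously sensitive patterns with
# placeholders before any text is pasted into a Generative AI prompt.
import re

REDACTIONS = [
    (re.compile(r"\b\d(?:[ -]?\d){12,15}\b"), "[CARD NUMBER]"),   # card-like digit runs
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),      # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),              # US SSN format
]

def scrub(text: str) -> str:
    """Replace anything matching the patterns above with a placeholder."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(scrub("Bill the retainer to card 4111 1111 1111 1111 and email jane.doe@example.com."))
# -> Bill the retainer to card [CARD NUMBER] and email [EMAIL].
```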
Good grief, why would I ever use this?
The simple answer is that Generative AI can help you to do complicated tasks quickly and easily. It's true, there are a lot of risks with Generative AI, but that's true of every tool. And just as with every other tool, taking proper precautions can make life considerably easier (and safer).
Generative AI will not spit out content ready to go (yet), but it can still get you started. Like a cake mix, it will save you time, but you still have to put in work to make sure it comes out the way you want it. Another way to look at it is that Generative AI is a very eager but not particularly bright intern; it will do what you ask it to do, but you will need to verify what it has done and polish it up before it is ready for prime time.
Fortunately, there are ways to minimize some of the risks: four rules that will help keep you out of trouble with Generative AI. They are not a panacea, but they should keep you relatively safe. They don't correspond to the big four concerns on a one-to-one basis, but they do touch on all of them:
So, what do I do?
That’s a loaded question, but the answer is to dip your toe as deeply into the pool as you are comfortable with. Following the four rules above will help keep you out of trouble, but they can't guarantee your safety, so know what the risks are and how to avoid them. If you are in a highly regulated context, like law, medicine, or finance, know what your responsibilities are and stick to them.
Something to remember is that money can alleviate a lot of the concerns around Generative AI: If you can afford to create a custom AI, train it on your own data, and control who uses it and how, you can short-circuit many of the problems described above. Of course, there's a reason that the big AI tools are trained on "free" data from the Internet; AI is expensive to develop, train, and maintain, so custom AIs are the rare exception, not the rule.
The key thing, though, is that Generative AI is never going away. It will get better, and subtler, and start showing up in places we've never considered but that will seem perfectly natural once we think about it. Generative AI is just a tool, no matter how slick it appears; just like any other tool, it should be used responsibly. Keep that in mind, act in a way that will keep you out of trouble, and you can use AI to whatever extent you want or need.