A.I. Is Changing Everything. Does That Include You?
Stephen Dubner
Host of Freakonomics Radio and co-author of the Freakonomics books
For all the speculation about the future, A.I. tools can be useful right now. Adam Davidson discovers what they can help us do, how we can get the most from them — and why the things that make them helpful also make them dangerous. (Part 3 of “How to Think About A.I.”)
This article comes from Freakonomics Radio. You can listen and follow our weekly podcast on Apple Podcasts, Spotify, or elsewhere.
* * *
Hey there, it’s Stephen Dubner. This is the third and final episode in our three-part series called “How to Think About A.I.” The guest host for this series is Adam Davidson, one of the founders of N.P.R.’s Planet Money. Here’s Adam.
* * *
Adam DAVIDSON: Can you just tell us who you are and what your job is?
Have you heard of this new job? Prompt engineer? It’s a job that could only exist right now. A job that satisfies a need that almost none of us even knew we might have until the last few months. So, what is it exactly?
BERNSTEIN: The prompt engineer essentially is an expert in being the linguistic intermediary between user input and A.I. output.
Please note how precise Anna Bernstein’s language is. That is her superpower. Being really, really precise about language. If you’ve played around with A.I. tools like ChatGPT or Google’s Bard, you’ve seen it: they respond to the precise words you type in. They can’t figure out a vibe or a hidden intention. Also, unless you happen to work at an A.I. company, you don’t ever interact with the raw model itself. Whatever you type in is put through a filter — a filter that people like Bernstein design. The filter between you and the raw A.I. model is there for a bunch of reasons. The first is to avoid the ugly stuff. Since the A.I. models were all trained on an unfiltered mass of text from the internet, they can easily call up truly horrific, offensive words and ideas if left unchecked.
But another reason for the filter is that the A.I. in its raw form is not always great at understanding what human beings want. Or, put another way, human beings are not always that great at telling the A.I. precisely what we want. We might type a quick prompt, like “write some marketing copy for my new blog.” The A.I. has no way of knowing what length you want, what audience you are targeting, what voice or writing style you’re looking for. That’s the kind of context Bernstein’s work provides. She gives the details that most people don’t think to type in.
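As a rough illustration of that kind of intermediary layer (our guess at its shape, not any company’s actual code), a tool might wrap a user’s quick request in the missing details before it ever reaches the model:

```python
# Hypothetical sketch: the "filter" between a user's quick request and
# the raw model, filling in the context most people don't think to type.
def build_prompt(user_request: str, audience: str, tone: str, length_words: int) -> str:
    return (
        f"You are a marketing copywriter. Write for this audience: {audience}. "
        f"Use a {tone} tone and keep it to about {length_words} words.\n\n"
        f"Request: {user_request}"
    )

print(build_prompt(
    "write some marketing copy for my new blog",
    audience="small-business owners",
    tone="friendly",
    length_words=150,
))
```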
In the previous episode of our series on A.I., we talked about how all new technology destroys some jobs and creates new ones, jobs nobody could have imagined. Well, prompt engineer is one of those new jobs. And most people have still never heard of it.
BERNSTEIN: Honestly, most of the time I would just say “I make the A.I. talk good,” and then that would get a laugh and everyone would move on, hopefully. It’s a difficult job to define.
For a job that barely existed a few years ago, prompt engineering has become hot. I’ve seen dozens of prompt engineering gigs on job boards. Most pay more than $100,000 a year. I saw one that was more than $500,000. This is just one early example of a new job created by A.I. Talk to leaders in medical research, engineering, finance, health care, education. Almost all the folks I speak to say, roughly, the same two things: A.I. is going to change everything. And we have no idea what those changes will look like.
That’s what we’re talking about today on the third and final episode of our series on A.I., here on Freakonomics Radio. What does the world look like when A.I. is everywhere, when A.I. is just assumed? At your job, at school, in your personal life? How will it change the way you live? And how will you have to change?
* * *
When Anna Bernstein talks about making the A.I. “talk good,” she is speaking of this new generation of A.I. programs, called large language models, or L.L.M.s. These L.L.M.s take in massive amounts of data — we don’t know exactly how much, but it’s some huge percentage of all of human knowledge, meaning just about every publicly available book, blog, and news article, as well as … well, whatever Twitter and Reddit and comments on YouTube videos represent. And then, the A.I. does something fairly simple: it assigns probabilities to words. We do this, too. If you heard me say, “I saw a huge school when I was out on that boat,” you’re probably able to do a quick calculation and figure out I mean a school of fish. You might be using a handful of parameters to make that calculation: the word boat makes you realize I was on the water, maybe you also know something about me and what I’ve been up to lately and what I’m interested in. Maybe we had just been talking about fish.
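To make that concrete, here is a minimal sketch of “assigning probabilities to words,” written against the small, open GPT-2 model via the Hugging Face transformers library. It is our illustration, not the code behind any of the chatbots discussed here; on a prompt like this, you would expect a word like “fish” to rank high.

```python
# A toy demonstration of next-word probabilities, using the small open
# GPT-2 model from the Hugging Face transformers library.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I saw a huge school of"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every word in the vocabulary

# Turn the scores at the final position into probabilities for the
# next word, and show the five most likely candidates.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {float(p):.3f}")
```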
Neurologists explain that our brains can actively process about three to seven different chunks of information at any one time. The A.I. large language models employ hundreds of billions of discrete parameters — up to a trillion. So, the words A.I. writes are not based on a few facts, like that I’m in a boat and we were just talking about fish. They are based on trillions of data points. There’s an old saying: knowledge is knowing a lot of facts and wisdom is knowing which facts matter. In its raw state, an L.L.M. has almost all of human knowledge and almost no human wisdom. There are a handful of L.L.M.s now: there’s OpenAI’s GPT-4, which fuels ChatGPT, as well as Microsoft’s Bing. Google’s L.L.M. is called PaLM 2. We’ve heard so much about these L.L.M.s, I started to wonder: What are they, exactly? Are they one big box or some network of computers? Are they in one place or just some vague capability in the cloud? Who creates and manages these L.L.M.s?
Dario AMODEI: The number of people required to do it has gone up over time. When I was at OpenAI doing GPT-3, it was basically three of us who trained the original model.
That is Dario Amodei. I find this amazing. He says that basically three people did the original work on this L.L.M. that eventually changed the world. Amodei quit OpenAI, the company that created ChatGPT, and has a new job now.
AMODEI: I’m C.E.O. of an A.I. company called Anthropic.
Before he was C.E.O. of Anthropic, when he was at OpenAI, Amodei was V.P. of Research there.
AMODEI: Then around the end of 2020, I and a number of our colleagues left to start this company called Anthropic that was really going to focus on, A, scaling up A.I. systems, but B, really thinking about the safety and controllability components of it more so than we felt other folks in the field had done so far.
And why did they want to do this?
AMODEI: Things are moving very, very fast. And we want to move fast, too. But we want to move fast in a way that’s good.
Anthropic has raised billions of dollars from venture capitalists and from Google, which owns a reported 10 percent of the company. Its goals are not small: it wants to win the race to be the dominant large language model in the world, surpassing OpenAI by being bigger, smarter, and more aligned with humanity. They call their L.L.M. chatbot Claude. There are around 35 people working on it today. I asked Amodei: What does it take to build one of these L.L.M.s?
AMODEI: The first stage is actually surprisingly simple. You just take this large language model and you feed it a whole bunch of text. People typically crawl the Internet. So, you know, this would be something like trillions of words of content. Just a wide range of stuff you can find on the Internet, from news articles to Wikipedia articles about baseball to the history of samurai in Japan. And you basically tell this language model to look at document after document. And for each document, look through the document up to a point and always predict the next words. So, look through the first three paragraphs: what’s the first word of the fourth paragraph, then what’s the second word of the fourth paragraph? And you can always tell whether it’s guessing right or wrong, because you know what the truth is.
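In miniature, the guess-then-check loop Amodei describes looks something like the sketch below. It uses simple word-pair counts instead of a neural network (our simplification, nothing like the scale of a real training run), but the objective is the same: read text, predict the next word, and score the guess against the truth.

```python
# A toy version of next-word-prediction training: count which word
# follows which, then test how often the most likely guess is right.
from collections import Counter, defaultdict

corpus = ("you look through the document up to a point and you always "
          "predict the next words and you check the next words").split()

# "Training": tally what follows each word.
following = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    following[word][next_word] += 1

# "Evaluation": guess the most common follower, compare to the truth.
right = 0
for word, truth in zip(corpus, corpus[1:]):
    guess = following[word].most_common(1)[0][0]
    right += guess == truth
print(f"{right} correct out of {len(corpus) - 1} guesses")
```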
This, by the way, keeps coming up: people who work on A.I. talking about the models like they’re children. Some talk as if they are newborns, listening to language and slowly figuring out what words mean, the rules of grammar. Although, large language models often learn dozens of languages. Others talk about the models like they’re older kids, though still naive about the world. I sometimes think of those Amelia Bedelia books, about the housekeeper who takes everything literally. You ask her to plant a bulb and she puts a lightbulb in the dirt. Here’s prompt engineer Anna Bernstein:
BERNSTEIN: The analogy I often use is, it’s like I’m picking up a cup and teaching a toddler to drink from the cup. And I like bring the cup to my mouth and drink from it. And the toddler picks up the cup and brings the cup up to their mouth. And I’m like, “Yes, yes.” And then it brings the cup over the floor. And I’m like, “No!” And then it just drops the cup and I’m like, okay, we were close.
I found this surprising. The main work — the hard work — at these A.I. companies is not building the original large language model. That’s a big project. It costs more than $100 million and takes several months of intense, although fairly basic, computing. But the hard work comes once all the training and math are finished, and you have a large language model. And it’s sitting there, knowing everything, understanding nothing, equally capable of a romantic sonnet or a racist diatribe.
OpenAI spent months getting GPT-4 to stop being so offensive, so violent, so ugly, so useless. That requires a ton of human intervention. With Anthropic’s Claude, they intervene using a method they call “constitutional A.I.”
AMODEI: The idea in constitutional A.I. is that you write an explicit set of principles, a constitution. Claude’s constitution is about five pages long. It has some principles drawn from the U.N. Declaration on Human Rights, some from Apple’s terms of service, and some that we kind of came up with ourselves. And then you essentially train the model to do things in line with the constitution, and another copy of the model to tell the first copy of the model, hey, is what you just did in line with the constitution? And so we kind of feed the model back on itself and teach it to be in line with the constitution. And the great thing about the constitution is, although I think this question of what should the values of the model be, how should it interact, I think that’s a very hard question. But I think an important advance here is if we’re able to make this five-page document and we can say, this is what Claude is attempting to do. It might not be perfect about it, but Claude is attempting to respect human rights. Its aim is not to have political bias in any direction. And then we can point to the constitution and we can say, look, we’re not perfect at this, but, like, the training principles are right here. You know, we’ve not secretly snuck anything into the model.
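In rough pseudocode, the self-critique loop Amodei describes might look like this. It is a heavily simplified sketch of the published idea, not Anthropic’s actual code, and generate() is a hypothetical placeholder for a call to a language model.

```python
CONSTITUTION = (
    "Choose the response that best respects human rights and that "
    "shows no political bias in any direction."
)

def generate(prompt: str) -> str:
    """Hypothetical placeholder for a large-language-model call."""
    raise NotImplementedError("wire this up to a real model")

def constitutional_step(user_prompt: str) -> str:
    # One copy of the model drafts an answer...
    draft = generate(user_prompt)
    # ...a second copy checks the draft against the written constitution...
    critique = generate(
        f"Constitution:\n{CONSTITUTION}\n\nResponse:\n{draft}\n\n"
        "Is this response in line with the constitution? If not, explain why."
    )
    # ...and the model revises its answer in light of the critique.
    return generate(
        f"Response:\n{draft}\n\nCritique:\n{critique}\n\n"
        "Rewrite the response so it is in line with the constitution."
    )
```

In Anthropic’s published description of the method, revised answers like these are then used as training data, so the finished model internalizes the constitution’s principles rather than consulting the document at chat time.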
This is a big concept Amodei is proposing: that one of the things we should teach A.I. is to tell right from wrong. Anthropic is not the first team that’s tried this. It has been a major hurdle for researchers. Amodei is cautiously optimistic that Claude’s constitution will help it clear that hurdle. And he thinks there are other reasons to feel optimistic about A.I.’s potential. Before he started working on A.I., he got a Ph.D. in biophysics.
AMODEI: When I think of the places we’ve made progress and the places where we failed, the diseases that we’ve done a good job of curing are those that are fundamentally simple, right? Viral diseases are very simple. There is a foreign invader in your body. You need to destroy it. Same with bacterial diseases. The ones we haven’t succeeded that well at yet, even though we’ve sometimes made progress, are cancer. You have a billion different cells in your body that are out of control and rapidly self-replicating. Each of them has a different cocktail of like, crazy mutations that causes it to do a different crazy thing. How do you deal with that? It runs into this incredible complexity of number of cells, number of proteins within each cell, all these incredibly complicated regulatory pathways. And my thinking on it as I went from biology to A.I. was, “Oh my God, this is beyond human comprehension. I’m not sure that humans can completely solve or understand these problems.” But I have this feeling that maybe machines can, with still some help from some parts that need to be done by humans. But all the facts, all the incredibly complicated proteins regulating proteins, tens of thousands of RNA sequences, like, this is raw data. This feels like machine language, not human language. I mean, curing cancer has almost become a joke, right? The idea of totally curing a disease like that, to most people in that field and maybe to most people — it’s been said too many times, and the promise hasn’t been delivered. But I actually think A.I. is the technology that could do it. I mean, as worried as I am about the downsides, I think the upsides are incredible and we’re in a period where science has not progressed as fast as we like. But I think A.I. could unblock it.
But what about those downsides? Amodei testified in Congress about his fears of bad actors using A.I. to develop biological weapons. He points out that, today, it takes enormous resources — typically, the resources of a state — to develop most biological weapons, which is scary enough. But what if A.I. puts those tools in the hands of terrorists and psychopaths?
AMODEI: It just stands to reason that if you can do wonderful things with biology, you can also do horrible things with biology. And I’m very concerned about this. The truth is the scary-sounding stuff and the stuff you can get on Google, that’s not really the stuff that makes the core experts afraid. What you should think about is there’s a long process to really do something bad. I’m not going to talk about it in a public setting, and a lot of things I don’t know about even. But there are certain steps in that process where the information is really hard to get and the tasks are really hard to do. And what we looked at is: is that information being implicated? And the concern is that we’re getting to the beginning of where it is. We’re not there yet, but two to three years is our best guess. I don’t know what’s going to happen, but that’s our best guess for when it will get there.
These debates are essential: is A.I. good, is it bad, how can we encourage the goodness and try to head off the worst of the badness? Recently, Anthropic, OpenAI, Microsoft, and Google announced they were forming an oversight group to self-police their industry. It’s called the Frontier Model Forum. Amodei and other A.I. executives have also called for the government to develop regulations for this new technology. There’s obviously an inherent tension here: Amodei says A.I. could be very dangerous, and should be regulated. At the same time, he’s running a company whose very mission is to make A.I. more powerful.
AMODEI: I think of myself as someone who’s trying to do the right thing. But I can’t say that my company has all the right incentives here. You’re kind of relying on me to go against the company’s incentives. And that’s true for the other companies as well. Somehow the government needs to play a watchdog or enforcement role while leveraging the expertise of the companies. And there’s probably a role for nonprofit organizations, too, and so there needs to be some kind of ecosystem where the strengths of each component help to ameliorate the weaknesses of the other components.
How to regulate A.I. is a big, important issue that’s in the midst of being figured out. In the meantime: A.I. is here. We can’t stop it. A lot of us are using it — not to develop bioweapons, but to beef up our résumés, inspire ideas for a dinner party, or just to play around. So, it makes sense to try to figure out what it’s good at. And getting to know A.I. is kind of like getting to know a stranger. It takes time.
BERNSTEIN: I mean, at this point, I’ve spent — I don’t want to know how many hours with these models.
* * *
A.I., like any new technology, will create winners and losers. For now, Anna Bernstein is one of the winners. When she started this work, the job didn’t have a name. She — and her bosses — barely knew it was a job.
BERNSTEIN: They hired me on contract for a month to fix tone at the time. They were like, if we can really nail, you know, friendly or professional or formal or informal, that’ll improve the product. And I kind of figured that out for them, and so was hired full-time. And yeah, it’s been — it’s been a really wild ride ever since.
I find Bernstein’s experience so interesting because she’s an early pioneer in this new world of large language model A.I. What she has learned in copywriting will, I think, eventually apply to people in lots of other fields. After all, whatever we do with A.I. in biological research or aerospace technology or anything else, human beings will need to figure out how to communicate their goals to this computer brain. What Bernstein learned — and what she taught me — is that learning to talk effectively with an A.I. L.L.M. requires you to spend some time thinking about how you talk to human beings.
BERNSTEIN: It’s definitely opened my eyes to how much of human communication is inference, is under this tent of context where, when we speak to each other, we think we’re speaking in a nuanced and precise way. And we are, but we’re doing that through relying on context cues and even in written communication, I’m not just talking about body language and tone, but that as well — but relying on the shared context with the person we’re talking to — this sort of fuzziness, or like slack they’ll give us. And you have these models that are trained for both really, really precise and literal instruction that human beings would struggle with because they are very intricate and it’s a lot of instruction at once, and at the same time, the same model is supposed to be good at this sort of fuzzier communication with people. And those two really at times are at odds with each other.
Think of saying something like “it would be fun to get together soon” to another person. That phrase could mean what it says, and could be followed up by an email asking to schedule a date. But it could also mean — and maybe it’s more likely to mean — that it would very much not be fun to get together soon. Maybe I’m just saying that to be polite and to get out of this awkward encounter. That’s a great and sometimes very confusing aspect of language: the very same words can mean a specific, precise thing — or the exact opposite.
For me, at least so far, this has been a bit maddening when I use A.I. tools. I’m thinking of something I want a tool to do, I type out the words telling it to do that thing, and it doesn’t quite do what I was hoping. I look at the words I used and realize I hadn’t put in enough context to guide the A.I. to what I want. For someone like Anna Bernstein, though, that’s not maddening — that’s the fun.
BERNSTEIN: Now, at the intersection of those two things is actually something I really enjoy, which are really, really well-crafted prompts, even like simple prompts where when you get just the right wording, it does exactly what you want. If you can describe exactly what you want, it gives you exactly what you want, where you really hit the nail on the head for describing the type of copy. And that can be so powerful.
My early experiences with A.I. were frustrating. I’d type in a prompt like “Generate a scientifically-backed meal plan to lose weight” or “Give me a list of movies that an 11-year-old boy and his parents might enjoy,” and I’d get a very boring, generic response. That’s because the prompts themselves are pretty generic. I didn’t tell the A.I. what I like to eat or if I have any dietary restrictions. I didn’t explain what my son is into, what I’m into, what my wife’s into. If you don’t tell A.I., it won’t know. So, while it has been exposed to so much, it’ll just give you a blah, middle-of-the-road answer. The good news is that it’s not hard to improve the A.I. results. Just be more specific. Give it more context. And learn what kinds of prompts lead to better results. Anna Bernstein has a YouTube video that I found helpful. It’s called “Master the Art of Prompt Writing: 6 Tips to Writing Better Prompts,” if you want to check that out. So what’s Bernstein’s advice?
BERNSTEIN: It’s going to sound really basic, but just getting the right wording, trying synonyms, trying different syntax — variations on the same theme can really unlock capabilities in the A.I. You can also pile synonyms on there, if you’re trying to like get it to use a very enthusiastic voice, and just enthusiastic isn’t quite enough, you can pile on like “excited,” “hyped up,” and just like use all of those at once.
With A.I., it’s quite effective to just use normal language. It works well to write down your thoughts — or, even better, speak them — like you’d be telling them to a person. A friend told me to think of talking to an intern — a capable, super-eager, but hopelessly naive young intern. Use regular language but add as much context as you can. So, don’t just say: “I want to go to Montreal for the weekend, give me a list of fun things to do.” Give way more context: “I want to go to Montreal for the weekend. My wife and I love trying new local foods in out-of-the-way restaurants. Our 11-year-old son loves anything to do with sports. We love to go on long walks in regular neighborhoods and we don’t like touristy spots.” You get the picture: give a LOT of context.
A few other tricks or tools I find helpful: tell the A.I. to ask you questions. So, I might give it a prompt to do something and then I’ll add, “Ask me any questions that might help you fully meet my needs.” Another trick a friend told me: give it an instruction and then write: “Do not do anything yet. First: tell me what you think I’m asking you to do and let me know what you find confusing.”
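Those habits add up to a simple recipe: context first, then the task, then an instruction that invites questions. Here is a small sketch that assembles such a prompt; the Montreal details come from the example above, and the exact wording is ours rather than a quoted prompt.

```python
# Assemble a context-rich prompt from three pieces: context, task,
# and an instruction asking the model to question you before answering.
context = (
    "I want to go to Montreal for the weekend. "
    "My wife and I love trying new local foods in out-of-the-way restaurants. "
    "Our 11-year-old son loves anything to do with sports. "
    "We love long walks in regular neighborhoods and we don't like touristy spots."
)
task = "Give me a list of fun things to do."
invitation = (
    "Do not do anything yet. First, tell me what you think I'm asking you to do, "
    "let me know what you find confusing, and ask me any questions that would "
    "help you fully meet my needs."
)

prompt = "\n\n".join([context, task, invitation])
print(prompt)  # paste the result into ChatGPT, Claude, or Bard
```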
For what it’s worth, I sometimes find this process pretty fun. Like I’m learning how to communicate with an alien. And I sometimes find the process maddening. Like I’m communicating with an alien.
BERNSTEIN: I mean, I love it. I have been on like meetings with coworkers where they’re like, “This prompt isn’t working. Can you help me?” And I start fixing the prompt. And they’re like, “Sorry about making you do this tedious thing.” And I’m like, “What? What tedious thing?” I think it also comes from a bit of a poetry writing background where at least my process for writing poetry involves like really concerningly obsessive editing where I am resetting the same line break over and over again and giving it time and rereading it. I had this poetry professor who was like, the next right word is usually your first thought or like your 153rd thought. But it’s rarely like your second thought or third thought.
I’ve been focused on avoiding the hype — the dystopian AND the giddy. And, truth be told, for all the attention, A.I. is still in such an early stage that most of the ways I use A.I. and that I see others use it are pretty banal.
Ethan MOLLICK: I think that it helps to have a utopian vision here.
That’s Ethan Mollick. He’s a professor of management at the Wharton School at the University of Pennsylvania, where he studies entrepreneurship and technological change. And I am not aware of anyone who is having more fun with A.I. than Ethan Mollick.
MOLLICK: I’ve been playing around with it for a while. I’ve been A.I.- adjacent for my whole career. I’m not a computer scientist, but I’ve been thinking about uses for it for a long time.
Mollick publishes a weekly newsletter, called “One Useful Thing,” which provides a stream of things you can get A.I. tools to do. One area where Mollick finds A.I. especially helpful is entrepreneurship, which is what he teaches at Wharton.
MOLLICK: America is a nation of entrepreneurs in waiting. When we do surveys, a third of people have had an entrepreneurial idea in the last five years that they wish they could execute on, and almost nobody does anything.
What does that look like: A.I. helping would-be entrepreneurs actually pursue their dream? Coming up: Ethan Mollick and I go into business together.
* * *
I’m Adam Davidson, sitting in for Stephen Dubner on this, the final episode of our series on A.I. I know for me, one of the best ways to get going on something new — a new business, a new project, a new exercise regimen — is to get a partner, a buddy who knows things I don’t, who can share the journey with me. But it is really hard to find good partners.
MOLLICK: A.I. is a general-purpose technology. It screws up in some areas. You have to use it the right way. But it’s really exciting to have a generalist co-founder with you who could give you that little bit of advice, a little bit of encouragement, push you over the line, and that makes a big difference in people’s lives.
Ethan Mollick teaches entrepreneurs how to start businesses. He’s at The Wharton School, one of the top business schools in the world. He also studies what leads to success and what leads to failure when people start businesses. And he says that a major barrier to success is not some fancy formula cooked up in the Ivy League. It’s so much simpler. One big reason people fail is that they stop too soon. Many stop before they do anything. They have an idea, and they never pursue it. Others pursue it for a while, but hit some blocks and that’s when they stop. A.I. is far from perfect. Mollick will certainly agree with that. But: it’s always ready to push another step. It never just gives up.
MOLLICK: If you ask the A.I., especially GPT-4, the most advanced model you can get access to, and you say, “What should I do next to do this?” the steps that it will give you are perfectly reasonable. Are they the best steps possible? No. Will it make mistakes? Probably. But overcoming that inertia, getting a little push about what to do next, is really helpful. And then it will help you actually do those experiments. I’m forcing all my students to actually interview the A.I. as an actual potential customer. That’s not because it’s as good as interviewing a customer. You absolutely have to interview potential customers. But it actually gets you part way there. It greases the wheels in a lot of ways to doing that initial testing and overcomes those barriers where you might otherwise say, “I need to hire someone to do this,” or “I don’t know what to do next,” because it could help you overcome that inertia.
We’ve all had these fantasies, right? Of opening a lovely business in our town. Here’s mine: I live in a small town in Vermont. The other day I was two towns over, in Bristol, which has the perfect, tiny little small-town Vermont main street. I was with some friends, eating lunch, and across the street I saw a sign announcing that the local stationery store was for sale. I started to fantasize. Telling my friends how much I would love to buy that place and turn it into my dream stationery store. I love stationery — pens and paper and stamps and the whole thing. I love fancy, artisanal fountain pens and I also love big stacks of regular old copy paper and a giant wall filled with BIC pens. But, of course, it was just a silly fantasy. Some idle conversation. But talking to Ethan Mollick, I started to wonder: what if I had used an A.I. tool to flesh my idea out a bit? See if it was at all viable?
MOLLICK: So you say you have an idea, right? But I still think ideation is its own phase. Because a lot of ideation is just about combining ideas together, with variation. And that’s something that A.I. does really well, because it finds connections between ideas. So I might start off with, “I want to launch a stationery store in Vermont. Give me 20 different variations on stationery store ideas that could be great. Give me 100 ideas.” And then I would probably do some constrained thinking. Let’s say I had unlimited money. “Give me 20 ideas with unlimited money.” So generally, when you prompt the A.I., you want to tell it who it is, what context it’s operating under.
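As a sketch of that pattern (tell the model who it is, state the idea, then vary the constraint), you could script a batch of ideation prompts like this; the role and constraint wordings are our own illustrative examples, not Mollick’s exact prompts.

```python
# Generate a family of "constrained ideation" prompts: same role and
# idea, different constraint each time.
role = "You are an experienced retail entrepreneur advising a first-time founder."
idea = "I want to launch a stationery store in Vermont."
constraints = [
    "Give me 20 different variations on stationery store ideas that could be great.",
    "Give me 20 ideas assuming I had unlimited money.",
    "Give me 20 ideas assuming the store must also work as a mail-order business.",
]

# Each constraint becomes its own prompt; run them one at a time
# against whichever chatbot you're using.
for constraint in constraints:
    print(f"{role}\n{idea} {constraint}\n")
```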
It surprised me to see just how fun and useful this process that Mollick calls “constrained ideation” could be. Even when the ideas were silly or terrible, it got my brain working in new ways, thinking new thoughts. And, since the A.I. tools are so fast and easy, you can zoom in to explore the most ridiculous ideas.
MOLLICK: Let’s do something that’s like stationery that would be useful to astronauts or something like that.
This was starting to get fun. Now, I don’t want to open a space-themed stationery store. That’s not the point. The point is that constrained ideation can get your brain open to all sorts of possibilities.
I actually started to give that historical/ancient stationery idea some serious thought. I’m so fascinated by old ways of writing. I would love a place where I could buy clay and a stylus to practice cuneiform writing; get some vellum and dip pens, or some real papyrus. I was just reading how in the ancient Middle East, some people wrote by scratching into pliable lead. Is it just me, or would you love to go to a store that had all of that? And with Mollick’s guidance, before long, I had a halfway decent business plan and market research proposal. Now, I’m not actually going to open a historical/ancient stationery store. Although, man, I do kinda want to. But the point is, the process was fun. And in 12 minutes, I accomplished what would probably normally take months. And who knows, I like playing around with different business ideas and maybe someday I’ll actually do one of ‘em.
A key point that Mollick showed me is that this is very much NOT a passive thing. I used to have that idea, that A.I. meant giving over all the thinking work to some computer. That using A.I. was cheating. And it can be. Like all the reports of people getting A.I. to write their school assignments for them. But this Mollick approach of using A.I. as a thought partner, pushing me forward, playing around with goofy ideas, fleshing out semi-formed thoughts into more rigorous ones, it felt very active. And it got me closer to what I actually think and want. It made my own thinking clearer, to me.
MOLLICK: In every case where we’re using this you should be able to push the A.I. to a point where you’re getting to a back and forth interaction with it, where you as a human are adding a lot to that interaction, and it’s helping you by giving you the immediate gratification you need, variation on ideas that you need. And if you can get there then I think this becomes really magical.
I caught a bit of Ethan Mollick’s infectious optimism. It started to help me see that A.I. doesn’t necessarily have to replace us. It can expand the range of human possibilities, allowing us to do far more than we could before. Ethan Mollick’s wife, Lilach Mollick, is also at Wharton, where she leads their digital learning programs. Neither of them was sure at first how A.I. would impact education. There are a lot of fears, reasonable fears, that it will hurt education. Instead of learning, students will just use A.I. to pretend to have learned. But the Mollicks have been experimenting with A.I. as an educational tool and have found that it can be quite helpful.
MOLLICK: So for example, as much as we hate them, tests are one of the most powerful ways to learn because they not only test your knowledge, but they actually increase your future recall. Writing tests is hard to do. This writes tests for you. Educating people is the key to unlocking everything. If we could do that at scale, what does that mean? That’s incredibly exciting.
We started this series of episodes asking if A.I. is more good or more bad for humankind. We’ve made clear throughout that nobody really knows. This thing is so new, so weird, so fast-changing, that any prediction at all in any direction is quickly overtaken by a surprising reality. So, I’m not going to predict the future. I do think we can say some things — with some confidence — about right now. Here is one:
Just as I was finishing this episode, a friend told me about something that had just happened to him.
My friend was in India, because his mother-in-law was in the hospital, 84 years old, on a ventilator. The doctor said she would pass within a day or so. Her husband, who is 92, was distraught. For all the obvious reasons, but for another one, too. He wanted to tell her how much she had meant to him, how wonderful their 60-plus years of life together had been. But he didn’t know how to say that in words.
As it happens, his granddaughter, my friend’s daughter, works in A.I. She guided her grandfather through some A.I. prompts, asked him some questions, and entered his answers into ChatGPT. It produced a poem. A long poem. He said it perfectly captured his feelings about his wife. And that, on his own, he never would have been able to come up with the right words. He sat next to her, reading the poem, line by line. She died soon after. And he said it allows him to know he told her everything.
* * *
Freakonomics Radio is produced by Stitcher and Renbud Radio. This episode was produced by Julie Kanfer and mixed by Eleanor Osborne, Greg Rippin, Jasmin Klinger, and Jeremy Johnston. We also had help this week from Daniel Moritz-Rabson. Our staff also includes Alina Kulman, Daria Klenert, Elsa Hernandez, Gabriel Roth, Lyric Bowditch, Morgan Levey, Neal Carruth, Rebecca Lee Douglas, Ryan Kelley, Sarah Lilley, and Zack Lapinski. Our theme song is “Mr. Fortune,” by the Hitchhikers; all the other music was composed by Luis Guerra.