Mira Murati, The Creator of ChatGPT, Thinks AI Should Be Regulated
By John Simons
Somehow, Mira Murati can forthrightly discuss the dangers of AI while making you feel like it’s all going to be OK.
Murati is chief technology officer at OpenAI, leading the teams behind DALL-E, which uses AI to create artwork based on prompts, and ChatGPT, the wildly popular AI chatbot that can answer complex questions with eerily humanlike skill.
ChatGPT captured the public imagination upon its release in late November. While some schools are banning it, Microsoft announced a $10 billion investment in the company and Google issued a “code red,” fretting that the technology could disrupt its search business. “As with other revolutions that we’ve gone through, there will be new jobs and some jobs will be lost…” Murati told Trevor Noah last fall of the impact of AI, “but I’m optimistic.”
For most of January, ChatGPT surpassed Bitcoin among popular search terms, according to Google Trends. All the attention has meant the privately held San Francisco–based startup—with 375 employees and little in the way of revenue—now has a valuation of roughly $30 billion. Murati spoke to TIME about ChatGPT’s biggest weakness, the software’s untapped potential, and why it’s time to move toward regulating AI.
First, I want to congratulate you and your team on the recent news that ChatGPT scored a passing grade on a U.S. medical-licensing exam, a Wharton Business School MBA exam, and four major university law-school exams. Does it feel like you have a brilliant child?
We weren’t anticipating this level of excitement from putting our child in the world. We, in fact, even had some trepidation about putting it out there. I’m curious to see the areas where it’ll start generating utility for people and not just novelty and pure curiosity.
I asked ChatGPT for a good question to ask you. Here’s what it said: “What are some of the limitations or challenges you have encountered while working with ChatGPT and how have you overcome them?”
That is a good question. ChatGPT is essentially a large conversational model—a big neural net that’s been trained to predict the next word—and the challenges with it are similar challenges we see with the base large language models: it may make up facts.
In a very confident way too!
Yes. This is actually a core challenge. We picked dialogue specifically because dialogue is a way to interact with a model and give it feedback. If we think that the answer of the model is incorrect, we can say, “Are you sure? I think actually...” And then the model has an opportunity to go back and forth with you, similar to how we would converse with another human.
(Sign up here to get The Leadership Brief delivered to your email inbox every Sunday.)
Truly groundbreaking technologies solve a problem. What problem is ChatGPT solving?
Right now, it’s in the research review stage, so I don’t want to speak with high confidence on what problems it is solving. But I think that we can see that it has the potential to really revolutionize the way we learn. People are in classrooms of, say, 30 people. Everyone has different backgrounds, ways of learning, and everyone is getting basically the same curriculum. With tools like ChatGPT, you can endlessly converse with a model to understand a concept in a way that is catered to your level of understanding. It has immense potential to help us with personalized education.
But some schools are banning ChatGPT. Does this surprise you?
When we’re developing these technologies, we’re really pushing toward general intelligence, general capabilities with high reliability—and doing so safely. But when you open it up to as many people as possible with different backgrounds and domain expertise, you’ll definitely get surprised by the kinds of things that they do with the technology, both on the positive front and on the negative front.
A growing number of leaders in the field are warning of the dangers of AI. Do you have any misgivings about the technology?
This is a unique moment in time where we do have agency in how it shapes society. And it goes both ways: the technology shapes us and we shape it. There are a lot of hard problems to figure out. How do you get the model to do the thing that you want it to do, and how do you make sure it's aligned with human intention and ultimately in service of humanity? There are also a ton of questions around societal impact, and there are a lot of ethical and philosophical questions that we need to consider. And it's important that we bring in different voices, like philosophers, social scientists, artists, and people from the humanities.
What’s the key ethical or philosophical question that we still need to figure out?
[AI] can be misused, or it can be used by bad actors. So, then there are questions about how you govern the use of this technology globally. How do you govern the use of AI in a way that’s aligned with human values?
Do you think these questions should be left to companies like yours, or should governments get involved in creating regulations?
It's important for OpenAI and companies like ours to bring this into the public consciousness in a way that's controlled and responsible. But we're a small group of people, and we need a ton more input in this system, and a lot more input that goes beyond the technologies: definitely regulators and governments and everyone else.
There’s always a fear that government involvement can slow innovation. You don’t think it’s too early for policymakers and regulators to get involved?
It’s not too early. It’s very important for everyone to start getting involved, given the impact these technologies are going to have.
Mira, can you give me one song or album, one book, and one movie that gives us some insight into who you are and what inspires you?