The State of Artificial Intelligence Regulation

[An interview by Ivette Solórzano of Televisa's N+ digital news channel. Photo: our interview in the editing room at Televisa. Credit: Ivette Solórzano.]

I recently sat down with Ivette Solórzano of Televisa's N+/Noticias Más, a digital news service. Ivette is a leading voice in Latin American journalism, and it was an honor to speak with her again (we met previously for an interview on Generative Artificial Intelligence at N+'s emerging news studio at TalentLand Jalisco).


What do you do for a living?

My name is MJ Petroni, and I'm a cyborg anthropologist. A cyborg anthropologist is a person who studies the interaction between technology and humans. We focus not only on how we create and build technology, but also on how we transform and change through it.

My job is to foster Digital Fluency and AI Fluency so that everyone can make good decisions about new technologies and participate in these new digital economies.

We are at a time when artificial intelligence is being discussed everywhere due to its growing presence. Considering the evolution of ChatGPT, I'll start with the topic of regulation. Should governments regulate artificial intelligence?

Sam Altman, the CEO of OpenAI, is urging regulators, the United States Congress for example, to recognize the need to regulate it. The reason is that technology experts know it is very difficult to fully understand the impact of artificial intelligence, and very difficult to achieve real competition. Some of its social impacts also need regulation. We have to set ethical standards, you know? Standards that apply evenly to all companies, rather than waiting for self-regulation.

What should be the approach to regulating artificial intelligence? There are various ways of approaching it, aren't there? One possible perspective is the field of personal data protection, but there are many angles from which we can examine this issue. From what approach should we regulate artificial intelligence?

There are several different types of artificial intelligence. The most familiar to us is data-driven artificial intelligence, which is more analytical. And then there is the new type called generative artificial intelligence, as in the case of ChatGPT, for example. The difference is that some of the things we think of as analytics aren't really "smart": they don't always 'learn.'

These types of artificial intelligence are applied, for example, to personal information about our habits on the web. What we are considering now is that we should also regulate the use of these systems, especially generative AI. In the case of generative artificial intelligence, we use the term "black box" to mean that we cannot see exactly what is happening inside.

It is not easy to control or explain what is happening within generative AI systems, with their millions of interacting algorithms. For this reason, regulating them is not so simple. Generative AI is more like a chemical ingredient whose nature we do not fully understand: we must be careful with it, especially in critical applications, until we understand it better.

How can you regulate something that you can't see or don't even know what it's like?

It's a problem. That's why OpenAI, for example, is highlighting the importance of regulating itself and its competitors in certain ways: where we use AI, and how we apply it. It is essential to make sure that when we use it, we inform the users or the people who will be affected. In New York, for example, people are saying that if a company is going to use AI in the hiring process to screen potential employees, candidates should be told.

It is not necessary to explain every way AI is being used, but when a company applies AI to user data, or in ways that affect users, there is an obligation to explain the potential impacts so that people can make informed choices.

When we talk about artificial intelligence and social networks in general, the question arises of how to protect our personal data. With the advent of technology, how can we ensure that our personal data is not at risk from artificial intelligence?

Our personal data is vulnerable, for example, when it is used with analytical artificial intelligence. If we are using a cell phone and sharing our location, our habits on the web, or the sites we like, a lot that is quite private can be revealed about us. The point is that this information is directly tied to our profile or name.

And even if the data is anonymized, it is not certain it will always stay that way. That is one of the things we have to think about: who is watching us, and who is analyzing us. Generative artificial intelligence, as in the case of ChatGPT, uses a model that incorporates the patterns of language itself: millions and millions of parameters learned from tons of content on the web, and even some social media.

But all of this passes through many layers of algorithms and analysis. When ChatGPT is prompted, it does not look something up in a simple data set and recall what was said; everything it has seen is mixed and amalgamated into a new 'hallucination.'
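
To make this concrete, here is a deliberately tiny toy sketch in Python. It is my illustration, not OpenAI's actual architecture (real models use billions of parameters and deep neural networks), but the core contrast holds: a generative model samples its next word from learned probabilities instead of retrieving a stored record.

```python
import random

# Hypothetical "learned" next-word probabilities, standing in for the
# millions of parameters a real model distills from its training data.
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "regulation": 0.2},
    "cat": {"sat": 0.6, "slept": 0.4},
    "dog": {"barked": 0.7, "slept": 0.3},
}

def generate(start: str, max_words: int = 5) -> str:
    """Generate text by repeatedly sampling a next word from the model."""
    words = [start]
    for _ in range(max_words):
        dist = next_word_probs.get(words[-1])
        if dist is None:  # no learned continuation: stop generating
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

# Nothing here is "recalled" from a database; each run blends the learned
# patterns into a new output, which is why outputs can't be traced back
# to a single source the way a search result can.
print(generate("the"))
```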

Like a universe of data?

It is more than that. For example, in the case of traditional data analysis, you might say, "I'm going to search among 10 or 100 movies and cite specific examples of character types." In contrast, generative artificial intelligence is more like having a film studies graduate watch millions of videos or movies and then asking them to create a character: you can't explain exactly why their intuition led them to write that particular character.

The result is not directly traceable to any specific movie. Similarly, if you ask ChatGPT about a person who isn't very famous, you may get little or no answer, or very incorrect guesses. It's not like a web search (though that kind of integration is becoming more common).

You could say generative AI is always generating hallucinations; most are acceptable, and some aren't. But pure generative AI isn't recalling facts the way other machine systems do.

Pretty much any place on the Internet where data is shared, or where a company can see information about us, on Facebook or other platforms for example, can be used for their studies or to train their models.

An election year is coming up, both in the United States and in Mexico, right? How can artificial intelligence be used and put to good use by political parties or the companies that work for these political parties? Because artificial intelligence will be present and will be used.

The term "deepfake" refers to fake, machine-generated content that looks real. There is no direct translation into Spanish, but we can use the term "deepfake" in Spanish as well. A deepfake is an artificial synthesis of video, audio, or content written in another person's style or appearance, often with the intent to deceive or misinform. For example, there could be a fake video of President Obama saying he likes Trump, but it looks real. This is due to advances in generative artificial intelligence, which allow us to generate compelling audiovisual and written content.

While reading something, it is important to question whether what is being said is really true and who said it. When it comes to videos or audio, however, we tend to believe they are true. For example, if a Trump-related legal issue arises in the United States, we should think about the source of the news. Is it real or not? Earlier this year there were deepfakes showing an alleged attack on the Pentagon and a violent arrest of Trump by the police; neither actually happened, but they looked real and caused quite a stir on Twitter.

Therefore, it is necessary to increase our level of Digital Fluency and AI Fluency; as a specialist in this field, that is where I spend my time. We can verify something by consulting multiple sources or by connecting directly with the source of the information. But we must be prepared to face challenges to truth and authenticity. This represents a very big problem, even beyond the labor impacts: it also affects democracy and our confidence in the truth.

I'm going to come back to the question about Digital Fluency because, even though we've already discussed it, I think it's important to understand. What is meant by Digital Fluency?

Digital Fluency involves more than just knowing a few phrases or specific technologies like ChatGPT; it also involves understanding the human impacts.

For example, in a business or company, we say that Digital Fluency goes beyond having the right tools; it also involves skills, business models, data, and ways of thinking. Similarly, we need to consider the ethical impacts of using data or artificial intelligence like ChatGPT, and educate ourselves about the context around these tools so we can make informed decisions. It's the difference between a gringo tourist with a phrasebook and someone aware of the deep cultural context around the words: someone fluent. It is more than reading a few tech phrases or thinking about a few apps; it is understanding the human and machine systems as a whole.

What regulations currently exist and which are in the process of being implemented? We know that in Europe there is an ongoing debate about the regulation of artificial intelligence. What is the current starting point in terms of regulation, and where are we headed?

We have different aspects of regulation. One of them is the regulation of data use, as established in the European Union's General Data Protection Regulation (GDPR).

These regulations define how the data of European Union citizens can be used. If that data is going to be analyzed, it must be done within the limits the regulation establishes. Complications arise when, for example, an algorithm developed in the United States is applied to data in the European Union, since certain requirements must be met and users must be notified about its use.

Another issue is that entities using artificial intelligence must be able to explain how they make decisions with it. For example, if a bank uses AI in deciding whether to grant someone a loan, it must be able to explain how the AI is being used and why that decision was made.

There is also the right to be forgotten. A person can ask Google to remove all references to them from its search systems: search results, photo databases, and so on. In the case of generative artificial intelligence, however, it is not so easy to make those changes. You can't simply delete a few references to a person from a trained model, and explaining every decision the model makes can be tricky. These systems have many capabilities that regulators had not previously imagined.

And on regulation of artificial intelligence—which country is in the lead?

I think both the European Union and the United States are spending a lot of time and effort regulating this issue. China is also involved in regulation, although its approach may be a bit more complicated due to national security concerns.

For example, in China it is being argued that generative artificial intelligence must be explainable and provide consistent answers to all people, among other things. However, in practice, achieving that can be nearly impossible. The inherent complexity of generative artificial intelligence and the multiple variables involved make it difficult to guarantee complete explainability and a uniform answer for everyone.

Some of those regulations are starting in the United States. We are seeing legislators, and subsequently the courts through legal cases, trying to set limits around generative artificial intelligence. There is also talk of creating an AI agency for regulation and oversight, a suggestion Sam Altman of OpenAI made to the US Senate.

What is generative intelligence and what is analytical intelligence?

Analytical artificial intelligence is, for example, when we process data and turn it into numbers that can be analyzed. For example, we can quantify the frequency with which something happens or determine the time needed to go from one place to another in the city. In contrast, generative artificial intelligence is more like sending a machine to school, providing it with language examples so that it can learn and understand language, and then asking it to write new things. In this case, the machine systems behind the chat interface we’re familiar with focus on capturing trends and patterns rather than feeding back specific reference data points or performing math.

Generative artificial intelligence is based on systems and algorithms that can perform a wide range of tasks from phrases, or "prompts." It's more like having a conversation with a person than selecting from a set of predefined responses.
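
To make the distinction concrete, here is a minimal sketch in Python; the commute data and the model interface are invented for illustration. Analytical AI reduces records to numbers whose provenance you can inspect, while a generative system synthesizes output from learned patterns.

```python
# Analytical side: every output traces back to specific data points.
trip_minutes = [34, 28, 41, 30, 36]  # hypothetical commute records
average = sum(trip_minutes) / len(trip_minutes)
print(f"Average trip: {average:.1f} minutes")  # 33.8 -- fully explainable

# Generative side: you hand the system a phrase (a "prompt") and it writes
# something new. The call below is a placeholder, not a real API; the point
# is that its answer would come from learned patterns, not from these rows.
prompt = "Describe a typical commute in this city."
# response = generative_model.generate(prompt)
```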

What are the questions we should be asking ourselves about artificial intelligence?

We all need to consider how our personal data is being used in analytical artificial intelligence systems. But we must also reflect on how we can use generative artificial intelligence not only to reduce costs or automate tasks, but to enhance our capabilities and achieve things that were previously expensive or even impossible.

For example, in areas like education or medicine, we need to address ethical issues and consider both the risks of using artificial intelligence and the risks of not using it. What would it be like if every student had an individual AI tutor? Is there an ethical imperative to offer that even if we can’t do it perfectly?

We should also reflect on three important issues in future discussions: loss of control, the authenticity or veracity of generative AI outputs, and labor impacts.

What is happening with Mark Zuckerberg's new social network, Threads? What is your impression of it?

It is interesting because, beyond any one social network or platform, there is a need for a place where we can talk that respects "the public commons" (the common public space). That is very difficult to find on Twitter, since there seems to be little control or moderation. There are many problems with authenticity, as well as racism and discrimination against people from the gay or trans community, for example.

Mark Zuckerberg's company is trying to reinvent what Twitter originally set out to be, and because Instagram already reaches 2 billion users, Threads has a very powerful and fast network effect; it saw one of the largest single-day download totals of any app in history. Meta (the company behind Facebook, Instagram, and now Threads) is not a perfect company and has definitely made big mistakes in the past, in part due to misaligned incentives: it wants to make money by exploiting data while users simply want to connect with each other. Still, it seems more responsive to ethical critiques than the Elon Musk era of Twitter, which literally sends an automated poop emoji in reply to any media inquiry.

We may see the emergence of a federation of many interconnected social networks, which is one of Threads' promises. Some people prefer a more organized, moderated network, while others prefer something more anarchic, so many networks will likely co-exist.

Thanks for the interview, MJ!

It’s been a pleasure, Ivette—I know we have a lot more to talk about!
