Interview – Eduardo Felipe Matias – Istoé Dinheiro

Istoé Dinheiro magazine - November 2023

By Edson Rossi

“Artificial intelligence brings a risk similar to that of climate change”, says Eduardo Matias

According to the expert, governments and civil society must urgently discuss the regulation of AI so that its potential harm does not outweigh the enormous benefits of the technology.


Two-time winner of Prêmio Jabuti [the most traditional literary award in Brazil] for the books "A Humanidade e suas Fronteiras: do Estado Soberano à Sociedade Global" ["Humankind and its Borders: from Sovereign State to Global Society"] and "A Humanidade Contra as Cordas: A Luta da Sociedade Global pela Sustentabilidade" ["Humankind Against the Ropes: Global Society's Fight for Sustainability"], Eduardo Felipe Matias has just spent a year at the Californian universities Berkeley and Stanford, specifically researching artificial intelligence, the central theme of his next book. “It already is, and will exponentially be, decisive for all governments, all institutions, and especially all companies,” said Matias, who holds a PhD in Law and is the partner in charge of the corporate area at Elias, Matias Advogados.


DINHEIRO — It seems to be a consensus that data and its use through artificial intelligence (AI) are crucial for those who hold power, whether political or economic. What are the gains and risks of this situation?

EDUARDO FELIPE MATIAS — You have two factors of power attribution today: artificial intelligence and data, which are interconnected because data feeds the algorithms. Artificial intelligence is what we call a general-purpose technology, which is why it is often compared to electricity, because of the enormous impact it brings to the entire society.

This is the theme of your next book, right?

Exactly. And then comes a question: we believe that decisions are made by free will. But the persuasion that algorithms promote allows for the manipulation of people to the point where this free will is not so free anymore. So, people believe they are making decisions, but they are being guided by the algorithms.

What are the risks?

Democracy is at risk. That's one side. Another side, evidently, is how much governments can use this power to make decisions or even lead public opinion towards decisions that they [governments] defend. Undoubtedly, platforms can serve as instruments for this kind of manipulation. Without a doubt, there are attempts.

Is there a risk to democracy itself?

There is the problem of deepfakes [sophisticated manipulation of images and sounds], which is something we are not duly paying attention to… Imagine on the eve of an election you can make someone say something you wouldn't expect, something that isn't true, and with that, the person loses votes. So, I think yes, if technology is not well controlled, it can put democracy at risk.

A completely new situation?

Fake news has always existed. The difference is that today, like everything on the internet, it gains volume and speed. It's harder to control. Fifty years ago, you could never spread a rumor to the entire country on the eve of an election. Today you can, through the internet. That's one point. Another, also serious, is that we are witnessing the rise of authoritarianism, with populist governments. This is greatly benefited by technology. Take facial recognition and see what happens with governments that hold this technology, as in the case of China. It allows them to know [and surveil] their citizens, to have dominance over them.

Is there any advanced debate on this topic?

There are some important trends. One is explainable artificial intelligence, another is responsible artificial intelligence, with obligations like audits and impact reports. You can't launch a product that might affect society without considering its impact. Then we get a little into the regulation being made in Brazil: how much can you delve into the details of these artificial intelligences? We don't even fully understand how they work. You might have to think about other issues, like control mechanisms, assessment mechanisms of how they work, to ensure that we know how positive they are.

Because they can bring solutions to complex problems…

Extremely important solutions. For climate change, for example. You have carbon capture, renewable energy, food production, solutions for health. The benefits are countless.


“Authoritarianism is greatly benefited by technology. Look at what happens with governments that adopt facial recognition, as in the case of China. It allows knowing the citizens, having a dominance over them.”


And still, is regulation necessary?

Regulating in a very strict way might not be the best solution. You have to understand how it works. It won't be easy, because the first factor is unpredictability. If it's unpredictable, how do you regulate? That's the first problem. The second problem is the difficulty in keeping up with the evolution. Artificial intelligence progresses very rapidly, and the legislative process is slow, mainly because it is legitimate. And the bigger issue is internationalization. Because if regulation isn't international, it's pointless. You might have some places where technology is developed unrestrained, and then the risk will continue to exist, because it proliferates very rapidly. But maybe an international umbrella can serve as a kind of guide, so that local regulations are built on common premises. But this is not going to happen.

Why not?

It took us a long time to have a global agreement on climate. And this agreement, unfortunately, is little put into practice. So, we can aspire to have several rounds of negotiation to have a global agreement on artificial intelligence, but we know it will take time and that it won't necessarily be followed. But we have a big global problem and need a global treatment.

Is there a significant impact on people, companies, and governments? Is this debate well understood?

Everyone starts to realize how important data is, to have an understanding even in terms of privacy. Data is increasingly being accumulated — from the cellphone in your pocket to the internet of things, data even in your refrigerator — but you have a type of company that dominates both the data and artificial intelligence, which are the Big Techs, the large technology companies.

And then comes the issue of power concentration?

Let's take economic power first. These companies have immense wealth. This comes from the concentration of data and its use. An example: there are 9 billion daily searches conducted on Google. Add two other pieces of information: these companies invest $223 billion annually in Research & Development, most of it in AI [which also leads to the attraction of the greatest talents], plus the investment they make in other AI companies. The conclusion is that they are further dominating the market.

What is the systemic flaw that allowed this?

There's an important point here. The dematerialization of their businesses makes them global par excellence. These companies grow and grow disproportionately, and it's equally necessary to understand what's happening in terms of competition. Because for innovation, it's important to have startup companies in a position to compete in the market, to bring better solutions. But what we have witnessed is somewhat different from this. We are witnessing the formation of monopolies.

How to break this?

It's very difficult. Because there's a gatekeeper’s power. So, today, if a company wants to sell online in the United States, it depends on Amazon. Globally, Google, Meta, and Amazon held 64% of digital advertising in 2022. Google 39%, Meta 18%, Amazon 7%.

A power that is essentially American…

No. It doesn't just pertain to the United States. Perhaps we have more data from the United States, but the Chinese companies Tencent, Baidu, and to a certain extent Alibaba, are also in this game. Not to mention ByteDance, which owns TikTok, and is considered the most valuable startup on the planet, valued at $300 billion. All are industries with intensive use of data.

A new balance that's not so new.

A situation where the United States and China stand out. Because they are the countries that have the companies dominating this technology sector. And they are also the countries that invest the most in artificial intelligence. So, there's a new arms race. And it's almost inevitable that this self-perpetuates. That these countries increasingly stand out compared to the others. We also have a challenge to understand this geopolitics.

Are there other consequences?

On several fronts. One of them: the change in the international division of labor. If you can automate your factory, it's more interesting to keep that factory in a developed country, which has specialized people. Your logistical cost is much lower. Here you have a reversal of the process of reducing this gap [between developed and developing countries] that occurred at the height of globalization and the formation of global supply chains, which affects the international division of labor a bit. Once again, the richer get even richer.


“For innovation, it's important to have startups able to compete in the market. What we see is the opposite: the formation of monopolies.”


Legislation does not keep up with the consolidation of the economic power of these companies. Does this cause them to even confront state institutions?

You have the same difficulty in Brazil as in the United States. And perhaps the different parameter is in Europe. There are three separate things here. One is the regulation of large companies. I think there is a certain consensus that the formation of monopolies is not good, the harmful effects a monopoly can have on competition are more than understood. This is something dealt with, let's say, in one box. Another issue is the regulation of artificial intelligence itself.

And the third?

When social networks were invaded by extreme speech and disinformation, becoming a toxic environment, self-regulation began, and it was sold as if it were sufficient. It's not simple to regulate this type of situation. By requiring these companies to control content too heavily, there's a risk that they become censors. And by becoming censors, with the power they accumulate, they will have an even greater power to start controlling public discourse.

So, the side effect would be worse?

In practice, it's not simple to regulate this matter. Much more guidance is needed. The debate cannot be prevented from happening. Society has to discuss.


Eduardo Felipe Matias is the author of the books “Humankind and its Borders” and “Humankind Against the Ropes”, winners of the Prêmio Jabuti, and coordinator of the book “Startups Legal Framework”. He holds a PhD in International Law from the University of São Paulo, was a visiting scholar at Columbia University, in New York, and at Berkeley and Stanford, in California, and is a partner in the business law area of Elias, Matias Advogados.

Interview originally published in Istoé Dinheiro magazine: “Inteligência artificial traz risco igual à questão climática”, diz especialista Eduardo Matias - ISTOÉ DINHEIRO (istoedinheiro.com.br)

#interview #AI #artificialintelligence #innovation #platforms #socialmedia #internationalrelations
