With the European elections, the threat of AI reappears

Every time elections approach, the fear that Artificial Intelligence (AI) will be used to interfere in voting reappears. This is nothing new: it already happened in the 2016 US presidential election, with the "Facebook-Cambridge Analytica" case (see Notes).


In recent years, AI has advanced so much that it can be used successfully in the electoral field in two ways:

1. Micro-segmentation of the electorate.

2. Creation of personalized messages.

Combined, the results are tremendously effective, as the "Facebook-Cambridge Analytica" case proved: by analyzing data from voters' "digital life", it was possible to micro-segment them according to their interests, concerns and fears. Then, with the new generative AI, specific content can be created for each micro-segment (or even each individual voter).
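As a purely illustrative sketch of this two-step pipeline (the voter features, numbers and prompt wording below are invented for the example, not taken from any real campaign system), the idea can be expressed in a few lines of Python: cluster voters by their interest profiles, then produce one message brief per segment to hand to a generative model.

```python
# Minimal sketch: (1) micro-segmentation of voters, (2) a tailored prompt per segment.
# All data and wording here are hypothetical, for illustration only.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical interest scores per voter for
# [economy, healthcare, security, environment], e.g. derived from "digital life" data.
voters = np.array([
    [0.9, 0.1, 0.2, 0.1],
    [0.8, 0.2, 0.1, 0.2],
    [0.1, 0.9, 0.1, 0.3],
    [0.2, 0.8, 0.2, 0.2],
    [0.1, 0.2, 0.1, 0.9],
    [0.2, 0.1, 0.2, 0.8],
])
topics = ["the economy", "healthcare", "security", "the environment"]

# Step 1: micro-segmentation (here, simple k-means clustering).
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit(voters)

# Step 2: one message brief per segment, ready to be used as a prompt
# for a generative model that writes the personalized content.
for label, centre in enumerate(segments.cluster_centers_):
    main_concern = topics[int(np.argmax(centre))]
    prompt = (f"Write a short campaign message for voters whose "
              f"main concern is {main_concern}.")
    print(f"Segment {label}: {prompt}")
```

A real system would of course use far richer behavioral data and finer segments, but the structure is the same: segment first, generate second.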

The last element needed is to deliver this personalized message to each person, which we already know is easy to do by email and, above all, through social networks and (equally personalized) advertising.

And all of this can be done very quickly and cheaply, at a level of customization and detail that was never achievable before. Digital marketing has been applying these techniques successfully for years.

With AI we can carry out an individualized electoral campaign on a massive scale.
And it's a VERY GOOD thing!!


In Spain, Organic Law 5/1985 on the General Electoral System (Article 50) establishes a period (the electoral campaign) during which candidates may carry out lawful activities aimed at capturing votes (according to the dictionary of the Royal Spanish Academy, to "capture" means "to attract someone or win their will or affection").

And what better way to do that than by identifying the interests, concerns and fears of each voter and delivering, individually, the messages and content specific to those concerns.

Imagine receiving only the advertising that really interests you. Imagine if political parties sent you messages only about the issues you have doubts about or care about.


The other side of the coin

The misuse of the instruments at the disposal of political parties during election campaigns has always existed.

To give an example from outside our borders (Spain): during the Brexit campaign, former Prime Minister Boris Johnson used the slogan "We send the EU £350 million a week; let's fund our National Health Service (NHS) instead". Although the courts threw out the case brought against him, the claim is of dubious ethics because of its biased and intentional use of the message.

This message (distorted or not) was disseminated massively, with no way to choose whom it would reach.

Now the big difference is that with AI we can create personalized messages for each voter, massively, quickly and cheaply, and also deliver them repeatedly to each voter in the privacy of their phone or computer. (Good, Nice and Cheap... if used responsibly.)


With great power comes great responsibility

The goal of an election campaign is to capture (convince) the undecided electorate to vote for you, and AI is a good tool to convince and capture the vote… but also to manipulate.

1. Definition of "Capture" (Royal Spanish Academy): "To attract someone or win their will or affection."

2. Definition of "Convince" (Royal Spanish Academy): "To incite, to move someone with reasons to do something or to change their opinion or behavior."

3. Definition of "Manipulate" (Royal Spanish Academy): "To intervene with skillful and sometimes devious means, in politics, in the market, in information, etc., with distortion of truth or justice, and at the service of particular interests."

Manipulation in electoral campaigns has always existed (to some extent, and not by all), but the tools available to carry it out were limited and ineffective.

Now the paradigm has shifted.


On the one hand, we have an electorate that is largely hooked on digital communications (the Internet, social networks, digital media, email); on the other, a very powerful AI capable of analyzing us individually, generating specific content for each voter and delivering it to our personal (digital) space.

It's the new Word-of-Mouth raised to the nth degree.
And therein lies the problem: AI works too well.

And the problem is exacerbated when only one party, or a few, have access to this technology. The contest becomes unequal and unfair, and even more so if the technology is not used legally and ethically.


So there are certainly limits to be set, which is why the recently passed European AI Act (EU AI Act) classifies as unacceptable, and prohibits, the use of AI to "manipulate human behavior".

The difference between convincing and manipulating is not always obvious and depends on each person's beliefs and values (ethics). So, although the law will set firm limits, we will once again have to rely on our own judgment to distinguish the legitimate desire to capture and convince from the malicious intent to manipulate.


There is only one solution

It makes no sense (and is not possible) to ban the use of AI. There would always be those who use it outside the law, and nowadays it is impossible to distinguish the truth from fake content generated by AI, especially when it is highly realistic (deepfakes).

Undoubtedly, it is essential to have a law that establishes the red lines (EU AI Act), which must be combined with a self-imposed Code of Ethics for each of the agents generating information and opinion.

And ultimately, in a hyper-accelerated world where information (true or not) and false prophets abound, the only defense is to develop our own critical capacity, drawing on a diversity of information sources and taking time to pause.


Notes: The "Facebook-Cambridge Analytica" case

In the 2010s, the British consulting firm Cambridge Analytica used the data of 87 million Facebook users for political propaganda purposes, without their consent.

The data was obtained through an application called "This Is Your Digital Life", developed by computer scientist Aleksandr Kogan, who had contacted Facebook in 2013 and asked to deploy his app on the social network in order to use the data for academic purposes. The application consisted of a series of questions designed to build psychological profiles of users.

Kogan got 277,000 people to use his app, yet ended up with information from 87 million accounts around the world. That is because, at the time, Facebook allowed apps to see who their users' friends were: if someone used an app, that app could access the person's contact list.

Cambridge Analytica later bought all the data collected by Kogan's app for $800,000 and used it to create "voter profiles" so it could personalize political messages, especially on social media.

Cambridge Analytica used this data to provide analytical assistance to the campaigns of Ted Cruz and Donald Trump for the 2016 presidential election. The company was also accused of interfering in the Brexit referendum.

The misuse of the data came to light thanks to Christopher Wylie, a former Cambridge Analytica employee, in interviews with The Guardian and The New York Times.

For its part, Facebook defended itself by arguing that there had been no data breach, only fraudulent use of user information, since Kogan had assured that the data would be used for academic purposes when its ultimate purpose was political and advertising.

Later, though, Mark Zuckerberg himself acknowledged that "the social network has a responsibility to protect the information of its users, and if we cannot do so then we are not worthy of serving them."

The crisis of trust in Facebook precipitated several transparency-related changes in the social network, such as letting users see which apps had access to their data and revoke that access if they wish.

Finally, in an attempt to leave this chapter behind, Facebook began an intense communication and image-laundering campaign, which culminated in changing its name to Meta.


#Art #Arte #AI #ArtificialIntelligence #Copyright #Law #GenAI #Creativity #DerechosdeAutor

Sources:

· Ley Orgánica 5/1985, de 19 de junio, del Régimen Electoral General, Article 50: "For the purposes of this Law, the electoral campaign is understood as the set of lawful activities carried out by candidates, parties, federations, coalitions or groupings with the aim of capturing votes." https://www.boe.es/buscar/act.php?id=BOE-A-1985-11672

· Definition of "Captar" (Real Academia Española): "To attract someone or win their will or affection." https://www.rae.es/

· Definition of "Manipular" (Real Academia Española): "To intervene with skillful and, at times, devious means, in politics, in the market, in information, etc., with distortion of truth or justice, and at the service of particular interests." https://www.rae.es/

· "Escándalo de datos de Facebook-Cambridge Analytica" (Wikipedia): https://es.wikipedia.org/wiki/Esc%C3%A1ndalo_de_datos_de_Facebook-Cambridge_Analytica

· "Facebook y Cambridge Analytica: ¿qué pasó y por qué es importante?" https://fundaciongabo.org/es/blog/convivencias-en-red/facebook-y-cambridge-analytica-que-paso-y-por-que-es-importante

· "Brexit: Boris Johnson £350m claim case thrown out by judges" (BBC News): https://www.bbc.com/news/uk-politics-48554853

· EU AI Act: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
