3 reasons for combating AI-generated bots and fakes with standards

1. AI Regulation and Standardization go hand in hand in Europe

Despite a wide range of fascinating and beneficial use cases, AI can also be misused for the creation of bots and fakes. The European Union has already been active in addressing the challenges that such types of disinformation can pose to European society and the economy. However, the measures have not yet been sufficient. For example, while the topic of disinformation is already being addressed on multiple levels (e.g., in an EU HLEG in 2017/18), the massive scalability that AI brings to the production of disinformation has not yet been acted on. Most existing measures against fakes and bots, such as fact-checking services or AI-based analyses, find themselves in an unwinnable arms race against better, faster, and more scalable fabrication tools. This is where standardization comes in.

Standardization supports regulation. It harnesses expert advice and stimulates discussion among standards development organizations (SDOs), public bodies, academic institutions, and acclaimed specialists. Not least of all, regulation and standardization in Europe are tightly interlinked by the so-called “New Approach” or “New Legislative Framework”.

For these reasons, standardization efforts have a prominent place in the context of the proposed Regulation on Artificial Intelligence and the proposed Digital Services Act. In addition, StandICT.eu was established as an EU-funded initiative focused on supporting the participation and contribution of European specialists to SDO and standards-setting organization (SSO) activities covering subject areas such as 5G, Cloud Computing, Cybersecurity, Big Data and IoT as essential drivers of the digital age.

2. Expertise enables new ways of thinking

StandICT.eu 2023 kicked off on 1 September 2020, building on the preceding project that ran from 2018 to 2020. One of the technical working groups established to guide the work of StandICT.eu is the TWG Trusted Information. The group addresses standards and solutions for trustworthy AI, fakes and bots and is chaired by Sebastian Hallensleben (chair and member of several standardization bodies in the field of Artificial Intelligence and AI ethics as well as Head of Digitalization and AI at VDE).

According to Hallensleben, the range of finished standards relating to AI-generated bots is still very small. For example, while there is already a variety of standards in the area of real-name identities, there are not yet any for pseudonyms. In the course of digitalization and technological development, we therefore need to think about how the trustworthiness of digital sources can be ensured and described, so that the benefits of technologies such as artificial intelligence outweigh their risks. The TWG Trusted Information has taken on this task.

The group of experts believes that countermeasures against AI-generated fakes and bots require standards, e.g. standards for tracing information back to its source or creator, or standards for bot-resistant pseudonymous identities. One idea that emerged in the course of the working group's discussions is “authentic pseudonyms”.
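
As a minimal illustration of the first point, source tracing can be built on digital signatures: the creator signs the content once, and anyone can later check that the content is unmodified and originates from the holder of a registered key. The sketch below is written in Python using the open-source cryptography package; the trust registry it implies is a hypothetical assumption for illustration, not an existing standard or the TWG's own design.

```python
# Minimal sketch: tracing content back to its creator via signatures.
# Assumes a (hypothetical) trust registry that maps public keys to
# verified creators. Requires: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The creator signs the content once at publication time.
creator_key = Ed25519PrivateKey.generate()
content = b"An article, image, or video byte stream"
signature = creator_key.sign(content)

# Anyone holding the creator's registered public key can verify that
# the content is unmodified and really comes from that key holder.
public_key = creator_key.public_key()
try:
    public_key.verify(signature, content)
    print("Content traces back to the registered creator.")
except InvalidSignature:
    print("Content was altered or does not come from this creator.")
```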

Authentic pseudonyms can be helpful, for example, when a user spreads fake news or similarly critical content on social networks. Normally, after such accounts are blocked, new accounts are simply opened to continue the spread of the fakes. Authentic pseudonyms can offer a decisive benefit at this point. They are digital identities that (see the sketch after this list):

  • guarantee to belong to a physical person but are not traceable to a specific person;
  • are singular in a given context, i.e. it is possible to have different authentic pseudonyms for different platforms, but within the same platform a person can only have one such pseudonym;
  • cannot be acquired by AI-automated bots so that the digital space can be “reclaimed” by real human beings.
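
The following sketch, again in Python, illustrates how such context-bound pseudonyms could behave. Everything in it is a hypothetical assumption made for illustration: the identity provider, the issued per-person secret, and the function names. A production scheme would more likely rely on anonymous credentials or zero-knowledge proofs than on a bare HMAC, but the three properties above are visible even in this toy version.

```python
import hashlib
import hmac

def derive_authentic_pseudonym(person_secret: bytes, platform: str) -> str:
    """Derive a deterministic, platform-specific pseudonym.

    - Same person + same platform -> always the same pseudonym
      (singular within a given context).
    - Same person on different platforms -> unlinkable pseudonyms.
    - No issued secret (e.g. a bot with no verified human behind it)
      -> no valid pseudonym at all.
    """
    digest = hmac.new(person_secret, platform.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:20]

# Hypothetical: a secret issued once by a trusted identity provider
# after verifying that the holder is a real, unique human being.
secret = b"issued-by-a-trusted-eID-provider"
print(derive_authentic_pseudonym(secret, "platform-a"))
print(derive_authentic_pseudonym(secret, "platform-b"))  # differs from above
```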

Although many of these ideas are still in their infancy, they all aim to provide tools for opinion and trust building and for regulating fakes and bots.

3. A promising standardization landscape already exists

Even though there is still a lack of standards that contribute to trustworthy AI and combat the risks of AI-generated bots and fakes, important guidelines and standards with relevance to the current discussions have already been developed.

These include standards and approaches developed by CEN and CENELEC, OIDF, IETF, W3C and ENISA, as well as initiatives such as the JTI (Journalism Trust Initiative); they are only a selection of the working results that already exist.

More information about the challenges of AI-automated fakes and bots, the ongoing standardization activities and the standardization landscape can be found in the booklet "Trust In The European Digital Space In The Age Of Automated Bots And Fakes", developed by the TWG Trusted Information and published by StandICT.eu 2023 in February 2022: https://www.standict.eu/news/trusted-information-digital-space

Conclusion

Although many ideas are still quite new, they represent another milestone in the standardization landscape. The question is to what extent the existing knowledge can be brought together to fill remaining gaps and provide the relevant stakeholders with the recommendations and guidelines that are actually needed.

AI-driven bots and fakes can affect many different areas. Practical examples and experiences must therefore be bundled so that guidelines and standards can be developed that make a real difference. All stakeholders must be engaged here, from politics, the economy and society to standardization, media and journalism.
