Do not let hypothetical risks hinder positive AI development
Josefin Rosén, Trustworthy AI Specialist at SAS
Society has significant needs where AI can contribute a great deal. However, the public debate often centres around hypothetical risks. While we should be cautious and address real risks, it benefits no one to focus on hypothetical ones that create unnecessary fear and distance.

The use of AI is growing rapidly, and so are concerns about how this new technology affects society. The turbulence during the fall and winter at OpenAI – the company behind ChatGPT – highlighted the significant economic values at stake. It also underscored the risks associated with AI, and the differing perspectives on risk management that can exist within the leadership of a company whose service has been groundbreaking in making AI accessible to the general public.

Over the past year, several experts have warned about the potentially negative effects of the technology. Deepfakes have become increasingly common in our digital channels, and fraud is becoming more sophisticated with the assistance of AI. There is also concern about how AI may impact various industries and professional groups.

AI is transformative and needs to be openly discussed. International summits like the AI Safety Summit in the UK last November are crucial, and similar initiatives are needed at various levels to discuss safety and responsible AI development regionally, and within different industries and applications. However, too much focus is placed on hypothetical risks. For many reasons, it is important and necessary that we shift the focus from sensational statements about hypothetical catastrophic AI damages to more sensible and balanced discussions about the real risks of AI. Sensational statements create unnecessary fear and distract from addressing actual, existing AI problems such as bias, discrimination, and misinformation. We should be aware of the risks, including the hypothetical ones, but they should not hinder the positive development of AI.

When used responsibly, AI can change our lives for the better. For instance, AI can be employed to optimize and streamline resource usage, leading to positive effects on the environment and climate. Furthermore, the technology can drive innovations in medicine and healthcare, enhance efficiency, relieve healthcare personnel, and strengthen other vital societal functions. The examples mentioned here are just a few areas of societal benefit. As AI continues to develop and is applied in practical contexts, more and more applications will be discovered and enhanced.

Just like many other technologies, AI can be used irresponsibly. However, pausing AI development and research is not the right path. On the contrary, there is a risk that development continues in secret, completely without oversight. The actors developing AI for dubious or criminal purposes will persist, whether or not there is an international agreement on a pause or significant restrictions in open forums. The development of AI in the open, free, and legal world needs to continue. And it must be done as responsibly, sustainably, ethically, and transparently as possible. For this we need guardrails that promote responsible innovation.

The EU Parliament has recently agreed on the EU Commission's proposal for an AI regulation, the so-called AI Act. Since certain parts, including those related to high-risk AI, will only become applicable at the end of a transitional period, the Commission has initiated 'The AI Pact', in which organisations, including industry, are invited to voluntarily demonstrate their commitment to the AI Act's objectives and to communicate, ahead of the legal deadline, the steps they are taking to prepare for compliance and to ensure that the design, development, and use of AI is responsible and trustworthy.

Companies and authorities that purchase and use products and services utilizing AI have a responsibility to stay continually informed and to consciously address AI-related issues. Appointing someone responsible for AI is not enough. Just as environmental considerations have become a hygiene factor, responsibility for AI issues must be embedded in daily work – it needs to become inherent.

Transparency is key to trusting AI, and as consumers, customers, and citizens, we should be able to demand a clear understanding of how and when AI is used. This applies not only to companies developing services and products using AI but also to organizations using AI-related services and products in their systems.

We face a future of significant challenges but also great opportunities. Responsible, trustworthy AI needs to be planned for before the development begins, and it is important to conscientiously work with this throughout the AI lifecycle, from data to decisions. It is our collective responsibility to ensure that development does not veer in the wrong direction but instead realizes its potential.

https://www.sas.com/en_us/company-information/innovation/responsible-innovation.html

Josefin Rosén is a Trustworthy AI Specialist in SAS' Data Ethics Practice. She is based in Sweden.

Jennifer L.

Assistant Professor, Georgetown · Psychology of Big Data & Theory of Machine

9 months ago

Great piece! Thanks for sharing. I wrote a piece for HBR on a related topic (instead of blaming algorithms, use them as magnifying glasses to detect bias in human judgment). Would love to discuss your ideas more! https://hbr.org/2019/08/using-algorithms-to-understand-the-biases-in-your-organization

Lene Cecilie Høydahl

Chief of Staff, Technology Law Staff, Politiets IT-enhet (Norwegian Police IT Unit)

9 months ago

Thank you for sharing this important perspective
