From Innovation to Responsibility: Ethical Considerations in Generative AI
Frederic Jacquet
AI & Ethics | Digital Experience | Advanced technologies & Quantum Computing
How do recent findings highlight the challenge facing organizations adopting generative AI, when they have no established ethical guidelines? What are the implications for balancing technological progress with ethical responsibility and fairness?
A recent Deloitte report points out an important challenge: many organizations are using generative AI without having ethical guidelines in place.
This trend of rapidly adopting AI innovations brings both risks and ethical dilemmas. It is important to find the right balance between taking advantage of new technologies and making sure we act fairly and responsibly.
Let's explore the key points that should guide the progress of AI: understanding why ethical guidelines matter, and why organizations need to focus on building trust.
“As one of the leaders in the world for AI, I feel tremendous excitement and responsibility to create the most awesome and benevolent technology for society and to educate the most awesome and benevolent technologists - that's my calling” - Fei-Fei Li, AI Researcher & Professor, Stanford University
Navigating the integration of Generative AI into business operations demands a strategic approach, structured around three critical pillars:
How companies create and meet ethical rules for new technologies
In its report (1), Deloitte classifies general and generative AI, machine learning, neural networks, robots, natural language processing, and similar technologies as 'Cognitive Technologies'.
It is interesting to note that, for the people questioned as part of the survey, cognitive technologies offer the greatest potential in terms of social utility, with this category receiving 39% of responses. By way of comparison, the 'Digital Reality' category which includes technologies such as augmented reality (AR), virtual reality (VR), mixed reality (MR), voice interfaces, speech recognition, ambient computing, 360° video, immersive technologies, computer vision, and more, reaches 12%. Now, when it comes to identifying the emerging technologies that they feel present the greatest potential for ethical risk, survey respondents name ‘Cognitive Technologies’ at 57%.
“Reputational damage as a result of insufficient or ineffective data and AI governance can cause significant harm to a business, with greater impact on SMEs. […] Without good governance, transparency and monitoring, indiscriminate use of AI could lead to significant harm, discrimination, and injustice.” - Keeley Crockett, Luciano Gerber, Annabel Latham, Edwin Colyer (4)
Ethical issues are indeed liable to cause harm. Ignoring or minimizing the ethical issues associated with emerging technologies, or even deferring their treatment, carries real costs, which Deloitte groups into several themes.
It's also interesting to note that when assessing the perceived severity of the damage these ethical problems could cause to the organization, respondents consider the risk of ‘reputational damage’ four times greater than ‘financial damage’. This ratio is consistent with the "main ethical concerns" related to the use of generative AI, in which data confidentiality stands at 22%.
The incident involving the Washington Lottery's promotional AI app (5), which inadvertently produced a nude image from a user's selfie, illustrates the potential reputational risks for companies and individuals who deploy generative AI without proper oversight.
Indeed, creating reliable AI not only requires consideration of legal, social, ethical and environmental factors, but also anticipation of online reputation damage. While industry giants have the means to react to "bad buzz", the same cannot be said for individuals and SMEs.
"It's fun until someone loses their clothes". A simple reminder to deploy AI responsibly.
Confidentiality is another sensitive issue.
When a Samsung engineer unintentionally leaked internal source code via ChatGPT, the company decided to ban its employees from using conversational AI tools. The risk of sensitive internal information leaking via these platforms is a real issue for companies operating in a highly competitive environment. The information shared with the AI in this case, used to generate presentations, included the source code of proprietary applications as well as confidential meeting notes. This happened even though OpenAI clearly states in its terms of use that user-supplied content can be stored and used to improve and refine its services.
LLMs are precisely designed to produce answers from the data they have learned. The risk of them inadvertently revealing confidential information is real.
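The Samsung case above suggests a simple mitigation: scrub obviously sensitive strings from a prompt before it ever leaves the company. The following is a minimal sketch; the `redact` helper and its regex patterns are illustrative assumptions, not a substitute for proper data-loss-prevention tooling.

```python
import re

# Hypothetical patterns for sensitive content; a real deployment would rely on
# dedicated data-loss-prevention tooling rather than ad-hoc regexes.
SENSITIVE_PATTERNS = [
    (re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), "[REDACTED_API_KEY]"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),
]

def redact(prompt: str) -> str:
    """Replace obviously sensitive substrings before a prompt is sent to an external LLM."""
    for pattern, placeholder in SENSITIVE_PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

if __name__ == "__main__":
    leaked = "Summarize this config: api_key=sk-12345, owner dev@corp.com"
    print(redact(leaked))
```

A gateway like this does not eliminate the risk — source code pasted wholesale, as in the Samsung incident, would slip through — but it makes the "leak by convenience" path less likely.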
AI Ethics: Origins in Learning Methods
To be fair to Gen AI, it's worth remembering that these issues, while exacerbated by the popularity of these technologies, were already apparent in the very first mass uses of AI. Indeed, as Vincent Perrin from IBM reminded us back in 2019 at a conference on "Ethics, trust and transparency in AI" (6), it all starts with AI learning methods. He pointed out that, from supervised learning to unsupervised learning to reinforcement learning, biases can be introduced into algorithms at the very moment when designers are determining which sources to trust.
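Perrin's point about source selection can be illustrated with a toy sketch. The "model" below simply memorizes the majority label seen for each group in its training source; the group names and approve/deny framing are invented for illustration. When one group is barely represented, and mostly with negative examples, the learned behavior reproduces that sampling bias.

```python
from collections import Counter

def train_majority_label(examples):
    """Toy 'learner': remember the most frequent label per group in the training source."""
    by_group = {}
    for group, label in examples:
        by_group.setdefault(group, Counter())[label] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

# A skewed source: group "B" is scarcely sampled, and mostly with negatives.
biased_source = (
    [("A", "approve")] * 90 + [("A", "deny")] * 10
    + [("B", "deny")] * 4 + [("B", "approve")] * 1
)

model = train_majority_label(biased_source)
print(model)  # group B's handful of samples locks in "deny"
```

No malicious intent is needed: the bias enters at the moment the training source is chosen, exactly the design step Perrin highlighted.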
Given this context, the complexity of a model such as GPT-4, reported to have on the order of a trillion parameters, highlights how difficult it is to explain, and even more so to anticipate, its predictions.
Conclusion
Companies today face a major challenge. Indeed, the rapid adoption of AI solutions, particularly Gen AI, is moving faster than the development of the necessary ethical frameworks.
It is now time for them to set ethical guidelines to maintain trust in new technologies.
For this, their corporate strategies need to incorporate ethics from the very beginning of AI strategy formulation. The value of corporate training and pilot programs to ensure the responsible development and deployment of AI is no longer in doubt.
It must be clearly understood today that these measures are not simply regulatory or procedural, but fundamental to developing a culture of accountability, fairness and transparency. So while the rise of generative AI presents a remarkable opportunity for innovation, it also poses a profound ethical responsibility.
Just as major innovations like electricity and the internet have dramatically changed our personal and corporate lives to the extent that we would have a tough time functioning without them, AI has emerged as the new revolution. What do you think will be the next decade’s revolution?
One more thing: My articles regarding AI ethics aim to shed light on the complexities and challenges of adopting generative AI without solid ethical foundations. It should be noted that my perspective is not that of an opponent of AI technology. Quite the opposite, I am a strong supporter of AI and its potential to revolutionize business and people’s lives. My advocacy for responsible adoption is based on a deep belief in maximizing the benefits of AI while minimizing its risks.
By emphasizing the importance of ethical guidelines, my goal is to encourage informed, conscientious deployment of AI technologies. It’s through such critical yet supportive discussions that we can harness AI's full potential ethically and fairly. - F.J.
Sources and more information: