From Innovation to Responsibility: Ethical Considerations in Generative AI


How do recent findings highlight the challenge facing organizations that adopt generative AI without established ethical guidelines? What are the implications for balancing technological progress with ethical responsibility and fairness?

A recent Deloitte report points out an important challenge: many organizations are using generative AI without having ethical guidelines in place.

This trend of rapidly adopting AI innovations comes with both risks and ethical dilemmas. It is essential to strike the right balance between taking advantage of new technologies and making sure we remain fair and responsible.

Let's explore the key points that should guide the progress of AI: understanding why ethical guidelines matter, and why organizations need to focus on building trust.

Here are some practical points:

  • Urgency for Ethical Guidelines: The adoption of AI technologies progresses more rapidly than the construction of ethical standards. The debate on ethical risks versus social benefits becomes clearer once we recognize that AI brings its share of potential problems alongside the considerable benefits it offers. This tension is all the more acute for generative AI, given its popularity.
  • Trust and Technology: Establishing ethical standards is crucial for maintaining trust in emerging technologies. “Some people may be reluctant to use LLMs, fearing that they may be unreliable, irrelevant or even limited. This brings to mind the same fears that existed when the Internet first emerged in the 90s, it was easy to dismiss the Internet vs. encyclopedias or real library research” - MLB(2)
  • Recommendations for Organizations: It's fundamental for companies to adopt an ethical approach to AI. This can be put into practice in several concrete ways, as discussed in ‘The Role of AI Ethics: Balancing Innovation with Social Responsibility’ (3):

  1. Consider the ethical implications of AI projects from the very start of formulating the company's AI strategy, before development begins.
  2. Implement measures to prevent ethical risks, such as training teams in AI ethics or setting up mechanisms to monitor and control AI systems.
  3. Be transparent about AI practices and report on efforts to comply with ethical principles.


“As one of the leaders in the world for AI, I feel tremendous excitement and responsibility to create the most awesome and benevolent technology for society and to educate the most awesome and benevolent technologists - that's my calling” - Fei-Fei Li, AI Researcher & Professor, Stanford University

Navigating the integration of Generative AI into business operations demands a strategic approach, structured around three critical pillars:

  • Ethical principles and governance: Ethical principles need to be established to lay the foundations for specific standards and policies governing generative AI. A large proportion of companies have not yet established these foundations. A center of excellence or an AI ethics committee has the task of overseeing AI strategy and ensuring that AI-related practices and applications are developed responsibly, according to established standards and accountability frameworks.
  • Training and education: Comprehensive training programs must be set up to cover the ethical principles and the technical aspects of AI. This subject is all the more sensitive when it comes to generative AI. This kind of initiative makes it possible to involve the company's employees in the ethics strategy. This positions them "on the side of the solution rather than on the side of the problem".
  • Pilot programs for implementation: A very pragmatic approach is to implement proof-of-concept and pilot programs to be in a position to validate AI in business operations. The aim is to experiment with use cases in a practical, realistic setting, to test them against the company's ethical, legal and regulatory considerations. These POCs also provide a framework for measuring operational risks and constraints before large-scale deployment.

How companies create and meet ethical rules for new technologies

In its report (1), Deloitte classifies general and generative AI, machine learning, neural networks, robots, natural language processing, etc. as 'Cognitive Technologies'.

It is interesting to note that, for the people questioned as part of the survey, cognitive technologies offer the greatest potential in terms of social utility, with this category receiving 39% of responses. By way of comparison, the 'Digital Reality' category which includes technologies such as augmented reality (AR), virtual reality (VR), mixed reality (MR), voice interfaces, speech recognition, ambient computing, 360° video, immersive technologies, computer vision, and more, reaches 12%. Now, when it comes to identifying the emerging technologies that they feel present the greatest potential for ethical risk, survey respondents name ‘Cognitive Technologies’ at 57%.


“Reputational damage as a result of insufficient or ineffective data and AI governance can cause significant harm to a business, with greater impact on SMEs. …/… Without good governance, transparency and monitoring, indiscriminate use of AI could lead to significant harm, discrimination, and injustice.” - Keeley Crockett, Luciano Gerber, Annabel Latham, Edwin Colyer (4)

Ethical issues are indeed liable to cause harm. Ignoring or minimizing the ethical issues associated with emerging technologies, or even deferring their treatment, is not without real costs. Deloitte groups them according to the themes of:

  • Reputational damage,
  • Human damage,
  • Regulatory penalties,
  • Financial damage,
  • Employee dissatisfaction.

It's also interesting to note that when it comes to assessing the perceived severity of the damage potentially caused to the organization by these ethical problems, the risk of ‘reputational damage’ is considered 4 times greater than ‘financial damage’. This ratio is in line with the "main ethical concerns" related to the use of generative AI, which place data confidentiality at 22%.

The incident involving the Washington Lottery's promotional AI app (5), which inadvertently produced a nude image from a user's selfie, illustrates the potential reputational risks for companies and individuals who deploy generative AI without proper oversight.

Indeed, creating reliable AI not only requires consideration of legal, social, ethical and environmental factors, but also anticipation of online reputation damage. While industry giants have the means to react to "bad buzz", the same cannot be said for individuals and SMEs.

"It's fun until someone loses their clothes". A simple reminder to deploy AI responsibly.

Confidentiality is another sensitive issue.

When a Samsung engineer unintentionally leaked internal source code via ChatGPT, the company decided to ban its employees from using conversational AI tools. The risk of sensitive internal information leaking via these platforms is a real issue for companies operating in highly competitive environments. The information shared with the AI in this case, used to generate presentations, included the source code of proprietary applications as well as confidential meeting notes. This happened even though OpenAI clearly states in its terms of use that user-supplied content can be stored and used to improve and refine its services.

LLMs are precisely designed to produce answers from the data they have learned. The risk of them inadvertently revealing confidential information is real.
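One practical mitigation for this confidentiality risk is a pre-submission check that flags sensitive content before any text is sent to an external LLM. Below is a minimal illustrative sketch in Python; the pattern set and the `flag_sensitive` helper are hypothetical examples, not drawn from any cited source, and a real deployment would rely on a dedicated data-loss-prevention tool tuned to the company's own data.

```python
import re

# Illustrative patterns only -- real deployments need far richer rules.
SENSITIVE_PATTERNS = {
    # Strings shaped like API keys or secret tokens.
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    # Internal classification markers commonly stamped on documents.
    "internal_marker": re.compile(r"\b(?:CONFIDENTIAL|INTERNAL ONLY)\b", re.IGNORECASE),
    # Email addresses, which may identify employees or customers.
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of the sensitive-content categories found in `text`."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def safe_to_submit(text: str) -> bool:
    """True only if no sensitive category was detected."""
    return not flag_sensitive(text)
```

Such a filter only catches patterns it already knows about, which is exactly why the training and governance measures discussed above remain essential: employees still have to recognize sensitive content that no regex will.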

AI Ethics: Origins in Learning Methods

To do Gen AI justice, it's worth remembering that these issues, while exacerbated by the popularity of these technologies, revealed their importance from the very first mass uses of AI. Indeed, as Vincent Perrin from IBM reminded us back in 2019 at a conference on "Ethics, trust and transparency in AI" (6), it all starts with AI learning methods. He pointed out that whether in supervised, unsupervised, or reinforcement learning, biases can be introduced into algorithms at the very moment designers decide which sources to trust.

Given this context, the complexity of models such as GPT-4, reportedly with around a trillion parameters, highlights the challenge of explaining, and even more so anticipating, their predictions.

Conclusion

Companies today face a major challenge. Indeed, the rapid adoption of AI solutions, particularly Gen AI, is moving faster than the development of the necessary ethical frameworks.

It is now time for them to set ethical guidelines to maintain trust in new technologies.

To do so, their corporate strategies need to incorporate ethics from the very beginning of AI strategy formulation. The value of corporate training and pilot programs in ensuring the responsible development and deployment of AI is no longer in doubt.

It must be clearly understood today that these measures are not simply regulatory or procedural, but fundamental to developing a culture of accountability, fairness and transparency. So while the rise of generative AI presents a remarkable opportunity for innovation, it also poses a profound ethical responsibility.

Just as major innovations like electricity and the internet have dramatically changed our personal and corporate lives to the extent that we would have a tough time functioning without them, AI has emerged as the new revolution. What do you think will be the next decade’s revolution?


One more thing: My articles regarding AI ethics aim to shed light on the complexities and challenges of adopting generative AI without solid ethical foundations. It should be noted that my perspective is not that of an opponent of AI technology. Quite the opposite, I am a strong supporter of AI and its potential to revolutionize business and people’s lives. My advocacy for responsible adoption is based on a deep belief in maximizing the benefits of AI while minimizing its risks.

By emphasizing the importance of ethical guidelines, my goal is to encourage informed, conscientious deployment of AI technologies. It’s through such critical yet supportive discussions that we can harness AI's full potential ethically and fairly. - F.J.



Sources and more Information:

  1. Deloitte: ‘Ethical Technology: Principles for Emerging Tech’ [Link ]
  2. Extract of a discussion with Melvin Bouton Hurion [Reach him here on LinkedIn ]
  3. Read my article on DZone.com ‘The Role of AI Ethics: Balancing Innovation with Social Responsibility’ [Link ]
  4. ResearchGate: “Building Trustworthy AI Solutions: A Case for Practical Solutions for Small Businesses” [link ]
  5. Techspot.com: “AI accidentally produces a nude” [link ] (Thanks Sreekanth Pannala, Ph.D. for sharing.)
  6. Conference by Vincent Perrin on “Ethics, trust and transparency in AI” [link ]

