“Trust Issue” with Generative AI

For me, as a risk professional, addressing the risk of the “trust issue” with Generative AI was inevitable.

Trust is a hard topic in any discipline of our lives, and the business environment is no different: we rely on many tools and technology-driven concepts and believe that what we use is trustworthy and dependable, because so many others use it. In general this makes sense, as humans are more prone to mistakes than, for example, an Excel spreadsheet or an aircraft autopilot, and in many cases a product with high human impact has passed years of testing that proved the concept of trust. How about Generative AI?

We normally say that trust is something that must be earned, so how can GenAI become trustworthy? Well, there is good news and bad news about it: the bad news is that it will take some time; the good news is that it will happen.

As it takes time, how should we approach GenAI today?

Trust is not inherent in Generative AI, and the fundamental risk is that end users place complete confidence in its outputs and make decisions or take actions based on false or biased content.

1. Risks of Inaccuracy and Bias - Managing hallucinations and misinformation is crucial. Generative AI models are becoming increasingly sophisticated and can generate coherent language or images that are indistinguishable from human-created content. However, there are risks associated with these models, including the potential for inaccuracy and bias. This is because the models are trained on large, publicly available datasets of text and images; if the training data is inaccurate or biased, the model can generate false content that reflects the biases that exist in the real world.
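
As a purely illustrative sketch of what "managing hallucinations" can mean in practice, a minimal guardrail could cross-check generated statements against a small set of trusted facts and flag everything unsupported. The function name, the word-overlap heuristic, and the 0.5 threshold below are my own assumptions for demonstration, not a feature of any real product:

```python
# Hypothetical guardrail: flag generated sentences that are not supported
# by a trusted reference corpus. The overlap heuristic and threshold are
# illustrative assumptions only.

def flag_unsupported(generated_sentences, trusted_facts):
    """Return sentences that share too little vocabulary with any trusted fact."""
    flagged = []
    for sentence in generated_sentences:
        words = set(sentence.lower().split())
        # A sentence counts as supported if some trusted fact covers at
        # least half of its vocabulary.
        supported = any(
            len(words & set(fact.lower().split())) / max(len(words), 1) >= 0.5
            for fact in trusted_facts
        )
        if not supported:
            flagged.append(sentence)
    return flagged

trusted = ["The model was trained on public web text."]
output = [
    "The model was trained on public web text.",
    "The model is guaranteed to be unbiased.",
]
print(flag_unsupported(output, trusted))  # → ['The model is guaranteed to be unbiased.']
```

A real deployment would use retrieval against curated sources rather than word overlap, but the governance point is the same: unsupported claims should be surfaced, not silently passed to end users.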

2. Risk of Attribution - Generative AI outputs align with datasets that can include information from digitized books and curated collections, and that information came from the real world, which means matters of attribution and copyright must be legally upheld. How do we address the risk of attribution when a tool is intended to mimic human creativity? If a large language model outputs plagiarized content and a company uses it in their business, a human is accountable when the plagiarism is spotted, not the generative AI model. Trust is built on governance, risk mitigation, and alignment. Organizations that want to build trust in their AI use must have strong governance practices in place. They must also mitigate risks and ensure that their people, processes, and technologies are aligned and can recognize the potential for harm; today this may be a costly piece of work.

3. Risks Related to Ethics - These are important considerations for AI use. As AI becomes more pervasive, it is important to consider the ethical implications of its use. Organizations must ensure that their AI use is transparent, responsible, and fair. Generative AI models are often accompanied by disclaimers that the outputs may be inaccurate. However, many end users have a limited understanding of AI in general, do not read the terms and conditions, and do not understand how the technology works. As a result, the explainability of these models suffers. To participate in risk management and ethical decision making, users should have accessible, non-technical explanations of generative AI, its limits and capabilities, and the risks it creates. Without these explanations, users cannot make informed decisions about how to use generative AI models, which could lead to misuse and negative consequences for society.

Specific, as-of-today examples of how the explainability of generative AI models could be improved:

  • Provide clear and concise explanations of how the models work. This could be done in the form of user-friendly documentation or in-app tutorials.
  • Explain the limitations of the models and highlight the potential risks associated with using them. This could include risks such as bias, privacy violations, and the spread of misinformation.
  • The enterprise must delicately balance the use of automated attribution with human oversight to avoid significant legal and brand implications.
  • AI literacy and risk awareness are becoming an important aspect of any company’s day-to-day operations; business leaders may want to invest more in training and learning sessions, explanatory presentations to business users, and fostering an enterprise culture of continuous learning.
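
To make the human-oversight point above concrete, here is a deliberately simple sketch of a review gate: generated outputs that touch higher-risk topics are routed to a human reviewer instead of being published automatically. The keyword list and routing rule are illustrative assumptions, not a production policy:

```python
# Minimal human-in-the-loop sketch: route generative outputs to a reviewer
# when they touch a higher-risk topic. The term list is an illustrative
# assumption; a real policy would be far richer and centrally governed.

HIGH_RISK_TERMS = {"legal", "medical", "financial", "copyright"}

def route_output(text):
    """Return 'human_review' for higher-risk text, else 'auto_publish'."""
    words = set(text.lower().split())
    return "human_review" if words & HIGH_RISK_TERMS else "auto_publish"

print(route_output("Summary of our quarterly copyright audit"))  # human_review
print(route_output("Draft birthday message for a colleague"))    # auto_publish
```

The design choice matters more than the code: the default is automation, but any output matching a risk trigger falls back to a named, accountable human, which is exactly where the attribution and ethics risks above end up anyway.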
