After writing four articles about the potential benefits, I would like to talk about the other side of the coin: the risks of using ChatGPT-like AI in internal audit. I asked ChatGPT about the risks of applying ChatGPT in internal audit; its answers were comprehensive but, in my opinion, not especially accurate, so below are my own answers on the risks.
There are three critical risks to be aware of before using AI in internal audit.
- The first is security and privacy. Using AI in internal audit, particularly for summarization and writing assistance, raises concerns about the confidentiality of sensitive information. Conversations with ChatGPT may, at some stage, be reviewed by humans for fine-tuning, which increases the risk of exposing confidential financial data and personal information, even unintentionally.
- The second risk is the reliability of the information. Both the original academic paper and ongoing public testing of ChatGPT show that the validity of GPT-related models is frequently called into question. It is not uncommon for ChatGPT to produce answers that sound plausible but are incorrect or unhelpful; much like a mansplainer, it is supremely confident in its answers regardless of their accuracy.
- The third and final risk is bias. GPT-related natural language models are trained on a vast corpus of unlabeled text, which can reflect biases and stereotypes. English alone accounts for 56.9% of website content and, anecdotally, around 90% of professional and academic content, so the training data skews toward a narrow set of linguistic and cultural perspectives. As the original GPT-3 paper also notes, GPT-related models tend to exhibit biases around gender, race, and religion.
To mitigate the three risks:
- Security and Privacy: The ideal solution would be for OpenAI to provide a comprehensive privacy and security framework. If that is not immediately possible, the internal audit department may need to develop its own framework to determine when, and with what information, ChatGPT assistance is appropriate. The least elegant option, though still better than exposing real data, is to conceal sensitive information or supply mock data for ChatGPT processing (a minimal redaction sketch follows this list).
- Misinformation: Microsoft and Google are both racing to improve the accuracy of large language models, so this risk will likely become less significant over time. For now, two mitigation measures can be considered: (1) incorporating internet search results into ChatGPT prompts, which can be done through browser extensions like WebChatGPT (see the second sketch after this list), and (2) using ChatGPT only for initial background research, with every conclusion cross-checked for accuracy.
- Bias: Acknowledging this risk is the most effective way to mitigate it. The second-best measure is strict adherence to internal audit standards, which greatly increases the level of confidence in any conclusion reached.
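To make the "conceal or mock" option concrete, here is a minimal sketch of redacting sensitive details before pasting audit text into ChatGPT. The patterns and the sample finding are purely illustrative assumptions on my part; a real deployment would rely on a vetted data-loss-prevention tool rather than hand-rolled regexes.

```python
import re

# Hypothetical patterns for illustration only; a production setup would use
# a proper DLP tool, not ad-hoc regexes like these.
REDACTION_PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[ACCOUNT_NO]": re.compile(r"\b\d{8,17}\b"),           # bank account numbers
    "[AMOUNT]": re.compile(r"\$\s?\d[\d,]*(?:\.\d{2})?"),  # dollar amounts
}

def redact(text: str) -> str:
    """Replace sensitive tokens with placeholders before sending text to ChatGPT."""
    for placeholder, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

finding = ("Vendor payments from account 12345678901 totaling $48,250.00 "
           "were approved by j.smith@example.com without a second signature.")
print(redact(finding))
# Vendor payments from account [ACCOUNT_NO] totaling [AMOUNT]
# were approved by [EMAIL] without a second signature.
```

The point is that ChatGPT can still summarize or polish the finding without ever seeing the real account number, amount, or approver.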
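And here is a sketch of the search-grounding idea behind extensions like WebChatGPT, done directly against the OpenAI API instead of in the browser. The snippets, the question, and the prompt wording are my own assumptions for illustration; in practice the snippets would come from a live search API.

```python
from openai import OpenAI  # assumes the official openai Python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical search snippets standing in for live search results.
snippets = [
    "IIA Standard 2310: Internal auditors must identify sufficient, reliable, "
    "relevant, and useful information to achieve the engagement's objectives.",
    "The COSO 2013 framework defines five components of internal control.",
]

question = "What standards apply to the information internal auditors rely on?"

# Ground the model in the retrieved snippets to reduce confident-but-wrong answers.
prompt = (
    "Answer using ONLY the web results below. Cite the result you rely on, "
    "and say 'not found' if the results do not contain the answer.\n\n"
    + "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    + f"\n\nQuestion: {question}"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Constraining the model to the supplied results does not eliminate wrong answers, which is why the second measure, cross-checking every conclusion, still applies.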
In conclusion, I don't believe the risks associated with using AI in internal audit should stop auditors from using it to improve productivity. Internal audit is unlikely to be replaced by AI in the near future, and auditors should familiarize themselves with AI as soon as possible so they are ready for the challenge of auditing AI in the next industrial revolution. I will address the concern about being replaced by AI in my next article.