ChatGPT and the impact on internal audit
What is ChatGPT?
ChatGPT (https://chat.openai.com/) is an AI language model developed by OpenAI, an independent research organisation that has a partnership and cooperation arrangement with Microsoft. It uses Generative Pre-trained Transformer (GPT) technology to generate human-like text.
The model has been trained on a massive dataset of text and can generate complex sentences with high semantic richness. It can understand contextual cues and recognise complex conversational structures, making it well suited to natural language processing tasks such as question answering, text generation and text classification. The model is also customisable and can be fine-tuned to specific requirements. ChatGPT can be integrated into applications through its Application Programming Interface (API), allowing developers to build custom applications that leverage the power of deep learning for text generation.
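As a minimal sketch of what such an integration might look like, the snippet below assembles the request payload that OpenAI's Chat Completions endpoint expects. The model name, prompts and helper function are illustrative assumptions; an actual call would be made through OpenAI's client library with a valid API key, so only the payload is constructed here.

```python
# Hypothetical sketch: building a Chat Completions request payload.
# Model name and prompt text are illustrative, not prescriptive.
import json

def build_chat_request(system_prompt, user_message, model="gpt-3.5-turbo"):
    """Assemble the JSON payload the Chat Completions endpoint expects."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_chat_request(
    "You are an assistant that classifies internal audit findings.",
    "Classify: 'Three invoices lacked approval signatures.'",
)
print(json.dumps(payload, indent=2))
```

In production, this payload would be sent via the OpenAI client, and the response parsed before any result reaches an audit workpaper.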
Use of ChatGPT in internal audit (IA)
The impact of ChatGPT on internal audit could potentially be significant. IA has a crucial role in maintaining the integrity of an organisation's operations, and ChatGPT can be used to assist IA in their work in several ways:
· ChatGPT could be used to analyse vast amounts of data to identify potential fraud, highlighting unusual trends or patterns for further investigation by IA.
· ChatGPT could analyse data relating to potential risks, such as financial fraud or cyber attacks, and provide recommendations on how to mitigate them. This could assist in risk assessment and IA planning, as well as in specific audits.
· ChatGPT could automate routine IA tasks, thereby increasing efficiency and reducing the risk of human error. For example, it could be used to generate reports or perform routine data analytics testing.
· Finally, ChatGPT could help raise the standard of IA work by providing a standardised framework for analysing data and identifying potential areas of concern, enabling IA to adhere to best practices and conduct audit engagements in a consistent and objective manner.
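The fraud-flagging idea in the first bullet can be illustrated with a conventional statistical pre-filter: flag transaction amounts whose z-score exceeds a threshold, producing a shortlist for IA (or an LLM-assisted review) to investigate. The threshold and sample data below are assumptions for demonstration only.

```python
# Illustrative pre-filter for fraud analytics: flag amounts that deviate
# from the mean by more than `threshold` standard deviations (z-score).
import statistics

def flag_outliers(amounts, threshold=3.0):
    """Return the amounts whose absolute z-score exceeds the threshold."""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    if stdev == 0:
        return []  # no variation, nothing stands out
    return [a for a in amounts if abs(a - mean) / stdev > threshold]

transactions = [100] * 20 + [10_000]
print(flag_outliers(transactions))  # the 10,000 payment stands out
```

In practice the shortlist, not the raw ledger, is what an auditor (or a language model acting as an assistant) would examine further.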
Considerations, implications and risks for IA
Despite the potential advantages of ChatGPT for IA, there are certain risks with its deployment that should be considered by IA:
· Accuracy and reliability: IA depends on accurate and reliable information to make informed decisions. Because ChatGPT uses machine learning algorithms to generate responses, its accuracy and reliability cannot be taken for granted. Its responses may also be based only on information known as of September 2021, since it lacks access to real-time information. Moreover, the potential for bias to be introduced, unintentionally or otherwise, cannot be overlooked, which can affect the quality and relevance of its responses. Human review is therefore needed to confirm that its outputs are accurate and unbiased.
· Data risk and sources: ChatGPT's handling of sensitive and confidential data poses a security and confidentiality risk, so conversations and data must be appropriately secured and protected. Data entered during ChatGPT's public release phase is recorded and stored on OpenAI's servers for algorithm improvement, and questions, answers and data may be visible to its researchers. It is essential not to enter personally identifiable information. IA also needs to consider the data source when auditing the use of ChatGPT, as some departments use internal data, others third-party data, and some both. Clear data source definitions are necessary for accuracy, as using data without proper vetting can cause legal and reputational damage.
· Introduction of new threats: Using ChatGPT to carry out activities that would be challenging for humans could itself give rise to new threats, and malicious actors may exploit vulnerabilities in the AI systems that defenders deploy. It is therefore crucial to collaborate with and learn from the cybersecurity community, exploring and potentially implementing red teaming, formal verification, responsible disclosure of AI vulnerabilities, security tooling and secure hardware.
· AI development and monitoring are key: AI systems require proper consideration of data sources, fitness for purpose, ownership rights, data annotation and information security. GDPR requires a data protection impact assessment (DPIA) that lists risks and privacy controls. IA helps check that DPIA recommendations are implemented and that personal data is minimised and deleted when no longer needed. Accuracy changes as algorithms and data are modified, and vulnerabilities may only manifest over time, so continuous evaluation and monitoring of both algorithms and data are necessary. Where ChatGPT is used, the objective of an audit should be to confirm the correctness of its outputs, in order to maintain accuracy and avoid bias.
· Regulatory compliance: Adopting ChatGPT could make regulatory compliance more difficult. Strict data privacy laws, such as GDPR and CCPA, apply to many industries and require businesses to safeguard customer information and guarantee that it is used lawfully. Personal data may be present in the model's output, and identifying and restricting the use of such data is hard, which complicates compliance with these laws.
· Promoting a culture of responsibility: To use ChatGPT in a way that is consistent with the firm's quality, ethical standards and values, it is crucial for IA to collaborate with the other two lines of defence. This can be accomplished by developing formal policies on acceptable service usage, emphasising education, and staying up to date on statutory and regulatory requirements.
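The advice above not to enter personally identifiable information can be operationalised with a redaction step before any text is pasted into an external tool. The sketch below uses simplified regular-expression patterns as an illustration; a real control would rely on a vetted PII-detection capability, and these patterns are assumptions, not a complete PII taxonomy.

```python
# Illustrative sketch: mask common PII patterns before text leaves the
# organisation. Patterns are deliberately simplified for demonstration.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\+?\b\d[\d\s-]{7,}\d\b"),
}

def redact(text):
    """Replace matches of each pattern with a [LABEL] placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(redact("Contact jane.doe@example.com or call +44 20 7946 0958."))
# → Contact [EMAIL] or call [PHONE].
```

Such a filter would sit between the auditor's drafting tool and the ChatGPT interface, logging what was masked so the control itself is auditable.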
As organisations increasingly adopt ChatGPT across their processes, it becomes crucial for technology teams and IA to proactively identify the associated risks and ensure that proper controls are in place from the outset. This approach not only creates trust and confidence but also enables the organisation to leverage the tool's benefits securely. That focus on control must be maintained throughout: ChatGPT could assist IA in its work, but human judgment remains necessary to interpret its outputs accurately, so the risks that arise through its use must be given due attention.
Disclaimer: The views reflected in this article are the views of the authors and do not necessarily reflect the views of the global EY organisation or its member firms.