The Challenges of using ChatGPT in the enterprise world
A mighty AI like ChatGPT seems perfect to ignite your business, but at the same time it poses substantial risks and challenges when it comes to compliant use within the enterprise.

ChatGPT, the advanced artificial intelligence application, has taken the world by storm with its remarkable language processing capabilities and versatility across applications. However, it is important to understand that the current architecture of ChatGPT may not be suitable for all businesses and applications, especially in the context of Trusted AI governance and regulation.

In this article, I examine the limitations and challenges of using ChatGPT in a corporate environment, and the importance of responsible AI use.

Limited trust in ChatGPT results

One of the main challenges of using ChatGPT in a corporate environment is the limited trust in the results it provides. Unlike search engines like Google, ChatGPT does not provide verifiable sources for its results, but rather synthesizes answers from knowledge learned from its training data. As a result, it can be difficult to assess the accuracy and reliability of both the results and the data on which the AI was trained, leading to a lack of confidence in its output.

In addition, the AI may reproduce biases present in its training data, leading to unfair and inaccurate results. Without the ability to inspect and control the training data, organizations may be unaware of the biases in their AI systems and of the potential negative consequences of the resulting decisions.

Transparency concerns

Transparency is a critical factor in the use of AI systems such as ChatGPT. However, it is often difficult to understand the exact decision-making processes and operations of such systems, making them akin to black boxes. This lack of traceability and transparency can lead to mistrust in the output and to difficulties in evaluating and interpreting it.

Research by PwC shows that companies place a very high value on security, transparency and confidentiality when using AI models, and these requirements may not be fully met by large language models such as ChatGPT.

Reproducibility and reliability issues

A further concern when using ChatGPT is the lack of reproducibility and reliability of its results. The same prompt can yield different outputs on different runs, which makes the model's behaviour hard to predict and reproduce and causes problems in applications that require accurate and consistent results.

An insurance company using ChatGPT for risk assessment, for example, may not be able to understand its operations or obtain consistent results, which could lead to unfair and incorrect decisions. As long as language models, such as ChatGPT, are used as a black box and integrated into business processes, they will continue to pose a challenge with unpredictable risk.
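This nondeterminism has a simple technical root: language models like ChatGPT generate text by sampling each next token from a probability distribution. The toy sketch below is not ChatGPT's actual decoder (the distribution and function names are illustrative), but it shows why greedy decoding yields the same answer every time while temperature-based sampling does not:

```python
import random

def sample_token(probs, temperature, rng):
    """Pick the next token from a probability distribution.

    temperature == 0 means greedy decoding (always the most likely
    token); higher temperatures make the choice increasingly random.
    """
    if temperature == 0:
        return max(probs, key=probs.get)
    # Re-weight each probability by the temperature, then sample.
    weights = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    total = sum(weights.values())
    tokens = list(weights)
    return rng.choices(tokens, [weights[t] / total for t in tokens])[0]

# A toy next-token distribution for an imagined risk-assessment prompt.
probs = {"approve": 0.5, "review": 0.3, "reject": 0.2}

greedy = [sample_token(probs, 0, random.Random(i)) for i in range(5)]
sampled = [sample_token(probs, 1.0, random.Random(i)) for i in range(5)]

print(greedy)   # always the most likely token, run after run
print(sampled)  # depends on the random seed of each run
```

A hosted service that samples with a nonzero temperature and an uncontrolled seed therefore cannot, by construction, promise identical answers to identical questions.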

Privacy and Confidentiality Concerns with ChatGPT

The implementation of ChatGPT poses a significant risk to businesses, as both the transparency of its data processing and its GDPR compliance are uncertain. It is unclear where and how the AI application stores new information, and whether that information is treated as intellectual property. In addition, the use of ChatGPT in many operational scenarios conflicts with the principles of the GDPR, which could lead to sanctions and put companies in a difficult situation.

The governance and regulation of AI systems, including ChatGPT, is crucial to address limitations and boundaries. There are initiatives such as the European Commission's Trustworthy AI and frameworks of organizations such as PwC, Deloitte and VDE, which focus on creating ethical guidelines and standards for the use of AI in companies.

While ChatGPT is an impressive AI application, it is important to understand its limitations and challenges in a business environment and to use it responsibly.

The effectiveness of ethical AI certifications

While certifications can help promote the responsible and ethical use of AI, it is questionable whether they can truly guarantee this. There are still many uncertainties and challenges that need to be addressed to ensure the best interests of all stakeholders when using AI systems.

It is important to regulate the use of AI in a way that considers the interests of developers, users, and society at large. Self-assessment catalogs such as ALTAI or AIC4 and certifications such as Trustworthy AI, Ethical AI, and AI Trust Label can help, but it is also important to critically assess their ability to address the challenges and issues associated with the use of AI. Collaboration between developers, users, and regulators is needed to ensure that AI is used ethically and responsibly to have a positive impact on society.

Requirements for responsible use and implementation of AI

Integrating AI into the business context requires both technical and non-technical considerations. Compliance, privacy, and regulations (particularly in the healthcare and financial sectors) require that AI be used under the control of the organization. Technologically, this requires the ability to run AI on-premises or in a hybrid cloud architecture, so that trusted AI can anonymize personal data or filter out confidential information before the data is hosted in public clouds such as Microsoft Azure.
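Such a filtering step can be sketched in a few lines. The regular expressions and placeholder tags below are hypothetical and far from production-grade; a real deployment would rely on a vetted PII-detection component rather than hand-written patterns:

```python
import re

# Illustrative patterns for two common kinds of personal data.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE = re.compile(r"\+?\d[\d /-]{7,}\d")

def redact(text: str) -> str:
    """Replace e-mail addresses and phone numbers with placeholders
    before the text leaves the organization's own infrastructure."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

message = "Contact Jane Doe at jane.doe@example.com or +49 89 1234567."
print(redact(message))  # Contact Jane Doe at [EMAIL] or [PHONE].
```

Only the redacted text would then be forwarded to a public cloud service, keeping the original personal data inside the organization's perimeter.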

Equally important is the use of internal company data to train and control AI. AI is used not only for human interaction, but also for automated data analysis and processing. To optimize AI for the specific business context, it is critical to continuously train it with diverse data from the business.

The use of AI is inherently tied to the issue of responsibility. Whoever uses an AI has to take responsibility for the ramifications. This applies to any kind of AI-generated output or AI-induced decision, from a short social media post or automated question answering in service operations to automated decisions that drive business processes or bring an autonomous vehicle to a stop.

Companies using black boxes like ChatGPT are de facto unable to transfer this responsibility to a company like OpenAI and its cloud service offerings. Therefore, to act responsibly, they must be able to understand and control the behaviour of an AI. For mission-critical processes, or those that involve confidential or private data, companies will always need to wrap even the best pre-trained AI in additional layers that guarantee transparency, reproducibility and reliability, and that make it possible to control and optimize the AI.
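A minimal sketch of one such wrapping layer, assuming a generic callable model; the AuditedModel class, the toy model, and the blocked-term list are illustrative, not a real product:

```python
import hashlib
import time

class AuditedModel:
    """Hypothetical control layer around any text-generation model.

    It blocks prompts containing confidential markers and records
    every request and response in an audit log, so that the company,
    not the model vendor, can account for each output.
    """

    def __init__(self, model_fn, blocked_terms):
        self.model_fn = model_fn          # the wrapped model (any callable)
        self.blocked_terms = blocked_terms
        self.audit_log = []

    def generate(self, prompt: str) -> str:
        for term in self.blocked_terms:
            if term.lower() in prompt.lower():
                raise ValueError(f"prompt contains blocked term: {term}")
        answer = self.model_fn(prompt)
        self.audit_log.append({
            "time": time.time(),
            # Store a hash, not the prompt itself, to keep the log lean.
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "answer": answer,
        })
        return answer

# Stand-in for a real model: echoes a canned answer.
def toy_model(prompt):
    return f"Echo: {prompt}"

guard = AuditedModel(toy_model, blocked_terms=["internal only"])
print(guard.generate("Summarize our public press release."))
```

The same pattern extends naturally to output filtering, rate limiting, or routing sensitive prompts to an on-premises model instead of a cloud service.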

In addition, the use of a seemingly almighty AI like ChatGPT is not only irresponsible but in most cases also inefficient. ChatGPT's architecture is designed for maximum flexibility and universality, at the cost of previously unreached levels of computing power and energy consumption. For most application scenarios within enterprises, highly specialized AI, pre-trained and optimized for dedicated tasks on dedicated data, is the better alternative, both in terms of cost and trust, and in many cases the only way to take full responsibility. To address this, a deliberately complementary AI software architecture such as that provided by MORESOPHY is required to ensure reliable data-driven business practices.

Conclusion

In conclusion, ChatGPT is an impressive AI application, but it poses a significant risk to businesses in terms of data protection and confidentiality, and its limitations must be understood before it is used in a business environment. While ethical AI certifications and initiatives such as the European Commission's Trustworthy AI and guidelines from PwC, Deloitte and VDE aim to promote the responsible use of AI, it is important to question their effectiveness in ensuring such use. It is crucial that AI is integrated into the context of the business, considering compliance and privacy regulations, and that internal company data is used to train and control the AI. The lack of trust, reliability, traceability and reproducibility is a common problem for cloud-based AI services. Overall, the use of specialized, reliably controllable and transparent AI is indispensable to avoid uncontrollable harm to business and people.


More articles by Prof. Dr. Heiko Beier