Seven Commandments – ChatGPT and other Large Language Models (LLMs)
A key motive for writing this piece is to briefly explain the concept of ChatGPT and to offer guidance for organisations and individuals on using such technology. I have tried to keep technical jargon to a minimum; you might still find some, but fear not :).
Since the release of ChatGPT (a conversational agent), you can sense the excitement around its adoption, but also concerns that come from a place of caution. At this nascent stage, like every conversational agent released before it, the technology clearly serves as an experimentation tool, which means there are aspects of it that require responsible iteration to reach a reasonable balance in its application across various domains. Nonetheless, we are already experiencing a wave of the good, the bad, and the uncertain. The fact that ChatGPT can be used at such scale, by anybody, shows the outstanding work achieved. AI firms such as Anthropic, DeepMind, and OpenAI are working on releases of new conversational agents or language models (Claude, Sparrow, and GPT-4, respectively), making the future even more exciting.
Below are some trends in computing across areas of machine learning that place Large Language Models (LLMs) in context before the arrival of ChatGPT in late 2022. LLMs are on the rise, with early use already visible in both household and commercial applications. These models can be seen to follow 'scaling laws', i.e., their capabilities often improve as parameter counts and training data grow.
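The 'scaling laws' idea can be sketched as a power law: loss falls smoothly as model size grows. The constants below are hypothetical, chosen only to show the shape of the trend, not real measurements from any published study.

```python
def scaling_loss(n_params: float, a: float = 400.0, alpha: float = 0.076) -> float:
    """Hypothetical test loss as a power law in parameter count N: L(N) = a * N^(-alpha).

    Both `a` and `alpha` are illustrative placeholders; real scaling-law
    fits depend on the model family, data, and training setup.
    """
    return a * n_params ** (-alpha)

if __name__ == "__main__":
    # Each 10x increase in parameters shaves a steady fraction off the loss.
    for n in (1e8, 1e9, 1e10, 1e11):
        print(f"N = {n:.0e} params -> loss ~ {scaling_loss(n):.3f}")
```

The key property the sketch illustrates is monotonic improvement with scale, which is why 'bigger model, more data' has been such a reliable recipe.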
Generative becomes General Purpose
I am sure readers have often come across the term "Generative AI". It simply refers to models trained on historical content such as video, images, audio, and text, which are then used to generate new content in response to a query. Examples include models for images, video, and audio (DALL-E, Stable Diffusion, Midjourney, etc.) and for text (GPT-3, LaMDA, ChatGPT, etc.). These models are built on architectures such as diffusion models and transformers, whose general-purpose abilities suggest they can be applied across various domains.
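For the curious, the core operation inside a transformer is scaled dot-product attention: each output is a weighted mix of value vectors, weighted by how strongly a query matches each key. This is a minimal, dependency-free sketch of that one operation, not a full model; the vector sizes and inputs are arbitrary examples.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of equal-length vectors.

    For each query: score it against every key, turn the scores into
    weights with softmax, and return the weighted average of the values.
    """
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        out = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
        outputs.append(out)
    return outputs
```

A query that strongly matches the first key pulls the output toward the first value vector; stacking many such layers (plus learned projections) is what gives transformers their general-purpose flexibility.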
LLMs, as they are fondly called, have grown into serving as infrastructure for commercial applications. This informs the debate about how AI is becoming more of an engineering field than a science. It also underlines the point that Generative AI, for instance, is heavily dependent on high-performance computing for scalability.
A Cautionary Tale – Seven Commandments
Apply when necessary
In my advisory capacity to organisations on using AI, my first question has always been: "Why do you need this technology?" The same approach applies to adopting models of this nature. A compelling use case on value delivery could make them worth considering. LLMs, Generative AI, or Foundation Models are often 'nice to have'. However, a business optimising for value, whether to customers or shareholders, should not fall into the 'nice to have' trap if those key questions have not been properly answered.
Seek ye first understanding
There is no doubt the adoption of such technology has been aggressive, with a tendency to grow astronomically. However, before actions such as business decisions, procurement, regulation, or integration for public-interest purposes are taken, ensure you understand its workings. As simple as this sounds, the workings here refer to how the technology generally works and how deployments affect the environment. This is critical to ensure such technology reaches its potential and that we experience responsible AI advancements.
Be responsible
Being responsible here throws caution into how such models are used. It is not a new discovery that models like ChatGPT can propagate various risks and ethical issues. Examples include ChatGPT enabling script kiddies to write functional malware, and Generative AI generating falsehoods, as seen across users' responses. The lack of accountability from such tools at this early stage means a responsible AI mindset needs to kick in. Concerns around accountability, privacy, safety, transparency, and fairness should be prioritised within organisations to harness the benefits of adopting such models, where needed.
Resist the urge to hype
The release of ChatGPT has birthed more influencers in the industry than I have seen since the frenzy of the Web3 progenitors. It may sound good to create awareness of such models, but tread cautiously. We are all excited, yet the essence of such models is to deliver performance and optimal value, not hype.
No Consciousness
Consciousness is one of the biggest debates about AI. But I will insist that Generative AI models, like other models, are first and foremost artefacts. Humans breathe "life" into them to make them extremely useful in their applications. Nonetheless, this does not automatically make them conscious, despite the level of human interaction involved in their creation. They are incapable of self-motivation, abstract cognition, and similar capacities. These models are only as good as predicting the subsequent text or image based on the past content they have experienced. In a nutshell, these artefacts or tools lack an understanding of what they produce, even when it is passing a business or law exam.
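The "predicting the subsequent text" point can be made concrete with a toy sketch. Real LLMs learn vastly richer statistics with neural networks, but the objective is the same shape: predict the next token from past content, with no grasp of meaning. The tiny bigram counter below is an illustrative stand-in, not how any production model works.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Map each word to a Counter of the words observed to follow it."""
    followers = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def predict_next(followers: dict, word: str) -> str:
    """Return the most frequently observed follower of `word`, if any."""
    if word not in followers:
        return "<unknown>"
    return followers[word].most_common(1)[0][0]

# A made-up one-line "corpus" purely for demonstration.
model = train_bigrams("the model predicts the next word and the next word again")
print(predict_next(model, "the"))   # most common follower of "the"
```

The counter "knows" that "word" usually follows "next" only because it has counted that pattern; it has no notion of what a word is, which mirrors, in miniature, why fluent output from an LLM is not evidence of understanding.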
Establish agency
For organisations that must adopt such technology, an accountability mechanism must be in place to manage expectations and mitigate the associated risks. In essence, the governance approach must clearly answer the human design problem, ensuring that appropriate human and organisational values are baked into the application process. For AI research companies, credit should be given where it is due. However, the road to achieving human goals means the "alignment problem" must clearly answer the value definition behind creating such tools.
Thou shalt not ban
For institutions and regulators, banning such technology creates an arbitrage for bad actors to fill the gap; whether proscribed or not, generative AI would still be popular, especially among creatives. Banning is a normal reaction to new technologies, but it does not achieve anything. More importantly, awareness and education compress the information and knowledge asymmetries.
I love the student policy drafted by Prof. Ethan Mollick on using AI in his class.
In conclusion, whether you refer to it as Generative AI, Large Language Models, or even foundation models, a recurring theme in all of these is the dual-use ability of such technology: it can serve as a transformative engine in society but also perpetuate huge risks. This article alone does not dive deep into the governance needed to deliver on the huge promises and expectations of such technology. It is a complex subject that requires a clear understanding of ethics, policy, legal, and risk considerations, yet I hope time is in my favour to break down these considerations in a future post :).