How L&D Can Tame the Risks of Generative AI
The risks of a Gen-AI strategy are sky-high, like the potential rewards. But with appropriate risk management measures, undergirded by solid investment, these risks can be managed and overcome.
We ended our last article on a high note, stating that the only limits on the potential of Generative AI (GAI) to revolutionize the L&D space are the ones we impose through lack of will and imagination. And we stand by that claim. However.
All the grand planning in the world is meaningless without implementation. And implementation brings with it risks and the corresponding investments to avoid or mitigate them.
Here is a sobering chart from our friends at KMPG, whose survey of business leaders we have referenced in our previous two articles:
Barriers to GAI adoption by firms (cross-industry panel of business leaders)
Source: KPMG Generative AI survey, March 2023
That’s a long-ish list. Let’s try to make some sense of it.
Risks
As every investor knows, there is no such thing as return without risk. Generative AI, as a novel technology, has its fair share of them.
Among the KPMG survey respondents, 92% believe that the implementation of GAI introduces moderate to high-risk concerns, while just under half (47%) said that they were only in the early stages of creating strategies to address them.
Regulation to help mitigate these risks already exists (US: AI Bill of Rights, EU: AI Act) but as the underlying technology is changing faster than the speed of most governments, we cannot rely on this to guide us alone.
Which of the many risks should we be most concerned about?
#1 Accuracy and Bias
This is probably the number one issue for L&D professionals.
False or misleading facts are embarrassing and potentially harmful for any business, but in the field of education - they are inexcusable.
Furthermore, the probabilistic nature of Large Language Models will naturally tend to reinforce historic biases if left unchecked.
All of this underscores the need for human input and human intervention. In order to retain the unequivocal trust of our clients, we cannot afford to unequivocally trust AI.
We must always bear in mind: AI cannot think and does not know when it is lying.
It is our job to think and to verify.
#2 Security and Privacy
Anyone who connects a device containing private information to the internet is incurring the risk that this data might fall into the hands of malicious actors, or be unintentionally exposed through error.
But the stakes are higher with Gen-AI-powered L&D, particularly given the scope of its analysis, looking at talent both at an organizational and individual level.
The nature of a “chat” is also more intimate, as people assume privacy and may therefore be less guarded or professional in their choice of words.
L&D solutions must regard their data as confidential as a user’s search history (yes, exactly!).
#3 Legal and IP
Large Language Models have created a headache for Intellectual Property experts, as the process of textual generation blurs the line between plagiarism (replicating another’s work) and inspiration (building on another’s work).
While the former is unethical and can lead to legal problems, the latter is how civilization progresses.
领英推荐
We’re unlikely to solve this point here. But as with accuracy and bias, there is a line where utility becomes over-reliance. The more heavily an instructor leans on an AI model that draws on a wide range of source material (such as ChatGPT), the greater the liability risk.
There are situations where this is less likely to be the case, such as - for instance - compliance training materials, where adhering to certain forms of language is not just permitted but advisable.
Investment
Despite all of this, executives are pressing ahead with plans to integrate Generative AI into their business. A positive sign.
77% of the survey respondents said they were confident in their ability to mitigate the risks associated with Gen-AI. This number increased slightly (to 80%) for those who have already deployed it in some way already.
A similar proportion indicated that they would be increasing their investment in this area by 50% or more in the coming 6-12 months.
This is no coincidence, as investing the proper funds, time and resources is ultimately the only way to deal with risk.
Recruiting the right talent
The extraordinary usability of the latest AI tools belies the fact that they are highly sophisticated machines that require sophisticated humans to properly design, calibrate, and maintain them.
A casual approach to the talent issue (“Who needs code? AI will do it for us!”) will lead to an over-reliance on generic solutions, and hence directly to the risks outlined above in security and accuracy.
Recruiting the proper talent may appear a challenging task given the nascent stage of the Generative AI industry.
Of the respondents surveyed by KMPG, a mere 1% already have the skills in-house, with the remainder planning to hire (24%), re-train existing staff (12%), or both (63%).
Economics dictates the supply will invariably rise to meet demand.
While the number of Gen-AI experts may be small now, the scale of demand will be prompting (so to speak) many experts in adjacent fields (e.g. within Machine Learning) to transition their skillsets to meet the coming tsunami of Gen-AI roles.
Building the necessary infrastructure
There are two challenges with integrating Gen-AI technology into an existing organization.
The first is the technology itself, which is evolving in real-time. This is to an extent a problem of picking the right horse, for which task your expert talent will be necessary (see above).
However, where analysis of company-wide data is involved, the interaction of newer technology with legacy databases may prove to be a bottleneck for which there is no easy fix. This means that you should target quicker wins first rather than promising an end-to-end transformation from the get-go.
Stress-testing our solutions
This article assumes that you have an L&D solution that effectively fulfills the needs of your target user base. But even with the right product, there are numerous kinks that must be identified ahead of time in order to avoid chaos post-rollout.
This means not only testing normal use cases, but also - and especially - stress-testing for edge cases. An example of the latter could be actively seeking to identify precisely how inaccurate or offensive output could be generated, and deploying fixes and ‘dead-ends’ to keep the solution focused on its educational goals.
This is not likely to be a one-and-done event or even a periodic occurrence, but an ongoing process.
Conclusion
The sizeable number of material risks and the non-trivial effort it will require to overcome them may seem daunting.
But to re-emphasize the point with which we began this series: Gen-AI is an unstoppable force, and leaders everywhere are embracing it as such.
For all the risks involved, a substantial majority of executives (72 percent) believe generative AI can play a critical role in building and maintaining stakeholder trust.
In other words, if we can hold the balance between innovation, risk mitigation, and proper funding, we can turn a risky venture into a safe bet.
Love it Sean!
Ukrainian in Ukraine. Defense Tech, Unmanned Systems Forces, Content & Storytelling.
1 年Excellent!
Rapunzl Co-Founder | Forbes 30U30 Education | 2022 Yass Finalist & 2023 Yass Fellow
1 年Thanks for sharing Sean!
Merchant services for Business Consultants , Coaches, and Small Business Owners
1 年Really interesting Sean!
Empowering Brands with Passion: PWD & Divyang Advocate | Seasoned Sales & Marketing Pro | Digital Marketing Maven | PR Enthusiast | Strategic Content Architect | Insightful Business Analyst | MPA & B.Tech Holder
1 年That's spot on Sean! Thanks for sharing!