Overcoming the Hurdles in Implementing Generative AI
David Berglund
Strategy & Innovation Executive | AI & Data | Value Creation | Generative AI | Public Speaker
It’s settled. GenAI is in, and companies are clamoring to test and deploy it. Over half (55%) of organizations are already experimenting with generative AI, and about 18% have already implemented it, according to a recent VentureBeat survey. More surprisingly, 5% of companies have said ‘No’ to generative AI, deciding not to consider using it at all.
Early adopters are focused on NLP tasks like chat and messaging (46%) as well as content creation (32%). The question worth asking: why aren’t more companies doing and spending more? The data shows that many companies struggle with adoption, and that having the right talent and quality data are key to moving confidently with this novel technology.
It’s been roughly eight months (239 days) since ChatGPT was released, so it should be no surprise that many large companies are taking some time to get comfortable with a technology that can provide a 90% answer to 100% of questions.
So what’s the hold-up?
With the unprecedented advancements in AI technologies, an increasing number of organizations are keen to leverage the power of generative AI to transform their operations (e.g., get more efficient) and amplify personalization. Below are several common concerns that companies have about adopting GenAI and practical solutions to address them effectively.
Navigating Data Privacy
Generative AI systems depend heavily on vast amounts of data, some of which may be sensitive customer information. So it's no surprise that many businesses fear potential data and privacy breaches.
Solution: Businesses can develop a proprietary large language model (LLM), thus ensuring data security. Alternatively, they can establish a bespoke agreement with their LLM partner that prohibits the sharing of proprietary data. Working with an AI tooling partner can also provide data partitioning and protection.
Addressing Data Context
LLMs trained on open internet data may lack the contextual specificity needed by businesses, raising questions about their applicability in different sectors.
Solution: Businesses can preprocess and incorporate structured data into their AI models (e.g., with Unstructured). Prompt engineering and embeddings stored in a vector database give the LLM an extended knowledge base to draw on. Fine-tuning an LLM on proprietary data can also adjust the model's parameters to suit specific contexts (check out Cleanlab).
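To make the vector-database idea concrete, here is a minimal, self-contained sketch of the retrieval step. The embed() function is a toy bag-of-words stand-in for a real embedding model, and the documents and query are hypothetical examples; in practice you would use an embedding API and a managed vector store.

```python
import math

def embed(text):
    # Toy "embedding": bag-of-words counts. A real system would call an
    # embedding model here; this stand-in keeps the sketch runnable.
    vec = {}
    for tok in text.lower().split():
        tok = tok.strip(".,:?")
        vec[tok] = vec.get(tok, 0) + 1
    return vec

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    def __init__(self):
        self.docs = []  # list of (text, vector) pairs

    def add(self, text):
        self.docs.append((text, embed(text)))

    def top_k(self, query, k=1):
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

# Index a couple of (hypothetical) company documents.
store = VectorStore()
store.add("Refund policy: customers may return items within 30 days.")
store.add("Shipping policy: orders ship within 2 business days.")

# Retrieve the most relevant document and stuff it into the prompt,
# so the LLM answers from company context rather than open-internet priors.
context = store.top_k("How long do customers have to return an item?")[0]
prompt = f"Answer using only this context:\n{context}\nQuestion: ..."
```

The key design point: the LLM itself is unchanged; context is injected at query time, which is usually cheaper and faster to iterate on than fine-tuning.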
Mitigating Hallucinations
LLMs are trained on the open internet, so they may not provide 100% accurate responses. The propensity of LLMs to produce hallucinations, or confidently incorrect responses, is both a feature and a bug in these systems. It's also a significant concern, which many companies fear could lead to misinformation.
Solution: Businesses can use regulatory guidelines as 'guardrails' to guide the LLM's output. They can also use safety-focused approaches like Anthropic's Constitutional AI. Just keep in mind that while fine-tuning the models can reduce the risk of hallucinations, it may not entirely remove it.
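One simple form a guardrail can take is a post-generation filter that checks model output against rules derived from regulatory guidance. The patterns below are hypothetical illustrations, not real compliance rules; actual rules would come from legal and regulatory review.

```python
import re

# Illustrative stand-ins for compliance rules (hypothetical examples).
BANNED_PATTERNS = [
    r"\bguaranteed returns\b",   # e.g., financial-promotion restrictions
    r"\bmedical diagnosis\b",    # e.g., health-advice restrictions
]

def apply_guardrails(model_output):
    """Return the output unchanged, or a safe refusal if a rule matches."""
    for pattern in BANNED_PATTERNS:
        if re.search(pattern, model_output, re.IGNORECASE):
            return "I'm not able to help with that request."
    return model_output
```

Production guardrail systems are more sophisticated (classifiers, secondary LLM checks, citation verification), but the pattern is the same: validate output before it reaches the user.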
Demystifying the Black Box
The 'black box' nature of AI, where the model's decision-making process is often opaque, can be disconcerting, especially in regulated industries (e.g., finance/healthcare).
Solution: The 'Chain of Thought' prompting process can elucidate the model's reasoning. Alternatively, businesses can use 'ReAct' prompts and fine-tuning or employ models-as-a-service companies like Elemental Cognition, which offer formal reasoning methods with provable correctness.
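In its simplest form, Chain of Thought prompting is just an instruction added to the prompt asking the model to show its reasoning before answering. This template is a hypothetical example, not any vendor's API:

```python
def cot_prompt(question):
    # Wrap a question in a chain-of-thought instruction so the model
    # exposes its intermediate reasoning, making the "black box" more
    # inspectable for reviewers in regulated settings.
    return (
        "Answer the question below. First think through the problem "
        "step by step, then give a final answer prefixed with 'Answer:'.\n"
        f"Question: {question}\n"
    )

prompt = cot_prompt("A loan of $1,000 at 5% simple interest for 2 years: what is the total owed?")
```

The reasoning trace the model emits can then be logged and audited alongside the final answer.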
Tackling Costs
The high GPU costs associated with generative AI might be a prohibitive factor for some businesses. Even securing Nvidia A100s can be a challenge.
Solution: Before diving in, analyze the expected ROI, then look for ways to minimize costs. Engineering approaches like query concatenation, question caching via a vector database, and LLM cascading can help manage these costs effectively.
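Two of those patterns, caching and cascading, can be sketched in a few lines. The cheap_model and expensive_model functions below are hypothetical stand-ins for real small- and large-model API calls, and the cache here is exact-match (a vector database would allow semantically similar questions to hit the cache too):

```python
cache = {}

def cheap_model(q):
    # Stand-in for a small, inexpensive model: here it only "knows" one FAQ.
    faqs = {"what are your hours?": "We are open 9-5."}
    return faqs.get(q)

def expensive_model(q):
    # Stand-in for a large, costly model that can answer anything.
    return f"[large-model answer to: {q}]"

def answer(q):
    key = q.lower().strip()
    if key in cache:              # caching: repeat questions cost nothing
        return cache[key]
    result = cheap_model(key)     # cascading: try the small model first
    if result is None:            # escalate only when the small model fails
        result = expensive_model(q)
    cache[key] = result
    return result
```

Since repeat and easy questions dominate many workloads, routing only the hard ones to the expensive model can cut GPU spend substantially.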
Eliminating Bias Amplification
Human biases may infiltrate generative AI models, potentially leading to an amplification of these biases that conflicts with the organization's commitments to diversity, equity, and inclusion.
Solution: Regular monitoring and evaluation of training data (and production performance) for biases are crucial. Businesses should design new processes to combat bias and establish audit trails to trace the data lineage used in content generation.
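One concrete monitoring check is to compare positive-outcome rates across groups in training data or model outputs. This sketch uses the "four-fifths" disparity heuristic on hypothetical records; real audits would use multiple metrics and real production data.

```python
from collections import Counter

def selection_rates(records):
    # records: iterable of (group, selected: bool) pairs
    totals, positives = Counter(), Counter()
    for group, selected in records:
        totals[group] += 1
        if selected:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def parity_ratio(rates):
    # Four-fifths heuristic: a min/max rate ratio below 0.8 is a red flag.
    return min(rates.values()) / max(rates.values())

# Hypothetical audit sample: group A is selected twice as often as group B.
records = [("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False)]
rates = selection_rates(records)
flagged = parity_ratio(rates) < 0.8
```

Running checks like this on a schedule, and logging the inputs behind each flagged result, gives you the audit trail the paragraph above calls for.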
While the path to generative AI adoption may seem fraught with concerns, there are effective strategies to navigate these hurdles. By proactively addressing these issues, businesses can tap into the transformative potential of generative AI and bolster their operations and decision-making processes.
Keep in mind: Standing on the sidelines isn't an option. There is a clear path to navigate these hurdles (above) while still benefiting from the transformative power of this emerging technology. The capabilities and market are evolving quickly, so don’t get left behind. Follow the steps here to move confidently into the new frontier of GenAI.