Overcoming the Hurdles in Implementing Generative AI

It’s settled. GenAI is in, and companies are clamoring to test and deploy it. Over half (55%) of organizations are already experimenting with generative AI, and about 18% have already implemented it, according to a recent VentureBeat survey. More surprisingly, 5% of companies have said ‘No’ to generative AI, deciding not to consider it at all.

Early adopters are focused on NLP tasks like chat and messaging (46%) as well as content creation (32%). The question worth asking: why aren’t more companies doing more, and spending more? The data shows that many companies struggle with adoption. Having the right talent and quality data is key to moving confidently with this novel technology.

It’s been roughly eight months (239 days) since ChatGPT was released, so it should be no surprise that many large companies are taking time to get comfortable with a technology that can provide a 90% answer to 100% of questions.

So what’s the hold-up?

With the unprecedented advancements in AI technologies, an increasing number of organizations are keen to leverage the power of generative AI to transform their operations (e.g., get more efficient) and amplify personalization. Below are several common concerns that companies have about adopting GenAI and practical solutions to address them effectively.


Navigating Data Privacy

Generative AI systems depend heavily on vast amounts of data, some of which might be sensitive customer information. So it's no surprise that many businesses fear potential data and privacy breaches.

Solution: Businesses can develop a proprietary large language model (LLM), thus ensuring data security. Alternatively, they can establish a bespoke agreement with their LLM partner that prohibits the sharing of proprietary data. Working with an AI tooling partner can also provide data partitioning and protection.

Addressing Data Context

LLMs trained on open internet data may lack the contextual specificity needed by businesses, raising questions about their applicability in different sectors.

Solution: Businesses can preprocess proprietary documents into structured data for their AI pipelines (tools like Unstructured help here). Prompt engineering and embedding documents in a vector database give the LLM an extended knowledge base. Fine-tuning an LLM on proprietary data can also adjust the model's parameters to suit specific contexts (check out Cleanlab).
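As a sketch of the retrieval approach, the snippet below uses a toy bag-of-words similarity in place of a real embedding model and vector database. The document set, function names, and scoring are all illustrative assumptions, not a specific product's API.

```python
# Minimal retrieval-augmented prompting sketch. A toy bag-of-words
# "embedding" stands in for a real embedding model and vector database;
# a production system would swap `embed` for an embedding API call.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: lowercase word counts with punctuation stripped.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_prompt(question: str, knowledge_base: list[str], top_k: int = 2) -> str:
    # Retrieve the most similar proprietary documents and prepend them
    # as context, so the LLM answers from company data, not the open web.
    q_vec = embed(question)
    ranked = sorted(knowledge_base, key=lambda d: cosine(q_vec, embed(d)), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "Our refund policy allows returns within 30 days.",
    "Support hours are 9am to 5pm Eastern, Monday through Friday.",
]
print(build_prompt("When can customers get a refund?", docs, top_k=1))
```

The same pattern scales up by replacing the toy similarity with a hosted embedding model and a vector database, while the prompt assembly stays essentially the same.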

Mitigating Hallucinations

LLMs are trained on the open internet, so they may not provide fully accurate responses. The propensity of LLMs to produce hallucinations, or confidently incorrect responses, is both a feature and a bug in these systems. It's also a significant concern, which many companies fear could lead to misinformation.

Solution: Businesses can use regulatory guidelines as 'guardrails' to constrain the LLM's output. They can also use safety-focused approaches like Anthropic's Constitutional AI. Just keep in mind that while fine-tuning can reduce the risk of hallucinations, it may not entirely remove it.
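One lightweight form of guardrail is a rule-based filter applied to model output before it reaches users. The sketch below is purely illustrative: the policy rules, fallback message, and function names are assumptions, and real systems layer filters like this on top of model-level safety rather than relying on them alone.

```python
# Illustrative rule-based output guardrail: a hypothetical post-processing
# step that blocks responses violating simple policy rules before they
# reach the user. Rules and messages here are made-up examples.
import re

POLICY_RULES = [
    # (description, regex that flags a violation)
    ("no investment guarantees", re.compile(r"guaranteed\s+return", re.I)),
    ("no medical dosing advice", re.compile(r"\btake\s+\d+\s*mg\b", re.I)),
]

FALLBACK = "I can't answer that directly; please consult a qualified professional."

def apply_guardrails(llm_output: str) -> str:
    for description, pattern in POLICY_RULES:
        if pattern.search(llm_output):
            # Log the rule hit for auditing, then return the safe fallback.
            print(f"guardrail triggered: {description}")
            return FALLBACK
    return llm_output

print(apply_guardrails("This fund offers a guaranteed return of 12%."))
print(apply_guardrails("Our support hours are 9am to 5pm."))
```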

Demystifying the Black Box

The 'black box' nature of AI, where the model's decision-making process is often opaque, can be disconcerting, especially in regulated industries (e.g., finance/healthcare).

Solution: The 'Chain of Thought' prompting process can elucidate the model's reasoning. Alternatively, businesses can use 'ReAct' prompts and fine-tuning or employ models-as-a-service companies like Elemental Cognition, which offer formal reasoning methods with provable correctness.
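Chain-of-thought prompting can be as simple as asking the model to show its work and marking the final answer. The hypothetical wrapper below stubs out the model call; the prompt wording and the 'Answer:' convention are illustrative assumptions, not a standard.

```python
# Minimal chain-of-thought prompt wrapper: ask the model to show its
# reasoning, then split the visible trace from the final answer so the
# trace can be retained for review. The model response is stubbed.

def cot_prompt(question: str) -> str:
    return (
        f"Question: {question}\n"
        "Think through this step by step, then give the final answer "
        "on a line starting with 'Answer:'."
    )

def extract_answer(model_output: str) -> tuple[str, str]:
    # Everything before the last 'Answer:' is the reasoning trace.
    reasoning, _, answer = model_output.rpartition("Answer:")
    return reasoning.strip(), answer.strip()

# Stubbed model response for illustration:
response = (
    "Step 1: The invoice total is $120.\n"
    "Step 2: A 10% late fee adds $12.\n"
    "Answer: $132"
)
reasoning, answer = extract_answer(response)
print(answer)  # $132
```

Keeping the reasoning trace alongside the answer gives reviewers in regulated industries something concrete to inspect, even though the trace is the model's narrative rather than a formal proof.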

Tackling Costs

The high GPU costs associated with generative AI might be a prohibitive factor for some businesses. Even securing Nvidia A100s can be a challenge.

Solution: Before diving in, analyze the expected ROI, then find ways to minimize costs. Engineering approaches like query concatenation, question caching via a vector database, and LLM cascading can help manage these costs effectively.
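Two of these cost controls, caching and cascading, can be sketched in a few lines. Everything here is a stand-in: the stubbed models, confidence scores, and threshold are assumptions, and a production cache would use a vector database to match near-duplicate questions rather than exact strings.

```python
# Sketch of two cost controls: question caching and LLM cascading
# (try a cheap model first, escalate only when its confidence is low).
# Model calls and confidence values are stubbed for illustration.

cache: dict[str, str] = {}

def cheap_model(q: str):      # returns (answer, confidence); stubbed
    return ("A basic answer.", 0.4)

def expensive_model(q: str):  # returns (answer, confidence); stubbed
    return ("A thorough answer.", 0.9)

def answer(question: str, confidence_floor: float = 0.7) -> str:
    key = question.strip().lower()   # a vector DB would also catch near-duplicates
    if key in cache:
        return cache[key]            # cache hit: no model call, no GPU cost
    text, confidence = cheap_model(question)
    if confidence < confidence_floor:
        text, confidence = expensive_model(question)  # cascade upward
    cache[key] = text
    return text

print(answer("What is our refund policy?"))   # cascades to the larger model
print(answer("what is our refund policy? "))  # served from cache
```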

Eliminating Bias Amplification

Human biases may infiltrate generative AI models, potentially leading to an amplification of these biases that conflicts with the organization's commitments to diversity, equity, and inclusion.

Solution: Regular monitoring and evaluation of training data (and production performance) for biases are crucial. Businesses should design new processes to combat bias and establish audit trails to trace the data lineage used in content generation.
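An audit trail can start as simply as logging the data lineage of each generation, so a biased output can be traced back to the documents that produced it. The record structure and field names below are illustrative, not a standard.

```python
# Minimal audit-trail sketch: record which source documents fed each
# generation so outputs can be traced back to their data lineage.
import json
import time

audit_log: list[dict] = []

def record_generation(prompt: str, source_doc_ids: list[str], output: str) -> None:
    audit_log.append({
        "timestamp": time.time(),
        "prompt": prompt,
        "source_docs": source_doc_ids,  # data lineage for this output
        "output": output,
    })

record_generation("Summarize Q2 hiring data", ["hr-2024-q2.csv"], "Hiring rose 8%.")
print(json.dumps(audit_log[-1]["source_docs"]))  # ["hr-2024-q2.csv"]
```

In practice these records would land in an append-only store, where the same log that supports bias audits also satisfies general traceability requirements.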


While the path to generative AI adoption may seem fraught with concerns, there are effective strategies to navigate these hurdles. By proactively addressing these issues, businesses can tap into the transformative potential of generative AI and bolster their operations and decision-making processes.


Image prompt: "A fantasy painting of a cute robot in galoshes jumping in a puddle on a wet, rainy day"

Keep in mind: Standing on the sidelines isn't an option. There is a clear path to navigate these hurdles (above) while still benefiting from the transformative power of this emerging technology. The capabilities and market are evolving quickly, so don’t get left behind. Follow the steps here to move confidently into the new frontier of GenAI.
