Weekly Dose of GenAI #8
Indy Sawhney
Generative AI Strategy & Adoption Leader @ AWS | Public Speaker | AI/ML Advisor | Healthcare & Life Sciences
Welcome to the eighth edition of the Weekly Dose of GenAI Adoption newsletter! This newsletter delves into the rapid integration of generative AI within the healthcare and life sciences sectors. Discover real-world insights that will empower you and your team to accelerate adoption of these transformative technologies in your organization. The newsletter is a compilation of this week's daily posts, packaged for an easy weekend read.
This week we dove into technical insights that are helping my customers scale GenAI adoption, as they start to deploy Generative AI as an Enterprise Platform.
Scaling Generative AI as an Enterprise Platform
In my experience, adopting Generative AI (GenAI) transcends a mere application choice; it represents a strategic enterprise-wide platform decision that demands careful consideration and alignment with organizational goals. As enterprises embark on their GenAI journey, it's vital to treat scaling efforts with the same strategic approach as previous enterprise-wide initiatives, like cloud adoption.
To successfully scale GenAI, consider forming a steering committee that brings together cross-functional stakeholders. This committee will oversee strategic planning, resource allocation, and change management, ensuring a holistic approach to adoption. Have you been part of a steering committee for enterprise-wide initiatives? If so, you will relate to this approach.
Looking back at cloud adoption a decade ago, we can draw parallels to today's GenAI landscape. Just as organizations needed to re-architect their IT infrastructure and embrace new workflows for cloud integration, scaling GenAI requires a similar mindset shift and strategic planning. We should start by looking at business processes.
Engage with stakeholders from various LoBs (lines of business) - clinical, IT, and business - to foster cross-functional collaboration and gather diverse input. Just as we had to explain the business benefits of cloud to everyone during cloud adoption, scaling GenAI warrants a similar motion. Educate & communicate. Don't make assumptions.
Adopt an agile approach to development, testing, and deployment, allowing for continuous learning and improvement as you scale GenAI applications. There is so much to be done. My preferred execution model is Scaled Agile Framework (SAFe) Program Increment planning. The goal is to align everyone on a shared vision, prioritize features, identify dependencies, and create a detailed plan for the upcoming Program Increment (PI), which is a time box of 8-12 weeks.
Lastly, don't forget the importance of data governance, quality and privacy. Address any challenges and comply with regulations to build trust and pave the way for successful adoption. I have witnessed numerous organizations postponing implementing a comprehensive data strategy and robust data governance practices during previous transformation initiatives, such as cloud adoption. I would strongly advise against repeating these oversights, as they can hinder long-term success and impede the realization of desired outcomes.
Scaling Generative AI: Technical Insights from Experience (Part 1)
As I've been working with customers on their Generative AI (GenAI) initiatives, I've noticed some familiar challenges surfacing - reminiscent of the early cloud adoption days. As enterprises are gearing up to launch their first few GenAI workloads in production, they want to get ahead of potential roadblocks for smooth scaling and adoption across lines of business (LoBs). Here are a few crucial considerations based on recent engagements:
Robust Identity and Access Management (IAM) has surfaced a few times already. IAM is key to ensuring users can securely access only the resources they need (LLMs, APIs, vector databases, data, etc.) - most LoBs don't openly share data and assets across business-unit boundaries. Additionally, tracking and allocating resource consumption for each LoB promotes fairness and manages costs across departments. Learning from the cloud adoption days, organizations want the usual FinOps best practices - metering usage, show-backs, and charge-backs - sorted before mass adoption.
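The metering piece can be sketched in a few lines. Below is a minimal, hypothetical example of per-LoB token metering that could back show-backs and charge-backs; the model names and per-1K-token prices are illustrative, not real pricing.

```python
from collections import defaultdict

# Hypothetical per-1K-token prices; real prices vary by model and provider.
PRICE_PER_1K_TOKENS = {"model-a": 0.01, "model-b": 0.002}

class UsageMeter:
    """Accumulates token usage per line of business (LoB)."""

    def __init__(self):
        # lob -> model -> total tokens consumed
        self._usage = defaultdict(lambda: defaultdict(int))

    def record(self, lob: str, model: str, tokens: int) -> None:
        self._usage[lob][model] += tokens

    def charge_back(self, lob: str) -> float:
        """Estimated cost attributable to one LoB, for charge-back reports."""
        return sum(
            tokens / 1000 * PRICE_PER_1K_TOKENS[model]
            for model, tokens in self._usage[lob].items()
        )

meter = UsageMeter()
meter.record("clinical", "model-a", 50_000)
meter.record("marketing", "model-b", 200_000)
```

In practice the `record` calls would be emitted by an API gateway in front of the LLM endpoints, so that every invocation is attributed to an authenticated LoB principal.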
Organizations are realizing early on the need for a flexible enterprise GenAI platform that supports both commercial and open-source large language models (LLMs) - one that lets users choose the best-suited model for each use case and select the most cost-performant LLM for the business need. Who wants to pay $$$ for a vacation-policy lookup? The ability to evaluate LLMs on performance, accuracy, and relevance is also emerging as an essential capability for enterprise customers, because enterprises are already getting humbled by the creative ways in which employees and LoBs are considering leveraging GenAI. So much variety warrants choice and flexibility!
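One way to make "choose the most cost-performant LLM" concrete is a simple routing rule: pick the cheapest model that clears the quality bar your evaluation harness has established for the task. The catalog below is entirely hypothetical - quality scores and costs would come from your own evaluations and vendor pricing.

```python
# Hypothetical model catalog: quality scores would come from your own
# evaluation harness; costs are illustrative, not real vendor pricing.
MODELS = [
    {"name": "small-oss-llm",  "quality": 0.72, "cost_per_1k": 0.0002},
    {"name": "mid-commercial", "quality": 0.85, "cost_per_1k": 0.003},
    {"name": "frontier-llm",   "quality": 0.95, "cost_per_1k": 0.03},
]

def pick_model(min_quality: float) -> str:
    """Return the cheapest model that clears the task's quality bar."""
    eligible = [m for m in MODELS if m["quality"] >= min_quality]
    if not eligible:
        raise ValueError("no model meets the quality bar")
    return min(eligible, key=lambda m: m["cost_per_1k"])["name"]
```

A vacation-policy lookup might set `min_quality=0.7` and land on the cheap open-source model, while a clinical summarization task with a 0.9 bar routes to the frontier model.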
Enterprises want to put the necessary guardrails in place for which prompts can be sent to the LLM and which outputs can be shared back with employees. As such, the platform's ability to assess prompt quality, ensure prompt safety, and enforce adherence to corporate guidelines is critical for a GenAI production launch. Likewise, thorough testing and validation automation is surfacing as a core capability on the path to production. To satisfy audit and compliance requirements, organizations are implementing Large Language Model Operations (LLMOps) practices for model management, tuning, deployment, and monitoring, which streamlines operations.
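As a rough illustration of input/output guardrails, here is a minimal deny-list check on prompts and responses. The patterns are illustrative only; a production platform would layer policy engines, PII detectors, and model-based classifiers on top of simple pattern matching.

```python
import re

# Illustrative deny-list patterns, not a real corporate policy.
BLOCKED_INPUT = [re.compile(p, re.I) for p in [r"\bssn\b", r"patient record"]]
BLOCKED_OUTPUT = [re.compile(r"\b\d{3}-\d{2}-\d{4}\b")]  # SSN-like strings

def check_prompt(prompt: str) -> bool:
    """Return True if the prompt may be sent to the LLM."""
    return not any(p.search(prompt) for p in BLOCKED_INPUT)

def check_response(text: str) -> bool:
    """Return True if the model output may be shown to the employee."""
    return not any(p.search(text) for p in BLOCKED_OUTPUT)
```

The same two checkpoints - before the prompt reaches the model and before the output reaches the user - are where the testing and validation automation mentioned above would hook in.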
Scaling Generative AI: More Technical Insights from the Field (Part 2)
Continuing our exploration of essential technical considerations (Part 1 - https://shorturl.at/CHFXb) for scaling Generative AI (GenAI) solutions, let’s draw on real-world experiences to highlight additional key factors:
Centralizing and managing reusable prompts streamlines workflows and ensures consistent quality. Large enterprises have redundant workflows, across LoBs, that serve similar needs. To avoid creating redundant solutions that become difficult and expensive to maintain, early adopters are already seeing the need for a prompt library that allows for quick exploration and adoption - think of it as an 'Enterprise App Store for Prompts', with some early adopters envisioning such libraries at the LoB (line of business) level.
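A prompt library can start very simply: named, versioned templates with an owning LoB, published to a central registry. This sketch uses hypothetical names and is only meant to show the shape of such a registry, not a specific product.

```python
from dataclasses import dataclass, field

@dataclass
class PromptTemplate:
    """A reusable, versioned prompt in the enterprise library."""
    name: str
    lob: str          # owning line of business
    version: int
    template: str
    tags: list = field(default_factory=list)

class PromptLibrary:
    def __init__(self):
        self._prompts = {}  # (name, version) -> PromptTemplate

    def publish(self, prompt: PromptTemplate) -> None:
        self._prompts[(prompt.name, prompt.version)] = prompt

    def latest(self, name: str) -> PromptTemplate:
        """Fetch the newest published version of a named prompt."""
        versions = [v for (n, v) in self._prompts if n == name]
        return self._prompts[(name, max(versions))]

lib = PromptLibrary()
lib.publish(PromptTemplate("summarize-soap-note", "clinical", 1, "Summarize: {note}"))
lib.publish(PromptTemplate("summarize-soap-note", "clinical", 2, "Summarize concisely: {note}"))
```

Versioning matters here: teams reusing a prompt get the vetted latest version, while audits can still reproduce exactly what an older version said.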
With continuous advancements in GenAI capabilities, early adopters are already starting to double down on Data Strategy. They want to find cost performant ways to provide support and access to multi-modal data types like text, images, audio, and video to internal consumers to address diverse use cases and tailor solutions to their specific needs. Early adopters are gravitating towards Data Lakes, supporting one of the Open Table Formats (Apache Iceberg being the most in demand). Facilitating easy access to high-quality data positively impacts model performance and accuracy.
Finally, early adopters are realizing that upskilling all employees (across business and technical units) takes time, so interim solutions and platforms are warranted to speed up adoption. Empowering domain experts with user-friendly tools that integrate with the enterprise GenAI platform helps accelerate adoption. Despite budget dilution and the need to support overlapping platforms in the short run, this approach lets organizations tap into their in-house talent to develop innovative solutions that increase productivity and automate business processes.
Scaling Generative AI: More Technical Insights from the Field (Part 3)
In our ongoing series on technical considerations for scaling Generative AI (GenAI) solutions (Part 1 - https://shorturl.at/CHFXb & Part 2 - https://shorturl.at/Lrhh5), we continue to draw on real-world experiences to highlight additional key factors for success:
Early adopters are wary of the reputational risk associated with scaling GenAI across their org. To mitigate that risk, maintain trust with internal stakeholders, and ensure regulatory compliance, they are architecting their GenAI platforms to protect sensitive corporate data, user prompts, and LLM outputs. As I mentioned in an earlier post (https://shorturl.at/zx1SB), you cannot rely on LLMs (Large Language Models) to manage security and access for you. LLMs may not always react to prompts as expected, leading to unintended consequences, so relying solely on prompts for consistency and end-user experience is insufficient. Early adopters are implementing robust security measures - such as encryption, access controls, prompt input/output guardrails, and regular audits - to safeguard data and prevent unauthorized access.
Also, enterprises with extensive structured and unstructured data repositories are facing an interesting budgetary and technical challenge as they scale GenAI adoption across their org! Allow me to explain - retrieval-augmented GenAI applications store embeddings of enterprise content in vector databases, enabling efficient storage and retrieval of vast amounts of data. As adoption scales, transferring existing data into vector databases without retiring the previous systems of record strains budgets and interferes with the established enterprise data strategy! Vector database cost is determined by data volume, query volume, compute resources, and data transfer costs, so selecting a cost-effective, high-performance vector database is crucial. As such, early adopters are learning that bringing the LLM model(s) to your data is more cost-efficient and effective than moving your data to the model - and you don't need to reinvent your existing data strategy :)
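A small sketch of the "bring the model to the data" idea: pre-filter candidates in the existing system of record (by LoB, date, document type), then embed and rank only that small set at query time, instead of bulk-loading every record into a vector database. The bag-of-words `embed` function below is a toy stand-in for a real embedding model.

```python
import math

VOCAB = ["vacation", "policy", "days", "drug", "trial", "enrollment"]

def embed(text: str) -> list:
    """Toy bag-of-words embedding; a real system would call an embedding model."""
    words = text.lower().split()
    return [words.count(w) for w in VOCAB]

def cosine(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, candidates: list) -> str:
    """Rank only a pre-filtered candidate set, embedded on the fly.

    The candidate set would come from a conventional query against the
    existing system of record, so no bulk data migration is needed.
    """
    q = embed(query)
    return max(candidates, key=lambda doc: cosine(q, embed(doc)))

docs = ["vacation policy allows 20 days", "drug trial enrollment criteria"]
```

Because only the filtered candidates are embedded, vector storage and transfer costs scale with query workload rather than with the full corpus - which is the budgetary point above.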
Scaling Generative AI: More Technical Insights from the Field (Part 4)
In our ongoing series on technical considerations for scaling Generative AI (GenAI) solutions (Part 1 - https://shorturl.at/CHFXb; Part 2 - https://shorturl.at/Lrhh5 & Part 3 - https://shorturl.at/QFSYU), we continue to draw on real-world experiences to highlight additional key factors for success:
At the forefront of GenAI transformation is RAG (Retrieval Augmented Generation). RAG frameworks enable the seamless integration of large language models (LLMs) with domain-specific knowledge bases, allowing for accurate and relevant information retrieval and generation.
While some RAG implementations cater to simpler tasks such as question-answering or summarization, others are designed to tackle complex, domain-specific challenges that warrant advanced RAG frameworks. To ensure maximum flexibility and prevent redundant efforts, early adopters are starting to realize the benefit of providing a library of RAG implementations for their teams to choose from, based on their specific use case. This approach empowers various lines of business (LoBs) to leverage the power of GenAI without duplicating efforts, fostering collaboration and knowledge sharing across the enterprise.
This library of 'ready to digest' RAG frameworks allows different teams with unique requirements and use cases to deploy and tailor their RAG setup. For instance, the customer support team at one of my customers needed a RAG framework optimized for quickly retrieving relevant knowledge base articles of approved messaging, and healthcare provider engagement history, to provide accurate and up-to-date responses. In contrast, their marketing team required a RAG setup that curates the right message in alignment with regional regulations, language, and customer insights to generate compelling content. By offering multiple RAG frameworks, the enterprise ensures each team has access to the most relevant solution and can fine-tune it for the desired outcome, leading to improved efficiency, accuracy, and overall performance across business functions.
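A RAG library like this can be as simple as a registry of team-specific retrievers behind one shared pipeline. The sketch below is hypothetical: `call_llm` stands in for your model endpoint, and the retrievers return placeholder strings where real knowledge-base or customer-insight searches would go.

```python
from typing import Callable

# Registry mapping a team name to its retriever implementation.
RETRIEVERS: dict = {}

def register(name: str):
    """Decorator that publishes a retriever into the shared RAG library."""
    def wrap(fn: Callable):
        RETRIEVERS[name] = fn
        return fn
    return wrap

@register("support")
def support_retriever(query: str) -> list:
    # Would search approved KB articles and provider engagement history.
    return [f"approved article for: {query}"]

@register("marketing")
def marketing_retriever(query: str) -> list:
    # Would filter by regional regulations, language, customer insights.
    return [f"regional guidance for: {query}"]

def rag_answer(team: str, query: str, call_llm=lambda p: f"LLM({p})") -> str:
    """Shared pipeline: retrieve with the team's retriever, then generate."""
    context = "\n".join(RETRIEVERS[team](query))
    return call_llm(f"Context:\n{context}\n\nQuestion: {query}")
```

Each LoB swaps in its own retriever while the prompt assembly, guardrails, and LLM invocation stay common - which is how the library prevents duplicated effort.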
Here are a few RAG frameworks being actively deployed by early adopters - Fusion-in-Decoder (Fusion-RAG), Retrieval-Augmented Language Model Pre-training with Knowledge Graphs (Rank-RAG), Document Reader (Dr-RAG), Memory-Augmented RAG Models. I will dive deeper into the details of these RAG frameworks in a future post.
Share your thoughts on GenAI Adoption for HCLS below!
Subscribe to this newsletter on GenAI adoption - Don't miss this essential update on the transformative impact of generative AI in the healthcare and life sciences industry. Each week, we dive into various adoption strategies and use cases, from AI-powered marketing to accelerating drug discovery. Learn about cutting-edge GenAI technology trends, including Amazon Bedrock solutions and novel design patterns. Discover how leading healthcare organizations are harnessing the power of large language models to unlock insights from contract data and enhance customer service.
Indy Sawhney - Follow me on LinkedIn
#genai #ai #aws