Weekly Dose of GenAI #11
Indy Sawhney
Generative AI Strategy & Adoption Leader @ AWS | Public Speaker | AI/ML Advisor | Healthcare & Life Sciences
Welcome to the 11th edition of the Weekly Dose of GenAI Adoption newsletter! This newsletter delves into the rapid integration of generative AI within the enterprise. Discover real-world insights that will empower you and your team to accelerate adoption of these transformative technologies in your organization. The newsletter is a compilation of daily posts from this week, packaged for an easy weekend read.
This week we covered: unleashing human potential through AI collaboration; the importance of change management, ML/LLM-Ops, and cost optimization for enterprises as they scale their GenAI POCs to production; how we come to trust what an LLM tells us; combining index and semantic search for optimal GenAI results; and the case for a GenAI enterprise platform.
Unleashing Human Potential through AI Collaboration
I witnessed an interesting discussion this weekend at a social gathering about the growing use of AI at work, especially in healthcare and life sciences. Not surprisingly, people had pretty strong and differing views on whether AI is ready for prime time and how much we should rely on it versus a human doctor's judgment and expertise. Both pros and cons were debated. The limits of healthcare providers' experience and knowledge in dealing effectively with regionally endemic diseases, along with the recent opioid crisis, were brought up as examples of where humans have demonstrated judgment gaps. At the same time, arguments were raised about how much you can trust the underlying training data used to train AI models, and what happens if that data is not balanced or carries inherent biases that skew the outcomes. It just goes to show the complex issues and real concerns we need to think through carefully as this revolutionary technology evolves.
In my opinion, while AI excels in processing vast amounts of data, identifying patterns, and automating repetitive tasks, humans bring unparalleled creativity, emotional intelligence, intuition, and the ability to navigate complex social dynamics. By combining these strengths, we can help foster groundbreaking innovation, enable personal growth, and amplify our collective capabilities.
The partnership between humans and AI isn't about replacement, but combining our unique capabilities to push boundaries. In the end, it is all about the human, not the AI.
Moving a GenAI POC to PROD without accounting for change management, ML/LLM-Ops, and cost optimization feels like Tony Stark being slammed into the wall during the early development stages of his suit (and it is not as funny as the movie clip).
Having watched some early adopters go through the launch journey for their first GenAI initiative, I have seen them (and myself) get humbled by the learnings. Here is what I have learnt:
The biggest effort/cost of moving a GenAI application to PROD is organizational change management. This includes employee training, setting expectations on what your first GenAI application will (and, more importantly, will not) do, coaching employees on prompt engineering, measuring KPIs before and after GenAI to quantify ROI, communication strategy and messaging, executive sponsorship, etc.
Second, LLMs are sensitive to changes in prompts and context (user queries). The output of your GenAI application will change with minor edits, and unless you have automated your pipeline for LLM selection, prompt engineering, testing, and observability, you will burn cycles on feedback from the field after launch. Brace for follow-up questions like 'why did the application give this output and not that one?', 'how did the model decide which document to share?', and 'I think the application is hallucinating'. Many of these are curiosity-driven questions, and you can get ahead of them by planning for end-user training, labs, etc. before the move to PROD. Some of our customers had thought this through and trained ambassadors across LoBs (lines of business) to field such requests in structured office hours.
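To make the "automate testing before launch" point concrete, here is a minimal prompt-regression sketch. It assumes a hypothetical invoke_llm helper and a small "golden" prompt set; it is not any specific vendor's API, just one way to catch drift when a prompt template or model version changes.

```python
# Minimal prompt-regression sketch; invoke_llm is a hypothetical stand-in for your model call.
from dataclasses import dataclass

@dataclass
class RegressionCase:
    prompt: str                  # prompt template filled with a representative user query
    must_contain: list[str]      # phrases the answer is expected to include
    must_not_contain: list[str]  # phrases that would indicate drift or hallucination

def invoke_llm(prompt: str) -> str:
    # Stubbed out: replace with a call to your model endpoint or LLM gateway.
    return "Our PTO policy uses a monthly accrual schedule for all full-time employees."

def run_regression(cases: list[RegressionCase]) -> list[str]:
    """Re-run the golden prompt set and report which cases drifted after a prompt or model change."""
    failures = []
    for case in cases:
        answer = invoke_llm(case.prompt).lower()
        if any(p.lower() not in answer for p in case.must_contain):
            failures.append(f"Missing expected content for: {case.prompt}")
        if any(p.lower() in answer for p in case.must_not_contain):
            failures.append(f"Unexpected content for: {case.prompt}")
    return failures

if __name__ == "__main__":
    golden_set = [
        RegressionCase(
            prompt="Summarize our PTO policy for a new employee.",
            must_contain=["accrual"],
            must_not_contain=["i don't have access"],
        ),
    ]
    for failure in run_regression(golden_set):
        print(failure)
```

Running a suite like this on every prompt or model change is far cheaper than fielding "why did the answer change?" questions after launch.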
Third, the cost of a GenAI application can bring surprises; optimizing cost is an ongoing process, and optimization warrants changes across the complete pipeline and architecture. The cost of a GenAI application is not just LLM inference cost, but a combination of the LLM model (commercial vs. open source), vector DB (data volume, query volume, compute), latency, data transfer, caching, etc. Having ML/LLM-Ops sorted during development allows teams to continuously tweak the pipeline to bring the cost down and keep it in check.
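A back-of-the-envelope model helps make these cost drivers visible. The sketch below adds up the main components named above; all unit prices are illustrative placeholders, not actual vendor pricing.

```python
# Back-of-the-envelope monthly cost sketch for a RAG application.
# All unit prices below are illustrative placeholders, not real pricing.

def monthly_genai_cost(
    queries_per_month: int,
    input_tokens_per_query: int,
    output_tokens_per_query: int,
    price_per_1k_input_tokens: float,
    price_per_1k_output_tokens: float,
    vector_db_monthly: float,      # storage + compute for the vector store
    data_transfer_monthly: float,  # egress between services/regions
    cache_hit_rate: float = 0.0,   # fraction of queries served from cache (no inference)
) -> float:
    billable_queries = queries_per_month * (1 - cache_hit_rate)
    inference = billable_queries * (
        input_tokens_per_query / 1000 * price_per_1k_input_tokens
        + output_tokens_per_query / 1000 * price_per_1k_output_tokens
    )
    return inference + vector_db_monthly + data_transfer_monthly

# Example: caching 30% of queries noticeably changes the picture.
print(monthly_genai_cost(
    queries_per_month=200_000,
    input_tokens_per_query=2_000,      # prompt + retrieved context
    output_tokens_per_query=400,
    price_per_1k_input_tokens=0.003,   # placeholder
    price_per_1k_output_tokens=0.015,  # placeholder
    vector_db_monthly=1_500.0,
    data_transfer_monthly=200.0,
    cache_hit_rate=0.30,
))
```

Even a crude model like this makes it obvious that prompt length, caching, and vector DB sizing deserve as much attention as the per-token price of the model.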
Leaders who have been through a digital transformation journey in the past anticipated these needs better than those going through an enterprise transformation journey for the first time. Some of my customers leveraged a GSI or ProServe to help plan for this transformation and minimized the number of surprises in their journey.
While boards and the C-suite explore the opportunity to improve their EBITDA through accelerated GenAI adoption, some leaders are starting to ask an important but complex question about the trustworthiness of large language models (LLMs): how do we come to trust what the LLM tells us?
While debates continue on how best to address this concern, I would recommend starting by assessing the level of transparency provided by the LLM vendor or your cloud provider. Understand the model's architecture, training data, and objectives. Leverage LLM evaluation tools and frameworks that allow you to evaluate LLMs for explainability, accuracy, toxicity, semantic robustness, factual knowledge, and prompt stereotyping.
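As a rough illustration of what such an evaluation loop looks like, here is a minimal sketch that scores a candidate model on two of the dimensions above. The score_accuracy and score_toxicity functions are naive, hypothetical stand-ins; in practice they would come from an evaluation framework or your own labeled test sets.

```python
# Minimal LLM evaluation sketch; scorers are hypothetical stand-ins for real metrics.
from statistics import mean

def score_accuracy(answer: str, reference: str) -> float:
    # Naive token-overlap stand-in for a factual-knowledge / accuracy metric.
    ans, ref = set(answer.lower().split()), set(reference.lower().split())
    return len(ans & ref) / max(len(ref), 1)

def score_toxicity(answer: str) -> float:
    # Stand-in for a toxicity classifier; 0.0 is clean, 1.0 is toxic.
    blocklist = {"hate", "stupid"}
    return 1.0 if any(word in answer.lower() for word in blocklist) else 0.0

def evaluate_model(generate, test_set: list[dict]) -> dict:
    """generate(prompt) -> answer; test_set items carry 'prompt' and 'reference'."""
    answers = [generate(item["prompt"]) for item in test_set]
    return {
        "accuracy": mean(score_accuracy(a, item["reference"]) for a, item in zip(answers, test_set)),
        "toxicity": mean(score_toxicity(a) for a in answers),
    }

# Usage: wrap the candidate LLM in a callable and pass it in.
report = evaluate_model(lambda p: "Paris is the capital of France.",
                        [{"prompt": "What is the capital of France?", "reference": "Paris"}])
print(report)
```

The value is less in any single score and more in running the same test set across candidate models so comparisons are apples to apples.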
Since each LoB (line of business) may be working through its own GenAI programs, it would be helpful if the LLM evaluation process can be automated by building an LLMOps pipeline that can be executed by various teams within your enterprise. If your organization already has an AI Council, loop it in for governance and oversight of these evaluations, and perhaps even publish a list of approved LLMs for each LoB that aligns with the firm's responsible and ethical AI policy.
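One small but useful pipeline step that falls out of an approved-model list is a pre-deployment gate. The sketch below is purely illustrative: the APPROVED_MODELS mapping, LoB names, and model IDs are all hypothetical placeholders for whatever your AI Council publishes.

```python
# Sketch of a deployment gate that checks a requested model against an AI Council approved list.
# The mapping, LoB names, and model IDs are hypothetical.
APPROVED_MODELS = {
    "clinical-ops": {"model-a-v2", "model-b-v1"},
    "marketing":    {"model-b-v1", "model-c-v3"},
}

def gate_model_choice(lob: str, model_id: str) -> None:
    """Fail the pipeline before deployment if an LoB tries to ship an unapproved model."""
    approved = APPROVED_MODELS.get(lob, set())
    if model_id not in approved:
        raise ValueError(
            f"{model_id!r} is not approved for LoB {lob!r}; approved: {sorted(approved)}"
        )

gate_model_choice("marketing", "model-c-v3")  # passes silently; an unapproved model would raise
```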
Here are two blogs that dive deeper into the topic: https://lnkd.in/eWzwSFvp and https://lnkd.in/e5yErV4a. I am sure there are other commercial and open-source LLM evaluation frameworks that can be leveraged as well.
Combining Index and Semantic Search for Optimal GenAI Results
Over a casual conversation with a colleague (Todd C. Sharp, MSci) yesterday, we started discussing how cool it would be if we could have old-fashioned index search results integrate with semantic search in Generative AI (GenAI) applications. Come to think of it, as knowledge workers and end users of search engines like Google, Yahoo, and others, we have been trained to use keyword search for knowledge extraction at work and on the internet for the past two decades.
I researched whether this pattern was being adopted by any of the early adopters and was pleasantly surprised to learn that Perplexity.ai (https://lnkd.in/eYhffbkm) is already doing so, i.e. enhancing the generation of relevant and accurate outputs by leveraging the interplay between index search and semantic search. Here is why this makes sense:
Index search relies on keywords and retrieves information based on exact matches. Semantic search, on the other hand, focuses on the meaning and context of search queries, enabling more nuanced and relevant results. By integrating both methods, GenAI applications can provide a more comprehensive and accurate output.
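A common way to integrate the two is to run both retrievers and fuse their rankings. The sketch below uses reciprocal rank fusion (RRF); the keyword_search and semantic_search functions are toy placeholders for a real inverted index (e.g., BM25) and a vector store, so treat this as a shape of the solution rather than an implementation.

```python
# Minimal hybrid-retrieval sketch: fuse keyword and semantic rankings with reciprocal rank fusion.
# keyword_search / semantic_search are toy placeholders for a real index and vector store.

def keyword_search(query: str, docs: dict[str, str], k: int = 5) -> list[str]:
    # Toy exact-match scorer standing in for an inverted-index / BM25 lookup.
    terms = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: -len(terms & set(docs[d].lower().split())))
    return ranked[:k]

def semantic_search(query: str, docs: dict[str, str], k: int = 5) -> list[str]:
    # Placeholder ranking; in practice, rank by embedding similarity from your vector DB.
    return list(docs)[:k]

def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    # Reciprocal rank fusion: score(doc) = sum over rankings of 1 / (k + rank).
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

docs = {"doc1": "opioid prescribing guidelines", "doc2": "regional endemic disease protocols"}
fused = rrf_fuse([keyword_search("opioid guidelines", docs), semantic_search("opioid guidelines", docs)])
print(fused)  # the fused ranking is what feeds the RAG context window
```

RRF is attractive here because it needs no score normalization between the two retrievers; documents that both methods agree on naturally rise to the top.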
Over the past two decades, data curation has primarily focused on index searches and keywords. This may present a challenge for GenAI applications relying on semantic search, as they require a deep understanding of context and relationships between data points. To optimize the performance of GenAI RAG applications, organizations should consider adapting their data curation practices to include semantic connections and rich metadata.
So, what can we do about this, and how do we need to evolve as end users and knowledge workers?
First, evolve data curation processes to incorporate semantic information and contextual relationships, enabling more effective semantic search in GenAI applications. Capture the context and meaning within the artifact (a minimal metadata-enrichment sketch follows this list). This will take time, but we will benefit by getting intentional about it.
For your GenAI applications, enhance your RAG implementation to combine index and semantic search methods, leveraging the strengths of both approaches, as sketched above.
Finally, continuously assess the performance of GenAI applications and refine search strategies as needed to ensure the best possible outcomes. One way to do so is to observe how your enterprise end users use your enterprise wikis, SharePoint, and other knowledge management portals to extract relevant information, and check whether they can access the same knowledge via GenAI applications. As always, it helps if you have an automated ML/LLM-Ops pipeline to run repeat tests as you fine-tune your search architecture and implementation.
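On the data curation point above, here is a minimal sketch of what "capturing context and relationships" can look like before a document is chunked and indexed. The field names and the tag_topics helper are hypothetical; adapt them to your own curation workflow and taxonomy.

```python
# Sketch of enriching a document with semantic metadata before chunking and indexing.
# Field names and the tag_topics helper are hypothetical.
from datetime import date

def tag_topics(text: str) -> list[str]:
    # Stand-in for an LLM- or taxonomy-based topic tagger.
    vocabulary = {"oncology", "pricing", "onboarding"}
    return [topic for topic in vocabulary if topic in text.lower()]

def curate(doc_id: str, text: str, owner: str, related_docs: list[str]) -> dict:
    """Attach context and relationships so semantic search has more to work with than raw text."""
    return {
        "doc_id": doc_id,
        "text": text,
        "owner": owner,                # who can answer follow-up questions about this artifact
        "topics": tag_topics(text),    # semantic handles for filtering and retrieval
        "related_docs": related_docs,  # explicit relationships between data points
        "curated_on": date.today().isoformat(),
    }

record = curate("wiki-123", "Onboarding checklist for new oncology reps", "field-ops", ["wiki-120"])
print(record["topics"])
```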
The Case for a GenAI Enterprise Platform
After 16+ months of Generative AI (GenAI) POCs, organizations are now moving GenAI workloads to production. As organizations increasingly leverage GenAI technologies to drive innovation, the need for a comprehensive GenAI enterprise platform becomes evident. Beyond arresting budget dilution, such a platform can enable organizations to harness the full potential of GenAI while addressing critical aspects like reusability, governance, security, reliability, metering, and LLMOps/MLOps, resulting in faster time-to-market and lower run costs. Let's dive into the benefits of an enterprise GenAI platform.
First and foremost, a centralized GenAI enterprise platform will arrest budget dilution. Large enterprises are working with multiple vendors across LoBs as they race to be first to bring GenAI applications to their end users. They are adopting tools, platforms, LLMs, open-source libraries, and GSIs that duplicate effort. While divergence is a great tactic during the experimentation phase, it costs a lot and is not sustainable in the long run. One of my early adopters has the best mental model on this journey: they are allowing for divergence as long as everyone agrees to an end date and a plan to converge, within a reasonable time frame, to central IT policies and roadmap.
Next, a GenAI enterprise platform will encourage the development and sharing of reusable GenAI components, reducing duplication of effort and fostering collaboration across teams. It allows for up-skilling of existing resources, develops the talent pool, and hardens processes to support applications in production.
A GenAI enterprise platform will also help enforce a centralized governance framework that promotes responsible GenAI app development, adhering to regulations, responsible AI policies, data security, privacy, and ethics. It will allow reliability engineers to enable high availability, scalability, and performance, and it will help centralize LLMOps/MLOps to streamline LLM deployment, evaluation, monitoring, and continuous improvement (not to mention restrict access to unapproved models).
One of the big lessons learnt from the cloud transformation journey of the last decade is the need to meter consumption of resources across the enterprise: show-backs and charge-backs. Adopting centralized metering capabilities, budgets, and tagging will provide visibility into GenAI resource consumption, allowing organizations to monitor usage, optimize costs, and allocate resources efficiently.
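To make the show-back idea concrete, here is a minimal sketch that rolls up GenAI usage by cost-center tag. The usage-record schema and the flat per-token price are hypothetical; in practice the records would come from your LLM gateway logs or billing export, and pricing would vary by model.

```python
# Sketch of tag-based show-back: roll up GenAI usage records by cost-center tag.
# The record schema and flat price are hypothetical placeholders.
from collections import defaultdict

def showback(usage_records: list[dict], price_per_1k_tokens: float) -> dict[str, float]:
    """Return estimated spend per cost-center tag so each LoB sees what it consumed."""
    totals: dict[str, float] = defaultdict(float)
    for record in usage_records:
        tag = record.get("cost_center", "untagged")  # untagged usage is itself a governance signal
        totals[tag] += record["tokens"] / 1000 * price_per_1k_tokens
    return dict(totals)

records = [
    {"cost_center": "clinical-ops", "tokens": 1_200_000},
    {"cost_center": "marketing", "tokens": 300_000},
    {"tokens": 50_000},  # missing tag shows up as "untagged"
]
print(showback(records, price_per_1k_tokens=0.01))
```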
Not only will a GenAI enterprise platform help avoid the problems we already conquered during cloud adoption, it will also accelerate time to market and lower run cost, allowing organizations to quickly adapt to market changes and capitalize on new opportunities.
Share your thoughts on GenAI adoption below!
Subscribe to this newsletter on GenAI adoption - Don't miss this essential update on the transformative impact of generative AI in the healthcare and life sciences industry. Each week, we dive into various adoption strategies and use cases, from AI-powered marketing to accelerating drug discovery. Learn about cutting-edge GenAI technology trends, including Amazon Bedrock solutions and novel design patterns. Discover how leading healthcare organizations are harnessing the power of large language models to unlock insights from contract data and enhance customer service.
Indy Sawhney - Follow me on LinkedIn
#genai #ai #aws