Chapter 3: LLM Lifecycle, Installing an LLM, LLM Ops
Maria Pere-Perez
Databricks | Sr Director, AI Technology Partners | LinkedIn Top Voice
AI is a whole new world, and there’s a whole new dictionary to go with it. Subscribe to my newsletter “The ABC’s of AI” to receive simple definitions of new buzzwords every week. Last week, I explained Generative AI, Foundation Models, and LLMs.
Today's words are: LLM Lifecycle, Installing an LLM, and LLM Ops.
I spent this week at the Everyday AI conference, Dataiku’s big event in New York. There were a few thousand people there.
Databricks was a Platinum sponsor at the event. The audience was hungry. And not just for the delicious food next to our large booth. They were hungry for knowledge. Folks came to us asking about Generative AI, specifically LLMs. They wanted to know: how can they deploy their own LLM?
Many of the people who came to our booth were not data scientists. There were a lot of business analysts and data engineers who were new to AI.
So, in this post, I break down the processes for deploying an LLM:
(Hint: These are not the same things. Before reading on, please make sure to read my previous blog about what an LLM is.)
LLM (Large Language Model) Lifecycle
An LLM lifecycle is like the “life story” of a big chatbot. It's about how to create it, put it to work, and keep it running smoothly.
Let’s use growing a plant as an analogy for the LLM Lifecycle. These are the steps:
That’s it! This is all you need to know for developing and deploying an LLM. But let’s cover a couple of additional words you might hear.
What is Inference?
Inference in the plant analogy can be thought of as "The Fruit-Bearing Plant".
You’ve gone through the processes of planting, growing, and caring for the plant. (In other words, you've created and deployed the LLM). The plant (or model) produces fruit (aka answers/predictions) when it receives sunlight and water (input data).
So, every time someone asks the LLM a question, it’s like the sunlight hitting the leaves of the plant. And the LLM, in turn, produces a fruit, which is the answer to the question. This process of producing an answer (fruit), based on the input (sunlight and water), is inference!
Put simply, inference is when our fully-grown LLM comes up with answers to the questions it receives.
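To make this a little more concrete, here is a toy sketch of inference in Python. The `trained_model` function below is a made-up stand-in, not a real LLM: a real model would run a forward pass through billions of learned weights. But the shape of inference is the same either way: input goes in, a prediction comes out, and no new learning happens.

```python
# A toy stand-in for a fully-grown LLM (the "fruit-bearing plant").
# In a real system, this function would be a forward pass through
# a trained neural network. Here it just maps known prompts to answers.
def trained_model(prompt: str) -> str:
    answers = {
        "What is the capital of France?": "Paris",
        "What is 2 + 2?": "4",
    }
    return answers.get(prompt, "I don't know (yet).")

# Each call below is one act of inference: sunlight in, fruit out.
print(trained_model("What is the capital of France?"))  # Paris
print(trained_model("What is 2 + 2?"))                  # 4
```

The key point: the model's "knowledge" was fixed during training. Inference only uses that knowledge; it never changes it.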
What is Model Serving?
Model serving in the plant analogy is like "Setting Up a Fruit Stall". Once your plant grows and starts producing fruit (inference), you can share these fruits with others.
Setting up a fruit stall is a way to present your fruits to the public, allowing them to easily access and taste them. Similarly, after training a chatbot (or LLM), model serving is how you make it available to others. It's the infrastructure or system that lets people send questions and get answers from your chatbot.
Like a fruit stall, a chatbot needs a good location and a way to display and distribute information. Model serving makes sure that your chatbot is easy to use, responsive, and efficient.
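As a rough sketch of what "setting up the stall" means in practice, the snippet below wraps a toy model (a made-up stand-in, not a real LLM) behind a small HTTP endpoint using only Python's standard library. Real serving platforms add scaling, authentication, and monitoring on top of this same basic idea: a network endpoint that accepts a question and returns the model's answer.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

# Toy "model": stands in for a real LLM's inference function.
def toy_model(question: str) -> str:
    return f"You asked: {question!r}. (A real LLM would answer here.)"

# The "fruit stall": an HTTP handler that exposes the model to customers.
class ModelHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"answer": toy_model(payload["question"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Open the stall on a free local port.
server = HTTPServer(("127.0.0.1", 0), ModelHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A "customer" walks up to the stall: send a question, get an answer.
req = Request(
    f"http://127.0.0.1:{server.server_port}/",
    data=json.dumps({"question": "What is inference?"}).encode(),
    headers={"Content-Type": "application/json"},
)
response = json.loads(urlopen(req).read())
server.shutdown()
print(response["answer"])
```

Notice that the customer never sees the model itself, only the stall: that separation is what model serving provides.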
Installing an LLM
A lot of people get confused. Installing an LLM is not the same thing as an LLM lifecycle.
Using the plant analogy, installing an LLM is more like "Transplanting a Mature Plant to Your Garden." You take a plant that someone else has already grown (a pretrained model) and move it into your own environment. It becomes part of your garden and continues to grow.
What is LLM Ops?
LLM Lifecycle and LLM Ops are often used interchangeably, but they are different. And it’s confusing, because they have some of the same components.
Think of LLM Ops as methods and tools that gardeners can use to help plants grow. It's about managing LLMs in a production environment. These are the steps:
LLM Ops is a newer field that is gaining importance as LLMs are used more widely.
Model Drift
Imagine planting a pretty flower in your garden. You hope it will bloom in a certain way, based on what you've seen and how you're taking care of it. Everything's going well; the flower is blooming as you expected. But suddenly, the environment starts to change. Shit happens. Maybe it's too hot. Or maybe there was a radioactive chemical spill....
As the environment changes, your flower may start to look different than you expected. Your flower's behavior has changed, with the change in conditions. In machine learning, we call this model drift.
When you use an LLM, remember it was trained on specific data, and it is expected to perform well on data like that. But if the actual data it sees is different from what the model learned from, its performance might worsen. Model drift happens when shit happens.
Data scientists must monitor, retrain, and redesign models to handle model drift, just like how a gardener adjusts care for plants. So, it's not "plant it and forget it". It's about ongoing care and adaptation!
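Here is a tiny, made-up illustration of that monitoring step: compare a simple statistic of the data the model was trained on against the data it sees in production, and raise a flag when the two diverge. Real drift detection uses much richer statistics and tooling, but the gardener's idea is the same: notice that conditions have changed before the flower wilts.

```python
import statistics

# Toy drift check: compare the average length of questions the model
# was trained on against the questions it sees in production.
training_lengths = [32, 41, 38, 35, 40, 36]       # words per question at training time
production_lengths = [95, 110, 102, 98, 105, 99]  # words per question this week

train_mean = statistics.mean(training_lengths)
prod_mean = statistics.mean(production_lengths)

# Flag drift when the production average strays more than 50% from training.
# (The 50% threshold is arbitrary here, chosen just for illustration.)
drift_detected = abs(prod_mean - train_mean) / train_mean > 0.5

print(f"training mean: {train_mean:.1f}, production mean: {prod_mean:.1f}")
print(f"drift detected: {drift_detected}")
```

When the flag fires, that's the gardener's cue to step in: retrain the model on fresher data, or redesign it for the new conditions.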
Summary
In conclusion, deploying and managing an LLM is much like cultivating and caring for a garden.
The journey of an LLM involves many steps. First, you decide what the chatbot's purpose will be, like choosing a plant to grow. Then, you collect and refine the data it needs, such as selecting seeds and preparing soil. Finally, you monitor the chatbot's health and make adjustments as needed, like pruning and nurturing plants. This journey is complex but fulfilling.
We used a plant analogy to help explain ideas like inference and model serving. Inference is about harvesting fruit (answers) from your plant. And model serving is about making your fruits available to the general public.
LLMs may seem intimidating, but learning about their lifecycle and operations can make them less mysterious. This knowledge can lead to better and more responsible AI applications. In the same way that a garden needs a gardener's care, a successful LLM needs regular monitoring and maintenance to be relevant and efficient.
Epilogue
Gotta say something about my employer. Not because I have to, but because I love them.
Databricks just announced that everyone can deploy private LLMs using Databricks Model Serving.
Databricks Model Serving is like this incredible all-in-one gardening toolkit. With this toolkit, planting and nurturing LLMs becomes a breeze! Some highlights:
About the author: Maria Pere-Perez
The opinions expressed in this article are my own. This includes the use of occasional swear words, analogies, and humor. I currently work as the Director of ISV Technology Partnerships at Databricks. However, this newsletter is my own. Databricks did not ask me to write this, and they do not edit any of my personal work. My role at Databricks is to manage partnerships with AI companies, such as Dataiku, Pinecone, LangChain, Posit, MathWorks, and Plotly. In this job, I'm exposed to a lot of new words and concepts. I started writing down new words in my diary. And then I thought I’d share it with people. Click "Subscribe" at the top of this blog to learn new words with me each week.