Spotlight on Databricks RAG Tools, Vector Search, Feature & Function Serving
I use doggie photos, because Retrieval ~ Retriever.... well, kinda!

AI is a whole new world, and there’s a whole new dictionary to go with it. To read my future articles, join my network by clicking 'Follow'.

--------------------------------------------------------------------------------

Databricks announced 3 things last week:

  1. Databricks is rolling out a suite of RAG tools for its Data Intelligence Platform. The tooling helps customers create and manage high-quality LLM apps across all kinds of business use cases.
  2. Public Preview: Databricks Vector Search. Databricks Vector Search is a powerful tool that lets you search through a wide range of unstructured data like text, images, and videos. This is like a Google search for your own unstructured data.
  3. Public Preview: Databricks Feature & Function Serving. This provides real-time access to structured data. It's like a fast helper that gets your data ready for AI use, using Python.

This blog will summarize all 3 announcements.

RAG means Retrieval Augmented Generation.

But first, let's remind ourselves…. What is RAG again??? Ah yes! About 3 months ago, I wrote my first blog. Chapter 1: RAG (Retrieval Augmented Generation), Vector Search, Vector DB. It was the first in my newsletter series called, The ABC's of AI: Breaking Down The Buzzwords.

Here’s a quick summary of what I wrote: RAG is a cool technique used in AI to make responses from LLMs (like ChatGPT) more accurate. You know how sometimes ChatGPT can make up shit (kind of like a politician avoiding a question)? RAG helps reduce that by looking up information first before answering.

A RAG app combines two key processes: 1.) Fetching relevant information (retrieval) and then 2.) Generating a response based on that information (generation). In short, you look up the data, and then you give an answer based on that data.

RAG app combines "retrieving" (fetching) good info, and then "generating" a response.

Here's how it works: Say you ask a chatbot a question. The RAG app will first do a quick search (like through Google or your own data) to find facts. Then, it uses this info to give you a better answer.

Databricks has a sophisticated approach for RAG. It incorporates your real-time proprietary data into LLM applications. (Proprietary data means data specific to your company and your customers. It could be your customer records, product inventory, support manuals, etc…)

Using your own data lets your customer-facing apps respond with greater precision. In other words, the customer doesn’t get a canned response. The customer gets a real answer. Answers are tailored specifically for each customer and their situation.

Databricks blends your business data (small balls) with Large Language Models (big ball).

Think of a chatbot that doesn't just reply to your questions. But it also remembers what you talked about before, what you ordered, etc…. This makes the chatbot's answers feel a lot more personal and right for you.

Businesses using RAG applications can offer personalized attention to customers. This enhances the customer experience by providing context-awareness and insightfulness. Interactions are more meaningful and satisfying. And this sets a new standard for customer engagement.

Introducing the Databricks suite of RAG tools

Databricks offers a really awesome set of RAG tools.

  1. Vector Search. This is a powerhouse for semantic search. It enables efficient search of unstructured data such as texts, images, and videos. This is like having your own personal Google for searching through all your enterprise data in your Lakehouse. If you have many documents, images, and videos, this tool helps you find what you want.
  2. Feature & Function Serving. This offers low-latency access to structured data in real-time. This is like a super-fast way for RAG apps to ask questions and get immediate answers about specific data. (Queries in milliseconds). You can get immediate answers because it gives you access to pre-computed data (ready-made insights or calculations). And you can perform real-time data transformations.
  3. Fully Managed Foundation Model. Pay-per-token base LLMs? Yes, please! This refers to ready-to-use Foundation Models that you can use and pay for based on how much you use them. This makes using these advanced AI models more flexible and cheaper.
  4. Lakehouse Monitoring. This is a quality monitoring interface. It watches over how well the RAG apps are working in real-world situations and helps make sure the apps are doing their job with precision and speed. It watches for errors and ensures everything is running smoothly and securely. It does data quality checks, making sure the data is accurate, up-to-date, and usable. It does performance monitoring and identifies any slowdowns or bottlenecks. It ensures that data access is compliant. And it tracks user activity to detect anything unusual.
  5. LLM Development Tools. They help you test, compare, and fine-tune different LLMs to see which ones work best. It's like having a sandbox and control center for working with AI. It includes AI Playground, i.e. cool features for playing around with different AI models. It also includes a Model Serving Architecture. This is for setting up, managing, and asking questions to any kind of LLM.

A closer look at Vector Search

Vector Search makes it easy to sort through unstructured data.


Databricks' 2nd blog last week was all about Vector Search. It's a deep dive into one of the tools in the Databricks RAG suite.

I also talked about Vector Search three months ago. Chapter 1: RAG (Retrieval Augmented Generation), Vector Search, Vector DB. If you haven’t read it yet, it’s well worth a read.

Databricks Vector Search acts like your personal Google for enterprise data. It's super easy to search through loads of unstructured data like texts, images, and videos. It's a mighty tool in the Data Intelligence Platform. It helps you quickly sift through all your documents, pictures, and videos to find exactly what you need.

How does Databricks Vector Search work?

Here is a simple explanation of how Databricks Vector Search works.

  • Turning Data into Vectors (or Embeddings). Imagine you turn words or information into a list of numbers (called vectors or embeddings). These numbers help the computer understand how similar or different words are. When you search for 'dog', the AI will find similar words like 'puppy' because their lists of numbers are close together.
  • Finding Similar Stuff. The AI then looks at how close the numbers (vectors) of your search word are to other words or information. It uses a special method (like cosine similarity) to figure out which ones are most alike. There are smart ways to do this quickly and accurately. A tiny code sketch of this idea follows below.
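To make the 'dog' vs. 'puppy' idea concrete, here is a minimal sketch in plain Python. The tiny hand-typed vectors and the helper function are purely illustrative; a real system would use an embedding model that produces vectors with hundreds or thousands of dimensions.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Return how similar two vectors are (1.0 = same direction, ~0.0 = unrelated)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-dimensional "embeddings" -- real embeddings come from a model, not hand-typed numbers.
vectors = {
    "dog":    np.array([0.90, 0.80, 0.10]),
    "puppy":  np.array([0.85, 0.75, 0.20]),
    "banana": np.array([0.10, 0.05, 0.95]),
}

# Compare the search word "dog" against everything else in the tiny "index".
query = vectors["dog"]
scores = {word: cosine_similarity(query, vec) for word, vec in vectors.items() if word != "dog"}

for word, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{word}: {score:.2f}")
```

Running this prints 'puppy' with a much higher score than 'banana', which is exactly the behavior a semantic search index is built around.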

Vector search is about finding the best match for your search. It understands the meaning behind your words.

Why customers love Databricks Vector Search

Customers love Databricks Vector Search for a few key reasons:

  • It does the heavy lifting. Usually, getting a database ready to store information involves many steps: cleaning and organizing data from different places. This can take a lot of time and money. But Databricks Vector Search does all this automatically. It pulls in data, gets it ready, and updates everything on its own, saving a lot of hassle.
  • Keeps data safe and organized. Keeping data secure and well-managed is super important, especially for big businesses. Databricks Vector Search uses the same security measures that protect all of Databricks' systems. This means data is not just safe, but also organized in a way that's easy to manage.
  • Super fast searches. Some databases can be slow or get overwhelmed easily, especially with lots of data. But Databricks Vector Search is built to be really fast, even with lots of complex searches. It can handle a big workload without needing a lot of adjustments or technical know-how from the user.

Databricks Vector Search simplifies and speeds up handling large data sets. It eliminates the need for extensive technical adjustments and ensures data security.
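For the code-curious, here is a hedged sketch of what querying a Vector Search index can look like with the databricks-vectorsearch Python client. The endpoint name, index name, and column names are placeholders I made up, and client options can differ by version, so treat this as a sketch rather than copy-paste-ready code.

```python
from databricks.vector_search.client import VectorSearchClient

# All names below are placeholders for illustration.
client = VectorSearchClient()
index = client.get_index(
    endpoint_name="my_vector_search_endpoint",       # hypothetical endpoint name
    index_name="main.support.product_docs_index",    # hypothetical Unity Catalog index
)

# Ask the index for the 5 documents whose meaning best matches the question.
results = index.similarity_search(
    query_text="How do I reset my password?",
    columns=["doc_id", "chunk_text"],                # hypothetical columns to return
    num_results=5,
)
print(results)
```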

A closer look at Feature & Function Serving

Databricks' 3rd blog last week was all about Databricks Feature & Function Serving. It's a deep dive into another of the tools in the Databricks RAG suite.

I also wrote about Features and Feature Stores two months ago. Chapter 5 - Structured & Unstructured Data, ETL, Labeling & Annotation, Feature Engineering and Feature Store. If you haven’t read it yet, it’s well worth a read. I use adult beverages as an analogy to explain all these concepts.

A couple of key points from my previous blog that are helpful for this section:

  • Feature Engineering is a series of steps that prepare raw data so it can be used by an ML model.
  • Data gets transformed into a Feature. If data were a raw ingredient, then features are a special cocktail mix (like a Margarita). You want to transform data into something helpful for tasks like guessing or predicting something. How do you turn data into features? You can combine data, convert data, or group data. (A tiny code sketch follows below.)
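Here is a minimal pandas sketch of "combine, convert, or group" in action. The order data and feature names are made up for illustration; they are not from Databricks or from my earlier blog.

```python
import pandas as pd

# Made-up raw order data -- the "raw ingredient".
orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2],
    "order_total": [20.0, 35.0, 12.5, 40.0, 7.5],
    "order_date": pd.to_datetime(
        ["2023-11-01", "2023-11-20", "2023-10-05", "2023-11-18", "2023-12-01"]
    ),
})

# Group + combine: many order rows per customer become one feature row per customer.
features = orders.groupby("customer_id").agg(
    total_spend=("order_total", "sum"),
    avg_order_value=("order_total", "mean"),
    last_order=("order_date", "max"),
)

# Convert: "days since last order" is more useful to a model than a raw date.
features["days_since_last_order"] = (pd.Timestamp("2023-12-15") - features["last_order"]).dt.days
features = features.drop(columns=["last_order"])

print(features)  # the "cocktail mix" an ML model can actually use
```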

How does Databricks Feature & Function Serving work?

Databricks Feature & Function Serving is a tool that allows easy access to structured data stored in Databricks. You can use it for your applications or models, even if they are not on Databricks. Here's a simple explanation of how it works (with a small code sketch after the list):

  1. Storing and Organizing Data. First, your structured data (customer records, sales figures, purchase records, etc...) are stored in the Data Intelligence Platform.
  2. Creating Endpoint. Then, you create something called "endpoints". These are like access points or gateways. They let your external apps or models use the data stored in Databricks.
  3. Serving Data. When your application needs certain data, it asks one of these endpoints. The endpoint then gets this data from Databricks and sends it back to your application.
  4. Handling Traffic and Scale. If your application becomes popular, more people will use it. This will create a higher demand for data. These endpoints are then capable of handling the increased traffic. They adjust automatically to make sure the data keeps flowing smoothly without delays.
  5. Safety and Ease of Use. This whole process is secure, so your data is safe. Plus, Databricks takes care of the complicated tech stuff in the background. You just need to set it up with a few commands, and then it runs on its own.
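To make step 3 more tangible, here is a rough sketch of how an external application might call a serving endpoint over REST using Python's requests library. The workspace URL, endpoint name, lookup key, and token variable are all hypothetical placeholders, and the exact request format depends on how the endpoint is configured.

```python
import os
import requests

# Placeholder values -- substitute your own workspace, endpoint, and token.
WORKSPACE_URL = "https://<your-workspace>.cloud.databricks.com"
ENDPOINT_NAME = "customer-features"            # hypothetical feature serving endpoint
TOKEN = os.environ["DATABRICKS_TOKEN"]         # personal access token for authentication

# Ask the endpoint for the pre-computed features of one customer.
response = requests.post(
    f"{WORKSPACE_URL}/serving-endpoints/{ENDPOINT_NAME}/invocations",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"dataframe_records": [{"customer_id": 12345}]},   # hypothetical lookup key
    timeout=10,
)
response.raise_for_status()
print(response.json())  # the customer's features, returned in milliseconds
```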

Feature & Function Serving allows faster access to structured data.

Integration with Unity Catalog

You can organize and select the data you want to share for Feature & Function Serving in Unity Catalog. It's like a control center for handling different types of data. You can manage data features (like specific data sets or characteristics). And you share only what you want through your endpoints.

With Unity Catalog, you have one spot to look after all your data - the regular stuff and the data you're sharing. This makes managing your data easier and more secure.

Why Databricks Feature & Function Serving is awesome

Databricks Feature & Function Serving is a game changer for RAG apps:

  • Real-time Data Access. Instant access to structured data is crucial for RAG apps. RAG apps rely on up-to-date and accurate information for generating responses.
  • Unity Catalog. Unity Catalog integration enables centralized management of both regular and shared data. This unification streamlines the data management process and improves governance and security.
  • Enhanced Security. With robust security measures, users can share and access data. They can be confident that their data is protected and compliant.
  • Scalability and Performance. Feature & Function Serving automatically scales to handle varying loads of data requests. This ensures consistent performance even during high demand.
  • Reduced Complexity. Databricks Feature & Function Serving handles the infrastructure and technical details. This reduces complexity for users. They are free to focus on their core data analysis and model-building tasks.

Challenges of building a RAG app

It’s not easy for businesses to build RAG apps. There are three major challenges that they face today. Fixing these challenges ensures successful implementation and operation of RAG applications.

Serving Real-Time Data.

  • The Challenge. There’s so much data out there, including both structured and unstructured. And getting the latest and most relevant data into RAG Apps in real-time is tough. It's hard to do this without making things too complicated. Companies have to connect lots of systems and manage complicated ways of moving data.
  • The Solution. Databricks has improved support for accessing and organizing data online. The latest update offers better functionality. Vector Search automatically organizes and retrieves various data types. It can handle text, images, and videos for RAG applications. It's efficient at handling errors and optimizing performance. For more organized data, like customer details, it offers super-fast data access. Plus, Databricks Unity Catalog keeps track of where your data is and who can access it. It does this both online and offline, making it easier to fix any issues and keep your sensitive data secure.

It's not easy for businesses to build RAG apps.

Comparing and Tuning Foundation Models.

  • The Challenge. A big part of making a RAG App good is picking the correct LLM model. Different models have their own strengths and weaknesses. These include how well they reason, how much info they can handle at once, and how much they cost. With new models coming out every week, it's tough to figure out which one is the best for what you need.
  • The Solution. Databricks' latest update makes developing and testing LLM models a lot easier. You can now use a range of AI models. These models include popular ones from Azure, AWS, and others. You can even use your own custom models. There is an AI Playground where you can chat with models. The Playground works with a tool called MLflow that compares models. MLflow looks at important details such as accuracy and speed. Databricks also provides numerous AI models that are fully managed. You can pay for these models based on your usage, which makes it more flexible and cost-effective. The best part? You can combine different models to suit your specific requirements. And the Databricks system ensures the security of your sensitive data. (A small code sketch of the pay-per-token idea follows below.)
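Here is a hedged sketch of what calling a pay-per-token chat model through a serving endpoint can look like, again with plain requests. The endpoint names, the OpenAI-style request and response shapes, and the naive timing comparison are my own illustrative assumptions, not a reproduction of the AI Playground or MLflow comparison tooling.

```python
import os
import time
import requests

WORKSPACE_URL = "https://<your-workspace>.cloud.databricks.com"   # placeholder
TOKEN = os.environ["DATABRICKS_TOKEN"]                            # personal access token

def ask(endpoint_name: str, question: str) -> tuple[str, float]:
    """Send one chat question to a serving endpoint and return (answer, seconds taken)."""
    start = time.time()
    resp = requests.post(
        f"{WORKSPACE_URL}/serving-endpoints/{endpoint_name}/invocations",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"messages": [{"role": "user", "content": question}], "max_tokens": 128},
        timeout=60,
    )
    resp.raise_for_status()
    answer = resp.json()["choices"][0]["message"]["content"]  # assumes an OpenAI-style reply
    return answer, time.time() - start

# Endpoint names are examples only -- use whichever pay-per-token models your workspace exposes.
for endpoint in ["databricks-llama-2-70b-chat", "databricks-mixtral-8x7b-instruct"]:
    answer, seconds = ask(endpoint, "In one sentence, what is RAG?")
    print(f"{endpoint} ({seconds:.1f}s): {answer}")
```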

Ensuring Quality and Safety in Production.

  • The Challenge. Once you start using an LLM, it can be tricky to tell if it's doing a good job. These programs are different from normal software. You can't easily spot right or wrong answers or clear mistakes. Determining if the AI provides good, safe, and appropriate responses is not easy. We've noticed that many customers at Databricks are cautious about using AI programs. They are hesitant to use these programs on a large scale. They're often not sure if the good results they see in small tests will still be good when used by a lot more people.
  • The Solution. This new release includes Lakehouse Monitoring. This is a tool that takes care of checking the quality and safety of RAG apps. It automatically looks through what the app is producing to find any harmful or incorrect content. The tool also helps in spotting errors by tracking where data and models come from. Plus, it uses feedback like user ratings to measure how well the app is doing. Databricks handles monitoring. And developers can focus on improving the app.

But it's much easier with great tools, like the Databricks suite of RAG tools.

How to make RAG apps simple and good?

Getting RAG apps to be good (such as giving answers that are correct, up-to-date, and make sense for the situation) is a tough task for many companies. But these new tools from Databricks are here to help with that. Here’s how they make things better and easier for customers:

  • They enhance data retrieval. These tools can find information from various data sources. They help with searching and finding the right information quickly. And the result is that the AI gives more accurate and relevant answers.

Good boy!

  • They integrate with enterprise data. In short, you can use your company's data. For example, your company’s instruction manuals or your customer’s buying history. Your LLM model can use this info to give answers that are specific and helpful for your business.
  • They improve AI responses. These tools make AI smarter, so that the AI can give better answers. This is super important when you're dealing with customers or need reliable information.
  • They manage AI model performance. These tools keep an eye on AI performance. They monitor and evaluate the performance of AI models in real-world scenarios. They check how well the AI is doing its job.
  • They facilitate development of LLMs. The tools provide support for developing and fine-tuning large language models.
  • They build and improve AI Models. They also help people who are creating and fine-tuning LLM models. This makes it easier to figure out the best setup for different situations.

Databricks' RAG tools enhance AI applications by improving data retrieval. They also make processing and utilizing data more efficient. These tools will make it easier for Databricks customers to create awesome LLM apps with their own business data.

Summary

Alright, let's wrap this up. Last week, Databricks announced:

  1. Databricks Suite of RAG Tools: Making LLMs Smarter. These will revolutionize LLM apps for businesses. This is because Databricks incorporates proprietary company data with LLMs. The result is that chatbots can give answers that aren't generic. They're tailored to each customer and situation. It's like having a chatbot that gets you and knows what you need.
  2. Public Preview of Databricks Vector Search: Searching Unstructured Data. This RAG tool lets you search through lots of unstructured data like text and images. It's super fast and easy to use. This means finding the right data becomes a breeze. It does a lot of the complicated stuff automatically and keeps your data safe and organized.
  3. Public Preview of Databricks Feature & Function Serving: Prepping Structured Data for ML. This RAG tool gives you quick access to important data for LLM apps. It makes LLM apps work faster and smarter by using real-time data right away.

These new tools from Databricks are a big help for businesses using Gen AI. They make creating LLM apps more personalized and efficient. They simplify how data is handled, make things more secure, and are good for managing lots of data.

What’s Next? Keep Learning and Exploring

Looking ahead, there's a lot more to learn and try out with Databricks. I've been at Databricks for 5 years. And I haven't stopped learning. Follow me. I'll summarize some of Databricks' upcoming blogs and webinars. You can learn with me. Plus, you can play around with their tools to see how cool they are in action.

Keep learning and exploring.

About the author: Maria Pere-Perez

The opinions expressed in this article are my own. This includes the use of analogies, humor and occasional swear words. I currently work as the Director of ISV Technology Partnerships at Databricks. However, this newsletter is my own. Databricks did not ask me to write this. And they do not edit any of my personal work. My role at Databricks is to manage partnerships with AI companies, such as Dataiku, Pinecone, LangChain, LlamaIndex, Posit, MathWorks, Plotly, etc... In this job, I'm exposed to a lot of new words and concepts. I started writing down new words in my diary. And then I thought I’d share it with people. Click "Subscribe" at the top of this blog to learn new words with me each week.

You can see my past blogs here.
