Demonstrate the use of conditionals in LangGraph

Use case: a symptoms-and-diagnosis medical chatbot, where the retrieved documents are presented to the doctor for further analysis.

What’s New in this article?

  • Load and use a Hugging Face dataset
  • Use Hugging Face embeddings (a sketch of the dataset and embedding setup follows this list)
  • Extensive use of conditionals to check the relevance of the documents and to detect hallucinations
  • Create a fallback option that uses initialize_agent to run a PubMed query
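A minimal sketch of the dataset and embedding setup is below. It assumes a symptoms-to-diagnosis dataset from the Hugging Face Hub and a sentence-transformers embedding model; the dataset name, column names, and model choice are illustrative assumptions, not necessarily what this article's code uses.

```python
# Sketch: load a Hugging Face dataset, embed it, and build a retriever.
# Dataset name and columns below are assumptions for illustration.
from datasets import load_dataset
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_core.documents import Document

# Load a symptoms/diagnosis dataset from the Hugging Face Hub (assumed name/columns).
dataset = load_dataset("gretelai/symptom_to_diagnosis", split="train")

# Wrap each record as a LangChain Document so it can be indexed.
docs = [
    Document(page_content=row["input_text"], metadata={"diagnosis": row["output_text"]})
    for row in dataset
]

# Hugging Face embeddings via a sentence-transformers model.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

# Index the documents in a local vector store and expose a retriever.
vectorstore = Chroma.from_documents(docs, embedding=embeddings)
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})
```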

Pre-requisites

Flow design and diagram

The flow of the code is as follows:

  • Retrieve: fetch the candidate documents for the question from the vector DB.
  • Grade the documents: check whether each retrieved document is relevant to the question. This information is passed on to the next step.
  • Generate: generate an answer based on the question and the graded documents. The generated answer is classified as useful, not supported, or not useful. "Not supported" triggers generation again; a useful answer is evaluated further in the next step.
  • Grounded in facts? This step checks whether the generated answer is grounded in the data retrieved from the RAG store, i.e. whether the model is hallucinating, which can be a game changer in scenarios where the model tends to hallucinate. If the answer is grounded in the retrieved data, the model then checks whether the generated answer actually answers the question: if yes, the answer is accepted; if not, a fallback approach is triggered that runs a PubMed query to get the answer. If the answer is not grounded in the retrieved data, the same PubMed fallback is used to answer the query and return the result to the user. (A sketch of these graders and routing decisions follows this list.)
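The sketch below shows one way to express the graders and routing decisions described above, building on the retriever from the earlier sketch. The prompts are simplified yes/no graders, ChatOpenAI is a stand-in for whichever chat model is actually used, and the function names are illustrative assumptions rather than this article's exact code.

```python
# Sketch: graph state, yes/no grader chains, and the routing decisions.
from typing import List, TypedDict

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI  # stand-in chat model (assumption)

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)


class GraphState(TypedDict):
    question: str
    documents: List[str]
    generation: str


def yes_no_chain(template: str):
    """Build a small grader chain that is asked to answer strictly yes or no."""
    return ChatPromptTemplate.from_template(template) | llm | StrOutputParser()


relevance_grader = yes_no_chain(
    "Is this document relevant to the question? Answer only yes or no.\n"
    "Document: {document}\nQuestion: {question}"
)
hallucination_grader = yes_no_chain(
    "Is this answer fully supported by the documents? Answer only yes or no.\n"
    "Documents: {documents}\nAnswer: {generation}"
)
answer_grader = yes_no_chain(
    "Does this answer address the question? Answer only yes or no.\n"
    "Question: {question}\nAnswer: {generation}"
)


def grade_documents(state: GraphState) -> GraphState:
    # Keep only the retrieved documents the grader marks as relevant.
    relevant = [
        d for d in state["documents"]
        if "yes" in relevance_grader.invoke(
            {"document": d, "question": state["question"]}
        ).lower()
    ]
    return {**state, "documents": relevant}


def decide_to_generate(state: GraphState) -> str:
    # If nothing relevant survived grading, go straight to the PubMed fallback.
    return "generate" if state["documents"] else "fallback"


def grade_generation(state: GraphState) -> str:
    # Hallucination check first: is the answer grounded in the retrieved facts?
    grounded = "yes" in hallucination_grader.invoke(
        {"documents": "\n".join(state["documents"]), "generation": state["generation"]}
    ).lower()
    if not grounded:
        return "not supported"  # regenerate
    useful = "yes" in answer_grader.invoke(
        {"question": state["question"], "generation": state["generation"]}
    ).lower()
    return "useful" if useful else "not useful"  # "not useful" triggers the fallback
```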

LangGraph

The graph is designed to retrieve and grade documents, generate answers, and classify them as useful or not useful. When an answer is not useful, the fallback mechanism is activated and its result is returned as the final answer.
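A minimal sketch of wiring this graph together with LangGraph is shown below, reusing the state, retriever, and graders from the earlier sketches. The node names, the fallback agent built with initialize_agent and the community PubmedQueryRun tool, and the example question are assumptions for illustration.

```python
# Sketch: build the LangGraph workflow with conditional edges and a PubMed fallback.
from langchain.agents import AgentType, initialize_agent
from langchain_community.tools.pubmed.tool import PubmedQueryRun
from langgraph.graph import END, StateGraph


def retrieve(state: GraphState) -> GraphState:
    # Pull candidate documents from the vector store built earlier.
    docs = retriever.invoke(state["question"])
    return {**state, "documents": [d.page_content for d in docs]}


def generate(state: GraphState) -> GraphState:
    # Answer the question using only the graded (relevant) documents as context.
    answer = llm.invoke(
        f"Answer using only this context:\n{state['documents']}\n\nQuestion: {state['question']}"
    ).content
    return {**state, "generation": answer}


def pubmed_fallback(state: GraphState) -> GraphState:
    # Fallback path: a ReAct-style agent that queries PubMed directly.
    agent = initialize_agent(
        tools=[PubmedQueryRun()],
        llm=llm,
        agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    )
    return {**state, "generation": agent.run(state["question"])}


workflow = StateGraph(GraphState)
workflow.add_node("retrieve", retrieve)
workflow.add_node("grade_documents", grade_documents)
workflow.add_node("generate", generate)
workflow.add_node("fallback", pubmed_fallback)

workflow.set_entry_point("retrieve")
workflow.add_edge("retrieve", "grade_documents")
workflow.add_conditional_edges(
    "grade_documents", decide_to_generate,
    {"generate": "generate", "fallback": "fallback"},
)
workflow.add_conditional_edges(
    "generate", grade_generation,
    {"useful": END, "not supported": "generate", "not useful": "fallback"},
)
workflow.add_edge("fallback", END)

app = workflow.compile()

# Example invocation (hypothetical question):
result = app.invoke({"question": "What causes a persistent dry cough?", "documents": [], "generation": ""})
print(result["generation"])
```

Routing "not supported" back into the generate node gives the model another attempt before giving up, while "not useful" goes straight to the PubMed fallback.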

The output of the code, run with various questions, is shown here.

Conclusion

By using the approaches demonstrated here, one can create as many conditionals in LangGraph as needed to ensure the accuracy of the output. Various thresholds for evaluating the results can be set, for example confidence scores or relevance scores (such as > 0.8 with the pre-trained model), and one can also switch to different models for better results.
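As a rough illustration of such a threshold, the snippet below scores a document against the question with the same embedding model and applies a 0.8 cutoff; the cutoff value and helper name are assumptions for illustration, not part of this article's code.

```python
# Sketch: a score-based relevance check with an assumed 0.8 threshold.
import numpy as np


def passes_relevance_threshold(question: str, document: str, threshold: float = 0.8) -> bool:
    # Cosine similarity between the question and document embeddings.
    q = np.array(embeddings.embed_query(question))
    d = np.array(embeddings.embed_query(document))
    score = float(np.dot(q, d) / (np.linalg.norm(q) * np.linalg.norm(d)))
    return score >= threshold
```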

