Explainable Machine Learning
Amit Tiwari
We now live in a world where machine learning and model inference are everyday phenomena, where they used to be rare and almost imaginary. Thanks to the contributions of many researchers and practitioners, we have reached a state where a few lines of code can turn that once-imaginary process into reality; however, the challenge is not over yet.
In order to attain higher accuracy and better performance, people have started using complex models, which hides all of the model's intuition inside a black box. That said, imagine a model built to predict a numerical output from a few features with a linear relationship: it is a fairly simple model and explainable as well (with the help of a mathematical equation).
However, this is not the case in the real world, where model complexity keeps reaching new levels and, at the same time, the generated outcome is used in decision-making scenarios. This situation demands something extra: some way to explain what the model did with the input to yield a particular result.
A common example of this condition is the loan-sanction scenario: deciding whether a specific input combination (the data submitted by an end user) is favorable for sanctioning a loan. This is a decision-making case where revenue is at stake. If the model says "Yes", the result should be explainable, perhaps to help the bank understand why the loan is beneficial; if the output is "No", an explanation helps tell the end user why the loan was not sanctioned. The reason for the explanation, and its audience, may vary.
So this area, Explainable AI or Explainable ML, is now trending and buzzing. SHAP is widely used for this purpose. I tried my hand at it, and it works really well; the output is explainable even to a non-technical person, which is the main purpose of any explanation. SHAP stands for "SHapley Additive exPlanations" and builds on the Shapley value, originally introduced by Lloyd Shapley in cooperative game theory.
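To make the idea concrete, here is a minimal sketch of how SHAP attributes a single prediction to its input features, using a small scikit-learn model on loan-style data. The feature names and numbers here are entirely made up for illustration, not taken from any real lending model:

```python
# A minimal sketch: explaining one tabular prediction with SHAP.
# The loan-style features and the synthetic target are purely illustrative.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, 500),
    "credit_score": rng.integers(300, 850, 500).astype(float),
    "existing_debt": rng.normal(10_000, 5_000, 500),
})
# Synthetic "approval score": higher income/credit and lower debt raise it.
y = (
    0.5 * X["income"] / 1_000
    + 0.3 * X["credit_score"] / 10
    - 0.4 * X["existing_debt"] / 1_000
    + rng.normal(0, 1, 500)
)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])  # shape: (1, n_features)

# Each value is that feature's signed contribution to this one prediction,
# relative to the average prediction (explainer.expected_value).
print(dict(zip(X.columns, shap_values[0])))
```

The printed dictionary is exactly the kind of answer the loan scenario needs: for this one applicant, each feature gets a signed contribution pushing the prediction up or down from the model's average output.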
For my use case, I tried sentiment analysis on IMDB movie reviews to determine whether the sentiment is positive or negative. Please refer to the image below, which highlights which sentence or word contributes to the inference, and in which direction.
I have uploaded the sample code to my GitLab account: https://gitlab.com/amittiwa_ds/explainable_ml_ai/-/blob/main/transformers_sentiment_analysis___shap_values.ipynb
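For reference, the core of that notebook looks roughly like the sketch below: a transformers sentiment pipeline wrapped directly in a SHAP explainer. The exact checkpoint and input text in the notebook may differ; the model name here is just a common public SST-2 checkpoint:

```python
# A minimal sketch: explaining a transformer sentiment model with SHAP.
import shap
import transformers

# Any binary sentiment checkpoint works; this SST-2 model is a common default.
classifier = transformers.pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    return_all_scores=True,  # SHAP needs a score for every class
)

# shap.Explainer knows how to wrap a transformers text pipeline directly.
explainer = shap.Explainer(classifier)
shap_values = explainer(["What a great movie, I enjoyed every minute of it!"])

# Colour-codes each token by how strongly it pushes the prediction
# towards (or away from) the POSITIVE class.
shap.plots.text(shap_values[0, :, "POSITIVE"])
```

Run in a notebook, the last line renders the kind of highlighted-text visualization described above, with each word shaded by its contribution to the positive or negative prediction.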
If you are interested in this area, do follow the videos and posts below to learn the details and history of SHAP values.