Building explainable machine learning models
Thomas Wood
Director of Fast Data Science. Analysing unstructured data in clinical trials.
As data scientists, we sometimes encounter cases where we need to build a machine learning model whose decisions can be explained to a human. This can go against our instincts as scientists and engineers, because we would like to build the most accurate model possible.
In my previous post about face recognition technology, I compared older hand-designed approaches, such as facial feature points, which are easy for humans to understand, with state-of-the-art face recognisers that use millions of parameters and are much more powerful, but harder to understand. This is an example of the trade-off between performance and interpretability.
So how can we make a machine learning model explainable?
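One family of answers is model-agnostic explanation techniques, which probe any trained model from the outside rather than requiring it to be simple internally. Below is a minimal sketch of one such technique, permutation importance: shuffle one feature at a time and measure how much the model's error worsens. The data, feature names, and the least-squares "model" here are all illustrative assumptions, not from the original post; the same loop works with any `predict` function.

```python
import numpy as np

# Illustrative synthetic data (not from the original article).
rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))
# The target depends strongly on feature 0, weakly on feature 1,
# and not at all on feature 2.
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=n)

# A stand-in "black box": an ordinary least-squares fit.
# Any model with a predict function could be plugged in here.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda M: M @ coef

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

baseline = mse(y, predict(X))

# Permutation importance: shuffle each feature in turn and record
# how much the error increases when its link to the target is broken.
importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importances.append(mse(y, predict(Xp)) - baseline)

for name, imp in zip(["feature_0", "feature_1", "feature_2"], importances):
    print(f"{name}: importance {imp:.3f}")
```

On this synthetic data, feature 0 should receive by far the largest importance score and feature 2 a score near zero, giving a human-readable ranking of what the model actually relies on, regardless of how complex the model is internally.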