Building explainable machine learning models

As data scientists, we sometimes encounter cases where we need to build a machine learning model whose decisions can be explained to a human. This can go against our instincts as scientists and engineers, because we would usually like to build the most accurate model possible.

In my previous post about face recognition technology, I compared older hand-designed approaches, such as facial feature points, which are easy for humans to understand, with state-of-the-art face recognisers, which use millions of parameters and are far more powerful but much harder to interpret. This is an example of the trade-off between performance and interpretability.
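
As a rough illustration of this trade-off (a sketch of my own, not code from the post; the dataset and hyperparameters are purely illustrative), one could fit both an easily readable shallow decision tree and a more opaque random forest on the same data, assuming scikit-learn is installed:

    # Sketch: an interpretable shallow tree vs. a more accurate but opaque forest.
    # Illustrative only; dataset and hyperparameters are assumptions.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Interpretable model: a depth-3 tree whose rules a human can read directly.
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

    # More powerful but harder-to-explain model: an ensemble of 500 trees.
    forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)

    print("shallow tree accuracy :", tree.score(X_test, y_test))
    print("random forest accuracy:", forest.score(X_test, y_test))

    # The shallow tree's full decision logic can be printed for a human to inspect.
    print(export_text(tree, feature_names=list(X.columns)))

Typically the ensemble scores a little higher, but only the shallow tree can be written out as a handful of human-readable rules.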

How can we make a machine learning model explainable? Continue reading to find out.

Paul Peeling

Principal Consultant at MathWorks - we're hiring!

6 years ago

Nice article. I agree that explainability is possible with model-agnostic methods without sacrificing performance. I've been investigating Shapley (SHAP) values - I recommend this as an avenue for future exploration.
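
As a minimal illustration of this suggestion (a sketch, not code from the article or the comment; the dataset and model are assumptions), SHAP values for a tree ensemble might be computed along these lines, assuming the shap and scikit-learn packages are installed:

    # Illustrative only: explain a black-box regressor with SHAP values.
    # Assumes the `shap` and `scikit-learn` packages are installed.
    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor

    X, y = load_diabetes(return_X_y=True, as_frame=True)

    # A reasonably powerful but opaque model.
    model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

    # TreeExplainer computes Shapley values efficiently for tree ensembles.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)  # one contribution per feature, per row

    # Summarise which features drive predictions, and in which direction.
    shap.summary_plot(shap_values, X)

Each row of shap_values attributes the model's prediction for that sample across the input features, so individual decisions can be explained without swapping in a simpler model.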

Roy Rosemarin

Building AI and ML technologies | Data Science | Time Series | Retail | FMCG | e-commerce | Real estate

6 years ago

Nice article, Tom.
