Explainable AI (xAI)
Akash Mavle, Corporate (Group) Head of AI, L&T (Larsen & Toubro)
For years, domain specialists argued against purely statistical approaches to solving problems, and the same is now happening with statisticians relative to the AI/ML brute-force crowd. This was bound to happen gradually. Pure-play domain specialists who build models from physics, thermodynamics and other engineering disciplines have been doing model building for decades. They eventually realised that they cannot mathematically model every situation; some problems, they found, are almost intractable with the 1980s model of computing. This is where statisticians, armed with a few techniques like ANOVA, multivariate analysis, regression and principal component analysis, conquered the domain without knowing the domain.
New-age AI/ML (I am not talking about GOFAI, Good Old-Fashioned AI) is, as of now, seen by both the domain and the stats camps as brute force. Well, not all of it is brute force. But for business decision makers to look at the results of AI/ML and start believing in those decisions or answers, they definitely need the unlocking and unfolding of the AI/ML black boxes. This is where “Explainable AI”, or simply xAI, makes sense. xAI requires some amount of domain knowledge to answer various causal patterns (the cause and effect of variables and their relation to the eventual end result). In the democratization of AI/ML adoption, this will be an important step. Researchers at DARPA, such as David Gunning, have been working on xAI, and the field now has steady momentum.
Essentially, a deep learning network (the same as an old-school AI neural network, but with the renewed power of computing and provisioning offered by cloud computing) tries to answer a lot of questions without getting into domain or business rules. It minimizes error for an objective, typically via stochastic gradient descent, by adjusting the learnable parameters of the network. What this leaves out is an account of how it has solved the problem. Yes, DL solves the problem, but for a human to believe it, the cause-and-effect journey must be experienced by that human. This is what xAI covers.
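As a toy sketch of the idea (my own illustration, not a specific xAI method from DARPA): assume a simple linear model fitted by stochastic gradient descent on synthetic data where only some features truly matter. For a linear model, the per-feature product of weight and input value is one elementary cause-and-effect explanation of a prediction. Deep networks need more sophisticated attribution, but the principle is the same.

```python
import numpy as np

# Synthetic data: the target depends strongly on feature 0, weakly on
# feature 1, and not at all on feature 2.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([3.0, 0.5, 0.0])
y = X @ true_w + rng.normal(scale=0.1, size=200)

# Fit a linear model with plain stochastic gradient descent,
# minimizing squared error one sample at a time.
w = np.zeros(3)
lr = 0.01
for epoch in range(20):
    for i in range(len(X)):
        err = X[i] @ w - y[i]   # prediction error for one sample
        w -= lr * err * X[i]    # gradient step on the squared error

# Elementary attribution: for a linear model, weight * input value says
# how much each feature pushed this particular prediction up or down.
x = np.array([1.0, 1.0, 1.0])
contributions = w * x
print(contributions)  # feature 0 dominates; feature 2 contributes ~0
```

The learned weights recover the true cause-and-effect structure of the data, so the contribution vector tells a business user *why* the model predicted what it did, which is exactly the trust gap xAI aims to close.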
More on this in my next article.