Explainable AI


[Image: AI brain]

SC18 in Dallas proved once again to be a fascinating melting pot of HPC insights and observations, and it's intriguing to see the continuing convergence of AI with the supercomputing ecosystem. Along these lines I started to think about the movement towards 'Explainable AI'. Being able to explain and understand how models work when making predictions about the real world is a fundamental tenet of science. Whether solving the equations of a dynamical system for precise answers or using statistical analysis to examine a distribution of events, the results sought from these methods are intended to increase our clarity and knowledge of how the world works.

Too much detail?

Human researchers have historically been biased towards models and tools that are amenable to our intuition. Nonlinear systems are seen as more chaotic and harder to understand. In recent decades, iterative methods that use computers to perform repetitive steps have helped address some of these challenges, although how they actually obtain their results can be harder for humans to follow. This has in part driven the boom in data visualisation techniques intended to bridge that gap.

As AI is more widely deployed, the importance of explainable models will grow. Where AI is used in tasks in which incidents may lead to legal action, it will be essential not only that models and their associated training data are archived and placed under version control, but also that the actions of the model can be explained.
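One way to make that auditable in practice is to archive each trained model alongside a fingerprint of the data it was trained on. The sketch below is a hypothetical illustration using scikit-learn, joblib and a SHA-256 hash; the file names and metadata fields are assumptions for the example, not a prescribed standard.

```python
# Minimal sketch: archive a trained model together with a fingerprint of its
# training data, so a prediction can later be traced back to the exact model
# and data version. File names and metadata fields are illustrative only.
import hashlib
import json

import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data standing in for a real, versioned data set.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Fingerprint the training data and record it alongside the model artefact.
data_hash = hashlib.sha256(X.tobytes() + y.tobytes()).hexdigest()
joblib.dump(model, "model_v1.joblib")
with open("model_v1.meta.json", "w") as f:
    json.dump({"model_file": "model_v1.joblib",
               "training_data_sha256": data_hash,
               "library": "scikit-learn"}, f, indent=2)
```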

Deep Learning adds significant complexity to the form of the models used: they may be constructed from many interconnected nonlinear layers supporting feature extraction and transformation. This tight coupling between very large numbers of nonlinear functions drives the need for extremely complex, highly parallel computations. That complexity is what lets Deep Learning models capture fine details and identify features within a problem that cannot be addressed by traditional means, but it comes at the cost of simplicity of insight.

Explainable AI (XAI)

Explainable AI is a movement focused on the interpretability of AI models. This is not just about simplifying models, which often removes the very benefits that complexity delivers. Instead, XAI focuses on delivering techniques that support human interpretability. A range of approaches can be used, starting with simple methods such as:

  • 2D or 3D projections (taking a larger multi-dimensional space and presenting it in a lower-dimensional form, i.e. 2D or 3D)
  • Correlation graphs (2D graphs where the nodes represent variables and the thickness of the lines between them represents the strength of the correlation). A brief code sketch of both ideas follows below.
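As a concrete, if toy, illustration of both ideas, the sketch below uses scikit-learn's PCA for the 2D projection and a pandas correlation matrix as the raw material a correlation graph would be drawn from; the data and column names are invented for the example.

```python
# Minimal sketch, assuming scikit-learn and pandas are available:
# (1) a 2D projection of a higher-dimensional data set via PCA, and
# (2) a correlation matrix supplying the edge weights of a correlation graph
#     (edge thickness proportional to the absolute correlation).
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
# Toy 6-dimensional data standing in for real model inputs.
X = pd.DataFrame(rng.normal(size=(300, 6)),
                 columns=[f"x{i}" for i in range(6)])

# Project the 6D space down to 2D for plotting.
coords_2d = PCA(n_components=2).fit_transform(X)
print("2D projection shape:", coords_2d.shape)   # (300, 2)

# Pairwise correlations: the weights for the edges of a correlation graph.
print(X.corr().round(2))
```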

But with XAI there is often a decision point at the start of the modelling process as to how interpretable the data scientist wants the model to be. Machine Learning techniques such as Decision Trees, Monotonic Gradient Boosted Machines and rules-based systems do lead to good results, but in cases where accuracy matters more than interpretability it often falls to visualisation techniques to support human insight. A range of tools can support these objectives, such as:

  • Decision tree surrogates: essentially a simple-to-understand model used to explain a more complex one through a simplified decision flow
  • Partial dependence plots: these show how the machine learning model behaves on average, providing a coarse, high-level overview that lacks fine detail
  • Individual conditional expectation (ICE) plots: these focus on local relationships and are often a good complement to partial dependence plots – in effect, ICE provides a drill-down from the partial dependence view. A sketch of these tools follows below.
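To make these three ideas more concrete, here is a rough sketch assuming scikit-learn (not any specific vendor tooling): it fits a gradient boosted model on synthetic data, distils it into a shallow surrogate tree whose rules can be printed, and then draws partial dependence and ICE curves for two features. All data and parameter choices are illustrative only.

```python
# Minimal sketch of a decision-tree surrogate plus partial dependence / ICE
# plots. The "complex" model is a gradient boosted ensemble; the surrogate is
# a shallow tree trained to mimic its predictions.
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import PartialDependenceDisplay
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# The complex, hard-to-interpret model.
complex_model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Decision tree surrogate: fit a shallow tree to the complex model's outputs,
# then read off its simplified decision flow.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, complex_model.predict(X))
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(5)]))

# Partial dependence (average behaviour) and ICE (per-sample behaviour) for
# the first two features; kind="both" overlays ICE curves on the PDP.
PartialDependenceDisplay.from_estimator(
    complex_model, X, features=[0, 1], kind="both")
plt.show()
```

The surrogate's printed rules give the "simplified decision flow" described above, while the overlaid ICE curves show where individual samples deviate from the average behaviour summarised by the partial dependence line.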

These techniques can aid clarity. They may not represent the full complexity of the data, but they provide a better feel for it in human terms. These capabilities will be key as we advance Deep Learning and AI, and in particular there will be intense demand for expert witness skills to help articulate understanding to non-data-scientist and non-technical audiences. Part of this process will rely on good visualisation of large data sets, leveraging powerful GPU technology to support these representations. So, in a sense, whilst our ability to use GPUs for AI has in part created these challenges of complexity, GPUs will undoubtedly also be part of the solution to enhancing understanding. It is therefore likely that one outcome of the explainable AI movement will be AIs that help humans with the tricky task of model interpretation.

Vasilis Kapsalis is the Director of Deep Learning and HPC at Verne Global
Original Source: https://verneglobal.com/blog/explainable-ai

For more information on how Verne Global can support your AI or HPC implementation visit: https://verneglobal.com/
