Artificial intelligence (AI) is extremely effective at analyzing vast amounts of data and making decisions based on information that exceeds the limits of human comprehension. But it suffers from a serious flaw: it cannot explain how it arrives at the conclusions it presents, at least not in a way that most people can understand.
This “black box” property is beginning to cause some serious problems in the applications that AI enables, particularly in medical, financial, and other critical areas where the “why” of a particular action is often more important than the “what.”
A look under the hood
This has led to a new field of study called Explainable AI (XAI), which aims to make AI algorithms transparent enough that users outside the realm of data scientists and programmers can review the AI's logic and confirm that it stays within the bounds of acceptable reasoning, is free of unwanted bias, and satisfies other criteria.
As tech writer Scott Clark recently noted on CMSWire, explainable AI provides the necessary insight into the decision-making process so users can understand why it is behaving the way it is. This allows organizations to identify flaws in their data models, ultimately leading to improved predictive capabilities and deeper insight into what is and isn't working in AI-powered applications.
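To make the idea concrete, here is a minimal sketch of one common explainability technique, permutation importance, using scikit-learn. The dataset and model are illustrative assumptions, not drawn from the article; the point is simply that shuffling one input at a time reveals which features the model actually leans on.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Fit an otherwise "black box" model on a small tabular dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's score drops -- a rough proxy for which inputs drive its decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top_features = sorted(zip(X.columns, result.importances_mean),
                      key=lambda item: item[1], reverse=True)[:5]
for name, score in top_features:
    print(f"{name}: {score:.3f}")
```

Printed rankings like these are the kind of evidence a non-specialist reviewer can at least interrogate, even if they cannot read the model itself.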
The key element of XAI is trust. Without it, every action or decision an AI model generates is shadowed by doubt, which raises the risk of deploying it in production environments where AI is expected to deliver real business value.
According to the National Institute of Standards and Technology, explainable AI should be built on four principles:
- explanation – the ability to provide evidence, support, or justification for each finding;
- meaningfulness – the ability to convey explanations in a way that users can understand;
- accuracy – the ability to explain not only why a decision was made, but also how it was made;
- knowledge boundaries – the ability to determine when its conclusions are not reliable because they exceed the limits of its design (a simple illustration follows this list).
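The last principle lends itself to a short sketch. The classifier and the confidence threshold below are illustrative assumptions, not part of the NIST guidance; the idea is simply that a system should be able to say "I don't know" instead of guessing.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Illustrative confidence floor: below it, the system reports that the
# question falls outside its knowledge limits rather than guessing.
CONFIDENCE_FLOOR = 0.75

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

def predict_with_limits(sample):
    """Return a label plus an explanation string, or abstain when unsure."""
    probs = model.predict_proba(np.asarray(sample).reshape(1, -1))[0]
    if probs.max() < CONFIDENCE_FLOOR:
        return None, "outside knowledge limits: prediction withheld"
    return int(probs.argmax()), f"predicted with confidence {probs.max():.2f}"

label, note = predict_with_limits(X[0])
print(label, note)
```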
While these principles can be used to guide the development and training of intelligent algorithms, they are also intended to guide human understanding of what "explainable" means when applied to what is essentially a mathematical construct.
Buyers, beware
According to Fortune's Jeremy Kahn, the main problem with XAI right now is that it has already become a marketing buzzword used to push platforms out the door, rather than a true product label built to agreed-upon standards.
By the time buyers realize that "explainable" can simply mean a pile of gibberish that may or may not have anything to do with the task at hand, the system is already in place, and switching is costly and time-consuming. Ongoing studies find that many of the leading explainability techniques are overly simplistic and fail to explain why a particular dataset was deemed important or unimportant to the algorithm's output.
That's partly why explainable AI isn't enough, says Anthony Habayeb, CEO of AI governance developer Monitaur. What is really needed is understandable AI. The difference lies in the broader context that understanding carries beyond explanation. As any teacher knows, you can explain something to your students, but that doesn't mean they will understand it, especially if they lack the prior knowledge needed to make sense of it. For AI, this means users should have transparency not only into how the model is working now, but also into how and why it was chosen for this particular task, what data went into it and why, what issues arose during development and training, and a host of other questions.
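One way to capture that broader context is a model card-style record kept alongside the model itself. The sketch below is hypothetical and not drawn from Monitaur's product; every field and value is an illustrative assumption about the kind of provenance a team might track.

```python
from dataclasses import dataclass, field

# A minimal, hypothetical "model card" record: not just how the model behaves
# now, but why it was chosen, what data it saw, and what issues came up.
@dataclass
class ModelCard:
    name: str
    task: str
    selection_rationale: str
    training_data: str
    known_issues: list = field(default_factory=list)

card = ModelCard(
    name="credit-risk-rf-v3",  # hypothetical model name
    task="consumer credit risk scoring",
    selection_rationale="outperformed the logistic baseline on recall at a fixed false-positive rate",
    training_data="2018-2021 loan applications, de-identified, region-balanced",
    known_issues=["sparse data for applicants under 21", "drift observed in Q3 2021"],
)
print(card)
```

Keeping a record like this alongside the deployed model gives reviewers the prior knowledge the explanation alone cannot supply.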
Explainability is, at its core, a data management problem. Developing the tools and techniques to study AI processes at a level of detail sufficient to fully understand them, and to do so in a reasonable timeframe, will not be easy or cheap. And it will likely take a comparable effort on the part of knowledge workers to deploy AI in a way that allows it to understand the often disjointed, chaotic logic of the human brain.
After all, it takes two to have a dialogue.