
Explainable AI

Explainable AI - justifying decision making

For AI to gain the trust and confidence of the public, it must be able to explain and justify its decision making. One of the benefits of using a rules-based system such as VisiRule is that the system can provide explanations and justifications: how it reached a decision, why a question is being asked, what a specific term means, and more.


What is a Good Explanation?

A good explanation describes how a decision was reached in language that is appropriate to the person receiving it, combining facts, rules and inferences at a suitable level of complexity.

Explaining an Insurance Expert System

Who is the Explanation for?

There are often multiple audiences for an explanation: end-users, operators and managers. When explaining why a system denied a loan to a consumer, we might use statements such as 'Your employment record has several gaps' or 'Your income seems too low to make the repayments', whereas we might use different terms when explaining the same decision to an agent or broker.


When are we providing the Explanation?

Most often we want to explain how a decision was reached, for example why a loan application was accepted or rejected. To do this, we can identify which tests succeeded or failed and report on any calculations that failed.
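
As a rough sketch of this idea (illustrative Python, not VisiRule's actual mechanism; the loan rule, field names and thresholds below are invented), a rule engine can record the outcome of each test as it runs and then replay that trace as a 'how' explanation:

```python
# Hypothetical sketch: record each test a rule evaluates, then replay the trace.
# The rule, field names and thresholds are invented for illustration only.

def evaluate_loan(applicant):
    trace = []  # (description, passed) pairs collected as the tests run

    def test(description, passed):
        trace.append((description, passed))
        return passed

    approved = (
        test("income is at least 3x the annual repayment",
             applicant["income"] >= 3 * applicant["annual_repayment"])
        and test("no gaps in the employment record",
                 applicant["employment_gaps"] == 0)
    )
    return approved, trace

def explain_how(approved, trace):
    lines = [f"Decision: {'accepted' if approved else 'rejected'}"]
    lines += [f"  {'PASSED' if ok else 'FAILED'}: {desc}" for desc, ok in trace]
    return "\n".join(lines)

decision, trace = evaluate_loan(
    {"income": 30000, "annual_repayment": 9000, "employment_gaps": 2})
print(explain_how(decision, trace))
```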

 

We may also want to offer explanations during the consultation, so that the user better understands why they are being asked a certain question, or what exactly it means.


What about Background Information?

The system also needs to be able to explain basic terms and technical phrases. These can be loaded from an FAQ-style knowledge base. If the number of FAQs is large, we recommend considering the VisiRule ChatBot service.
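
For a small vocabulary, a 'what does this term mean?' lookup can be as simple as a keyed table; the glossary entries below are invented placeholders standing in for content loaded from such an FAQ knowledge base:

```python
# Minimal sketch of an FAQ-style knowledge base for explaining terms.
# The glossary entries are invented examples, not real VisiRule content.
GLOSSARY = {
    "apr": "Annual Percentage Rate: the yearly cost of a loan including fees.",
    "excess": "The amount you pay towards a claim before the insurer pays the rest.",
}

def explain_term(term):
    return GLOSSARY.get(term.strip().lower(),
                        "Sorry, no explanation is available for that term yet.")

print(explain_term("APR"))
```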


How does VisiRule Support Explanations?

VisiRule allows you to attach text and text-oriented functions, as well as images, videos and links, to all the principal decision points in a chart, i.e. questions, answers and expressions. These can then provide context-sensitive explanations.
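
One way to picture this (a hedged sketch, not VisiRule's internal chart format) is a question node carrying optional explanation attachments, such as text, an image URL or a video link, which the runtime surfaces when that node is reached:

```python
# Sketch of a chart node carrying optional explanation attachments.
# Field names and content are illustrative; VisiRule's own chart format differs.
from dataclasses import dataclass
from typing import Optional

@dataclass
class QuestionNode:
    prompt: str
    explanation: Optional[str] = None   # shown when the user asks "What?"
    image_url: Optional[str] = None     # optional picture, e.g. of equipment
    video_url: Optional[str] = None     # optional walk-through video

    def context_help(self):
        parts = [self.explanation or "No further explanation is attached."]
        if self.image_url:
            parts.append(f"See image: {self.image_url}")
        if self.video_url:
            parts.append(f"See video: {self.video_url}")
        return "\n".join(parts)

node = QuestionNode(
    prompt="Does the boiler display an error code?",
    explanation="The error code appears on the small panel on the front.",
    image_url="https://example.com/boiler-panel.png")
print(node.context_help())
```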


How, What, Why and When?

VisiRule supports various interlocutors, and developers can also define their own. These are used to shape the content of the explanation being provided.
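
A minimal sketch of the idea, assuming a simple dispatch table rather than VisiRule's real mechanism: each interlocutor selects a different explanation builder over the current consultation state (the state, goals and definitions below are invented):

```python
# Hypothetical dispatch table mapping interlocutors to explanation builders.
# All state, goals and definitions below are invented for the example.
def build_explanation(interlocutor, state):
    handlers = {
        "why":  lambda s: f"I am asking this to establish: {s['current_goal']}",
        "how":  lambda s: "Because: " + "; ".join(s["satisfied_conditions"]),
        "what": lambda s: s["definitions"].get(s["current_term"], "No definition yet."),
    }
    handler = handlers.get(interlocutor.lower())
    return handler(state) if handler else "Unknown interlocutor."

state = {
    "current_goal": "whether the applicant can afford the repayments",
    "satisfied_conditions": ["income verified", "repayment ratio below 35%"],
    "definitions": {"repayment ratio": "monthly repayment divided by monthly income"},
    "current_term": "repayment ratio",
}
print(build_explanation("Why", state))
print(build_explanation("What", state))
```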


Visual Explanations: Text vs Graphics vs Video 

Explanations in VisiRule are not limited to text. They can be visual, such as photographs of equipment or human body parts, or video-based.

 

Visual Explanations

VisiRule can dynamically display the execution path or current question/answer trail in situ, as a way of conveying how a decision is being explored or was reached. Using the chart itself for this adds visual context.
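
As a plain-text stand-in for the chart display (illustrative only; a real chart would highlight the visited nodes in the diagram), the trail can simply be recorded as the consultation runs and rendered in order:

```python
# Sketch: keep the question/answer trail as the consultation runs, then render it.
# A chart-based display would highlight these nodes in the diagram instead.
trail = []

def record(question, answer):
    trail.append((question, answer))

record("Is the applicant employed?", "yes")
record("Any gaps in the last 3 years?", "no")
record("Is income above 25,000?", "yes")

print("Decision path so far:")
for step, (question, answer) in enumerate(trail, 1):
    print(f"  {step}. {question}  ->  {answer}")
```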

Explanations for ChatBots

ChatBot Integration

You can deliver explanations within a ChatBot conversation using interlocutors such as How, Why and What.
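
One hedged sketch of how this might work in a chat loop (not the actual VisiRule ChatBot API; the questions and canned explanations are invented): inputs starting with an explanation keyword are intercepted and answered, while anything else is treated as the answer to the current question:

```python
# Sketch of a chatbot turn handler: explanation keywords are intercepted
# before the input is treated as an answer to the current question.
EXPLANATION_WORDS = ("why", "what", "how")

def explain(word, question):
    # Placeholder explanation builder; a real one would consult the chart.
    return {"why":  f"I ask '{question}' because it affects the eligibility rule.",
            "what": f"'{question}' refers to your gross income before tax.",
            "how":  "So far the decision rests on your answers about employment."}[word]

def handle_turn(user_input, current_question):
    text = user_input.strip().lower()
    if text.startswith(EXPLANATION_WORDS):          # answer the meta-question
        return explain(text.split()[0], current_question)
    return f"Recorded answer '{user_input}' to: {current_question}"

print(handle_turn("Why do you need that?", "What is your annual income?"))
print(handle_turn("32000", "What is your annual income?"))
```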

 

Explaining Explanations

One of the advantages of using a chatbot is that it allows the user to ask questions about an explanation being offered in a natural and intuitive way.

NHS Employment ChatBot with Explanations

VisiRule Software Stack

Expert System Rules

Expert Systems use rules to replicate the behaviour of human experts. Rules can come from experts who can also provide in-depth explanations as to why a rule is appropriate, what it represents and how it affects the outcome. When rules are induced from data, all we typically have are the counts or statistics generated by the algorithm used.


Trust

It is important to gain the trust of everyone involved in the AI process. DARPA recently announced a $2 billion initiative to help build that trust. DARPA Director Steve Walker stated: "What we're trying to do with explainable AI is have the machine tell the human 'here's the answer, and here's why I think this is the right answer' and explain to the human being how it got to that answer."

The LPA AI Technology Stack 

At the technical level, the execution mechanism VisiRule mainly uses is backward-chaining inference with argument dereferencing. Forward-chaining is supported through the business-rules layer beneath VisiRule, namely Flex. Fuzzy Logic and formal treatments of Uncertainty Handling are supported in Flint.
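
For readers unfamiliar with the term, here is a generic backward-chaining sketch (not LPA's engine): to establish a goal, find a rule whose conclusion matches it and recursively establish its conditions, bottoming out in known facts. The rules and facts are invented, and the printed trace doubles as a 'how' explanation:

```python
# Generic backward-chaining sketch (not LPA's engine): to prove a goal,
# find a rule whose conclusion matches it and prove each condition in turn.
RULES = [
    ("loan_approved", ["income_sufficient", "employment_stable"]),
    ("employment_stable", ["no_employment_gaps"]),
]
FACTS = {"income_sufficient", "no_employment_gaps"}

def prove(goal, depth=0):
    indent = "  " * depth
    if goal in FACTS:
        print(f"{indent}{goal}: known fact")
        return True
    for conclusion, conditions in RULES:
        if conclusion == goal:
            print(f"{indent}{goal}: trying rule with conditions {conditions}")
            if all(prove(c, depth + 1) for c in conditions):
                return True
    print(f"{indent}{goal}: cannot be established")
    return False

print("Approved" if prove("loan_approved") else "Rejected")
```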


LPA has also recently developed VisiRule FastChart, which enables VisiRule charts to be created from historical data using PMML-based Decision Trees. In this way, you can use machine learning and data mining to create your charts. Explaining machine-generated rules derived from data is also possible.
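
FastChart itself works from PMML models, but the underlying idea of inducing a chart from historical data can be illustrated with a toy decision tree (the data and feature names below are made up); each printed branch corresponds to a potential chart node and edge:

```python
# Illustrative only: induce a small decision tree from toy historical data and
# print its rules. (VisiRule FastChart works from PMML models; this just shows the idea.)
from sklearn.tree import DecisionTreeClassifier, export_text

# columns: income (thousands), employment_gaps
X = [[20, 2], [35, 0], [50, 1], [18, 3], [42, 0], [27, 1]]
y = ["reject", "accept", "accept", "reject", "accept", "reject"]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["income", "employment_gaps"]))
```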

 

There is scope within VisiRule for machine learning using feedback loops, as charts themselves can be viewed as data structures by higher-level programs. In this way, charts can monitor and adjust their performance in light of emerging trends.

 

LPA also offers other AI tools alongside VisiRule to support additional techniques such as Fuzzy Logic, Bayesian Updating and Case-Based Reasoning.
