
The Best Part of Symbolic AI: Full Explainability

Marco Varone - 27 May 2022

Explainability is indispensable to enterprise artificial intelligence (AI) applications. It directly impacts the business value generated from AI technologies and their overall sustainability.

There are numerous reasons organizations must provide explainability. The most immediate pertains to regulatory compliance: several regulations mandate that organizations explain the results of AI processes that drive decisions affecting insurance coverage, credit, loans, and more. Those who fail to provide explainable AI may face lawsuits from multiple parties, including customers.

Furthermore, full explainability is required for people to truly trust AI systems—both internally and externally. Customers need to know AI applications are making fair decisions about them. More importantly, explainability is necessary for quality assurance of AI and business user adoption of these technologies.

What Is Explainability?

Explainability is the means of logically explaining, in words, the reasons AI applications produce their specific results. It is similar to (but ultimately distinct from) interpretability, which is the ability to understand what the numerical outputs of models mean for business problems.

Explainable AI is essential for language understanding applications, which typically focus on cognitive processing automation, text analytics, conversational AI, and chatbots. There are several explainability techniques for statistical AI, some of which are fairly technical. Nonetheless, the easiest, most readily available, and most effective means of achieving explainability is symbolic AI.

Symbolic AI is based on business rules, vocabularies, taxonomies, and knowledge graphs, making its results much easier to explain than those produced by black-box deep neural networks with millions of parameters and numerous hyperparameters. Symbolic AI is 100% based on explicit knowledge at every level, which makes it an excellent means of explaining every language understanding use case.

There is plenty more to understand about explainability though, so let’s explore how it works in the most common AI models.

Statistical AI Explainability

All enterprise users and beneficiaries of AI-powered language understanding systems must be able to explain the results of these technologies. That broad user base encompasses everyone from C-level executives to subject matter experts and business users such as insurance claims specialists. While it is not theoretically impossible for pure machine learning or statistical AI approaches to deliver explainable AI, doing so requires far more effort, time, and quantitative skill, which not every stakeholder has, particularly at the scale of deep learning.

For approaches solely involving advanced machine learning, data scientists can puzzle over techniques like LIME (local interpretable model-agnostic explanations), ICE (individual conditional expectation) plots, and PDP (partial dependence plots) when attempting to determine which specific features, measures, and weights of input data are creating certain outputs. Data lineage is also helpful for explaining AI results from statistical models by enabling organizations to retrace everything that happened to the data, from production back to training. Although these approaches provide insight into machine learning model performance, they are better for interpretability than for explainability.
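As a rough illustration of what this looks like in practice, here is a minimal sketch using scikit-learn's partial dependence utility on an invented tabular dataset; the feature meanings and the model choice are assumptions made purely for demonstration.

```python
# Minimal sketch of statistical-AI explainability via partial dependence (PDP),
# using scikit-learn. The dataset and feature meanings are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(0)

# Hypothetical tabular features: [claim_amount, customer_age, prior_claims]
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Partial dependence of the prediction on feature 0 (claim_amount): how the
# model's average output changes as that single feature varies.
pd_result = partial_dependence(model, X, features=[0])
print(pd_result["average"][0][:5])  # first few points of the averaged response curve
```

The output is a curve of averaged predictions, which tells an analyst how one feature influences the model; turning that into a business-level explanation still takes additional interpretation.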

Symbolic AI Explainability

In comparison, explainability is a natural by-product of symbolic AI. Similar to the impact of data lineage on statistical AI models, symbolic AI always allows users to trace results back to the specific reasoning involved in producing them. Business rules, for example, provide a dependable means of issuing explanations for symbolic AI.

The creation of intelligent chatbots from business rules illustrates how this approach works. These bots deliver the same output or response every time a given rule is invoked. Though there may be variation in the specific words used for the sake of sounding more human-like, the meaning of those words will always be the same. For example, if a chatbot's response is "yes," there are a variety of synonyms that can convey this same meaning, such as "sure," "certainly," and "correct." Using the most appropriate synonym in the proper context gives the chatbot's responses a natural, human-like feel.
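A minimal sketch of that idea might look like this; the rules, intents, and synonym lists are purely illustrative and not drawn from any particular product.

```python
# Sketch of a rule-based chatbot response: the matched rule fixes the meaning
# of the answer, while a synonym list only varies the surface wording.
# Rules, intents, and synonyms here are invented for illustration.
import random
import re

RULES = [
    # (pattern, canonical answer, explanation of why the rule fired)
    (re.compile(r"\bis my policy active\b", re.I), "yes",
     "matched rule 'policy-status': account record shows an active policy"),
    (re.compile(r"\bcancel my policy\b", re.I), "handoff",
     "matched rule 'cancellation': routed to a human agent by policy"),
]

SYNONYMS = {
    "yes": ["Yes.", "Sure.", "Certainly.", "That's correct."],
    "handoff": ["Let me connect you with an agent."],
}

def respond(utterance: str):
    for pattern, canonical, why in RULES:
        if pattern.search(utterance):
            # Same meaning every time; only the wording varies.
            return random.choice(SYNONYMS[canonical]), why
    return "Sorry, I didn't understand that.", "no rule matched"

reply, explanation = respond("Hi, is my policy active?")
print(reply)        # e.g. "Certainly."
print(explanation)  # the traceable reason behind the answer
```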

Additionally, vocabularies and taxonomies furnish unmatched semantic understanding for rules. The knowledge contained in these terms, definitions, and hierarchy of terms is explicit and always used transparently. Such a lack of ambiguity about what words mean and their relation to each other is optimal for explaining the results of symbolic AI systems.
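To make the idea of an explicit, transparent term hierarchy concrete, here is a toy sketch; the vocabulary, hierarchy, and matching logic are invented for illustration rather than taken from any specific system.

```python
# Toy sketch of taxonomy-driven tagging: every tag can be explained by pointing
# at the exact term that matched and its place in the hierarchy.
# The vocabulary and hierarchy below are invented for illustration.

TAXONOMY = {
    # term -> broader (parent) concept
    "myocardial infarction": "cardiovascular disease",
    "heart attack": "cardiovascular disease",
    "cardiovascular disease": "medical condition",
    "whiplash": "injury",
    "injury": "medical condition",
}

def broader_chain(term):
    """Walk up the taxonomy from a matched term to the root concept."""
    chain = [term]
    while chain[-1] in TAXONOMY:
        chain.append(TAXONOMY[chain[-1]])
    return chain

def tag(text):
    text = text.lower()
    for term in TAXONOMY:
        if term in text:
            chain = broader_chain(term)
            return chain[-1], "found '" + term + "' -> " + " -> ".join(chain)
    return None, "no taxonomy term found"

label, explanation = tag("Claimant reports a heart attack in March.")
print(label)        # medical condition
print(explanation)  # found 'heart attack' -> heart attack -> cardiovascular disease -> medical condition
```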

Knowledge graphs aid the use of rules, taxonomies, and vocabularies in two distinct ways. First, they provide an ideal location to store this valuable enterprise knowledge, which often pertains to particular business concepts (e.g., customer definitions, health insurance terminology, medical codes for diagnoses and procedures, etc.). Additionally, they facilitate inference techniques and machine reasoning capabilities that deliver logical, easy-to-understand outputs.
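Here is a minimal, hand-rolled sketch of that kind of traceable reasoning over a knowledge graph: a single transitive "is-a" rule derives new facts, and each derived fact carries the chain of statements that produced it. The triples are invented for illustration.

```python
# Hand-rolled sketch of knowledge-graph reasoning: a transitive "is-a" rule
# derives new facts, and every derived fact records the chain that produced it.
# The triples below are invented for illustration.

TRIPLES = {
    ("term life policy", "is-a", "life insurance policy"),
    ("life insurance policy", "is-a", "insurance policy"),
    ("insurance policy", "is-a", "contract"),
}

def derive_is_a(subject):
    """Follow 'is-a' edges transitively, recording the reasoning path."""
    derived = []
    current, path = subject, [subject]
    changed = True
    while changed:
        changed = False
        for s, p, o in TRIPLES:
            if s == current and p == "is-a":
                path.append(o)
                derived.append((subject, "is-a", o, " -> ".join(path)))
                current = o
                changed = True
                break
    return derived

for s, p, o, why in derive_is_a("term life policy"):
    print(f"{s} {p} {o}    [derived via: {why}]")
# term life policy is-a life insurance policy    [derived via: term life policy -> life insurance policy]
# term life policy is-a insurance policy         [and so on, with the full chain]
# term life policy is-a contract
```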

Hybrid System Explainability

Hybrid approaches that combine machine learning and symbolic AI are gaining popularity for language understanding applications. One variation of this approach enables users to accelerate machine learning model training and improve model accuracy by leveraging linguistic rules and the knowledge stored in knowledge graphs.

Though hybrid models built in this way are not fully explainable, they do impart explainability into several key facets of the models. For example, you can create explainable feature sets by using symbolic AI to analyze your data and extract the most important information. These features can, in turn, establish a more explainable foundation for your trained model.
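A simplified sketch of that pattern might look like the following, with invented rule-based features feeding a scikit-learn classifier; it is meant only to show how human-readable features can sit underneath a trained model, not to represent any specific implementation.

```python
# Sketch of a hybrid pipeline: symbolic, rule-based features (each one
# human-readable) feed a statistical classifier. Rules and data are invented.
from sklearn.linear_model import LogisticRegression

FEATURE_RULES = {
    # feature name -> keyword list that triggers it
    "mentions_fraud_term": ["fraud", "launder", "shell company"],
    "mentions_large_amount": ["million", "wire transfer"],
    "mentions_routine_claim": ["windshield", "minor damage"],
}

def symbolic_features(text):
    """Turn a document into explainable 0/1 features via explicit rules."""
    text = text.lower()
    return [int(any(k in text for k in kws)) for kws in FEATURE_RULES.values()]

docs = [
    "Wire transfer of two million routed through a shell company.",
    "Claim for minor damage to the windshield after hail.",
    "Suspected fraud: funds laundered via offshore accounts.",
    "Routine claim, minor damage to bumper.",
]
labels = [1, 0, 1, 0]  # 1 = flag for review, 0 = routine

X = [symbolic_features(d) for d in docs]
model = LogisticRegression().fit(X, labels)

# Each model input maps back to a named, human-readable rule, so the feature
# side of the pipeline stays fully explainable even though the model is not.
for name, weight in zip(FEATURE_RULES, model.coef_[0]):
    print(f"{name}: weight {weight:+.2f}")
```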

These explainable aspects make this hybrid approach a good fit for many use cases, though it is best suited for those that are either internal or not mission critical (e.g., categorization of highly complex documents, anti-money laundering processes, etc.).


An Enterprise Necessity

Explainability is a necessity for enterprise AI-based language understanding applications. Symbolic AI approaches eliminate the black-box limitations that prevent explainability with pure machine learning, delivering a traceable account of how systems arrived at their decisions.

Consequently, explainability has become one of the foremost advantages of relying on symbolic AI approaches. These approaches are easier to use and more accessible to a broad user base than statistical methods like PDP, thanks to the transparency of business rules, taxonomies, knowledge graphs, and reasoning systems. Explainability is not just a by-product of symbolic AI. Symbolic AI is a means of delivering explainability for language understanding.
