Choosing the Right AI: The Importance of Explainability, Consistency, & Maturity

July 2, 2024
Reading Time: 5 mins

When choosing an AI technology for a given problem, we often spend a lot of time evaluating whether it is possible for the technology to get the job done.

In the public sector, this is not enough. There are several other aspects we must consider, like transparency, equity, and risk mitigation/management. In particular, it is critical to

  1. Explain and document decisions for transparency
  2. Ensure consistency in how decisions are made and processes are run, so that we can be held accountable and serve our communities fairly and equitably
  3. Mitigate and manage risk since efficient and functioning government processes are critical to the wellbeing of the community

Each aspect that must be considered (transparency, equity, and risk management) can translate into a technological criterion for evaluating AI:

Transparency --> Explainability: A technology is explainable if you can define how a given output was produced or calculated from the provided input. Being able to explain why a technology produced a certain output is critical for ensuring that processes that use AI maintain their transparency.

Accountability, Fairness, & Equity --> Consistency:  A technology is consistent if, given the same set of inputs, you consistently get the same output from the system. We want to ensure the technology treats two identical inputs in the exact same way. Consistency also allows for greater accountability because it is difficult to be accountable for a system that can randomly provide differing results.

Risk Management --> Maturity: A technology is mature if it is currently relied upon and widely used by critical systems in the real world. Mature technologies are less risky since they have been tried and tested in production environments.

In this blog, we will explain how we evaluate six common kinds of AI technologies against the criteria of Explainability, Consistency, and Maturity. It is worth noting that over time, as technologies evolve, they will become more explainable, more consistent, and more reliable and mature.

Large Language Models (LLM)

How an LLM Works
  • Not Explainable: It is difficult to understand why an LLM generates particular text for a given prompt.
  • Not Consistent: Ask ChatGPT the same question multiple times and you will likely get different answers.
  • Not Mature: LLMs are not yet widely used in any production-grade critical systems.
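The inconsistency above comes from how LLMs generate text: the next token is sampled from a probability distribution, and a nonzero "temperature" means repeated runs can pick different tokens. A minimal sketch of that idea, with a made-up token distribution purely for illustration:

```python
import random

def sample_next_token(probs, temperature=1.0, rng=None):
    """Toy next-token sampler: rescales token probabilities by temperature,
    then draws one token at random. Not a real LLM decoder -- just the idea."""
    rng = rng or random
    if temperature == 0:
        # Greedy decoding: always pick the most likely token (deterministic).
        return max(probs, key=probs.get)
    weights = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    total = sum(weights.values())
    tokens = list(weights)
    return rng.choices(tokens, weights=[weights[t] / total for t in tokens])[0]

# A made-up distribution over candidate next tokens.
probs = {"approve": 0.5, "deny": 0.3, "defer": 0.2}

# Sampling (temperature > 0) can return different tokens on repeated calls...
samples = {sample_next_token(probs, temperature=1.0) for _ in range(50)}
# ...while greedy decoding (temperature = 0) always returns the same one.
greedy = {sample_next_token(probs, temperature=0) for _ in range(50)}
```

With temperature at zero the output is consistent but often repetitive, which is why production chatbots typically keep some randomness on.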

Retrieval Augmented Generation (RAG)

How RAG systems work
  • Somewhat Explainable: In a RAG system, the results of the ranking algorithm are provided to the LLM, and can therefore help explain the source material behind the LLM's answer. That said, it is still unclear why the LLM summarizes the results in the manner it does.
  • Not Consistent: LLMs are used in RAG systems, and LLMs are not consistent.
  • Not Mature: RAG systems are not yet widely used in any production-grade critical systems. We anticipate this changing within a couple of years, however, as it becomes easier for organizations to build their own RAG systems on top of their data.
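The "somewhat explainable" half of a RAG system can be sketched in a few lines: documents are ranked against the query, and the top results are placed in the prompt, so they can also be shown to the user as sources. The ranking below is a deliberately simple word-overlap stand-in for a real retriever, and the documents are invented for illustration:

```python
def retrieve(query, documents, top_k=2):
    """Rank documents by word overlap with the query -- a toy stand-in
    for a real ranking algorithm (e.g. vector similarity search)."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(query, sources):
    """The retrieved sources go into the prompt -- and can be surfaced to
    the user, which is what makes this half of the system explainable."""
    context = "\n".join(f"- {s}" for s in sources)
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

documents = [
    "Permit applications are processed within 30 days.",
    "Parking fines can be paid online.",
    "Permit fees are waived for nonprofits.",
]
sources = retrieve("how long are permit applications processed", documents)
prompt = build_prompt("How long are permit applications processed?", sources)
# `sources` is auditable; the LLM call that would consume `prompt`
# remains a black box, which is why RAG is only *somewhat* explainable.
```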

Generative Adversarial Network (GAN) for Images

How GAN Models work
  • Not Explainable: It is difficult to understand why the GAN generates a particular image for a given prompt.
  • Not Consistent: Similar to LLMs, the same prompt can generate several different outputs.
  • Not Mature: GANs for images are not yet widely used in any production-grade critical systems.

Computer Vision Models

How Computer Vision Models work
Non-Neural Network Approaches
  • Explainable: The patterns to be identified are known in advance, and the algorithm is relatively straightforward.
  • Somewhat Consistent: Some non-neural network approaches introduce a degree of "randomness" into the algorithm, which can lead to inconsistent results.
  • Mature: Used in several places like OCR (for PDFs), TSA/airport security, iPhone FaceID, etc.
Neural Network Approaches
  • Not Explainable: It is difficult to understand the input-to-output mapping of most neural networks, and this is especially true for the deep convolutional neural networks used in computer vision.
  • Not Consistent: Neural networks have built-in randomness, which means that the same input may not necessarily produce the same output each time.
  • Mature: Neural network-based computer vision techniques are used in some security applications (for example, the iPhone's FaceID uses neural networks to identify spoofing). Note: Oftentimes, neural network approaches are paired with non-neural network approaches in critical applications to improve safety and performance.
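To make the explainability contrast concrete, here is a toy non-neural-network approach: exhaustive template matching on a tiny binary "image". Every match it reports can be justified pixel by pixel, and the same input always gives the same output. The 4x4 grid is invented for illustration:

```python
def find_template(image, template):
    """Slide the template over the image and report every position where
    all pixels match exactly. Deterministic and fully explainable."""
    H, W = len(image), len(image[0])
    h, w = len(template), len(template[0])
    matches = []
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            if all(image[r + i][c + j] == template[i][j]
                   for i in range(h) for j in range(w)):
                matches.append((r, c))
    return matches

# A tiny binary image containing one 2x2 bright square.
image = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
template = [[1, 1], [1, 1]]
matches = find_template(image, template)  # the square sits at row 1, col 1
```

A deep network tackling the same task would usually be more robust to noise and distortion, but you could not point to a simple rule explaining why it fired.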

Optimization & Simulation Models

How Optimization Models work
  • Explainable: Optimization models and simulations typically use clearly defined algorithms that a software engineer can understand. Furthermore, additional algorithms can easily be written on top of the simulation algorithm to analyze and explain any results.
  • Sometimes Consistent: Some simulations will use randomness as part of the simulation (e.g. weather simulations) in order to better mimic the system. These simulations will not always provide a consistent output for a given set of inputs. Optimization models often use a simulation as part of the optimizing process. Thus, if the optimization model uses a simulation model that is not consistent, then the optimization model may also not be consistent.
  • Mature: We use optimization and simulation models constantly in our day-to-day to help us make decisions. Most scientific and situational predictive modeling is based on optimization and simulation models.
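The "sometimes consistent" point is worth a sketch: a simulation that uses randomness will vary run to run, but fixing the random seed makes the same inputs always produce the same output. The queue model below is a made-up toy, not a real service-desk simulation:

```python
import random

def simulate_wait_time(n_customers, seed=None):
    """Toy queue simulation: each customer's service time is random, so the
    average wait depends on the random draws -- unless the seed is fixed."""
    rng = random.Random(seed)  # a dedicated, seedable random source
    wait = 0.0
    total = 0.0
    for _ in range(n_customers):
        service = rng.uniform(1, 5)            # minutes of service, random
        total += wait
        wait = max(0.0, wait + service - 3.0)  # arrivals every 3 minutes
    return total / n_customers

# With a fixed seed, identical inputs always give identical outputs...
a = simulate_wait_time(1000, seed=42)
b = simulate_wait_time(1000, seed=42)
# ...without a seed, repeated runs will generally differ.
```

Seeding is a common way to recover consistency (and auditability) from an otherwise stochastic simulation, at the cost of exploring only one random trajectory per seed.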

Rule-Based/Decision Tree Models

How Rule-Based systems work
  • Explainable: Rule-Based/Decision Tree Models are incredibly easy to explain because, alongside the output, you also receive the path taken through the decision tree from input to output.
  • Consistent: Most Rule-Based/Decision Tree Models use straightforward, deterministic rules. So, as long as the input is the same, the path through the decision tree will always be the same.
  • Mature: Several existing government workflow technologies leverage rules and decision trees to automate process flows and make sure the right information gets seen by the right individuals.
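A minimal sketch of why rule-based systems score well on all three criteria: the evaluator below returns not just a decision but the exact path of rules it checked, so every outcome can be explained and audited, and identical inputs always take the same path. The permit-screening rules and field names are hypothetical:

```python
def evaluate(application, rules):
    """Walk an ordered rule list; return the decision plus the path taken.
    Deterministic rules mean the same input always yields the same path."""
    path = []
    for condition, description, decision in rules:
        fired = condition(application)
        path.append((description, fired))
        if fired:
            return decision, path
    return "manual review", path  # fallback when no rule fires

# Hypothetical permit-screening rules, checked in order.
rules = [
    (lambda a: a["documents_missing"],
     "required documents missing?", "request documents"),
    (lambda a: a["fee_paid"] and a["zone"] == "residential",
     "fee paid and residential zone?", "approve"),
]

decision, path = evaluate(
    {"documents_missing": False, "fee_paid": True, "zone": "residential"},
    rules,
)
# `path` records every rule checked and whether it fired -- a ready-made
# audit trail for the decision.
```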

In Summary

Summary of the 6 common kinds of AI technologies discussed in this blog, by Explainability, Consistency, and Maturity.

Download the full version in PDF format today.
