Machine learning models are achieving impressive accuracy on a wide range of tasks and have gained widespread adoption. However, many of these models are not easily understood by the people who interact with them. This understanding, referred to as “explainability” or “interpretability,” allows users to gain insight into the machine’s decision-making process.
Explainability and interpretability are not interchangeable terms. Interpretability is the extent to which cause and effect can be observed in a system: how well one can predict what the model will do when its data inputs or algorithmic parameters change. Explainability goes a step further: it is the extent to which the internal mechanics of an AI-based system can be explained in human terms.
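To make the interpretability side concrete, the sketch below probes cause and effect directly: it shuffles one input feature at a time and measures how the model’s accuracy responds. It uses scikit-learn’s permutation_importance, a general-purpose technique that is not part of AI Explainability 360, and the dataset and model are illustrative choices.

```python
# Interpretability as cause-and-effect probing: permute each feature
# and observe the change in predictive accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature several times; a large accuracy drop means the
# model's predictions are strongly sensitive to that input.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean),
             key=lambda pair: -pair[1])[:5]
for name, mean_drop in top:
    print(f"{name}: {mean_drop:.3f}")
```

Observing how the output moves when inputs are disturbed is exactly the cause-and-effect view of interpretability; it says nothing yet about *why* the model behaves that way, which is where explainability comes in.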
That’s why IBM has launched AI Explainability 360, an open source toolkit of algorithms that support the interpretability and explainability of machine learning models. The algorithms cover case-based reasoning, directly interpretable rules, post hoc local explanations, post hoc global explanations, and more.
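As a rough illustration of one of these categories, post hoc local explanations, the following from-scratch sketch fits a weighted linear surrogate around a single prediction, in the style of LIME. This is not the AI Explainability 360 API; the helper explain_locally and its parameters are hypothetical, and the toolkit’s own implementations are more sophisticated.

```python
import numpy as np
from sklearn.linear_model import Ridge

def explain_locally(predict_proba, x, n_samples=500, scale=0.1, seed=0):
    """Fit a local linear surrogate around instance x (hypothetical helper)."""
    rng = np.random.default_rng(seed)
    # Perturb the instance: sample points in a small Gaussian neighborhood.
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    # Query the black-box model on the perturbed points.
    y = predict_proba(Z)[:, 1]
    # Weight samples by proximity to x, so nearby behavior dominates the fit.
    kernel_width = scale * np.sqrt(x.shape[0])
    w = np.exp(-np.linalg.norm(Z - x, axis=1) ** 2 / kernel_width ** 2)
    # The surrogate's coefficients approximate each feature's local influence.
    surrogate = Ridge(alpha=1.0).fit(Z, y, sample_weight=w)
    return surrogate.coef_

# Usage with any trained classifier that exposes predict_proba, e.g.:
# weights = explain_locally(model.predict_proba, X_test.to_numpy()[0])
```

The surrogate is only faithful near the chosen instance, which is the defining trade-off of post hoc local explanations: they explain one decision at a time rather than the model as a whole.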