Grokking LIME: How can we explain why an image classifier "knows" what’s in a photo without looking inside the model?
Kilian Kluge
Machine learning models achieve outstanding performance on many tasks, but how they arrive at their predictions often remains a mystery to their users. This quickly becomes a problem in application settings where “The computer says so!” is not a sufficient explanation.
LIME is one of the most widely known algorithms for explaining black-box models. First published in 2016, it has helped popularize the concept of “model-agnostic” explanations and inspired subsequent developments. This makes it an approach worth understanding and an excellent starting point for exploring interpretable machine learning.
In this talk, we will implement the LIME algorithm from scratch to truly grok how it generates its explanations. Starting with nothing but an image file and a machine learning model, we will work through each of the six steps necessary to generate visual explanations.
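For a rough preview, here is a minimal sketch of the LIME-for-images idea: segment the image into superpixels, randomly switch them on and off, query the model on the perturbed images, and fit a weighted linear surrogate whose coefficients serve as the explanation. The function name, parameter values, and the use of scikit-image and scikit-learn are assumptions for illustration, not necessarily the exact steps covered in the talk.

```python
# Minimal sketch of LIME for images (illustrative; parameter choices are assumptions).
import numpy as np
from skimage.segmentation import slic
from sklearn.linear_model import Ridge


def explain_image(image, predict_fn, target_class, num_samples=200):
    """Return superpixel segments and one weight per superpixel for `target_class`.

    `image` is an RGB array with values in [0, 1]; `predict_fn` maps a batch
    of images to class probabilities.
    """
    # 1. Split the image into superpixels, the interpretable components.
    segments = slic(image, n_segments=50, start_label=0)
    n_superpixels = segments.max() + 1

    # 2. Draw random on/off masks over the superpixels.
    rng = np.random.default_rng(0)
    masks = rng.integers(0, 2, size=(num_samples, n_superpixels))

    # 3. Create perturbed images: switched-off superpixels are greyed out.
    perturbed = np.stack(
        [np.where(mask[segments, None], image, 0.5) for mask in masks]
    )

    # 4. Query the black-box model on the perturbed images.
    probabilities = predict_fn(perturbed)[:, target_class]

    # 5. Weight each sample by its similarity to the original image.
    distances = 1.0 - masks.mean(axis=1)
    sample_weights = np.exp(-(distances ** 2) / 0.25)

    # 6. Fit a weighted linear surrogate model; its coefficients indicate how
    #    much each superpixel contributes to the prediction.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(masks, probabilities, sample_weight=sample_weights)
    return segments, surrogate.coef_
```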
Along the way, we will discuss the “Four Principles of Explainable Artificial Intelligence” published by NIST in the fall of 2021. These human-centered principles provide practical guidance for designing and implementing explanations for machine learning models.
Don’t worry: No knowledge of machine learning is required to follow this talk. Aside from familiarity with the basics of numpy arrays, all you need is your curiosity.
Kilian Kluge
Affiliation: XAI-Studio & Inlinity AI
My journey into Python started in a physics research lab, where I discovered the merits of loose coupling and adherence to standards the hard way. I like automated testing, concise documentation, and hunting complex bugs.
I recently completed my PhD on the design of human-AI interactions and now work on using Explainable AI to open up new areas of application for AI systems.