Grokking LIME: How can we explain why an image classifier "knows" what’s in a photo without looking inside the model?
Kilian Kluge · Computer Vision, Neural Networks / Deep Learning, Transparency / Interpretability
How can LIME explain machine-learning models without peeking inside? Let's find out!
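The core idea behind LIME can be sketched without peeking inside the model: perturb the input, query the black box, and fit a simple weighted linear surrogate to its local behaviour. Everything below is an illustrative stand-in, not the talk's actual code — the `black_box` scorer, the feature names, and the one-feature-at-a-time fit are all assumptions made to keep the sketch short:

```python
import random

# A "black-box" classifier we can only query: scores an input based on
# two hypothetical keyword features (stand-in for a real model).
def black_box(features):
    # features: dict of word -> present (1) or absent (0)
    return 0.8 * features["dog"] + 0.1 * features["ball"]

def lime_sketch(features, model, n_samples=500, seed=0):
    """LIME-style local explanation: perturb, query, fit a weighted surrogate."""
    rng = random.Random(seed)
    names = list(features)
    xs, ys, ws = [], [], []
    for _ in range(n_samples):
        sample = {k: rng.randint(0, 1) for k in names}
        xs.append([sample[k] for k in names])
        ys.append(model(sample))
        # Weight each perturbed sample by proximity to the original input.
        dist = sum(abs(sample[k] - features[k]) for k in names)
        ws.append(1.0 / (1.0 + dist))
    # One-feature-at-a-time weighted least squares keeps the sketch short;
    # a real implementation fits a joint sparse linear model instead.
    coefs = {}
    for j, name in enumerate(names):
        num = sum(w * x[j] * y for w, x, y in zip(ws, xs, ys))
        den = sum(w * x[j] * x[j] for w, x in zip(ws, xs))
        coefs[name] = num / den if den else 0.0
    return coefs

explanation = lime_sketch({"dog": 1, "ball": 1}, black_box)
```

The surrogate's coefficients rank the features by local influence: "dog" comes out far more important than "ball", matching the black box's hidden weights, even though the explainer only ever observed inputs and outputs.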
Guillaume Lemaitre · Predictive Modelling, Statistics, Transparency / Interpretability
Inspect and try to interpret your scikit-learn machine-learning models
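One model-agnostic way to inspect a fitted estimator, which scikit-learn ships as `sklearn.inspection.permutation_importance`, is to shuffle one feature column at a time and measure how much the error grows. Below is a stdlib-only sketch of that technique against a hypothetical toy model (the real scikit-learn function works on any fitted estimator and scorer):

```python
import random

# Toy "model": a fixed linear scorer standing in for a fitted estimator.
# The names and coefficients are illustrative, not from scikit-learn.
def predict(rows):
    return [3.0 * x1 + 0.0 * x2 for x1, x2 in rows]

def mse(y_true, y_pred):
    return sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true)

def permutation_importance(rows, y, n_repeats=10, seed=0):
    """Shuffle each column; importance = average increase in error."""
    rng = random.Random(seed)
    base = mse(y, predict(rows))
    importances = []
    for col in range(len(rows[0])):
        drops = []
        for _ in range(n_repeats):
            shuffled = [list(r) for r in rows]
            column = [r[col] for r in shuffled]
            rng.shuffle(column)
            for r, v in zip(shuffled, column):
                r[col] = v
            drops.append(mse(y, predict(shuffled)) - base)
        importances.append(sum(drops) / n_repeats)
    return importances

rows = [(i, (i * 7) % 5) for i in range(20)]
y = predict(rows)  # model fits this data perfectly, so baseline error is 0
imp = permutation_importance(rows, y)
```

Shuffling the first feature destroys the predictions, so its importance is large; the model ignores the second feature entirely, so its importance is exactly zero — the same reading you would get from scikit-learn's own inspection tools.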
Julia Ostheimer · Best Practice, Business & Start-Ups, Career & Freelancing, Corporate, Diversity & Inclusion, Ethics (Privacy, Fairness,… ), Transparency / Interpretability, Use Case
Want to know how to explain to your grandparents what #MachineLearning is? Attend the #PyConDE #PyData tutorial on how to translate #ML terms into the everyday language of any audience. #communication #101 #tutorial #softskills #AI
Cheuk Ting Ho · Community, Governance, Python fundamentals, Security, Transparency / Interpretability
Trojan Source malware has been tested on Python, and it works. Should the Python and open-source communities be concerned?
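Trojan Source attacks hide adversarial logic behind Unicode bidirectional control characters, which reorder how source code is displayed without changing how it executes. A minimal sketch of a scanner that flags such characters — the character set below covers the bidi embedding, override, and isolate controls, and the example strings are purely illustrative:

```python
# Unicode bidirectional control characters used in Trojan Source attacks
# (a simplified check; a production linter would consult the full
# Bidi_Control property from the Unicode Character Database).
BIDI_CONTROLS = {
    "\u202a", "\u202b", "\u202c", "\u202d", "\u202e",  # LRE RLE PDF LRO RLO
    "\u2066", "\u2067", "\u2068", "\u2069",            # LRI RLI FSI PDI
}

def find_bidi_controls(source: str):
    """Return (line, column, codepoint) for each bidi control character found."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            if ch in BIDI_CONTROLS:
                hits.append((lineno, col, f"U+{ord(ch):04X}"))
    return hits

benign = "print('hello')"
# Hypothetical attack string: the controls make an editor render part of the
# string as a comment while the interpreter still treats it as code.
sneaky = "access = 'user\u202e \u2066# only admin\u2069 \u2066'"
```

Running the scanner over `benign` reports nothing, while `sneaky` yields one hit per hidden control character — the kind of check CPython, linters, and code-review tools added after the Trojan Source disclosure.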
Larissa Haas · Data Visualization, Ethics (Privacy, Fairness,… ), Transparency / Interpretability
XAI meets NLP: approaches, workarounds, and lessons learned while making an NLP project explainable