
Machine learning (in particular deep learning) continues to achieve tremendous success in many domains. However, almost every day researchers identify a new vulnerability of machine learning (ML) models. In particular, ML models i) are vulnerable to adversarial attacks, ii) can leak private information, and iii) can make biased predictions. Hence, testing ML models before putting them into production is crucial. The European Commission has proposed a regulation on AI that makes testing "high-risk" AI systems mandatory. The regulation is expected to come into force within a couple of years, and similar regulations across the globe will likely follow. In this talk, I'll discuss the vulnerabilities of ML models and how to test for them. Along the way, I'll present an emerging ecosystem of Python libraries that help practitioners test and validate their ML models.
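To make vulnerability (i) concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM) evasion attack in plain PyTorch. This is an illustration under stated assumptions, not code from the talk: the model, data batch, and epsilon value are placeholders, and the abstract does not name the specific libraries that will be covered.

```python
# Minimal FGSM sketch: perturb inputs in the gradient-sign direction
# so a trained classifier misclassifies visually unchanged examples.
import torch
import torch.nn as nn


def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return adversarial versions of x crafted to raise the model's loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step, then clamp back to the valid input range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()


# Hypothetical usage: a robust model should keep its accuracy on x_adv
# close to its accuracy on the clean batch x.
# x_adv = fgsm_attack(model, x, y)
```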

Yunus Emrah Bulut

Affiliation: Karlsruhe Institute of Technology (KIT)

Researcher and practitioner with a focus on AI/ML safety, robustness and trustworthiness. Co-author of the books "AI for Data Science" and "Data Scientist Bedside Manner".