Human-Centered Interpretable Machine Learning
How can we explore, explain, and debug predictive models? During the talk I will show (1) why we need interpretable and human-centered machine learning, (2) how we can explore models with R packages from the DrWhy.AI family, and (3) how these packages helped in a real-world credit scoring problem.

It is not enough to have a predictive model with high AUC on some test dataset. Concept drift, model stability, and bias in the training data are among the reasons why we need to better understand the factors that drive model predictions. Fortunately, recent developments in the area of explainable artificial intelligence help us understand how complex predictive models work. I will overview techniques like LIME, SHAP, Break Down, and Ceteris Paribus, and present their implementations in the collection of R packages from the DrWhy.AI family.
Associate Professor, Samsung R&D / Warsaw University of Technology
Over 15 years of experience in R&D in Business (Netezza, IBM, IQor, OECD, Samsung) and Academia (Warsaw University of Technology, University of Warsaw).
I am interested in predictive modelling of large and complex data, data visualisation, and model interpretability. My main research project is DrWhy.AI – tools and methods for exploration, explanation, and debugging of predictive models. Other research activities are focused on applications, mainly high-throughput genetic profiling in oncology. I am also interested in evidence-based education, evidence-based medicine, general machine learning, and statistical software engineering (I am an R enthusiast). Big believer in data literacy.
I like travelling, board games, audiobooks, and hiking. I live in Warsaw with my wife and two kids.