with Keith McCormick
Learn best practices for producing explainable AI and interpretable machine learning solutions.
Exploring the world of explainable AI and interpretable machine learning
Target audience
What you should know
Understanding what your models predict and why
Variable importance and reason codes
Comparing IML and XAI
Trends in AI making the XAI problem more prominent
Local and global explanations
XAI for debugging models
KNIME support for global and local explanations
Challenges of variable attribution with linear regression
Challenges of variable attribution with neural networks
Rashomon effect
What qualifies as a black box?
Why do we have black box models?
What is the accuracy-interpretability tradeoff?
The argument against XAI
Introducing KNIME
Building models in KNIME
Understanding looping in KNIME
Where to find KNIME support for XAI
Providing global explanations with partial dependence plots
Using surrogate models for global explanations
Developing and interpreting a surrogate model with KNIME
Permutation feature importance
Global feature importance demo (see the code sketch after this outline)
Developing an intuition for Shapley values
Introducing SHAP
Using LIME to provide local explanations for neural networks
What are counterfactuals?
KNIME's Local Explanation View node
Demonstrating KNIME's XAI View node
General advice for better IML
Why feature engineering is critical for IML
CORELS and recent trends
Continuing to explore XAI
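The course demonstrates these techniques inside KNIME workflows rather than in code. As a rough companion to the permutation feature importance lessons above, here is a minimal Python/scikit-learn sketch of the same idea; the dataset, model, and parameters are illustrative assumptions, not the course's own demo.

# A minimal sketch of permutation feature importance with scikit-learn.
# The course demonstrates this technique in KNIME; the dataset, model,
# and parameters below are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Fit a "black box" model on a training split.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn on held-out data and measure the drop in
# score; a large drop means the model leaned heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean,
                    result.importances_std),
                key=lambda t: t[1], reverse=True)
for name, mean, std in ranked[:5]:
    print(f"{name}: {mean:.3f} +/- {std:.3f}")

By default, permutation_importance scores with the estimator's own score method (accuracy for this classifier); passing a scoring= argument swaps in another metric.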