Interpreting machine learning models is no longer a luxury but a necessity. In this session, we will explore practical techniques for interpreting ML models using real-world datasets across domains. Explainable AI is a developing field, and many of the ideas presented here are relatively new.
Below are the broad topics to be covered:
• Feature Importances
• Partial Dependence Plots
• ICE Plots
• Model Prediction Explanations with LIME
• Building Interpretable Models with Surrogate Tree-based Models
• Model Prediction Explanations with SHAP values
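To give a flavor of the first few topics, here is a minimal sketch using scikit-learn's `permutation_importance` and `PartialDependenceDisplay`. It uses a synthetic dataset purely for illustration; the session itself works with real datasets.

```python
# Minimal sketch: permutation feature importance plus a PDP/ICE plot.
# Assumes scikit-learn >= 1.0; the synthetic dataset stands in for the
# real datasets used in the session.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance, PartialDependenceDisplay
from sklearn.model_selection import train_test_split

# Generate a small synthetic classification problem.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a tree-based model we want to interpret.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation feature importance: how much does performance drop
# when each feature is randomly shuffled?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: {imp:.3f}")

# Partial dependence and ICE curves for the first feature
# (kind="both" overlays the average PDP on the individual ICE lines).
PartialDependenceDisplay.from_estimator(model, X_test, features=[0], kind="both")
```

LIME, surrogate models, and SHAP values follow the same spirit but require their own libraries (`lime`, `shap`), which we will cover during the session.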