Explainable AI (XAI) has become legally and ethically mandatory. Artificial intelligence often surpasses human performance and understanding. European and US laws, for example, have increasingly imposed regulations and sanctions on automated decisions that cannot be explained. As of 2020, an AI system must produce trustworthy, unbiased results and non-discriminatory explanations.
We will go through user-friendly Python and TensorFlow 2 XAI visualization tools that explain otherwise impenetrable AI decisions. We will work through case studies with LIME, SHAP, WIT, CEM, and more.
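To give a flavor of what these tools do, here is a minimal sketch of the core idea behind model-agnostic explainers such as LIME and SHAP: perturb the inputs of a black-box model and measure how much each feature's change moves the prediction. The toy `model` and the `perturbation_importance` helper below are hypothetical illustrations, not the actual API of any of these libraries.

```python
def model(features):
    # Toy "black box": a linear scorer standing in for a trained model.
    # The hidden weights are unknown to the explainer.
    w = [0.6, 0.3, 0.1]
    return sum(wi * xi for wi, xi in zip(w, features))

def perturbation_importance(predict, x, baseline=0.0):
    """Score each feature by how much the prediction drops when that
    feature is replaced by a baseline value. This is a crude,
    model-agnostic attribution, the intuition behind LIME/SHAP."""
    base_pred = predict(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline  # knock out one feature at a time
        scores.append(base_pred - predict(perturbed))
    return scores

sample = [1.0, 1.0, 1.0]
scores = perturbation_importance(model, sample)
print(scores)  # feature 0 receives the largest attribution
```

Real libraries refine this idea considerably: LIME fits a local surrogate model around the sample, and SHAP averages such knock-out effects over all feature coalitions to obtain Shapley values.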