News stories increasingly report machine learning algorithms causing real-world harm, and people's lives are affected by the decisions machines make. Human trust in technology rests on our understanding of how it works and on our assessment of its safety and reliability.
To trust a decision made by a machine, we need to know that it is reliable and fair, that its reasoning can be accounted for, and that it will cause no harm. We also need assurance that the system is secure and cannot be tampered with. Learn how to achieve AI fairness, robustness, explainability, and accountability.