You may have heard about machine learning bias or fairness in machine learning. But if you have an algorithm in front of you, how do you know whether it is biased? In what ways is it biased? What do those biases mean in practice?
These labs—hands-on Python notebooks—teach students how to detect, identify, discuss, and address bias in real-world machine learning algorithms. We delve into how these algorithms are situated in larger social contexts, prompting students to discuss who designs them, who uses them, and who gets to decide what it means for an algorithm to be working properly.
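As one illustration of the kind of check the labs walk through (this is a hedged sketch with hypothetical data, not code from the notebooks themselves): a simple bias audit might compare a model's positive-prediction rate across demographic groups, a gap often called the demographic parity difference.

```python
# A minimal sketch of one bias check: comparing a model's
# positive-prediction rate across two groups (demographic parity).
# All data below is hypothetical, for illustration only.

def positive_rate(predictions, groups, group):
    """Fraction of examples in `group` that received a positive prediction."""
    in_group = [p for p, g in zip(predictions, groups) if g == group]
    return sum(in_group) / len(in_group)

# Hypothetical binary predictions (1 = favorable outcome) and group labels.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups      = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rate_a = positive_rate(predictions, groups, "A")  # 3/5 = 0.6
rate_b = positive_rate(predictions, groups, "B")  # 2/5 = 0.4
gap = abs(rate_a - rate_b)
print(f"Group A rate: {rate_a:.2f}, Group B rate: {rate_b:.2f}, gap: {gap:.2f}")
```

A large gap is a signal to investigate, not a verdict: the labs emphasize that deciding which metric matters, and for whom, is itself a social question.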
We've taken to Twitter to document machine learning failures in the real world as they come up. This record is meant to help us do better in the future. Follow us at @mlfailures.