As ML practitioners build, evaluate, and deploy machine learning models, they should keep fairness considerations (such as how different demographic groups will be affected by a model’s predictions) at the forefront of their minds. They should also proactively develop strategies to identify and mitigate the effects of algorithmic bias.
To help practitioners achieve these goals, Google’s engineering education and ML fairness teams developed a 60-minute self-study training module on fairness, which is now publicly available as part of Google’s Machine Learning Crash Course (MLCC).