The invisible power of fairness. How machine learning shapes democracy

In Proceedings of the 32nd Canadian Conference on Artificial Intelligence. Preprint.
Beretta E., Santangelo A., Lepri B., Vetrò A., De Martin J.C.
22 March 2019

Many machine learning systems make extensive use of large amounts of data about human behavior. Researchers have documented discriminatory practices arising from the use of such human-related machine learning systems, for example in criminal justice, credit scoring, and advertising. Fair machine learning is therefore emerging as a new field of study aimed at mitigating biases that are inadvertently incorporated into algorithms, and data scientists and computer engineers have made various efforts to provide formal definitions of fairness. In this paper, we give an overview of the most widespread definitions of fairness in machine learning, arguing that the ideas underlying each formalization are closely related to different conceptions of justice and to different interpretations of democracy embedded in our culture. This work analyzes the definitions of fairness proposed to date, interprets their underlying criteria, and relates them to different ideas of democracy.
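The fairness definitions surveyed in this literature are typically formal statistical criteria comparing model behavior across demographic groups. As a minimal illustration (not code from the paper; all function names and toy data below are hypothetical), two widely cited criteria, demographic parity and equal opportunity, can be sketched as follows:

```python
# Illustrative sketch of two common fairness criteria on toy binary data.
# Group membership A, labels Y, and predictions Y_hat are encoded as 0/1 lists.

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between groups 0 and 1.

    Demographic parity asks P(Y_hat=1 | A=0) = P(Y_hat=1 | A=1).
    """
    rate = {}
    for g in (0, 1):
        preds = [p for p, a in zip(y_pred, group) if a == g]
        rate[g] = sum(preds) / len(preds)
    return abs(rate[0] - rate[1])

def equal_opportunity_difference(y_true, y_pred, group):
    """Gap in true-positive rates between groups 0 and 1.

    Equal opportunity asks P(Y_hat=1 | Y=1, A=0) = P(Y_hat=1 | Y=1, A=1).
    """
    tpr = {}
    for g in (0, 1):
        preds = [p for y, p, a in zip(y_true, y_pred, group)
                 if a == g and y == 1]
        tpr[g] = sum(preds) / len(preds)
    return abs(tpr[0] - tpr[1])

# Hypothetical predictions for 8 individuals, 4 per group.
y_true = [1, 0, 1, 1, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_difference(y_pred, group))           # 0.0 (both groups: rate 0.5)
print(equal_opportunity_difference(y_true, y_pred, group))    # TPRs differ: 2/3 vs 1/2
```

Note that the two criteria can disagree on the same predictions, as here: the classifier satisfies demographic parity exactly while violating equal opportunity, which is one reason the paper relates each formalization to a distinct underlying conception of justice.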

URL: https://arxiv.org/abs/1903.09493