Identifying Imbalance Thresholds in Input Data to Achieve Desired Levels of Algorithmic Fairness

Second International Workshop on Data Science for Equality, Inclusion and Well-being Challenges (DS4EIW 2022), Osaka, Japan, 17-20 Dec. 2022, pp. 4700-4709
Mecati M., Adrignola A., Vetrò A., Torchiano M.

Software bias has emerged as a relevant issue in recent years, in conjunction with the increasing adoption of software automation in a variety of organizational and production processes of our society, especially in decision-making. Among the causes of software bias, data imbalance is one of the most significant. In this paper, we treat imbalance in datasets as a risk factor for software bias. Specifically, we define a methodology to identify thresholds for balance measures that serve as meaningful risk indicators of unfair classification output. We apply the methodology to a large number of data mutations with different classification tasks, testing all possible combinations of balance measure, unfairness measure, and algorithm. The results show that, on average, the thresholds can accurately identify the risk of unfair output. In certain cases they even tend to overestimate the risk: although such behavior could be instrumental to a prudential approach towards software discrimination, further work will be devoted to better assessing the reliability of the thresholds. The proposed methodology is generic and can be applied to different datasets, algorithms, and context-specific thresholds.
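To give a rough sense of the idea (not the paper's exact procedure), the sketch below computes one candidate balance measure, a normalized Shannon-entropy score over a sensitive attribute, and flags a risk of unfair output when it falls below a threshold. Both the choice of measure and the threshold value of 0.8 are illustrative assumptions; the paper evaluates multiple balance measures and derives thresholds empirically.

```python
from collections import Counter
import math

def shannon_balance(values):
    """Shannon-entropy-based balance of a categorical column,
    normalized to [0, 1]: 1 = perfectly balanced, 0 = a single class."""
    counts = Counter(values)
    n = sum(counts.values())
    k = len(counts)
    if k < 2:
        return 0.0
    entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
    return entropy / math.log(k)

# Hypothetical threshold: balance values below it flag a risk of
# unfair classification output. 0.8 is illustrative only.
RISK_THRESHOLD = 0.8

sensitive_attribute = ["F", "M", "M", "M", "M", "M", "M", "F", "M", "M"]
balance = shannon_balance(sensitive_attribute)
if balance < RISK_THRESHOLD:
    print(f"balance = {balance:.2f}: at risk of unfair output")
```

Here the 2:8 split yields a balance of about 0.72, below the assumed threshold, so the dataset would be flagged for review before training a classifier on it.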

Link https://doi.org/10.1109/BigData55660.2022.10021078