Integrating SQuARE data quality model with ISO 31000 risk management to measure and mitigate software bias

3rd International Workshop on Experience with SQuaRE Series and Its Future Direction (IWESQ 2021), Taipei (Taiwan), 8 Dec 2021, pp. 17-22
Simonetta A., Vetrò A., Paoletti C.M., Torchiano M.

In the last decades, the exponential growth of available information, together with the availability of systems able to learn the knowledge present in the data, has pushed towards the complete automation of many decision-making processes in public and private organizations. This circumstance raises pressing ethical and legal issues, since a large number of studies and journalistic investigations have shown that software-based decisions, when based on historical data, perpetuate the same prejudices and biases existing in society, resulting in a systematic and inescapable negative impact on individuals from minorities and disadvantaged groups. The problem is so relevant that the terms data bias and algorithm ethics have become familiar not only to researchers, but also to industry leaders and policy makers. In this context, we believe that the ISO SQuaRE standard, if appropriately integrated with risk management concepts and procedures from ISO 31000, can play an important role in democratizing the innovation of software-generated decisions, by making the development of this type of software system more socially sustainable and in line with the shared values of our societies. More specifically, we identify two additional measures for a quality characteristic already present in the standard (completeness) and another that extends it (balance), with the aim of highlighting information gaps or the presence of bias in the training data. These measures serve as risk-level indicators to be checked against common fairness measures that indicate the level of polarization of the software's classifications and predictions. The adoption of additional features with respect to the standard broadens its scope of application while maintaining consistency and conformity. The proposed methodology aims to find correlations between quality deficiencies and algorithm decisions, thus allowing their impact to be verified and mitigated.
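The abstract does not define the completeness and balance measures in detail; the sketch below illustrates one plausible formulation under assumed definitions (completeness as the fraction of non-missing values in a data column, balance as the normalized Shannon entropy of category frequencies, so 1.0 means a perfectly balanced attribute and values near 0 flag a risk of bias in the training data). Function names, the example column, and the entropy-based formulation are illustrative assumptions, not the paper's actual measures.

```python
from collections import Counter
import math

def completeness(values):
    """Fraction of non-missing entries in a data column.

    Assumed reading of the 'completeness' quality measure:
    an indicator of information gaps in the training data.
    """
    if not values:
        return 0.0
    present = sum(1 for v in values if v is not None)
    return present / len(values)

def balance(values):
    """Normalized Shannon entropy of category frequencies.

    Assumed formulation of the 'balance' measure:
    1.0 = categories equally represented,
    values near 0.0 = one category dominates (a bias risk signal).
    Missing values are ignored.
    """
    observed = [v for v in values if v is not None]
    counts = Counter(observed)
    k = len(counts)
    if k <= 1:
        return 0.0
    n = len(observed)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return entropy / math.log2(k)  # normalize to [0, 1]

# Illustrative protected attribute column with one missing value
gender = ["F", "M", "M", "M", None, "M", "M", "M"]
print(round(completeness(gender), 3))  # 0.875
print(round(balance(gender), 3))       # 0.592 -> strongly imbalanced
```

In a risk-management reading, a threshold on such indicators (e.g. balance below some agreed level) would trigger a check of the downstream fairness measures computed on the model's classifications or predictions.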
ethics have become familiar not only to researchers, but also to industry leaders and policy makers. In this context, we believe that the ISO SQuaRE standard, if appropriately integrated with risk management concepts and procedures from ISO 31000, can play an important role in democratizing the innovation of software-generated decisions, by making the development of this type of software systems more socially sustainable and in line with the shared values of our societies. More in details, we identified two additional measure for a quality characteristic already present in the standard (completeness) and another that extends it (balance) with the aim of highlighting information gaps or presence of bias in the training data. Those measures serve as risk level indicators to be checked with common fairness measures that indicate the level of polarization of the software classifications/predictions. The adoption of additional features with respect to the standard broadens its scope of application, while maintaining consistency and conformity. The proposed methodology aims to find correlations between quality deficiencies and algorithm decisions, thus allowing to verify and mitigate their impact.