Status: ongoing
Period: January 2023 – December 2025
Funding: €5.4 M, of which €122 K for Politecnico di Torino. Funding for the Nexa Center to be defined
Funding organization: Italian Ministry of University and Research
Person(s) in charge: Antonio Vetrò (Senior Researcher)
Executive summary
FAIR is an Italian national project that aims to address the research questions, methodologies, models, technologies, and ethical and legal rules for building Artificial Intelligence systems capable of interacting and collaborating with humans.
Background
The “Future Artificial Intelligence Research (FAIR)” project aims to help address the research questions, methodologies, models, technologies and even ethical and legal rules for building Artificial Intelligence systems capable of interacting and collaborating with humans, perceiving and acting within changing contexts, being aware of their limitations and capable of adapting to new situations, being aware of the perimeters of safety and trust, and being mindful of the environmental and social impact their implementation and execution may entail.
The project was submitted by the National Research Council as the proposing party, in collaboration with the AIIS (Artificial Intelligence and Intelligent Systems) National Laboratory of CINI (National Inter-university Consortium for Informatics), under the National Recovery and Resilience Plan funded by the European Union – NextGenerationEU, Action 1.3, “Creation of partnerships extended to universities, research centers, and companies for the funding of basic research projects”.
Objectives
The research activities are carried out within 10 spokes that will involve more than 350 researchers (https://fondazione-fair.it/spoke/).
Each spoke is characterized by a specific thematic area and its own set of research challenges with the aim of addressing FAIR challenges from different vantage points. The spokes are: Human-centred AI, Integrative AI, Resilient AI, Adaptive AI, High-quality AI, Symbiotic AI, Edge-exascale AI, Pervasive AI, Green-aware AI, Sustainable Bio-socio-cognitive AI.
Spokes act as catalysts for the development of both innovative AI technologies and new AI services in strategic sectors for Italy, involving the industrial sector at the level of both large companies and innovative small and medium-sized enterprises. The spokes interact with 7 “transversal projects” (https://fondazione-fair.it/progetto/).
The Nexa Center is involved in the activities of Spoke 7, Edge-exascale AI. The goal of this work package is to investigate the implications of AI on edge and exascale devices from an ethical, economic, societal, and environmental point of view, including trustworthiness, accountability and responsibility, discrimination risk identification, explainability, transparency, and energy implications, also based on spatial and thematic data, both real and synthetic. The Nexa Center is responsible for the task “Ethically-sensitive dataset labeling”: the goal of the task is to research, design, and prototype data labels that can inform stakeholders (data curators, model builders, end users, etc.) about the risk of downstream ethical and societal issues derived from specific dataset characteristics. The following aspects will be investigated: low trustworthiness due to poor data quality, lack of accountability due to poor data documentation, and risk of systematic discrimination against specific social groups (e.g. gender, age, education level) caused by low balance in the distribution of protected attributes; the sketch below illustrates one such balance measurement. By synthetically documenting selected dataset characteristics, the dataset measurements and labeling scheme will support the creation, selection, and adoption of datasets in a more responsible way, providing better transparency, facilitating early interventions, mitigating possible data cascades, and ultimately enabling human-respectful AI innovations.
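As a purely illustrative aid (not project code), the following minimal Python sketch shows the kind of measurement such a labeling scheme could rely on: it computes the balance of a protected attribute in a tabular dataset as a normalized Shannon entropy and maps it to a coarse risk label. The column names, thresholds, and label names are hypothetical assumptions introduced here for illustration only.

```python
"""Illustrative sketch: balance of protected attributes mapped to a coarse risk label.
Column names, thresholds, and label names are hypothetical assumptions."""
from math import log2

import pandas as pd


def balance(series: pd.Series) -> float:
    """Normalized Shannon entropy of the value distribution:
    1.0 = perfectly balanced groups, values near 0.0 = highly imbalanced."""
    freqs = series.value_counts(normalize=True)
    if len(freqs) < 2:
        return 0.0  # only one group present: maximally imbalanced
    entropy = -sum(p * log2(p) for p in freqs)
    return entropy / log2(len(freqs))


def imbalance_labels(df: pd.DataFrame, protected: list[str]) -> dict[str, str]:
    """Assign one coarse label per protected attribute (thresholds are assumptions)."""
    labels = {}
    for col in protected:
        b = balance(df[col])
        if b >= 0.9:
            labels[col] = "low risk"
        elif b >= 0.7:
            labels[col] = "medium risk"
        else:
            labels[col] = "high risk"
    return labels


if __name__ == "__main__":
    # Hypothetical dataset with two of the protected attributes mentioned in the task.
    df = pd.DataFrame({
        "gender": ["F", "M", "M", "M", "M", "M"],
        "age_band": ["18-30", "31-50", "31-50", "18-30", "51+", "31-50"],
    })
    print(imbalance_labels(df, ["gender", "age_band"]))
    # e.g. {'gender': 'high risk', 'age_band': 'low risk'}
```

In practice the task also covers other dataset characteristics (data quality, documentation completeness), for which different measurements would be defined; the example only shows how a single characteristic could be summarized into a stakeholder-facing label.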
Results
At the time of writing this report, the project is in its initial phases and has not yet produced any public results.