Responsible Artificial Intelligence

A Politecnico di Torino master's course (Master of Science in Computer Engineering) aimed at raising awareness of the principles of responsible Artificial Intelligence development, as well as of the social conditioning of the development of AI and, more broadly, of digital technologies.

The deployment of Artificial Intelligence (AI) systems in several domains of society – e.g., welfare, insurance, finance, medicine, justice, education – has raised important issues concerning the accountability, fairness and transparency of AI technologies. As a consequence, the notion of “responsible AI” has recently been adopted by a variety of actors involved in or affected by the development of AI: companies, civil society organizations, universities, governments and public institutions, and scientific associations (e.g., IEEE and ACM). The main educational goal of this course is to introduce students to the principles of responsible AI within the context of a general understanding of the social impact of technology and, vice versa, of the social conditioning of the development of AI (and, more broadly, of technology). The course will thus lead students to better understand their role in society (including in companies), both as computer engineering professionals and as citizens possessing specific technical and scientific skills.

During the course, students will develop the ability to discuss and reflect upon the ethical and societal aspects of AI technologies. More specifically, students will be able to:

  • (L.O.#1) know and understand the duties of a software professional in relation to AI technologies and in general to software systems that become integrated into the infrastructure of society, as recommended, among others, by ACM and IEEE;
  • (L.O.#2) analyze an AI system as a socio-technical system, highlighting how cultural, historical, socio-political, and economic aspects influence its development (and vice versa);
  • (L.O.#3) evaluate the risks of negative consequences for individuals, vulnerable groups and, in general, the public interest arising from design choices in the development of AI tools, as well as their positive impact;
  • (L.O.#4) formulate alternative designs for AI systems, highlighting their potential benefits, potential risks, and underlying values, also in relation to long-term scenarios.