This workshop follows the seminar organized in December 2017 at the Université Paris Dauphine (see SRA 2017). The 2019 workshop is jointly organized by the LAMSADE, DIMACS, the 3A Institute, the Chaire Gouvernance et Régulation, the Club des régulateurs and a number of partners in CNRS research groups (GDR): GDR IA, GDR RO, GDR Policy Analytics and GDR CIS.
The purpose of the workshop is to explore the challenges linked to the socially responsible design of algorithms. Through this theme we seek to address issues such as:
- algorithmic fairness, including questions of data provenance, bias and ownership;
- explainability and interpretability of algorithms;
- trust, legitimacy and acceptability of algorithms;
- privacy, individual rights to information and societal needs; and
- decision autonomy and identification of the loci of liability where autonomous artifacts are present.
The subject of the workshop is becoming increasingly important as public attention and opinion respond to high-profile controversies surrounding visible applications of artificial intelligence and algorithmic decision making, such as autonomous vehicles, college admissions, online profiling and marketing, and predictive justice. Interestingly, fewer concerns have been publicly raised about automated decision-making devices and artifacts that have existed for a long time, such as autopilots and advanced automation in factories and critical infrastructure, although this may change as failures in these established systems come to be investigated alongside those of newer autonomous applications.
In light of these issues, there are numerous scientific challenges and directions that merit attention from researchers in computer science, mathematics, robotics, economics, sociology, ethics, law and a range of other disciplines, including:
- Mechanism design principles as guidelines for algorithm design;
- Formal verification of software and requirements engineering, including ongoing adaptive verification where there is algorithmic learning;
- Causality and argumentation, in relation to trust and acceptability;
- Participatory algorithmic design and monitoring to provide channels for legitimisation of design decisions;
- Potential principles for standards and codes of practice for autonomous artifact design, management and governance across different cultural, social, economic and environmental contexts;
- Compact models and post-hoc explanations in machine learning (see the sketch following this list); and
- Distilling from this area of investigation the core questions relevant to a new applied science for managing AI systems safely, responsibly and at scale.
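To make the post-hoc explanation theme concrete, the following is a minimal sketch, in Python with scikit-learn, of one common technique: fitting a compact surrogate decision tree to mimic a black-box model's predictions. The choice of models, the synthetic data and the fidelity measure are illustrative assumptions, not methods proposed by the workshop.

```python
# Illustrative sketch of a post-hoc explanation: a small surrogate
# tree approximating an opaque model. All parameters are assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# A stand-in "black box": any opaque classifier would do here.
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Fit a depth-limited tree to the black box's *predictions* (not the
# true labels), yielding a compact, human-readable approximation.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(8)]))
```

A surrogate of this kind trades fidelity for compactness: the shallower the tree, the easier it is to audit, but the less faithfully it tracks the black box, which is precisely the kind of tension the workshop aims to examine.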
The workshop is by invitation only and has two principal objectives:
- to establish a state of the art on the subject, identify areas for future research and action, and produce a position paper or special issue;
- to explore the possibility of submitting a proposal for an EU COST Action (to be submitted in Spring 2020) and/or responding to other funding calls.