Motivations
There is increasing concern about the impact of automatic devices making decisions in several aspects of our lives, including credit scoring, admissions to universities, pricing of goods, recommender systems, up to autonomous vehicles and predictive justice. However, the use of algorithms to automatise decision making is not recent; algorithms existed well before computer science became the industry we know today. What exactly is happening?
- Increasing decision autonomy of automatic devices.
- Increasing learning capacity of automatic devices.
- Software publishing and data storage are concentrated in the hands of a few industrial players.
- Evidence of biased decisions, of counterintuitive decisions, of inappropriate use of data, of unforeseen consequences.
The purpose of this document (following discussions which took place during the workshop on this topic, held in Paris on 11 and 12 December 2017) is to try to establish a scientific perspective on such concerns and then to identify what researchers concerned with such topics can do.
What is the problem?
In reality there exist several different problems, which the press and blogs tend to put together under different « titles » basically sharing a number of keywords: Artificial Intelligence, Data Protection, Algorithmic Transparency, etc. Most of them tend to raise concerns among the general public about how such technologies could impact our lives. It pays, however, to clarify a number of issues.
1 – The common topic and concern is the transfer of decision autonomy to automatic devices. However, such a transfer raises a number of questions.
- Who decides that a certain decision process can be automatised?
- Supposing a certain decision process is bound to be automatised, who decides how to automatise it? We know that even for routine decision processes there exist several different ways to implement an algorithm executing them.
- What properties should the outcome of the automatised decision satisfy?
- Where do the data used by such an automatised decision process come from? Are they private? Public? Are they legitimate? Can we check whether collecting the data introduced biases (see the sketch after this list)?
- Who is liable for the automatic decisions taken by that device?
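To make the point about biased data collection concrete, here is a minimal, purely illustrative sketch in Python (the groups, the approval rates and the 20% coverage figure are our own assumptions, not taken from any real system). It shows how an under-covering collection channel distorts the statistic an automatic device would later learn from:

```python
import random

random.seed(42)

# Hypothetical population: two groups with different true approval
# rates for some decision (all numbers are purely illustrative).
population = (
    [("A", random.random() < 0.7) for _ in range(5000)]    # group A: ~70% approved
    + [("B", random.random() < 0.3) for _ in range(5000)]  # group B: ~30% approved
)

def approval_rate(sample):
    return sum(approved for _, approved in sample) / len(sample)

# Unbiased collection: uniform sampling from the whole population.
uniform_sample = random.sample(population, 2000)

# Biased collection: records from group B reach the dataset only 20%
# of the time, e.g. because the collection channel under-covers them.
biased_sample = [(g, a) for g, a in population
                 if g == "A" or random.random() < 0.2]

for name, sample in [("uniform", uniform_sample), ("biased", biased_sample)]:
    share_b = sum(g == "B" for g, _ in sample) / len(sample)
    print(f"{name} sample: share of group B = {share_b:.2f}, "
          f"approval rate = {approval_rate(sample):.2f}")
```

Comparing the group shares observed in the collected sample with what is known about the population is one elementary check for collection bias; without such a check, the distorted approval rate would be silently learned by the device.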
The last question raises a more general topic: how far are we, in general, ready to transfer decision autonomy to devices which are not liable for their decisions? And, ultimately, who is going to pay for these decisions?
2 – The second common topic concerns the acceptability and trust (by any set of stakeholders) of decisions taken by an automatic device using any type of algorithmic procedure. This also raises a number of questions.
- Given an algorithm, or to be more precise a bundle of algorithms setting up an automatic decision procedure, can we trace precisely what these algorithms do?
- Provided that we can trace the execution of the algorithms, can we provide « explanations » (interpretable, understandable, usable) to any type of stakeholder?
- Provided we can trace and explain the behaviour of an algorithm, can we provide the « ultimate reasons » for which the algorithm/automatic device made a precise decision? If that is the case, can we replicate the decision given the same input?
- Supposing the algorithm cannot guarantee replicability (for instance, if the algorithm learns each time it is executed, we cannot guarantee that for a given input the output will remain the same; see the sketch after this list), what type of explanations/justifications/reasons would be considered satisfactory in case of a dispute?
- Using an automatic device for a certain class of decision processes may have long-term impacts which we cannot anticipate at the time of deciding to automatise such processes. At what horizon should we consider auditing the overall impact of using such an automatic device?
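To make the replicability point concrete, here is a minimal, purely illustrative sketch in Python (the perceptron-style update rule and all names are our own assumptions, not a reference to any specific system). The same input receives different decisions before and after the device learns from a single new case:

```python
# A minimal online learner: a perceptron-style rule that updates its
# weights every time it processes a labelled example.
class OnlineDecider:
    def __init__(self, n_features):
        self.weights = [0.0] * n_features

    def decide(self, x):
        # Decision rule: accept iff the weighted score is positive.
        score = sum(w * xi for w, xi in zip(self.weights, x))
        return score > 0

    def learn(self, x, accepted):
        # Perceptron update: if the device was wrong, nudge its
        # weights toward the observed outcome.
        target = 1.0 if accepted else -1.0
        if self.decide(x) != accepted:
            self.weights = [w + 0.1 * target * xi
                            for w, xi in zip(self.weights, x)]

device = OnlineDecider(n_features=2)
applicant = [1.0, 2.0]

print("decision before learning:", device.decide(applicant))  # False
device.learn(applicant, accepted=True)  # the device learns from one case
print("decision after learning:", device.decide(applicant))   # True
```

In a dispute, replaying the same input therefore does not reproduce the contested decision unless the device's full internal state at decision time was logged, which links the replicability question back to traceability.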
Generally speaking, the questions we raise here derive from the necessity of being able to argue against, refute or oppose a decision taken by an automatic device. The issue here is how to limit the discretionary power of such devices, and what type of decisions we are ready to accept without discussing them.
3 – Algorithms are not necessarily software. However, what we are talking about here is the use of algorithms/bundles of algorithms in the form of software implementing an automatic decision making device. Supposing we design an algorithm satisfying requirements about the type of outcome and the type of process we desire (the previous two topics), can we guarantee that the software implementation still satisfies such requirements? This raises some further topics.
- How do we provide formal verification for requirements of the type we are discussing here?
- Are such requirements compatible with the efficiency of the algorithms?
- How do we handle the trade-offs between data availability and data ownership, or between privacy and traceability of data manipulation?
- If open software provides a guarantee of more transparency in implementing algorithms, how does this comply with requirements about security, possibly using encryption?
The issue of using appropriate cryptography in the design of automatic decision devices is expected to become crucial in the very near future.
4 – The above questions raise a final issue: is it possible to regulate the design and use of automatic decision making devices and the design and use of the algorithms used for this purpose? This raises several different questions.
- Who should be in charge of such regulation? Do we need an independent authority, such as the FDA, certifying the use of such devices? Should we also certify the designers?
- Should the design of new automatic decision making devices be publicly disclosed? Should we know in advance the purpose for which a decision process could be automatised, and participate in deciding whether it should actually be automatised?
- Given the huge impact on the economy, is such a hypothesis viable and realistic? What would be the impact of such regulation?
- Can we efficiently regulate the production of artifacts with a very short life cycle?
The above questions deliberately leave undiscussed further issues such as the ethics of manipulating data and the moral dimension of automatic decision making.
Let’s summarise. The principal problem when we consider the option of automatic decision making is the transfer of decision autonomy and liability to a device which « per se » has no liability and/or autonomy. If the following two hypotheses are correct:
- decision making processes that are exclusively data driven are a myth;
- there are always at least two different options in designing a mechanism aiming to make decisions with respect to a given problem;
Then the problem of the « social responsibility of algorithms » should be seen from multiple perspectives (scientific, political, social, economic and legal), concerning designers, producers and users under their individual responsibility, and then concerning society as a whole and its responsibility in choosing a shape for itself.
Conclusions
If scientific findings allow us to conceive and design automatic decision making devices, these will occupy an increasingly large place in our everyday lives. Despite the several legitimate concerns we can express and consider, there are enormous economic, societal and political advantages in using such devices in many applications and real-world problems.
It is a matter of fact that several professions and business activities are already reshaping their identity on the assumption that such devices are already present and that, in the coming years/decades, they will redesign the way many decision processes are handled today.
Such trends are inevitable; after all, our societies have seen such evolutions several times over the last centuries. However, if this is the case, then it is all the more important to govern these trends. From this perspective, we definitely need further scientific investigation of the several different questions raised in this document.