- Serge Abiteboul, INRIA, ENS, FR. Issues in Ethical Data Management.
- Yann Chevaleyre, LAMSADE, Université Paris Dauphine, Paris, FR. An overview of interpretability in Machine Learning.
- Krzysztof Choromanski, Google, New York, USA. Differentially-private random projection trees.
- Vince Conitzer, Computer Science, Duke University, USA. Moral Decision Making Frameworks for Artificial Intelligence.
- Mikaël Cozic, Pierre Valarcher, LACL, Université Paris Est, FR. PEPS ALGOCIT.
- Nello Cristianini, Computer Science, Bristol University, UK. Living in a data-obsessed society.
- Gabrielle Demange, PSE, EHESS, FR. Algorithms: Which requirements?
- Joe Halpern, Computer Science, Cornell University, USA. Moral Responsibility, Blameworthiness and Intention: in search of formal definitions.
- Thierry Kirat, Morgan Sweeney, IRISSO/CR2D, Université Paris Dauphine, FR. Algorithms and the Law. Current and Future Challenges for Legal Professions.
- Nicolas Maudet, LIP6, Université Pierre et Marie Curie, Paris, FR. Explaining Algorithmic Decisions.
- Francesca Musiani, ISCC, CNRS, Paris, FR. “How about alternative algorithms?” The attempt to “re-decentralise” Internet services.
- Benjamin Nguyen, LIFO, Université d’Orléans, FR. Anonymization and Fair Data Processing.
- Dino Pedreschi, Computer Science, Università di Pisa, IT. Data ethics and machine learning: decentralisation, algorithmic biases and how to discover them.
- Fred Roberts, DIMACS and CCICADA, Rutgers University, USA. Avoiding Bias in Implementations of Randomized Protocols for Security Screening.
- Giovanni Sartor, University of Bologna, IT. The Ethical Knob: ethically-customisable automated vehicles and the law.
- Alexis Tsoukiàs, LAMSADE, Université Paris Dauphine, FR. Social responsibility of algorithms: what is it about?
- Carmine Ventre, Computer Science, University of Essex, UK. Towards pragmatic mechanism design.
Abstracts
Serge Abiteboul, INRIA, ENS, FR. Issues in Ethical Data Management
Data science holds incredible promise of improving people’s lives, accelerating scientific discovery and innovation, and bringing about positive societal change. Yet, if not used responsibly, this technology can propel economic inequality, destabilize global markets and affirm systemic bias. In this talk, we consider issues such as violation of data privacy, or biases in data analysis. We discuss desirable properties in data analysis such as fairness, transparency, or diversity. A goal of the talk is to draw the attention of the computer science community to the important emerging subject of responsible data management and analysis. We will present our perspective on the issue, and motivate research directions.
Yann Chevaleyre, LAMSADE, Université Paris Dauphine, Paris, FR. An overview of interpretability in Machine Learning
Interpretability is a long-standing topic in machine learning that has recently gained a lot of traction. In this talk, we will discuss the need for interpretability in machine learning, and what exactly is meant by interpretability. In the first part, we will cover interpretable models, how to learn them, and how to make them accurate. Next, we will present how predictions themselves may be made interpretable, in particular in deep learning, where models are often treated as black boxes.
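To make the black-box setting concrete, here is a minimal sketch (not taken from the talk itself) of one standard technique, the global surrogate model: a depth-limited decision tree is trained to mimic a black-box classifier, and its fidelity to the black box is measured. All models, data, and parameters below are illustrative assumptions.

```python
# A minimal surrogate-model sketch; models and parameters are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

# "Black box": an ensemble whose internal logic is hard to read directly.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Surrogate: a shallow tree trained to mimic the black box's predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the readable surrogate agrees with the black box.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate))  # the surrogate itself is directly readable
```

High fidelity means the small tree is a faithful, human-readable approximation of the black box; low fidelity warns that the simple explanation should not be trusted.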
Vince Conitzer, Computer Science, Duke University, USA. Moral Decision Making Frameworks for Artificial Intelligence
AI systems increasingly need to make decisions with a moral component. Should a self-driving car prioritize the safety of its passengers over that of others, and to what extent? Should an algorithm that decides which donors and patients to match in a kidney exchange take features such as the patient’s age into account, and to what extent? I will discuss two approaches to these problems: extending game-theoretic frameworks, and learning from examples of human decisions.
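As a purely hypothetical illustration of "learning from examples of human decisions", the sketch below fits a linear priority score over invented patient features from pairwise human judgments ("patient A should be prioritised over patient B"). The features, data, and model are assumptions made for illustration, not the speaker's actual framework.

```python
# Hypothetical sketch: learn a priority score from pairwise human judgments.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: invented features [age, years_on_waitlist, num_prior_transplants]
# for patient A and patient B in one dilemma shown to human subjects.
pairs_a = np.array([[30, 2, 0], [65, 5, 1], [50, 1, 0], [25, 4, 0]])
pairs_b = np.array([[60, 1, 0], [40, 2, 0], [45, 6, 1], [70, 3, 1]])
chose_a = np.array([1, 0, 0, 1])  # 1 if humans prioritised patient A

# Bradley-Terry-style model: P(A preferred) depends on the feature difference.
model = LogisticRegression().fit(pairs_a - pairs_b, chose_a)
print("learned feature weights:", model.coef_[0])
```

The learned weights make the implicit trade-offs in the human judgments explicit, which is one way such judgments could be transferred to an AI system.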
Mikaël Cozic, Pierre Valarcher, LACL, Université Paris Est, FR. PEPS ALGOCIT
Our multidisciplinary group(*) (philosophy, computer science, law, economics, educational science, business and sociology) has studied, through the particular case of the “APB” algorithm, recommendations and best practices for the development of “public algorithms”. We present some of its contributions.
(*) ALGOCIT group (Université Paris Est Créteil): M. Béjean, P. Cegielski, J. Cervelle, R. Le Roux, F. Semmak, J. Tesson, A. Thauvron, S. Thoron, M. Valarcher, N. Wagener
Nello Cristianini, Computer Science, Bristol University, UK. Living in a data-obsessed society
Modern Artificial Intelligence is powered by statistical methods and fuelled by large amounts of data. This reliance on machine learning has enabled us to bypass the need to fully understand a phenomenon before replicating it in a computer, paving the way to much progress in fields as diverse as machine translation, computer vision, and speech recognition. This approach is also affecting other disciplines: we call this the big-data revolution. For this reason, data has been called the new oil: a new natural resource that businesses and scientists alike can leverage.
A new unified data infrastructure that mediates a broad spectrum of our daily transactions, communications, and decisions has emerged from the data revolution of the past decades. Modern AI and Big Data technologies are so closely interconnected that it is not possible to have one without the other. Along with great benefits, this approach also comes with many risks, some linked to the fact that an increasing number of activities are mediated by this infrastructure, including many decisions that affect individuals. Very often, the valuable data being used is our own personal data.
New AI technologies permit this infrastructure to infer our inclinations and predict our behaviour across an increasing range of activities, whether social, economic or regulatory. As opting out is no longer a realistic option, we must strive to understand the effects this new reality can have on society.
Machines making autonomous decisions in key parts of our infrastructure can lead to unintended bias and discrimination, and can even have unintended effects on public opinion or on our markets. Regulation cannot be effective if we do not understand the interaction between machine decisions and society.
The current trend towards the collection, storage, analysis and exploitation of large amounts of personal data needs to be understood, and its implications need to be assessed, particularly those that will affect our autonomy, our rights, public opinion, and other fundamental aspects of our lives.
Gabrielle Demange, PSE, EHESS, FR. Algorithms: Which requirements?
Algorithms are used in many different contexts: by governments, by quasi-monopolies (PageRank by Google), by competitive firms in financial markets, and by advertisers to target consumers, to name a few. In each context, the main desirable properties differ. For example, for high-frequency trading, the control of risks is judged the predominant requirement in the EU Markets in Financial Instruments Directive; for consumers, control over their data (privacy) motivates the EU General Data Protection Regulation. I will discuss some of these desirable properties in relation to the economic context.
Joe Halpern, Computer Science, Cornell University, USA. Moral Responsibility, Blameworthiness and Intention: in search of formal definitions
The need for judging moral responsibility arises both in ethics and in law. In an era of autonomous vehicles and, more generally, autonomous AI agents, the issue has now become relevant to AI as well. Although hundreds of books and thousands of papers have been written on moral responsibility, blameworthiness, and intention, there is surprisingly little work on defining these notions formally. However, we will need formal definitions in order for AI agents to apply these notions. In this talk, I take some preliminary steps towards defining these notions.
Thierry Kirat, Morgan Sweeney, IRISSO/CR2D, Université Paris Dauphine, FR. Algorithms and the Law. Current and Future Challenges for Legal Professions
Our communication identifies the main challenges that predictive justice models raise for the legal professions. So far, predictive justice software provides a way to deal with all (known) court decisions and makes it possible to find the most relevant ones for one’s research. Predictive justice can also automate or computerize many tasks for lawyers (barristers), judges, and business lawyers.
We will set out the current and future changes this implies, which affect the behaviour of these legal professionals as well as the current regulation schemes. The objective is to assess the consequences of implementing algorithms for the legal professions and, beyond that, for their regulation.
Nicolas Maudet, LIP6, Université Pierre et Marie Curie, Paris, FR. Explaining Algorithmic Decisions
Several institutions have recently put forward a “right to explanation”. In this talk, we shall first discuss possible definitions of this notion and provide a brief historical overview of the question. We will then present recent advances in the area, involving different types of models.
Francesca Musiani, ISCC, CNRS, Paris, FR, “How about alternative algorithms?” The attempt to “re-decentralise” Internet services.
From algorithmic biases in content management to the homogenisation of users’ preferences, not to mention the ongoing risk of institutional surveillance via the private sector, a handful of “giant” platforms dominate our online environment with potentially negative effects and very little accountability. This presentation will address the rise of a few alternative projects attempting a “re-decentralisation” of Internet services, the opportunities they provide, and the important challenges they face.
Benjamin Nguyen, LIFO, Université d’Orléans, FR. Anonymization and Fair Data Processing
Current data protection laws in France closely scrutinize personal data processing. Indeed, in the case of such processing many constraints apply: data collection must be limited, retention limits are imposed and, more generally, the processing must be fair. Conversely, such constraints do not exist if the data is anonymous (i.e. it is not possible, or at least very difficult and costly, to link a data item to a real individual); again this can be viewed as fairness, since anonymous data is by definition harmless for the individuals concerned. However, data anonymization is still an open problem. Many state-of-the-art anonymization techniques used in statistics (such as pseudonymization or k-anonymization) cannot be mathematically proven to offer any formal guarantees. Other techniques, such as differential privacy, can provide these guarantees but are, on the contrary, difficult to use in practice and difficult for the general public to understand. Another field under investigation is cryptography: techniques such as fully homomorphic encryption could fully enable private data processing, but for the moment (and for the foreseeable future) they are not efficient enough to be used on Big Data.
Thus, the question of fair data processing remains open: is anonymization a good road to follow? Shouldn’t other aspects also be considered, such as the concepts promoted by the privacy field: openness, user control, auditability, and so on? Finally, how should algorithms designed to run on Big Data be used in order to be fair?
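As a concrete illustration of the formal guarantees mentioned above, here is a minimal sketch of the Laplace mechanism from differential privacy, applied to a simple counting query. The data and the epsilon values are illustrative assumptions, not figures from the talk.

```python
# Minimal Laplace-mechanism sketch; data and epsilon are illustrative.
import numpy as np

rng = np.random.default_rng(0)
ages = rng.integers(18, 90, size=10_000)  # toy "personal" attribute

def dp_count(condition_mask, epsilon):
    """Noisy count of individuals matching a condition.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon suffices for epsilon-differential privacy.
    """
    true_count = int(condition_mask.sum())
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

print("true count:", int((ages > 65).sum()))
print("eps = 1.0 :", round(dp_count(ages > 65, epsilon=1.0), 1))
print("eps = 0.1 :", round(dp_count(ages > 65, epsilon=0.1), 1))  # noisier
```

The example also shows why such techniques are hard to explain to the general public: the privacy guarantee is a property of the noise distribution, not anything visible in a single released number.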
Dino Pedreschi, Computer Science, Università di Pisa, IT. Data ethics and machine learning: decentralisation, algorithmic biases and how to discover them
Machine learning and data mining algorithms construct predictive models and decision-making systems based on big data. Big data are the digital traces of human activities (opinions, preferences, movements, lifestyles, and so on), hence they reflect all human biases and prejudices. Therefore, the models learnt from big data may inherit such biases and various artefacts, leading to discriminatory, or simply wrong, decisions. In my talk, I discuss several real examples, from crime prediction to credit scoring to image recognition, and show how we can tackle the problem of discovering discrimination using the very same tool that exacerbates the issue: data mining.
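One simple, widely used audit measure for the discrimination-discovery problem described above is the disparate-impact ratio between a protected and an unprotected group. The sketch below computes it on toy data; the data and the 0.8 threshold (the US "four-fifths rule") are illustrative, not taken from the talk.

```python
# Minimal disparate-impact check on toy decision data.
import numpy as np

rng = np.random.default_rng(1)
protected = rng.integers(0, 2, size=5_000)  # 1 = protected group member
# Toy decisions with a built-in bias against the protected group.
approved = rng.random(5_000) < np.where(protected == 1, 0.4, 0.6)

rate_protected = approved[protected == 1].mean()
rate_other = approved[protected == 0].mean()
ratio = rate_protected / rate_other  # the disparate-impact ratio

print(f"approval rate, protected: {rate_protected:.2f}")
print(f"approval rate, other:     {rate_other:.2f}")
print(f"disparate impact ratio:   {ratio:.2f} (below 0.8 suggests adverse impact)")
```

Discrimination-discovery methods go well beyond this single ratio, mining the data for the specific contexts and subgroups in which such gaps arise.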
Fred Roberts, DIMACS and CCICADA, Rutgers University, USA. Avoiding Bias in Implementations of Randomized Protocols for Security Screening.
An emphasis on increased security at large gathering places such as sports stadiums, concert halls, and public areas of airports has led to new ideas for defending the public in these areas. When not everyone can be screened, or when not everyone’s background can be checked every time, protocols for doing so randomly are called for. However, it is important that these protocols be fair and unbiased and, in addition, that they not create the impression of being biased. We shall explore recent randomized protocols for security screening and background checks and the relevant notions of fairness and unbiasedness.
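As a minimal illustration of one formal notion of "unbiased" in this setting, the sketch below selects k people for secondary screening uniformly at random without replacement, so every arrival has the same selection probability regardless of any attribute. The queue and the capacity k are illustrative assumptions.

```python
# Minimal unbiased random-screening sketch; queue and capacity are illustrative.
import random

random.seed(42)
arrivals = [f"visitor_{i}" for i in range(200)]
k = 20  # screening capacity for this batch

# Uniform sampling without replacement: each visitor is screened with
# probability k / len(arrivals), independent of who they are.
selected = random.sample(arrivals, k)
print(sorted(selected)[:5], "...")
```

The hard problems the talk addresses start where this sketch ends: real implementations must also remain demonstrably unbiased under operational constraints and must be seen to be fair by the public.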
Giovanni Sartor, University of Bologna, IT. The Ethical Knob: ethically-customisable automated vehicles and the law.
Accidents involving autonomous vehicles (AVs) raise difficult ethical dilemmas and legal issues. It has been argued that self-driving cars should be programmed to kill, that is, they should be equipped with pre-programmed approaches to the choice of what lives to sacrifice when losses are inevitable. Here we shall explore a different approach, namely, giving the user/passenger the task (and burden) of deciding what ethical approach should be taken by AVs in unavoidable accident scenarios. We thus assume that AVs are equipped with what we call an “Ethical Knob”, a device enabling passengers to ethically customise their AVs, namely, to choose between different settings corresponding to different moral approaches or principles. Accordingly, AVs would be entrusted with implementing users’ ethical choices, while manufacturers/programmers would be tasked with enabling the user’s choice and ensuring implementation by the AV.
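A purely hypothetical sketch of how such a knob might work: a single parameter in [0, 1] that weights expected harm to passengers against expected harm to third parties when every available manoeuvre causes some harm. The linear weighting and all numbers below are illustrative assumptions, not the authors' model.

```python
# Hypothetical "Ethical Knob" sketch; weighting and numbers are illustrative.
def choose_manoeuvre(options, knob):
    """options: list of (name, expected_harm_passengers, expected_harm_others).

    knob = 0.0: fully egoistic (only passengers count);
    knob = 1.0: fully altruistic (only third parties count);
    knob = 0.5: impartial.
    """
    def weighted_harm(opt):
        _, harm_passengers, harm_others = opt
        return (1 - knob) * harm_passengers + knob * harm_others
    return min(options, key=weighted_harm)[0]

options = [("swerve", 0.9, 0.1), ("brake", 0.3, 0.6)]
for knob in (0.0, 0.5, 1.0):
    print(f"knob={knob}: {choose_manoeuvre(options, knob)}")
```

Even this toy version makes the legal question vivid: the chosen manoeuvre, and hence who bears the harm, follows directly from the user's setting rather than from the manufacturer's code.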
Alexis Tsoukiàs, LAMSADE, Université Paris Dauphine, FR. Social responsibility of algorithms: what is it about?
Algorithms have been used for centuries to structure and assist decision-making and complex resource allocation processes. Why, then, are our societies today concerned about the use of algorithms in everyday life? In this talk, we try to explain why there is a legitimate societal problem, and why and how computer scientists, and more generally people involved in the design of algorithms, should take care of the impact of the increased decision autonomy of devices running software-implemented algorithms.
Carmine Ventre, Computer Science, University of Essex, UK. Towards pragmatic mechanism design.
Mechanism design provides a theoretical framework in which to study socially responsible algorithms. In fact, mechanisms couple the quality (e.g., optimality) of solutions with notions such as incentive compatibility to deliver societal impact. However, the literature on the subject says more about the limitations of mechanisms than about what they can actually do. In this talk, I will discuss a research agenda that aims at building the theoretical foundations for a more applied use of mechanism design, by looking at reasonable ways to bypass impossibility results or relax unreasonable assumptions.
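To make incentive compatibility concrete, here is a minimal sketch of the classic second-price (Vickrey) auction, in which bidding one's true value is a dominant strategy, so the mechanism couples a good allocation with good incentives. The bids are illustrative.

```python
# Minimal second-price (Vickrey) auction sketch; bids are illustrative.
def second_price_auction(bids):
    """bids: dict of bidder -> bid. Returns (winner, price paid)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1]  # winner pays the second-highest bid
    return winner, price

bids = {"alice": 10.0, "bob": 7.0, "carol": 4.0}
print(second_price_auction(bids))  # ('alice', 7.0): truthful bidding is optimal
```

Because the price a bidder pays does not depend on her own bid, misreporting can never help her; the pragmatic agenda above asks how far such guarantees can be pushed once the textbook assumptions are relaxed.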