Blog

Right to Explanation and Artificial Intelligence


17 June 2019

The significant increase in the use of artificial intelligence (AI) algorithms, in both the public and private sectors, has fostered the debate on how to apply a right to explanation to automated decisions. This concern has been fueled by several media reports revealing cases in which the use of certain AI algorithms has generated undesirable discrimination and bias. A well-known example is Amazon’s AI recruiting algorithm, which gave preference to male candidates over female ones in job selection.

In this article, we make a general analysis of the articles of the GDPR that are used as a basis for a right to explanation and then briefly analyze two academic works that have gained prominence in Europe on the subject. Since the wording of the European law is similar to that of the Brazilian Data Protection Law (Lei Geral de Proteção de Dados Pessoais – LGPD), Act nº 13.709/2018, we believe it is important to follow the debates developed there, as they can significantly influence the understanding of the right to explanation in the Brazilian context.

Neither the GDPR nor the LGPD uses the specific term “right to explanation”, but both grant the data subject the right to obtain meaningful information about the logic behind an automated decision. Article 20 of the LGPD establishes the right to obtain “clear and adequate information regarding the criteria and procedures used for the automated decision”. Similarly, Articles 13, 14 and 15 of the GDPR state that the data subject should be provided with “meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject”. Thus, the debate on the right to explanation in the European Union, which is now beginning in Brazil as well, rests on the interpretation of these three articles of the GDPR.

In addition to the discussion about the legal definition, a brief clarification should be made about how AI algorithms work, since their technical characteristics make it difficult to obtain a clear explanation of how they reach their decisions. In general, an AI works as a system that receives a set of inputs for training, detects patterns in that dataset, and learns to perform certain classifications or predictions. Subsequently, new data are fed into the system and processed according to the patterns learned during training, producing outputs. It should be noted that an AI’s learning process continues as it is trained with new data. Another important point is that today we only have AI algorithms that specialize in performing specific tasks (narrow AI), such as identifying a face in a photo, doing a credit analysis, diagnosing a certain disease, or even playing Go. We are not talking about artificial intelligence similar to human intelligence (artificial general intelligence), something we do not even know is achievable.
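To make this train-then-predict cycle concrete, the minimal sketch below trains an ordinary decision tree with scikit-learn. The data, feature names and values are invented for illustration only and are not taken from any system discussed in this article.

```python
# Minimal sketch of the train-then-predict cycle described above.
# Data and feature names are invented for illustration only.
from sklearn.tree import DecisionTreeClassifier

# Training inputs: [monthly_income, existing_debt] (in thousands); label 1 = credit approved
X_train = [[3.0, 0.2], [1.2, 0.9], [5.0, 0.1], [0.8, 0.7]]
y_train = [1, 0, 1, 0]

# The system detects patterns in the training data ("learning")
model = DecisionTreeClassifier().fit(X_train, y_train)

# New, unseen data are then classified according to the learned patterns (output)
print(model.predict([[2.5, 0.3]]))  # e.g. [1] -> approved
```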

Illustration: a simple neural network model, one of the specific techniques within the broader field of artificial intelligence.

One of the main differences between regular algorithms, which can be described as “if this, then that” (IFTTT) rules, and AI is that in the first case the decision-making logic must be worked out by a human before being programmed, while an AI algorithm learns its own decision-making logic from training data. Nowadays, there is great technical difficulty in understanding why an AI made a specific decision and in translating that into an explanation an ordinary person can understand, the so-called black box problem. When one considers that an AI can analyze hundreds of thousands of variables in its decisions, such as the LinkedIn algorithm that analyzes 100,000 different variables, one has a clearer dimension of the problem.
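The contrast can be sketched in a few lines of code. In the hypothetical example below, the rule-based function encodes a decision logic written by a human in advance, while the statistical model infers its logic from training examples and stores it as fitted numerical weights that cannot be read line by line. Data, thresholds and feature names are invented for illustration.

```python
# Hypothetical contrast between hand-coded ("if this, then that") logic
# and logic learned from data. Values are invented for illustration.
from sklearn.linear_model import LogisticRegression

# IFTTT: the decision logic is explicitly written by a programmer
def rule_based_credit(income, debt):
    return "approved" if income > 2.0 and debt < 0.5 else "denied"

# AI: the decision logic is inferred from training examples
X_train = [[3.0, 0.2], [1.2, 0.9], [5.0, 0.1], [0.8, 0.7]]
y_train = [1, 0, 1, 0]  # 1 = approved
learned_model = LogisticRegression().fit(X_train, y_train)

# Both produce a decision, but only the first can be read as an explicit rule;
# the second encodes its "reasoning" in fitted coefficients.
print(rule_based_credit(2.5, 0.3))
print(learned_model.predict([[2.5, 0.3]]))
```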

Taking these elements into account, we briefly describe two sides of the debate on the scope of the right to explanation. Wachter et al., in an article that has been widely debated, argue that there is only a right to a restricted explanation in the GDPR, despite the title of their work being “Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation”. The authors first state that there are two kinds of explanations for automated AI decisions: (i) one based on system functionality, which would encompass a more general explanation of the algorithm (e.g. which categories of data are analyzed) and the envisaged consequences of its use; and (ii) one based on specific decisions, which would clarify how individual variables and inputs were considered and weighed to arrive at a specific result. Due to technical limitations (the black box problem) and legal ones (protection of intellectual property rights, the case law under Directive 95/46 and a restrictive interpretation of the wording of Articles 13, 14 and 15 of the GDPR), the authors argue that the current right to explanation involves only a right to information about system functionality, what they call “a right to be informed”.

In the opposite direction, Powles and Selbst criticize Wachter et al.’s argument, stating that the division between (i) explanations about the functionality of the system and (ii) explanations of specific decisions does not make sense, partly because it is not based on how AI systems really work. For the authors, for most AI systems, if it is possible to give a complete description of the functioning of the system, it is also possible to explain a specific decision when the input data is available. They still recognize, however, that the black box problem is the biggest impediment to the right to explanation, regardless of the legal interpretation given to this right. From a legal point of view, Powles and Selbst argue that the GDPR has significantly increased protections for data subjects when compared to Directive 95/46; therefore the interpretation of the term “meaningful information” should be based (i) on the data subject’s understanding and (ii) on how the data subject can use this explanation to defend his or her rights. In other words, it would not be possible to provide only a generic explanation that has no practical utility for the exercise of the data subject’s rights. In addition, they argue that the wording of the GDPR would privilege the protection of the data subject even in an eventual conflict with intellectual property rights. Despite the emphasis on the data subject’s rights in both the GDPR and the LGPD, it still seems hazy to us how the conflict between intellectual property rights and a right to explanation will develop in the near future.
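For a simple, transparent model, the point made by Powles and Selbst can be illustrated directly: once the fitted parameters (a description of the system’s functioning) and the input are known, a per-feature account of one specific decision can be computed. The sketch below uses a logistic regression with invented data and feature names; for a deep neural network with hundreds of thousands of parameters, no such direct reading exists, which is exactly the black box difficulty both sides acknowledge.

```python
# Sketch: deriving a specific-decision explanation from a transparent model.
# Data, feature names and values are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["monthly_income", "existing_debt"]
X_train = np.array([[3.0, 0.2], [1.2, 0.9], [5.0, 0.1], [0.8, 0.7]])
y_train = [1, 0, 1, 0]
model = LogisticRegression().fit(X_train, y_train)

# "System functionality": a global description of the fitted model
print(dict(zip(feature_names, model.coef_[0])))

# "Specific decision": how each feature contributed to one individual's score
x = np.array([2.5, 0.3])
contributions = model.coef_[0] * x
print(dict(zip(feature_names, contributions)), "intercept:", model.intercept_[0])
```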

To conclude, we emphasize that one of the most difficult points, especially for professionals in the social sciences, is to determine the limits and technical possibilities of a right to explanation of AI decisions, considering the limitations imposed by the characteristics of the technology (the black box problem). The development of means of explanation is a still-evolving area of computer science, but one that has a direct impact on the possible legal interpretations of this right.

The views and opinions expressed in this article are those of the authors.
