
How can artificial intelligence affect courts?


March 12, 2017

Research on artificial intelligence has increased greatly in recent years. This expansion is associated with the growing availability of data shared by users, enabled by the spread of internet access. This sheer amount of information is often used to train algorithms through techniques such as machine learning. In these cases, the algorithm is not programmed in advance to do just X or Y; instead, it is built so that it can learn from a range of inputs.

Machine learning allows a program to analyze a set of data and then learn how to make predictions, or take decisions, based on what it has learned. This subfield of computer science is already part of our daily lives, from facial recognition programs, like the one used by Facebook, to areas such as marketing, speech translation, search algorithms, and DNA research.
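To make this concrete, here is a minimal sketch (in Python, using the scikit-learn library and invented toy data) of the workflow described above: the program is given labeled examples, learns a pattern from them, and then predicts outcomes for inputs it has never seen. The data and model choice are purely illustrative assumptions, not any particular real system.

```python
# A minimal sketch of supervised machine learning: the model is not
# programmed with explicit rules; it infers them from labeled examples.
# The data here is hypothetical, purely for illustration.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Each row is an input (features); each label is the outcome to learn.
X = [[0, 1], [1, 1], [1, 0], [0, 0], [1, 1], [0, 1]]
y = [1, 1, 0, 0, 1, 1]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0
)

model = DecisionTreeClassifier()
model.fit(X_train, y_train)    # "learning" from past examples
print(model.predict(X_test))   # predictions for unseen inputs
```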

Strong and Weak Artificial Intelligence (AI)

There are also researchers trying to apply these tools to law, such as using AI in court rulings. But it is important to understand a little about how AI works in general in order to avoid exaggerated extrapolations, because news articles about the replacement of lawyers and judges by robots in the near future have been appearing frequently. An article in The Guardian states that:

“Software that is able to weigh up legal evidence and moral questions of right and wrong has been devised by computer scientists at University College London, and used to accurately predict the result in hundreds of real life cases. The AI ‘judge’ has reached the same verdicts as judges at the European court of human rights in almost four in five cases involving torture, degrading treatment and privacy […] The algorithm examined English language data sets for 584 cases […] In each case, the software analysed the information and made its own judicial decision. In 79% of those assessed, the AI verdict was the same as the one delivered by the court.”

An inattentive reading could suggest that the program is the equivalent of a human consciousness, capable of judging cases based on the analysis of a large body of case law.

However, current advances in artificial intelligence are not capable of simulating a human brain, an ability referred to as strong AI, and there is still intense debate about whether this would even be possible.

In this respect, Professor Nikolaos Aletras, one of the project leaders, clarifies that the researchers do not expect judges and lawyers to be replaced by AI in the future, but it is quite possible that AI tools could assist them in their rulings. A judge analyzing a new case could use a similar program to compare it with past cases, showing which similarities and differences were found, or even how an AI would rule based on previous decisions.

So the possibilities of artificial intelligence today are at the level of weak AI, a category in which the algorithm can perform only specific tasks and lacks a general learning capacity. Although such programs do not match the broad intelligence of human beings, they are quite sophisticated, opening opportunities for diverse applications.

Machine Learning and preventive detention

Another important study was conducted by the National Bureau of Economic Research in the USA. Economists and computer scientists developed software to measure the likelihood that defendants will flee or commit new crimes while awaiting trial at liberty. The algorithm assigns a risk score based on information from the case (the offense the person is suspected of, where and when they were detained), the defendant's criminal record, and their age.

The program was trained on information from hundreds of thousands of New York criminal cases and then tested on hundreds of thousands of other, unseen cases, proving more effective at assessing risk than judges.
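While the researchers' actual model is considerably more sophisticated, a rough sketch of the general idea might look like the following. The features, the invented data, and the choice of logistic regression here are all illustrative assumptions, not the NBER team's method.

```python
# A hypothetical sketch of how a pretrial risk score could be produced.
# This is NOT the NBER team's actual model; the features and data below
# are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per defendant: [offense severity, prior arrests, age]
X_train = np.array([
    [3, 5, 22],
    [1, 0, 45],
    [2, 2, 30],
    [3, 8, 19],
    [1, 1, 52],
    [2, 4, 27],
])
# 1 = failed to appear or was rearrested before trial, 0 = did not
y_train = np.array([1, 0, 0, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

# The "risk score" is the predicted probability of pretrial failure.
new_defendant = np.array([[2, 3, 24]])
risk = model.predict_proba(new_defendant)[0, 1]
print(f"Estimated pretrial risk: {risk:.2f}")
```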

“They estimate that for New York City, their algorithm’s advice could cut crime by defendants awaiting trial by as much as 25 percent without changing the numbers of people waiting in jail. Alternatively, it could be used to reduce the jail population awaiting trial by more than 40 percent, while leaving the crime rate by defendants unchanged.”

These results suggest that such a tool could benefit the Brazilian penal system, where one in three prisoners is in preventive detention awaiting trial, probably unnecessarily in a significant number of cases. These avoidable detentions only worsen the overcrowding of Brazilian prisons, which hold 659,020 people.

Transparency and prejudice replicated by the machine

However, there is a fundamental need for accountability and transparency in these algorithms, because they may reproduce human prejudices if the data used for the program's training is biased. This was detected in software developed by a private US company to calculate the likelihood of criminal recidivism. The algorithm was wrong in a significant number of its predictions for African-Americans, revealing racial disparities in its risk scores.
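One simple form such an audit can take is comparing the algorithm's error rates across demographic groups, as in the hypothetical sketch below. A model that labels non-reoffenders as high risk far more often in one group than in another is reproducing bias. The predictions and labels here are invented; real audits, such as ProPublica's analysis of the COMPAS tool, apply the same comparison to actual case data.

```python
# A minimal sketch of auditing a risk-score tool by comparing false
# positive rates between groups. All data below is hypothetical.
def false_positive_rate(predicted_high_risk, reoffended):
    """Share of people who did NOT reoffend but were labeled high risk."""
    fp = sum(p and not r for p, r in zip(predicted_high_risk, reoffended))
    negatives = sum(not r for r in reoffended)
    return fp / negatives

# Hypothetical audit data for two demographic groups
group_a_pred = [True, True, False, True, False, True]
group_a_true = [False, True, False, False, False, True]
group_b_pred = [False, True, False, False, True, False]
group_b_true = [False, True, False, False, True, False]

print("Group A FPR:", false_positive_rate(group_a_pred, group_a_true))
print("Group B FPR:", false_positive_rate(group_b_pred, group_b_true))
# A large gap between the two rates is a red flag for biased risk scores.
```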

It is therefore important that such tools be auditable, in order to avoid unfair and opaque decision-making criteria.

The algorithm cited above, developed by the National Bureau of Economic Research, sought to avoid this problem by leaving sensitive demographic data, such as race, out of the model's inputs.

In the near future, it is very likely that this type of artificial intelligence will assist justice systems daily, increasing their efficiency and ensuring a better application of justice. But we, as a society, must always ensure the accountability of these tools, in order to avoid prejudice and undesirable biases that can deny justice.


Written by

Researcher at the Institute for Research on Internet and Society, undergraduate law student at the Federal University of Minas Gerais (UFMG). Member of the Study Group on Internet, Innovation and Intellectual Property (GNET). Former member of the Human Rights Clinic (CDH) and of the University Popular Legal Advisory (AJUP), both from UFMG.
