
Why does science need to talk more about ethics? A COMPAS case study and analysis of the presence of machine bias in automated decisions


27 January 2020

Have you heard of ‘technological solutionism’? It is the belief that technology can be the solution to all of humanity’s problems: that automation and artificial intelligence, for example, are antidotes to everything from the injustice of the criminal justice system to the chaotic traffic of Belo Horizonte on rainy days.

This text discusses the limits of the belief in the objectivity of algorithms and some of the most emblematic cases of machine bias, that is, the algorithmic bias that appears when this alleged objectivity fails.

The fundamental belief in the objectivity of algorithms

In times of uncertainty, there is a natural tendency to look for some form of salvation. The question is: can technology fulfill this role? After all, it is in constant change and evolution, its limits seem non-existent, and it carries an air of ‘objectivity’ that cannot be attributed to human decisions, which are often considered faulty because of the fallible judgment of the person who made them.

When we think of a human being, flesh and blood, deciding on important issues of public life, we are thinking of a person who carries a whole life’s baggage that can affect that decision. We expect, by law, that a judge will be impartial when ruling on a case, but the current crisis in Brazil’s institutions creates extreme uncertainty about the judiciary, which sometimes seems to decide with a political slant and an inherent partiality shaped by the context the country is living through.

It is important to understand the view that an automated decision would be more impartial and objective than a human one, and therefore preferable for controversial issues of public life. According to researcher Antoinette Rouvroy, from the University of Namur, in Belgium, this shift in perception can be seen when we compare the traditional use of statistics with the current use of algorithmic analysis. In traditional statistical analysis, we establish criteria in advance and put questions to a part of the population that is a ‘sample of a whole’ in order to obtain an answer that seems general and impartial.

Algorithmic analysis, in turn, collects ‘unclassified data’, that is, data that has no meaning at first, and lets the context and the answers emerge from the correlations found in that data. It is at this point that the belief arises that information coming from algorithms would be more objective or ‘pure’ than a human decision: it is the data talking, not the voices inside a judge’s head, which can never be audited.

Automating court decisions with COMPAS: Minority Report, is that you?

COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is a tool developed by the company Equivant and used by departments of corrections in states such as Wisconsin, Florida and New York, in the United States. Its purpose is to assist judges in deciding whether a defendant can await trial in freedom. How does it do that? By assigning a ‘danger’ score that estimates the likelihood of the person fleeing or reoffending. Through the algorithmic analysis of factors such as the defendant’s socioeconomic status, family history, employment and the crime rate of the neighborhood where they live, the algorithm classifies the accused and creates profiles that rate them as ‘more’ or ‘less’ dangerous, on a scale from 1 to 10, estimating the likelihood of recidivism.
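
The real COMPAS model is proprietary, so the sketch below is only a hypothetical illustration of how a questionnaire-based risk score of this kind can work: a handful of weighted factors combined and squashed onto a 1-to-10 scale. The factor names, weights and numbers are invented for the example.

```python
import math

# Hypothetical sketch of a questionnaire-based risk score (NOT the real,
# proprietary COMPAS model). Factor names and weights are invented here
# purely to illustrate the general mechanism described above.
WEIGHTS = {
    "prior_arrests": 0.4,            # more priors -> higher score
    "age": -0.05,                    # older defendants -> lower score
    "unstable_employment": 0.8,
    "neighborhood_crime_rate": 0.3,
}

def risk_decile(answers: dict[str, float]) -> int:
    """Combine weighted answers and map them onto a 1-10 'risk' scale."""
    raw = sum(w * answers.get(factor, 0.0) for factor, w in WEIGHTS.items())
    probability = 1.0 / (1.0 + math.exp(-raw))      # logistic squash to (0, 1)
    return min(10, 1 + int(probability * 10))       # bucket into deciles 1..10

# Example: a young defendant with two prior arrests in a high-crime neighborhood
# lands in a mid-to-high decile with these invented weights.
print(risk_decile({"prior_arrests": 2, "age": 19, "unstable_employment": 1,
                   "neighborhood_crime_rate": 0.7}))
```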

To assess the effectiveness of COMPAS, ProPublica gathered the risk scores assigned to more than seven thousand people arrested in Broward County, Florida, between 2013 and 2014, and checked how many of those individuals had reoffended in the two years following the decision, the same benchmark used by the creators of the system.

The results showed that the system exhibited machine bias, that is, a discriminatory tendency permeating its decision process, in addition to a very low success rate in its predictions. First, the algorithm was twice as likely to assign a high-risk score to a black defendant as to a white one. In addition, older people were automatically considered lower risk and less prone to violence, regardless of the crimes they had already committed. Finally, the system’s accuracy in predicting recidivism for violent crimes was just twenty percent.
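
An audit of this kind can be sketched in a few lines: given records of who was labelled high risk and who actually reoffended, compare the rate of non-reoffenders wrongly flagged as dangerous across groups. The field names and the toy sample below are hypothetical, only meant to show the shape of the calculation; ProPublica published its actual data and methodology alongside the ‘Machine Bias’ investigation.

```python
from collections import defaultdict

# Minimal sketch of a disparate-impact check: the false positive rate is the
# share of people who did NOT reoffend but were still labelled high risk,
# computed separately for each demographic group. Field names are hypothetical.
def false_positive_rates(records: list[dict]) -> dict[str, float]:
    flagged = defaultdict(int)   # non-reoffenders labelled high risk
    total = defaultdict(int)     # all non-reoffenders in the group
    for r in records:
        if not r["reoffended"]:
            total[r["group"]] += 1
            if r["high_risk"]:
                flagged[r["group"]] += 1
    return {group: flagged[group] / total[group] for group in total}

# Toy sample with invented numbers.
sample = [
    {"group": "black", "high_risk": True,  "reoffended": False},
    {"group": "black", "high_risk": False, "reoffended": False},
    {"group": "white", "high_risk": False, "reoffended": False},
    {"group": "white", "high_risk": False, "reoffended": False},
]
print(false_positive_rates(sample))   # {'black': 0.5, 'white': 0.0}
```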

This kind of predictive analysis of dangerous behavior is reminiscent of the dystopian film Minority Report, from 2002, in which a system predicts crimes before they happen and brings the murder rate down to zero. The problem posed by the film is the question of free will: is it really possible to guarantee that a behavior will occur before it happens?

All the results, published on ProPublica’s independent news site, were picked up by several traditional media outlets and generated a broad public debate about the use of algorithmic analysis to build profiles that determine parole. As a direct consequence of the publication, the New York City Council passed the first law in the country to address algorithmic discrimination: the algorithmic accountability bill, which created a task force to study the use of algorithms that affect people’s lives, as well as any discrimination arising from them.

Machine bias: why do we need to think about this?

“It is possible to audit an algorithm, but it is not possible to audit a judge’s conscience.” This is a well-known phrase in discussions about the automation of judicial decisions. Algorithmic analysis, as noted above, is seen as an objective way of analyzing a multitude of data and building a more impartial picture of reality.

This type of solution does not come without problems. COMPAS is just one example of the distortions that such analysis can create. Another is the tool built by Amazon to screen resumes of candidates for job openings, which showed a preference for men’s resumes over women’s. This happened because the algorithm was trained on the company’s hiring patterns over a ten-year period in which hiring was dominated by men.
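
A toy illustration (not Amazon’s actual system, whose details were never published) of how this happens: if a model simply learns which terms appear more often in historically successful resumes, terms associated with women end up with negative weights even though they say nothing about competence. The data and scoring rule below are invented for the example.

```python
from collections import Counter

# Toy sketch: "learn" a weight for each resume term from past hiring decisions.
# Terms that mostly appear in rejected resumes get negative weights, so a model
# trained on a skewed history reproduces that skew.
def learn_term_weights(history: list[tuple[str, bool]]) -> dict[str, int]:
    hired, rejected = Counter(), Counter()
    for text, was_hired in history:
        (hired if was_hired else rejected).update(text.lower().split())
    return {t: hired[t] - rejected[t] for t in set(hired) | set(rejected)}

# Invented hiring history in which past decisions were already skewed.
history = [
    ("captain of the chess team", True),
    ("led the robotics club", True),
    ("captain of the women's chess team", False),
]
weights = learn_term_weights(history)
print(weights["women's"])   # -1: the model has "learned" the historical skew
```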

Conclusion

The database and the criteria used to analyze the data define the kind of decision the algorithm will produce. What we need to realize, both in the COMPAS example and in the example of Amazon’s resume-screening tool, is that decisions considered biased replicate the reality in which we are embedded. An algorithm that is tougher on black people in the penal system reflects a reality in the prison system that is itself permeated by criminal selectivity. An algorithm that devalues women’s resumes in technical roles reflects an extremely exclusionary job market and the gender gap in science and technology.

Reflecting on ethics in algorithmic decision-making means understanding the information produced by a system that tries to read reality, and constantly reflecting on and criticizing the decisions it generates. This is extremely necessary, especially before this type of tool is used in a relevant public service such as the criminal justice system. As a society, we must think about concepts such as the democratization of knowledge about automated decisions, the popular auditing of automated decisions, and the very possibility of basing a social duty on an algorithmic decision.

Would we be bold enough to think not only about avoiding the vicious cycles of the society we have, but also about fostering a concept of justice in the decisions that build the society we want to be?

The views and opinions expressed in this article are those of the authors.

Written by

Emanuella Ribeiro Halfeld is a researcher with the SIGA-UFMG group (Sociedade da Informação e Governo Algorítmico), an intern at the Human Rights Clinic of UFMG, and an undergraduate law student at the Universidade Federal de Minas Gerais.
