Artificial Intelligence and Regulation – The New York Case
Written by
Odélio Porto Júnior
January 15, 2018
In December 2017, the New York City Council passed a bill that seeks to ensure the transparency of algorithms used for decision-making by the police, the judiciary, and other government agencies in the city. The bill’s scope, however, is limited to algorithms used by the public administration; it does not address uses in the private sector.
The main purpose of the bill is to create a temporary task force whose mission is to produce a report with recommendations to the city on how to ensure the safe, transparent, and democratic use of algorithms by the government.
Although short, bill Intro No. 1696-A establishes a basic definition of “automated decision system”:
The term “automated decision system” means computerized implementations of algorithms, including those derived from machine learning or other data processing or artificial intelligence techniques, which are used to make or assist in making decisions.
It also sets the following goals for the task force; item (c), on disproportionate impact, is illustrated with a short sketch after the list:
- (a) Criteria for identifying which agency automated decision systems should be subject to one or more of the procedures recommended by such task force pursuant to this paragraph;
- (b) Development and implementation of a procedure through which a person affected by a decision concerning a rule, policy or action implemented by the city, where such decision was made by or with the assistance of an agency automated decision system, may request and receive an explanation of such decision and the basis therefor;
- (c) Development and implementation of a procedure that may be used by the city to determine whether an agency automated decision system disproportionately impacts persons based upon age, race, creed, color, religion, national origin, gender, disability, marital status, partnership status, caregiver status, sexual orientation, alienage or citizenship status;
- (d) Development and implementation of a procedure for addressing instances in which a person is harmed by an agency automated decision system if any such system is found to disproportionately impact persons based upon a category described in subparagraph (c);
- (e) Development and implementation of a process for making information publicly available that, for each agency automated decision system, will allow the public to meaningfully assess how such system functions and is used by the city, including making technical information about such system publicly available where appropriate; and
- (f) The feasibility of the development and implementation of a procedure for archiving agency automated decision systems, data used to determine predictive relationships among data for such systems and input data for such systems, provided that this need not include agency automated decision systems that ceased being used by the city before the effective date of this local law.
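Item (c) is the most technically concrete goal on the list, yet the bill does not say how a disproportionate-impact test should work. A common starting point in practice is to compare the rate of adverse decisions across groups, often called the disparate impact ratio. The sketch below is a minimal, hypothetical Python illustration of that idea; the record format, group labels, and numbers are all invented for the example.

```python
from collections import defaultdict

# Hypothetical audit records: (group_label, adverse_decision)
RECORDS = [
    ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

def adverse_rates(records):
    """Per-group rate of adverse decisions."""
    tallies = defaultdict(lambda: [0, 0])  # group -> [adverse, total]
    for group, adverse in records:
        tallies[group][0] += int(adverse)
        tallies[group][1] += 1
    return {group: adverse / total for group, (adverse, total) in tallies.items()}

def disparate_impact_ratio(records, protected, reference):
    """Ratio of adverse-decision rates between two groups; values far
    from 1.0 suggest the protected group is disproportionately affected."""
    rates = adverse_rates(records)
    if rates[reference] == 0.0:
        raise ValueError("reference group has no adverse decisions")
    return rates[protected] / rates[reference]

print(adverse_rates(RECORDS))                                 # group_a ~0.67, group_b ~0.33
print(disparate_impact_ratio(RECORDS, "group_a", "group_b"))  # ~2.0
```

A ratio far from 1.0 would be one signal, though by no means proof, that the procedure the task force designs should flag a system for closer review.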
A review of these goals makes it clear that the bill is not very ambitious: actual regulation of decision-making algorithms in New York will depend on future decisions. The initiative is nevertheless important, because it recognizes the problem of transparency in automated decision systems and takes a first step toward public debate.
This bill was approved in a broader context in which the city seeks not only to anticipate future issues involving decision-making algorithms, but also to deal with problems that are already happening. In the United States, the criminal justice system already uses algorithms to help decide whether a defendant should remain free while awaiting trial, with risk-assessment software estimating the defendant’s flight risk from his or her profile.
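It is worth making concrete what such a risk score looks like. The actual tools used by U.S. courts are proprietary, so the sketch below only shows the general shape of a logistic risk model; every feature name and weight is invented for illustration and does not come from any real system.

```python
import math

# Purely illustrative weights -- real pretrial tools are proprietary,
# and none of these numbers comes from an actual system.
WEIGHTS = {"prior_arrests": 0.30, "previous_failure_to_appear": 1.20, "age": -0.04}
BIAS = -1.5

def flight_risk(profile):
    """Toy logistic risk score in [0, 1] computed from a defendant profile."""
    z = BIAS + sum(WEIGHTS[feature] * value for feature, value in profile.items())
    return 1.0 / (1.0 + math.exp(-z))

defendant = {"prior_arrests": 2, "previous_failure_to_appear": 1, "age": 30}
print(f"estimated flight risk: {flight_risk(defendant):.2f}")  # ~0.29
```

The opacity of exactly this kind of weighting, which features are used and how much each one counts, is what the transparency debate is about.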
Another controversial use, one that has already drawn criticism, is the predictive policing algorithms adopted by the New York Police Department to forecast which areas of the city should be most heavily policed, a project expected to receive $54 million in investment over the next five years.
Critics have questioned how these algorithms work, arguing that in practice they merely reinforce established biases: because the predictions are trained on records of past police activity, they direct even more policing toward neighborhoods that are already excessively targeted.
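This feedback loop is easy to demonstrate with a toy model. The simulation below, deliberately simplified and entirely hypothetical, gives two neighborhoods identical true crime rates but starts one of them with more recorded arrests; allocating patrols in proportion to past records then locks the disparity in.

```python
# Two hypothetical neighborhoods with the SAME true crime rate, but "A"
# starts with more recorded arrests because it was historically over-policed.
TRUE_CRIME_RATE = {"A": 0.10, "B": 0.10}
recorded_arrests = {"A": 20.0, "B": 10.0}
PATROLS_PER_YEAR = 100

for year in range(10):
    total = sum(recorded_arrests.values())
    for area in recorded_arrests:
        # Patrols are allocated in proportion to past recorded arrests ...
        patrols = PATROLS_PER_YEAR * recorded_arrests[area] / total
        # ... and more patrols mean more of the (equal) crime gets recorded.
        recorded_arrests[area] += patrols * TRUE_CRIME_RATE[area]

# The 2:1 gap between "A" and "B" persists across every round.
print({area: round(count, 1) for area, count in recorded_arrests.items()})
```

After ten rounds the first neighborhood still shows twice the recorded arrests of the second, even though nothing about actual crime distinguishes them: the data reflects where the police looked, not where crime happened.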
It is in cases like these that citizens will want a right to an explanation of how such algorithms work, since algorithms are increasingly becoming part of policy implementation and, therefore, are influencing the exercise of citizens’ rights.
In a country like Brazil, where the public administration faces many well-known problems, there are strong incentives to invest in the use of artificial intelligence by public institutions. One example is the project of the Superior Council of Labor Justice (Conselho Superior da Justiça do Trabalho), which, in a planned partnership with the University of Brasília (Universidade de Brasília), seeks to train an artificial intelligence on past labor cases so that it can make recommendations to judges deciding similar cases.
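Details of the project’s design have not been published, so the following should be read only as one plausible minimal approach to “recommending based on similar past cases”: retrieving the nearest neighbors of a new case over TF-IDF vectors of case texts. The corpus, outcomes, and query here are invented for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus of past labor-case summaries with their outcomes.
PAST_CASES = [
    ("Unpaid overtime claimed by warehouse worker", "claim granted"),
    ("Dismissal without cause during medical leave", "reinstatement ordered"),
    ("Dispute over hazard pay for night shifts", "claim denied"),
]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(text for text, _ in PAST_CASES)

def recommend(new_case_text, top_k=2):
    """Return the most similar past cases and their outcomes."""
    scores = cosine_similarity(vectorizer.transform([new_case_text]), matrix)[0]
    ranked = sorted(zip(scores, PAST_CASES), key=lambda pair: pair[0], reverse=True)
    return [(text, outcome, round(score, 2)) for score, (text, outcome) in ranked[:top_k]]

print(recommend("Warehouse employee seeking unpaid overtime"))
```

A real system would need far richer case representations and, in light of the discussion above, mechanisms for explaining its recommendations and auditing them for bias.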
As such initiatives emerge, citizens should insist that a minimal set of regulations be implemented to ensure that AI systems are transparent and respect human rights. To do so, we must pay attention to how other countries have begun to deal with these issues, so that we can identify which problems are common and which are specific to each country.
The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy or position of the Institute for Research on Internet and Society.
Written by
Odélio Porto Júnior
Researcher at the Institute for Research on Internet and Society and undergraduate law student at the Federal University of Minas Gerais (UFMG). Member of the Study Group on Internet, Innovation and Intellectual Property (GNET). Former member of the Human Rights Clinic (CDH) and of the University Popular Legal Advisory (AJUP), both at UFMG.