When the Korea Advanced Institute of Science and Technology (KAIST) announced on Tuesday an initiative to develop autonomous weapons – weapons controlled not by humans but by artificial intelligence – a group of more than 50 Artificial Intelligence researchers reacted by calling for a boycott. The group, made up of scientists from more than 30 countries (including a Brazilian researcher), protested against the partnership between KAIST and Hanwha Systems, currently part of one of the largest conglomerates in the Korean arms industry. The partnership resulted in the opening of a laboratory focused on the convergence between Artificial Intelligence and National Defense. But why such an adverse reaction?
On August 22, 1864, a group of European nations convened to sign the First Geneva Convention for the Amelioration of the Condition of the Wounded in Armies in the Field. After terrifying and bloody experiences involving soldiers and even civilians during a period of intense military conflict on the Old Continent, a movement to create minimum standards for the humanitarian treatment of those wounded in war produced the embryo of what would later be called Human Rights. Even during armed conflict, nations came to understand that there should be restrictions on the destructive power and the negative consequences of war. Accordingly, three further Conventions and three additional protocols were subsequently created to establish humanitarian standards in warfare.
Researchers' concern about military artifacts controlled partly or completely by artificial intelligence is thus a natural consequence of this process of self-restraint developed over more than a century. Since October 2017, the United Nations has been concerned about the ethical consequences of the unrestrained and unscrupulous development of so-called autonomous weapons. For experts and government officials from various fields – from Humanitarian Law to Computer Science – leaving life-or-death decisions in the hands of AI algorithms will create very serious problems for identifying, holding accountable, and eradicating inhumane conduct in the military, or even outside a context of warfare.
In an awareness-raising video worthy of a Black Mirror episode, the Ban Lethal Autonomous Weapons initiative shows a dystopian – but not so distant – future in which lethal autonomous drones are able to use a variety of already existing technologies (such as facial recognition and profiling through big data) to identify and eliminate targets.
Concerns about autonomous weapons touch upon various fields. The first is that these devices might fall into the wrong hands: criminals, terrorists, dictatorial governments. However, the difficulty of tracking down and assigning blame for a weapon that can decide on its own whether or not to kill makes it problematic even in so-called democratic governments: without proper regulatory precautions, autonomous weapons may end up being used by corrupt authorities to directly or indirectly (e.g., through fear-driven self-censorship) stifle opposing political views. After all, without proper accountability mechanisms, and without a serious discussion about liability for actions taken by artificial intelligence algorithms, it can be almost impossible to hold anyone responsible.
The fact that these algorithms can use big data to build specific profiles from relatively simple instructions is also worrying, and opens up another area of concern in the field of Privacy and Data Protection: knowing that such profiles can be used by machines to mark someone as a target, individuals may come to desire greater control over the accessibility and openness of their personal data.
An arms race in artificial intelligence will have a disintegrating effect on globally agreed standards of international law, especially those that restrict the use of military force and are designed to protect civilians during wartime. Autonomous weapons are likely to lower the threshold for the use of force, making war more likely. The reality is that institutions of peace and security are already beginning to change. Unmanned aerial vehicles have enabled significant violations of essential global norms against the use of force. Cyber attacks likewise blur the lines between war and peace and will only grow in number and sophistication. The introduction of autonomous weapons and warfare systems into countries' military planning will only make this picture more complicated.
According to Denise Garcia, a specialist in the intersection between International Law and lethal autonomous weapons, for the negotiations to move toward a more concrete and ambitious path, some developments would be welcome. First, the International Committee of the Red Cross (ICRC) could endorse a call for a pre-emptive ban on any weapons system that would eliminate human control over critical functions and killing decisions. In the past, the ICRC has taken significant transformative positions against landmines, against cluster bombs, and more recently in favor of the prohibition of nuclear weapons. Its position carries enormous moral influence and would make a difference in this critical situation. Secondly, the forum for negotiations should not be the Convention on Certain Conventional Weapons (CCW), but one in which decisions are taken by majority rather than by unanimity. Finally, one of the five permanent members (P5) of the UN Security Council could embrace the role of lead state in banning any weapons system that would eliminate meaningful human control over critical functions and killing decisions. Working with middle powers in Europe and Latin America, a P5 member could form a group of like-minded states that would work on a legally binding instrument with commonly accepted global norms to protect the future of peace.