Twenty years after the release of the box office hit The Matrix, productions such as Black Mirror, Westworld, Ex Machina, Minority Report and Her are just a few examples of cinematic works that deal with realities created by human interaction with advanced technologies, automation and data science. Today, these discussions transcend fiction and occupy growing space in global governance forums and events, and that space has been earned. In recent years, digital technologies have become one of the most powerful shapers of social order. The growing concern about the impact of automation technologies on the power relations of today's society has thus become visible through studies and analyses of different aspects of the theme.
Co-authors around the globe
Recently, an article by Angela Chen, a reporter at The Verge, spotlighted the use of microwork within the Finnish penitentiary system and raised, once again, many questions about the use of this practice by various companies. Microwork is commonly defined as a series of small tasks that together form a large unified project, developed or finalized by people connected to the Internet; the term describes brief activities that require human intelligence to be completed reliably. In the field of automation, its function is to produce data, to check the quality of algorithmic results and to step in when algorithms fail.
In addition, microwork provides a means of generating extra income for those who cannot meet the demands of traditional employment, such as stay-at-home parents and people with health problems. For the unemployed, microwork often represents their sole source of income, as demonstrated in a study by Paola Tubaro and other researchers.
Autonomy emerges as one of the most important values these workers advocate: they appreciate the flexibility and the freedom to choose their tasks, deciding how, where and when to perform them.
These workers also raise negative points, however. According to a study published in MIS Quarterly (Management Information Systems Quarterly), while some of the interviewed microworkers felt that their work was valued and respected by contractors, others were frustrated with certain aspects of how microworking platforms operate, such as the unjustified rejection of completed work.
Other grievances include the feeling of being exploited by an unequal compensation structure, platform policies whose procedures are defined to favor those who commission the tasks, and the lack of skill development that comes with simple, repetitive work.
The duality of the activity is thus reflected in two conflicting but coexisting feelings: empowerment and marginalization. Workers feel empowered when structures allow choice and marginalized when the same structures restrict action. Crucial ethical concerns have recently been gaining space in the crowdsourcing community, and these should serve as the basis for the design and development of platforms for this purpose. Given the current political scenario, however, there is little chance that governments around the world will intervene to protect these workers. Strenuous work for pennies already seems to be the trend.
The plot thickens
In addition to problems in the labor relationships behind the scenes of automation, ethical issues and power asymmetries linked to privacy and data protection also arise in this field. Several researchers, including Virginia Eubanks in her book Automating Inequality, show how a prejudiced, punitive and moralistic view of poverty has never left us and is now embedded in today's automated and predictive decision-making tools.
Criticism of racist and sexist tendencies in automation systems must certainly be leveled. For many researchers, however, the debate should not be grounded in questions such as "is data processing more equitable if the error rate is the same for all races and genders?" As Alondra Nelson argues, algorithmic accountability is essential, yet the effort to pursue more "inclusive" dystopian technology is deeply disturbing: making black communities more identifiable in facial recognition systems, for example, when those systems are disproportionately used for surveillance.
Broader questions must therefore be taken into account, for example whether these tools should be developed and used at all. In Algorithms of Oppression, Safiya Umoja Noble studied the representation of black women in search tools and found disturbing evidence of sexist and racist implications in their results. Noble's work, however, is not tied to efforts to classify search results as "legitimate" or "illegitimate", valid or discrepant, as a corporatist view of algorithmic accountability might require. She turns search results into a matter of social justice rather than a mere commercial or technical problem that prevents the maximization of profit. In doing so, Noble reconnects with the roots of the accountability movement, insisting that the owners of algorithmic systems must act to ensure that their performance is recognized as fair by the communities they affect.
The big questions to be answered, then, are these: Which systems really deserve to be built? Which problems most need solving? Who is best qualified to build them? We also need genuine accountability mechanisms that are external to companies and accessible to the public. As long as these questions are dismissed as outdated or utopian, we may be doomed to the reality Eubanks describes, in which many believe that complex benefit determinations are not something well-intentioned technologists can "fix", because the system itself, however deeply problematic, is already accepted. Abuses by law enforcement officials, the surveillance capitalism of large tech companies and credit-scoring decisions that violate fundamental rights are just a few consequences of the dystopian use of automation technologies.
Casting the main actors
The race for the vanguard of artificial intelligence also brings with it questions about how it should be regulated. The last Internet Governance Forum, held in France in 2018, was attended for the first time by UN Secretary-General António Guterres as well as by the president of the host country, Emmanuel Macron, and the importance of an ethical approach to the development of artificial intelligence was repeatedly emphasized there. The head of the United Nations recalled the importance of new technologies for achieving the Sustainable Development Goals, but also the need to bring the discussion to other areas of knowledge, such as the social sciences.
It is no accident that the theme recurs at international conferences and governance events. Mergers of applications and corporations are concentrating data, and the power conferred by those assets, in ever fewer hands, which represents a major threat to democracies around the globe.
As professor and researcher Julia Powles puts it, a radical reappraisal of the accountability of the major tech actors who control vast volumes of data is long overdue. Governments and communities should act to discourage and devalue the accumulation of data through creative policies, including bans, taxes, data sharing and carefully defined community-benefit policies, all backed by law. Smarter data policies would reenergize competition and innovation, both of which have unquestionably slowed under the concentrated market power of the tech giants.
But the question of a form of regulation that ensures human dignity and fair working conditions without precluding employment and technological development still hangs in the air.
As the Secretary-General of the United Nations reminded us in his opening speech at the IGF 2018:
“technology must empower us, not dominate us […] We cannot leave our destiny in the digital age to the invisible hand of market forces. However, the classic forms of regulation do not apply to many challenges of this new generation.”
Many tech giants are creating corporate principles to ensure that their systems are designed and implemented ethically. Some appoint ethics officers or review boards to oversee these principles. But tensions remain, as many question whether these promises will be kept: companies can change their policies, and idealism can bow to financial pressure.
Some activists, specialists and even some companies are starting to argue that the only way to ensure ethical practices is through government regulation. As activist and lawyer Renata Ávila advocates, global leaders, especially those who stand for values of equality and social justice, need to be aware of the risks that rapid digital commodification poses to vulnerable people around the world, and of its impact on democracy and human dignity.
More than extras
The still timid presence of the Global South on the list of national strategies for the development of artificial intelligence is not independent of the socioeconomic inequalities inherited from the past, both between and within countries. Factors such as a lack of resources and current national and international legal regulation often limit the research and innovation capacity of some countries.
On the other hand, the prospect of outsized profits has led large companies to seek out the next data-gathering fields, which are now being found and exploited in developing countries. With promises of improvements in public safety, resource management, transparency and social well-being, enthusiasm for automation has taken over the public sector in several states, especially in countries that lack adequate privacy protection.
However, many Global South countries are not getting the necessary access to the data collected in their own territories. Even when the information concerns issues of public importance, such as data extracted from urban environments, access is curtailed by contractual rules. Citizens thus cannot access or take advantage of this information.
Today, privacy discussions focus on personal data, not on the accumulation of strategic data in closed silos. Yet the national interest in data should also be treated in a context of economic and social justice. In many cases, the structures of data collection and distribution do not represent healthy partnerships between the private sector and governments, as Yasodara Cordova, a researcher at the Harvard Kennedy School, points out.
And … Action!
There is a need to minimize the social risks of technological change, prevent abuses of power, and understand the ethical risks of dehumanizing technological applications. As presented in the contributions of the UN Expert Group Meeting on the "Role of Public Institutions in the Transformative Impact of New Technologies" to the UN Secretary-General's High-level Panel on Digital Cooperation, regulation, efforts to promote literacy and digital inclusion, awareness of the importance of the ethical use of new technologies, and the prioritization of social well-being will generate confidence in technology and allow it to be adopted in a manner more conducive to sustainable development. Otherwise, economic inequalities and asymmetries of power will continue to be perpetuated. As the writer William Gibson once said: "The future is already here—it's just not very evenly distributed."