Why is understanding and contesting technologies urgent?
Written by
Rafaela Ferreira
December 12, 2023
If we are people immersed in concrete and cruel social problems — such as the lack of food and housing, or the high rates of death and incarceration of black people — why engage in political disputes over issues that seem more abstract and distant, such as recent and often inaccessible technological innovations? By making clearer the material effects of the risks and opportunities generated by technology, I invite you to (re)discover why this debate is urgent and must be present “from the favela to the asphalt”, as well as the role of digital rights in transforming the reality in which we live.
Recognizing the real barriers
In Brazil and in various places around the world, we face urgent and cruel social problems, such as hunger, unemployment, and the systematic violence that affects marginalized groups, such as women, transgender, black, indigenous, and LGBTQIAP+ people and communities. These issues cannot be neglected; they are basic premises for any debate about society. After all, to discuss and act, it is crucial to have basic conditions of survival and dignity — who will care about the protection of personal data collected on digital platforms while going hungry, or while sick and without access to basic healthcare, for example?
However, when considering the importance of the dispute over seemingly more abstract issues, such as the protection of personal data and the regulation of digital platforms, it becomes essential to understand that these themes are not as far from reality as they may seem at first glance. In an increasingly hyperconnected world, these seemingly distant points have direct and tangible implications for our lives.
Social inequality and income concentration directly influence how different social groups experience technological transformations. While some people can take advantage of technological advances, weigh in on how these products are regulated, and understand how they work, others find themselves on the sidelines — either without access to them at all or, when using them, in a kind of “functional illiteracy” caused by the lack of digital literacy in the face of technological products that are common in their daily lives, yet often imperceptible and potentially harmful. A good example is facial recognition cameras deployed in public spaces, which can fail and cause, among other things, mistaken arrests, as seen in several cases.
Given this, understanding technopolitical issues and defining the direction of the use and development of these products should not be a luxury reserved for a few. That is why there is a pressing need to confront, head on, the real barriers that stand in the way of a fairer and more equal future for all.
Technology as a tool for maintaining these barriers
The persistence of hunger, even though we have advanced technologically to the point of having the resources to eradicate it, raises essential questions about the role of technology in reducing inequalities. This is not an isolated issue: several situations highlight how technology can be a tool for maintaining social barriers.
As already mentioned, the use of facial recognition is a glaring current example. Its use has revealed alarming errors and biases, deepening the marginalization of already vulnerable communities, such as black people, women, and disabled people. These failures are just the tip of the iceberg of a history of technology used in harmful ways.
It is not new that technology is used to reinforce unequal power structures. The war industry, which has driven the production of technological innovations for centuries, illustrates this point. The Holocaust also shows how technology was used as a tool of extermination, revealing a dark past and warning of the dangers of deploying technology for nefarious ends.
On the other hand, when it comes to the benefits of the digital world — such as access to services, information, education, and global communication — data on digital exclusion and the difficulty of guaranteeing meaningful connectivity point to access barriers, especially for marginalized groups.
Furthermore, even when there is digital inclusion, barriers find creative ways to persist: studies indicate that digital platforms, despite appearing democratic, can serve as the stage for content moderation mechanisms that subtly perpetuate discriminatory attitudes — such as the hegemony of whiteness — when deciding which content is privileged over others. This was the conclusion of the text “Reports of algorithmic discrimination on Instagram under a magnifying glass” (title in our translation), written by Alessandra Gomes and Ester Borges. It is based on the account of a black influencer who realized that she had greater reach when she published photographs of white people.
Another recent example is the case of an Artificial Intelligence (AI) tool that, upon receiving the prompt to depict “a black woman, with afro hair, wearing African print clothes in a favela setting” (in our translation), generated the image of a black woman holding a gun, perpetuating harmful stereotypes. This experience reflects the weight of the decisions made in the creation and use of technology and highlights ethical and regulatory failures in the field of AI systems.
These are just a few among countless examples that show how technology is not neutral; it reflects and can maintain or amplify existing inequalities. Understanding and disputing the role of technology is crucial, then, to break these harmful patterns and build a more equitable and inclusive future for all.
Innovation and techno-politics for other possible realities
Technological innovation is often celebrated as the key to solving problems. However, it is crucial to ask: what problems are we trying to solve, and for whom? The search for innovation must be intrinsically linked to the search for equal access to human and fundamental rights. When applied ethically and inclusively, innovation can be a powerful ally — in contrast to a reductionist vision that bets uncritically on “technological solutionism”.
In this way, it becomes possible to foster an approach to innovation that is effectively committed to socioeconomic progress, which includes identifying which products and solutions simply do not make sense. Examples such as autonomous weapons and the use of facial recognition in public security highlight the importance of a conscious and responsible choice about which technologies should be developed and deployed.
In the development of new technologies, the political dispute must be directed toward solving fundamental problems, such as hunger and access to housing, health, and education. To achieve this, it is also essential to recognize that each community has its own vision of what constitutes “good living”, and therefore innovation must be adapted and contextualized to meet the specific needs of each group.
The political dispute is not restricted to the development of technologies but extends to their regulation, which means, in this context, establishing parameters that guarantee their ethical and safe use and development. This regulation cannot be defined by a few; it must be formulated in diverse spaces of open political debate, in which a plurality of voices and perspectives contributes to defining the limits and guidelines for such technologies.
The dispute over technological innovation must therefore be shaped to promote broad socioeconomic development, considering the singularities and specific needs of each community. It is a call to ensure not just access but the meaningful exercise and effectiveness of the rights affected by these technologies, ensuring that they act as allies in the promotion of justice rather than accentuating existing inequalities.
The (re)construction and regulation of new technologies as allies
Given this scenario, it is clear that, when we look more closely, technological innovations are more concrete than we imagine and are instruments for building alternatives to the harsh reality that currently prevails, since technology reproduces social values and, at the same time, can maintain or modify them. Understanding and shaping the technopolitical landscape is therefore crucial to building a fairer and more equal future. Thus, technological innovation must be directed toward promoting broad socioeconomic development, considering the needs of each community and ensuring that it effectively serves social justice, boosting collective well-being and inclusion.
At the Institute for Research on Internet and Society (IRIS), we understand that this perspective is transversal: it informs our institutional positioning and mission, as well as everything we produce as a center for research and political advocacy. If you’ve made it this far, we invite you to follow us on our site and social media for more discussion about the impacts of technologies on society.
There are also several organizations engaged in promoting debate and disseminating useful knowledge about new technologies from this perspective: LabJaca, Olabi, Data_Labe, and Manas Digitais are some examples of valuable institutions that contribute to this objective. Together with these networks, let us continue to cultivate, in the present, a future in which human rights are respected and promoted in the digital environment.