Artificial intelligence is present in the cell phone’s spell checker and in search engines like Google; in the content recommendations of Spotify and Netflix; in voice assistants like Siri and Alexa; and in ChatGPT. It also organizes feed posts according to what it defines as relevant. These are just a few everyday examples of how artificial intelligence has become embedded in the smallest details of the lives of people with access [1] to relatively affordable technological consumer goods such as a smartphone.

But even those without internet access (15.3% of the Brazilian population) may already have been affected by a system of this kind. A beneficiary of the Emergency Aid program may not know it, but the decision about access to the benefit was mediated by an algorithm – and, it is important to say, without the possibility of human review [2]. A person who looked for a job from 2020 onwards and registered on the National Employment System (SINE) job portal also depended on such a tool: the matching of registered vacancies with the workers considered most suitable to fill them was done through artificial intelligence [3], as the sketch below illustrates.
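To make concrete what an automated matching step can look like, here is a deliberately simplified Python sketch of similarity-based ranking. This is not the actual SINE system, whose internals are not public: the scoring rule, skill sets, and names are all invented for illustration.

```python
# A hypothetical vacancy-worker matcher: rank workers by how much their
# declared skills overlap with a vacancy's requirements. Illustration
# only - not a description of how SINE actually works.

def match_score(worker_skills: set, vacancy_skills: set) -> float:
    """Jaccard similarity between a worker's skills and a vacancy's requirements."""
    union = worker_skills | vacancy_skills
    return len(worker_skills & vacancy_skills) / len(union) if union else 0.0

workers = {
    "worker_1": {"welding", "forklift"},
    "worker_2": {"welding", "electrical", "forklift"},
    "worker_3": {"cooking", "cashier"},
}
vacancy = {"welding", "electrical"}

ranking = sorted(workers, key=lambda w: match_score(workers[w], vacancy), reverse=True)
print(ranking)  # workers ordered by declared suitability for the vacancy
# Whoever lands at the bottom may simply never be shown the vacancy -
# a consequential decision the worker never sees being made.
```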

For a person living in Recife, soon it will be enough simply to walk down the street: in 2022, the city government put out a tender for facial recognition technology for public safety purposes, to be deployed on 108 digital clocks throughout the city. Despite protests from civil society – multiple studies have shown that this technology carries racist and transphobic biases – the proposal moved forward. These last three examples go beyond the realm of individual choice: they involve the use of technology in sensitive areas of public policy that can seriously affect the exercise of human rights.

In response to the demands of this situation, Brazil is preparing to regulate artificial intelligence through Bill 21-A/2020, still under review in the Senate. The committee of legal experts responsible for drafting the text received more than one hundred contributions from various sectors of society, suggesting principles, rules, guidelines, and foundations for regulating artificial intelligence in Brazil. You can find IP.rec’s contribution here.

The committee produced a report compiling the main positions received, covering various topics: the definition of artificial intelligence, the regulatory model to be adopted, the accountability regime, ethical dimensions, among others. Exploring and analyzing this vast diversity of positions on the Bill was a multi-layered exercise: getting to know and better understanding how each interested party, and each sector, thinks about these topics; learning from several of them; and diverging from many others. This set of reflections gave rise to this text, as well as to the ones that will follow on the subject.

This text aims to open the discussion with one of the most fundamental questions in the debate on artificial intelligence: are machines neutral? Although it may seem abstract, this question is crucial for understanding the ontological foundation on which different policies and regulatory models will be built.

Within the humanities, particularly in communication and the social sciences, the idea that technological systems are neither neutral nor objective is already well established. One of the classic texts in this regard is “Do Artifacts Have Politics?” by Langdon Winner [4], first published in 1980. In technical discussions about AI, on the other hand – in computer science, computer engineering, and related fields, as well as in broader common sense – the idea of machine neutrality still dominates [5].

Exploring the myth of machine neutrality

Here are some ideas that typically appear in the machine neutrality argument:

“What matters is not the technology itself, but the economic or social system in which it is inserted.” Or: “Technologies are neutral tools that can be used for good or evil.” Or: “If prejudices are the result of society, we cannot speak of an intrinsic prejudice of the machine.”

By the logic of these statements, since machines are neutral and objective, they could even be considered more reliable than humans at certain tasks. It would be easier to “correct” a bias emitted by a machine [6] – for example, by deleting contaminated data or making changes to a model’s design – than to “correct” an entire society. Finding technological solutions to social problems seems faster, cheaper, and simpler than making deeper social changes [7]. Does that make sense to you? Then let’s talk a little more.

The desire to interpret technical artifacts in political language is not exclusive to those who are critical of these technologies and their supposed neutrality. Throughout history, technology enthusiasts have claimed that some new invention would bring more democracy, modernity, and freedom to society, or would solve some social problem. The factory system, automobiles, telephones, radios, and televisions have all, at some point, been described as democratizing and liberating forces.

If we take literally the idea that what matters is not the technology itself but the context in which it is embedded, we arrive at the following logic: once the necessary investigative work has revealed a technology’s social origins, everything that matters would already be explained; the technical dimension would be of no importance. What Langdon Winner suggests instead is that we pay special attention to the characteristics of technical objects and to the meaning of those characteristics – that is, to the cases in which an invention, design, or arrangement of a device is conceived and constructed in such a way that it goes beyond its immediate uses and produces a set of consequences for a particular community. Often, this happens unintentionally.

Winner tells a story that took place somewhat far from our Brazilian reality, but that helps show how this happens in practice all the time – it may even remind you of an example closer to home. According to him, anyone familiar with American highways will notice something peculiar on Long Island, near Jones Beach Park: overpasses that are astonishingly low by standard measures, with about 2.5 meters of clearance at the curb. He explains that this feature was no coincidence: it was deliberately designed to achieve a specific social effect, keeping poor and black people away from the avenues surrounding the parks. These groups typically traveled by public transportation, and buses, standing over 3 meters tall, could not pass under the overpasses. Robert Moses, the master builder responsible for the project, erected numerous engineering works throughout his life, many of which shape cities to this day, often prioritizing the automobile over public transportation. The history of architecture, urban planning, and public works is full of examples of this nature, some explicit and some implicit: artifacts arranged in such a way that they produce a set of consequences long before people choose to use them for purpose X or Y – in other words, artifacts that are intrinsically political.

In addition to being inherently political, technologies can be deeply discriminatory. In Race After Technology, Ruha Benjamin [8] argues that automation, while seemingly neutral, has the potential to hide, accelerate, and deepen discrimination. One example is facial recognition systems, whose false positive rate – the rate at which the system wrongly declares a match – is 10 to 100 times higher for people of color than for white people [9]. Thus, more than merely reflecting their external context, digital technologies are entangled in processes of surveillance, exploitation, and control, producing new forms of oppression.
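To make the metric concrete, here is a minimal Python sketch of how a per-group false positive rate is computed. All numbers are invented for illustration; they are not the figures from the study cited above.

```python
# Per-group false positive rate for a hypothetical face-matching system.
# All data below is invented for illustration only.

def false_positive_rate(predictions, labels):
    """Fraction of genuine non-matches (label 0) wrongly flagged as matches."""
    negatives = [pred for pred, label in zip(predictions, labels) if label == 0]
    return sum(negatives) / len(negatives) if negatives else 0.0

# predictions: 1 = system declares a match, 0 = no match
# labels:      0 = the two faces genuinely belong to different people
groups = {
    "group_a": ([1] + [0] * 99, [0] * 100),       # 1 false match in 100 trials
    "group_b": ([1] * 10 + [0] * 90, [0] * 100),  # 10 false matches in 100 trials
}

for name, (preds, labels) in groups.items():
    print(f"{name}: false positive rate = {false_positive_rate(preds, labels):.1%}")
# Same system, same threshold: a tenfold gap in error rates between groups.
```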

If machines are not neutral, what types of solutions would be feasible?

And so we move on to the next myth: the idea that isolating and removing “bad data” or “bad algorithms” would be enough to solve the problem. This kind of proposition treats bias as a purely statistical problem: once the distortion is removed, the problem is over [10]. The blind spot here is one of framing, since an AI system does not operate in a vacuum, insulated from social dynamics.
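To see what this recipe looks like – and what it leaves out – here is a minimal, hypothetical Python sketch of the “delete the contaminated data and retrain” step. The records, field names, and filter rule are all invented for illustration; no real system is being described.

```python
# Hypothetical training records for a benefit-granting model.
# "label" is a past human decision (1 = approved, 0 = denied);
# "mislabeled" marks rows flagged as corrupted or wrongly entered.
records = [
    {"income": 1200, "region": "north", "label": 1, "mislabeled": False},
    {"income":  800, "region": "south", "label": 0, "mislabeled": True},
    {"income":  950, "region": "south", "label": 0, "mislabeled": False},
]

# The statistical fix: drop the flagged rows and retrain.
clean = [row for row in records if not row["mislabeled"]]
# model = retrain(clean)  # hypothetical training step

# What this step does NOT touch:
# - historical bias: if past approvals were themselves discriminatory,
#   the surviving labels still encode that pattern;
# - institutional bias: who gets flagged, audited, or reviewed is
#   decided by practices outside the dataset;
# - human bias: the choice of "income" and "region" as features was
#   itself made by people, with their own implicit assumptions.
print(len(clean), "records remain after the purge")
```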

An algorithmic fix alone would hardly cover all the layers of a system in which bias can arise: in addition to biases of a statistical and computational nature, there are biases in institutional procedures and practices, historical biases, and biases rooted in human thought. Most of the time these biases are implicit, but they permeate individual, group, and institutional decision-making. Thus, while the idea that a problem can be “corrected” and overcome may be realistic in the field of statistics, when we are dealing with aspects of society, correction is a continuous process [11].

Several contributions reinforce the need to approach artificial intelligence from a holistic perspective. Ruha Benjamin [12] speaks of the need not only to focus attention on better, less biased technology, but to consider the entire ecosystem. Noble and Le Bui [13] stress the importance of building a moral framework for justice in AI that considers the ways in which technologies are inevitably connected to and immersed in power. Kate Crawford [14] observes a kind of fantasy that AI systems are disembodied brains that absorb and produce knowledge independently of their creators, infrastructures, and the wider world. What this abstraction leaves out is the entire apparatus that makes the technology possible: machines, human workers, invested capital, carbon footprints.

Contributions from the humanities have proven increasingly important for understanding artificial intelligence in all its complexity. Moreover, questioning aspects of a technology is more than being for or against its adoption: it is about providing the conditions for an informed, high-quality public debate.

[1] In Brazil, the smartphone is the device most used to access the internet. In 2022, 88% of the population had access to one. This access is not evenly distributed, however: in classes D/E, only 76% have access, and among age groups the elderly have the lowest rate, at 72%. Source: https://www1.folha.uol.com.br/tec/2022/07/smartphone-e-cada-vez-mais-dominante-no-acesso-a-internet.shtml

[2] TAVARES, Clarice; FONTELES, Juliana; SIMÃO, Barbara; VALENTE, Mariana. Emergency Aid in Brazil: Challenges in the Implementation of a Data-Based Social Protection Policy. Derechos Digitales, February 2022. Available at: https://www.derechosdigitales.org/wp-content/uploads/01_Informe-Brasil_Inteligencia-Artificial-e-Inclusao_PT_22042022.pdf

[3] BRUNO, Fernanda; CARDOSO, Paula; FALTAY, Paulo. National Employment System and the automated management of unemployment. Derechos Digitales [online], March 2021. Available at: https://ia.derechosdigitales.org/wp-content/uploads/2021/04/CPC_informe_BRASIL.pdf

[4] WINNER, Langdon. Do artifacts have politics? ANALYTICA, Rio de Janeiro, vol. 21, nº 2, 2017, p. 195-218.

[5] NOBLE, Safiya Umoja; LE BUI, Matthew. We’re Missing a Moral Framework of Justice in Artificial Intelligence. In: The Oxford Handbook of Ethics of AI. Oxford University Press, 2020.

[6] COGLIANESE, Cary. Administrative Law in the Automated State. Daedalus, vol. 150, no. 3, p. 104, 2021. U of Penn Law School, Public Law Research Paper No. 21-15. Available at SSRN: https://ssrn.com/abstract=3825123

[7] LENHART, Amanda; OWENS, Kellie. Good Intentions, Bad Inventions: The Four Myths of Healthy Tech. Data & Society, October 2020.

[8] BENJAMIN, Ruha. Race After Technology: Abolitionist Tools for the New Jim Code. Polity Press, 2019.

[9] HINER, Jason. Ruha Benjamin: Why tech made racial injustice worse, and how to fix it. CNET [online]. Published on June 25, 2020. Accessed on March 2, 2023. Available at: https://www.cnet.com/culture/why-tech-made-racial-injustice-worse-and-how-to-fix-it/

[10] SCHWARTZ, Reva; VASSILEV, Apostol; GREENE, Kristen; PERINE, Lori; BURT, Andrew; HALL, Patrick. Towards a Standard for Identifying and Managing Bias in Artificial Intelligence. National Institute of Standards and Technology: U.S. Department of Commerce, March 2022. Available at: https://doi.org/10.6028/NIST.SP.1270

[11] HAO, Karen. This Is How AI Bias Really Happens – and Why It’s So Hard to Fix. MIT Technology Review. Published on February 4, 2019. Available at: https://www.technologyreview.com/2019/02/04/137602/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/

[12] See (HINER, 2020).

[13] See (NOBLE; LE BUI, 2020), note 5.

[14] CRAWFORD, Kate. Atlas of AI. Yale University Press, 2021.

Clarissa Mendes

Holds degrees in International Relations (FIR) and Social Sciences (UFPE), with an exchange period at the Facultad de Filosofía y Letras of the Universidad de Valladolid (Spain). At UFPE, she earned a master’s degree and is a doctoral candidate in Sociology, and she is a member of the Núcleo de Estudos e Pesquisas em Políticas de Segurança (NEPS). She is currently a researcher at IP.rec in the areas of Artificial Intelligence and Virtual and Augmented Reality technologies.
