Imagine that ChatGPT 5 will have almost ten times Einstein’s IQ. If Einstein alone was able to completely transform humanity, then we are one update away…

…From the Age of Ultron! [1][2]

To those who are not experts on the subject, the hypothesis that machines can surpass human intelligence, however dystopian, may seem plausible. If you have already used ChatGPT, you may have been impressed by how realistic the conversation can seem. Computers are already capable of beating world champions at complex games such as chess, backgammon and poker, feats made possible by the immense volume of information these systems can process in a matter of moments.

You don’t have to be a layperson to be seduced by this possibility: experts, especially in computer science, have been making similar claims for decades. The most recent case is that of Geoffrey Hinton, considered the ‘godfather of AI’ for having created the algorithm that underlies neural networks, which he built by trying to simulate the functioning of neurons and their connections in the human brain. He recently declared that artificial intelligences would already be 1,000 times more intelligent than humans and that the risk of them taking control of society is real: “The algorithm we created in the 1980s is more efficient than the brain. Today’s large language models can know more than a person, with far fewer connections.”

Many of these concerns are in line with the idea of technological singularity: a moment at which technological acceleration reaches such a pace that machines and AI systems surpass human understanding and control, causing irreversible changes. The debate was popularized by Vernor Vinge in 1993, when he argued that an imminent point of rapid advancement could lead to drastic changes in society. In its wake, Ray Kurzweil predicted that the journey towards the singularity would pass through an artificial intelligence of a superhuman nature, capable of conceiving ideas previously unimaginable to humans and of developing ever more sophisticated tools. This could mean everything from overcoming diseases to the search for technological immortality.

The singularity is a frequent concern in science fiction. In William Gibson’s classic Neuromancer (1984), “Turing police” have the special role of regulating artificial intelligences capable of improving their own programs, to ensure that they never exceed a certain level of intelligence; the entire plot, in fact, centers on the efforts of one of these AIs to circumvent this control. Films such as Blade Runner (1982), Ex Machina (2015) and Her (2013) each present, in their own way, somewhat dystopian visions of artificial intelligence, centered on a kind of transcendence of the machine.

Anthropomorphic characteristics in a machine?
Since the beginning of the field, the language that has grown up around Big Data and machine learning has suggested an equation between human and machine intelligence by evoking activities that are human and biological. To begin with, the term “artificial intelligence,” coined by John McCarthy in 1956 to name the field of studies concerned with automation, did not carry this ambiguity out of mere terminological carelessness: the research it described carried the expectation that machines could perform human tasks, including tasks linked to cognition, such as abstraction and language. The lexicon of the field is built on such equivalences: networks are “neural”; data is “fed” to a computer, which “digests” the information; machines can “learn” and “think” [3].

The term machine learning is one of these: it suggests that the computer has agency, that it is somehow sentient, insofar as it “learns” – a term normally applied to sentient beings. In practice, the machine’s “learning” is closer to a metaphor: it means that the machine can improve its performance at its programmed, routine, automated tasks. When a machine “learns,” it does not mean it has acquired a kind of metal brain; it means it has become more accurate at a specific task, according to a specific metric defined by someone. No knowledge, wisdom or agency is acquired. This kind of linguistic confusion helps to further blur the boundaries between human and non-human in the field.
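To make this concrete, here is a minimal sketch, not drawn from any of the works cited here, of what “learning” usually amounts to in practice: a program adjusts a handful of numbers so that an error metric, defined in advance by a person, gets smaller. The data, the model and the metric below are all invented for illustration.

```python
# A minimal, hypothetical sketch of machine "learning":
# repeatedly adjusting numbers so that a human-chosen error metric gets smaller.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)            # invented input data
y = 3.0 * x + 2.0 + rng.normal(0, 1, 100)   # invented target values

w, b = 0.0, 0.0        # the model's entire "knowledge": two numbers
learning_rate = 0.01

for step in range(1000):
    pred = w * x + b
    error = pred - y
    mse = np.mean(error ** 2)               # the metric someone defined
    # gradient descent: nudge the numbers in the direction that lowers the metric
    w -= learning_rate * 2 * np.mean(error * x)
    b -= learning_rate * 2 * np.mean(error)
    if step % 250 == 0:
        print(f"step {step}: mean squared error = {mse:.3f}")

# The "learned" model is only the final pair (w, b); improving the metric
# is all the "learning" there is. No meaning or understanding is involved.
print(f"learned parameters: w = {w:.2f}, b = {b:.2f}")
```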

Similarly, a facial recognition system does not “know”, as a human would, what is or is not a face. In technical terms, what it does is “detect a set of pixel values that generally correlate well with the faces present in the collected training data”. The validation mechanism involved does not rest on teaching a computer the intrinsic meaning of a face. The anthropomorphizing of the language associated with AI is thus far from a linguistic oversight: it is consonant with a series of assumptions, theories and implementations built on breaking down these boundaries.
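Again purely as an illustration, and not as a description of any real system, a toy “face detector” along these lines might look like the following: its entire “knowledge” of faces is an average of pixel values taken from training patches, and “detecting a face” means crossing a correlation threshold that a person chose. The 8×8 patches and the 0.2 threshold are invented for the example.

```python
# Hypothetical sketch: "detecting a face" as pixel correlation with training data.
import numpy as np

rng = np.random.default_rng(1)

# Invented "training data": tiny 8x8 grayscale patches labelled as faces.
training_faces = rng.normal(0.5, 0.1, size=(50, 8, 8))
template = training_faces.mean(axis=0)   # the system's "knowledge": an average of pixels

def face_score(patch: np.ndarray) -> float:
    """Correlation between a candidate patch and the average training patch."""
    a = patch.ravel() - patch.mean()
    b = template.ravel() - template.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

THRESHOLD = 0.2   # arbitrary cut-off chosen by a person, not by the machine

new_patch = rng.normal(0.5, 0.1, size=(8, 8))
print("face" if face_score(new_patch) > THRESHOLD else "not a face")
```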

How do you measure the intelligence of something? By playing chess?
In the absence of a clear understanding of how the brain works and of what might or might not be a good basis for the concept of intelligence, computer scientists have introduced various proxies for intelligence, one of the main ones being performance in games. Complex games have been one of the preferred methods for testing programs throughout the history of AI research, since the 1950s.

There are several examples: in 1979, the computer program BKG 9.8 defeated the world backgammon champion, Luigi Villa. But the best-known case is the confrontation between the world chess champion Garry Kasparov and Deep Blue, a computer program developed by IBM. The first match, in 1996, was won by Kasparov. The program was then improved – where it could previously analyze 100 million moves per second, its capacity was increased to 250 million. The following year, the famous rematch took place, and the machine defeated the human. More recently, in 2011, IBM’s Watson computer beat humans in the quiz show Jeopardy!. The game of Go, played for thousands of years in China, was one of the last barriers – its world champion was defeated in 2016 by the program AlphaGo. But what does this mean?

“We have almost 50 years of human/computer competitions, but does that mean that any of these computers are intelligent? No, it does not. For two reasons: the first is that chess is not a test of intelligence; it is a test of a particular skill, the ability to play chess. If I could beat a chess champion and yet be unable to pass the salt at the table when asked, would I be considered intelligent? The second reason is that treating chess as a test of intelligence rests on the false cultural assumption that brilliant chess players are brilliant minds, more gifted than other people. Yes, many intelligent people can be excellent at chess, but chess, like any other singular skill, does not denote intelligence.” [4]

The use of games to assess machine performance has shaped research agendas, as well as implicitly prioritizing certain types of intelligence over others. However, it is important to remember that unlike everyday life, games offer a closed world, with defined parameters and clear victory conditions.

After all, what is human intelligence made of?
The idea that non-human systems (whether computers or animals) would be analogous to the human mind and that, with sufficient training or sufficient resources, an intelligence like ours could be created from scratch is a particularly strong mythology in the field of AI. Since the middle of the 20th century, the belief that human intelligence could be formalized and reproduced by machines has been debated by theorists. In 1950, Alan Turing, for example, predicted that by the end of the century “the use of words and general educated opinion will have changed so much that it will be possible to speak of thinking machines without expecting to be contradicted”.

Many of these positions are based on the computational theory of mind (or computationalism), which proposes that thought is a form of computation and that complex brains like ours are therefore information-processing systems. If the brain resembles a digital computer, it could in principle be reproduced or simulated by a sufficiently sophisticated one. One example is the Human Brain Project, which has based its entire research program on the hypothesis that digital computers will soon be able to simulate the animal brain. Its main proponent, Henry Markram, recently stated that “consciousness is simply the product of the massive exchange of information between a trillion brain cells… I do not see why we cannot generate a conscious mind.”

The blind spot in many of these approaches concerns the interdisciplinary knowledge needed to understand how intelligence works beyond the computational realm centered on information processing. Neuroscientist Miguel Nicolelis points out that the human brain is full of higher functions that are not computable: intelligence, intuition, creativity, mathematical abstraction, empathy, altruism, fear of death, the aesthetic sense and definitions of beauty, to name a few examples. For him, the brain has three fundamental properties: the malleability to adapt and learn; the ability to allow several individuals to synchronize their minds around a task, goal or belief; and an unparalleled capacity for abstraction. However efficient digital technology may be at transmitting information as sequences of 0s and 1s, the brain’s analog system has an unmatched complexity of processes. In short, AIs may surpass us – and by far – in their ability to calculate, but they are incapable of attributing any meaning to those calculations. This is a qualitative, not a quantitative, difference.

Furthermore, from a sociological perspective, the problem with hypotheses like these is that they disregard the ways in which humans are embodied, relational, and situated within broader ecologies. It is as if intelligence existed independently, separate from social, cultural, historical, and political forces. It is always important to remember that the concept of intelligence has caused innumerable harms over the centuries, being used to justify relations of domination, from slavery to eugenics.

The myth of AI exceptionalism

Any sufficiently advanced technology is indistinguishable from magic.

(Arthur C. Clarke, author of the science fiction short story The Sentinel, which inspired the film 2001: A Space Odyssey)

Too often, exaggerated predictions about AI align with the idea of algorithmic exceptionalism: because AI systems can perform fantastic feats of computation, they must be more intelligent and objective than their human creators, who are always susceptible to failure. The underlying premise is that mathematical formalisms could help us understand humans and society with predictive certainty. There is an epistemological twist at play: the complexity of the world could supposedly be reduced, with increasing accuracy, to what machine learning can capture, as informational noise is separated from the “truth” expressed in data.

Kate Crawford and Alex Campolo call this enchanted determinism: AI systems are seen as enchanted, belonging to a realm beyond the knowable world, and yet deterministic, in that they discover patterns that can be applied with predictive certainty to everyday life. This lends AI an almost theological aura: uninterpretable even to the engineers who created them, the systems appear too complex to regulate. This technique of “obscuration by mystification” is often used in the public sector to argue for the inevitability of a phenomenon. And treating something as inevitable is dangerous: it implies that there are no alternatives, that there is nothing to be done but accept it. We are told to focus on the innovative nature of the method rather than on what is most basic: its purpose. For the authors, enchanted determinism obscures power and restricts informed public discussion, public scrutiny, or even the possibility of outright rejection.

Enchanted determinism has two complementary faces: technical utopianism and technical dystopianism. Technical utopianism sees computational interventions as universal solutions applicable to any problem. Technical dystopianism, on the other hand, blames algorithms for their negative consequences, as if they were independent agents accountable to no one and detached from the contexts that shape them and in which they operate. The two are metaphysical twins: both treat machine intelligence as something singular or superlative, the solution or the ruin of all problems, in an ahistorical vision in which only technology has power over things.

Neither intelligent nor artificial. What are we talking about then?
According to neuroscientist Miguel Nicolelis, artificial intelligence is neither intelligent nor artificial. “It is not artificial because it is created by us; it is natural. And it is not intelligent because intelligence is an emergent property of organisms interacting with the environment and with other organisms, a product of the Darwinian process of natural selection. The algorithm can walk and do things, but it is not intelligent by definition.”

From a broader perspective, sociologist Kate Crawford makes the same claim, that AI has neither of these two attributes, but points to other aspects: she stresses that these systems are both embodied and material, made of natural resources, fuel, human labor, infrastructure, logistics, histories and classifications. They have no autonomy or rationality and cannot discern anything without extensive and intensive computational training, on large data sets and with predefined rules. Because they depend on all of this to exist, they both reflect and reproduce social relations and understandings of the world.

Even in the computer science community, the term “artificial intelligence” has gone in and out of fashion over the decades. In general, it is treated as an umbrella that encompasses a set of techniques, and the technical literature tends to prefer more specific terms, such as machine learning. According to Kate Crawford, the term AI is used most often when funding is being sought, since it has greater advertising potential and is easier for non-specialists to grasp. More than settling on a closed concept, what matters is to note that each way of defining artificial intelligence does work: it shapes how AI will be understood, measured, valued and governed.

In practical terms, and although the definition is far from being a consensus, Maria Paz Canales points out that there is still a long way to go before what has been implemented in Latin America can properly be called AI. In most cases, these are modest technological developments: automated decision-making processes or algorithms that simplify the processing of large volumes of information, whose labels are adopted by governments because they carry a greater appeal, associated with efficiency and modernity. The most important thing, for the author, is not to lose sight of the public interest in the adoption of these technologies, avoiding the trap of techno-solutionism.

Many of these systems are based on models that encode stereotypes about minorities and strip information of its context, reinforcing discriminatory dynamics and spreading misinformation. Meanwhile, automated decisions are already being used in Brazil, for example, to grant access to public policies such as the Emergency Aid benefit and the National Employment System (SINE), in most cases with few mechanisms for evaluation and accountability. A bill to regulate artificial intelligence in the country, PL 2338/2023, is currently under consideration, with the potential to mitigate or deepen many of these impacts on the Brazilian population.

It is not magic: it is statistics on a large scale [5]. The most urgent thing to understand about artificial intelligence and its risks is not a hypothetical domination of humanity by ultra-intelligent robots or any other scenario worthy of science fiction. It is the impact of technologies that are being deployed right now on the population, especially its most vulnerable members, and whose effects can already be felt. It is to ask what is being implemented, for whom it is being implemented, and who is responsible for deciding.


Notes
[1] Dialogue between Felipe Castanhari and Cauê Moura in an episode of the Podpah podcast broadcast on June 27, 2023. Transcript with excerpts. Full version available at: https://www.youtube.com/watch?v=9S5ls_QltgU

[2] In the film Avengers: Age of Ultron, the plot centers on Ultron, an artificial intelligence created for Tony Stark’s global defense program to ensure peace. Upon awakening, however, the AI concludes that in order to achieve the objective for which it was designed it must exterminate humanity.

[3] Our translation. In the original: “data are ‘fed’ to a computer and ‘digests’ information and machines ‘learn’ and ‘think’”. See ELISH and BOYD, 2018, p. 66.

[4] Our translation. See NEVILLE-NEIL, 2017.

[5] See CRAWFORD, 2021.


References
BROUSSARD, Meredith. Artificial Unintelligence: How Computers Misunderstand the World. London: MIT Press, 2018.

CAMPOLO, Alexander; CRAWFORD, Kate. Enchanted determinism: Power without responsibility in artificial intelligence. Engaging Science, Technology, and Society, 2020.

CANALES, Maria Paz. What do We Talk About When We Talk About AI? In Latin America in a Glimpse. Digital Rights. 2020.

CRAWFORD, Kate. Atlas of AI. New Haven and London: Yale University Press, 2021.

GANASCIA, Jean-Gabriel. Artificial Intelligence: Between Myth and Reality. In UNESCO Courier, No. 3, July-September 2018.

ELISH, M. C.; BOYD, Danah. Situating Methods in the Magic of Big Data and AI. Communication Monographs, 2018.

MEYRAN, Regis. Miguel Benasayag: It is humans, not machines, who create meaning. In UNESCO Courier, No. 3, July-September 2018.

NEVILLE-NEIL, George. The Chess Player Who Couldn’t Pass the Salt. Communications of the ACM, v. 60, n. 4, 2017.

NICOLELIS, Miguel; CICUREL, Ronald. The Relativistic Brain. Natal, Montreux, Durham, São Paulo: Kios Press, 2015.

Clarissa Mendes

Graduated in International Relations (FIR) and Social Sciences (UFPE), with an exchange program at the Faculty of Philosophy and Letters of the University of Valladolid (Spain). She holds master’s and doctoral degrees in Sociology from UFPE, where she is also a member of the Center for Studies and Research in Security Policies (NEPS). She is currently a researcher in the areas of Artificial Intelligence and Virtual and Augmented Reality technologies at IP.rec.
