As a Nordestino – that is, someone from Brazil’s Northeast, a region historically marked by cultural richness and structural marginalization – and as a researcher, I would like to begin this text with a causo, a regional anecdote: version after version, I have asked various AI systems, in the form of LLMs, to produce examples of regional literatures on random themes, notably literatura de cordel. Cordel is a formal, documented literature, yet one rooted in expressions of nordestinidade, whether located in the Northeast region itself or in the Northeastern diasporic peripheries across Brazil’s urban centers.

Still today, in 2025, version after version, systems guided by an efficiency-driven logic and whose internal mechanism is structured around the massive accumulation and segmentation of data into tokens (tokenization) deliver outputs that present themselves as accurate and definitive. As the Nordestino that I am, even on a superficial first reading, I immediately recognize them as simulacra. It is not cordel, but rather a poem that simulates aspects of the formalized cordel tradition without, however, expressing it in its particulars.
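The tokenization mentioned above can be made concrete with a toy sketch. This is not any real model’s tokenizer: the vocabulary `VOCAB` and the greedy longest-match strategy are illustrative assumptions, meant only to show how a subword vocabulary skewed toward one linguistic universe fragments vocabulary from another into many small, opaque pieces.

```python
# Toy sketch of greedy longest-match subword tokenization.
# VOCAB is an invented, deliberately skewed vocabulary: it covers
# pieces of common English words well, and regional Portuguese poorly.
VOCAB = {"the", "cord", "el", "poem", "lit", "era", "ture",
         "nor", "de", "st", "in", "ida",
         "a", "e", "i", "o", "u",
         "n", "r", "s", "t", "d", "c", "l", "m", "p", "b", "g", "h"}

def tokenize(word: str) -> list[str]:
    """Greedily match the longest known subword at each position."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try longest candidate first
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])  # unknown character: fall back to it
            i += 1
    return tokens

print(tokenize("literature"))      # ['lit', 'era', 'ture'] – few pieces
print(tokenize("nordestinidade"))  # ['nor', 'de', 'st', 'in', 'ida', 'de']
```

Under this (assumed) vocabulary, “literature” survives in three coherent pieces, while “nordestinidade” is shattered into six fragments that carry no meaning of their own – a crude analogue of how statistical efficiency over hegemonic corpora shapes what a model can and cannot articulate.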

When I questioned representatives of the companies that create and curate such models, the response always split into two paths: either blindly defending the delivered result, asserting that what the LLM produced was precisely what had been requested; or, worse, not even registering the issue as a problem to be confronted.

The use of, and discourse about, AI in the context of the years 2020–2025, especially with the market launch of generative AI systems and LLMs, is so widespread that it creates the impression of something perennial, inescapable. This is, indeed, the defense of many uncritical devotees of technology: that its adoption will occur with or without our consent and that, therefore, one must resign oneself and adapt.

This deterministic premise lies at the base of colonial processes in the global diffusion of artificial intelligence. I would like to argue that such colonial processes establish a dynamic of erasure of ways of conceiving the world – that is, of epistemicide.

The discourse thus propagated, while ignoring the real dimensions of technological determinism, seeks to spread an intentional agenda aimed at preventing resistance in the process of making AI an ever more – and truly – pervasive technology. It is, in a sense, circular reasoning and argumentation, a fallacy that seeks to ground itself in itself – as in the story of Baron Münchhausen, who, to avoid sinking into the mire, pulls himself out of danger by his own hair. It is nonsense!

But what agenda is this that appears both as narrative and as narrated fact? Artificial intelligence, as theory, is one of those bodies of knowledge rooted in classical antiquity, with reflections that shape its contemporary conception. Thus, when we speak of a conception or metaphor of the “mind as computational machine”, we are still speaking of notions whose origins lie among the Pythagoreans and the search for the motor of “being,” the arché (BLUMENBERG, 2020).

This metaphor unfolds and acquires new contours and, though still present, develops in a different dimension from the twentieth century onward, with the seminal work of Alan Turing consolidating earlier and scattered notions about a computing machine – among the fundamental notions, the distinction between mind and body and, therefore, the derived idea that the mind is computable (mathematizable, formalizable).

On the other hand, epistemicide is the phenomenon of the erasure, marginalization, or systematic destruction of certain knowledge systems and epistemologies, generally linked to colonial processes, Eurocentric modernity, and capitalist globalization. This concept, developed within decolonial studies, highlights how certain forms of knowledge were historically delegitimized, while Western knowledge was consolidated as a universal norm.

This process occurred through different mechanisms, such as colonization itself, which imposed a hegemonic model of knowledge upon peoples subjected to European domination, but it perpetuated itself in new, more sophisticated guises for establishing and sustaining regimes of subordination.

If, in the past, local forms of knowledge were downgraded to the category of myth and folklore, being systematically excluded from institutional spaces of knowledge validation, in the present these subalternized knowledges are erased through a predatory logic of the symbolic economy that imposes their peripheralization.

This dynamic of epistemological domination operates through the primacy of “Western epistemology” as the only legitimate producer of truth, ignoring epistemic plurality and subordinating alternative epistemologies to a subaltern and scarcely visible status. Walter Mignolo explains that this structure of exclusion is not merely a historical phenomenon, but a colonial continuity that still defines global relations of knowledge production and validation, within a “rhetoric of modernity” that classifies and assigns roles to different peoples of the world (MIGNOLO, 2007).

This epistemicide has profound consequences, such as the denial of epistemic diversity, the imposition of a single standard of rationality, and the perpetuation of social inequalities that restrict the capacity of historically marginalized groups to construct and disseminate their own paradigms of knowledge, since they fall outside the imposed type-norm (MIGNOLO, 2007).

Specifically in artificial intelligence, Kate Crawford, in her Atlas of AI, argues that AI models reproduce structural inequalities because they are trained on datasets that reflect hegemonic patterns of classification of the world (CRAWFORD, 2021). This phenomenon can be understood as an extension of the coloniality of knowledge in the digital age, as algorithmic systems reinforce the invisibilization of local epistemologies by prioritizing information sources grounded in Eurocentric, Global North–oriented knowledge.

Safiya Umoja Noble, in Algorithms of Oppression, complements this analysis by demonstrating how search algorithms, which are at least in part AI systems, are not neutral, but rather built within a logic that favors certain discourses and groups while erasing others (NOBLE, 2021). According to the author, hegemonic search applications reinforce stereotypes and exclude content from marginalized communities (NOBLE, 2021).

In the context of artificial intelligence, this hierarchy manifests in the way data, algorithms, and machine learning models are constructed predominantly from a Eurocentric (Northern/Western) bias. This generates a double erasure: first, because it excludes local knowledges from datasets and, second, because it reinforces Western paradigms in the organization of digital knowledge, which comes to be reproduced as a “photograph of the real world.”

AI, thus, can be understood as an instrument of continuity of epistemicide, operating through algorithmic exclusion and the epistemic monopoly of large technology corporations. And such an instrument operates in some specific ways, which I list, without any claim to exhaustiveness.

The first way AI operates as an epistemicidal instrument is through data exclusion, since machine learning models are trained on datasets that predominantly reflect Western, white epistemologies, oriented by a “model of the world” that presents itself as objective, neutral, and value-free (BLUMENBERG, 2020).

Another fundamental mechanism of algorithmic epistemicide is cultural standardization, that is, the way search and recommendation algorithms prioritize Western sources and relegate local epistemologies to a secondary position.

Beyond data exclusion and cultural standardization, epistemicidal AI also manifests through training bias, which determines how knowledge is classified and structured within algorithmic systems. The ontologies and modeling that guide AI systems are, to a large extent, designed from Western categories, which leads to the automatic reproduction of a Eurocentric view of the world.

What is to be done? Mignolo and other authors, such as Hamid Dabashi, Edward Said, and many others, suggest, each in their own way, a delinking from the central paradigm of the norm, which establishes and strengthens processes of cultural and world-knowledge erasure.

Delinking operates within a context of epistemological disobedience, seeking not a mere rejection of contributions from the Global North, but the demarcation of difference and the search for the diverse ways of knowing and existing that populate the different peripheries of the world. Thus, these peripheries cease to be subordinated to a center and become part of a multipolar complex of forms of existence – recognizing different “images of the world” in order to critique the centrality of a single model of the world (BLUMENBERG, 2020).

The importance of local knowledges, identity, and difference in confronting epistemicidal AI lies, therefore, in the capacity of these knowledges to resist the imposition of a single model of rationality and to affirm diverse ways of interpreting and organizing the world.

In the context of the coloniality of knowledge and its elevation to the nth power through artificial intelligence, the valorization of local knowledges becomes essential for questioning and confronting normativizations of the world and their claim to universal truth. It allows different forms of knowledge to coexist and be recognized as legitimate – to the point of being able to shape how we wish to create, appropriate, and manage technological apparatuses.

Every Global South has a literatura de cordel to call its own, in the sense that these experiences are not individual and, therefore, make it possible to build networks of solidarity and change.

______

References

BLUMENBERG, Hans. History, Metaphors, Fables: A Hans Blumenberg Reader. Ithaca, NY: Cornell University Press, 2020.

CRAWFORD, Kate. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven: Yale University Press, 2021.

DABASHI, Hamid. Can Non-Europeans Think? London: Zed Books, 2015.

MIGNOLO, Walter. Delinking: The Rhetoric of Modernity, the Logic of Coloniality and the Grammar of De-coloniality. Cultural Studies, v. 21, n. 2-3, p. 449-514, 2007.

NOBLE, Safiya Umoja. Algoritmos da Opressão. Rio de Janeiro: Editora Rua do Sabão, 2021.

André Fernandes

Director and founder of IP.rec, he holds bachelor’s and master’s degrees in Law from UFPE, with a concentration in Legal Theory and Dogmatics. He is a PhD candidate at UNICAP, in the technology and law research line, focusing on artificial intelligence and legal concepts. He teaches in the graduate programs of UFPE and CESAR School. Member of expert groups: at the Internet Society, the Working Group on Intermediary Liability; in the Brazilian federal government, the Expert Group of the Brazilian AI Strategy (EBIA, Axis 2, Governance). Founder and former board member of the Youth Observatory, Internet Society. Founder, former President, and current Vice-President of the Technology and Information Law Commission (CDTI) of OAB/PE. Alumnus of the CGI.br Internet Governance School (2016). Former Fellow of the Center for AI and Digital Policy. At IP.rec, he works mainly in the areas of intermediary liability, labor automation and artificial intelligence, and multistakeholderism.
