

The electoral process is back on the national agenda with the municipal elections, focused on city halls and city councils. Two points deserve to be highlighted: the number of candidates vying for the base of the state bureaucracy in Brazil, the municipal level, and, once again, the revived debate over the harmful uses of technology for disinformation purposes.
With each electoral cycle, as a lingering trauma from the 2018 elections and the unwitting embrace of an apparatus for the production and appropriation of disinformation, state agents and the traditional media greet the elections anew with a hypothetical scenario of the dissolution of the pillars of truth and objectivity. As happened in the last elections, and even more so after the launch of generative artificial intelligence models, the question that cannot be silenced returns: can we combat disinformation? More precisely: do we have the capacity to combat disinformation created with AI-generated content?
Right off the bat, the answer is a resounding “no”.
The first layer of that negative involves the fact that lies and truth did not originate in the 21st century; they have been part of human culture since its birth. More than that, the sophistication of the tools used to create lies (false information, or disinformation) often does not replace, but rather exploits, the element of trust that makes up the human fabric of society.
This means, quite concretely, that the viral effect of disinformation comes less from its inner truth or falsehood, less from the information itself, and more from the people who circulate, believe, disseminate and uncritically endorse it. The “WhatsApp aunt” is, more often than not, less a soap-opera villain orchestrating lies on the internet and more someone committed to deeply held ideologies, as we all are.
The second layer of the negative is linked to the fact that, although the technology that creates the conditions for fabricating false or artificial content is old, what is new is its platformized availability, accessible within a few clicks of the nearest search engine.
Deepfakes are products of the artificial processing of data by algorithms widely used for artistic, satirical and critical purposes. Deepfakes are also products that, driven by deliberate intent, distort factual truth to make something that was never a concrete event seem real.
Now let’s add up the negatives: if absurd content goes viral because of ideological commitment (“vaccines kill”, the “penis baby bottle” hoax, “Pizzagate”, etc.), how will this ideological commitment manifest itself when faced with content that, at a glance, seems credible? Content that therefore demands we stop and examine it, rather than assent to it at face value?
Like a series of low-budget movies of questionable taste, disinformation has already gone through its “origin”, its “now”, and is facing a “return”; the villain this time is Artificial Intelligence. And it is not that AI, embedded as it is in the current means of production, is an innocent heroine. But the question looming on the horizon is: who are the humans who appropriate it?
It is to answer this question, for example, that the country’s highest courts, notably the STF and the TSE, have pursued programs to combat disinformation, creating educational campaigns and partnerships with fact-checking and research entities, among them IP.rec. The effort is commendable and necessary, but it does not change the answer to the question in its multiple negative dimensions.
If a presidential election in the 1980s could be marred by the dissemination of fake news on national television, and if, in the remote corners of Brazil, the electoral offenses tirelessly combated by the electoral justice system remain rampant, there is no way to confront, content by content, a scenario in which a technology as widespread, pulverized and accessible as the AI-produced deepfake is within everyone’s reach.
In its law-enforcing role, the Superior Electoral Court issued a rule holding accountable those who use this type of technological expedient to spread lies, expressly specifying (in keeping with the principle of legality) that the use of algorithmic systems and models constitutes the same type of electoral offense. It also creates due diligence obligations requiring large companies to remove, immediately upon notice, fake content disseminated through their applications.
We have a villain, we have plots, we even have anti-heroes (state power should not sit at either pole of the scale of values): where is the hero?
The hero is the critical education, toward liberating action, of each voter. It is an anticlimactic ending, but an objectively true one. The “vulnerabilities” exploited to make fake news go viral are precisely the tacit or underground commitments and pacts that individuals and society make with the absurd and the fantastical.
Yes, artificial intelligence makes it harder to tell the true from the simulated, but all interpretation ultimately falls on the individual who interprets. If this individual lives by Humpty Dumpty values, whose manual says that “what I say means what I want it to mean”, and turns society’s entire semantics and history of values into one big mess, well…
It is worth ending with a bit of unsolicited philosophy: technological devices and objects only become act in use; all technology is, in a certain sense, a possibility of occurrence and, therefore, potency. It is less about the hammer and more about who holds the hammer.
