The discussion on artificial intelligence and its regulation seems to be gaining a new chapter: Senator Rodrigo Pacheco filed the text proposed by the Senate’s committee of jurists, which, in turn, had been convened last year by the Rapporteur, Sen. Eduardo Gomes.

The new text, filed as Bill No. 2338/2023, is a full reproduction of that committee’s proposal, contained in the report published in December 2022, with only revisions to grammar and typos. Materially, therefore, it deserves praise for the same gains and criticism for the same weaknesses.

The debate environment surrounding the “regulation of AI” has every chance of turning sordid, and the first moves have already appeared in newspaper columns and opinion pieces. Before passing an a priori, moralistic judgment, it is necessary to understand that much of this process is inherent to the dynamics of competing interests. These motivations are often diametrically opposed, reflecting different regulatory visions and social projects.

But it is not the Brazilian context that we should look at, or not only it, in the AI debate. Given that the uncritical import of discussions and concepts from the international debate has become a profession, I propose that we turn to the analysis of statements and speeches in that same context in order to prevent, if possible, certain strategies of certain groups from compromising the public square of national debate.

The current conversation among experts in various fields, fueled by the explosion and hype of LLMs and by the “miracles” reported in magazines and lay news outlets with a large dose of imprecision, is the debate about AGI (Artificial General Intelligence). AGI, with superhuman capabilities, would be a sign that a certain threshold had been crossed, one that would lead to the advent of the singularity [1].

Dividing the field into two admittedly arbitrary groups, each with relevant internal differences, we have: (a) those who defend the broad and unrestricted evolution of these models, subdivided between (a1) those who favor continuous and even unbridled advancement and (a2) those who already see an AGI in existence and the need to stop and reflect on it; and (b) the group of those who argue that the models, although capable of much, are not capable of such a feat, and that the discussion framed in this way has diverted attention from relevant debates on AI and its pervasive and complex harmful effects.

The extremes of the discussion reach, on the one hand, the banning and punishment of activities involving AI, which is obviously a minority position, and, on the other, a group of worshippers of the idea of a self-centered (and delusional) evolutionism, to the point of needing to create a church and, within it, to protect the rights of these special and mathematically divine entities (AGI), the true creative advent of a new reality.

Among the daily debates, it is possible to mention some positions taken over the last week on a range of topics related to artificial intelligence. Yann LeCun, Meta’s chief AI scientist, stated that we should not listen to computer scientists who are concerned about the social repercussions of their work. The topic was the possible mass unemployment that could be caused by AI. In his view, only economists would have the training to assess the impacts of the technology on the labor market.

And it does not stop there: Geoffrey Hinton, a former Google scientist, appeared in the headlines of American (and Brazilian) newspapers stating that the time for a warning about conscious AI has passed, that it is already a reality, and that companies like Google and OpenAI already know it. Hinton himself stated that the complaints of former employees in the ethics department were of lesser importance, since they did not deal with the existential, end-of-the-world issues he was addressing. Those complaints, voiced most strongly by Timnit Gebru, concerned biases and harm to Black populations and, especially, Black women, and the lack of commitment by these private actors to minimum adaptation processes to mitigate harm of all kinds.

Other recent positions concern the importance, or the errors, of the famous computer science conference FAccT (Conference on Fairness, Accountability, and Transparency), which would bring together people without diversity and without a focus on a truly critical debate. The origin of the controversy can be found in this text. We also had statements from those who argue that denouncing the models’ hallucinations is something that should be done, but that it is not important from a structural point of view, given that the shift toward the adoption of transformers, LLMs and the like will not be reversed. It would therefore be necessary to stop denouncing the absurdities case by case and to focus on how to deal with them.

Many experts have also advanced a two-pronged critique. The first prong holds that the debate on “ethics in AI” is contaminated and doomed to failure: some say that people in the technical field misjudge the debate on ethics and cannot even speak about it in a specialized manner, yet these same people are now heralding the end of the world caused by the risks of this AI. The second prong advocates a realistic critique, faulting the critical actors, who draw on sociology, anthropology, critical race theory and other influences, for their lack of mastery of the ethical debate.

John Tasioulas, director of the Institute for Ethics in AI at the University of Oxford, argued in a recent report that the focus on ethical aspects has been distorted into the mere implementation of technical measures and that, in order to contain the mistaken development of AI, it is necessary to ignore what certain experts say and to seek minimum consensus on regulatory standards among democratic countries.

Even the FTC has taken a stand in statements and articles, warning that innovation players need to be careful, from the standpoint of commercial practices, about the false promise that certain products have reached this level of development and can deliver solutions that, for many of the scientists involved, are not even close, as well as about covering up the damage caused by the hasty deployment of models in applications placed on the market.

An important point, even before ChatGPT prompts or image generators are used to flood the world with misinformation and falsehoods, is that there is a clear risk that the debate will lack a minimum level of reasonableness, and the profusion of pronouncements, in an infodemic regime, does not seem to be helping.

And Reason [2], recovered here in its most essential meaning within Western humanism, disappears from the debates for lack of a shared “common grammar of values”, since for many actors dubious ethical choices are justifiable, and of a “common grammar of technical elements”, given that many actors consume shallow content on the subject and still hold forth, acting as catalysts for absurd opinions and delusions.

Here it is worth recovering another basic teaching of classical philosophy, on the rhetorical proofs (logos, pathos and ethos). These are three dimensions of discourse and thought realized in the discursive act before an audience, each with its own field of exercise. Logos is concerned with the reason of the good argument (and, therefore, with formalization and the value of truth, the apodictic). Ethos is the proof exercised by the virtuous through their example; it is linked to the authority of the one who speaks. But it is Pathos, which encompasses all arguments outside the apodictic, that is the most effective before audiences: it was so in Classical Greece, and it is so in the information society of fake news. With pathos it is possible to mobilize fears, feelings, prejudices and other dimensions that are part of our knowledge and experience of the world but do not belong to the rational sphere.

It is, therefore, pathos that has been exercised in the sectoral disputes and in the different, contradictory voices that share no values, in ethical reflections that rarely go beyond the strategic discourse of certain interlocutors. Different weight is given to different themes. For example: racial redlining in the use of technologies, that is, zones of exclusion from the opportunity and possibility of using technologies, extending from the virtual to the real, is treated as a lesser evil in the face of the promise of the divine entity that is the singularity.

The ability to speak while managing emotions is vital, and for some it is even a condition for understanding. Aristotle made it clear that, as befits the virtuous, those who give speeches should aim at balance and not at an excess of rhetorical proofs. After all, to put it plainly, excessive pathos turns speech into something pathetic. The massive use of ICTs and AI, however, has led us to contradict the idea that all of society is ready to recognize this. There seems to be a mediation of ethos, but always in opposition to logos. Reason and emotion, treated as enemies, point to the destruction of the public square.

Science without conviction, despite those who ride the hype and worship trends, is not science; but outside a minimum level of reasonableness, nothing seems effectively convincing in the face of the marketing discourse that these initiatives employ. This has already been pointed out here on the IP.rec blog, here and here.

Nelson Saldanha, a professor from Recife, argues in an important essay that the archetypal images of the garden and the square, which gather a series of symbols of the private and the public, are important for our own knowledge; after all, general knowledge is extracted from concrete experience. And why is that? Because the house expands into the city and Greek mystical truth gives way to secular truth, “with the valorization of dialogue and the word-argument, developed precisely within the public space” (SALDANHA, 2005, p. 48).

By appealing to what belongs to the personal sphere and to the distortions of these personal aspects, discourse formatted for pathos falls not only into the realm of the pathetic but also of the pathological. Instead of a healthy public construction, built in the square, the role of the private is subverted and exposed as a (false) public mean: the mean which, constructed in the square of antiquity, was associated with the notion of order and justice (the mean of the scales, in a state of equilibrium, ius aequum).

Here lies the historical connection and the conclusion: the excess and unhealthiness of the profusion of information, the opinions voiced in dissonant tones without a common background, the favoring of irrational discourses mediated by algorithms and marketing businesses, the metaphysical-mathematical delirium set against the facts, and the attempt to combat this chimera will destroy the sociability that is the basis of lasting knowledge and that is capable of setting limits on egoistic privatism. We will lose if this movement continues. We need to denounce the frauds, the unethical and the neophytes, but we also need to educate society in reason-argument, in all its three dimensions.


[1] According to Stuart Russell & Peter Norvig (2021, p. 51), AI research and development efforts would move from a human level to a level far beyond human capacity. The singularity, in turn, is not identical to General AI, but a theoretical-mathematical hypothesis that implies the creation of an entity, ab ovo, that holds a kind of ordering over all the production and energy expenditure of the universe; it would be a higher-order consciousness.

[2] Reason is the search for essential regularities, as opposed to accidents. It is reason that abstracts the essential categories we use to differentiate experiences in the world. Reason abstracts; to abstract is to extract something from outside, revealing its meaning. It is important to note that human knowledge is also abductive: from the abstractions made, we compare the data of the world and point out similarities. This is how we know how to identify the elements of “chairs”: are four legs necessary? Or three, and is being able to sit on them enough? The philosophical theme is of utmost importance for understanding the delusions and misunderstandings of developers about what it means to “interconnect” human intelligence. AI models have nothing to do with intelligence.


References:

RUSSELL, Stuart J.; NORVIG, Peter. Artificial Intelligence: A Modern Approach. Pearson, 2021.

SALDANHA, Nelson Nogueira. The garden and the square: the private and the public in social and historical life. Rio de Janeiro: Atlântica Editora, 2005.

André Fernandes
