It was recently reported that MIT had developed the first “psychopathic” artificial intelligence. Nicknamed Norman, after the character Norman Bates from Alfred Hitchcock’s film *Psycho*, the robot is in fact a simple system that describes images. What earned it the psychopathic label is that it was fed biased data, which demonstrates that the choice of data shapes the behavior of a machine learning algorithm.

To explain: when designing a machine learning algorithm, whatever function it will perform (recognizing patterns in images or sounds, deciding which of several possible scenarios is best, and so on), the algorithm goes through a training phase and a testing phase, and the dataset is usually split between these two purposes. Developers then feed in the data from which the machine is supposed to learn to behave as expected. In Norman’s case, the MIT researchers showed that the same method can produce different interpretations of the images submitted to the algorithm; it all depends on the dataset used to train the machine. For example, for one of the images submitted to Norman, it determined that the image looked like “a man shot to death,” while another machine it was compared against said the same image resembled “a close-up of a flower vase.” It is the same system, fed with different data.

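To make the idea concrete, here is a minimal sketch in Python using scikit-learn. It is not MIT’s actual Norman code: the textual “image descriptions,” the labels, and the model choice are all invented for illustration. It shows the usual train/test split and how the same pipeline, trained on two differently labelled datasets, describes the same input in opposite ways.

```python
# Minimal sketch (not MIT's Norman): one pipeline, two labelled datasets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical textual stand-ins for image features.
images = ["dark red shapes", "bright petals", "red stain on the floor",
          "a vase on a table"] * 10

# A "neutral" annotator and a "biased" annotator label the same images.
labels_neutral = ["flowers", "flowers", "spilled paint", "flowers"] * 10
labels_biased  = ["violence", "flowers", "violence", "flowers"] * 10

def train(labels):
    # Usual practice: hold out part of the dataset for the testing phase.
    X_train, X_test, y_train, y_test = train_test_split(
        images, labels, test_size=0.25, random_state=0)
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(X_train, y_train)                           # training phase
    print("test accuracy:", model.score(X_test, y_test))  # testing phase
    return model

normal_model = train(labels_neutral)
norman_model = train(labels_biased)

# Same system, same input, different training data -> different answers.
print(normal_model.predict(["dark red shapes"]))  # likely ['flowers']
print(norman_model.predict(["dark red shapes"]))  # likely ['violence']
```

Norman’s “psychopathy” is this effect at scale: nothing in the algorithm changes, only the data it learns from.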
Before this, in mid-2017, researchers at Stanford University developed an algorithm capable of distinguishing homosexual from heterosexual people through facial recognition, with greater accuracy than the humans who participated in the tests. The developers of this AI intended to protect the LGBT community; instead, they themselves created a tool capable of putting that very community in danger. Naturally, human rights and LGBT advocacy groups condemned the work.

These two examples lead us to the discussion of ethics in technological development, and more specifically in the development of artificial intelligence. The field of Computer Science has never deeply explored the ethical issues surrounding the production of these new technologies. A good indication of this is that top universities in the field, including Stanford, Harvard, and the University of Texas at Austin, announced this year that they plan to add courses with this objective to their academic curricula.

Historically, the ethical limits set by the state on technological development have targeted inventions that threatened human survival and the environment. Governments thus began to restrict destructive technologies, such as chemical and biological weapons, through bilateral or multilateral agreements. Likewise, the banning of organic pollutants and the growing concern for biodiversity preservation are examples of topics that entered government agendas. More rarely, agreements have also emerged targeting technological activities that result in some form of irreversible moral damage [1].

Now, however, there is a new category of machine learning algorithms that deeply affect human life, often infringing rights and producing harmful consequences for people, with the potential to cause precisely this irreversible moral damage. Consider, for example, predictive software used to set criminal sentences: it calculates penalties from criteria over which the convicted person has no control or even awareness, and which they therefore cannot contest or oppose. Here technology begins to show a devastating potential. The same happens when an out-of-control autonomous car must decide between running over a group of children and killing its passenger. Some level of ethics is necessary for such decisions to be made.

In this context, what is meant by ethics? And by morality? Morality refers to the set of norms, values, principles of behavior, and customs of a particular society or culture. Ethics, in turn, studies the principles underlying those social norms.

Thus, in the debate on the ethics of artificial intelligence development, it is essential to ask whether these technologies conform to the moral rules society already accepts. Anything that exceeds those limits should be the subject of in-depth debate so that society can take a stance on it.

It is important to remember that technology is not neutral and that the choices made in its development have social impacts. Just because something is technically possible does not mean it should be done. A telling example is the previously mentioned artificial intelligence that performs facial recognition of homosexuals: it could become a weapon if it fell into the hands of dictatorial governments, or of governments that persecute people in non-normative relationships. This is especially true of artificial intelligence algorithms, currently seen as the solution to social problems and used in the most diverse activities, for which there is no transparency standard to follow. This opens the door to all sorts of abuses, ranging from discrimination in targeted online advertising and Cambridge Analytica’s use of data to manipulate public opinion during elections, to MIT’s psychopathic robot.

Given this situation, the initiative of the aforementioned universities to bring ethics into academic discussion is of the utmost importance: it lays a foundation for students who may go on to become developers at the world’s largest technology companies. As Professor Lawrence Lessig argues, technologies are regulated not only by state legal codes but also by technological code. In other words, the developers of these technologies can themselves act as regulators if they adopt good practices, such as transparency about how an algorithm works, about the collection, use, and treatment of the data fed into it, and about the training of the AI, without waiting for limits to be imposed by the state or other bodies. Otherwise, without this concern for ethics, regulation by code will still occur, but with undesired effects.

[1] JASANOFF, Sheila. *The Ethics of Invention*. 1st ed. New York: W. W. Norton & Company, 2016.

Raquel Saraiva
