Today, the difficulty of adapting rules that originated in an analog world to the typical conflicts of the digital world is becoming increasingly pressing. This is captured by a simple but accurate principle of modern life: “technology changes at an exponential rate, while social, economic, and legal systems change at an incremental pace.” The consequence is a widening gulf between an old world and a new one, and the conflicts that the development of new technologies brings with it.

One of these conflicts is the growing use of algorithms for decision-making. Algorithms are present in everything, from Uber to advertisements. They are used to screen the résumés of candidates for a job opening, to check credit, and to decide on health insurance. Yet many of them are built as “black boxes,” offering no transparency about how they operate.

Such algorithms use big data to solve problems and to support decision-making, and, depending on the data or other factors, they can lead to discrimination. They can also have a direct impact on the choices and options offered to people, on their right to privacy and control over personal data, and on various other rights. Because algorithms depend on historical data, past patterns tend to be reflected in their results, so people who were disadvantaged in the past are more likely to receive unfavorable outcomes. Vulnerable groups are therefore the ones most likely to suffer from this kind of discrimination, as happens, for example, when they are denied credit.
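To make the mechanism concrete, the short sketch below (in Python, with entirely invented data and field names) shows how a rule that merely imitates past decisions carries their disparities forward: applicants from a neighborhood whose residents were historically denied keep being denied, regardless of individual merit. It is a deliberately simplified illustration, not how any real credit model works.

```python
# Hypothetical illustration: a rule fitted to reproduce biased historical
# decisions carries the bias into new decisions. All data here is invented.

historical = [
    # (neighborhood, decision) -- neighborhood acts as a proxy for a vulnerable group
    ("north", "approved"), ("north", "approved"), ("north", "approved"), ("north", "denied"),
    ("south", "denied"),   ("south", "denied"),   ("south", "denied"),   ("south", "approved"),
]

def learned_decision(records, neighborhood):
    """'Learn' the most common past decision for applicants from this neighborhood."""
    decisions = [d for n, d in records if n == neighborhood]
    return max(set(decisions), key=decisions.count)

# New applicants are judged by what happened to people "like them" in the past:
for applicant in ("north", "south"):
    print(applicant, "->", learned_decision(historical, applicant))
# north -> approved
# south -> denied
```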

In a recent decision in “State of Wisconsin v. Eric Loomis”, the Wisconsin Supreme Court in the United States raised concerns about the tool used by the justice system to estimate the likelihood that defendants will commit crimes in the future. The court held that judges may consider the score obtained by the defendant when calculating the sentence, but that warnings should be attached to the score as a way of signaling the tool’s “limitations and cautions.”

In May 2016, ProPublica, an independent, nonprofit news organization, published an analysis of the tool referred to in the Wisconsin Supreme Court decision, called COMPAS. This software is widely used to assess defendants not only in the state of Wisconsin but also in other jurisdictions in the United States. ProPublica’s analysis concluded that the program is often wrong and biased against black defendants: it falsely labeled people who would not go on to commit crimes as future criminals at twice the rate of white defendants.
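The figure at the heart of ProPublica’s analysis is a comparison of false positive rates between groups: among people who did not reoffend, how many had been flagged as high risk? The sketch below uses invented records (not ProPublica’s data or code) simply to show how that metric is computed per group.

```python
# Hypothetical illustration of the metric behind ProPublica's finding:
# the false positive rate (people flagged "high risk" who did NOT reoffend),
# computed separately per group. All records below are invented.

records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("black", True,  False), ("black", True,  False), ("black", False, False),
    ("black", True,  True),  ("white", True,  False), ("white", False, False),
    ("white", False, False), ("white", True,  True),
]

def false_positive_rate(records, group):
    negatives = [(pred, actual) for g, pred, actual in records
                 if g == group and not actual]        # people who did not reoffend
    flagged = sum(pred for pred, _ in negatives)      # ...but were flagged high risk
    return flagged / len(negatives)

for group in ("black", "white"):
    print(group, "false positive rate:", false_positive_rate(records, group))
# black false positive rate: 0.666...  (2 of 3 flagged in error)
# white false positive rate: 0.333...  (1 of 3 flagged in error)
```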

In the same vein, computer scientist Latanya Sweeney, who is black, noticed that when she searched for her own name on Google’s search engine, the results carried a discriminatory bias. With this in mind, she conducted research at Harvard University’s Data Privacy Lab, where she works, which resulted in the article “Discrimination in Online Ad Delivery.” Using first names statistically associated with particular races, she found statistically significant discrimination in the ads served by Google AdSense alongside the search results.
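The “statistically significant” part of a finding like Sweeney’s rests on a standard test of independence between the name searched and the kind of ad served. Below is a hedged sketch, with invented counts rather than the study’s data, of how such a test might look.

```python
# Hypothetical illustration of the kind of significance test behind a finding
# like Sweeney's: is the type of ad shown independent of the name searched?
# The counts below are invented, not the study's data.
from scipy.stats import chi2_contingency

# Rows: name group; columns: [arrest-suggestive ad shown, neutral ad shown]
observed = [
    [60, 40],   # searches on names associated with black individuals (invented)
    [25, 75],   # searches on names associated with white individuals (invented)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p-value = {p_value:.4f}")
# A small p-value (e.g. < 0.05) indicates the ad type is not independent of the
# name group, i.e. statistically significant differential treatment.
```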

This discrimination in decisions made by programs that rely on decision-support algorithms occurs precisely because the code is not subject to any kind of transparency or scrutiny by the technical community.

Cathy O’Neil calls these algorithms “weapons of math destruction” [1] because of their characteristics: they are widespread, capable of affecting the lives of many people; they are mysterious, because the individuals being judged by them have no way of examining their formulas; and they are destructive, because they can unjustly ruin people’s lives, as when they determine the dismissal of an individual or a group of individuals. She emphasizes that, while an algorithm is programmed to solve a problem, it may not only fail to solve it but actually make the situation worse.

In addition, she explains that algorithms are complex decision-making processes and that the same algorithm can be used for good or for ill. An example is data related to an individual’s medical history: it can be used to give a patient better care upon arrival at an emergency service; at the same time, it can be used as a filter by a company that wants to hire employees but does not want to incur substantial health insurance costs.

In the same sense, Frank Pasquale, in his book “The Black Box Society — The Secret Algorithms That Control Money and Information” [2], states that the accountability processes available today usually fail because the algorithms used in decision-making tools remain opaque, or the data used is opaque, or even a combination of both factors.

As an example, Pasquale points out that banks use computer programs to decide whether or not to grant credit to an individual, such as mortgages, loans, or credit cards. Based on the reports these programs generate, banks calculate a score and adjust the interest rate accordingly, so that, for example, someone with a higher score is offered a lower rate. However, according to him, research shows that about 28% of these computer-generated reports contain clearly erroneous information about the individuals being analyzed, which in turn leads to evaluation errors.
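As a hypothetical illustration of why those report errors matter, the sketch below maps a score to an interest rate using invented bands: a single erroneous entry that pushes the score across a band boundary changes the rate the borrower is offered. The bands, scores, and rates are all assumptions made up for the example.

```python
# Hypothetical illustration (invented score bands and rates) of how a
# scoring error propagates into the terms a borrower is offered.

def annual_rate(score: int) -> float:
    """Map a credit score to an interest rate using invented bands."""
    if score >= 740:
        return 0.045
    if score >= 670:
        return 0.060
    if score >= 580:
        return 0.095
    return 0.140

correct_score = 690    # score with accurate report data (invented)
erroneous_score = 660  # same person, after one wrong entry in the report (invented)

print("rate with correct data:  ", annual_rate(correct_score))    # 0.06
print("rate with erroneous data:", annual_rate(erroneous_score))  # 0.095
```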

In addition, he emphasizes that these scores vary from one entity to another, which also affects who is granted credit. For this reason, data collection and scoring practices in general should be more open to external scrutiny.

Pasquale also asserts that a paradigm shift is necessary to make the web less dependent on advertisements and the personalization of ads, which he considers to have been the origin of privacy issues for users.

Starting from the premise of transparency, Kroll et al. observe that the accountability mechanisms used in these decision-making processes have not kept pace with technology. The tools currently available to legislators, courts, and policymakers were developed to oversee human decision-makers and often fail when applied to computers. How, for example, can one prove the intent, or malice, of a computer program? Additional measures are therefore needed to hold automated systems accountable for their potential inaccuracies and for unjustified or incorrect results.

Thus, the authors argue that making source code available is not always necessary (because computer science offers alternatives) or sufficient (because of the code’s complexity) to determine the fairness of a process. Moreover, full transparency may even be undesirable, as when it allows tax evaders or terrorists to manipulate the system and exploit its loopholes.
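Among the computer-science alternatives Kroll et al. point to are cryptographic techniques, such as committing to a decision policy in advance so that an auditor can later verify it was not changed after the fact, without the code ever being made public. The sketch below is a deliberately minimal, hypothetical illustration of that idea, not their full construction (which also relies on tools like zero-knowledge proofs).

```python
# Minimal, simplified sketch of one alternative to full source-code disclosure:
# committing to a decision policy ahead of time. The operator publishes only the
# digest; later, an auditor shown the policy file can check it matches what was
# committed, without the code being made public. (Illustrative only.)
import hashlib

def commit(policy_source: bytes) -> str:
    """Publish this digest before the system is put into use."""
    return hashlib.sha256(policy_source).hexdigest()

def verify(policy_source: bytes, published_digest: str) -> bool:
    """Later, check that the policy shown to an auditor is the committed one."""
    return hashlib.sha256(policy_source).hexdigest() == published_digest

policy = b"def decide(applicant): ..."   # stand-in for the real policy code
digest = commit(policy)
print(verify(policy, digest))                          # True: policy unchanged
print(verify(b"def decide(applicant): !!!", digest))   # False: policy was altered
```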

In summary, the central point should always be the protection of citizens and society as a whole, so that systems do not infringe fundamental rights. Technology is creating new opportunities, more subtle and flexible, to build decision-making algorithms that better align with legal and social goals. With greater collaboration between computer science, law, and public policy, it is possible to make such decisions less ambiguous and more transparent to the public.


References:

[1] O’NEIL, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown, 2016.

[2] PASQUALE, Frank. The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press, 2015.

Raquel Saraiva
