Keio University

[Special Feature: AI Society and Public Space] The Impact of Artificial Intelligence that Evaluates People on Human Relationships, and Its Ethical Implications

Published: February 5, 2019

Participant Profile

  • Minao Kukita

    Associate Professor, Graduate School of Informatics, Nagoya University. Specialization: Philosophy of Language, Philosophy of Technology


1. Introduction

In recent years, as artificial intelligence (AI) has been put into practical use in various situations, concerns about its social impact have been rising. Along with this, discussions regarding the ethics of artificial intelligence have become active both domestically and internationally, involving various stakeholders (1). Issues raised include the safety and controllability of AI, transparency, accountability, impact on inequality and fairness, human rights, and human dignity. In addition to these issues, this article focuses on the potential impact of AI use on human relationships and its ethical implications.

2. What is Artificial Intelligence?

To consider the ethical issues of artificial intelligence (i), it is necessary to characterize AI, even if only roughly. This, however, is a rather difficult task. John McCarthy and his colleagues, who founded the research field of "artificial intelligence," characterized the challenge of AI as "making a machine behave in ways that would be called intelligent if a human were so behaving." But what we consider "intelligent" is not always clear. Conversely, some behaviors that would not be called "intelligent" when a human performs them, such as recognizing faces or grasping and picking up objects, count as significant achievements in the field of AI.

Characterizing artificial intelligence based on the concept of "intelligence" runs into the extremely difficult question of "what is intelligence?" or "what does it mean to be intelligent?" Following Jerry Kaplan (2), we will avoid such questions and consider artificial intelligence simply as "the continuous progress of automation." When viewed this way, the ethical issues of AI can be understood as "new ethical problems arising from the automation of things that were not previously automated."

However, if we characterize AI in this way, the examples are too diverse to discuss collectively. Therefore, this article focuses specifically on "systems that automatically evaluate, judge, and classify humans."

3. AI that Evaluates People

One of the technologies driving the current third AI boom is machine learning, typified by deep learning. Machine learning typically finds subtle patterns in large amounts of data that humans cannot find, and uses them to identify, classify, or categorize its subjects. This technique has made it possible to automate identification tasks such as distinguishing images that contain cats from those that do not. Humans can of course tell cat images apart, but in some domains AI now surpasses human discrimination: in games such as Shogi and Go, for example, AI has come to evaluate board positions more accurately than professional players.
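As a toy illustration of this kind of pattern-finding, the sketch below fits a nearest-centroid classifier to synthetic two-dimensional "image features." All data and names here are invented for illustration; none of the systems discussed in this article work at this level of simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: two clusters of 2-D feature vectors,
# standing in for "cat" and "non-cat" image features.
cat_features = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(100, 2))
other_features = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))

# "Learning" here is just extracting the statistical pattern
# (the mean) of each class from the data.
cat_centroid = cat_features.mean(axis=0)
other_centroid = other_features.mean(axis=0)

def classify(x):
    """Label a feature vector by whichever class centroid is closer."""
    d_cat = np.linalg.norm(x - cat_centroid)
    d_other = np.linalg.norm(x - other_centroid)
    return "cat" if d_cat < d_other else "not cat"

print(classify(np.array([1.8, 2.1])))   # -> "cat"
print(classify(np.array([0.2, -0.3])))  # -> "not cat"
```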

However, where current machine learning exerts its greatest power (and brings the greatest benefit to those who use it) is in categorizing humans and predicting their behavior. Knowing with high accuracy what kinds of people have what needs, preferences, and behavioral tendencies is extremely important in business, and big data and AI have proven extremely useful for this purpose. With the spread of the internet and of mobile technologies such as smartphones, vast amounts of machine-readable data about virtually all of people's online activities, and increasingly their offline activities as well, are being acquired, recorded, and stored. With the development of machine learning techniques, it has become possible to extract people's needs, preferences, and behavioral patterns from that data. Consequently, giant IT companies possessing massive amounts of data can take the right action toward the right targets at the right time, and this has brought them enormous profits. For the first time in its half-century history, AI has become a technology that underpins major business success.

4. Problems with AI that Evaluates People

However, the evaluation and categorization of humans by AI is not applied only in marketing. AI is also used in corporate recruitment and performance evaluation, as well as in the justice system (police and courts). Those providing such services usually advertise that, unlike humans, AI is "unbiased," "not swayed by preferences," and "accurate." In reality, however, such systems have been shown to reflect the preferences and biases of the humans who created the algorithms, as well as biases in the data used for training (3).

The "Remote Risk Assessment (RRA)" system developed by AC Global Risk highlights various problems with human evaluation by AI (4). This system is said to determine whether a person is dangerous based solely on the tone of their voice during a ten-minute conversation over the phone (answering set questions in their native language), rather than the content. As an answer to President Trump's demand to "vett immigrants thoroughly," AC Global Risk advertised RRA as the "ultimate solution to the monumental refugee crisis currently facing the United States and other countries." While AC Global Risk has refused to answer questions from The Intercept regarding the software's details, experts who reviewed public materials have called it "bullshit" and "bogus." Björn Schuller, an authority on speech emotion recognition, told The Intercept, "Giving the impression that you can detect lies from voice alone with any degree of accuracy is ethically problematic. If someone advertises that they can do that, they themselves should be considered the risk." In US immigration inspections, speech patterns and appearance are used as pretexts for investigating or denying entry to people. Experts fear that RRA might "spread such biases as a routine and make them appear 'objective' at first glance."

Nevertheless, RRA and similar human evaluation algorithms are already in use across society, and in many cases they are neither as accurate as advertised nor free of bias. COMPAS, a system used in the US to predict the likelihood of recidivism and consulted during sentencing, was found to be no more accurate than the guesses of untrained laypeople and to exhibit biases similar to theirs (5). A recruitment evaluation AI that Amazon was secretly developing was found to undervalue women, and was scrapped because the development team could not fix the problem (6). Although such problems have come to light one after another, companies and governments remain enthusiastic about evaluating people with algorithms, because it offers a simple and efficient "solution" to complex and difficult problems.
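To see how such bias can arise mechanically, here is a minimal sketch, loosely analogous to the Amazon case, using entirely synthetic data. A model trained on historically biased hiring decisions learns to penalize a harmless feature that merely correlates with the disfavored group, even though the group label itself is never given to the model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2000

# Hypothetical applicants: one genuine skill feature and one proxy
# feature that merely correlates with a protected group (e.g., a
# telltale word appearing on a résumé).
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)             # 1 = disfavored group
proxy = group + rng.normal(scale=0.3, size=n)  # proxy leaks the group

# Historical hiring decisions: driven by skill, but with a built-in
# penalty against the disfavored group -- the bias in the data.
hired = (skill - 1.0 * group + rng.normal(scale=0.5, size=n)) > 0

# Train on skill and proxy only; the group label is never an input.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# The model recovers the historical bias through the proxy feature:
# its learned weight is clearly negative.
print(dict(zip(["skill", "proxy"], model.coef_[0].round(2))))
```

Removing the protected attribute from the inputs is therefore not enough: the bias re-enters through correlated proxy features, which is one reason such problems are hard to fix.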

5. The Other as Risk

Human evaluation algorithms classify and cluster people based on data, and then attach particular evaluations and labels to everyone belonging to a given group. On the basis of this inference, people judged to be high-risk are placed at a disadvantage: passed over in hiring, given longer prison sentences, or denied entry. For example, members of a group judged to contain a high percentage of violent individuals are deemed likely to be violent themselves and are excluded as a risk (Figure 1). The most extreme example is the "signature strike" used in the US "War on Terror." In countries such as Afghanistan and Pakistan, the US uses data on people's age, behavioral patterns, location, and networks of personal connections to estimate whether a person is a terrorist, and carries out drone strikes accordingly. Regardless of whether they actually have any intent or plan to attack the US, a person who exhibits enough of the characteristics common to terrorists is targeted as one (7).

At the root of this methodology, and spreading through society as it is practiced, is a view of others as bundles of data that machines such as computers and smartphones can process, and as nothing more than potential losses or gains to oneself. Machine learning systems detect, in the vast amounts of data overflowing on the web, the possibility that a person with some complex combination of attributes is somehow "risky." People so judged are often discarded wholesale, for the sake of efficiency and in the name of a spurious "objectivity." Whether each individual is truly dangerous is never scrutinized, because doing so would be inefficient; it is more efficient, and therefore "rational," to discard everyone labeled a risk. Here, others are treated not as flesh-and-blood individuals but merely as data points. If the attributes referenced in this process were ethnicity, gender, or religion, it would be criticized as discriminatory and unfair; yet human evaluation systems based on big data are currently generating new seeds of discrimination at a tremendous pace. Moreover, as the Amazon example in the previous section shows, automated evaluation systems often let old-fashioned discrimination slip into their judgments unnoticed.

Figure 1: Unfair Inference
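The inference pattern in Figure 1 can be stated in a few lines of code. In this purely hypothetical sketch (all numbers invented), a risk statistic computed over a group is projected onto every member, so individuals are penalized by association alone:

```python
# A hypothetical group in which 3 of 10 members have actually been
# violent; the other 7 have not.
group = [{"id": i, "violent": i < 3} for i in range(10)]

# Step 1: compute a statistic over the group as a whole.
group_risk = sum(p["violent"] for p in group) / len(group)  # 0.3

# Step 2 (the unfair inference): project the group statistic onto
# every individual and act on it.
RISK_THRESHOLD = 0.25
for person in group:
    person["excluded"] = group_risk > RISK_THRESHOLD

# Every member is excluded, including the 7 who were never violent;
# no individual is ever examined on their own.
print(sum(p["excluded"] and not p["violent"] for p in group))  # -> 7
```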

6. Technology as Media

There is an idea that technology is a "medium" or "interface" between humans and the world: humans perceive, recognize, and interpret the world through technology, and act upon the world through technology. In this sense, technology is part of our cognitive and behavioral capacities, and a change in technology therefore means a change in how we perceive the world and how we act upon it. Generally, technology enables us to know the world better and to make use of it more efficiently.

As ICT advances and technology acquires a high degree of autonomy, our relationship with the environment and with others is about to change significantly. Previously, to act better in the world, we needed to know more about the world and about others. With the development of ICT, and especially of AI and robotics, we will be able to deal with the environment and with others efficiently without knowing them in any detail. ICT then no longer brings us information; rather, it functions like a screen that keeps information from the outside world from reaching us. Our perception of the world and our actions upon it will come to depend ever more on technology, and as they do, the physical and psychological distance between us and the world, and between us and others, may expand indefinitely.

The ethical implications of this are significant, because psychological research has shown that psychological distance affects our moral judgments and actions. For example, as psychological distance increases, we become less tolerant of others and more inclined to think in terms of self-interest.

As mentioned in the previous section, as AI-based human evaluation systems come to be used in many social situations, we will increasingly perceive others as bundles of machine-readable data and think of them as sources of potential loss or gain to ourselves. But this severely limits the human relationships that might otherwise be built. As the "Prisoner's Dilemma" game shows, it is difficult to turn a relationship that starts from self-interest and suspicion of the other party into one of mutual trust and cooperation.
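A small simulation makes the point concrete. This is a minimal sketch using the standard Prisoner's Dilemma payoffs; both players follow the same conditional strategy (tit-for-tat) and differ only in whether their opening move is trusting or suspicious:

```python
# Payoffs for (my_move, their_move): C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(first_a, first_b, rounds=10):
    """Two tit-for-tat players who differ only in their opening move."""
    a_move, b_move = first_a, first_b
    a_score = b_score = 0
    for _ in range(rounds):
        a_score += PAYOFF[(a_move, b_move)]
        b_score += PAYOFF[(b_move, a_move)]
        # Each player copies the other's previous move (tit-for-tat).
        a_move, b_move = b_move, a_move
    return a_score, b_score

# Starting from mutual suspicion: defection locks in permanently.
print(play("D", "D"))  # -> (10, 10)

# Starting from mutual trust: cooperation sustains itself and pays more.
print(play("C", "C"))  # -> (30, 30)
```

The very same strategy yields permanent defection when it begins from suspicion and sustained cooperation when it begins from trust; the opening stance, not the strategy, decides which relationship emerges.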

But human relationships are open to much richer possibilities. Human character is not a fixed thing that a machine can measure objectively; it changes dynamically within human interactions. A person who is trusted makes an effort to live up to that trust: the act of trusting someone can itself be what makes them genuinely trustworthy. Affection can arise simply from meeting frequently. Cooperation grounded in trust and affection born this way brings mutual benefit, which in turn promotes further trust and affection. Such a virtuous cycle of good human relationships, however, is unlikely to be set in motion by an automated evaluation system based on data available on the web.

7. Conclusion

The thoughtless use of big data and AI to evaluate humans may encourage us to treat others as bundles of computer-processable data and to view them chiefly in terms of efficiency. Moreover, AI can reflect the biases of the humans and the society that developed it, entrenching and amplifying them. And by negatively labeling specific groups, big data and AI are creating new seeds of discrimination.

At present, AI is often used in ways that generate large profits by unfairly disadvantaging or exploiting the socially vulnerable. On the other hand, AI can also serve as a tool for making visible, and helping to rescue, socially vulnerable people who are suffering. When considering an application of AI, it is vital to ask for what purpose it was created, what side effects it brings, whom it benefits, and whom it tramples upon.

i) The term "artificial intelligence" is used to refer to technical products in some cases, and to the field of research and development of such products in others. This article uses the term in both senses.

(1) Yuko Murakami, "The Present State of Ethics in Artificial Intelligence: The Significance of Philosophy of Technology and Ethics in R&D," IEICE Fundamentals Review, 11(3), pp. 155-163, 2018.

(2) J. Kaplan, Artificial Intelligence: What Everyone Needs to Know, Oxford University Press, 2015.

(3) Cathy O'Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, translated by Naoko Kubo, Intershift, 2018.

(4) A. Kofman, "The Dangerous Junk Science of Vocal Risk Assessment," The Intercept, November 25, 2018.


(5) Hirokazu Anase, "'AI Judges' Were Not Fair at All! The Poor Reality of Artificial Intelligence Trials," Rui Net, August 12, 2018. http://www.rui.jp/ruinet.html?i=200&c=400&t=6&k=2&m=338047

(6) Jong-gi Ha, "Considering the Reason Why Amazon's Recruitment AI 'Discriminated Against Women'," Forbes Japan, October 16, 2018.


(7) Minao Kukita, "The Logic and Ethics of Remote Warfare," α-Synodos, Vol. 257+258, 2018.

*Affiliations and titles are as of the time of publication.