Participant Profiles
Mitsuo Wakameda
Chief Senior Manager, Digital Trust Promotion Division, NEC Corporation; Director, Data Trading Alliance. After graduating from the Faculty of Letters at Sophia University, he joined NEC. He launched the company-wide big data business in 2013 and the Digital Trust Promotion Division in 2018. He conducts joint research at the Keio University Global Research Institute (KGRI).
Fumiaki Kobayashi
Member of the House of Representatives; Deputy Director of the Liberal Democratic Party Youth Division; Secretary General of the Administrative Reform Promotion Headquarters. Served as Parliamentary Vice-Minister for Internal Affairs and Communications and Parliamentary Vice-Minister of the Cabinet Office in the third and fourth reshuffled Abe cabinets, focusing on radio waves, communications, information reform, and My Number policy. After working for NTT DOCOMO, he was first elected to the House of Representatives in 2012.
Hiromi Arai
Researcher, RIKEN Center for Advanced Intelligence Project. Withdrew from the doctoral program at the Graduate School of Interdisciplinary Science and Engineering, Tokyo Institute of Technology, after completing the required credits. Ph.D. in Science. Assumed current position after serving as an Assistant Professor at the Information Technology Center, The University of Tokyo. Also serves as a JST PRESTO Researcher. Specializes in privacy protection technology, data mining, etc.
Kenji Yasuoka
Professor, Department of Mechanical Engineering, Faculty of Science and Technology. Keio University alumnus. Completed the doctoral program in the Department of Applied Physics, Graduate School of Engineering, Nagoya University in 1997. Ph.D. in Engineering. Assumed current position in 2010. Deputy Director of the Keio University Global Research Institute (KGRI). Specializes in molecular dynamics and chemical physics.
Tatsuhiko Yamamoto (Moderator)
Professor, Graduate School of Law. Keio University alumnus (Faculty of Law, 1999; Ph.D. in Law, 2005). Assumed current position after serving as an Associate Professor at the Faculty of Law, Toin University of Yokohama. Deputy Director of the Keio University Global Research Institute (KGRI). Specializes in constitutional law. Author of "Osoroshii Big Data" (Terrifying Big Data) and editor of "AI to Kenpo" (AI and the Constitution).
AI Networking and the Current Situation in Japan
Today, we will discuss the relationship between a networked society incorporating artificial intelligence (AI) and public space with experts from various fields.
The definition of "public space" is a difficult problem in itself, but here, I am imagining a non-exclusive, inclusive space open for free communication. If so, the main points of this roundtable discussion are whether the progress of AI networking will bring about social "exclusion" or "inclusion," and in what direction Japan is trying to steer in this regard.
For example, China is currently deploying a surveillance camera network called "Skynet" (Tianwang) using AI facial recognition technology. It can immediately identify who ran a red light. While some say this has improved public safety, negative aspects have also been pointed out, such as causing a chilling effect on political criticism and the further loss of free and open communication.
In addition, "Sesame Credit" (Zhima Credit), a credit reporting agency under the Alibaba Group, uses AI to "rate" an individual's social creditworthiness on a scale of up to 950 points, based on electronic payment records, asset status, and social media friendships. This credit score is widely shared and used by both the public and private sectors. A high score is very meaningful: it can mean mortgages at low interest rates, renting a house without a deposit, or success in matchmaking.
However, what cannot be ignored is the life of those with low scores. Not only do they find it harder to get loans or face handicaps in job hunting, but their freedom of movement is also effectively restricted, such as being unable to buy plane tickets or having difficulty obtaining foreign visas. If you have a low score, you may receive discriminatory treatment and risk losing opportunities for social participation. Moreover, once a low score is assigned, one falls into a "negative spiral." This indicates the possibility that human rating by AI could cause an unprecedented class society and create an exclusionary space that is the exact opposite of a "public space."
In this roundtable, I would first like to discuss the current situation in Japan. The government is promoting AI networking through initiatives like "Society 5.0," but I get the impression that the negative impact on public space has not been discussed that much. Of course, the Ministry of Internal Affairs and Communications' "Draft AI Utilization Principles" and government discussions advocate for "human-centric" or "inclusive and diverse societies," which in themselves can be highly evaluated. However, my impression is that the specific discussions have not been fully fleshed out.
How does this situation compare to, for example, the EU or the United States? Dr. Arai, you often attend international conferences; what is your impression?
In technical conferences such as those on machine learning, I perceive that interest in issues surrounding these AI applications is very high. We often hold panel discussions by inviting people from various fields such as sociology and industry in a cross-disciplinary manner.
By comparison, I feel that there are not as many actions in Japan.
There are cross-disciplinary meetings in Japan as well, at least in form. Are you saying it's different from those?
For example, beyond symposia, the difference at international conferences shows in how actively these topics are pursued as research subjects, such as in the number of papers on them. In the United States especially, interest in "anti-discrimination" is high across the board.
I suppose that's because of the issue of racial discrimination.
That's right. Interest on the corporate side is also high. At last year's FAT* (ACM Conference on Fairness, Accountability, and Transparency), a conference that addresses fairness across fields such as machine learning and law, there was a report that facial recognition apps had low identification accuracy for Black women. In response, the companies involved reported that they had improved the accuracy. This is an example of companies responding to the actions of researchers.
Facial Matching and Its Risks
I see. Mr. Wakameda, what is your perspective from a corporate standpoint?
It is true that in Japan, sensitivity to human rights issues such as racial discrimination is not as high as in the West.
Microsoft recently published a proposal regarding facial recognition technology, stating that "governments should tighten regulations" due to fears of promoting racial discrimination and violating privacy. Subsequently, Google announced that it would stop providing general-purpose APIs (Application Programming Interfaces) for facial recognition until issues are resolved to avoid misuse. I feel there is a tendency for facial recognition technology to be highlighted a bit too much, though.
What is the background behind the attention focused on facial matching?
In the United States, the reason it is highlighted is likely the high sensitivity to racial discrimination against people of color, immigrants, and religious minorities. I think there is great concern about the use of facial recognition technology, especially by law enforcement agencies, because it allows specific individuals to be identified mechanically in public spaces.
In Europe, at a match in the UEFA Champions League, a world-famous soccer tournament, facial recognition technology was used in an initiative to find specific criminals in a crowd, and it actually produced results. This is a sophisticated use in which remote cameras photograph crowds of tens of thousands of people walking by and match them against a database, raising alerts based on the "probability of being the person."
It is only a probability, isn't it?
I don't know the details, but it seems to be an operation that does not identify automatically, but rather acts on the premise of human eyes and human judgment. However, regarding this case, a human rights organization expressed human rights concerns.
Because it is judged by probability, there is always a risk of misidentification.
It is a fact that no matter how high-precision a product is, 100% accuracy cannot be guaranteed under all environmental conditions. This must be correctly understood as a characteristic of facial recognition technology, and human rights must be taken into account, for example by designing operations that manage the risk.
There is another inherent technical constraint on the use of facial recognition with cameras: the system temporarily acquires face data (identification codes called face feature values) not only for the target persons to be matched but for everyone who enters the camera frame. Since matching against a database is based on these identification codes, it is important to consider human rights and privacy with this characteristic of the technology in mind, for example by incorporating a function that promptly deletes the data of persons who are not matching targets.
Even if the data is deleted promptly, the fact remains that face information of everyone entering the camera frame is acquired. We must not neglect the effort to explain these technical constraints and risks properly in advance and to build consensus on how they balance against the benefits obtained (for example, citizen safety).
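The matching flow described here, namely extracting feature values for every face in the frame, probabilistic matching against a watchlist, and prompt deletion of non-targets, might be sketched as follows. The cosine-similarity measure, the threshold value, and all names are illustrative assumptions, not any vendor's actual implementation.

```python
import math

# Illustrative threshold; real deployments tune this and keep a human
# operator in the loop for the final identification.
MATCH_THRESHOLD = 0.85

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def process_frame(face_features, watchlist):
    """Match every face feature vector in the frame against the watchlist.

    Returns alerts as (person_id, score) pairs, where the score is only a
    "probability of being the person". Feature data of non-targets is not
    stored anywhere: it simply goes out of scope, i.e. prompt deletion.
    """
    alerts = []
    for feat in face_features:
        best_id, best_score = None, -1.0
        for person_id, ref in watchlist.items():
            score = cosine_similarity(feat, ref)
            if score > best_score:
                best_id, best_score = person_id, score
        if best_score >= MATCH_THRESHOLD:
            alerts.append((best_id, best_score))
    return alerts
```

The point the sketch makes concrete is that everyone in the frame is processed, but only above-threshold matches produce any persistent record.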
The Need for Accountability
More than 20 years have passed since Dr. Ann Cavoukian proposed "Privacy by Design," but it can be said to be an action guideline that should be referred to now, when the value of trust is being emphasized once again. Furthermore, to conduct economic activities globally, it is necessary to consider the impact not only on privacy but also on human rights in general.
Recently, in the AI field, the term "Ethics by Design" (incorporating a certain sense of ethics into the design or design process of algorithms, etc.), which has a slightly broader scope than Privacy by Design, is also frequently used internationally.
It is true that in Japan there are few direct "anti-discrimination" actions regarding human rights, but because the media sometimes reports as if people were being tracked by AI or cameras, there is a risk of "flaming" (online backlash) even when the technology is not actually used that way. Vague anxiety leads to backlash, and there seem to be situations where companies hesitate somewhat excessively in response.
On the other hand, there are also scattered near-miss cases caused by a lack of understanding or awareness of points that should be considered for privacy.
By the way, Keidanren is also creating guidelines for becoming AI-Ready businesses as part of its "AI Utilization Strategy." For example, for human resource development, it points out the need for personnel who have knowledge of ethics and human rights, rather than simply increasing the number of data scientists.
Avoiding the risk of flaming sounds like a somewhat passive impression. Is the fact that companies are starting to consider the implementation of AI with consideration for "human rights" and "public interest" a way of keeping pace with international trends? Or are there more proactive reasons?
Indeed, saying we avoid the risk of flaming is a stance from a somewhat corporate perspective. NEC has designated "NEC Safer Cities," which involves ICT utilization in public spaces and smart cities—the theme of this session—as one of its growth strategies.
Naturally, visualizing various information in public spaces is an important element, and expectations for sensor data, represented by cameras, are high. As a characteristic of cameras, cases where it is difficult to obtain explicit consent from data subjects will increase. Therefore, it can be said that business itself will no longer be viable without a process of considering the most appropriate way of notification or disclosure according to the situation each time.
Also, companies and services that are excellent in terms of accountability and transparency should be chosen, and I hope that a system will be created where steady actions can be properly evaluated. At NEC, we recognize that the priority is high because it directly links to our business.
Between Technology and Utilization
I am a technical person, but no matter how good something is technically, in the end, how humans use it—which might be what ethics is about—is a balance I think is important.
From our perspective, we tend to first keep building up what can be done with technology. The talk about AI in particular has been technology-led, and it has progressed rapidly partly due to the evolution of GPUs (graphics processing units): breakthroughs in computers capable of massive parallel processing arrived all at once. How do we balance technology and humans, and how do we ultimately use it? Precisely because I am a technical person, I feel we must discuss this properly.
Are you saying that we must discuss public interest and ethics even at the development stage of technology? On the other hand, there is an argument that technology and utilization are separate. That technology is neutral, and the problem lies in how it is used. Previously, I think such a "technology/utilization" separation theory was strong.
It's true that until now, technology was technology, and we have worked with the idea of just creating something good. Of course, we consider costs and such, but I feel that ethical things have tended to be put on the back burner.
Basically, it has been that way for a long time, hasn't it? Traffic rules are created after cars are made. As civilization progresses, the necessary sense of ethics emerges, and the mechanism of law to practice that ethics is built up. I think this is the order.
I understand that the internet civilization has blossomed over the last 30 years. As technology advances, a completely different sense of ethics regarding information and privacy compared to before is being formed in each person. Therefore, the discussion is emerging that it is about time to create standardized rules internationally. I think this is, in a way, the orthodox order.
On the other hand, there is an argument that it would be too late. For example, nuclear power can be energy or a bomb. It is dual-use. To say something a bit idealistic, the problem of nuclear weapons can be considered a consequence of not seriously considering this duality at the technology and development stage. Couldn't there be an argument that it is important for technology and ethics to be nurtured simultaneously, rather than "technology first, then ethics"? Of course, "legal" regulations, which are different from ethics, should come later.
On the other hand, if you demand too much ethical consideration at the technical stage, it may hinder technological innovation, so there is naturally an argument that we should go "ethics-free." In fact, at the research stage, autonomy is constitutionally guaranteed under Article 23, "Academic Freedom."
However, I wonder. Yuval Noah Harari, author of the bestseller "Homo Deus," points out that AI and genetic engineering will cause an unprecedented transformation of society in human history, creating a super-hierarchical society divided into elites and a useless class. If AI has such an impact, isn't there at least some need to discuss the duality of AI even at the technical stage?
I think there will be various discussions at the stage where it becomes a product for actual use. Regarding research, since engineers are working toward some goal they want to optimize, it is conceivable to incorporate a sense of ethics into that goal. I believe that coordination with society is indispensable in determining what kind of ethical sense to incorporate.
For example, in classification such as passing or failing an entrance exam, if a rule is made to keep the influence of the examinee's gender within a certain range, it is possible to create a passing standard that selects the most desirable candidates for the company under that constraint. Additionally, there is research on making prediction models described by complex rules as explainable as possible. I don't think what to aim for is something to be decided by engineers alone.
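The entrance-exam example above, keeping the influence of the examinee's gender within a certain range while still selecting the most desirable candidates, can be sketched as a constrained selection. This is a toy sketch under the assumption that fairness is measured as a gap in acceptance rates between two groups; real fairness-aware machine learning uses more refined criteria.

```python
def select_with_parity(group_a_scores, group_b_scores, n_select, max_gap=0.1):
    """Pick the highest-scoring candidates subject to a demographic-parity
    constraint: the acceptance rates of the two groups may differ by at
    most `max_gap`. Returns (total score, taken from A, taken from B),
    or None if no split satisfies the constraint."""
    a = sorted(group_a_scores, reverse=True)
    b = sorted(group_b_scores, reverse=True)
    best = None
    for k_a in range(n_select + 1):
        k_b = n_select - k_a
        if k_a > len(a) or k_b > len(b):
            continue
        # Constraint: group membership may only sway outcomes within max_gap.
        if abs(k_a / len(a) - k_b / len(b)) > max_gap:
            continue
        total = sum(a[:k_a]) + sum(b[:k_b])
        if best is None or total > best[0]:
            best = (total, k_a, k_b)
    return best
```

Tightening `max_gap` can lower the total score of those selected, which is exactly the kind of trade-off that, as the speaker notes, engineers should not decide alone.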
So policy judgments are inevitable even in the design of AI. If so, it means that some kind of channel for dialogue with society is necessary, after all.
That's right. As I mentioned earlier, I think there are various activities in academic societies and also on the corporate side.
I was originally in the field of science and gradually moved closer to engineering. Right now, an AI utilization project has started at KGRI (Keio University Global Research Institute), which is a place for integrating humanities and sciences within the university.
We are now starting to provide a place for studying how researchers who, like me, are not AI specialists can use AI, and for students to learn it so that they can go out into society and play an active role using it. I hope it becomes a channel for dialogue, with everyone discussing the space between AI researchers and society.
AI is not neutral either, so "dialogue" is very important.
What is a Japanese-style Data Economic Zone?
When various things are born in a free world and spread to a certain extent, I think there is a timing when it is better to standardize them a bit. In the last few years, things utilizing AI and data have rapidly emerged worldwide, and I think we have entered such a timing.
Domestic discussion is important, but we must discuss it globally after all. Regarding data, there is an economic zone called GAFA (Google, Apple, Facebook, Amazon), an economic zone where the country of China has become a platform, and the EU economic zone. When Japan is asked what it will do, if we can properly propose a Japanese-style inclusive and highly reliable data economic zone starting from the individual to the world along with the utilization of AI and gain empathy, I think it will be a chance to break through the current Cold War not only for Japan but for the world.
Precisely because it hasn't been decided yet, I think it's very important to take it positively and for us to go and lead the discussion.
I would like to hear more details about your idea of a Japanese-style data economic zone.
First, in the world of GAFA, it is completely left to the private sector and is corporate-led. The feeling is that individuals go along with it because it becomes convenient, and it can't be helped to provide personal information in exchange. In the case of China, a certain kind of coerciveness of the state is at work.
In the case of Japan, the axis of judgment is placed on the individual, while everything is connected by APIs with the administration and companies, aiming to let individuals interact smartly and freely under their own judgment. I think this economic zone will be built on a sense of trust among the three parties, different from the previous two.
Institutionally, is it close to an information bank (information trust function) where you leave the operation and management of your information to a trusted third party?
I think there is also an information bank model, but what the government is currently discussing is a society where everything is exchanged smartly among the private sector, the administration, and individuals. For example, if we move, currently we have to go to Municipality A to withdraw our resident record and go to Municipality B to enter it. This will change so that if you go to Municipality B, it will be automatically withdrawn from A, and moreover, the power company will be properly notified, and the transfer destination will also change with the person's consent.
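The one-stop move described above can be sketched as a simple event flow: the resident files once with the destination municipality, which withdraws the record from the origin automatically and notifies third parties only with consent. All party names and the API shape here are hypothetical.

```python
class MovePortal:
    """Toy event flow for a one-stop change of address. The resident files
    once; downstream steps fan out from that single filing."""

    def __init__(self):
        self.log = []

    def file_move(self, resident, origin, destination, consented_parties):
        self.log.append(f"{destination}: register {resident}")
        # Withdrawal happens automatically; no second visit to the origin.
        self.log.append(f"{origin}: withdraw {resident}")
        # Third parties (e.g. the power company) are told only with consent.
        for party in consented_parties:
            self.log.append(f"notify {party}: {resident} moved to {destination}")
        return self.log
```

The design point is that consent gates the fan-out: parties the person has not consented to are simply never notified.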
Whether it's facial recognition technology or scoring by AI, it will not be accepted if it's used in a way where who you are is grasped or you are being scored without your knowledge. There is a high demand for services where the individual, not someone else, is the starting point—for example, "I want to prove my skills and experience" or "I want to receive services by face recognition"—and I believe personal data will inevitably be entrusted to such services.
Even with the same technology, the difference between whether I am the starting point or someone else is doing it to me is important. As a polar opposite to the Chinese data utilization model exemplified by Mr. Yamamoto at the beginning, "person-centric" might be accepted as our country's data utilization model.
The key to this business model is not long, defensive fine print hedging every risk, but a UI (user interface) that clearly conveys the purpose and risks of data utilization. That is exactly what human-centered design is.
I agree with that, but when proposing to the world, I think it's better not to dare call it Japanese-style. "Person-centric" is the guideline Japan should take, while China is "state-centric" and GAFA is "corporate-centric," right? So I think it can be organized by saying the starting points are different.
What is "Person-Centric"?
I am also involved in various government meetings, and in them, it is emphasized in various forms that it is "human-centric" AI utilization. But what this human-centric or "person-centric" specifically means has not yet been sufficiently boiled down.
It sounds right as a slogan, but what is it? For example, in the credit scoring I mentioned at the beginning, is it "human-centric" to evaluate that person "accurately" and to fairly evaluate that person's efforts? I say this because, in order to evaluate "accurately," it becomes necessary to seamlessly collect that person's behavioral records. That is ultimately a surveillance society, isn't it?
Then, is protecting that person's privacy "human-centric"? Or, by being "person-centric," does it mean that information the person does not want to disclose does not have to be given to the AI? In this way, if we emphasize the person's privacy and autonomy, holes will appear in the data, and the prediction accuracy of the AI will drop. Then, those who are good at "presentation" or "appearance" on the data might benefit, and people who have worked hard properly might lose out. In short, is it "human-centric" to seamlessly give information to AI and accurately grasp the person at the expense of privacy and autonomy, or is it "human-centric" to emphasize privacy and autonomy at the expense of accuracy?
At a recent OECD meeting, it was pointed out that privacy and fairness—that is, privacy and the accuracy of AI prediction—are actually in a trade-off relationship, and how to coordinate the two was discussed. Related to this, the question of whether AI should be made to read genetic information that the person cannot correct or change in order to increase the accuracy of prediction also becomes an issue.
Also, regarding the slogan "inclusive," looking at the situation in China, I am not without doubts as to whether it will really be so. I wonder if it might instead become an exclusive society. If Japan makes "person-centric" the goal, I think it will be important how to prepare safeguards to prevent that from happening.
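The privacy-versus-accuracy trade-off raised at the OECD meeting can be made concrete with a minimal sketch in the style of differential privacy, assuming Laplace noise added to a mean. The epsilon values and data below are illustrative only.

```python
import math
import random

def dp_mean(values, epsilon, value_range=1.0):
    """Return the mean of `values` with Laplace noise added, in the style
    of differential privacy. Smaller epsilon means stronger privacy and a
    noisier, less accurate answer: the trade-off discussed above."""
    sensitivity = value_range / len(values)  # effect of one person's data
    scale = sensitivity / epsilon
    u = random.random() - 0.5                # Laplace noise via inverse CDF
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return sum(values) / len(values) + noise
```

Running this repeatedly shows that a strict privacy budget (small epsilon) yields far larger average error than a loose one, which is the coordination problem the OECD discussion points at.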
Saying "holes will appear" is itself a corporate-centric or state-centric perspective, isn't it? If we go with person-centric, as Mr. Yamamoto has been saying in various places, handling personal information is about the person freely putting it in or taking it out while building a relationship of trust with the other party. In fact, even now, people associate socially while choosing what to share, saying things like "I graduated from such-and-such university" or "Actually, I had this failure."
In the end, I think it's about which side has the authority over that input and output. If this starting point is the "person," I don't think it will be a perspective of whether it's missing or not. From the person's perspective, it's just that what they put out is registered.
I see. It's the idea of PDS (Personal Data Store: a mechanism where individuals manage their own information and decide how it is utilized). Exactly, you provide what you want to be evaluated on, and you keep other things closed.
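The PDS mechanism just described, where the individual holds the data and decides per recipient what to disclose, might be sketched like this. The field names and recipients are hypothetical.

```python
class PersonalDataStore:
    """Toy PDS: the individual stores attributes and grants or revokes
    per-recipient, per-field disclosure. What is never granted is simply
    invisible to that recipient."""

    def __init__(self):
        self._data = {}
        self._grants = {}  # recipient -> set of disclosed field names

    def put(self, field, value):
        self._data[field] = value

    def grant(self, recipient, fields):
        self._grants.setdefault(recipient, set()).update(fields)

    def revoke(self, recipient, fields):
        self._grants.get(recipient, set()).difference_update(fields)

    def view_for(self, recipient):
        allowed = self._grants.get(recipient, set())
        return {f: v for f, v in self._data.items() if f in allowed}
```

In this model, "you provide what you want to be evaluated on" is just a `grant`, and keeping other things closed requires no action at all.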
Doesn't "person-centric" refer not only to the starting point of data exchange, but also to mechanisms from the consumer's perspective, such as a process where a company evaluates you, a buffer confirmation is inserted, and if you are satisfied with it, you receive the service?
Can hiring and credit (giving credit at financial institutions, etc.) also work with that? In other words, in hiring or credit, if it's a method where you only provide what you want to be seen, the prediction accuracy of the AI will surely drop, won't it? Like not wanting to disclose criminal record information.
I believe that credit and recruitment should be considered separately. First, regarding credit, since the individual wants to receive a service, it follows that they provide credit information to the company in exchange. Therefore, someone who wants to obtain a certain level of credit already provides information such as their actual current debts or family structure.
Also, regarding the topic of recruitment, there seems to be a misunderstanding that it becomes something special the moment AI gets involved. Humans also inevitably have biases due to the environment they grew up in and other factors. That is why we conduct recruitment interviews with multiple people. If you evaluate with a single AI, it will definitely show bias. However, I think that if you line up various AIs with different biases, it actually becomes the same as what humans do.
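The idea of lining up several AIs with different biases, analogous to interviewing with multiple humans, can be sketched as a simple majority vote. The interviewer criteria below are entirely hypothetical.

```python
def panel_decision(models, applicant):
    """Majority vote over several models, each with a different 'bias',
    analogous to having multiple human interviewers. Each model is a
    callable returning True (hire) or False (reject)."""
    votes = [model(applicant) for model in models]
    return sum(votes) > len(votes) / 2
```

No single model's leaning decides the outcome; a candidate strong on any two of the three hypothetical criteria passes, mirroring how a panel of interviewers dilutes individual bias.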
So the concrete form of being "human-centric" changes depending on the field. The last point is a frequently pointed out issue: what is the difference between traditional recruitment and recruitment using AI?
I think there are parts where the receptivity differs slightly between humans and AI. I also think there are cases where humans have simply been doing the same thing all along, and it has just been replaced by AI. For example, there might be a misunderstanding because of preconceived notions such as "AI is perfect."
Expectations are just too high.
In information processing systems referred to as AI, we can embed rules to be followed and evaluate prediction accuracy, but I suspect there is a fear or backlash against AI because it is misunderstood as something autonomous or superhuman. Also, I think the fact that testing methodologies for AI as a product are immature and difficult for the general public to understand is a challenge for the development side.
If the human side is clear about wanting to operate based on certain decision-making criteria, I believe we can design AI to match that. Therefore, I think it would be good if people from as many different fields as possible work on this with a common understanding.
The Use of Scoring and Accountability
There is technology that captures what a person looks at through eye movement—for example, identifying which books on a bookshelf they showed interest in—to infer that person's preferences. Knowing which books someone showed interest in at a bookstore might, depending on the case, reveal their thoughts or beliefs. A quick glance might reflect an inner self that even the person themselves isn't aware of, and depending on how it's used, it can be quite sensitive. What if that were linked to your ID?
However, if it's technology that captures the eye movements of a train driver to see where they are focusing while driving, it leads to the resolution of social issues, such as visualizing skills that should be passed on or deterring accidents caused by inattention. In other words, doesn't it depend on the requirements definition based on the purpose and ethics?
Scoring is the same; for example, if we score driving ability and can see how it differs from when someone was younger, or how it differs between yesterday and today, we can provide driving assistance accordingly.
Rather than a simple judgment like "We'll take away your license if you get dementia," wouldn't it be great if the system accurately tracked a person's driving history over a long period and judged, "You seem tired today, so let me compensate for this part"?
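The personalized driving assistance just described, comparing today's performance with the person's own long-term history rather than a categorical cutoff, might look like this in outline. The thresholds and score units are illustrative assumptions.

```python
def driving_assist_level(todays_score, history):
    """Compare today's driving score against the driver's own long-term
    baseline (mean of past scores) instead of a categorical cutoff such
    as age or diagnosis."""
    baseline = sum(history) / len(history)
    drop = baseline - todays_score
    if drop > 20:
        return "compensate"  # large dip: actively assist ("you seem tired today")
    if drop > 10:
        return "warn"        # moderate dip: alert the driver
    return "normal"          # within this person's usual range
```

The same score triggers different responses for different drivers, because the reference point is the individual's own history.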
I see. That might be exactly what a "human-centric" use looks like. Instead of excluding people categorically and uniformly based on age or diagnosis, it predicts the specific characteristics and tendencies of that individual to take a personalized response. This seems like an implementation of AI that contributes to the "respect for the individual" mentioned in the Constitution. Of course, for that purpose, it is necessary to keep that individual's data for a long period, which may be a trade-off with privacy. However, if measures are taken such as explaining the scope of collection or preventing use for other purposes, that can be suppressed to some extent.
In corporate recruitment as well, people who were previously excluded categorically due to human bias might actually be included by using AI to diversify inputs. Regarding people with disabilities, traditional human recruitment might make hiring difficult because stereotypical images bring that element to the forefront, but with AI, the element of disability can be objectified and relativized.
I suppose the challenge is accountability. There will still be people who are not hired, and how can we explain it to them? Even in the era when humans did the hiring, the reasons for rejection were basically not explained, but because the input information used for evaluation was limited, it was a world where it was "understood without being said."
However, as we move to AI and input information becomes diversified, it will become unclear which of one's actions led to the rejection. In one company's recruitment app, they even collect the finger movements of applicants when they answer questions. In such a case, the meaning of "not explaining" might change significantly from the past. In the case of recruitment using AI, the input information and algorithms become a black box, so those who are rejected are left at a loss, not knowing the reason. It is even possible for them to be fixed in the lower strata of society without the opportunity to climb back up. This is the so-called problem of "virtual slums."
When we talk about realizing an inclusive society through AI, I think a certain level of accountability is necessary. How is that looking from a technical standpoint?
For example, there is research on how to explain deep learning or complex models, but one of the challenges there is that what humans can understand when receiving an explanation is limited, so information has to be omitted.
This can create a gap with what is actually running, or conversely, it can lower the accuracy of the model. If that happens, for a for-profit company, for instance, the question is whether they can tolerate a drop in accuracy, or whether an explanation based on a model with reduced accuracy is correct.
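The gap described here, between what is actually running and what a simplified explanation says, can be illustrated with a toy surrogate model. Everything below is an illustrative stand-in (an invented nonlinear "complex model" and a single-threshold "explanation"), not any production hiring system:

```python
import random

random.seed(0)

def complex_model(x):
    # Stand-in for an opaque model: a nonlinear decision rule
    # that no single-feature threshold can reproduce exactly.
    return 1 if (x[0] * x[1] > 0.25 or x[0] > 0.9) else 0

def surrogate(x, threshold=0.5):
    # Human-readable "explanation": accepted if feature 0 exceeds 0.5.
    return 1 if x[0] > threshold else 0

# Random two-feature inputs standing in for applicant data.
data = [(random.random(), random.random()) for _ in range(10_000)]

# Fidelity: how often the simple explanation matches the real model.
agree = sum(complex_model(x) == surrogate(x) for x in data)
fidelity = agree / len(data)
print(f"surrogate agrees with the model on {fidelity:.0%} of cases")
```

Because the surrogate disagrees with the real model on some fraction of cases, any explanation it supports ("you were rejected because feature 0 was below 0.5") is only approximately true; that residual disagreement is exactly the gap between the explanation given and the system actually running.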
Also, I think explanations can be provided from various perspectives, but the question is whether humans will accept them. Even if explained, it is possible that it contradicts the recipient's knowledge or intentions. In that case, can they accept or utilize the result? I think that is quite a difficult point.
The world of politics also operates while balancing emotion and logic—precisely "heart" and "reason." Whether you can get people to say, "Since that person says so, let's give it a try," is where a politician's skill is tested. When conflict occurs, can you get them to trust you and reach a compromise? I think the sense of conviction in finally persuading or encouraging someone is something that is difficult for anyone but a human to provide.
So, returning to whether humans can accept AI's judgments: AI can present highly accurate analysis results based on big data such as images, but those results alone can be hard to accept or be convinced by. I think the way to handle the relationship between heart and reason, and between technology and humans, is for an expert to do the explaining: a doctor in a medical setting, for example, or an HR representative ultimately explaining the interview results.
I see. The EU's GDPR (General Data Protection Regulation) also states that when hiring or loan decisions are made solely by AI judgment, the right to human intervention and the right to receive an explanation regarding the significant parts of the judgment must be guaranteed.
To prevent an exclusionary society caused by AI, how Japan incorporates this EU "right to an explanation" seems like it will be an important point.
Is "AI Better"!?
Since the amount of information that needs to be handled is increasing, I believe there are situations where using information technology is inevitable. In areas like medical diagnostic imaging, data is increasing so much that the need for support on the ground through automatic data processing has been pointed out.
My view is that it is a good thing to introduce AI that mimics professionals as diagnostic support. However, I believe it is the doctor's job to properly bring that information to the patient.
I was struck by the point that in the world of politics, the skill lies in the sense of trust and conviction—the "Since that person says so, it can't be helped" aspect.
Similarly, as AI permeates society, I want to aim for a goal where the sense of trust and conviction in the digital society—such as "Since it's an AI service provided by NEC, it must be fine"—becomes a differentiating factor as a result.
So fostering that kind of trust will become a form of corporate value.
Just the other day, we held a symposium featuring the manga artist Mayumi Kurata, and she said she was an "AI-underprivileged person." When I asked her to speak about utopia and dystopia regarding AI without any preconditions, she commented, "There are all kinds of doctors, but I don't like it when diagnosis results vary depending on whether they are a great doctor or not. I have expectations for AI that won't miss even the smallest lesion and will provide the same appropriate diagnosis for everyone."
Also, regarding the use of AI in corporate recruitment and such, she said, "If it unearths possibilities that the person themselves hadn't even thought of, wouldn't that be a very good thing?"
I also recently asked my seminar students whether they would want to be hired by AI or by a human when they go job hunting, and it was split exactly half and half.
In the future, there could be a worldview where being judged by AI is actually more trustworthy. That AI sees you more correctly than a human does.
But if everyone uses the same algorithm, it's scary because it seems like every company will end up with the same stereotypical people.
I think that's what companies will end up thinking about. I also served as a recruitment manager during my time at Docomo, and we would shift our goals—this kind of talent last year, that kind of talent this year—and change the interviewers as well. If we didn't, bias would creep in.
That's right. If we were to do that with AI, I think it could be done by designing it with instructions like, "Set diversity to this level."
If AI judgments can work reasonably well by setting diversity parameters, then wouldn't the conversation shift toward replacing decision-making in politics or trials with AI as well? In that case, what is the meaning of "human-centric"?
Regarding trusting AI, humans are actually quite lazy, and I think there's a stance where if things are handled well for them, they'll make decisions without checking the explanations in detail.
When obtaining consent, even if you show people a privacy policy or information about how their data will be used, they may just say such topics are too difficult to understand. People listen readily to what is convenient for them but tune out anything troublesome. Human decision-making is quite haphazard and ambiguous. How should we deal with this fuzzy side of human decision-making?
One could say it is also human to think, "AI is more correct, and leaving it to AI eliminates troublesome things, allowing me to live more comfortably." That's not the Arendtian "human" with a public spirit, but it's still a "human." Respecting others, debating, thinking, going to vote, and maintaining democracy is troublesome.
In that case, an automated AI society like China's might also be called "human-centric."
I referred to these last 30 years as the Internet civilization, and the basic principle within Internet civilization is, after all, freedom. We want to be free.
However, I think it's human nature to want to avoid making decisions that one doesn't have to make. Yet, people want to make the decisions they truly want to make for themselves. That will surely remain, so I think that's fine.
Will that really remain? We "should" do it, but will we really "do" it?
In the early days of the Internet, there was certainly an atmosphere that things had become convenient with the creation of message boards and social networks, but after that, divisions on the net were born and negative aspects began to appear.
So, now that many problems have become visible, I hope that by skillfully building in problem-solving mechanisms that differ from the original design philosophy, we can continue to enjoy that freedom.
How to Design a Free Public Space
Dr. Arai just mentioned "design." To maintain freedom, might some kind of design become necessary?
Until now, freedom has probably been centered on a laissez-faire, negative concept of freedom—that is, excluding interference from others, whether through government regulation or technical regulation. But to maintain a democratic, open society and positive freedom, elements of design-oriented intervention might become important.
If the path Japan should aim for is not the Chinese one, what kind of "design" or "mechanisms" will be necessary?
Before getting into the technical discussion: while technologies like the Internet and AI bring convenience, the new challenges they create are, from the perspective of sustainability, a new kind of social responsibility for companies. Looking at the past, the invention of the automobile brought social issues like pollution and traffic accidents, but taking early action against expected risks, such as developing hybrid cars, is instead evaluated as a social contribution and can even be launched as a new business.
Translating this to the digital age, if you invest in technologies that affect privacy, such as inferring a person's inner thoughts or identifying individuals through AI, you can gain the trust of consumers and opportunities for new business by also investing in privacy-protecting technologies. In other words, it would mean designing services where individuals, companies, and society are all happy.
To give a simple example, if there is technology that finds and identifies people with high precision, but the purpose is to sense and analyze the state of products on supermarket shelves, you could design it so that if it identifies a person, it immediately deletes that data—making it, so to speak, a sensor that doesn't recognize people and only analyzes the state of the shelves.
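That shelf-sensor design can be sketched in a few lines. The detection flags and the shelf statistic below are hypothetical stubs standing in for a real vision pipeline; the point is only where in the flow identifying data gets deleted:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """One camera frame: raw pixels plus annotations (all illustrative)."""
    pixels: bytes            # raw image data
    contains_person: bool    # stub for a person-detection result
    shelf_fill_ratio: float  # stub for shelf-state analysis

def analyze_shelf(frame: Frame) -> dict:
    """Privacy-by-design pipeline: keep only aggregate shelf statistics.

    If a person appears in the frame, the raw image is wiped at the
    edge, before anything is stored or transmitted.
    """
    result = {"shelf_fill_ratio": frame.shelf_fill_ratio}
    if frame.contains_person:
        # Delete identifying data immediately; it never leaves the sensor.
        frame.pixels = b""
    return result

# Example: a frame that happens to include a shopper.
frame = Frame(pixels=b"\x00" * 1024, contains_person=True, shelf_fill_ratio=0.4)
stats = analyze_shelf(frame)
print(stats)              # only the aggregate shelf data survives
print(len(frame.pixels))  # raw image has been wiped
```

The design choice is that deletion happens inside the analysis step itself, so downstream storage and analytics can only ever see shelf statistics; in effect, the device behaves as a sensor that does not recognize people.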
That is exactly the practice of the Privacy by Design concept. Furthermore, an approach called "Human Rights by Design," which looks at human rights as a whole and not just privacy, is being proposed. I especially want management levels to understand and practice this.
Ideally, companies that make such forward-looking investments in technology and service development, rather than just focusing on immediate economic value, will be evaluated internationally.
I completely agree. When Japan claims to be "human-centric" in a way that is different from China, the US, or Europe, I think it's important to proactively promote the idea that "this technology will realize this kind of inclusive society," rather than just how to avoid the risk of public backlash.
Is there technology to realize a society that is more inclusive and ensures more diversity than ever before?
Technically, the feasibility is sufficient, but whether society will accept it is also important.
Does that mean companies won't use it after all?
I think that's a possibility. Companies will be in trouble if things don't sell, so they have to listen to user demands.
AI can also adjust decision-making, but how to live with that is a problem on the human side. I think it would be good if the discussion regarding the design of AI judgments could serve as a catalyst to further develop the response on the side of society.
So ultimately it depends on the maturity of society. Changing the criteria for social evaluation of companies will also be important in the future.
However, if society hasn't caught up to that point at this stage, I think the nature of legal control will also be an issue. Mr. Kobayashi, what are your thoughts on that?
Before deciding what to do with AI, we need to decide what to do with Japan's future.
Japan is currently facing three major changes. First is population decline. Second is the arrival of the 100-year life era. Third is the overwhelming progress of technology. Since these are unavoidable changes, I think it's very important how we accept them and turn them into opportunities.
Based on this premise, regarding the use of AI, which is a symbol of technological progress, Japan actually has a great opportunity.
There is often talk that "AI will steal jobs." For countries like China, India, and the US, with their large young populations, unemployment caused by AI carries a real risk of social unrest. Japan, with its declining population, does not share that worry; rather, promoting AI's introduction is a social necessity.
However, as the word "okami" (the authorities) still exists in Japan, there is a psychological difficulty in moving unless the administration sets the rules.
That is why, regarding the nature of domestic regulation, it is desirable for the political and administrative side to quickly establish rules with an eye toward the utilization of technology. Without waiting for legislation, we should take action as quickly as possible using soft methods like guidelines and directives.
At the same time, to prevent the world from being dominated by a data economy where only China and GAFA have the advantage, we need to make "human-centric" data utilization rules the global standard. To do that, we need to involve global multi-stakeholders, which is very difficult, but because there are no international standards now, it is an opportunity.
A Forum for Discussion on "Human-Centricity"
I see. This university has organizations like KGRI that are responsible for promoting the integration of humanities and sciences and globalization. Starting from such organizations, there may be areas where the university can help in creating global indicators.
I think AI is probably unfolding faster than anyone expects right now. Even at KGRI, where we are trying to tackle this, what matters is creating a place within the university where people working on all sorts of things can gather, discuss from many angles how to be useful for the next round of research and for society, and present their ideas and hold dialogues. The university is a place where free and active discussion is allowed.
As a platform, or a provider of a forum, the university is a place where people can speak freely and on an equal footing, and where academic backing can be provided. So I think it is very important for working people to come back to the university and have all kinds of discussions.
However, the world won't change just by discussing. We need to review the problems at our feet and produce concrete results.
For example, Japan has 1,718 municipalities, and they hold a great deal of important data. But 1,718 different information systems are running, and the paperwork for administrative procedures comes in 1,718 different formats. Even if we want to utilize the data, it is a very difficult situation. On top of that, for handling that data there are, in addition to the Act on the Protection of Personal Information, more than 1,718 separate personal information protection ordinances.
I believe that by solving this situation quickly and changing the scenery in front of us, everyone's awareness will change and their actions will start to change, thinking, "Something about our world has changed; how can we make more use of this?" That is what will truly lead to Japan moving forward.
The "Act on the Next-Generation Medical Infrastructure" was enacted with the aim of turning medical information into big data, collecting and linking it, and making it useful for medical sciences research. However, if the collection and storage systems and file formats differ at each medical institution, advanced information linkage becomes difficult. This is similar to the problem Mr. Kobayashi pointed out; standardization of systems is urgently needed. However, this area clashes with competition between vendors. In Japan, the relationship between the layer that standardizes and the layer that competes has not yet been sufficiently organized.
I also completely agree with the "human-centric" and "human-centered" ideas, but I think more attention needs to be paid to the fact that several trade-offs will emerge. The current relationship between standardization and competition is one such example. Under China's "digital Leninism," standardization is powerfully promoted under government leadership, so data accumulates at a tremendous pace. That won't happen in Japan.
Besides that, as I have mentioned, privacy, AI prediction accuracy (correctness), transparency, and efficiency should all stand in trade-off relationships. I think it's necessary to bring "human-centricity" down to concrete discussions and precisely debate these trade-off relationships.
For example, to increase AI prediction accuracy, one must be prepared to discard privacy to some extent. I feel that this kind of realistic discussion of weighing values does not yet exist in Japan.
I think there are also stories of users providing personal data for free services or coupons. While the disclosure of information about oneself is the individual's freedom, they might not understand how the data will be used as a result, or what kind of trade-offs there are with privacy violations.
However, I don't think it's necessarily only bad things, and it might not all be a trade-off. Regarding scoring, there could be services aimed at a "plus-sum" outcome, where people who couldn't raise funds based on traditional financial information alone can gain credit and new opportunities based on non-financial information like life logs, or discover potential they hadn't even noticed themselves.
And as our country experiences a super-aging society ahead of the rest of the world—symbolized by the phrase "the era of the 100-year life"—there is no doubt that "human-centric" personal data utilization will become extremely important.
If we are to be "human-centered" as advocated in Society 5.0, then in addition to deliberations by the government and corporations, we need more and more active participation from citizens.
To Realize a Fair Society
Creating something like a basic law to some extent is the best way to stimulate national debate. Right now, we are at the stage of publishing principles for AI utilization, and there is no movement toward legislation yet, is there?
I think it's a matter of sequence: first, utilization progresses, and once an image of a new society becomes visible to everyone, the necessity of establishing rules is discussed. I feel that we haven't shared that vision of society yet. In that regard, Yukichi Fukuzawa is a wonderful example; in "Things Western (Seiyō Jijō)," he used illustrations to represent the cutting-edge technologies of the time as "steam," "medicine," "electricity," and "telegraphy," sharing a vision of the future society with the Japanese people.
I believe in technology and think it is the best tool for realizing a fair society. Thanks to technology, histories are "visualized," efforts are evaluated, and people can participate fairly in society regardless of disabilities, difficulties, or where they live.
For example, there are 1.2 million Japanese nationals living abroad. The voter turnout for these people in elections is a mere 2%. This is because it is very difficult to travel to the Japanese embassies or consulates in each country. We are currently working on legal reforms to utilize technology so that online voting can be possible as early as the House of Councillors election four years from now.
I think most people probably don't really understand how they can use the technology.
That may be true. What past political administrations must reflect on is that they were not "human-centric."
We thought we were communicating by saying, "Citizens, we have made rules. Here you go," but originally, if we could have communicated the background and the vision of the society we are aiming for—saying, "Actually, this rule was made so that your life becomes like this"—it would be easier for people to have an image of their own way of life in the AI era.
I believe AI is a technology with the potential to truly enable the realization of a fair and inclusive society, depending on how it is used. I suppose the challenge is how to specifically demonstrate and bring out that potential without turning away from the risks.
While Japanese society has long advocated "respect for the individual" and "equality" in the Constitution, in reality it has not fully realized these ideals. With the use of AI, under a new era name, they might finally be achieved. This should be acknowledged honestly. However, unless we broadly discuss the specific, realistic direction in a way that integrates the humanities and sciences, Japanese society could turn into the exact opposite: an exclusionary surveillance society of pre-established harmony.
I would be happy if this roundtable discussion serves as an opportunity for readers to realize that we are currently at that turning point.
Thank you all for the lively discussion today.
(Recorded on December 17, 2018)
*Affiliations and titles are as of the time this magazine was published.