Writer Profile

Kyoko Yoshinaga
Project Associate Professor, Graduate School of Media and Governance; Non-resident Fellow, Georgetown University Institute for Technology Law & Policy
The World's First Comprehensive AI Regulation Law Enacted
The "AI Act" (commonly known as the AI Act), the world's first comprehensive and direct regulation of AI, has been enacted in the EU (published in the Official Journal on July 12, 2024, and will come into force sequentially starting August 1, 20 days later). In addition to being enacted to address AI risks, it is also intended to facilitate the distribution of AI within the market by unifying the markets of the 27 member states. Furthermore, according to the EU, regulating AI in a unified and comprehensive manner will provide legal certainty.
The law regulates AI according to risk. Specifically, it covers (1) prohibited AI practices, (2) high-risk AI systems, (3) specific AI systems, and (4) general-purpose AI models (which were not anticipated at the initial drafting stage). Prohibited AI practices include, for example:
- subliminal techniques (techniques that covertly act on the subconscious) or manipulative techniques;
- exploiting vulnerabilities such as age or disability to adversely affect a person's behavior or decision-making;
- so-called "social scoring," which evaluates and classifies individuals or groups based on social behavior or personal characteristics in ways that cause harm or unfair treatment;
- crime prediction based solely on profiling;
- creating facial recognition databases by scraping images from the internet;
- emotion recognition in workplaces or educational institutions, except for medical or safety reasons;
- biometric categorization systems that infer sensitive personal information, except where legally permitted; and
- real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, with some exceptions.
The majority of the provisions regulate high-risk AI that could pose significant risks to human health, safety, or fundamental rights, requiring, among other things, the establishment of risk management systems and transparency measures such as information disclosure to stakeholders. Furthermore, providers of high-risk AI systems must undergo a conformity assessment before placing them on the market or putting them into service. For specific AI systems, providers and deployers (businesses that put AI to use) are subject to lighter transparency requirements: for example, they must inform end users that they are interacting with AI (such as chatbots) or that content is AI-generated (such as deepfakes). (General-purpose AI will be discussed later.) Currently, most AI in the EU single market is minimal-risk AI (such as AI-powered video games or spam filters), which carries no specific legal obligations. Note that the law does not apply to AI used for military or research purposes.
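The tiered structure described above can be illustrated with a toy sketch. This is not legal advice, and the use-case labels and keyword sets below are hypothetical illustrations for clarity, not the Act's actual legal tests or definitions:

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, in descending order of restriction."""
    PROHIBITED = "prohibited practice (banned outright)"
    HIGH = "high-risk (risk management, conformity assessment)"
    LIMITED = "specific transparency obligations (e.g., disclose AI use)"
    MINIMAL = "no specific obligations"

# Hypothetical example use cases per tier (illustrative only; the Act
# defines each tier with detailed legal criteria, not keyword lists).
PROHIBITED_USES = {"social scoring", "subliminal manipulation"}
HIGH_RISK_USES = {"hiring screening", "credit scoring"}
LIMITED_RISK_USES = {"chatbot", "deepfake generator"}

def classify(use_case: str) -> RiskTier:
    """Map a use-case label to a tier; anything unlisted is minimal risk."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if use_case in LIMITED_RISK_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

if __name__ == "__main__":
    for case in ("social scoring", "chatbot", "spam filter"):
        print(f"{case}: {classify(case).value}")
```

The point of the sketch is the regulatory design: obligations attach to the tier, not to "AI" as such, so a spam filter and a hiring tool built on similar technology face very different duties.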
Even companies without a base in the EU are covered if they offer services into the EU or if the output of their high-risk AI systems is used in the EU, so Japanese companies falling into these categories will also be affected. In this respect, an effect similar to the "Brussels Effect," whereby the GDPR (the EU's General Data Protection Regulation) influenced the rest of the world, is anticipated. Businesses that violate the law face heavy fines.
The Beginning of the AI Regulation Debate
The global debate on AI regulation became active around 2016. In March of that year came the shocking news that a Go-playing AI built on "deep learning" had defeated a human champion. While deep learning opens up many possibilities, it also presents the so-called "black box" problem: why a given result was produced can exceed human understanding. Depending on how AI models are built and used, biases can be amplified, producing discriminatory results that favor only certain groups, or people can be manipulated without their awareness. As it began to be pointed out that this could harm society as a whole, discussions on AI regulation started.
Japan has been a global pioneer in proposing principles for AI research and development, contributing to the debate on AI regulation. In 2016, Japan held the G7 presidency, and at the "G7 ICT Ministers' Meeting" in April of that year, Japan proposed eight principles for AI research and development. This triggered international discussions on AI principles, leading to the agreement on the OECD AI Principles and the G20 AI Principles in 2019. The following year, the "Global Partnership on Artificial Intelligence" (GPAI), an international organization to discuss the implementation of the OECD AI Principles, was established. As an expert member, I am also involved daily in research and surveys of practices that serve as references for governments, companies, and organizations.
When Japan again held the G7 presidency in 2023, G7 ministers took the lead in discussing the creation of international rules for the development and use of AI, and a new framework called the "Hiroshima AI Process" was launched based on the outcomes of the summit held in Hiroshima. OpenAI had released ChatGPT at the end of November the previous year, and the risks posed by generative AI had become apparent, so it was decided to include countermeasures in the discussions. Through a "multi-stakeholder process" emphasized by Japan, which broadly sought opinions from various stakeholders (countries outside the G7, the public and private sectors, academia, and civil society), the Hiroshima AI Process produced international guiding principles and a code of conduct for AI developers, released on October 30, 2023, in addition to the G7 Leaders' Statement.
AI Regulation in the United States
In the United States, the White House Office of Science and Technology Policy (OSTP) announced five principles for AI in October 2022 under the title "Blueprint for an AI Bill of Rights." Since "Bill of Rights" refers to the human rights protection provisions in the U.S. Constitution, I think it was a very clever naming choice that evokes that connection. Subsequently, in July 2023, the Biden administration gathered seven leading AI development companies, including Google, OpenAI, and Anthropic, to have them commit to safe, secure, and trustworthy AI development as a voluntary initiative (Voluntary AI Commitments). Two months later, eight more companies, including Adobe, IBM, and NVIDIA, also joined. The administration is actively working on policies for AI ethics and responsible AI.
Additionally, as guidance for a risk management framework that companies can refer to, the National Institute of Standards and Technology (NIST) released the "AI Risk Management Framework 1.0" in January 2023. Furthermore, on October 30, 2023, immediately after the G7 Code of Conduct was released, President Biden issued the "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." It is often misunderstood that this Executive Order signaled a shift in the U.S. from a soft law approach (non-legally binding guidelines, etc.) to a hard law approach (legally binding laws), but that is not the case (I confirmed this point by visiting Washington, D.C. at the end of June to speak with government officials and think tank researchers). This Executive Order is an order from the President, as the head of the executive branch, to federal government officials and administrative agencies, and it does not require specific actions from companies. The specific content of the guidance is left to each government agency.
At the local government level, there are examples of enacted laws regulating AI. New York City, for instance, passed a law requiring employers to notify candidates when AI is used in hiring, but it is said not to be functioning very well.
At the federal level, the United States still has no comprehensive personal data protection law; for years, bills have been introduced only to fade away. In fact, it is said that the absence of such a law is one reason AI companies have been able to push ahead with research and development. On AI as well, members of Congress are actively introducing bills, particularly concerning accountability, but there is no prospect of their enactment.
AI Regulation in Japan
In Japan, as mentioned earlier, the Ministry of Internal Affairs and Communications (MIC) released the "AI R&D Guidelines for International Discussions" in July 2017, consisting of nine principles (the eight AI R&D principles contributed to the OECD, plus an added "Principle of Collaboration"). Subsequently, from the perspective of utilization, the "AI Utilization Guidelines" were released in August 2019. Around the same time, in March 2019, the Cabinet Office issued the "Social Principles of Human-Centric AI" (a decision of the Integrated Innovation Strategy Promotion Council). Furthermore, in January 2022, the Ministry of Economy, Trade and Industry (METI) released the "Governance Guidelines for Implementation of AI Principles Ver. 1.1," which set out action goals for AI businesses seeking to put the Social Principles into practice and presented hypothetical practical examples.
Furthermore, based on the "Social Principles of Human-Centric AI," the "AI Guidelines for Business" were released on April 19, 2024, integrating MIC's AI R&D Guidelines and AI Utilization Guidelines with METI's "Governance Guidelines for Implementation of AI Principles," while taking account of newly emerged technologies. I was involved in formulating these guidelines as a member of METI's "Study Group on AI Guidelines for Business," and it is commendable that MIC and METI joined forces to consolidate them into a single set of guidelines. Having worked with government agencies for many years as a think-tank researcher, I know it is very rare for multiple ministries to collaborate on issuing guidelines, and I hope such inter-ministerial cooperation will continue.
Thus, while Japan currently adopts a non-legally binding soft law approach as a framework for comprehensively regulating AI, it is responding in individual fields by amending existing laws in line with the progress of AI (for example, the amended Financial Instruments and Exchange Act and the Act on Improving Transparency and Fairness of Specified Digital Platforms). (For details, see Naohiro Furukawa and Kyoko Yoshinaga, "Responsible AI and Rules" (Kinzai Institute for Financial Affairs, May 2024)).
Trends in Other Foreign Countries
Looking abroad, the UK, like Japan, takes a sector-by-sector approach centered on self-regulation. Israel, home to many AI startups, likewise takes the position that many issues can be handled under existing law; where intervention by the relevant authorities is necessary, it responds sector by sector, balancing the need to address AI-specific risks against the speed of change through soft-law approaches and modular experiments. Singapore also places soft law at the center of its comprehensive approach and provides a governance and technical evaluation tool (the AI Verify toolkit).
On the other hand, the EU is currently the only jurisdiction using hard law to regulate AI comprehensively, though bills to do so have been introduced in Canada, South Korea, and Brazil. China has introduced voluntary principles and guidelines for integrating ethics across the AI lifecycle in scientific and technological research generally, and has hard-law regulations for specific types of AI (recommender systems, deep synthesis technology, and generative AI). (For details on these countries, see the CEIMIA report A Comparative Framework for AI Regulatory Policy [PDF]; I am serving as an advisor for the second report.)
Methods of Regulation
Whether to use hard law or soft law depends on each country, and one is not necessarily better than the other. Furthermore, there is actually not much difference (the reason will be explained later). Since the circumstances each country faces are diverse—including economic conditions, cultural backgrounds, legal cultures, the existence of existing laws (e.g., provisions in personal information protection laws, civil law, criminal law, etc.), and corporate cultures—it is best to take measures suited to that country.
In the case of Japan, as is already happening, it seems best to start with non-legally binding soft law (guidelines) as the means of comprehensive AI regulation, and then regulate with hard law (legislation) in individual fields as necessary. Even under soft law, Japan, unlike the U.S. with no comprehensive federal personal data protection law, has a solid personal information protection law. (Japan used to sit somewhere between the U.S., which emphasizes the economy, and the EU, which emphasizes human rights, but since the GDPR it has been amending its law to align closely with the GDPR.) Moreover, Japan has traditionally had strong social sanctions and high corporate compliance awareness. Japanese committees on IT-related policy and legal reform often include businesses as members, which creates an incentive to comply; even without legal force, if the government issues guidelines, most companies try to address them seriously. I often hear from overseas contacts, "Japan is lucky. In our country, if it's not legally binding, no one follows it."
Looking at personal information protection, in Japan a company's reputation drops immediately if it is reported to have leaked personal data. Japanese companies are therefore nervous about complying with the Personal Information Protection Act and take countermeasures. In Japan, especially among listed companies, there is a tendency to avoid even slight risks and shy away from challenges. If AI were suddenly regulated comprehensively by law, no one would want to develop it; that would hinder innovation, weaken Japan's international competitiveness, and damage the economy. Japan especially needs to use AI effectively because the shrinking labor force caused by the declining birthrate and aging population is a serious problem. Moreover, many AI risks can be handled under Japan's existing laws.
However, there are fields that must be strictly regulated in line with technological progress. Regulation will be necessary in military contexts, government AI utilization, or if AI moves to the next step, such as Artificial General Intelligence (AGI). Furthermore, if companies stop following the AI Guidelines for Business and begin developing and utilizing AI as they please, leading to actual negative impacts on people and society, regulation will be unavoidable.
How to Ensure Interoperability
The question of how to ensure interoperability if each country's AI regulations are different becomes a point of discussion. In the international community, because the circumstances each country faces and the degree of technological progress are too different, it is difficult to form a consensus through legally binding hard law. In that case, consensus can only be reached in the form of "principles" in broad areas—that is, consensus through soft law. However, when discussing at international conferences, I feel that whether it is hard law or soft law doesn't matter much, as they are becoming similar. In fact, every country advocates principles of human-centric AI, safety, fairness, transparency, and accountability. Furthermore, technical standardization by ISO is also progressing.
At GPAI, practical matters are discussed, and recommendations, materials, and best practices that are easy for countries and companies to refer to are created through various projects. Japanese government agencies are also closely monitoring and supporting GPAI's movements and sharing and discussing them at G7 and OECD meetings, so it can be said that they are influencing each other. Participants from the Global South also participate in daily discussions at GPAI. Furthermore, a framework called the Hiroshima AI Process Friends Group has been created, with 53 countries and regions, including the EU, participating (as of June 2024).
In this way, international organizations influence one another and strive to form consensus. Countries of the so-called Global South are particularly concerned about jobs being displaced by AI and about access to AI (whether they can actually use it). Technologically advanced countries need to craft rules with developing countries' perspectives in mind. Ultimately, however, the power to set AI rules lies with the countries that lead in AI technology; a country that cannot gain a technological edge will end up having other countries' rules imposed on it.
Regulation of General-Purpose AI
Until now, AI has been called "narrow AI" or "weak AI": trained on labeled datasets to perform specific tasks within predefined environments, and therefore somewhat predictable. However, the emergence of generative AI such as ChatGPT, built on large language models (LLMs), represents significant progress toward artificial general intelligence (AGI), the so-called "strong AI" capable of a wide range of intellectual tasks, and has made AI familiar to the public. Beyond problems such as hallucinations (plausible but false output), privacy, intellectual property, bias, and deepfakes, as general-purpose AI advances, concerns are spreading about its use in cybercrime and, ultimately, its potential to lead to the extinction of humanity.
Against this background, scientists in the EU warned that classifying AI systems as high-risk or not according to their intended purpose would create loopholes for general-purpose AI systems (foundation models). Organizations such as the Future of Life Institute also argued that such systems should be brought within the AI Act, leading to a separate chapter on "general-purpose AI" (also abbreviated GPAI — confusingly, the same acronym as the organization mentioned earlier). Specifically, providers of GPAI models are obligated to prepare technical documentation, comply with the EU Copyright Directive, and publish summaries of the content used for training. Providers of GPAI models with systemic risk bear further obligations such as model evaluation, adversarial testing, tracking and reporting serious incidents, and ensuring cybersecurity protection (models used for R&D before being placed on the market are excluded). This serves as a reference for Japan as well. (Note, however, that GPAI such as generative AI is not the same as AGI, but merely a step toward it.)
At the federal level, the U.S. does not yet regulate general-purpose AI models. As guidance on countering generative AI risks, NIST released a draft "Generative AI Profile" for its AI Risk Management Framework on April 29, 2024.
Meanwhile, AI Safety Institutes are being established one after another to support the development and use of safe and secure AI, including responses to risks from general-purpose AI. Following the UK and the U.S., Japan established one in February 2024, and similar bodies are reportedly under consideration in Canada and India (as of June 2024). These institutes study and promote evaluation methods and standards for AI safety, and by collaborating with one another they are expected to help resolve the interoperability issues discussed above. (In France, Inria (the National Institute for Research in Digital Science and Technology), which also serves as a support center for GPAI (the organization), began collaborating with the UK AI Safety Institute in February 2024.)
What is Needed for the Regulation of Emerging Technologies—Flexibility/Agility, Multi-stakeholder, and Interdisciplinary Perspectives
The emergence of generative AI is said to have come much faster than predicted. Going forward, autonomous AI agents such as AutoGPT, which learn and act on their own without humans entering prompts (instructions), will likely become mainstream. AI's "black box" problem will grow more serious, and things humans cannot foresee will occur. Beyond private use, there is also the so-called dual-use problem, where the technology can be diverted to military applications. The faster technology progresses, the more flexible and rapid the required response. Regulating AI therefore requires involving diverse stakeholders in the discussion, including the public and private sectors, academia, and civic groups.
In this regard, Japan's AI Guidelines for Business were made into soft law to allow for flexible and rapid (agile) responses and to avoid hindering innovation. They also adopt a multi-stakeholder approach.
Furthermore, interdisciplinary initiatives are needed on the front lines of corporate AI development. Unlike earlier IT, AI risk issues affect humanity and society as a whole, so the perspectives of experts in law, economics, sociology, philosophy (ethics), psychology, and cultural anthropology are useful in addition to those of engineers. And since risks differ with the context in which AI is used, experts from the relevant fields (e.g., healthcare, finance) also need to be included in discussions.
To Survive the AI Era—What is the Role of Universities?
AI developers sometimes do not know the constraints of the Personal Information Protection Act or the Copyright Act, or they may be unaware of international discussions on AI ethics. Legal scholars, for their part, tend to focus only on regulation without a good understanding of the technology, so the two need to learn from each other. For example, undergraduate and graduate departments researching AI should teach basic knowledge of law and ethics alongside programming, and legal education should likewise impart basic technological literacy.
In the future, as a role for universities, programs that allow students to earn degrees in multiple fields could be considered to cultivate interdisciplinary perspectives for surviving the AI era. In the United States, law schools that train lawyers implement Joint & Dual Degree Programs (programs where one can simultaneously earn a J.D. (Juris Doctor) and a master's degree from another graduate school). Also, at Georgetown University Law Center, where I belong, one can receive degrees such as a Master of Laws in Technology Law & Policy (LL.M.) for those with a law degree, or a Master of Law and Technology (M.L.T.) for those without a law degree, signifying mastery of both law and technology. Furthermore, discussions about AI are held by inviting experts in philosophy and cultural anthropology.
In this way, to solve the complex problems brought by AI, a combination of knowledge from a wide range of academic disciplines, as well as application skills and flexibility based on basic academic ability, is required. It is hoped that we will face AI skillfully while engaging in discussions with people from various academic fields with a global perspective.
*This research was supported by the JST Moonshot R&D Program, JPMJMS2215. *Affiliations and titles are as of the time this magazine was published.