Keio University

[Special Feature: AI and Intellectual Property Rights] Fumio Shimpo: Generative AI and AI Regulation

Published: June 5, 2023

Writer Profile

  • Fumio Shimpo

    Professor, Faculty of Policy Management

1. Generative AI and Human Intelligence

John Stuart Mill argued, "The highest intellect which it is possible for a human being to acquire, is not the knowing of one thing only, but the combining of a minute knowledge of one or a few things with a general knowledge of many things" (Inaugural Address Delivered to the University of St. Andrews, J.S. Mill, translated by Issei Takeuchi, Iwanami Shoten, 2011, p. 28).

Many people who have used ChatGPT, a text-generating AI, are likely astonished by the naturalness of its responses, its smooth interaction, and its encyclopedic content. However, while generative AI may appear to have acquired the highest intellect, it is actually merely outputting answers by combining information related to a question from a large language model constructed by learning vast amounts of text data. In other words, generative AI is different from what Mill calls the highest intellect.

Can human intelligence be enhanced by delegating the task of "combining general knowledge about many things" to generative AI, or will human intelligence decline as it is surpassed by AI's knowledge? The current situation, in which people are growing suspicious of generative AI because of its many uncertain and undetermined elements, has led to sudden calls for AI regulation.

2. From AI Boom to Practical Application

AI research and development has reached the present day through three booms: in the 1950s, in the 1980s, and since 2011. Until now, typical examples of AI utilization have been cases where AI is "embedded" in products or services, such as AI-equipped home appliances like vacuum cleaners, smart speakers, and autonomous vehicles. Most opportunities to use AI involved "specialized AI" targeting a specific field or domain.

Generative AI can learn from massive amounts of data to generate new data, demonstrating its power in creating diverse content such as images, video, audio, and music, as well as in text generation and translation. While it is a "highly versatile AI," it is not what is called "Artificial General Intelligence (AGI)." However, it gives a premonition of the dawn of AGI capable of demonstrating abilities closer to those of humans, and it cannot be denied that it feels as though a Pandora's box has been opened on the way to its realization.

In previous AI booms, the mainstream was AI that recognized, identified, and inferred from input information, such as voice input and image recognition. For example, the expected role of AI was to output accurate text matching the content of input audio or to identify and extract specific individuals' faces from a vast number of images. On the other hand, while generative AI also outputs text, it generates diverse content as if a human were thinking or creating, depending on the input information or instructions. Instead of searching for a specific image, it creates a new one; instead of merely transcribing, it writes an essay.

Even when the practicality of AI was limited, fictional or virtual threats were often emphasized—the idea that AI would develop in the future and become a threat to humanity, like the Terminator in movies. The versatility of generative AI, which is moving beyond that stage, will undoubtedly be a turning point for recognizing specific dangers and threats, alongside the realization of AI's incredible utility.

3. Versatility of Generative AI and the Abstract Nature of Risk

Risks surrounding AI research, development, and use have been discussed in detail both domestically and internationally, including by the Ministry of Internal Affairs and Communications' Conference on AI Network Society. The "OECD Council Recommendation on Artificial Intelligence," adopted by the OECD Ministerial Council in May 2019, aims to promote AI innovation and trust by promoting the responsible management of trustworthy AI while respecting human rights and democratic values.

Due to the impact of generative AI like ChatGPT, there is a growing illusion that the accumulation of previous discussions is useless because the specific risks associated with its versatility cannot be foreseen. However, it should be noted that because of the abstract nature of these risks, we simply have not yet been able to evaluate how useful the discussions to date actually are.

Despite the fact that necessary principles for AI research, development, and utilization have been considered and already proposed, resistance to legal regulation and arguments that regulation is unnecessary are being repeated with the emergence of new technology and the promotion of innovation. Rather than ending with simplistic criticisms that "regulation equals evil" and hinders innovation, we should consider the "regulation, correction, and discipline" that is inherently and essentially necessary for the use of generative AI, which has been avoided until now.

Regarding legal issues surrounding generative AI, it is true that there are aspects that cannot be evaluated at this point, as the extent to which unexpected problems—such as those that cannot be handled by previous discussions—might occur depends on the future expansion of generative AI functions and the invention of new usage methods. However, it is necessary to grasp to some extent the legal issues envisioned when considering future specific regulations and discipline. Although this is merely an illustrative and hypothetical list, we will likely have to consider the following problems.

Since the spread of the Internet, every time a new technology or means of communication has appeared in the field of information law, issues surrounding intellectual property rights, including the use of copyrighted works, and the rights to personal information and privacy have been discussed first. Even when the new field of robot law emerged from information law, discussions related to intellectual property and personal information were again the first to be brought to the table. Since the discussion of legal issues surrounding generative AI is showing signs of repeating this same process, I am reminded once again that issues of intellectual property and personal information are unavoidable whenever emerging technologies appear.

As a starting point for typical trial and error regarding the use of generative AI and legal issues, I would like to list the following points.

(1) Impact on Democracy

(a) Impact on decision-making and judgment in the structures of governance (legislative, judicial, administrative); (b) Impact on elections and candidates for public office, and problems associated with political use.

In decision-making and judgment, it is necessary to research precedents and vast amounts of information, so the use of AI is clearly useful in the legislative, judicial, and administrative fields. If AI becomes involved even in final judgments, some might think it could be positioned like a human advisory body. However, because humans cannot judge whether an AI's conclusion is correct, there is a risk that we will not even be able to determine whether a judgment on a matter exceeding human wisdom is right.

(2) Impact on Expressive Activities

(a) Changes in communication and expressive activities; (b) Bias, discrimination, and ensuring fairness and justice in expressive activities; (c) Impact on the right to know; (d) Impact on intellectual activity itself (the need to distinguish between intelligence, knowledge, and insight); (e) Cessation of expressive activities and thinking due to dependence on generative AI.

By simply querying generative AI, one can not only have the necessary information picked out from a vast amount of data but also have it output information that complements human intellectual activity. Consequently, we will come to rely on AI not only for information retrieval but also for analysis, organizing points of contention, and various creative activities. As generative AI spreads, will our intelligence be enhanced through its use, or will we fall into a state of suspended thinking and lose our intelligence through excessive dependence? It is unlikely to move uniformly in one direction; rather, the outcome will depend on the literacy, usage methods, and usage awareness of those using generative AI.

Generative AI will become more useful as the precision of its output improves, but whether one can obtain the expected answer will also depend on the ability to phrase questions (prompting) in a way that makes it easy for the AI to derive that answer. In other words, in addition to existing information literacy, the skill of communicating with AI will be required.

(3) Protection of Intellectual Property

(a) How to protect creations (outputs) and products (information) by generative AI; (b) Issues related to the use of generative AI and intellectual property rights, including copyrights, trademarks, and design rights.

The book by J.S. Mill introduced at the beginning questions "university education," and the use of generative AI will bring about major changes in the way education is conducted at universities. For example, considering the problem of plagiarism in report assignments, how to judge cases will be a subject for future study: whether writing using generative AI constitutes plagiarism (the issue of using generative AI itself); when a student accused of plagiarism claims they used generative AI even though they didn't (shifting responsibility to generative AI); or when a student is flagged for plagiarism after using text from reference materials obtained from a friend without knowing it was generated by AI (illegal/unjust acts by a third party in good faith).

(4) Protection of Personal Moral Interests (Personal Information, Privacy, Portrait Rights, etc.)

(a) Changes in the environment for handling personal information (difficulty of data protection); (b) How to protect personal information handled without the individual's knowledge or awareness; (c) The increased possibility of inferring sensitive personal information after the fact, even if no sensitive information was initially acquired.

The discussions required for protecting personal moral interests, including the guarantee of personal information protection and privacy rights, are diverse.

In a noteworthy decision, the Italian data protection authority banned the use of ChatGPT, citing violations of the EU's GDPR (General Data Protection Regulation). After OpenAI, the developer, responded with measures to ensure transparency and protect users' rights, the ban was lifted on April 28, 2023. The measures announced by OpenAI include confirming that users have the right to opt out (to stop the use of their personal data), describing the necessary explanations for this in the privacy policy, and introducing an opt-out request form that allows exclusion from training data and chat history. Regarding the assurance of accuracy, on the other hand, OpenAI merely states that it is technically impossible to correct inaccurate information and explains that users should use ChatGPT with the understanding that the accuracy of personal information in its responses cannot be guaranteed.

Furthermore, regarding personal information entered by users, it states that handling will be based on legitimate interests along with the opt-out option. It can be said that part of the matters to be considered as data protection issues related to generative AI has become clear.

(5) Identifying and Addressing Illegal and Improper Use

Setting boundaries for the proper use of generative AI will become difficult. We must carefully consider how to prevent the use of generative AI not only for illegal acts such as crimes but also as a tool for promoting or aiding improper conduct.

The challenge will be dealing with "generative AI-utilizing crimes/misconduct," where the act of using generative AI itself is illegal, and "generative AI-related crimes/misconduct," where generative AI is used as a support tool to execute existing illegal acts. Examples of the former include attack methods like "prompt injection," where malicious prompts (instructional text) are entered into generative AI for unauthorized use. The latter includes acts such as receiving guidance on creating computer viruses or manufacturing explosives, or using generative AI to execute existing crimes.

5. Points to Note in Efforts Toward AI Regulation

In considering new regulations, it is naturally expected that in the future, the nature of those regulations will be considered by referring to answers obtained by inputting questions into generative AI. There is no room for doubt that it is useful to seek the necessary knowledge for considering reliable and effective regulations by having AI comprehensively and exhaustively learn from past regulatory cases and their effects. However, when an era arrives where regulations are considered centered on "regulation of AI, by AI, for AI," and it is no longer permitted to present counter-proposals to the optimal solutions derived by AI, a difficult future awaits humanity.

Perhaps anticipating such a sense of crisis, the G7 Digital and Tech Ministers' Meeting "Ministerial Declaration" (April 30, 2023) presented items for discussion to determine the direction of AI regulation, such as promoting global interoperability of AI governance and establishing a forum to discuss generative AI under the heading "Promoting Responsible AI and AI Governance." However, what became clear here is that the same discussions as previous approaches to regulation are being repeated. For example, Japan has been consistent in its direction that response through soft law, such as guidelines and self-regulation, is preferable rather than responding through strict legal regulations. At the opposite pole is the EU, which shows no sign of yielding its policy that response should be through strict regulation.

6. EU AI Regulation and the Brussels Effect

In a situation where countries are hesitant about AI regulation, only the EU has clearly indicated its regulatory method in advance. The "Proposal for a Regulation on Artificial Intelligence (AI Act)", an EU bill to regulate the use of AI, was published by the European Commission on April 21, 2021. It establishes usage regulations, including outright bans, according to the risk level of the AI system. It aims to establish new legislation by extending the product safety obligations currently imposed on manufacturers and importers of products sold (placed on the market) in the EU to AI systems classified as high-risk, making them subject to the "CE marking" (a mark indicating that a product meets EU standards) and building a conformity assessment and third-party certification system for that purpose.

There is a theory called the "Brussels Effect," which refers to the phenomenon where regulations proposed by the EU substantially influence global rule-making (The Brussels Effect: How the European Union Rules the World, Anu Bradford, supervised translation by Katsuhiro Shoji, Hakusuisha, 2022). It refers to a mechanism that can exert regulatory power in the market when five conditions are met: (1) market size, (2) regulatory capacity, (3) stringent rules, (4) inelastic targets, and (5) indivisibility. The EU's new AI regulation is expected to be a field that literally exerts the Brussels Effect toward future AI research, development, and social implementation.

The objectives of the AI regulation set out in the AI Act proposal are: (a) harmonized rules for placing on the market, putting into service, and using AI systems in the EU; (b) prohibition of specific AI practices; (c) requirements and obligations for high-risk AI systems; (d) harmonized transparency rules for AI systems intended to interact with natural persons, emotion recognition systems, biometric categorization systems, and AI systems used to generate or manipulate image, audio, or video content; and (e) market monitoring and surveillance. Generative AI falls under (d), which stipulates ensuring the transparency of AI systems. For text-generating AI, however, consideration is being given to classifying it among the (c) high-risk AI systems, as well as to adding the disclosure obligations stipulated for ensuring transparency and "labeling" for that purpose.

7. Where AI Regulation is Headed

When new technology appears, discussions to regulate that technology are often held, but what should be regulated is not the technology itself, but the discipline of the humans who use it. Furthermore, the background to the sudden proliferation of regulatory theories alongside the attention on generative AI is largely due to fear of the unknown and opaque elements.

AI regulation is not a problem merely associated with the advancement of information processing; the essence of the issues to be considered is the problems associated with autonomous judgment by AI, and those discussions have already been held since the beginning of the third AI boom. What is being tested now is the awareness on the human side regarding AI's autonomy.

The situation in which we can laugh at AI for boldly lying, such as returning a different person's profile when you enter your own name and ask about yourself, will not last very long. Soon, we will not even be able to verify (fact-check) the credibility of information that is wrong. If that happens, we will have to develop AI to perform fact-checks, and then we will fall into an infinite loop of having to develop AI to confirm that those fact-checks are correct.

In the future, AI will exponentially improve the accuracy of its output results, and its autonomy will improve dramatically beyond what we imagine. AI is a technology developed by humanity. However, ironically, the discussions surrounding AI regulation accompanying the evolution of generative AI vividly represent the situation where human wisdom is not keeping up with that technology.

I want to continue my studies as a legal scholar so that an era does not arrive in which only AI can derive the answers about what form of AI regulation Trustworthy AI requires. While consulting with generative AI.

*This research was supported by the JST Moonshot Research and Development Program, JPMJMS2215.

*Affiliations and titles are as of the time this magazine was published.