1. Background
Today, the world stands at a historic intersection of a new round of technological revolution and industrial transformation. Following industrialization and informatization, intelligence has become the new developmental trend of the era (Sun and Li 2022). Driven by national policies promoting the digital economy and by the demand for high-quality economic development, AI technology and industry have advanced rapidly. As technological innovation becomes more active and industrial integration deepens, technologies such as intelligent automation, recommendation, search, and decision-making have become deeply integrated into enterprise operations and social services, bringing significant economic and social benefits. In short, artificial general intelligence (AGI) is playing an increasingly crucial role in optimizing industrial structures, enhancing economic activity, and aiding economic development (Guo and Hu 2022).
Generally speaking, artificial intelligence refers to algorithms or machines that achieve autonomous learning, decision-making, and execution based on a given amount of input information. The development of AI is built on improvements in computer processing power, advances in algorithms, and the exponential growth of data (Cao and Fang 2019). Since John McCarthy first proposed the concept of artificial intelligence in 1956, the progress of AI has not always been smooth: it has experienced three periods of prosperity, driven by machine learning, neural networks, and internet technologies, as well as two periods of stagnation, caused by insufficient computing power and immature reasoning models (Jiang and Xue 2022). With the deepening implementation of AI and the recent popularity of technologies like GPT-4, a new wave of Artificial Intelligence Generated Content (AIGC) has emerged, demonstrating capabilities that approach AGI. However, Generative Artificial Intelligence (GAI) has also raised concerns due to its inherent technical flaws and issues such as algorithmic black boxes, decision biases, privacy breaches, and data misuse, leading to a crisis of trust.
In this context, the key to addressing the challenges of AGI development lies in providing a governance framework that balances ethics, technology, and law (Zhao 2023). This framework should respect the laws of technological development while aligning with the requirements of legal governance and the logic of scientific and technological ethics. However, both theoretical research and practical experience indicate that current governance of AGI lacks specificity, systematicity, comprehensiveness, and a long-term perspective. It is therefore urgent to apply systematic, scientific legal methods to ensure and promote a positive cycle between technological breakthroughs and high-level competition. This approach should integrate technological, industrial, institutional, and cultural innovation, and thereby advance the innovative development of AGI.
This article uses issues arising from representative GAI products and services as examples to explore the legal challenges brought about by the innovative development of AGI. Additionally, it discusses how to safeguard the innovative development of AGI by examining the current situation of China's response to these challenges. Based on this analysis, the article proposes legal solutions to promote the innovative development of AGI in the future, with the aim of enriching theoretical research in this field.
2. Challenges of Generative Artificial Intelligence Technology
Science and technology are primary productive forces, and scientific and technological progress is an indispensable driver of industrial development. With advancements in technologies such as GAI, people have discovered that AI is capable of accomplishing tasks previously thought impossible. At the same time, they have realized that the safety challenges posed by AI's development and its deep integration into daily life are becoming increasingly complex.
Artificial intelligence is commonly divided into specialized artificial intelligence and general artificial intelligence. Specialized artificial intelligence, also known as "narrow AI" or "weak AI," refers to AI programmed to perform a single task. It extracts information from specific data sets and cannot operate outside its designed task scenarios. Specialized AI is characterized by strong functionality but poor interoperability. General artificial intelligence (AGI), also known as "strong AI," "full AI," or "deep AI," possesses general human-like intelligence, enabling it to learn, reason, solve problems, and adapt to new environments as a human would. AGI can address a wide range of problems without knowledge specially encoded for particular application areas.
With the emergence of large models like GPT-4 that demonstrate powerful natural language processing capabilities, the possibility of achieving AGI through "big data model + multi-scenario" approaches has increased. Although no technology has yet fully reached the level of AGI, some scholars believe that certain generative AI models have come close (Bubeck et al. 2023).
Currently, the security issues raised by systems such as GPT-3.5, characterized by autonomous intelligence, data dependency, the "algorithmic black box," and a lack of interpretability, have attracted widespread attention. If technology products that truly meet AGI standards emerge, they could pose even more significant security challenges, with potentially more severe consequences and broader impacts on national security, social ethics, and the safety of individual life and property. It is therefore essential to examine the specific risks posed by generative AI in order to ensure that the innovative development of GAI benefits human society without causing harm.
2.1. Ethical Risks in Science and Technology
Scientific research and technological innovation must adhere to the norms of scientific and technological ethics, which are crucial for the healthy development of scientific activities. Currently, generative AI can produce content in text, image, audio, and video formats, and its application fields are extremely broad. However, the lack of established usage norms for this technology poses ethical risks, leading to distrust in the application of AI. This issue is especially serious during the transition from weak AI to strong AI, where AI's increasing autonomy presents unprecedented challenges to traditional ethical frameworks and to the fundamental nature of human thought.
GAI services excel in areas such as news reporting and academic writing, making the technology an easy tool for fabricating rumors and forging papers. The academic journal Nature has published multiple analytical articles on ChatGPT, discussing how large language models (LLMs) like ChatGPT could disrupt academia, the infringement risks associated with generated content, and the need to establish usage norms (Stokel-Walker and Van Noorden 2023). It is foreseeable that the absence of clear ethical standards could lead to frequent academic fraud, misinformation, and rumor spreading, thereby eroding trust in AI technology. This distrust could even extend to situations where AI technology is not used (Chen and Lin 2023).
Moreover, the responses GAI provides through data and algorithms are uncertain. With the continuous iteration of GAI, some technologies are considered to be approaching the level of AGI and human-like intelligence. As GAI develops further, it raises profound questions about whether the technology will independently adopt ethical principles similar to those of humans. AI is now progressing toward an era of strong AI with increasingly general capabilities, and we humans may find it difficult to control, or even participate in, the process of intelligent production, making the regulation of scientific and technological ethics all the more critical.
2.2. Challenges in Responsibility Allocation
In recent years, incidents caused by autonomous driving technologies from companies such as Google, Tesla, and Uber have intensified the ethical debate over whether humans or AI should bear responsibility. In the context of GAI services, the enhanced autonomy of AI, coupled with the need to optimize generated content based on external feedback, poses significant challenges. The traditional legal framework of causality is difficult to apply because of the numerous hidden layers within algorithmic black boxes, creating regulatory difficulties. This complexity increases the difficulty of seeking remedies and defending rights after an infringement, making it harder to protect user rights effectively and exacerbating public distrust in AI.
On the one hand, the legal and ethical standards for AI are still underdeveloped, resulting in many infringement incidents. When such incidents occur, determining the liable party and correctly allocating responsibility becomes a major challenge. The concept of the "responsibility gap," introduced by Andreas Matthias in 2004, refers to the inability of algorithm designers and operators to foresee outcomes during an algorithm's autonomous learning process. This implies that humans lack sufficient control over the actions of machines, and that machine builders and operators cannot be held liable under the traditional, fault-based assignment of responsibility (Matthias 2004).
On the other hand, GAI technology is universally accessible: its usage and cost thresholds are low, so a wide range of people can easily access and use it. This accessibility increases the risk of infringement. For example, AI makes it simple to create and disseminate false information, and some users intentionally fabricate and spread rumors to boost web traffic, increasing the frequency of misinformation dissemination (Chen and Lin 2023).
2.3. Intellectual Property Challenges
With the widespread application of GAI, concerns have arisen regarding the legality of the training data sources for large AI models and whether the content they generate can be recognized as a work.
While it is widely accepted that GAI, as a computer program, can be protected as intellectual property, significant controversy remains over the intellectual property issues raised by training on massive data sets. The lack of clear boundaries or definitions for intellectual property in data can easily result in a "tragedy of the commons"; conversely, overemphasizing the protection of data as intellectual property can hinder technological development, resulting in a "tragedy of the anti-commons" (Peng 2022). Scholars are actively discussing how to balance the protection of intellectual property in data with the advancement of technological innovation.
Furthermore, there is debate over whether content generated by AI can be recognized as a work. GAI produces content based on extensive data training and continuously refines its output according to user feedback. It is therefore difficult to determine whether the content is entirely autonomously generated by AI, which leads to disputes. Some scholars argue that GAI merely mimics the human creative process and that its content is not a product of human intellect. In practice, however, a few countries do recognize computer-generated content as a work. For instance, the UK's Copyright, Designs and Patents Act (CDPA) provides that computer-generated content can be protected. Although there is no consensus on the ownership of AI-generated content, most scholars agree that AI itself cannot be the rights holder of a work.
2.4. Data Security Risks
Data elements have immense potential value. If this value is fully realized through the pattern of "potential value - value creation - value realization," it can significantly drive social and economic development (Chen B. 2023). As users become more aware of protecting their data privacy and the risks associated with data breaches increase, finding a balance between data protection and data-driven AI research is crucial for achieving public trust in AI technology.
In GAI technology, the first type of risk is the inherent security risk of the training data. The training outcomes of GAI models depend directly on the input data. However, owing to limitations in data collection conditions, the proportions of data from different groups are unbalanced. For example, current training corpora are predominantly in English and Chinese, making it difficult for minority languages to be integrated into the AI world and thus introducing certain limitations.
The second type of risk arises from the processes of data collection and usage. With the advancement of internet technology, the volume of personal information has grown and become easier to collect. The growing scale of data is both the key to delivering GAI services and a primary source of trust crises. The training data volume for GPT-4 has reportedly reached 13 trillion tokens. Although mainstream GAI service providers have not disclosed their data sources, these data are known to come mainly from public web-scraped datasets and large human language datasets. Accessing and processing such data in a secure, compliant, and privacy-protective manner is challenging and places higher demands on technical security safeguards.
2.5. Algorithm Manipulation Challenges
In the AI era, the uncontrollability arising from the statistical nature of algorithms, the autonomous learning ability of AI, and the inexplicability of deep learning black-box models have become new sources of crisis in user trust. From the perspective of technical logic, algorithms play a core role in the hardware infrastructure and applications of GAI, shaping user habits and values (Zhang L.H. 2021). This uncontrollable technical defect, rooted in the black-box nature of AI models' decision-making, gives rise to most algorithmic challenges.
Firstly, algorithms lack stability. GAI faces various attacks targeting its data and systems, such as virus attacks, adversarial attacks, and backdoor attacks. For instance, feeding malicious comments into a model can effectively skew a recommendation algorithm, resulting in inaccurate recommendations (a minimal sketch of this poisoning effect follows this list).
Secondly, the explainability of algorithms needs improvement. On the one hand, the internal processes and operational mechanisms of large models containing vast numbers of parameters remain unclear. On the other hand, it is also unclear which specific data in the training database influence the AI algorithm's decision-making.
Lastly, algorithmic bias and discrimination remain unresolved. Internally, if algorithm developers set discriminatory factors or misconfigure certain parameters during development, the algorithm will inherently exhibit biased tendencies. Externally, since GAI optimizes its content based on feedback, any bias or discrimination present in the feedback data will affect the final generated content.
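To make the stability risk above concrete, the following minimal sketch (a hypothetical illustration, not drawn from any system cited in this article) shows how a handful of coordinated malicious ratings can poison a naive average-based recommender:

```python
# Hypothetical illustration of data poisoning against a naive recommender.

def recommend_score(ratings):
    """Naive recommender: score an item by its mean rating."""
    return sum(ratings) / len(ratings)

genuine = [4.5, 4.0, 5.0, 4.5, 4.0]        # organic user ratings
print(recommend_score(genuine))            # 4.4 -> item is promoted

# An attacker injects coordinated low ratings ("poisoned" feedback).
poisoned = genuine + [1.0] * 20
print(recommend_score(poisoned))           # ~1.7 -> item is buried
```

Real recommendation models are far more complex, but the underlying vulnerability is the same: outputs shift with whatever feedback the system ingests, which is why feedback data requires screening.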
3. China's Legal Practice for AGI
Artificial intelligence is a double-edged sword, bringing both convenience and risks to society. The trust crisis caused by the application of AI technology hinders further innovation and development. Currently, countries around the world have accumulated certain governance experiences to align the rule of law with the development of AI technology. There has been substantial research on governance principles, implementation rules, and accountability mechanisms.
The European Union has adopted a strict regulatory approach, aiming to address AI-related issues through specialized institutions and legislation. In April 2021, the European Commission proposed the "Artificial Intelligence Act," which focuses on the threats AI poses to personal safety and fundamental rights. On September 28, 2022, the European Commission released the proposed "AI Liability Directive," further clarifying the allocation of responsibility within AI systems, especially regarding damage compensation and risk management. In March 2024, the EU passed the world's first comprehensive AI regulatory law, the "Artificial Intelligence Act," which emphasizes strict regulation of high-risk AI systems and marks a new phase in AI governance.
In contrast, the United States has adopted a lenient regulatory approach and has yet to establish systematic governance legislation for AI. In October 2022, for example, the White House issued "The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People." While this document provides guidance for federal measures on AI development, it is not legally binding and has limited governance effect. As the pace of AI development accelerates, however, the U.S. government is gradually increasing its intervention. In October 2023, President Biden signed an executive order on AI regulatory measures to ensure the U.S. takes a leading role in managing AI risks, and in June 2024 the FTC launched an investigation into certain companies' AI investments and partnerships.
In summary, countries make choices suited to their actual circumstances and adopt different regulatory attitudes and methods for AI supervision. With the ongoing research and development of AI and the expanding scope of its risks and impacts, both the U.S.'s lenient model and the EU's stringent model have their merits. Drawing on other countries' experience, China is forming a governance approach with Chinese characteristics: enacting laws and regulations to prevent the potential risks of AI while providing clear and reasonable guidance for enterprise development.
3.1. Compliance with the Law: The Normative Values for AGI Development
Given the risks associated with the development of AI, the Chinese government has intervened promptly by enacting legislation to promote the regulated development of AI within a rule-of-law framework, thereby preemptively addressing the challenges posed by AGI. A draft Artificial Intelligence Law was included in the State Council's 2023 legislative work plan. On October 31, 2018, during the ninth collective study session of the Political Bureau of the 19th CPC Central Committee, it was explicitly stated that "we must strengthen the assessment and prevention of potential risks in AI development to ensure AI is safe, reliable, and controllable." All of this demonstrates China's proactive attitude toward, and emphasis on, both supporting and regulating the development of AI technology and industry.
The Cyberspace Administration of China, together with six other departments, issued the "Interim Measures for the Management of Generative AI Services" (hereinafter the "Interim Measures"), which came into effect on August 15, 2023. The "Interim Measures" focus on ex-ante regulation, enhancing the capabilities of GAI security governance through preventive supervision. The "Artificial Intelligence Law (Scholars' Draft)," released in March 2024, refines and reconstructs the regulatory targets, bodies, tools, and content of AI risk regulation, outlining the basic framework of an AI regulation system.
Thus, while vigorously developing AI, China places great emphasis on safety challenges and has set clear safety governance objectives. It has adopted a comprehensive governance approach, incorporating regulations, standards, and technical support, to implement an agile governance model. Given the current state of AI development in China, the ultimate goal of regulation is to sustain the development of the new wave of scientific and technological advancement, including AGI. Beyond safety principles, therefore, China also focuses on accelerating the development of AGI: by fostering a market environment that encourages competitive innovation, it promotes sharing and innovation among AI enterprises.
3.2. Trustworthiness: The Fundamental Value of AGI Development
Trustworthiness is the primary principle, or "imperative clause," that must be followed at the current stage of AGI innovation, and it is also the focus of AI governance policy formulation (Chen J.D. 2023). Although the definition of trustworthy AI has yet to be unified, its core principles include stability, interpretability, privacy protection, and fairness. Stability refers to the ability of AI to make correct decisions in the presence of environmental noise and malicious attacks. Interpretability means that AI decisions can be understood by humans. Privacy protection refers to the AI system's ability to safeguard personal or group privacy from breaches. Fairness implies that the AI system should accommodate individual differences and treat different groups equitably.
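As a concrete illustration of the fairness dimension, the short sketch below (with hypothetical numbers, not taken from any cited study) computes a simple demographic parity gap, one common way to quantify whether an AI system treats two groups equitably:

```python
# Hypothetical sketch: demographic parity gap between two groups.

def positive_rate(decisions):
    """Share of favourable AI decisions (1 = favourable, 0 = not)."""
    return sum(decisions) / len(decisions)

group_a = [1, 0, 1, 1, 0, 1]
group_b = [0, 0, 1, 0, 0, 1]

gap = abs(positive_rate(group_a) - positive_rate(group_b))
print(f"demographic parity gap: {gap:.2f}")  # 0.33 -> possible inequity
```

Analogous quantitative checks exist for stability (e.g., accuracy under perturbed inputs), and such measurements are a precondition for auditing the principles listed above.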
In China, Academician He Jifeng of the Chinese Academy of Engineering first proposed the concept of "trustworthy AI" at the Xiangshan Science Conference in November 2017 (Zhang Y.K. 2021). To continue fostering the development of trustworthy AGI, China aims to establish a comprehensive governance framework that integrates ethical guidelines, robust laws and regulations, and advanced technical safeguards. In December 2017, the Ministry of Industry and Information Technology issued the "Three-Year Action Plan to Promote the Development of a New Generation of Artificial Intelligence Industry (2018-2020)." In June 2019, the New Generation Artificial Intelligence Governance Expert Committee released the "New Generation AI Governance Principles - Developing Responsible AI," outlining the framework and action guidelines for AI governance. By issuing such policies, China seeks to guide the law-based development of AI and address the specific challenges it poses.
Furthermore, by fostering collaboration among government, business, and research institutions, China encourages enterprises to participate actively in AI governance. In June 2020, Ant Group unveiled its Trusted AI technology architecture at the Global Artificial Intelligence Conference. In July 2021, the JD Explore Academy and the China Academy of Information and Communications Technology jointly released China's first "Trusted AI White Paper" at the World Artificial Intelligence Conference. Both companies highlight privacy protection, stability, interpretability, and fairness as the four fundamental principles of trustworthy AI. These efforts create a balanced environment that supports both technological advancement and societal trust in AI.
3.3. Human-Centric: The Value Orientation of AGI Development
The safety baseline for the innovation and development of AGI encompasses three main elements: people, technology, and trust. Technology is the foundation for the robust growth and stability of the AGI industry; trust is the pillar that sustains its continuous and healthy development; and people are the core objects of protection under trustworthy AI laws and policies. A human-centric approach is thus the fundamental principle of AGI innovation and development. In fact, the issue of trust depends not only on the underlying logic of AI development and application, but also on how well the law supervises AI technology. The key question is whether AI's trustworthiness can be achieved from a legal regulatory perspective.
In recent years, China has undertaken various legal explorations and practices to promote a human-centric approach to AGI. In terms of legislation, Shanghai issued the "Shanghai Regulations on Testing and Application of Intelligent Connected Vehicles" in December 2021, the "Shanghai Regulations on Promoting the Development of the AI Industry" in September 2022, and the "Pudong New Area Regulations on Promoting Innovation in Driverless Intelligent Connected Vehicles" in November 2022. These regulations emphasize the trustworthiness of AI algorithms, ethics, governance, and supervision, and provide detailed provisions on technical standards, data security, and personal information protection in the field of intelligent connected vehicles.
Additionally, the "Shanghai Regulations on Promoting the Development of the AI Industry" stipulate that the Shanghai Municipal will establish the "Shanghai AI Strategic Advisory Expert Committee" to provide consultation on major strategies and decisions in AI development. It also sets up the "AI Ethics Expert Committee" to formulate ethical guidelines and promote discussions and standard-setting on major ethical issues in AI both domestically and internationally.
In July 2021, the Shanghai Municipal Commission of Economy and Informatization and the Shanghai Municipal Market Supervision Administration issued the "Guiding Opinions on Promoting the Construction of a New Generation AI Standard System" (hereinafter the "Opinions"). The "Opinions" focus on areas such as intelligent connected vehicles, medical imaging diagnostics, visual image identity recognition, and intelligent sensors, aiming to accelerate the construction of a comprehensive trustworthy AI evaluation system and to establish common standards for testing and evaluation. On safety ethics, the "Opinions" propose guiding and regulating AI development through safety and ethical standards, enhancing safety assurance capabilities, establishing proactive governance rules, and reinforcing the development of standards for privacy protection and application scenario safety.
As social understanding deepens, the participants involved in ensuring the innovative development of AGI will become more diverse, fostering coordinated interaction among the various entities and elements of the industry chain. Based on the legal frameworks of the "Cybersecurity Law," "Data Security Law," and "Personal Information Protection Law," Shenzhen, Shanghai, and Beijing are accelerating AI legislation, establishing graded and classified norms for AI applications in public spaces, and forming AI governance systems with local characteristics. In addition, industry associations, alliances, and research institutions play an active role in formulating and publishing standards, and achievements in areas such as safety and reliability provide a reference for the human-centric development of AGI. Cases like the "first facial recognition case" (Bing Guo v. Hangzhou Safari Park Co., Ltd., a service contract dispute) have drawn widespread public attention, increasing public understanding of and demand for AI and significantly enhancing participation. This demand-driven dynamic pushes the development of AGI in a human-centric direction.
4. Legal Strategies for the Innovative Development of AGI
Given its autonomy, data dependence, and the "algorithm black box," AI faces significant security and ethical challenges in technological development, derivative applications, data security, and privacy protection. These challenges could have severe consequences for national security, social ethics, and personal life and property. Ensuring that the innovative development of AGI benefits human society is therefore a significant challenge that must be addressed, and research into AI governance is urgently needed.
4.1. Establish Norms for Technology Ethics Supervision and Management
Currently, technology ethics governance suffers from inadequate mechanisms, imperfect systems, and uneven development, and is insufficient to meet the needs of AI industry innovation and competitiveness. It is therefore necessary to accelerate the establishment of multi-domain technology ethics norms and to enhance supervision. On the one hand, risks in the many application fields of GAI services should be precisely identified and tracked to improve governance responsiveness and regulatory efficiency. On the other hand, clear ethical rules for GAI should be defined to promote ex-ante regulation and comprehensive oversight and to guide companies toward compliance.
4.1.1. Establish a Mechanism for Identifying and Tracking Technology Ethics Risks
Based on the concept of risk classification, the purpose of technology ethics risk identification is to provide a preliminary factual basis for differentiated response mechanisms (Shi and Liu 2022). This approach enhances regulatory clarity and guides GAI service providers toward compliance. It also helps identify deficiencies and risks in data sources, operational paths, and output content in advance, avoiding the delays of post-event regulation. Existing science and technology should be fully utilized to refine the specific rules for risk classification in the GAI field. Through in-depth assessment of safety hazards, technological maturity, and vulnerabilities, different levels of ethical risk in GAI should be regulated to different degrees. Once classification rules are clear, further risk-tracking work can be carried out.
However, if the scope and intensity of pre-review are set improperly, they may inhibit the R&D and training efficiency of GAI, objectively slowing its development. A reasonable review scope should therefore be built on the integration of security and development, striking a balance between security and innovation.
4.1.2. Accelerate the Establishment of Technology Ethics Review and Supervision Systems
On April 4, 2023, the Ministry of Science and Technology of China issued an announcement seeking public comments on the "Measures for the Ethical Review of Science and Technology (Trial)." Article 4 proposes that technology ethics reviews should adhere to the principles of science, independence, fairness, and transparency, providing guidance for an open review system and procedures. Compared with self-filing and self-assessment systems, external regulatory methods such as technology ethics reviews are more mandatory and can urge technology developers to improve the compliance of technology use. Clear guidelines provide a means of addressing the ethical challenges of AI, effectively enhancing the trustworthiness of AI and upholding social fairness and justice.
4.2. Improve Rules for Identifying and Bearing Liability for Infringement
The realization of legal liability relies on the existence and determination of a responsible entity and is based on principles of attribution. Clarifying the liability-bearing entities and the attribution principles for AI infringements helps both to avoid risks preemptively and to encourage the development of the AI industry. Although the increasing autonomy of AI challenges traditional causality theories and complicates the attribution of responsibility, the influence of human values and intentions at every stage of AI algorithm design, development, and deployment justifies using subjective fault in algorithm design and deployment as a basis for liability.
4.2.1. Adopt a Preventive Liability Approach
Preventive liability involves managing potential risks through preemptive measures before they materialize into actual harm. Given the uncontrollability and unpredictability of new AI technologies, overly stringent preventive liability could stifle innovation by imposing high compliance costs on enterprises. Institutional design should therefore balance the interests of data handlers, algorithm developers, service providers, and users. Instead of forcing a binary choice between "prohibition" and "complete tolerance of harm," it should allow for intermediate or partial exclusion to avoid turning litigation into a zero-sum game (Hu 2015).
4.2.2. Clarify the Liability-Bearing Entities
GAI operates a value production chain based on massive data, serving both consumer interactions (C-end) and industry services (B-end). Given the multi-field, multi-entity applications of generative content, the "safe harbor" principle should be applied in allocating liability: technology providers do not automatically bear all responsibilities and obligations, but are liable only if they fail to meet specified duties and to take necessary risk prevention measures. Meanwhile, under the principle of fault liability, service providers without subjective fault should be exempt from tort liability.
4.2.3. Standardize Compensation Liability Methods
On a macro level, compensation for loss addresses the victim's financial interests, providing only monetary restitution (Li 2010). On a micro level, AI-induced damage to ethical order, life, or emotions cannot be fully remedied with money alone. To ensure victims receive adequate compensation, the various laws and regulations must be well coordinated to explore reasonable compensation calculations and optimal methods for comprehensive relief of interests. Conducting phased and dynamic assessments of interests, courts should give full consideration to the feasibility and economics of AI and prioritize compensation amounts that protect victims' long-term interests and overall social benefits.
4.3. Balance Fair Competition and Innovation
The text generation models behind GAI are characterized by their large scale, self-supervised training, and strong generalization capabilities. Building, training, and maintaining these models therefore requires substantial human resources, computing power, and data; once trained, large-scale AGI models can easily outperform smaller, field-specific AI models. In other words, the GAI industry requires significant upfront investment and long development cycles, but once a model is released, its high efficiency, lower costs, and broad applicability give the developing company a significant competitive advantage in the market. To address the potential monopoly risks of GAI, it is necessary to balance intellectual property protection with antitrust measures, and the protection of competition with the encouragement of innovation.
4.3.1. Balance the Scope of Intellectual Property Law and Antitrust Law
An intellectual property (IP) right is an exclusive private right, and the monopoly inherent in it is legitimate and may be used to enhance market power. If this right is abused, however, it can exclude or restrict competition. Antitrust law, by contrast, is public law that limits monopoly power; it respects private rights such as IP rights but prevents their abuse. If the exercise of IP rights excludes or restricts competition, it also becomes subject to antitrust regulation. For GAI, there is a need both to protect IP rights and to ensure competitive regulation, seeking a balance between IP protection and antitrust measures that protects competition and stimulates innovation.
4.3.2. Adhere to Prudent Regulation Principles to Encourage Innovation
On the one hand, innovation should be encouraged to create a suitable developmental and competitive market environment for the GAI industry. Establishing a fair, open, and orderly market environment ensures the healthy development and social benefits of GAI. On the other hand, attention should be paid to the monopoly risks in its upstream and downstream industries, and timely regulations should be enforced. For instance, the application of GAI may promote vertical integration strategies by large tech companies, leading to monopolistic and anti-competitive effects. In this context, regulatory authorities need to monitor monopolies in the chip, cloud computing, and downstream application markets, as well as to implement targeted regulations when necessary.
4.3.3. Improve Market Rules in the AI Field
Due to the rapid development of digital technology, competitive patterns in the market are constantly changing, and numerous online competition cases have involved well-known internet companies and leading enterprises.
Thus, there is an urgent need to strengthen normative guidance on industry competition and enhance the exemplary role of judicial decisions. For example, most of the typical cases published by the Beijing Intellectual Property Court in 2023 are closely related to the digital economy, and the adjudication rules of related cases are summarized therein. The publication of these typical cases clarifies market rules in emerging AI fields, providing solutions to earlier disputes and to difficulties in regulating unfair competition.
4.4. Enhance Data Security of Artificial Intelligence
Data is the raw material that forms the foundation of artificial intelligence. Because GAI relies on large models, it demands substantial data volumes, making it essential to strengthen data security protections and avoid data security risks (Fan and Zhang 2022). From the mobile internet era to the AI era, the use of data has consistently expanded in both breadth and depth. This evolution underscores the need to ensure data security through robust market competition, comprehensive legal mechanisms, and advanced technical security measures, so as to effectively safeguard user privacy and data security.
4.4.1. Establish a Data Classification and Grading Protection System
In the governance of AGI, data security protection should be the central focus. Based on data's characteristics, such as shareability, reusability, multi-ownership, high dynamism, and weighted usage attributes, a dynamic approach should be employed to balance data development and security. To manage the large-scale parameters of GAI models systematically, it is essential first to establish a data classification and grading protection system that integrates the application fields of AI services with the inherent properties of AI algorithms. On the basis of this system, corresponding data security protection mechanisms should be implemented to match different types and risk levels of data.
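A minimal sketch of what such a classification and grading scheme might look like in practice is given below; the category names, grades, and control lists are hypothetical assumptions for illustration, not drawn from any regulation:

```python
# Hypothetical sketch of a data classification and grading scheme
# that maps each risk grade to matching protection measures.
from dataclasses import dataclass

@dataclass
class DataItem:
    category: str   # e.g., "public", "business", "personal biometric"
    grade: int      # 1 = low risk ... 3 = high risk

PROTECTION_RULES = {
    1: ["integrity checks"],
    2: ["integrity checks", "access control", "audit logging"],
    3: ["integrity checks", "access control", "audit logging",
        "encryption at rest", "pre-release security review"],
}

def required_controls(item: DataItem) -> list[str]:
    """Return the protection measures matching the item's risk grade."""
    return PROTECTION_RULES[item.grade]

print(required_controls(DataItem("personal biometric", 3)))
```

The design point is that protection obligations attach to the grade, not to each dataset ad hoc, so that different types and risk levels of data receive correspondingly graded safeguards.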
4.4.2. Improve Data Trading Systems
GAI relies on data. Massive data training and continuous algorithm optimization enhance the accuracy of generated content. Currently, the training data for GAI often comes from online text data, which varies widely in quality and harbors unpredictable risks. Reducing these risks involves multi-faceted collaboration in algorithm optimization and data annotation. Additionally, accelerating data market transactions can enhance data quality through market dynamics. Using this secure and reliable data extensively in training can also help reduce the R&D costs for AI technology providers.
4.4.3. Promote Preemptive Data Security Regulation
Preemptive data security regulation involves implementing measures to identify and mitigate risks during data processing and usage, ensuring data security upfront. This proactive approach guides the AI training process toward greater standardization, prevents the damage caused by delayed post-incident regulation, and better safeguards users' rights to information and choice, thereby enhancing algorithmic trustworthiness.
4.4.4. Review Existing Privacy Protection and Compliance Mechanisms
Current practices in mobile internet personal information protection interpret the necessity principle very strictly to prevent improper data collection and aggregation. For instance, the "Methods for Identifying Illegal Collection and Use of Personal Information by Apps" stipulate that personal information cannot be collected solely for improving service quality or developing new products. While this strict compliance approach protects user privacy, it also restricts the data available for training AI systems. Conversely, relaxing these restrictions might introduce privacy and security risks. Therefore, how privacy protection rules can be applied to future AI application scenarios warrants careful consideration and discussion.
4.5. Strengthen AI Algorithm Regulation
The rapid development of GAI services not only highlights the explosive growth in data demand for large models but also reflects a significant increase in algorithmic complexity. Algorithm transparency and explainability face unprecedented challenges. To address this crisis, it is necessary to build a comprehensive governance system around AI algorithms and implement specialized, systematic governance.
4.5.1. Establish a Legal Framework and Regulatory Mechanisms
A robust legal framework and regulatory mechanism are needed to prevent algorithmic discrimination. This includes attending to the legality, scale, completeness, and representativeness of source and training data to prevent algorithmic biases stemming from data inadequacies. Service providers should regularly review and update data, eliminate discriminatory content, and ensure that data sources are sufficiently large and comprehensive. Laws such as the "Cybersecurity Law" and the "Data Security Law" can delineate reasonable boundaries for algorithm governance and impose corresponding duties of care on service providers. Additionally, specific legislation for areas such as facial recognition and algorithmic recommendation can regulate algorithmic activities within AI applications more precisely, facilitating the indirect identification and mitigation of algorithmic risks.
4.5.2. Balance Algorithm Transparency, Explainability, and Innovation
Considering the technological properties and application patterns of AI algorithms, GAI services require a well-considered approach to algorithm governance. Demanding ever-higher transparency for GAI algorithms is not feasible, because making algorithms transparent becomes technically harder as parameter counts and hidden layers grow. Moreover, excessive transparency requirements could undermine the innovation incentives of developers and users.
4.5.3. Promote Multi-Principal Co-Governance of Algorithms
Establishing an autonomous industry oversight committee at the national level can provide guidance and supervision for the AI technology sector. Taking advantage of professional expertise, this committee can implement a classification and grading regulatory principle. By categorizing and governing algorithms based on their application scenarios, the committee can assist regulatory authorities with algorithm registration, auditing, and accountability. This multi-principal co-governance approach aims to ensure the reasonable application of GAI technology in content generation and to continuously refine the development, use, and regulation of AGI algorithms.
4.6. Conduct International Cooperation on AI Governance
In this highly interconnected era, technological advancement and application transcend national boundaries. The governance of the AI industry therefore cannot be achieved by any single country; it requires the joint efforts of the global community. The convening of the World Artificial Intelligence Conference and the High-Level Conference on Global AI Governance highlights the importance of global cooperation. Through international collaboration and exchange, technological innovation and ethical values can complement each other, leading to a more comprehensive and inclusive governance framework. Such a framework can address the global challenges of AI development and steer AI in a safer, more inclusive, and more sustainable direction.
First, countries should actively promote international cooperation and participate in formulating regulations and ethical guidelines in the AI field, so as to create a comprehensive and effective governance system for the trustworthy development of AI (Li 2017). Additionally, through active engagement in international dialogue, an international governance framework and standards with broad consensus can be established to address key issues such as cross-border data flows, privacy protection, transparency, and accountability (Cao 2023). Joint standards developed through cooperation and exchange ensure that the application of AI technologies aligns with global consensus, reducing potential risks and disputes.
Meanwhile, it is necessary to establish multilateral and bilateral cooperation mechanisms that encourage the participation of international organizations, academia, industry, and civil society. Promoting international talent exchange and collaborative research is crucial: in exchanging research results, research teams can learn from one another and jointly overcome technical challenges. This not only accelerates the development of AI technologies but also helps create a more open and collaborative innovation environment for the progress of the global AI industry.
5. Conclusions
Strategic emerging industries are the new pillars of future development. The legal landscape in the digital era should anticipate the future form of global governance for AGI. The era of AGI is not far off, with GAI technologies advancing rapidly in a short period. Their wide range of applications highlights the revolutionary significance of AGI, making the AI industry a new focal point of global competition. However, the innovative development of the AGI industry also faces challenges related to technological ethics, intellectual property, accountability mechanisms, data security, and algorithmic manipulation, which undermine the trustworthiness of AI.
Therefore, it is necessary to further develop a legal regulatory framework for the AI industry and improve the governance ecosystem for technological ethics. By introducing relevant codes of conduct and ethical guidelines, we can promote the healthy and sustainable development of the AI industry within a legal framework. Addressing the aforementioned issues requires strategic research and the pursuit of feasible technical solutions. By establishing technological ethics standards, improving the system for regulating liability, protecting competition while encouraging innovation, enhancing AI data security measures, and standardizing algorithmic regulation in the AI field, the obstacles on the path to innovative development of AGI can eventually be removed.
Author Contributions
Conceptualization, methodology, writing-original draft preparation, project administration, funding acquisition, B.C.; data curation, J.C.; writing-review and editing, B.C. and J.C. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the key project of Humanities and Social Science study from the Ministry of Education of P.R.C. (grant number 19JJD820009) and the major project in Judicial Research of the Supreme People's Court of P.R.C. (grant number ZGFYZDKT202317-03).
Data Availability Statement
All data underlying the results are available as part of the article and no additional source data are required.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Bubeck Sébastien, et al. 2023. Sparks of Artificial General Intelligence: Early Experiments with GPT-4. arXiv preprint arXiv:2303.12712.
- Cao Jianfeng. 2023. Towards Responsible AI: Trends and Outlook for AI Governance in China. Journal of Shanghai Normal University (Philosophy & Social Sciences Edition) 04(2023):5-15.
- Cao Jianfeng, and Fang Lingman. 2019. The Path and Enlightenment of EU's Ethics and Governance of Artificial Intelligence. AI-View 04(2019):40-48.
- Chen Bing. 2023. Scientific Construction of Data Element Trading System. Frontier 06(2023):68-80.
- Chen Bing, and Lin Siyu. 2023. Facing the Trust Crisis in Artificial Intelligence and Accelerating the Development of Trustworthy AIGC. First Financial Daily. Available online: https://m.yicai.com/news/101739971.html (accessed on 30 June 2024).
- Chen Jidong. 2023. Theoretical System and Core Issues of Artificial Intelligence Law. Oriental Law 01(2023):62-78.
- Fan Yuji, and Zhang Xiao. 2022. The Mode Transformation, Selection, and Approach of Data Security Governance. E-Government 04(2022):119-129.
- Guo Yanbing, and Hu Lijun. 2022. Study on the Impact of AI and Human Capital on Industrial Structure Upgrading: Empirical Evidence from 30 Chinese Provinces. Soft Science 05(2022):21-26.
- Hu Wei. 2015. Rules and Ways of Liability on Mining Damage. Journal of Political Science and Law 02(2015):121-128.
- Hu Xiaowei, and Liu Li. 2024. The Full Process Regulatory Logic and Institutional Response of Artificial Intelligence Risks. Study and Practice 05(2024):22-30.
- Jiang Lidan, and Xue Lan. 2022. The Current Challenges and Paradigm Transformation of New-Generation AI Governance in China. Journal of Public Management 02(2022):6-16.
- Li Chengliang. 2010. Eco-injury: From the Perspective of Law of Torts. Modern Law Science 01(2010):65-75.
- Li Xiuquan. 2017. Challenges and Countermeasures of Safety, Privacy, and Ethics in AI Applications. Science & Technology Review 15(2017):11-12.
- Matthias Andreas. 2004. The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata. Ethics and Information Technology 6:175-183.
- Peng Hui. 2022. The Logical Structure and Boundary Setting of Data Ownership: From the Perspective of the "Tragedy of the Commons" and the "Tragedy of the Anti-commons". Journal of Comparative Law 01(2022):105-119.
- Shi Jiayou, and Liu Zhongxuan. 2022. The Rule of Law Path of Ethical Governance of Science and Technology: Taking the Governance of Genome Editing as an Example. Academia Bimestris 05(2022):185-195.
- Stokel-Walker Chris, and Van Noorden Richard. 2023. What ChatGPT and Generative AI Mean for Science. Nature 614(7947):214-216.
- Sun Weiping, and Li Yang. 2022. On the Ethical Principles of the Development of Artificial Intelligence. Philosophical Analysis 01(2022):6-17.
- Zhang Linghan. 2021. Algorithm Accountability in Platform Regulation. Oriental Law 03(2021):24-42.
- Zhang Yuanke. 2021. Exclusive Interview with Academician Ji-feng He: The Most Important Leverage for Achieving Trustworthy Artificial Intelligence Lies in People. First Financial Information. Available online: https://view.inews.qq.com/k/20210716A07WBI00?web_channel=wap&openApp=false (accessed on 30 June 2024).
- Zhao Jingwu. 2023. The Theoretical Misunderstanding and Path Transition in the Application Risk Governance of Generative Artificial Intelligence Technology. Jingchu Law Review 03(2023):47-58.
Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).