1.1. Background
As science moves faster than moral understanding, people struggle even to articulate their unease with the perils that novel technologies introduce [1]. As William Gibson put it: ’The future is already here – it’s just not very evenly distributed.’ Whether people are aware of it or not, artificial intelligence (AI) is taking us into the fourth industrial revolution, known as Industry 4.0. This is likely to bring AI-based technologies into multiple industries, particularly those involved in process or manufacturing activities. Healthcare, petroleum, power generation, automotive, and related fields are examples of industries that could benefit from the implementation of AI-based technologies, including Machine Learning (ML) and Deep Learning (DL) [2]. According to the McKinsey Global Institute, AI will add more than $15 trillion to global GDP [3]. However, the risks that AI poses to privacy, and the corresponding challenges of privacy protection and regulation, cannot be overlooked [4]. In early 2023, more than 30,000 signatories, including Steve Wozniak and Elon Musk, were so concerned by the rapid development of powerful AI systems that they called on all AI labs to pause immediately for at least six months [5]. As Sam Altman puts it:

’Society will face major questions about what AI systems are allowed to do, how to combat bias, how to deal with job displacement, and more... A gradual transition gives people, policymakers, and institutions time to understand what’s happening, personally experience the benefits and downsides of these systems, adapt our economy, and to put regulation in place. It also allows for society and AI to co-evolve, and for people collectively to figure out what they want while the stakes are relatively low.’ [6]
Do we really have enough time to put regulation in place and catch up with artificial intelligence? In 2021, the European Commission drafted the world’s first proposal for an Act regulating artificial intelligence, aiming to create a solid European regulatory framework for trustworthy AI that will protect people by preventing the risks of data breaches, misinformation, and non-compliance with intellectual property rights, among others. However, the Act will still need to go through further negotiation before it finally comes into force. Other relevant laws and regulations fall into domains such as data, electronic communications, cybersecurity, and consumer rights protection. While the long-established chemical, food, and pharmaceutical industries use evidence-based models that ensure the safety of their products EU-wide, such frameworks have yet to be seen within AI regulation [7]. In the past five years, the Data Protection Commission published more than one hundred cases [8], ranging from data breaches to privacy and transparency policy. Among all the risks, the most common and fastest-emerging privacy or security risk was the difficulty of maintaining compliance across various regulatory regimes with different requirements, such as data breaches during the use of AI or the data localization policy in the EU [9]. Since the General Data Protection Regulation (GDPR) came into force, authorities have issued a few hundred more fines [10]. Some of the fines imposed on prominent platform companies such as Google, Amazon, Instagram, and Equifax have sparked considerable interest and stimulated thought on the connection between privacy and personal information, trade secrets and company data, and how to balance the growth of the AI industry with regulation [11].
A comparable tension between innovation and regulation of generative artificial intelligence has emerged in China as well. With the promulgation and implementation of laws and regulations such as the Data Security Law and the Personal Information Protection Law, China has continuously improved its working mechanism for data security. In December 2022, the Central Committee of the Communist Party of China and the State Council issued a policy entitled "Building the basic data system and better utilizing the role of data production factors". This policy elevated data circulation and trading compliance to the level of national strategy, and it aims to establish a compliant and efficient data circulation and trading system covering both on-exchange and off-exchange trading.
The Interim Provisions on the Management of Generative Artificial Intelligence Services, jointly promulgated by seven departments including the Cyberspace Administration of China, officially came into force on August 15, 2023. This new policy centers on pre-emptive, preventive supervision. However, it still lacks a definitive resolution of the regulatory conundrum posed by the generation of inappropriate content by generative AI services. In parallel, expedient measures have been taken to mitigate the risks of data breaches and privacy infringements arising from the use of artificial intelligence. According to the latest report, China’s national internet information system conducted an exhaustive examination of 8,608 websites and digital platforms over the course of the previous year. This comprehensive review yielded a cascade of regulatory actions, including formal warnings issued to 6,767 entities, the imposition of fines or punitive measures upon 512, and the suspension of functions or updates for 621 others. Additionally, 420 mobile applications were removed from circulation. The licenses of illicit websites were either revoked or recorded with the competent telecommunication authorities, leading to the cessation of operations for 25,233 unauthorized websites. Furthermore, 11,229 pertinent case leads were transferred for further inquiry and action [12]. One well-known case is the cybersecurity inspection of the Chinese ride-hailing platform Didi Global. In July 2022, the State Internet Information Office (SIIO) imposed a fine of $1.19 billion on Didi Global Inc. in accordance with the Cybersecurity Law, the Data Security Law, the Personal Information Protection Law, and the Administrative Penalty Law of China, among other laws and regulations.
In contrast to the European Union’s proposed AI Act and China’s efforts to prevent the privacy risks posed by artificial intelligence, Southeast Asian countries have adopted a draft document titled "Guide to AI Ethics and Governance", which encourages companies to consider cultural differences and does not specify any unacceptable risk categories. As officials in Singapore and the Philippines have pointed out, hasty regulation could stifle their countries’ AI innovation. It appears that Southeast Asian countries are taking a "business-friendly" [13] approach to AI regulation. Similarly, other Asian countries such as Japan and South Korea have also eased AI regulation.
With different AI regulatory policies taking shape in different countries and regions, there is an urgent need for scientific argumentation about the factors that influence AI regulation and about whether legal regulation should take place, in order to promote a virtuous circle between AI technological breakthroughs and manageable development. Given the potential upheaval that AI could bring to the productivity landscape, we face new puzzles about social innovation and regulation in AI systems. The challenge is adopting regulation that is flexible enough to allow AI to ’create’ in the domain of intellectual property [14]. Is it possible to establish a consistent global regulatory framework? While the belief that something needs to be done is widely shared, there is far less clarity about what exactly can or should be done, or what effective regulation might look like [15].
1.2. Literature Review
This paper examines the legal frameworks pertinent to the governance of artificial intelligence (AI), concentrating on the delineation of jurisdiction and responsibilities assigned to various stakeholders within the AI milieu through the mechanisms of administrative law. Such regulatory stratagems are orchestrated to preemptively attenuate the inherent risks of AI applications, with the ultimate ambition of endorsing the beneficence of these technologies for humankind. At the heart of this legal inquiry is the imperative to precisely articulate a definition for AI, as this definition is instrumental in ascertaining the reach and intensity of regulatory oversight. Notwithstanding the ubiquity of the term "artificial intelligence" in common parlance and its extensive portrayal across diverse media platforms, the scholarly and policy-making arenas have yet to converge upon a universally endorsed explication of the term [16]. Nilsson delineates AI as the exhibition of intelligent comportment by artificial agents, encompassing attributes such as cognition, inference, learning, communication, and the capacity for feedback within intricate environments [17]. The European Commission’s 2018 blueprint for AI strategy characterizes these systems as manifesting intelligent behavior through environmental analysis and executing actions with a modicum of independence to fulfill explicit objectives [18]. Presently, we find ourselves amidst the ’narrow AI’ epoch, wherein AI constructs are proficient in a limited array of tasks. Prospectively, the advent of ’General AI’ is anticipated, which aspires to replicate a broad spectrum of human capabilities [19]. Furthermore, AI can be construed as the capacity for adaptation in contexts marred by a paucity of knowledge and resources [20]. This conceptualization posits AI as an overarching term that encapsulates methodologies devised to synthesize intelligence artificially, thereby equipping machines with the faculty to emulate human actions [21]. While unanimity in the academic discourse concerning a definition for AI remains elusive, the definitions proffered herein can be embraced as instrumental in demystifying the technical essence of AI in an academic framework. This elucidation serves as a vital precursor, establishing an intellectual base for the ensuing formulation and enforcement of jurisprudential statutes.
The spectrum of regulatory practices is comprehensive and exhibits significant variation across different international jurisdictions. For example, state apparatuses commonly enact oversight across various sectors to maintain economic stability. These areas include, but are not limited to, regulatory frameworks governing financial institutions, such as banks and capital markets. Additionally, state regulatory purview encompasses sectors such as education, food production and distribution, transportation, and healthcare. In the contemporary scholarly landscape, considerable attention has been allocated to the regulatory challenges posed by artificial intelligence (AI). It is vital to acknowledge the singular capabilities that AI technologies possess, which are inherently distinct and without historical precedent. This uniqueness provides a strong impetus for the proposition that AI requires its own bespoke and independent regulatory framework, distinct from those applied to existing technologies [22]. As AI systems gain increased autonomy and as the frequency and depth of human-AI interactions intensify, there emerges an exigent need for a careful evaluation of potential regulatory, ethical, and legal impediments. Governments are instrumental in fostering digital innovation and promoting the development of digital technologies for societal benefit [23]. Without appropriate regulatory frameworks, encompassing both soft and hard law approaches, even the most altruistically intended "Tech for Good" initiatives are susceptible to failure [24]. With regard to a global AI regulatory framework, some researchers have pointed out that international cooperation is vital in establishing common AI governance standards and addressing cross-border AI challenges [25]. The foundational work of Pigou illuminated various socio-economic challenges, including tariff policy, unemployment, price control and public finance, positing the necessity of rigorous regulation at all levels of governance (state, provincial, district, and local) to ensure societal welfare [26]. Contemporary discourse suggests that AI regulation should align with the Council of Europe’s standards on human rights, democracy, and the rule of law, insisting that any legal framework for AI development and deployment should embed principles that protect human dignity, uphold human rights, and respect democratic norms and the rule of law [27]. The High-Level Expert Group on Artificial Intelligence (HLEG AI) has underscored the imperative for new legal measures and governance structures to adequately shield the public from potential adverse impacts of AI, while simultaneously ensuring proper enforcement and oversight without impeding beneficial innovation [28]. Ensuring an appropriate level of technological neutrality and maintaining the proportionality of regulatory measures is paramount in mitigating the vast array of potential risks associated with AI utilization [29]. Moreover, stringent regulation of AI has been identified as a contributing factor in enhancing public willingness to engage with AI-powered robotic technologies [30]. Policy makers face a variety of regulatory strategies, the selection of which depends on numerous factors, including the degree of uncertainty, the nature of the interests involved, and the context or magnitude of AI development and usage [28]. Notably, once the need for regulation becomes evident, implementing corrective measures can be challenging due to entrenched decisions and established power dynamics [31]. Some scholars discuss the legal procedures of regulating AI. Buiten discussed the regulatory process of AI bias in terms of data input, algorithmic structure and content models [32]. Particular consideration is given to the domain of medical treatment, where AI introduces complex ethical questions. Scholarly proposals have thus been discussed for the establishment of regulatory mechanisms to navigate these emerging challenges. Such discourse evidences the multifaceted nature of AI regulation, highlighting a clear mandate for holistic and adaptive legal responses to the evolving landscape of AI technology [33].
A body of scholarly research has levied substantial critique against existing regulatory theories, especially within the purview of AI technology legislation. Efforts to legislate with foresight in the digital domain have largely been marked by failure [34]. Within this context, a regulatory framework for Artificial Intelligence (AI) is advocated that provides considerable latitude for technological progression [35]. Furthermore, there is a contention that the complexities introduced by AI have not been subjected to sufficient scrutiny, which suggests that the inception of a comprehensive regulatory system for AI may be premature [36]. In the scholarly critique of regulatory practices, concerns have been raised that poorly conceived regulations could impede the progress and deployment of beneficial AI technologies. Such regulations may fail to advance safety and control measures, thus undermining their intended purpose [37]. A strategic regulatory approach characterized by judicious restraint, or "masterly inactivity", is posited as a preferable pathway: masterly inactivity, except when prompted by law enforcement, is economically the most advantageous policy open to regulators [38]. This principle advocates a cautious approach that allows for the natural evolution of AI and may yield more favorable outcomes in the long term than precipitous regulatory actions taken without a comprehensive understanding of the AI landscape. Further, the public interest theory of regulation faces critiques primarily originating from the Chicago School of Law and Economics [39]. Libertarian scholars, including Nozick, have highlighted a pronounced divergence between rule enforcement as adjudicated by the judiciary and that carried out by regulatory agencies [42]. On the one hand, much of government regulation of industry was originated and is geared to protect the position of established firms against competition [40]; on the other hand, regulators find themselves at a strategic disadvantage owing to information asymmetries, a lack of knowledge to properly understand the implications of technologically enabled social relations, and a lack of resources and institutional mechanisms to intervene in a timely manner before a technology has been developed and widely adopted [7]. Like all regulation, AI regulation can be used both to enhance public welfare and to facilitate sovereign abuse of the public. More regulated legal systems appear to cost more and to produce greater delay, without offsetting benefits in terms of perceived justice [41]. In contrast with regulation, private litigation has many advantages: it is of no special interest to the government, and hence disputes can be resolved apolitically [42].
The regulatory dialogue regarding the inherent risks of artificial intelligence (AI) necessitates an exhaustive analysis. AI, as a cornerstone of the informational technology sector and a frontier innovation, is anticipated to exert substantial impacts on economic development. In scenarios where explicit regulatory frameworks are absent, emergent AI enterprises may confront the daunting task of maneuvering through a patchwork of inconsistent regulatory demands. This complexity could exacerbate their regulatory compliance obligations and potentially impede innovation by inhibiting or completely deterring entrepreneurial risk-taking. It is, therefore, critical to articulate a foundational theoretical framework and establish supervisory structures that are integral to AI regulation. Such a framework should aim to balance the promotion of innovation with the imperative of containing the risks associated with AI. Furthermore, the prevailing system of law enforcement and judicial processes has not yet evolved to include specific provisions for administrative regulation or the assessment of corporate liability concerning AI-related offenses. This gap prompts a crucial inquiry into how law enforcement entities might adapt existing legal norms to regulate issues arising from AI. A complex aspect of this inquiry involves ascertaining the appropriate allocation of liability in situations where the risk of infringement arises from AI-powered production. Moreover, the international arena displays diversity in the maturity levels of AI technologies across different jurisdictions, with the corresponding regulatory costs and benefits of AI manifesting variably. Given these discrepancies, it is essential to consider whether these varied conditions affect the feasibility of enacting a comprehensive and consistent global regulatory regime for artificial intelligence.