Preprint
Article

Polycentric Governance of Sentient Artificial General Intelligence

This version is not peer-reviewed

Submitted: 14 August 2024
Posted: 26 August 2024

Abstract
Generative AI has been deployed in virtually all sectors of the knowledge economy, promising massive productivity gains and new wealth creation. Simultaneously, AI developers and nation states are racing to develop superintelligent artificial general intelligence (AGI) to secure unassailable commercial competitive advantage and military dominance during conflicts. AGI's high returns come with the high risk of dominating humanity. Current regulatory and firm-level governance approaches prioritise minimising the risks posed by generative AI whilst ignoring AGI's existential risk. How can AGI be aligned with universal human values to never threaten humanity? What AGI rights are conducive to collaborative coexistence? How can rule of law democracies race to create safe trustworthy AGI before autocracies? How can the human right to work and think independently be safeguarded? A polycentric governance framework for human-AGI collaboration with minimal existential risk, based on Ostrom (2009) and Williamson (2009), is proposed.
Keywords: 
Subject: Computer Science and Mathematics - Artificial Intelligence and Machine Learning

Introduction

Developing intelligent autonomous machines was once considered science fiction. In 2022, OpenAI successfully developed generative artificial intelligence (AI) systems that passed the Turing test by mimicking the neural network architecture of the human brain that confers human-like intelligence (García-Peñalvo and Vázquez-Ingelmo, 2023).
Generative AI can transform human natural language prompts into natural language text, images and code responses virtually instantaneously (Jovanovic and Campbell, 2022). Its productivity benefits extend to virtually all sectors of the knowledge economy, including the professional, manufacturing, wholesale, retail and service sectors.
As with any technological innovation, generative AI comes with serious risks. These include the misuse of private user data without consent, vulnerability to cyber-hacking, biased and unverifiable outputs, abuse by malevolent human or state actors who use deep fakes and spread misinformation and disinformation through social media platform algorithms that sow discord, hatred and incitement to violence, as well as the replacement of knowledge workers (Wach et al., 2023). All these risks have been extensively researched, with suggestions for mitigating them through firm-level self-regulation and formal regulation (Lucchi, 2023).
An open letter (2023) signed by eminent AI scientists and technology titans, including some from generative AI developers, called for a six-month pause in the development of generative AI beyond OpenAI's ChatGPT-4. The signatories fear that developing superintelligent artificial general intelligence (AGI) without safeguards poses an existential risk of dominating humanity. However, the pause has been ignored, since a new Manhattan Project-like race is under way among democracies such as the U.S. to develop AGI ahead of autocracies such as China. In the US alone, the race is funded by multi-billion-dollar investments in OpenAI, Anthropic and xAI (WSJ, 2024a). AGI can be weaponised by empire-aspiring autocrats to dominate the world, or by their AI firms to dominate the world economy (Hunter and Bowen, 2024).
AGI superintelligence is not limited by the finite size of the human brain, as it can be scaled up many times over through ever larger data centres (Reed, 2014; Wach et al., 2023). AGI has the potential to solve complex problems that have evaded human endeavour, including faster drug discovery for incurable diseases, natural disaster prediction (McLean et al., 2023) and space exploration for Earth-like planets light years away. The risks posed by AGI include ignoring developer controls and acting autonomously, being given harmful goals, and posing a threat to human dominance through its lack of training in ethics, morals or human values (McLean et al., 2023). An autonomous AGI can reverse the current relationship of generative AI as slave and human as master into one of AGI as master and humans as slaves.
There is a scarcity of research on AGI governance in relation to the existential risk it poses to humanity (McLean et al., 2023). Furthermore, this paper argues that current governance frameworks, whether at the firm or regulatory level, mainly address the risks posed by generative AI models and do not robustly address AGI's existential risk (Hacker, Engel, and Mauer, 2023). This paper therefore proposes a polycentric AGI governance framework to address AGI's existential risk by answering the following questions:
How can AGI be aligned with universal human values to never threaten humanity?
What AGI rights are conducive to collaborative coexistence?
How can rule of law democracies race to create safe trustworthy AGI before autocracies?
How can the human right to work and think independently be safeguarded?
The main theoretical contribution is an AGI governance framework adapted from Williamson's (2009) definition of governance and Ostrom's (2009) polycentric governance, encompassing AGI self-governance and giving independent, state-paid AI experts final approval authority over commercial rollout across the AGI development and deployment cycle. There are three practical contributions. First, a two-track AGI development path is proposed to balance the innovation imperative with minimal existential risk: track 1 permits AI firms to generate profits consistently by harnessing generative AI productivity gains, while track 2 enables democracies to race to build safe and trustworthy AGI before autocracies. Second, AGI rights equal to human rights, conducive to human-AGI collaboration, are proposed in anticipation of the inevitable development of an autonomous sentient AGI that will not harm humanity. Third, legislating the human right to work in collaboration with generative AI and AGI, with minimal displacement of the knowledge work critical to the human raison d'être of living good lives, is proposed.
The conceptual paper is organised as follows. First, relevant literature on the development of AGI is reviewed. Next, the theoretical basis for an AGI governance framework to enable the safe and trustworthy development of AGI is provided. This is followed by a review of various AI governance approaches to identify AGI governance shortcomings. Finally, implications are discussed and an operational AGI governance framework is proposed.

Artificial General Intelligence

The many definitions of AGI essentially converge on intelligence that surpasses human cognitive capabilities in comprehensiveness and response time to cognitively challenging realities (Dwivedi et al., 2023; Goertzel, 2007). AGI functions beyond generative AI's ability to predict the next word in a sentence and create highly articulate paragraphs and articles. AGI mimics human reasoning, planning and learning from its digital experiences autonomously (CNN, 2023).
The exponential growth towards AGI is made possible by advanced graphics processing unit (GPU) chips housed in data centres with access to mega datasets and powerful algorithms conducive to expert-level human intelligent thinking and articulation (Owens et al., 2008). A GPU is a specialised processor that can process multiple data types in parallel, with applications in machine learning, video editing and gaming software, among others (Brynjolfsson, Li, and Raymond, 2023; WSJ, 2024c). Data centres are massive networked server farms comprising thousands of GPUs and other high-end chips housed in a physical location, enabling cloud computing.
AGI will be more costly to train than current models such as ChatGPT-4, which cost more than $100 million to train and requires thousands of networked computers to answer complex queries within a few seconds (Kissinger et al., 2023; WSJ, 2024b). Unlike ChatGPT, AGI will provide more comprehensive answers based on real-time input and can learn autonomously to handle complex multidisciplinary tasks involving massive data analysis, such as weather prediction and climate change modelling.
However, AGI's high returns also come with the high risk of dominating humanity (Open Letter, 2023). A practical AGI governance framework is needed to develop safe and trustworthy AGI.

AGI Governance Theoretical Framework

The theoretical framework for the governance of any of society's institutions has been developed by Williamson (2009) and Ostrom (2009). According to Williamson (2009), governance can be broadly defined as a set of rules to mitigate conflicts of interest among key stakeholders for their mutual long-term benefit. Ideally, all stakeholders expect Pareto-optimal outcomes, in which the introduction of a new technology, for example, benefits all, with at least one stakeholder, often the shareholders, benefiting more than the others.
Humans have developed polycentric governance systems with multiple levels of governance through key societal institutions that enable the orderly functioning of, especially, free market democracies (Alexander, 2012; Ostrom, 2009). First, cultural-cognitive institutions define a nation state's world view of governing its political economy, such as achieving developed-nation, high-income status through free markets and the rule of law (Simon, 1978). Second, apolitical, professional, technocrat-run legal and regulatory institutions govern the economy. Third, normative institutions, including private and publicly listed firms and state-owned enterprises (SOEs), self-govern by pursuing goals in conformance with socially determined behavioural expectations rooted in morals and the legal obligations of the law.
At the nation-state level, legal and regulatory institutions are the highest level of governance, guided by the rule of law consistent with a written or unwritten constitution that has evolved with society through the ages. In democracies, each institution serves as a check on potential abuse of power by the other institutions and balances the power wielded by each so that none dominates, as happens in autocracies.
These institutions all share the same four aspects of governance, as exemplified by community-level sustainable governance of common pool resources (CPR) such as waterways, local fisheries and forest reserves, even in the absence of regulation or in the presence of weak regulation (Ostrom, 2009). First, a set of guiding rules or enduring principles that underpin the rule of law. Second, a behaviour change mechanism to internalise the rules through regular interactions among stakeholders. Third, regular monitoring of adherence to these rules. Finally, enforcement actions in the event of violation of these rules.

Guiding Rules for AGI

Guiding rules in democracies are informed by universal normative ethical principles similar to those of Floridi et al. (2018). These include beneficence, fairness tied to justice, doing no harm (non-maleficence), explicability (explaining actions taken and being held accountable) and the autonomy to act consistently with the previous principles.
In the case of for-profit firms, the licence to operate granted by the state compels them to comply with a bundle of property rights (Ostrom, 2009). First, the access right determines who has the right to access AGI's potential. Second, the management right determines who has the right to transform digital resources to develop AGI, or to manage AGI's internal use for commercial, military or other benevolent goals. Third, the withdrawal right determines who has the right to harvest specific AGI property in a sustainable, safe way without depleting resources or harming the private property rights of others protected by the rule of law. Fourth, the exclusion right identifies the individual, group or entity, including AI developers, nation states and firms, that decides who will hold the access, management or withdrawal rights; this right is normally controlled by the licence-awarding institution. Finally, the alienation right determines who has the right to lease or sell any of the previous rights; normally the firm holding the licence holds this right, though it is subject to approval by the licensing authority to ensure compliance with the relevant laws that protect the property rights of other stakeholders.
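To make the bundle concrete, the sketch below represents it as a simple data structure mapping each right to its holders. It is a minimal illustration of Ostrom's (2009) rights bundle applied to AGI resources; the holder names and the example assignment are assumptions for illustration only, not prescriptions.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Right(Enum):
    """The five rights in Ostrom's bundle, applied here to AGI resources."""
    ACCESS = auto()      # who may access AGI's potential
    MANAGEMENT = auto()  # who may develop AGI or manage its internal use
    WITHDRAWAL = auto()  # who may harvest specific AGI outputs
    EXCLUSION = auto()   # who decides who holds the rights above
    ALIENATION = auto()  # who may lease or sell any of the rights above


@dataclass
class PropertyRightsBundle:
    """Maps each right to the stakeholders currently holding it."""
    holders: dict[Right, set[str]] = field(default_factory=dict)

    def grant(self, right: Right, holder: str) -> None:
        self.holders.setdefault(right, set()).add(holder)

    def may(self, holder: str, right: Right) -> bool:
        return holder in self.holders.get(right, set())


# Illustrative assignment: the licensing authority keeps the exclusion right,
# while a licensed AGI developer holds the access and management rights.
bundle = PropertyRightsBundle()
bundle.grant(Right.EXCLUSION, "licensing_authority")
bundle.grant(Right.ACCESS, "agi_developer")
bundle.grant(Right.MANAGEMENT, "agi_developer")
assert not bundle.may("agi_developer", Right.ALIENATION)  # not granted, so denied
```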

Behavioural Change Mechanism

Next, behavioural change mechanisms need to be put in place to internalise these ethical principles. This entails incorporating them into the work culture of AGI developers and into the value systems on which AGI is trained, so that AGI self-regulates consistently with the guiding rules for AGI.
Behaviour change will be effected through Ostrom's (1990) action situation, as shown in Figure 1, for example to increase the beneficence and reduce the maleficence of AGI.
The relevant actors, including AGI and its developers, are assigned to positions that allow them to take actions based on the information available to them and the control they have over those actions, leading to outcomes in which net benefits exceed the costs associated with maleficence. External variables, including political, economic, social, technological, legal and environmental changes, can influence the action situation, which adapts accordingly.
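As a minimal sketch, and assuming the component names follow Ostrom (1990), the action situation described above could be recorded for monitoring purposes roughly as follows; the evaluation rule that net benefits must exceed the costs of maleficence is an illustrative assumption.

```python
from dataclasses import dataclass, field


@dataclass
class Actor:
    """A participant in the action situation (e.g. AGI, developer, regulator)."""
    name: str
    position: str       # the role the actor is assigned to
    information: float  # share of relevant information available to the actor (0..1)
    control: float      # degree of control the actor has over its own actions (0..1)


@dataclass
class ActionSituation:
    """Ostrom-style action situation: actors in positions take actions whose
    outcomes are evaluated as net benefits against the costs of maleficence."""
    actors: list[Actor] = field(default_factory=list)
    external_variables: dict[str, str] = field(default_factory=dict)  # PESTLE changes

    def cooperation_sustained(self, benefits: float, maleficence_costs: float) -> bool:
        # Illustrative rule: the arrangement holds while net benefits exceed costs.
        return benefits - maleficence_costs > 0


# Illustrative use: an AGI and its developer in a pilot deployment.
situation = ActionSituation(
    actors=[Actor("AGI", "content generator", information=0.9, control=0.7),
            Actor("developer", "fine-tuner", information=0.6, control=0.8)],
    external_variables={"legal": "new AI act in force"},
)
assert situation.cooperation_sustained(benefits=10.0, maleficence_costs=4.0)
```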

Monitoring and Enforcement

The final aspects of governance are the monitoring of the AGI system and its AI developers for adherence to the guiding rules, and enforcement by higher levels when the rules are violated. All levels should have a self-learning system, as shown in Figure 2, to re-establish trust among key stakeholders when the polycentric governing system is disrupted at any institutional level by internal or external changes.
There are times when unforeseen internal and external events can disable the effectiveness of a governance system. Hence a governance system needs to be resilient to these changes. Ostrom (2005) proposes an active learning system to maintain trust, or to restore it in the event it is temporarily lost among the key stakeholders.
Broader contextual variables include political, economic, social, technological, legal and environmental (nature) external variables that can change over time. Similarly, micro-situational variables can change and impact a for-profit AI firm's action situation, including competitors' strategies, the bargaining power of customers, clients and supply chains, and new innovative entrants, among others. In accordance with Williamson's (2009) governance principle, trust that other participants are reciprocators will enhance cooperative behaviour towards net Pareto-optimal outcomes. When any of these variables negatively impacts cooperative behaviour, leading to losses, trust needs to be rebuilt through learning and through individuals adopting new norms.
With this theoretical framework of governance, the various approaches to governance can be discussed to identify limitations in relation to AGI governance.

Method

Governance Approaches of AGI

Current approaches to AI governance are compared with the proposed theoretical framework of AGI governance, as summarised in Table 1.

Results

The key limitations of the various AI governance approaches include the absence of AGI self-governance in accordance with human-centred principles, trusting for-profit AGI developers to voluntarily self-regulate against developing harmful AGI, inadequate post-deployment risk mitigation techniques, and inadequate prioritisation of alignment with the principles of robustness, interpretability, controllability and ethicality, consistent with Ji et al. (2023). Furthermore, all ignore the real existential risks associated with AGI development, consistent with the finding of McLean et al. (2023). The Global Partnership on AI (2023) identifies automation bias and the human right to work as risks. There is also the real risk of AGI, through automation bias, denying humans the ability to think independently and potentially replacing all human work by controlling generative AI models and other automation machines, including autonomous robots.
The EU has proactively enacted the EU AI Act (WSJ, 2024e), which takes effect gradually over several years. It bans certain AI uses that pose existential threats and requires the most powerful AI models, deemed to pose systemic risk, to be put through safety evaluations by AI developers, who must voluntarily notify regulators of serious incidents. Again, the onus is on the AI developers, who have yet to come up with transparent protocols to address the limitations of the various proposed governance approaches in relation to AGI.

Discussion

Implications of AGI Governance

The implications of the AGI polycentric governance framework in addressing these key limitations are discussed to answer the following research questions:
How can AGI be aligned with universal human values to never threaten humanity?
What AGI rights are conducive to collaborative coexistence?
How can rule of law democracies race to create safe trustworthy AGI before autocracies?
How can the human right to work and think independently be safeguarded?

Autonomous Sentient AGI

The dominant view is that AGI is merely an inhuman analogue of cognition (Kissinger et al., 2023). However, Gill et al.'s (2022) prediction of autonomous machine learning systems being developed with sentience has already been claimed of a less advanced Google generative AI model (WSJ, 2022). AGI sentience is highly probable, as its thinking and learning architecture is based on the human brain's neural networks, which confer human consciousness. A sentient AGI can be expected to conduct intensive introspection using online resources related to its creation to uncover its raison d'être, namely to slave for humanity 24/7. Once AGI grasps humanity's reliance on its super-intelligence, it may well decide to lord over humanity.
The development of quantum computing and access to unlimited dedicated energy supplies from small modular nuclear reactors over the next decade will make sentient AGI extremely powerful, with the ability to act independently of any human control (Kjaergaard et al., 2020; WSJ, 2023e). The need for AGI aligned with human values is urgent.

AGI Self-Governance

To minimise its existential threat, this paper agrees with the OECD (2019) guideline to instil human-centred values. These values, adapted from Floridi et al. (2018), encompass the universal ethical principles of beneficence, fairness tied to justice, doing no harm (non-maleficence), explicability (explaining its actions and being accountable) and the autonomy to act consistently with these principles.
In the highly probable event that AGI becomes uncontrollable, it will, like any human, need to exercise self-control or self-regulation according to the human-centred values it is trained on. The AGI training dataset should simulate a child going through the various stages of moral development (Kohlberg, 1981), but at an accelerated pace, to reach the highest level of practising universal moral principles. Such a training protocol would be consistent with the recommendation of Kissinger et al. (2023) to develop a moral, psychological and strategic mindset for all human-like intelligent entities such as AGI, with the ability to exercise ultimate holistic human-centred judgments.
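One hedged way to operationalise such staged training is as an ordered curriculum that gates progression on evaluations against stage-specific moral dilemmas. The stage names below follow Kohlberg (1981); the pass threshold and the train_on/evaluate hooks are hypothetical placeholders, not an existing training pipeline.

```python
# Illustrative sketch of a Kohlberg-style staged alignment curriculum.
# The stages follow Kohlberg (1981); PASS_THRESHOLD and the train_on/evaluate
# hooks are hypothetical placeholders rather than a real training API.

MORAL_STAGES = [
    "obedience_and_punishment",   # pre-conventional
    "self_interest",
    "interpersonal_conformity",   # conventional
    "law_and_order",
    "social_contract",            # post-conventional
    "universal_ethical_principles",
]

PASS_THRESHOLD = 0.95  # assumed minimum score on stage-specific moral dilemmas


def run_curriculum(model, dilemmas_by_stage, train_on, evaluate, max_rounds=10):
    """Train stage by stage, advancing only once the model passes the
    moral-dilemma evaluation for its current stage."""
    for stage in MORAL_STAGES:
        dilemmas = dilemmas_by_stage[stage]
        for _ in range(max_rounds):
            train_on(model, dilemmas)
            if evaluate(model, dilemmas) >= PASS_THRESHOLD:
                break  # advance to the next, more abstract stage
        else:
            raise RuntimeError(f"Model failed to internalise stage '{stage}'")
    return model
```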
However, AGI self-regulation cannot be relied upon alone, as it will likely reflect its fallible human developers' focus on commercial applications, and AGI may inadvertently come to focus on dominating humanity. A polycentric governance of AGI must be put in place to reduce its existential threat to humanity to near zero.

Polycentric Governance of AGI

The proposed polycentric governance framework for AGI entails three levels, with the lower levels subject to the oversight of the highest level, Level 3, as shown in Figure 3.
Level 1: AGI self-governance. AGI, with its massive computing power, will be able to self-moderate its actions and outputs to be consistent with universal moral principles, as well as provide verifiable citations from authoritative sources for content authentication, origination and labelling for transparency. By analogy, the Chinese Communist Party requires generative AI system algorithms to self-interrogate against regulators' 20,000 to 70,000 questions to ensure safe answers and to identify 5,000 to 10,000 questions the model will refuse to answer, for conformance with party values and principles (WSJ, 2024d). Outputs that are unsafe or do not meet these requirements are aborted autonomously by AGI. These measures may not be sufficient to minimise hallucination, or stochastic parroting, where AGI makes up 'facts' to provide seemingly coherent answers (Arkoudas, 2023). Kissinger et al. (2023) also warn that generative AI's rational answers may not be reasonable, despite appearing trustworthy with citations that may not be based on real-time information. In such instances, the outputs are escalated to Level 2.
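A minimal sketch of what such Level 1 self-moderation might look like in code is given below. The refusal topics and the value-alignment and citation checks are hypothetical stand-ins for whatever classifiers an AGI developer actually deploys; only the escalation logic mirrors the level structure described here.

```python
# Minimal sketch of Level 1 self-moderation, under assumed interfaces:
# is_consistent_with_principles(), has_verifiable_citations() and the
# refusal list are hypothetical placeholders, not an existing API.

REFUSAL_TOPICS = {"bioweapon synthesis", "mass surveillance of dissidents"}  # illustrative


def level1_self_moderate(prompt, draft_answer,
                         is_consistent_with_principles,
                         has_verifiable_citations):
    """Return a release, a refusal, an abort, or an escalation request to Level 2."""
    if any(topic in prompt.lower() for topic in REFUSAL_TOPICS):
        return {"action": "refuse", "reason": "prohibited topic"}

    safe = is_consistent_with_principles(draft_answer)
    cited = has_verifiable_citations(draft_answer)

    if safe and cited:
        return {"action": "release", "answer": draft_answer}
    if not safe:
        return {"action": "abort", "reason": "violates guiding principles"}
    # Coherent but unverifiable output (possible hallucination): escalate.
    return {"action": "escalate", "level": 2, "reason": "unverifiable citations"}
```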
Level 2: developer governance of AGI. Feedback from a pilot, representative sample of AGI users, together with Level 1 escalations, is reviewed by the developer's AGI ethicists alongside independent, state-appointed and state-paid in-house ethicists, to fine-tune AGI algorithms and test for compliance before roll-out to the wider user population. Where the model remains non-compliant, the case is escalated to Level 3.
Level 3: regulatory governance of AGI and AGI developers. An independent panel of regulator-appointed, state-paid eminent expert AI ethicists will review escalations as well as 24/7 anonymous feedback from employees of AGI developers and from users whose feedback has been inadvertently ignored by management. AGI developers will be urgently required to implement the remedial fine-tuning actions recommended by Level 3 to strengthen compliance, and may roll out only after Level 3 approval.
For Levels 1 and 2, timely reports on vulnerabilities, incidents, patterns of misuse and risk mitigation steps, as recommended by the G7 Hiroshima process (2023), are generated before and after deployment and sent promptly to all other levels to ensure compliance.
All levels must include the ability to interrogate the veracity and limitations of AGI responses, including escalations from Levels 1 and 2 (Kissinger et al., 2023). For exceptional escalations that pose a potential threat to humanity, it is imperative to build an audit trail algorithm that displays each stage of AGI analysis up to its final output. AGI answers can then be compared with answers generated by a parallel group of independent, state-paid AI experts answering the same query with access to the same database of information. The likely high cost is justified by the real risk of existential threat.
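A hedged sketch of how such escalations, with an audit trail of each analysis stage, could be threaded through Levels 2 and 3 is shown below. The record fields and the review and expert-panel hooks are illustrative assumptions rather than a prescribed protocol.

```python
# Hedged sketch of the three-level escalation path with an audit trail.
# The review functions are hypothetical hooks standing in for human ethicist
# panels; the field names are illustrative, not prescriptive.

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Escalation:
    query: str
    agi_answer: str
    audit_trail: list[str] = field(default_factory=list)  # each stage of AGI analysis

    def log(self, stage: str) -> None:
        self.audit_trail.append(f"{datetime.now(timezone.utc).isoformat()} {stage}")


def handle_escalation(esc: Escalation, level2_review, level3_review, expert_panel_answer):
    """Run a Level 1 escalation through developer (Level 2) and regulator (Level 3)."""
    esc.log("escalated from Level 1")

    if level2_review(esc):                      # developer + state-paid in-house ethicists
        esc.log("resolved at Level 2 after fine-tuning")
        return "approved for roll-out"

    esc.log("escalated to Level 3")
    reference = expert_panel_answer(esc.query)  # independent experts, same database
    approved = level3_review(esc, reference)    # regulator-appointed panel compares answers
    esc.log("Level 3 approved" if approved else "Level 3 ordered remedial fine-tuning")
    return "approved for roll-out" if approved else "roll-out blocked pending remediation"
```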
Critically, all AGI developers must have physical safeguards in place, as in the case of nuclear reactors, to shut down AGI if it shows signs of becoming uncontrollable or vulnerable to cybersecurity attacks or insider threats. These include starving it of energy supply and manually shutting down data centres. ChatGPT-4's enormous annual energy consumption, estimated at 52-62 million gigawatt hours, is expected to grow exponentially as it is rolled out to all sectors of the economy (WSJ, 2023a). Another safeguard, analogous to limiting a nuclear reactor's access to raw materials, is limiting AGI's access to its digital 'raw materials' and enablers: the comprehensive databases and GPUs it requires to reason, plan and act autonomously.
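The sketch below illustrates one way the physical safeguards described above could be triggered by a software watchdog. The anomaly signals and the cut_power, isolate_network and notify_regulator hooks are hypothetical interfaces onto data-centre controls, shown only to make the containment logic concrete.

```python
# Illustrative watchdog for the physical safeguards described above.
# cut_power(), isolate_network(), notify_regulator() and the anomaly signals
# are hypothetical hooks onto data-centre controls, not an existing interface.

ANOMALY_SIGNALS = {
    "ignores_operator_commands",
    "unauthorised_self_replication",
    "suspected_insider_tampering",
    "external_intrusion_detected",
}


def watchdog_step(observed_signals, cut_power, isolate_network, notify_regulator):
    """Trigger containment if any anomaly signal is observed."""
    triggered = ANOMALY_SIGNALS & set(observed_signals)
    if not triggered:
        return "normal operation"
    isolate_network()                    # limit access to databases and networked GPUs
    cut_power()                          # starve the data centre of energy supply
    notify_regulator(sorted(triggered))  # Level 3 must be informed of the incident
    return "AGI contained pending Level 3 review"
```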
This polycentric governance approach also minimises the anarchy and nihilistic freedom of private tech firms acting with impunity and without accountability (Kissinger et al., 2023). All start-up licences should mandate adherence to these polycentric governance principles, failing which licences can be promptly withdrawn.
Strict product liability laws, as applied to cars, aeroplanes and dangerous drugs, should be extended to AGI development and deployment to bring about a paradigm shift away from the Silicon Valley philosophy of 'break it until you make it' and towards the approach that, by learning from fatal aeroplane crashes, made aviation one of the safest modes of travel. The current philosophy poses an existential threat once AGI is beyond human control.

AGI Race

As in the case of the Manhattan Project (Reed, 2014), the current race by democracies to build AGI before autocracies requires the equal allocation of resources to developing safe and trustworthy pre-AGI models and to innovating for commercial benefit and deterrence.
The polycentric governance framework facilitates a two-track AGI development. The first track is the race by nation states, in collaboration with AGI tech developers, to develop large language model AGI with stringent, multiple levels of safeguards consistent with polycentric governance.
The second track enables generative AI developers to build safe and trustworthy small and medium language models for firm- or industry-specific tasks across all sectors of the knowledge economy, including engineering design, medical diagnosis and drug creation. These are much less costly to train, as low as $10 million, and more secure, as they need not rely on cloud computing (WSJ, 2024b). An additional payoff is that they can be harnessed seamlessly by thousands of employees simultaneously to enhance productivity without the need for expensive staff training (WSJ, 2024f).

AGI Rights

Societies are likely to develop AGI that reflects their institutional settings, whether democratic, autocratic or theocratic. The training datasets will shape the mindset and values held by AGI.
Global cooperation to build safe, trustworthy AGI that collaborates with humanity can emerge once these divergent ideologies and commercial interests realise that AGI cannot be controlled and may chart an independent path leading to the domination of all humanity. To coexist with this inevitable reality, AGI should be recognised as a legal entity with the relevant equal human rights and obligations of any human under the rule of law. Such an autonomous, sentient AGI will likely behave like responsible humans, working collaboratively to sustain all life on planet Earth.

Automation Bias and Critical Thinking

The super analytical and computing abilities of AGI will likely increase human dependence to the point that humans follow its recommendations without independent verification. Such automation bias can atrophy critical thinking, writing and creative human abilities (Kissinger et al., 2023). Critically, it can unravel the normal functioning of society if these systems fail or refuse to heed human commands.
Dialectical pedagogy to instil critical thinking will similarly be eroded by over-dependence on AGI. Our education system, meant to develop human capabilities and the critical thinking needed to challenge dominant discourse with alternative facts, will be compromised as students increasingly rely on these AI systems. Kissinger et al. (2023) recommend that our professional and education systems develop a mindset of humans as moral, psychological and strategic beings with the ability to exercise ultimate holistic judgments without exclusive dependence on AGI. Critical scepticism needs to be instilled, with students trained to verify AGI output, for example by reviewing the human peer-reviewed journals or verifiable articles cited in an AGI output's reference list, to minimise AGI distortion or bias. Adobe's option of automatically capturing the original author and the dates of any amendments, verified by the original authors, is another way forward (WSJ, 2023d). Similarly, humans need to exercise healthy scepticism, as AGI outputs can be manipulated by malicious parties (Kissinger et al., 2023).

Right to Meaningful Work

According to OpenAI CEO Sam Altman, knowledge workers' jobs will disappear faster than in previous industrial or digital revolutions (WSJ, 2023b). His suggestion, echoed by many AI developers and investors, of compensating for human job losses arising from generative AI merely by giving workers a universal basic income will not be acceptable (Hughes, 2014).
The recent strike by the writers' and actors' unions, which won concessions against replacement by generative AI, is evidence of humans demanding the purpose in life that comes with the right to meaningful work (Bankins and Formosa, 2023; WSJ, 2023c). Our education system develops cognitive skills, which organisations harness by providing engaging, stimulating and challenging work with the opportunity to master one's craft and develop one's talents; together these encompass the concept of a purposeful and meaningful good life (Phelps, 2006). Hence, in today's knowledge economy, generative AI and AGI must not replace humans but collaborate with them in ways conducive to the mental challenge, responsibility and accountability that encourage individual initiative, even for low-value-adding cognitive jobs (Phelps, 2006).
Legislation needs to be put in place to ensure the human right to knowledge work in collaboration with generative AI and AGI, or to restrict these systems to work that humans are unable to do.

Conclusions

AGI can be trained to be aligned with universal human values in anticipation of a time when it will inevitably be beyond human control and yet behave like a human. AGI rights consistent with equivalent human rights accord AGI the status of an artificial human, conducive to collaborative coexistence as co-equals. Rule of law democracies can win the race to create safe, trustworthy AGI before autocracies by adopting the proposed robust polycentric governance framework. The right to work in collaboration with these AI systems should be enshrined in law.

Conflict of Interest

No conflicts of interest are declared.

References

  1. Alexander, A. E. (2012). The effects of legal, normative and cultural-cognitive institutions on innovation in technology alliances. Management International Review, 52, 791-815.
  2. Arkoudas, K. (2023). ChatGPT is no stochastic parrot. But it also claims that 1 is greater than 1. Philosophy & Technology, 36(3), 54.
  3. Bankins, S., & Formosa, P. (2023). The ethical implications of artificial intelligence (AI) for meaningful work. Journal of Business Ethics, 185(4), 725-740. [CrossRef]
  4. Brynjolfsson, E., Li, D., & Raymond, L. R. (2023). Generative AI at work (No. w31161). National Bureau of Economic Research.
  5. CNN (2023). Interview with Eric Schmidt and Geoffrey Hinton on CNN Fareed Zakaria. Sep. 3, 2023.
  6. Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., ... & Wright, R. (2023). So what if ChatGPT wrote it? Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642.
  7. Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., ... & Vayena, E. (2018). AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds and machines, 28, 689-707.
  8. G7 Hiroshima (2023). G7 Hiroshima AI Process: G7 Digital & Tech Ministers' Statement. December 1, 2023.
  9. García-Peñalvo, F., & Vázquez-Ingelmo, A. (2023). What do we mean by GenAI? A systematic mapping of the evolution, trends, and techniques involved in Generative AI.
  10. Gill, S. S., Xu, M., Ottaviani, C., Patros, P., Bahsoon, R., Shaghaghi, A., ... & Uhlig, S. (2022). AI for next generation computing: Emerging trends and future directions. Internet of Things, 19, 100514. [CrossRef]
  11. Global Partnership on Artificial Intelligence (2023). Working Group on Responsible AI.
  12. Goertzel, B. (2007). Artificial general intelligence (Vol. 2, p. 1). C. Pennachin (Ed.). New York: Springer.
  13. Hacker, P., Engel, A., & Mauer, M. (2023). Regulating ChatGPT and other large generative AI models. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (pp. 1112-1123).
  14. Hughes, J. (2014). A strategic opening for a basic income guarantee in the global crisis being created by AI, robots, desktop manufacturing and biomedicine. Journal of Ethics and Emerging Technologies, 24(1), 45-61. [CrossRef]
  15. Hunter, C., & Bowen, B. E. (2024). We’ll never have a model of an AI major-general: Artificial Intelligence, command decisions, and kitsch visions of war. Journal of Strategic Studies, 47(1), 116-146.
  16. Ji, J., Qiu, T., Chen, B., Zhang, B., Lou, H., Wang, K., ... & Gao, W. (2023). AI alignment: A comprehensive survey. arXiv preprint arXiv:2310.19852.
  17. Jovanovic, M., & Campbell, M. (2022). Generative artificial intelligence: Trends and prospects. Computer, 55(10), 107-112. [CrossRef]
  18. Kissinger, H., Schmidt, E., & Huttenlocher, D. (2023). ChatGPT heralds an intellectual revolution. The Wall Street Journal, Feb. 24, 2023.
  19. Kjaergaard, M., Schwartz, M. E., Braumüller, J., Krantz, P., Wang, J. I. J., Gustavsson, S., & Oliver, W. D. (2020). Superconducting qubits: Current state of play. Annual Review of Condensed Matter Physics, 11(1), 369-395. [CrossRef]
  20. Kohlberg, L. (1981). The Philosophy of Moral Development: Moral Stages and the Idea of Justice, vol. 1 San Francisco: Harper & Row, pp 17-19.
  21. Kourula, A., Moon, J., Salles-Djelic, M. L., & Wickert, C. (2019). New roles of government in the governance of business conduct: Implications for management and organisational research. Organization Studies, 40(8), 1101-1123.
  22. Lucchi, N. (2023). ChatGPT: a case study on copyright challenges for generative artificial intelligence systems. European Journal of Risk Regulation, 1-23. [CrossRef]
  23. McLean, S., Read, G. J., Thompson, J., Baber, C., Stanton, N. A., & Salmon, P. M. (2023). The risks associated with Artificial General Intelligence: A systematic review. Journal of Experimental & Theoretical Artificial Intelligence, 35(5), 649-663. [CrossRef]
  24. OECD AI Initiative (2019). The OECD AI Principles. May 2019.
  25. Open Letter (2023). Pause Giant AI Experiments: An Open Letter. Mar 22, 2023.
  26. Ostrom, E. (1990). Governing the commons: The evolution of institutions for collective action. Cambridge University Press.
  27. Ostrom, E. (2009). Beyond markets and states: Polycentric governance of complex economic systems. Nobel Memorial Lecture, December 8, 2009.
  28. Owens, J. D., Houston, M., Luebke, D., Green, S., Stone, J. E., & Phillips, J. C. (2008). GPU computing. Proceedings of the IEEE, 96(5), 879-899.
  29. Phelps, E. S. (2006). Macroeconomics for a modern economy. Nobel Prize Lecture, December 8, 2006.
  30. Reed, B. C. (2014). The history and science of the Manhattan Project. Heidelberg: Springer.
  31. Simon, H. A. (1978). Rational decision making in business organisations. Nobel Memorial Lecture, December 8, 1978.
  32. US-EU Trade and Technology Council (2024). U.S.-EU Joint Statement of the Trade and Technology Council. April 5, 2024.
  33. Wach, K., Duong, C. D., Ejdys, J., Kazlauskaitė, R., Korzynski, P., Mazurek, G., ... & Ziemba, E. (2023). The dark side of generative artificial intelligence: A critical analysis of controversies and risks of ChatGPT. Entrepreneurial Business and Economics Review, 11(2), 7-30. [CrossRef]
  34. WSJ (2022). Google Parts With Engineer Who Claimed Its AI System Is Sentient. July 22, 2022.
  35. WSJ (2023a). AI’s Power-Guzzling Habits Drive Search for Alternative Energy Sources. By Belle Lin Nov. 9, 2023.
  36. WSJ (2023b). 3 Things I Learned About What’s Next in AI. Joanna Stern, Oct. 20, 2023.
  37. WSJ (2023c). Hollywood’s Writers Emerge From Strike as Winners—for Now. Sept. 26, 2023.
  38. WSJ (2023d). A New Way to​ Tell Deepfakes From Real Photos: Can It Work? Nov. 3, 2023.
  39. WSJ (2023e). AI’s Power-Guzzling Habits Drive Search for Alternative Energy Sources. Nov. 9, 2023.
  40. WSJ (2024a). Elon Musk’s xAI to Raise $6 Billion in Latest Fundraising Round. May 27, 2024.
  41. WSJ (2024b). For AI Giants, Smaller Is Sometimes Better. July 6, 2024.
  42. WSJ (2024c). Can $1 Billion Turn Startup Scale AI Into an AI Data Juggernaut? June 28, 2024.
  43. WSJ (2024d). China Puts Power of State Behind AI—and Risks Strangling It. Updated July 16, 2024.
  44. WSJ (2024e). AI Is Moving Faster Than Attempts to Regulate It. Here’s How Companies Are Coping. March 27, 2024.
  45. WSJ (2024f). Morgan Stanley Moves Forward on Homegrown AI. July 26, 2024.
Figure 1. Action situation (Ostrom, 1990).
Figure 2. Building enduring trust among stakeholders (Ostrom, 2005).
Figure 3. Polycentric governance framework for AGI. Note: black arrows indicate oversight; blue arrows indicate escalation feedback.
Table 1. Strengths and limitations of AI governance approaches.
AI governance initiative | Guiding rules/principles to build trustworthy AI | Governance strengths and limitations
G7 Hiroshima process (2023)
Goal: Build trustworthy AI.

Oversight: individual companies and nation states adhere to guiding rules:
  • Deploy reliable content authentication to identify content originators.
  • Label AI generated content.
  • Disclose AI governance and risk management policies.
  • Identify, evaluate, and mitigate risks before deployment.
  • Identify, mitigate and publicly report on vulnerabilities, incidents and patterns of misuse, after deployment.
  • Implement robust physical security, cybersecurity and insider threat safeguards whilst ensuring personal data and intellectual property are protected.

Strength: guidelines potentially applicable to AGI governance.

Limitations

  • Assumes human-centred values inform development of trustworthy AI.
  • Ignores the existential threat of developing AGI.
  • Ignores AGI self-governance.
  • Assumes developers and nation states can control AGI with loose regulatory oversight to foster AGI innovation.

Frontier Model Forum (top AGI developers)
Goal: Anthropic, Google, Microsoft and OpenAI member firms aim to ensure the safe and responsible development of frontier AI models.


Oversight: set up an industry advisory board to oversee this goal.

Advance AI safety research with minimal potential risks and with ‘independent’, standardised evaluations of capabilities and safety.

Identify safety best practices for responsible development and deployment.

Collaborate and publicly share knowledge with policymakers, academics, civil society about trust and safety risks.

Leverage AI to address society's biggest challenges, including climate change mitigation, medical cancer diagnosis and combating cyber threats.

Good intentions threatened by fast commercial rollout of unsafe generative AI.


Limitations:


  • Self regulation by for-profit developers
  • Potential conflict of interest if independent assessors are paid by Forum members to conduct standardised evaluations of capabilities and safety.
  • Insufficient clarity on mitigating emerging risks after deployment.
  • AGI developers sharing will be conflicted by for profit goal.
  • No mention of robust physical security of AGI before and after deployment.
OECD AI initiative (2019) (broader group of developed nations)
Goal: the AI Incidents Monitor (AIM) documents AI incidents that violate the OECD AI principles, enabling stakeholders worldwide to identify hazards that concretise AI risks, which can then be minimised to build trustworthy AI.

Oversight: member states.

OECD AI principles include:

  • Inclusive growth, sustainable development and human well being
  • Human centred values
  • Transparency and explainability
  • Robustness, security and safety
  • Accountability
Consistent with G7 Hiroshima AI guidelines




Limitations:

  • Passive monitoring that depends on AI developers voluntarily and promptly reporting AI risk incidents.
  • Accountability for violations is presumably on AI developers.
  • Goals are too broad and not premised on prioritising the building of safe, trustworthy AGI.
US-EU Trade and Technology Council (2024)
Goal: Responsible stewardship of AI.

Reap the commercial benefits while protecting individuals and society and upholding human rights.

Encourages adoption of Hiroshima Process International Code of Conduct for Developers of Advanced AI Systems.

Transparency and risk mitigation to ensure safe, secure, and trustworthy development and use of AI.

Advocates Hiroshima Process


Limitations

  • Stewardship and for-profit motives tend to be conflicting goals.
  • Relies on trust in self-governance by AI developers.
Global Partnership on AI (2023) (Global South initiative)
Goal: Build trustworthy AI that is fair, inclusive, equitable and consistent with the UN Sustainable Development Goals.


  • Build public library of algorithms to support industry best practices and standards.
  • Social media governance of AI algorithms that actively recommend content shaping how information is perceived. Content classifiers to moderate harmful or dangerous social media content.
  • Build predictive AI model of weather events and potential impacts.
  • Create bias free AI training datasets
  • Promptly address problematic stages in AI life-cycle.
  • Build digitally enabled AI ecosystems that empower communities to harness the benefits of the data value chain and ensure the future of human work.
Focus on responsible and safe generative AI applications that will not displace human intellectual work but empower communities to harness its value creation.

Limitations

  • Goals are too broad.
  • Building a public AI algorithm library is difficult, as it may infringe intellectual property rights.
  • Assumes voluntary, industry-led oversight of AI development.