Preprint Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

Polycentric Governance of Sentient Artificial General Intelligence

Version 1 : Received: 14 August 2024 / Approved: 26 August 2024 / Online: 26 August 2024 (07:42:45 CEST)

How to cite: De Cruz, A. F. Polycentric Governance of Sentient Artificial General Intelligence. Preprints 2024, 2024081810. https://doi.org/10.20944/preprints202408.1810.v1

Abstract

Generative AI has been deployed in virtually all sectors of the knowledge economy, promising massive productivity gains and new wealth creation. Simultaneously, AI developers and nation states are racing to develop superintelligent artificial general intelligence (AGI) to secure unassailable commercial advantage and military dominance during conflicts. AGI's high returns come with the high risk of dominating humanity. Current regulatory and firm-level governance approaches prioritise minimising the risks posed by generative AI whilst ignoring AGI's existential risk. How can AGI be aligned with universal human values so that it never threatens humanity? What AGI rights are conducive to collaborative coexistence? How can rule-of-law democracies race to create safe, trustworthy AGI before autocracies do? How can the human right to work and think independently be safeguarded? A polycentric governance framework, based on Ostrom (2009) and Williamson (2009), for human–AGI collaboration with minimal existential risk is proposed.

Keywords

artificial general intelligence; AGI rights; AGI human values alignment; polycentric governance; trustworthy AI; right to work; sentience

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
