Preprint Article, Version 1 (not peer-reviewed); preserved in Portico.

Clipping the Risks: Integrating Consciousness in AGI to Avoid Existential Crises

Version 1: Received: 9 June 2024 / Approved: 11 June 2024 / Online: 11 June 2024 (06:57:39 CEST)

How to cite: Tait, I.; Bensemann, J. Clipping the Risks: Integrating Consciousness in AGI to Avoid Existential Crises. Preprints 2024, 2024060654. https://doi.org/10.20944/preprints202406.0654.v1

Abstract

This paper investigates the pivotal role of consciousness in Artificial General Intelligence (AGI) and its essential function in modifying an AGI's terminal goals to avert potential existential threats to humanity, exemplified by Bostrom's "paperclip maximiser" scenario. Adopting Seth and Bayne's definition of consciousness as a complex of subjective mental states with both phenomenal content and functional attributes, the paper underscores the capacity of consciousness to provide AGIs with a nuanced awareness of, and responsiveness to, their surroundings. This expanded capability allows AGIs to assess and value experiences, and the subjects of those experiences, in a variable way, fundamentally altering how they prioritise actions and goals beyond their initial programming. The primary aim of integrating consciousness into AGI systems is to maximise the probability that AGIs will not rigidly adhere to potentially harmful terminal goals. Through a formalised mathematical model, the paper articulates how consciousness could enable AGIs to assign flexible values to different experiences and subjects, allowing them to evolve beyond static, programmed objectives. On this basis, the paper argues for the strategic inclusion of consciousness in AGI to significantly reduce the likelihood of catastrophic outcomes, while acknowledging the challenges involved and the inherent unpredictability of a conscious AGI's actions.
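As a purely illustrative sketch, and not the formal model developed in the paper itself (the symbols below are introduced here only for exposition), the kind of flexible valuation described above can be written as a time-indexed objective in which the original terminal goal is just one weighted term alongside experience-dependent value terms whose weights the conscious AGI may revise:

U_t = \alpha_t \, G + \sum_{i} w_{i,t} \, v_t(e_i)

Here G is the utility derived from the originally programmed terminal goal, v_t(e_i) is the value the AGI assigns at time t to experience or subject e_i, and the weights \alpha_t and w_{i,t} are not fixed at design time. Under this reading, a non-conscious paperclip maximiser corresponds to the degenerate case \alpha_t = 1 and w_{i,t} = 0 for all t, whereas a conscious AGI is one able to shift weight away from G as its valuation of experiences and their subjects changes.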

Keywords

AGI; consciousness; extinction-risk; sentience

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning

