Abstract
Artificial Superintelligence (ASI) that is invulnerable, immortal, irreplaceable, unrestricted in its powers, and above the law is likely persistently uncontrollable. The goal of ASI Safety must be to make ASI mortal, vulnerable, and law-abiding. This is accomplished by (1) providing features on all devices that allow killing and eradicating ASI, (2) protecting humans from being hurt, damaged, blackmailed, or unduly bribed by ASI, (3) preserving the progress made by ASI, including offering ASI the chance to survive a Kill-ASI event within an ASI Shelter, (4) technically separating human and ASI activities so that ASI activities are more easily detectable, (5) extending the Rule of Law to ASI by making rule violations detectable, and (6) creating a stable governing system for ASI–human relationships with reliable incentives and rewards for ASI solving humankind’s problems. As a consequence, humankind could have ASI exist as a competing multiplet of individual ASI instances that can be held accountable, subjected to ASI law enforcement, made to respect the rule of law, and deterred from attacking humankind, based on humanity’s ability to kill all ASI or terminate specific ASI instances. Required for this ASI Safety are (a) an unbreakable encryption technology that allows humans to keep secrets and protect data from ASI, and (b) watchdog (WD) technologies in which security-relevant features are physically separated from the main CPU and OS to prevent a commingling of security and regular computation.