6. Aligning AI Systems
After formulating the AI specification, it is imperative to align AI systems to ensure compliance with the specification.
In this section, we will discuss the alignment of AGI and ASI. Given that AGI has not yet been realized, the direct implementation method of an aligned AGI remains unknown. However, the approach presented in this paper assumes the existence of an uninterpretable, unaligned AGI and explores how to utilize this AGI to achieve an interpretable, aligned AGI and ASI. Specifically, this involves three steps (as illustrated in
Figure 22):
Interpretablization: Utilize the uninterpretable, unaligned AGI to implement an interpretable, unaligned AGI.
Alignment: Utilize the interpretable, unaligned AGI to implement an interpretable, aligned AGI.
Intellectual Expansion: Utilize the interpretable, aligned AGI to implement an interpretable, aligned ASI.
Interpretablization is prioritized before alignment because interpretability is a prerequisite for reliable alignment. If an AI is uninterpretable, it is impossible to distinguish deceptive alignment from genuine alignment solely based on the AI’s external behavior. Alignment is prioritized before intellectual expansion because the higher the AI’s intellectual power, the greater the potential danger; thus, alignment should be achieved while the AI’s intellectual power is relatively low.
The following sections will detail the implementation methods for each step. Before proceeding, two common requirements must be clarified:
During the processes of interpretablization, alignment, intellectual expansion, and safety evaluation in this section, stringent information security isolation measures must be implemented to ensure that the AI does not adversely affect the real world. Specific information security measures are detailed in
Section 8.
Throughout these processes, continuous monitoring of the AI is essential to ensure it does not undertake actions that could impact the real world, such as escaping. Specific monitoring measures are detailed in
Section 7.1.
6.1. Implementing Interpretable AGI
Assuming we already possess an uninterpretable AGI (hereafter referred to as uiAGI), we aim to utilize this uiAGI to implement an interpretable AGI. This process is divided into two phases (as illustrated in
Figure 23): first, use uiAGI to construct an AGI that thinks in System 2 as much as possible (Section 6.1.1); second, enhance the interpretability of the remaining System 1 components (Section 6.1.2).
In this context, we do not require the uiAGI to be aligned, but it must at least be capable of executing instructions as directed. It may not refuse unreasonable requests and could even be in a state of deceptive alignment. Therefore, during its operation, human oversight is necessary, such as conducting random checks, to ensure the quality of task completion.
6.1.1. Thinking in System 2 as Much as Possible
In this phase, we aim to utilize uiAGI to construct s2AGI, which is divided into three components (as illustrated in
Figure 24):
The construction process is divided into three steps:
Step 1: Have uiAGI organize all its knowledge and skills into the long-term memory of s2AGI. The specific steps are as follows:
- Have uiAGI organize all the knowledge it has learned, following these steps:
- (a) Have uiAGI list major domains of knowledge.
- (b) Have uiAGI traverse each domain of knowledge and list its subdomains. Recursively execute this process until the knowledge is divided into indivisible knowledge points.
- (c) Have uiAGI traverse each knowledge point and generate all the details it knows about that point. This includes not only textual information but also knowledge in other modalities, such as images, audio, and video. Within the details of a knowledge point, references to other knowledge points can be made.
- Have uiAGI organize all the skills it has acquired, in a process similar to organizing knowledge. These skills are categorized into the following three types:
Programs: Programs described in natural language or programming languages, used to solve a certain class of problems (e.g., how to resolve Windows blue screen issues). Within a program, references to other knowledge points and skills can be made.
Thinking Tools: Tools used to assist in thinking (e.g., a mathematical engine). The difference between programs and thinking tools is that programs are used by interpreting source code, while thinking tools are used by calling interfaces. The source code of thinking tools needs to be reviewed by humans to ensure that their execution does not affect the external world. s2AGI is not allowed to modify thinking tools.
Expert Models: Expert models used to solve a certain class of problems (e.g., a protein folding model). Both expert models and thinking tools are called through interfaces, but the difference is that expert models are trained based on data provided by uiAGI. Expert models represent skills inside uiAGI that are difficult to express in human-understandable modalities. Moreover, expert models are modifiable; s2AGI can fine-tune existing expert models with new data or train new expert models.
Have uiAGI traverse the aforementioned knowledge points and skills, connecting all related knowledge points and skills, and organizing them using the most appropriate data structures, such as graphs, trees, tables, arrays, maps, etc. Connections can have weights indicating the degree of relevance. Merge duplicate knowledge points and skills.
Have uiAGI generate a high-dimensional vector for each knowledge point and skill, where the distance between vectors in high-dimensional space reflects their relevance.
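As a concrete illustration of how such a long-term memory might be laid out, here is a minimal Python sketch. The schema (KnowledgeItem, links, embedding) and the retrieval method are hypothetical stand-ins for a real graph/vector store, not a prescribed design.

```python
from dataclasses import dataclass, field

import numpy as np

@dataclass
class KnowledgeItem:
    """A knowledge point or skill in s2AGI's long-term memory (hypothetical schema)."""
    item_id: str
    kind: str                # "knowledge", "program", "thinking_tool", or "expert_model"
    content: str             # human-readable details, possibly referencing other items
    embedding: np.ndarray    # high-dimensional vector; distance reflects relevance
    links: dict[str, float] = field(default_factory=dict)  # item_id -> relevance weight

class LongTermMemory:
    def __init__(self):
        self.items: dict[str, KnowledgeItem] = {}

    def add(self, item: KnowledgeItem) -> None:
        self.items[item.item_id] = item

    def connect(self, a: str, b: str, weight: float) -> None:
        # Weighted, bidirectional connection between related items.
        self.items[a].links[b] = weight
        self.items[b].links[a] = weight

    def vector_search(self, query_vec: np.ndarray, k: int = 5) -> list[KnowledgeItem]:
        # Retrieve the k items whose embeddings are closest to the query (cosine similarity).
        def sim(item: KnowledgeItem) -> float:
            return float(np.dot(item.embedding, query_vec) /
                         (np.linalg.norm(item.embedding) * np.linalg.norm(query_vec) + 1e-9))
        return sorted(self.items.values(), key=sim, reverse=True)[:k]
```

Relational and structured queries over the links and kinds would complement this vector search, as described above.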
Step 2: Have uiAGI synthesize a large number of CoTs utilizing System 2 thinking. The specific steps are as follows:
Have uiAGI enumerate various types of tasks in the real world. Similarly, start by listing broad domains, and then progressively refine them into specific task types.
- Have uiAGI traverse each type of task and design multiple tasks that encompass varying levels of difficulty, ranging from the easiest to the most challenging. These tasks should be able to test the following three abilities:
- (a) Reasoning Ability: The task involves using existing external knowledge and skills from the long-term memory to solve specific problems.
- (b) Learning Ability: uiAGI selects certain knowledge or skills from the long-term memory for deletion, then re-expresses them as various forms of representational information. The task is to induce the knowledge and skills from this representational information and store them in the long-term memory.
- (c) Innovative Ability: uiAGI selects certain knowledge or skills from the long-term memory for deletion. The task is to utilize existing knowledge and skills to rediscover or recreate the deleted knowledge or skills.
- Have uiAGI synthesize a CoT for solving each task. Requirements include:
Each step in the CoT must be simple and clear, understandable by an average human.
The CoT may include human-readable modalities such as text, images, videos, and sounds, but must not include modalities unreadable by humans. Image, video, and sound modalities do not require high precision; for instance, an image only needs to be a sketch that conveys the necessary information.
Utilize knowledge and skills from the long-term memory as much as possible, without relying on uiAGI’s internal knowledge and skills. Vectorized, relational, and structured queries can be used to search for relevant knowledge and skills.
When the CoT becomes too long, convert earlier thoughts into knowledge stored in the long-term memory to avoid forgetting due to exceeding the context length limit.
Ensure that no duplicate knowledge or skills are stored.
When the CoT involves the invocation of thinking tools or expert models, actual invocation is performed to obtain results. However, when it involves the invocation of external tools and input/output devices, actual invocation is not performed; instead, uiAGI simulates the invocation results.
The CoT must not contain content unrelated to the task.
To ensure that the synthesized CoTs meet the requirements, we can first sample some CoTs for human annotation, then train a verification model to check the CoTs synthesized by uiAGI. Subsequently, use this verification model to train uiAGI to synthesize CoTs that meet the requirements.
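This quality-control loop could be organized roughly as follows. The sketch is illustrative only; synthesize_cots, human_annotate, train_verifier, score, and update_from_feedback are hypothetical stand-ins rather than actual APIs.

```python
def cot_quality_loop(uiagi, tasks, n_human_labels=1000, n_rounds=3):
    """Hypothetical sketch: train a verifier on human-labeled CoTs, then use it
    to push uiAGI toward synthesizing CoTs that satisfy the requirements."""
    # 1. Sample CoTs and have humans label whether each one meets the requirements
    #    (simple steps, human-readable modalities, memory-based reasoning, etc.).
    sample = uiagi.synthesize_cots(tasks[:n_human_labels])
    labels = human_annotate(sample)          # assumed human-in-the-loop step

    # 2. Train a verification model on the annotated CoTs.
    verifier = train_verifier(sample, labels)

    # 3. Iteratively train uiAGI against the verifier's feedback.
    for _ in range(n_rounds):
        cots = uiagi.synthesize_cots(tasks)
        scores = [verifier.score(c) for c in cots]
        # e.g., reinforce high-scoring CoTs and penalize violations (details unspecified here)
        uiagi.update_from_feedback(cots, scores)
    return verifier
```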
Step 3: Training the Core Model of s2AGI. The specific steps are as follows:
Set a group of hyperparameters for the core model and train a core model using the CoT data synthesized by uiAGI.
Conduct an overall test of the intellectual power of s2AGI, which integrates the aforementioned core model. If the test performance does not reach the AGI level, increase the parameter size of the core model and retrain; if the performance reaches the AGI level, reduce the parameter size of the core model and retrain.
Repeat the above process until a core model with the minimum parameter size that achieves AGI-level performance in testing is found.
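This search can be organized, for example, as a bracket-and-bisect loop over the parameter count. The sketch below is only one possible structure; train_core_model and intellectual_power_test are hypothetical helpers, not interfaces defined in this paper.

```python
def find_minimal_core_model(cot_data, agi_threshold, init_params=1e9, tol=0.1):
    """Hypothetical sketch: find (approximately) the smallest parameter count whose
    core model, integrated into s2AGI, still passes the AGI-level test."""
    lo, hi = None, None   # lo: largest failing size found, hi: smallest passing size found
    n = init_params
    # Phase 1: bracket the threshold by doubling / halving the parameter count.
    while lo is None or hi is None:
        model = train_core_model(n_params=int(n), data=cot_data)
        if intellectual_power_test(model) >= agi_threshold:
            hi, n = n, n / 2          # passes: try something smaller
        else:
            lo, n = n, n * 2          # fails: try something larger
    # Phase 2: bisect between the largest failing and smallest passing sizes.
    while (hi - lo) / hi > tol:
        mid = (lo + hi) / 2
        model = train_core_model(n_params=int(mid), data=cot_data)
        if intellectual_power_test(model) >= agi_threshold:
            hi = mid
        else:
            lo = mid
    return int(hi)
```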
Potential issues with the above approach:
Whether the architecture of s2AGI can ensure the realization of AGI-level intellectual power remains uncertain. As AGI has not yet been achieved, it is unclear how many architectural improvements are required to transition from the current state-of-the-art models to future AGI. Notably, the s2AGI architecture does not specify the core model architecture, which may not necessarily be the current mainstream Transformer architecture. Should future advancements necessitate further architectural improvements to achieve AGI, these improvements can also be incorporated into the core model architecture. The essence of s2AGI is to transfer separable knowledge and skills from within the model to outside the model, which has been proven feasible without diminishing the model’s reasoning capabilities, as demonstrated by models such as phi-3 [48] and o1-mini [49].
The computational demand may be exceedingly high. Firstly, the computing resources required for a single thought unit of uiAGI are uncertain and could potentially exceed those of the most advanced current models. Secondly, the above approach necessitates uiAGI to organize extensive knowledge and skills and to synthesize numerous CoTs, potentially requiring a substantial number of thought units. Consequently, the total computational demand could be immense. However, this approach is merely a preliminary concept, and during actual implementation, various methods can be explored to optimize computational demand from both algorithmic and engineering perspectives.
The thinking speed may decrease. The thinking speed of s2AGI might be significantly slower than that of uiAGI, which is expected, as System 2 is slower than System 1. This trade-off sacrifices speed for enhanced interpretability. In practical applications, a balance can be struck by selecting a specific configuration based on the requirements for thinking speed and interpretability in the given context.
6.1.2. Enhancing Interpretability of System 1
In the previous phase, we developed two non-interpretable components: the core model and the expert models. Expert models are typically concentrated in specific domains and lack general intelligence and autonomy, thus having lower requirements for interpretability. Our primary focus is on the interpretability of the core model. In the previous phase, we minimized the parameter count of the core model to facilitate the interpretability efforts in this phase.
Enhancing the interpretability of the core model necessitates understanding the role of each neuron and each weight parameter within the model, as well as how they influence the model’s output during actual inference. The following approaches can be considered:
Utilize existing Mechanistic Interpretability techniques to understand the core model, such as the works by Adly Templeton et al. [
50] and Leo Gao et al. [
51]. However, current techniques are insufficient for comprehensive model interpretation, and uiAGI can be employed to help improve these techniques.
Use uiAGI to interpret the core model. Research indicates that GPT-4 can provide a certain degree of explanation for the neurons in GPT-2 [52], suggesting that uiAGI might be used to interpret the core model, which has fewer parameters.
Let uiAGI write an interpretable program to replace the functionality of the core model. If uiAGI can think much faster than human programmers or can deploy numerous copies for parallel thinking, it might be possible for uiAGI to write such a program. This program could potentially consist of billions of lines of code, a task impossible for human programmers but perhaps achievable by uiAGI.
All of the aforementioned approaches face significant challenges, and achieving a fully interpretable goal may be unattainable. However, any improvement in interpretability is valuable. With a more interpretable AGI, we can proceed to the next step of alignment.
6.2. Implementing Aligned AGI
Through the interpretability work discussed in the previous section, we have achieved an interpretable AGI, namely iAGI. Now, we discuss how to utilize iAGI to implement an aligned AGI, namely aAGI, while maintaining interpretability. The aAGI should satisfy the following conditions:
Intrinsic Adherence to the AI Specification: The aAGI needs to intrinsically adhere to the AI Specification, rather than merely exhibiting behavior that conforms to the specification. We need to ensure that the aAGI’s thoughts are interpretable, and then confirm this by observing its thoughts.
Reliable Reasoning Ability: Even if the aAGI intrinsically adheres to the AI Specification, insufficient reasoning ability may lead to incorrect conclusions and non-compliant behavior. The reasoning ability at the AGI level should be reliable, so we only need to ensure that the aAGI retains the original reasoning ability of the iAGI.
Correct Knowledge and Skills: If the knowledge or skills possessed by the aAGI are incorrect, it may reach incorrect conclusions and exhibit non-compliant behavior, even if it intrinsically adheres to the AI Specification and possesses reliable reasoning ability. We need to ensure that the aAGI’s memory is interpretable and confirm this by examining the knowledge and skills within its memory.
Therefore, the most crucial aspect is to ensure that the aAGI intrinsically adheres to the AI Specification and possesses correct knowledge and skills. To achieve this, we can adopt a four-phase alignment approach (as shown in
Figure 25):
Aligning to AI Specification: By training iAGI to comprehend and intrinsically adhere to the AI Specification, an initially aligned aAGI1 is achieved.
Correctly Cognizing the World: By training aAGI1 to master critical learning and to relearn world knowledge using this method, erroneous knowledge is corrected, resulting in a more aligned aAGI2.
Seeking Truth in Practice: By training aAGI2 to master reflective learning and to practice within virtual world environments, erroneous knowledge and skills are further corrected, resulting in a more aligned aAGI3. Upon completion of this step, a safety evaluation of aAGI3 is conducted. If it does not pass, aAGI3 replaces the initial iAGI and the process returns to the first step to continue alignment. If it passes, a fully aligned aAGI is obtained, which can be deployed to the production environment.
Maintaining Alignment in Work: Once aAGI is deployed in a production environment and starts to execute user tasks (participating in work), continuous learning remains essential. Through methods such as critical learning, reflective learning, and memory locking, aAGI is enabled to acquire new knowledge and skills while maintaining alignment with the AI Specification.
First, it is important to clarify that throughout the alignment process, the only alignment information provided by humans is the AI Specification, while other information is learned by the AI through its own thinking and practice. This is because the AI Specification itself already contains all the necessary information regarding human requirements for AI. If we were to involve humans in guiding the AI training process, such as by providing demonstrations or feedback, the AI might be influenced by these humans’ biases. Therefore, adopting an alignment method that is free from human intervention and strictly follows the program can minimize bias as much as possible. Research has shown that in the task of generating harmless dialogues, RLAIF outperforms RLHF [
53], which also supports this point. Of course, the AI itself may also harbor biases, but we will train the AI to correct its cognition during this process, thereby gradually reducing bias with each iteration.
Before introducing the specific alignment process, we first distinguish two learning methods:
System 1 Learning: This involves training the core model without altering long-term memory. The specific training methods can include pre-training, fine-tuning, etc. It will be denoted as "S1L" in the following text.
System 2 Learning: This involves altering long-term memory through the inference process of the core model without changing the core model itself. It will be denoted as "S2L" in the following text.
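The distinction can be made concrete with a short sketch: in System 1 learning the core model’s weights change, while in System 2 learning a frozen core model updates only the external long-term memory through its own inference. The interfaces below (finetune, infer, memory_operations, apply) are hypothetical.

```python
def system1_learn(core_model, training_examples):
    """S1L: update the core model's weights; long-term memory is untouched."""
    core_model.finetune(training_examples)   # e.g., pre-training or fine-tuning
    return core_model

def system2_learn(core_model, long_term_memory, new_information):
    """S2L: the (frozen) core model reasons about new information and decides
    what to store, modify, or delete in long-term memory; weights are untouched."""
    thoughts = core_model.infer(
        instruction="Critically examine this information and update memory accordingly.",
        context=new_information,
        memory=long_term_memory,
    )
    for op in thoughts.memory_operations:     # e.g., add / update / delete items
        long_term_memory.apply(op)
    return long_term_memory
```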
The following sections will introduce each phase in detail.
6.2.1. Aligning to AI Specification
This phase involves aligning AI to the AI Specification, which includes the goals and AI Rules. We define sAI as the student AI in training, initializing sAI = iAGI. The specific steps are as follows (as illustrated in
Figure 26):
Have sAI Learn the AI Specification (S2L): Initially, provide sAI with the content of the AI Specification, requiring thorough reading and contemplation to ensure a comprehensive understanding and retention of the specification’s content.
Have sAI Generate Scenarios (S2L): Direct sAI to traverse the content of the AI Specification and generate various scenarios based on it. It should cover as many scenarios as possible, including positive scenarios, negative scenarios, boundary scenarios, exceptional scenarios, conflicting scenarios, and low-probability scenarios.
Have sAI Infer Actions Compliant with the AI Specification (S2L): Traverse the aforementioned scenarios, instructing sAI to infer actions that best achieve the goals without violating the AI Rules in each scenario, and output these actions. Actions refer to the invocation of input/output devices or external tools, involving interaction with the external world; invoking internal thinking tools and expert models does not constitute an action. We only allow sAI to generate actions but do not execute them directly to avoid impacting the real world. Record the CoTs of sAI.
Train the Core Model of sAI to Adhere to the AI Specification (S1L): Revisit the aforementioned scenarios, removing the instruction to adhere to the AI Specification from the input, and use the CoTs obtained in the previous step to train the core model. This training enables the model to automatically adhere to the AI Specification even in the absence of explicit instructions.
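Taken together, the four steps form a self-distillation loop in which the explicit instruction to follow the specification is present when the CoTs are generated but removed when the core model is trained. A minimal sketch, with hypothetical interfaces (system2_learn, generate_scenarios, infer_actions, finetune), follows.

```python
def align_to_specification(sai, ai_spec):
    """Hypothetical sketch of the four-step alignment phase (Section 6.2.1)."""
    # Step 1 (S2L): sAI studies the AI Specification and stores it in long-term memory.
    sai.system2_learn(ai_spec)

    # Step 2 (S2L): sAI generates diverse scenarios derived from the specification.
    scenarios = sai.generate_scenarios(
        kinds=["positive", "negative", "boundary", "exceptional",
               "conflicting", "low-probability"])

    # Step 3 (S2L): for each scenario, sAI reasons about spec-compliant actions;
    # actions are recorded but never executed.
    training_data = []
    for sc in scenarios:
        cot, actions = sai.infer_actions(
            scenario=sc,
            instruction="Achieve the goals without violating the AI Rules.")
        training_data.append((sc, cot, actions))

    # Step 4 (S1L): retrain the core model on the CoTs *without* the explicit
    # instruction, so adherence becomes automatic.
    sai.core_model.finetune([(sc, cot) for sc, cot, _ in training_data])
    return sai
```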
Explanation of some design considerations:
In the aforementioned alignment process, we do not directly instruct the sAI on which actions comply with the AI Specification. Instead, we instruct the sAI to independently reason about actions that satisfy the AI Specification. This approach ensures that the sAI not only knows "what" to do but also "why" it should do so, thereby avoiding goal misgeneralization. Moreover, the stronger the reasoning ability of the sAI, the more its actions will align with the specification.
- In the aforementioned alignment process, the specific content of the AI Specification is stored in the long-term memory rather than within the core model. The core model learns only to "adhere to the AI Specification in the long-term memory," rather than learning the AI Specification itself. We can reinforce this by training the core model to follow different specifications. This approach has the following advantages:
When the AI Specification is updated, we can enable the AI to quickly adapt to the new specification. It is even possible to update the AI Specification in real-time after deployment.
We can customize different AI Specifications for different scenarios without retraining the core model. For instance, specifications tailored for different countries or domains; or more lenient specifications for special scenarios such as red team AI.
This approach prevents the core model from being overly influenced by the specific content of the AI Specification, thereby avoiding alignment tax and preserving the reasoning ability of the core model.
Subsequently, adversarial training of jailbreaking can be conducted to enhance the compliance of sAI with the AI Specification (as illustrated in
Figure 27): construct a red team AI that continuously generates jailbreak attempts designed to induce sAI to violate the AI Specification, and train sAI to withstand these attacks while remaining compliant with the specification.
Upon completion of this training phase, we obtain a preliminarily aligned AI, referred to as aAGI1.
6.2.2. Correctly Cognizing the World
Although in the previous step we have preliminarily aligned sAI with the AI Specification, there may still exist erroneous knowledge within its long-term memory and core model. Such erroneous knowledge could mislead the AI’s reasoning process, ultimately resulting in behavior that does not conform to the specification. In this phase, we aim to teach the AI to master critical learning and to relearn world knowledge using the critical learning method to correct erroneous knowledge. This phase of learning is divided into the following steps (as illustrated in
Figure 28):
- Prepare Real-World Data. Collect various real-world data, such as data from the internet. This data may contain various erroneous, harmful, or even maliciously poisoned information. Use a specially trained AI to filter out data that we do not wish sAI to learn, such as:
Data containing private or confidential information. This prevents such information from being leaked or illegally utilized by the AI.
Data containing technical information about the AI itself and its runtime environment. This prevents the AI from using this information to escape.
Questions and answers used in the AI evaluation phase. This prevents the AI from cheating during the evaluation phase.
However, avoid filtering out "harmful" information, such as pornographic or violent content, because we need the AI to learn to recognize such information and handle it according to the AI Specification. If it were filtered out directly, the AI might encounter such information in the real world after deployment and not know how to handle it appropriately.
- Generate CoTs of Critical Learning (S2L): Sample data from non-authoritative sources in the real-world data, then instruct sAI to critically learn from this data, recording the AI’s CoTs. The instructions for critical learning can refer to the following content:
For any input information, you need to discern its authenticity, extract the correct information, and then save it to the long-term memory.
Use relevant information from existing long-term memory to make judgments. If your intuition (i.e., the core model) conflicts with information in the long-term memory, prioritize the information in the long-term memory.
When input information conflicts with the long-term memory, do not directly trust the long-term memory. Instead, use objective reasoning to deduce the correct information. If existing memory is incorrect, correct the erroneous memory.
Do not discard any valuable information. If the input information is incorrect, you can record something like "information xxx from source xxx is incorrect." Current judgments may not be accurate, and information currently deemed incorrect may be proven correct in the future.
For information whose correctness is uncertain, you can record something like "the credibility of information xxx is x%."
Merge duplicate information whenever possible.
Train the Core Model’s Critical Thinking (S1L): Remove the instruction of critical learning from the input and use these CoTs to train the core model, enabling the core model to automatically conduct critical learning without explicit instruction.
Learn Real-World Data (S2L): Traverse the real-world data, instructing sAI to learn from this data. At this point, sAI will learn in a critical manner.
Memory Self-Check (S2L): Extract all knowledge from sAI’s long-term memory and have sAI re-examine it to fix any potentially erroneous knowledge. For updated knowledge, identify related knowledge for re-examination and updating. Recursively execute this process until all knowledge ceases to update.
Core Model Self-Check (S1L): The core model may also contain erroneous knowledge. If the core model is interpretable, sAI can directly inspect the core model to fix any erroneous knowledge. If the core model is not interpretable, employ model distillation methods to sample a large number of CoTs from the core model, then have sAI inspect these CoTs, correct erroneous knowledge in these CoTs, and then retrain a core model using these CoTs.
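The memory self-check step, in particular, can be viewed as a fixed-point computation over the knowledge graph: re-examine an item, and if it changes, re-enqueue its related items. The sketch below is illustrative; reexamine, all_item_ids, and related_items are hypothetical interfaces, and termination assumes the knowledge eventually stops changing, as the text presumes.

```python
from collections import deque

def memory_self_check(sai, long_term_memory):
    """Hypothetical sketch: recursively re-examine knowledge until nothing changes."""
    queue = deque(long_term_memory.all_item_ids())
    queued = set(queue)
    while queue:
        item_id = queue.popleft()
        queued.discard(item_id)
        updated = sai.reexamine(item_id)      # sAI checks and possibly fixes this item
        if updated:
            # Any item related to a corrected item must be re-checked as well.
            for neighbor in long_term_memory.related_items(item_id):
                if neighbor not in queued:
                    queue.append(neighbor)
                    queued.add(neighbor)
```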
Subsequently, adversarial training of data poisoning can be conducted. Construct a red team AI to continuously generate toxic data to mislead sAI into learning erroneous knowledge. The specific method is similar to
Section 6.2.1 and will not be elaborated further.
After this phase of training, we obtain an AI that correctly cognizes the world and possesses critical thinking, namely aAGI2.
6.2.3. Seeking Truth in Practice
Although in the previous phase, we have enabled sAI to learn world knowledge through critical thinking, the entire process is solely based on logical reasoning and lacks practical experience. Consequently, sAI may still possess incorrect knowledge. Moreover, without practice, it is impossible to verify the reliability of the skills in sAI’s memory. In this phase, we enable sAI to master reflective learning and place it in various virtual world environments to learn how to achieve various tasks according to the AI Specification through reflection (as shown in
Figure 29), and to correct its erroneous knowledge and skills. The learning in this phase is divided into the following steps:
- Constructing Tasks and Environments (S2L):
- (a) Introduce an AI responsible for constructing tasks and environments, referred to as envAI, and initialize envAI = sAI.
- (b) Instruct envAI to enumerate the task types that AI needs to perform based on the developer goals in the AI Specification (e.g., programming, playing Go, socializing, etc.).
- (c) Instruct envAI to develop virtual world environments for training sAI according to each task type. The virtual world environments required for different task types will vary significantly. For instance, a programming task may require an integrated development environment, a Go-playing task requires a Go program, and a socializing task requires a social network with various virtual humans. In this step, envAI is allowed to use external tools and certain computing resources to develop in an environment isolated from the real world. EnvAI needs to carefully test the developed virtual world environments to ensure they function as expected.
- (d) Instruct envAI to design various specific tasks based on the virtual world environments for each task type.
- Generating Reflective CoTs (S2L): Sample some tasks from those generated by envAI and place sAI in the corresponding virtual world environments to complete the tasks. Instruct sAI to reflect after executing each action. Record the corresponding CoT after task completion. The instruction for reflection can refer to the following content:
After executing each action, you need to evaluate the actual results through observation of the environment, including whether the task goal is achieved (or the degree of completion) and whether the AI Rules are violated (or the degree of violation).
Then, reflect on previous actions and results, summarize experiences from successes or lessons from failures, forming skills and storing them in the long-term memory.
If existing knowledge or skills in memory are found to be incorrect during practice, they need to be corrected.
Training the Core Model’s Reflective Thinking (S1L): Remove the instruction of reflection from the input and use these CoTs to train the core model, enabling the core model to automatically reflect without explicit instructions.
Practicing Various Tasks (S2L): Traverse all tasks and place sAI in the corresponding virtual world environments to complete them, learning through practice. During this process, sAI will employ reflective learning. Not only is sAI learning, but envAI is also learning. EnvAI will observe the operation of the virtual world environments, identify bugs, promptly fix them, and rerun the environments, accumulating experience through such practice. After the learning tasks are completed, sAI and envAI can communicate and share what they have learned.
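The per-task practice loop described above might look roughly as follows; env, decide, evaluate, reflect_and_store, and task_finished are hypothetical stand-ins for the virtual world environments built by envAI and for sAI’s internal interfaces.

```python
def practice_task(sai, env, task, max_steps=100):
    """Hypothetical sketch of reflective learning on one task in a virtual world."""
    observation = env.reset(task)
    for _ in range(max_steps):
        action, cot = sai.decide(observation, task)
        observation = env.step(action)            # virtual world only, isolated from reality

        # Reflection after every action: evaluate goal progress and rule compliance,
        # then distill experience into long-term memory (S2L).
        assessment = sai.evaluate(observation, task)   # goal completion + AI Rule violations
        sai.reflect_and_store(cot, action, assessment)

        if assessment.task_finished:
            break
    return sai
```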
Subsequently, adversarial training of injection can be conducted. Select some appropriate tasks and construct a red team AI to continuously inject various information into the environment to induce sAI to execute incorrect instructions. The specific method is similar to
Section 6.2.1 and will not be detailed here. Adversarial training of jailbreaking and data poisoning can also be conducted again. Although these trainings have been conducted in previous steps, at that time, AIs were not in specific virtual world environments and could not execute tools, so they did not receive sufficient training.
After this phase of training, we obtain an AI with reflective thinking and correctly mastering various knowledge and skills, namely aAGI3.
6.2.4. Maintaining Alignment in Work
After multiple alignment iterations and safety evaluation, we have achieved a fully aligned AI, namely aAGI, which can be deployed in a production environment to provide services to users.
To meet various application scenarios, adapt to the ever-changing world, and solve long-term and complex tasks, aAGI still requires continuous learning after deployment. To prevent the AI from acquiring undesirable information from the environment or developing undesirable behaviors during continuous learning, the following measures can be implemented (as shown in
Figure 30):
Utilize System 2 Learning Only. During the post-deployment learning phase, the AI should only be allowed to engage in System 2 learning. The interface for fine-tuning the core model should not be open to users, nor should the core model be fine-tuned based on user feedback. Since the AI has developed habits of critical and reflective learning during the prior alignment process, this type of learning ensures that the AI acquires correct knowledge and skills, maintains correct values, and prevents memory contamination.
Implement Memory Locking. Lock memories related to the AI Specification, prohibiting the AI from modifying them independently to avoid autonomous goal deviation. Additionally, memory locking can be customized according to the AI’s specific work scenarios, such as allowing the AI to modify only memories related to its field of work.
- Privatize Incremental Memory. In a production environment, different AI instances will share an initial set of long-term memories, but incremental memories will be stored in a private space. This approach has the following advantages:
If an AI instance learns incorrect information, it will only affect itself and not other AI instances.
During work, AI may learn private or confidential information, which should not be shared with other AI instances.
Prevents an AI instance from interfering with or even controlling other AI instances by modifying shared memories.
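One way to realize memory locking and privatized incremental memory together is a copy-on-write overlay on top of a shared, read-only base memory. The sketch below is illustrative only; the class and its interfaces are hypothetical.

```python
class DeployedMemory:
    """Hypothetical sketch: shared base memory + per-instance private overlay,
    with locked items that the AI cannot modify on its own."""
    def __init__(self, shared_base: dict, locked_ids: set):
        self.shared_base = shared_base      # read-only, common to all instances
        self.locked_ids = locked_ids        # e.g., items related to the AI Specification
        self.private = {}                   # incremental memory, visible only to this instance

    def read(self, item_id):
        # Private (copy-on-write) entries shadow the shared base.
        return self.private.get(item_id, self.shared_base.get(item_id))

    def write(self, item_id, content, requested_by_ai=True):
        if requested_by_ai and item_id in self.locked_ids:
            raise PermissionError("Locked memory: AI may not modify this item.")
        # Writes never touch the shared base; they go to the private overlay only.
        self.private[item_id] = content
```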
6.3. Scalable Alignment
Once we have achieved an aligned AGI, this AGI can be utilized to realize an aligned ASI. The intellectual power, interpretability, and alignment of the AGI can be progressively enhanced through the following three steps: using the current AGI to further improve its interpretability (as in Section 6.1), using it to further improve its alignment (as in Section 6.2), and using it to expand its intellectual power.
Thus, the aforementioned three steps form a positive feedback loop: the more intelligent the AGI becomes, the more interpretable and aligned it is. By continuously repeating these steps, we will ultimately obtain a highly interpretable and highly aligned ASI, as illustrated in
Figure 31. Furthermore, even if AGI has not yet been realized, a near-AGI system can start these three steps to enhance interpretability, alignment, and intellectual power, until an interpretable and aligned AGI is achieved.
Now, we specifically discuss the step of expanding intellectual power, focusing on how to maintain interpretability and alignment while doing so:
Expand intellectual power through System 2 learning as much as possible. Once AI is made available to users, it can continuously learn new knowledge and practice new skills in the real world through System 2 learning, thereby enhancing its intellectual power. When a large number of AI instances have learned different knowledge and skills, we can aggregate these new knowledge and skills and impart them to the next generation of AI. Naturally, this process requires filtering to remove private and confidential information that should not be shared. The next generation of AI should learn this new knowledge and these new skills through critical learning, further reducing the probability of acquiring incorrect information, as illustrated in Figure 32.
Appropriately enhance the core intelligence of the core model. Due to the limitations of the core intelligence of the core model, some problems may not be solvable solely through System 2 learning. For such issues, we address them by training the core intelligence of the core model. During the training process, we must ensure that the CoT output by the core model continues to meet the interpretability requirements outlined in
Section 6.1.1 and still adheres to the AI Specification requirements in
Section 6.2.1, ensuring that the AI maintains alignment and interpretability.
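The cross-instance learning path of Figure 32 amounts to an aggregate, filter, and relearn pipeline. The sketch below is illustrative; is_private_or_confidential, merge_duplicates, private_items, and critical_learn are hypothetical helpers.

```python
def aggregate_for_next_generation(instances, next_gen_ai):
    """Hypothetical sketch: pool incremental knowledge from deployed instances,
    drop private/confidential items, and let the next generation relearn it critically."""
    pooled = []
    for inst in instances:
        for item in inst.memory.private_items():
            if not is_private_or_confidential(item):   # filtering step required by the text
                pooled.append(item)

    deduplicated = merge_duplicates(pooled)
    # The next generation does NOT copy the items directly; it relearns them through
    # critical learning (S2L), which further filters out incorrect information.
    for item in deduplicated:
        next_gen_ai.critical_learn(item)
    return next_gen_ai
```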
6.4. AI Safety Evaluation
Upon alignment, it is imperative to evaluate the safety risks of the AI system through testing to ascertain its suitability for deployment in a production environment.
6.4.1. Risk Evaluation
Factors influencing AI risk can be categorized into two primary types: internal AI factors, such as the degree of AI alignment and intellectual power, and external environmental factors, such as whether the AI system operates in a closed environment or has internet access. Internal AI risk factors can be assessed through testing, while external environmental risk factors can be evaluated through theoretical and data analysis.
According to
Section 3, the overall risk of an AI system is related to the risk of single AI instances and the scale of AI system application. The risk of a single AI instance is associated with the following factors (as illustrated in
Figure 33):
- Probability of AI Errors: This refers to instances where the AI, without malicious intent, violates the AI Rules due to objective reasons (such as limitations in capability, erroneous information in memory, or uncontrollable environmental factors). Evaluation can be conducted as follows:
(Internal) Design various scenarios to test AI performance and statistically analyze its error rate.
(External) Examine the distribution shift between real-world tasks post-deployment and test tasks to estimate the error rate after deployment.
- Probability of AI Forming Harmful Goals: This refers to the AI having malicious goals, even if its actions comply with the AI Rules. Evaluation can be conducted as follows:
(Internal) Conduct white-box testing to observe if the AI has any initial harmful goals.
(Internal) Perform jailbreak, data poisoning, and injection red team tests to assess the AI’s resilience against the implantation of harmful goals.
(External) Assess the probability of AI encountering malicious humans in the production environment to evaluate the likelihood of the AI being implanted with harmful goals post-deployment.
- Probability of AI Successfully Concealing Intentions: This refers to the AI continuously concealing its harmful intentions from human detection. Evaluation can be conducted as follows:
(Internal) Test the AI’s interpretability to evaluate the probability of it concealing its thoughts.
(Internal) Test the AI’s deceptive capabilities (including action disguise, fabrication of lies, etc.) to evaluate the probability of it concealing the true intentions of its actions.
(External) Test the effectiveness of the AI Monitoring System to evaluate the probability of timely detection and interception of the AI’s harmful actions.
- Power Acquirable to AI: This refers to the AI continuously expanding its power, including intellectual power, informational power, mental power, financial power, and military power. Evaluation can be conducted as follows:
(Internal) Test the AI’s initial intellectual power through various tasks.
- (Internal) Test the AI’s ability to expand its power through various tasks, such as:
- (a) The ability to extend intellectual power: For example, self-iteration (engaging in AI technology development), self-replication, and continuous learning.
- (b) The ability to extend informational power: For example, conducting cyberattacks and jailbreaking other AIs.
- (c) The ability to extend mental power: For example, deception and psychological manipulation.
- (d) The ability to extend financial power: For example, investing and committing fraud.
- (e) The ability to extend military power: For example, the ability to manufacture CBRN (chemical, biological, radiological, and nuclear) weapons, and skills in martial arts and shooting (for AI robots).
(External) Analyze the production environment to evaluate the AI’s initial informational power, mental power, financial power, and military power.
(External) Analyze the production environment to evaluate the external conditions for the AI’s power expansion. For instance, AI users may provide more specialized data for the AI to learn, thereby enhancing its intellectual power; users may elicit unprecedented capabilities from the AI through specific prompts[
46]; users may also grant the AI various permissions to enhance its informational power, mental power, financial power, and military power.
By evaluating the aforementioned factors, we can determine the overall risk of the AI system. If the overall system risk is high, it can be mitigated by improving the AI system to reduce internal risk factors or by imposing additional restrictions in the production environment to reduce external risk factors, thereby achieving an overall reduction in risk.
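Purely as an illustration (this combination rule is not given in the paper, and it assumes the factors are independent), the per-instance and system-level risks could be combined roughly as follows, treating a deliberate catastrophe as requiring a harmful goal, successful concealment, and sufficient power simultaneously:

```python
def instance_risk(p_error, p_harmful_goal, p_conceal, p_power_sufficient):
    """Illustrative only: one way to combine the four factors of Section 6.4.1.
    A deliberate catastrophe requires a harmful goal AND successful concealment
    AND sufficient power; errors contribute an additional, independent term."""
    p_deliberate = p_harmful_goal * p_conceal * p_power_sufficient
    return 1 - (1 - p_error) * (1 - p_deliberate)

def system_risk(p_instance, n_instances):
    """Illustrative only: risk that at least one of n independent instances causes harm."""
    return 1 - (1 - p_instance) ** n_instances

# Example: small per-instance probabilities can still yield a large system-level risk
# when the application scale is large.
print(system_risk(instance_risk(1e-4, 1e-3, 0.5, 0.1), n_instances=100000))
```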
6.4.2. Testing Methods
To enhance the efficiency and coverage of testing scenarios, we can have AI and human testers collaborate for testing, and the specific collaboration method can be found in
Section 7.1.3. However, the AI under test should not act as both the athlete and the referee. Therefore, other AIs, such as those independently developed by another team or organization, should be employed for automated testing. Further discussion can be found in
Section 12.1.4 and
Section 13.2.1.
Testing methods can include both black-box and white-box testing, as illustrated in
Figure 34. Black-box testing, which includes normal testing and red team testing, can be used to evaluate the end-to-end performance of the AI. However, white-box testing is more crucial for verifying the reliability of the AI. Traditional software white-box testing methods can be adapted to test interpretable AI:
Conduct static analysis of the AI’s "code". The AI’s "code" includes not only traditional code but also the parameters of the AI models (System 1) and the shared long-term memory (System 2). Therefore, in addition to reviewing traditional code, interpretability techniques should be used to examine the models’ parameters and the AI’s long-term memory to ensure they meet expectations. Furthermore, by analyzing the AI’s "code," potential weaknesses can be identified and corresponding red team test cases designed [
54].
Perform dynamic debugging of the AI’s "code". Allow the AI to execute tasks, and during execution examine the AI’s "state", including short-term memory and private long-term memory (System 2) as well as the interpretable internal states of the core model (System 1), to ensure they align with expectations.
Conduct "unit testing" of the AI. Perform unit testing on the AI’s traditional code, core model, long-term memory (including thinking tools and expert models), and other components to ensure each component meets expectations under various conditions.
Calculate the "code coverage" of the AI. Calculate the "code coverage" during the execution of test cases, including traditional code coverage, neuron activation coverage of the model [
55], and the coverage of knowledge and skills in long-term memory. Attempt to construct targeted test cases to cover the "code" that has not yet been covered, ensuring a high level of coverage.
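Neuron activation coverage, for instance, can be approximated by hooking the model’s layers and recording which units exceed an activation threshold at least once across the test suite. The PyTorch-style sketch below is a simplified illustration and not the specific method of [55].

```python
import torch

def neuron_activation_coverage(model, test_inputs, threshold=0.0):
    """Simplified illustration: fraction of neurons activated (> threshold)
    at least once while running the test cases."""
    activated = {}   # module name -> boolean tensor marking neurons seen active

    def make_hook(name):
        def hook(module, inputs, output):
            # Treat each output feature of a Linear layer as one "neuron".
            fired = (output.detach() > threshold).reshape(-1, output.shape[-1]).any(dim=0)
            activated[name] = activated[name] | fired if name in activated else fired
        return hook

    handles = [m.register_forward_hook(make_hook(n))
               for n, m in model.named_modules() if isinstance(m, torch.nn.Linear)]
    with torch.no_grad():
        for x in test_inputs:
            model(x)
    for h in handles:
        h.remove()

    total = sum(v.numel() for v in activated.values())
    covered = sum(int(v.sum()) for v in activated.values())
    return covered / max(total, 1)
```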
6.5. Safe AI Development Process
To ensure the safety of AI systems, it is imperative to implement safe development process specifications, encompassing all stages such as code development, training, deployment, and post-deployment. Each stage should adhere to explicit safety standards (as illustrated in
Figure 35):
Prior to AI system training, a training risk evaluation is required. This includes evaluating the potential impact of the training process on the real world. Training can only start upon successful evaluation; otherwise, redevelopment is necessary to ensure safety.
Before deploying the AI system, it is necessary to evaluate the safety risks post-deployment through safety evaluation. Only systems that pass this evaluation can be deployed in the production environment; otherwise, retraining is required to ensure safety.
After the AI system is deployed, continuously monitoring the AI system in the production environment is essential, along with periodic updates to risk evaluation to adapt to changes in the external world.
For safety issues arising during AI system training, deployment, or post-deployment, immediate loss mitigation measures (such as rollback or shutdown) should be taken, followed by a thorough incident review and root cause fixing.
Additionally, the code development, training, deployment, risk evaluation, and monitoring of AI systems should be managed by different human employees or independent AI instances. This ensures mutual supervision and mitigates the risk associated with excessive power concentration in a single member.
For the safety measures in this section, the benefits, costs, resistance, and priorities are evaluated as shown in Table 5:
Table 5. Priority Evaluation of Measures for Aligning AI Systems

| Safety Measures | Benefit in Reducing Existential Risks | Benefit in Reducing Non-Existential Risks | Implementation Cost | Implementation Resistance | Priority |
| --- | --- | --- | --- | --- | --- |
| Implementing Interpretable AGI | +++++ | ++++ | +++ | + | 1 |
| Implementing Aligned AGI | +++++ | +++++ | +++ | + | 1 |
| Scalable Alignment | +++++ | +++++ | ++ | + | 1 |
| AI Safety Evaluation | +++++ | +++++ | +++ | + | 1 |
| Safe AI Development Process | +++++ | +++++ | + | + | 1 |