Preprint Article. This version is not peer-reviewed.

The Topics, Methods, and Significance of a Higher Cognition Theory

Submitted: 18 November 2024. Posted: 20 November 2024.


Abstract

The present paper outlines a pathway for the study of higher cognition. In the foreword, the classical model of higher cognition is first introduced. The contents that follow are divided into five sections. Section 1 emphasizes the mutual dependence between empirical research and normative theory in three major subdomains of cognitive science, namely reasoning, decision-making, and competition. A unified approach to integrating reasoning with decision-making and competition is explained. Section 2 describes the modeling of hesitation within cognitive processes, which can be formulated in terms of cognitive fluctuations and permits a dynamical description. Specifically, the notion of logical charge is introduced to explain reasoning dynamics. What may be termed the motion of logical charge is shown to be associated with a logical current and a cognitive field, which in turn draws decision-making towards one of two poles: commitment or refusal. Section 3 extends the tools of dynamical analysis, previously applied to cognitive dynamics, to the domain of economics. It is shown how an interpretation of the Standard Model in the context of economic dynamics lends itself to a comprehensive framework that describes market dynamics, sub-economic dynamics, economic externality dynamics, the model of ordinary rationality, and the inequality mechanism in political economics. Section 4 details a stochastic statistical model for quantum yes-no experiments. Finally, Section 5 provides a general discussion of the future of higher cognition research.


Foreword

This paper aims to provide an overview of a paradigm shift in higher cognition theory, including its topics, methods, and significance. The scope of higher cognition theory is broad, encompassing areas as diverse as reading comprehension, memory, psycholinguistics, brain science, and socio-cultural studies, to name but a few. The extent to which higher cognition theory has been studied varies by subfield. The three primary subfields of reasoning, decision-making, and competition exhibit a relatively high level of disciplinary development, attributable to the maturity of experimental techniques and the existence of widely accepted standard theories within each field. The standard theory for the psychology of reasoning is logic, that of decision-making is axiomatic decision theory, and that of competition is game theory. Together with their normative counterparts, these three subfields thus encompass six distinct disciplines. One of the objectives of this paper is to construct a unified theory that integrates these subfields.
By the latter half of the 20th century, interdisciplinary research had become a focal point of cognitive science. However, it was only in the first half of the 21st century that cognitive science began to turn its attention towards unified theories. Such unified theories can be seen as an extension of traditional cognitive science, one that integrates the modeling tools of other fields. Their development parallels the argument of Shu-Shan Cai [1], who holds that the 21st century is the era of integrative science. Integrative science has two significant distinguishing characteristics: first, it requires greater refinement of modeling and conceptualization than existing theory, and second, it places a greater emphasis on empirical research. Cognitive science is, by its nature, an empirical science, with theories requiring experimental confirmation. Any theories arising from cross-disciplinary integration will require not only careful new experimental design but also the development of more robust statistical methods of analysis, the beginnings of which are developed in this paper.
In the process of conducting empirical research, experimental methods inherently constrain observation. The limitations that an experimental method imposes on what can be observed are referred to as observational interference, and the magnitude of these limitations determines the degree of disturbance. One of the founders of quantum mechanics, Paul Dirac, noted that the higher the degree of experimental disturbance, the smaller the world we can observe. Consequently, quantum theory encompasses descriptions of the high-interference microscopic world, and it elevates observation to a special position, described by the R-process. A high level of disturbance is also found in certain areas of cognitive science, particularly in language-based experimental tasks. In such tasks, we can only observe the experimental outcomes without direct knowledge of the participants' cognitive states and processes, resulting in a significant degree of observational interference. This paper specifically addresses theoretical models for observations characterized by high levels of disturbance.
Further, this paper endeavors to advance research into the U-process in conjunction with the R-process. It was the view of Roger Penrose that the theory of microscopic systems should consist of two parts [6]: the evolution process of the system itself, that is, the U-procedure, and the observation of the system, known as the R-procedure. For instance, in quantum mechanics, the dynamics of a quantum particle, and more specifically the evolution of its wavefunction, is entirely characterized by the Schrödinger equation. This is a U-procedure; while it exists, it is not directly observable. When an observation is made on such a system, the wavefunction collapses to a specific eigenstate. John von Neumann referred to these types of observations as yes-no measurements, also known as yes-no quantum experiments, and they constitute an R-process. In quantum field theory, through second quantization, the U-process is promoted to an operator, known as a U-operator [4]. Quantum field theory can also incorporate an R-operator, corresponding to a measurement operator. Penrose believed that the reciprocation between the U-procedure and the R-procedure reflects the completeness of a quantum system. Thus, taking inspiration from physics, we may integrate the U-R structure into models of higher cognition to achieve a similar completeness in modeling empirical science.
This paper is organized into five sections, titled as follows: (1) The Classical Model and Paradigms of Higher Cognition; (2) The Dynamical Modeling of Hesitation with a Cognitive Field; (3) Extensions of Higher Cognition Theory to Economics; (4) Gauge Statistics for Yes-No Quantum Experiments; and (5) A Closing General Discussion.

1. The Classical Model and Paradigms of Higher Cognition

1.1. Cognitive Science and its Correspondence with Normative Theory

There are many misunderstandings between researchers who work on psychological theories and empirical research and those who work on normative theories. On one hand, psychologists often argue that fallacies, biases, and other non-normative behaviors in reasoning and decision-making obviate the need for normative theories such as logic or decision theory. On the other hand, logicians, when constructing logical systems, typically focus solely on the formal structure of reasoning without considering how individuals actually reason. Similar situations arise between psychologists and theorists in the fields of decision-making and the psychology of competition. These methodological issues in higher cognition research need to be clarified. Several of them are now addressed.
First, the psychology of reasoning studies how people reason, while logic informs us about what constitutes that reasoning. Without logic, it would be impossible to determine whether individuals are engaged in reasoning. An experimental task is classified as a reasoning task precisely because it possesses a specific logical structure, and completing that task requires reasoning. It is this structure that is provided by the field of logic.
Second, experiments that aim to probe into higher cognition typically employ evaluation tasks. For instance, in a reasoning task, participants are presented with several premises and must determine the validity of a conclusion (that is, whether it is true or false). The correctness of this conclusion is, in fact, dictated by the principles of logic.
Third, a systematic design of experiments requires normative theories. Given a specific experimental task, the participants' assessments of the conclusion as either true or false represent the only raw data obtainable. We may identify these data points as eigenvalues, and their associated cognitive states as cognitive eigenstates. Observation causes the cognitive process to transition to a particular eigenstate. However, we cannot directly observe the mental processes underlying participants' reasoning. For instance, mental logic theory posits that individuals reason by following reasoning schemas and predicts that, for a certain class of reasoning problems, participants will achieve high accuracy. Yet, even if the experiment yields a high correctness rate, we remain uncertain as to whether a reasoning pattern was actually present among the participants. In such cases, self-reported data can be utilized to extract information: participants report the perceived relative difficulty after each task, and linear regression can then be applied to generate weights for each reasoning schema [8]. However, rigorously assessing the relative difficulty of the tasks requires that the test items be designed to encompass all possible difficulty levels, which can only be achieved within the framework of normative theories.
Fourth, cognitive science regards human thinking as organized into different cognitive pathways. Normative theories enable us to distinguish three distinct cognitive modes and delineate their boundaries in rigorous terms. Logic defines reasoning, axiomatic decision theory defines decision-making, and game theory defines competitive behavior. The establishment of normative theories is a hallmark of a mature field of research, one that empirical research is insufficient to support alone; normative theories provide a theoretical framework to conceptualize and model a specific field. Unified theories build on this foundation, amalgamating conceptual structures and modeling methods from different fields.
Finally, it is important to note that psychological theories already rely significantly on normative theories. In the field of reasoning psychology, there are two main competing schools of thought: mental logic theory [8] and mental model theory [13,14]. The former emphasizes the role of reasoning patterns, which are expressed through the formal syntactic structures provided by logic. The latter posits that individuals reason by understanding the meanings of premises and constructing mental models; these mental models depend on the formal semantics of logic. Thus, the reason mental logic theory and mental model theory have emerged as the primary competing paradigms in reasoning psychology lies in their shared grounding in the normative theory of logic. Similarly, within the framework of prospect theory in the psychology of decision-making, the identification of irrational biases is defined in relation to normative decision theory and rationality. In other words, without normative theories, we would struggle to define what constitutes a bias.

1.2. Unified Theories Combining Reasoning, Decision-Making, and Behavioral Game Theory

It need not be stated that a scientific field possesses its own body of knowledge and domain-specific formalism that permit a rigorous description of the field. Unified theories across different fields do not seek to merge their respective bodies of knowledge, but rather to unify their formalisms into a common framework. For instance, the formalism of higher cognition plays an indispensable role in describing economics, particularly in market psychology. Indeed, decision-making theory and game theory have long been part of the common language of economics.
In the following sections, we will explain, within the context of market psychology, why it is essential to integrate the psychology of competition, decision-making theory, and reasoning theory, as well as their respective formalisms, into one unified whole. We find that describing market participants simultaneously in terms of game theory, decision-making theory, and reasoning theory captures the cognitive fluctuations of their minds and behaviors; these fluctuations reflect changes in cognitive capacity and cognitive state that cannot be adequately described by existing models.
Let us first review the representation of the Nash equilibrium in non-cooperative game theory. The basic syntactic structure of non-cooperative games is quite simple. Consider $n$ players, where each player $i$ has a set of possible actions $A_i = \{a_{i1}, \ldots, a_{im}\}$. Each player establishes a total preference relation, denoted as $\succsim_i$. It is important to note that, in individual decision theory, a decision maker's preference relation is based on their own set of possible actions. In contrast, in game theory, the preference relation of any player can only be established based on what is referred to as the set of action profiles. Considering the possible action sets of all players $A_i$ ($i = 1, \ldots, n$), the Cartesian product can be expressed as:
$$\times_{i=1}^{n} A_i = \{(a_1, \ldots, a_i, \ldots, a_n) \mid a_i \in A_i\}$$
In this context, each $n$-tuple $(a_1, \ldots, a_i, \ldots, a_n)$ is referred to as a situation. In other words, a specified game constitutes a set of situations, and each player must establish their own total preference relation over this set of situations. That is to say, each player $i$ must establish their own $\succsim_i$ on $\times_{i=1}^{n} A_i$. Once the syntactic structure of non-cooperative games is understood, it is not difficult to grasp its key meta-property, namely the well-known Nash equilibrium. It is important to note that the language of the Nash equilibrium requires a separate characterization for each player. Therefore, to reformulate the expression for the $n$-tuple, we have:
$$(a_1, \ldots, a_i, \ldots, a_n) = (a_1, \ldots, a_{i-1}, a_i, a_{i+1}, \ldots, a_n) = (a_i, a_{-i})$$
Here, $a_{-i} = (a_1, \ldots, a_{i-1}, a_{i+1}, \ldots, a_n)$. The Nash equilibrium is a specific situation $(a_i^*, a_{-i}^*)$ such that for each player $i$, and for any alternative action $a_j \in A_i$, it holds that:
$$(a_i^*, a_{-i}^*) \succsim_i (a_j, a_{-i}^*)$$
The concept of the Nash equilibrium requires some thoughtful interpretation in mathematical terms. In simple terms, it describes a stable situation in a non-cooperative game from which no player can improve their own outcome by unilaterally changing their action. It is important to note that the language used to characterize the definition of the Nash equilibrium separates the action $a_i$ of any individual from the joint actions $a_{-i}$ of all other individuals in the same situation. This is representative of the separation approach, a typical technique in mathematics for characterizing fixed-point problems.
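To make the definition concrete, the following sketch (our illustration, not part of the original formalism) checks the Nash condition for pure strategies in a two-player prisoner's dilemma; the numeric payoffs are hypothetical stand-ins for the abstract preference relations $\succsim_i$.

```python
# A minimal sketch: pure-strategy Nash check for a two-player game.
# The payoff table is the textbook prisoner's dilemma; the numbers are
# illustrative stand-ins for the abstract preference relations.
from itertools import product

ACTIONS = ["cooperate", "defect"]

# payoff[(a1, a2)] = (utility for player 1, utility for player 2)
payoff = {
    ("cooperate", "cooperate"): (-1, -1),
    ("cooperate", "defect"):    (-3,  0),
    ("defect",    "cooperate"): ( 0, -3),
    ("defect",    "defect"):    (-2, -2),
}

def is_nash(a1, a2):
    """(a1, a2) is a Nash equilibrium if neither player can improve
    their own payoff by unilaterally deviating."""
    u1, u2 = payoff[(a1, a2)]
    return (all(payoff[(d, a2)][0] <= u1 for d in ACTIONS) and
            all(payoff[(a1, d)][1] <= u2 for d in ACTIONS))

for profile in product(ACTIONS, repeat=2):
    if is_nash(*profile):
        print("Nash equilibrium:", profile)   # -> ('defect', 'defect')
```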
The foundational theoretical framework for the theory of competition in cognitive science remains the Nash framework. Within this framework, a strict mathematical distinction is made between non-cooperative and cooperative games and between the overarching meta-properties of the two, namely the Nash equilibrium and the Nash solution. However, a significant body of behavioral game theory research highlights a phenomenon whereby players oscillate between non-cooperative and cooperative games, which can be termed fluctuations [2]. For instance, the classic prisoner's dilemma, presented in nearly all game theory textbooks, is originally designed as a non-cooperative game. However, altering the game's conditions, such as increasing the duration of rewards and penalties or allowing repeated play, can lead players to shift from a non-cooperative state to a cooperative one. These behavioral fluctuations are directly observable and as such are classic examples of fluctuations. The fluctuations identified by behavioral game theory in empirical studies cannot be adequately explained within the standard Nash framework of game theory. The underlying causes and corresponding theoretical explanations must be sought in the realm of individual decision-making theory.
To construct a unified theory that decomposes a game problem into decision-making problems for each player, it is essential to translate the formalism of game theory into the formalism of decision-making theory. This requires some technical adjustments. When game theory is cast to address a specific player $i$, a situation $(a_1, \ldots, a_i, \ldots, a_n)$ can be rewritten as $(a_i, a_{-i})$. We will now make a further revision, transforming $(a_i, a_{-i})$ into $\alpha_i(\alpha_{-i})$. The rewritten $\alpha_i(\alpha_{-i})$ resembles a function, which is not conventionally within the scope of game theory; however, this is a critical step in bridging the gap between the formalism of game theory and that of decision theory. We will see why this is the case shortly.
The book by Leonard Savage [7] is recognized as the seminal work in contemporary axiomatic decision-making theory. Below, we will use Savage's formalism to characterize the structure of decision-making problems. A decision-making problem is represented as a triplet $(F, S, H)$, where $F$ is a set of action functions, $S$ is a set of states, and $H$ is a set of outcomes. For a given action function $f \in F$ and an environmental state $s \in S$, we have $f(s) = h$, $h \in H$. It is important to note that for a specific state $s$, the value of $f(s)$ is unique. Therefore, in any non-ambiguous context, $h$ can be omitted. For any two action functions $f_1, f_2$, we define a preference relation $f_1 \succsim f_2$, indicating a preference for $f_1$ over $f_2$. Now, note by comparison that $\alpha_i(\alpha_{-i})$ from the previous paragraph and $f_i(s)$ here are structurally similar. We can treat $\alpha_i$ in the former as the action function $f_i$ in the latter, $\alpha_{-i}$ as the state variable $s$, and thus transform $\alpha_i(\alpha_{-i})$ into $f_i(s)$.
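As an illustration of this translation, the sketch below (a toy construction under our own simplifying assumptions) treats the opponents' joint action $a_{-i}$ as the Savage state $s$, and player $i$'s action as an action function $f_i$ mapping states to outcomes; the states and outcomes named here are hypothetical.

```python
# A toy rendering of the translation (a_i, a_{-i}) -> f_i(s):
# the other players' joint action is recast as a Savage state s,
# and each of player i's actions becomes an action function f_i in F.

S = ["others cooperate", "others defect"]   # states: opponents' profiles a_{-i}

# F: action functions mapping each state s to an outcome h in H.
F = {
    "cooperate": lambda s: "mutual gain" if s == "others cooperate" else "exploited",
    "defect":    lambda s: "exploit"     if s == "others cooperate" else "mutual loss",
}

# For a fixed state s, each f(s) yields a unique outcome, as in Savage's (F, S, H).
for name, f in F.items():
    for s in S:
        print(f"f_{name}({s!r}) = {f(s)!r}")
```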
Market fluctuations originate, in the strictest sense, from the reasoning processes of market participants. These reasoning processes are purely within the mind, difficult to observe directly, and are subject to various individual differences. These details fall within the domain of mental decision logic, and what follows is a brief explanation.
Let us first examine language conversion and predicate relationships. Previously, we translated $(a_i, a_{-i})$ in the formalism of game theory into $f_i(s)$ in the formalism of decision-making theory. Next, we will convert the formalism of decision-making theory into that of reasoning theory. This involves treating action functions as predicates and state variables as logical variables. That is to say, we transform $f(s)$ into $A(x)$. At this stage, it is no longer necessary to reference the indices $i$ that originate from game theory and range over the individual players. Reasoning is a purely mental process, and the mind is embodied in individuals. Predicates can represent unary properties as well as binary and even multivariate relationships.
The first advantage of this predicate technique is that it allows the editing of an option set in a classic decision problem, or of an action-function set in a Savage decision problem. A decision maker may be uninterested in a particular option or unwilling to pursue a certain action function, leading them to abandon that option or action. In other words, the decision maker can establish predicate relationships that single out the options of interest or the actions they are willing to take. This represents the most direct logical step in editing a decision problem, and it carries significant psychological and cognitive implications.
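A minimal sketch of this editing step, under the assumption that the predicate can be evaluated mechanically (the options and the affordability threshold below are made up for illustration):

```python
# Editing a decision problem with a unary predicate A(x): only options
# satisfying the predicate survive into the edited option set.
options = [("phone A", 1200), ("phone B", 800), ("phone C", 450)]

def A(option):
    """Hypothetical predicate: the option is affordable."""
    _, price = option
    return price <= 900

edited_options = [x for x in options if A(x)]
print(edited_options)   # -> [('phone B', 800), ('phone C', 450)]
```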

2. The Dynamical Modeling of Hesitation with a Cognitive Field

The previous section covered an analysis rooted solely in classical methods and classical theory. This section introduces two new cognitive phenomena in the mental process: hesitation and commitment. These two cognitive phenomena extend beyond the scope of classical analysis. Following this, the concept of the cognitive field will be introduced. Upon this foundation, we will begin discussing cognitive dynamics.

2.1. The Unification of Hesitation and Cognitive Fluctuations

It should be acknowledged that the various causes of fluctuations described in the previous section are almost perfectly reflected within the mind of any participant, albeit to different extents. It is also important to note that human cognition is multi-channel. We find an analogous concept in the path integral formalism in quantum field theory (QFT). Using the formalism of QFT, we may identify, for instance, the correspondence between the number of cognitive media and the number of cognitive pathways.
As mentioned earlier, there are two competing schools in the field of reasoning psychology: mental logic theory and mental model theory. The former posits that individuals employ reasoning patterns, with emphasis placed on syntax, while the latter argues that people reason by understanding the meanings of premises and constructing mental models, with emphasis placed on semantics. Both schools are supported by significant experimental evidence, indicating that mental logic and mental models are both effective cognitive pathways. However, when the two sets of experimental tasks are combined into a new task, the results support neither mental logic theory [8] nor mental model theory [13,14]. The reason is that solving the composite experimental task does not follow the predicted cognitive pathways: participants do not consistently traverse the mental logic pathway in one step and the mental model pathway in another, so to speak.
During the transition and merging of cognitive pathways, cognitive bottlenecks often occur. This is analogous to a traffic merge. At an ideal merge, one vehicle should pass at a time in an orderly sequence. In non-ideal circumstances, however, some drivers try to rush through while others drive cautiously, a phenomenon analogous to quantum fluctuations. Similarly, when two cognitive pathways converge, individuals may tend to overthink, producing fluctuations that are difficult to observe directly. Remarkably, when Feynman's sum-over-paths formula is applied to make quantitative calculations, the results align closely with the statistical outcomes.
Hesitation is a common phenomenon in the cognitive processes of individuals. Consider the following two scenarios: during a multiple-choice examination, respondents often hesitate over which answer to choose, and a customer may hesitate over buying a new phone due to price sensitivity. This repeated hesitation manifests as various superpositions between the states of buying and not buying. A superposition is formed as the sum of the two basis states, each multiplied by a coefficient (which may be a real number, a complex number, or a matrix). The process of hesitation can vary in duration, frequency, and intensity, taking on a flow-like nature analogous to a current. We may term this a market flow, and, analogous to the magnetic field associated with an electric current in physics, a specific market flow is invariably accompanied by what we may term a cognitive field. The cognitive field represents the concrete content and considerations behind the hesitation. These may include affordability, necessity, emotional impact, social consequences, and any combination of such factors, among others.
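For concreteness, such a superposition may be written in the quantum-style notation used later in this paper; the normalization condition below is an assumption borrowed from quantum mechanics, since the coefficients here may more generally be real numbers, complex numbers, or matrices:

$$|\psi\rangle = \alpha\,|\text{buy}\rangle + \beta\,|\text{not buy}\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1$$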

2.2. Decision-Making and the Cognitive Field

It is well known from classical electromagnetism that moving charges generate an electric current, and this current is always accompanied by a magnetic field. The magnetic field is a vector field with magnitude and direction, which has an associated magnetic force and possesses magnetic field lines. It has a north and a south pole but zero divergence; magnetic dipoles situated within the field are acted upon through their magnetic moments, aligning themselves along the field lines and being drawn towards one of the poles. We may use the magnetic field as a model for the cognitive field.
We observe that the process of hesitation in cognition is typically finite. Consider the following two examples: during an exam, students have a limited time to choose their answers, and a buyer has only the time until the store closes to decide on their purchase. The choice to buy or to not buy is a binary choice, like the binary nature of the magnetic poles. The cognitive field encompasses the lines of thought surrounding market flows; market pressures draw cognitive moments towards either of the poles, resulting in a decision being made, in broader economic terms. Just as the magnetic field directs motion towards either the north or the south, the cognitive field directs decision-making towards either committing or refusing, such as making a purchase or not making a purchase.

2.3. The Dynamical Analysis of Cognition

To consider the phenomenon of hesitation within the cognitive process, it is essential to introduce dynamical analysis. In quantum theory and particularly quantum electrodynamics, the dynamical approach examines the characteristics of quantum particles with charge and spin. Charge refers to the conserved quantity that emerges from sources; spin refers to the intrinsic angular momentum due to rotations in the internal space of a particle, termed the phase space.
Spin represents an intrinsic property of particles that lacks a comparable reference in Newtonian mechanics. Such intrinsic properties are difficult to observe directly. To establish local symmetries for different states of particles, a gauge field (whose quanta are gauge particles) must be introduced to balance the phase variations in phase space. Concurrently, a differential operator known as the covariant derivative is required to balance the rate of change of the phase variations. These factors are not directly observable and are fundamental reasons for the high levels of interference observed in quantum physics.
Taking the construction of reasoning dynamics as an example, let us first introduce the concept of logical charge. When individuals engage in reasoning, their cognition carries logical charge. During the reasoning process, this logical charge is in motion. The motion of logical charge generates a logical flow, which is accompanied by a cognitive field, just as moving electric charges produce an electric current, which in turn generates a magnetic field. The cognitive field can both apply reasoning patterns and construct mental models, and constrains the motion of logical charges. Beyond simply cognitive science, cognitive dynamics has wide-ranging applications in economics, which will be briefly introduced in the next section.

3. Extensions of Higher Cognition Theory to Economics

The author has previously shown that applying the same techniques of dynamical analysis as higher cognition theory to economics forms a unified theory of economic dynamics [9], which resembles the Standard Model of particle physics. The following sections will provide a brief overview of each component of this theory.

3.1. Market Dynamics

Based on the core conceptual framework of economic dynamics, the author has shown how a market dynamics theory [10] modeled after quantum electrodynamics may be constructed. Supply and demand are defined as binary tuples of buying and selling intentions, with these intentions encoded by market charges, and where decision-making is a function of the market flows and cognitive fields. Inspired by physics, a theory is presented analogous to a gauge field theory, where individual economic rationality serves in place of the global gauge potential, the market as the global gauge field strength, bounded rationality as the local gauge potential, and market behaviors of participants as the local gauge field strength. The familiar concepts of gauge field theories, such as gauge transformations, the Lagrangian, covariant derivatives, and the least-action principle, all find analogues in this dynamical framework. This framework re-contextualizes the age-old problem in the social sciences, that is, of the individual versus the group, with the notions of locality and globality in gauge field theory. In addition, it establishes a single-charge dynamical system with the $U(1)$ gauge symmetry of quantum electrodynamics.

3.2. Sub-Economic Dynamics

The author has previously discussed the anthropological support for the proponents of strong free markets, and provides a survival-oriented argument for the duality of human impulses, characterized by the theory of dual regards (regard for oneself and regard for others). This duality is shown to reduce to two basic impulses: fear impulses and achievement impulses. Unlike the author's previous work on market dynamics, which is modeled after quantum electrodynamics, sub-economic dynamics considers the fundamental inner workings of economics, and is thus modeled on quantum chromodynamics, the theory governing the strong interaction and the structure of hadrons. This theory of sub-economic dynamics [11] examines the relationship between human impulses and market charges, where impulses are modeled by flavor charges, and where we introduce the notion of fractional charges, analogous to the fractional electric charges and flavor charges carried by quarks in quantum chromodynamics.
By employing Freud's three-component theory of personality (the id, ego, and superego), three types of color charges are introduced, forming a three-dimensional internal space of individual impulses. Through the introduction of gauge transformations in this three-dimensional state space, sub-economic dynamics is shown to share the $SU(3)$ gauge symmetry of quantum chromodynamics. Additional similarities with quantum chromodynamics are also found in the model. For instance, just as quarks exist only in bound states, interacting with gluons, impulses can only exist in bound states, interacting with consciousness. Gluons mediate the strong interaction, which exhibits asymptotic freedom; a similar analogue of asymptotic freedom exists in sub-economic dynamics, which is discussed.

3.3. Economic Externality Dynamics

In a market, the prices of goods are initially determined solely by buyers and sellers; when a third party influences pricing, this is referred to as an economic externality. For instance, regulating market prices through fiscal policy is a form of economic externality. The author proposes a model of economic externality dynamics [12] based on the framework of isospin dynamics from theoretical physics. Isospin here refers to the transformation of a down quark into an up quark under the mediation of the weak interaction. Correspondingly, economic policy can shift the impulses of individual market participants, such as transforming a fear impulse into an achievement impulse and vice versa, and can therefore be modeled using the same concept of isospin. Market dynamics and economic externality dynamics are then synthesized into a composite system modeled after electroweak theory, with which it shares $SU(2)$ gauge symmetry as well as two crucial concepts: the Weinberg angle and the neutral current. The mediators of the weak interaction possess mass, making the weak force a short-range force; similarly, the various visible mechanisms of economic externality, such as economic policy, incur a burden of costs (regulatory or otherwise) analogous to a form of mass, and likewise represent a short-range force on the market.

3.4. The Model of Ordinary Rationality

The ordinary man is a concept that has long been debated in the Western philosophical tradition, particularly because Western juries are, in theory, composed of ordinary individuals. The author has previously noted that ordinary rationality is defined by eight fundamental principles [9]: the principle of heightened selectivity, the principle of subjective certainty, the principle of null decisions, the sunk cost principle, the principle of hesitation, the principle of sentimentality, the face-saving principle, and the principle of aspiration. Ordinary rationality shares three meta-properties with the Higgs field in theoretical physics. First, the vacuum is not empty; it represents a state of minimal energy with a non-zero expectation value. Second, both are inertial systems with zero spin; as noted by T. D. Lee [3], inertial systems can break any symmetry.
Third, they both exist in degenerate states and cannot be isolated for observation, hence lacking distinct eigenvalues. The role of the Higgs mechanism is to generate mass terms within the Lagrangian framework, resulting in spontaneous symmetry breaking. The mechanism of ordinary rationality similarly generates the degree of market effectiveness and the consequences of economic externalities. The Higgs boson occupies a central position in the Standard Model; likewise, the ordinary person is the principal actor in the market, playing a decisive role in the effectiveness of any fiscal policy and determining the market performance of goods. A deeper exploration of the Higgs mechanism reveals more striking similarities, such as the relationships between emotional accumulation and Goldstone fields, between individual-group differences and free gauge fields, and between Berry phases and the dynamical phases of the wavefunction, as well as their significance in economic dynamics, among others. These are all topics that may be discussed at length in the model of ordinary rationality.

3.5. Pareto Efficiency and an Economic Analogue of Gravitation

The author presents a model for political economics based on the formalism of general relativity [9]. Pareto efficiency is a purely economic concept that refers to a state of social welfare distribution in which it is impossible to improve any individual's welfare without diminishing the welfare of other individuals. Pareto efficiency is unrelated to notions of fairness; in its formulation, there is nothing to bar the rich from becoming richer while the poor become poorer. We may interpret this as a curved space which, for reasons that will become clear shortly, is characterized by inequality. In a state of Pareto efficiency, connecting each individual's welfare state forms a curve known as the Pareto path, along which tangent vectors at every point are infinitesimally parallel. This makes the Pareto path one of shortest length, referred to as a geodesic, which is an ideal state in pure economics. The desire and opportunity of each individual to improve their welfare are termed Pareto improvements, and we may regard the deviation from the geodesic as a form of curvature. In Einstein's general relativity, curvature is the geometric representation of gravity, which we borrow to create the analogous concept of economic gravity.
Here, we introduce two versions of the Einstein equivalence principle as they relate to political economics. In Newtonian mechanics, the acceleration due to gravity is the manner in which gravity is expressed, representing the cost individuals must incur to improve their welfare. Economic dynamics theory explains the relationship between the geometric representation of economic gravity and its Newtonian expression under the assumption of social welfare inequality. In the context of a curved space, there is no longer a globally flat coordinate system; instead, individual local frames must be established, which we may interpret as the differences between individuals’ welfare. These individual local frames must be interconnected, reflecting the social welfare system that attempts to account for differences in individual welfare. In this sense, economic dynamics may be seen to serve as a geometric framework for political economics.

4. Gauge Statistics for the Yes-No Quantum Measurement

Roger Penrose, in his book The Road to Reality [6], points out that to understand quantum mechanics, we need an underlying theory that may be generalized to explain the characteristics of our cognitive world. Physics and psychology differ from mathematics and logic; the latter two are considered analytical sciences, while the former are empirical sciences. The defining feature of empirical sciences is that their theories require experimental support, and the language of experiments is statistics. This is because any experiment is termed an experiment precisely because we can objectively observe only samples, not populations. In this sense, quantum mechanics is a statistical theory.
We also know that scientific observation is limited by the means of observation. This limitation is referred to as the degree of interference in observation. Once again, recall Dirac’s assertion that in our scientific observations, the higher the degree of interference, the smaller the world we can observe, a phenomenon he termed microscopic observation. Quantum mechanics is not only a theory about the microscopic world, but also a theory of microscopic observation. The concept of microscopic observation is far more general and can be extended and applied to many more scientific fields, including the social sciences of psychology and economics.
In the microscopic world, observation inevitably perturbs phenomena. Phenomena in the microscopic world naturally evolve in accordance with the complex-valued Schrödinger equation; in mathematical terms, this is a U-procedure. However, each observation causes the state to collapse to an eigenvalue, specifically a real number; this is an R-procedure. The U-R process alternates repeatedly [6]. In the perspective taken by Penrose, a complete theory of quantum mechanics includes both. Here, we focus on the discussion of the R-process.
Quantum field theory also exhibits a probabilistic nature. Multiple approaches exist for the second quantization of the U-process; in addition to canonical quantization and path-integral quantization, there is also the early work on stochastic quantization by G. Parisi and Yong-Shi Wu [5]. This paper proposes a stochastic statistical theory for the R-process from the perspective of quantum experiments and their statistical methods. The key results of this new theoretical approach are a description of sampling in quantum experiments and the application of a reinterpreted Born rule.

4.1. The Quantum Yes-No Measurement

Von Neumann categorized quantum experiments as yes-no experiments [4]. Penrose elaborates extensively on this concept [6, §22.6] and further explains the experimental process involved in yes-no experiments. He points out that the mathematical description of measurement is entirely distinct from the Schrödinger equation. More generally, measurement corresponds to an operator $\hat{Q}$, which, when acting on a state, causes the state to collapse to one of $\hat{Q}$'s eigenstates. As for which eigenstate the collapse occurs to, quantum mechanics dictates that this is purely random, though there is a rule for calculating the probability [6, §22].
A basic quantum yes-no experiment requires a particle emitter A and a particle detector B . Consider two possible cases. In case 1, suppose A emits a particle C , and B successfully detects C ; here, we say C has passed through the yes gate. In case 2, suppose A emits a particle C , but B does not detect C ; in this case, we say C has passed through the no gate. Since we cannot directly observe C ’s trajectory, a reasonable theoretical assumption is that, regardless of whether C enters the yes gate or the no gate, we consider A to have emitted C . Further, this assumption is consistent with our current understanding of quantum experiments.
Yes-no measurements possess a broad generality and are highly effective for describing experiments in the social sciences and artificial intelligence. The author has previously characterized reasoning experiments in higher cognition research as yes-no experiments. In experiments on mental logic and mental models, the tasks are language-based and referred to as evaluation tasks. For each experimental question, several premises are first presented, followed by a conclusion for the subject to evaluate, which may be valid or invalid. The subject's task is to determine the correctness of this conclusion, answering either yes or no. Here, the correct answer is predetermined based on formal logic. If answered correctly, the subject is considered to have entered the yes gate; otherwise, the subject is considered to have entered the no gate.
Since the subject's problem-solving process is not directly observable, it is theoretically postulated that, whether entering the yes gate or the no gate, the subject has made a genuine effort to reason through the problem, consuming mental energy and cognitive effort (what we may term doing). This reasoning experiment is a classic example of a yes-no experiment. We may characterize yes-no observations with the Dirac $\delta$-function, given as follows:
$$\delta(x) = \begin{cases} \infty, & x = x_0 \\ 0, & x \neq x_0 \end{cases} \qquad\qquad \int_{-\infty}^{\infty} \delta(x)\,dx = 1$$
The piecewise definition of the Dirac $\delta$-function may be interpreted as follows: when the particle enters the yes gate (i.e., it reaches the detector, the answer is correct, or the predicted future event occurs), the function value is infinite; when the particle enters the no gate (i.e., it is not detected, the task is answered incorrectly, or the prediction is inaccurate), the function value is zero. The integral definition of the Dirac $\delta$-function is the definite integral of the first definition, which equals unity. Here, the integral definition tells us that regardless of whether the particle enters the yes or the no gate, there is no doubt as to its existence at some location. Philosophically speaking, the piecewise definition of the Dirac $\delta$-function provides its epistemological foundation, while the integral definition represents its ontological commitment. This dual nature of the Dirac $\delta$-function provides a model for characterizing a yes-no observation.
In the distribution theory of functional analysis, the Dirac $\delta$-function is a well-defined distribution. The integrand in the definite integral of the Dirac $\delta$-function over all space is called the test function, which is required to have at least one supporting point $x = x_0$. Physically, this supporting point could be the detection of the particle by the quantum detector, a correctly solved task in a reasoning experiment, or the accurate prediction of a future economic event in economic forecasting. Different applications of the generalized yes-no experiment will have their respective selective functions $f(x)$ and supporting points $x_0$, as expressed in the following formula:
$$\int_{-\infty}^{\infty} f(x)\,\delta(x)\,dx = f(x_0)$$
The selective function can be viewed as a selection process, akin to the 20-questions game, a two-player guessing game. One player thinks of an object, such as an apple, while the other player may ask up to 20 yes-or-no questions, such as: "Is it a tool? Is it a food item? Is it meat? Is it some form of grain?" and so forth, gradually narrowing down to the correct answer. The renowned theoretical physicist John Archibald Wheeler once remarked that quantum observation is akin to playing a 20-questions game with nature. We find a similar scenario in the theory of higher cognition; in this case, observation is comparable to playing a 20-questions game with the mind. The same is true of economic forecasting, which resembles playing the same game with future economic events. Wheeler’s remark encapsulates the essence of microscopic scientific observation, which extends across disciplines.
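The sifting formula above can also be checked numerically. The sketch below (our illustration) approximates the $\delta$-function by a narrow Gaussian centered on the supporting point $x_0$ and verifies that the integral of $f(x)\,\delta(x)$ approaches $f(x_0)$ as the width shrinks; the test function $f$ is arbitrary.

```python
# Numerical check of the sifting property using a Gaussian nascent delta:
# it integrates to 1 and concentrates at x0 as eps -> 0.
import numpy as np

def delta_approx(x, x0, eps):
    return np.exp(-((x - x0) ** 2) / (2 * eps ** 2)) / (eps * np.sqrt(2 * np.pi))

f = lambda x: np.sin(x) + 2.0        # arbitrary smooth test function
x0 = 0.7                             # the supporting point
x = np.linspace(-10.0, 10.0, 200_001)
dx = x[1] - x[0]

for eps in (0.5, 0.1, 0.01):
    integral = np.sum(f(x) * delta_approx(x, x0, eps)) * dx
    print(f"eps={eps}: integral = {integral:.6f}   f(x0) = {f(x0):.6f}")
```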

4.2. Wavefunctions and the Born Probability

The integral definition of the Dirac delta is mathematically well-defined in measure theory, its integral being a Lebesgue integral. This integral equals a constant, merely confirming the existence of the quantum yes-no experiment without revealing its internal structure. This internal structure requires further characterization by the wavefunction. The wavefunction is expressed mathematically in Dirac's bra-ket formalism, where bras represent the left bracket $\langle\varphi|$ and kets represent the right bracket $|\phi\rangle$. Quantum mechanics is a theory concerning microscopic observation, and the Dirac formalism precisely captures this essence.
Let $|\phi\rangle$ represent the phenomenon to be observed, $\langle\varphi|$ represent the experiment observing this phenomenon, and $A_i$ represent the set of experimental tasks in the experiment. Take a standardized examination as an example: a test-taker's knowledge $\phi$ is challenging to observe directly, so an examination $\varphi$ is administered, with each question $A_i$ serving as a measurement, and $\phi$ providing answers as responses. In this setup, the dependent variable $\phi$ becomes a function of the independent variable $\varphi$, denoted by $\phi(\varphi)$, known as the wavefunction. Note that this is the R-procedure of the wavefunction, which can be represented as $\langle\varphi|A_i|\phi\rangle$. This representation is the well-known Dirac formalism of quantum mechanics.
The wavefunction has a probabilistic interpretation according to the Copenhagen school, and can be expressed as a vector of complex numbers, representing a possibility. The square of the modulus of these complex numbers is known as the Born probability. Taking the wavefunction to represent possibilities, in Dirac’s words, the square of a possibility equals a probability. This probabilistic interpretation is a statement of the meaning of the wavefunction and is referred to as its amplitude (or complex) semantics. The Dirac bra-ket notation of the wavefunction and the Copenhagen amplitude semantics together form the dual structure of the wavefunction formalism [9].
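A minimal numerical illustration of the amplitude semantics (our construction, with arbitrary numbers): a two-component state is probed by a bra, and the squared modulus of the resulting amplitude is read as a Born probability.

```python
# Bra-ket amplitude <phi|psi> for a two-state system, and its Born probability.
import numpy as np

psi = np.array([3 + 1j, 1 - 2j]) / np.sqrt(15.0)  # ket |psi>, normalized
phi = np.array([1 + 0j, 0 + 0j])                   # bra <phi| (a basis state)

amplitude = np.vdot(phi, psi)        # vdot conjugates the first argument (the bra)
born_probability = abs(amplitude) ** 2
print(amplitude, born_probability)   # P = |<phi|psi>|^2 = 10/15
```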

4.3. Stochastic Sampling and the Sample Space

Let us now consider the problem from a statistical perspective: the Dirac delta function may be interpreted as a representation of a binary sample space. For any specific quantum yes-no experiment, the outcome has only two possibilities: yes or no. Thus, its sample space is binary, which we may denote as $S = \{Y, N\}$. For any finite sample, the experiment will yield a definite count $a$ of yes outcomes and a definite count $b$ of no outcomes. These two counts satisfy a magnitude duality relationship, a special form of strong-weak duality. Generally, let $F$ represent any yes-no experiment and $A$ any chosen finite sample; then we have:
$$F_S(A) = (a, b)$$
For any given sample data $(a, b)$, we can always express it in complex form as $a + ib$, with its modulus squared representing the sample's Born probability, denoted by $P$; we take this as a convention going forward. Note that this grouping of the two components of the sample into a complex number reflects the inherent uncertainty in quantum experiments, which stems from the inevitable limitations of the observer; recall the notion of observational interference. For instance, in reasoning experiments, an observer can directly observe only whether a participant answers a reasoning question correctly or incorrectly, but cannot directly (though perhaps indirectly) observe the participant's cognitive process. In this context, the imaginary unit $i$ conveys two layers of meaning: the observer and the information obtained from observation.
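A sketch of this convention in code (our illustration; the normalization by $(a+b)^2$ is our own addition, made so that the value lies between 0 and 1, since the text does not fix a normalization):

```python
# From yes/no counts (a, b) to the complex form a + ib and its squared modulus.
def born_statistic(a: int, b: int) -> float:
    z = complex(a, b)                     # yes and no counts as one complex datum
    return abs(z) ** 2 / (a + b) ** 2     # normalized squared modulus

# Example: 37 correct ("yes") and 13 incorrect ("no") answers in one sample.
print(born_statistic(37, 13))             # (37**2 + 13**2) / 50**2 = 0.6152
```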
A specific experimental sample typically consists of two elements: the experimental task and the sample size (i.e., the number of repetitions of the task). Different fields have distinct sample characteristics. In physics, an experimental task may be a specific apparatus and set of tests, with the sample size indicating the number of test trials. In psychology, an experimental task could be a set of reasoning questions (such as a set of GRE questions), with the sample size representing the number of participants or test-takers. An effective sample must satisfy the statistical power required by the statistical analysis. We denote an effective sample as $A = A(x, y)$, where $x$ represents the experimental task and $y$ denotes the number of operations.
In any meaningful sense, it is impossible to observe a total population; only samples can be observed. From a statistical perspective, sampling therefore holds a privileged position. For a system where a series of randomly selected samples undergoes a quantum yes-no experiment, each producing a Born probability, the collection of probabilities forms a partition function.

4.4. Statistical Dynamics and Coherence

The ensemble of Born probabilities provides the basic elements for dynamical analysis. A sample's statistical expression is a complex number, whose exponential form possesses a specific phase corresponding to a unique Born probability. Thus, a phase difference exists between each pair of samples, corresponding to the product of two Born probabilities. This indicates that the sample space $S = \{Y, N\}$ is rotational, belonging to the $U(1)$ symmetry group. The two-dimensional sample space $S = \{Y, N\}$ can also be viewed as an isospin space, satisfying $SU(2)$ symmetry.
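Concretely, each sample's complex statistic can be put in exponential form as follows (standard complex arithmetic, stated here for reference):

$$z_j = a_j + i b_j = r_j e^{i\theta_j}, \qquad r_j = \sqrt{a_j^2 + b_j^2}, \quad \theta_j = \arctan\!\left(\frac{b_j}{a_j}\right),$$

so that the phase difference between samples $j$ and $k$ is $\theta_j - \theta_k$.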
As discussed, each sample phase represents a state of the system's wavefunction. The phase difference between samples may be interpreted as a sort of spin, which, at least for electrons described by a $U(1)$ gauge theory, has two spin states. Setting either the yes count or the no count to zero yields two ground states, $|0\rangle$ and $|1\rangle$, and all other states can be represented as superpositions of these two ground states. The sample space thus becomes the phase space of a dynamical system, and, following Gibbs' terminology, an individual sample may be termed a phase point.
The superposition of states between the two ground states of the sample phase space represents a state of ambiguity in which, in some sense, the particle enters both the yes gate and the no gate. This state of ambiguity can be considered the source of the interference fringes seen in the double-slit experiment. We conclude that when one must enter both the yes gate and the no gate, as represented by a non-ground-state wavefunction, an inherent ambiguity arises.

4.5. A Stochastic Model

Statistics is often said to be the language of experimentation. Naturally, a stochastic (statistical) model is aptly suited for a rigorous description of yes-no experiments, and we may construct one by the following definitions:
Definition 1. 
Let $\mathbf{M}$ denote a yes-no-type experimental system, and let $M = \{m_i\}$ represent the collection of specific experimental tasks within $\mathbf{M}$. We introduce the operation set $\Omega = \{\omega_i\}$ for these experimental tasks; $\Omega$ is a countably infinite set with measure zero. A variable $x_i$ is introduced on $\Omega$ to range over all operations within $\Omega$.
Definition 2. 
A binary sample space $S = \{Y, N\}$ is defined within $\mathbf{M}$. Each operation within this sample space takes the value yes ($Y$) or no ($N$).
Definition 3. 
Consider the power set $P(\Omega)$ of $\Omega$. Since $\Omega$ is a countably infinite set, $P(\Omega)$ is evidently a continuum. We define any non-empty element $R$ of $P(\Omega)$ (i.e., a non-empty subset of $\Omega$) as an experimental sample. An effective sample requires a sufficient number of repeated tasks to meet the demands of statistical power. A series of randomly selected samples is termed stochastic sampling. We introduce a sample variable $x_j$ to range over all possible samples within $P(\Omega)$.
Definition 4. 
For any given sample $x_j$, an intermediate variable $x_{ij}$ is defined within the sample, ranging over all operations in that sample.
Definition 5. 
A randomly selected sample is measurable. Since this is a yes-no type experiment, the squared norm of the probability amplitude is defined as the Born probability of this sample. According to Penrose, this signifies the probability within a random sample of obtaining a specific pair of yes and no counts.
Definition 6. 
For a series of randomly selected samples, the corresponding set of Born probabilities is called its partition function, also referred to as the Born ensemble.
As defined above, the stochastic Born ensemble reflects the procedural structure of experiments, with a probability (density) function that is continuous but not smooth. Thus, its measure is Lebesgue integrable and characterized by the Langevin equation. We further observe that within the Born ensemble, fluctuations occur both in the yes-no counts across samples, reflecting the experimental characteristics, and in the probability (density) across random samples. The former we term the fluctuations of the yes-no experiment, while the latter we refer to as the fluctuations of the Born ensemble.
To conclude: in quantum yes-no experiments, the raw data obtained from each sample consists of a pair of yes and no counts. The first statistical step involves converting this data into a complex number. From this complex form, we may take one of two directions: the syntactic direction, in which the complex number is expressed in exponential form to obtain the dynamical phase, and the semantic direction, in which the squared modulus gives the Born amplitude probability. This dual structure of syntax and semantics in the language framework [9] is a standard approach in mathematical logic. This approach not only gives formal structure to the yes-no experiment, but also opens the door to further theoretical analysis, such as whether the assumptions of duality and completeness between syntax and semantics hold, and how they might be demonstrated.
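The definitions above can be exercised in a small simulation (our sketch, under simplifying assumptions: each operation is an independent Bernoulli yes/no trial with a made-up success rate, and the Born statistic is normalized following the convention adopted earlier). Each random sample yields a complex datum whose phase illustrates the syntactic direction and whose squared modulus illustrates the semantic direction; the collection over samples plays the role of the Born ensemble.

```python
# Simulating a stochastic Born ensemble for a yes-no experiment.
import cmath
import random

P_YES = 0.7          # hypothetical per-operation probability of a "yes"
N_SAMPLES = 8        # number of randomly selected samples
SAMPLE_SIZE = 50     # operations per sample (repetitions for statistical power)

random.seed(0)
ensemble = []
for _ in range(N_SAMPLES):
    a = sum(random.random() < P_YES for _ in range(SAMPLE_SIZE))  # yes count
    b = SAMPLE_SIZE - a                                           # no count
    z = complex(a, b)
    _, theta = cmath.polar(z)            # syntactic direction: phase of z = r e^{i theta}
    born = abs(z) ** 2 / (a + b) ** 2    # semantic direction: normalized squared modulus
    ensemble.append((theta, born))

for theta, born in ensemble:             # the Born ensemble (partition function)
    print(f"phase = {theta:.4f} rad,  Born statistic = {born:.4f}")
```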

4.6. Gauge Statistics

In the language of gauge field theory, the wavefunction is set within a dual-layer, dual-tier, four-part structure. Dual-layer refers to the distinction between the global and the local view, while dual-tier refers to the two components in each view: the gauge potential and the gauge field strength. Deriving the gauge field strength from the gauge potential requires the application of an appropriate differential operator to the wavefunction.
In standard inferential statistics, the traditional practice is to use the sample mean $\bar{x}$ to estimate the population mean $\mu$ (as in the t-test). Within the stochastic sampling model introduced here, this is analogous to treating each sample as a phase point and then averaging the data from all sample phase points to estimate the overall mean of the operation set. This can be regarded as a global estimation, in which the global phase is a statistical constant, represented by $\theta = C$. In the stochastic sampling model, however, the dynamical phase is a function of the stochastic sample variable $x_j$, denoted by $\theta = \theta(x_j)$; this approach is therefore localized. Indeed, we have thereby achieved a dual-layer gauge structure with both a global and a local view.
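The contrast between the two views can be made concrete with a small sketch (our illustration; the counts are made up): the global view pools all samples into a single constant phase $\theta = C$, while the local view assigns each sample $x_j$ its own phase $\theta(x_j)$.

```python
# Global versus local estimation over yes/no samples.
import cmath
import statistics

samples = [(37, 13), (41, 9), (33, 17), (45, 5)]   # (yes, no) counts per sample

# Global view: one pooled constant, theta = C, estimated from all data.
total_yes = sum(a for a, _ in samples)
total_no = sum(b for _, b in samples)
C = cmath.phase(complex(total_yes, total_no))
print(f"global phase C = {C:.4f} rad")

# Local view: a phase function theta(x_j), one value per sample.
local = [cmath.phase(complex(a, b)) for a, b in samples]
print("local phases:", [f"{t:.4f}" for t in local])
print(f"spread across samples: {statistics.stdev(local):.4f}")
```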

5. A Closing General Discussion

Knowledge is a product of human cognition, one essential to the survival and flourishing of societies. Understanding how we process such vast amounts of knowledge—both scientific knowledge and the knowledge of everyday life—is a question that cognitive science seeks to explain. Increasingly, cognitive science emphasizes interdisciplinary research, which requires extracting concepts of cross-disciplinary significance and building mathematical models based on cross-disciplinary tools.
With these considerations in mind, this paper introduces a framework for the study of higher cognition. It emphasizes the interdependent relationship between empirical research and normative theory within the three main subfields of reasoning, decision-making, and competition. A theoretical framework for integrating reasoning, decision-making, and competition is proposed and elaborated. Furthermore, the concept of hesitation is introduced into the cognitive process, reflecting features of cognitive fluctuation and enabling the use of the methods of dynamic analysis. Dynamic analysis of reasoning leads to the concept of logical charge. A moving logical charge generates a cognitive field, which in turn has the capacity to draw decisions towards a pole, representing either commitment or refusal. Further, it is shown how this dynamical approach to analyzing higher cognition proves highly descriptive in economics. Within the framework of the Standard Model in theoretical physics, the formalism of economic dynamics is outlined, including market dynamics, sub-economic dynamics, economic externality dynamics, the ordinary rationality mechanism, and the inequality mechanism in political economics. Finally, a stochastic statistical model is proposed for quantum yes-no experiments.
Since the onset of the cognitive revolution in the 1950s, cognitive science has thrived. It has not only received substantial theoretical development and research, but has also been applied across a multitude of fields, including education, social security, healthcare, the armed forces, journalism, public opinion guidance, and, most significantly, artificial intelligence. Given this extensive influence, a pressing priority for cognitive science is to establish a core foundation and a set of standard modeling tools for the discipline.
In 1900, Hilbert famously presented 23 mathematical problems at the International Congress of Mathematicians, embodying the academic spirit and disciplinary approach of the Göttingen school and substantially guiding the development of mathematics in the 20th century. Today, cognitive scientists are at a pivotal moment, a ripe time to present their own questions to lead the development of cognitive science in the 21st century. The recent developments in artificial intelligence have made this call all the more urgent.

Acknowledgements

The author thanks Yong-Shi Wu for his invaluable input on the fourth section.

References

  1. Cai, S. (2023). Basic research, core technology, and synthesis innovation of the mega-science era. Journal of Academic Frontier. June 2023, 16–35. [CrossRef]
  2. Camerer, C. F. (2011). Behavioral Game Theory: Experiments in Strategic Interaction. Princeton University Press.
  3. Lee, T. D. (1988). Symmetries, asymmetries, and the world of particles. University of Washington Press.
  4. Von Neumann, J. (2018). Mathematical foundations of quantum mechanics (N. A. Wheeler, Ed.; New edition.). Princeton University Press.
  5. Parisi, G., & Wu, Y.-S. (1980). Perturbation theory without gauge fixing. Scientia Sinica, 24, 483–496.
  6. Penrose, R. (2004). The road to reality: A complete guide to the laws of the universe. Jonathan Cape.
  7. Savage, L. J. (1972). The foundations of statistics. Dover Publications.
  8. Yang, Y., Braine, M. D. S., & O’Brien, D. P. (1998). Some empirical justifications of one mental predicate-logic model. In M. D. S. Braine & D. P. O’Brien (Eds.), Mental Logic. Psychology Press. [CrossRef]
  9. Yang, Y. (2022). The contents, methods, and significance of economic dynamics: Economic dynamics and standard model (I). Science, Economics, Society. Vol. 40, No. 5. [CrossRef]
  10. Yang, Y. (2023a). Principles of market dynamics: Economic dynamics and standard model (II). Science, Economics, Society. Vol. 41, No. 1. [CrossRef]
  11. Yang, Y. (2023b). Principles of sub-economic dynamics: Economic dynamics and standard model (III). Science, Economics, Society. Vol. 41, No. 3. [CrossRef]
  12. Yang, Y. (2023c). Principles of economic externality dynamics: Economic dynamics and standard model (V). Science, Economics, Society. Vol. 41, No. 5. [CrossRef]
  13. Yang, Y., & Johnson-Laird, P. N. (2000a). Illusions in quantified reasoning: How to make the impossible seem possible, and vice versa. Memory & Cognition, 28(3), 452–465. [CrossRef]
  14. Yang, Y., & Johnson-Laird, P. N. (2000b). How to eliminate illusions in quantified reasoning. Memory & Cognition, 28(6), 1050–1059. [CrossRef]