1. Introduction
Studying the behavior of self-organizing and complex adaptive systems has been a subject of increasing interest for at least half a century. In recent years, Bayesian mechanics has emerged as a general probabilistic framework for modeling autonomous physical systems, particularly those that exhibit statistical tracking between the trajectories of their internal states and the parameters of their external states [1,2]. Bayesian mechanics relies on the free energy principle as its central theoretical underpinning, which enables the information geometry associated with such systems to be encoded with probabilistic representations (“beliefs”) about both their internal and external states. As such, it is conducive to the development of an ontology that highlights three key aspects: (1) the dynamic and indeterminate nature of “reality,” (2) the iterative and hierarchical structure of its constituent processes, and (3) the self-relational understanding of its phenomena.
Over the last two decades, the term “new materialism” (or neo-materialism) has been used as an umbrella term for a range of philosophical stances that radically reject a number of deeply entrenched views in the history of Western philosophy concerning the nature of matter and reality [3,4]. Despite a significant diversity of perspectives within new materialism, which at times even reveal diametric oppositions, the three aspects mentioned above are among its primary concerns. However, certain strands of new materialism are more aligned with the conceptual assertions of Bayesian mechanics and the free energy principle (FEP) than others.
The aim of the present paper, as well as subsequent ones, is to systematically develop a neo-materialistic account of FEP and Bayesian mechanics, while exploring its far-reaching implications. The primary objective of this paper is to introduce the essential tools necessary for the development of such an account. The first section provides a concise introduction to the classical view of the FEP. The following section briefly outlines the most prevalent forms of new materialism, and discusses their compatibility with the core tenets of the FEP. Subsequently, Manuel DeLanda’s notion of the structure of possibility spaces is introduced as a central element of our account, along with its relevance to the FEP. Finally, by utilizing the concepts introduced earlier, the FEP is examined through the lens of Karen Barad’s agential realism, thereby establishing the groundwork for forthcoming discussions.
2. A Brief Overview of Classical FEP
The free energy principle (FEP), proposed by Karl Friston in 2006 [5], is a general physics of information that explains how autonomous agents or self-organizing physical systems maintain their structure and functionality over time despite a changing environment. This persistence through time is evidenced by repeated observation of the system’s characteristic states and dynamics. The FEP can be seen as an adaptation of the variational principle of least action: to satisfy the aforementioned conditions, a system must appear as if its active and internal states follow paths of least action, minimizing variational free energy [6].
Variational free energy is a functional of approximate Bayesian beliefs about external states. As the name implies, it is related to the concept of free energy in statistical physics [7]. More precisely, it has been demonstrated that, under the Laplace assumption, variational free energy takes the same form as Gibbs free energy [8]. Depending on its mathematical expression, it admits various interpretations that are closely related and relevant to well-known ideas in cognitive modeling [9]. In particular, it serves as an upper bound on the divergence between a system’s sensory states and its expectations or predictions regarding those states. In other words, it enables the minimization of the prediction error, or surprisal, that a system encounters given its model of the world, also known as the generative model.
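In a notation common in the FEP literature (with η for external states, o for sensory states, q for the approximate posterior belief, and p for the generative model), the free-energy bound on surprisal can be written as:

```latex
F[q, o] \;=\; \mathbb{E}_{q(\eta)}\big[\ln q(\eta) - \ln p(o, \eta)\big]
        \;=\; \underbrace{D_{\mathrm{KL}}\big[q(\eta)\,\|\,p(\eta \mid o)\big]}_{\geq\, 0} \;-\; \ln p(o)
        \;\geq\; -\ln p(o).
```

Since the Kullback–Leibler divergence is non-negative, minimizing F tightens the bound on the surprisal −ln p(o) by improving the belief q (perception) and, where actions can change o, reduces surprisal itself (action).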
The FEP suggests that systems behave as if they seek to minimize free energy (and thereby surprisal) through two processes: (1) continuously updating their generative model to better align with incoming sensory data from the environment, and (2) performing actions that change the environment so as to improve the congruence between sensory data and the generative model. This integrated framework, which views action and perception as interconnected, is known as active inference [9].
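These two processes can be illustrated with a deliberately minimal single-variable sketch (the function names and parameter values here are ours for illustration, not drawn from the cited literature): a Gaussian generative model whose belief is refined by gradient descent on free energy, followed by an action that moves the sensory datum toward the prediction.

```python
# Toy generative model: hidden cause x with prior N(mu_prior, s_prior);
# observation o is expected to equal x up to noise with variance s_obs.
mu_prior, s_prior = 3.0, 1.0   # prior belief about the external state
s_obs = 1.0                    # expected sensory noise

def free_energy(mu, o):
    """Laplace-approximated variational free energy (up to constants)."""
    return (o - mu) ** 2 / (2 * s_obs) + (mu - mu_prior) ** 2 / (2 * s_prior)

def perceive(o, mu=0.0, lr=0.1, steps=200):
    """Perception: gradient descent of the belief mu on free energy."""
    for _ in range(steps):
        dF_dmu = -(o - mu) / s_obs + (mu - mu_prior) / s_prior
        mu -= lr * dF_dmu
    return mu

o = 5.0                          # sensory datum
mu = perceive(o)                 # belief settles between prior and data
print(round(mu, 3))              # -> 4.0 (posterior mean, equal precisions)

# Action: instead of changing the belief, change the world so that
# sensations match predictions (here, trivially, move o toward mu).
o_new = o - 0.5 * (o - mu)
print(free_energy(mu, o_new) < free_energy(mu, o))  # -> True
```

Both routes lower the same quantity: perception adjusts the model to fit the data, action adjusts the data to fit the model.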
A key concept in the FEP is the notion of the Markov blanket, which partitions the state space of a system into internal states, external states, and the blanket states that mediate between them, establishing their interdependence through sparse coupling. The Markov blanket acts as an interface and boundary within the state space and is not necessarily a spatiotemporal boundary. In many instances, however, it manifests as a physical boundary in space-time, such as a cell membrane [2,10,11].
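In the partition standard in the FEP literature, the states x of a system are divided into external states η, sensory states s, active states a, and internal states μ, with the blanket b comprising the sensory and active states; the defining condition is the conditional independence of internal and external states given the blanket:

```latex
x = (\eta, s, a, \mu), \qquad b = (s, a), \qquad
p(\mu, \eta \mid b) \;=\; p(\mu \mid b)\, p(\eta \mid b).
```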
The states within the Markov blanket are influenced by the system and, in turn, influence the system itself, while also being influenced by the environment. Thus, the internal states of the system can vicariously track the external states without direct access to them. This statistical boundary enables the system to maintain its organization and resist disorder. Furthermore, minimizing free energy across the Markov blanket maximizes the evidence for the system’s own generative model, a process known as self-evidencing [12].
5. Possibility Spaces
Manuel DeLanda’s notion of the structure of possibility spaces was first introduced in his book Intensive Science and Virtual Philosophy [32] and further developed in Philosophy and Simulation [33]. It is inspired by Gilles Deleuze’s concept of virtual multiplicity and relies on the distinction between the actual and the virtual, as well as between properties and capacities. A knife has the property of being sharp, which is always actual. It also possesses the capacity to cut, which may remain virtual. Capacities are inherently relational and binary, involving both the ability to affect (to cut) and to be affected (to be cut). Unlike its always actual property of sharpness, the knife’s capacity to cut may or may not be actualized. When the knife interacts with objects that have the capacity to be cut, such as cheese or bread, its capacity to cut is realized. When it encounters objects such as a solid block of titanium, however, this capacity remains virtual and unactualized. Furthermore, depending on the specific capacities of the objects it interacts with, the knife possesses innumerable capacities, such as the capacity to kill, carve, peel, and so on. It is crucial to understand that these non-actualized virtualities are no less real than manifested actualities [34].
A closely related concept is that of tendency. Tendencies have the ability to change the properties of a whole, even to the extent of altering its identity. For example, an ice cube has a tendency to undergo a phase transition at a certain temperature, changing its state from solid to liquid. Like capacities, tendencies can be both real and unmanifested (virtual) [33].
For any system with a given state space, the collection of all tendencies (both manifested and unmanifested) and capacities (both exercised and unexercised) forms the possibility space for that system. Each possibility space possesses a distinct structure defined by a multiplicity. DeLanda defines multiplicity as “a nested set of vector fields related to each other by symmetry-breaking bifurcations, together with the distributions of attractors which define each of its embedded levels” [32].
The concept of singularity, as an intrinsically topological feature, forms the foundation of attractors in chaos theory. These attractors have had a lasting impact on complexity science by providing insights into the emergence of events, forms, and structures in a mechanism-independent manner. Attractors can be seen as singularities that guide trajectories in their vicinity, yet they do not manifest directly within the physical system. Furthermore, the transition from one attractor to another can be modeled by introducing perturbations or shocks, which involve varying a parameter of the model [35].
Singularities (or attractors), which are always virtual, play a significant role within the multiplicity. As mentioned above, trajectories within the system asymptotically approach attractors but never pass through them, thereby never rendering them actual. Rather, attractors characterize the evolution of (actualized) trajectories through the state space. Nevertheless, in accordance with Alexander’s dictum (“to exist is to have causal powers”) [36], attractors are considered real despite their virtuality.
Different topological forms of attractors have been identified. The state spaces of linear systems are typically structured by point attractors, whereas those of nonlinear systems can exhibit multiple attractors of various types. A point attractor indicates a tendency towards a steady state, while a limit cycle represents stable oscillation. In linear systems, the tendency to approach an attractor is entirely deterministic, allowing the state of a process at any given time to be deduced, in principle, from knowledge of the state space structure. In state spaces with multiple attractors, however, each with its own basin of attraction, the state of a process at any moment depends not only on the equations but also on the historical path the process has followed; it cannot be deduced from the equations alone. Even if two trajectories start from infinitesimally close initial conditions, they can end up in entirely different basins of attraction [37].
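The role of history can be made concrete with a toy bistable system (our own illustrative example, not one drawn from the cited sources). The one-dimensional flow dx/dt = x − x³ has point attractors at x = ±1 separated by a repeller at x = 0, so the final state depends entirely on which side of the repeller a trajectory starts on:

```python
def settle(x, dt=0.01, steps=5000):
    """Forward-Euler integration of dx/dt = x - x**3, a bistable system
    with point attractors at x = -1 and x = +1 (repeller at x = 0)."""
    for _ in range(steps):
        x += dt * (x - x ** 3)
    return x

# Two trajectories from nearly identical initial conditions end up in
# different basins of attraction: the equations alone do not determine
# the final state -- the starting point (the history) does.
print(round(settle(+1e-6), 3))   # -> 1.0
print(round(settle(-1e-6), 3))   # -> -1.0
```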
Moreover, identical topologies and distributions of singularities can emerge in distinct physical processes that lack any apparent similarity in their mechanisms. This mechanism-independence is clearly illustrated by the Lorenz system, where the three differential equations governing the dynamics, whose solutions trace out what is known as the Lorenz attractor, do not depend on the specific physical properties or mechanisms of the chaotic events being modeled. Consequently, a wide range of complex phenomena, including those in atmospheric, physiological, economic, ecological, and other domains, can be effectively modeled using the same equations. Complexity scientists have consequently coined the term critical universality to describe this phenomenon [35,38].
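As a numerical sketch (using the standard textbook parameter values σ = 10, ρ = 28, β = 8/3), the three Lorenz equations can be integrated directly; nothing in their right-hand sides refers to any particular physical substrate, which is the sense of mechanism-independence at issue:

```python
def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz system:
    dx/dt = sigma*(y - x), dy/dt = x*(rho - z) - y, dz/dt = x*y - beta*z."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

# Sensitive dependence on initial conditions: two trajectories starting
# a mere 1e-9 apart diverge by many orders of magnitude, yet both remain
# bounded on the attractor.
a, b = (1.0, 1.0, 1.0), (1.0, 1.0, 1.0 + 1e-9)
for _ in range(40000):                     # integrate to t = 40
    a, b = lorenz_step(a), lorenz_step(b)
dist = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
print(dist > 1e-3)                         # -> True: the perturbation has blown up
```

Reinterpreted, the same few lines model atmospheric convection, lasers, or dynamos; the equations are indifferent to the substrate.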
A proper treatment of the modal ontology of possibility spaces requires an extensive and sustained philosophical argument, which is beyond the scope of this paper (see, e.g., [32]). Nonetheless, a few related points are worth noting. The structure of a possibility space can be likened to “enabling constraints,” a concept reminiscent of Gibsonian affordances [39] as well as Alicia Juarrero’s notion of “causality as constraint” [40]. This idea of freedom within constraints also resonates with the work of complexity theorist Edgar Morin, who emphasizes the restrictions that complex adaptive systems impose on their elements as emergent properties of the systems themselves [41,42].
DeLanda’s notion of possibility spaces becomes particularly relevant when considering the function of Markov blankets in partitioning state spaces. It provides us with a framework to understand how an agent’s active and sensory states shape the range of possibilities it can explore. This aligns with the classical view of active inference regarding individuation, cognitive niche construction, and enculturation. Moreover, it entails an epistemological shift from focusing solely on an agent’s internal structure and cognitive attitudes to considering the constraints that influence the agent’s behavior.
The concept of the Markov blanket is crucial for understanding embedded normativity as well. Embedded normativity suggests that normativity is not solely a property of the landscapes agents experience but also a property of the agents themselves, arising from their ability to set and pursue goals. It implies that normativity can operate extrinsically, regulating the intrinsic metabolic and modulatory normativity of autonomous agents and organizing the way human agents acquire norms. In simpler terms, normativity can both influence the agent from the outside (through the environment) and exist internally as part of the agent’s model of the world.
In the context of active inference and the FEP, the boundary between inside and outside typically refers to the statistical boundaries of the active agent. Defining the precise boundary of an agent, however, is not straightforward. Traditionally, for instance, cognition was considered to reside solely within the brain, with the rest of the world being its object. The parity principle challenges this view, suggesting that if a part of the world functions as a process that, were it to occur within the brain, would be recognized as part of the cognitive process, then that part of the world should be considered part of the cognitive process as well [43]. Future papers will explore this idea further by showing how viewing agents as assemblages (in the technical sense used in assemblage theory) can provide new insights into the relationship between an agent and its interface with possibility spaces.
The structure of possibility spaces can also be extended to encompass the structure of scientific theories. Instead of being a single monolithic model, a scientific theory can be understood as a “population of models” consisting of interconnected families of models [44]. Some of these models play more foundational roles, while others branch off from them. In this heterogeneous population, most members are causal models that convey information about causal relationships between events, while a smaller number represent quasi-causal relations between singularities [32].
In other words, the bulk of the population of models making up a scientific theory interfaces with the actual world, whereas a smaller subset interfaces with the virtual. The virtual aspect represents what Deleuze describes as “true” or well-posed problems. According to him, “the virtual possesses the reality of a task to be performed or a problem to be solved” [45], with each distinct solution enabled by the actual part of the theory. For instance, as Deleuze notes, “an organism is nothing if not the solution to a problem” [45]. To further illustrate this, consider the shared “problem” faced by the molecules in soap bubbles and salt crystals: finding a point of minimal energy. The solution, however, manifests differently in each case. The molecules in a soap bubble collectively behave so as to minimize surface tension, while the molecules in a salt crystal minimize bonding energy.
Therefore, the virtual component of a given scientific theory (e.g., fundamental laws, variational principles, etc.) is not falsifiable per se, unlike the actual component, which acquires falsifiability through its status as (correct or incorrect) solutions to the problems posed by the virtual. This perspective provides yet another response to the misguided criticism of the FEP as being “unfalsifiable,” as well as to the “map-territory fallacy” fallacy [46,47].
6. Intra-Action and Agential Realism
Intra-action, a term coined by Karen Barad and to some extent echoing John Dewey’s concept of trans-action [48], is a key concept in performative new materialism. It emphasizes a dynamic relational perspective that goes beyond the traditional realism versus relativism debate. Intra-action challenges the conventional notion of individual entities acting upon each other from a distance, proposing instead a process of co-constitutive emergence where “observed objects” are fundamentally inseparable from the “agencies of observation” [28]. Importantly, in this context, the term agency does not refer to a static attribute possessed by someone or something. Rather, it denotes the enactment of a processual reconfiguration of the world [27]. In this sense, agency is understood as both tracing the possibility space and actively participating in its ongoing reconfiguration.
Matter, on the other hand, is described as a “congealing of agency” [27], a continual process of stabilizing and destabilizing through iterative intra-activity.
In the context of active inference and the FEP, intra-action can be seen as referring to the dynamic processes in which the agent and its environment actively shape one another. Rather than passively receiving information, the agent engages in active inference to generate beliefs about the environment based on sensory inputs. These beliefs then guide the agent’s actions, which can in turn modify the environment and consequently alter the sensory inputs it receives. This ongoing cycle of inference and action can be understood as a form of intra-action, highlighting the continuous co-constitution of the agent and its environment.
Barad’s agential realism is heavily inspired by Niels Bohr’s philosophy-physics (even though it has received some criticism for its insufficient faithfulness to Bohr’s ideas [49]). In Bohrian philosophy-physics, phenomena, understood as the dynamic relationship between the observed physical phenomenon and the measuring apparatus, are not separate from reality but rather constitute it, emerging through the inseparable intra-activity of agencies. The apparatus, in this context, is not a passive instrument but a dynamic entity that actively participates in the production of phenomena [28]. It serves as a material practice that constantly reconfigures the boundaries and properties of agencies-within-phenomena. This perspective aligns with the FEP’s understanding of agents as self-organizing systems that form beliefs about the causal origins of sensory states in the environment, thereby providing evidence for their own existence.
In addition, active inference is inherently perspectival, as it involves making sense of and engaging with the world from a specific point of view. This perspective-taking aligns with Barad’s agential realism, which emphasizes the mutual constitution of subjects and objects. Similarly, in active inference, the subject (the agent) and the object (the environment) are not independent entities but continually co-constitute each other through probabilistic tracking of one another. Furthermore, just as intra-action highlights the mutual constitution of entities across different scales, active inference operates over nested timescales to maximize model evidence. This multi-scale perspective underscores the dynamic and relational nature of systems within the framework of the FEP, resonating with the intra-active understanding of reality.
As for its epistemological stance, Barad’s position challenges absolutism while retaining the potential for objectivity. It proposes a form of relationalism that goes beyond reconciling realism and relativism, instead offering resources to transcend the debate by rejecting the notion of separateness. Intra-action, as the core of this perspective, serves as a key tool for understanding and navigating these discussions. Barad’s view of objectivity stems from the notion of referentiality that results from the intra-active perspective. As Matz Hammarström notes, in this view, “the objective referent [is] the phenomenon... and not a pre-existing object” [50]. This aligns with a form of constructivism that is relationalist rather than relativist. While it agrees with relativism in rejecting an absolute conception of reality, it goes beyond the typical emphasis on the constitutive role of subjectivity. “Instead it shares and, through its post-humanist stance, radicalizes Dewey’s trans-actional idea of the entanglement of the organism-in-environment-as-a-whole” [50].
It is crucial to emphasize that, in this perspective, as Hammarström observes, “[t]he measurement (the perception) does not create the ‘object’; it is not the human subject that measures the world into existence. Through our perceptions/measurements we (and other forms of existence) actualize some of the [virtualities of possibility spaces]. A measurement does not measure something non-existent into existence; it actualizes one of the existing [virtualities] of the perceptible” [50].
7. Conclusion and Future Research
In conclusion, this paper has introduced several essential conceptual tools for constructing a neo-materialistic account of the FEP. It examined different types of New Materialism in relation to the FEP and argued that performative new materialism may be the strand most compatible with it. Furthermore, the concept of possibility spaces has been introduced, with emphasis on its relevance to key aspects of the FEP; in particular, the notion of the Markov blanket as defining and continually updating boundaries within the possibility space has been discussed. In this context, the concept of embedded normativity has been explored as well, suggesting that normativity is not just a property of the landscapes that agents experience but also a property of the agents themselves. Finally, the paper has examined intra-action and agential realism, highlighting their alignment with the FEP and active inference, and emphasizing the dynamic and relational perspective in which the agent and its environment mutually constitute each other.
Subsequent papers in this series will systematically elaborate on the details of the neo-materialistic account of the FEP, building upon the introduced concepts. They will provide a comprehensive account of agents as assemblages and explore the implications of this perspective for active inference and the FEP. Furthermore, they will offer a more extensive description and typology of intra-active agents, drawing insights from kinetic materialism.
Future research can further explore these concepts and their implications for our understanding of cognition and behavior. It would be interesting to explore how the concept of intra-action can be applied to other areas of cognitive science and neuroscience, and how the ideas of possibility spaces and embedded normativity can be further developed and operationalized. Additionally, the role of the Markov blanket in defining and updating the boundaries of cognitive agents can be examined in more detail, potentially leading to new insights into the nature of cognition and the relationship between the brain and its environment.