
Modeling and Predicting Self-Organization in Dynamic Systems Out of Thermodynamic Equilibrium; Part 1

A peer-reviewed article of this preprint also exists. This version is not peer-reviewed.
Submitted: 31 October 2024; Posted: 01 November 2024

Abstract
Self-organization in complex systems is a process in which internal entropy is reduced and emergent structures appear that allow the system to compete more effectively with other states of itself or with other systems. It occurs only in the presence of energy gradients, which facilitate energy transmission through the system and entropy production. Being a dynamic process, self-organization requires a dynamic measure and dynamic principles. The principles of decreasing unit action and increasing total action, and of decreasing unit entropy and increasing total entropy, are dynamic variational principles that can be applied to self-organizing systems. On this basis, average action efficiency can serve as a quantitative measure of the degree of self-organization. Positive feedback loops connect this measure with all other characteristics of a complex system, providing all of them with a mechanism for exponential growth and implying power-law relationships between each pair of them, as confirmed by data and simulations. In this study, we apply these principles and the model to agent-based simulations of ants traveling between two locations on a 2D grid. We find that the principles explain self-organization well and that the results confirm the model. We derive a compact model of ant behavior based on the action of their trajectories, and then estimate a variety of metrics from the simulated behavior. By measuring action efficiency, we can offer a new answer to the question: "What is complexity, and how complex is a system?" This work shows the explanatory and predictive power of these models, which can help us understand and design better complex systems.
Subject: Physical Sciences - Theoretical Physics

1. Introduction

1.1. Background and Motivation

Self-organization is key to understanding the existence of, and the changes in, all systems that lead to higher levels of complexity and perfection in development and evolution. It is a scientific as well as a philosophical question, and its importance becomes clearer as our understanding deepens. Self-organization often leads to more efficient use of resources and optimized performance, which is one measure of the degree of perfection. By degree of perfection here we mean a more organized, robust, resilient, competitive, and alive system. Because competition for resources is always the selective evolutionary pressure in systems of different natures, the more efficient systems will survive at all levels of Cosmic Evolution.
Our goal is to contribute to the explanation of the mechanisms of self-organization that drive Cosmic Evolution from the Big Bang to the present, and into the future, and its measures [1,2,3]. Self-organization has a universality independent of the substrate of the system - physical, chemical, biological, or social - and explains all of its structures [4,5,6,7]. Establishing a universal, quantitative, absolute method to measure the organization of any system will help us understand the mechanisms of functioning and organization in general and enable us to design specific systems with the highest level of perfection [8,9,10,11,12].
Previous attempts to quantify organization have used static measures, such as information [13,14,15,16,17,18,19] and entropy [20,21,22,23,24,25], but our approach offers a new, dynamic perspective. We use the expanded Hamilton’s action principle and derive from it a dynamic action principle for the system studied, where average unit action for one trajectory is continuously decreasing and total action for the whole system is continuously increasing with self-organization.
Despite its significance, the mechanisms driving self-organization remain only partially understood due to the non-linearity of the dynamics of complex systems. Existing approaches often rely on specific metrics such as entropy or information, which are static; while valuable, they are limited in their universality and in their ability to predict the most organized state of a system. In many cases, they fail to describe the dynamics of the processes that lead to the increase of order and complexity. For example, described in terms of information, a less organized system may require more bits than the same system in a more organized, i.e., more efficient, state. These traditional measures often fall short of providing a comprehensive, quantitative framework that can be applied across various types of complex systems and that describes the mechanisms leading to higher levels of organization. This gap in understanding highlights the need for a new measure that can universally and quantitatively assess the level of organization in complex systems and the transitions between them.
The motivation for this study stems from the desire to bridge this gap by introducing a novel measure of organization based on dynamical variational principles. More specifically we use Hamilton’s principle of stationary action, which is the basis of all laws of physics. In the limiting case, when the second variation of the action is positive, this makes it a true principle of least action. The principle of least action posits that the path taken by a physical system between two states is the one for which the action is minimized. Extending this principle to complex systems, we propose Average Action Efficiency as a new, dynamic measure of organization. It quantifies the level of organization and serves as a predictive tool for determining the most organized state of a system. It also correlates with all other measures of complex systems, justifying and validating its use.
Understanding the mechanisms of self-organization has profound implications across various scientific disciplines. Understanding these natural optimization processes can inspire the development of more efficient algorithms and strategies in engineering and technology. It can enhance our understanding of biological and ecological processes. It can allow us to design more efficient economic and social systems. Studying self-organization also has profound scientific and philosophical implications. It challenges traditional notions of causality and control, emphasizing the role of local interactions and feedback loops in shaping global patterns. In our model, each characteristic of a complex system is simultaneously a cause and an effect of all others. By developing a universal, quantitative measure of organization, we aim to advance our understanding of self-organization and provide practical tools for optimizing complex systems across different fields.

1.2. Novelty and Scientific Contributions

In this paper we explore the following topics; their highlights are summarized below.

1.2.1. Dynamical Variational Principles

Extension of Hamilton’s Least Action Principle to Non-Equilibrium Systems. This work extends Hamilton’s principle by applying it to stochastic and non-equilibrium systems, offering a framework that connects classical mechanics to entropy- and information-based variational principles for analyzing self-organization in complex systems. We define novel dual dynamical principles, such as increasing total action, information, and entropy while decreasing unit action, information, and entropy, which reflect the system’s evolution and self-organization. These principles manifest in power-law relationships and reveal a unit-total (or local-global) duality across scales. Agent-based simulations confirm these dual variational principles, which support a multiscale approach to modeling hierarchical and networked systems by linking micro-level interactions with macro-level organizational structures.

1.2.2. Positive Feedback Model of Self-Organization

Positive Feedback Model with Power-Law and Exponential Growth Predictions. We introduce and test a model of positive feedback loops within self-organizing systems, predicting power-law relationships and exponential growth. This model goes beyond empirical observations by mathematically deriving the power-law relations, providing a predictive framework for system dynamics in complex systems.
Prediction of Power-Law and Exponential Growth Patterns in Complex System Characteristics. By showing that the feedback mechanisms between the characteristics can predict power-law relationships among system variables, the paper goes beyond qualitative descriptions. Traditional models often observe power-law scaling relationships without predicting them. Here, the work mathematically derives these relationships, offering a framework that could extend to empirical verification across disciplines.

1.2.3. Average Action Efficiency (AAE)

Introduction of Average Action Efficiency (AAE) as a Dynamic Measure. AAE provides a real-time, universal metric for quantifying organization, based on the motion of the agents, surpassing traditional static measures. Application of this measure can be explored across disciplines, including biology and engineering. It is validated through simulations. AAE enables real-time system diagnosis and control, with applications in robotics, environmental management, and adaptive systems.

1.2.4. Agent-Based Modeling (ABM)

Dynamic ABM. Our ABM includes dynamic effects such as pheromone feedback, allowing for realistic simulations of complex behaviors that validate the theoretical framework. This model demonstrates the applicability of the variational principles and dualism in stochastic and dissipative settings, enhancing the framework’s utility for future research and experimental studies.

1.2.5. Intervention and Control in Complex Systems

Real-Time Metric for Adaptive Control. With AAE as a real-time measurable metric, we establish a basis for guiding systems toward optimized states. This diagnostic potential is relevant in fields like engineering and sustainability.

1.2.6. Average Action-Efficiency as a Predictor of System Robustness

Average Action Efficiency (AAE) is proposed not only as a measure of organization but also as a predictor of system robustness. We suggest that systems with higher AAE are more robust and resilient to perturbations, providing a measurable link between action efficiency and system stability. This has scientific relevance for fields like ecology, network theory, and engineering, where robustness is key to system survival and functionality in changing environments.
Theoretical Framework Linking AAE to System Efficiency and Stability. The paper’s theory posits that AAE reflects the level of organization within a system, where higher AAE corresponds to more efficient, streamlined configurations that minimize wasted energy or time. This theoretical underpinning aligns well with the concept of robustness, as more organized and efficient systems are generally better equipped to withstand disturbances due to their optimized internal structure.
Positive Feedback Loops Reinforcing Stability. The positive feedback model presented in the paper suggests that as AAE increases, there is an exponential reinforcement of organized structures within the system. This self-reinforcing organization implies that systems with high AAE are not only efficient but also maintain structural coherence, which can enhance their ability to absorb and recover from perturbations. This resilience is a characteristic of robust systems in fields like ecology and network theory.
Simulation Results Demonstrating Stability at High AAE. The agent-based simulations provide empirical support by showing that systems reach stable, organized states with increased AAE, despite initial stochastic movements and random perturbations. For example, in the ant simulation, paths converge to efficient routes over time, demonstrating the system’s ability to stabilize around high-efficiency configurations. This illustrates that systems with higher AAE can naturally resist or recover from randomness, showing robustness.

1.2.7. Philosophical Contribution

Fundamental Understanding of Self-Organization and Causality. This work deepens the theoretical understanding of self-organization, suggesting that each characteristic within a system functions as both cause and effect, providing a foundation for new research on causality in complex systems.
Contribution to the Philosophy of Self-Organization and Evolution. Beyond technical applications, this work deepens the philosophical understanding of self-organization by framing it as a universal process governed by variational principles that transcend specific system boundaries. The dynamic minimization of unit action combined with total action growth introduces a novel concept of evolution that is proposed to apply to all open, complex systems. This conceptualization could inspire further philosophical inquiry into the nature of causality, emergence, and evolution in complex systems.

1.2.8. Novel Conceptualization of Evolution as a Path to Increased Action Efficiency

This paper introduces a unique evolutionary perspective, where self-organization drives systems toward states of increased action efficiency. This approach departs from more static views of evolution in complex systems, framing evolution not merely as survival optimization but as an open-ended journey toward dynamically minimized unit actions within the context of system growth. We propose that evolution in complex systems is driven at least in part by increasing action efficiency, offering a quantitative basis for directional evolution as systems optimize organization over time. This evolution of internal structure is, in general, coupled with the environment, a question that we will explore in further research.

1.3. Overview of the Theoretical Framework

We use the extension of Hamilton’s Principle of Stationary Action to a Principle of Dynamic Action, according to which action in self-organizing systems is changing in two ways: decreasing the average action for one event and increasing the total amount of action in the system during the process of self-organization, growth, evolution, and development. This view can lead to a deeper understanding of the fundamental principles of nature’s self-organization, evolution, and development in the universe, ourselves, and our society.

1.4. Hamilton’s Principle and Action Efficiency

Hamilton’s principle of stationary action is the most fundamental principle in nature, from which all other physics laws are derived [26,27]. Everything derived from it is guaranteed to be self-consistent [28]. Beyond classical and quantum mechanics, relativity, and electrodynamics, it has applications in statistical mechanics, thermodynamics, biology, economics, optimization, control theory, engineering, and information theory [29,30,31]. We propose its application, extension, and connection to other characteristics of complex systems as part of the complex systems theory.
Enders notably says: "One extremal principle is undisputed: the least-action principle (for conservative systems), which can be used to derive most physical theories. ... Recently, the stochastic least-action principle was also established for dissipative systems. Information theory and the stochastic least-action principle are important corner stones of modern stochastic thermodynamics." [32] He also writes: "Our analytical derivations show that MaxEPP is a consequence of the least-action principle applied to dissipative systems (stochastic least-action principle)." [32]
Similar dynamic variational principles have also been proposed in considering the dynamics of systems away from thermodynamic equilibrium. Martyushev has published reviews on the Maximum Entropy Production Principle (MEPP), saying: "A nonequilibrium system develops so as to maximize its entropy production under present constraints" [33], and "Sawada emphasized that the maximal entropy production state is most stable to perturbations among all possible (metastable) states" [34], which we will connect with dynamical action principles in the second part of this work.
The derivation of MEPP from LAP was first done by Dewar in 2003 [35,36], who based his work on Jaynes’s theory from 1957 [37,38] and extended it to non-equilibrium systems.
The papers by Umberto Lucia "Entropy Generation: Minimum Inside and Maximum Outside" (2014) [39] and "The Second Law Today: Using Maximum-Minimum Entropy Generation" [40] examine the thermodynamic behavior of open systems in terms of entropy generation and the principle of least action. Lucia explores the concept that within open systems, entropy generation tends to a minimum inside the system and reaches a maximum outside it, which relates to our observations of dualities of the same characteristic.
François Gay-Balmaz and Hiroshi Yoshimura derive a form of dissipative Least Action Principle (LAP) for systems out of equilibrium. Specifically, they extend the classical variational approaches used in reversible mechanics to dissipative systems. Their work involves the use of Lagrangian and Hamiltonian mechanics in combination with thermodynamic forces and fluxes, and they introduce modifications to the standard variational calculus to account for irreversible processes [41,42,43].
Arto Annila derives the Maximum Entropy Production Principle (MEPP) from the Least Action Principle (LAP) and demonstrates how the principle of least action underlies natural selection processes, showing that systems evolve to consume free energy in the least amount of time, thereby maximizing entropy production. He links LAP to the second law of thermodynamics and, consequently, MEPP [44]. Evolutionary processes in both living and non-living systems can be explained by the principle of least action, which inherently leads to maximum entropy production [45]. Both papers provide a detailed account of how MEPP can be understood as an outcome of the Least Action Principle, grounding it in thermodynamic and physical principles.
The potential of the stochastic least action principle has been shown in [46] and a connection has been made to entropy.
The concept of least action has been generalized by applying it to both heat absorption and heat release processes [47]. This minimization of action corresponds to the maximum efficiency of the system, reinforcing the connection between the least action principle and thermodynamic efficiency. By applying the principle of least action to thermodynamic processes, the authors link this principle to the optimization of efficiency.
The increase in entropy production was related to the system’s drive towards a more ordered, synchronized state, and this process is consistent with MEPP, which suggests that systems far from equilibrium will evolve in ways that maximize entropy production. Thus, a basis is provided for the increase in entropy using LAP [48].
The least action principle has also been used to derive the maximum entropy change for nonequilibrium systems [49].
Variational methods have been emphasized in the context of non-equilibrium thermodynamics for fluid systems, especially in relation to MEPP, emphasizing thermodynamic variational principles in nonlinear systems [50].
MEPP and the Least Action Principle (LAP) are connected through the Riemannian geometric framework, which provides a generalized least action bound applicable to probabilistic systems, including both equilibrium and non-equilibrium systems [51].
The Herglotz principle introduces dissipation directly into the variational framework by modifying the classical action functional with a dissipation term. This is significant because it provides a way to account for energy loss and the irreversible nature of processes in non-equilibrium systems. Herglotz’s principle provides a powerful tool for non-equilibrium thermodynamics by allowing for the incorporation of dissipative processes into a variational framework. This enables the modeling of systems far from equilibrium, where energy dissipation and entropy production play key roles. By extending classical mechanics to include irreversibility, the Herglotz principle offers a way to describe the evolution of systems in non-equilibrium thermodynamics, potentially linking it to other key concepts like the Onsager relations and the MEPP [52–54].
In Beretta’s fourth law of thermodynamics, the steepest entropy ascent could be seen as analogous to a least action path in the context of non-equilibrium thermodynamics, where the system follows the most "efficient" path toward equilibrium by maximizing entropy production. Both principles are forms of optimization, where one minimizes physical action and the other maximizes entropy, providing deep structural insights into the behavior of systems across physics [55].
In most cases in classical mechanics, Hamilton’s stationary action is minimized; in some cases it is a saddle point, and it is never maximized. The minimization of average unit action is proposed as a driving principle and the arrow of evolutionary time, and the saddle points are temporary minima that transition to lower action states with evolution. Thus, globally, on long time scales, average action is minimized and continuously decreasing when there are no external limitations. This turns it into a dynamic action principle for open-ended processes of self-organization, evolution, and development.
Our thesis is that we can complement other measures of organization and self-organization by applying a new, absolute, and universal measure based on Hamilton’s principle and its extension to dissipative and stochastic systems. This measure can be related to previously used measures, such as entropy and information, as in our model for the mechanism of self-organization, progressive development, and evolution. We demonstrate this with power-law relationships in the results.
This paper presents a derivation of a quantitative measure of action efficiency and a model in which all characteristics of a complex system reinforce each other, leading to exponential growth and power law relations between each pair of characteristics. The principle of least action is proposed as the driver of self-organization, as agents of the system follow natural laws in their motion, resulting in the most action-efficient paths. This explains why complex systems form structures and order, and continue self-organizing in their evolution and development.
Our measure of action efficiency assumes dynamical flow networks away from thermodynamic equilibrium that transport matter and energy along their flow channels and applies to such systems. The significance of our results is that they empower natural and social sciences to quantify organization and structure in an absolute, numerical, and unambiguous way. Providing a mechanism through which the least action principle and the derived measure of average action efficiency as the level of organization interact in a positive feedback loop with other characteristics of complex systems explains the existence of observed events in Cosmic Evolution. The tendency to minimize average unit action for one crossing between nodes in a complex flow network comes from the principle of least action and is proposed as the arrow of time, the main driving principle towards, and explanation of progressive development and evolution that leads to the enormous variety of systems and structures that we observe in nature and society.

1.5. Mechanism of Self-Organization

The research in this study demonstrates the driving principle and mechanism of self-organization and evolution in general open, complex, non-equilibrium thermodynamic systems, employing agent-based modeling. We propose that the state with the least average unit action is the attractor for all processes of self-organization and development in the universe across all systems. We measure this state through Average Action Efficiency (AAE).
We present a model for quantitatively calculating the amount of organization in a general complex system and its correlation with all other characteristics through power-law relationships. We also show the cause of progressive development and evolution, which is the positive feedback loop between all characteristics of the system that leads to an exponential growth of all of them until an external limit is reached. The internal organization of all complex systems in nature always reflects their external environment, from which the flows of energy and matter come. This model also predicts power-law relationships between all characteristics. Numerous measured complexity-size scaling relationships confirm the predictions of this model [32,33,34].
Our work addresses a gap in complex system science by providing an absolute and quantitative measure of organization, namely AAE, based on the movement of agents and their dynamics. This measure is functional and dynamic, not relative and static as in many other metrics. We show that the amount of organization is inversely proportional to the average physical amount of action in a system. We derive the expression for organization, apply it to a simple example, and validate it with results from agent-based modeling (ABM) simulations, which allow us to verify experimental data and to vary conditions to address specific questions [35,36]. We discuss extensions of the model for a large number of agents and state the limitations and applicability of this model in our list of assumptions.
Measuring the level of organization in a system is crucial because it provides a long-sought criterion for evaluating and studying the mechanisms of self-organization in natural and technological systems. All those are dynamic processes, which necessitate searching for a new, dynamic measure. By measuring the amount of organization, we can analyze and design complex systems to improve our lives, in ecology, engineering, economics, and other disciplines. The level of organization corresponds to the system’s robustness, which is vital for survival in case of accidents or events endangering any system’s existence [37]. Philosophically and mathematically, each characteristic of the system is a cause and an effect of all the others, similar to autocatalytic cycles, a phenomenon well studied in cybernetics [38].

1.6. Negative Feedback

Negative feedback is evident in the fact that large deviations from the power-law proportionality between the characteristics are not observed or predicted. This proportionality between all characteristics at any stage of the process of self-organization is the balanced state of functioning, usually known as a homeostatic, or dynamical equilibrium, state of the system. Complex systems function as wholes only at values of all characteristics close to this homeostatic state. If some external influence causes large deviations of even one of the characteristics from its homeostatic value, the system’s functioning is compromised [38].

1.7. Unit-Total Dualism

We find a unit-total dualism: unit quantities of the characteristics are minimized while total quantities are maximized with the system’s growth. For example, the average unit action for one event, which is one edge crossing in networks, is derived from the average path length and path time, and it is minimized, as calculated by the average action efficiency $\alpha$. At the same time, the total amount of action $Q$ in the whole system increases as the system grows, which can be seen in the results from our simulation. This is an expression of the principles of decreasing average unit action and increasing total action. Similarly, unit entropy per trajectory decreases in self-organization, as the total entropy of the system increases with its growth, expansion, and increasing number of agents. These can be termed the principles of decreasing unit entropy and of increasing total entropy. The information needed to describe one event in the system decreases with increased efficiency and shorter paths, while the total information in the system increases as it grows. They are also related by a power-law relationship, which means that one can be correlated with the other and that, for one of them to change, the other must also change proportionally.

1.8. Unit Total Dualism Examples

Analogous qualities are evidenced in data for real systems and appear in some cases so often that they have special names. For example, the Jevons paradox (Jevons effect) was published in 1866 by the English economist William S. Jevons [39]. In one example, as the fuel efficiency of cars increased, the total miles traveled also increased, raising total fuel expenditure. This is also named a "rebound effect" from increased energy efficiency [40]. The naming of this effect as a "paradox" shows that it is unexpected, not well studied, and sometimes considered undesirable. In our model, it is derived mathematically as a result of the positive feedback loops of the characteristics of complex systems, which are the mechanism of their self-organization, and it is supported by the simulation results. It is not only unavoidable, but also necessary for the functioning, self-organization, evolution, and development of those systems.
In economics, it is evident that with increased efficiency the costs decrease, which increases the demand; this is named the "law of demand" [41]. This is another example of a size-complexity rule: as the efficiency increases, which in our work is a measure of complexity, the demand increases, which means that the size of the system also increases. In the 1980s the Jevons paradox was expanded to the Khazzoom–Brookes postulate, formulated by Harry Saunders in 1992 [42], who argued that it is supported by "growth theory", the prevailing economic theory for long-run economic growth and technological progress. Similar relations have been observed in other areas, such as in the Downs–Thomson paradox [43], where increasing road efficiency increases the number of cars driving on the road. These are just a few examples that point out that this unit-total dualism has been observed for a long time in many complex systems and was thought to be paradoxical.

1.9. Action Principles in this Simulation, Potential Well

In each run of this specific simulation, the average unit action has the same stationary point, which is a true minimum of the average unit action, and the shortest path between the fixed nodes is a straight line. This is the theoretical minimum and remains the same across simulations. The closest analogy is with a particle in free fall, where it minimizes action and falls in a straight line, which is a geodesic. The difference in the simulation is that the ants have a wiggle angle and, at each step, deposit pheromone that evaporates and diffuses, therefore the difference with gravity is that the effective attractive potential is not uniform. Due to this the potential landscape changes dynamically. The shape of the walls of the potential well changes slightly with fluctuations around the average at each step. It also changes when the number of ants is varied between runs.
The walls of the potential well are steeper higher up, and the system cannot be trapped there in local minima of the fluctuations. This is seen in the simulation as, initially, the agents form longer paths that disintegrate into shorter ones. In this region away from the minimum, the action is truly always minimized, with some stochastic fluctuations. Near the bottom of the well, the slope of its wall is smaller, and local minima of the fluctuations cannot be overcome easily by the agents. Then the system temporarily gets trapped in one of those local minima, and the average unit action is a dynamical saddle point.
The simulation shows that with fewer ants, the system is more likely to get trapped in a local minimum, resulting in a path with greater curvature and higher final average action (lower average action efficiency) compared to the theoretical minimum. With an increasing number of ants, they can explore more neighboring states, find lower local minima, and reach lower average action states. Therefore, increasing the number of ants allows the system to explore neighboring paths more effectively and find shorter ones. This is evident as the average action efficiency improves when there are more ants, which can escape higher local minima and find lower action values (see Fig. 8). As the number of ants (agents, system size) increases, they asymptotically find lower local minima, or lower average action, improving average action efficiency, though never reaching the theoretical minimum.
In future simulations, if the distance between nodes is allowed to shrink and external obstacles are reduced, the shape of the entire potential well changes dynamically. Its minimum becomes lower, the steepness of its walls increases and the system more easily escapes local minima. However, it still does not reach the theoretical minimum, due to its fluctuations near the minimum of the well. The average action decreases, and average action efficiency increases with the lowering of this minimum, demonstrating continuous open-ended self-organization and development. This illustrates the dynamical action principle.

1.10. Research Questions and Hypotheses

This study aims to answer the following research questions:
  • How can a dynamical variational action principle explain the continuous self-organization, evolution, and development of complex systems?
  • Can Average Action Efficiency (AAE) be a measure for the level of organization of complex systems?
  • Can the proposed positive feedback model accurately predict the self-organization processes in systems?
  • What are the relationships between various system characteristics, such as AAE, total action, order parameter, entropy, flow rate, and others?
Our hypotheses are:
  • A dynamical variational action principle can explain the continuous self-organization, evolution and development of complex systems.
  • AAE is a valid and reliable measure of organization that can be applied to complex systems.
  • The model can accurately predict the most organized state based on AAE.
  • The model can predict the power-law relationships between system characteristics that can be quantified.

1.11. Summary of the Specific Objectives of the Paper

1. Define and Apply the Dynamical Action Principle: Define and apply the dynamical action principle, which extends the classical stationary action principle to dynamic, self-organizing systems, in open-ended evolution, showing that unit action decreases while total action increases during self-organization.
2. Demonstrate the Predictive Power of the Model: Build and test a model that quantitatively and numerically measures the amount of organization in a system, and predicts the most organized state as the one with the least average unit action and highest average action efficiency. Define the cases in which action is minimized, and based on that predict the most organized state of the system. The theoretical most organized state is where the edges in a network are geodesics. Due to the stochastic nature of complex systems, those states are approached asymptotically, but in their vicinity, the action is stationary due to local minima.
3. Validate a New Measure of Organization: Based on 1 and 2, develop and apply the concept of average action efficiency, rooted in the principle of least action, as a quantitative measure of organization in complex systems.
4. Explain Mechanisms of Progressive Development and Evolution: Apply a model of positive feedback between system characteristics to predict exponential growth and power-law relationships, providing a mechanism for continuous self-organization. Test it by fitting its solutions to the simulation data, and compare them to real-world data from the literature.
5. Simulate Self-Organization Using Agent-Based Modeling: Use agent-based modeling (ABM) to simulate the behavior of an ant colony navigating between a food source and its nest to explore how self-organization emerges in a complex system.
6. Define unit-total (local-global) dualism: Investigate and define the concept of unit-total dualism, where unit quantities are minimized while total quantities are maximized as the system grows, and explain its implications as variational principles for complex systems.
7. Contribute to the Fundamental and Philosophical Understanding of Self-Organization and Causality: Enhance the theoretical understanding of self-organization in complex systems, offering a robust framework for future research and practical applications.
This research aims to provide a robust framework for understanding and quantifying self-organization in complex systems based on a dynamical principle of decreasing unit action for one edge in a complex system represented as a network. By introducing Average Action Efficiency (AAE) and developing a predictive model based on the principle of least action, we address critical gaps in existing theories and offer new insights into the dynamics of complex systems. The following sections will delve deeper into the theoretical foundations, model development, methodologies, results, and implications of our study.

2. Building the Model

2.1. Hamilton’s Principle of Stationary Action for a System

In this work, we utilize Hamilton’s Principle of Stationary Action, a variational method, to study self-organization in complex systems. Stationary action is found when the first variation of the action is zero. When the second variation is positive, the action is a minimum. Only in this case do we have a true least action principle. We will discuss in which situations this is the case. Hamilton’s Principle of Stationary Action asserts that the evolution of a system between two states occurs along the path that makes the action functional stationary. By identifying and extremizing this functional, we can gain a deeper understanding of the dynamics and driving forces behind self-organization and describe it from first principles.
This is a simplified, first-order model presented as an example; the Lagrangian for the agent-based simulation is described in the following sections.
The classical Hamilton’s principle is:
$$\delta I(q, p, t) = \delta \int_{t_1}^{t_2} L(q(t), \dot{q}(t), t)\, dt = 0$$
where $\delta$ denotes an infinitesimally small variation of the action integral $I$, $L$ is the Lagrangian, $q(t)$ are the generalized coordinates, $\dot{q}(t)$ are the time derivatives of the generalized coordinates, $p$ is the momentum, and $t$ is the time; $t_1$ and $t_2$ are the initial and final times of the motion.
For brevity, further in the text we will write, when appropriate, $L = L(q(t), \dot{q}(t), t)$ and $I = I(q, p, t)$.
This is the principle from which all physics and all observed equations of motion are derived. The above equation is for one object. For a complex system, there are many interacting agents, so we propose that the sum of the actions of all agents is taken into account. This sum is minimized in its most action-efficient state, which we define as being the most organized. In previous papers [8,10,11,12] we have stated that for an organized system we can find the natural state of that system as the one in which the variation of the sum of the actions of all of the agents is zero:
$$\delta \sum_{i=1}^{n} I_i = \delta \sum_{i=1}^{n} \int_{t_1}^{t_2} L_i\, dt = 0$$
where $I_i$ is the action of the $i$-th agent, $L_i$ is the Lagrangian of the $i$-th agent, $n$ is the number of agents in the system, and $t_1$ and $t_2$ are the initial and final times of the motions.
A network representation of a complex system. When we represent the system as a network, we can define one edge crossing as a unit of motion, or one event in the system, for which the unit average action efficiency is defined. In this case, the sum of the actions of all agents for all edge crossings per unit time (the total number of crossings being the flow of events, $\phi$) is the total amount of action in the network, $Q$. In the most organized state of the system, the variation of the total action $Q$ is zero, which means that it is extremized as well; for the complex system in our example this extremum is a maximum.
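As an illustration of how this sum of actions can be evaluated in practice, the following minimal Python sketch (not part of the original derivation; the trajectory samples, mass, and time step are invented for illustration) approximates each agent's action with a free-particle Lagrangian $L = T = \frac{1}{2} m v^2$ and adds the actions of all agents:

```python
import numpy as np

def trajectory_action(positions, dt=1.0, mass=2.0):
    """Approximate I = integral of L dt for a sampled 2D trajectory,
    using a free-particle Lagrangian L = T = 0.5*m*v^2 (V = 0)."""
    positions = np.asarray(positions, dtype=float)
    velocities = np.diff(positions, axis=0) / dt          # finite-difference velocities
    kinetic = 0.5 * mass * np.sum(velocities**2, axis=1)  # L evaluated on each step
    return float(np.sum(kinetic * dt))                    # sum of L*dt along the path

# Sum of the actions of all agents, as in the variational condition above.
agents = [
    [(0, 0), (1, 0), (2, 0), (3, 0)],   # straight path between two nodes
    [(0, 0), (1, 1), (2, 0), (3, 0)],   # longer, wiggling path between the same nodes
]
print([trajectory_action(p) for p in agents])     # [3.0, 5.0]
print(sum(trajectory_action(p) for p in agents))  # total action of this two-agent system
```

The longer, wiggling path accumulates more action than the straight one, anticipating the least-action argument developed below.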

2.2. An Example of True Action Minimization: Conditions

This is just an example to understand the conceptual idea of the model. Later we will specify it for our simulation with the actual interactions between the agents.
  • The agents are free particles, not subject to any forces, so the potential energy is a constant and can be set to zero, because the origin for the potential energy can be chosen arbitrarily; therefore $V = 0$. Then the Lagrangian $L$ of the element is equal only to the kinetic energy $T = \frac{m v^2}{2}$ of that element:
    $$L = T - V = T = \frac{m v^2}{2}$$
    where $m$ is the mass of the element, and $v$ is its speed.
  • We are assuming that there is no energy dissipation in this system, so the Lagrangian of the element is a constant:
    $$L = T = \frac{m v^2}{2} = \text{constant}$$
  • The mass m and the speed v of the element are assumed to be constants.
  • The start point and the end point of the trajectory of the element are fixed at opposite sides of a square (see Fig. A1). As a consequence, the action integral cannot become zero, because the endpoints cannot get infinitely close together:
    $$I = \int_{t_1}^{t_2} L\, dt = \int_{t_1}^{t_2} (T - V)\, dt = \int_{t_1}^{t_2} T\, dt \neq 0$$
  • The action integral cannot become infinite, i.e., the trajectory cannot become infinitely long:
    $$I = \int_{t_1}^{t_2} L\, dt = \int_{t_1}^{t_2} (T - V)\, dt = \int_{t_1}^{t_2} T\, dt \neq \infty$$
  • In each configuration of the system, the actual trajectory of the element is determined as the one with the least action, from Hamilton’s Principle:
    $$\delta I = \delta \int_{t_1}^{t_2} L\, dt = \delta \int_{t_1}^{t_2} (T - V)\, dt = \delta \int_{t_1}^{t_2} T\, dt = 0$$
  • The medium inside the system is isotropic (it has all its properties identical in all directions). The consequence of this assumption is that the constant velocity of the element allows us to substitute the interval of time with the length of the trajectory of the element.
  • The second variation of the action is positive, because $V = 0$ and $T > 0$; therefore the action is a true minimum (a toy numerical check follows this list).
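Under the assumptions above, the action for one crossing reduces to $I = T\,\Delta t$, so at fixed speed it is proportional to the path length and is smallest for the straight line between the fixed endpoints. A toy numerical check of this statement (the path lengths, mass, and speed below are illustrative values, not simulation parameters):

```python
import math

def constant_speed_action(path_length, speed=1.0, mass=2.0):
    """Action of a force-free crossing at constant speed:
    I = T * dt = 0.5*m*v^2 * (path_length / speed)."""
    kinetic = 0.5 * mass * speed**2
    return kinetic * (path_length / speed)

straight = 10.0                  # geodesic length between the fixed endpoints
detour = 10.0 * math.pi / 2.0    # a semicircular path between the same endpoints
print(constant_speed_action(straight))  # 10.0
print(constant_speed_action(detour))    # ~15.7; larger, so the straight line is the true minimum
```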

2.3. Building the Model

In our model, the organization is proportional to the inverse of the average of the sum of the actions of all elements (8). This is the average action efficiency, and we can label it with the symbol $\alpha$. Here average action efficiency measures the amount of organization of the system. In a complex network, many different arrangements can correspond to the same action efficiency and therefore have the same level of organization. Thus, the average action efficiency represents the macrostate of the system. Many possible microstates of combinations of nodes, paths, and agents on the network can correspond to the same macrostate as measured by $\alpha$. This is analogous to temperature in statistical mechanics representing a macrostate corresponding to many microstates of the molecular arrangements in the gas.
We multiply the numerator by Planck’s constant $h$. The average action efficiency then takes the meaning of being inversely proportional to the average number of action quanta for one crossing between two nodes in the system, in a given interval of time. This also provides an absolute reference point, $h$, for the measure of organization. The units in this case are the total number of events in the system per unit time, divided by the number of quanta of action.
In general,
$$\alpha = \frac{h\, n m}{\sum_{i,j=1}^{n,m} I_{i,j}}$$
where $n$ is the number of agents and $m$ is the average number of nodes each agent crosses per unit time. If we multiply the number of agents by the number of crossings for each agent, we can define the flow of events in the system per unit of time, $\phi = n m$.
Then:
$$\alpha = \frac{h\, \phi}{\sum_{i,j=1}^{n,m} I_{i,j}}$$
In the denominator, the sum of all actions of all agents and all crossings is defined as the total action per unit time in the system. When it is divided by Planck’s constant, it takes the meaning of the number of quanta of action, $Q$.
$$Q = \frac{\sum_{i,j=1}^{n,m} I_{i,j}}{h}$$
For simplicity and clarity of the presentation, when appropriate, we will set h=1.
Then the equation for average action efficiency can be rewritten simply as the total number of events in the system per unit time, divided by the total number of quanta of action:
$$\alpha = \frac{\phi}{Q}$$
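A minimal computational sketch of these definitions (with $h = 1$ and invented per-crossing actions, not simulation output) evaluates the average action efficiency directly as the ratio of the event flow to the total number of action quanta:

```python
# Toy per-crossing actions I_ij for n agents, each making m crossings per unit
# time (invented numbers); Planck's constant is set to 1 as in the text.
h = 1.0
actions = [
    [3.2, 2.9, 3.1],   # agent 1: actions of its m = 3 crossings
    [3.0, 3.3, 2.8],   # agent 2
]
n = len(actions)                              # number of agents
m = len(actions[0])                           # crossings per agent per unit time

phi = n * m                                   # flow of events per unit time
Q = sum(sum(agent) for agent in actions) / h  # total number of quanta of action
alpha = phi / Q                               # average action efficiency
print(phi, Q, alpha)                          # 6 18.3 0.327...
```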
In our simulation, the average path length is equal to the average time because the speed of the agents in the simulation is set to one patch per second.
$$t = l$$
When the Lagrangian does not depend on time, because the speed is constant and there is no friction, as in this simulation, the kinetic energy is a constant (assumption #2), so the action integral takes the form:
$$I = \int_{t_1}^{t_2} L\, dt = \int_{t_1}^{t_2} T\, dt = T (t_2 - t_1) = T\, \Delta t = L\, \Delta t$$
Where $\Delta t$ is the interval of time that the motion of the agent takes.
This is for an individual trajectory. Summing over all trajectories, we get the total number of events, the flow, times the average time of one crossing for all agents. The sum of all times for all events is the number of events times the average time. Then, for identical agents, the denominator of the equation for average action efficiency (Eq. 9) becomes:
$$\sum_{i,j=1}^{n,m} I_{i,j} = n m\, L\, t = \phi\, L\, t$$
Therefore:
$$\alpha = \frac{h\, \phi}{\phi\, L\, t}$$
and:
$$\alpha = \frac{h}{L\, t}$$
We are free to set the mass to two, and the velocity is one patch per second; therefore the kinetic energy is equal to one.
Since Planck’s constant is a fundamental unit of action, even though action can vary continuously, this equation represents how far the organization of the system is from this highly action-efficient state, in which there would be only one Planck unit of action per event. The action itself can be even smaller than $h$ [44]. This provides a path to further continuous improvement in the levels of organization of systems below one quantum of action.
An example for one agent:
To illustrate the simplest possible case, for clarity, we apply this model to the example of a closed system in two dimensions with only one agent. We define the boundaries of the fixed system to form a square.
The endpoints here represent two nodes in a complex network. Thus the model is limited only to the path between the two nodes. The expansion of this model will be to include many nodes in the network and to average over all of them. Another extension is to include many elements, different kinds of elements, obstacles, friction, etc.
Figure 1 shows the boundary conditions for the system used in this example. In this figure, we present the boundaries of the system and define the initial and final points of the motion of an agent as two of the nodes in a complex network. It shows the comparison between two different states of organization of the system. It is a schematic representation of the two states of the system and of the path of the agent in each case. Here $l_1$ and $l_2$ are the lengths of the trajectory of the agent in each case. (a) A trajectory of an agent in a certain state of the system, given by the configuration of the internal constraints, $l_1$. (b) A different configuration allowing the trajectory of the element to decrease by 50%, $l_2$, the shortest possible path.
For this case, we set $n = 1$, $m = 1$, which is one crossing of one agent between two nodes in the network. The approximation of an isotropic medium (Assumption #7) allows us to express the time using the speed of the element when it is constant (Assumption #3). In this case, we can solve $v = \frac{l}{\Delta t}$, the definition of average velocity for the interval of time, as $\Delta t = \frac{l}{v}$, where $l$ is the length of the trajectory of the element in each case between the endpoints.
The speed of the element v is fixed to be another constant, so the action integral takes the form:
$$I = L\, \Delta t = \frac{L\, l}{v}$$
When we substitute this equation in the equation for action efficiency for when n=1 and m=1, we obtain:
$$\alpha = \frac{h}{I} = \frac{h\, v}{L\, l}$$
For the simulation in this example, $l$ is the distance that the ants travel between food and nest. Because $h$, $v$, and $L$ are all constants, we can simplify this by setting
$$C = \frac{h\, v}{L}$$
And rewrite:
$$\alpha = \frac{h\, v}{L\, l} = \frac{C}{l}$$
We can set this constant to $C = 1$ when necessary.

2.4. Analysis of System States

Now we turn to the two states of the system with different actions of the elements, as shown in Figure 1. The organization of those two states is respectively:
$\alpha_1 = \frac{C}{l_1}$ in state 1, and $\alpha_2 = \frac{C}{l_2}$ in state 2 of the system.
In Figure 1, the length of the trajectory in the second case (b) is less, $l_2 < l_1$, which indicates that state 2 has better organization. The difference between the organizations in the two states of the same system is generally expressed as:
$$\alpha_2 - \alpha_1 = \frac{C}{l_2} - \frac{C}{l_1} = C \left( \frac{1}{l_2} - \frac{1}{l_1} \right) = C\, \frac{l_1 - l_2}{l_1 l_2}$$
This can be rewritten as:
$$\Delta \alpha = \frac{C\, \Delta l}{\prod_{i=1}^{2} l_i}$$
Where $\Delta \alpha = \alpha_2 - \alpha_1$, $\Delta l = l_1 - l_2$, and $\prod_{i=1}^{2} l_i = l_1 l_2$.
This is for one agent in the system. To describe a multi-agent system, we use the average path length.

2.5. Average Action Efficiency (AAE)

In the previous example, we can say that the shorter trajectory represents a more action-efficient state, in terms of how much total action is necessary for the event in the system, which here is for the agent to cross between the nodes. If we expand to many agents between the same two nodes, all with slightly different trajectories, the average of the action necessary for each agent to cross between the nodes is used to calculate the average action efficiency. Average action efficiency is how efficiently a system utilizes energy and time to perform the events in the system. More organized systems are more action-efficient because they can perform the events in the system with fewer resources, in this example, energy and time.
We can start from the presumption that the average action efficiency in the most organized state is always greater than or equal to its value in any other configuration, arrangement, or structure of the system. By varying the configurations of the structure until the average action efficiency is maximized, we can identify the most organized state of the system. This state corresponds to the minimum average action per event in the system, adhering to the principle of least action. We refer to this as the ground or most stable state of the system, as it requires the least amount of action per event. All other states are less stable because they require more energy and time to perform the same functions.
If we define average action efficiency as the ratio of useful output, here it is the crossing between the nodes, and, in other systems, it can be any other measure, to the total input or the energy and time expended, a system that achieves higher action efficiency is more organized. This is because it indicates a more coordinated, effective interaction among the system’s components, minimizing wasted energy or resources for its functions.
During the process of self-organization, a system transitions from a less organized to a more organized state. If we monitor the action efficiency over time, an increase in efficiency could indicate that the system is becoming more organized, as its components interact in a more coordinated way and with fewer wasted resources. This way we can measure the level of organization and the rate of increase of action efficiency which is the level and the rate of self-organization, evolution, and development in a complex system.
To use action efficiency as a quantitative measure, we need to define and calculate it precisely for the system in question. For example, in a biological system, efficiency might be measured in terms of energy conversion efficiency in cells. In an economic system, it can be the ratio of production of an item to the total time, energy, and other resources expended. In a social system, it could be the ratio of successful outcomes to the total efforts or resources expended.
The predictive power of the Principle of Least Action for Self-Organization
For the simplest example here of only two nodes, calculating theoretically the least-action state as the straight line between the nodes, we arrive at the same state as the final organized state in the simulation in this paper. This is the same result as obtained from minimizing action and from any experimental result: the geodesic of the natural motion of objects. When there are obstacles to the motion of agents, the geodesic is a curve described by the metric tensor. To achieve this prediction for multi-agent systems, we minimize the average action between the endpoints. Therefore, the most organized state in the current simulation is predicted theoretically from the principle of least action. The Principle of Least Action thus provides predictive power for calculating the most organized state of a system and verifying it with simulations or experiments. In engineered or social systems, it can be used to predict the most organized state and then construct it.

2.6. Multi-Agent

Now we turn to the two states of the system with different average actions of the elements in Figure 1. The organization of those two states is respectively:
$\alpha_1 = \frac{C}{l_1}$ in state 1, and $\alpha_2 = \frac{C}{l_2}$ in state 2 of the system, where $l_1$ and $l_2$ are now the average trajectory lengths.
The average length of the trajectories in the second case is less, $l_2 < l_1$, which indicates that state 2 has better organization. The difference between the organizations in the two states of the same system is generally expressed as:
$$\alpha_2 - \alpha_1 = \frac{C}{l_2} - \frac{C}{l_1} = C \left( \frac{1}{l_2} - \frac{1}{l_1} \right) = C\, \frac{l_1 - l_2}{l_1 l_2}$$
This can be rewritten as:
$$\Delta \alpha = \frac{C\, \Delta l}{\prod_{i=1}^{2} l_i}$$
Where $\Delta \alpha = \alpha_2 - \alpha_1$, $\Delta l = l_1 - l_2$, and $\prod_{i=1}^{2} l_i = l_1 l_2$.
This is when we use the average lengths of the trajectories and when the velocity is constant and the time and length are the same. In general, when the velocity varies we need to use time.

2.7. Using Time

In this case, the two states of the system have different average actions of the elements. The organization of those two states is respectively:
$\alpha_1 = \frac{C}{t_1}$ in state 1, and $\alpha_2 = \frac{C}{t_2}$ in state 2 of the system.
In Figure 1, the length of the trajectory in the second case (b) is less, so the average time for the trajectories is $t_2 < t_1$, which indicates that state 2 has better organization. The difference between the organizations in the two states of the same system is generally expressed as:
$$\alpha_2 - \alpha_1 = \frac{C}{t_2} - \frac{C}{t_1} = C \left( \frac{1}{t_2} - \frac{1}{t_1} \right) = C\, \frac{t_1 - t_2}{t_1 t_2}$$
This can be rewritten as:
$$\Delta \alpha = \frac{C\, \Delta t}{\prod_{i=1}^{2} t_i}$$
Where $\Delta \alpha = \alpha_2 - \alpha_1$, $\Delta t = t_1 - t_2$, and $\prod_{i=1}^{2} t_i = t_1 t_2$.
Which, recovering C, is:
$$\Delta \alpha = \frac{h\, v}{L}\, \frac{\Delta t}{\prod_{i=1}^{2} t_i}$$

2.8. An Example

For the simplest example of one agent and one crossing between two nodes, if $l_1 = 2 l_2$, i.e., the first trajectory is twice as long as the second, this expression produces the result:
$$\alpha_1 = \frac{C}{2 l_2} = \frac{\alpha_2}{2}, \quad \text{or} \quad \alpha_2 = 2 \alpha_1,$$
indicating that state 2 is twice as well organized as state 1. Alternatively, substituting in Eq. 29, we have:
$$\alpha_2 - \alpha_1 = C\, \frac{2 - 1}{2} = \frac{C}{2} \quad (\text{taking } l_2 = 1),$$
or there is a 50% difference between the two organizations, which is the same as saying that the second state is quantitatively twice as well organized as the first one. This example illustrates the purpose of the model for direct comparison between the amounts of organization in two different states of a system. When the changes in the average action efficiency are followed in time, we can measure the rates of self-organization, which we will explore in future work.
In our simulations, the higher the density and the lower the entropy of the agents, the shorter the paths and the times for crossing them, and the higher the action efficiency.

2.9. Unit-Total (Local-Global) Dualism

In addition to the classical stationary action principle for fixed, non-growing, non-self-organizing systems:
$$\delta I = 0$$
we find a dynamical action principle:
$$\delta I \neq 0$$
This principle exhibits a unit-total (local-global, min-max) dualism:
1. Average unit action for one edge decreases:
$$\delta\, \frac{\sum_{i,j=1}^{n,m} I_{i,j}}{n m} < 0$$
This is a principle for decreasing unit action for a complex system during self-organization, as it becomes more action-efficient until a limit is reached.
2. Total action of the system increases:
$$\delta \sum_{i,j=1}^{n,m} I_{i,j} > 0$$
This is a principle of increasing total action for a complex system during self-organization, as the system grows until a limit is reached (a toy numerical illustration of both principles follows this list).
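To make the two principles concrete, the toy numbers below (invented, not simulation data) show the intended behavior of the two functionals: between an earlier and a later snapshot of a growing system, the average unit action per edge crossing decreases while the total action increases.

```python
def unit_action(sample):
    """Average unit action per edge crossing."""
    return sum(sample) / len(sample)

def total_action(sample):
    """Total action of the whole system."""
    return sum(sample)

# Toy snapshots of per-crossing actions: the later, larger system has more
# crossings, each cheaper in action, than the earlier one.
early = [4.0, 4.2, 3.9]                  # few agents, long inefficient paths
late = [2.1, 2.0, 2.2, 1.9, 2.0, 2.1]    # more agents, shorter paths

print(unit_action(early), unit_action(late))    # ~4.03 -> 2.05 (unit action decreases)
print(total_action(early), total_action(late))  # 12.1 -> 12.3 (total action increases)
```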
In our data, we see that the average unit action decreases (the average action efficiency increases) while the total action increases (Figure 9). Both are related strictly by a power-law relationship, predicted by the model of positive feedback between the characteristics of the system.
Analogously, unit internal Boltzmann entropy for one path is decreasing while total internal Boltzmann entropy is increasing for a complex system during self-organization and growth Figure 10. These two characteristics are also related strictly to a power law relationship, predicted by the model of positive feedback between the characteristics of the system.
In terms of Gauss's principle of least constraint [45], this translates as follows: the unit constraint (obstacles) per edge decreases, while the total constraint in the network of the whole complex system increases during self-organization as the system grows and expands.
In terms of Hertz's principle of least curvature [27], this translates as follows: the unit curvature per edge decreases, while the total curvature in the network of the whole complex system increases during self-organization as the system grows, expands, and adds more nodes.
Some examples of unit-total (local-global) dualism in other systems are the following. In economies of scale, as the size of the system grows, the total production cost increases while the unit cost per item decreases; in the same example, the total profit increases, but the unit profit per item decreases. Similarly, as the cost per computation decreases, the cost of all computations grows, and as the cost per bit of data transmission decreases, the cost of all transmissions increases as the system grows. In biology, as the unit time for one reaction in an autocatalytic metabolic cycle decreases in evolution, due to increased enzymatic activity, the total number of reactions in the cycle increases. In ecology, as one species becomes more efficient at finding food, its time and energy expenditure for foraging a unit of food decreases, while the population of that species and the total amount of food it collects increase. Many other unit-total (local-global) dualisms in systems of very different natures could be named to test the universality of this principle.

3. Simulations Model

In our simulation, the ants interact through pheromones, and we can formulate an effective Lagrangian to describe their dynamics. The Lagrangian L depends on the kinetic energy T and the potential energy V; we build it up by adding the necessary terms one at a time. Given that the ants are influenced by pheromone concentrations, the potential energy component should reflect this interaction.
Components of the Lagrangian: 1. Kinetic Energy (T): In our simulation, the ants have a constant mass m, and their kinetic energy is given by:
$$T = \frac{1}{2} m v^2$$
where v is the velocity of the ant.
2. Effective Potential Energy (V): The potential energy due to the pheromone concentration $C(\mathbf{r}, t)$ at position $\mathbf{r}$ and time t can be modeled as:
$$V_{\text{eff}} = -k\,C(\mathbf{r}, t)$$
where k is a constant that scales the influence of the pheromone concentration.
Effective Lagrangian (L): The Lagrangian L is given by the difference between the kinetic and potential energies:
$$L = T - V$$
For an ant moving in a pheromone field, the effective Lagrangian becomes:
$$L = \frac{1}{2} m v^2 + k\,C(\mathbf{r}, t)$$
Formulating the Equations of Motion:
Using the Lagrangian, we can derive the equations of motion via the Euler-Lagrange equation:
$$\frac{d}{dt}\frac{\partial L}{\partial \dot{x}_i} - \frac{\partial L}{\partial x_i} = 0$$
where x i represents the spatial coordinates (e.g., x , y ) and x ˙ i represents the corresponding velocities.
Example Calculation for a Single Coordinate:
1. Kinetic Energy Term:
$$\frac{\partial L}{\partial \dot{x}} = m\dot{x}$$
$$\frac{d}{dt}\frac{\partial L}{\partial \dot{x}} = m\ddot{x}$$
2. Potential Energy Term:
$$\frac{\partial L}{\partial x} = k\,\frac{\partial C}{\partial x}$$
The equation of motion for the x-coordinate is then:
$$m\ddot{x} = k\,\frac{\partial C}{\partial x}$$
Full Equations of Motion:
For both x and y coordinates, the equations of motion are:
$$m\ddot{x} = k\,\frac{\partial C}{\partial x}$$
$$m\ddot{y} = k\,\frac{\partial C}{\partial y}$$
The ants move by following the gradient of the pheromone concentration.
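As an illustration of these equations of motion, the following sketch integrates $m\ddot{\mathbf{r}} = k\nabla C$ with a simple explicit Euler scheme; the Gaussian concentration field, the unit constants $m = k = 1$, and the step sizes are illustrative assumptions, not the NetLogo rules themselves.

```python
import numpy as np

# Sketch: an agent obeying m r'' = k grad C in a smooth, hypothetical pheromone field.
def C(x, y):
    """Illustrative concentration field peaked at (5, 5)."""
    return np.exp(-((x - 5.0) ** 2 + (y - 5.0) ** 2) / 10.0)

def grad_C(x, y, h=1e-5):
    """Central finite differences approximate the gradient of C."""
    dCdx = (C(x + h, y) - C(x - h, y)) / (2.0 * h)
    dCdy = (C(x, y + h) - C(x, y - h)) / (2.0 * h)
    return np.array([dCdx, dCdy])

m, k, dt = 1.0, 1.0, 0.01
r = np.array([0.0, 0.0])       # position
v = np.array([0.0, 0.0])       # velocity

for _ in range(2000):
    a = (k / m) * grad_C(*r)   # acceleration from the pheromone gradient
    v = v + a * dt
    r = r + v * dt
# The agent accelerates toward the concentration maximum at (5, 5).
```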
Testing for Stationary Points of the Action:
  • Minimum: If the second variation of the action is positive, the path corresponds to a minimum of the action.
  • Saddle Point: If the second variation of the action can be both positive and negative depending on the direction of the variation, the path corresponds to a saddle point.
  • Maximum: If the second variation of the action is negative, the path corresponds to a maximum of the action.
Determining the Nature of the Stationary Point:
To determine whether the action is a minimum, maximum, or saddle point, we examine the second variation of the action, δ 2 I . This involves considering the second derivative (or functional derivative in the case of continuous systems) of the action with respect to variations in the path.
Given the Lagrangian for ants interacting through pheromones, the action is:
$$I = \int_{t_1}^{t_2} \left[ \frac{1}{2} m \dot{\mathbf{r}}^2 + k\,C(\mathbf{r}, t) \right] dt$$
First Variation:
The first variation δ I leads to the Euler-Lagrange equations, which give the equations of motion:
$$m\ddot{\mathbf{r}} = k\,\nabla C(\mathbf{r}, t)$$
Second Variation:
The second variation δ 2 I determines the nature of the stationary point. In general, for a Lagrangian L = T V :
$$\delta^2 I = \int_{t_1}^{t_2} \left( \delta^2 T - \delta^2 V \right) dt$$
Analyzing the Effective Lagrangian:
1. Kinetic Energy Term $T = \frac{1}{2} m \dot{\mathbf{r}}^2$: The second variation of the kinetic energy is typically positive, as it involves terms like $m(\delta\dot{\mathbf{r}})^2$.
2. Potential Energy Term $V_{\text{eff}} = -k\,C(\mathbf{r}, t)$: The second variation of the effective potential energy depends on the nature of $C(\mathbf{r}, t)$. If C is a smooth, well-behaved function, the second variation can be analyzed by examining $\nabla^2 C$.
Nature of the Stationary Point:
  • Kinetic Energy Contribution: Positive definite, contributing to a positive second variation.
  • Effective Potential Energy Contribution: Depends on the curvature of C ( r , t ) . If C ( r , t ) has regions where its second derivative is positive, the effective potential energy contributes positively, and vice versa.
Therefore, given the typical form of the Lagrangian and assuming C ( r , t ) is well-behaved (smooth and not overly irregular), the action I is most likely a saddle point. This is because:
  • The kinetic energy term tends to make the action a minimum.
  • The potential energy term, depending on the pheromone concentration field, can contribute both positively and negatively.
Thus, variations in the path can lead to directions where the action decreases (due to the kinetic energy term) and directions where it increases (due to the potential energy term), characteristic of a saddle point.
Incorporating factors such as the wiggle angle of ants and the evaporation of pheromones introduces additional dynamics to the system, which can affect whether the action remains stationary, a saddle point, a minimum, or a maximum. Here’s how these changes influence the nature of the action:

3.0.1. Effects of Wiggle Angle and Pheromone Evaporation on the Action

1. Wiggle Angle: Impact: The wiggle angle introduces stochastic variability into the ants’ paths. This randomness can lead to fluctuations in the paths that the ants take, affecting the stability and stationarity of the action. Mathematical Consideration: an additional term representing the wiggle angle’s variance adds a stochastic component, $P(\theta, t)$, to the Lagrangian:
$$L = \frac{1}{2} m v^2 + k\,C(\mathbf{r}, t) + P(\theta, t)$$
where $P(\theta, t) = \sigma^2(\theta)\,\eta(t)$,
with $\sigma^2(\theta)$ the variance of the wiggle angle $\theta$ and $\eta(t)$ a random function of time that introduces variability into the system.
This term will then influence the dynamics by adding random fluctuations at each time step, making the effect of noise vary over time rather than being a constant shift.
Consequence: The action is less likely to be strictly stationary due to the inherent variability introduced by the wiggle angle. This can lead to more dynamic behavior in the system.
2. Pheromone Evaporation: Impact: Pheromone evaporation reduces the concentration of pheromones over time, making previously attractive paths less so as time progresses. Mathematical Consideration: Including the evaporation term in the Lagrangian:
$$L = \frac{1}{2} m v^2 + k\,C(\mathbf{r}, t)\,e^{-\lambda t}$$
Consequence: The time-dependent decay of pheromones means that the action integral changes dynamically. Paths that were optimal at one point may no longer be optimal later, leading to continuous adaptation.

3.1. Considering the Nature of the Action

Given these modifications, the nature of the action can be characterized as follows:
1. Stationary Action: - Before Changes: In a simpler model without wiggle angles and evaporation, the action might be stationary at certain paths. - After Changes: With wiggle angle variability and pheromone evaporation, the action is less likely to be stationary. Instead, the system continuously adapts, and the action varies over time.
2. Saddle Point, Minimum, or Maximum: - Saddle Point: The action is likely to be at a saddle point due to the dynamic balancing of factors. The system may have directions in which the action decreases (due to pheromone decay) and directions in which it increases (due to path variability). - Minimum: If the system stabilizes around a certain path that balances the stochastic wiggle and the decaying pheromones effectively, the action might approach a local minimum. However, this is less likely in a highly dynamic system. - Maximum: It is unusual for the action in such optimization problems to represent a maximum because that would imply an unstable and inefficient path being preferred, which is contrary to observed behavior.
Practical Implications
1. Continuous Adaptation: - The system will require continuous adaptation to maintain optimal paths. Ants need to frequently update their path choices based on the real-time state of the pheromone landscape.
2. Complex Optimization: - Optimization algorithms must account for the random variations in movement, the rules for deposition and diffusion and the temporal decay of pheromones. This means more sophisticated models and algorithms are necessary to predict and find optimal paths.
Therefore, incorporating the wiggle angle and pheromone evaporation into the model makes the action more dynamic and less likely to be strictly stationary. Instead, the action is more likely to exhibit behavior characteristic of a saddle point, with continuous adaptation required to navigate the dynamic environment. This complexity necessitates advanced modeling and optimization techniques to accurately capture and predict the behavior of the system.

3.2. Dynamic Action

For dynamical non-stationary action principles, we can extend the classical action principle to include time-dependent elements. The Lagrangian is changing during the motion of an agent between the nodes as the terms in it are changing.
1. Time-Dependent Lagrangian that explicitly depends on time or other dynamic variables:
$$L = L(q, \dot{q}, t, \lambda(t))$$
where $q$ represents the generalized coordinates, $\dot{q}$ their time derivatives, $t$ time, and $\lambda(t)$ a set of dynamically evolving parameters. 2. Dynamic Optimization: the system continuously adapts its trajectory $q(t)$ to minimize or optimize the action, which evolves over time:
$$I = \int_{t_1}^{t_2} L(q, \dot{q}, t, \lambda(t))\, dt$$
The parameters λ ( t ) are updated based on feedback from the system’s performance. The goal is to find the path q ( t ) that makes the action stationary. However, since λ ( t ) is time-dependent, the optimization becomes dynamic.
Euler-Lagrange Equation
To find the stationary path, we derive the Euler-Lagrange equation from the time-dependent Lagrangian. For a Lagrangian L ( q , q ˙ , t , λ ( t ) ) , the Euler-Lagrange equation is:
$$\frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0$$
However, due to the dynamic nature of λ ( t ) , additional terms may need to be considered.
Updating Parameters λ ( t )
The parameters λ ( t ) evolve based on feedback from the system’s performance. This feedback mechanism can be modeled by incorporating a differential equation for λ ( t ) :
$$\frac{d\lambda(t)}{dt} = f\big(\lambda(t), q(t), \dot{q}(t), t\big)$$
Here, f represents a function that updates λ ( t ) based on the current state q ( t ) , the velocity q ˙ ( t ) , and possibly the time t. The specific form of f depends on the nature of the feedback and the system being modeled.
Practical Implementation
In our example of ants with a wiggle angle and pheromone evaporation, the effective Lagrangian takes the form:
$$L = \frac{1}{2} m v^2 + k\,C(\mathbf{r}, t)\,e^{-\lambda(t)\,t} + P(\theta, t)$$
with all of the terms defined earlier.
The action I would be:
$$I = \int_{t_1}^{t_2} \left[ \frac{1}{2} m v^2 + k\,C(\mathbf{r}, t)\,e^{-\lambda(t)\,t} + P(\theta, t) \right] dt$$
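The action above can be accumulated numerically along a simulated trajectory. The following sketch uses an illustrative concentration field, noise amplitude, and constants (none of them taken from the NetLogo model) to show how the stochastic wiggle and the evaporation factor enter the dynamic action.

```python
import numpy as np

rng = np.random.default_rng(0)

def C(x, y):
    """Illustrative concentration field peaked at (5, 5)."""
    return np.exp(-((x - 5.0) ** 2 + (y - 5.0) ** 2) / 10.0)

m, k, lam, dt = 1.0, 1.0, 0.06, 0.01
sigma2 = 50.0 ** 2 / 12.0        # variance of the uniform wiggle angle, in degrees^2
r = np.array([0.0, 0.0])
v = np.array([1.0, 0.0])         # constant speed; only the heading changes by wiggling

action = 0.0
for step in range(1000):
    t = step * dt
    P = 1e-3 * np.sqrt(sigma2) * rng.standard_normal()   # stochastic term P(theta, t)
    L = 0.5 * m * (v @ v) + k * C(*r) * np.exp(-lam * t) + P
    action += L * dt                                     # dynamic action accumulates

    theta = np.deg2rad(rng.integers(-25, 26))            # random wiggle of the heading
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    v = rot @ v
    r = r + v * dt
```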
Dynamical System Adaptation
The system adapts by updating λ ( t ) based on the current state of pheromones and the ants’ paths.
Solving the Equations
1. Numerical Methods: Usually, these systems are too complex for analytical solutions, so numerical methods (e.g., finite difference methods, Runge-Kutta methods) are used to solve the differential equations governing q ( t ) and λ ( t ) . 2. Optimization Algorithms: Algorithms like gradient descent, genetic algorithms, or simulated annealing can be used to find optimal paths and parameter updates.
By extending the classical action principle to include time-dependent and evolving elements, we can model and solve more complex, dynamic systems. This framework is particularly useful in real-world scenarios where conditions change over time, and systems must adapt continuously to maintain optimal performance. This approach is applicable in physical, chemical, and biological systems, and in fields such as robotics, economics, and ecological modeling, providing a powerful tool for understanding and optimizing dynamic, non-stationary systems.
The Lagrangian changes at each time step of the simulation; therefore we cannot speak of a static action, but of a dynamic action. This is a form of dynamic optimization and reinforcement learning.
The average action is quasi-stationary, as it fluctuates around a fixed value; internally, however, each trajectory of which it is composed fluctuates stochastically, given the dynamic Lagrangian of each ant. The average still fluctuates around the shortest theoretical path, so far from the stationary path the average action is always being minimized, even though close to the minimum the system can temporarily get stuck in a neighboring stationary-action path. In all of these situations, as described above, the average action efficiency is our measure of organization.

3.3. Specific Details in our Simulation

For our simulation, the concentration at each patch, $C(\mathbf{r}, t)$, is updated at each tick as the sum of three contributions:
1. $C_{i,j}(t)$, the preexisting amount of pheromone at the patch at time t.
2. Pheromone diffusion: the change of the pheromone at each patch at time t is described by the rules of the simulation. On each tick, 70% of the pheromone is split equally between all 8 neighboring patches, regardless of how much pheromone is in that patch, which means that 30% of the original amount is left in the central patch. On the next tick, 70% of that remaining 30% diffuses again. At the same time, by the same rule, pheromone is received from all 8 neighboring patches by the central one. Note: this rule for diffusion does not follow the physical diffusion equation, in which flow is always from high concentration to low.
$$C_{i,j}(t+1) = 0.3\,C_{i,j}(t) + \frac{0.7}{8}\sum_{k,l=-1}^{1} C_{i+k,j+l}(t)$$
where $|k| + |l| \neq 0$.
The first term in the equation shows how much of the pheromone concentration from the previous time step remains in the next, and the second term shows the incoming pheromone from all neighboring patches, as 70% × 1/8 of each neighbor's concentration is passed to the central patch.
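A minimal NumPy sketch of this update rule, assuming a non-wrapping grid where pheromone that would leave the boundary is simply lost (the NetLogo program handles the edges according to its own topology):

```python
import numpy as np

def diffuse(C, keep=0.3):
    """One tick of the rule: each patch keeps 30% of its pheromone and sends
    the remaining 70% in equal 1/8 shares to its 8 neighbors."""
    out = keep * C
    share = (1.0 - keep) / 8.0
    padded = np.pad(C, 1)                  # zero padding at the boundary
    n, m = C.shape
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue                   # the central patch does not feed itself
            out += share * padded[1 + di:1 + di + n, 1 + dj:1 + dj + m]
    return out

C = np.zeros((41, 41))
C[20, 20] = 30.0                           # a single pheromone deposit
C = diffuse(C)                             # after one tick: 9.0 at the center, 2.625 at each neighbor
```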
3. The amount of pheromone an ant deposits after n steps can be expressed as:
$$P(n) = \frac{1}{10}\,P(0)\,(0.9)^n$$
where $P(0) = 30$.
The stochastic term $P(\theta, t)$ depends on $\sigma^2(\theta)$, the variance of a uniform distribution, which for the parameters of this simulation is [46]:
$$\sigma^2(\theta) = \frac{50^2}{12}$$
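Both quantities can be reproduced directly from the parameter values quoted above (a small sketch):

```python
# Deposition: an ant drops 1/10 of its current pheromone load each step,
# so the load decays by a factor of 0.9 per step and P(n) = 0.1 * P(0) * 0.9**n.
P0 = 30.0
deposits = [0.1 * P0 * 0.9 ** n for n in range(5)]   # [3.0, 2.7, 2.43, 2.187, 1.9683]

# Wiggle angle: uniform on [-25, 25] degrees, a width of 50,
# so sigma^2 = 50**2 / 12 ≈ 208.3 square degrees.
sigma2 = 50.0 ** 2 / 12.0
```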

3.4. Gradient Based Approach

We can use either the value of the concentration or the concentration gradient in the potential energy term. Using the gradient is a more exact approach, but it is also more computationally intensive.
In a further extension of the model, we can incorporate a gradient-based potential energy term. In this case, the concentration-dependent term is $k\,\nabla C(\mathbf{r}, t)$ instead of $k\,C(\mathbf{r}, t)$, and the Lagrangian becomes:
$$L = \frac{1}{2} m v^2 + k\,\nabla C(\mathbf{r}, t)\,e^{-\lambda(t)\,t} + P(\theta, t)$$

3.5. Summary

1. We obtained the Lagrangian with the exact parameters for the specific simulation that produced the data. To our current knowledge, no other studies have published a Lagrangian approach to agent-based simulations of ants.
2. To our current knowledge, the equations of motion derived from this Lagrangian cannot be solved analytically, due to the stochastic term. Moreover, the equation for the pheromone concentration applies to a given patch, while the equation for the amount deposited by an ant depends on how many steps, n, it has taken since it last visited the food or the nest. Each ant follows a different path, so n differs between ants, and each ant deposits a different amount of pheromone. In general, this cannot be handled analytically, as it depends on the stochastic path of each ant; therefore we solve it numerically through the simulation, patch by patch (for each i, j).
3. The average path length obtained from the simulation serves as a numerical solution of the model, because it results from dynamics that incorporate everything described by the Lagrangian. This path length reflects the optimization and behaviors modeled by the Lagrangian terms, including the kinetic energy, the potential energy shaped by the pheromone concentrations, and the stochastic movement. The simulation uses the reciprocal of the average path length as the average action efficiency, which therefore takes into account all of the effects in the Lagrangian.
4. The average action can be stationary close to the theoretically shortest path, i.e., near the minimum of the average action, but further away from it the average action is always being minimized, both experimentally and from theoretical considerations. In the simulation, longer paths are always measured to decay into shorter paths. Deviations can occur only very close to the shortest path, due to memory effects and stochastic reasons, and they decay with longer annealing and with changes of parameters such as increasing the wiggle angle for more exploration, changing the pheromone deposition, diffusion, and evaporation rates, changing the speed and mass of the ants, and other factors. When the average action efficiency is growing, the average unit action is decreasing. When the action is stationary, as it is at the end of the simulations (as seen in the time graphs), the average action efficiency is also stationary: it does not grow any more within the time span and for the parameters of the current simulation, because the size of the world and the number of ants are fixed in each run. In systems that can continue to grow, the limits will be much further away; a similar process happens in real complex systems such as organisms, ecosystems, cities, and economies. Due to the stochastic variations, we consider only average quantities.

4. Mechanism

4.1. Exponential Growth and Size-Complexity Rule

Average action efficiency is the proposed measure of the level of organization and complexity. To test it, we turn to observational data. The size-complexity rule states that complexity increases as a power law of the size of a complex system [32]. This rule has been observed in systems of very different natures, with proposed explanations for its origin [34,47]. In the next subsection, on the model of the mechanism of self-organization, we derive those exponential and power-law dependencies. In this paper, we show how our data align with the size-complexity rule.

4.2. A Model for the Mechanism of Self-Organization

We apply the model first presented in our 2015 paper [10] and used in subsequent papers [11,12] to the ABM simulation here, specifying only some of the quantities in this model for brevity, clarity, and simplicity. Then we show the exponential and power-law solutions for this specific system. The quantities that appear in the results but are not included in the model participate in the positive feedback loops in the same way and have the same power-law solutions, as seen in the data. This positive-feedback-loop model is universal for an arbitrary number of characteristics of self-organizing systems and can be modified to include any of them.
Below is a visual representation of the positive feedback interactions between the characteristics of a complex system, which in our 2015 paper [10] were proposed as the mechanism of self-organization, progressive development, and evolution, applied to the current simulation. Here $i$ is the information in the system, calculated from the total amount of ant pheromone; $t$ is the average time for all of the ants in the simulation to cross between the two nodes; $N$ is the total number of ants; $Q$ is the total action of all ants in the system; $\Delta S$ is the internal entropy difference between the initial and final states of the system in the process of self-organization while finding the shortest path; $\alpha$ is the average action efficiency; $\phi$ is the number of events in the system per unit time, which in the simulation is the number of paths or crossings between the two nodes; $\rho$, the density of the ants, is the order parameter; and $\Delta\rho$ is the increase of the order parameter, i.e., the difference in the density of agents between the final and initial states of the simulation. The links connecting all these quantities represent positive feedback connections between them.
The positive feedback loops in Figure 2 are modeled with a set of ordinary differential equations. The solutions of this model are exponential for each characteristic and have a power-law dependence between each pair of characteristics. The detailed solutions of this model are discussed below.
We acknowledge the mathematical point that, in general, solutions to systems of linear differential equations are not always exponential. This depends on the eigenvalues of the governing matrix, which must be positive real numbers for exponential growth to occur. Additionally, the matrix must be diagonalizable to support such solutions.

4.2.1. Systems with Constant Coefficients

• For linear systems with constant coefficients, the solutions often involve exponential functions. This is because the system can be expressed in terms of matrix exponentials, leveraging the properties of constant coefficient matrices.
• Even in these cases, if the coefficient matrix is defective (non-diagonalizable), the solutions may include polynomial terms multiplied by exponentials.

4.2.2. Systems with Variable Coefficients

• When the coefficients are functions of the independent variable (e.g., time), the solutions may involve integrals, special functions (like Bessel or Airy functions), or other non-exponential forms.
• The lack of constant coefficients means that the superposition principle doesn’t yield purely exponential solutions, and the system may not have solutions expressible in closed-form exponential terms.

4.2.3. Higher-Order Systems and Resonance

• In some systems, especially those modeling physical phenomena like oscillations or circuits, the solutions might involve trigonometric functions, which are related to exponentials via Euler’s formula but are not themselves exponential functions in the real domain.
• Resonant systems can exhibit behavior where solutions grow without bound in a non-exponential manner.
While exponential functions are a key part of the toolkit for solving linear differential equations, especially with constant coefficients, they don’t encompass all possible solutions. The nature of the coefficients and the structure of the system play crucial roles in determining the form of the solution.
In our specific system, the dynamics predicts exponential growth. We do not consider friction, negative feedback, or any dissipative processes that would introduce complex or negative eigenvalues. Instead, the system is driven by positive feedback loops, which lead to positive real eigenvalues. These conditions ensure that the matrix is diagonalizable and that the system exhibits exponential growth, as expected under these assumptions.
Our model operates under the assumption of constant positive feedback, which justifies the exponential growth observed in our simulations. This is a valid simplification for our study, focusing on systems with reinforcing interactions rather than dissipative forces. In future work we will expand it to include dissipative forces.
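One way to see the predicted power-law relationships is the following sketch: if two characteristics of the system grow exponentially in time, as the positive feedback model predicts under these assumptions, then eliminating time between them gives a power law between the pair. The growth rates and prefactors below are illustrative assumptions, not fitted values.

```python
import numpy as np

# Two characteristics growing exponentially at their own (illustrative) rates:
# x(t) = x0 * exp(a t),  y(t) = y0 * exp(b t)  =>  y is proportional to x**(b/a).
a, b = 0.03, 0.07
t = np.linspace(0.0, 100.0, 1000)
x = 1.0 * np.exp(a * t)
y = 2.0 * np.exp(b * t)

slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
print(slope)   # ≈ b / a ≈ 2.33: a straight line on a log-log plot, i.e. a power law
```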


6. Simulation Methods

6.1. Agent-Based Simulations Approach

Our study examines the properties of self-organization by simulating an ant colony navigating between a food source and its nest. The ants start with a random distribution, and their trajectories then become more correlated as they form a path. The ants pick up pheromone from the food and the nest and lay it on each patch as they move. The food and the nest function as two nodes that attract the ants, which follow the steepest gradient of the opposite pheromone. The pheromone is equivalent to information, as forming a path requires enough of it to ensure that the ants are able to follow the quickest path to their destinations rather than moving randomly. The ants can represent any agent in complex systems, from atoms and molecules to organisms, people, cars, dollars, or bits of information. Utilizing NetLogo for agent-based modeling and Python for data visualization and analysis, we measure self-organization via the calculated entropy decrease in the system, the density order parameter, and the average path length, which are contingent on the ants’ distribution and the possible microstates in the simulated environment. We further explore the effects of having different numbers of ants in the system, simulating the growth of the system. In this study we look only at the final values of the characteristics at the end of the simulation, when the self-organization is complete, or at the difference between their values in the final and initial states. Then, we can demonstrate the relationship between each pair of characteristics as the population increases. Our model predicts that, according to one of the mechanisms of self-organization, evolution, and development of complex systems, all of the characteristics of a complex system reinforce each other, grow exponentially in time, and are proportional to each other by power-law relationships [10]. The principle of least action is proposed as a driver of this process. The tendency to minimize action for crossing between the nodes in a complex network is a reason for self-organization, and the average action efficiency is a measure of how organized the system is.
This simulation can utilize variables that affect the world, making it easier or harder to form the path. In the collected data, only the number of ants was changed. Increasing the number of ants makes it more probable to find the path, as there is not only a higher chance of the ants reaching the food and the nest and adding information to the world, but also a steeper gradient of pheromone. This both increases the rate of path formation and decreases the length of the path. The ants follow the direction of the steepest gradient around them, but their speed does not depend on how steep the gradient is.
The simulation methods, such as the diffusion scheme, are chosen according to criteria established in the literature for computational speed and realistic outcomes. The parameter values are chosen by adjusting the program so as to optimize path formation.

6.2. Program Summary

The simulation is run using an agent-based software called NetLogo. In the simulation, a population of ants forms a path between two endpoints, called the food and nest. The world is a 41x41 patch grid with 5x5 food and nest centered vertically on opposite sides and aligned with the edge. To help with path formation, there is a pheromone laid by the ants on a grid whenever the food or nest is reached. This pheromone exhibits the behavior of evaporating and diffusing across the world. The settings for ants and pheromones can be configured to make path formation easier or harder.
Each tick of the simulation functions as a unit of time, representing one second in our simulation, and proceeds according to the following rules. First, each ant checks whether there is any pheromone in its neighboring patches within a view cone with an angle of 135 degrees, oriented towards its direction of movement. From its position in the current patch, the ant faces the center of the neighboring patch with the largest value of pheromone within its viewing angle. It is important to note that the minimum amount of pheromone an ant can detect is 1/M, where M is the maximum amount of pheromone an ant can carry, which in this simulation is 30. If not enough pheromone is found in view, the ant checks all neighboring patches with the same minimum-pheromone limitation. If any pheromone is found, it faces towards the patch with the highest amount. The ant then wiggles by a random integer amount between -25 and 25 degrees, regardless of whether it faced any pheromone, and moves forward at a constant speed of 1 patch per tick. If the ant collides with the edge of the world, it turns 180 degrees and takes another step. In this simulation, the ants do not collide with obstacles or with each other. After it finishes moving, the ant checks whether there is any food or nest in its current patch. The program performs two different checks depending on whether the ant is carrying food. If the ant has food, the program checks for collision with the nest patches and removes the food from the ant if there is a collision. If the ant does not have food, the program checks for collision with the food patches and gives the ant food if there is a collision. The end effect is that when an ant reaches the food, it picks up food pheromone, and when it reaches the nest, it picks up nest pheromone.
In both cases, the ant’s pheromone is set to 30, and the path-length data is updated. After the checks for collision with the food or nest, it drops 1/10 of its current amount of pheromone at the given tick at the patch where it is located. When all the ants have been updated, the patch pheromone is updated. There is a diffusion rate of 0.7, which means that 70% of the pheromone at each patch is distributed equally to neighboring patches. There is also an evaporation rate of 0.06, which means that the pheromone at each patch is decreased by 6 percent. There are more behaviors than these available in the simulation.

6.3. Analysis Summary

The program stores information about the status of the simulation on each tick. Upon completion of one simulation, the data are exported directly from the program for analysis in Python. Some of the data, such as the average action efficiency, are not directly exported from the program but must be generated in Python from other datasets. The data are fit with a power-law function in Python. To generate the graphs, the matplotlib Python library is used. The data shown in the graphs are the average of 20 simulations, smoothed with a moving average with a window of 50. Furthermore, any graph that requires the final value of a dataset obtains that value by averaging the last 200 points of the dataset without the moving average.
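A minimal sketch of this post-processing pipeline (the file name and column layout are hypothetical; the window of 50, the 200-point final average, and the power-law fit follow the description above):

```python
import numpy as np

def moving_average(x, window=50):
    """Moving average with the window used in the analysis."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

def final_value(x, n=200):
    """Average of the last n points of a raw (non-smoothed) dataset."""
    return float(np.mean(x[-n:]))

def power_law_fit(x, y):
    """Fit y = c * x**k by a linear fit in log-log space; returns (k, c)."""
    k, log_c = np.polyfit(np.log(x), np.log(y), 1)
    return k, np.exp(log_c)

# Hypothetical usage with a dataset exported from NetLogo:
# path_time = np.loadtxt("path_time.csv")      # one value per tick, one run
# smoothed  = moving_average(path_time)
# final_t   = final_value(path_time)
```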

6.4. Average Path Length

The average path length, <l>, estimates the average length of the paths between the food and the nest created by the ants. On each tick, the path-length variable of each ant is increased by the amount by which it moved, which is 1 patch per tick in this simulation. When an ant reaches an endpoint, the path-length variable is stored in a list and reset to zero. This list collects all of the paths completed on that tick, and at the end of the tick the list is averaged and added to the average path length dataset. If no paths were completed, 0 is added to the average path length as a placeholder; this can easily be removed in the analysis step because it is known that the path length cannot be 0. It is also important to note that, due to the method used to calculate this dataset, there will be a clear peak if a stable path is formed: because the path length of every ant must begin at zero, the dataset is not representative before the peak, as the shorter paths from the start of the simulation are averaged in. The peak itself shows a shifting trend with self-organization when parameters are changed. The average path length data in this simulation are identical to the average path time and can be used interchangeably whenever time is needed instead of distance; if the speed were varying, the distance and the time would differ.

6.5. Flow Rate

The flow rate, $\phi$, is the number of paths completed at each tick, or crossings between the nodes in this simple network, which is how the number of events in the system is defined. It measures how many ants reach the endpoints on each tick and is obtained simply by counting how many ants reach the food or the nest and adding this value to the dataset. This measure fluctuates considerably, so a moving average is necessary to make the graph readable.

6.6. Final Pheromone

The final pheromone is the total amount of pheromone at the end of the simulation, which represents the information available to the ants. The total amount of pheromone is calculated on each tick by summing the nest and food pheromone values of every patch, which vary during the simulation. By final pheromone, we mean the average of the total pheromone over the last 200 ticks of the simulation.

6.7. Total Action

Action is calculated as the energy used times the time for each trajectory. Since the kinetic energy is constant during the motion, it can be set to 1, so the individual action becomes equal to the time for one edge crossing, which is in turn equal to the length of the trajectory. To get the total action of all agents, this is multiplied by the number of all events, i.e., all crossings. The effective potential and the random wiggle angle are reflected in the length of the trajectory and therefore in the average path time. The calculation of the total action is based on the flow rate and the average path time; it is calculated after the simulation in Python using the equation $Q = \phi \langle t \rangle$.

6.8. Average Action Efficiency

The average action efficiency $\langle \alpha \rangle$ is defined as the number of events in the system per unit of total action, i.e., the reciprocal of the average action per edge crossing. It is calculated by dividing the number of events by the total action in the system, $\langle \alpha \rangle = \phi / Q$, which in this simulation reduces to $\langle \alpha \rangle = 1 / \langle t \rangle$, based on the data for the average path time. Note that the calculation of the average action efficiency is first performed on the individual datasets and the modified datasets are then averaged, rather than averaging the datasets first and then applying the equation.
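A short sketch of these two post-processing formulas, assuming `flow_rate` and `path_time` are hypothetical per-tick arrays exported from one run and using the 200-point final averages described above:

```python
import numpy as np

def total_action(flow_rate, path_time, n=200):
    """Q = phi * <t>: the final flow rate times the final average path time
    (the constant kinetic energy is set to 1)."""
    phi = np.mean(flow_rate[-n:])
    t_avg = np.mean(path_time[-n:])
    return phi * t_avg

def action_efficiency(path_time, n=200):
    """<alpha> = phi / Q, which reduces to 1 / <t> in this simulation."""
    return 1.0 / np.mean(path_time[-n:])
```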

6.9. Density

The density of the ants can be used as an order parameter. The density changes as a result of the reduction of randomness in the motion. The program starts at maximum randomness, i.e., maximum internal entropy, and while the path is forming, the local density of the ants increases. In our simulations, the total number of ants is fixed, and no ants enter or leave the system during each run. However, the density of ants within the simulation space changes over time as they redistribute themselves. Ants are initially distributed uniformly, but as they follow pheromone trails, they tend to concentrate in specific regions, particularly along frequently used paths. This leads to local increases in density along these paths and corresponding decreases in less-used areas, reflecting the emergence of self-organized patterns. Between runs, when the total number of ants in the simulation is changed, the density scales proportionally to reflect the change in the total number of ants.
To calculate the density of the ants, we need the average number of ants per patch. To achieve this, we approximate a box around the ants in the system to represent the area that they occupy. First, the center of the box is calculated as the mean position of all ants, $C_{x,y} = \overline{p_{x,y}}$, where $p_{x,y}$ is the position of each ant at each tick. Then, the length and width of the box are calculated as $S_{x,y} = 4\sqrt{\overline{(p_{x,y} - C_{x,y})^2}}$, i.e., four standard deviations of the ant positions in each direction. Finally, the area is calculated as $A = S_x S_y$. By averaging the dimensions of the box in this way, instead of simply taking the furthest ant, a group of ants has priority over a few outliers. In Python, after the simulation is finished, the density of ants per patch can be calculated for each population N as $\rho = N/A$: the total number of ants divided by the area of the box in which the ants are concentrated. At the beginning of the simulation, the box covers the whole world, and as the ants form the path, it gradually decreases in size, corresponding to an increased density.
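A sketch of this estimate in NumPy, assuming `x` and `y` are hypothetical arrays of the per-ant coordinates on a single tick (not the exported NetLogo variables themselves):

```python
import numpy as np

def density_estimate(x, y):
    """Estimate the ant density rho = N / A on one tick.
    The occupied area is a box of 4 standard deviations per side around the
    mean position, so a few outlier ants do not dominate the estimate."""
    n = len(x)
    cx, cy = np.mean(x), np.mean(y)                  # center of the box
    sx = 4.0 * np.sqrt(np.mean((x - cx) ** 2))       # box width
    sy = 4.0 * np.sqrt(np.mean((y - cy) ** 2))       # box height
    area = sx * sy
    return n / area, area
```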

6.10. Entropy

The system starts with maximum internal entropy, which decreases as paths are formed over time. First, using the same method as described in Section 6.9, a box is calculated around the ants within the system to represent the area that they occupy.
We consider the agents in our simulation to be distinguishable, because there are two different types of ants and each ant in the simulation is labeled and identifiable. The Boltzmann entropy is
$$S = k_B \ln(W)$$
where the number of states W is the area A that the ants occupy raised to the power of the number of ants N:
$$W = A^N$$
Plugging this into the Boltzmann formula, we get:
$$S = k_B \ln A^N$$
Setting $k_B = 1$, we obtain the expression used in our calculations:
$$S = N \ln A$$
The box is the average size of the area A in which the ants move. As the box decreases, the number of possible microstates in which they can be decreases.
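Continuing the sketch from the previous subsection, the internal entropy follows directly from the estimated box area A and the number of ants (a minimal sketch with $k_B = 1$):

```python
import numpy as np

def internal_entropy(n_ants, area):
    """Boltzmann entropy with k_B = 1: S = ln(A**N) = N * ln(A),
    where A is the box area from the density estimate."""
    return n_ants * np.log(area)
```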

6.11. Unit Entropy

Unit entropy measures the amount of entropy per path in the simulation. It is calculated in Python by dividing the internal entropy by the flow rate. It measures the unit entropy at the end of the simulation, so the final 200 points of the internal entropy data are averaged, as are the final 200 points of the flow rate data. The averaged final entropy is then divided by the averaged flow rate: $s_f / \phi$.

6.12. Simulation Parameters

Parameter Values and Settings
Tables 1 to 4 below show the simulation parameters. Table 1 shows the properties that affect the behavior of ants, such as speed of motion, wiggle angle, pheromone detection, and size of the ants. Table 2 shows the settings that affect the properties of the pheromone, such as diffusion rate, evaporation rate and the initial amount of pheromone that the ants pick up when they visit the food or the nest. Table 4 shows settings that affect the world size and initial conditions of the ants. Table 3 shows the size and the positioning of the food and nest.
In Table 3, the settings specify that the food and nest are boxes centered vertically on the screen. They do not move during the simulation. Horizontally, their back edges are aligned with the edge of the screen, and they have a size of 5x5. To create this, set the properties listed in Table 3 and then press the "box-food-nest" button.
Analysis parameters: all datasets are averages of 20 runs for each population, with a moving average window of 50 applied after the standard averaging.

6.13. Simulation Tests

We ran several tests to show that the simulation and analysis were working correctly.

6.13.1. World Size

We check how many patches the world contains for the current setting. Running a command in NetLogo that counts how many patches are in a world with a size of 40 prints a value of 1681, and $\sqrt{1681} = 41$. This means that when the world ranges from -20 to +20, the center patch is included, making a total of 41 patches in each direction.

6.13.2. Estimated Path Area

We ran a test to check how well the algorithm estimates the area that the ants occupy. We observed the algorithm working in the vertical direction, both when the ants are randomly dispersed and when they are on the horizontal path. When the ants were dispersed, the estimated width was 46.8, slightly above the real world size of 41. When the ants formed a path, the estimated width was 5.6, which is close to the observed width, with only a few outliers. So the function that estimates the path width may be a few patches off, but this is due to the stochastic behavior when averaging the positions. If we did not use averaging, however, the outlier ants would have an undesirable impact on the estimated width and make the measurement fluctuate much more. The methods for checking the width and the length of the path are identical, and both are used in calculating the area occupied by the ants, which is an important step in calculating the entropy and the density.

7. Results

In this section, we present the data, in Figs. 3 to 38, for self-organization as measured by different parameters, as the output of the agent-based modeling simulations. For clarity, we emphasize that those quantities are predicted by the model in Section 4.2, while the data are produced by the simulation; the model predictions are tested by fitting the simulation data.
First, we present the time data for the raw output, Figure 4, Figure 5, Figure 6 and Figure 7, from which the points for the power-law graphs were obtained. We show the evolution of some of the quantities from the beginning to the end of the simulation; the phase transition from initial disorder to order can be seen. The last 200 points, whose average has been used for the power-law figures, can also be observed. The number of ants in the runs varies from 70 to 200, and the time in the simulation runs from 0 to 1000 ticks. The derived data from the time graphs are presented in Figs. 7 to 38, which are fit with power-law functions to compare with the predictions of the model; the fit parameters are given in Table 5.

7.1. Time Graphs

The raw data are presented as output measures vs time for four quantities, Figure 4, Figure 5, Figure 6 and Figure 7. All variables measure the degree of order in the system. The time data show the phase transition from a disorganized to an organized state as an increase of the order parameter, Figure 4, as a decrease in internal entropy, Figure 5, the amount of information, Figure 6 and the number of events per unit time, Figure 7. Those data are exponential in the region before the inflection point of the curves, where growth is unconstrained.
The density serves as an order parameter, Figure 4, which changes similarly to entropy, Figure 5, information, Figure 6, and flow, Figure 7. In the first case, the system starts with a close to zero order parameter which increases to some maximum value and then saturates as the system is fixed in size. In the case of entropy, the system starts at maximum internal entropy and it drops to a minimum value as the system reaches the saturation point.
Pheromone is the amount of information in the system which is proportional to the degree of order Figure 6, and flow rate is also proportional to all of the other measures, indicating the number of events as defined in the system Figure 7. Both of these start at an initial minimum value and undergo a phase transition to the organized state, after which they saturate, due to the fixed size of the system. Those are measures directly connected to action efficiency and are some of the most important performance metrics for self-organizing systems.
The data in Figure 4 show the whole run as the density increases with time as self-organization occurs. The more ants, the larger the density. The increase of density depends on two factors: 1. The shorter average path length at the end, and 2. the increased number of ants.
Figure 4. The density of ants versus the time as the number of ants increases from the bottom curve to the top. As the simulation progresses, the ants become more dense.
Figure 5 shows the initial entropy, when the ants are randomly dispersed, which scales with the number of ants because the number of microstates corresponding to the same macrostate grows. When the ants form the path, the decrease in entropy is larger for a larger number of ants, even though the absolute value of the entropy at the end of the simulation also scales with the number of ants, since the number of microstates when the ants form the final path also scales with the number of ants. The entropy in both the initial and the final state of the simulation is seen to grow with the number of agents.
Figure 5. The internal entropy in the simulation versus the time as the number of ants increases from bottom to top. Entropy decreases from the initial random state as the path forms.
The pheromone is a measure of the information in the system; its change during the simulation is shown in Figure 6 and scales with the number of ants. Each simulation starts with zero pheromone, as the ants are dispersed randomly and do not carry any pheromone. As they start forming the path, they lay pheromone picked up from the food and the nest, respectively, and the more ants there are in the simulation, the more pheromone they carry. Larger systems contain more information, as each agent is a carrier of information.
Figure 6. The total amount of pheromone versus the time passed as the number of ants increases from bottom to top. As the simulation progresses, there is more pheromone for the ants to follow.
Figure 7 shows the flow rate vs. time during the simulation. The number of events, which is the number of visits to the food and the nest, is inversely proportional to the average path length (and correspondingly to the average path time) and scales with the number of ants, as expected. Initially, the number of crossings is zero, but it quickly increases and is greater for the larger systems. After the ants form the shortest path, it saturates and stays close to constant in all simulations.
Figure 7. The flow rate versus the time passed as the number of ants increases from bottom to top. As the simulation progresses, the ants visit the endpoints more often.
The average path length also can be used as a measure of the time and degree of self-organization.

7.2. Power Law Graphs

All figures representing the relationships between the characteristics of this system in the most organized state, at the end of the simulation, demonstrate power-law relationships between all of the quantities, as theoretically predicted by the model (Figs. 7 to 38). They are all on a log-log scale. On the one hand this serves as a confirmation of the model, and on the other as a theoretical explanation of the simulation. The power-law data correspond to scaling relations measured in many systems of different natures [32,34].
For clarity, the "#" symbol represents the number of ants in the simulation. For these data, the number of ants does not change within a simulation; rather, each number represents a different set of initial parameters.

7.2.1. Size-Complexity Rule

Figure 8 shows the size-complexity rule: as the size increases, the action efficiency as a measure of the degree of organization and complexity increases. This is supported by all experimental and observational data on scaling relations by Geoffrey West, Bonner, Carneiro, and many others [12,32,33,34]. They confirm Kleiber’s law [47], and other similar laws, such as the area speciation rule in ecology and others.
Figure 8. The average action efficiency at the end of the simulation versus the number of ants, on a log-log scale. As more ants are added, they are able to form more action-efficient structures by finding shorter paths.

7.2.2. Unit-Total Dualism

The following graphs serve as empirical support for the unit-total dualism described in this paper. Figure 9 shows the unit-total dualism between the average action efficiency and the total action.
Figure 9. The average action efficiency at the end of the simulation versus the total action as the number of ants increases on a log-log scale. As there is more total action within the system, the ants become more action-efficient.
The total action is a measure of all the energy and time spent by the agents in the system, as can be seen in Figure 9. As the number of agents increases, the total action increases. This demonstrates the duality of decreasing unit action and increasing total action as a system self-organizes, grows, develops, and evolves. It also demonstrates the dynamical action principle: the unit action per event decreases with the growth of the system, as seen in the increase of the average action efficiency, while the total action increases. This is an expression of the dualism between the decreasing unit action principle and the increasing total action principle for dynamical action as systems self-organize, grow, evolve, and develop.
Figure 10 is an expression of the unit-total duality of entropy: while the unit entropy in the system tends to decrease, its total entropy increases.
Figure 10. Unit entropy at the end of the simulation versus internal entropy on a log-log scale. As the total entropy for the simulation increases, the entropy per agent decreases.

7.2.3. The Rest of the Characteristics

Next, we show the remaining power-law fits between all of the quantities in the model (Figs. 10 to 38). All of them are on a log-log scale, where a straight line corresponds to a power-law curve on a linear-linear scale. These graphs match the predictions of the model and confirm the power-law relationships between all of the characteristics of a complex system derived there.
Figure 11 shows the average action efficiency at the end of the simulation versus the time required to traverse the path as the size of the system, in terms of number of agents, increases. In complex systems, as the agents find shorter paths, this state is more stable in dynamic equilibrium and is preserved: it has a higher probability of persisting and is memorized by the system. If there is friction in the system, this trend becomes even stronger, as the energy spent to traverse the shorter path also decreases. Each macro-state corresponds to many micro-states, given by the variations of the paths of the individual agents.
Figure 12 shows the average action efficiency at the end of the simulation versus the density increase of the agents as the size of the system increases in terms of number of agents. Density increases the probability of shorter paths, i.e. less time to reach the destination, i.e. larger action efficiency. In natural systems, as density increases, action efficiency increases, i.e. the level of organization increases. Another term for density is concentration. When hydrogen gas clouds in the universe concentrate into stars under the influence of gravity, nucleosynthesis starts and the evolution of cosmic elements begins. In chemistry, increased concentration of reactants speeds up chemical reactions, i.e. they become more action efficient. When single-cell organisms concentrate in colonies and later in multicellular organisms, their level of organization increases. When human populations concentrate in cities, action efficiency increases and civilization advances.
As the statistical Boltzmann entropy decreases by a greater amount during self-organization, as seen in Figure 13, the system becomes more action-efficient. Decreased randomness is correlated with a well-formed path acting as a flow channel, which corresponds to the structure (organization) of the system. Here, the entropy difference obeys the predictions of the model, being in a strict power law dependence on the other characteristics of the self-organizing complex system.
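A minimal sketch of how such an entropy decrease can be quantified is given below, assuming a Shannon/Boltzmann-style entropy of the occupancy of the 41x41 grid patches by the ants. The exact estimator and binning used in the simulation may differ, and the two states generated here are synthetic stand-ins for the disordered initial state and the organized trail.

```python
import numpy as np

def position_entropy(x, y, world_size=41):
    """Shannon entropy (nats) of ant occupancy over the grid patches."""
    counts, _, _ = np.histogram2d(x, y, bins=world_size,
                                  range=[[-20.5, 20.5], [-20.5, 20.5]])
    p = counts.ravel() / counts.sum()
    p = p[p > 0]                      # zero-count patches contribute nothing
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)
n_ants = 500
# Disordered initial state: ants scattered uniformly over the world.
s_initial = position_entropy(rng.uniform(-20, 20, n_ants),
                             rng.uniform(-20, 20, n_ants))
# Organized final state: ants concentrated along a narrow trail near y = 0.
s_final = position_entropy(rng.uniform(-20, 20, n_ants),
                           rng.normal(0.0, 1.0, n_ants))
print(f"entropy decrease: {s_initial - s_final:.2f} nats")
```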
Figure 14 shows the average action efficiency at the end of the simulation versus the flow rate as the size of the system in terms of number of agents increases. The flow rate measures the number of events in a system. For real systems, those can be nuclear or chemical reactions, computations, or anything else. In this simulation, it is the number of visits at the endpoints, or the number of crossings. As the speed of the ants is a constant in this simulation, the number of visits or the flow of events is inversely proportional to the time for crossing, i.e. the path length, therefore action efficiency increases with the number of visits.
Figure 15 shows the average action efficiency at the end of the simulation versus the amount of pheromone, or information, as the size of the system in terms of number of agents increases. The pheromone is what instructs the ants how to move: they follow its gradient towards the food or the nest. As the ants form the path, they concentrate more pheromone on the trail, and they lay it faster, so it has less time to evaporate. Both depend on each other in a positive feedback loop. This leads to increased action efficiency, with a power-law dependence as predicted by the model. In other complex systems, the analog of the pheromone can be temperature or catalysts in chemical reactions. In an ecosystem, as animals traverse a path, the path itself carries information, and clearing the path reduces obstacles and therefore the time and energy to reach the destination, i.e. the action.
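For concreteness, the sketch below approximates one tick of a NetLogo-style pheromone field update using the parameter values in Table 2 (diffusion rate 0.7, evaporation rate 0.06, deposit of 30 units). It is a rough stand-in for NetLogo's built-in diffuse/evaporate cycle, not the actual simulation code; the wrap-around boundary and the ant positions used here are assumptions.

```python
import numpy as np

DIFFUSION_RATE = 0.7     # fraction of a patch's pheromone shared with its neighbors
EVAPORATION_RATE = 0.06  # fraction of pheromone lost per tick
DEPOSIT = 30.0           # units of pheromone laid by an ant on its patch

def step_pheromone(field, ant_cells):
    """One tick: deposit at ant positions, diffuse to the 8 neighbors, evaporate."""
    for (i, j) in ant_cells:
        field[i, j] += DEPOSIT
    shared = DIFFUSION_RATE * field / 8.0     # equal share to each of 8 neighbors
    new = (1.0 - DIFFUSION_RATE) * field      # what each patch keeps
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di or dj:
                new += np.roll(np.roll(shared, di, axis=0), dj, axis=1)
    return new * (1.0 - EVAPORATION_RATE)

# 41x41 world as in Table 3; two ants near the center (hypothetical positions).
field = np.zeros((41, 41))
field = step_pheromone(field, ant_cells=[(20, 20), (20, 21)])
print(field.max())
```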
Figure 16 shows the total action at the end of the simulation versus the size of the system in terms of the number of agents. The total action is the sum of the actions of each agent. As the number of agents grows the total action grows. This graph demonstrates the principle of increasing total action in self-organization, growth, evolution, and development of systems.
Figure 17 shows the total action at the end of the simulation versus the time required to traverse the path as the size of the system, in terms of the number of agents, increases. With more ants, the path forms better and gets shorter, which increases the number of visits. The shorter time is connected to more visits and the increased size of the system, which is why the total action increases. This graph also demonstrates the principle of increasing total action in self-organization, growth, evolution, and development of systems.
Figure 18 shows the total action at the end of the simulation versus the increase of density of agents as the size of the system in terms of number of agents increases. A larger system contains more agents, which corresponds to greater density, more trajectories, and more total action. This graph likewise demonstrates the principle of increasing total action in self-organization, growth, evolution, and development of systems.
Figure 19 shows the total action at the end of the simulation versus the absolute decrease of entropy as the size of the system in terms of number of agents increases. As the total entropy difference increases, which means that the decrease of the internal entropy is greater for a larger number of ants, the total action increases, because there are more agents in the system and they visit the nodes more often. Greater organization of the system is correlated with more total action demonstrating again the principle of increasing total action in self-organization, growth, evolution, and development of systems.
Figure 20 shows the total action at the end of the simulation versus the flow rate, which is the number of events per unit time, as the size of the system in terms of number of agents increases. As the flow of events increases, which is the number of crossings of ants between the food and nest, the total action increases, because there are more agents in the system and they visit the nodes more often by forming a shorter path. This also demonstrates the principle of increasing total action in self-organization, growth, evolution, and development of systems.
Figure 21 shows the total action at the end of the simulation versus the amount of pheromone, as a measure of information, as the size of the system in terms of the number of agents increases. As the total number of agents in the system increases, they leave more pheromone, which leads to a shorter path, more visits, and greater total action. Again, this graph demonstrates the principle of increasing total action in self-organization, growth, evolution, and development of systems.
Figure 22 shows the total pheromone as a measure for the amount of information at the end of the simulation versus the size of the system in terms of number of agents. As the total number of ants in the system increases, they leave more pheromones and form a shorter path, which counters the evaporation of the pheromones. This increases the amount of information in the system, which helps with its rate and degree of self-organization.
Figure 23 shows the total pheromone at the end of the simulation versus the average time required to traverse the path as the size of the system in terms of number of agents increases. As the total number of ants in the system increases, they form a shorter path, as the degree of self-organization is higher, and because they visit the food and nest more often and there are more of them, they leave more pheromone. The increased amount of information in turn helps form an even shorter path, which reduces pheromone evaporation and increases the pheromone even more. This is a visualization of the result of this positive feedback loop.
Figure 24 shows the total pheromone as a measure of information at the end of the simulation versus the density increase as the size of the system in terms of number of agents increases. As the total number of ants in the system increases, they form a shorter path as the degree of self-organization is higher, and as there are more ants, their density increases, and as they visit the food and nest more often and there is a greater number of ants and lower evaporation, they leave more information.
Figure 25 shows the total pheromone as a measure of amount of information at the end of the simulation versus the absolute decrease of entropy as the size of the system in terms of number of agents increases. As the total number of ants in the system increases, they form a shorter path as the degree of self-organization is higher. As there are more ants, the entropy difference increases. The entropy during each simulation decreases, and as they visit the food and nest more often and there is a greater number of ants and less evaporation they accumulate more pheromones.
Figure 26 shows the total pheromone as a measure of the amount of information in the systems at the end of the simulation versus the flow rate, which is the number of events (crossings of the edge) per unit time, as the size of the system in terms of number of agents increases. As the total number of ants in the system increases, they form a shorter path as the degree of self-organization is higher. They visit the food and nest more often, and as there are more ants, the number of visits increases proportionally, the evaporation decreases, and they accumulate more pheromones.
Figure 27 shows the flow rate in terms of number of events at the end of the simulation versus the size of the system in terms of number of agents. As the total number of ants in the system increases, they form a shorter path as the degree of self-organization is higher, visit the food and nest more often, and the number of visits increases proportionally.
Figure 28 shows the flow rate in terms of number of events per unit time at the end of the simulation versus the time required to traverse between the nodes as the size of the system in terms of number of agents increases. As the total number of ants in the system increases, they form a shorter path as the degree of self-organization is higher, visit the food and nest more often, and as there are more ants, the number of visits increases proportionally.
Figure 29 shows the flow rate in terms of number of events (edge crossings) per unit time at the end of the simulation versus the increase of density as the size of the system in terms of number of agents increases. As the total number of ants in the system increases, they form a shorter path, as the degree of self-organization is higher; this leads to an increase in density, and as there are more ants, the number of visits increases proportionally.
Figure 30 shows the flow rate in terms of number of events (edge crossings) per unit time at the end of the simulation versus the absolute decrease of entropy as the size of the system in terms of number of agents increases. As the total number of ants in the system increases, they form a shorter path as the degree of self-organization is higher, the absolute decrease of entropy is larger, and as there are more ants, the number of visits increases proportionally.
Figure 31 shows the absolute amount of entropy decrease versus the size of the system, in terms of number of agents, as it increases. As the total number of ants in the system increases, they form a shorter path, as the degree of self-organization is higher; they start with a larger initial entropy, and the difference between the initial and final entropy grows. More ants correspond to a greater internal entropy decrease, which is one measure of self-organization. This is one of the scaling laws in the size-complexity rule.
Figure 32 shows the absolute amount of entropy decrease versus the average time required to traverse the path at the end of the simulation as the size of the system, in terms of number of agents, increases. As the total number of ants in the system increases, they form a shorter path and the entropy decrease is greater, as the degree of self-organization is higher. A shorter path corresponds to a shorter time to cross between the two nodes, and the internal entropy decreases more.
Figure 33 shows the absolute amount of entropy decrease versus the amount of density increase at the end of the simulation as the size of the system in terms of number of agents increases. As the total number of ants in the system increases, they form a shorter path, as the degree of self-organization is higher, and as there are more ants, their density increases and the internal entropy difference increases proportionally.
Figure 34 shows the amount of density increase versus the size as it increases in terms of number of agents. As the total number of ants in the system increases, they form a shorter path as the degree of self-organization is higher, and as there are more ants, the density increases proportionally.
Figure 35 shows the amount of density increase versus the average time required to traverse the path as the size increases in terms of number of agents. As the total number of ants in the system increases, they form a shorter path as the degree of self-organization is higher, visit the food and nest more often, the time to cross between the nodes decreases, and the density increases proportionally.
Figure 36 shows the average time required to traverse the path versus the increasing size of the system in terms of number of agents. As the total number of ants in the system increases, they form a shorter path as the degree of self-organization is higher, visit the food and nest more often, and the time for the visits decreases proportionally, increasing action efficiency.
Figure 37 shows the final entropy at the end of the simulation versus the size of the system in terms of number of agents. The final entropy in the system increases when there are more agents, and therefore more possible microstates of the system.
Figure 38 shows the initial entropy at the beginning of the simulation versus the size of the system in terms of number of agents. The initial entropy reflects the larger number of agents in a fixed initial size of the system and scales with the size of the system as expected. The initial entropy increases when there are more agents in the space of the simulation, and therefore more possible microstates of the system.
Figure 39 shows the unit entropy at the end of the simulation versus the size of the system in terms of number of agents.
Figure 40 shows the unit information per path at the end of the simulation versus the size of the system in terms of number of agents. It shows that as the system increases in size, it has more ability to self-organize and to form shorter paths, therefore needing less information for each path, which in this system corresponds to one event. More organized systems find shorter paths for their agents and need less information per path.
Figure 41 shows the unit information per path at the end of the simulation versus the total information in the system. As the system grows, the total information in the system increases and the system has more ability to self-organize and to form shorter paths, therefore needing less information for each path. More organized systems find shorter paths for their agents and need less information per path, while the total amount of information in the system increases.
The values of the fit parameters for the power law relationships are shown in Table 5.

8. Discussion

Hamilton’s principle of stationary action has long been a cornerstone of physics, stating that the path taken by a system between two states is one for which the action is stationary; for most potentials in classical physics this is a minimum, in some cases a saddle point, and never a true maximum. Our research extends this principle to the realm of complex systems, proposing that the average action efficiency (AAE) serves as a predictor, measure, and driver of self-organization within these systems. By utilizing agent-based modeling (ABM), particularly through simulations of ant colonies, we demonstrate that systems naturally evolve towards states of higher organization and efficiency, consistent with the minimization of the average physical action per event in the system. In this simulation, as the number of agents in each run is fixed, all characteristics undergo a phase transition from an unorganized initial state to an organized final state. All of the characteristics are correlated with power-law relationships in the final state. This provides a new way of understanding self-organization and its driving mechanisms.
As an example, for a single agent, a state of the system with half the action of another state is calculated to have twice the degree of organization. An extension of the model to open systems of n agents provides a method for calculating the level of organization of any system. The significance of this result is that it provides a quantitative measure for comparing different levels of organization within the same system and for comparing different systems.
The size-complexity rule can be summarized as follows: for a system to improve, it must become larger, i.e. for a system to become more organized and action-efficient, it needs to expand. As a system’s action efficiency increases, it can grow, creating a positive feedback loop where growth and action efficiency reinforce each other. The negative feedback loop is that the characteristics of a complex system cannot deviate much from the power law relationship. If we externally limit the growth of the system, we also limit the increase in its action efficiency. Then the action becomes stationary, which means that the average action efficiency and the total action in the system stop increasing. Otherwise, for unbounded, growing systems, action is dynamic, which means that the action efficiency and total action continue increasing. This applies to dynamic, open thermodynamic systems that operate outside of thermodynamic equilibrium and have flows of energy and matter. The growth of any system is driven by its increase in action efficiency. Without reaching a new level of action efficiency, growth is impossible. This principle is evident in both organisms and societies.
Other characteristics, such as the total amount of action in the system, the number of events per unit of time, the internal entropy decrease, the density of agents, the amount of information in the system (measured in terms of pheromone levels), and the average time per event, are strongly correlated and increase according to power law functions. Changing the population in the simulation influences all these characteristics through their power law relationships. Because these characteristics are interconnected, measuring one can provide the values of the others at any given time in the simulation using the coefficients in the power law fits (Table 5). If we consider the economy as a self-organizing complex system, we can find a logical explanation for the Jevons paradox, which may need to be renamed the Jevons rule, because it is an observation of a regular property of complex systems and not an unexplained, counter-intuitive fact, as it has been considered for a long time.
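A minimal sketch of this use of the fits is shown below: once two characteristics are linked by a fitted power law y = a·x^b, measuring one predicts the other. The coefficient values here are hypothetical placeholders, not the entries of Table 5.

```python
def predict(x, a, b):
    """Evaluate a fitted power law y = a * x**b."""
    return a * x ** b

# Hypothetical coefficients (not values from Table 5): e.g. total action vs. number of ants.
a_QN, b_QN = 45.5, 0.98
for N in (100, 400, 1600):
    print(N, round(predict(N, a_QN, b_QN), 1))
```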
We uncover a unit-total dualism in some of the characteristics, such as action and entropy: while the action and entropy per event decrease, the total action and total entropy of the whole system proportionally increase. The unit and total quantities are correlated by power law equations. In the case of action, this leads to a dynamical action principle, where unit action decreases during self-organization while total action increases. This variational principle is observed in self-organizing complex systems, not in isolated agents.
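In generic form, this dualism can be written as follows (the symbols and exponents are illustrative, not the fitted values of Table 5): if both the total quantity and the number of events grow as power laws of the system size N, with the number of events growing faster, then the unit quantity follows a decreasing power law while the total quantity increases.

```latex
\begin{align}
  Q_{\mathrm{tot}} &= a\,N^{b}, \qquad n_{\mathrm{events}} = c\,N^{d}, \qquad d > b > 0,\\
  q_{\mathrm{unit}} &= \frac{Q_{\mathrm{tot}}}{n_{\mathrm{events}}}
                     = \frac{a}{c}\,N^{\,b-d},
  \qquad b - d < 0 \;\Rightarrow\; q_{\mathrm{unit}} \text{ decreases while } Q_{\mathrm{tot}} \text{ increases.}
\end{align}
```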
The formation of the path in the simulation is an emergent property of this system, due to the interactions of the agents, and is not specified in the rules of the simulation. This least action state of the system, which is its most organized state, is predicted from the principle of least action. This means that we have a way to predict emergent properties in complex systems using basic physics principles. The emergence of structure from the properties of the agents is a hallmark of self-organizing systems, and it appears spontaneously. Emergence is a property of the entire system and not of its parts.

9. Conclusions

This study reinforces the increase of average action efficiency, during self-organization and with the size of systems, as a driver of and a measure for the evolution of complex systems. This offers new opportunities for understanding and describing the processes leading to increased organization in complex systems. It offers prospects for future research, laying a foundation for more in-depth exploration of the dynamics of self-organization and potentially inspiring the development of new strategies for optimizing system performance and resilience.
Our findings suggest that self-organization is inherently driven by a positive feedback loop, where systems evolve towards states of minimal unit action and maximal organization. Self-organization driven by the action principles could be the simplest explanation and thus passes Occam’s razor. It could be the answer to "Why do complex systems self-organize at all?". Action efficiency always acts together with all other characteristics in the model, not in isolation. It drives self-organization through this mechanism of positive and negative feedback loops.
We found that this theory works well for the current simulation. With additional details and features, it can be applied to more realistic systems. This model is testable and falsifiable. It needs to be continually retested, because every theory, method, and approach has its limits and needs to be extended, expanded, enriched, and detailed as new levels of knowledge are reached. We expect this from all scientific theories and explanations. This model may be tested on any network, for example metabolic networks, ecological networks, the Internet, road networks, etc.
Our simulations demonstrate that the level of organization is inversely proportional to the average physical action required for system processes. This measure aligns with the principle of least action, a fundamental concept in physics, and extends its application to complex, non-equilibrium systems. The results from our ant colony simulations consistently show that systems with higher average action efficiency exhibit greater levels of organization, validating our hypothesis.
When the processes of self-organization are open-ended and continuous, the stationary action principles do not apply anymore except in limited cases. We have dynamical action principles where the quantities are changing continuously, either increasing or decreasing.
We uncovered an extension of the principle of least action to complex systems, which can be an extended variational principle of decreasing unit action per one event in a self-organizing complex system, and it is connected with a power law relation to another mirror variational principle of increasing total action of the system. Other complexity variational principles are the decreasing unit entropy per one event in the system, and the increasing of the total entropy as the system grows, evolves, develops, and self-organizes. We term those polar sets of variational principles, unit-total duality.
Other dualities to explore are that the unit path curvature for one edge of the complex network decreases, according to Hertz’s principle of least curvature, while the total curvature for traversing all paths in the system increases; and that the unit constraint on the motion along one edge decreases, according to the Gauss principle of least constraint, while the total constraint on the motion of all agents increases as the system grows. There are possibly many more variational dualities to be uncovered in self-organizing, evolving, and developing complex systems. Those dualities can be used to analyze, understand, and predict the behavior of complex systems. This is one explanation for the size-complexity rule observed in nature and for the scaling relationships in biology and society. The unit-total dualism is that as unit quantities decrease, with the system becoming more efficient as a result of self-organization, total quantities grow; both are connected by positive feedback and are correlated by a power law relation. As one example, we find a logical explanation for the Jevons and other paradoxes, and for the subsequent work of economists in this field, which are also unit-total dualities inherent to the functioning of self-organizing and growing complex systems.
While our results are promising, our study has limitations. The simplified ant colony model used in our simulations does not capture the full spectrum of complexities and interactions present in real-world systems. Future research should aim to integrate more detailed and realistic models, incorporating environmental variability and agent heterogeneity, to test the universality and applicability of our findings more broadly and for specific systems.
Additionally, the interplay between average action efficiency and other organizational measures, such as entropy and order parameters, deserves further investigation. Understanding how these metrics interact could deepen our comprehension of complex system dynamics and provide a more holistic view of system organization.
The implications of our findings are significant for both theoretical research and practical applications. In natural sciences, this new measure can be used to quantify and compare the organization of different systems, providing insights into their evolutionary processes. In engineering and artificial systems, our model can guide the design of more efficient and resilient systems by emphasizing the importance of action efficiency. For example, in ecological and biological systems, understanding how organisms optimize their behaviors to achieve greater efficiency can inform conservation strategies and ecosystem management. In technology and artificial intelligence, designing algorithms and systems that follow the principle of least action can lead to more efficient processing and better performance.
Our findings contribute to a deeper understanding of the mechanisms underlying self-organization and offer a novel, quantitative approach to measuring organization in complex systems. This research opens up exciting possibilities for further exploration and practical applications, enhancing our ability to design and manage complex systems across various domains. By providing a quantitative measure of organization that can be applied universally, we enhance our ability to design and manage complex systems across various domains. Future research can build on our findings to explore the dynamics of self-organization in greater detail, develop new optimization strategies, and create more efficient and resilient systems.

9.1. Future Work

In Part 2 of this paper we measure the entropy production of this system and include it in the positive feedback model of characteristics, leading to exponential and power law solutions. We then verify that the entropy production also obeys power law relationships with all of the other characteristics of the system. For example, comparing it to the internal entropy, we conclude that as the internal entropy is reduced, the external entropy production increases proportionally. This can be connected to the Maximum Entropy Production Principle, where internal entropy minimization leads to maximization of external entropy production; therefore we can say that self-organization leads to internal entropy decrease and to the formation of flow channels, which maximize external entropy production.
In Part 3 of this paper we will show simulation results demonstrating that the rate of increase of self-organization, as the size of the system increases, is also in a power law relationship with all other characteristics. We will include the rates of change of all characteristics as part of the model of positive feedback loops between them.
In Part 4 we will also include in the model the negative feedback loops between the characteristics and additional factors, such as dissipation, obstacles, and changing boundary conditions, and derive predictions, which we can test with this simulation.
In Part 5 of this paper we will show the results of a 3D simulation of a growing system, where the rates of growth are also a function of the levels of all of the characteristics, derive the solutions, and test them with results from simulations.

Author Contributions

Conceptualization, G.G.; Theory, G.G.; Model, G.G.; methodology, M.B.; software, M.B.; validation, G.G. and M.B.; formal analysis, M.B.; investigation, G.G.; resources, G.G.; data curation, G.G.; writing—original draft preparation, G.G.; writing—review and editing, G.G.; visualization, M.B. and G.G.; supervision, G.G.; project administration, G.G.; funding acquisition, G.G. All authors have read and agreed to the published version of the manuscript.

Data Availability Statement

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Acknowledgments

The authors thank Assumption University for providing a creative atmosphere and funding, and its Honors Program, specifically Prof. Colby Davie, for continuous research support and encouragement. Matthew Brouillet thanks his parents for their encouragement. Georgi Georgiev thanks his wife for her patience and support during the writing of this manuscript.

References

1. Sagan, C. Cosmos (Random House, New York, 1980).
2. Chaisson, E. J. Cosmic evolution (Harvard University Press, 2002).
3. Kurzweil, R. The singularity is near: When humans transcend biology (Penguin, 2005).
4. De Bari, B., Dixon, J., Kondepudi, D. & Vaidya, A. Thermodynamics, organisms and behaviour. Philosophical Transactions of the Royal Society A, 2023.
5. England, J. L. Self-organized computation in the far-from-equilibrium cell. Biophysics Reviews 3, 2022.
6. Walker, S. I. & Davies, P. C. The algorithmic origins of life. Journal of the Royal Society Interface, 2013.
7. Walker, S. I. The new physics needed to probe the origins of life. Nature, 2019.
8. Georgiev, G. & Georgiev, I. The least action and the metric of an organized system. Open Systems & Information Dynamics, 2002.
9. Georgiev, G. Y. Free energy rate density and self-organization in complex systems. Proceedings of ECCS 2014, 321–327 (Springer, 2016).
10. Georgiev, G. Y. et al. Mechanism of organization increase in complex systems. Complexity.
11. Georgiev, G. Y., Chatterjee, A. & Iannacchione, G. Exponential self-organization and Moore’s law: Measures and mechanisms. Complexity.
12. Butler, T. H. & Georgiev, G. Y. Self-organization in stellar evolution: Size-complexity rule. In Efficiency in Complex Systems: Self-Organization Towards Increased Efficiency, 53–80 (Springer, 2021).
13. Shannon, C. E. A mathematical theory of communication. Bell System Technical Journal, 1948.
14. Jaynes, E. T. Information theory and statistical mechanics. Physical Review, 1957.
15. Gell-Mann, M. Complexity measures - an article about simplicity and complexity. Complexity, 1995.
16. Yockey, H. P. Information Theory, Evolution, and The Origin of Life (Cambridge University Press, 2005).
17. Crutchfield, J. P. & Feldman, D. P. Information measures, effective complexity, and total information. Physical Review E, 2003.
18. Williams, P. L. & Beer, R. D. Information-theoretic measures for complexity analysis. Chaos: An Interdisciplinary Journal of Nonlinear Science, 2010.
19. Ay, N., Olbrich, E., Bertschinger, N. & Jost, J. Quantifying complexity using information theory, machine learning, and algorithmic complexity. Journal of Complexity.
20. Kolmogorov, A. N. Three approaches to the quantitative definition of information. Problems of Information Transmission, 1965.
21. Grassberger, P. Toward a quantitative theory of self-generated complexity. International Journal of Theoretical Physics, 1986.
22. Pincus, S. M. Approximate entropy as a measure of system complexity. Proceedings of the National Academy of Sciences, 1991.
23. Costa, M., Goldberger, A. L. & Peng, C.-K. Multiscale entropy analysis of complex physiologic time series. Physical Review Letters, 2002.
24. Lizier, J. T., Prokopenko, M. & Zomaya, A. Y. Local information transfer as a spatiotemporal filter for complex systems. Physical Review E, 2008.
25. Rosso, O. A., Larrondo, H. A., Martin, M. T., Plastino, A. & Fuentes, M. A. Distinguishing noise from chaos. Physical Review Letters, 2007.
26. Maupertuis, P. L. M. de. Essay de cosmologie (De l’Imp. d’Elie Luzac, Netherlands, 1751).
27. Goldstein, H. Classical Mechanics (Addison-Wesley, 1980).
28. Taylor, J. C. Hidden unity in nature’s laws (Cambridge University Press, 2001).
29. Lauster, M. On the principle of least action and its role in the alternative theory of nonequilibrium processes. In Variational and Extremum Principles in Macroscopic Systems, 207–225 (Elsevier, 2005).
30. Nath, S. Novel molecular insights into ATP synthesis in oxidative phosphorylation based on the principle of least action. Chemical Physics Letters.
31. Bersani, A. M. & Caressa, P. Lagrangian descriptions of dissipative systems: a review. Mathematics and Mechanics of Solids, 2021.
32. Bonner, J. T. Perspective: the size-complexity rule. Evolution, 2004.
33. Carneiro, R. L. On the relationship between size of population and complexity of social organization. Southwestern Journal of Anthropology, 1967.
34. West, G. B. Scale: the universal laws of growth, innovation, sustainability, and the pace of life in organisms, cities, economies, and companies (Penguin, 2017).
35. Gershenson, C., Trianni, V., Werfel, J. & Sayama, H. Self-organization and artificial life. Artificial Life, 2020.
36. Sayama, H. Introduction to the modeling and analysis of complex systems (Open SUNY Textbooks, 2015).
37. Carlson, J. M. & Doyle, J. Complexity and robustness. Proceedings of the National Academy of Sciences, 2002.
38. Heylighen, F. & Joslyn, C. Cybernetics and second-order cybernetics. Encyclopedia of Physical Science & Technology, 2001.
39. Jevons, W. S. The coal question; an inquiry concerning the progress of the nation and the probable exhaustion of our coal-mines (Macmillan, 1866).
40. Berkhout, P. H., Muskens, J. C. & Velthuijsen, J. W. Defining the rebound effect. Energy Policy, 2000.
41. Hildenbrand, W. On the "law of demand". Econometrica: Journal of the Econometric Society, 1983.
42. Saunders, H. D. The Khazzoom-Brookes postulate and neoclassical growth. The Energy Journal, 1992.
43. Downs, A. Stuck in traffic: Coping with peak-hour traffic congestion (Brookings Institution Press, 2000).
44. Feynman, R. P. Space-time approach to non-relativistic quantum mechanics. Reviews of Modern Physics, 1948.
45. Gauß, C. F. Über ein neues allgemeines Grundgesetz der Mechanik (Walter de Gruyter, Berlin/New York, 1829).
46. LibreTexts. 5.3: The uniform distribution (2023). Accessed: 2024-07-15.
47. Kleiber, M. Body size and metabolism. Hilgardia, 1932.

Short Biography of Authors

Matthew Brouillet is a student at Washington University as part of the Dual Degree program with Assumption University. His major is Mechanical Engineering, and he has experience with computer programming. He developed the NetLogo programs for these simulations and utilized Python code to analyze the data. He is working with Dr. Georgiev to publish numerous papers in the field of self-organization.
Dr. Georgi Y. Georgiev is a Professor of Physics at Assumption University and Worcester Polytechnic Institute. He earned his Ph.D. in Physics from Tufts University, Medford, MA. His research focuses on the physics of complex systems, exploring the role of variational principles in self-organization, the principle of least action, path integrals, and the Maximum Entropy Production Principle. Dr. Georgiev has developed a new model that explains the mechanism, driving force, and attractor of self-organization. He has published extensively in these areas and has been an organizer of international conferences on complex systems.
Figure 1. Comparison between the geodesic l₂ and a longer path l₁ between two nodes in a network.
Figure 2. Positive feedback model between the eight quantities in our simulation.
Figure 3. Positive feedback model between the eight quantities in our simulation.
Figure 11. The average action efficiency at the end of the simulation versus the time required to traverse the path as the number of ants increases on a log-log scale. Action efficiency increases as the time to reach the destination shortens, i.e. the path length becomes shorter.
Figure 12. The average action efficiency at the end of the simulation versus the density increase measured as the difference between the final density minus the initial density as the number of ants increases, on a log-log scale. As the ants get more dense, they become more action efficient.
Figure 13. The average action efficiency at the end of the simulation versus the absolute amount of entropy decrease, as the number of ants increases, on a log-log scale. As the ants get less random, they become more action-efficient.
Figure 14. The average action efficiency at the end of the simulation versus the flow rate as the number of ants increases, on a log-log scale. As the ants visit the endpoints more often, they become more efficient.
Figure 15. The average action efficiency at the end of the simulation versus the amount of pheromone, or information, as the number of ants increases on a log-log scale. As there is more information for the ants to follow, they become more efficient.
Figure 16. The total action at the end of the simulation versus the number of ants on a log-log scale. As there are more agents in the system, the total amount of action increases proportionally.
Figure 17. The total action at the end of the simulation versus the time required to traverse the path as the number of ants increases on a log-log scale.
Figure 18. The total action at the end of the simulation versus the increase of density as the number of ants increases on a log-log scale. As the ants become more dense, there is more action in the system.
Figure 19. The total action at the end of the simulation versus the absolute increase of entropy difference as the number of ants increases, on a log-log scale. As the entropy difference increases, there is more action within the system.
Figure 20. The total action at the end of the simulation versus the flow rate as the number of ants increases, on a log-log scale. As the ants visit the endpoints more often, there is more total action within the system.
Figure 21. The total action at the end of the simulation versus the amount of pheromone as the number of ants increases on a log-log scale. As there is more information for the ants to follow, there is more action within the system.
Figure 22. The total pheromone at the end of the simulation versus the number of ants, on a log-log scale. As more ants are added to the simulation, there is more information for the ants to follow.
Figure 23. The total pheromone at the end of the simulation versus the time required to traverse the path as the number of ants increases on a log-log scale. As it takes less time for the ants to travel between the nodes, there is more information for the ants to follow and as there is more pheromone to follow, the trajectory becomes shorter - a positive feedback loop.
Figure 24. The total pheromone at the end of the simulation versus the density increase as the number of ants increases on a log-log scale. As the ants become more dense, there is more information for them to follow.
Figure 25. The total pheromone at the end of the simulation versus the absolute increase of entropy difference as the number of ants increases on a log-log scale. As the entropy difference increases, there is more information for the ants to follow and greater self-organization.
Figure 26. The total pheromone at the end of the simulation versus the flow rate as the number of ants increases on a log-log scale.
Figure 27. The flow rate at the end of the simulation versus the number of ants, on a log-log scale. As more ants are added to the simulation and they are forming shorter paths in self-organization, the ants are visiting the endpoints more often.
Figure 28. The flow rate at the end of the simulation versus the time required to traverse between the nodes as the number of ants increases, on a log-log scale. As the path becomes shorter, the ants are visiting the endpoints more often.
Figure 29. The flow rate at the end of the simulation versus the increase of density as the number of ants increases on a log-log scale. As the ants get more dense, they are visiting the endpoints more often.
Figure 30. The flow rate at the end of the simulation versus the absolute decrease of entropy as the number of ants increases, on a log-log scale. As the entropy decreases more, the ants are visiting the endpoints more often.
Figure 31. The absolute amount of entropy decrease versus the number of ants, on a log-log scale. As more ants are added to the simulation, there is a larger decrease in entropy reflecting a greater degree of self-organization.
Figure 32. The absolute amount of entropy decrease versus the time required to traverse the path at the end of the simulation as the number of ants increases, on a log-log scale. As it takes more time to move between the nodes with fewer ants, there is more of a decrease in entropy, and vice versa.
Figure 33. The absolute amount of entropy decrease versus the amount of density increase as the number of ants increases on a log-log scale. As the ants become more dense, there is a larger decrease in entropy.
Figure 34. The amount of density increase versus the number of ants, on a log-log scale. As more ants are added to the simulation, and they form shorter paths, density increases proportionally.
Figure 35. The amount of density increase versus the time required to traverse the path as the number of ants increases on a log-log scale. When there are more ants it takes less time to traverse the path, and there is more of an increase in density.
Figure 36. The time required to traverse the path versus the number of ants, on a log-log scale. As more ants are added to the simulation, it takes less time to move between the nodes because they form a shorter path at the end of the simulation.
Figure 37. The final entropy at the end of the simulation versus population on a log-log scale. As the population increases, there is more entropy in the final most organized state.
Figure 38. Initial entropy on the first tick of the simulation versus the population on a log-log scale. As the population increases, there is more entropy.
Figure 39. Unit entropy at the end of the simulation versus population on a log-log scale. As there are more agents, there is less entropy per path at the end of the simulation.
Figure 40. Unit information at the end of the simulation versus population on a log-log scale. As there are more agents, there is less information per path at the end of the simulation as the path is shorter.
Figure 41. Unit information at the end of the simulation versus the total information in the system on a log-log scale. As there are more agents, there is less information per path at the end of the simulation as the path is shorter, and more total information as the size of the system in terms of number of agents is larger.
Table 1. Settings of the properties in this simulation that affect the behavior of the ants.
Parameter Value Description
ant-speed 1 patch/tick Constant speed
wiggle range 50 degrees random directional change, from -25 to +25
view-angle 135 degrees Angle of cone where ants can detect pheromone
ant-size 2 patches Radius of ants, affects radius of pheromone viewing cone
Table 2. Settings of the properties in this simulation that affect the behavior of the pheromone.
Parameter Value Description
Diffusion rate 0.7 Rate at which pheromones diffuse
Evaporation rate 0.06 Rate at which pheromones evaporate
Initial pheromone 30 units Initial amount of pheromone deposited
Table 3. Settings of the properties in this simulation that affect the position and size of the food and the nest.
Parameter Value Description
projectile-motion off Ants have constant energy
start-nest-only off Ants start randomly
max-food 0 Food is infinite, food will disappear if this is greater than 0
constant-ants on Number of ants is constant
world-size 40 World ranges from -20 to +20, note that the true world size is 41x41
Table 4. Settings of the properties in this simulation that affect various other conditions.
Parameter Value Description
food-nest-size 5 The length and width of the food and nest boxes
foodx -18 The position of the food in the x-direction
foody 0 The position of the food in the y-direction
nestx +18 The position of the nest in the x-direction
nesty 0 The position of the nest in the y-direction
Table 5. This table contains all the fits for the power-law graphs. The "a" and "b" values in each row follow the equation y = a·x^b, and the R² value is shown in the last column.
variables a b R²
α vs. Q 7.713 · 10 36 6.787 · 10 2 0.977
α vs. i 1.042 · 10 35 6.131 · 10 2 0.981
α vs. ϕ 1.510 · 10 35 6.055 · 10 2 0.982
α vs. Δ s 1.020 · 10 35 6.675 · 10 2 0.978
α vs. Δ ρ 1.647 · 10 35 5.947 · 10 2 0.964
α vs. t 1.622 · 10 34 6.175 · 10 1 0.995
α vs. N 1.168 · 10 35 6.673 · 10 2 0.977
Q vs. i 8.502 · 10 1 9.012 · 10 1 1.000
Q vs. ϕ 2.000 · 10 4 8.897 · 10 1 1.000
Q vs. Δ s 6.202 · 10 1 9.829 · 10 1 0.999
Q vs. Δ ρ 7.133 · 10 4 8.784 · 10 1 0.990
Q vs. t 1.410 · 10 19 8.888 0.972
Q vs. N 4.550 · 10 2 9.830 · 10 1 1.000
i vs. ϕ 4.281 · 10 2 9.873 · 10 1 1.000
i vs. Δ s 7.064 · 10 1 1.090 0.999
i vs. Δ ρ 1.755 · 10 3 9.740 · 10 1 0.988
i vs. t 1.407 · 10 19 9.887 0.976
i vs. N 6.445 1.090 0.999
i u vs. i f 4.626 · 10 2 1.281 · 10 2 0.873
i u vs. N 4.516 · 10 2 1.391 · 10 2 0.864
ϕ vs. Δ s 1.521 · 10 3 1.104 0.999
ϕ vs. Δ ρ 4.175 9.864 · 10 1 0.988
ϕ vs. t 5.438 · 10 16 1.002 · 10 1 0.977
ϕ vs. N 1.427 · 10 2 1.104 0.999
Δ s vs. Δ ρ 1.301 · 10 3 8.939 · 10 1 0.991
Δ s vs. t 4.439 · 10 17 9.035 0.969
Δ s vs. N 7.598 1.000 1.000
s i vs. N 7.598 1.000 1.000
s f vs. N 5.793 9.745 · 10 1 1.000
s u vs. N 4.059 · 10 2 1.298 · 10 1 0.938
s u vs. s f 5.121 · 10 2 1.329 · 10 1 0.935
Δ ρ vs. t 1.103 · 10 16 9.975 0.949
Δ ρ vs. N 3.308 · 10 3 1.110 0.991
t vs. N 7.061 · 10 1 1.075 · 10 1 0.970