Preprint
Article

Modeling and Predicting Self-Organization in Dynamic Systems Out of Thermodynamic Equilibrium; Part 1

A peer-reviewed article of this preprint also exists. This version is not peer-reviewed.

Submitted: 24 October 2024; Posted: 25 October 2024

Abstract
Self-organization in complex systems is a process in which internal entropy is reduced and emergent structures appear that allow the system to function in a more competitive way with other states of the system or with other systems. It occurs only in the presence of energy gradients, facilitating energy transmission through the system and entropy production. Being a dynamic process, self-organization requires a dynamic measure and dynamic principles. The principles of decreasing unit action and increasing total action and the principle of decreasing unit entropy and increasing total entropy are dynamic variational principles that are viable to utilize in a self-organizing system. Based on this, average action efficiency can serve as a quantitative measure of the degree of self-organization. Positive feedback loops connect this measure with all other characteristics of a complex system, providing all of them with a mechanism for exponential growth, and indicating power law relationships between each of them as confirmed by data and simulations. In this study, we apply those principles and the model to agent-based simulations of ants traveling between two locations on a 2D grid. We find that those principles explain self-organization well and that the results confirm the model. We derive a compact model of ant-behavior based on the action of their trajectories, and then estimate a variety of metrics from the simulated behavior. By measuring action efficiency we can have a new answer to the question: "What is complexity and how complex is a system?". This work shows the explanatory and predictive power of those models, which can help understand and design better complex systems.
Keywords: 
Subject: Physical Sciences  -   Theoretical Physics

1. Introduction

1.1. Background and Motivation

Self-organization is key to understanding the existence of, and the changes in all systems that lead to higher levels of complexity and perfection in development and evolution. It is a scientific as well as a philosophical question, as our understanding deepens and our realization of the importance of the process grows. Self-organization often leads to more efficient use of resources and optimized performance, which is one measure of the degree of perfection. By degree of perfection here we mean a more organized, robust, resilient, competitive, and alive system. Because competition for resources is always the selective evolutionary pressure in systems of different natures, the more efficient systems will survive at all levels of Cosmic Evolution.
Our goal is to contribute to the explanation of the mechanisms of self-organization that drive Cosmic Evolution from the Big Bang to the present, and into the future, and its measures[1,2,3]. Self-organization has a universality independent of the substrate of the system - physical, chemical, biological, or social - and explains all of its structures [4,5,6,7]. Establishing a universal, quantitative, absolute method to measure the organization of any system will help us understand the mechanisms of functioning and organization in general and enable us to design specific systems with the highest level of perfection [8,9,10,11,12].
Previous attempts to quantify organization have used static measures, such as information [13,14,15,16,17,18,19] and entropy [20,21,22,23,24,25], but our approach offers a new, dynamic perspective. We do not simply use Hamilton’s stationary action principle; rather, we expand it and derive from it a dynamic action principle for the system studied, in which the average unit action for one trajectory continuously decreases and the total action for the whole system continuously increases with self-organization.
Despite its significance, the mechanisms driving self-organization remain only partially understood due to the non-linearity in the dynamics of complex systems. Existing approaches often rely on specific metrics like entropy or information which are static, and while they are valuable, they have limitations in their universality and ability to predict the most organized state of a system. In many cases, they fail to describe the dynamics of the processes that lead to the increase of order and complexity. For example, a less organized system may require more bits to be described using information, compared to the same system in a more organized, i.e. efficient state. These traditional measures often fall short in providing a comprehensive, quantitative framework that can be applied across various types of complex systems, and to describe the mechanisms that lead to higher levels of organization. This gap in understanding highlights the need for a new measure that can universally and quantitatively assess the level of organization in complex systems and the transitions between them.
The motivation for this study stems from the desire to bridge this gap by introducing a novel measure of organization based on dynamical variational principles. More specifically we use Hamilton’s principle of stationary action, which is the basis of all laws of physics. In the limiting case, when the second variation of the action is positive, this makes it a true principle of least action. The principle of least action posits that the path taken by a physical system between two states is the one for which the action is minimized. Extending this principle to complex systems, we propose Average Action Efficiency as a new, dynamic measure of organization. It quantifies the level of organization and serves as a predictive tool for determining the most organized state of a system. It also correlates with all other measures of complex systems, justifying and validating its use.
Understanding the mechanisms of self-organization has profound implications across various scientific disciplines. Understanding these natural optimization processes can inspire the development of more efficient algorithms and strategies in engineering and technology. It can enhance our understanding of biological and ecological processes. It can allow us to design more efficient economic and social systems. Studying self-organization also has profound scientific and philosophical implications. It challenges traditional notions of causality and control, emphasizing the role of local interactions and feedback loops in shaping global patterns. In our model, each characteristic of a complex system is simultaneously a cause and an effect of all others. By developing a universal, quantitative measure of organization, we aim to advance our understanding of self-organization and provide practical tools for optimizing complex systems across different fields.

1.2. Overview of the Theoretical Framework

We use the extension of Hamilton’s Principle of Stationary Action to a Principle of Dynamic Action, according to which action in self-organizing systems is changing in two ways: decreasing the average action for one event and increasing the total amount of action in the system during the process of self-organization, growth, evolution, and development. This view can lead to a deeper understanding of the fundamental principles of nature’s self-organization, evolution, and development in the universe, ourselves, and our society.

1.3. Hamilton’s Principle and Action Efficiency

Hamilton’s principle of stationary action is the most fundamental principle in nature, from which all other physics laws are derived [26,27]. Everything derived from it is guaranteed to be self-consistent [28]. Beyond classical and quantum mechanics, relativity, and electrodynamics, it has applications in statistical mechanics, thermodynamics, biology, economics, optimization, control theory, engineering, and information theory [29,30,31]. We propose its application, extension, and connection to other characteristics of complex systems as part of the complex systems theory.
Enders notably says: "One extremal principle is undisputed - the least-action principle (for conservative systems), which can be used to derive most physical theories. ... Recently, the stochastic least-action principle was also established for dissipative systems. Information theory and the stochastic least-action principle are important corner stones of modern stochastic thermodynamics." [32] and "Our analytical derivations show that MaxEPP is a consequence of the least-action principle applied to dissipative systems (stochastic least-action principle)" [32].
Similar dynamic variational principles have also been proposed in considering the dynamics of systems away from thermodynamic equilibrium. Martyushev has published reviews on the Maximum Entropy Production Principle (MEPP), saying: "A nonequilibrium system develops so as to maximize its entropy production under present constraints." [33]. "Sawada emphasized that the maximal entropy production state is most stable to perturbations among all possible (metastable) states." [34], which we will connect with dynamical action principles in the second part of this work.
The derivation of the MEPP from the LAP was first done by Dewar in 2003 [35,36], basing his work on Jaynes' theory from 1957 [37,38] and extending it to non-equilibrium systems.
The papers by Umberto Lucia "Entropy Generation: Minimum Inside and Maximum Outside" (2014) [39] and "The Second Law Today: Using Maximum-Minimum Entropy Generation" [40] examine the thermodynamic behavior of open systems in terms of entropy generation and the principle of least action. Lucia explores the concept that within open systems, entropy generation tends to a minimum inside the system and reaches a maximum outside it, which relates to our observations of dualities of the same characteristic.
François Gay-Balmaz and Hiroshi Yoshimura derive a form of dissipative Least Action Principle (LAP) for systems out of equilibrium. Specifically, they extend the classical variational approaches used in reversible mechanics to dissipative systems. Their work involves the use of Lagrangian and Hamiltonian mechanics in combination with thermodynamic forces and fluxes, and they introduce modifications to the standard variational calculus to account for irreversible processes [41,42,43].
Arto Annila derives the Maximum Entropy Production Principle (MEPP) from the Least Action Principle (LAP) and demonstrates how the principle of least action underlies natural selection processes, showing that systems evolve to consume free energy in the least amount of time, thereby maximizing entropy production. He links the LAP to the second law of thermodynamics and, consequently, to the MEPP [44]. Evolutionary processes in both living and non-living systems can be explained by the principle of least action, which inherently leads to maximum entropy production [45]. Both papers provide a detailed account of how the MEPP can be understood as an outcome of the Least Action Principle, grounding it in thermodynamic and physical principles.
The potential of the stochastic least action principle has been shown in [46] and a connection has been made to entropy.
The concept of least action has been generalized by applying it to both heat absorption and heat release processes [47]. This minimization of action corresponds to the maximum efficiency of the system, reinforcing the connection between the least action principle and thermodynamic efficiency. By applying the principle of least action to thermodynamic processes, the authors link this principle to the optimization of efficiency.
The increase in entropy production was related to the system’s drive towards a more ordered, synchronized state, and this process is consistent with the MEPP, which suggests that systems far from equilibrium will evolve in ways that maximize entropy production. Thus, a basis is provided for the increase in entropy using the LAP [48].
The least action principle has been used to derive maximum entropy change for nonequilibrium systems [49].
Variational methods have been emphasized in the context of non-equilibrium thermodynamics for fluid systems, especially in relation to the MEPP, emphasizing thermodynamic variational principles in nonlinear systems [50].
MEPP and the Least Action Principle (LAP) are connected through the Riemannian geometric framework, which provides a generalized least action bound applicable to probabilistic systems, including both equilibrium and non-equilibrium systems [51].
The Herglotz principle introduces dissipation directly into the variational framework by modifying the classical action functional with a dissipation term. This is significant because it provides a way to account for energy loss and the irreversible nature of processes in non-equilibrium systems. Herglotz’s principle provides a powerful tool for non-equilibrium thermodynamics by allowing for the incorporation of dissipative processes into a variational framework. This enables the modeling of systems far from equilibrium, where energy dissipation and entropy production play key roles. By extending classical mechanics to include irreversibility, the Herglotz principle offers a way to describe the evolution of systems in non-equilibrium thermodynamics, potentially linking it to other key concepts like the Onsager relations and the MEPP [52,53,54].
In Beretta’s fourth law of thermodynamics, the steepest entropy ascent could be seen as analogous to a least action path in the context of non-equilibrium thermodynamics, where the system follows the most "efficient" path toward equilibrium by maximizing entropy production. Both principles are forms of optimization, where one minimizes physical action and the other maximizes entropy, providing deep structural insights into the behavior of systems across physics [55].
In most cases in classical mechanics, Hamilton’s stationary action is minimized; in some cases it is a saddle point; it is never maximized. The minimization of average unit action is proposed as a driving principle and the arrow of evolutionary time, and the saddle points are temporary minima that transition to lower action states with evolution. Thus, globally, on long time scales, average action is minimized and continuously decreasing when there are no external limitations. This turns it into a dynamic action principle for open-ended processes of self-organization, evolution, and development.
Our thesis is that we can complement other measures of organization and self-organization by applying a new, absolute, and universal measure based on Hamilton’s principle and its extension to dissipative and stochastic systems. This measure can be related to previously used measures, such as entropy and information, as in our model for the mechanism of self-organization, progressive development, and evolution. We demonstrate this with power-law relationships in the results.
This paper presents a derivation of a quantitative measure of action efficiency and a model in which all characteristics of a complex system reinforce each other, leading to exponential growth and power law relations between each pair of characteristics. The principle of least action is proposed as the driver of self-organization, as agents of the system follow natural laws in their motion, resulting in the most action-efficient paths. This explains why complex systems form structures and order, and continue self-organizing in their evolution and development.
Our measure of action efficiency assumes dynamical flow networks away from thermodynamic equilibrium that transport matter and energy along their flow channels and applies to such systems. The significance of our results is that they empower natural and social sciences to quantify organization and structure in an absolute, numerical, and unambiguous way. Providing a mechanism through which the least action principle and the derived measure of average action efficiency as the level of organization interact in a positive feedback loop with other characteristics of complex systems explains the existence of observed events in Cosmic Evolution. The tendency to minimize average unit action for one crossing between nodes in a complex flow network comes from the principle of least action and is proposed as the arrow of time, the main driving principle towards, and explanation of progressive development and evolution that leads to the enormous variety of systems and structures that we observe in nature and society.

1.4. Mechanism of Self-Organization

The research in this study demonstrates the driving principle and mechanism of self-organization and evolution in general open, complex, non-equilibrium thermodynamic systems, employing agent-based modeling. We propose that the state with the least average unit action is the attractor for all processes of self-organization and development in the universe across all systems. We measure this state through Average Action Efficiency (AAE).
We present a model for quantitatively calculating the amount of organization in a general complex system and its correlation with all other characteristics through power law relationships. We also show the cause of the progressive development and evolution, which is the positive feedback loop between all characteristics of the system that leads to an exponential growth of all of them until an external limit is reached. The internal organization of all complex systems in nature always reflects the external environment from which the flows of energy and matter come. This model also predicts power law relationships between all characteristics. Numerous measured complexity-size scaling relationships confirm the predictions of this model [56,57,58].
Our work addresses a gap in complex system science by providing an absolute and quantitative measure of organization, namely AAE, based on the movement of agents and their dynamics. This measure is functional and dynamic, not relative and static as in many other metrics. We show that the amount of organization is inversely proportional to the average physical amount of action in a system. We derive the expression for organization, apply it to a simple example, and validate it with results from agent-based modeling (ABM) simulations which allow us to verify experimental data, and to vary conditions to address specific questions[59,60]. We discuss extensions of the model for a large number of agents and state the limitations and applicability of this model in our list of assumptions.
Measuring the level of organization in a system is crucial because it provides a long-sought criterion for evaluating and studying the mechanisms of self-organization in natural and technological systems. All those are dynamic processes, which necessitate searching for a new, dynamic measure. By measuring the amount of organization, we can analyze and design complex systems to improve our lives, in ecology, engineering, economics, and other disciplines. The level of organization corresponds to the system’s robustness, which is vital for survival in case of accidents or events endangering any system’s existence [61]. Philosophically and mathematically, each characteristic of the system is a cause and effect of all the others, similar to auto-catalytic cycles, which is well-studied in cybernetics [62].

1.5. Negative Feedback

Negative feedback is evident in the fact that large deviations from the power law proportionality between the characteristics are not observed or predicted. This proportionality between all characteristics at any stage of the process of self-organization is the balanced state of functioning, usually known as a homeostatic, or dynamical equilibrium, state of the system. Complex systems function as wholes only at values of all characteristics close to this homeostatic state. If some external influence causes a large deviation of even one of the characteristics from its homeostatic value, the system's functioning is compromised [62].

1.6. Unit-Total Dualism

We find a unit-total dualism: unit quantities of the characteristics are minimized while total quantities are maximized with the system's growth. For example, the average unit action for one event, which is one edge crossing in networks, is derived from the average path length and path time, and it is minimized, as calculated by the average action efficiency α. At the same time, the total amount of action Q in the whole system increases as the system grows, which can be seen in the results from our simulation. This is an expression of the principles of decreasing average unit action and increasing total action. Similarly, unit entropy per trajectory decreases in self-organization, as the total entropy of the system increases with its growth, expansion, and increasing number of agents. Those can be termed the principles of decreasing unit entropy and of increasing total entropy. The information needed to describe one event in the system decreases with increased efficiency and shorter paths, while the total information in the system increases as it grows. They are also related by a power law relationship, which means that one can be correlated to the other, and for one of them to change, the other must also change proportionally.

1.7. Unit Total Dualism Examples

Analogous qualities are evidenced in data for real systems and appear in some cases so often that they have special names. For example, the Jevons paradox (Jevons effect) was published in 1866 by the English economist William S. Jevons [63]. In one example, as the fuel efficiency of cars increased, the total miles traveled also increased, increasing the total fuel expenditure. This is also named a "rebound effect" of increased energy efficiency [64]. The naming of this effect as a "paradox" shows that it is unexpected, not well studied, and sometimes considered undesirable. In our model, it is derived mathematically as a result of the positive feedback loops between the characteristics of complex systems, which is the mechanism of their self-organization, and it is supported by the simulation results. It is not only unavoidable, but also necessary for the functioning, self-organization, evolution, and development of those systems.
In economics, it is evident that with increased efficiency the costs decrease, which increases the demand; this is named the "law of demand" [65]. This is another example of a size-complexity rule: as the efficiency, which in our work is a measure of complexity, increases, the demand increases, which means that the size of the system also increases. In the 1980s the Jevons paradox was expanded to the Khazzoom–Brookes postulate, formulated by Harry Saunders in 1992 [66], which says that it is supported by "growth theory", the prevailing economic theory for long-run economic growth and technological progress. Similar relations have been observed in other areas, such as in the Downs–Thomson paradox [67], where increasing road efficiency increases the number of cars driving on the road. These are just a few examples showing that this unit-total dualism has been observed for a long time in many complex systems and was thought to be paradoxical.

1.8. Action Principles in this Simulation, Potential Well

In each run of this specific simulation, the average unit action has the same stationary point, which is a true minimum of the average unit action, and the shortest path between the fixed nodes is a straight line. This is the theoretical minimum and remains the same across simulations. The closest analogy is with a particle in free fall, which minimizes action and falls in a straight line, a geodesic. The difference in the simulation is that the ants have a wiggle angle and, at each step, deposit pheromone that evaporates and diffuses; therefore, unlike gravity, the effective attractive potential is not uniform. Because of this, the potential landscape changes dynamically. The shape of the walls of the potential well changes slightly with fluctuations around the average at each step. It also changes when the number of ants is varied between runs.
The potential well is steeper higher up its walls, and the system cannot be trapped there in local minima of the fluctuations. This is seen in the simulation: initially, the agents form longer paths that disintegrate into shorter ones. In this region away from the minimum, the action is truly always minimized, with some stochastic fluctuations. Near the bottom of the well, the slope of its wall is smaller, and local minima of the fluctuations cannot be overcome easily by the agents. Then the system temporarily gets trapped in one of those local minima, and the average unit action is a dynamical saddle point.
The simulation shows that with fewer ants, the system is more likely to get trapped in a local minimum, resulting in a path with greater curvature and higher final average action (lower average action efficiency) compared to the theoretical minimum. With an increasing number of ants, they can explore more neighboring states, find lower local minima, and find lower average action states. Therefore, increasing the number of ants allows the system to explore more effectively neighboring paths and find shorter ones. This is evident as the average action efficiency improves when there are more ants, which can escape higher local minima and find lower action values (see Figure 7). As the number of ants (agents, system size) increases, they asymptotically find lower local minima or lower average action, improving average action efficiency, though never reaching the theoretical minimum.
In future simulations, if the distance between nodes is allowed to shrink and external obstacles are reduced, the shape of the entire potential well changes dynamically. Its minimum becomes lower, the steepness of its walls increases and the system more easily escapes local minima. However, it still does not reach the theoretical minimum, due to its fluctuations near the minimum of the well. The average action decreases, and average action efficiency increases with the lowering of this minimum, demonstrating continuous open-ended self-organization and development. This illustrates the dynamical action principle.

1.9. Research Questions and Hypotheses

This study aims to answer the following research questions:
  • How can a dynamical variational action principle explain the continuous self-organization, evolution, and development of complex systems?
  • Can Average Action Efficiency (AAE) be a measure for the level of organization of complex systems?
  • Can the proposed positive feedback model accurately predict the self-organization processes in systems?
  • What are the relationships between various system characteristics, such as AAE, total action, order parameter, entropy, flow rate, and others?
Our hypotheses are:
  • A dynamical variational action principle can explain the continuous self-organization, evolution and development of complex systems.
  • AAE is a valid and reliable measure of organization that can be applied to complex systems.
  • The model can accurately predict the most organized state based on AAE.
  • The model can predict the power-law relationships between system characteristics that can be quantified.

1.10. Summary of the Specific Objectives of the Paper

1. Define and Apply the Dynamical Action Principle: Define and apply the dynamical action principle, which extends the classical stationary action principle to dynamic, self-organizing systems, in open-ended evolution, showing that unit action decreases while total action increases during self-organization.
2. Demonstrate the Predictive Power of the Model: Build and test a model that quantitatively and numerically measures the amount of organization in a system, and predicts the most organized state as the one with the least average unit action and highest average action efficiency. Define the cases in which action is minimized, and based on that predict the most organized state of the system. The theoretical most organized state is where the edges in a network are geodesics. Due to the stochastic nature of complex systems, those states are approached asymptotically, but in their vicinity, the action is stationary due to local minima.
3. Validate a New Measure of Organization: Based on 1 and 2, develop and apply the concept of average action efficiency, rooted in the principle of least action, as a quantitative measure of organization in complex systems.
4. Explain Mechanisms of Progressive Development and Evolution: Apply a model of positive feedback between system characteristics to predict exponential growth and power-law relationships, providing a mechanism for continuous self-organization. Test it by fitting its solutions to the simulation data, and compare them to real-world data from the literature.
5. Simulate Self-Organization Using Agent-Based Modeling: Use agent-based modeling (ABM) to simulate the behavior of an ant colony navigating between a food source and its nest to explore how self-organization emerges in a complex system.
6. Define unit-total (local-global) dualism: Investigate and define the concept of unit-total dualism, where unit quantities are minimized while total quantities are maximized as the system grows, and explain its implications as variational principles for complex systems.
7. Contribute to the Fundamental and Philosophical Understanding of Self-Organization and Causality: Enhance the theoretical understanding of self-organization in complex systems, offering a robust framework for future research and practical applications.
This research aims to provide a robust framework for understanding and quantifying self-organization in complex systems based on a dynamical principle of decreasing unit action for one edge in a complex system represented as a network. By introducing Average Action Efficiency (AAE) and developing a predictive model based on the principle of least action, we address critical gaps in existing theories and offer new insights into the dynamics of complex systems. The following sections will delve deeper into the theoretical foundations, model development, methodologies, results, and implications of our study.

2. Building the Model:

2.1. Hamilton’s Principle of Stationary Action for a System

In this work, we utilize Hamilton’s Principle of Stationary Action, a variational method, to study self-organization in complex systems. Stationary action is found when the first variation is zero. When the second variation is positive, the action is a minimum. Only in this case do we have a true least-action principle. We will discuss in which situations this is the case. Hamilton’s Principle of Stationary Action asserts that the evolution of a system between two states occurs along the path that makes the action functional stationary. By identifying and extremizing this functional, we can gain a deeper understanding of the dynamics and driving forces behind self-organization and describe it from first principles.
This is a first-order approximation and a simplified model, given as an example; the Lagrangian for the agent-based simulation is described in the following sections.
The classical Hamilton’s principle is:
\delta I(q, p, t) = \delta \int_{t_1}^{t_2} L(q(t), \dot{q}(t), t) \, dt = 0
where $\delta$ is an infinitesimally small variation in the action integral I, L is the Lagrangian, $q(t)$ are the generalized coordinates, $\dot{q}(t)$ are the time derivatives of the generalized coordinates, p is the momentum, and t is the time. $t_1$ and $t_2$ are the initial and final times of the motion.
For brevity, further in the text, we will use when appropriate $L = L(q(t), \dot{q}(t), t)$ and $I = I(q, p, t)$.
This is the principle from which all physics and all observed equations of motion are derived. The above equation is for one object. For a complex system, there are many interacting agents. That means that we can propose that the sum of all actions of all agents is taken into account. This sum is minimized in its most action-efficient state, which we define as being the most organized. In previous papers [8,10,11,12] we have stated that for an organized system we can find the natural state of that system as the one in which the variation of the sum of actions of all of the agents is zero:
\delta \sum_{i=1}^{n} I_i = \delta \sum_{i=1}^{n} \int_{t_1}^{t_2} L_i \, dt = 0
where $I_i$ is the action of the i-th agent, $L_i$ is the Lagrangian of the i-th agent, n represents the number of agents in the system, and $t_1$ and $t_2$ are the initial and final times of the motions.
A network representation of a complex system. When we represent the system as a network, we can define one edge crossing as a unit of motion, or one event in the system, for which the unit average action efficiency is defined. In this case, the sum of the actions of all agents for all of the crossings of edges per agent per unit time, which is the total number of crossings (the flow of events, $\phi$), is the total amount of action in the network, Q. In the most organized state of the system, the variation of the total action Q is zero, which means that it is extremized as well; for the complex system in our example, this extremum is a maximum.

2.2. An Example of True Action Minimization: Conditions

This is just an example to understand the conceptual idea of the model. Later we will specify it for our simulation with the actual interactions between the agents.
  • The agents are free particles, not subject to any forces, so the potential energy is a constant and can be set to zero because the origin for the potential energy can be chosen arbitrarily, therefore V = 0. Then, the Lagrangian L of the element is equal only to the kinetic energy $T = \frac{m v^2}{2}$ of that element:
    L = T - V = T = \frac{m v^2}{2}
    where m is the mass of the element, and v is its speed.
  • We are assuming that there is no energy dissipation in this system, so the Lagrangian of the element is a constant:
    L = T = \frac{m v^2}{2} = \text{constant}
  • The mass m and the speed v of the element are assumed to be constants.
  • The start point and the end point of the trajectory of the element are fixed at opposite sides of a square (see Figure A1). This produces the consequence that the action integral cannot become zero, because the endpoints cannot get infinitely close together:
    I = \int_{t_1}^{t_2} L \, dt = \int_{t_1}^{t_2} (T - V) \, dt = \int_{t_1}^{t_2} T \, dt \neq 0
  • The action integral cannot become infinity, i.e., the trajectory cannot become infinitely long:
    I = \int_{t_1}^{t_2} L \, dt = \int_{t_1}^{t_2} (T - V) \, dt = \int_{t_1}^{t_2} T \, dt \neq \infty
  • In each configuration of the system, the actual trajectory of the element is determined as the one with the Least Action from Hamilton’s Principle:
    \delta I = \delta \int_{t_1}^{t_2} L \, dt = \delta \int_{t_1}^{t_2} (T - V) \, dt = \delta \int_{t_1}^{t_2} T \, dt = 0
  • The medium inside the system is isotropic (it has all its properties identical in all directions). The consequence of this assumption is that the constant velocity of the element allows us to substitute the interval of time with the length of the trajectory of the element.
  • The second variation of the action is positive, because V = 0 , and T > 0 , therefore the action is a true minimum.
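Taken together, these assumptions make the action of one trajectory proportional to its length (a consequence written out here; it anticipates the derivation in the following sections):
I = \int_{t_1}^{t_2} T \, dt = T \, \Delta t = \frac{m v^2}{2} \, \frac{l}{v} = \frac{m v}{2} \, l
For fixed m and v, minimizing the action I is therefore equivalent to minimizing the path length l, so the least-action trajectory in this example is the shortest path between the two fixed points.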

2.3. Building the Model

In our model, the organization is proportional to the inverse of the average of the sum of actions of all elements (8). This is the average action efficiency and we can label it with a symbol α . Here average action efficiency measures the amount of organization of the system. In a complex network, many different arrangements can correspond to the same action efficiency and therefore have the same level of organization. Thus, the average action efficiency represents the macrostate of the system. Many possible microstates of combinations of nodes, paths, and agents on the network can correspond to the same macrostate as measured by α . This is analogous to temperature in statistical mechanics representing a macrostate corresponding to many microstates of the molecular arrangements in the gas.
To make the action efficiency dimensionless we multiply the numerator by Planck’s constant h. Now it takes the meaning that the average action efficiency is inversely proportional to the average number of action quanta for one crossing between two nodes in the system. This also provides an absolute reference point h for the measure of organization.
In general,
\alpha = \frac{h \, n m}{\sum_{i,j=1}^{n,m} I_{i,j}}
where n is the number of agents, and m is the average number of nodes each agent crosses per unit time. If we multiply the number of agents by the number of crossings for each agent, we can define it as the flow of events in the system per unit of time, ϕ = n m
In general:
\alpha = \frac{h \phi}{\sum_{i,j=1}^{n,m} I_{i,j}}
In the denominator, the sum of all actions of all agents and all crossings is defined as the total action per unit time in the system, Q.
Q = \sum_{i,j=1}^{n,m} I_{i,j}
In our simulation, the average path length is equal to the average time because the speed of the agents in the simulation is set to one patch per second.
t = l
When the Lagrangian does not depend on time, because the speed is constant and there is no friction, as in this simulation, the kinetic energy is a constant (assumption #2), so the action integral takes the form:
I = \int_{t_1}^{t_2} L \, dt = \int_{t_1}^{t_2} T \, dt = T (t_2 - t_1) = T \Delta t = L \Delta t
Where Δ t is the interval of time that the motion of the agent takes.
This is for an individual trajectory. Summing over all trajectories, we get the total number of events, the flow, times the average time of one crossing for all agents. The sum of all times for all events is the number of events times the average time. Then for identical agents, the denominator becomes:
Q = \sum_{i,j=1}^{n,m} I_{i,j} = n m L t = \phi L t
Therefore:
\alpha = \frac{h \phi}{\phi L t}
and:
\alpha = \frac{h}{L t}
The Lagrangian is just the kinetic energy, because the potential energy in this simulation is zero; in the simulation, we are free to set the mass to two, and the velocity is one patch per second, so the kinetic energy equals one. This equation is used for the calculations in this paper.
Since Planck’s constant is a fundamental unit of action, even though action can vary continuously, this equation represents how far the organization of the system is from the highly action-efficient state in which there is only one Planck unit of action per event. The action itself can be even smaller than h [68]. This provides a path to further continuous improvement in the levels of organization of systems below one quantum of action.
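As a minimal numerical sketch of how these quantities can be computed from recorded crossings (the variable names and numbers below are hypothetical illustrations of the formulas above, not the actual simulation data):

# Toy sketch: average action efficiency (AAE) from recorded crossings.
# Hypothetical inputs: each crossing has a path length l and duration t;
# the Lagrangian is the constant kinetic energy T = m*v^2/2, and h is set
# to 1 in simulation units (just as C can be set to 1 in the text).
h = 1.0                  # unit of action in simulation units
m, v = 2.0, 1.0          # mass and speed as set in the simulation, so T = 1
T = 0.5 * m * v ** 2

crossings = [(14.3, 14.3), (12.1, 12.1), (11.0, 11.0)]  # (l, t); l = t since v = 1

actions = [T * t for (_l, t) in crossings]  # I = L * dt for a constant Lagrangian
Q = sum(actions)                            # total action per unit time, Q
phi = len(crossings)                        # flow of events (number of crossings)
alpha = h * phi / Q                         # AAE: alpha = h*phi/Q = h/(L*<t>)
print(f"Q = {Q:.2f}, phi = {phi}, alpha = {alpha:.4f}")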
An example for one agent:
To illustrate the simplest possible case, for clarity, we apply this model to the example of a closed system in two dimensions with only one agent. We define the boundaries of the fixed system to form a square.
The endpoints here represent two nodes in a complex network. Thus the model is limited only to the path between the two nodes. The expansion of this model will be to include many nodes in the network and to average over all of them. Another extension is to include many elements, different kinds of elements, obstacles, friction, etc.
Figure 1 shows the boundary conditions for the system used in this example. In this figure, we present the boundaries of the system and define the initial and final points of the motion of an agent as two of the nodes in a complex network.
Figure 1. Comparison between the geodesic and a longer path between two nodes in a network.
Comparison between two different states of organization of the system. This is a schematic representation of the two states of the system, and the shortest path of the agent in each case. Here l 1 and l 2 are the lengths of the trajectory of the agent in each case. (a) a trajectory (geodesic) of an agent in a certain state of the system, given by the configuration of the internal constraints, l 1 . (b) a different configuration allowing the geodesic trajectory of the element to decrease by 50%, l 2 - the shortest possible path.
For this case, we set n = 1, m = 1, which is one crossing of one agent between two nodes in the network. The approximation of an isotropic medium (assumption #8) allows us to express the time using the speed of the element when it is constant (assumption #3). In this case, we can solve $v = l / \Delta t$, the definition of average velocity, for the interval of time: $\Delta t = l / v$, where l is the length of the trajectory of the element in each case between the endpoints.
The speed of the element v is fixed to be another constant, so the action integral takes the form:
I = L \, \Delta t = \frac{L \, l}{v}
When we substitute eq. 16 in the expression for organization, eq. X, we obtain:
\alpha = \frac{h}{I} = \frac{h v}{L l}
For the simulation in this paper, l is the distance that the ants travel between food and nest. Because h, v, and L are all constants, we can simplify this as we set
C = \frac{h v}{L}
And rewrite:
\alpha = \frac{h v}{L l} = \frac{C}{l}
We can set this constant to C = 1 when necessary.

2.4. Analysis of System States

Now we turn to the two states of the system with different actions of the elements, as shown in Figure A1. The organization of those two states is respectively:
\alpha_1 = \frac{C}{l_1} \ \text{in state 1}, \qquad \alpha_2 = \frac{C}{l_2} \ \text{in state 2 of the system}.
In Figure A1, the length of the trajectory in the second case (b) is smaller, $l_2 < l_1$, which indicates that state 2 has better organization. The difference between the organizations in the two states of the same system is generally expressed as:
\alpha_2 - \alpha_1 = \frac{C}{l_2} - \frac{C}{l_1} = C \left( \frac{1}{l_2} - \frac{1}{l_1} \right) = C \, \frac{l_1 - l_2}{l_1 l_2}
This can be rewritten as:
\Delta \alpha = \frac{C \, \Delta l}{\prod_{i=1}^{2} l_i}
where $\Delta \alpha = \alpha_2 - \alpha_1$, $\Delta l = l_1 - l_2$, and $\prod_{i=1}^{2} l_i = l_1 l_2$.
This is for one agent in the system. To describe a multi-agent system, we use the average path length.

2.5. Average Action Efficiency (AAE)

In the previous example, we can say that the shorter trajectory represents a more action-efficient state, in terms of how much total action is necessary for the event in the system, which here is for the agent to cross between the nodes. If we expand to many agents between the same two nodes, all with slightly different trajectories, we can define that the average of the action necessary for each agent to cross between the nodes is the average action efficiency. Average action efficiency is how efficiently a system utilizes energy and time to perform the events in the system. More organized systems are more action-efficient because they can perform the events in the system with fewer resources, in this example, energy and time.
We can start from the presumption that the average action efficiency in the most organized state is always greater than or equal to its value in any other configuration, arrangement, or structure of the system. By varying the configurations of the structure until the average action efficiency is maximized, we can identify the most organized state of the system. This state corresponds to the minimum average action per event in the system, adhering to the principle of least action. We refer to this as the ground or most stable state of the system, as it requires the least amount of action per event. All other states are less stable because they require more energy and time to perform the same functions.
If we define average action efficiency as the ratio of useful output, here it is the crossing between the nodes, and, in other systems, it can be any other measure, to the total input or the energy and time expended, a system that achieves higher action efficiency is more organized. This is because it indicates a more coordinated, effective interaction among the system’s components, minimizing wasted energy or resources for its functions.
During the process of self-organization, a system transitions from a less organized to a more organized state. If we monitor the action efficiency over time, an increase in efficiency could indicate that the system is becoming more organized, as its components interact in a more coordinated way and with fewer wasted resources. This way we can measure the level of organization and the rate of increase of action efficiency which is the level and the rate of self-organization, evolution, and development in a complex system.
To use action efficiency as a quantitative measure, we need to define and calculate it precisely for the system in question. For example, in a biological system, efficiency might be measured in terms of energy conversion efficiency in cells. In an economic system, it can be the ratio of production of an item to the total time, energy, and other resources expended. In a social system, it could be the ratio of successful outcomes to the total efforts or resources expended.
The predictive power of the Principle of Least Action for Self-Organization:
For the simplest example here of only two nodes, calculating theoretically the least-action state as the straight line between the nodes, we arrive at the same state as the final organized state in the simulation in this paper. This is the same result obtained from minimizing action and from any experimental result: the geodesic of the natural motion of objects. When there are obstacles to the motion of agents, the geodesic is a curve described by the metric tensor. To achieve this prediction for multi-agent systems, we minimize the average action between the endpoints. Therefore, the most organized state in the current simulation is predicted theoretically from the principle of least action. The Principle of Least Action thus provides predictive power for calculating the most organized state of a system and verifying it with simulations or experiments. In engineered or social systems, it can be used to predict the most organized state and then construct it.

2.6. Multi-agent

Now we turn to the two states of the system with different average actions of the elements, shown in Figure A1. The organization of those two states is, respectively:
\alpha_1 = \frac{C}{l_1} \ \text{in state 1}, \qquad \alpha_2 = \frac{C}{l_2} \ \text{in state 2 of the system}.
The average length of the trajectories in the second case is smaller, $l_2 < l_1$, which indicates that state 2 has better organization. The difference between the organizations in the two states of the same system is generally expressed as:
\alpha_2 - \alpha_1 = \frac{C}{l_2} - \frac{C}{l_1} = C \left( \frac{1}{l_2} - \frac{1}{l_1} \right) = C \, \frac{l_1 - l_2}{l_1 l_2}
This can be rewritten as:
\Delta \alpha = \frac{C \, \Delta l}{\prod_{i=1}^{2} l_i}
where $\Delta \alpha = \alpha_2 - \alpha_1$, $\Delta l = l_1 - l_2$, and $\prod_{i=1}^{2} l_i = l_1 l_2$.
This is the case when we use the average lengths of the trajectories and the velocity is constant, so that time and length are numerically the same. In general, when the velocity varies, we need to use time.

2.7. Using time

In this case, the two states of the system are with different average actions of the elements. The organization of those two states is respectively:
\alpha_1 = \frac{C}{t_1} \ \text{in state 1}, \qquad \alpha_2 = \frac{C}{t_2} \ \text{in state 2 of the system}.
In Figure A1, the length of the trajectory in the second case (b) is smaller and the average time for the trajectories is $t_2 < t_1$, which indicates that state 2 has better organization. The difference between the organizations in the two states of the same system is generally expressed as:
\alpha_2 - \alpha_1 = \frac{C}{t_2} - \frac{C}{t_1} = C \left( \frac{1}{t_2} - \frac{1}{t_1} \right) = C \, \frac{t_1 - t_2}{t_1 t_2}
This can be rewritten as:
\Delta \alpha = \frac{C \, \Delta t}{\prod_{i=1}^{2} t_i}
where $\Delta \alpha = \alpha_2 - \alpha_1$, $\Delta t = t_1 - t_2$, and $\prod_{i=1}^{2} t_i = t_1 t_2$.
Which, recovering C, is:
\Delta \alpha = \frac{h v}{L} \, \frac{\Delta t}{\prod_{i=1}^{2} t_i}

2.8. An Example

For the simplest example of one agent and one crossing between two nodes, if $l_1 = 2 l_2$, i.e., the first trajectory is twice as long as the second, this expression produces the result:
\alpha_1 = \frac{C}{2 l_2} = \frac{\alpha_2}{2}, \quad \text{or} \quad \alpha_2 = 2 \alpha_1 ,
indicating that state 2 is twice as well organized as state 1. Alternatively, substituting in eq. 29 (taking $l_2 = 1$ for simplicity), we have:
\alpha_2 - \alpha_1 = C \, \frac{2 - 1}{2} = \frac{C}{2} ,
or there is a 50% difference between the two organizations, which is the same as saying that the second state is quantitatively twice as well organized as the first one. This example illustrates the purpose of the model for direct comparison between the amounts of organization in two different states of a system. When the changes in the average action efficiency are followed in time, we can measure the rates of self-organization.
In our simulations, the higher the density and the lower the entropy of the agents, the shorter the paths and the times for crossing them, and the higher the action efficiency.

2.9. Unit-total (local-global) dualism

In addition to the classical stationary action principle for fixed, non-growing, non-self-organizing systems:
δ I = 0
we find a dynamical action principle:
\delta I \neq 0
This principle exhibits a unit-total (local-global, min-max) dualism:
1. Average unit action for one edge decreases:
\delta \left( \frac{\sum_{i,j=1}^{n,m} I_{i,j}}{n m} \right) < 0
This is a principle for decreasing unit action for a complex system during self-organization, as it becomes more action-efficient until a limit is reached.
2. Total action of the system increases:
\delta \left( \sum_{i,j=1}^{n,m} I_{i,j} \right) > 0
This is a principle for increasing total action for a complex system during self-organization, as the system grows until a limit is reached.
In our data, we see that the average unit action, in terms of action efficiency, decreases while the total action increases (Figure 8). Both are related by a strict power law relationship, predicted by the model of positive feedback between the characteristics of the system.
Analogously, the unit internal Boltzmann entropy for one path decreases while the total internal Boltzmann entropy increases for a complex system during self-organization and growth (Figure 9). These two characteristics are also related by a strict power law relationship, predicted by the model of positive feedback between the characteristics of the system.
For Gauss's principle of least constraint [69], this translates as follows: the unit constraint (obstacles) for one edge decreases, while the total constraint in the network of the whole complex system increases during self-organization as the system grows and expands.
For Hertz's principle of least curvature [27], this translates as follows: the unit curvature for one edge decreases, while the total curvature in the network of the whole complex system increases during self-organization as the system grows, expands, and adds more nodes.
Some examples of unit-total (local-global) dualism in other systems are the following. In economies of scale, as the size of the system grows, the total production cost increases while the unit cost per item decreases. In the same example, total profits increase, but the unit profit per item decreases. Also, as the cost per computation decreases, the cost of all computations grows. As the cost per bit of data transmission decreases, the cost of all transmissions increases as the system grows. In biology, as the unit time for one reaction in a metabolic autocatalytic cycle decreases in evolution, due to increased enzymatic activity, the total number of reactions in the cycle increases. In ecology, as one species becomes more efficient in finding food, its time and energy expenditure for foraging a unit of food decreases, the numbers of that species increase, and the total amount of food that they collect increases. We can keep naming other unit-total (local-global) dualisms in systems of a very different nature to test the universality of this principle.
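A toy numerical illustration of this dualism follows (the numbers are hypothetical and only mimic the qualitative trend reported in the Results, not the simulation data): as the number of agents grows and the average crossing time shortens, the unit action per crossing decreases while the total action increases.

# Toy illustration of unit-total dualism with hypothetical numbers.
T = 1.0  # constant Lagrangian (kinetic energy), as in the simulation

# Hypothetical (number of agents, average crossing time) pairs during growth
growth = [(50, 20.0), (100, 17.0), (200, 15.0), (400, 13.5)]

for n, t_avg in growth:
    unit_action = T * t_avg            # average unit action per crossing decreases
    phi = n                            # toy choice: one crossing per agent per unit time
    total_action = phi * unit_action   # Q = phi * L * <t> increases with system size
    print(f"n={n:4d}  unit action={unit_action:5.1f}  total action Q={total_action:7.1f}")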

3. Simulations Model

In our simulation, the ants are interacting through pheromones. We can formulate an effective Lagrangian to describe their dynamics. The Lagrangian L depends on the kinetic energy T and the potential energy V. We can start building it slowly by adding necessary terms to the Lagrangian. Given that ants are influenced by pheromone concentrations, the potential energy component should reflect this interaction.
Components of the Lagrangian: 1. Kinetic Energy (T): In our simulation, the ants have a constant mass m, and their kinetic energy is given by:
T = \frac{1}{2} m v^2
where v is the velocity of the ant.
2. Effective Potential Energy (V): The potential energy due to pheromone concentration C ( r , t ) at position r and time t can be modeled as:
V_{\text{eff}} = -k \, C(\mathbf{r}, t)
where k is a constant that scales the influence of the pheromone concentration.
Effective Lagrangian (L): The Lagrangian L is given by the difference between the kinetic and potential energies:
L = T - V
For an ant moving in a pheromone field, the effective Lagrangian becomes:
L = \frac{1}{2} m v^2 + k \, C(\mathbf{r}, t)
Formulating the Equations of Motion:
Using the Lagrangian, we can derive the equations of motion via the Euler-Lagrange equation:
\frac{d}{dt} \frac{\partial L}{\partial \dot{x}_i} - \frac{\partial L}{\partial x_i} = 0
where $x_i$ represents the spatial coordinates (e.g., x, y) and $\dot{x}_i$ represents the corresponding velocities.
Example Calculation for a Single Coordinate:
1. Kinetic Energy Term:
\frac{\partial L}{\partial \dot{x}} = m \dot{x}
\frac{d}{dt} \frac{\partial L}{\partial \dot{x}} = m \ddot{x}
2. Potential Energy Term:
\frac{\partial L}{\partial x} = k \, \frac{\partial C}{\partial x}
The equation of motion for the x-coordinate is then:
m \ddot{x} = k \, \frac{\partial C}{\partial x}
Full Equations of Motion:
For both x and y coordinates, the equations of motion are:
m \ddot{x} = k \, \frac{\partial C}{\partial x}
m \ddot{y} = k \, \frac{\partial C}{\partial y}
The ants move by following the gradient of the pheromone concentration.
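As a minimal numerical sketch of these equations of motion (using a smooth, hypothetical pheromone field C(x, y); this integrates only the derived equations and is not the agent-based simulation itself):

import numpy as np

# Integrate m * r'' = k * grad(C) with a semi-implicit Euler scheme on a
# hypothetical smooth pheromone field. Illustrates gradient-following only.
m, k, dt = 2.0, 1.0, 0.01

def C(x, y):
    # Hypothetical pheromone concentration: a single smooth peak at (5, 5)
    return np.exp(-((x - 5.0) ** 2 + (y - 5.0) ** 2) / 4.0)

def grad_C(x, y, eps=1e-5):
    # Central finite differences for dC/dx and dC/dy
    dCdx = (C(x + eps, y) - C(x - eps, y)) / (2 * eps)
    dCdy = (C(x, y + eps) - C(x, y - eps)) / (2 * eps)
    return np.array([dCdx, dCdy])

r = np.array([1.0, 2.0])   # initial position
v = np.array([0.0, 0.0])   # initial velocity

for _ in range(10000):
    a = (k / m) * grad_C(*r)   # acceleration from m * r'' = k * grad(C)
    v += a * dt                # semi-implicit Euler keeps the energy bounded
    r += v * dt

# With no friction the agent accelerates toward, then oscillates about, the peak
print("final position:", r)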
Testing for Stationary Points of Action:
  • Minimum: If the second variation of the action is positive, the path corresponds to a minimum of the action.
  • Saddle Point: If the second variation of the action can be both positive and negative depending on the direction of the variation, the path corresponds to a saddle point.
  • Maximum: If the second variation of the action is negative, the path corresponds to a maximum of the action.
Determining the Nature of the Stationary Point:
To determine whether the action is a minimum, maximum, or saddle point, we examine the second variation of the action, δ 2 I . This involves considering the second derivative (or functional derivative in the case of continuous systems) of the action with respect to variations in the path.
Given the Lagrangian for ants interacting through pheromones, the action is:
I = \int_{t_1}^{t_2} \left[ \frac{1}{2} m \dot{\mathbf{r}}^2 + k \, C(\mathbf{r}, t) \right] dt
First Variation:
The first variation δ I leads to the Euler-Lagrange equations, which give the equations of motion:
m \ddot{\mathbf{r}} = k \, \nabla C(\mathbf{r}, t)
Second Variation:
The second variation δ 2 I determines the nature of the stationary point. In general, for a Lagrangian L = T V :
\delta^2 I = \int_{t_1}^{t_2} \left( \delta^2 T - \delta^2 V \right) dt
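Written out explicitly for this effective Lagrangian (a step added here for clarity, assuming C is twice differentiable so that its Hessian $\nabla \nabla C$ exists), the second variation about a solution $\mathbf{r}(t)$ is, up to an overall positive factor,
\delta^2 I = \int_{t_1}^{t_2} \left[ \, m \, (\delta \dot{\mathbf{r}})^2 + k \, \delta \mathbf{r}^{\mathsf{T}} \big( \nabla \nabla C(\mathbf{r}, t) \big) \, \delta \mathbf{r} \, \right] dt
so the kinetic term is always non-negative, while the sign of the pheromone term follows the local curvature of C; this is the basis of the saddle-point argument below.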
Analyzing the Effective Lagrangian:
1. Kinetic Energy Term $T = \frac{1}{2} m \dot{\mathbf{r}}^2$: The second variation of the kinetic energy is typically positive, as it involves terms like $m (\delta \dot{\mathbf{r}})^2$.
2. Potential Energy Term $V_{\text{eff}} = -k \, C(\mathbf{r}, t)$: The second variation of the effective potential energy depends on the nature of $C(\mathbf{r}, t)$. If C is a smooth, well-behaved function, the second variation can be analyzed by examining $\nabla^2 C$.
Nature of the Stationary Point:
  • Kinetic Energy Contribution: Positive definite, contributing to a positive second variation.
  • Effective Potential Energy Contribution: Depends on the curvature of C ( r , t ) . If C ( r , t ) has regions where its second derivative is positive, the effective potential energy contributes positively, and vice versa.
Therefore, given the typical form of the Lagrangian and assuming C ( r , t ) is well-behaved (smooth and not overly irregular), the action I is most likely a saddle point. This is because:
  • The kinetic energy term tends to make the action a minimum.
  • The potential energy term, depending on the pheromone concentration field, can contribute both positively and negatively.
Thus, variations in the path can lead to directions where the action decreases (due to the kinetic energy term) and directions where it increases (due to the potential energy term), characteristic of a saddle point.
Incorporating factors such as the wiggle angle of ants and the evaporation of pheromones introduces additional dynamics to the system, which can affect whether the action remains stationary, a saddle point, a minimum, or a maximum. Here’s how these changes influence the nature of the action:

3.0.1. Effects of Wiggle Angle and Pheromone Evaporation on the Action

1. Wiggle Angle: Impact: The wiggle angle introduces stochastic variability into the ants’ paths. This randomness can lead to fluctuations in the paths that ants take, affecting the stability and stationarity of the action. Mathematical Consideration: The additional term representing the wiggle angle’s variance in the Lagrangian adds a stochastic component, P ( θ , t ) :
L = \frac{1}{2} m v^2 + k \, C(\mathbf{r}, t) + P(\theta, t)
where $P(\theta, t) = \sigma^2(\theta) \cdot \eta(t)$, with $\sigma^2(\theta)$ the variance in the wiggle angle $\theta$ and $\eta(t)$ a random function of time that introduces variability into the system.
This term will then influence the dynamics by adding random fluctuations at each time step, making the effect of noise vary over time rather than being a constant shift.
Consequence: The action is less likely to be strictly stationary due to the inherent variability introduced by the wiggle angle. This can lead to more dynamic behavior in the system.
2. Pheromone Evaporation: Impact: Pheromone evaporation reduces the concentration of pheromones over time, making previously attractive paths less so as time progresses. Mathematical Consideration: Including the evaporation term in the Lagrangian:
L = \frac{1}{2} m v^2 + k\, C(r, t)\, e^{-\lambda t}
Consequence: The time-dependent decay of pheromones means that the action integral changes dynamically. Paths that were optimal at one point may no longer be optimal later, leading to continuous adaptation.
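To make these two modifications concrete, the sketch below (an illustrative Python fragment, not the NetLogo program) evaluates the effective Lagrangian of a single ant tick by tick, assuming a uniform wiggle angle on [-25, 25] degrees (so σ²(θ) = 50²/12), a Gaussian sample for η(t), and placeholder values for m, k, λ and the local concentration.

```python
import numpy as np

rng = np.random.default_rng(0)
SIGMA2 = 50.0 ** 2 / 12.0     # variance of a uniform wiggle angle on [-25, 25] degrees

def effective_lagrangian(v, c_local, t, m=1.0, k=1.0, lam=0.06):
    """L = 1/2 m v^2 + k C(r,t) e^(-lambda t) + P(theta, t), evaluated for one tick."""
    eta = rng.standard_normal()          # random function of time, sampled once per tick
    P = SIGMA2 * eta                     # stochastic term P(theta, t) = sigma^2(theta) * eta(t)
    return 0.5 * m * v ** 2 + k * c_local * np.exp(-lam * t) + P

# The discrete action of one trajectory is the sum of L over its ticks (placeholder concentration value).
action = sum(effective_lagrangian(v=1.0, c_local=5.0, t=tick) for tick in range(100))
print(action)
```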

3.1. Considering the Nature of the Action

Given these modifications, the nature of the action can be characterized as follows:
1. Stationary Action:
  • Before Changes: In a simpler model without wiggle angles and evaporation, the action might be stationary at certain paths.
  • After Changes: With wiggle angle variability and pheromone evaporation, the action is less likely to be stationary. Instead, the system continuously adapts, and the action varies over time.
2. Saddle Point, Minimum, or Maximum:
  • Saddle Point: The action is likely to be at a saddle point due to the dynamic balancing of factors. The system may have directions in which the action decreases (due to pheromone decay) and directions in which it increases (due to path variability).
  • Minimum: If the system stabilizes around a certain path that balances the stochastic wiggle and the decaying pheromones effectively, the action might approach a local minimum. However, this is less likely in a highly dynamic system.
  • Maximum: It is unusual for the action in such optimization problems to represent a maximum, because that would imply an unstable and inefficient path being preferred, which is contrary to observed behavior.
Practical Implications
1. Continuous Adaptation: - The system will require continuous adaptation to maintain optimal paths. Ants need to frequently update their path choices based on the real-time state of the pheromone landscape.
2. Complex Optimization: - Optimization algorithms must account for the random variations in movement, the rules for deposition and diffusion and the temporal decay of pheromones. This means more sophisticated models and algorithms are necessary to predict and find optimal paths.
Therefore, incorporating the wiggle angle and pheromone evaporation into the model makes the action more dynamic and less likely to be strictly stationary. Instead, the action is more likely to exhibit behavior characteristic of a saddle point, with continuous adaptation required to navigate the dynamic environment. This complexity necessitates advanced modeling and optimization techniques to accurately capture and predict the behavior of the system.

3.2. Dynamic Action

For dynamical non-stationary action principles, we can extend the classical action principle to include time-dependent elements. The Lagrangian is changing during the motion of an agent between the nodes as the terms in it are changing.
1. Time-Dependent Lagrangian that explicitly depends on time or other dynamic variables:
L = L(q, \dot{q}, t, \lambda(t))
where q represents the generalized coordinates, q̇ their time derivatives, t time, and λ(t) a set of dynamically evolving parameters.
2. Dynamic Optimization - the system continuously adapts its trajectory q(t) to minimize or optimize the action that evolves over time:
I = \int_{t_1}^{t_2} L(q, \dot{q}, t, \lambda(t)) \, dt
The parameters λ ( t ) are updated based on feedback from the system’s performance. The goal is to find the path q ( t ) that makes the action stationary. However, since λ ( t ) is time-dependent, the optimization becomes dynamic.
Euler-Lagrange Equation
To find the stationary path, we derive the Euler-Lagrange equation from the time-dependent Lagrangian. For a Lagrangian L ( q , q ˙ , t , λ ( t ) ) , the Euler-Lagrange equation is:
\frac{d}{dt} \left( \frac{\partial L}{\partial \dot{q}} \right) - \frac{\partial L}{\partial q} = 0
However, due to the dynamic nature of λ ( t ) , additional terms may need to be considered.
Updating Parameters λ ( t )
The parameters λ ( t ) evolve based on feedback from the system’s performance. This feedback mechanism can be modeled by incorporating a differential equation for λ ( t ) :
\frac{d \lambda(t)}{dt} = f\left( \lambda(t), q(t), \dot{q}(t), t \right)
Here, f represents a function that updates λ ( t ) based on the current state q ( t ) , the velocity q ˙ ( t ) , and possibly the time t. The specific form of f depends on the nature of the feedback and the system being modeled.
Practical Implementation
In our example of ants with a wiggle angle and pheromone evaporation, the effective Lagrangian is:
L = \frac{1}{2} m v^2 + k\, C(r, t)\, e^{-\lambda(t) t} + P(\theta, t)
with all of the terms defined earlier.
The action I would be:
I = \int_{t_1}^{t_2} \left[ \frac{1}{2} m v^2 + k\, C(r, t)\, e^{-\lambda(t) t} + P(\theta, t) \right] dt
Dynamical System Adaptation
The system adapts by updating λ ( t ) based on the current state of pheromones and the ants’ paths.
Solving the Equations
1. Numerical Methods: Usually, these systems are too complex for analytical solutions, so numerical methods (e.g., finite difference methods, Runge-Kutta methods) are used to solve the differential equations governing q(t) and λ(t).
2. Optimization Algorithms: Algorithms like gradient descent, genetic algorithms, or simulated annealing can be used to find optimal paths and parameter updates.
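As a schematic illustration of such a numerical solution (a toy example with made-up functions, not the simulation itself), the coupled equations for q(t) and λ(t) can be integrated with SciPy's Runge-Kutta solver; the concentration field C(q) and the feedback rule f below are placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, k = 1.0, 1.0

def C(q):
    """Placeholder pheromone concentration field along the coordinate q."""
    return np.exp(-(q - 1.0) ** 2)

def dC_dq(q):
    return -2.0 * (q - 1.0) * C(q)

def feedback(lam, q, qdot, t):
    """Placeholder feedback rule f(lambda, q, qdot, t) for the evolving parameter lambda(t)."""
    return 0.1 * (abs(qdot) - lam)

def rhs(t, y):
    q, qdot, lam = y
    qddot = (k / m) * dC_dq(q) * np.exp(-lam * t)   # m q'' = k (dC/dq) e^(-lambda t)
    return [qdot, qddot, feedback(lam, q, qdot, t)]

sol = solve_ivp(rhs, (0.0, 50.0), y0=[0.0, 0.5, 0.06], method="RK45", max_step=0.1)
print(sol.y[:, -1])    # final q, dq/dt, lambda
```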
By extending the classical action principle to include time-dependent and evolving elements, we can model and solve more complex, dynamic systems. This framework is particularly useful in real-world scenarios where conditions change over time, and systems must adapt continuously to maintain optimal performance. This approach is applicable in physical, chemical, and biological systems, and in fields such as robotics, economics, and ecological modeling, providing a powerful tool for understanding and optimizing dynamic, non-stationary systems.
The Lagrangian changes at each time step of the simulation, therefore we cannot talk about static action, but a dynamic action. This is dynamic optimization and reinforcement learning.
The average action is quasi-stationary, as it fluctuates around a fixed value, but internally each trajectory of which it is composed fluctuates stochastically, given the dynamic Lagrangian of each ant. It still fluctuates around the shortest theoretical path, so the average action is minimized far from the stationary path, even though close to the minimum it can become temporarily stuck in a neighboring stationary-action path. In all these situations, as described above, the average action efficiency is our measure of organization.

3.3. Specific details in our simulation

For our simulation, the concentration C(r, t) at each patch is updated at each tick as the sum of three contributions:
1. C i , j ( t ) is the preexisting amount of pheromone at each patch at time t.
2. Pheromone Diffusion: The changes of the pheromone at each patch at time t are described by the rules of the simulation: 70% of the pheromone is split equally between all 8 neighboring patches on each tick, regardless of how much pheromone is in that patch, which means that 30% of the original amount is left in the central patch. On the next tick, 70% of those remaining 30% will diffuse again. At the same time, by the same rule, pheromone is distributed from all 8 neighboring patches to the central one (see the sketch at the end of this subsection). Note: this rule for diffusion does not follow the diffusion equations in physics, where there is always flow from high concentration to low.
C_{i,j}(t+1) = 0.3\, C_{i,j}(t) + \frac{0.7}{8} \sum_{k,l = -1}^{1} C_{i+k,\, j+l}(t)
where |k| + |l| \neq 0
The first term in the equation shows how much of the concentration of the pheromone from the previous time step is left in the next, and the second term shows the incoming pheromone from all neighboring patches, as 70% × 1/8 of each neighbor's concentration is distributed to the central one.
3. The amount of pheromone an ant deposits after n steps can be expressed as:
P(n) = \frac{1}{10}\, P_0\, (0.9)^n
where P_0 = 30 is the amount of pheromone an ant picks up when it visits the food or the nest.
The stochastic term P(θ, t) depends on σ²(θ), the variance of a uniform distribution, which for the parameters of this simulation is [70]:
\sigma^2(\theta) = \frac{50^2}{12}
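As an illustration of the diffusion, evaporation, and deposition rules above (a sketch under our reading of the rules, not the NetLogo source), the pheromone field on the patch grid can be updated with NumPy as follows; the grid size, deposit position, and edge handling via wrap-around are simplifying assumptions.

```python
import numpy as np

def diffuse(C, keep=0.3):
    """One tick of the patch rule: each patch keeps 30% and sends 70%/8 to each of its 8 neighbors."""
    new = keep * C
    share = (1.0 - keep) / 8.0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            # np.roll wraps around the edges; the NetLogo world's border handling may differ.
            new += share * np.roll(np.roll(C, di, axis=0), dj, axis=1)
    return new

def evaporate(C, rate=0.06):
    """Each patch loses 6% of its pheromone per tick."""
    return (1.0 - rate) * C

# Example: one deposit by an ant that left the nest n ticks ago, with initial load P0 = 30.
C = np.zeros((41, 41))
P0, n = 30.0, 5
C[20, 10] += 0.1 * P0 * 0.9 ** n     # P(n) = (1/10) * P0 * 0.9^n
C = evaporate(diffuse(C))
print(C.sum())
```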

3.4. Gradient based approach

We can use either the concentration’s value or the concentration gradient in the potential energy term. Using the gradient is a more exact approach but even more computationally intensive.
In a further extension of the model, we can incorporate a gradient-based potential energy term. In this case, the concentration-dependent term is k ∇C(r, t) instead of k C(r, t), and the Lagrangian becomes:
L = \frac{1}{2} m v^2 + k\, \nabla C(r, t)\, e^{-\lambda(t) t} + P(\theta, t)

3.5. Summary

1. We obtained the Lagrangian with the exact parameters for the specific simulation that produced the data. To our current knowledge, no other studies have published a Lagrangian approach to agent-based simulations of ants.
2. The Lagrangian is impossible to solve analytically, to our current knowledge, due to the stochastic term. Also, the equation for the concentration of pheromones is for a given patch, but the equation for the amount deposited by the ants depends on how many steps, n, they have taken since they visited the food or nest. Each ant has a different path, so n in the equation will be different for each ant, and each ant will deposit a different amount of pheromone. In general this cannot be done analytically, as it depends on the stochastic path of each ant; therefore the only way to solve it is numerically, through the simulation, where the concentration is tracked for each patch i, j.
3. The average path length obtained from the simulation serves as a numerical solution to the Lagrangian because it results from the model that incorporates all the dynamics described by the Lagrangian. This path length reflects the optimization and behaviors modeled by the Lagrangian terms, including kinetic energy, potential energy influenced by pheromone concentrations, and stochastic movement. The simulation is using the reciprocal of the average path-length as the average action efficiency. This takes into account all of the effects on the Lagrangian.
4. The average action could be stationary close to the theoretically shortest path, i.e., near the minimum of the average action, but farther from it the action is always being minimized, both experimentally and from theoretical considerations. In the simulation, it is measured that longer paths always decay to shorter paths. There can only be some deviations very close to the shortest path due to memory effects and stochastic reasons, which will decay with longer annealing and with changing parameters such as exploration by increasing the wiggle angle, changing the pheromone deposition, diffusion and evaporation rates, changing the speed and mass of the ants, and other factors. When the average action efficiency is growing, it means that the average unit action is decreasing. When the action is stationary, as it is at the end of the simulations, as seen in the time graphs, the average action efficiency is also stationary - it does not grow anymore, in the time and for the parameters of the current simulation. A similar process happens in real complex systems such as organisms, ecosystems, cities, and economies. Due to the stochastic variations, we can consider only average quantities.

4. Mechanism

4.1. Exponential Growth and Size-Complexity Rule

Average action efficiency is the proposed measure for the level of organization and complexity. To test it, we turn to observational data. The size-complexity rule states that complexity increases as a power law of the size of a complex system [56]. This rule has been observed in systems of very different natures, without explanation or proposed origin. In the next section, on the model of the mechanism of self-organization, we derive those exponential and power law dependencies. In this paper, we show how our data align with the size-complexity rule.

4.2. A model for the mechanism of self-organization

We apply the model first presented in our paper from 2015 [10] and used in the following papers [11,12] to the ABM simulation here, and specify all of the quantities in this model. Then, we show the exponential and power law solutions for this specific system.
Below is a visual representation of the positive feedback interactions between the characteristics of a complex system, which in our 2015 paper [10] have been proposed as the mechanism of self-organization, progressive development, and evolution, applied to the current simulation. Here i is the information in the system, calculated by the total amount of ant pheromones, t is the average time for all of the ants in the simulation crossing between the two nodes, N is the total number of ants, Q is the total action of all ants in the system, Δ S is the internal entropy difference between the initial and final state of the system in the process of self-organization of finding the shortest path, α is the average action efficiency, ϕ is the number of events in the system, which in the simulation is the number of paths or crossings between the two nodes, Δ ρ is the order parameter, which is the difference in the density of agents between the final and initial state of the simulation. The links connecting all those quantities represent positive feedback connections between them.
The positive feedback loops in Figure 2 are modeled with a set of ordinary differential equations. The solutions of this model are exponential for each characteristic and have a power law dependence between each two. The detailed solutions of this model are shown in the next section.
We acknowledge the mathematical point that, in general, solutions to systems of linear differential equations are not always exponential. This depends on the eigenvalues of the governing matrix, which must be positive real numbers for exponential growth to occur. Additionally, the matrix must be diagonalizable to support such solutions.

4.2.1. Systems with Constant Coefficients:

• For linear systems with constant coefficients, the solutions often involve exponential functions. This is because the system can be expressed in terms of matrix exponentials, leveraging the properties of constant coefficient matrices.
• Even in these cases, if the coefficient matrix is defective (non-diagonalizable), the solutions may include polynomial terms multiplied by exponentials (e.g., t e^{λt} and t² e^{λt}).

4.2.2. Systems with Variable Coefficients:

• When the coefficients are functions of the independent variable (e.g., time), the solutions may involve integrals, special functions (like Bessel or Airy functions), or other non-exponential forms.
• The lack of constant coefficients means that the superposition principle doesn’t yield purely exponential solutions, and the system may not have solutions expressible in closed-form exponential terms.

4.2.3. Higher-Order Systems and Resonance:

• In some systems, especially those modeling physical phenomena like oscillations or circuits, the solutions might involve trigonometric functions, which are related to exponentials via Euler’s formula but are not themselves exponential functions in the real domain.
• Resonant systems can exhibit behavior where solutions grow without bound in a non-exponential manner.
While exponential functions are a key part of the toolkit for solving linear differential equations, especially with constant coefficients, they don’t encompass all possible solutions. The nature of the coefficients and the structure of the system play crucial roles in determining the form of the solution.
In our specific system, the dynamics predicts exponential growth. We do not consider friction, negative feedback, or any dissipative processes that would introduce complex or negative eigenvalues. Instead, the system is driven by positive feedback loops, which lead to positive real eigenvalues. These conditions ensure that the matrix is diagonalizable and that the system exhibits exponential growth, as expected under these assumptions.
Our model operates under the assumption of constant positive feedback, which justifies the exponential growth observed in our simulations. This is a valid simplification for our study, focusing on systems with reinforcing interactions rather than dissipative forces.

5. Mathematical Representation of the Mechanism

This is the mathematical representation and solutions of the mechanism represented as a positive feedback loop between the eight characteristics of the system.
In general, in a linear system with eight quantities, the shortest way to represent the interactions is by linear differential equations, using a matrix to describe the interactions between different quantities. We are writing this system generally in order to specify and discuss different aspects of it. Let’s define our system as follows:
\frac{d}{dt} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_8 \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{18} \\ a_{21} & a_{22} & \cdots & a_{28} \\ \vdots & \vdots & \ddots & \vdots \\ a_{81} & a_{82} & \cdots & a_{88} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_8 \end{pmatrix}
Here, d d t denotes the derivative with respect to time, x 1 , x 2 , , x 8 are the quantities of interest, and a i j are constants that represent the interaction strengths between the quantities. The solutions for this system are exponential growth for each of the quantities, and power-law relationships can be derived from their exponential growth. Let’s consider eight quantities x 1 ( t ) , x 2 ( t ) , . . . , x 8 ( t ) each growing exponentially:
  • x_1(t) = x_{10}\, e^{a_1 t}
  • x_2(t) = x_{20}\, e^{a_2 t}
  • x_3(t) = x_{30}\, e^{a_3 t}
  • x_4(t) = x_{40}\, e^{a_4 t}
  • x_5(t) = x_{50}\, e^{a_5 t}
  • x_6(t) = x_{60}\, e^{a_6 t}
  • x_7(t) = x_{70}\, e^{a_7 t}
  • x_8(t) = x_{80}\, e^{a_8 t}
Each x i 0 is the initial value, and each a i is the growth rate for quantity x i ( t ) . To find a power law relationship between any two quantities, say x i ( t ) and x j ( t ) :
1. Solve for t in terms of x i ( t ) and x j ( t ) :
t = \frac{1}{a_i} \ln\!\left( \frac{x_i(t)}{x_{i0}} \right)
t = \frac{1}{a_j} \ln\!\left( \frac{x_j(t)}{x_{j0}} \right)
2. Set these two expressions equal to each other and solve for one variable in terms of the other:
\frac{1}{a_i} \ln\!\left( \frac{x_i(t)}{x_{i0}} \right) = \frac{1}{a_j} \ln\!\left( \frac{x_j(t)}{x_{j0}} \right)
\ln\!\left( \frac{x_i(t)}{x_{i0}} \right) = \frac{a_i}{a_j} \ln\!\left( \frac{x_j(t)}{x_{j0}} \right)
\frac{x_i(t)}{x_{i0}} = \left( \frac{x_j(t)}{x_{j0}} \right)^{a_i / a_j}
x_i(t) = x_{i0} \left( \frac{x_j(t)}{x_{j0}} \right)^{a_i / a_j} = \frac{x_{i0}}{x_{j0}^{\,a_i / a_j}}\; x_j(t)^{a_i / a_j}
This gives us a relationship between any two of the quantities x_i(t) and x_j(t). Now, replacing the variables, the system of linear differential equations represented in matrix form becomes:
\frac{d}{dt} \begin{pmatrix} i \\ t \\ N \\ Q \\ \Delta S \\ \alpha \\ \varphi \\ \rho \end{pmatrix} = A \begin{pmatrix} i \\ t \\ N \\ Q \\ \Delta S \\ \alpha \\ \varphi \\ \rho \end{pmatrix}
Here, A is the matrix of coefficients that define the interactions between the different quantities. For example, a list of some of the power-law-like relationships involving α and Q with respect to the other variables based on their exponential growth relationships. Here we show only the relationships for average action efficiency and for total action:
Relationships Involving α ( t ) :
  • \alpha(t) = \alpha_0 \left( \frac{i(t)}{i_0} \right)^{a_6 / a_1}
  • \alpha(t) = \alpha_0 \left( \frac{t(t)}{t_0} \right)^{a_6 / a_2}
  • \alpha(t) = \alpha_0 \left( \frac{N(t)}{N_0} \right)^{a_6 / a_3}
  • \alpha(t) = \alpha_0 \left( \frac{Q(t)}{Q_0} \right)^{a_6 / a_4}
  • \alpha(t) = \alpha_0 \left( \frac{\Delta S(t)}{\Delta S_0} \right)^{a_6 / a_5}
  • \alpha(t) = \alpha_0 \left( \frac{\varphi(t)}{\varphi_0} \right)^{a_6 / a_7}
  • \alpha(t) = \alpha_0 \left( \frac{\rho(t)}{\rho_0} \right)^{a_6 / a_8}
Relationships Involving Q ( t ) :
  • Q(t) = Q_0 \left( \frac{i(t)}{i_0} \right)^{a_4 / a_1}
  • Q(t) = Q_0 \left( \frac{t(t)}{t_0} \right)^{a_4 / a_2}
  • Q(t) = Q_0 \left( \frac{N(t)}{N_0} \right)^{a_4 / a_3}
  • Q(t) = Q_0 \left( \frac{\Delta S(t)}{\Delta S_0} \right)^{a_4 / a_5}
  • Q(t) = Q_0 \left( \frac{\alpha(t)}{\alpha_0} \right)^{a_4 / a_6}
  • Q(t) = Q_0 \left( \frac{\varphi(t)}{\varphi_0} \right)^{a_4 / a_7}
  • Q(t) = Q_0 \left( \frac{\rho(t)}{\rho_0} \right)^{a_4 / a_8}
These equations describe how α and Q scale with respect to each other and the other variables in the system, assuming all variables grow exponentially over time.
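A quick numerical check of this prediction (a sketch with made-up growth rates, not the simulation data): two purely exponential series plotted against each other on log-log axes give a straight line whose slope is the ratio of their growth rates, and a positive feedback matrix with positive real eigenvalues gives exponential growth of all variables.

```python
import numpy as np

# Two quantities growing exponentially with made-up rates a_i and a_j (as in the derivation above).
a_i, a_j = 0.03, 0.05
x_i0, x_j0 = 2.0, 5.0
t = np.linspace(0.0, 200.0, 400)
x_i = x_i0 * np.exp(a_i * t)
x_j = x_j0 * np.exp(a_j * t)

# On log-log axes, x_i versus x_j is a straight line with slope a_i / a_j.
slope, _ = np.polyfit(np.log(x_j), np.log(x_i), 1)
print(slope, a_i / a_j)          # both are ~0.6

# A small positive-feedback matrix: positive real eigenvalues imply exponential growth.
A = np.array([[0.02, 0.01],
              [0.01, 0.03]])
print(np.linalg.eigvals(A))      # both eigenvalues are positive and real
```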
In our data, we see small deviations from the strict power law fits. A power-law can include a deviation term, which may show uncertainty in the values (measurement or sampling errors) or deviation from the power-law function (for example, for stochastic reasons):
y = k x^n + \epsilon
where:
  • y and x are the variables.
  • k is a constant.
  • n is the exponent.
  • ϵ is a term that accounts for deviations.

6. Simulation Methods

6.1. Agent-Based Simulations approach

Our study examines the properties of self-organization by simulating an ant colony navigating between a food source and its nest. The ants start with a random distribution, and then their trajectories become more correlated as they form a path. The ants pick up pheromone from the food and nest and lay it on each patch when they move. The food and the nest function as two nodes that attract the ants which follow the steepest gradient of the opposite pheromone. The pheromone is equivalent to information, as forming a path requires enough of it to ensure that the ants are able to follow the quickest path to their destinations rather than moving randomly. The ants can represent any agent in complex systems, from atoms and molecules to organisms, people, cars, dollars, or bits of information. Utilizing NetLogo for agent-based modeling and Python for data visualization and analysis, we measure self-organization via the calculated entropy decrease in the system, density order parameter, and average path length, which are contingent on the ants’ distribution and possible microstates in the simulated environment. We further explore the effects of having different numbers of ants in the system simulating the growth of the system. In this study we look only at the final values of the characteristics at the end of the simulation when the self-organization is complete, or the difference between their values in the final and initial state. Then, we can demonstrate the relationship between each two characteristics as the population increases. Our model predicts that, according to one of the mechanisms of self-organization, evolution, and development of complex systems, all of the characteristics of a complex system reinforce each other, grow exponentially in time, and are proportional to each other by a power-law relationship [10]. The principle of least action is proposed as a driver for this process. The tendency to minimize action for crossing between the nodes in a complex network is a reason for self-organization and the average action efficiency is a measure of how organized the system is.
This simulation can utilize variables that affect the world, making it easier or harder to form the path. In the collected data, only the number of ants was changed. Increasing the number of ants makes it more probable to find the path, as there is not only a higher chance of them reaching the food and nest and adding information to the world, but also a steeper gradient of pheromone. This both increases the rate of path formation and decreases the length of the path. The ants follow the direction of the steepest gradient around them, but their speed does not depend on how steep the gradient is.
The simulation methods, such as the diffusion rule, are chosen according to criteria established in the literature for computational speed and for realistic outcomes. The values of the parameters are chosen by adjusting the program so as to optimize path formation.

6.2. Program Summary

The simulation is run using an agent-based software called NetLogo. In the simulation, a population of ants forms a path between two endpoints, called the food and nest. The world is a 41x41 patch grid with 5x5 food and nest centered vertically on opposite sides and aligned with the edge. To help with path formation, there is a pheromone laid by the ants on a grid whenever the food or nest is reached. This pheromone exhibits the behavior of evaporating and diffusing across the world. The settings for ants and pheromones can be configured to make path formation easier or harder.
Each tick of the simulation functions as time, which represents a second in our simulation, according to the following rules. First, each ant checks whether there is any pheromone in its neighboring patches within a view cone with an angle of 135 degrees, oriented towards its direction of movement. From its position in the current patch, the ant faces the center of the neighboring patch with the largest value of the pheromone in its viewing angle. It is important to note that the minimum amount of pheromone an ant can detect is 1/M, where M is the maximum amount of pheromone an ant can have, which in this simulation is 30. If not enough pheromone is found in view, then the ant checks all neighboring patches with the same limitation for minimum pheromone. If any pheromone is found, it faces towards the patch with the highest amount. The ant then wiggles a random amount within an integer interval of -25 to 25 degrees, regardless of whether it faced any pheromone, and moves forward at a constant speed of 1 patch per tick. If the ant collides with the edge of the world, it turns 180 degrees and takes another step. In this simulation, the ants do not collide with obstacles or with each other. After it finishes moving, the ant checks if there is any food or nest in its current patch. A collision with the nest if the ant has food, or with the food source if it does not, will switch its status of having food, set its pheromone to 30, and update the path-length data. After the collision checks, it drops 1/10 of its current amount of pheromone at the patch. When all the ants have been updated, the patch pheromone is updated. There is a diffusion rate of 0.7, which means that 70% of the pheromone at each patch is distributed equally to the neighboring patches. There is also an evaporation rate of 0.06, which means that the pheromone at each patch is decreased by 6 percent. There are more behaviors available in the simulation than these.
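The kinematic part of these rules (wiggle, constant-speed step, edge bounce, and path-length accounting) can be summarized in the self-contained Python sketch below; it is an illustration of our description, not the NetLogo code, and it omits the pheromone sensing and deposition steps.

```python
import math
import random

WORLD_MIN, WORLD_MAX = -20, 20     # the 41 x 41 patch grid
random.seed(1)

def tick(x, y, heading, wiggle=25, speed=1.0):
    """One kinematic tick: wiggle by a random integer angle, step forward, bounce off the world edge."""
    heading += random.randint(-wiggle, wiggle)
    nx = x + speed * math.cos(math.radians(heading))
    ny = y + speed * math.sin(math.radians(heading))
    if not (WORLD_MIN <= nx <= WORLD_MAX and WORLD_MIN <= ny <= WORLD_MAX):
        heading += 180                       # turn around at the edge and take another step
        nx = x + speed * math.cos(math.radians(heading))
        ny = y + speed * math.sin(math.radians(heading))
    return nx, ny, heading % 360

x, y, heading, path_length = 0.0, 0.0, 0.0, 0.0
for _ in range(100):
    x, y, heading = tick(x, y, heading)
    path_length += 1.0                       # speed is 1 patch per tick, so path length grows by 1
print(x, y, path_length)
```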

6.3. Analysis Summary

The program stores information about the status of the simulation on each tick. Upon completion of one simulation, the data are exported directly from the program for analysis in Python. Some of the data, such as the average action efficiency, are not directly exported from the program but must be generated in Python from other datasets. The data are fit with a power-law function in Python. To generate the graphs, the matplotlib Python library is used. The data seen in the graphs are the average of 20 simulations and have a moving average with a window of 50. Furthermore, any graph that requires the final value of a dataset obtains that value by averaging the last 200 points of the dataset without the moving average.
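As an illustration of this analysis step (a sketch, not the project's analysis script), the power-law fit and the moving average can be done with NumPy and SciPy as follows; the arrays are placeholders standing in for the exported datasets.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(x, a, b):
    return a * np.power(x, b)

def moving_average(y, window=50):
    return np.convolve(y, np.ones(window) / window, mode="valid")

# Placeholder data standing in for, e.g., final action efficiency versus number of ants.
N = np.array([70, 100, 130, 160, 200], dtype=float)
alpha = 0.02 * N ** 0.07                               # synthetic values, not simulation output

(a, b), _ = curve_fit(power_law, N, alpha, p0=(0.01, 0.1))
residuals = alpha - power_law(N, a, b)
r_squared = 1.0 - residuals.var() / alpha.var()
print(a, b, r_squared)

# Smoothing a noisy time series with a window of 50, as done for the time graphs.
ticks = np.arange(1000, dtype=float)
noisy = 50.0 - 10.0 * np.tanh(ticks / 200.0) + np.random.default_rng(0).normal(0.0, 1.0, 1000)
smoothed = moving_average(noisy, window=50)
```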

6.4. Average Path Length

The average path length, <l>, estimates the average length of the paths between food and nest created by the ants. On each tick, the path-length variable for each ant is increased by the amount by which it moved, which is 1 patch per tick for this simulation. When an ant reaches an endpoint, the path-length variable is stored in a list and reset to zero. This list holds all of the paths completed on that tick, and at the end of the tick, the list is averaged and added to the average path length dataset. If no paths were created, 0 is added to the average path length to serve as a placeholder; this can easily be removed in the analysis step because it is known that the path length cannot reach a length of 0. It is also important to note that, due to the method used to calculate this dataset, there will be a clear peak if a stable path is formed. Because the path length of every ant must begin at zero, the dataset is not representative before the peak: the shorter, incomplete paths from the start of the simulation are averaged in. The peak itself shows a shifting trend with self-organization when parameters are changed. The average path length data in this simulation are identical to the average path time and can be used interchangeably whenever time is needed instead of distance. If the speed were varying, then the distance and the time would differ.

6.5. Flow Rate

The flow rate, ϕ, is the number of paths completed at each tick, or crossings between the nodes in this simple network, which is how the number of events in the system is defined. It is the measure of how many ants reached the endpoints on each tick. This can simply be measured by counting how many ants reach the food or nest and adding this value to the dataset. This measure fluctuates a lot, so a moving average is necessary to make the graph readable.

6.6. Final Pheromone

The final pheromone is the total amount of pheromone at the end of the simulation, which is information for the ants. The amount of pheromone can be calculated on each tick by summing, over all patches, the nest and food pheromone values, which vary during the simulation. By final pheromone, we mean the average of the total pheromone over the last 200 ticks at the end of the simulation.

6.7. Total Action

Action is calculated as the energy used times the time for each trajectory. Since energy is constant during the motion, it can be set to 1, so the individual action becomes equal to the time for one edge crossing, which is equal to the length of the trajectory. To get the total action for all agents, it is multiplied by the number of all events, or all crossings. The calculation for total action is based on the flow rate and the average path length. It is calculated after the simulation in Python using the equation Q = ϕ · <l>.

6.8. Average Action Efficiency

The average action efficiency <α> is defined as the number of events in the system per unit of total action, i.e., the reciprocal of the action per event or per edge crossing. It is calculated by dividing the number of events by the total action in the system, <α> = ϕ/Q, which, with the energy set to 1 and constant speed, reduces to <α> = 1/<l>. The calculation is therefore based on the data for average path length. Note that the calculation for average action efficiency is first performed on the individual datasets, and then the modified datasets are averaged, rather than averaging the datasets and then applying the equation.
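A minimal sketch of these two post-processing steps (the arrays are placeholders, not exported data):

```python
import numpy as np

# Placeholder per-tick datasets standing in for the exported flow rate and average path length.
flow_rate = np.array([0.0, 2.0, 3.0, 5.0, 6.0])            # paths completed per tick
avg_path_length = np.array([0.0, 60.0, 55.0, 50.0, 48.0])   # patches per completed path

valid = avg_path_length > 0                                  # drop the 0 placeholders
total_action = flow_rate[valid] * avg_path_length[valid]     # Q = phi * <l> (energy set to 1)
action_efficiency = 1.0 / avg_path_length[valid]             # <alpha> = phi / Q = 1 / <l>
print(total_action, action_efficiency)
```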

6.9. Density

The density of the ants can be used as an order parameter. The density changes as a result of reducing randomness in the motion. The program starts at maximum randomness, or internal entropy, and while the path is forming, the local density of the ants increases. In our simulations, the total number of ants is fixed, and no ants enter or leave the system during each run. However, the density of ants within the simulation space changes over time as they redistribute themselves. Ants are initially distributed uniformly, but as they follow pheromone trails, they tend to concentrate in specific regions, particularly along frequently used paths. This leads to local increases in density along these paths and corresponding decreases in less-used areas, reflecting the emergence of self-organized patterns. Between the runs, when the total number of ants in the simulation is changed, the density is scaled proportionally to reflect this change in the total number of ants.
To calculate the density of the ants, the simulation must estimate how many ants are in each patch on average. To achieve this, the simulation approximates a box around the ants in the system to represent the area that they occupy. First, the center of the box is calculated as the average position of all ants, C_{x,y} = ⟨p_{x,y}⟩, where p_{x,y} is the position of each ant at each tick. Then, the length and width of the box are calculated as S_{x,y} = 4 \sqrt{⟨(p_{x,y} - C_{x,y})^2⟩}. Finally, the area can be calculated with the formula A = S_x · S_y. By using this method of averaging the deviations of the positions instead of simply taking the furthest ant, it is ensured that a group of ants has priority over a few outliers. In Python, after the simulation is finished, the density of ants per patch can be calculated for each population, N, by ρ = N/A. It is the total number of ants divided by the area of the box in which the ants are concentrated. At the beginning of the simulation, the box takes up the whole world, and as the ants form the path, it gradually decreases in size, corresponding to an increased density.
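A compact version of this estimate (a sketch consistent with the description above, not the NetLogo code) is:

```python
import numpy as np

def box_density(xs, ys):
    """Estimate the area occupied by the ants and their density from their positions."""
    cx, cy = xs.mean(), ys.mean()                        # center of the box
    sx = 4.0 * np.sqrt(np.mean((xs - cx) ** 2))           # width: 4 times the RMS deviation from the center
    sy = 4.0 * np.sqrt(np.mean((ys - cy) ** 2))           # height
    area = sx * sy
    return area, len(xs) / area                           # A and rho = N / A

# Example: 200 ants placed uniformly at random over the 41 x 41 world.
rng = np.random.default_rng(0)
xs, ys = rng.uniform(-20, 20, 200), rng.uniform(-20, 20, 200)
area, rho = box_density(xs, ys)
print(area, rho)    # the estimated width comes out near 46, as in the dispersal test reported below
```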

6.10. Entropy

The system starts with maximum internal entropy, which decreases as paths are formed over time. The calculation for entropy is similar to the calculation of density. First, using the same method as described with density, a box is calculated around the ants within the system.
We consider the agents in our simulation to be distinguishable because we have two different types of ants and each ant in the simulation is labeled and identifiable. The Boltzmann entropy is
S = k_B \ln W
where the number of states W is the area A that the ants occupy raised to the power of the number of ants N:
W = A^N
Plugging this into the Boltzmann formula, we get:
S = k_B \ln\!\left( A^N \right)
Setting k_B = 1, we obtain the expression which we used in our calculations:
S = N \ln A
The box is the average size of the area A in which the ants move. As the box decreases, the number of possible microstates in which they can be decreases.
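Continuing the sketch from the density section (same box-area assumptions), the entropy follows directly from the estimated area; the numbers below use the dispersed-ant and formed-path box sizes reported in the simulation tests.

```python
import numpy as np

def box_entropy(num_ants, area):
    """Boltzmann entropy with k_B = 1 and W = A^N, i.e. S = N * ln(A)."""
    return num_ants * np.log(area)

print(box_entropy(200, 46.0 * 46.0))   # dispersed ants: large area, high entropy
print(box_entropy(200, 5.6 * 41.0))    # ants on a formed path: smaller area, lower entropy
```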

6.11. Unit Entropy

Unit entropy measures the amount of entropy per path in the simulation. This is calculated in Python by dividing the internal entropy by the flow rate. It measures unit entropy at the end of the simulation, so the final 200 points of the internal entropy data are averaged, as are the final 200 points of the flow rate data. The averaged final entropy is then divided by the averaged flow rate: s f / ϕ .

6.12. Simulation parameters

Parameter Values and Settings
Tables 1 to 4 below show the simulation parameters. Table 1 shows the properties that affect the behavior of ants, such as speed of motion, wiggle angle, pheromone detection, and size of the ants. Table 2 shows the settings that affect the properties of the pheromone, such as diffusion rate, evaporation rate and the initial amount of pheromone that the ants pick up when they visit the food or the nest. Table 3 shows settings that affect the world size and initial conditions of the ants. Table 4 shows the size and the positioning of the food and nest.
In Table 4, the settings are that the food/nest are boxes centered vertically on the screen. They do not move during the simulation. Horizontally, their back edges are aligned with the edge of the screen. They have a size of 5x5. To create this, set the properties listed in Table 4, then press the "box-food-nest" button.
Analysis Parameters: All datasets are averages of 20 runs for each population. There is also a moving average of 50 applied after the standard averaging.

6.13. Simulation Tests

We ran several tests to show that the simulation and analysis were working correctly.

6.13.1. World Size

We checked how many patches the world contains for the current settings. Running a command in NetLogo that counts how many patches are in a world with a size setting of 40 prints a value of 1681, and 1681 = 41². This means that when the world ranges from -20 to +20, the center patch is included, making a total of 41 patches in each direction.

6.13.2. Estimated Path Area

We ran a test to check how well the algorithm estimates the area that the ants occupy. We observed the algorithm working in the vertical direction, both when the ants are randomly dispersed and when they are on the horizontal path. When the ants were dispersed, the estimated width was 46.8, which is slightly above the real world size of 41. When the ants formed a path, the estimated width was 5.6, which is close to the observed width, with only a few outliers. So, the function that estimates path width might be a few patches off, but this is due to stochastic behavior when averaging the positions. If, however, we did not use averaging, then the outlier ants would have an undesirable impact on the estimated width and make the measurement fluctuate much more. The methods for checking the width and length of the path are identical, and these are both used in calculating the area occupied by the ants, which is an important step in calculating entropy and density.

7. Results

In this section, we present the data in Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15, Figure 16, Figure 17, Figure 18, Figure 19, Figure 20, Figure 21, Figure 22, Figure 23, Figure 24, Figure 25, Figure 26, Figure 27, Figure 28, Figure 29, Figure 30, Figure 31, Figure 32, Figure 33, Figure 34, Figure 35, Figure 36, Figure 37 and Figure 38, for self-organization as measured by different parameters, as an output of the agent-based modeling simulations. For clarity, we should emphasize that those quantities are predicted by the model but not produced by it. The model predictions are tested against the empirical results of the simulation.
First, we present time data for the raw output, Figure 3, Figure 4, Figure 5 and Figure 6, from which points were obtained for the power law graphs. We show the evolution of some of the quantities from the beginning to the end of the simulation. The phase transition from initial disorder to order can be seen. The last 200 points, which were averaged and used for the power law figures, can be observed. The number of ants in the runs varies from 70 to 200. The time in the simulation runs from 0 to 1000 ticks. The derived data from the time graphs are presented in Figure 7, Figure 8, Figure 9, Figure 10, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15, Figure 16, Figure 17, Figure 18, Figure 19, Figure 20, Figure 21, Figure 22, Figure 23, Figure 24, Figure 25, Figure 26, Figure 27, Figure 28, Figure 29, Figure 30, Figure 31, Figure 32, Figure 33, Figure 34, Figure 35, Figure 36, Figure 37 and Figure 38, which are fit with power law functions to compare with the predictions of the model, and the fit parameters are presented in Table 5.
Figure 3. The density of ants versus the time as the number of ants increases from the bottom curve to the top. As the simulation progresses, the ants become more dense.
Figure 4. The entropy of the simulation versus the time as the number of ants increases from bottom to top. Entropy decreases from the initial random state as the path forms.
Figure 5. The total amount of pheromone versus the time passed as the number of ants increases from bottom to top. As the simulation progresses, there is more pheromone for the ants to follow.
Figure 6. The flow rate versus the time passed as the number of ants increases from bottom to top. As the simulation progresses, the ants visit the endpoints more often.
Table 5. This table contains all the fits for the power-law graphs. The "a" and "b" values in each row follow the equation y = a x b , and the R 2 is shown in the last column.
Variables | a | b | R²
α vs. Q | 7.713·10^{-36} | 6.787·10^{-2} | 0.977
α vs. i | 1.042·10^{-35} | 6.131·10^{-2} | 0.981
α vs. ϕ | 1.510·10^{-35} | 6.055·10^{-2} | 0.982
α vs. Δs | 1.020·10^{-35} | 6.675·10^{-2} | 0.978
α vs. Δρ | 1.647·10^{-35} | 5.947·10^{-2} | 0.964
α vs. t | 1.622·10^{-34} | -6.175·10^{-1} | 0.995
α vs. N | 1.168·10^{-35} | 6.673·10^{-2} | 0.977
Q vs. i | 8.502·10^{1} | 9.012·10^{-1} | 1.000
Q vs. ϕ | 2.000·10^{4} | 8.897·10^{-1} | 1.000
Q vs. Δs | 6.202·10^{1} | 9.829·10^{-1} | 0.999
Q vs. Δρ | 7.133·10^{4} | 8.784·10^{-1} | 0.990
Q vs. t | 1.410·10^{19} | -8.888 | 0.972
Q vs. N | 4.550·10^{2} | 9.830·10^{-1} | 1.000
i vs. ϕ | 4.281·10^{2} | 9.873·10^{-1} | 1.000
i vs. Δs | 7.064·10^{-1} | 1.090 | 0.999
i vs. Δρ | 1.755·10^{3} | 9.740·10^{-1} | 0.988
i vs. t | 1.407·10^{19} | -9.887 | 0.976
i vs. N | 6.445 | 1.090 | 0.999
ϕ vs. Δs | 1.521·10^{-3} | 1.104 | 0.999
ϕ vs. Δρ | 4.175 | 9.864·10^{-1} | 0.988
ϕ vs. t | 5.438·10^{16} | -1.002·10^{1} | 0.977
ϕ vs. N | 1.427·10^{-2} | 1.104 | 0.999
Δs vs. Δρ | 1.301·10^{3} | 8.939·10^{-1} | 0.991
Δs vs. t | 4.439·10^{17} | -9.035 | 0.969
Δs vs. N | 7.598 | 1.000 | 1.000
s_i vs. N | 7.598 | 1.000 | 1.000
s_f vs. N | 5.793 | 9.745·10^{-1} | 1.000
s_u vs. N | 4.059·10^{2} | -1.298·10^{-1} | 0.938
s_u vs. s_f | 5.121·10^{2} | -1.329·10^{-1} | 0.935
Δρ vs. t | 1.103·10^{16} | -9.975 | 0.949
Δρ vs. N | 3.308·10^{-3} | 1.110 | 0.991
t vs. N | 7.061·10^{1} | -1.075·10^{-1} | 0.970

7.1. Time graphs

The raw data are presented as output measures vs. time. In this paper we present the time graphs with the output for four quantities: Figure 3, Figure 4, Figure 5 and Figure 6. All variables measure the degree of order in the system.
The time data show the phase transition from a disorganized to an organized state as an increase of the order parameter Figure 3 and as a decrease in internal entropy Figure 4. The amount of information Figure 5 and the number of events per unit time Figure 6 also undergo similar transitions. Those data are exponential in the region before the inflection point of the curves, where growth is unconstrained, confirming the exponential predictions of the model.
The density serves as an order parameter Figure 3, similar to entropy Figure 4. In the first case, the system starts with a close to zero order parameter which increases to some maximum value and then saturates as the system is fixed in size. In the second case, the system starts at maximum internal entropy and it drops to a minimum value as the system reaches the saturation point.
Pheromone is the amount of information in the system which is proportional to the degree of order Figure 5, and flow rate is also proportional to all of the other measures, indicating the number of events as defined in the system Figure 6. Both of these start at an initial minimum value and undergo a phase transition to the organized state, after which they saturate, due to the fixed size of the system. Those are measures directly connected to action efficiency and are some of the most important performance metrics for self-organizing systems.
The data in Figure 3 show the whole run as the density increases with time as self-organization occurs. The more ants, the larger the density. The increase of density depends on two factors: 1. The shorter average path length at the end, and 2. the increased number of ants.
Figure 4 shows the initial entropy when the ants are randomly dispersed which scales with the number of ants because the number of microstates corresponding to the same macrostate grows. When the ants form the path, the amount of decrease of entropy is larger for the larger number of ants, even though the absolute value of entropy at the end of the simulation also scales with the number of ants, since the number of microstates when the ants form the final path also scales with the number of ants. Entropy at the initial and final state of the simulation is seen to grow with the increase of the number of agents.
The pheromone is a measure of the information in the system. Its change during the simulation is shown on Figure 5. It scales with the number of ants. Each simulation starts with zero pheromones as the ants are dispersed randomly and do not carry any pheromones, but as they start forming the path they lay pheromones from the food and nest respectively and the more ants are in the simulation, the more pheromones they carry. Larger systems contain more information as each agent is a carrier of information.
Figure 6 shows the flow rate vs. time during the simulation. The number of events, which is the number of visits to the food and nest, is inversely proportional to the average path length (and, respectively, the average path time) and scales with the number of ants, as expected. Initially, the number of crossings is zero, but it quickly increases and is greater for the larger systems. After the ants form the shortest path, it saturates and stays close to constant for all simulations.
The average path length also can be used as a measure of the time and degree of self-organization.

7.2. Power law graphs

All figures representing the relationship between the characteristics of this system demonstrate power law relationships between all of the quantities, as theoretically predicted by the model, as seen in Figures 7 to 38. They are all on a log-log scale. This serves as one confirmation of the model as a theoretical explanation for the simulation. The power law data correspond to scaling relations measured in many systems of different natures [56,58].

7.2.1. Size-Complexity rule

Figure 7 shows the size-complexity rule: as the size increases, the action efficiency as a measure of the degree of organization and complexity increases. This is supported by all experimental and observational data on scaling relations by Geoffrey West, Bonner, Carneiro, and many others [12,56,57,58]. They confirm Kleiber’s law [71], and other similar laws, such as the area speciation rule in ecology and others.
Figure 7. The average action efficiency at the end of the simulation versus the number of ants on log-log scale. As more ants are added, they are able to form more action-efficient structures by finding shorter paths.

7.2.2. Unit-total dualism

The following graphs serve as empirical support for the unit-total dualism described in this paper. Figure 8 shows the unit-total dualism between the average action efficiency and the total action.
Figure 8. The average action efficiency at the end of the simulation versus the total action as the number of ants increases on log-log scale. As there is more total action within the system, the ants become more action-efficient.
The total action is a measure of all the energy and time spent in the simulation by the agents in the system, as can be seen in Figure 8. As the number of agents increases, the total action increases. This demonstrates the duality of decreasing unit action and increasing total action as a system self-organizes, grows, develops, and evolves. It also demonstrates the dynamical action principle: the unit action per event decreases with the growth of the system, as seen in the increase of the average action efficiency, while the total action increases. This is an expression of the dualism between the principle of decreasing unit action and the principle of increasing total action, for dynamical action as systems self-organize, grow, evolve, and develop.
Figure 9 is an expression of the unit-total duality of entropy. When the unit entropy in the system tends to decrease, its total entropy increases.
Figure 9. Unit entropy at the end of the simulation versus internal entropy on log-log scale. As the total entropy for the simulation increases, the entropy per agent decreases.

7.2.3. The rest of the characteristics

Next we show the rest of the power law fits between all of the quantities in the model, figs. 10 to 38. All of them are on a log-log scale, where a straight line corresponds to a power-law curve on a linear-linear scale. These graphs match the predictions of the model and confirm the power-law relationships between all of the characteristics of a complex system derived there.
Figure 10 shows the average action efficiency at the end of the simulation versus the time required to traverse the path as the size of the system, in terms of number of agents, increases. In complex systems, as the agents find shorter paths, this state is more stable in dynamic equilibrium and is preserved. It has a higher probability of persisting. It is memorized by the system. If there is friction in the system, this trend will become even stronger, as the energy spent to traverse the shorter path will also decrease. To the macro-state at each point, there are many micro-states, corresponding to the variations of the paths of individual agents.
Figure 10. The average action efficiency at the end of the simulation versus the time required to traverse the path as the number of ants increases on log-log scale. Action efficiency increases as the time to reach the destination shortens, i.e. the path length becomes shorter.
Figure 11 shows the average action efficiency at the end of the simulation versus the density increase of the agents as the size of the system increases in terms of number of agents. Density increases the probability of shorter paths, i.e. less time to reach the destination, i.e. larger action efficiency. In natural systems as density increases, action efficiency increases, i.e. level of organization increases. Another term for density is concentration. When hydrogen gas clouds in the universe under the influence of gravity concentrate into stars, nucleosynthesis starts and the evolution of cosmic elements begins. In chemistry increased concentration of reactants speeds up chemical reactions, i.e. they become more action efficient. When single-cell organisms concentrate in colonies and later in multicellular organisms their level of organization increases. When human populations concentrate in cities, action efficiency increases, and civilization advances.
Figure 11. The average action efficiency at the end of the simulation versus the density increase measured as the difference between the final density minus the initial density as the number of ants increases on log-log scale. As the ants get more dense, they become more action efficient.
As statistical Boltzmann entropy decreases, as seen on Figure 12, the density (concentration) increases and this allows the system to be more action-efficient and organized. Decreased randomness is correlated with a well-formed path as a flow channel, which corresponds to the structure (organization) of the system. Here, the decrease of entropy obeys the predictions of the model being in a strict power law dependence on the other characteristics of the self-organizing complex system.
Figure 12. The average action efficiency at the end of the simulation versus the absolute amount of entropy decrease as the number of ants increases on log-log scale. As the ants get less random, they become more action-efficient.
Figure 13 shows the average action efficiency at the end of the simulation versus the flow rate as the size of the system in terms of number of agents increases. The flow rate measures the number of events in a system. For real systems, those can be nuclear or chemical reactions, computations, or anything else. In this simulation, it is the number of visits at the endpoints, or the number of crossings. As the speed of the ants is a constant in this simulation, the number of visits or the flow of events is inversely proportional to the time for crossing, i.e. the path length, therefore action efficiency increases with the number of visits.
Figure 13. The average action efficiency at the end of the simulation versus the flow rate as the number of ants increases on log-log scale. As the ants visit the endpoints more often, they become more efficient.
Figure 14 shows the average action efficiency at the end of the simulation versus the amount of pheromone, or information, as the size of the system in terms of number of agents increases. The pheromone is what instructs the ants how to move. They follow its gradient towards the food or the nest. As the ants form the path, they concentrate more pheromone on the trail, and they lay it faster so it has less time to evaporate. Both depend on each other in a positive feedback loop. This leads to increased action efficiency, with a power-law dependence as predicted by the model. In other complex systems, the analog of the pheromone can be temperature and catalysts in chemical reactions. In an ecosystem, as animals traverse a path, the path itself carries information, and clearing the path reduces obstacles and therefore the time and energy to reach the destination, i.e., action.
Figure 14. The average action efficiency at the end of the simulation versus the amount of pheromone, or information, as the number of ants increases on log-log scale. As there is more information for the ants to follow, they become more efficient.
Figure 15 shows the total action at the end of the simulation versus the size of the system in terms of number of agents. The total action is the sum of the actions of each agent. As the number of agents grows the total action grows. This graph demonstrates the principle of increasing total action in self-organization, growth, evolution, and development of systems.
Figure 15. The total action at the end of the simulation versus the number of ants on log-log scale. As more ants are added, there is more action within the system.
Figure 16 shows the total action at the end of the simulation versus the time required to traverse the path as the size of the system, in terms of number of agents increases. With more ants, the path forms better and gets shorter, which increases the number of visits. The shorter time is connected to more visits and increased size of the system, which is why the total action increases. This graph also demonstrates the principle of increasing total action in self-organization, growth, evolution, and development of systems.
Figure 16. The total action at the end of the simulation versus the time required to traverse the path as the number of ants increases on log-log scale.
Figure 17 shows the total action at the end of the simulation versus the increase of density of agents as the size of the system in terms of number of agents increases. A larger system contains more agents, which corresponds to greater density, more trajectories, and more total action. This graph also demonstrates the principle of increasing total action in self-organization, growth, evolution, and development of systems.
Figure 17. The total action at the end of the simulation versus the increase of density as the number of ants increases on log-log scale. As the ants become more dense, there is more action in the system.
Figure 18 shows the total action at the end of the simulation versus the absolute decrease of entropy as the size of the system in terms of number of agents increases. As the total entropy difference increases, meaning that the internal entropy decreases more for a larger number of ants, the total action increases, because there are more agents in the system and they visit the nodes more often. Greater organization of the system is correlated with more total action, demonstrating again the principle of increasing total action in self-organization, growth, evolution, and development of systems.
Figure 18. The total action at the end of the simulation versus the absolute decrease of entropy as the number of ants increases on log-log scale. As the entropy decreases, there is more action within the system.
Figure 19 shows the total action at the end of the simulation versus the flow rate, which is the number of events per unit time, as the size of the system in terms of number of agents increases. As the flow of events increases, which is the number of crossings of ants between the food and nest, the total action increases, because there are more agents in the system and they visit the nodes more often by forming a shorter path. This also demonstrates the principle of increasing total action in self-organization, growth, evolution, and development of systems.
Figure 19. The total action at the end of the simulation versus the flow rate as the number of ants increases on log-log scale. As the ants visit the endpoints more often, there is more total action within the system.
Figure 20 shows the total action at the end of the simulation versus the amount of pheromone as a measure for information, as the size of the system in terms of number of agents increases. As the total number of agents in the system increases, they leave more pheromone, which leads to a shorter path, increases the number of visits, and raises the total action. Again, this graph demonstrates the principle of increasing total action in self-organization, growth, evolution, and development of systems.
Figure 20. The total action at the end of the simulation versus the amount of pheromone as the number of ants increases on log-log scale. As there is more information for the ants to follow, there is more action within the system.
Figure 21 shows the total pheromone as a measure for the amount of information at the end of the simulation versus the size of the system in terms of number of agents. As the total number of ants in the system increases, they leave more pheromones and form a shorter path, which counters the evaporation of the pheromones. This increases the amount of information in the system, which helps with its rate and degree of self-organization.
Figure 21. The total pheromone at the end of the simulation versus the number of ants on log-log scale. As more ants are added to the simulation, there is more information for the ants to follow.
Figure 22 shows the total pheromone at the end of the simulation versus the average time required to traverse the path as the size of the system in terms of number of agents increases. As the total number of ants in the system increases, they form a shorter path because the degree of self-organization is higher; they visit the food and nest more often, and since there are more ants, they leave more pheromone. The increased amount of information in turn helps form an even shorter path, which reduces the pheromone evaporation and increases the pheromone even more. This is a visualization of the result of this positive feedback loop.
Figure 22. The total pheromone at the end of the simulation versus the time required to traverse the path as the number of ants increases on log-log scale. As it takes less time for the ants to travel between the nodes, there is more information for the ants to follow.
Figure 23 shows the total pheromone as a measure of information at the end of the simulation versus the density increase as the size of the system in terms of number of agents increases. As the total number of ants in the system increases, they form a shorter path because the degree of self-organization is higher. With more ants, their density increases, and since they visit the food and nest more often and there is less evaporation, they leave more information.
Figure 23. The total pheromone at the end of the simulation versus the density increase as the number of ants increases on log-log scale. As the ants become more dense, there is more information for them to follow.
Figure 24 shows the total pheromone as a measure of the amount of information at the end of the simulation versus the absolute decrease of entropy as the size of the system in terms of number of agents increases. As the total number of ants in the system increases, they form a shorter path because the degree of self-organization is higher. With more ants, the entropy difference increases: the entropy during each simulation decreases, and since the ants visit the food and nest more often and there is less evaporation, they accumulate more pheromone.
Figure 24. The total pheromone at the end of the simulation versus the absolute decrease of entropy as the number of ants increases on log-log scale. As the entropy decreases, there is more information for the ants to follow.
Figure 25 shows the total pheromone as a measure of the amount of information in the systems at the end of the simulation versus the flow rate, which is the number of events (crossings of the edge) per unit time, as the size of the system in terms of number of agents increases. As the total number of ants in the system increases, they form a shorter path as the degree of self-organization is higher. They visit the food and nest more often, and as there are more ants, the number of visits increases proportionally, the evaporation decreases, and they accumulate more pheromones.
Figure 25. The total pheromone at the end of the simulation versus the flow rate as the number of ants increases on log-log scale.
Figure 26 shows the flow rate in terms of number of events at the end of the simulation versus the size of the system in terms of number of agents. As the total number of ants in the system increases, they form a shorter path as the degree of self-organization is higher, visit the food and nest more often, and the number of visits increases proportionally.
Figure 26. The flow rate at the end of the simulation versus the number of ants on log-log scale. As more ants are added to the simulation and they are forming shorter paths in self-organization, the ants are visiting the endpoints more often.
Figure 27 shows the flow rate in terms of number of events per unit time at the end of the simulation versus the time required to traverse between the nodes as the size of the system in terms of number of agents increases. As the total number of ants in the system increases, they form a shorter path as the degree of self-organization is higher, visit the food and nest more often, and as there are more ants, the number of visits increases proportionally.
Figure 27. The flow rate at the end of the simulation versus the time required to traverse between the nodes as the number of ants increases on log-log scale. As the path becomes shorter, the ants are visiting the endpoints more often.
Figure 28 shows the flow rate in terms of number of events (edge crossings) per unit time at the end of the simulation versus the increase of density of agents as the size of the system in terms of number of agents increases. As the total number of ants in the system increases, they form a shorter path because the degree of self-organization is higher; this leads to an increase in density, and since there are more ants, the number of visits increases proportionally.
Figure 28. The flow rate at the end of the simulation versus the increase of density as the number of ants increases on log-log scale. As the ants get more dense, they are visiting the endpoints more often.
Figure 29 shows the flow rate in terms of number of events (edge crossings) per unit time at the end of the simulation versus the absolute decrease of entropy as the size of the system in terms of number of agents increases. As the total number of ants in the system increases, they form a shorter path as the degree of self-organization is higher, the absolute decrease of entropy is larger, and as there are more ants, the number of visits increases proportionally.
Figure 29. The flow rate at the end of the simulation versus the absolute decrease of entropy as the number of ants increases on log-log scale. As the entropy decreases more, the ants are visiting the endpoints more often.
Figure 30 shows the absolute amount of entropy decrease versus the size of the system in terms of number of agents. As the total number of ants in the system increases, they form a shorter path because the degree of self-organization is higher; they start with a larger initial entropy, and the difference between the initial and final entropy grows. More ants correspond to a greater internal entropy decrease, which is one measure of self-organization. It is one of the scaling laws in the size-complexity rule.
Figure 30. The absolute amount of entropy decrease versus the number of ants on log-log scale. As more ants are added to the simulation, there is a larger decrease in entropy reflecting a greater degree of self-organization.
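One way to picture this entropy bookkeeping is a Shannon entropy over occupied patches, comparing the dispersed initial state with the final trail. The binning and the position data below are illustrative assumptions made for the sketch, not the exact entropy definition used in the Methods.

    import numpy as np

    def positional_entropy(positions, world_size=41):
        # Shannon entropy (in bits) of ant positions binned into unit patches.
        grid = np.zeros((world_size, world_size))
        for x, y in positions:
            grid[int(x) % world_size, int(y) % world_size] += 1
        p = grid[grid > 0] / grid.sum()          # occupation probabilities
        return float(-(p * np.log2(p)).sum())

    rng = np.random.default_rng(0)
    initial = rng.integers(0, 41, size=(120, 2))      # dispersed starting positions
    final = [(x, 20) for x in range(40)] * 3          # the same 120 ants on one trail
    print(positional_entropy(initial) - positional_entropy(final))  # entropy decrease

With more ants, the dispersed initial state has a higher entropy while the organized final trail stays narrow, so the difference between the two grows, as in the figure.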
Figure 31 shows the absolute amount of entropy decrease versus the average time required to traverse the path at the end of the simulation as the size of the system in terms of number of agents increases. As the total number of ants in the system increases, they form a shorter path because the degree of self-organization is higher. A shorter path corresponds to shorter times to cross between the two nodes, and the internal entropy decreases more.
Figure 31. The absolute amount of entropy decrease versus the time required to traverse the path at the end of the simulation as the number of ants increases on log-log scale. As it takes more time to move between the nodes with fewer ants, there is less of a decrease in entropy.
Figure 32 shows the absolute amount of entropy decrease versus the amount of density increase at the end of the simulation as the size of the system in terms of number of agents increases. As the total number of ants in the system increases, they form a shorter path because the degree of self-organization is higher; as there are more ants, their density increases, and the internal entropy difference increases proportionally.
Figure 32. The absolute amount of entropy decrease versus the amount of density increase as the number of ants increases on log-log scale. As the ants become more dense, there is a larger decrease in entropy.
Figure 33 shows the amount of density increase versus the size of the system in terms of number of agents. As the total number of ants in the system increases, they form a shorter path because the degree of self-organization is higher, and as there are more ants, the density increases proportionally.
Figure 33. The amount of density increase versus the number of ants on log-log scale. As more ants are added to the simulation, there is a larger increase in density.
Figure 34 shows the amount of density increase versus the average time required to traverse the path as the size increases in terms of number of agents. As the total number of ants in the system increases, they form a shorter path as the degree of self-organization is higher, visit the food and nest more often, the time to cross between the nodes decreases, and the density increases proportionally.
Figure 34. The amount of density increase versus the time required to traverse the path as the number of ants increases on log-log scale. When there are more ants it takes less time to traverse the path, and there is more of an increase in density.
Figure 35 shows the average time required to traverse the path versus the increasing size of the system in terms of number of agents. As the total number of ants in the system increases, they form a shorter path as the degree of self-organization is higher, visit the food and nest more often, and as there are more ants, the time for the visits decreases proportionally.
Figure 35. The time required to traverse the path versus the number of ants on log-log scale. As more ants are added to the simulation, it takes less time to move between the nodes.
Figure 36 shows the final entropy at the end of the simulation versus the size of the system in terms of number of agents. The final entropy in the system increases when there are more agents, and therefore more possible microstates of the system. This is an expression of the unit-total duality of entropy, where the total entropy in the system tends to increase with its growth.
Figure 36. The final entropy at the end of the simulation versus population on log-log scale. As the population increases, there is more entropy.
Figure 37 shows the initial entropy at the beginning of the simulation versus the size of the system in terms of number of agents. The initial entropy reflects the larger number of agents in a fixed initial size of the system and scales with the size of the system as expected. The initial entropy in the system increases when there are more agents in the space of the simulation, and therefore more possible microstates of the system.
Figure 37. The initial entropy on the first tick of the simulation versus the population on log-log scale. As the population increases, there is more entropy.
Figure 38 shows the unit entropy at the end of the simulation versus the size of the system in terms of number of agents. This is an expression of the unit-total duality of entropy, where the unit entropy in the system tends to decrease with its growth.
Figure 38. Unit entropy at the end of the simulation versus population on log-log scale. As there are more agents, there is less entropy per path at the end of the simulation.
In Table 5 we show the values of the fit parameters for the power-law relationships.
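Fit parameters of this kind can be obtained by a straight-line fit on log-log axes, y ≈ a·x^b. The sketch below shows only the procedure; the data values and variable names are placeholders, not the results reported in Table 5.

    import numpy as np

    def fit_power_law(x, y):
        # Fit y = a * x**b by linear regression on log-log axes; returns (a, b).
        b, log_a = np.polyfit(np.log(x), np.log(y), 1)
        return float(np.exp(log_a)), float(b)

    # Placeholder final-state values for runs with increasing numbers of ants
    n_ants = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
    aae = np.array([0.011, 0.016, 0.024, 0.035, 0.052])   # illustrative numbers only
    a, b = fit_power_law(n_ants, aae)
    print(f"AAE ~ {a:.4f} * N^{b:.2f}")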

8. Discussion

Hamilton’s principle of stationary action has long been a cornerstone in physics, showing that the path taken by a system between two states is the one for which the action is stationary; for most potentials in classical physics this is a minimum, and in some cases a saddle point, but never a true maximum. Our research extends this principle to the realm of complex systems, proposing that the average action efficiency (AAE) serves as a predictor, measure, and driver of self-organization within these systems. By utilizing agent-based modeling (ABM), particularly through simulations of ant colonies, we demonstrate that systems naturally evolve towards states of higher organization and efficiency, consistent with the minimization of average physical action for one event in a system. In this simulation, as the number of agents in each run is fixed, all characteristics undergo a phase transition from an unorganized initial state to an organized final state. All of the characteristics are correlated with power-law relationships in the final state. This provides a new way of understanding self-organization and its driving mechanisms.
As an example with one agent, a state of the system that has half of the action compared to another state is calculated to have double the amount of organization. An extension of the model to open systems of n agents provides a method for calculating the level of organization of any system. The significance of this result is that it provides a quantitative measure for comparing different levels of organization within the same system and for comparing different systems.
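In symbols, taking the organization measure as inversely proportional to the action per event (the exact normalization used in the model is defined earlier in the paper and is not repeated here), the comparison reads

    \frac{\alpha_2}{\alpha_1} = \frac{I_1}{I_2}, \qquad I_2 = \tfrac{1}{2} I_1 \;\Rightarrow\; \alpha_2 = 2\,\alpha_1 ,

where I is the action for one event and \alpha is the level of organization (average action efficiency).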
The size-complexity rule can be summarized as follows: for a system to improve, it must become larger, i.e. for a system to become more organized and action-efficient, it needs to expand. As a system’s action efficiency increases, it can grow, creating a positive feedback loop where growth and action efficiency reinforce each other. The negative feedback loop is that the characteristics of a complex system cannot deviate much from the power-law relationship. If we externally limit the growth of the system, we also limit the increase in its action efficiency. Then the action becomes stationary, which means that the average action efficiency and the total action in the system stop increasing. Otherwise, for unbounded, growing systems, action is dynamic, which means that the action efficiency and total action continue increasing. This applies to dynamic, open thermodynamic systems that operate outside of thermodynamic equilibrium and have flows of energy and matter. The growth of any system is driven by its increase in action efficiency. Without reaching a new level of action efficiency, growth is impossible. This principle is evident in both organisms and societies.
Other characteristics, such as the total amount of action in the system, the number of events per unit of time, the internal entropy decrease, the density of agents, the amount of information in the system (measured in terms of pheromone levels), and the average time per event, are strongly correlated and increase according to a power-law function. Changing the population in the simulation influences all these characteristics through their power-law relationships. Because these characteristics are interconnected, measuring one can provide the values of the others at any given time in the simulation using the coefficients in the power-law fits (Table Appendix). If we consider the economy as a self-organizing complex system, we can find a logical explanation for the Jevons paradox, which may need to be renamed the Jevons rule, because it is an observation of a regular property of complex systems and not an unexplained, counter-intuitive fact, as it has been considered for a long time.
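Once the coefficients are fitted, one measured characteristic can be converted into an estimate of another; a short sketch with placeholder coefficients (not the fitted values from this study):

    def predict_from_power_law(x, a, b):
        # Given a fitted relation y = a * x**b, estimate y from a measured x.
        return a * x ** b

    # Placeholder coefficients, e.g. total pheromone as a function of flow rate
    a_fit, b_fit = 2.7, 1.3      # illustrative values only
    print(predict_from_power_law(0.05, a_fit, b_fit))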
We uncover a unit-total dualism in some of the characteristics, such as action and entropy: while the action and entropy per one event decrease, the total action and entropy in the whole system increase proportionally. The unit and total quantities are correlated with power-law equations. In the case of action, this leads to a dynamical action principle, where unit action decreases in self-organization while total action increases. This variational principle is observed in self-organizing complex systems, and not in isolated agents.
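Schematically, assuming simple power-law scalings with system size N (the exponents a and b here are placeholders, not fitted values), the duality can be written as

    I_{\mathrm{unit}} \propto N^{-a}, \qquad n \propto N^{b}, \qquad Q_{\mathrm{total}} = n\, I_{\mathrm{unit}} \propto N^{\,b-a}, \qquad b > a > 0 ,

so the action per event falls while the total action of the system rises, and eliminating N gives the power-law link Q_{\mathrm{total}} \propto I_{\mathrm{unit}}^{-(b-a)/a} between the unit and total quantities.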
The formation of the path in the simulation is an emergent property of this system due to the interactions of the agents and is not specified in the rules of the simulation. This least-action state of the system, which is its most organized state, is predicted from the principle of least action. This means that we have a way to predict emergent properties in complex systems using basic physics principles. The emergence of structure from the properties of the agents is a hallmark of self-organizing systems, and it appears spontaneously. Emergence is a property of the entire system and not of its parts.

9. Conclusions

This study reinforces the increase of average action efficiency during self-organization and with the size of systems as a driver and a measure for the evolution of complex systems. This offers new opportunities for understanding and describing the processes leading to increased organization in complex systems. It offers prospects for future research, laying a foundation for more in-depth exploration into the dynamics of self-organization and potentially inspiring the development of new strategies for optimizing system performance and resilience.
Our findings suggest that self-organization is inherently driven by a positive feedback loop, where systems evolve towards states of minimal unit action and maximal organization. Self-organization driven by the action principles could be the simplest explanation and thus pass the test of Occam’s razor. It could be the answer to "Why do complex systems self-organize at all?". Action efficiency always acts together with all other characteristics in the model, not in isolation. It drives self-organization through this mechanism of positive and negative feedback loops.
We found that this theory works well for the current simulation. With additional details and features, it can be applied to more realistic systems. This model is testable and falsifiable. It always needs to be retested, because every theory, every method, and every approach has its limits and needs to be extended, expanded, enriched, and detailed as new levels of knowledge are reached. We expect this from all scientific theories and explanations. This model may be tested on any network, for example, metabolic networks, ecological networks, the Internet, or road networks.
Our simulations demonstrate that the level of organization is inversely proportional to the average physical action required for system processes. This measure aligns with the principle of least action, a fundamental concept in physics, and extends its application to complex, non-equilibrium systems. The results from our ant colony simulations consistently show that systems with higher average action efficiency exhibit greater levels of organization, validating our hypothesis.
When the processes of self-organization are open-ended and continuous, the stationary action principles no longer apply, except in limited cases. Instead, we have dynamical action principles where the quantities change continuously, either increasing or decreasing.
We uncovered an extension of the principle of least action to complex systems: an extended variational principle of decreasing unit action per one event in a self-organizing complex system, connected by a power-law relation to a mirror variational principle of increasing total action of the system. Other complexity variational principles are the decreasing unit entropy per one event in the system and the increasing total entropy as the system grows, evolves, develops, and self-organizes. We term these polar sets of variational principles the unit-total duality.
Other dualities remain to be explored: the unit path curvature for one edge of the complex network decreases, according to Hertz’s principle of least curvature, while the total curvature for traversing all paths in the system increases; and the unit constraint on the motion along one edge decreases, according to the Gauss principle of least constraint, while the total constraint on the motion of all agents increases as the system grows. There are possibly many more variational dualities to be uncovered in self-organizing, evolving, and developing complex systems. Those dualities can be used to analyze, understand, and predict the behavior of complex systems. This is one explanation for the size-complexity rule observed in nature and the scaling relationships in biology and society. The unit-total dualism is that as unit quantities decrease, with the system becoming more efficient as a result of self-organization, total quantities grow; the two are connected with positive feedback and are correlated by a power-law relation. As one example, we find a logical explanation for the Jevons and other paradoxes, and for the subsequent work of economists in this field, which are also unit-total dualities inherent to the functioning of self-organizing and growing complex systems.
While our results are promising, our study has limitations. The simplified ant colony model used in our simulations does not capture the full spectrum of complexities and interactions present in real-world systems. Future research should aim to integrate more detailed and realistic models, incorporating environmental variability and agent heterogeneity, to test the universality and applicability of our findings more broadly and for specific systems.
Additionally, the interplay between average action efficiency and other organizational measures, such as entropy and order parameters, deserves further investigation. Understanding how these metrics interact could deepen our comprehension of complex system dynamics and provide a more holistic view of system organization.
The implications of our findings are significant for both theoretical research and practical applications. In natural sciences, this new measure can be used to quantify and compare the organization of different systems, providing insights into their evolutionary processes. In engineering and artificial systems, our model can guide the design of more efficient and resilient systems by emphasizing the importance of action efficiency. For example, in ecological and biological systems, understanding how organisms optimize their behaviors to achieve greater efficiency can inform conservation strategies and ecosystem management. In technology and artificial intelligence, designing algorithms and systems that follow the principle of least action can lead to more efficient processing and better performance.
Our findings contribute to a deeper understanding of the mechanisms underlying self-organization and offer a novel, quantitative approach to measuring organization in complex systems. This research opens up exciting possibilities for further exploration and practical applications, enhancing our ability to design and manage complex systems across various domains. By providing a quantitative measure of organization that can be applied universally, we enhance our ability to design and manage complex systems across various domains. Future research can build on our findings to explore the dynamics of self-organization in greater detail, develop new optimization strategies, and create more efficient and resilient systems.

9.0.1. Future Work

In Part 2 of this paper, we measure the entropy production of this system and include it in the positive feedback model of characteristics, leading to exponential and power-law solutions. We then verify that the entropy production also obeys power-law relationships with all of the other characteristics of the system. For example, comparing it to the internal entropy, we conclude that as the internal entropy is reduced, the external entropy production increases proportionally. This can be connected to the Maximum Entropy Production Principle, where internal entropy minimization leads to maximization of external entropy production; therefore, self-organization leads to internal entropy decrease and the formation of flow channels that maximize external entropy production.

Author Contributions

Conceptualization, G.G.; Theory, G.G.; Model, G.G.; methodology, M.B.; software, M.B.; validation, G.G. and M.B.; formal analysis, M.B.; investigation, G.G.; resources, G.G.; data curation, G.G.; writing—original draft preparation, G.G.; writing—review and editing, G.G.; visualization, M.B. and G.G.; supervision, G.G.; project administration, G.G.; funding acquisition, G.G. All authors have read and agreed to the published version of the manuscript.

Data Availability Statement

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Acknowledgments

The authors thank Assumption University for providing a creative atmosphere and funding, and its Honors Program, specifically Prof. Colby Davie, for continuous research support and encouragement. Matthew Brouillet thanks his parents for their encouragement. Georgi Georgiev thanks his wife for patience and support for this manuscript.

References

  1. Sagan, C. Cosmos; Random House: New York, 1980.
  2. Chaisson, E.J. Cosmic evolution; Harvard University Press, 2002.
  3. Kurzweil, R. The singularity is near: When humans transcend biology; Penguin, 2005.
  4. De Bari, B.; Dixon, J.; Kondepudi, D.; Vaidya, A. Thermodynamics, organisms and behaviour. Philosophical Transactions of the Royal Society A 2023, 381, 20220278.
  5. England, J.L. Self-organized computation in the far-from-equilibrium cell. Biophysics Reviews 2022, 3.
  6. Walker, S.I.; Davies, P.C. The algorithmic origins of life. Journal of the Royal Society Interface 2013, 10, 20120869.
  7. Walker, S.I. The new physics needed to probe the origins of life. Nature 2019, 569, 36–39.
  8. Georgiev, G.; Georgiev, I. The least action and the metric of an organized system. Open Systems & Information Dynamics 2002, 9, 371.
  9. Georgiev, G.Y.; Gombos, E.; Bates, T.; Henry, K.; Casey, A.; Daly, M. Free Energy Rate Density and Self-organization in Complex Systems. In Proceedings of ECCS 2014; Springer, 2016; pp. 321–327.
  10. Georgiev, G.Y.; Henry, K.; Bates, T.; Gombos, E.; Casey, A.; Daly, M.; Vinod, A.; Lee, H. Mechanism of organization increase in complex systems. Complexity 2015, 21, 18–28.
  11. Georgiev, G.Y.; Chatterjee, A.; Iannacchione, G. Exponential Self-Organization and Moore’s Law: Measures and Mechanisms. Complexity 2017, 2017.
  12. Butler, T.H.; Georgiev, G.Y. Self-Organization in Stellar Evolution: Size-Complexity Rule. In Efficiency in Complex Systems: Self-Organization Towards Increased Efficiency; Springer, 2021; pp. 53–80.
  13. Shannon, C.E. A Mathematical Theory of Communication. Bell System Technical Journal 1948, 27, 379–423.
  14. Jaynes, E.T. Information Theory and Statistical Mechanics. Physical Review 1957, 106, 620.
  15. Gell-Mann, M. Complexity Measures - an Article about Simplicity and Complexity. Complexity 1995, 1, 16–19.
  16. Yockey, H.P. Information Theory, Evolution, and The Origin of Life; Cambridge University Press, 2005.
  17. Crutchfield, J.P.; Feldman, D.P. Information Measures, Effective Complexity, and Total Information. Physical Review E 2003, 67, 061306.
  18. Williams, P.L.; Beer, R.D. Information-Theoretic Measures for Complexity Analysis. Chaos: An Interdisciplinary Journal of Nonlinear Science 2010, 20, 037115.
  19. Ay, N.; Olbrich, E.; Bertschinger, N.; Jost, J. Quantifying Complexity Using Information Theory, Machine Learning, and Algorithmic Complexity. Journal of Complexity 2013.
  20. Kolmogorov, A.N. Three approaches to the quantitative definition of information. Problems of Information Transmission 1965, 1, 1–7.
  21. Grassberger, P. Toward a quantitative theory of self-generated complexity. International Journal of Theoretical Physics 1986, 25, 907–938.
  22. Pincus, S.M. Approximate entropy as a measure of system complexity. Proceedings of the National Academy of Sciences 1991, 88, 2297–2301.
  23. Costa, M.; Goldberger, A.L.; Peng, C.K. Multiscale entropy analysis of complex physiologic time series. Physical Review Letters 2002, 89, 068102.
  24. Lizier, J.T.; Prokopenko, M.; Zomaya, A.Y. Local information transfer as a spatiotemporal filter for complex systems. Physical Review E 2008, 77, 026110.
  25. Rosso, O.A.; Larrondo, H.A.; Martin, M.T.; Plastino, A.; Fuentes, M.A. Distinguishing noise from chaos. Physical Review Letters 2007, 99, 154102.
  26. Maupertuis, P.L.M. de. Essay de cosmologie; De l’Imp. d’Elie Luzac: Netherlands, 1751.
  27. Goldstein, H. Classical Mechanics; Addison-Wesley, 1980.
  28. Taylor, J.C. Hidden unity in nature’s laws; Cambridge University Press, 2001.
  29. Lauster, M. On the Principle of Least Action and Its Role in the Alternative Theory of Nonequilibrium Processes. In Variational and Extremum Principles in Macroscopic Systems; Elsevier, 2005; pp. 207–225.
  30. Nath, S. Novel molecular insights into ATP synthesis in oxidative phosphorylation based on the principle of least action. Chemical Physics Letters 2022, 796, 139561.
  31. Bersani, A.M.; Caressa, P. Lagrangian descriptions of dissipative systems: a review. Mathematics and Mechanics of Solids 2021, 26, 785–803.
  32. Endres, R.G. Entropy production selects nonequilibrium states in multistable systems. Scientific Reports 2017, 7, 14437.
  33. Martyushev, L.M.; Seleznev, V.D. Maximum entropy production principle in physics, chemistry and biology. Physics Reports 2006, 426, 1–45.
  34. Martyushev, L.M. Maximum entropy production principle: History and current status. Physics-Uspekhi 2021, 64, 558.
  35. Dewar, R. Information theory explanation of the fluctuation theorem, maximum entropy production and self-organized criticality in non-equilibrium stationary states. Journal of Physics A: Mathematical and General 2003, 36, 631.
  36. Dewar, R.C. Maximum entropy production and the fluctuation theorem. Journal of Physics A: Mathematical and General 2005, 38, L371.
  37. Jaynes, E.T. Information theory and statistical mechanics. Physical Review 1957, 106, 620.
  38. Jaynes, E.T. Information theory and statistical mechanics. II. Physical Review 1957, 108, 171.
  39. Lucia, U. Entropy generation: Minimum inside and maximum outside. Physica A: Statistical Mechanics and its Applications 2014, 396, 61–65.
  40. Lucia, U.; Grazzini, G. The second law today: using maximum-minimum entropy generation. Entropy 2015, 17, 7786–7797.
  41. Gay-Balmaz, F.; Yoshimura, H. A Lagrangian variational formulation for nonequilibrium thermodynamics. Part II: continuum systems. Journal of Geometry and Physics 2017, 111, 194–212.
  42. Gay-Balmaz, F.; Yoshimura, H. From Lagrangian mechanics to nonequilibrium thermodynamics: A variational perspective. Entropy 2019, 21, 8.
  43. Gay-Balmaz, F.; Yoshimura, H. Systems, variational principles and interconnections in non-equilibrium thermodynamics. Philosophical Transactions of the Royal Society A 2023, 381, 20220280.
  44. Kaila, V.R.; Annila, A. Natural selection for least action. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 2008, 464, 3055–3070.
  45. Annila, A.; Salthe, S. Physical foundations of evolutionary theory. 2010.
  46. Munkhammar, J. Quantum mechanics from a stochastic least action principle. Foundational Questions Institute Essay 2009.
  47. Zhao, T.; Hua, Y.C.; Guo, Z.Y. The principle of least action for reversible thermodynamic processes and cycles. Entropy 2018, 20, 542.
  48. García-Morales, V.; Pellicer, J.; Manzanares, J.A. Thermodynamics based on the principle of least abbreviated action: Entropy production in a network of coupled oscillators. Annals of Physics 2008, 323, 1844–1858.
  49. Wang, Q. Maximum entropy change and least action principle for nonequilibrium systems. Astrophysics and Space Science 2006, 305, 273–281.
  50. Ozawa, H.; Ohmura, A.; Lorenz, R.D.; Pujol, T. The second law of thermodynamics and the global climate system: A review of the maximum entropy production principle. Reviews of Geophysics 2003, 41.
  51. Niven, R.K.; Andresen, B. Jaynes’ maximum entropy principle, Riemannian metrics and generalised least action bound. In Complex Physical, Biophysical and Econophysical Systems; World Scientific, 2010; pp. 283–317.
  52. Herglotz, G.B. Lectures at the University of Göttingen; University of Göttingen: Göttingen, Germany, 1930.
  53. Georgieva, B.; Guenther, R.; Bodurov, T. Generalized variational principle of Herglotz for several independent variables. First Noether-type theorem. Journal of Mathematical Physics 2003, 44, 3911–3927.
  54. Tian, X.; Zhang, Y. Noether’s theorem for fractional Herglotz variational principle in phase space. Chaos, Solitons & Fractals 2019, 119, 50–54.
  55. Beretta, G.P. The fourth law of thermodynamics: steepest entropy ascent. Philosophical Transactions of the Royal Society A 2020, 378, 20190168.
  56. Bonner, J.T. Perspective: the size-complexity rule. Evolution 2004, 58, 1883–1890.
  57. Carneiro, R.L. On the relationship between size of population and complexity of social organization. Southwestern Journal of Anthropology 1967, 23, 234–243.
  58. West, G.B. Scale: the universal laws of growth, innovation, sustainability, and the pace of life in organisms, cities, economies, and companies; Penguin, 2017.
  59. Gershenson, C.; Trianni, V.; Werfel, J.; Sayama, H. Self-organization and artificial life. Artificial Life 2020, 26, 391–408.
  60. Sayama, H. Introduction to the modeling and analysis of complex systems; Open SUNY Textbooks, 2015.
  61. Carlson, J.M.; Doyle, J. Complexity and robustness. Proceedings of the National Academy of Sciences 2002, 99, 2538–2545.
  62. Heylighen, F.; Joslyn, C. Cybernetics and second-order cybernetics. Encyclopedia of Physical Science & Technology 2001, 4, 155–170.
  63. Jevons, W.S. The coal question; an inquiry concerning the progress of the nation and the probable exhaustion of our coal-mines; Macmillan, 1866.
  64. Berkhout, P.H.; Muskens, J.C.; Velthuijsen, J.W. Defining the rebound effect. Energy Policy 2000, 28, 425–432.
  65. Hildenbrand, W. On the "law of demand". Econometrica: Journal of the Econometric Society 1983, 997–1019.
  66. Saunders, H.D. The Khazzoom-Brookes postulate and neoclassical growth. The Energy Journal 1992, 13, 131–148.
  67. Downs, A. Stuck in traffic: Coping with peak-hour traffic congestion; Brookings Institution Press, 2000.
  68. Feynman, R.P. Space-Time Approach to Non-Relativistic Quantum Mechanics. Reviews of Modern Physics 1948, 20, 367–387.
  69. Gauß, C.F. Über ein neues allgemeines Grundgesetz der Mechanik; Walter de Gruyter: Berlin/New York, 1829.
  70. LibreTexts. 5.3: The Uniform Distribution, 2023. Accessed: 2024-07-15.
  71. Kleiber, M.; et al. Body size and metabolism. Hilgardia 1932, 6, 315–353.
Figure 2. Positive feedback model between the eight quantities in our simulation.
Table 1. Settings of the properties in this simulation that affect the behavior of the ants.
Parameter | Value | Description
ant-speed | 1 patch/tick | Constant speed
wiggle-range | 50 degrees | Random directional change, from -25 to +25
view-angle | 135 degrees | Angle of cone where ants can detect pheromone
ant-size | 2 patches | Radius of ants, affects radius of pheromone viewing cone
Table 2. Settings of the properties in this simulation that affect the behavior of the pheromone.
Parameter | Value | Description
Diffusion rate | 0.7 | Rate at which pheromones diffuse
Evaporation rate | 0.06 | Rate at which pheromones evaporate
Initial pheromone | 30 units | Initial amount of pheromone deposited
Table 3. Settings of the properties in this simulation that affect various other conditions.
Parameter | Value | Description
projectile-motion | off | Ants have constant energy
start-nest-only | off | Ants start randomly
max-food | 0 | Food is infinite; food will disappear if this is greater than 0
constant-ants | on | Number of ants is constant
world-size | 40 | World ranges from -20 to +20; note that the true world size is 41x41
Table 4. Settings of the properties in this simulation that affect the position and size of the food and the nest.
Parameter | Value | Description
food-nest-size | 5 | The length and width of the food and nest boxes
foodx | -18 | The position of the food in the x-direction
foody | 0 | The position of the food in the y-direction
nestx | +18 | The position of the nest in the x-direction
nesty | 0 | The position of the nest in the y-direction
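For convenience, the settings in Tables 1 through 4 can be gathered into one parameter dictionary when scripting batch runs or analysis from Python. This is only a bookkeeping sketch; the keys mirror the tables and are not an interface of the model itself.

    # Parameter values from Tables 1-4, gathered for scripted batch runs (sketch).
    SIMULATION_SETTINGS = {
        # ant behaviour (Table 1)
        "ant-speed": 1,            # patches per tick, constant
        "wiggle-range": 50,        # degrees, random heading change of -25..+25
        "view-angle": 135,         # degrees, pheromone-detection cone
        "ant-size": 2,             # patches, sets the sensing radius
        # pheromone (Table 2)
        "diffusion-rate": 0.7,
        "evaporation-rate": 0.06,
        "initial-pheromone": 30,   # units deposited
        # run conditions (Table 3)
        "projectile-motion": False,
        "start-nest-only": False,
        "max-food": 0,             # 0 means infinite food
        "constant-ants": True,
        "world-size": 40,          # coordinates -20..+20 (41x41 patches)
        # food and nest geometry (Table 4)
        "food-nest-size": 5,
        "foodx": -18, "foody": 0,
        "nestx": 18, "nesty": 0,
    }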
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Short Biography of Authors

Matthew Brouillet is a student at Washington University as part of the Dual Degree program with Assumption University. His major is Mechanical Engineering, and he has experience with computer programming. He developed the NetLogo programs for these simulations and utilized Python code to analyze the data. He is working with Dr. Georgiev to publish numerous papers in the field of self-organization.
Dr. Georgi Y. Georgiev is a Professor of Physics at Assumption University and Worcester Polytechnic Institute. He earned his Ph.D. in Physics from Tufts University, Medford, MA. His research focuses on the physics of complex systems, exploring the role of variational principles in self-organization, the principle of least action, path integrals, and the Maximum Entropy Production Principle. Dr. Georgiev has developed a new model that explains the mechanism, driving force, and attractor of self-organization. He has published extensively in these areas and he has been an organizer of international conferences on complex systems.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.