Preprint
Article

HE-DEPSO Algorithm for Textures Optimization with Application in Particle Physics

A peer-reviewed article of this preprint also exists.

Submitted: 14 August 2024

Posted: 16 August 2024

Abstract
Within the phenomenology of particle physics, the theoretical model of 4-zero textures is validated using a chi-square criterion that compares experimental data with the computational results of the model. Traditionally, analytical methods that often imply simplifications, combined with computational analysis, have been used to validate texture models. In this paper, we propose a new meta-heuristic variant of the differential evolution algorithm, called "HE-DEPSO", that incorporates aspects of the particle swarm optimization algorithm to obtain chi-squared values below a bound that exhaustive and traditional algorithms cannot reach. The results show that the proposed algorithm can optimize the chi-square function according to the required criteria. We compare simulated data with experimental data in the allowed search region, thereby validating the 4-zero texture model.
Keywords: 
Subject: Computer Science and Mathematics - Other

1. Introduction

Broadly speaking, physics can be classified into three areas: the theoretical area, which explains and builds understanding of physical phenomena through a mathematical formalism; the experimental area, which groups the disciplines related to data acquisition and to the design and performance of experiments; and, finally, the area that links the two, known as phenomenology, which is in charge of 1) validating theoretical models, 2) establishing how to measure their parameters, 3) distinguishing one model from another, and 4) studying the experimental consequences of these models. This paper takes a phenomenological approach by providing a solution to the validation of a theoretical model of particle physics, one built to unveil the mechanisms of fermion mass generation and to reproduce the elements of the matrix V_CKM, known as the mixing matrix.
The beginning of these models goes back to the first years of the 1970s, shortly after the establishment of the Standard Model (SM) of Particle Physics. Since then, different approaches have been developed in the context of theoretical and phenomenological models, which can be broadly classified as follows: Radiative mechanisms [1,2]; Textures [3,4,5]; Symmetries between families [6,7]; and Seesaw mechanisms [8,9,10]. These approaches are interrelated.
The texture formalism was born by considering that certain entries of the mass matrix are zero, such that the matrix that diagonalizes it, and hence the matrix V_CKM, can be computed analytically. In 1977, Harald Fritzsch created this formalism by proposing 6-zero hermitian textures as a viable model [11]; in 2005, with the experimental data of those years, he found 4-zero hermitian textures viable to generate the quark masses and the mixing matrix [12]. However, given the precision of the current experimental data, it is worthwhile to reassess the feasibility of these texture models.
There are numerical works on 4-zero texture models; however, their authors do not describe in detail the numerical techniques and algorithms used [13]. The numerical analysis of texture models requires a χ² criterion that establishes when the theoretical part of the model agrees with the experimental part [14]. That is, to validate the model, a function (which we will call χ²(X)) is built, and permissible values of the free parameters X of the model must be found such that this function takes the minimum possible value, greater than 0 but less than 1. In other words, it is necessary to optimize said function under a certain threshold. For the particular case of the function χ²(X) obtained from the texture formalism, the difficulty comes from the cumbersome algebraic manipulation of the expressions involved, which complicates the application of classic optimization techniques; thus, alternative optimization techniques are required.
To the best of the authors’ knowledge, the works where bio-inspired optimization algorithms have been used within particle physics are the following: in experimental contexts, the particle swarm optimization (PSO) algorithm, as well as genetic algorithms (GA), have been implemented for the hyperparameter optimization of machine learning (ML) models used in the analysis of data obtained in high-energy physics experiments [15]; in the optimization of the design of particle accelerators, the differential evolution (DE) algorithm has proven quite effective [16,17]; regarding phenomenology, genetic algorithms have been used to discriminate models of supersymmetric theories [18].
As can be seen, the incursion of bio-inspired optimization algorithms in particle physics has been very limited, even when the results are favorable. For this reason, further application of these techniques and algorithms in this type of particle physics area, especially in texture formalism, is of great interest.
The DE algorithm is one of the evolutionary algorithms that has stood out the most in recent times due to its simplicity, power, and efficiency [19,20]; however, like other evolutionary algorithms, it is prone to premature convergence and stagnation in local minima [19,21]. The strategies implemented to solve these problems can be classified as follows [22]:
  • Change the mutation strategy. The mutation phase of the DE algorithm is important since it allows the integration of new individuals into the existing population, thus influencing its performance. Algorithms such as CoDE [23] have introduced new mutation strategies, or combined several existing ones, in order to improve the efficiency of the DE algorithm.
  • Change the parameter control. The DE algorithm is sensitive to the configuration of its main parameters: the scale factor F and the crossover probability C r [19,24]. Self-adaptive control of these parameters has been shown to improve the performance of the DE algorithm significantly. In this sense, the SHADE [25] and SaDE [26] algorithms represent two fairly well-known variants.
  • Incorporation of population mechanisms. The way in which the population is handled in the DE algorithm can improve its performance. Techniques such as genotypic topology [27], opposition-based initialization [28], and external population archives, as seen in JADE [29], have shown positive effects.
  • Hybridization with other optimization algorithms. One way to improve the performance of the DE algorithm is to take advantage of the operators’ strengths from other algorithms and incorporate them into the structure of the DE algorithm through a hybridization process [20,30]. Hybridization with other computational intelligence algorithms, such as Artificial Neural Networks (ANN), Ant Colony Optimization (ACO), and Particle Swarm Optimization (PSO), has marked a trend within the last decade [20,31].
The self-adaptive Differential Evolution algorithm based on Particle Swarm Optimization (DEPSO) [32] is a recent variant of the DE algorithm that integrates the PSO mutation strategy within its structure. DEPSO employs a probabilistic selection technique to choose between two mutation strategies: a new elite-based mutation strategy called DE/e-rand/1, a modification of the DE/rand/1 strategy, and the mutation strategy of the PSO algorithm. This probabilistic selection technique enables DEPSO to improve the balance between exploitation and exploration, resulting in significantly better performance compared to both DE and PSO on various single-objective optimization problems.
This paper proposes a new variant of the DEPSO algorithm called Historical Elite Differential Evolution Based on Particle Swarm Optimization (HE-DEPSO) to improve optimization performance in solving complex single-objective problems, with a specific focus on optimizing the χ² criterion for the 4-zero texture model of high-energy physics. The proposed variant addresses areas of opportunity found in DEPSO by introducing a new mutation strategy, named DE/current-to-EHE/1, which exploits information from the elite individuals of the population and incorporates historical data from the evolutionary process to improve the balance between exploration and exploitation, particularly during the early stages of the search. Additionally, HE-DEPSO employs the self-adaptive parameter control mechanism of the SHADE algorithm to reduce the sensitivity of the algorithm’s parameters. To test the HE-DEPSO algorithm’s performance, it was compared against other optimization algorithms, including DE, PSO, CoDE, SHADE, and DEPSO, using the CEC 2017 single-objective benchmark function set; HE-DEPSO outperformed the other algorithms in terms of solution quality. Finally, the validation of the 4-zero texture model was conducted, optimizing the χ² function and, at the same time, comparing the performance of our proposal against the DE, PSO, CoDE, SHADE, and DEPSO algorithms for this particular application. The results are encouraging and expand the use of bio-inspired methods in high-energy physics by integrating a metaheuristic approach.
The remainder of the paper is structured as follows: Section 2 contains the definition of the problem to be solved, including the definition of the χ² criterion; Section 3 reviews the versions of the PSO, DE and DEPSO algorithms used here; Section 4 explains our proposal, the HE-DEPSO algorithm; Section 5 subjects our algorithm to benchmark tests, while Section 6 presents the validation problem of the 4-zero texture model. Finally, Section 7 presents the conclusions.

2. Problem Definition

Within the scope of the SM, the masses of the quarks come from hermitian 3 × 3 matrices known as mass matrices (one for u-type quarks and another for d-type quarks) [33,34,35]: the masses are the absolute values of their eigenvalues, and the matrix V_CKM is the product of the matrix that diagonalizes the u-type mass matrix and the matrix that diagonalizes the d-type mass matrix. However, due to the mathematical formalism used, the mass matrices remain entirely unknown and, consequently, the masses of the quarks and the mixing matrix V_CKM cannot be theoretically predicted; the experiment provides us with the numerical values of these quantities.
The mixing matrix V_CKM can be written in a general way as:

V_{CKM} = \begin{pmatrix} V_{ud} & V_{us} & V_{ub} \\ V_{cd} & V_{cs} & V_{cb} \\ V_{td} & V_{ts} & V_{tb} \end{pmatrix},
and is a unitary matrix containing information about the probability of transition between u-type quarks (left-hand index) and d-type quarks (right-hand index) through the weak interaction [36]. That is, |V_us| quantifies the transition probability between the quark u and the quark s through the interaction with the boson W±. Over many years and different collaborations, the magnitudes of the elements of the mixing matrix V_CKM have been experimentally measured with an accuracy of up to 10^{-5}, and it is well known that four quantities (three angles and one phase) are needed to have a parameterization that adequately describes the matrix V_CKM [37,38]. In this work we choose the Chau-Keung parametrization [37]; the three corresponding angles θ_13, θ_12, θ_23 are obtained from the three magnitudes |V_us|, |V_ub| and |V_cb|, and the phase δ_13 is obtained from the Jarlskog invariant J through the following relations:
\sin\theta_{13} = |V_{ub}|,

\sin\theta_{12} = \frac{|V_{us}|}{\sqrt{1 - |V_{ub}|^2}},

\sin\theta_{23} = \frac{|V_{cb}|}{\sqrt{1 - |V_{ub}|^2}},

\sin\delta_{13} = \frac{J}{\sqrt{1-\sin^2\theta_{12}}\,\sin\theta_{12}\,\left[1-\sin^2\theta_{13}\right]\sin\theta_{13}\,\sqrt{1-\sin^2\theta_{23}}\,\sin\theta_{23}},
where J = \mathrm{Im}\,(V_{us} V_{cb} V_{ub}^{*} V_{cs}^{*}). Hence, we choose |V_{us}|, |V_{ub}|, |V_{cb}| and J as independent quantities.
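As a quick numerical illustration, the relations above can be coded directly. This is a sketch under our own naming; the input magnitudes used below are illustrative round numbers of the order of the measured values, not the data set used later in the paper:

```python
import math

def mixing_angles(Vus, Vub, Vcb, J):
    """Chau-Keung angles and CP phase from |Vus|, |Vub|, |Vcb| and J,
    following Equations (2)-(5)."""
    s13 = Vub
    s12 = Vus / math.sqrt(1.0 - Vub**2)
    s23 = Vcb / math.sqrt(1.0 - Vub**2)
    c12 = math.sqrt(1.0 - s12**2)
    c13sq = 1.0 - s13**2          # [1 - sin^2(theta_13)]
    c23 = math.sqrt(1.0 - s23**2)
    sin_d13 = J / (c12 * s12 * c13sq * s13 * c23 * s23)
    return s12, s13, s23, sin_d13
```

With magnitudes of order |V_us| ≈ 0.22, |V_ub| ≈ 0.004, |V_cb| ≈ 0.04 and J ≈ 3 × 10⁻⁵, the resulting sin δ_13 comes out close to 1, consistent with a large CP phase.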
Within the SM context, the mixing matrix, V C K M , is defined by [37]:
V_{CKM} = U_u^{\dagger} U_d,
where U u is the matrix that diagonalizes the mass matrix of the u-type quarks, and U d is the matrix that diagonalizes the d-type quarks. It is here where the texture formalism is born, which consists of proposing structures with zeros in some entries of the mass matrix in such a way that we can find the matrices that diagonalize it and be able to calculate analytically the mixing matrix V C K M and validate the chosen texture structure.
Without loss of generality, the mass matrices M u and M d are considered hermitian so that the general mass structure has the form:
M_q = \begin{pmatrix} E_q & D_q & F_q \\ D_q^{*} & C_q & B_q \\ F_q^{*} & B_q^{*} & A_q \end{pmatrix},
where the index q runs over the labels u, d. The elements A_q, C_q and E_q are real, while B_q, D_q and F_q are complex and are usually written in their polar form Z_q = |Z_q| e^{i\phi_{Z_q}}, where |Z_q| is the magnitude and \phi_{Z_q} its angular phase (Z = B, D, F).
A matrix of the type four-zero textures [12] is formed from the above matrix by taking the entries ( 1 , 1 ) , ( 1 , 3 ) and ( 3 , 1 ) equal to zero. Thus, we arrive at the following matrix structure:
M_q = \begin{pmatrix} 0 & D_q & 0 \\ D_q^{*} & C_q & B_q \\ 0 & B_q^{*} & A_q \end{pmatrix}.
In references [3,12] it is shown that this matrix can be diagonalized by a unitary matrix as follows:
U_q^{\dagger} M_q U_q = \mathrm{diag}\{\lambda_{1q}, \lambda_{2q}, \lambda_{3q}\}, \qquad O_q^{T} P_q^{\dagger} M_q P_q O_q = \mathrm{diag}\{\lambda_{1q}, \lambda_{2q}, \lambda_{3q}\}, \qquad U_q = P_q O_q,
where d i a g { λ 1 q , λ 2 q , λ 3 q } denotes a diagonal matrix and λ i q denotes each of the three eigenvalues of M q . The matrices P q and O q are given by:
P_q = \begin{pmatrix} 1 & 0 & 0 \\ 0 & e^{i\phi_{D_q}} & 0 \\ 0 & 0 & e^{i(\phi_{D_q}+\phi_{B_q})} \end{pmatrix},
and
O_q = \begin{pmatrix} (O_q)_{11} & (O_q)_{12} & (O_q)_{13} \\ (O_q)_{21} & (O_q)_{22} & (O_q)_{23} \\ (O_q)_{31} & (O_q)_{32} & (O_q)_{33} \end{pmatrix} = \begin{pmatrix} \sqrt{\frac{\lambda_{2q}\lambda_{3q}(A_q-\lambda_{1q})}{A_q(\lambda_{2q}-\lambda_{1q})(\lambda_{3q}-\lambda_{1q})}} & \eta_q\sqrt{\frac{\lambda_{1q}\lambda_{3q}(\lambda_{2q}-A_q)}{A_q(\lambda_{2q}-\lambda_{1q})(\lambda_{3q}-\lambda_{2q})}} & \sqrt{\frac{\lambda_{1q}\lambda_{2q}(A_q-\lambda_{3q})}{A_q(\lambda_{3q}-\lambda_{1q})(\lambda_{3q}-\lambda_{2q})}} \\ \eta_q\sqrt{\frac{\lambda_{1q}(\lambda_{1q}-A_q)}{(\lambda_{2q}-\lambda_{1q})(\lambda_{3q}-\lambda_{1q})}} & \sqrt{\frac{\lambda_{2q}(A_q-\lambda_{2q})}{(\lambda_{2q}-\lambda_{1q})(\lambda_{3q}-\lambda_{2q})}} & \sqrt{\frac{\lambda_{3q}(\lambda_{3q}-A_q)}{(\lambda_{3q}-\lambda_{1q})(\lambda_{3q}-\lambda_{2q})}} \\ \eta_q\sqrt{\frac{\lambda_{1q}(A_q-\lambda_{2q})(A_q-\lambda_{3q})}{A_q(\lambda_{2q}-\lambda_{1q})(\lambda_{3q}-\lambda_{1q})}} & \sqrt{\frac{\lambda_{2q}(A_q-\lambda_{1q})(\lambda_{3q}-A_q)}{A_q(\lambda_{2q}-\lambda_{1q})(\lambda_{3q}-\lambda_{2q})}} & \sqrt{\frac{\lambda_{3q}(A_q-\lambda_{1q})(A_q-\lambda_{2q})}{A_q(\lambda_{3q}-\lambda_{1q})(\lambda_{3q}-\lambda_{2q})}} \end{pmatrix},
Taking λ 3 q > 0 and A q > 0 , the relations between λ i q with the physical masses of the quarks are:
(\lambda_{1u}, \lambda_{2u}, \lambda_{3u}) = (-\eta_u m_u, \eta_u m_c, m_t),

(\lambda_{1d}, \lambda_{2d}, \lambda_{3d}) = (-\eta_d m_d, \eta_d m_s, m_b),
where m_u, m_c, m_t, m_d, m_s and m_b are the masses of the u, c, t, d, s and b quarks, respectively; their experimental values are presented in Appendix A. In this work, the same four-zero texture structure is considered for the u-type and d-type quark mass matrices (a parallel mass structure), so the index q takes the two values q = u and q = d.
From Equation (9), it is noticed that the elements of the matrix O_q depend on the free parameters η_q and A_q. The first takes the values +1 and −1 and tells us which eigenvalue of the mass matrix is negative: when η_q = 1, the first eigenvalue λ_{1q} is negative and the second eigenvalue λ_{2q} is positive; when η_q = −1, the first eigenvalue λ_{1q} is positive and the second eigenvalue λ_{2q} is negative. The combinations of signs of the η_u and η_d parameters define the different cases of study to be considered (see Table 1). The second pair of free parameters are A_u and A_d, whose values are restricted to the intervals m_c < A_u < m_t and m_s < A_d < m_b to ensure that the elements of the O_u and O_d matrices are real.
From the above, the mixing matrix V_CKM predicted by the four-zero texture model is given by:
V_{CKM}^{th} \equiv O_u^{T}\, P_u^{\dagger} P_d\, O_d,
in an explicit form:
(V_{CKM}^{th})_{ij} = (O_u)_{1i}(O_d)_{1j} + (O_u)_{2i}(O_d)_{2j}\, e^{-i\phi_1} + (O_u)_{3i}(O_d)_{3j}\, e^{-i(\phi_1+\phi_2)},
where the phases ϕ 1 and ϕ 2 are defined as:
\phi_1 = \phi_{D_u} - \phi_{D_d}, \qquad \phi_2 = \phi_{B_u} - \phi_{B_d}.
These phases are measured in radians, with principal argument in [0, 2π], and the indices i and j correspond, respectively, to the indices (u, c, t) and (d, s, b).
The magnitude of the elements of the mixing matrix is given by:
|(V_{CKM}^{th})_{ij}| = \Big\{ \left[(O_u)_{1i}(O_d)_{1j}\right]^2 + \left[(O_u)_{2i}(O_d)_{2j}\right]^2 + \left[(O_u)_{3i}(O_d)_{3j}\right]^2 + 2\left[(O_u)_{1i}(O_d)_{1j}\right]\left[(O_u)_{2i}(O_d)_{2j}\right]\cos(\phi_1) + 2\left[(O_u)_{1i}(O_d)_{1j}\right]\left[(O_u)_{3i}(O_d)_{3j}\right]\cos(\phi_1+\phi_2) + 2\left[(O_u)_{2i}(O_d)_{2j}\right]\left[(O_u)_{3i}(O_d)_{3j}\right]\cos(\phi_2) \Big\}^{1/2}.
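Equation (15) is simply the expanded modulus of the three-term complex sum in Equation (13). A small sketch can cross-check the cosine expansion against a direct complex evaluation; the numbers below are arbitrary test values, and a, b, c stand for the products (O_u)_{ki}(O_d)_{kj}:

```python
import cmath
import math

def vckm_magnitude(a, b, c, phi1, phi2):
    """|a + b e^{i phi1} + c e^{i(phi1+phi2)}| via the cosine expansion
    of Equation (15); the sign convention of the phases is irrelevant
    because only cosines appear."""
    return math.sqrt(a*a + b*b + c*c
                     + 2.0*a*b*math.cos(phi1)
                     + 2.0*a*c*math.cos(phi1 + phi2)
                     + 2.0*b*c*math.cos(phi2))

# Direct complex evaluation for comparison (arbitrary values):
a, b, c, p1, p2 = 0.3, -0.5, 0.2, 1.1, 2.4
direct = abs(a + b*cmath.exp(1j*p1) + c*cmath.exp(1j*(p1 + p2)))
```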
At this point, it is essential to emphasize that analytical expressions are obtained for the elements of the mixing matrix predicted by the four-zero texture formalism; with this information, it is possible to construct a more complete theory than the Standard Model, one in which the origin of the mixing matrix V_CKM is explained.
It only remains to validate the theoretical model of textures with 4 zeros, that is, to find the ranges of the free parameters A_u, A_d, φ_1 and φ_2 that agree with the experimental values of the matrix V_CKM. For this, we define a function χ²(X) and use a chi-square criterion [39,40] established by:
0 < \frac{\chi^2(X)}{4} < 1,
where X = ( A u , A d , ϕ 1 , ϕ 2 ) and
\chi^2(X) = \frac{\left(|(V_{CKM}^{th})_{us}(X)| - |V_{us}|\right)^2}{\sigma_{V_{us}}^2} + \frac{\left(|(V_{CKM}^{th})_{ub}(X)| - |V_{ub}|\right)^2}{\sigma_{V_{ub}}^2} + \frac{\left(|(V_{CKM}^{th})_{cb}(X)| - |V_{cb}|\right)^2}{\sigma_{V_{cb}}^2} + \frac{\left(J^{th}(X) - J\right)^2}{\sigma_{J}^2},
where the superscript "th" refers to Equation (15), and the quantities without superscript are the experimental data, with uncertainties σ_{V_kl} and σ_J (see Equations (A1) and (A2)).
Although, at first sight, the mathematically constructed function χ²(X) turns out to have a simple structure and the number of free parameters is small, the difficulty of finding the numerical ranges that fulfill the condition given in Equation (16) comes from the cumbersome composition of functions that constitute it. In order to give a notion of the topographic relief of this function, different projections of the function χ²(X) onto different planes are shown in Figure 1. Figure 1a shows the projection onto the plane A_u-A_d (holding fixed values φ_1 = c_1 = const and φ_2 = c_2 = const); that is, the dependence of the function χ²(X) on the variables A_u and A_d is shown (right graph) together with the corresponding contour lines (left graph). Similarly, Figure 1b corresponds to the projection of the function χ²(X) onto A_u and φ_1 (setting A_d = c_3 = const and φ_2 = c_4 = const). Analogously, the remaining panels show the behavior of the function χ²(X) in the plane A_u-φ_2 (Figure 1c), in the plane A_d-φ_1 (Figure 1d), in the plane A_d-φ_2 (Figure 1e), and in the plane φ_1-φ_2 (Figure 1f).
The following color code has been established: regions toward intense red correspond to larger values of χ²(X), while regions toward intense blue correspond to smaller values of χ²(X). We point out that the function to optimize, χ²(X): R⁴ → R, is defined in (17), with the variables A_u, A_d, φ_1, φ_2 bounded on the intervals [m_c, m_t], [m_s, m_b], [0, 2π], [0, 2π]; as can be observed from the graphs (Figure 1), the topography over which it is intended to optimize is rugged. With this, one sees the opportunity to explore the use of alternative optimization techniques such as bio-inspired algorithms.
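To make the objective function concrete, the sketch below assembles χ²(X) end to end. The quark masses and the experimental central values and uncertainties are illustrative placeholders, not the Appendix A data; and, instead of the closed form of Equation (9), O_q is obtained by numerically diagonalizing the real four-zero texture, whose entries C_q, |D_q|, |B_q| are reconstructed from the matrix invariants (trace, determinant, and sum of principal 2×2 minors):

```python
import numpy as np

# Illustrative placeholder inputs (GeV); NOT the Appendix A data.
MU, MC, MT = 0.0022, 1.27, 172.7     # u-type quark masses
MD, MS, MB = 0.0047, 0.093, 4.18     # d-type quark masses
EXP = {"Vus": 0.2243, "Vub": 0.0038, "Vcb": 0.0408, "J": 3.1e-5}
SIG = {"Vus": 0.0008, "Vub": 0.0002, "Vcb": 0.0014, "J": 0.2e-5}

def O_matrix(m1, m2, m3, A, eta=1):
    """Orthogonal diagonalizer of the real four-zero texture, built by
    numerical diagonalization instead of the closed form of Eq. (9)."""
    l1, l2, l3 = -eta * m1, eta * m2, m3             # eigenvalues, Eqs. (10)-(11)
    C = l1 + l2 + l3 - A                             # trace invariant
    D = np.sqrt(-l1 * l2 * l3 / A)                   # determinant invariant
    B = np.sqrt((A - l1) * (A - l2) * (l3 - A) / A)  # second invariant
    Mbar = np.array([[0.0, D, 0.0], [D, C, B], [0.0, B, A]])
    w, O = np.linalg.eigh(Mbar)
    idx = [int(np.argmin(np.abs(w - l))) for l in (l1, l2, l3)]
    return O[:, idx]                                 # columns ordered as (l1, l2, l3)

def vckm_th(Au, Ad, phi1, phi2):
    """Theoretical mixing matrix, Eqs. (12)-(14); the overall phase
    convention does not affect the moduli entering chi^2."""
    Ou = O_matrix(MU, MC, MT, Au)
    Od = O_matrix(MD, MS, MB, Ad)
    P = np.diag([1.0, np.exp(1j * phi1), np.exp(1j * (phi1 + phi2))])
    return Ou.T @ P @ Od

def chi2(X):
    """Chi-square of Eq. (17) for X = (Au, Ad, phi1, phi2)."""
    Au, Ad, phi1, phi2 = X
    V = vckm_th(Au, Ad, phi1, phi2)
    Jth = np.imag(V[0, 1] * V[1, 2] * np.conj(V[0, 2]) * np.conj(V[1, 1]))
    return ((np.abs(V[0, 1]) - EXP["Vus"]) ** 2 / SIG["Vus"] ** 2
            + (np.abs(V[0, 2]) - EXP["Vub"]) ** 2 / SIG["Vub"] ** 2
            + (np.abs(V[1, 2]) - EXP["Vcb"]) ** 2 / SIG["Vcb"] ** 2
            + (Jth - EXP["J"]) ** 2 / SIG["J"] ** 2)
```

Any of the optimizers reviewed in Section 3 can then be run on chi2 over the box [m_c, m_t] × [m_s, m_b] × [0, 2π] × [0, 2π].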

3. Review of PSO, DE and DEPSO

3.1. Particle Swarm Optimization

The Particle Swarm Optimization (PSO) algorithm [41] is one of the most popular swarm intelligence-based algorithms. It models the behavior of groups of individuals, such as flocks, to solve complex optimization problems in D-dimensional space. The algorithm performs a search for the global optimum using a swarm of N P particles. For the i-th particle of the swarm X i , t ( i = 1 , 2 , . . . , N P ), its position and velocity at iteration t are represented by X i , t = { x i , t 1 , x i , t 2 , . . . , x i , t D } and V i , t = { v i , t 1 , v i , t 2 , . . . , v i , t D } respectively. At each iteration the position and velocity of each particle is updated taking into account the information of the best solution found by the swarm X g b e s t , t , and the best solution found individually X p b e s t , t by each particle. In the standard version of the PSO algorithm, the way in which the updating of the position and velocity of each particle is performed is carried out according to the following expressions [42]:
V_{i,t+1} = \omega \cdot V_{i,t} + c_1 \cdot r_1 \cdot (X_{gbest,t} - X_{i,t}) + c_2 \cdot r_2 \cdot (X_{pbest,t} - X_{i,t}),

X_{i,t+1} = X_{i,t} + V_{i,t+1},
where c_1 and c_2 are the social and cognitive coefficients, respectively, which commonly take the values c_1 = c_2 = 2; r_1 and r_2 are two values selected randomly within the interval [0, 1]; and ω is known as the inertia parameter, whose aim is to provide a better balance between global and local search. The inertia parameter ω is updated at each iteration as follows:
\omega = \omega_{max} - \frac{t \cdot (\omega_{max} - \omega_{min})}{t_{max}},
where t refers to the current iteration, t_max is the maximum number of iterations, and ω_min and ω_max are the minimum and maximum values of the inertia factor, which are generally set to ω_min = 0.4 and ω_max = 0.9.
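A minimal implementation of Equations (18)-(20) fits in a few lines. This is an illustrative sketch (function and parameter names are ours, not from a PSO library), using the default settings quoted above:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(f, bounds, NP=30, t_max=200, c1=2.0, c2=2.0, w_max=0.9, w_min=0.4):
    """Standard PSO following Eqs. (18)-(20); minimization over a box."""
    lo, hi = bounds
    D = lo.size
    X = rng.uniform(lo, hi, (NP, D))          # random initial positions
    V = np.zeros((NP, D))                     # zero initial velocities
    pbest = X.copy()
    pval = np.array([f(x) for x in X])
    gbest = pbest[pval.argmin()].copy()
    for t in range(t_max):
        w = w_max - t * (w_max - w_min) / t_max                   # Eq. (20)
        r1, r2 = rng.random((NP, D)), rng.random((NP, D))
        V = w * V + c1 * r1 * (gbest - X) + c2 * r2 * (pbest - X) # Eq. (18)
        X = np.clip(X + V, lo, hi)                                # Eq. (19), kept in-box
        val = np.array([f(x) for x in X])
        better = val < pval
        pbest[better], pval[better] = X[better], val[better]
        gbest = pbest[pval.argmin()].copy()
    return gbest, float(pval.min())
```

For example, pso(lambda x: float((x**2).sum()), (np.full(2, -5.0), np.full(2, 5.0))) drives a 2-D sphere function toward its minimum at the origin.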

3.2. Differential Evolution

The DE algorithm proposed by Storn and Price [43,44], is one of the most representative evolutionary algorithms, which, due to its ease of implementation, effectiveness and robustness, has been widely used in the solution of several complex optimization problems [45,46]. This algorithm makes use of a population P constituted from N P individuals or parameter vectors:
P_t = \{X_{1,t}, X_{2,t}, \ldots, X_{NP,t}\},
where t refers to the current iteration, and each individual X_{i,t} = \{x_{i,t}^1, x_{i,t}^2, \ldots, x_{i,t}^D\} has D dimensions. At the beginning of the execution of this algorithm, the population is initialized randomly within the search space of the problem.
The DE algorithm consists mainly of three operators: mutation, crossover and selection. These operators allow the algorithm to perform the search for the global optimum. Mutation, crossover and selection are applied consecutively to each individual X i , t of the population P t in each of the t iterations until a certain stopping criterion is satisfied. Within the mutation, in each iteration and for each individual, a mutated vector V i , t is generated by means of the information of the current population P t and the application of a mutation scheme. In the standard version of the DE algorithm the mutated vector is generated following the DE/rand/1 mutation scheme, described as follows:
V_{i,t} = X_{r_1,t} + F \cdot (X_{r_2,t} - X_{r_3,t}),
where the indices r 1 , r 2 and r 3 are randomly selected within the range [ 1 , N P ] such that they are different from each other and different from the index i. F > 0 is the scaling factor and is one of the parameters of the algorithm. The value of parameter F is typically within the interval [ 0 , 1 ] .
The next stage within the DE algorithm consists of the crossover, in which a test vector U i , t is generated, that is, once the mutated vector V i , t is generated for the individual X i , t , the information crossover between the vector X i , t and the vector V i , t is performed. This crossover operation is performed consistently following the binomial crossover:
U_{i,t}^{j} = \begin{cases} V_{i,t}^{j} & \text{if } rand[0,1] < C_r \text{ or } j = j_{rand} \\ X_{i,t}^{j} & \text{otherwise,} \end{cases}
where r a n d [ 0 , 1 ] is a number uniformly selected within the interval [ 0 , 1 ] , and j rand is an index corresponding to a variable which is uniformly selected in the interval [ 1 , D ] . Here C r [ 0 , 1 ] is known as the crossover probability, and like F, it is another parameter of the algorithm.
Once the corresponding test vector U i , t is generated for the individual X i , t , a selection procedure is carried out from which the population for iteration t + 1 is constructed. The standard way to perform this selection procedure consists of comparing the fit value of the test vector U i , t against the fit value of the individual X i , t , always keeping the individual with the best fit value. This procedure is performed as follows:
X_{i,t+1} = \begin{cases} U_{i,t} & \text{if } f(U_{i,t}) < f(X_{i,t}) \\ X_{i,t} & \text{otherwise.} \end{cases}
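Putting Equations (22)-(24) together, the classic DE/rand/1/bin loop reads as follows. This is an illustrative sketch; the defaults F = 0.5 and Cr = 0.9 are commonly used settings, not values prescribed by the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def de(f, bounds, NP=40, t_max=300, F=0.5, Cr=0.9):
    """Classic DE/rand/1/bin following Eqs. (22)-(24)."""
    lo, hi = bounds
    D = lo.size
    X = rng.uniform(lo, hi, (NP, D))
    fit = np.array([f(x) for x in X])
    for _ in range(t_max):
        for i in range(NP):
            # three mutually distinct indices, all different from i
            r1, r2, r3 = rng.choice(np.delete(np.arange(NP), i), 3, replace=False)
            v = X[r1] + F * (X[r2] - X[r3])    # DE/rand/1 mutation, Eq. (22)
            jrand = rng.integers(D)
            mask = rng.random(D) < Cr
            mask[jrand] = True                 # binomial crossover, Eq. (23)
            u = np.clip(np.where(mask, v, X[i]), lo, hi)
            fu = f(u)
            if fu < fit[i]:                    # greedy selection, Eq. (24)
                X[i], fit[i] = u, fu
    return X[fit.argmin()], float(fit.min())
```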

3.2.1. Improvements on DE Parameters Control

One of the components that influence the performance of the DE algorithm is the control of the F and C r parameters [25,47]. The standard version of the DE algorithm uses fixed values for these two parameters, however, the selection of these parameters can be done deterministically (following a certain rule that modifies these values after a certain number of iterations), adaptively (according to the feedback of the search process) and in a self-adaptive way (by means of the evolution information provided by the individuals of the population) [30]. One of the most representative variants concerning the improvement of the control of the parameters F and C r of the DE algorithm was proposed in the SHADE algorithm [25].
Precisely, in each iteration of the SHADE algorithm, the parameters F and C_r associated with test vectors with a better fit than their parent vector are stored in two sets, S_F and S_Cr respectively. Two archives or memories, M_F and M_Cr, of size H, whose contents M_{Cr,k} and M_{F,k} (k = 1, ..., H) are initialized with a value of 0.5, serve to generate the parameters F_i and Cr_i of the i-th individual X_{i,t} at each iteration t. Generating the values of these parameters requires the uniform selection of an index r_i ∈ [1, H] and is carried out according to the following expressions:
Cr_i = \mathrm{randn}_i(M_{Cr,r_i}, 0.1),

F_i = \mathrm{randc}_i(M_{F,r_i}, 0.1),
where M_{Cr,r_i} is the element selected from M_Cr to generate the value of Cr_i, while M_{F,r_i} is the element selected from M_F to generate the value of F_i; randn_i(M_{Cr,r_i}, 0.1) represents a normal distribution with mean M_{Cr,r_i} and standard deviation 0.1, while randc_i(M_{F,r_i}, 0.1) represents a Cauchy distribution with location parameter M_{F,r_i} and scale factor 0.1. When generating the value of the parameter Cr_i, it must be verified that it lies within the range [0, 1]: if Cr_i < 0 it is truncated to 0, and if Cr_i > 1 it is truncated to 1. For the parameter F_i, if F_i > 1 then F_i is truncated to 1; otherwise, if F_i ≤ 0, F_i is regenerated until F_i > 0.
At the end of each iteration, the contents of the M C r and M F memories are updated as follows:
M_{Cr,k,t+1} = \begin{cases} \mathrm{mean}_{WL}(S_{Cr}) & \text{if } S_{Cr} \neq \emptyset \\ M_{Cr,k,t} & \text{otherwise,} \end{cases}

M_{F,k,t+1} = \begin{cases} \mathrm{mean}_{WL}(S_{F}) & \text{if } S_{F} \neq \emptyset \\ M_{F,k,t} & \text{otherwise,} \end{cases}
where mean_WL(S) is the weighted Lehmer mean defined by means of Equation (29), and S refers to S_Cr or S_F.
\mathrm{mean}_{WL}(S) = \frac{\sum_{k=1}^{|S|} w_k \cdot S_k^2}{\sum_{k=1}^{|S|} w_k \cdot S_k},

w_k = \frac{\Delta f_k}{\sum_{l=1}^{|S|} \Delta f_l},

\Delta f_k = |f(U_{k,t}) - f(X_{k,t})|.
In Equation (27) and Equation (28), k ∈ [1, H] determines the position in memory to be updated: at iteration t, the k-th element in memory is updated. At the beginning of the optimization process k = 1, and it is incremented each time a new element is added to memory; if k > H, k is reset to 1.
Due to the good performance obtained by the SHADE algorithm when solving optimization problems using this strategy, this paper takes advantage of the adaptive control of the parameters provided by the SHADE algorithm.

3.2.2. DEPSO Algorithm

The self-adaptive differential evolution algorithm based on particle swarm optimization (DEPSO) [32] is a recent method which incorporates characteristics of the PSO algorithm within the structure of the DE algorithm for solving numerical optimization problems. In this algorithm, the top α·NP (α ∈ [0.1, 0.9]) best particles in the current iteration form an elite sub-swarm (P), while the rest form a non-elite sub-swarm (Q).
The DEPSO algorithm follows a scheme similar to the standard version of the DE algorithm, i.e., it uses mutation, crossover and selection operators within the evolutionary process of the algorithm. Within the mutation, a mutation strategy is implemented which, in a self-adaptive way, selects between two mutation schemes to generate in each iteration a mutated vector V i , t + 1 for the i-th particle X i , t as follows:
V_{i,t+1} = \begin{cases} X_{r_1,t}^{P} + F_i \cdot (X_{r_2,t}^{P} - X_{r_3,t}^{Q}) & \text{if } rand[0,1] < SP_t \\ \omega \cdot X_{i,t} + c_1 \cdot r_1 \cdot (X_{gbest,t} - X_{i,t}) + c_2 \cdot r_2 \cdot (X_{pbest,t} - X_{i,t}) & \text{otherwise,} \end{cases}
where rand[0,1] is a random number selected uniformly within the interval [0, 1], and SP_t represents the probability of selecting one of the two mutation schemes. In Equation (32), the case in which rand[0,1] < SP_t represents the form in which a novel mutation scheme, denoted DE/e-rand/1, generates a mutated vector V_{i,t+1}. In this scheme, two optimal solutions from the elite sub-swarm (X_{r_1,t}^P and X_{r_2,t}^P) and one solution from the non-elite sub-swarm (X_{r_3,t}^Q) are required, and a scaling factor F_i is used, which acts at the level of each particle. The remaining case represents the way in which the mutation scheme of the standard PSO algorithm is used to generate a mutated vector V_{i,t+1}.
The selection probability S P t of the Equation (32) changes adaptively within the evolution of the algorithm as follows:
SP_t = \frac{1}{1 + e^{\,1 - \left(t_{max}/(t+1)\right)^{\tau}}},
where t m a x is the maximum number of iterations and t the current iteration. τ is a positive constant.
Within the crossover operator, a test vector U i , t is generated by combining the information of the current particle X i , t and the mutated vector V i , t + 1 , following the binomial crossover (see Equation (23)). The selection of the surviving particle to the next iteration is carried out by the competition between the current particle X i , t and the test vector U i , t , the one with the best fit value according to the objective function is selected to remain within the next iteration.
When a particle remains stagnant over a maximum number of iterations (e.g. 5), the values of its corresponding scale factor F i and crossover probability C r i are reset in order to increase diversity. The way in which the values of these parameters are reset is as follows:
F_{i,t+1} = \begin{cases} F_{i,t} & \text{if } NS_i < NS_{max} \\ F_l + rand[0,1] \cdot (F_u - F_l) & \text{otherwise,} \end{cases}

Cr_{i,t+1} = \begin{cases} Cr_{i,t} & \text{if } NS_i < NS_{max} \\ Cr_l + rand[0,1] \cdot (Cr_u - Cr_l) & \text{otherwise,} \end{cases}
where F i and C r i are the scaling factor and the crossover probability, respectively, of the particle X i , t at iteration t. F l = 0.1 and F u = 0.8 are the lower and upper bounds respectively of the scaling factor. C r l = 0.3 and C r u = 1.0 are the lower and upper bounds respectively of the crossover probability and r a n d [ 0 , 1 ] is a random number selected within the interval [ 0 , 1 ] . N S i is a stagnation counter for each particle, and N S m a x is the maximum number of iterations with stagnation.
To avoid stagnation, the DEPSO algorithm randomly updates a sub-dimension of the individuals within the non-elite population (Q) for which NS_i > NS_max, resetting them as follows:
X_{i,t}^{j} = \begin{cases} X_{min}^{j} + rand_j[0,1] \cdot (X_{max}^{j} - X_{min}^{j}) & \text{if } rand_j[0,1] \le \gamma, \quad j = 1, 2, \ldots, D \\ X_{i,t}^{j} & \text{otherwise,} \end{cases}
where rand_j[0,1] is a random number selected within the interval [0, 1], γ is a fixed probability value, and X_min^j and X_max^j are the lower and upper limits, respectively, of the variable j.
In general, the DEPSO algorithm manages to be superior to other variants of the DE algorithm in different optimization problems. The performance of this algorithm is due to the fact that it maintains a good balance between exploration and exploitation, thanks to the use of the self-adaptive mutation strategy, in which the “DE/e-rand/1” scheme has better exploration abilities, while the mutation scheme of the PSO algorithm achieves better convergence abilities. With this strategy, the population manages to maintain a good diversity in the first stages of the evolutionary process, and a faster convergence towards the last stages of the process.

4. Proposed HE-DEPSO Algorithm

Despite the development of several advanced versions of the DE algorithm in recent years, its performance still needs improvement on optimization problems with multiple local minima. To address this, the design of effective mutation operators and of parameter control are two key aspects for improving the performance of the DE algorithm. In the proposed HE-DEPSO algorithm, an adaptive hybrid mutation operator is developed, which takes the self-adaptive mutation strategy from the DEPSO algorithm [32] as a basis and incorporates the parameter control mechanism from the SHADE algorithm [25]. This operator introduces a new mutation strategy called "DE/current-to-EHE/1", which exploits the historical information of the elite individuals in the population to enhance the optimization of the χ 2 function.

4.1. Adaptive Hybrid Mutation Operator

In order to achieve a better balance between exploration and exploitation, the mutation operator of the HE-DEPSO algorithm adopts a dual mechanism in which it adaptively selects between two mutation strategies. First, a new mutation strategy called “DE/current-to-EHE/1” is presented, which is oriented to improve the balance between exploration and exploitation abilities within the early stages of the evolutionary process. On the other hand, to improve the exploitation capacity within the more advanced stages of the evolutionary process, the mutation scheme of the PSO algorithm (Equation (32)) is incorporated in a similar way to the self-adaptive mutation strategy of the DEPSO algorithm (Equation (18)).
Unlike the “DE/e-rand/1” scheme used within the self-adaptive mutation strategy of the DEPSO algorithm, in which only the information of the individuals of the current iteration is used, the “DE/current-to-EHE/1” strategy uses the historical information of the elite and obsolete individuals of the evolutionary process to improve the exploration capability of the algorithm. This strategy is described below.

4.1.1. DE/Current-to-EHE/1 Mutation Strategy

Within evolutionary algorithms, the best individuals in the population, also known as elite individuals, retain valuable evolutionary information to guide the population toward promising regions [32,48,49,50]. However, many existing proposals use only the information of the elite individuals of the current iteration, completely discarding the information of previous elite individuals. This loss of information in subsequent iterations may limit the ability of new individuals to explore the search space. For this reason, historical evolution information is used here to improve the exploration ability of new individuals.
Before applying the mutation operator, all individuals in the current population P t are sorted in ascending order based on their fitness values. After reordering the population, two partitions of individuals are created. The first partition, denoted by E t , consists of the top p b % ∈ [ 0.1 , 1 ] best (elite) individuals. The second partition, denoted by NE t , groups the remaining N P − N P · p b % non-elite individuals. It is important to note that E t ∪ NE t = P t .
Unlike other mutation strategies [29,32,51], in which only the elite individuals of the current iteration are used, this strategy makes use of the evolution history by incorporating two external archives of size N P . The first archive, denoted by HE , stores at each iteration the elite individuals belonging to the partition E t ; if the size of HE exceeds N P , its size is readjusted. The second archive, denoted by HL , stores the obsolete individuals (those discarded in the selection process) and is updated at each iteration; similarly, if the size of HL exceeds N P , its size is readjusted. With the individuals of E t and HE , a candidate group E t ∪ HE is formed to mutate individuals in the population. On the other hand, the individuals of NE t and HL make up a group NE t ∪ HL whose information contributes to the exploration of the search space. A mutated vector is generated by the "DE/current-to-EHE/1" strategy as follows:
$$
V_{i,t} = X_{i,t} + F_i\cdot(X_{Er,t} - X_{i,t}) + F_i\cdot(X_{Pr,t} - X_{HLr,t}),
$$
where X i , t is the i-th individual at iteration t, X E r , t is an individual randomly selected from the group E t ∪ HE , X P r , t is an individual randomly selected from the current population P t , X H L r , t is an individual randomly selected from the group NE t ∪ HL , and F i is the scaling factor corresponding to the i-th individual.
Following Equation (37), it is possible to observe that the “DE/current-to-EHE/1” strategy can help the HE-DEPSO algorithm to maintain a good exploration capability, and direct the mutated individuals to promising regions without leading to stagnation in local minima. This can be explained by the following reasons:
  • The use of an individual randomly selected from the group E t ∪ HE , i.e., X E r , t , helps to guide mutated individuals toward more promising regions. At the same time, the presence of the historical information of the elite individuals ( HE ) prevents the mutated individuals from being directed only toward the best regions found so far, thus maintaining good diversity in the population and increasing the chances of reaching optimal regions.
  • The participation of two individuals, X P r , t and X H L r , t , randomly selected from P t and NE t ∪ HL respectively, promotes the diversity of mutated individuals. Consequently, the search diversity of the HE-DEPSO algorithm is considerably improved, which is beneficial for escaping from possible local minima.
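The "DE/current-to-EHE/1" strategy and the archive bookkeeping described above can be sketched as follows (a minimal illustration; the function names are ours, and the policy of randomly discarding archive entries on overflow is an assumption, since the text only states that the archive size is readjusted):

```python
import random

def update_archive(archive, new_members, cap):
    """Append members to a history archive (HE or HL); when the archive
    grows beyond `cap` (= NP), discard random entries to fit.
    NOTE: random discard is our assumption about how the size is readjusted."""
    archive.extend(new_members)
    while len(archive) > cap:
        archive.pop(random.randrange(len(archive)))
    return archive

def current_to_ehe_1(x_i, F_i, elite_pool, population, hl_pool):
    """'DE/current-to-EHE/1' mutation (Equation (37)):
    V = X_i + F_i*(X_Er - X_i) + F_i*(X_Pr - X_HLr),
    where X_Er is drawn from E_t U HE, X_Pr from P_t,
    and X_HLr from NE_t U HL."""
    x_er = random.choice(elite_pool)
    x_pr = random.choice(population)
    x_hlr = random.choice(hl_pool)
    return [x_i[j] + F_i * (x_er[j] - x_i[j]) + F_i * (x_pr[j] - x_hlr[j])
            for j in range(len(x_i))]
```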
At the beginning of the execution of the HE-DEPSO algorithm, the archives HE and HL are initialized empty; through the evolutionary process, they store the elite and obsolete individuals, respectively. When analyzing the effect of the parameter p b % , which determines the fraction of individuals in E t , it was found that values close to 0.9 promote the participation of a larger number of elite individuals, producing greater diversity in the population. However, this increase in diversity may affect the convergence capability of the HE-DEPSO algorithm. On the other hand, values close to 0.1 restrict the number of elite individuals considered, which improves convergence but may cause stagnation at local minima. For this reason, within the HE-DEPSO algorithm, we choose to update the value of p b % at each iteration following a linear decrement given as follows:
$$
pb\% = p_{max} - \frac{t\cdot(p_{max} - p_{min})}{t_{max}},
$$
where t refers to the current iteration, t m a x is the maximum number of iterations, p m i n and p m a x are the minimum and maximum values of the interval assigned for the percentage of individuals within the partition E t . In this work, we have chosen to use the values p m a x = 0.4 and p m i n = 0.1 . In this way, the sensitivity of the p b % parameter is reduced and, at the same time, a good balance between exploitation and exploration is maintained.
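The linear decrement above amounts to a one-line schedule (illustrative helper; the defaults are the values p m a x = 0.4 and p m i n = 0.1 chosen in the text):

```python
def pb_percent(t, t_max, p_max=0.4, p_min=0.1):
    """Linear decrement of the elite fraction pb%: starts at p_max
    at t = 0 and decreases to p_min at t = t_max."""
    return p_max - t * (p_max - p_min) / t_max
```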

4.1.2. Selection of Mutation Strategy for Adaptive Hybrid Mutation Operator

The adaptive hybrid mutation operator implemented in the HE-DEPSO algorithm uses two mutation strategies that aim to improve the exploration and exploitation abilities of the HE-DEPSO algorithm at different stages of the evolutionary process. Concretely, for each individual X i , t in the population, this operator generates a mutated vector V i , t as follows:
$$
V_{i,t} =
\begin{cases}
X_{i,t} + F_i\cdot(X_{Er,t} - X_{i,t}) + F_i\cdot(X_{Pr,t} - X_{HLr,t}) & \text{if } rand[0,1] < \alpha_t\\
\omega\cdot X_{i,t} + c_1\cdot r_1\cdot(X_{gbest,t} - X_{i,t}) + c_2\cdot r_2\cdot(X_{pbest,t} - X_{i,t}) & \text{otherwise,}
\end{cases}
$$
where r a n d [ 0 , 1 ] is a random number selected uniformly within the interval [ 0 , 1 ] , and α t represents the probability of selecting the "DE/current-to-EHE/1" mutation strategy over the mutation strategy adopted from the PSO algorithm (Equation (32)).
The selection probability α t is updated at each iteration according to Equation (33) with τ = 1.8 . Figure 2 shows the resulting mutation strategy selection curve of the adaptive hybrid mutation operator: the probability curve described by Equation (33) over a maximum of t m a x iterations is drawn in red, together with an example distribution of the occurrences in which the "DE/current-to-EHE/1" strategy (blue squares) or the adopted PSO strategy (magenta circles) is selected. During the first stages of evolution of the HE-DEPSO algorithm, α t tends toward values close to 1, which increases the probability that the "DE/current-to-EHE/1" strategy is selected and its balance between exploration and exploitation is exploited. Toward the advanced stages, the strategy adopted from the PSO algorithm is selected more frequently; this increases the exploitation ability of the HE-DEPSO algorithm, since the information on the best position of each individual, as well as on the position of the best individual in the population, is used more often.
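The dual selection mechanism can be sketched as follows. Equation (33) is not reproduced in this excerpt, so purely for illustration we assume a power-law decay α t = 1 − ( t / t m a x ) ^ τ , which matches the described behavior: α close to 1 early (favoring "DE/current-to-EHE/1") and close to 0 late (favoring the PSO scheme).

```python
import random

def alpha(t, t_max, tau=1.8):
    # Hypothetical stand-in for Equation (33): decays from 1 at t=0
    # to 0 at t=t_max; steepness is controlled by tau.
    return 1.0 - (t / t_max) ** tau

def pick_strategy(t, t_max, tau=1.8):
    """Select the mutation strategy for iteration t:
    'EHE' -> "DE/current-to-EHE/1" (exploration-oriented, early stages)
    'PSO' -> PSO-style mutation    (exploitation-oriented, late stages)."""
    return "EHE" if random.random() < alpha(t, t_max, tau) else "PSO"
```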

4.2. The Complete HE-DEPSO Algorithm

The HE-DEPSO algorithm follows the usual structure of the DE algorithm, with mutation, crossover, and selection stages. After generating a mutated vector V i , t for the i-th individual of the population as described in the previous section, a test vector U i , t is generated for the individual X i , t using the binomial crossover (Equation (23)). To reduce the sensitivity of the F and C r parameter control, the HE-DEPSO algorithm takes advantage of the individual-level adaptive parameter control of the SHADE algorithm to generate the configuration of F and C r for each individual. Finally, in the selection stage, the individuals that make up the population for the next iteration are identified according to Equation (24). Based on the above description, the pseudo-code of the HE-DEPSO algorithm is reported in Figure 3.
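The crossover and selection stages referenced above (Equations (23) and (24)) are standard DE components; a minimal sketch, with function names of our own choosing, is:

```python
import random

def binomial_crossover(x, v, Cr):
    """Binomial crossover (Equation (23)): each gene of the test vector
    is taken from the mutated vector v with probability Cr; one randomly
    chosen gene j_rand is always taken from v so the test vector differs
    from the current individual x."""
    D = len(x)
    j_rand = random.randrange(D)
    return [v[j] if (random.random() <= Cr or j == j_rand) else x[j]
            for j in range(D)]

def greedy_selection(x, u, f):
    """Selection (Equation (24)): keep whichever of the current individual
    x and the test vector u has the better (lower) objective value."""
    return u if f(u) <= f(x) else x
```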

5. Experimental Results and Analysis

This section verifies the performance of the proposed HE-DEPSO algorithm in solving different optimization problems. First, the set of test problems used is presented, then, the algorithms selected to carry out comparisons are presented, as well as the configuration of their parameters. Then, the performance of the HE-DEPSO algorithm is compared against the selected algorithms within the test problem set. Finally, a discussion of the results obtained is presented.

5.1. Benchmark Functions

In order to validate the performance of the HE-DEPSO algorithm in solving different optimization problems, the set of single-objective test functions with boundary constraints CEC 2017 [52] was used. This set consists of 29 test functions whose global optimum is known. We have two unimodal functions ( F 1 and F 3 ) , seven simple multimodal functions ( F 4 F 10 ) , ten hybrid functions ( F 11 F 20 ) and ten composite functions ( F 21 F 30 ) . In all these optimization problems, we seek to find the global minimum ( F m i n ) within the search space bounded in each dimension ( D ) by the interval [ 100 , 100 ] . Information about this set of test functions is briefly presented in Table 2.

5.2. Algorithms and Parameter Settings

In this subsection, HE-DEPSO is compared with five algorithms: the PSO and DE algorithms, as well as three representative and advanced variants of the DE algorithm, namely CoDE [23], SHADE [25] and DEPSO [32]. Among these algorithms, CoDE and DEPSO mainly modify the mutation operator of the DE algorithm, while SHADE modifies the control of the F and C r parameters. The performance comparison of the HE-DEPSO algorithm with these algorithms was performed at two dimensionalities, D = 10 and D = 30 , on the CEC 2017 test function set.
In order to ensure a fair comparison, the parameter settings in common to all algorithms were assigned identically: the maximum number of iterations t m a x was set to 1000, the population size N P to 100, and 31 independent runs were performed. The configuration of each algorithm’s parameters followed the values suggested by their respective authors, as presented in Table 3. The proposed HE-DEPSO algorithm was implemented in Python version 3.11, and all algorithms were executed on the same computer with 8 GB of RAM and a six-core 3.6 GHz processor.

5.3. Comparison with DE, PSO, and Three State-of-the-Art DE Variants

In order to evaluate the performance of the HE-DEPSO algorithm, its optimization results and convergence properties are compared and analyzed with respect to those obtained by the algorithms: DE, PSO, CoDE, SHADE and DEPSO, on the set of CEC 2017 test functions.
Table 4 and Table 5 report the statistical results for each algorithm at D = 10 and D = 30 respectively. The error measure of the solution | F m i n F ( X * ) | , where F m i n is the known solution of the problem and X * is the best solution found in each iteration of each algorithm, was used to obtain these results. These tables show the mean and standard deviation (Std) of the solution error measure for each function and algorithm over the 31 independent runs and the ranking achieved based on the mean value obtained. The best results are highlighted in bold.
A non-parametric Wilcoxon rank-sum test, with a significance level of α = 0.05 , was performed to identify statistically significant differences between the results obtained by the HE-DEPSO algorithm and those obtained by the other algorithms. In these tables, the statistical significance related to the performance of the HE-DEPSO algorithm is represented by the symbols "+", "≈" and "−", which indicate that the performance of the HE-DEPSO algorithm is better/similar/worse than that of the compared algorithm. The row "W/T/L" counts the total number of "+", "≈" and "−", respectively.
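Such pairwise comparisons over 31 runs can be reproduced with `scipy.stats.ranksums`; for illustration, a dependency-free sketch of the rank-sum test using the normal approximation (adequate for samples of this size, assuming no tied values) is:

```python
import math

def rank_sum_test(a, b):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.
    Assumes no tied values (reasonable for real-valued error measures).
    Returns (W, p), where W is the rank sum of sample `a`."""
    n1, n2 = len(a), len(b)
    # Pool both samples, tagging each value with its source (0 = a, 1 = b)
    pooled = sorted((x, src) for src, xs in ((0, a), (1, b)) for x in xs)
    W = sum(rank for rank, (_, src) in enumerate(pooled, start=1) if src == 0)
    mu = n1 * (n1 + n2 + 1) / 2.0                      # mean of W under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)  # std of W under H0
    z = (W - mu) / sigma
    # Two-sided p-value from the standard normal CDF
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return W, p
```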

5.3.1. Optimization Results

According to the results reported in Table 4, for D = 10 , the proposed HE-DEPSO algorithm obtains the global optimal solution for functions F 1 F 4 , F 6 and F 9 . For the unimodal function F 3 , DEPSO and SHADE obtain the global optimal solution. On the other hand, SHADE, CoDE, and DE obtain the global optimal solution in the simple multimodal function F 9 . For functions F 5 , F 7 F 9 , F 11 , F 13 F 20 , F 23 and F 30 the HE-DEPSO algorithm achieves the best result among all algorithms. DEPSO is the best in the functions F 12 , F 21 , F 24 and F 27 F 29 . SHADE obtains the best results on function F 10 , while CoDE obtains the best results on functions F 22 , F 25 and F 26 . The PSO does not show superior results in any function. The results of this table indicate that, for these tests, the proposed HE-DEPSO algorithm obtains the best ranking according to the mean value among all the algorithms. On the other hand, the results of the Wilcoxon rank sum test show that HE-DEPSO is superior to DEPSO, SHADE, CoDE, DE and PSO in 17, 20, 24, 25, and 29 functions out of a total of 29 test functions respectively.
For D = 30 , the statistical results presented in Table 5 show that the HE-DEPSO algorithm achieves the best results on functions F 1 – F 5 , F 7 – F 8 , F 10 – F 18 , F 20 – F 24 , F 26 – F 27 and F 29 . DEPSO, SHADE, CoDE, and DE also achieve the best result on function F 27 . DEPSO is the best on functions F 6 , F 9 , F 25 , F 28 and F 30 , while SHADE is the best on function F 19 compared to the rest of the algorithms. With these results, HE-DEPSO obtains the best ranking among all algorithms for these tests. According to the Wilcoxon rank-sum test results, among the 29 test functions, HE-DEPSO performs better than DEPSO, SHADE, CoDE, DE, and PSO on 19, 20, 26, 26, 25, and 27 functions, respectively.

5.3.2. Convergence Properties

The convergence properties can be summarized into four types, which are represented by the graphs presented in Figure 4 and Figure 5.
1. The convergence properties of functions F 1 – F 4 , F 6 and F 9 belong to the same class, as can be observed in Figure 4(a)–(d). In this type, the HE-DEPSO algorithm shows faster convergence than the other algorithms at D = 10 , while at D = 30 it obtains a better mean error value.
2. Figure 4(e)–(h) presents the convergence curves of functions F 5 and F 8 , which are similar to those of functions F 10 , F 16 , F 17 and F 20 . On these functions, all algorithms exhibit some degree of evolutionary stagnation or slow evolution.
3. The convergence curves of functions F 12 – F 15 , F 18 – F 19 , F 23 and F 29 – F 30 are similar to those of functions F 7 and F 24 obtained by the HE-DEPSO algorithm (see Figure 5(a)–(d)). In this type, HE-DEPSO converges quickly at the beginning and subsequently keeps evolving downward.
4. The convergence curves of functions F 22 and F 27 follow the same trend as those of functions F 25 and F 28 , as can be seen in Figure 5(e)–(h). Here, most of the algorithms stagnate quickly, although some of them manage to keep evolving downward.
The experiments performed above prove the superiority of the proposed HE-DEPSO algorithm in solving different optimization problems. The reasons why the HE-DEPSO algorithm obtains such superior performance can be summarized as follows:
1. The adaptive hybrid mutation operator implemented in HE-DEPSO is based on the self-adaptive mutation strategy of the DEPSO algorithm, which has proven to be helpful in tackling several complex optimization problems. In addition, the collaborative work between the "DE/current-to-EHE/1" mutation strategy, with its historical information on the elite individuals, and the PSO algorithm strategy generates a good balance between exploration and exploitation at different stages of the evolutionary process.
2. The self-adaptive control of the F and C r parameters adopted from the SHADE algorithm mitigates parameter sensitivity. In this way, the crossover probability and the scale factor are dynamically adjusted during the evolutionary process at the level of each individual, which can make the proposed algorithm suitable for a wider variety of optimization problems.

6. 4-Zeros Texture Model Validation

In order to verify the physical feasibility of the 4-zero texture model, it is essential to find sets of numerical values for the parameters A u , A d , ϕ 1 and ϕ 2 that minimize the χ 2 ( X ) function to a value less than one (see Equation (16)), and also to ensure that the numerical values of all | V C K M | elements predicted by the 4-zeros texture model (evaluated at these parameter sets) agree with the experimental data. Firstly, an analysis of the HE-DEPSO algorithm's behavior when optimizing the χ 2 ( X ) function will be conducted. The study will assess the algorithm's optimization, convergence, and stability properties, providing insights into its performance on the χ 2 optimization problem.

6.1. HE-DEPSO Performance in χ 2 Optimization

The given optimization problem seeks to minimize the function χ 2 ( X ) given by Equation (17). The search space is constrained by the possible signs of the parameters η u and η d , as shown in Table 1. The phases ϕ 1 and ϕ 2 can take values between 0 and 2 π , while the ranges of the parameters A u and A d depend on the signs of η u and η d . Specifically, in case study 1, A u takes values in the interval [ m c , m t ] and A d in [ m s , m b ] .
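As an illustration, the case-1 search space can be encoded as box constraints. The quark-mass values below are rough placeholders at the M z scale, not the exact values used in the paper (those are listed in Appendix A and should be substituted here):

```python
import math

# Placeholder quark masses in GeV at the M_Z scale (illustrative only;
# replace with the Appendix A values used in the paper).
M_C, M_T = 0.62, 168.3   # charm, top
M_S, M_B = 0.055, 2.86   # strange, bottom

def case1_bounds():
    """Search-space bounds for case study 1, ordered as
    (A_u, A_d, phi_1, phi_2):
    A_u in [m_c, m_t], A_d in [m_s, m_b], phases in [0, 2*pi]."""
    return [(M_C, M_T), (M_S, M_B),
            (0.0, 2.0 * math.pi), (0.0, 2.0 * math.pi)]
```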

6.1.1. Experiment Configuration

Through an experiment, an evaluation of the performance of the HE-DEPSO algorithm in the optimization of the χ 2 ( X ) function will be carried out in order to determine its ability to find the global minimum. This study will be carried out in the search space defined in case 1 previously mentioned. The performance of HE-DEPSO will be compared with the PSO and DE algorithms, as well as with advanced variants of the DE algorithm such as CoDE, SHADE, and DEPSO. To ensure a fair comparison, standard parameters have been set for all algorithms: t m a x = 1000 , N P = 100 , and 31 independent runs have been performed. The settings of the other parameters of each algorithm have been made following the indications in Table 3.

6.1.2. Experiment Results

To evaluate the effectiveness of the proposed HE-DEPSO algorithm, comparisons and analysis of its optimization results were performed with the DE, PSO, and three advanced variants of DE algorithms applied to the χ 2 ( X ) function. Using an approach similar to that described in Section 5, Table 6 presents the statistical data for each algorithm evaluated.
According to the data presented in Table 6, the HE-DEPSO algorithm excels in obtaining the best results in optimizing the χ 2 ( X ) function, outperforming DEPSO. Although DEPSO comes close to the results of HE-DEPSO, its accuracy is not matched. On the other hand, the DE algorithm also demonstrated acceptable accuracy, although lower than that achieved by HE-DEPSO and DEPSO. In contrast, the SHADE, CoDE, and PSO algorithms obtained the lowest results compared to the others. In terms of average ranking, HE-DEPSO is positioned as the leader among all the analyzed algorithms. Furthermore, the results of the Wilcoxon test confirm the superiority of HE-DEPSO over DEPSO, SHADE, CoDE, DE, and PSO in the optimization of the χ 2 ( X ) function.
Figure 6 presents the convergence curves obtained over 31 independent runs by the HE-DEPSO, DEPSO, SHADE, CoDE, DE, and PSO algorithms. The HE-DEPSO algorithm exhibits excellent convergence behavior, consistently outperforming SHADE, CoDE, DE, and PSO and achieving a lower mean error value. Although DEPSO converges faster from iteration 300 onward, HE-DEPSO stands out for its consistency and exploration capability, which could explain its more gradual convergence compared to DEPSO.
In Figure 7, box-and-whisker plots of the results of the HE-DEPSO, DEPSO, SHADE, CoDE, DE, and PSO algorithms, based on the best overall fits to the chi-square function over 31 independent runs, are presented. These plots illustrate the distribution of these values into quartiles, with the median highlighted by a red horizontal line within a blue box. The box boundaries correspond to the upper (Q3) and lower (Q1) quartiles. Lines extending from the box indicate the maximum and minimum values, excluding outliers, which are represented by the blue "+" symbol. The green circles represent the mean of the overall best fits of the 31 independent runs.
According to the box-and-whisker plots, the HE-DEPSO algorithm demonstrated superior performance to the other evaluated algorithms regarding stability and solution quality, based on 31 independent runs. The distribution of the best global fitness values achieved by HE-DEPSO shows a higher concentration around a lower optimum, reflected in a lower average than the other algorithms. Consequently, HE-DEPSO offers a more robust and stable performance.

6.2. Valid Regions for Parameters A u , A d , ϕ 1 and ϕ 2

In this subsection, we present the results obtained after applying the HE-DEPSO algorithm to search for valid regions of the A u , A d , ϕ 1 and ϕ 2 parameters. This process was carried out using the most recent experimental values of the V C K M matrix elements, the quark masses, and the Jarlskog invariant (see Appendix A).
Figure 8 and Figure 9 present the allowed regions for the parameters A u , A d , ϕ 1 and ϕ 2 , scaled as A u / m t , A d / m b , ϕ 1 / π and ϕ 2 / π respectively, in the first case study of Table 1. These regions are classified by the solutions found using the HE-DEPSO algorithm and are represented by black, blue, and orange dots, which correspond to different levels of precision of the χ 2 ( X ) function. The orange region, the smallest, indicates the region of highest precision, where experimental data agree best with theoretical predictions. These solutions were found by iteratively executing the algorithm until a total of 5000 solutions were collected at each of the precision levels ( χ 2 < 1 , χ 2 < 1 × 10⁻¹ and χ 2 < 1 × 10⁻² ).
It is worth noting that for the rest of the case studies (see Table 1), the regions found for the parameters A u , A d , ϕ 1 and ϕ 2 were similar.

6.3. Predictions for the V C K M Elements

The second part of the model validation is the following: with the data obtained from the valid regions for the four free parameters of the 4-zeros texture model, A u , A d , ϕ 1 and ϕ 2 , found with the help of HE-DEPSO, we can evaluate the feasibility of the model against the latest experimental values of the V C K M matrix and the Jarlskog invariant. This is achieved by checking the model's power to predict the remaining elements of the V C K M matrix that were not used within the χ 2 fit, i.e., | V c d | , | V u d | , | V c s | , | V t b | , | V t d | , and | V t s | . Using Equations (2), (3), (4), and (5), the Chau-Keung parametrization [37], and the solutions found, we can calculate the numerical predictions for those elements. These predictions are then presented using scatter plots. Each graph includes the experimental central value of each element with its associated experimental error. Points that fall within the region delimited by the experimental values are considered good predictions, while those outside the region are considered poor predictions.
In Figure 10, the predictions for the elements | V u d | and | V c d | in the first case study are presented. The black, blue, and orange points indicate a precision of χ 2 < 1 , χ 2 < 1 × 10⁻¹ , and χ 2 < 1 × 10⁻² , respectively. For | V c d | , the experimental central value ( 0.22636 ) is indicated with a solid red vertical line on the corresponding axis, while its experimental error ( ± 0.00048 ) is shown with two red dashed vertical lines on either side of the central value. In the case of | V u d | , the experimental central value ( 0.97401 ) is represented with a solid blue horizontal line on the | V u d | axis, and its experimental error ( ± 0.00011 ) is indicated with two blue dashed horizontal lines on either side of the central value. The predictions for the two elements show that the data with a precision of χ 2 < 1 can fall outside the experimental error region, suggesting a poor prediction. On the other hand, the predictions with χ 2 < 1 × 10⁻¹ and χ 2 < 1 × 10⁻² remain within this region, indicating accurate predictions. This pattern is repeated in the rest of the analyzed case studies.
The predictions for the elements | V c s | and | V t b | are shown in Figure 11 for the first case study. The experimental central value of | V c s | is represented by a solid red vertical line on the | V c s | axis at 0.97320 , while its experimental error of ± 0.00011 is indicated by red dashed vertical lines on either side. The experimental central value of | V t b | is shown by a solid blue horizontal line on the | V t b | axis at 0.999172 , and its experimental error of ± 0.000024 is represented by blue dashed horizontal lines on either side. According to the predictions shown in Figure 11, the data obtained with χ 2 < 1 do not represent good predictions, as they fall outside the experimental errors. On the other hand, the predictions with χ 2 < 1 × 10⁻¹ and χ 2 < 1 × 10⁻² remain within the error region, indicating that they are good predictions. This pattern is repeated in the four case studies.
Finally, in Figure 12, the predictions for the elements | V t d | and | V t s | are shown for the four case studies. For | V t d | , the experimental central value is represented by a solid vertical line at | V t d | = 0.00854 , and its experimental error with two red dashed vertical lines at ± 0.00016 . For | V t s | , the experimental central value is shown with a solid blue horizontal line at | V t s | = 0.03978 , and its experimental error with two blue dashed horizontal lines at ± 0.00060 . The predictions for these two elements indicate that the data obtained with precisions of χ 2 < 1 and χ 2 < 1 × 10⁻¹ may fall outside the experimental errors, so they are not good predictions. In contrast, the predictions with χ 2 < 1 × 10⁻² remain within the errors, making them good predictions. These characteristics are observed in the four case studies.
When analyzing the predictions of the elements of the V C K M matrix ( | V u d | , | V c d | , | V c s | , | V t b | , | V t d | and | V t s | ), it was observed that only the solutions of the HE-DEPSO algorithm with an accuracy of χ 2 < 1 × 10⁻² fit within the limits of experimental error. These solutions are considered valid and capable of reproducing the V C K M matrix in all case studies and elements. Thus, it can be established that the 4-zero texture model is physically viable. It would be interesting, however, to explore the implications of this information within the texture formalism.

7. Conclusions

This paper explored the feasibility of the 4-zeros texture model using the most recent experimental values. To gather data that can inform the decision on the model’s viability, we employ a Chi-square fit to compare the theoretical expressions of the model with experimental values of well-measured physical observables. We developed a single-objective optimization model by defining a χ 2 ( X ) function to identify the allowed regions for the free parameters of the 4-zeros texture model that are consistent with the experimental data. To address the challenge of optimizing the χ 2 ( X ) function within the Chi-square fit, we propose a new DEPSO algorithm variant, HE-DEPSO.
The proposed algorithm has demonstrated its ability to efficiently optimize different functions, particularly those from the CEC 2017 single-objective benchmark set. The convergence properties of HE-DEPSO show a good balance between solution precision and convergence speed in most cases. Regarding the optimization of the χ 2 ( X ) function, our findings indicate that HE-DEPSO and DEPSO are more than adequate for solving the optimization problem, with HE-DEPSO providing the best solution quality and consistency, while DEPSO maintains a faster convergence rate. This part of the investigation also highlights the difficulty that some algorithms can face when optimizing the χ 2 ( X ) function: algorithms like SHADE and CoDE, despite exhibiting good performance on the CEC 2017 test set, face some difficulties in optimizing the χ 2 ( X ) function. We can also conclude that optimization algorithms such as SHADE, CoDE, and DE would be worth considering for optimization problems in high-energy physics. Finally, using the HE-DEPSO algorithm, we showed that the 4-zeros texture model is compatible with current experimental data, and thus we can affirm that this model is physically feasible.
Future work will focus on enhancing the convergence speed of the HE-DEPSO algorithm and evaluating its effectiveness across diverse problem domains, including more complex optimization challenges. An extended comparative analysis against other bio-inspired algorithms or advanced DE variants could also be conducted. Furthermore, this research could be expanded by analyzing additional texture models and determining their validity against current experimental data.

Author Contributions

Conceptualization, P.M.-R. and R.N.-P.; methodology, P.L.-E., P.M.-R. and R.N.-P.; software, E.M.-G.; validation, E.M.-G. and P.L.-E.; formal analysis, P.L.-E. and P.M.-R.; investigation, E.M.-G. and P.L.-E.; resources, P.M.-R. and P.L.-E.; writing—original draft preparation, E.M.-G., P.M.-R. and R.N.-P.; writing—review and editing, P.M.-R. and J.C.S.-T.-M.; visualization, P.M.-R. and J.C.S.-T.-M.; supervision, P.M.-R. and J.C.S.-T.-M.; funding acquisition, J.C.S.-T.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the Autonomous University of the State of Hidalgo (UAEH) and the National Council for Humanities, Science and Technology (CONAHCYT) under project number F003-320109.

Data Availability Statement

The chi-square function code is available on GitHub: https://github.com/EmX-Mtz/Chi-Square_Function

Acknowledgments

The first author acknowledges the National Council for Humanities, Science and Technology (CONAHCYT) for the financial support received in pursuing graduate studies, which has been essential for the completion of this research.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A.

In this work, the following experimental values are considered for the elements of V_CKM, the Jarlskog invariant, and the quark masses (at the scale of M_Z) [53]:
\[
|V_{CKM}| =
\begin{pmatrix}
0.97401 \pm 0.00011 & 0.22650 \pm 0.00048 & 0.00361^{+0.00011}_{-0.00009} \\
0.22636 \pm 0.00048 & 0.97320 \pm 0.00011 & 0.04053^{+0.00083}_{-0.00061} \\
0.00854^{+0.00023}_{-0.00016} & 0.03978^{+0.00082}_{-0.00060} & 0.999172^{+0.000024}_{-0.000035}
\end{pmatrix},
\]
\[
J = \left( 3.00^{+0.15}_{-0.09} \right) \times 10^{-5},
\]
\[
\begin{aligned}
m_u &= 1.23 \pm 0.21\ \text{MeV}, & m_d &= 2.67 \pm 0.19\ \text{MeV}, & m_s &= 53.16 \pm 4.61\ \text{MeV}, \\
m_c &= 0.620 \pm 0.017\ \text{GeV}, & m_b &= 2.839 \pm 0.026\ \text{GeV}, & m_t &= 168.26 \pm 0.75\ \text{GeV}.
\end{aligned}
\]
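For reference, these inputs can be encoded directly as (central value, uncertainty) pairs. Symmetrizing the asymmetric errors by averaging (e.g. for the (1,3) element, |V_ub|) is a common simplification and an assumption of this sketch, not necessarily what the paper's fit does.

```python
# Experimental inputs from Appendix A, as (central, sigma) pairs.
V_ud = (0.97401, 0.00011)
V_ub = (0.00361, (0.00011 + 0.00009) / 2)   # asymmetric errors averaged
J    = (3.00e-5, (0.15e-5 + 0.09e-5) / 2)   # Jarlskog invariant
m_u  = (1.23, 0.21)                          # MeV, at the M_Z scale

def chi2_term(theory, exp):
    """One observable's contribution to chi^2: ((theory - central)/sigma)^2."""
    central, sigma = exp
    return ((theory - central) / sigma) ** 2

# A model prediction matching the central value contributes nothing:
print(chi2_term(0.97401, V_ud))   # -> 0.0
```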

References

1. Altarelli, G.; Feruglio, F. Neutrino masses and mixings: A theoretical perspective. Phys. Rept. 1999, 320, 295–318.
2. Altarelli, G.; Feruglio, F. Models of neutrino masses and mixings. New J. Phys. 2004, 6, 106.
3. Fritzsch, H.; Xing, Z.z. Mass and flavor mixing schemes of quarks and leptons. Prog. Part. Nucl. Phys. 2000, 45, 1–81.
4. Xing, Z.z. Flavor structures of charged fermions and massive neutrinos. Phys. Rept. 2020, 854, 1–147. arXiv:1909.09610.
5. Gupta, M.; Ahuja, G. Flavor mixings and textures of the fermion mass matrices. Int. J. Mod. Phys. A 2012, 27, 1230033. arXiv:1302.4823.
6. Froggatt, C.D.; Nielsen, H.B. Hierarchy of Quark Masses, Cabibbo Angles and CP Violation. Nucl. Phys. B 1979, 147, 277–298.
7. King, S.F.; Luhn, C. Neutrino Mass and Mixing with Discrete Symmetry. Rept. Prog. Phys. 2013, 76, 056201. arXiv:1301.1340.
8. Borzumati, F.; Nomura, Y. Low scale seesaw mechanisms for light neutrinos. Phys. Rev. D 2001, 64, 053005.
9. Lindner, M.; Ohlsson, T.; Seidl, G. Seesaw mechanisms for Dirac and Majorana neutrino masses. Phys. Rev. D 2002, 65, 053014.
10. Flieger, W.; Gluza, J. General neutrino mass spectrum and mixing properties in seesaw mechanisms. Chin. Phys. C 2021, 45, 023106. arXiv:2004.00354.
11. Fritzsch, H. Calculating the Cabibbo angle. Physics Letters B 1977, 70, 436–440.
12. Fritzsch, H.; Xing, Z.z. Four zero texture of Hermitian quark mass matrices and current experimental tests. Phys. Lett. B 2003, 555, 63–70.
13. Xing, Z.z.; Zhao, Z.h. On the four-zero texture of quark mass matrices and its stability. Nuclear Physics B 2015, 897, 302–325.
14. Gómez-Ávila, S.; López-Lozano, L.; Miranda-Romagnoli, P.; Noriega-Papaqui, R.; Lagos-Eulogio, P. 2-zeroes texture and the Universal Texture Constraint. arXiv 2021, arXiv:2105.01554.
15. Tani, L.; Rand, D.; Veelken, C.; Kadastik, M. Evolutionary algorithms for hyperparameter optimization in machine learning for application in high energy physics. The European Physical Journal C 2021, 81.
16. Zhang, Y.; Zhou, D.; et al. Application of differential evolution algorithm in future collider optimization. In Proceedings of the 7th International Particle Accelerator Conference (IPAC'16), Busan, Korea, 8–13 May 2016; pp. 1025–1027.
17. Wu, J.; Zhang, Y.; Qin, Q.; Wang, Y.; Yu, C.; Zhou, D. Dynamic aperture optimization with diffusion map analysis at CEPC using differential evolution algorithm. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 2020, 959, 163517.
18. Allanach, B.C.; Grellscheid, D.; Quevedo, F. Genetic algorithms and experimental discrimination of SUSY models. Journal of High Energy Physics 2004, 2004, 069.
19. Bilal; Pant, M.; Zaheer, H.; Garcia-Hernandez, L.; Abraham, A. Differential Evolution: A review of more than two decades of research. Engineering Applications of Artificial Intelligence 2020, 90, 103479.
20. Ahmad, M.F.; Isa, N.A.M.; Lim, W.H.; Ang, K.M. Differential evolution: A recent review based on state-of-the-art works. Alexandria Engineering Journal 2022, 61, 3831–3872.
21. Pan, J.S.; Liu, N.; Chu, S.C. A Hybrid Differential Evolution Algorithm and Its Application in Unmanned Combat Aerial Vehicle Path Planning. IEEE Access 2020, 8, 17691–17712.
22. Opara, K.R.; Arabas, J. Differential Evolution: A survey of theoretical analyses. Swarm and Evolutionary Computation 2019, 44, 546–558.
23. Wang, Y.; Cai, Z.; Zhang, Q. Differential Evolution With Composite Trial Vector Generation Strategies and Control Parameters. IEEE Transactions on Evolutionary Computation 2011, 15, 55–66.
24. Peñuñuri, F.; Cab, C.; Carvente, O.; Zambrano-Arjona, M.; Tapia, J. A study of the Classical Differential Evolution control parameters. Swarm and Evolutionary Computation 2016, 26, 86–96.
25. Tanabe, R.; Fukunaga, A. Success-history based parameter adaptation for Differential Evolution. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, 2013; pp. 71–78.
26. Qin, A.K.; Huang, V.L.; Suganthan, P.N. Differential Evolution Algorithm With Strategy Adaptation for Global Numerical Optimization. IEEE Transactions on Evolutionary Computation 2009, 13, 398–417.
27. Das, S.; Abraham, A.; Chakraborty, U.K.; Konar, A. Differential Evolution Using a Neighborhood-Based Mutation Operator. IEEE Transactions on Evolutionary Computation 2009, 13, 526–553.
28. Rahnamayan, S.; Tizhoosh, H.R.; Salama, M.M. Opposition-based differential evolution. IEEE Transactions on Evolutionary Computation 2008, 12, 64–79.
29. Zhang, J.; Sanderson, A. JADE: Adaptive Differential Evolution With Optional External Archive. IEEE Transactions on Evolutionary Computation 2009, 13, 945–958.
30. Chakraborty, S.; Saha, A.K.; Sharma, S.; Sahoo, S.K.; Pal, G. Comparative Performance Analysis of Differential Evolution Variants on Engineering Design Problems. Journal of Bionic Engineering 2022, 19, 1140–1160.
31. Lagos-Eulogio, P.; Miranda-Romagnoli, P.; Seck-Tuoh-Mora, J.C.; Hernández-Romero, N. Improvement in Sizing Constrained Analog IC via Ts-CPD Algorithm. Computation 2023, 11, 230.
32. Wang, S.; Li, Y.; Yang, H. Self-adaptive mutation differential evolution algorithm based on particle swarm optimization. Applied Soft Computing 2019, 81, 105496.
33. Novaes, S.F. Standard model: An Introduction. In Proceedings of the 10th Jorge Andre Swieca Summer School: Particle and Fields, 1999; pp. 5–102.
34. Langacker, P. Introduction to the Standard Model and Electroweak Physics. In Theoretical Advanced Study Institute in Elementary Particle Physics: The Dawn of the LHC Era, 2010; pp. 3–48.
35. Isidori, G.; Nir, Y.; Perez, G. Flavor Physics Constraints for Physics Beyond the Standard Model. Ann. Rev. Nucl. Part. Sci. 2010, 60, 355. arXiv:1002.0900.
36. Antonelli, M.; et al. Flavor Physics in the Quark Sector. Phys. Rept. 2010, 494, 197–414. arXiv:0907.5386.
37. Branco, G.C.; Lavoura, L.; Silva, J.P. CP Violation; International Series of Monographs on Physics, Vol. 103; 1999.
38. Eidelman, S.; et al. (Particle Data Group). Review of particle physics. Phys. Lett. B 2004, 592, 1.
39. Félix-Beltrán, O.; González-Canales, F.; Hernández-Sánchez, J.; Moretti, S.; Noriega-Papaqui, R.; Rosado, A. Analysis of the quark sector in the 2HDM with a four-zero Yukawa texture using the most recent data on the CKM matrix. Phys. Lett. B 2015, 742, 347–352. arXiv:1311.5210.
40. Barranco, J.; Delepine, D.; Lopez-Lozano, L. Neutrino mass determination from a four-zero texture mass matrix. Phys. Rev. D 2012, 86, 053012.
41. Eberhart, R.C.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science (MHS'95), 1995; pp. 39–43.
42. Shi, Y.; Eberhart, R. A modified particle swarm optimizer. In Proceedings of the 1998 IEEE International Conference on Evolutionary Computation, IEEE World Congress on Computational Intelligence, 1998; pp. 69–73.
43. Storn, R.; Price, K. Differential Evolution: A Simple and Efficient Adaptive Scheme for Global Optimization Over Continuous Spaces; Technical Report TR-95-012, International Computer Science Institute, 1995.
44. Storn, R.; Price, K. Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. Journal of Global Optimization 1997, 11, 341–359.
45. Sánchez Vargas, O.; De León Aldaco, S.E.; Aguayo Alquicira, J.; Vela Valdés, L.G.; Mina Antonio, J.D. Differential Evolution Applied to a Multilevel Inverter – A Case Study. Applied Sciences 2022, 12.
46. Batool, R.; Bibi, N.; Muhammad, N.; Alhazmi, S. Detection of Primary User Emulation Attack Using the Differential Evolution Algorithm in Cognitive Radio Networks. Applied Sciences 2023, 13.
47. Elsayed, S.M.; Sarker, R.A.; Ray, T. Differential evolution with automatic parameter configuration for solving the CEC2013 competition on Real-Parameter Optimization. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, 2013; pp. 1932–1937.
48. Zhong, X.; Cheng, P. An elite-guided hierarchical differential evolution algorithm. Applied Intelligence 2021, 51, 4962–4983.
49. Yang, Q.; Guo, X.; Gao, X.D.; Xu, D.D.; Lu, Z.Y. Differential Elite Learning Particle Swarm Optimization for Global Numerical Optimization. Mathematics 2022, 10.
50. Huang, Y.; Yu, Y.; Guo, J.; Wu, Y. Self-adaptive Artificial Bee Colony with a Candidate Strategy Pool. Applied Sciences 2023, 13.
51. Wang, S.; Li, Y.; Yang, H.; Liu, H. Self-adaptive differential evolution algorithm with improved mutation strategy. Soft Computing 2017, 22, 3433–3447.
52. Kumar, A.; Price, K.V.; Mohamed, A.W.; Hadi, A.A.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Special Session and Competition on Single Objective Bound Constrained Numerical Optimization; Technical Report, 2016; pp. 1–34.
53. Fritzsch, H.; Xing, Z.z.; Zhang, D. Correlations between quark mass and flavor mixing hierarchies. Nuclear Physics B 2022, 974, 115634.
Figure 1. χ² function projections over the variables A_u, A_d, ϕ_1, ϕ_2.
Figure 2. Strategy selection curve: (a) selection probability α_t; (b) distribution of strategy selection.
Figure 3. The pseudocode of HE-DEPSO.
Figure 4. Convergence curves for the functions F1, F3, F5 and F8, with D = 10, 30. The horizontal and vertical axes represent the iterations and the mean error values over the 31 independent repetitions.
Figure 5. Convergence curves for the functions F1, F3, F5 and F8, with D = 10, 30. The horizontal and vertical axes represent the iterations and the mean error values over the 31 independent repetitions.
Figure 6. Convergence curves of the error measure in the solution for the χ²(X) function. The horizontal and vertical axes represent the iterations and the mean error values over the 31 independent repetitions.
Figure 7. Box-and-whisker plots of the best global fit values obtained by HE-DEPSO and the DEPSO, SHADE, CoDE, DE, and PSO algorithms over the 31 independent repetitions on the χ²(X) function. The horizontal axis shows the compared algorithms, and the vertical axis the global fit values.
Figure 8. Allowed regions for A_u and A_d constrained by current experimental data at different precision levels: χ² < 1 (black dots), χ² < 1 × 10⁻¹ (blue dots) and χ² < 1 × 10⁻² (orange dots).
Figure 9. Allowed regions for ϕ_1 and ϕ_2 constrained by current experimental data at different precision levels: χ² < 1 (black dots), χ² < 1 × 10⁻¹ (blue dots) and χ² < 1 × 10⁻² (orange dots).
Figure 10. Predictions for |V_cd| and |V_ud| for η_u = +1 and η_d = +1.
Figure 11. Predictions for |V_cs| and |V_tb| for η_u = +1 and η_d = +1.
Figure 12. Predictions for |V_td| and |V_ts| for η_u = +1 and η_d = +1.
Table 1. Case studies.
Case | η_u | η_d
1 | +1 | +1
2 | +1 | -1
3 | -1 | +1
4 | -1 | -1
Table 2. Details of benchmark functions used in the experiments. Each entry lists the function index, name, and optimum value F_min.

Unimodal Functions:
F1: Shifted and Rotated Bent Cigar, F_min = 100
F3: Shifted and Rotated Zakharov, F_min = 300

Simple Multimodal Functions:
F4: Shifted and Rotated Rosenbrock, F_min = 400
F5: Shifted and Rotated Rastrigin, F_min = 500
F6: Shifted and Rotated Expanded Schaffer F6, F_min = 600
F7: Shifted and Rotated Lunacek Bi-Rastrigin, F_min = 700
F8: Shifted and Rotated Non-Continuous Rastrigin, F_min = 800
F9: Shifted and Rotated Levy, F_min = 900
F10: Shifted and Rotated Schwefel, F_min = 1000

Hybrid Functions:
F11: Hybrid Problem 1 (N = 3), F_min = 1100
F12: Hybrid Problem 2 (N = 3), F_min = 1200
F13: Hybrid Problem 3 (N = 3), F_min = 1300
F14: Hybrid Problem 4 (N = 4), F_min = 1400
F15: Hybrid Problem 5 (N = 4), F_min = 1500
F16: Hybrid Problem 6 (N = 4), F_min = 1600
F17: Hybrid Problem 7 (N = 5), F_min = 1700
F18: Hybrid Problem 8 (N = 5), F_min = 1800
F19: Hybrid Problem 9 (N = 5), F_min = 1900
F20: Hybrid Problem 10 (N = 6), F_min = 2000

Composition Functions:
F21: Composition Problem 1 (N = 3), F_min = 2100
F22: Composition Problem 2 (N = 3), F_min = 2200
F23: Composition Problem 3 (N = 4), F_min = 2300
F24: Composition Problem 4 (N = 4), F_min = 2400
F25: Composition Problem 5 (N = 5), F_min = 2500
F26: Composition Problem 6 (N = 5), F_min = 2600
F27: Composition Problem 7 (N = 6), F_min = 2700
F28: Composition Problem 8 (N = 6), F_min = 2800
F29: Composition Problem 9 (N = 9), F_min = 2900
F30: Composition Problem 10 (N = 3), F_min = 3000
Table 3. Parameter settings of all algorithms.
HE-DEPSO: p_min = 0.1, p_max = 0.4, τ = 1.8, c_1 = c_2 = 2, ω ∈ [0.4, 0.9], H = NP, M_F(1:H) = 0.5, M_Cr(1:H) = 0.8
DEPSO: c_1 = c_2 = 2, ω ∈ [0.4, 0.9], Cr ∈ [0.3, 1.0], F ∈ [0.1, 0.8], NS_max = 5, γ = 0.001, τ = 1.8, SEP = 0.3·NP
SHADE: H = NP, M_F(1:H) = M_Cr(1:H) = 0.5
CoDE: 1) [F = 1.0, Cr = 0.1], 2) [F = 1.0, Cr = 0.9], 3) [F = 0.8, Cr = 0.2]
DE: Cr = 0.9, F = 0.5
PSO: ω ∈ [0.4, 0.9], c_1 = c_2 = 2
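Table 3 gives the inertia weight of the PSO-derived components only as a range, ω ∈ [0.4, 0.9]. The classic realization of such a range is the linearly decreasing schedule of Shi and Eberhart [42]: high ω early for exploration, low ω late for exploitation. Whether HE-DEPSO uses exactly this schedule is an assumption of the sketch below.

```python
# Linearly decreasing inertia weight over the iteration budget; the exact
# schedule used by HE-DEPSO is assumed, not quoted from the paper.

def inertia_weight(t, t_max, w_min=0.4, w_max=0.9):
    """Decrease omega linearly from w_max at t = 0 to w_min at t = t_max."""
    return w_max - (w_max - w_min) * t / t_max

print(inertia_weight(0, 100))    # 0.9 at the first iteration
print(inertia_weight(100, 100))  # 0.4 at the last iteration
```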
Table 4. Comparison of results between HE-DEPSO, DE, PSO, and three advanced variants of the DE algorithm on the set of CEC 2017 test functions with D = 10 .
Function Metrics HE-DEPSO DEPSO SHADE CoDE DE PSO
Mean 0.0000E+00 1.8337E-15 0.0000E+00 1.1580E-10 3.5103E-05 1.1366E+10
F 1 Std 0.0000E+00 4.8427E-15 0.0000E+00 6.0507E-11 2.0793E-05 9.7631E+09
Rank 1 4 1 3 2 5
Significance + + +
Mean 9.1683E-16 1.3752E-14 1.6503E-14 2.0001E-09 1.7359E-04 1.5886E+10
F 3 Std 0.0000E+00 0.0000E+00 0.0000E+00 2.7390E-11 4.8441E-04 6.2297E+02
Rank 1 1 1 2 3 4
Significance + + +
Mean 0.0000E+00 2.3838E-14 2.0805E+00 2.3103E-09 1.0409E-03 5.8071E+01
F 4 Std 0.0000E+00 2.8513E-14 6.7544E-01 1.2601E-09 6.4578E-04 6.9704E+01
Rank 1 2 5 3 4 6
Significance + + + + +
Mean 2.8439E+00 4.0119E+00 3.8489E+00 8.5594E+00 2.9710E+01 3.0728E+01
F 5 Std 7.2094E-01 1.6940E+00 1.0401E+00 1.4246E+00 3.9355E+00 1.2073E+01
Rank 1 3 2 4 5 6
Significance + + + + +
Mean 0.0000E+00 4.6321E-03 2.7530E-04 4.6895E-03 4.5748E-05 1.1970E+01
F 6 Std 0.0000E+00 2.5790E-02 5.5026E-04 2.4954E-03 2.2095E-05 9.6167E+00
Rank 1 4 3 5 2 6
Significance + + + + +
Mean 1.2678E+01 2.1067E+01 1.3953E+01 2.0678E+01 3.9143E+01 3.0657E+01
F 7 Std 7.0313E-01 6.7134E+00 7.6551E-01 2.2560E+00 3.7389E+00 8.6791E+00
Rank 1 4 2 3 6 5
Significance + + + + +
Mean 2.2853E+00 4.2045E+00 3.8070E+00 8.6884E+00 2.6037E+01 2.4833E+01
F 8 Std 8.1114E-01 1.9346E+00 7.3957E-01 1.8765E+00 5.5290E+00 1.1493E+01
Rank 1 3 2 4 6 5
Significance + + + + +
Mean 0.0000E+00 7.3346E-15 0.0000E+00 0.0000E+00 0.0000E+00 6.8233E+01
F 9 Std 0.0000E+00 2.8391E-14 0.0000E+00 0.0000E+00 0.0000E+00 1.9812E+02
Rank 1 2 1 1 1 3
Significance + +
Mean 2.4499E+02 8.9742E+02 1.5812E+02 7.4438E+02 1.4730E+03 9.1356E+02
F 10 Std 1.1864E+02 1.6890E+02 7.6565E+01 1.1745E+02 1.7509E+02 3.3829E+02
Rank 2 4 1 3 6 5
Significance + + +
Mean 2.5687E-07 8.6658E-01 2.3208E+00 1.2502E+00 6.7909E+00 8.1368E+01
F 11 Std 1.4103E-06 6.1559E-01 4.7869E-01 4.6467E-01 9.9142E-01 8.3317E+01
Rank 1 2 4 3 5 6
Significance + + + + +
Mean 8.1966E+01 1.1535E+01 3.5298E+01 2.7652E+02 5.3458E+02 8.7444E+07
F 12 Std 7.0615E+01 2.8045E+01 1.4666E+01 6.4786E+01 1.2345E+02 3.8318E+08
Rank 3 1 2 4 5 6
Significance + + +
Mean 3.8882E+00 4.6716E+00 5.9948E+00 1.1231E+01 1.2796E+01 6.3448E+03
F 13 Std 2.4042E+00 9.7586E-01 6.0682E-01 2.0901E+00 1.7030E+00 1.2009E+04
Rank 1 2 3 4 5 6
Significance + + + +
Mean 2.4091E+00 6.8868E+00 1.9065E+01 2.0295E+01 2.6185E+01 1.7551E+02
F 14 Std 5.1573E+00 8.0249E+00 3.0340E+00 1.2717E-01 1.2365E+00 9.5615E+01
Rank 1 2 3 4 5 6
Significance + + + + +
Mean 3.6646E-01 5.7132E-01 1.0838E+00 1.3990E+00 2.1239E+00 4.6041E+02
F 15 Std 2.3145E-01 4.9456E-01 3.3723E-01 5.4455E-01 1.2934E+00 1.0508E+03
Rank 1 2 3 4 5 6
Significance + + + + +
Mean 1.4450E+00 1.6357E+00 4.5365E+00 4.3009E+00 1.9028E+01 1.2496E+02
F 16 Std 3.2202E-01 3.2942E+00 2.2215E+00 2.5242E+00 1.6484E+01 1.0780E+02
Rank 1 2 4 3 5 6
Significance + + + +
Mean 7.2016E+00 1.2075E+01 2.1001E+01 2.0112E+01 2.8168E+01 5.7710E+01
F 17 Std 3.0221E+00 8.5311E+00 3.2211E+00 2.8326E+00 1.8397E+00 2.4017E+01
Rank 1 2 4 3 5 6
Significance + + + + +
Mean 3.8770E-01 8.2949E+00 1.9823E+01 2.0117E+01 2.0542E+01 2.7241E+04
F 18 Std 1.4583E-01 6.7399E+00 1.9348E+00 3.4569E-02 1.0532E-01 1.8731E+04
Rank 1 2 3 4 5 6
Significance + + + + +
Mean 9.7895E-02 5.4625E-01 1.0349E+00 5.9210E-01 1.2208E+00 5.0978E+03
F 19 Std 1.2871E-01 3.7564E-01 1.5715E-01 4.4716E-02 2.0310E-01 1.1584E+04
Rank 1 2 4 3 5 6
Significance + + + + +
Mean 5.2046E+00 6.9310E+00 1.7787E+01 2.0000E+01 2.5137E+01 5.9825E+01
F 20 Std 3.1153E+00 6.8469E+00 3.9314E+00 1.4932E-04 1.9501E+00 4.0129E+01
Rank 1 2 3 4 5 6
Significance + + + +
Mean 1.7676E+02 1.0695E+02 1.5575E+02 1.2554E+02 1.9598E+02 1.9045E+02
F 21 Std 4.6029E+01 2.0972E+01 4.2795E+01 4.8087E+01 5.7606E+01 6.2148E+01
Rank 4 1 3 2 6 5
Significance + +
Mean 1.0000E+02 9.7428E+01 9.9647E+01 7.4280E+01 1.0070E+02 1.2711E+02
F 22 Std 0.0000E+00 1.5939E+01 2.1997E+00 4.4524E+01 5.7442E-01 5.6792E+01
Rank 4 2 3 1 5 6
Significance + + +
Mean 3.0613E+02 3.0845E+02 3.0859E+02 3.2072E+02 3.3600E+02 3.4479E+02
F 23 Std 1.4783E+00 2.1842E+00 1.4096E+00 3.4098E+00 5.2784E+00 1.6592E+01
Rank 1 2 3 4 5 6
Significance + + + + +
Mean 3.2397E+02 2.6066E+02 3.0017E+02 3.0685E+02 3.6119E+02 3.7743E+02
F 24 Std 4.1615E+01 1.1813E+02 6.5109E+01 9.2286E+01 4.7582E+00 1.5025E+01
Rank 4 1 2 3 5 6
Significance + +
Mean 4.1869E+02 4.2783E+02 4.0980E+02 3.9783E+02 4.0247E+02 4.7998E+02
F 25 Std 2.3095E+01 2.2521E+01 2.0463E+01 1.5694E-01 1.3981E+01 6.1710E+01
Rank 4 5 3 1 2 6
Significance + +
Mean 3.0304E+02 3.0849E+02 3.0085E+02 2.9625E+02 3.8419E+02 5.6150E+02
F 26 Std 1.1775E+01 3.2854E+01 1.7405E+01 6.3643E+01 1.6020E+02 2.5242E+02
Rank 3 4 2 1 5 6
Significance + + +
Mean 3.7584E+02 3.7153E+02 3.7683E+02 4.1575E+02 3.9004E+02 4.1060E+02
F 27 Std 2.3098E+01 5.3891E-01 7.7935E+00 1.8031E+01 3.4010E+01 2.0006E+01
Rank 2 1 3 5 4 6
Significance + + + +
Mean 4.7251E+02 4.7218E+02 4.7338E+02 4.7816E+02 4.7253E+02 6.1933E+02
F 28 Std 3.0463E-02 1.8112E+00 2.7252E+00 6.1120E+00 1.6134E-01 1.5566E+02
Rank 2 1 4 5 3 6
Significance + + + +
Mean 2.3616E+02 2.3508E+02 2.4658E+02 2.5569E+02 2.8284E+02 3.4235E+02
F 29 Std 6.1991E+00 8.2538E+00 6.1477E+00 9.0010E+00 1.4685E+01 9.4838E+01
Rank 2 1 3 4 5 6
Significance + + + +
Mean 2.0061E+02 2.0113E+02 2.0227E+02 2.0313E+02 2.0320E+02 1.6044E+06
F 30 Std 1.5718E-01 5.2187E-01 3.8772E-01 4.7938E-01 8.5716E-01 2.1981E+06
Rank 1 2 3 4 5 6
Significance + + + + +
Average rank 1.68 2.27 2.75 3.24 4.48 5.48
Final rank 1 2 3 4 5 6
W/T/L -/-/- 17/9/3 20/4/5 24/3/2 25/4/0 29/0/0
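The "Average rank" and "Final rank" rows above are obtained by averaging each algorithm's per-function ranks and then ranking those averages. A minimal sketch, using a small hypothetical excerpt of the per-function ranks rather than the full table:

```python
# Average each algorithm's per-function rank, then rank the averages
# (lower average rank = better final rank).

def final_ranks(per_function_ranks):
    """per_function_ranks: dict algorithm -> list of ranks, one per function."""
    avg = {a: sum(r) / len(r) for a, r in per_function_ranks.items()}
    ordered = sorted(avg, key=avg.get)            # best (lowest) average first
    return avg, {a: i + 1 for i, a in enumerate(ordered)}

# Hypothetical excerpt of per-function ranks:
ranks = {"HE-DEPSO": [1, 1, 1, 2], "DEPSO": [4, 1, 2, 1], "PSO": [5, 4, 6, 5]}
avg, final = final_ranks(ranks)
```

Ties in the averages would need an explicit tie-breaking rule; none occur in this excerpt.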
Table 5. Comparison of results between HE-DEPSO, DE, PSO, and three advanced variants of the DE algorithm on the CEC 2017 test function set with D = 30 .
Function Metrics HE-DEPSO DEPSO SHADE CoDE DE PSO
Mean 1.6045E-14 2.0034E-04 2.3598E+03 1.2860E+08 2.0817E+09 6.3600E+10
F 1 Std 6.0758E-15 4.9086E-04 1.2985E+03 3.5599E+07 7.3074E+08 5.8100E+10
Rank 1 2 3 4 5 6
Significance + + + + +
Mean 1.4486E-13 1.3343E+02 6.1189E+04 7.0539E+04 4.1473E+05 2.1200E+04
F 3 Std 4.1092E-14 8.8712E+01 2.2749E+04 2.6622E+04 1.3053E+05 2.9800E+04
Rank 1 2 4 5 6 3
Significance + + + + +
Mean 2.8405E+00 1.2605E+01 7.7040E+01 9.6061E+01 3.5736E+01 1.7300E+03
F 4 Std 2.1842E+00 8.6176E+00 3.8595E+00 2.0503E+01 3.2288E+00 1.3000E+03
Rank 1 2 4 5 3 6
Significance + + + + +
Mean 2.3850E+01 1.8175E+02 9.3463E+01 1.8418E+02 2.4169E+02 1.8500E+02
F 5 Std 5.0701E+00 1.2845E+01 1.0871E+01 1.2630E+01 1.2302E+01 4.4900E+01
Rank 1 3 2 4 6 5
Significance + + + + +
Mean 4.6793E-01 5.7971E-03 3.7760E+01 5.8751E+01 2.4651E+01 5.3800E+01
F 6 Std 3.5101E-01 2.9636E-02 5.8394E+00 5.2672E+00 3.3305E+00 1.2000E+01
Rank 2 1 4 6 3 5
Significance + + + +
Mean 5.2487E+01 2.3377E+02 1.1601E+02 2.1572E+02 2.6887E+02 2.9100E+02
F 7 Std 3.3716E+00 1.4172E+01 8.0551E+00 1.2272E+01 1.3070E+01 1.1300E+02
Rank 1 4 2 3 5 6
Significance + + + + +
Mean 2.0726E+01 1.7779E+02 9.3771E+01 1.8161E+02 2.3844E+02 1.7100E+02
F 8 Std 4.6475E+00 1.1810E+01 9.9764E+00 1.3211E+01 1.6657E+01 3.8700E+01
Rank 1 4 2 5 6 3
Significance + + + + +
Mean 1.1236E+00 2.8880E-03 7.6008E+02 4.1017E+03 4.8783E+02 2.8400E+03
F 9 Std 9.0322E-01 1.6080E-02 2.7907E+02 8.3155E+02 1.5998E+02 1.2400E+03
Rank 2 1 4 6 3 5
Significance + + + +
Mean 2.6526E+03 6.7636E+03 3.4600E+03 6.3166E+03 7.9631E+03 5.2400E+03
F 10 Std 3.0794E+02 2.5866E+02 2.3363E+02 2.0917E+02 3.0010E+02 8.1500E+02
Rank 1 5 2 4 6 3
Significance + + + + +
Mean 3.2932E+01 7.9240E+01 7.3961E+01 2.7578E+02 2.1665E+02 9.3600E+02
F 11 Std 1.7540E+01 9.4859E+00 1.6453E+01 6.0425E+01 4.1872E+01 1.7000E+03
Rank 1 3 2 5 4 6
Significance + + + + +
Mean 6.0704E+03 7.4374E+04 1.7001E+06 5.9649E+08 2.5500E+09 1.1600E+10
F 12 Std 5.2411E+03 7.3297E+04 5.7892E+05 1.8156E+08 8.9672E+08 1.5200E+10
Rank 1 2 3 4 5 6
Significance + + + + +
Mean 5.3737E+01 2.3200E+03 7.2862E+03 3.3010E+08 1.0201E+08 1.0300E+10
F 13 Std 4.9535E+01 5.3868E+03 2.4057E+04 8.7976E+07 3.9539E+07 1.5000E+10
Rank 1 2 3 5 4 6
Significance + + + + +
Mean 5.1892E+01 9.7833E+01 7.5446E+01 1.5942E+04 4.0951E+03 2.2100E+05
F 14 Std 1.3639E+01 2.2150E+01 1.0230E+01 4.5198E+03 8.1599E+02 4.9000E+05
Rank 1 3 2 5 4 6
Significance + + + + +
Mean 6.7166E+01 7.9018E+01 1.0487E+02 3.0409E+07 3.1028E+06 2.9100E+08
F 15 Std 4.6957E+01 3.4973E+01 2.4704E+01 9.9730E+06 1.2198E+06 1.6200E+09
Rank 1 2 3 5 4 6
Significance + + +
Mean 2.7544E+02 1.4323E+03 8.5936E+02 1.6794E+03 2.3141E+03 1.7100E+03
F 16 Std 1.4178E+02 1.6353E+02 1.1941E+02 1.7833E+02 1.8579E+02 6.0700E+02
Rank 1 3 2 4 6 5
Significance + + + + +
Mean 6.6918E+01 3.8170E+02 1.9226E+02 6.8169E+02 1.1646E+03 8.4800E+02
F 17 Std 1.3274E+01 1.0402E+02 6.2540E+01 9.4653E+01 1.4710E+02 3.2100E+02
Rank 1 3 2 4 6 5
Significance + + + + +
Mean 6.1722E+01 2.4655E+02 2.3829E+05 3.4473E+05 7.9235E+05 1.9200E+06
F 18 Std 2.7440E+01 2.4587E+02 1.4060E+05 1.1610E+05 1.9796E+05 4.8700E+06
Rank 1 2 3 4 5 6
Significance + + + + +
Mean 4.0473E+01 3.7316E+01 3.2834E+01 5.0014E+03 3.7873E+04 4.2500E+08
F 19 Std 2.2587E+01 7.4498E+00 6.1811E+00 2.4562E+03 2.1473E+04 7.1600E+08
Rank 3 2 1 4 5 6
Significance + + +
Mean 8.7890E+01 2.6052E+02 3.3149E+02 4.8347E+02 1.0591E+03 6.0400E+02
F 20 Std 3.2291E+01 9.0720E+01 7.0732E+01 7.4079E+01 1.1750E+02 2.3100E+02
Rank 1 2 3 4 6 5
Significance + + + + +
Mean 2.2047E+02 3.7820E+02 2.7893E+02 3.9662E+02 4.4351E+02 3.8400E+02
F 21 Std 4.7309E+00 1.3317E+01 3.7140E+01 7.1300E+00 1.3511E+01 5.0800E+01
Rank 1 3 2 5 6 4
Significance +
Mean 1.0186E+02 2.9760E+03 2.2144E+02 6.5363E+03 8.0766E+03 4.5500E+03
F 22 Std 2.0196E+00 3.4464E+03 2.9380E+01 5.3749E+02 2.9236E+02 2.0400E+03
Rank 1 3 2 5 6 4
Significance + + + +
Mean 3.7543E+02 5.4924E+02 4.4917E+02 5.5419E+02 5.8732E+02 7.2800E+02
F 23 Std 8.3139E+00 1.3235E+01 9.7459E+00 1.2816E+01 1.9032E+01 7.9500E+01
Rank 1 3 2 4 5 6
Significance + + + + +
Mean 4.4893E+02 6.2854E+02 5.3810E+02 6.9034E+02 6.7462E+02 8.2800E+02
F 24 Std 6.6585E+00 1.3484E+01 1.6953E+01 1.5747E+01 1.2401E+01 1.0000E+02
Rank 1 3 2 5 4 6
Significance + +
Mean 3.8051E+02 3.7837E+02 3.7882E+02 4.1044E+02 3.9396E+02 5.6400E+02
F 25 Std 1.1308E+01 6.0195E-02 1.1887E+00 6.6423E+00 4.4183E+00 2.0000E+02
Rank 3 1 2 5 4 6
Significance + +
Mean 1.3469E+03 2.6899E+03 1.4149E+03 2.6587E+03 3.0652E+03 4.9700E+03
F 26 Std 9.6715E+01 2.2460E+02 5.7298E+02 9.0222E+01 1.2139E+02 9.0600E+02
Rank 1 4 2 3 5 6
Significance + + + +
Mean 5.0001E+02 5.0001E+02 5.0001E+02 5.0001E+02 5.0001E+02 7.1600E+02
F 27 Std 1.5786E-04 1.2917E-04 9.3150E-05 5.9917E-05 8.1534E-05 1.0700E+02
Rank 1 1 1 1 1 2
Significance
Mean 4.9902E+02 4.9059E+02 4.9845E+02 5.0001E+02 5.0001E+02 1.2300E+03
F 28 Std 3.0735E+00 4.2116E+00 1.3995E+00 6.6494E-05 6.6473E-05 8.4400E+02
Rank 3 1 2 4 4 5
Significance +
Mean 3.8546E+02 9.3919E+02 4.6431E+02 1.6187E+03 2.0724E+03 1.9200E+03
F 29 Std 4.5529E+01 2.1194E+02 4.2813E+01 1.8521E+02 1.8256E+02 6.1300E+02
Rank 1 3 2 4 6 5
Significance + + + + +
Mean 2.8659E+02 2.3933E+02 1.7439E+04 1.4650E+07 2.4426E+06 4.7800E+07
F 30 Std 5.8034E+01 6.7166E+00 2.1431E+04 6.8820E+06 1.5971E+06 8.9800E+07
Rank 2 1 3 5 4 6
Significance + + + +
Average rank 1.31 2.44 2.44 4.37 4.75 5.13
Final rank 1 2 2 3 4 5
W/T/L -/-/- 19/5/5 20/9/0 26/3/0 25/4/0 27/2/0
Table 6. Comparison of results between HE-DEPSO, DE, PSO, and three advanced variants of the DE algorithm on the χ²(X) function.
Function Metrics HE-DEPSO DEPSO SHADE CoDE DE PSO
Mean 1.7267E-28 3.4971E-28 7.2296E-03 2.3720E-05 6.1610E-13 7.4247E+00
χ 2 ( X ) Std 1.4496E-28 5.6400E-28 8.7102E-03 7.7748E-05 3.4294E-12 1.6815E+01
Classification 1 2 5 4 3 6
Significance + + + + +
Final classification 1 2 5 4 3 6
W/T/L -/-/- 1/0/0 1/0/0 1/0/0 1/0/0 1/0/0