1. Introduction
Broadly speaking, physics can be classified into three areas: theoretical physics, which explains and understands physical phenomena through a mathematical formalism; experimental physics, which groups the disciplines related to data acquisition and to the design and performance of experiments; and, linking these two, phenomenology, which is in charge of 1) validating theoretical models, 2) establishing how to measure their parameters, 3) determining how to distinguish one model from another, and 4) studying the experimental consequences of these models. This paper takes a phenomenological approach by providing a solution to the validation of a theoretical model of particle physics, specifically one that aims to unveil the mechanisms of fermion mass generation and to reproduce the elements of the quark mixing matrix $V_{CKM}$.
The beginning of these models goes back to the early 1970s, shortly after the establishment of the Standard Model (SM) of Particle Physics. Since then, different approaches have been developed in the context of theoretical and phenomenological models, which can be broadly classified as follows: radiative mechanisms [1,2]; textures [3,4,5]; symmetries between families [6,7]; and seesaw mechanisms [8,9,10]. These approaches are interrelated.
The texture formalism was born by considering that certain entries of the mass matrix are zero, such that the matrix that diagonalizes it, and hence the mixing matrix $V_{CKM}$, can be computed analytically. In 1977, Harald Fritzsch created this formalism by proposing 6-zero Hermitian textures as a viable model [11]; in 2003, using the experimental data of those years, he found 4-zero Hermitian textures viable to generate the quark masses and the mixing matrix [12]. However, given the precision of current experimental data, it is worthwhile to reassess the feasibility of these texture models.
There are numerical works on 4-zero texture models; however, their authors do not describe in detail the numerical techniques and algorithms used [13]. The numerical analysis of texture models requires a criterion that establishes when the theoretical part of the model agrees with the experimental part [14]. That is, to validate the model, a function (which we will call $\chi^2$) is built, and permissible values of the free parameters X of the model must be found such that this function takes the minimum possible value, greater than 0 but less than 1. In other words, it is necessary to optimize said function under a certain threshold. For the particular case of the $\chi^2$ function obtained from the texture formalism, the difficulty comes from the cumbersome algebraic manipulation of the expressions involved, which complicates the application of classic optimization techniques; thus, alternative optimization techniques are required.
To the best of the authors’ knowledge, the works where bio-inspired optimization algorithms have been used within particle physics are the following: in experimental contexts, the particle swarm optimization (PSO) algorithm as well as genetic algorithms (GA) have been implemented for the hyperparameter optimization of machine learning (ML) models used in the analysis of data obtained in high-energy physics experiments [15]; in the optimization of the design of particle accelerators, the differential evolution (DE) algorithm has proven quite effective [16,17]; regarding phenomenology, genetic algorithms have been used to discriminate models from supersymmetric theories [18].
As can be seen, the adoption of bio-inspired optimization algorithms in particle physics has been very limited, even when the results are favorable. For this reason, further application of these techniques and algorithms in this area of particle physics, especially in the texture formalism, is of great interest.
The DE algorithm is one of the evolutionary algorithms that has stood out the most in recent times due to its simplicity, power, and efficiency [19,20]; however, like other evolutionary algorithms, it is prone to premature convergence and stagnation in local minima [19,21]. The strategies implemented to solve these problems can be classified as follows [22]:
Change the mutation strategy. The mutation phase of the DE algorithm is important since it allows the integration of new individuals into the existing population, thus influencing its performance. Algorithms such as CoDE [23] have introduced new mutation strategies and combined several existing ones in order to improve the efficiency of the DE algorithm.
Change the parameter control. The DE algorithm is sensitive to the configuration of its main parameters: the scale factor F and the crossover probability CR [19,24]. Self-adaptive control of these parameters has been shown to improve the performance of the DE algorithm significantly. In this sense, the SHADE [25] and SaDE [26] algorithms represent two fairly well-known variants.
Incorporation of population mechanisms. The way in which the population is handled in the DE algorithm can improve its performance. Techniques such as genotypic topology [27], opposition-based initialization [28], and external population archives, as seen in JADE [29], have shown positive effects.
Hybridization with other optimization algorithms. One way to improve the performance of the DE algorithm is to take advantage of the strengths of operators from other algorithms and incorporate them into the structure of the DE algorithm through a hybridization process [20,30]. Hybridization with other computational intelligence algorithms, such as Artificial Neural Networks (ANN), Ant Colony Optimization (ACO), and Particle Swarm Optimization (PSO), has marked a trend within the last decade [20,31].
The self-adaptive Differential Evolution algorithm based on Particle Swarm Optimization (DEPSO) [32] is a recent variant of the DE algorithm that integrates the PSO mutation strategy within its structure. DEPSO employs a probabilistic selection technique to choose between two mutation strategies: a new mutation strategy with an elite archive, called DE/e-rand/1, which is a modification of the DE/rand/1 strategy, and the mutation strategy of the PSO algorithm. This probabilistic selection technique enables DEPSO to improve the balance between exploitation and exploration, resulting in significantly better performance compared to both DE and PSO on various single-objective optimization problems.
This paper proposes a new variant of the DEPSO algorithm, called Historical Elite Differential Evolution Based on Particle Swarm Optimization (HE-DEPSO), to improve optimization performance on complex single-objective problems, with a specific focus on optimizing the $\chi^2$ criterion of the 4-zero texture model of high-energy physics. The proposed variant addresses areas of opportunity identified in DEPSO, introducing a new mutation strategy named DE/current-to-EHE/1, which leverages information from the elite individuals of the population together with historical data from the evolutionary process to improve the balance between exploration and exploitation, particularly during the early stages of the evolutionary process. Additionally, HE-DEPSO employs the self-adaptive parameter control mechanism of the SHADE algorithm to reduce the sensitivity of the algorithm’s parameters. To test its performance, HE-DEPSO was compared against other optimization algorithms, including DE, PSO, CoDE, SHADE, and DEPSO, on the CEC 2017 single-objective benchmark function set, where it outperformed the other algorithms in terms of solution quality. Finally, the validation of the 4-zero texture model was conducted by optimizing the $\chi^2$ function while comparing the performance of our proposal against the DE, PSO, CoDE, SHADE, and DEPSO algorithms for this particular application. The results are encouraging and expand the use of bio-inspired methods in high-energy physics by integrating a metaheuristic approach.
The remainder of the paper is structured as follows: Section 2 contains the definition of the problem to be solved, including the definition of the $\chi^2$ criterion; Section 3 reviews the versions of the PSO, DE and DEPSO algorithms used here; Section 4 explains our proposal, the HE-DEPSO algorithm; Section 5 subjects our algorithm to benchmark tests, while Section 6 presents the validation problem of the 4-zero texture model. Finally, Section 7 presents the conclusions.
2. Problem Definition
In the SM scope, the masses of the quarks come from Hermitian $3\times 3$ matrices known as mass matrices (one for u-type quarks and another for d-type quarks) [33,34,35]: the quark masses are the absolute values of their eigenvalues, and the mixing matrix $V_{CKM}$ is the product of the matrix that diagonalizes the u-type quark mass matrix and the matrix that diagonalizes the d-type one. However, due to the mathematical formalism used, the mass matrices remain entirely unknown and, consequently, the quark masses and the mixing matrix $V_{CKM}$ cannot be theoretically predicted. The experiment provides us with the numerical values of these quantities.
The mixing matrix $V_{CKM}$ can be written in a general way as:

$$V_{CKM}=\begin{pmatrix} V_{ud} & V_{us} & V_{ub} \\ V_{cd} & V_{cs} & V_{cb} \\ V_{td} & V_{ts} & V_{tb} \end{pmatrix},$$

and is a unitary matrix containing information about the probability of transition between u-type quarks (left-hand index) and d-type quarks (right-hand index) through the weak interaction [36]. That is, $|V_{us}|^2$ quantifies the transition probability between the quark u and the quark s through the interaction with the boson $W$. Over many years and different collaborations, the magnitudes of the elements of the mixing matrix $V_{CKM}$ have been experimentally measured with high accuracy, and it is well known that four quantities (three angles and one phase) are needed for a parameterization that adequately describes the matrix $V_{CKM}$ [37,38]. In this work we choose the Chau-Keung parametrization [37]; the three corresponding angles $\theta_{12}$, $\theta_{13}$, $\theta_{23}$ are obtained by choosing the three magnitudes of the elements $|V_{us}|$, $|V_{ub}|$ and $|V_{cb}|$, and the phase $\delta$ is obtained from the Jarlskog invariant (J) through the following relations:

$$\sin\theta_{13}=|V_{ub}|,\qquad \sin\theta_{12}=\frac{|V_{us}|}{\sqrt{1-|V_{ub}|^{2}}},\qquad \sin\theta_{23}=\frac{|V_{cb}|}{\sqrt{1-|V_{ub}|^{2}}},\qquad \sin\delta=\frac{J}{s_{12}\,s_{13}\,s_{23}\,c_{12}\,c_{23}\,c_{13}^{2}},$$

where $s_{ij}\equiv\sin\theta_{ij}$ and $c_{ij}\equiv\cos\theta_{ij}$. Hence, we choose $|V_{us}|$, $|V_{cb}|$, $|V_{ub}|$, and J as independent quantities.
Within the SM context, the mixing matrix $V_{CKM}$ is defined by [37]:

$$V_{CKM}=U_{u}^{\dagger}U_{d},$$

where $U_u$ is the matrix that diagonalizes the mass matrix of the u-type quarks, and $U_d$ is the matrix that diagonalizes that of the d-type quarks. It is here where the texture formalism is born, which consists of proposing structures with zeros in some entries of the mass matrices in such a way that we can find the matrices that diagonalize them, calculate analytically the mixing matrix $V_{CKM}$, and validate the chosen texture structure.
Without loss of generality, the mass matrices $M_u$ and $M_d$ are considered Hermitian, so that the general mass structure has the form:

$$M_{q}=\begin{pmatrix} E_{q} & D_{q} & F_{q} \\ D_{q}^{*} & C_{q} & B_{q} \\ F_{q}^{*} & B_{q}^{*} & A_{q} \end{pmatrix},$$

where the index q runs over the labels $u,d$. The elements $A_q$, $C_q$ and $E_q$ are real, while $B_q$, $D_q$ and $F_q$ are complex and are usually written in their polar form $X_q=|X_q|\,e^{i\phi_{X_q}}$, where $|X_q|$ is the magnitude and $\phi_{X_q}$ its angular phase ($X=B,D,F$).
A matrix of the four-zero texture type [12] is formed from the above matrix by taking the entries $E_q$, $F_q$ and $F_q^{*}$ equal to zero. Thus, we arrive at the following matrix structure:

$$M_{q}=\begin{pmatrix} 0 & D_{q} & 0 \\ D_{q}^{*} & C_{q} & B_{q} \\ 0 & B_{q}^{*} & A_{q} \end{pmatrix}.$$
In references [3,12] it is shown that this matrix can be diagonalized by a unitary matrix as follows:

$$M_{q}=U_{q}\,\Delta_{q}\,U_{q}^{\dagger},$$

where $\Delta_{q}=\mathrm{diag}(\lambda_{1}^{q},\lambda_{2}^{q},\lambda_{3}^{q})$ denotes a diagonal matrix and $\lambda_{i}^{q}$ denotes each of the three eigenvalues of $M_q$. The matrix $U_q$ factorizes as $U_{q}=P_{q}\,O_{q}$, where $P_q$ is the diagonal phase matrix

$$P_{q}=\mathrm{diag}\left(1,\;e^{i\phi_{D_{q}}},\;e^{i\left(\phi_{D_{q}}+\phi_{B_{q}}\right)}\right),$$

and $O_q$ is a real orthogonal matrix whose explicit entries (Equation (9)) are functions of the eigenvalues $\lambda_{i}^{q}$. Taking $q=u$ and $q=d$, the relations between the $\lambda_{i}^{q}$ and the physical masses of the quarks are:

$$\left|\lambda_{1}^{u}\right|=m_{u},\quad\left|\lambda_{2}^{u}\right|=m_{c},\quad\left|\lambda_{3}^{u}\right|=m_{t},\qquad\left|\lambda_{1}^{d}\right|=m_{d},\quad\left|\lambda_{2}^{d}\right|=m_{s},\quad\left|\lambda_{3}^{d}\right|=m_{b},$$

where $m_u$ is the u quark mass, $m_c$ is the c quark mass, $m_t$ is the t quark mass, $m_d$ is the d quark mass, $m_s$ is the s quark mass, and $m_b$ is the b quark mass; their experimental values are presented in Appendix A. In this work, the same four-zero texture structure is considered for the u-type quark mass matrix and the d-type quark mass matrix (a parallel mass structure). The q-index takes the two values u and d.
From Equation (9), it is noticed that the elements of the matrix $O_q$ depend on the free parameters $\eta_q$ and $\delta_q$. The first parameter takes the values $+1$ and $-1$ and tells us which of the first two eigenvalues of the mass matrix is negative: when $\eta_q=+1$, the first eigenvalue $\lambda_{1}^{q}$ is negative and the second eigenvalue $\lambda_{2}^{q}$ is positive; when $\eta_q=-1$, the first eigenvalue $\lambda_{1}^{q}$ is positive and the second eigenvalue $\lambda_{2}^{q}$ is negative. The combinations of signs of the $\eta_u$ and $\eta_d$ parameters define the different cases of study to be considered (see Table 1). The second pair of free parameters are $\delta_u$ and $\delta_d$, whose values are restricted to intervals that ensure that the elements of the $O_u$ and $O_d$ matrices are real.
From the above, the mixing matrix $V_{CKM}$ predicted by the four-zero texture model is given by:

$$V_{CKM}^{th}=U_{u}^{\dagger}U_{d}=O_{u}^{T}\,\bar{P}\,O_{d},\qquad \bar{P}=\mathrm{diag}\left(1,\;e^{i\phi_{1}},\;e^{i\phi_{2}}\right),$$

in an explicit form:

$$V_{ij}^{th}=\left(O_{u}\right)_{1i}\left(O_{d}\right)_{1j}+\left(O_{u}\right)_{2i}\left(O_{d}\right)_{2j}\,e^{i\phi_{1}}+\left(O_{u}\right)_{3i}\left(O_{d}\right)_{3j}\,e^{i\phi_{2}},$$

where the phases $\phi_1$ and $\phi_2$ are defined as:

$$\phi_{1}=\phi_{D_{d}}-\phi_{D_{u}},\qquad \phi_{2}=\left(\phi_{D_{d}}+\phi_{B_{d}}\right)-\left(\phi_{D_{u}}+\phi_{B_{u}}\right).$$

These phases are measured in radians with principal argument in $[0,2\pi)$, and the indices i and j correspond, respectively, to the indices $(u,c,t)$ and $(d,s,b)$. The magnitude of the elements of the mixing matrix is given by:

$$\left|V_{ij}^{th}\right|=\sqrt{\mathrm{Re}\left(V_{ij}^{th}\right)^{2}+\mathrm{Im}\left(V_{ij}^{th}\right)^{2}}.$$
At this point, it is essential to emphasize that analytical expressions are obtained for the elements of the mixing matrix predicted by the four-zero texture formalism; with this information, it is possible to construct a theory more complete than the Standard Model in which the origin of the mixing matrix is explained.
It only remains to validate the 4-zero texture model, that is, to find the ranges of the free parameters $\delta_u$, $\delta_d$, $\phi_1$ and $\phi_2$ that agree with the experimental values of the matrix $V_{CKM}$. For this, we define a function $\chi^2$ and use a chi-square criterion [39,40] established by:

$$\chi^{2}\left(\delta_{u},\delta_{d},\phi_{1},\phi_{2}\right)<1,$$

where the $\chi^2$ function is defined as:

$$\chi^{2}=\frac{\left(\left|V_{us}^{th}\right|-\left|V_{us}\right|\right)^{2}}{\sigma_{V_{us}}^{2}}+\frac{\left(\left|V_{ub}^{th}\right|-\left|V_{ub}\right|\right)^{2}}{\sigma_{V_{ub}}^{2}}+\frac{\left(\left|V_{cb}^{th}\right|-\left|V_{cb}\right|\right)^{2}}{\sigma_{V_{cb}}^{2}}+\frac{\left(J^{th}-J\right)^{2}}{\sigma_{J}^{2}},$$

the quantities with super-index “th” are given by Equation (15), and the quantities without super-index are the experimental data with uncertainty $\sigma$ (see Equations (A1) and (A2)).
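To make the criterion concrete, the following Python sketch evaluates the chi-square agreement for a candidate parameter point. It is a minimal illustration only: the helper `predict_observables`, the dictionary keys, and the data layout are hypothetical placeholders, not the code used in this work.

```python
import numpy as np

def chi_square(theory, experiment):
    """Quadratic agreement between model predictions and experimental
    central values, weighted by the experimental uncertainties."""
    total = 0.0
    for key in ("Vus", "Vub", "Vcb", "J"):
        central, sigma = experiment[key]          # central value and error
        total += (theory[key] - central) ** 2 / sigma ** 2
    return total

# Hypothetical usage for a point x = (delta_u, delta_d, phi_1, phi_2):
# theory = predict_observables(x)   # |V_us^th|, |V_ub^th|, |V_cb^th|, J^th from Eq. (15)
# accept = chi_square(theory, experiment) < 1.0   # the criterion of Eq. (16)
```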
Although, at first sight, the mathematically constructed function $\chi^2$ turns out to have a simple structure and the number of free parameters is small, the difficulty of finding the numerical ranges of the parameters that fulfill the condition given in Equation (16) comes from the cumbersome composition of functions that constitute it. In order to have a notion of the topographic relief of this function, different projections of the function $\chi^2$ onto different planes are shown in Figure 1.
Each panel of Figure 1 shows the projection of the function $\chi^2$ onto the plane of one pair of the variables $\delta_u$, $\delta_d$, $\phi_1$ and $\phi_2$, with fixed values held for the remaining two variables; the dependence of the function on the selected pair is shown (right graph) together with the corresponding contour lines (left graph). Figure 1a–f display, in this way, the six possible pairings of the four variables.
The following color code has been established: regions towards the intense red color correspond to larger values of $\chi^2$, while regions towards the intense blue color correspond to smaller values of $\chi^2$. We point out that the function to optimize, $\chi^2$, is defined in Equation (17) with the variables $\delta_u$, $\delta_d$, $\phi_1$, $\phi_2$ bounded as described above and, as can be observed from the graphs (Figure 1), the topography over which it is intended to optimize is rugged. With this, one sees the opportunity to explore the use of alternative optimization techniques such as bio-inspired algorithms.
3. Review of PSO, DE and DEPSO
3.1. Particle Swarm Optimization
The Particle Swarm Optimization (PSO) algorithm [41] is one of the most popular swarm intelligence-based algorithms. It models the behavior of groups of individuals, such as flocks, to solve complex optimization problems in D-dimensional space. The algorithm performs a search for the global optimum using a swarm of $N_p$ particles. For the i-th particle of the swarm ($i=1,2,\dots,N_p$), its position and velocity at iteration t are represented by $x_i^t$ and $v_i^t$, respectively. At each iteration, the position and velocity of each particle are updated taking into account the information of the best solution found by the swarm, $p_{gbest}$, and the best solution found individually by each particle, $p_{ibest}$. In the standard version of the PSO algorithm, the position and velocity of each particle are updated according to the following expressions [42]:

$$v_{i}^{t+1}=w\,v_{i}^{t}+c_{1}r_{1}\left(p_{ibest}-x_{i}^{t}\right)+c_{2}r_{2}\left(p_{gbest}-x_{i}^{t}\right),$$
$$x_{i}^{t+1}=x_{i}^{t}+v_{i}^{t+1},$$

where $c_1$ and $c_2$ are the cognitive and social coefficients, respectively, which commonly take the values $c_1=c_2=2$; $r_1$ and $r_2$ are two randomly selected values within the interval $[0,1]$; and $w$ is known as the inertia parameter, whose aim is to provide a better balance between global and local search. The inertia parameter $w$ is updated at each iteration as follows:

$$w=w_{max}-\left(w_{max}-w_{min}\right)\frac{t}{t_{max}},$$

where t refers to the current iteration, $t_{max}$ is the maximum number of iterations, and $w_{min}$ and $w_{max}$ are the minimum and maximum values of the inertia factor, which are generally set to $w_{min}=0.4$ and $w_{max}=0.9$.
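As an illustration, a minimal Python sketch of one iteration of these update rules follows, using the equation numbering of the surrounding text. Bound handling and fitness bookkeeping are omitted, and the vectorized layout is an assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def pso_step(x, v, p_ibest, p_gbest, t, t_max,
             c1=2.0, c2=2.0, w_min=0.4, w_max=0.9):
    """One PSO iteration: linearly decreasing inertia, then the velocity
    and position updates of Equations (18)-(19)."""
    w = w_max - (w_max - w_min) * t / t_max
    r1 = rng.random(x.shape)   # random cognitive factors in [0, 1]
    r2 = rng.random(x.shape)   # random social factors in [0, 1]
    v_new = w * v + c1 * r1 * (p_ibest - x) + c2 * r2 * (p_gbest - x)
    return x + v_new, v_new
```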
3.2. Differential Evolution
The DE algorithm, proposed by Storn and Price [43,44], is one of the most representative evolutionary algorithms, which, due to its ease of implementation, effectiveness and robustness, has been widely used in the solution of several complex optimization problems [45,46]. This algorithm makes use of a population P constituted from $N_p$ individuals or parameter vectors:

$$P^{t}=\left\{x_{1}^{t},x_{2}^{t},\dots,x_{N_{p}}^{t}\right\},$$

where t refers to the current iteration, and each individual $x_i^t$ has D dimensions. At the beginning of the execution of this algorithm, the population is initialized randomly within the search space of the problem.
The DE algorithm consists mainly of three operators: mutation, crossover and selection. These operators allow the algorithm to perform the search for the global optimum. Mutation, crossover and selection are applied consecutively to each individual $x_i^t$ of the population $P^t$ in each of the t iterations until a certain stopping criterion is satisfied. Within the mutation, in each iteration and for each individual, a mutated vector $v_i^t$ is generated by means of the information of the current population $P^t$ and the application of a mutation scheme. In the standard version of the DE algorithm, the mutated vector is generated following the DE/rand/1 mutation scheme, described as follows:

$$v_{i}^{t}=x_{r_{1}}^{t}+F\left(x_{r_{2}}^{t}-x_{r_{3}}^{t}\right),$$

where the indices $r_1$, $r_2$ and $r_3$ are randomly selected within the range $[1,N_p]$ such that they are different from each other and different from the index i. F is the scaling factor and is one of the parameters of the algorithm. The value of the parameter F is typically within the interval $(0,1]$.
The next stage within the DE algorithm consists of the crossover, in which a test vector $u_i^t$ is generated; that is, once the mutated vector $v_i^t$ is generated for the individual $x_i^t$, the information crossover between the vector $x_i^t$ and the vector $v_i^t$ is performed. This crossover operation is performed consistently following the binomial crossover:

$$u_{ij}^{t}=\begin{cases} v_{ij}^{t} & \text{if } rand_{j}\le CR \text{ or } j=j_{rand},\\ x_{ij}^{t} & \text{otherwise,}\end{cases}$$

where $rand_j$ is a number uniformly selected within the interval $[0,1]$, and $j_{rand}$ is an index corresponding to a variable which is uniformly selected in the interval $[1,D]$. Here CR is known as the crossover probability and, like F, it is another parameter of the algorithm.
Once the corresponding test vector $u_i^t$ is generated for the individual $x_i^t$, a selection procedure is carried out from which the population for iteration $t+1$ is constructed. The standard way to perform this selection procedure consists of comparing the fitness value of the test vector $u_i^t$ against the fitness value of the individual $x_i^t$, always keeping the individual with the best fitness value. This procedure is performed as follows:

$$x_{i}^{t+1}=\begin{cases} u_{i}^{t} & \text{if } f\left(u_{i}^{t}\right)\le f\left(x_{i}^{t}\right),\\ x_{i}^{t} & \text{otherwise.}\end{cases}$$
3.2.1. Improvements on DE Parameter Control
One of the components that influence the performance of the DE algorithm is the control of the F and CR parameters [25,47]. The standard version of the DE algorithm uses fixed values for these two parameters; however, the selection of these parameters can be done deterministically (following a certain rule that modifies these values after a certain number of iterations), adaptively (according to the feedback of the search process) or in a self-adaptive way (by means of the evolution information provided by the individuals of the population) [30]. One of the most representative variants concerning the improvement of the control of the parameters F and CR of the DE algorithm was proposed in the SHADE algorithm [25].
Precisely, in each iteration of the SHADE algorithm, the parameters F and CR associated with test vectors with a better fitness than their parent vector are stored in two sets, $S_F$ and $S_{CR}$, respectively. Two archives or memories, $M_F$ and $M_{CR}$, with a defined size H, whose contents are initialized with a value of 0.5, serve to generate the parameters $F_i$ and $CR_i$ of the i-th individual $x_i^t$ at each iteration t. The way to generate the values of these parameters requires the uniform selection of an index $r_i\in[1,H]$ and is carried out according to the following expressions:

$$CR_{i}=randn_{i}\left(M_{CR,r_{i}},\,0.1\right),\qquad F_{i}=randc_{i}\left(M_{F,r_{i}},\,0.1\right),$$

where $M_{CR,r_i}$ is the element selected from $M_{CR}$ to generate the value of $CR_i$, while $M_{F,r_i}$ is the element selected from $M_F$ to generate the value of $F_i$; $randn_i(\mu,\sigma)$ represents a normal distribution with mean $\mu$ and standard deviation $\sigma$, while $randc_i(\mu,\sigma)$ represents a Cauchy distribution with position parameter $\mu$ and scale factor $\sigma$. When generating the value of the parameter $CR_i$, it must be verified that it is within the range $[0,1]$: if $CR_i<0$ then $CR_i$ is truncated to 0, and if $CR_i>1$ then $CR_i$ is truncated to 1. For the parameter $F_i$, if $F_i>1$ then $F_i$ is truncated to 1; otherwise, if $F_i\le 0$ then $F_i$ is regenerated until $F_i>0$.
At the end of each iteration, the contents of the $M_F$ and $M_{CR}$ memories are updated as follows:

$$M_{F,k}^{t+1}=\begin{cases} mean_{WL}\left(S_{F}\right) & \text{if } S_{F}\neq\emptyset,\\ M_{F,k}^{t} & \text{otherwise,}\end{cases}\qquad M_{CR,k}^{t+1}=\begin{cases} mean_{WL}\left(S_{CR}\right) & \text{if } S_{CR}\neq\emptyset,\\ M_{CR,k}^{t} & \text{otherwise,}\end{cases}$$

where $mean_{WL}(S)$ is the weighted Lehmer mean defined by means of Equation (29), and S refers to $S_F$ or $S_{CR}$. In Equations (27) and (28), k determines the position in memory to be updated. At iteration t, the k-th element in memory is updated. At the beginning of the optimization process $k=1$, and k is incremented each time a new element is added to memory. If $k>H$, k is assigned equal to 1.
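The following Python sketch illustrates this control mechanism. The memory size H = 10 and the improvement-based weights are assumptions of the sketch, not values prescribed above.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

H = 10                     # assumed memory size
M_F = np.full(H, 0.5)      # historical memory for F, initialized to 0.5
M_CR = np.full(H, 0.5)     # historical memory for CR, initialized to 0.5

def sample_F_CR():
    """Equations (25)-(26): CR_i ~ Normal(M_CR[r], 0.1) truncated to [0, 1];
    F_i ~ Cauchy(M_F[r], 0.1), regenerated while non-positive, capped at 1."""
    r = rng.integers(H)
    CR_i = float(np.clip(rng.normal(M_CR[r], 0.1), 0.0, 1.0))
    F_i = 0.0
    while F_i <= 0.0:
        F_i = M_F[r] + 0.1 * rng.standard_cauchy()
    return min(F_i, 1.0), CR_i

def weighted_lehmer_mean(s, w):
    """Equation (29): sum(w * s^2) / sum(w * s), with weights w proportional
    to the fitness improvement of each successful test vector."""
    s, w = np.asarray(s), np.asarray(w)
    return float(np.sum(w * s**2) / np.sum(w * s))
```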
Due to the good performance obtained by the SHADE algorithm when solving optimization problems using this strategy, this paper takes advantage of the adaptive control of the parameters provided by the SHADE algorithm.
3.2.2. DEPSO Algorithm
The self-adaptive differential evolution algorithm based on particle swarm optimization (DEPSO) [32] is a recent method which incorporates characteristics of the PSO algorithm within the structure of the DE algorithm for solving numerical optimization problems. In this algorithm, a top fraction of the best particles in the current iteration is used to form an elite sub-swarm $E$, while the rest is used to form a non-elite sub-swarm $NE$.
The DEPSO algorithm follows a scheme similar to the standard version of the DE algorithm, i.e., it uses mutation, crossover and selection operators within the evolutionary process of the algorithm. Within the mutation, a mutation strategy is implemented which, in a self-adaptive way, selects between two mutation schemes to generate in each iteration a mutated vector $v_i^t$ for the i-th particle $x_i^t$ as follows:

$$v_{i}^{t}=\begin{cases} x_{e_{1}}^{t}+F_{i}\left(x_{e_{2}}^{t}-x_{ne}^{t}\right) & \text{if } rand<p_{s}\quad(\text{DE/e-rand/1}),\\ \text{PSO update of Equations (18)–(19)} & \text{otherwise,}\end{cases}$$

where $rand$ is a random number selected uniformly within the interval $[0,1]$, and $p_s$ represents the probability of selecting one of the two mutation schemes. In Equation (32), the case in which $rand<p_s$ represents the form in which a novel mutation scheme, denoted DE/e-rand/1, generates a mutated vector $v_i^t$. In this scheme, two solutions of the elite sub-swarm, $x_{e_1}^t$ and $x_{e_2}^t$, and one solution of the non-elite sub-swarm, $x_{ne}^t$, are required, and a scaling factor $F_i$ is used, which acts at the level of each particle. The remaining case represents the way in which the mutation scheme of the standard PSO algorithm is used to generate a mutated vector $v_i^t$.
The selection probability $p_s$ of Equation (32) changes adaptively within the evolution of the algorithm, decreasing from values close to 1 at the beginning of the run towards 0 at the end (Equation (33)), where $t_{max}$ is the maximum number of iterations, t is the current iteration, and c is a positive constant that controls the decay.
Within the crossover operator, a test vector $u_i^t$ is generated by combining the information of the current particle $x_i^t$ and the mutated vector $v_i^t$, following the binomial crossover (see Equation (23)). The selection of the particle surviving to the next iteration is carried out through the competition between the current particle $x_i^t$ and the test vector $u_i^t$: the one with the best fitness value according to the objective function is selected to remain within the next iteration.
When a particle remains stagnant over a maximum number of iterations (e.g., 5), the values of its corresponding scale factor $F_i$ and crossover probability $CR_i$ are reset in order to increase diversity. The way in which the values of these parameters are reset is as follows:

$$F_{i}^{t}=F_{l}+rand\cdot\left(F_{u}-F_{l}\right),\qquad CR_{i}^{t}=CR_{l}+rand\cdot\left(CR_{u}-CR_{l}\right),$$

where $F_i^t$ and $CR_i^t$ are the scaling factor and the crossover probability, respectively, of the particle $x_i^t$ at iteration t; $F_l$ and $F_u$ are the lower and upper bounds, respectively, of the scaling factor; $CR_l$ and $CR_u$ are the lower and upper bounds, respectively, of the crossover probability; and $rand$ is a random number selected within the interval $[0,1]$. A stagnation counter is kept for each particle, together with a maximum number of iterations with stagnation.
To avoid stagnation, the DEPSO algorithm also randomly updates a sub-dimension of the individuals within the non-elite population $NE$, resetting them as follows:

$$x_{ij}^{t}=\begin{cases} x_{j}^{min}+rand\cdot\left(x_{j}^{max}-x_{j}^{min}\right) & \text{if } rand<p_{r},\\ x_{ij}^{t} & \text{otherwise,}\end{cases}$$

where $rand$ is a random number selected within the interval $[0,1]$, $p_r$ is a fixed probability value, and $x_j^{min}$ and $x_j^{max}$ are the lower and upper limits, respectively, of the variable j.
In general, the DEPSO algorithm manages to be superior to other variants of the DE algorithm in different optimization problems. The performance of this algorithm is due to the fact that it maintains a good balance between exploration and exploitation, thanks to the use of the self-adaptive mutation strategy, in which the “DE/e-rand/1” scheme has better exploration abilities, while the mutation scheme of the PSO algorithm achieves better convergence abilities. With this strategy, the population manages to maintain a good diversity in the first stages of the evolutionary process, and a faster convergence towards the last stages of the process.
4. Proposed HE-DEPSO Algorithm
Despite the development of several advanced versions of the DE algorithm in recent years, its performance still needs to improve in optimization problems with multiple local minima. To address these issues, the design of effective mutation operators and parameter control are two key aspects to improve the performance of the DE algorithm. In the proposed HE-DEPSO algorithm, an adaptive hybrid mutation operator is developed, which takes the self-adaptive mutation strategy of the DEPSO algorithm [32] as a basis and incorporates the parameter control mechanism of the SHADE algorithm [25]. This adaptive hybrid mutation operator introduces a new mutation strategy, called “DE/current-to-EHE/1”, which utilizes the historical information of the elite individuals in the population to enhance the optimization of the $\chi^2$ function.
4.1. Adaptive Hybrid Mutation Operator
In order to achieve a better balance between exploration and exploitation, the mutation operator of the HE-DEPSO algorithm adopts a dual mechanism in which it adaptively selects between two mutation strategies. First, a new mutation strategy called “DE/current-to-EHE/1” is presented, which is oriented to improve the balance between exploration and exploitation abilities within the early stages of the evolutionary process. On the other hand, to improve the exploitation capacity within the more advanced stages of the evolutionary process, the mutation scheme of the PSO algorithm (Equation (18)) is incorporated in a similar way to the self-adaptive mutation strategy of the DEPSO algorithm (Equation (32)).
Unlike the “DE/e-rand/1” scheme used within the self-adaptive mutation strategy of the DEPSO algorithm, in which only the information of the individuals of the current iteration is used, the “DE/current-to-EHE/1” strategy uses the historical information of the elite and obsolete individuals of the evolutionary process to improve the exploration capability of the algorithm. This strategy is described below.
4.1.1. DE/Current-to-EHE/1 Mutation Strategy
Within evolutionary algorithms, the best individuals in the population, also known as elite individuals, retain valuable evolutionary information to guide the population to promising regions [32,48,49,50]. However, many of the proposals make use only of the information from the elite individuals of the current iteration, completely forgetting the information from the previous elite individuals. This absence of information within subsequent iterations may limit the ability of new individuals to explore the search space. Because of this, historical evolution information is used to improve the exploration ability of new individuals.
Before applying the mutation operator, all individuals in the current population are sorted in ascending order based on their fitness values. After reordering the population, two partitions of individuals are created. The first partition, denoted by $E^t$, consists of the top best individuals, or elite individuals, a fraction $r_p$ of the population. The second partition, denoted by $NE^t$, groups all non-elite individuals, i.e., $NE^t=P^t\setminus E^t$. It is important to note that $P^t=E^t\cup NE^t$.
Unlike other mutation strategies [29,32,51], in which only elite individuals corresponding to the current iteration are used, this strategy makes use of the evolution history by incorporating two external archives of size $N_A$. The first archive, denoted by $A_E$, stores at each iteration the elite individuals belonging to the $E^t$ partition; if the size of the archive $A_E$ is greater than $N_A$, then its size is readjusted. The second archive, denoted by $A_O$, stores the obsolete individuals (individuals discarded within the selection process) and is updated at each iteration; similarly to the $A_E$ archive, if the size of $A_O$ is larger than $N_A$, then its size is reset. With the information of the individuals belonging to $E^t$ and $A_E$, a group of candidates $E^t\cup A_E$ can be formed to guide the mutation of individuals in the population. On the other hand, the individuals of $NE^t$ and $A_O$ make up a group $NE^t\cup A_O$ whose information contributes to the exploration of the search space. The way in which a mutated vector is generated by the “DE/current-to-EHE/1” strategy is as follows:
$$v_{i}^{t}=x_{i}^{t}+F_{i}\left(x_{EHE}^{t}-x_{i}^{t}\right)+F_{i}\left(x_{r_{1}}^{t}-x_{r_{2}}^{t}\right),$$

where $x_i^t$ is the i-th individual at iteration t; $x_{EHE}^t$ is a randomly selected individual from among the individuals in the group $E^t\cup A_E$; $x_{r_1}^t$ is an individual randomly selected from within the current population $P^t$; $x_{r_2}^t$ is an individual randomly selected from the individuals comprising the group $NE^t\cup A_O$; and $F_i$ is the scaling factor corresponding to the i-th individual.
Following Equation (37), it is possible to observe that the “DE/current-to-EHE/1” strategy can help the HE-DEPSO algorithm to maintain a good exploration capability and direct the mutated individuals to promising regions without leading to stagnation in local minima. This can be explained by the following reasons (see also the sketch after this list):
The use of a randomly selected individual within the group $E^t\cup A_E$, i.e., $x_{EHE}^t$, helps to guide mutated individuals to more promising regions. However, due to the presence of the historical information of the elite individuals ($A_E$), the mutated individuals are prevented from being directed only towards the best regions found so far, thus allowing the population to maintain good diversity and increasing the chances of targeting optimal regions.
The participation of two individuals, $x_{r_1}^t$ and $x_{r_2}^t$, randomly selected from $P^t$ and $NE^t\cup A_O$ respectively, promotes the diversity of mutated individuals. Consequently, the search diversity of the HE-DEPSO algorithm is considerably improved, which is beneficial for escaping from possible local minima.
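The sketch below illustrates the strategy of Equation (37). The array-based archive handling and the population layout are assumptions of this illustration, not the exact data structures of the implementation.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def current_to_ehe_1(x_i, F_i, elite, archive_elite, pop,
                     non_elite, archive_obsolete):
    """DE/current-to-EHE/1 mutation (Equation (37)); all pool arguments
    are 2-D arrays of row vectors and are assumed non-empty."""
    x_ehe = rng.choice(np.vstack([elite, archive_elite]))        # from E ∪ A_E
    x_r1 = rng.choice(pop)                                        # from P
    x_r2 = rng.choice(np.vstack([non_elite, archive_obsolete]))   # from NE ∪ A_O
    return x_i + F_i * (x_ehe - x_i) + F_i * (x_r1 - x_r2)
```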
At the beginning of the execution of the HE-DEPSO algorithm, the archives $A_E$ and $A_O$ are defined empty; through the evolutionary process, they store the elite and obsolete individuals, respectively. When analyzing the effect of the parameter $r_p$, which determines the percentage of individuals within the partition $E^t$, it was found that values close to 1 promote the participation of a more significant number of elite individuals, causing a greater diversity in the population. However, this diversity increase may affect the HE-DEPSO algorithm’s convergence capability. On the other hand, values close to 0 may restrict the number of elite individuals to be considered, which improves the convergence of the algorithm but may cause stagnation at local minima. Due to the above, within the HE-DEPSO algorithm, we choose to update the value of $r_p$ in each iteration following a linear decrement given as follows:

$$r_{p}=r_{p,max}-\left(r_{p,max}-r_{p,min}\right)\frac{t}{t_{max}},$$

where t refers to the current iteration, $t_{max}$ is the maximum number of iterations, and $r_{p,min}$ and $r_{p,max}$ are the minimum and maximum values of the interval assigned for the percentage of individuals within the partition $E^t$, whose values are fixed in this work. In this way, the sensitivity of the $r_p$ parameter is reduced and, at the same time, a good balance between exploitation and exploration is maintained.
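In code, the schedule of Equation (38) is a one-line linear ramp; the endpoint values below are illustrative placeholders, not the values adopted in this work.

```python
def elite_fraction(t, t_max, rp_min=0.1, rp_max=0.4):
    """Linear decrement of the elite fraction r_p (Equation (38));
    rp_min and rp_max here are illustrative assumptions."""
    return rp_max - (rp_max - rp_min) * t / t_max
```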
4.1.2. Selection of Mutation Strategy for the Adaptive Hybrid Mutation Operator
The adaptive hybrid mutation operator implemented in the HE-DEPSO algorithm uses two mutation strategies that aim to improve the exploration and exploitation abilities of the HE-DEPSO algorithm at different stages of the evolutionary process. Concretely, for each individual $x_i^t$ in the population, this operator generates a mutated vector $v_i^t$ as follows:

$$v_{i}^{t}=\begin{cases} \text{DE/current-to-EHE/1, Equation (37)} & \text{if } rand<p_{s},\\ \text{PSO mutation scheme, Equation (18)} & \text{otherwise,}\end{cases}$$

where $rand$ is a random number selected uniformly within the interval $[0,1]$, and $p_s$ represents the probability of selecting between the “DE/current-to-EHE/1” mutation strategy and the adopted mutation strategy of the PSO algorithm (Equation (18)).
The selection probability $p_s$ is updated at each iteration according to Equation (33), with a fixed value of the constant c.
Figure 2 shows the mutation strategy selection curve followed by the adaptive hybrid mutation operator of the HE-DEPSO algorithm. The probability selection curve described by Equation (33), for a given maximum number of iterations, is illustrated in red in Figure 2. In addition, Figure 2 shows, as an example, a possible distribution of occurrences in which either the mutation strategy “DE/current-to-EHE/1” (blue squares) or the adopted strategy of the PSO algorithm (magenta circles) is selected. According to these graphs, it can be observed that within the first stages of evolution of the HE-DEPSO algorithm, the selection probability $p_s$ tends towards values close to 1, which increases the probability that the mutation strategy “DE/current-to-EHE/1” is selected, taking advantage of the balance between exploration and exploitation provided by this strategy. On the other hand, towards the advanced stages of the algorithm’s evolution, the mutation strategy adopted from the PSO algorithm is selected more frequently; this implies that, during the advanced stages, there is a higher probability that the HE-DEPSO algorithm increases its exploitation ability due to more frequent use of the information of the best position of each individual, as well as the information of the position of the best individual in the population.
4.2. The Complete HE-DEPSO Algorithm
The HE-DEPSO algorithm follows the usual structure of the DE algorithm, with the stages of mutation, crossover, and selection. After generating a mutated vector $v_i^t$ for the i-th individual of the population as described in the previous section, we proceed to generate a test vector $u_i^t$ for the individual $x_i^t$ using the binomial crossover (Equation (23)). In order to reduce the sensitivity of the F and CR parameter control, the HE-DEPSO algorithm takes advantage of the individual-level adaptive parameter control of the SHADE algorithm to generate the parameter configuration of F and CR for each individual. Finally, in the selection stage, the individuals that will make up the population for the next iteration are identified; this is done according to Equation (24). Based on the above description, the pseudo-code of the HE-DEPSO algorithm is reported in Figure 3.
5. Experimental Results and Analysis
This section verifies the performance of the proposed HE-DEPSO algorithm in solving different optimization problems. First, the set of test problems used is presented; next, the algorithms selected for comparison and the configuration of their parameters are described. The performance of the HE-DEPSO algorithm is then compared against the selected algorithms on the test problem set. Finally, a discussion of the results obtained is presented.
5.1. Benchmark Functions
In order to validate the performance of the HE-DEPSO algorithm in solving different optimization problems, the set of single-objective test functions with boundary constraints CEC 2017 [52] was used. This set consists of 29 test functions whose global optimum is known: two unimodal functions, seven simple multimodal functions, ten hybrid functions and ten composite functions. In all these optimization problems, we seek to find the global minimum within the search space bounded in each dimension by the interval $[-100,100]$. Information about this set of test functions is briefly presented in Table 2.
5.2. Algorithms and Parameter Settings
In this subsection, HE-DEPSO is compared with five algorithms, including the PSO and DE algorithms, as well as three representative and advanced variants of the DE algorithm, namely CoDE [23], SHADE [25] and DEPSO [32]. Among these algorithms, CoDE and DEPSO incorporate modifications mainly within the mutation operator of the DE algorithm, while SHADE modifies the control of the F and CR parameters. The performance comparison of the HE-DEPSO algorithm with the previously mentioned algorithms was performed in two problem dimensions on the CEC 2017 test function set.
In order to ensure a fair comparison, the parameter settings common to all algorithms were assigned identically: the maximum number of iterations $t_{max}$ was set to 1000, the population size $N_p$ to 100, and 31 independent runs were performed. The configuration of each algorithm’s parameters followed the values suggested by their respective authors, as presented in Table 3. The proposed HE-DEPSO algorithm was implemented in Python version 3.11, and all algorithms were executed on the same computer with 8 GB of RAM and a six-core 3.6 GHz processor.
5.3. Comparison with DE, PSO, and Three State-of-the-Art DE Variants
In order to evaluate the performance of the HE-DEPSO algorithm, its optimization results and convergence properties are compared and analyzed with respect to those obtained by the algorithms: DE, PSO, CoDE, SHADE and DEPSO, on the set of CEC 2017 test functions.
Table 4 and Table 5 report the statistical results for each algorithm in the two tested dimensions, respectively. The error measure of the solution, $f(x_{best})-f(x^{*})$, where $x^{*}$ is the known solution of the problem and $x_{best}$ is the best solution found in each iteration of each algorithm, was used to obtain these results. These tables show the mean and standard deviation (Std) of the solution error measure for each function and algorithm over the 31 independent runs, and the ranking achieved based on the mean value obtained. The best results are highlighted in bold.
A non-parametric Wilcoxon rank sum test was performed to identify statistically significant differences between the results obtained by the HE-DEPSO algorithm and those obtained by the other algorithms. In these tables, the statistical significance related to the performance of the HE-DEPSO algorithm is represented by the symbols “+”, “≈” and “−”, which indicate that the performance of the HE-DEPSO algorithm is better, similar, or worse than that of the compared algorithm, respectively. The row “W/T/L” counts the total number of “+”, “≈” and “−”, respectively.
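For reference, such a pairwise comparison can be run with SciPy's rank-sum test as sketched below. The per-run error arrays are synthetic placeholders, and the 0.05 threshold is a conventional choice, not necessarily the one used in this work.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(seed=1)

# Hypothetical per-run best errors of two algorithms on one test function
errors_hedepso = rng.lognormal(mean=-2.0, sigma=0.5, size=31)
errors_depso = rng.lognormal(mean=-1.5, sigma=0.5, size=31)

stat, p_value = ranksums(errors_hedepso, errors_depso)
significant = p_value < 0.05
better = significant and errors_hedepso.mean() < errors_depso.mean()
print(f"p = {p_value:.4f}, mark '+' : {better}")  # '+', '≈' or '−' per function
```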
5.3.1. Optimization Results
According to the results reported in Table 4, in the first tested dimension the proposed HE-DEPSO algorithm obtains the global optimal solution for three functions; DEPSO and SHADE obtain the global optimal solution for one of the unimodal functions, while SHADE, CoDE, and DE obtain it in one of the simple multimodal functions. For several further functions the HE-DEPSO algorithm achieves the best result among all algorithms; DEPSO is the best in four functions, SHADE obtains the best result on one function, while CoDE obtains the best results on three functions. The PSO algorithm does not show superior results in any function. The results of this table indicate that, for these tests, the proposed HE-DEPSO algorithm obtains the best ranking according to the mean value among all the algorithms. On the other hand, the results of the Wilcoxon rank sum test show that HE-DEPSO is superior to DEPSO, SHADE, CoDE, DE and PSO in 17, 20, 24, 25, and 29 functions, respectively, out of a total of 29 test functions.
For the second tested dimension, the statistical results presented in Table 5 show that the HE-DEPSO algorithm achieves the best results on several functions. On the other hand, DEPSO, SHADE, CoDE, and DE achieve the best results in one common function; DEPSO is the best in five functions, while SHADE shows to be the best in one function in comparison to the rest of the algorithms. With these results, it is possible to identify that HE-DEPSO obtains the best ranking among all the algorithms for these tests. According to the Wilcoxon rank sum test results, among the 29 test functions, HE-DEPSO performs better than DEPSO, SHADE, CoDE, DE, and PSO in 19, 20, 26, 26, and 27 functions, respectively.
5.3.2. Convergence Properties
The convergence properties can be summarized into four types, which are represented by the graphs presented in Figure 4 and Figure 5.
- 1. The convergence properties of a first group of functions can be identified within the same class; this can be observed with the help of Figure 4(a)–(d). In this type, the HE-DEPSO algorithm shows faster convergence than the other algorithms in the lower dimension, while in the higher dimension it obtains a better mean error value.
- 2. In Figure 4(e)–(h), the convergence curves of a second group of functions are presented, which are similar to one another. All algorithms have different degrees of evolutionary stagnation or exhibit slow evolution for these functions.
- 3. The convergence curves of a third group of functions are similar among themselves for the HE-DEPSO algorithm (see Figure 5(a)–(d)). In this type, HE-DEPSO presents a fast convergence at the beginning and subsequently keeps evolving downward.
- 4. The evolutionary process presented in the convergence curves of a fourth group of functions follows the same trend, as can be seen in Figure 5(e)–(h). Here, most of the algorithms stagnate in an accelerated way, although some of them manage to keep evolving downward.
The experiments performed above prove the superiority of the proposed HE-DEPSO algorithm in solving different optimization problems. The reasons why the HE-DEPSO algorithm obtains such superior performance can be summarized as follows:
- 1.
The adaptive hybrid mutation operator implemented in HE-DEPSO is based on the self-adaptive mutation strategy of the DEPSO algorithm, which has proven to be helpful in tackling several complex optimization problems. On the other hand, the collaborative work between the “DE/current-to-EHE/1” mutation strategy with historical information of the elite individuals and the PSO algorithm strategy generates a good balance between exploration and exploitation at different stages of the evolutionary process.
- 2.
The self-adaptive control of the F and CR parameters adopted from the SHADE algorithm mitigates parameter sensitivity. In this way, the crossover probability and the scale factor are dynamically adjusted during the evolutionary process at the level of each individual, which can make the proposed algorithm suitable for a wider variety of optimization problems.
6. 4-Zeros Texture Model Validation
In order to verify the physical feasibility of the 4-zero texture model, it is essential to find sets of numerical values for the parameters $\delta_u$, $\delta_d$, $\phi_1$ and $\phi_2$ that minimize the $\chi^2$ function to a value less than one (see Equation (16)), and also that the numerical values of all the elements $|V_{ij}^{th}|$ predicted by the 4-zero texture model (evaluated on these sets of values) are in agreement with the experimental data. Firstly, an analysis of the HE-DEPSO algorithm’s behavior when optimizing the $\chi^2$ function will be conducted. The study will assess the algorithm’s optimization, convergence, and stability properties, providing insights into its performance when dealing with the $\chi^2$ optimization problem.
6.1. HE-DEPSO Performance in Optimization
The given optimization problem seeks to minimize the function $\chi^2$ given by Equation (17). The search space is constrained by the possible signs of the parameters $\eta_u$ and $\eta_d$, as shown in Table 1. The phases $\phi_1$ and $\phi_2$ can take values between 0 and $2\pi$. The ranges of the parameters $\delta_u$ and $\delta_d$ depend on the signs of $\eta_u$ and $\eta_d$; specifically, in the case of study 1, $\delta_u$ and $\delta_d$ take values in the corresponding intervals for that sign combination.
6.1.1. Experiment Configuration
Through an experiment, an evaluation of the performance of the HE-DEPSO algorithm in the optimization of the $\chi^2$ function will be carried out in order to determine its ability to find the global minimum. This study is carried out in the search space defined by case 1, mentioned previously. The performance of HE-DEPSO is compared with the PSO and DE algorithms, as well as with advanced variants of the DE algorithm such as CoDE, SHADE, and DEPSO. To ensure a fair comparison, standard parameters have been set for all algorithms: $t_{max}=1000$, $N_p=100$, and 31 independent runs have been performed. The settings of the other parameters of each algorithm follow the indications in Table 3.
6.1.2. Experiment Results
To evaluate the effectiveness of the proposed HE-DEPSO algorithm, comparisons and analysis of its optimization results were performed against the DE and PSO algorithms and the three advanced DE variants, applied to the $\chi^2$ function. Using an approach similar to that described in Section 5, Table 6 presents the statistical data for each algorithm evaluated.
According to the data presented in Table 6, the HE-DEPSO algorithm excels in obtaining the best results in optimizing the $\chi^2$ function, outperforming DEPSO. Although DEPSO comes close to the results of HE-DEPSO, it does not match its accuracy. The DE algorithm also demonstrated acceptable accuracy, although lower than that achieved by HE-DEPSO and DEPSO. In contrast, the SHADE, CoDE, and PSO algorithms obtained the poorest results compared to the others. In terms of average ranking, HE-DEPSO is positioned as the leader among all the analyzed algorithms. Furthermore, the results of the Wilcoxon test confirm the superiority of HE-DEPSO over DEPSO, SHADE, CoDE, DE, and PSO in the optimization of the $\chi^2$ function.
Figure 6 presents the convergence curves obtained over 31 independent runs by the HE-DEPSO, DEPSO, SHADE, CoDE, DE, and PSO algorithms. It is observed that the HE-DEPSO algorithm exhibits excellent convergence behavior, consistently outperforming SHADE, CoDE, DE, and PSO and achieving a lower mean error value. Although DEPSO converges faster starting from iteration 300, the performance of HE-DEPSO stands out for its consistency and exploration capability, which could explain its more extended convergence compared to DEPSO.
In Figure 7, box-and-whisker plots of the results of the HE-DEPSO, DEPSO, SHADE, CoDE, DE, and PSO algorithms, based on the best overall fits of 31 independent runs to the chi-square function, are presented. These graphs illustrate the distribution of these values into quartiles, with the median highlighted by a red horizontal line within a blue box. The box boundaries correspond to the upper (Q3) and lower (Q1) quartiles. Lines extending from the box indicate the maximum and minimum values, excluding outliers, which are represented by the “+” symbol in blue. The green circles represent the mean of the overall best fits of the 31 independent runs.
According to the box-and-whisker plots, the HE-DEPSO algorithm demonstrated performance superior to the other evaluated algorithms regarding stability and solution quality, based on 31 independent runs. The distribution of the best global fitness values achieved by HE-DEPSO presents a higher concentration around a lower optimum, reflected in a lower average than the other algorithms. Consequently, HE-DEPSO offers a more robust and stable performance.
6.2. Valid Regions for the Parameters $\delta_u$, $\delta_d$, $\phi_1$ and $\phi_2$
In this subsection, we present the results obtained after applying the HE-DEPSO algorithm to search for valid regions of the $\delta_u$, $\delta_d$, $\phi_1$ and $\phi_2$ parameters. This process was carried out using the most recent experimental values of the $V_{CKM}$ matrix elements, the quark masses, and the Jarlskog invariant (see Appendix A).
Figure 8 and Figure 9 present the allowed regions for the parameters $\delta_u$, $\delta_d$, $\phi_1$ and $\phi_2$, displayed in the $(\delta_u,\delta_d)$ and $(\phi_1,\phi_2)$ planes, respectively, for the first case study of Table 1. These regions are classified by the solutions found using the HE-DEPSO algorithm and are represented by black, blue, and orange dots, which correspond to different levels of precision of the $\chi^2$ function. The orange region, the smallest, indicates the region of highest precision, where experimental data agree best with theoretical predictions. These solutions were found by iteratively executing the algorithm until a total of 5000 solutions were collected at each of the three precision levels.
It is worth noting that for the rest of the case studies (see Table 1), the regions found for the parameters $\delta_u$, $\delta_d$, $\phi_1$ and $\phi_2$ were similar.
6.3. Predictions for the Elements $|V_{ij}^{th}|$
The second part of the model validation is the following: with the data obtained from the valid regions for the four free parameters of the 4-zero texture model, $\delta_u$, $\delta_d$, $\phi_1$ and $\phi_2$, found with the help of HE-DEPSO, we can evaluate the feasibility of the model according to the latest experimental values of the $V_{CKM}$ matrix and the Jarlskog invariant. This can be achieved by checking the model’s predictive power when predicting the remaining elements of the $V_{CKM}$ matrix that were not used within the $\chi^2$ fit, i.e., $|V_{ud}|$, $|V_{cd}|$, $|V_{cs}|$, $|V_{td}|$, $|V_{ts}|$, and $|V_{tb}|$. Using Equations (2), (3), (4), and (5), the Chau-Keung parametrization [37], and the solutions found, we can calculate the numerical predictions for those elements. These predictions are then presented using scatter plots. Each graph includes the experimental central value of each element with its associated experimental error. The points that fall within the region delimited by the experimental values are considered good predictions, while those outside the region are considered poor predictions.
In Figure 10, the predictions for the first pair of elements in the first case study are presented. The black, blue, and orange points indicate the three precision levels of the fit, from loosest to tightest. For the element on the horizontal axis, the experimental central value is indicated with a solid red vertical line, while its experimental error is shown with two red dashed vertical lines on either side of the central value. For the element on the vertical axis, the experimental central value is represented with a solid blue horizontal line, and its experimental error is indicated with two blue dashed horizontal lines on either side of the central value. The predictions for the two elements show that the data at the loosest precision can fall outside the experimental error region, suggesting a poor prediction. On the other hand, the predictions at the two tighter precision levels remain within this region, indicating accurate predictions. This pattern is repeated in the rest of the analyzed case studies.
The predictions for the second pair of elements are shown in Figure 11 for the first case study. The experimental central value of the element on the horizontal axis is represented by a solid red vertical line, with its experimental error indicated by red dashed vertical lines on either side; the experimental central value of the element on the vertical axis is shown by a solid blue horizontal line, with its experimental error represented by blue dashed horizontal lines on either side. According to the predictions shown in Figure 11, the data obtained at the loosest precision do not represent good predictions, as they fall outside the experimental errors. On the other hand, the predictions at the two tighter precision levels remain within the error region, indicating that they are good predictions. This pattern is repeated in the four case studies.
Finally, in Figure 12, the predictions for the last pair of elements are shown in the four case studies. The experimental central value of the element on the horizontal axis is represented by a solid red vertical line and its experimental error by two red dashed vertical lines, while the experimental central value of the element on the vertical axis is shown with a solid blue horizontal line and its experimental error with two blue dashed horizontal lines. The predictions for these two elements indicate that the data obtained at the two looser precision levels may fall outside the experimental errors, so they are not good predictions. In contrast, the predictions at the tightest precision remain within the errors, being good predictions. These characteristics are observed in the four case studies.
When analyzing the predictions of the elements of the $V_{CKM}$ matrix, it was observed that only the solutions of the HE-DEPSO algorithm with the tightest accuracy fit within the limits of experimental error. These solutions are considered valid and capable of reproducing the $V_{CKM}$ matrix in all case studies and elements. Thus, it can be established that the 4-zero texture model is physically viable. It would be interesting, however, to explore the implications of this information within the texture formalism.
7. Conclusions
This paper explored the feasibility of the 4-zero texture model using the most recent experimental values. To gather data that can inform the decision on the model’s viability, we employ a chi-square fit to compare the theoretical expressions of the model with experimental values of well-measured physical observables. We developed a single-objective optimization model by defining a $\chi^2$ function to identify the allowed regions for the free parameters of the 4-zero texture model that are consistent with the experimental data. To address the challenge of optimizing the $\chi^2$ function within the chi-square fit, we propose a new DEPSO algorithm variant, HE-DEPSO.
The proposed algorithm has demonstrated its ability to efficiently optimize different functions, particularly those from the CEC 2017 single-objective benchmark function set. The convergence properties of HE-DEPSO show a good balance between solution precision and convergence speed in most cases. Regarding the optimization of the $\chi^2$ function, our findings indicate that HE-DEPSO and DEPSO are more than adequate for solving the optimization problem, with HE-DEPSO providing the best solution quality and consistency, while DEPSO maintains a faster convergence rate. This part of the investigation also highlights the difficulty that some algorithms can face when optimizing the $\chi^2$ function: algorithms like SHADE and CoDE, despite exhibiting good performance on the CEC 2017 test set, face some difficulties in optimizing it. We can also conclude that optimization algorithms such as SHADE, CoDE, and DE would be worth considering for optimization problems in high-energy physics. Finally, with the use of the HE-DEPSO algorithm, we show that the 4-zero texture model is compatible with current experimental data, and thus we can affirm that this model is physically feasible.
Future work will focus on enhancing the convergence speed of the HE-DEPSO algorithm and evaluating its effectiveness across diverse problem domains, exploring its ability to tackle more complex optimization challenges. An extended comparative analysis against other bio-inspired algorithms or other advanced DE variants could also be conducted. Furthermore, this research could be expanded by analyzing additional texture models and determining their validity based on current experimental data.
Author Contributions
Conceptualization, P.M.-R. and R.N.-P.; methodology, P.L.-E., P.M.-R. and R.N.-P.; software, E.M.-G.; validation, E.M.-G. and P.L.-E.; formal analysis, P.L.-E. and P.M.-R.; investigation, E.M.-G. and P.L.-E.; resources, P.M.-R. and P.L.-E.; writing—original draft preparation, E.M.-G., P.M.-R. and R.N.-P.; writing—review and editing, P.M.-R. and J.C.S.-T.-M.; visualization, P.M.-R. and J.C.S.-T.-M.; supervision, P.M.-R. and J.C.S.-T.-M.; funding acquisition, J.C.S.-T.-M. All authors have read and agreed to the published version of the manuscript.
Funding
This study was supported by the Autonomous University of Hidalgo (UAEH) and the National Council for Humanities, Science and Technology (CONAHCYT) with project number F003-320109.
Data Availability Statement
Acknowledgments
The first author acknowledges the National Council for Humanities, Science and Technology (CONAHCYT) for the financial support received in pursuing graduate studies, which has been essential for the completion of this research.
Conflicts of Interest
The authors declare no conflict of interest.
Appendix A.
In this work, the following experimental values for the elements of the $V_{CKM}$ matrix, the Jarlskog invariant, and the masses of the quarks (at the scale of the Z-boson mass, $M_Z$) are considered [53]:
References
- Altarelli, G.; Feruglio, F. Neutrino masses and mixings: A theoretical perspective. Phys. Rept. 1999, 320, 295–318.
- Altarelli, G.; Feruglio, F. Models of neutrino masses and mixings. New J. Phys. 2004, 6, 106.
- Fritzsch, H.; Xing, Z.z. Mass and flavor mixing schemes of quarks and leptons. Prog. Part. Nucl. Phys. 2000, 45, 1–81.
- Xing, Z.z. Flavor structures of charged fermions and massive neutrinos. Phys. Rept. 2020, 854, 1–147. arXiv:1909.09610.
- Gupta, M.; Ahuja, G. Flavor mixings and textures of the fermion mass matrices. Int. J. Mod. Phys. A 2012, 27, 1230033. arXiv:1302.4823.
- Froggatt, C.D.; Nielsen, H.B. Hierarchy of Quark Masses, Cabibbo Angles and CP Violation. Nucl. Phys. B 1979, 147, 277–298.
- King, S.F.; Luhn, C. Neutrino Mass and Mixing with Discrete Symmetry. Rept. Prog. Phys. 2013, 76, 056201. arXiv:1301.1340.
- Borzumati, F.; Nomura, Y. Low scale seesaw mechanisms for light neutrinos. Phys. Rev. D 2001, 64, 053005.
- Lindner, M.; Ohlsson, T.; Seidl, G. Seesaw mechanisms for Dirac and Majorana neutrino masses. Phys. Rev. D 2002, 65, 053014.
- Flieger, W.; Gluza, J. General neutrino mass spectrum and mixing properties in seesaw mechanisms. Chin. Phys. C 2021, 45, 023106. arXiv:2004.00354.
- Fritzsch, H. Calculating the Cabibbo angle. Phys. Lett. B 1977, 70, 436–440.
- Fritzsch, H.; Xing, Z.z. Four zero texture of Hermitian quark mass matrices and current experimental tests. Phys. Lett. B 2003, 555, 63–70.
- Xing, Z.z.; Zhao, Z.h. On the four-zero texture of quark mass matrices and its stability. Nucl. Phys. B 2015, 897, 302–325.
- Gómez-Ávila, S.; López-Lozano, L.; Miranda-Romagnoli, P.; Noriega-Papaqui, R.; Lagos-Eulogio, P. 2-zeroes texture and the Universal Texture Constraint. arXiv 2021, arXiv:2105.01554.
- Tani, L.; Rand, D.; Veelken, C.; Kadastik, M. Evolutionary algorithms for hyperparameter optimization in machine learning for application in high energy physics. Eur. Phys. J. C 2021, 81.
- Zhang, Y.; Zhou, D.; et al. Application of differential evolution algorithm in future collider optimization. In Proceedings of the 7th International Particle Accelerator Conference (IPAC'16), Busan, Korea, 8–13 May 2016; pp. 1025–1027.
- Wu, J.; Zhang, Y.; Qin, Q.; Wang, Y.; Yu, C.; Zhou, D. Dynamic aperture optimization with diffusion map analysis at CEPC using differential evolution algorithm. Nucl. Instrum. Meth. A 2020, 959, 163517.
- Allanach, B.C.; Grellscheid, D.; Quevedo, F. Genetic algorithms and experimental discrimination of SUSY models. J. High Energy Phys. 2004, 2004, 069.
- Bilal; Pant, M.; Zaheer, H.; Garcia-Hernandez, L.; Abraham, A. Differential Evolution: A review of more than two decades of research. Eng. Appl. Artif. Intell. 2020, 90, 103479.
- Ahmad, M.F.; Isa, N.A.M.; Lim, W.H.; Ang, K.M. Differential evolution: A recent review based on state-of-the-art works. Alex. Eng. J. 2022, 61, 3831–3872.
- Pan, J.S.; Liu, N.; Chu, S.C. A Hybrid Differential Evolution Algorithm and Its Application in Unmanned Combat Aerial Vehicle Path Planning. IEEE Access 2020, 8, 17691–17712.
- Opara, K.R.; Arabas, J. Differential Evolution: A survey of theoretical analyses. Swarm Evol. Comput. 2019, 44, 546–558.
- Wang, Y.; Cai, Z.; Zhang, Q. Differential Evolution With Composite Trial Vector Generation Strategies and Control Parameters. IEEE Trans. Evol. Comput. 2011, 15, 55–66.
- Peñuñuri, F.; Cab, C.; Carvente, O.; Zambrano-Arjona, M.; Tapia, J. A study of the Classical Differential Evolution control parameters. Swarm Evol. Comput. 2016, 26, 86–96.
- Tanabe, R.; Fukunaga, A. Success-history based parameter adaptation for Differential Evolution. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, 2013; pp. 71–78.
- Qin, A.K.; Huang, V.L.; Suganthan, P.N. Differential Evolution Algorithm With Strategy Adaptation for Global Numerical Optimization. IEEE Trans. Evol. Comput. 2009, 13, 398–417.
- Das, S.; Abraham, A.; Chakraborty, U.K.; Konar, A. Differential Evolution Using a Neighborhood-Based Mutation Operator. IEEE Trans. Evol. Comput. 2009, 13, 526–553.
- Rahnamayan, S.; Tizhoosh, H.R.; Salama, M.M. Opposition-based differential evolution. IEEE Trans. Evol. Comput. 2008, 12, 64–79.
- Zhang, J.; Sanderson, A. JADE: Adaptive Differential Evolution With Optional External Archive. IEEE Trans. Evol. Comput. 2009, 13, 945–958.
- Chakraborty, S.; Saha, A.K.; Sharma, S.; Sahoo, S.K.; Pal, G. Comparative Performance Analysis of Differential Evolution Variants on Engineering Design Problems. J. Bionic Eng. 2022, 19, 1140–1160.
- Lagos-Eulogio, P.; Miranda-Romagnoli, P.; Seck-Tuoh-Mora, J.C.; Hernández-Romero, N. Improvement in Sizing Constrained Analog IC via Ts-CPD Algorithm. Computation 2023, 11, 230.
- Wang, S.; Li, Y.; Yang, H. Self-adaptive mutation differential evolution algorithm based on particle swarm optimization. Appl. Soft Comput. 2019, 81, 105496.
- Novaes, S.F. Standard model: An Introduction. In Proceedings of the 10th Jorge Andre Swieca Summer School: Particle and Fields, 1999; pp. 5–102.
- Langacker, P. Introduction to the Standard Model and Electroweak Physics. In Theoretical Advanced Study Institute in Elementary Particle Physics: The Dawn of the LHC Era, 2010; pp. 3–48.
- Isidori, G.; Nir, Y.; Perez, G. Flavor Physics Constraints for Physics Beyond the Standard Model. Ann. Rev. Nucl. Part. Sci. 2010, 60, 355. arXiv:1002.0900.
- Antonelli, M.; et al. Flavor Physics in the Quark Sector. Phys. Rept. 2010, 494, 197–414. arXiv:0907.5386.
- Branco, G.C.; Lavoura, L.; Silva, J.P. CP Violation; Int. Ser. Monogr. Phys., Vol. 103, 1999.
- Eidelman, S.; et al. (Particle Data Group). Review of particle physics. Phys. Lett. B 2004, 592, 1.
- Félix-Beltrán, O.; González-Canales, F.; Hernández-Sánchez, J.; Moretti, S.; Noriega-Papaqui, R.; Rosado, A. Analysis of the quark sector in the 2HDM with a four-zero Yukawa texture using the most recent data on the CKM matrix. Phys. Lett. B 2015, 742, 347–352. arXiv:1311.5210.
- Barranco, J.; Delepine, D.; Lopez-Lozano, L. Neutrino mass determination from a four-zero texture mass matrix. Phys. Rev. D 2012, 86, 053012.
- Eberhart, R.C.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science (MHS'95), 1995; pp. 39–43.
- Shi, Y.; Eberhart, R. A modified particle swarm optimizer. In Proceedings of the 1998 IEEE International Conference on Evolutionary Computation, IEEE World Congress on Computational Intelligence, 1998; pp. 69–73.
- Storn, R.; Price, K. Differential Evolution: A Simple and Efficient Adaptive Scheme for Global Optimization Over Continuous Spaces. J. Glob. Optim. 1995, 23.
- Storn, R.; Price, K. Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359.
- Sánchez Vargas, O.; De León Aldaco, S.E.; Aguayo Alquicira, J.; Vela Valdés, L.G.; Mina Antonio, J.D. Differential Evolution Applied to a Multilevel Inverter—A Case Study. Appl. Sci. 2022, 12.
- Batool, R.; Bibi, N.; Muhammad, N.; Alhazmi, S. Detection of Primary User Emulation Attack Using the Differential Evolution Algorithm in Cognitive Radio Networks. Appl. Sci. 2023, 13.
- Elsayed, S.M.; Sarker, R.A.; Ray, T. Differential evolution with automatic parameter configuration for solving the CEC2013 competition on Real-Parameter Optimization. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, 2013; pp. 1932–1937.
- Zhong, X.; Cheng, P. An elite-guided hierarchical differential evolution algorithm. Appl. Intell. 2021, 51, 4962–4983.
- Yang, Q.; Guo, X.; Gao, X.D.; Xu, D.D.; Lu, Z.Y. Differential Elite Learning Particle Swarm Optimization for Global Numerical Optimization. Mathematics 2022, 10.
- Huang, Y.; Yu, Y.; Guo, J.; Wu, Y. Self-adaptive Artificial Bee Colony with a Candidate Strategy Pool. Appl. Sci. 2023, 13.
- Wang, S.; Li, Y.; Yang, H.; Liu, H. Self-adaptive differential evolution algorithm with improved mutation strategy. Soft Comput. 2017, 22, 3433–3447.
- Kumar, A.; Price, K.V.; Mohamed, A.W.; Hadi, A.A.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Special Session and Competition on Single Objective Bound Constrained Numerical Optimization; Technical Report, 2016; pp. 1–34.
- Fritzsch, H.; Xing, Z.z.; Zhang, D. Correlations between quark mass and flavor mixing hierarchies. Nucl. Phys. B 2022, 974, 115634.
Figure 1. χ² function projections over the four free variables of the model.
Figure 2. Strategy selection curve; (a) selection probability, (b) distribution of strategy selection.
Figure 3. The pseudocode of HE-DEPSO.
Figure 4. Convergence curves for three of the benchmark functions with D = 10. The horizontal and vertical axes represent the iterations and the mean error values over the 31 independent repetitions.
Figure 5. Convergence curves for three of the benchmark functions with D = 10, 30. The horizontal and vertical axes represent the iterations and the mean error values over the 31 independent repetitions.
Figure 6. Convergence curves of the error measure in the solution for the χ² function. The horizontal and vertical axes represent the iterations and the mean error values over the 31 independent repetitions.
Figure 7. Box and whisker plots of the best global fit values obtained by HE-DEPSO and the DEPSO, SHADE, CoDE, DE, and PSO algorithms over the 31 independent repetitions on the χ² function. The horizontal axis shows the algorithms compared, and the vertical axis shows the global fit values.
Figure 8. Allowed regions for two of the free parameters, constrained by current experimental data at three levels of precision (black dots, blue dots, and orange dots).
Figure 9. Allowed regions for the remaining two free parameters, constrained by current experimental data at three levels of precision (black dots, blue dots, and orange dots).
Figure 10. Predictions for two observables of the model in the corresponding study cases.
Figure 11. Predictions for two observables of the model in the corresponding study cases.
Figure 12. Predictions for two observables of the model in the corresponding study cases.
Table 1. Cases of study (Cases 1–4).
Table 2. Details of benchmark functions used in the experiments.

| Function Type | Index | Function Name | Optimum |
|---|---|---|---|
| Unimodal Functions | 1 | Shifted and Rotated Bent Cigar | 100 |
| | 3 | Shifted and Rotated Zakharov | 300 |
| Simple Multimodal Functions | 4 | Shifted and Rotated Rosenbrock | 400 |
| | 5 | Shifted and Rotated Rastrigin | 500 |
| | 6 | Shifted and Rotated Expanded Schaffer F6 | 600 |
| | 7 | Shifted and Rotated Lunacek Bi-Rastrigin | 700 |
| | 8 | Shifted and Rotated Non-Continuous Rastrigin | 800 |
| | 9 | Shifted and Rotated Levy | 900 |
| | 10 | Shifted and Rotated Schwefel | 1000 |
| Hybrid Functions | 11 | Hybrid Problem 1 (N = 3) | 1100 |
| | 12 | Hybrid Problem 2 (N = 3) | 1200 |
| | 13 | Hybrid Problem 3 (N = 3) | 1300 |
| | 14 | Hybrid Problem 4 (N = 4) | 1400 |
| | 15 | Hybrid Problem 5 (N = 4) | 1500 |
| | 16 | Hybrid Problem 6 (N = 4) | 1600 |
| | 17 | Hybrid Problem 7 (N = 5) | 1700 |
| | 18 | Hybrid Problem 8 (N = 5) | 1800 |
| | 19 | Hybrid Problem 9 (N = 5) | 1900 |
| | 20 | Hybrid Problem 10 (N = 6) | 2000 |
| Composition Functions | 21 | Composition Problem 1 (N = 3) | 2100 |
| | 22 | Composition Problem 2 (N = 3) | 2200 |
| | 23 | Composition Problem 3 (N = 4) | 2300 |
| | 24 | Composition Problem 4 (N = 4) | 2400 |
| | 25 | Composition Problem 5 (N = 5) | 2500 |
| | 26 | Composition Problem 6 (N = 5) | 2600 |
| | 27 | Composition Problem 7 (N = 6) | 2700 |
| | 28 | Composition Problem 8 (N = 6) | 2800 |
| | 29 | Composition Problem 9 (N = 9) | 2900 |
| | 30 | Composition Problem 10 (N = 3) | 3000 |
Table 3. Parameter settings of all algorithms (HE-DEPSO, DEPSO, SHADE, CoDE, DE, and PSO).
Table 4. Comparison of results between HE-DEPSO, DE, PSO, and three advanced variants of the DE algorithm on the CEC 2017 test function set with D = 10.

| Function | Metric | HE-DEPSO | DEPSO | SHADE | CoDE | DE | PSO |
|---|---|---|---|---|---|---|---|
| F1 | Mean | 0.0000E+00 | 1.8337E-15 | 0.0000E+00 | 1.1580E-10 | 3.5103E-05 | 1.1366E+10 |
| | Std | 0.0000E+00 | 4.8427E-15 | 0.0000E+00 | 6.0507E-11 | 2.0793E-05 | 9.7631E+09 |
| | Rank | 1 | 4 | 1 | 3 | 2 | 5 |
| | Significance | | ≈ | ≈ | + | + | + |
| F3 | Mean | 9.1683E-16 | 1.3752E-14 | 1.6503E-14 | 2.0001E-09 | 1.7359E-04 | 1.5886E+10 |
| | Std | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 2.7390E-11 | 4.8441E-04 | 6.2297E+02 |
| | Rank | 1 | 1 | 1 | 2 | 3 | 4 |
| | Significance | | ≈ | ≈ | + | + | + |
| F4 | Mean | 0.0000E+00 | 2.3838E-14 | 2.0805E+00 | 2.3103E-09 | 1.0409E-03 | 5.8071E+01 |
| | Std | 0.0000E+00 | 2.8513E-14 | 6.7544E-01 | 1.2601E-09 | 6.4578E-04 | 6.9704E+01 |
| | Rank | 1 | 2 | 5 | 3 | 4 | 6 |
| | Significance | | + | + | + | + | + |
| F5 | Mean | 2.8439E+00 | 4.0119E+00 | 3.8489E+00 | 8.5594E+00 | 2.9710E+01 | 3.0728E+01 |
| | Std | 7.2094E-01 | 1.6940E+00 | 1.0401E+00 | 1.4246E+00 | 3.9355E+00 | 1.2073E+01 |
| | Rank | 1 | 3 | 2 | 4 | 5 | 6 |
| | Significance | | + | + | + | + | + |
| F6 | Mean | 0.0000E+00 | 4.6321E-03 | 2.7530E-04 | 4.6895E-03 | 4.5748E-05 | 1.1970E+01 |
| | Std | 0.0000E+00 | 2.5790E-02 | 5.5026E-04 | 2.4954E-03 | 2.2095E-05 | 9.6167E+00 |
| | Rank | 1 | 4 | 3 | 5 | 2 | 6 |
| | Significance | | + | + | + | + | + |
| F7 | Mean | 1.2678E+01 | 2.1067E+01 | 1.3953E+01 | 2.0678E+01 | 3.9143E+01 | 3.0657E+01 |
| | Std | 7.0313E-01 | 6.7134E+00 | 7.6551E-01 | 2.2560E+00 | 3.7389E+00 | 8.6791E+00 |
| | Rank | 1 | 4 | 2 | 3 | 6 | 5 |
| | Significance | | + | + | + | + | + |
| F8 | Mean | 2.2853E+00 | 4.2045E+00 | 3.8070E+00 | 8.6884E+00 | 2.6037E+01 | 2.4833E+01 |
| | Std | 8.1114E-01 | 1.9346E+00 | 7.3957E-01 | 1.8765E+00 | 5.5290E+00 | 1.1493E+01 |
| | Rank | 1 | 3 | 2 | 4 | 6 | 5 |
| | Significance | | + | + | + | + | + |
| F9 | Mean | 0.0000E+00 | 7.3346E-15 | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 6.8233E+01 |
| | Std | 0.0000E+00 | 2.8391E-14 | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 1.9812E+02 |
| | Rank | 1 | 2 | 1 | 1 | 1 | 3 |
| | Significance | | ≈ | ≈ | ≈ | + | + |
| F10 | Mean | 2.4499E+02 | 8.9742E+02 | 1.5812E+02 | 7.4438E+02 | 1.4730E+03 | 9.1356E+02 |
| | Std | 1.1864E+02 | 1.6890E+02 | 7.6565E+01 | 1.1745E+02 | 1.7509E+02 | 3.3829E+02 |
| | Rank | 2 | 4 | 1 | 3 | 6 | 5 |
| | Significance | | + | − | + | ≈ | + |
| F11 | Mean | 2.5687E-07 | 8.6658E-01 | 2.3208E+00 | 1.2502E+00 | 6.7909E+00 | 8.1368E+01 |
| | Std | 1.4103E-06 | 6.1559E-01 | 4.7869E-01 | 4.6467E-01 | 9.9142E-01 | 8.3317E+01 |
| | Rank | 1 | 2 | 4 | 3 | 5 | 6 |
| | Significance | | + | + | + | + | + |
| F12 | Mean | 8.1966E+01 | 1.1535E+01 | 3.5298E+01 | 2.7652E+02 | 5.3458E+02 | 8.7444E+07 |
| | Std | 7.0615E+01 | 2.8045E+01 | 1.4666E+01 | 6.4786E+01 | 1.2345E+02 | 3.8318E+08 |
| | Rank | 3 | 1 | 2 | 4 | 5 | 6 |
| | Significance | | − | − | + | + | + |
| F13 | Mean | 3.8882E+00 | 4.6716E+00 | 5.9948E+00 | 1.1231E+01 | 1.2796E+01 | 6.3448E+03 |
| | Std | 2.4042E+00 | 9.7586E-01 | 6.0682E-01 | 2.0901E+00 | 1.7030E+00 | 1.2009E+04 |
| | Rank | 1 | 2 | 3 | 4 | 5 | 6 |
| | Significance | | ≈ | + | + | + | + |
| F14 | Mean | 2.4091E+00 | 6.8868E+00 | 1.9065E+01 | 2.0295E+01 | 2.6185E+01 | 1.7551E+02 |
| | Std | 5.1573E+00 | 8.0249E+00 | 3.0340E+00 | 1.2717E-01 | 1.2365E+00 | 9.5615E+01 |
| | Rank | 1 | 2 | 3 | 4 | 5 | 6 |
| | Significance | | + | + | + | + | + |
| F15 | Mean | 3.6646E-01 | 5.7132E-01 | 1.0838E+00 | 1.3990E+00 | 2.1239E+00 | 4.6041E+02 |
| | Std | 2.3145E-01 | 4.9456E-01 | 3.3723E-01 | 5.4455E-01 | 1.2934E+00 | 1.0508E+03 |
| | Rank | 1 | 2 | 3 | 4 | 5 | 6 |
| | Significance | | + | + | + | + | + |
| F16 | Mean | 1.4450E+00 | 1.6357E+00 | 4.5365E+00 | 4.3009E+00 | 1.9028E+01 | 1.2496E+02 |
| | Std | 3.2202E-01 | 3.2942E+00 | 2.2215E+00 | 2.5242E+00 | 1.6484E+01 | 1.0780E+02 |
| | Rank | 1 | 2 | 4 | 3 | 5 | 6 |
| | Significance | | − | + | + | + | + |
| F17 | Mean | 7.2016E+00 | 1.2075E+01 | 2.1001E+01 | 2.0112E+01 | 2.8168E+01 | 5.7710E+01 |
| | Std | 3.0221E+00 | 8.5311E+00 | 3.2211E+00 | 2.8326E+00 | 1.8397E+00 | 2.4017E+01 |
| | Rank | 1 | 2 | 4 | 3 | 5 | 6 |
| | Significance | | + | + | + | + | + |
| F18 | Mean | 3.8770E-01 | 8.2949E+00 | 1.9823E+01 | 2.0117E+01 | 2.0542E+01 | 2.7241E+04 |
| | Std | 1.4583E-01 | 6.7399E+00 | 1.9348E+00 | 3.4569E-02 | 1.0532E-01 | 1.8731E+04 |
| | Rank | 1 | 2 | 3 | 4 | 5 | 6 |
| | Significance | | + | + | + | + | + |
| F19 | Mean | 9.7895E-02 | 5.4625E-01 | 1.0349E+00 | 5.9210E-01 | 1.2208E+00 | 5.0978E+03 |
| | Std | 1.2871E-01 | 3.7564E-01 | 1.5715E-01 | 4.4716E-02 | 2.0310E-01 | 1.1584E+04 |
| | Rank | 1 | 2 | 4 | 3 | 5 | 6 |
| | Significance | | + | + | + | + | + |
| F20 | Mean | 5.2046E+00 | 6.9310E+00 | 1.7787E+01 | 2.0000E+01 | 2.5137E+01 | 5.9825E+01 |
| | Std | 3.1153E+00 | 6.8469E+00 | 3.9314E+00 | 1.4932E-04 | 1.9501E+00 | 4.0129E+01 |
| | Rank | 1 | 2 | 3 | 4 | 5 | 6 |
| | Significance | | ≈ | + | + | + | + |
| F21 | Mean | 1.7676E+02 | 1.0695E+02 | 1.5575E+02 | 1.2554E+02 | 1.9598E+02 | 1.9045E+02 |
| | Std | 4.6029E+01 | 2.0972E+01 | 4.2795E+01 | 4.8087E+01 | 5.7606E+01 | 6.2148E+01 |
| | Rank | 4 | 1 | 3 | 2 | 6 | 5 |
| | Significance | | − | − | − | + | + |
| F22 | Mean | 1.0000E+02 | 9.7428E+01 | 9.9647E+01 | 7.4280E+01 | 1.0070E+02 | 1.2711E+02 |
| | Std | 0.0000E+00 | 1.5939E+01 | 2.1997E+00 | 4.4524E+01 | 5.7442E-01 | 5.6792E+01 |
| | Rank | 4 | 2 | 3 | 1 | 5 | 6 |
| | Significance | | + | + | ≈ | ≈ | + |
| F23 | Mean | 3.0613E+02 | 3.0845E+02 | 3.0859E+02 | 3.2072E+02 | 3.3600E+02 | 3.4479E+02 |
| | Std | 1.4783E+00 | 2.1842E+00 | 1.4096E+00 | 3.4098E+00 | 5.2784E+00 | 1.6592E+01 |
| | Rank | 1 | 2 | 3 | 4 | 5 | 6 |
| | Significance | | + | + | + | + | + |
| F24 | Mean | 3.2397E+02 | 2.6066E+02 | 3.0017E+02 | 3.0685E+02 | 3.6119E+02 | 3.7743E+02 |
| | Std | 4.1615E+01 | 1.1813E+02 | 6.5109E+01 | 9.2286E+01 | 4.7582E+00 | 1.5025E+01 |
| | Rank | 4 | 1 | 2 | 3 | 5 | 6 |
| | Significance | | ≈ | − | ≈ | + | + |
| F25 | Mean | 4.1869E+02 | 4.2783E+02 | 4.0980E+02 | 3.9783E+02 | 4.0247E+02 | 4.7998E+02 |
| | Std | 2.3095E+01 | 2.2521E+01 | 2.0463E+01 | 1.5694E-01 | 1.3981E+01 | 6.1710E+01 |
| | Rank | 4 | 5 | 3 | 1 | 2 | 6 |
| | Significance | | ≈ | − | − | + | + |
| F26 | Mean | 3.0304E+02 | 3.0849E+02 | 3.0085E+02 | 2.9625E+02 | 3.8419E+02 | 5.6150E+02 |
| | Std | 1.1775E+01 | 3.2854E+01 | 1.7405E+01 | 6.3643E+01 | 1.6020E+02 | 2.5242E+02 |
| | Rank | 3 | 4 | 2 | 1 | 5 | 6 |
| | Significance | | + | ≈ | + | ≈ | + |
| F27 | Mean | 3.7584E+02 | 3.7153E+02 | 3.7683E+02 | 4.1575E+02 | 3.9004E+02 | 4.1060E+02 |
| | Std | 2.3098E+01 | 5.3891E-01 | 7.7935E+00 | 1.8031E+01 | 3.4010E+01 | 2.0006E+01 |
| | Rank | 2 | 1 | 3 | 5 | 4 | 6 |
| | Significance | | ≈ | + | + | + | + |
| F28 | Mean | 4.7251E+02 | 4.7218E+02 | 4.7338E+02 | 4.7816E+02 | 4.7253E+02 | 6.1933E+02 |
| | Std | 3.0463E-02 | 1.8112E+00 | 2.7252E+00 | 6.1120E+00 | 1.6134E-01 | 1.5566E+02 |
| | Rank | 2 | 1 | 4 | 5 | 3 | 6 |
| | Significance | | + | + | + | ≈ | + |
| F29 | Mean | 2.3616E+02 | 2.3508E+02 | 2.4658E+02 | 2.5569E+02 | 2.8284E+02 | 3.4235E+02 |
| | Std | 6.1991E+00 | 8.2538E+00 | 6.1477E+00 | 9.0010E+00 | 1.4685E+01 | 9.4838E+01 |
| | Rank | 2 | 1 | 3 | 4 | 5 | 6 |
| | Significance | | ≈ | + | + | + | + |
| F30 | Mean | 2.0061E+02 | 2.0113E+02 | 2.0227E+02 | 2.0313E+02 | 2.0320E+02 | 1.6044E+06 |
| | Std | 1.5718E-01 | 5.2187E-01 | 3.8772E-01 | 4.7938E-01 | 8.5716E-01 | 2.1981E+06 |
| | Rank | 1 | 2 | 3 | 4 | 5 | 6 |
| | Significance | | + | + | + | + | + |
| Average rank | | 1.68 | 2.27 | 2.75 | 3.24 | 4.48 | 5.48 |
| Final rank | | 1 | 2 | 3 | 4 | 5 | 6 |
| W/T/L | | -/-/- | 17/9/3 | 20/4/5 | 24/3/2 | 25/4/0 | 29/0/0 |
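The Mean, Std, and Significance entries in Tables 4–6 summarize 31 independent runs per algorithm and function. A minimal sketch of how such entries can be produced is given below; it assumes a Wilcoxon rank-sum test at the 5% significance level (a common convention in CEC-style comparisons; the paper's exact test may differ), and the error samples are random placeholders rather than the paper's data.

```python
import numpy as np
from scipy.stats import ranksums

def compare(errors_ref, errors_other, alpha=0.05):
    """Return '+', '-' or '≈' for one algorithm versus the reference."""
    stat, p = ranksums(errors_ref, errors_other)
    if p >= alpha:
        return "≈"  # no statistically significant difference
    # '+' means the reference (HE-DEPSO) achieves the lower mean error
    return "+" if np.mean(errors_other) > np.mean(errors_ref) else "-"

rng = np.random.default_rng(1)
he_depso = rng.normal(1.0, 0.1, 31)  # placeholder errors for HE-DEPSO
depso = rng.normal(1.2, 0.1, 31)     # placeholder errors for DEPSO

print(f"Mean {he_depso.mean():.4E}  Std {he_depso.std():.4E}")
print("DEPSO vs HE-DEPSO:", compare(he_depso, depso))
```

Repeating this comparison over all benchmark functions and tallying the '+', '≈', and '-' outcomes yields the W/T/L rows at the bottom of each table.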
Table 5. Comparison of results between HE-DEPSO, DE, PSO, and three advanced variants of the DE algorithm on the CEC 2017 test function set with D = 30.

| Function | Metric | HE-DEPSO | DEPSO | SHADE | CoDE | DE | PSO |
|---|---|---|---|---|---|---|---|
| F1 | Mean | 1.6045E-14 | 2.0034E-04 | 2.3598E+03 | 1.2860E+08 | 2.0817E+09 | 6.3600E+10 |
| | Std | 6.0758E-15 | 4.9086E-04 | 1.2985E+03 | 3.5599E+07 | 7.3074E+08 | 5.8100E+10 |
| | Rank | 1 | 2 | 3 | 4 | 5 | 6 |
| | Significance | | + | + | + | + | + |
| F3 | Mean | 1.4486E-13 | 1.3343E+02 | 6.1189E+04 | 7.0539E+04 | 4.1473E+05 | 2.1200E+04 |
| | Std | 4.1092E-14 | 8.8712E+01 | 2.2749E+04 | 2.6622E+04 | 1.3053E+05 | 2.9800E+04 |
| | Rank | 1 | 2 | 4 | 5 | 6 | 3 |
| | Significance | | + | + | + | + | + |
| F4 | Mean | 2.8405E+00 | 1.2605E+01 | 7.7040E+01 | 9.6061E+01 | 3.5736E+01 | 1.7300E+03 |
| | Std | 2.1842E+00 | 8.6176E+00 | 3.8595E+00 | 2.0503E+01 | 3.2288E+00 | 1.3000E+03 |
| | Rank | 1 | 2 | 4 | 5 | 3 | 6 |
| | Significance | | + | + | + | + | + |
| F5 | Mean | 2.3850E+01 | 1.8175E+02 | 9.3463E+01 | 1.8418E+02 | 2.4169E+02 | 1.8500E+02 |
| | Std | 5.0701E+00 | 1.2845E+01 | 1.0871E+01 | 1.2630E+01 | 1.2302E+01 | 4.4900E+01 |
| | Rank | 1 | 3 | 2 | 4 | 6 | 5 |
| | Significance | | + | + | + | + | + |
| F6 | Mean | 4.6793E-01 | 5.7971E-03 | 3.7760E+01 | 5.8751E+01 | 2.4651E+01 | 5.3800E+01 |
| | Std | 3.5101E-01 | 2.9636E-02 | 5.8394E+00 | 5.2672E+00 | 3.3305E+00 | 1.2000E+01 |
| | Rank | 2 | 1 | 4 | 6 | 3 | 5 |
| | Significance | | − | + | + | + | + |
| F7 | Mean | 5.2487E+01 | 2.3377E+02 | 1.1601E+02 | 2.1572E+02 | 2.6887E+02 | 2.9100E+02 |
| | Std | 3.3716E+00 | 1.4172E+01 | 8.0551E+00 | 1.2272E+01 | 1.3070E+01 | 1.1300E+02 |
| | Rank | 1 | 4 | 2 | 3 | 5 | 6 |
| | Significance | | + | + | + | + | + |
| F8 | Mean | 2.0726E+01 | 1.7779E+02 | 9.3771E+01 | 1.8161E+02 | 2.3844E+02 | 1.7100E+02 |
| | Std | 4.6475E+00 | 1.1810E+01 | 9.9764E+00 | 1.3211E+01 | 1.6657E+01 | 3.8700E+01 |
| | Rank | 1 | 4 | 2 | 5 | 6 | 3 |
| | Significance | | + | + | + | + | + |
| F9 | Mean | 1.1236E+00 | 2.8880E-03 | 7.6008E+02 | 4.1017E+03 | 4.8783E+02 | 2.8400E+03 |
| | Std | 9.0322E-01 | 1.6080E-02 | 2.7907E+02 | 8.3155E+02 | 1.5998E+02 | 1.2400E+03 |
| | Rank | 2 | 1 | 4 | 6 | 3 | 5 |
| | Significance | | − | + | + | + | + |
| F10 | Mean | 2.6526E+03 | 6.7636E+03 | 3.4600E+03 | 6.3166E+03 | 7.9631E+03 | 5.2400E+03 |
| | Std | 3.0794E+02 | 2.5866E+02 | 2.3363E+02 | 2.0917E+02 | 3.0010E+02 | 8.1500E+02 |
| | Rank | 1 | 5 | 2 | 4 | 6 | 3 |
| | Significance | | + | + | + | + | + |
| F11 | Mean | 3.2932E+01 | 7.9240E+01 | 7.3961E+01 | 2.7578E+02 | 2.1665E+02 | 9.3600E+02 |
| | Std | 1.7540E+01 | 9.4859E+00 | 1.6453E+01 | 6.0425E+01 | 4.1872E+01 | 1.7000E+03 |
| | Rank | 1 | 3 | 2 | 5 | 4 | 6 |
| | Significance | | + | + | + | + | + |
| F12 | Mean | 6.0704E+03 | 7.4374E+04 | 1.7001E+06 | 5.9649E+08 | 2.5500E+09 | 1.1600E+10 |
| | Std | 5.2411E+03 | 7.3297E+04 | 5.7892E+05 | 1.8156E+08 | 8.9672E+08 | 1.5200E+10 |
| | Rank | 1 | 2 | 3 | 4 | 5 | 6 |
| | Significance | | + | + | + | + | + |
| F13 | Mean | 5.3737E+01 | 2.3200E+03 | 7.2862E+03 | 3.3010E+08 | 1.0201E+08 | 1.0300E+10 |
| | Std | 4.9535E+01 | 5.3868E+03 | 2.4057E+04 | 8.7976E+07 | 3.9539E+07 | 1.5000E+10 |
| | Rank | 1 | 2 | 3 | 5 | 4 | 6 |
| | Significance | | + | + | + | + | + |
| F14 | Mean | 5.1892E+01 | 9.7833E+01 | 7.5446E+01 | 1.5942E+04 | 4.0951E+03 | 2.2100E+05 |
| | Std | 1.3639E+01 | 2.2150E+01 | 1.0230E+01 | 4.5198E+03 | 8.1599E+02 | 4.9000E+05 |
| | Rank | 1 | 3 | 2 | 5 | 4 | 6 |
| | Significance | | + | + | + | + | + |
| F15 | Mean | 6.7166E+01 | 7.9018E+01 | 1.0487E+02 | 3.0409E+07 | 3.1028E+06 | 2.9100E+08 |
| | Std | 4.6957E+01 | 3.4973E+01 | 2.4704E+01 | 9.9730E+06 | 1.2198E+06 | 1.6200E+09 |
| | Rank | 1 | 2 | 3 | 5 | 4 | 6 |
| | Significance | | ≈ | ≈ | + | + | + |
| F16 | Mean | 2.7544E+02 | 1.4323E+03 | 8.5936E+02 | 1.6794E+03 | 2.3141E+03 | 1.7100E+03 |
| | Std | 1.4178E+02 | 1.6353E+02 | 1.1941E+02 | 1.7833E+02 | 1.8579E+02 | 6.0700E+02 |
| | Rank | 1 | 3 | 2 | 4 | 6 | 5 |
| | Significance | | + | + | + | + | + |
| F17 | Mean | 6.6918E+01 | 3.8170E+02 | 1.9226E+02 | 6.8169E+02 | 1.1646E+03 | 8.4800E+02 |
| | Std | 1.3274E+01 | 1.0402E+02 | 6.2540E+01 | 9.4653E+01 | 1.4710E+02 | 3.2100E+02 |
| | Rank | 1 | 3 | 2 | 4 | 6 | 5 |
| | Significance | | + | + | + | + | + |
| F18 | Mean | 6.1722E+01 | 2.4655E+02 | 2.3829E+05 | 3.4473E+05 | 7.9235E+05 | 1.9200E+06 |
| | Std | 2.7440E+01 | 2.4587E+02 | 1.4060E+05 | 1.1610E+05 | 1.9796E+05 | 4.8700E+06 |
| | Rank | 1 | 2 | 3 | 4 | 5 | 6 |
| | Significance | | + | + | + | + | + |
| F19 | Mean | 4.0473E+01 | 3.7316E+01 | 3.2834E+01 | 5.0014E+03 | 3.7873E+04 | 4.2500E+08 |
| | Std | 2.2587E+01 | 7.4498E+00 | 6.1811E+00 | 2.4562E+03 | 2.1473E+04 | 7.1600E+08 |
| | Rank | 3 | 2 | 1 | 4 | 5 | 6 |
| | Significance | | ≈ | ≈ | + | + | + |
| F20 | Mean | 8.7890E+01 | 2.6052E+02 | 3.3149E+02 | 4.8347E+02 | 1.0591E+03 | 6.0400E+02 |
| | Std | 3.2291E+01 | 9.0720E+01 | 7.0732E+01 | 7.4079E+01 | 1.1750E+02 | 2.3100E+02 |
| | Rank | 1 | 2 | 3 | 4 | 6 | 5 |
| | Significance | | + | + | + | + | + |
| F21 | Mean | 2.2047E+02 | 3.7820E+02 | 2.7893E+02 | 3.9662E+02 | 4.4351E+02 | 3.8400E+02 |
| | Std | 4.7309E+00 | 1.3317E+01 | 3.7140E+01 | 7.1300E+00 | 1.3511E+01 | 5.0800E+01 |
| | Rank | 1 | 3 | 2 | 5 | 6 | 4 |
| | Significance | | ≈ | ≈ | ≈ | + | ≈ |
| F22 | Mean | 1.0186E+02 | 2.9760E+03 | 2.2144E+02 | 6.5363E+03 | 8.0766E+03 | 4.5500E+03 |
| | Std | 2.0196E+00 | 3.4464E+03 | 2.9380E+01 | 5.3749E+02 | 2.9236E+02 | 2.0400E+03 |
| | Rank | 1 | 3 | 2 | 5 | 6 | 4 |
| | Significance | | + | ≈ | + | + | + |
| F23 | Mean | 3.7543E+02 | 5.4924E+02 | 4.4917E+02 | 5.5419E+02 | 5.8732E+02 | 7.2800E+02 |
| | Std | 8.3139E+00 | 1.3235E+01 | 9.7459E+00 | 1.2816E+01 | 1.9032E+01 | 7.9500E+01 |
| | Rank | 1 | 3 | 2 | 4 | 5 | 6 |
| | Significance | | + | + | + | + | + |
| F24 | Mean | 4.4893E+02 | 6.2854E+02 | 5.3810E+02 | 6.9034E+02 | 6.7462E+02 | 8.2800E+02 |
| | Std | 6.6585E+00 | 1.3484E+01 | 1.6953E+01 | 1.5747E+01 | 1.2401E+01 | 1.0000E+02 |
| | Rank | 1 | 3 | 2 | 5 | 4 | 6 |
| | Significance | | ≈ | ≈ | + | ≈ | + |
| F25 | Mean | 3.8051E+02 | 3.7837E+02 | 3.7882E+02 | 4.1044E+02 | 3.9396E+02 | 5.6400E+02 |
| | Std | 1.1308E+01 | 6.0195E-02 | 1.1887E+00 | 6.6423E+00 | 4.4183E+00 | 2.0000E+02 |
| | Rank | 3 | 1 | 2 | 5 | 4 | 6 |
| | Significance | | − | ≈ | + | ≈ | + |
| F26 | Mean | 1.3469E+03 | 2.6899E+03 | 1.4149E+03 | 2.6587E+03 | 3.0652E+03 | 4.9700E+03 |
| | Std | 9.6715E+01 | 2.2460E+02 | 5.7298E+02 | 9.0222E+01 | 1.2139E+02 | 9.0600E+02 |
| | Rank | 1 | 4 | 2 | 3 | 5 | 6 |
| | Significance | | + | ≈ | + | + | + |
| F27 | Mean | 5.0001E+02 | 5.0001E+02 | 5.0001E+02 | 5.0001E+02 | 5.0001E+02 | 7.1600E+02 |
| | Std | 1.5786E-04 | 1.2917E-04 | 9.3150E-05 | 5.9917E-05 | 8.1534E-05 | 1.0700E+02 |
| | Rank | 1 | 1 | 1 | 1 | 1 | 2 |
| | Significance | | ≈ | ≈ | ≈ | ≈ | ≈ |
| F28 | Mean | 4.9902E+02 | 4.9059E+02 | 4.9845E+02 | 5.0001E+02 | 5.0001E+02 | 1.2300E+03 |
| | Std | 3.0735E+00 | 4.2116E+00 | 1.3995E+00 | 6.6494E-05 | 6.6473E-05 | 8.4400E+02 |
| | Rank | 3 | 1 | 2 | 4 | 4 | 5 |
| | Significance | | − | ≈ | ≈ | ≈ | + |
| F29 | Mean | 3.8546E+02 | 9.3919E+02 | 4.6431E+02 | 1.6187E+03 | 2.0724E+03 | 1.9200E+03 |
| | Std | 4.5529E+01 | 2.1194E+02 | 4.2813E+01 | 1.8521E+02 | 1.8256E+02 | 6.1300E+02 |
| | Rank | 1 | 3 | 2 | 4 | 6 | 5 |
| | Significance | | + | + | + | + | + |
| F30 | Mean | 2.8659E+02 | 2.3933E+02 | 1.7439E+04 | 1.4650E+07 | 2.4426E+06 | 4.7800E+07 |
| | Std | 5.8034E+01 | 6.7166E+00 | 2.1431E+04 | 6.8820E+06 | 1.5971E+06 | 8.9800E+07 |
| | Rank | 2 | 1 | 3 | 5 | 4 | 6 |
| | Significance | | − | + | + | + | + |
| Average rank | | 1.31 | 2.44 | 2.44 | 4.37 | 4.75 | 5.13 |
| Final rank | | 1 | 2 | 2 | 3 | 4 | 5 |
| W/T/L | | -/-/- | 19/5/5 | 20/9/0 | 26/3/0 | 25/4/0 | 27/2/0 |
Table 6. Comparison of results between HE-DEPSO, DE, PSO, and three advanced variants of the DE algorithm on the χ² function.

| Function | Metric | HE-DEPSO | DEPSO | SHADE | CoDE | DE | PSO |
|---|---|---|---|---|---|---|---|
| χ² | Mean | 1.7267E-28 | 3.4971E-28 | 7.2296E-03 | 2.3720E-05 | 6.1610E-13 | 7.4247E+00 |
| | Std | 1.4496E-28 | 5.6400E-28 | 8.7102E-03 | 7.7748E-05 | 3.4294E-12 | 1.6815E+01 |
| | Rank | 1 | 2 | 5 | 4 | 3 | 6 |
| | Significance | | + | + | + | + | + |
| Final rank | | 1 | 2 | 5 | 4 | 3 | 6 |
| W/T/L | | -/-/- | 1/0/0 | 1/0/0 | 1/0/0 | 1/0/0 | 1/0/0 |
Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).