Self-adaptive differential evolution algorithm with dynamic fitness-ranking mutation and pheromone strategy

Differential evolution (DE) is a population-based optimization algorithm widely used to solve a variety of continuous optimization problems. Self-adaptive DE algorithms improve DE by encoding individual parameters to produce and propagate better solutions. This paper proposes a self-adaptive differential evolution algorithm with dynamic fitness-ranking mutation and pheromone strategy (SDE-FMP). The algorithm introduces a dynamic mutation operation that uses the fitness ranks of the individuals to divide the population into three groups and then selects groups, and vectors within them, with adaptive probabilities to create a mutant vector. The mutation and crossover operations use the scaling factor and crossover rate values encoded in a target vector to generate the corresponding trial vector. These values are changed according to the pheromone when the trial vector is inferior in the selection, whereas the pheromone is increased when the trial vector is superior. In addition, the algorithm employs a resetting operation to unlearn and relearn the dominant pheromone values as the search progresses. The proposed SDE-FMP algorithm with suitable resetting periods is compared with well-known adaptive DE algorithms on several test problems. The results show that SDE-FMP gives high-precision solutions and outperforms the compared methods. This is an open access article under the CC BY-SA license.

DE is a population-based global search algorithm introduced by Storn and Price in 1997 for continuous optimization. Its operations consist of mutation, crossover, and selection [11]. The performance of DE depends on the control parameters: the scaling factor F and the crossover rate CR. The scaling factor controls the step size of the mutation operation, and the crossover rate indicates the probability of exchanging elements between the mutant and target vectors. These control parameter values significantly affect the algorithm's performance, and DE with fixed parameters F and CR is only suitable for specific problems. Thus, many control parameter adaptation schemes have been proposed.

LITERATURE REVIEW
This section reviews the basic DE algorithm, self-adaptive DE algorithms, adaptive DE algorithms, and continuous ACO algorithms that inspire our proposed algorithm.
The basic DE algorithm has three iterative operations: mutation, crossover, and selection. The algorithm generates NP population vectors from the feasible solution space and records the best vector. For j = 1, ..., NP, the mutation operation creates the mutant vector v_j by randomly choosing three distinct vectors different from the target vector x_j as (1):

v_j = x_R1 + F (x_R2 - x_R3), (1)

where F is the scaling factor.
Next, the crossover operation constructs the trial vector u_j by exchanging the components of x_j and v_j with the crossover rate CR. Finally, the selection operation replaces the target vector with the trial vector when the fitness value of u_j is better than that of x_j.
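As a concrete illustration, the three operations above can be sketched in a few lines of NumPy. This is a minimal sketch of the standard DE/rand/1/bin scheme, not the paper's code; the function and parameter names are ours.

```python
import numpy as np

def de_step(pop, fitness, f, F=0.5, CR=0.9, rng=None):
    """One basic DE generation: rand/1 mutation, binomial crossover, greedy selection."""
    rng = np.random.default_rng() if rng is None else rng
    NP, D = pop.shape
    new_pop, new_fit = pop.copy(), fitness.copy()
    for j in range(NP):
        # Mutation: three distinct random vectors, all different from the target x_j
        r1, r2, r3 = rng.choice([i for i in range(NP) if i != j], size=3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])
        # Crossover: take each component from v with probability CR;
        # one randomly fixed index is always taken from v
        mask = rng.random(D) < CR
        mask[rng.integers(D)] = True
        u = np.where(mask, v, pop[j])
        # Selection: keep the trial vector only if it improves the target
        fu = f(u)
        if fu < fitness[j]:
            new_pop[j], new_fit[j] = u, fu
    return new_pop, new_fit
```

Because the selection is greedy, no individual's fitness can get worse from one generation to the next.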
Since the basic mutation equation with the fixed parameters F and CR cannot solve a wide range of problems, the parameter adaptation and enhanced mutation strategy have been proposed to improve the performance of basic DE.

Self-adaptive DE algorithms
The self-adaptive DE algorithms encode the control parameter values to each target vector, adapt them based on the feedback of the search, and propagate the better ones to the next generation.
The jDE algorithm by Brest et al. [13] is a self-adaptive DE algorithm that encodes F_i and CR_i in the ith target vector with initial values 0.5 and 0.9, respectively. The mutation and crossover operations reuse these values from the target vector with probability 0.9; otherwise, new values are generated from [0.5, 0.9] and [0, 1], respectively. The algorithm does not use feedback from the selection operation. jDE outperforms the basic DE with F = 0.5 and CR = 0.9 on several benchmark functions. Cheng et al. [14] proposed DE with the FDDE strategy, which uses combined fitness and diversity rankings to position the random vectors in the mutation operator, where the diversity ranking is computed from the difference between the median fitness and the individual fitness values. The strategy improves the performance of jDE on both low-dimensional and high-dimensional problems. This work indicates the role of position selection for vectors in the mutation operator. Qin et al. [15] proposed a differential evolution algorithm with strategy adaptation called SaDE. The algorithm uses four mutation operations and encodes their indices in each target vector. It generates parameter values from normal distributions and changes the encoded information based on the probabilities of success and failure in the selection operation. The experimental results show that SaDE outperforms basic DE and jDE. The EPSDE algorithm by Mallipeddi et al. [16] uses an ensemble of parameters and mutation strategies. This algorithm initially encodes a mutation strategy in each target vector and assigns the corresponding parameters F and CR. If the generated trial vector is not better than its target vector, the algorithm changes the target's encoded information to new values. EPSDE outperforms the classic DE and three adaptive DE algorithms: jDE, SaDE, and JADE.

Adaptive DE algorithms
Adaptive DE algorithms adjust the control parameters using the overall feedback from the search. Newly generated parameter values are then biased according to this feedback.
Zhang and Sanderson [17] presented the JADE algorithm, which generates the parameter F from a Cauchy distribution and CR from a normal distribution. The algorithm implements an external archive to keep the vectors discarded in selection and uses some of the top best vectors together with the archived vectors in mutation. The results show that JADE is better than the classic DE and some adaptive DE algorithms. Tanabe and Fukunaga [18] introduced success-history based parameter adaptation for differential evolution (SHADE), which improves JADE with historical memories of successful parameter values used to update the means of the distributions. Experimental results show that SHADE is competitive with EPSDE, JADE, and CoDE. Wang et al. [19] introduced DE with composite trial vector generation strategies and control parameters, called CoDE. It generates three trial vectors with three different mutation operations and chooses the best one to compete with the target vector. The results show that CoDE is better than the jDE, JADE, SaDE, and EPSDE algorithms. Zou et al. [20] presented the CUSDE algorithm, which uses a new mutation strategy that selects vectors with a probability calculated from the number of consecutive unsuccessful updates and removes individuals with large such numbers. The basic DE with this approach outperforms basic DE and some adaptive DEs.

Ant colony optimization
ACO is a population-based algorithm that uses a pheromone strategy to guide the ant population to locate optimal solutions for discrete optimization [21]. ACO for continuous optimization requires dividing the initial space into discrete subspaces to formulate the pheromone structure. During the search process, the algorithm constructs a new solution vector whose components correspond to subspaces according to the pheromone gathered from better solutions [22].
The ACO_R algorithm is the first ACO algorithm for continuous optimization, introduced by Socha and Dorigo [23]. The algorithm uses a pheromone structure that stores the components of the best solutions in an archive table and generates a new candidate solution with the corresponding distributions. At the end of each generation, it updates the pheromone by adding better candidate solutions to the table and removing the worst archived solutions. The performance of ACO_R is competitive with other probability-learning methods.
Xiao and Li [24] presented a hybrid of the ACO_R and DE algorithms called HACO. It uses DE to generate new candidate solutions for the ACO_R algorithm. The experimental results show that HACO performs better than the ACO_R algorithm. Singsathid and Wetweerapong [25] introduced ACO with a domain partitioning technique, called PACO. It generates solution components from partition points of an adaptive search subspace that covers the best solutions and updates the pheromone according to the newly obtained best solution. The experiments show that PACO outperforms some well-known continuous ACO algorithms.

THE PROPOSED SDE-FMP ALGORITHM
We propose a self-adaptive differential evolution algorithm with dynamic fitness-ranking mutation and pheromone strategy, called SDE-FMP, which improves the classic DE mutation by using a dynamic mutation operation and self-adaptive control parameters governed by a pheromone strategy and a resetting operation. The details of SDE-FMP are as follows.

New dynamic mutation strategy for SDE-FMP
SDE-FMP sorts the population vectors at the beginning of each generation according to their fitness values and divides them into three equal-sized groups, denoted G_1, G_2, G_3, which represent the top, intermediate, and worst individuals, respectively.
To create a mutant vector for each target vector x_i, i = 1, 2, ..., NP, the algorithm chooses a group g_k (from G_1, G_2, or G_3) for each position k = 1, 2, 3 of the mutation equation (g_2 and g_3 must be different for diversity) with probability vectors PropR_k calculated from pheromone vectors PheromoneR_k. Note that SDE-FMP initializes all components of the pheromone vectors to 1, ensuring equal probabilities.
Next, the algorithm chooses three distinct random vectors x_R1, x_R2, x_R3 that differ from x_i by uniformly selecting random vectors from the corresponding selected groups g_k to generate a mutant vector as (2):

v_i = x_R1 + F(i) (x_R2 - x_R3), (2)

where the scaling factor F(i) corresponds to the target vector x_i. The mutant vector enters the crossover operation to generate the trial vector u_i as (3):

u_i,j = v_i,j if s_j <= CR(i) or j = I_rand; otherwise u_i,j = x_i,j, (3)

where j = 1, ..., D; s_j is a uniform random number in (0, 1), and I_rand is a randomly fixed integer from 1 to D.
At the selection operation, if u_i is better than x_i, the algorithm updates the pheromone at the lth position of g_k = G_l by adding one to the associated pheromone vector position to reinforce the pheromone information related to a successful solution as (4):

PheromoneR_k(l) = PheromoneR_k(l) + 1. (4)

At the end of each generation, the algorithm normalizes PheromoneR_k to the probability vector PropR_k as (5):

PropR_k(l) = PheromoneR_k(l) / sum_{m=1}^{3} PheromoneR_k(m), (5)

for l = 1, 2, 3.
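The grouping and pheromone bookkeeping above can be sketched as follows. This is a simplified illustration with our own helper names; eq. (4)'s per-position update and eq. (5)'s normalization are mapped to the rows of a 3x3 NumPy array.

```python
import numpy as np

def rank_groups(fitness):
    """Sort individuals by fitness (minimization) and split them into three
    equal-sized groups G1 (best), G2 (intermediate), G3 (worst)."""
    order = np.argsort(fitness)
    third = len(order) // 3
    return order[:third], order[third:2 * third], order[2 * third:]

def reinforce_and_normalize(pheromone, chosen_groups):
    """Add one unit of pheromone for the group chosen at each mutation position
    of a successful trial vector (eq. (4)), then normalize each row into the
    probability vector PropR_k (eq. (5))."""
    for k, g in enumerate(chosen_groups):
        pheromone[k, g] += 1.0
    return pheromone / pheromone.sum(axis=1, keepdims=True)
```

Starting from an all-ones pheromone array, every group has probability 1/3 at every position, which matches the equal-probability initialization described above.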

Self-adaptive control parameters of F and CR
We use scaling factor values F = 0.5, 0.7, 0.9 to control the step size of the mutant vectors and crossover rate values CR = 0.1, 0.9 to balance intensifying and diversifying the search. CR = 0.1 is suitable for a local search, while CR = 0.9 is suitable for a global search. Thus, there are six combinations of these values.
At initialization, the algorithm sets all components of the pheromone vector PheromoneFCR to one and encodes a random pair of F(i) and CR(i) in each target vector x_i. The algorithm uses F(i) and CR(i) from x_i in the mutation and crossover operations to generate a trial vector u_i.
At the selection operation, if u_i is better than x_i, the target vector retains its current F(i) and CR(i) values and the associated pheromone value is incremented by one as (6):

PheromoneFCR(t) = PheromoneFCR(t) + 1, (6)

where t is the index of that combination. Otherwise, the target vector is re-encoded with a new random pair of F(i) and CR(i) based on the probability vector PropFCR, allowing the target vector to explore new parameter values. Note that PropFCR is calculated from PheromoneFCR in each generation as (7):

PropFCR(t) = PheromoneFCR(t) / sum_{m=1}^{6} PheromoneFCR(m), (7)

for t = 1, 2, ..., 6.
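A minimal sketch of this success-based (F, CR) adaptation, assuming our own function name and a flat index over the six combinations:

```python
import numpy as np

# The six (F, CR) combinations used by SDE-FMP
F_VALUES = [0.5, 0.7, 0.9]
CR_VALUES = [0.1, 0.9]
COMBOS = [(F, CR) for F in F_VALUES for CR in CR_VALUES]

def update_fcr(encoded, pheromone, success, rng):
    """On success, keep the encoded combination index and reinforce its
    pheromone (eq. (6)); on failure, re-encode a new index sampled with the
    normalized pheromone probabilities (eq. (7))."""
    if success:
        pheromone[encoded] += 1.0
        return encoded
    prob = pheromone / pheromone.sum()
    return int(rng.choice(len(COMBOS), p=prob))
```

Successful pairs therefore both survive in their target vector and become more likely to be re-sampled by failing vectors.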

The pheromone resetting
At the end of each generation, SDE-FMP determines whether the pheromone vectors have reached the specified thresholds, to prevent the dominance of certain pheromone values and promote fair competition among the choices. The two parameters r_g and r_p are the thresholds for PheromoneR_k and PheromoneFCR, respectively. If the sum of any pheromone vector is greater than the corresponding threshold, the algorithm resets all elements of that pheromone vector back to 1. The pseudo-code of SDE-FMP is presented in Algorithm 1.

Algorithm 1. The SDE-FMP algorithm
1: Initialize the population of NP individuals
2: Find the best vector x_best and its best function value f_best
3: Encode the F(i), CR(i) values to each target vector x_i, i = 1, ..., NP
4: Set all elements of PheromoneR_k, k = 1, 2, 3, and PheromoneFCR to 1
5: Set all elements of ProbR_k to be equal for k = 1, 2, 3
6: Set all elements of ProbFCR to be equal
7: Set the number of function evaluations nf = 0
8: Set the VTR or the maximum number of function evaluations maxnf
9: while the stopping condition is not satisfied do
10:   Sort the population individuals according to fitness ranking and divide them into G_1, G_2, G_3
11:   for i = 1 : NP do
12:     Choose distinct x_Rk vectors with the probabilities ProbR_k, k = 1, 2, 3
13:     Generate a mutant vector using eq. (2)
14:     Apply the crossover operation eq. (3) to get a trial vector u_i
15:     Evaluate f(u_i) and nf ← nf + 1
16:     if f(u_i) is better than f(x_i) then
17:       Replace x_i with u_i and update x_best, f_best
18:       Update each PheromoneR_k using eq. (4)
19:       Update PheromoneFCR using eq. (6)
20:     else
21:       Re-encode the new random F(i), CR(i) for x_i according to ProbFCR
22:     end if
23:   end for
24:   if PheromoneR_k ≥ r_g for some k then
25:     Reset all elements of that PheromoneR_k to 1
26:   end if
27:   if PheromoneFCR ≥ r_p then
28:     Reset all elements of PheromoneFCR to 1
29:   end if
30:   Normalize PheromoneR_k to be ProbR_k using eq. (5)
31:   Normalize PheromoneFCR to be ProbFCR using eq. (7)
32: end while
33: Report the obtained x_best, f_best, and nf
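The resetting step amounts to a simple threshold check on the pheromone sum. A sketch (the helper name is ours):

```python
import numpy as np

def maybe_reset(pheromone, threshold):
    """Reset a pheromone vector back to all ones once its sum reaches the
    resetting threshold (r_g for PheromoneR_k, r_p for PheromoneFCR), so that
    all choices compete with equal probability again."""
    if pheromone.sum() >= threshold:
        pheromone[:] = 1.0
    return pheromone
```

Larger thresholds let the algorithm exploit learned preferences longer before unlearning them, which is the trade-off explored with the r_g and r_p settings in the experiments.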

EXPERIMENTAL DESIGN
The performance of SDE-FMP is tested on eight selected benchmark functions covering four main types: uni-modal, multi-modal, separable, and non-separable. Their formulae and search ranges are presented in Table 1. First, we design a preliminary experiment to find suitable values of r_g and r_p for all types of problems. Then, we conduct two comparison experiments to evaluate the performance of SDE-FMP against other adaptive DE algorithms. The details of each experiment are given in the following subsections.

Finding the suitable values r_g and r_p for SDE-FMP
To find suitable values for the resetting periods r_g and r_p, the dimensions of the test functions are D = 10, 30. The population size NP = 30, maxnf = 20000D, and VTR = 10^-10 are used. The parameters r_g and r_p are varied as r_g = 200, 500 and r_p = 200, 300, 400. The algorithm performs 50 independent runs for each configuration. We report the number of successful runs (NS), the mean number of function evaluations (meanNF), and the percentage of the standard deviation of function evaluations (%SD).

Comparing the performance of SDE-FMP with other adaptive DE algorithms using VTR
We use the obtained values of r_g and r_p to compare the performance of SDE-FMP with some well-known adaptive DE algorithms: JADE [17], CoDE [19], jDE [13], and SaDE [15]. SDE-FMP uses the same settings as in the first experiment, while the compared algorithms use the parameter settings from their original papers. The dimensions are varied as D = 10, 30, 50. All algorithms perform 100 independent runs for each configuration. The MATLAB source codes of JADE, CoDE, jDE, and SaDE are available from Zhang's homepage: http://dces.essex.ac.uk/staff/qzhang.

Comparing the performance of SDE-FMP with other adaptive DE algorithms using maxnf on CEC 2005 benchmark functions
We compare the performance of SDE-FMP, using the suitable r_g and r_p, with SaDE [15], FDDE F [14], and CUSDE [20] on the 30-dimensional benchmark functions of CEC 2005 [26]. The experiment reports the mean and standard deviation of the obtained optimal values using maxnf = 10000D over 50 independent runs. The results of the compared algorithms are taken from their original papers. We use the t-test at the significance level of 0.05 to compare their performances. The symbols +, 0, and − indicate that the mean optimal value of SDE-FMP is superior to, equal to, or inferior to that of the compared algorithm, respectively.

The suitable values r g and r p for SDE-FMP
The experiment finds the suitable values of r_g and r_p that give the highest number of successful runs and the lowest meanNF for SDE-FMP. Table 2 shows that three combinations of (r_g, r_p), i.e., (200, 400), (500, 200), and (500, 300), give 50 successful runs for all cases. We highlight the lowest meanNF values among these three combinations for each case and obtain r_g = 500 and r_p = 300 as the best values.

Performance comparison of SDE-FMP with other adaptive DE algorithms using VTR
We compare the performance of SDE-FMP with SaDE, CoDE, jDE, and JADE using VTR = 10^-10. Table 3 presents the number of successful runs and meanNF and highlights the best results that give 100 successful runs with the lowest meanNF.
The results show that SDE-FMP, SaDE, CoDE, jDE, and JADE give 100 successful runs in 24, 12, 16, 21, and 17 cases, respectively. Their counts of lowest meanNF are 18, 0, 1, 0, and 5, respectively. Therefore, SDE-FMP achieves the best performance. Note that none of the compared algorithms can achieve high-quality solutions for the Rosenbrock function F3.

DISCUSSION
The SDE-FMP algorithm uses the pheromone strategy to adapt the probabilities of selecting subgroups in mutation, where a high pheromone value indicates a suitable group for the corresponding position in the mutation equation. The mutant vector then has more potential to create a better trial vector during the search. The algorithm also uses the pheromone for the self-adaptive control parameters F and CR, where the most successful pair of F and CR has more potential to propagate to the next generations. The pheromone resetting is employed to eliminate dominance and balance the cycle of gathering pheromone and using it to improve the search performance. We obtain the suitable resetting periods r_g = 500 and r_p = 300 for PheromoneR_k, k = 1, 2, 3 and PheromoneFCR, respectively.
We further investigate the impact of SDE-FMP's features: the dynamic mutation strategy, the self-adaptive control parameters, and the pheromone resetting. We compared SDE-FMP with SDE-FMP without the proposed mutation strategy (using the basic DE mutation strategy), SDE-FMP without self-adaptive control parameters (using fixed F = 0.5 and CR = 0.9), and SDE-FMP without pheromone resetting on the 30-dimensional Rosenbrock function over 30 independent runs. The results in Table 5 demonstrate that SDE-FMP significantly outperforms each of these variants. The dynamic mutation strategy and self-adaptive control parameters play crucial roles in improving the convergence speed, while the pheromone resetting further enhances the attainment of high-precision solutions.
Figures 1 and 2 illustrate the convergence graphs of SDE-FMP compared with SaDE, CoDE, jDE, and JADE on the Sphere, Griewank, Ackley, and Schwefel functions for 10 and 30 dimensions.They show that SDE-FMP can solve problems of various types faster than the compared algorithms.

Table 1 .
Test functions

Table 2 .
The performance comparison of SDE-FMP with different values of r_g and r_p over 50 independent runs

Table 3 .
The performance comparison of SDE-FMP, SaDE, CoDE, jDE, and JADE over 100 independent runs

We compare the performance of SDE-FMP with SaDE, FDDE F, and CUSDE using the mean of the obtained best values. Table 4 shows that SDE-FMP is superior to SaDE, FDDE F, and CUSDE in 12, 9, and 10 cases, respectively, whereas it is inferior in 5, 6, and 5 cases. Therefore, SDE-FMP overall outperforms the compared methods on the CEC 2005 benchmark functions.