Using particle swarm optimization to solve test function problems

Received May 22, 2021; Revised Aug 17, 2021; Accepted Oct 6, 2021

In this paper, benchmark functions are used to evaluate and check the particle swarm optimization (PSO) algorithm. The functions used are all two-dimensional, but they are selected with different difficulty levels and different models. In order to prove the capability of PSO, it is compared with the genetic algorithm (GA). The two algorithms are compared in terms of objective function value and standard deviation. Several runs were taken to obtain convincing results, the parameters were chosen properly, and the Matlab software was used. The results show that the suggested algorithm can solve different engineering problems of different dimensions and outperforms the other in terms of accuracy and speed of convergence.


INTRODUCTION
A large number of real-life optimization problems in science, engineering, economics, and business are complex and difficult to solve. They cannot be solved exactly within a reasonable amount of time, so approximate algorithms are the main alternative for this class of problems [1], [2]. The optimization process involves finding a single optimal solution, or a series of them, from among a very large number of possibilities; in this process, the space of potential solutions is reduced to one or a few of the best ones [3], [4]. Many researchers have constructed different methods to obtain the best solution, because in many engineering problems the solution is difficult to find. Particle swarm optimization (PSO) is a stochastic method inspired by the behavior of animals. Wang et al. [5] gave the basics and the details of PSO, analyzing the method from various points of view: its structure, its parameters, discrete PSO, parallel PSO, and multiobjective PSO. Ozdemir [6] used PSO to reach the global minimum of a suggested function, applied the method to different benchmark functions, and showed that PSO is a successful algorithm for solving various problems. Jumaa et al. [7] suggested a method for the scheduling optimization and operation of distributed generators to reduce power loss and improve the voltage profile as well as the reliability of the whole network; they used particle swarm optimization to estimate the best site and size of distributed generation. Garcia et al. [8] presented a technique using the genetic algorithm (GA) to find the maximum of differentiable functions, and introduced a Python library containing components of the algorithm. Alyoutbaki and Al-Rawi [9] introduced a method to extend the fault tolerance of a system, using ant colony optimization (ACO) to enhance the suggested strategy: ACO selects a better virtual machine to which a cloudlet is migrated in order to decrease energy consumption and time.

This paper presents particle swarm optimization applied to some suggested test functions. The rest of the paper is organized as follows: section 2 gives an overview of the genetic algorithm and PSO; a comparison between the two algorithms is introduced in section 3; the simulation results are in section 4; section 5 is the conclusion.

Genetic algorithm
Genetic algorithm is an adaptive technique used to solve many optimization problems. Following natural selection and the survival of the fittest, the population improves through successive iterations. If the genetic algorithm is encoded properly, it is able to transfer solutions from simulation to real applications, since the GA simulates selection and survival [10], [11]. In this algorithm, the population size is fixed, and the GA proceeds generation after generation. During its iterations, the GA passes through various operations, such as reproduction, crossover, and mutation, which produce new offspring individuals. The new population is rated by a function called the objective function, and the best new solutions are found by these procedures [12]-[14].
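As an illustration, the following is a minimal real-coded GA sketch of the loop described above. The paper's experiments used Matlab and its exact operators are not given, so the function name `ga`, tournament selection, arithmetic crossover, and Gaussian mutation here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ga(f, bounds, pop_size=10, n_gens=100, cx_rate=0.5, mut_std=0.1, seed=0):
    """Minimize f over a box via selection, crossover, and mutation."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    pop = rng.uniform(lo, hi, size=(pop_size, lo.size))
    for _ in range(n_gens):
        fit = np.apply_along_axis(f, 1, pop)
        # Tournament selection: the fitter of two random individuals reproduces.
        idx = rng.integers(pop_size, size=(pop_size, 2))
        parents = pop[np.where(fit[idx[:, 0]] < fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # Arithmetic crossover between consecutive parents, applied with rate cx_rate.
        mates = np.roll(parents, 1, axis=0)
        alpha = rng.random((pop_size, 1))
        cross = rng.random(pop_size) < cx_rate
        children = np.where(cross[:, None], alpha * parents + (1 - alpha) * mates, parents)
        # Gaussian mutation, clipped back into the search box.
        children = children + rng.normal(0.0, mut_std, children.shape)
        pop = np.clip(children, lo, hi)
    fit = np.apply_along_axis(f, 1, pop)
    best = int(np.argmin(fit))
    return pop[best], fit[best]
```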

Particle swarm optimization
Particle swarm optimization was first given by Eberhart and Kennedy. It works on a population of individuals which are called particles, and it is inspired by the behavior of flocking birds and schooling fish [15], [16]. It is a stochastic optimization technique based on the interaction between the particles of the swarm as the population moves through the search space. Each particle has a position and a velocity; these values change through the generations, where the personal best position is the particle's best own experience and the global best position is the best experience achieved by all particles [17], [18]. The particles are vectors of real numbers, and every vector component is called a dimension. There are two relations, the first for velocity and the second for position, which are illustrated in (1) and (2) [19], [20]:

$$v_{i,j}(t+1) = w\,v_{i,j}(t) + c_1 r_1 \left(pbest_{i,j} - x_{i,j}(t)\right) + c_2 r_2 \left(gbest_j - x_{i,j}(t)\right) \qquad (1)$$

$$x_{i,j}(t+1) = x_{i,j}(t) + v_{i,j}(t+1) \qquad (2)$$

where $X_i$ represents particle $i$ in a population of size $N$ with dimension $D$, and $X_i = \{x_{i,1}, x_{i,2}, \ldots, x_{i,D}\}$. In addition, each particle has a velocity $V_i = \{v_{i,1}, v_{i,2}, \ldots, v_{i,D}\}$. The index $i$ runs from 1 to $N$, $j$ runs from 1 to $D$, and $t$ is the iteration number. The $pbest_{i,j}$ is the $j$th parameter of the personal best of the $i$th individual, and $gbest_j$ is the $j$th parameter of the best individual of the population up to iteration $t$. Finally, $w$, $c_1$ and $c_2$, and $r_1$ and $r_2$ are the inertia weight, the acceleration parameters, and random numbers in [0,1], respectively [21]-[23]. Figure 1 shows the overall algorithm.
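To make updates (1) and (2) concrete, a minimal Python sketch is given below. It is a generic PSO following the stated equations, not the authors' Matlab code; the function name `pso` and its default arguments are assumptions.

```python
import numpy as np

def pso(f, bounds, n_particles=10, n_iters=100, w=0.7, c1=2.0, c2=2.0, seed=0):
    """Minimize f over a box using velocity update (1) and position update (2)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions x_{i,j}
    v = np.zeros((n_particles, dim))                   # velocities v_{i,j}
    pbest = x.copy()                                   # personal best positions
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()         # global best position
    for _ in range(n_iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # eq. (1)
        x = np.clip(x + v, lo, hi)                                  # eq. (2)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(pbest_val.min())
```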

The difference between PSO and some other algorithms
The following steps are found in many evolutionary methods:
− In the initialization, the individuals are generated randomly.
− The objective function, which measures how close each individual is to the optimum, is calculated for each one.
− The reproduction of the population is also based on the objective value.
− These procedures either stop or continue.
In this sense, PSO shares the same points with many algorithms: they generate the population randomly, they depend on the objective function to evaluate the best solution, and all of them proceed step by step toward the best, although they do not guarantee success. However, PSO, for example, does not contain processes like crossover or mutation; instead, it improves its particles through the internal velocity, and PSO contains memory. There are also differences between PSO and other algorithms such as GA: all individuals in GA exchange information among themselves, while in PSO only the global best provides information to the other individuals [24].

RESULTS AND DISCUSSION
The tests were done on a PC with an Intel Core i7 processor at 2.7 GHz, 4 GB RAM, and Windows 7 Professional. All the results were simulated using Matlab 2008a. The population size was 10 and the number of iterations was 100. In addition, a GA parameter was set to 0.5, while in PSO the inertia weight was between 0.4 and 0.9 and the acceleration coefficients were 2. In order to test the ability of PSO, the following functions were used in this paper, all of which are two-dimensional [25].

Beale function
This is the first function suggested here, and it is a multimodal function. Its mathematical formula is:

$$f(x) = (1.5 - x_1 + x_1 x_2)^2 + (2.25 - x_1 + x_1 x_2^2)^2 + (2.625 - x_1 + x_1 x_2^3)^2$$

with $x_{1,2} \in [-4.5, 4.5]$ and $f(x^*) = 0$ at $x^* = (3, 0.5)$. The details of this function are shown in Figure 2. The particle swarm optimization was tested on the Beale function and compared with the genetic algorithm, as illustrated in Figure 3. It is obvious that PSO is faster than GA as well as more accurate on this function.
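As a usage example, the `pso` sketch above can be run on the Beale function with the settings reported in this section (population 10, 100 iterations, c1 = c2 = 2). A fixed mid-range inertia weight of 0.7 stands in for the stated 0.4 to 0.9 range, since the paper does not give its exact schedule.

```python
def beale(x):
    """Beale function: global minimum f = 0 at (3, 0.5)."""
    x1, x2 = x
    return ((1.5 - x1 + x1 * x2) ** 2
            + (2.25 - x1 + x1 * x2 ** 2) ** 2
            + (2.625 - x1 + x1 * x2 ** 3) ** 2)

best_x, best_f = pso(beale, bounds=([-4.5, -4.5], [4.5, 4.5]),
                     n_particles=10, n_iters=100, w=0.7, c1=2.0, c2=2.0)
print(best_x, best_f)  # expect a point near (3, 0.5) with f close to 0
```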

Matyas function
The expression for this function is:

$$f(x) = 0.26\,(x_1^2 + x_2^2) - 0.48\,x_1 x_2$$

where the parameters $x_{1,2} \in [-10, 10]$ and the best function value is $f(x^*) = 0$ at $x^* = (0, 0)$. The graph of the Matyas function is given in Figure 6, and Figure 7 shows the objective value of the two methods through the generations. The value obtained by PSO converges generation by generation, reaching the target point in a very suitable time.
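The same harness applies here; the following is a sketch of the objective, assuming the standard Matyas definition reconstructed above.

```python
def matyas(x):
    """Matyas function: global minimum f = 0 at (0, 0)."""
    x1, x2 = x
    return 0.26 * (x1 ** 2 + x2 ** 2) - 0.48 * x1 * x2

best_x, best_f = pso(matyas, bounds=([-10.0, -10.0], [10.0, 10.0]),
                     n_particles=10, n_iters=100)
```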

Mishra bird function
It is a function with the formula:

$$f(x) = \sin(x_2)\,e^{(1-\cos x_1)^2} + \cos(x_1)\,e^{(1-\sin x_2)^2} + (x_1 - x_2)^2$$

The dimensions should be $x_{1,2} \in [-2\pi, 2\pi]$, while the minimum is $f(x^*) = -106.764537$ at $x^* = (4.70104, 3.15294)$ or $x^* = (-1.58214, -3.13024)$. The graph of this function is given in Figure 8. PSO still holds the best position in terms of error and speed, and GA cannot overcome this algorithm, as presented in Figure 9.

The next test function is shown in Figure 10. This test seems to be difficult for PSO, which is trapped in a local optimum, so its accuracy is lower than that of the other algorithm, as shown in Figure 11. Another test function is shown in Figure 12; Figure 13 shows that PSO is still dominant over GA, and the two are very competitive in terms of speed.
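A sketch of the Mishra bird objective, assuming the standard unconstrained definition reconstructed above; NumPy supplies the trigonometric and exponential terms.

```python
import numpy as np

def mishra_bird(x):
    """Mishra bird function: minima of about -106.7645 at the two points above."""
    x1, x2 = x
    return (np.sin(x2) * np.exp((1 - np.cos(x1)) ** 2)
            + np.cos(x1) * np.exp((1 - np.sin(x2)) ** 2)
            + (x1 - x2) ** 2)

best_x, best_f = pso(mishra_bird,
                     bounds=([-2 * np.pi, -2 * np.pi], [2 * np.pi, 2 * np.pi]),
                     n_particles=10, n_iters=100)
```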
For the function shown in Figure 14, $x_{1,2} \in [-100, 100]$ and $f(x^*) = 0$ at $x^* = (0, 0)$. On this function, even though PSO is better, both algorithms need more improvement to approach the optimum value of 0, as shown in Figure 15. A further function is shown in Figure 16; here PSO wins, reaching the optimal solution faster, as given in Figure 17.
Easom function
It is a unimodal function with $x_{1,2} \in [-100, 100]$ and $f(x^*) = -1$ at $x^* = (\pi, \pi)$, as shown in Figure 18. On this function, GA does very badly while PSO is excellent, reaching the target solution in a better way, as presented in Figure 19. Table 1 summarizes the average value and the standard deviation of both algorithms on the different suggested test functions, where it is clear that PSO is dominant. Additional results are given in Table 2 for one check during the work.
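A sketch of the Easom objective, assuming its standard definition as stated above.

```python
import numpy as np

def easom(x):
    """Easom function: global minimum f = -1 at (pi, pi)."""
    x1, x2 = x
    return (-np.cos(x1) * np.cos(x2)
            * np.exp(-((x1 - np.pi) ** 2 + (x2 - np.pi) ** 2)))

best_x, best_f = pso(easom, bounds=([-100.0, -100.0], [100.0, 100.0]),
                     n_particles=10, n_iters=100)
```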

CONCLUSION
The particle swarm optimization test has been done to highlight the ability of the algorithm to optimize difficult problems. The experiments have included various mathematical functions, with details presented in this paper. In addition, the algorithm was compared with another algorithm to confirm that it can outperform the others in terms of accuracy and speed of convergence.