Dynamic Robust Particle Swarm Optimization Algorithm Based on Hybrid Strategy

Robust optimization over time can effectively mitigate the frequent solution switching that arises in dynamic environments. To improve the search performance of dynamic robust optimization algorithms, this paper proposes a dynamic robust particle swarm optimization algorithm based on a hybrid strategy (HS-DRPSO). Built on particle swarm optimization, HS-DRPSO incorporates the differential evolution algorithm and the brain storm optimization algorithm to improve its search ability. Moreover, a dynamic selection strategy is employed to choose among the different search methods. Compared with two other dynamic robust optimization algorithms on five dynamic standard test functions, the proposed algorithm shows better overall performance.


INTRODUCTION
Optimization problems are subject to change in response to the dynamics and uncertainty of the environment, leading to what are known as dynamic optimization problems (DOPs) (Jin & Branke, 2005). Most of the current research in this area has focused on tracking the moving optimum (TMO) (Parrott & Li, 2006; Chen et al., 2023; Falahiazar et al., 2022), in which an algorithm seeks to identify a new optimal solution after each environmental change. Despite its effectiveness in addressing dynamic optimization problems, this approach has some limitations in practice. First, it may be unable to identify the optimal solution in each dynamic environment within a limited timeframe. Second, even if it manages to identify the optimal solution in the new environment, doing so requires considerable computational resources.
Based on the aforementioned considerations, Yu et al. (2010) introduced the concept of robust optimization over time (ROOT), whose primary aim is to discover a set of robust solutions that can adapt to multiple dynamic environments, both present and future. Following this, Jin et al. (2012) proposed a framework for tackling dynamic robust problems that comprises an optimizer, a database of historical information, an approximator, and a predictor. Along with robustness, the ROOT approach also takes switching costs into account, as considered by Huang et al. (2017), who proposed a dynamic robust optimization algorithm named robust optimization over time considering switching cost (ROOT/SC).
However, the ROOT/SC algorithm has two limitations: 1) its search dimensions cannot be expanded sufficiently, which imposes considerable practical restrictions; and 2) it cannot seek feasible solutions from non-dominated solution sets. To address these problems, Huang et al. (2020) proposed a more efficient dynamic robust multi-objective algorithm named ROOT/SCII (improved ROOT/SC), which incorporates minimizing switching costs as an additional objective by weighing the robustness of the high-dimensional decision space against switching costs. Yazdani et al. (2019) applied multi-swarm methods to the ROOT problem, using a multi-swarm PSO to identify and track optimal values while collecting information about peaks in the decision space over time, which was then used to select the next robust solution. Moreover, most dynamic robust algorithms employ prediction models to solve ROOT problems; however, the accuracy of such models in practical applications depends on the availability of data. In addition, for dynamic problems with high-dimensional search spaces and high change frequencies, a large amount of data is often required to obtain accurate predictions. Consequently, Yazdani et al. (2017) proposed a new ROOT framework that eliminates the original predictor in ROOT (Jin et al., 2012) and replaces the prediction of future fitness values with the prediction of the future behavior of peaks, using this behavioral information to predict robust feasible solutions for future dynamic environments whenever the current feasible solution no longer satisfies the dynamic environment.
It has been demonstrated that the effectiveness of the search engine is crucial to addressing dynamic robust problems. To further enhance the ability to solve such problems, this article proposes a dynamic robust particle swarm optimization algorithm based on a hybrid strategy (HS-DRPSO). In HS-DRPSO, the two mutation strategies of the differential evolution algorithm, namely "DE/rand/1" and "DE/best/1," are first combined with the particle swarm algorithm in each search period using a dynamic weight-adjustment strategy. The population is then clustered, and the mutation strategy of the brain storm optimization algorithm is used to perturb the cluster centers, generating new individuals and improving population diversity. A comparison with two other dynamic robust optimization algorithms across five dynamic standard test functions demonstrates that the proposed algorithm's overall performance is superior.

Dynamic Optimization
A dynamic optimization problem is a sequence of optimization problems in which the objective function varies with time or the environment (Jin et al., 2005). Its mathematical representation is expressed as follows (Cruz et al., 2011):

$$\max_{x \in X(t)} f(x, t), \quad X(t) \subseteq S \qquad (1)$$

where f represents the objective function, a function of time t and the decision variables x; S denotes the search space; and X(t) represents the set of decision variables at time t.
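To make the notion of a time-varying objective f(x, t) concrete, the following toy sketch defines a single peak whose optimum drifts over time. The moving-center function and its sinusoidal drift are illustrative assumptions, not one of the benchmarks used in this paper.

```python
import math

def dynamic_objective(x, t):
    """Toy dynamic objective: one peak whose center drifts with time t.

    The peak center moves along each dimension as sin(t); the height is
    fixed at 10.  An illustrative stand-in for f(x, t) in a DOP.
    """
    center = [math.sin(t)] * len(x)
    sq_dist = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return 10.0 - sq_dist  # maximization: best value 10 at the moving center
```

At t = 0 the optimum sits at the origin; by t = π/2 it has moved to (1, 1, ...), so a solution that was optimal earlier loses fitness as the environment changes.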

Dynamic Robust Optimization Performance Evaluation Index
The crux of dynamic robust optimization is to discover adaptable and resilient solutions that perform well across multiple dynamic environments. To appraise the performance of robust solutions effectively, this paper uses two evaluation metrics, survival time and average fitness value, as introduced in the literature (Jin et al., 2005). The survival time is defined as follows:

$$f_s(x, t) = \max\{\, l \mid \forall h \in \{t, t+1, \ldots, t+l\} : f_h(x) \ge \eta \,\} \qquad (2)$$

where x denotes the feasible solution; f_h(x) denotes the fitness value at the h-th moment; η denotes the preset threshold; l denotes the number of consecutive time steps for which the fitness value stays no less than the threshold from moment t onward; and f_s denotes the maximum number of future time steps for which the individual x at the t-th moment can continue to satisfy the threshold.
The average fitness value is expressed as follows:

$$f_a(x, t) = \frac{1}{T} \sum_{h=t}^{t+T-1} f_h(x) \qquad (3)$$

where T represents the time window and f_a represents the average fitness of the individual x within the time window T.
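The two metrics described by equations (2) and (3) can be sketched directly from their definitions. The code below assumes the fitness history of a fixed solution is given as a list indexed by time step; the function and variable names are mine.

```python
def survival_time(fitness_over_time, t, eta):
    """Number of consecutive time steps, starting at step t, whose fitness
    stays at or above the threshold eta -- a sketch of Eq. (2)."""
    fs = 0
    for f_h in fitness_over_time[t:]:
        if f_h < eta:
            break  # the solution no longer satisfies the threshold
        fs += 1
    return fs

def average_fitness(fitness_over_time, t, T):
    """Mean fitness of a fixed solution over the window [t, t + T) --
    a sketch of Eq. (3)."""
    window = fitness_over_time[t:t + T]
    return sum(window) / len(window)
```

For a fitness history [5, 6, 7, 3, 8] with threshold 5, the survival time from step 0 is 3, since the fourth value drops below the threshold.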
In order to assess the efficiency of the algorithm presented in this paper, we utilize the performance evaluation metric for dynamic robust optimization algorithms found in the literature (Fu et al., 2013), expressed in equation (4):

$$\bar{f} = \frac{1}{P} \sum_{j=1}^{P} f_j \qquad (4)$$

where f_j represents the robustness of the solution obtained by the algorithm in the j-th environment and P represents the total number of environment types.
In addition, to provide a more comprehensive evaluation of the robustness of the proposed algorithm, this paper employs an evaluation index based on the fitness function value proposed in the literature (Yang et al., 2020), as shown in equation (5):

$$F(x) = g_{\text{robustness}}(x) + g_{\text{funvalue}}(x) \qquad (5)$$

where g_robustness(x) represents the normalized robustness indicator of the resulting solution: if the survival time obtained from equation (2) or the average fitness obtained from equation (3) is greater than 0, then g_robustness(x) = 1; otherwise, it equals 0. g_funvalue(x) is the L2-normalized optimal function value of the resulting solution.

Basic Particle Swarm Optimization Algorithm
Particle Swarm Optimization (PSO) is a population-based global search algorithm in which velocity and position are updated using the following formulas (Yang et al., 2020):

$$v_{id}(e+1) = \omega v_{id}(e) + c_1 r_1 \left( pbest_{id} - x_{id}(e) \right) + c_2 r_2 \left( gbest_d - x_{id}(e) \right) \qquad (6)$$

$$x_{id}(e+1) = x_{id}(e) + v_{id}(e+1) \qquad (7)$$

where ω is the inertia weight, e is the current iteration number, v_id(e) denotes the velocity of particle i in the d-th dimension at the e-th iteration, x_id(e) denotes the position of particle i in the d-th dimension at the e-th iteration, c_1 and c_2 are learning factors, r_1 and r_2 are random numbers in the (0, 1) interval, pbest_id denotes the d-th dimensional component of the historical optimal position of particle i, and gbest_d denotes the d-th dimensional component of the global extremum gbest.
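The PSO velocity and position updates can be sketched for a single particle as follows. The parameter values (ω = 0.7, c1 = c2 = 1.5) are illustrative defaults, not the settings used in this paper's experiments.

```python
import random

def pso_step(x, v, pbest, gbest, omega=0.7, c1=1.5, c2=1.5):
    """One velocity-and-position update for a single particle,
    following the standard PSO update rules."""
    new_v, new_x = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()  # fresh r1, r2 per dimension
        vd = (omega * v[d]
              + c1 * r1 * (pbest[d] - x[d])
              + c2 * r2 * (gbest[d] - x[d]))
        new_v.append(vd)
        new_x.append(x[d] + vd)  # position moves by the new velocity
    return new_x, new_v
```

Starting from rest at the origin with pbest = gbest = 1, the update pulls the particle toward the attractors, so the new velocity is positive and the new position equals it.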

Brainstorming Variation Strategy
Brainstorm Optimization (BSO) is a population-based optimization algorithm introduced by Shi in 2011 (Shi, 2011). The key concept of BSO is to mimic the human brainstorming process by treating each member of the population as a potential solution to the problem at hand. In each iteration, a k-means clustering algorithm is applied to cluster all individuals into several groups. One of these groups is then chosen at random according to a probability, and the center of the chosen group is updated by adding a perturbation based on the following equation:

$$y_{id} = x_{id} + \xi(e) \cdot N(\mu, \sigma^2) \qquad (8)$$

where x_id represents the d-th positional component of the i-th individual to be mutated, y_id represents the d-th positional component of the newly generated individual, N(μ, σ²) represents a Gaussian random number with mean μ and variance σ², and the coefficient of variation at the e-th iteration is ξ(e) = logsig((0.5·E_max − e)/k)·rand(), in which E_max represents the maximum number of iterations, k is a factor that regulates the slope of the sigmoid function and controls the convergence rate of the algorithm, and rand() represents a random number between 0 and 1. After selecting one individual from each of two classes, the algorithm uses a probability value to decide whether to fuse the two individuals into the individual to be mutated. The two selected individuals are fused as follows:

$$y_{\text{new},d} = R \cdot x_{i_1 d} + (1 - R) \cdot x_{i_2 d} \qquad (9)$$

where R is a random number in (0, 1), x_{i_1 d} and x_{i_2 d} are the d-th positional components of the two individuals selected from the two classes, and y_{new,d} is the offspring individual resulting from the fusion of the two individuals.

DESCRIPTION OF DYNAMIC ROBUST OPTIMIZATION ALGORITHM BASED ON HYBRID STRATEGY OF PARTICLE SWARM ALGORITHM

Hybrid Differential Variation Strategy

The fundamental particle swarm optimization algorithm updates the velocity and position of each particle based on the global optimum and the historical optimum, which increases convergence speed but may cause the particles to become stuck in local optima during the search. To address this issue and balance the algorithm's global search ability and local exploitation ability within the solution space, we incorporate a weighting factor and combine two differential evolution mutation strategies, DE/rand/1 and DE/best/1, following Zhang et al. (2017). The former uses random individuals and has strong global breadth-search ability, whereas the latter is guided by the population's historical optimum and has stronger local exploration ability. The two strategies are merged using a weighting factor as shown in equation (10):

$$x_i^{e+1} = \beta \left[ x_{l_1}^e + K \left( x_{l_2}^e - x_{l_3}^e \right) \right] + (1 - \beta) \left[ gbest^e + K \left( x_{l_4}^e - x_{l_5}^e \right) \right] \qquad (10)$$

where x_i^{e+1} denotes the mutated new individual corresponding to the i-th individual of population x^e at the e-th iteration; K is the mutation control parameter, with K_max set to 0.7 and K_min set to 0.5; l_1, l_2, l_3, l_4, l_5 are five mutually different random integers taken from the range [1, 2, ..., N], none of which equals i; and β is a monotonically decreasing function of the number of iterations.

The early period thus emphasizes the DE/rand/1 mutation strategy for global search, while the later period emphasizes the DE/best/1 strategy for local exploitation, enhancing the population's search performance.
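The weighted combination of DE/rand/1 and DE/best/1 can be sketched as below. The paper states only that β decreases monotonically with the iteration count and that K is controlled between K_min = 0.5 and K_max = 0.7; the linear schedules for β and K in this sketch are assumptions, as are the function and variable names.

```python
import random

def hybrid_de_mutation(pop, gbest, e, e_max, k_max=0.7, k_min=0.5):
    """Weighted blend of DE/rand/1 and DE/best/1 for a whole population.

    beta shrinks as iteration e grows, shifting emphasis from the random-base
    (global search) term to the best-base (local exploitation) term.
    The linear schedules for beta and K are illustrative assumptions.
    """
    n = len(pop)
    beta = 1.0 - e / e_max                       # assumed linear decrease
    K = k_max - (k_max - k_min) * e / e_max      # assumed linear schedule
    mutants = []
    for i in range(n):
        # five mutually different indices, all different from i
        l1, l2, l3, l4, l5 = random.sample([j for j in range(n) if j != i], 5)
        mutant = [beta * (pop[l1][d] + K * (pop[l2][d] - pop[l3][d]))
                  + (1.0 - beta) * (gbest[d] + K * (pop[l4][d] - pop[l5][d]))
                  for d in range(len(pop[i]))]
        mutants.append(mutant)
    return mutants
```

Note that drawing five mutually different indices excluding i requires a population of at least six individuals.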

Particle Swarm Optimization Algorithm Based on Brainstorming Algorithm
While the PSO algorithm exhibits a strong global search capability, it is prone to premature convergence to local optima and loss of diversity in the later stages of evolution. To address this problem, we propose to augment the PSO algorithm with a brainstorming variation strategy that applies BSO mutation to individuals within the population. Moreover, the step size of the mutation operator decreases as the number of population iterations grows, further improving the efficiency of the update process. The steps of the proposed brainstorming-variation-based particle swarm algorithm (Algorithm 1) are outlined below:

Algorithm 1: Particle Swarm Optimization Algorithm Based on Brainstorming Algorithm
Inputs: velocity-position information; probability parameter P1;
Output: new population;
1 Input the velocity-position information of the particles in the population and cluster the population into s classes using the k-means algorithm;
2 Rank the individuals in each class using equation (5);

Equation (5) is also employed to dynamically determine the new gbest. Additionally, the parameters c1 and c2 in the particle swarm velocity update equation (6) are dynamically adjusted by utilizing the technique from the literature (Xiao, 2017).
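The cluster-center perturbation used by the brainstorming variation can be sketched as follows. It implements the Gaussian mutation with the logsig-based step-size coefficient ξ(e) described in the BSO section; the default slope factor k = 20 is an illustrative assumption, as is the function name.

```python
import math
import random

def bso_mutate(center, e, e_max, k=20.0):
    """Gaussian perturbation of a cluster center with step-size coefficient
    xi(e) = logsig((0.5*E_max - e) / k) * rand().

    Because logsig decays as e grows, xi shrinks over the run, so mutations
    become finer in the later stages of evolution.
    """
    logsig = 1.0 / (1.0 + math.exp(-(0.5 * e_max - e) / k))
    xi = logsig * random.random()
    return [cd + xi * random.gauss(0.0, 1.0) for cd in center]
```

Early in the run (e small relative to E_max) the logsig term is close to 1, giving coarse exploratory moves; near the end it approaches 0, refining the search around promising cluster centers.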

HS-DRPSO Algorithm Implementation Steps
Algorithm 2 presents the pseudo-code for the proposed dynamic robust particle swarm optimization algorithm based on the hybrid strategy in this paper.

ALGORITHM SIMULATION EXPERIMENTS
To demonstrate the efficacy of the proposed algorithm, it was empirically evaluated on five test functions against two existing dynamic robust optimization algorithms: 1) ROOT, a robust-optimization-over-time algorithm proposed by Fu et al. (2015), and 2) DRPSO-DE, a hybrid particle-swarm-based dynamic robust optimization algorithm proposed by Yang et al. (2020).

Test Functions
To validate the efficacy of the proposed algorithm, we conducted tests using the modified Moving Peaks Benchmark (mMPB) function and four test functions (T1f1, T2f1, T3f1, and T4f1) generated by the CEC2009 dynamic rotation peak standard generator. The mMPB test function is derived from the Moving Peaks Benchmark (MPB), in which the height, width, and position of each peak change with the environment. The mMPB test function is expressed by equation (13):

$$f(x, t) = \max_{m = 1, \ldots, M} \left[ h_t^m - w_t^m \cdot \left\| x - c_t^m \right\| \right] \qquad (13)$$

where h_t^m, w_t^m, and c_t^m denote the height, width, and peak center location of the m-th peak at time t, v_t^m represents the movement of the m-th peak position at time t, N(0, 1) represents a Gaussian random number with mean 0 and variance 1, and the max function selects the highest of the M peak values as the function value. The test functions T1f1, T2f1, T3f1, and T4f1 share the form shown in equation (14):

$$f(x, t) = \max_{m = 1, \ldots, M} \frac{H_t^m}{1 + W_t^m \sqrt{\frac{1}{D} \sum_{d=1}^{D} \left( x_d - C_t^{md} \right)^2}} \qquad (14)$$

where x_d represents the component of the particle in the d-th dimension, and C_t^{md} denotes the d-th dimensional component of the location of the m-th peak at the current time. The functions T1f1, T2f1, T3f1, and T4f1 are transformed in a manner consistent with the method described in Yang et al. (2020).
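A moving-peaks evaluation can be sketched as below, assuming the standard cone-shaped MPB formulation (height minus width times distance to the peak center, maximized over all peaks). Peak dynamics between environments are omitted, and the function name is mine.

```python
import math

def mpb_value(x, heights, widths, centers):
    """Cone-form moving-peaks value at point x: the best score over M peaks,
    where each peak scores its height minus its width times the Euclidean
    distance from x to its center."""
    best = -math.inf
    for h, w, c in zip(heights, widths, centers):
        dist = math.sqrt(sum((xd - cd) ** 2 for xd, cd in zip(x, c)))
        best = max(best, h - w * dist)
    return best
```

For example, a point sitting exactly on a peak of height 50 scores 50, regardless of lower peaks elsewhere in the landscape.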

Test Function Parameter Setting
To facilitate comparison, the test function parameters for all algorithms were set to the same values as the standard algorithm parameters.The specifics of the parameter settings for the modified moving peak test function and the dynamic rotation peak test function can be found in Table 1.

Algorithm Parameter Setting
Both the proposed algorithm and the comparison algorithms were each run independently 30 times.
The time window T was varied over 2, 4, and 6; the thresholds η1 for the mMPB function were set to 40, 45, and 50, while the thresholds η2 for the dynamic rotation peak functions were set to 10, 15, and 20. The remaining algorithm parameters were set according to the specifications in Table 2.

Analysis of experimental Results
To evaluate the performance of the algorithms presented in this paper, they were tested and compared on the mMPB function and the dynamic rotation peak functions, with the results displayed in Tables 3 and 4. The Wilcoxon rank-sum test at a significance level of 5% was used to assess the statistical significance of the results. The mean of the 30 outcomes was used to compare performance, while the variance, given in parentheses, was used to compare stability. The best results are highlighted in bold in each table. The symbols "−," "+," and "≈" indicate that the proposed HS-DRPSO algorithm is inferior, superior, or statistically equivalent to the corresponding comparison algorithm, respectively; the more "+" entries there are, the greater the algorithm's significant performance advantage.
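The significance test used above can be sketched via the normal approximation to the Wilcoxon rank-sum statistic: pool both samples, rank them (averaging ranks over ties), and standardize the rank sum of the first sample. In practice a library routine such as scipy.stats.ranksums is preferable; this minimal version, with names of my choosing, only illustrates the mechanics. |z| > 1.96 indicates a significant difference at the 5% level (two-sided).

```python
import math
from itertools import chain

def rank_sum_z(a, b):
    """z statistic of the Wilcoxon rank-sum test (normal approximation)
    for sample a against sample b."""
    combined = sorted(chain(a, b))
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1  # extend over the tie group
        ranks[combined[i]] = (i + 1 + j) / 2.0  # average rank of positions i+1..j
        i = j
    n1, n2 = len(a), len(b)
    w = sum(ranks[v] for v in a)                  # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2.0                 # mean of w under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return (w - mu) / sigma
```

Two identical samples give z = 0, and a sample whose values all lie below the other's gives a negative z, matching the "−"/"+" bookkeeping in the tables.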
As shown in Table 3, HS-DRPSO outperforms Fu and DRPSO-DE at most thresholds of the mMPB function. With regard to survival time, the Wilcoxon statistics indicate that HS-DRPSO significantly outperforms the comparison algorithms at thresholds η1 of 40 and 45, while at a threshold η1 of 50 it achieves improved but statistically insignificant results relative to the comparison algorithms. As for the average fitness value, when the time window T is set to 2 or 6, the results obtained by HS-DRPSO are improved but not significantly different from those of the comparison algorithms; when T is set to 4, HS-DRPSO performs slightly worse than DRPSO-DE, but the difference is not statistically significant. The experimental results therefore demonstrate that HS-DRPSO can achieve a more robust solution for the mMPB function.
From Table 4, it is evident that HS-DRPSO surpasses the comparison algorithms on the dynamic rotation peak test functions. According to the Wilcoxon statistics, only on the T3f1 function with a time window T of 2 is the maximum fitness value obtained by the algorithm slightly worse than Fu, and even there the difference is not significant and the result is similar to that of DRPSO-DE. For the other three test functions, HS-DRPSO shows a significant advantage in both survival time and average fitness value. The key factor behind this advantage is the variation strategy of HS-DRPSO, which enhances the search performance of the algorithm and makes it more adaptable to the intricate dynamic rotation peak testing environment.
To visually represent the dynamic performance of the proposed HS-DRPSO algorithm, line graphs of the robust solutions found by the algorithms over 150 time steps are plotted in Figures 1 to 5. From Figures 2 to 5, it is evident that the algorithm significantly outperforms the comparison algorithms, especially on T1f1, T2f1, T3f1, and T4f1; only on T3f1, when the time window T is 2, is the algorithm's performance marginally worse than that of Fu. Furthermore, as the survival-time thresholds η1 and η2 of each test function and the time window T increase, the robustness performance of the solution decreases, owing to the higher quality requirements placed on the robust solution.
As shown in Tables 3 and 4, HS-DRPSO performs better at most thresholds of the mMPB function and is significantly better than the compared algorithms in terms of survival time. In addition, for the dynamic rotation peak test functions, HS-DRPSO has obvious advantages in both survival time and average fitness value. Figures 1 to 5 further illustrate these advantages. Thus, the overall performance of HS-DRPSO is substantially better than that of the algorithms proposed in the literature (Fu et al., 2015; Yang et al., 2020), providing evidence that HS-DRPSO is an effective algorithm for solving dynamic robust problems.

CONCLUSION
To enhance the search performance of dynamic robust optimization algorithms, we presented the HS-DRPSO algorithm, which utilizes a hybrid approach that incorporates differential evolution and brain storm optimization variation strategies into particle swarm optimization. Experiments on five dynamic standard test functions show that HS-DRPSO achieves better overall performance than two existing dynamic robust optimization algorithms. Future work will focus on improving the algorithm's capabilities by using more advanced machine learning methods and on applying the proposed method to real-world applications.

Figure 3. Performance of robust solution under the T2f1 function