Hybrid beluga whale optimization algorithm with multistrategy for functions and engineering optimization problems
Journal of Big Data volume 11, Article number: 3 (2024)
Abstract
Beluga Whale Optimization (BWO) is a new metaheuristic algorithm that simulates the social behaviors of beluga whales: swimming, foraging, and whale fall. Compared with other optimization algorithms, BWO shows certain advantages in solving unimodal and multimodal optimization problems. However, the convergence speed and optimization performance of BWO still fall short when solving complex multidimensional problems. Therefore, this paper proposes a hybrid BWO method called HBWO, combining quasi-oppositional-based learning (QOBL), an adaptive and spiral predation strategy, and the Nelder-Mead simplex search method (NM). Firstly, the QOBL strategy is introduced in the initialization phase. This strategy reconstructs the initial spatial positions of the population through pairwise comparisons to obtain a richer, higher-quality initial population. Subsequently, an adaptive and spiral predation strategy is designed for the exploration and exploitation phases. The strategy first learns the optimal individual's position in some dimensions through adaptive learning, avoiding entrapment in local optima. At the same time, a spiral movement driven by a cosine factor is introduced to maintain a balance between exploration and exploitation. Finally, the NM simplex search method is added. It corrects individual positions through multiple scaling operations, making the search faster, more accurate, and more efficient. The performance of HBWO is verified using the CEC2017 and CEC2019 test functions. Meanwhile, the superiority of HBWO is verified on six engineering design examples. The experimental results show that HBWO is more feasible and effective than BWO and other optimization methods in solving practical problems.
Introduction
Metaheuristic algorithms (MAs, for short) have developed rapidly in recent years. They combine stochastic and local search techniques and are widely used to solve global optimization problems in different fields. Compared with traditional algorithms, MAs can better deal with the complex, multimodal, non-continuous, and non-differentiable problems [1] found in the real world, thanks to their “random factors”. For example, MAs can be used to solve problems in image processing [2], shape optimization [3], machine learning [4], deep learning [5], path planning [6], clustering [7], engineering [9, 10], and other fields. MAs simulate the relevant behaviors of humans or animals, or apply the rules and action principles of physics and chemistry, to construct the mathematical model of the optimization algorithm. Accordingly, MAs can be divided into four categories: evolution-based algorithms, human-based algorithms, physics- and chemistry-based algorithms, and swarm-based intelligence algorithms [8].
Evolution-based algorithms derive superior next-generation individuals through Darwinian evolution and the survival of the fittest in nature, thus enabling the population to progress collectively. Representative methods include Differential Evolution (DE) [12] and the Genetic Algorithm (GA) [13], which simulate the genetic laws of nature to find the optimal individual through natural operators such as selection, crossover, and mutation. More specialized methods include Gene Expression Programming (GEP) [14], Evolutionary Programming (EP) [15], etc.
Human-based algorithms are mainly inspired by human social behaviors, including teaching and learning, competition, cooperation, etc. Examples include Teaching-Learning-Based Optimization (TLBO) [16]; Soccer League Competition (SLC) [18], which simulates football matches and identifies higher-scoring players among fixed and substitute players by ranking points; Socio Evolution & Learning Optimization (SELO) [19]; Human Urbanization (HUS) [20], derived from human life, whose purpose is to realize urbanization and more convenient urban living; Growth Optimizer (GO) [21], inspired by learning and reflection during the process of growing up; Artificial Ecosystem-Based Optimization (AEO) [22]; and Harmony Search (HS) [17], which simulates musicians producing wonderful music by constantly adjusting notes.
Physics- and chemistry-based algorithms are inspired by physical theorems and chemical experiments. The more classical ones include Simulated Annealing (SA) [23] and the Gravitational Search Algorithm (GSA) [24], which simulates the motion of objects under the law of universal gravitation. They also include Lightning Attachment Process Optimization (LAPO) [25]; Young's Double-Slit Experiment Optimizer (YDSE) [27], inspired by the double-slit interference experiment that demonstrated the wave nature of light; Atomic Orbit Search (AOS) [28], inspired by concepts such as quantum mechanics and the quantum atomic model; Atom Search Optimization (ASO) [29]; etc.
Swarm-based intelligence algorithms focus on finding the best solution by simulating the behavior and collective intelligence of a group. Classical swarm intelligence algorithms include Particle Swarm Optimization (PSO) [30], Ant Colony Optimization (ACO) [31], and the Firefly Algorithm (FA) [11]. Due to their simple structure, they may lack convergence accuracy when dealing with complex optimization problems. With the deepening of scientific research, a large number of new swarm intelligence algorithms have emerged, including Harris Hawks Optimization (HHO) [32], mimicking the hunting process of Harris hawks; the Sparrow Search Algorithm (SSA) [35]; and the Chameleon Swarm Algorithm (CSA) [33], which simulates the way chameleons change color according to their environment when searching for food. These search methods achieve excellent performance on benchmark functions but suffer from slow convergence when dealing with real engineering problems. Also included are Jellyfish Search (JS) [34], which simulates jellyfish following ocean currents; the Coati Optimization Algorithm (COA) [36]; and the Gannet Optimization Algorithm (GOA) [37], a numerical model constructed from the unique foraging behavior of gannets. Although these algorithms have good convergence speed and accuracy, they fall into local optima on some high-dimensional problems and are sensitive to control parameters. Swarm-based intelligence also includes Aphid-Ant Mutualism (AAM) [38], which simulates the mutualism between aphids and ants in nature; Shrimp and Goby Association Search (SGA) [39], a mathematical model based on the cooperative win-win relationship between shrimp and goby; Conscious Neighborhood-Based Crow Search (CCSA) [40]; Manta Ray Foraging Optimization (MRFO) [41]; the Mountain Gazelle Optimizer (MGO) [42], inspired by the herd life of the mountain gazelle; Artificial Rabbits Optimization (ARO) [43]; and the Artificial Hummingbird Algorithm (AHA) [44].
These algorithms have good global search capability and can handle continuous optimization problems, but they remain challenged by discrete optimization problems.
Beluga Whale Optimization (BWO) [45] was proposed by Changting Zhong et al. in 2022, and its inspiration comes from three stages of beluga whale behavior: swimming, foraging, and whale fall. Beluga whales are social animals, and many of them usually migrate together every July. In summer, beluga whales gather in estuaries to hunt. Because they do not have sharp teeth, they generally eat salmon, smelt, cod, and smaller prey such as shrimp, squid, and clams. Whale fall refers to beluga whales dying and sinking to the seabed due to attacks by natural enemies such as polar bears and killer whales during migration, harm caused by humans, and other irresistible factors. These three stages correspond to the exploration, exploitation, and whale fall phases of BWO, and a mathematical model is established accordingly.
The two algorithms, BWO and the Whale Optimization Algorithm (WOA), differ significantly in how their optimization models are constructed because of their different inspirations [61]. Firstly, BWO was inspired by the swimming, foraging, and whale-fall processes of beluga whales and builds its mathematical model on these three stages, whereas WOA models the bubble-net foraging process of humpback whales. Furthermore, BWO uses a Lévy flight during the exploitation phase to simulate the foraging movement of beluga whales, and it considers whales falling due to natural factors. The two algorithms are therefore substantially different.
Some experiments have convincingly demonstrated the competitiveness of BWO in solving optimization problems. However, as real problems become increasingly complex, the original BWO is unable to handle them effectively and achieve suitable results, leaving room for effective enhancement. Therefore, this paper presents an improved BWO called HBWO, which introduces quasi-oppositional-based learning (QOBL) [46], an adaptive and spiral predation strategy, and the Nelder-Mead simplex search method (NM) [47–49]. Firstly, quasi-oppositional-based learning is introduced to obtain an optimal solution with higher probability. Secondly, the adaptive and spiral predation strategies improve the convergence speed while avoiding local optima. Finally, the Nelder-Mead simplex search method is introduced so that better individuals replace the worst ones.
In this paper, the CEC 2017 test function [50] and the CEC 2019 test function [51] are used to verify the superiority of HBWO. The main contributions of this paper are as follows:

1
To improve the performance of BWO, three strategies are added to the basic BWO, and an improved beluga whale optimization algorithm, HBWO, is proposed.

2
The performance of HBWO was evaluated on the CEC2017 and CEC2019 test functions and compared with nine optimization algorithms and two improved optimization algorithms. The numerical results show that HBWO is competitive.

3
Six practical engineering application problems were solved by HBWO, further demonstrating its superiority and its efficiency in solving practical problems.
The remainder of this paper is organized as follows: Sect. "Basic BWO" briefly describes the specific process of BWO. Sect. "Proposed HBWO" presents a detailed description of the proposed HBWO. Sect. "Experimental results and discussion" tests the performance of HBWO using the CEC2017 and CEC2019 test functions and analyzes and evaluates the resulting experimental results. Sect. "HBWO for engineering optimization problems" solves six practical engineering examples with HBWO. Sect. "Conclusion and Future" summarizes the paper.
Basic BWO
Beluga Whale Optimization (BWO) was proposed by Changting Zhong et al. in 2022. The algorithm is inspired by three phases of beluga whale behavior: swimming, feeding, and whale fall.
The exploration and exploitation stages of BWO are determined by \(B_{f}\), which can be expressed as:
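Following the original BWO paper [45]:

```latex
B_{f} = B_{0}\left(1 - \frac{T}{2T_{\max}}\right)
```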
where \(B_{0}\) is a random number in (0, 1), T is the current iteration, and \(T_{\max}\) is the maximum number of iterations. When \(B_{f} > 0.5\), BWO is in the exploration phase; when \(B_{f} \le 0.5\), BWO is in the exploitation phase. As T increases, the range of \(B_{f}\) shrinks from (0, 1) to (0, 0.5).
Exploration phase
The mathematical model of the BWO exploration phase is inspired by beluga whale swimming. Based on behavioral recordings of belugas swimming, pairs of belugas swim close together in a synchronized or mirrored fashion (Fig. 1a). The position update is as follows:
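Per the original BWO paper [45], the update alternates sine and cosine factors over the dimensions:

```latex
X_{i,j}^{T+1} =
\begin{cases}
X_{i,p_{j}}^{T} + \left(X_{r,p_{1}}^{T} - X_{i,p_{j}}^{T}\right)(1 + r_{1})\sin(2\pi r_{2}), & j \text{ even} \\[4pt]
X_{i,p_{j}}^{T} + \left(X_{r,p_{1}}^{T} - X_{i,p_{j}}^{T}\right)(1 + r_{1})\cos(2\pi r_{2}), & j \text{ odd}
\end{cases}
```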
where \(X_{i,j}^{T + 1}\) is the new position of the ith individual in the jth dimension, \(p_{j} (j = 1,2, \cdots ,d)\) is a dimension randomly selected from the D dimensions, \(X_{i,pj}^{T}\) is the position of the ith individual in the \(p_{j}\) th dimension, \(X_{r,p1}^{T}\) is the current position of the rth individual (r is randomly selected), and \(r_{1}\) and \(r_{2}\) are random numbers in (0, 1).
Exploitation phase
The BWO exploitation phase is inspired by the beluga whale's predatory behavior (Fig. 1b). It is expressed as follows:
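Per the original BWO paper [45]:

```latex
X_{i}^{T+1} = r_{3}X_{best}^{T} - r_{4}X_{i}^{T} + C_{1} \cdot L_{F} \cdot \left(X_{r}^{T} - X_{i}^{T}\right)
```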
where \(X_{i}^{T}\) and \(X_{r}^{T}\) are the current and a random beluga position, \(X_{i}^{T + 1}\) is the updated beluga position, \(X_{best}^{T}\) is the beluga in the best position, \(r_{3}\) and \(r_{4}\) are random numbers in (0, 1), and \(C_{1} = 2r_{4} (1 - T/T_{\max } )\) measures the random jump strength of the Lévy flight. \(L_{F}\) is calculated as follows:
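From the original BWO paper [45], the Lévy flight function is:

```latex
L_{F} = 0.05 \times \frac{u \cdot \sigma}{|v|^{1/\beta}}, \qquad
\sigma = \left(\frac{\Gamma(1+\beta)\,\sin(\pi\beta/2)}{\Gamma\!\left(\frac{1+\beta}{2}\right)\beta\, 2^{(\beta-1)/2}}\right)^{1/\beta}
```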
where u and v are normally distributed random numbers and β is a constant, β = 1.5.
Whale fall
Whale fall refers to beluga whales dying due to attacks by natural enemies such as polar bears and killer whales during migration, harm caused by humans, and other irresistible factors (Fig. 1c). The mathematical model of whale fall is expressed as follows:
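Per the original BWO paper [45], the whale-fall update uses a step size \(X_{step}\):

```latex
X_{i}^{T+1} = r_{5}X_{i}^{T} - r_{6}X_{r}^{T} + r_{7}X_{step}, \qquad
X_{step} = (u_{b} - l_{b})\exp\!\left(-\frac{C_{2}T}{T_{\max}}\right)
```

where \(r_{5}\), \(r_{6}\), and \(r_{7}\) are random numbers in (0, 1), and \(u_{b}\) and \(l_{b}\) are the upper and lower bounds of the search space.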
where \(C_{2} = 2W_{f} \times n\), and \(W_{f}\) can be expressed as follows:
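In the original BWO paper [45], the whale-fall probability decreases linearly with the iterations:

```latex
W_{f} = 0.1 - 0.05\,\frac{T}{T_{\max}}
```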
A whale fall occurs when \(B_{f} < W_{f}\).
Proposed HBWO
With the increasing complexity of optimization problems, BWO has some limitations in solving practical problems, such as insufficient solution accuracy. In this section, an improved Beluga Whale Optimization, HBWO, is proposed by combining the following three strategies: (1) improving the distribution quality of the initial population with quasi-opposition-based learning (QOBL); (2) an adaptive and spiral predation strategy, whose adaptive part makes belugas learn from the best individual and reduces learning from random belugas, and whose spiral predation part expands the exploitation phase, helping to prevent premature convergence and to jump out of local optima; (3) the Nelder-Mead simplex search method (NM), which makes better beluga individuals replace the worst ones through reflection, expansion, and contraction.
Quasioppositionbased learning (QOBL)
Opposition-based learning (OBL) [52, 53] assumes that an opposition-based initial population has a higher probability of containing the optimal solution than a randomly generated one. OBL has been extended to quasi-opposition-based learning (QOBL), and the initial beluga population obtained with QOBL has better optimization ability than the initial populations obtained with OBL or random initialization. QOBL is denoted by:
where
where \(X_{i,j}^{T}\) is the position of the ith beluga whale in the jth dimension generated by random initialization.
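As an illustration, the QOBL initialization described above can be sketched in Python. The helper name `qobl_init` and the sampling of the quasi-opposite point uniformly between the interval centre and the opposite point follow the standard QOBL definition [46]; treat this as a sketch under those assumptions, not the paper's exact implementation.

```python
import numpy as np

def qobl_init(n_pop, dim, lb, ub, fitness, rng=None):
    """Quasi-opposition-based initialization: generate a random population,
    build its quasi-opposite counterpart, and keep the fitter half."""
    rng = np.random.default_rng(rng)
    x = lb + rng.random((n_pop, dim)) * (ub - lb)       # random population
    centre = (lb + ub) / 2.0
    x_opp = lb + ub - x                                  # opposite points
    # quasi-opposite point: uniform between the centre and the opposite point
    x_qo = centre + rng.random((n_pop, dim)) * (x_opp - centre)
    pool = np.vstack((x, x_qo))
    f = np.apply_along_axis(fitness, 1, pool)
    best = np.argsort(f)[:n_pop]                         # keep the n_pop fittest
    return pool[best], f[best]
```

Merging the random and quasi-opposite populations and keeping the best half is what gives the initial population its higher quality relative to purely random sampling.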
Adaptive and spiral predation strategies
The adaptive part is an improvement to the exploration phase of BWO. The original BWO exploration phase ignores learning from the optimal beluga individual, which weakens the algorithm's search ability and leaves room for improvement. HBWO improves on the framework of the original BWO exploration phase: it learns from random beluga individuals while also learning from the optimal beluga individual in some dimensions, increasing the algorithm's search ability in the exploration phase. The mathematical formulation is as follows.
where \(s_{1}\) and \(s_{2}\) are two random integers between 1 and D (the number of dimensions), \(x_{i,s1}^{T + 1}\) is the new position of the ith beluga in the \(s_{1}\) th dimension, \(x_{g,s1}^{T + 1}\) is the position of the globally optimal beluga in the \(s_{1}\) th dimension, and \(x_{i,s1}^{T}\) is the current position of the ith beluga in the \(s_{1}\) th dimension; the same applies to \(s_{2}\).
The spiral predation part is an improvement to the exploitation phase of BWO (Fig. 2). Influenced by the Whale Optimization Algorithm, spiral predation with a cosine function is introduced in the exploitation stage of BWO to broaden the exploitation ability of beluga whales and enhance the diversity of later populations. It is calculated by Eq. (12).
where \(x_{i}^{T}\) is the current beluga position, \(x_{i}^{T + 1}\) is the updated beluga position, \(x_{best}^{T}\) is the best solution, \(r_{3}\) and \(r_{4}\) are random numbers in (0, 1), \(C_{1} = 2r_{4} (1 - T/T_{\max } )\), and \(w\) is represented as follows:
where r and h are random numbers in (0, 1).
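The adaptive and spiral updates can be illustrated with a short Python sketch. Note that the spiral factor `w` below (an exponential-cosine term) and the helper name `adaptive_spiral_step` are assumptions for illustration only; the paper's exact forms are those of Eqs. (12) and (13).

```python
import numpy as np

def adaptive_spiral_step(x, x_best, x_rand, t, t_max, rng=None):
    """One hypothetical HBWO-style update: an adaptive move toward the best
    individual on two random dimensions, followed by a cosine-spiral move
    blending the best, current, and random whales (stand-in for Eqs. 12-13)."""
    rng = np.random.default_rng(rng)
    dim = x.size
    x_new = x.copy()
    # adaptive part: learn from the global best on two randomly chosen dimensions
    s1, s2 = rng.integers(0, dim, size=2)
    for s in (s1, s2):
        x_new[s] = x[s] + rng.random() * (x_best[s] - x[s])
    # spiral part: cosine-modulated move around the best solution
    r3, r4, r, h = rng.random(4)
    c1 = 2.0 * r4 * (1.0 - t / t_max)
    w = np.exp(h) * np.cos(2.0 * np.pi * r)   # assumed cosine spiral factor
    x_new = r3 * x_best - r4 * x_new + c1 * w * (x_rand - x_new)
    return x_new
```

The cosine factor lets the step alternate sign and magnitude across calls, which is what broadens exploitation and preserves late-stage diversity.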
NelderMead simplex search method (NM)
The method uses reflection, expansion, and compression to rescale beluga individuals. By comparing the fitness values of these points, a better beluga individual replaces the worst one according to the steps of NM, helping to find the optimal beluga individual (Fig. 3). The steps of NM are described below.

(1)
Reflection. Calculate the reflection point \(r = m + \alpha \cdot (m - w)\), where α is the reflection coefficient; in this paper, \(\alpha = 1\).

(2)
Expansion. If \(f_{r} < f_{w}\), calculate the expansion point \(s = m + \beta \cdot (r - m)\), where β is the expansion coefficient; in this paper, \(\beta = 2\). If \(f_{s} < f_{r}\), set \(w = s\); otherwise, set \(w = r\).
where \(f_{r}\), \(f_{w}\), and \(f_{s}\) denote the fitness values of the corresponding points r, w, and s.

(3)
Compression.

①
If \(f_{b} < f_{r} < f_{m}\), set \(w = r\).

②
If \(f_{m} < f_{r} < f_{w}\), contract inward: \(c = m + (r - m)/2\); if \(f_{c} < f_{w}\), set \(w = c\).

③
Otherwise, contract outward: \(p = m - (r - m)/2\); if \(f_{p} < f_{w}\), set \(w = p\); otherwise, set \(w = r\).
where \(f_{b}\), \(f_{r}\), \(f_{m}\), \(f_{w}\), \(f_{c}\), and \(f_{p}\) are the fitness values of the points b, r, m, w, c, and p, respectively.
To reduce the error caused by the randomness of NM, it can be run repeatedly; in this paper, it is run 5 times. Algorithm 1 gives the pseudocode.
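The reflection, expansion, and compression steps above can be sketched as a single NM-style replacement of the worst individual. The helper name `nm_replace_worst` and the centroid choice `m` (mean of all points except the worst) are assumptions of this sketch; the branching follows the standard Nelder-Mead case analysis with the paper's coefficients α = 1 and β = 2.

```python
import numpy as np

def nm_replace_worst(pop, fit, obj):
    """One Nelder-Mead style step: reflect the worst individual through the
    centroid, then expand or contract, and replace the worst individual
    whenever an improving candidate is found."""
    order = np.argsort(fit)
    pop, fit = pop[order], fit[order]
    b, w = pop[0], pop[-1]                  # best and worst individuals
    m = pop[:-1].mean(axis=0)               # centroid of the remaining points
    f_b, f_m, f_w = fit[0], obj(m), fit[-1]
    alpha, beta = 1.0, 2.0                  # reflection / expansion coefficients
    r = m + alpha * (m - w)                 # reflection point
    f_r = obj(r)
    if f_r < f_b:                           # expansion
        s = m + beta * (r - m)
        cand = s if obj(s) < f_r else r
    elif f_r < f_m:                         # reflection accepted
        cand = r
    elif f_r < f_w:                         # contract inward
        c = m + (r - m) / 2.0
        cand = c if obj(c) < f_w else r
    else:                                   # contract outward
        p = m - (r - m) / 2.0
        cand = p if obj(p) < f_w else w
    f_c = obj(cand)
    if f_c < f_w:                           # keep only an improving candidate
        pop[-1], fit[-1] = cand, f_c
    return pop, fit
```

Running this repeatedly (5 times in the paper) averages out the effect of which random individuals happen to form the simplex.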
Algorithm 1:
Nelder-Mead simplex search method.
Detailed Steps for HBWO
Combining the three strategies, quasi-opposition-based learning, the adaptive and spiral predation strategy, and the Nelder-Mead simplex search method, with BWO, an enhanced beluga whale optimization, denoted HBWO, is proposed.
For HBWO, the three introduced strategies are important for maintaining balance. First, the quasi-opposition-based learning strategy increases the likelihood that the population searches near the optimal solution by improving the initial distribution of the population, which improves search efficiency in the exploration phase. The adaptive and spiral predation strategy increases search efficiency by introducing optimal-position information in the exploration phase, so the population continually explores toward the optimal solution. In addition, the Nelder-Mead simplex search method relies on the position information of the population to pinpoint the optimal position in the solution space.
The steps of HBWO are as follows:
Step 1. Initialization: determine the parameters of the enhanced beluga whale optimization algorithm;
Step 2. Generate the initial population of belugas by Eq. (9);
Step 3. Calculate the fitness values and obtain the current optimal individual;
Step 4. Calculate \(B_{f}\) according to Eq. (1). If \(B_{f} > 0.5\), update the position of each beluga individual according to Eq. (12); otherwise, update it according to Eq. (13);
Step 5. Calculate and sort the fitness values to find the current optimal beluga individual;
Step 6. Calculate \(W_{f}\) according to Eq. (8); if \(B_{f} < W_{f}\), update the individual position of the beluga according to Eq. (6);
Step 7. Check whether the position exceeds the upper or lower bounds; if the upper bound is exceeded, replace the position with ub, and if the lower bound is exceeded, replace it with lb;
Step 8. Replace the position of the worst beluga individual according to NM;
Step 9. Find the current optimal individual position;
Step 10. If \(T \le T_{\max }\), go to Step 3; otherwise, go to Step 11;
Step 11. Output the global optimal position and fitness value.
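Steps 1-11 can be outlined as a minimal Python skeleton. The per-phase position updates below are simplified stand-ins, not the paper's exact Eqs. (1), (6), (8), (12), and (13); the QOBL initialization and the NM replacement step are omitted here for brevity, and the function name `hbwo` is an assumption of this sketch.

```python
import numpy as np

def hbwo(obj, dim, lb, ub, n_pop=50, t_max=500, seed=None):
    """Skeleton of the HBWO main loop (Steps 1-11), with simplified updates."""
    rng = np.random.default_rng(seed)
    x = lb + rng.random((n_pop, dim)) * (ub - lb)   # Step 2 (QOBL omitted)
    fit = np.array([obj(v) for v in x])              # Step 3
    g_best = x[np.argmin(fit)].copy()
    for t in range(1, t_max + 1):
        b0 = rng.random(n_pop)
        b_f = b0 * (1 - t / (2 * t_max))             # Step 4: balance factor
        for i in range(n_pop):
            r = rng.integers(n_pop)
            if b_f[i] > 0.5:                         # exploration (adaptive part)
                x[i] += rng.random(dim) * (x[r] - x[i])
            else:                                    # exploitation (spiral part)
                c1 = 2 * rng.random() * (1 - t / t_max)
                x[i] = (rng.random() * g_best - rng.random() * x[i]
                        + c1 * (x[r] - x[i]))
            w_f = 0.1 - 0.05 * t / t_max             # Step 6: whale fall
            if b_f[i] < w_f:
                step = (ub - lb) * np.exp(-2 * w_f * n_pop * t / t_max)
                x[i] = (rng.random() * x[i] - rng.random() * x[r]
                        + rng.random() * step)
        x = np.clip(x, lb, ub)                       # Step 7: bound handling
        fit = np.array([obj(v) for v in x])          # Steps 5 and 9
        # Step 8 (NM replacement of the worst individual) omitted in this sketch
        if fit.min() < obj(g_best):
            g_best = x[np.argmin(fit)].copy()
    return g_best, obj(g_best)                       # Step 11
```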
Algorithm 2 gives the pseudocode of HBWO. Figure 4 shows the flow chart of HBWO.
Time complexity of HBWO
The computational complexity (O) of HBWO is determined by three processes: algorithm initialization, fitness evaluation, and beluga individual updating. Firstly, quasi-opposition-based learning (QOBL) is added, with a complexity of O(N_{pop} × D). Secondly, the adaptive and spiral predation strategies are introduced, with a computational complexity of O(N_{pop} × T_{max}). The complexity of the whale fall is O(N_{pop} × T_{max} × 0.1). Finally, the Nelder-Mead simplex search method is added, with a computational complexity of O(5 × N_{pop}). Therefore, the complexity of the proposed HBWO is O(HBWO) = O(N_{pop} × (D + 1.1 × T_{max} + 5)).
Algorithm 2:
Proposed HBWO Algorithm.
Experimental results and discussion
In this section, the proposed HBWO is compared with the classical Particle Swarm Optimization (PSO); the widely applied Whale Optimization Algorithm (WOA) and Harris Hawks Optimization (HHO); the more recent Sparrow Search Algorithm (SSA), Dandelion Optimizer (DO) [54], Sand Cat Swarm Optimization (SCSO) [55], Aquila Optimizer (AO) [56], and Arithmetic Optimization Algorithm (AOA) [59]; two improved algorithms, Leader Harris Hawks Optimization (LHHO) [57] and the Leader Slime Mould Algorithm (LSMA) [58]; and the original BWO, to verify the superiority of HBWO on the CEC2017 and CEC2019 test sets. These two test sets contain unimodal, multimodal, hybrid, and composition functions, which are challenging and thus enable a more scientific measure of an algorithm's merit. The initial parameters of the optimization algorithms are shown in Table 1.
Sensitivity analysis of reflection and expansion coefficients
The NM strategy scales the positions of beluga individuals through reflection, expansion, and compression to prevent the algorithm from falling into a local candidate solution. The key to how the NM strategy affects the position updates of beluga individuals lies in the choice of the reflection parameter α and the expansion parameter β. In this section, we discuss and analyze the effects of these two parameters on the performance of the algorithm, using the CEC2019 suite of ten functions. The values of the reflection parameter α and the expansion parameter β are varied over [0.5, 1.5] and [1.5, 2.5], respectively, with a step size of 0.1. For each selected α and β, the mean values and rankings of the test function solutions obtained by HBWO over 20 independent experiments are presented in Tables 2 and 3, respectively. Bold indicates the scaling parameter with the smallest mean value.
From the results in Tables 2 and 3, it can be found that α = 1 obtains the best average in all six cases, with the lowest average of 1.7; the closer α is to 1, the better the performance of the HBWO algorithm. In addition, β = 2 obtains the best mean in all four cases, with the lowest mean value of 3. Hence, α = 1 and β = 2 are the most appropriate choices.
Performance benefits of improved strategies
To effectively analyze the impact of each introduced strategy on the optimization ability of BWO and to demonstrate the synergistic effect of multiple strategies, an ablation analysis is performed. To remedy the drawbacks of BWO and enhance its performance, quasi-oppositional-based learning, adaptive and spiral predation, and the Nelder-Mead simplex search are introduced into the proposed HBWO. Table 4 lists the BWO variants using one or more of the strategies, where "1" indicates that a strategy is introduced and "0" indicates that it is not. The performance of the BWO variants was analyzed on the CEC2019 test suite, and Table 5 lists the experimental results, including averages and rankings.
As can be seen in Table 5, the performance of the BWO variants containing the NM strategy is relatively unstable. These experimental results show that the four strategies are useful in improving the performance of the original algorithm, especially the introduced adaptive and spiral predation, although robustness is lacking. In addition, the results show that ANMBWO, QABWO, and AHWO, ranked 2, 3, and 4, outperform the other BWO variants in overall optimization performance, which also proves that the adaptive and spiral predation strategy plays the greatest and most consistent role among the four strategies. Compared to BWO, the proposed HBWO mainly improves the exploration and balancing ability and the ability to jump out of local solutions. Therefore, the combination of quasi-oppositional-based learning, adaptive and spiral predation, and the Nelder-Mead simplex search moves HBWO toward the optimal solution.
Experiments and analysis on the CEC2017 test set
The strengths and weaknesses of HBWO are verified using the CEC2017 test set. First, the relevant parameters are set: the population size is 50, the number of dimensions is 30, and each algorithm is run 20 times on each test function. The results, including the mean, standard deviation, best, worst, and rank, are reported.
As seen in Table 6, the average rank of HBWO is 1.1724, which ranks first. This indicates that the overall solution quality of HBWO is the best across the 30 test functions of CEC2017. Table 6 also shows that, compared to BWO, HBWO significantly improves the optimization ability on every CEC2017 test function, indicating the effectiveness of introducing QOBL, the adaptive and spiral predation strategy, and NM. On the unimodal test functions F1 and F3, HBWO is superior to the other algorithms. For the multimodal test functions F4–F10, HBWO ranked second after LSMA on F5 and F7. On the hybrid and composition test functions, HBWO ranked second on F24, third on F27, and first on all other test functions. However, the proposed HBWO requires more runtime than the original BWO, mainly due to the added time complexity of the three strategies.
From Table 6, for the 30-dimensional CEC2017 test functions, BWO ranks last, AOA eleventh, LHHO fifth, AO fourth, SSA third, LSMA second, and HBWO first, which fully indicates that the three strategies significantly improve the computational accuracy of BWO. The performance ranking of the twelve algorithms is verified as HBWO > LSMA > SSA > AO > LHHO > SCSO > PSO > HHO > DO > WOA > AOA > BWO.
The Wilcoxon rank-sum test values are given in Table 7, where bold data indicate \(p > 0.05\). The results are 0/0/29, 0/2/27, 0/0/29, 0/0/29, 0/0/29, 0/0/29, 0/0/29, 0/0/29, 0/0/29, 0/0/29, and 3/5/21. Compared with SSA, HBWO has \(p > 0.05\) on test functions F13 and F27 and outperforms SSA on the other 27 test functions. Compared with LSMA, \(p > 0.05\) on the F8, F10, F21, F23, and F26 test functions, and HBWO is better than LSMA on 21 test functions. Against the remaining algorithms, HBWO outperforms them on all test functions. Therefore, HBWO has a better search ability on the 30-dimensional CEC2017 test functions.
The convergence curves of each algorithm on the CEC2017 test functions are shown in Fig. 5. The convergence of HBWO is better on the unimodal test functions. On the multimodal and hybrid test functions, HBWO suffers from a weaker convergence rate at the beginning of the iterations. However, due to the addition of spiral predation, HBWO jumps out of local optima in later iterations, and its convergence curve continues to decline in the later stages, which gives HBWO competitiveness and superiority, especially on the F6, F9, F12, F14, F18, F19, and F22 test functions. For the composition test functions, HBWO has significantly better search ability than the other algorithms on the F26 and F30 test functions.
In addition, box plots are given in Fig. 6. It can be seen that the boxes corresponding to HBWO are smaller and lower in most cases, which indicates that the results solved by HBWO are better and more stable. HBWO has slightly higher box positions than LSMA only on the CEC2017 test functions F5, F7, F24, and F27, and on the F8 test function its median is slightly lower than LSMA's, with a smaller box. The boxes of HBWO are smaller and lower than those of the other algorithms, especially on the test functions F14, F16, F17, F20, F22, and F29. In general, the box plots show that HBWO is a significant improvement over BWO.
The radar plot of the rankings of the twelve optimization algorithms on the CEC2017 test functions is given in Fig. 7. It can be seen that HBWO has the smallest shaded area and BWO the largest. This further illustrates that the performance of BWO combined with the three strategies is greatly improved, showing the superiority of HBWO.
Tables 8, 9, and 10 give the results of HBWO, OBCWOA [87], and BWO on the CEC2017 test functions in 10, 50, and 100 dimensions. From the tables, we can see that HBWO improves performance on every test function in 50 dimensions, with very obvious improvements especially on F1, F4, F12, F13, F14, F15, F19, and F30. In 100 dimensions, it shows some superiority on F1, F3, F12, F13, F14, F15, F17, F18, F19, and F30. This further illustrates that BWO combined with the three strategies obtains richer populations and escapes locally optimal solutions in high-dimensional spaces.
Experiments and analysis on the CEC2019 test set
The performance of HBWO was further tested on the CEC2019 test set. The algorithm parameters were first set with a population size of 40, each algorithm was run 20 times, and the results obtained are shown in Table 11. From Table 11, the average rank of HBWO is 1.1, the first overall rank. HBWO ranked second only on the F2 test function, on which SCSO ranked first. However, HBWO ranks first on the other test functions, and all of its experimental results there are better than those of the SCSO algorithm. In general, HBWO is a great improvement over BWO. However, HBWO still needs to improve its accuracy on some unimodal functions. The main reason is that the proposed algorithm balances the exploration and exploitation processes in a way that leaves the exploration time too short.
From Table 11, it can be seen that HBWO ranks first, LSMA second, SCSO third, HHO fourth, BWO eleventh, and AOA twelfth. Therefore, the performance ranking of the twelve algorithms on CEC2019 is HBWO > LSMA > SCSO > HHO > AO > LHHO > PSO > SSA > DO > WOA > BWO > AOA.
The Wilcoxon rank-sum test values are given in Table 12. According to the last row of Table 12, the results are 0/1/9, 0/1/9, 0/1/9, 0/1/9, 0/1/9, 0/2/8, 0/2/8, 0/0/10, 0/1/9, 0/1/9, and 0/3/7. It can be seen that HBWO is better than PSO, SSA, WOA, HHO, DO, BWO, and LHHO on 9 test functions; HBWO shows better results on the CEC2019 test set.
The convergence profiles of HBWO on the CEC2019 test functions are given in Fig. 8. As can be observed in Fig. 8, on the test functions F4, F6, F7, and F8, HBWO not only converges quickly but also converges with high accuracy, showing certain advantages. On the test functions F4, F6, F7, F8, and F10, HBWO's convergence speed is weaker than PSO's in the early stage, but it still converges downward and jumps out of local optima in later iterations, and its optimization ability is stronger than PSO's. HBWO differs little in convergence speed and accuracy from SSA, HHO, DO, SCSO, AO, BWO, LHHO, and LSMA on the F1, F2, and F3 test functions, which is also where future improvement is needed. However, on the other test functions, HBWO is significantly superior to the other algorithms.
Figure 9 presents the boxplots of each algorithm on the CEC2019 test functions. HBWO has lower and narrower boxes on most of the test functions, indicating the superiority of the algorithm, especially on F3, F4, and F6. On the test functions F7 and F8, HBWO's box is larger, but it still lies visibly lower, with a smaller mean value.
The radar plots drawn from the rankings of the twelve optimization algorithms on the CEC2019 test functions are given in Fig. 10. HBWO has the smallest shaded area, BWO ranks eleventh by area, and AOA has the largest area. This further illustrates that combining BWO with QOBL, the adaptive and spiral predation strategy, and the NM simplex search yields a great improvement.
Comparative analysis of time and space complexity
To increase the persuasiveness of the comparison experiments and to verify other aspects of the proposed HBWO, we report the time and space complexity of the compared algorithms in Table 13.
From Table 13, we can find that the time complexity of AO, AOA, PSO, WOA, and LSMA is O(N_{pop} × (D × T_{max} + 1)), because each of these methods consists only of initialization and relatively simple iterative updates. The complexity of the proposed algorithm differs from these methods only in the whale-fall process and the NM search.
Moreover, for space complexity, the maximum amount of space an optimization method occupies at any time is determined by its initialization. The space complexity of the proposed HBWO is therefore O(N_{pop} × D), as it is for all the other methods except PSO. PSO has a complexity of O(2N_{pop} × D) because its initialization must store velocities as well as positions.
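The space-complexity argument can be made concrete by counting the arrays each method must hold at once. A minimal NumPy sketch, with the population size and dimension chosen arbitrarily for illustration:

```python
import numpy as np

n_pop, dim = 40, 10  # arbitrary illustrative sizes

# Most of the compared methods only keep the population positions: O(N_pop * D).
positions = np.zeros((n_pop, dim))

# PSO additionally stores one velocity vector per particle, so its peak
# footprint doubles to O(2 * N_pop * D).
velocities = np.zeros((n_pop, dim))

ratio = (positions.nbytes + velocities.nbytes) // positions.nbytes
print(ratio)  # 2
```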
HBWO for engineering optimization problems
To further verify the merits of HBWO, the proposed method is applied to six practical engineering design problems. In this section, the population size is \(N = 30\) and the maximum number of iterations is \(T = 500\).
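All six design problems below are constrained minimization problems, while HBWO itself searches an unconstrained space. The paper does not state its constraint-handling scheme; a static penalty function, sketched below as one common assumption in such benchmarks, converts each constrained problem into an unconstrained one a metaheuristic can search:

```python
def penalized(f, constraints, x, rho=1e6):
    """Static-penalty objective: f(x) plus rho times the squared violation of
    each inequality constraint written as g(x) <= 0. The paper does not state
    HBWO's constraint handling; this is one common assumption."""
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return f(x) + rho * violation

# Toy problem: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
f = lambda x: x ** 2
g = lambda x: 1.0 - x

print(penalized(f, [g], 2.0))  # feasible point: plain objective, 4.0
print(penalized(f, [g], 0.5))  # infeasible point: 0.25 + 1e6 * 0.25
```

Feasible points are scored by the raw objective, while violated constraints dominate the score, steering the population back into the feasible region.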
Corrugated bulkhead design problem
The corrugated bulkhead design problem [60] is to minimize the weight of the corrugated bulkhead of a chemical tanker under six constraints. The design variables are the width (w), depth (d), length (l), and plate thickness (t); let \(x = (x_{1} ,x_{2} ,x_{3} ,x_{4} ) = (w,d,l,t)\). The mathematical model of the corrugated bulkhead design problem is as follows:
subject to:
variable range: \(0 \le w,d,l \le 100,\quad 0 \le t \le 5.\)
RSA [62], GWO [28], SCSO [55], WOA [61], RSO [63], HHO [32], SCA [65], AOA [59], HGS [64], AO [56], BWO [45], and HBWO were used to solve the problem. The results are shown in Tables 14 and 15; the optimal value obtained by HBWO is 6.842953. The bolded data are the minimum values for each index.
Tubular column design problem
The tubular column design problem [66] is to design a uniform column of tubular section (Fig. 11) with minimum cost that can withstand a compressive load. The average diameter \(d\) and thickness \(t\) vary in the ranges [2, 14] and [0.2, 0.8], respectively. The parameters are as follows: yield stress \(\sigma_{y} = 500\,kgf/cm^{2}\), modulus of elasticity \(E = 0.85 \times 10^{6}\,kgf/cm^{2}\), and density \(\rho = 0.0025\,kgf/cm^{3}\). The length \(L\) of the column is \(250\,cm\). Let \(x = (x_{1} ,x_{2} ) = (d,t)\).
subject to:
where
HBWO is used to solve the tubular column design problem, and the obtained results are compared with other optimization algorithms, including RSA [62], GWO [28], SCSO [55], WOA [61], RSO [63], HHO [32], SCA [65], AOA [59], SMA [67], AO [56], and BWO [45]. The results are shown in Tables 16 and 17. The optimal value obtained by HBWO for the tubular column design is 26.53129, and the results are relatively stable.
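The excerpt omits the model's equations. In the standard literature formulation of this benchmark the objective is \(f(d,t) = 9.8dt + 2d\), subject to yield-stress and Euler-buckling constraints built from the parameters quoted above and a compressive load of 2500 kgf. The sketch below encodes that standard formulation (which may differ in detail from the paper's exact model) and evaluates a near-optimal design reported in the literature:

```python
import math

# Standard tubular-column formulation from the literature (the excerpt omits
# the equations, so details may differ from the paper's exact model):
# minimize f(d, t) = 9.8*d*t + 2*d under yield-stress and Euler-buckling
# constraints, with the parameters quoted in the text and load P = 2500 kgf.
P, SIGMA_Y, E, L = 2500.0, 500.0, 0.85e6, 250.0

def cost(d, t):
    return 9.8 * d * t + 2.0 * d

def constraints(d, t):
    g1 = P / (math.pi * d * t * SIGMA_Y) - 1.0                            # stress
    g2 = 8.0 * P * L**2 / (math.pi**3 * E * d * t * (d**2 + t**2)) - 1.0  # buckling
    return (g1, g2)  # feasible when every g <= 0

# A near-optimal design reported in the literature: both constraints are
# essentially active there.
d, t = 5.451, 0.292
print(round(cost(d, t), 3), [round(g, 4) for g in constraints(d, t)])
```

The cost at this design is about 26.5, consistent in magnitude with the 26.53129 reported for HBWO.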
Threebar truss design problem
The three-bar truss design problem [68], shown in Fig. 12, is to minimize the total weight (equivalently, the volume) of the truss structure subject to stress constraints.
subject to:
\(h_{2} (x) = \frac{{T_{2} }}{{\sqrt 2 T_{1}^{2} + 2T_{1} T_{2} }}P - \sigma \le 0,\)
with bounds: \(0 \le T_{1} ,T_{2} \le 1.\)
To solve this problem, RSA [62], GWO [28], SCSO [55], WOA [61], RSO [63], HHO [32], SCA [65], AOA [59], HGS [64], AO [56], BWO [45] and HBWO are used. Tables 18 and 19 show the experimental results, from which it can be seen that the result of HBWO solving this problem is 0.012663, indicating that HBWO can achieve good and stable results.
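The excerpt gives only one of the problem's constraints. In the standard literature formulation, the truss volume \(f(x) = (2\sqrt{2}x_1 + x_2)\,l\) with \(l = 100\) cm is minimized under three stress constraints with load and stress limit \(P = \sigma = 2\). The sketch below encodes that standard formulation (which may differ in detail from the paper's exact model) and evaluates a near-optimal design commonly reported for the benchmark:

```python
import math

# Standard three-bar-truss formulation from the literature (the excerpt gives
# only one constraint): minimize the volume f = (2*sqrt(2)*x1 + x2) * l with
# l = 100 cm and load/stress parameters P = sigma = 2.
L_BAR, P, SIGMA = 100.0, 2.0, 2.0

def volume(x1, x2):
    return (2.0 * math.sqrt(2.0) * x1 + x2) * L_BAR

def stress_constraints(x1, x2):
    denom = math.sqrt(2.0) * x1**2 + 2.0 * x1 * x2
    g1 = (math.sqrt(2.0) * x1 + x2) / denom * P - SIGMA
    g2 = x2 / denom * P - SIGMA
    g3 = P / (math.sqrt(2.0) * x2 + x1) - SIGMA
    return (g1, g2, g3)  # feasible when every g <= 0

# A near-optimal design commonly reported for this benchmark.
x1, x2 = 0.78868, 0.40825
print(round(volume(x1, x2), 2))  # about 263.9
```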
Tension/compression spring design problem
The tension/compression spring design problem [69] (Fig. 13) is to minimize the weight of the spring. The problem has three design variables: the average coil diameter (\(D\)), the wire diameter (\(d\)), and the effective number of coils (\(N\)). Let \(X = [x_{1} ,x_{2} ,x_{3} ] = [d,D,N]\).
subject to:
\(h_{2} (x) = \frac{{4D^{2} - dD}}{{12566(Dd^{3} - d^{4} )}} + \frac{1}{{5108d^{2} }} - 1 \le 0,\)
\(h_{4} (x) = \frac{D + d}{{1.5}} - 1 \le 0,\)
variable range: \(0.05 \le d \le 2,\quad 0.25 \le D \le 1.3,\quad 2 \le N \le 15.\)
BWO [45], RSA [62], PSO [30], SCSO [55], WOA [61], RSO [63], HHO [32], SCA [65], AOA [59], HGS [64], AO [56], and HBWO are used to solve the tension/compression spring design problem. Tables 20 and 21 show that the optimal value obtained by HBWO is 0.012663, demonstrating strong competitiveness compared with the other optimization algorithms.
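The excerpt lists only two of this problem's four constraints. In the standard literature formulation, the spring weight \(f = (N + 2)Dd^{2}\) is minimized under four constraints, of which g2 and g4 below match the ones quoted in the text. The sketch evaluates a near-optimal design commonly reported for the benchmark:

```python
# Standard tension/compression-spring formulation from the literature (the
# excerpt lists only two of the four constraints; g2 and g4 below match the
# ones quoted in the text): minimize the weight f = (N + 2) * D * d^2.
def spring_weight(d, D, N):
    return (N + 2.0) * D * d**2

def spring_constraints(d, D, N):
    g1 = 1.0 - D**3 * N / (71785.0 * d**4)
    g2 = (4.0 * D**2 - d * D) / (12566.0 * (D * d**3 - d**4)) \
        + 1.0 / (5108.0 * d**2) - 1.0
    g3 = 1.0 - 140.45 * d / (D**2 * N)
    g4 = (D + d) / 1.5 - 1.0
    return (g1, g2, g3, g4)  # feasible when every g <= 0

# A near-optimal design commonly reported for this benchmark.
d, D, N = 0.051689, 0.356718, 11.288966
print(round(spring_weight(d, D, N), 6))  # about 0.012665
```

The weight at this design is about 0.012665, the same order as the 0.012663 reported for HBWO.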
Heat exchange design problem
The heat exchange design problem [70] is a constrained minimization benchmark with eight variables and six inequality constraints.
subject to:
\(h_{6} (x) = x_{3} x_{8} - x_{3} x_{5} + 2500x_{5} - 1250000 \ge 0,\)
where the range of the variables are
HBWO is used to solve the heat exchange design problem, and its results are compared with those of RSA [62], GWO [28], SCSO [55], WOA [61], RSO [63], HHO [32], SCA [65], AOA [59], HGS [64], AO [56], and BWO [45]. As shown in Tables 22 and 23, the optimal value obtained by HBWO is 7060.574. However, the standard deviation of HBWO is still relatively large, leaving some room for improvement.
Welded beam design problem
The welded beam design problem [71] (Fig. 14) is to minimize the manufacturing cost of the welded beam. The problem contains four design variables: the welding thickness \(h\), the weld length \(l\), the beam height \(t\), and the beam thickness \(b\). Let \(X = [x_{1} ,x_{2} ,x_{3} ,x_{4} ] = [h,l,t,b]\); the mathematical model is as follows.
where \(0.1 \le x_{1} \le 2\), \(0.1 \le x_{2} \le 10\), \(0.1 \le x_{3} \le 10\), \(0.1 \le x_{4} \le 2\), \(L = 14\,in\), \(m_{\max } = 136{,}000\,psi\), \(\sigma_{\max } = 36{,}600\,psi\), \(B = 6{,}000\,lb\), \(E = 30 \times 10^{6}\,psi\), \(G = 12 \times 10^{6}\,psi\),
\(m(X) = \sqrt {(m^{\prime})^{2} + 2m^{\prime}m^{\prime\prime}\frac{{x_{2} }}{2R} + (m^{\prime\prime})^{2} }\), \(m^{\prime} = \frac{B}{{\sqrt 2 x_{1} x_{2} }}\), \(m^{\prime\prime} = \frac{AR}{J}\),
\(A = B(L + \frac{{x_{2} }}{2})\), \(R = \sqrt {\frac{{x_{2}^{2} }}{4} + (\frac{{x_{1} + x_{3} }}{2})^{2} }\), \(J = 2\sqrt 2 x_{1} x_{2} \left[ {\frac{{x_{2}^{2} }}{4} + (\frac{{x_{1} + x_{3} }}{2})^{2} } \right]\),
\(n(X) = \frac{6BL}{{x_{4} x_{3}^{2} }}\), \(p(X) = \frac{{6BL^{3} }}{{Ex_{3}^{2} x_{4} }}\),
\(B_{c} (X) = \frac{{4.013E\sqrt {\frac{{x_{3}^{2} x_{4}^{6} }}{36}} }}{{L^{2} }}(1 - \frac{{x_{3} }}{2L}\sqrt{\frac{E}{4G}} )\).
HBWO, RSA [62], GWO [28], SCSO [55], WOA [61], RSO [63], HHO [32], SCA [65], AOA [59], HGS [64], AO [56], and BWO [45] were used to solve the welded beam design problem. It can be seen from Tables 24 and 25 that the optimal solution obtained by HBWO is 1.695252, indicating the superiority of HBWO.
Conclusion and future work
In this paper, we propose an improved beluga whale optimization algorithm (HBWO), which introduces quasi-opposition-based learning (QOBL), an adaptive and spiral predation strategy, and the Nelder-Mead simplex search method (NM) into the beluga whale optimization algorithm. The CEC2017 and CEC2019 test functions are used to test its performance against the original BWO, classical algorithms, newly proposed 2022 algorithms, and improved algorithms, and the experimental results show that HBWO has clear advantages. Meanwhile, the experimental results on six engineering cases further verify the efficiency of HBWO in solving practical problems. The three introduced strategies thus give HBWO a better initial population while substantially improving its solution accuracy and convergence speed. However, the experiments also show that on certain CEC2017 multimodal test functions the accuracy of HBWO is still poor, leaving room for improvement, and the added strategies somewhat increase the algorithm's complexity. In future work, the proposed HBWO could therefore be improved by hybridizing it with other algorithms, or new strategies could be introduced into the exploration phase to further enhance its performance. The improved algorithm can be applied to image segmentation [72, 73], energy problems [74, 75], path optimization [76–78], feature selection [79, 80], curve and surface optimization [81, 82], and other fields [83–86, 88, 89].
Availability of data and materials
All data generated or analyzed during this study were included in this published article.
References
Hu G, Chen L, Wang X, Wei G. Differential evolution-boosted sine cosine golden eagle optimizer with Lévy flight. J Bionic Eng. 2022;19(6):1850–85.
Melman A, Evsutin O. Comparative study of metaheuristic optimization algorithms for image steganography based on discrete Fourier transform domain. Appl Soft Comput. 2022;132:109847.
Hu G, Li M, Wang X, Wei G, Chang CT. An enhanced manta ray foraging optimization algorithm for shape optimization of complex CCG-Ball curves. Knowl-Based Syst. 2022;240:108071.
Truong DN, Chou JS. Fuzzy adaptive jellyfish searchoptimized stacking machine learning for engineering planning and design. Autom Constr. 2022;143: 104579.
Li Y, Peng T, Hua L, Ji C, Ma H, Nazir MS, Zhang C. Research and application of an evolutionary deep learning model based on improved grey wolf optimization algorithm and DBN-ELM for AQI prediction. Sustain Cities Soc. 2022;87:104209.
Dian S, Zhong J, Guo B, Liu J, Guo R. A smooth path planning method for mobile robot using a BESincorporated modified QPSO algorithm. Expert Syst Appl. 2022;208: 118256.
Wang G, Guo S, Han L, Zhao Z, Song X. COVID-19 ground-glass opacity segmentation based on fuzzy c-means clustering and improved random walk algorithm. Biomed Signal Process Control. 2023;79:104159.
Hu G, Zhong J, Du B, Wei G. An enhanced hybrid arithmetic optimization algorithm for engineering applications. Comput Methods Appl Mech Eng. 2022;394:114901.
Houssein EH, Çelik E, Mahdy MA, Ghoniem RM. Selfadaptive equilibrium optimizer for solving global, combinatorial, engineering, and multiobjective problems. Expert Syst Appl. 2022;195: 116552.
Houssein EH, Rezk H, Fathy A, Mahdy MA, Nassef AM. A modified adaptive guided differential evolution algorithm applied to engineering applications. Eng Appl Artif Intell. 2022;113: 104920.
Yang XS. Firefly algorithm, stochastic test functions and design optimisation. INT J BIOINSPIR COM. 2010;2(2):78–84.
Storn R, Price K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J Global Optim. 1997;11(4):341–59.
Rajeev S, Krishnamoorthy CS. Discrete optimization of structures using genetic algorithms. J Struct Eng. 1992;118(5):1233–50.
Zhong J, Feng L, Ong YS. Gene expression programming: a survey [Review Article]. IEEE Comput Intell Mag. 2017;12(3):54–72. https://doi.org/10.1109/MCI.2017.2708618.
Fogel D. Artificial intelligence through simulated evolution. Evol Comput. 2009:227–96.
Rao RV, Savsani VJ, Vakharia DP. Teaching–learningbased optimization: a novel method for constrained mechanical design optimization problems. Comput Aided Des. 2011;43(3):303–15.
Lee KS, Geem ZW. A new meta-heuristic algorithm for continuous engineering optimization: harmony search theory and practice. Comput Methods Appl Mech Eng. 2005;194(36–38):3902–33.
Moosavian N, Kasaee Roodsari B. Soccer league competition algorithm: a novel metaheuristic algorithm for optimal design of water distribution networks. Swarm Evol Comput. 2014;17:14–24.
Kumar M, Kulkarni AJ, Satapathy SC. Socio evolution & learning optimization algorithm: a socioinspired optimization methodology. Future Generation Comp Syst. 2018;81:252–72.
Ghasemian H, Ghasemian F, VahdatNejad H. Human urbanization algorithm: a novel metaheuristic approach. Math Comput Simul. 2020;178:1–15.
Zhang Q, Gao H, Zhan ZH, Li J, Zhang H. Growth Optimizer: a powerful metaheuristic algorithm for solving continuous and discrete global optimization problems. Knowl-Based Syst. 2023;261:110206.
Zhao W, Wang L, Zhang Z. Engineering applications of artificial ecosystem-based optimization. In: Zhao W, Wang L, Zhang Z, editors. New optimization algorithms and their applications. Elsevier; 2021. p. 93–121.
Nguyen LT, Nestorović T. Unscented hybrid simulated annealing for fast inversion of tunnel seismic waves. Comput Methods Appl Mech Eng. 2016;301:281–99.
Rashedi E, Nezamabadi-pour H, Saryazdi S. GSA: a gravitational search algorithm. Inf Sci. 2009;179(13):2232–48.
Foroughi Nematollahi A, Rahiminejad A, Vahidi B. A novel physical based metaheuristic optimization method known as lightning attachment procedure optimization. Appl Soft Comput. 2017;59:596–621.
Azizi M. Atomic orbital search: a novel metaheuristic algorithm. Appl Math Model. 2021;93:657–83.
Abdel-Basset M, El-Shahat D, Jameel M, Abouhawwash M. Young's double-slit experiment optimizer: a novel metaheuristic optimization algorithm for global and constraint optimization problems. Comput Methods Appl Mech Eng. 2023;403:115652.
Mirjalili S, Mirjalili SM, Lewis A. Grey Wolf optimizer. Adv Eng Softw. 2014;69:46–61.
Zhao W, Wang L, Zhang Z. Atom search optimization and its application to solve a hydrogeologic parameter estimation problem. KnowlBased Syst. 2019;163:283–304.
Kennedy J, Eberhart R. Particle swarm optimization. Proc IEEE Int Conf Neural Netw. 1995;4:1942–8.
Dorigo M, Blum C. Ant colony optimization theory: a survey. Theoret Comput Sci. 2005;344(2–3):243–78.
Heidari AA, Mirjalili S, Faris H, Aljarah I, Mafarja M, Chen H. Harris hawks optimization: algorithm and applications. Future Generation Comp Syst. 2019;97:849–72.
Braik MS. Chameleon swarm algorithm: a bioinspired optimizer for solving engineering design problems. Expert Syst Appl. 2021;174:114685.
Chou JS, Truong DN. A novel metaheuristic optimizer inspired by behavior of jellyfish in ocean. Appl Math Comput. 2021;389: 125535.
Xue J, Shen B. A novel swarm intelligence optimization approach: sparrow search algorithm. Syst Sci Control Eng. 2020. https://doi.org/10.1080/21642583.2019.1708830.
Dehghani M, Montazeri Z, Trojovská E, Trojovský P. Coati optimization algorithm: a new bio-inspired metaheuristic algorithm for solving optimization problems. Knowl-Based Syst. 2023;259:110011.
Pan JS, Zhang LG, Wang RB, Snášel V, Chu SC. Gannet optimization algorithm: a new metaheuristic algorithm for solving engineering optimization problems. Math Comput Simul. 2022;202:343–73.
Eslami N, Yazdani S, Mirzaei M, Hadavandi E. Aphid-Ant Mutualism: a novel nature-inspired metaheuristic algorithm for solving optimization problems. Math Comput Simul. 2022;201:362–95.
Sang-To T, Le-Minh H, Wahab MA, Thanh CL. A new metaheuristic algorithm: shrimp and goby association search algorithm and its application for damage identification in large-scale and complex structures. Adv Eng Softw. 2023;176:103363.
Zamani H, NadimiShahraki MH, Gandomi AH. CCSA: conscious neighborhoodbased crow search algorithm for solving global optimization problems. Appl Soft Comput. 2019;85: 105583.
Zhao W, Zhang Z, Wang L. Manta ray foraging optimization: an effective bio-inspired optimizer for engineering applications. Eng Appl Artif Intell. 2020;87:103300.
Abdollahzadeh B, Gharehchopogh FS, Khodadadi N, Mirjalili S. Mountain gazelle optimizer: a new nature-inspired metaheuristic algorithm for global optimization problems. Adv Eng Softw. 2022;174:103282.
Wang L, Cao Q, Zhang Z, Mirjalili S, Zhao W. Artificial rabbits optimization: a new bio-inspired metaheuristic algorithm for solving engineering optimization problems. Eng Appl Artif Intell. 2022;114:105082.
Zhao W, Wang L, Mirjalili S. Artificial hummingbird algorithm: a new bio-inspired optimizer with its engineering applications. Comput Methods Appl Mech Eng. 2022;388:114194.
Zhong C, Li G, Meng Z. Beluga whale optimization: a novel nature-inspired metaheuristic algorithm. Knowl-Based Syst. 2022;251:109215.
Çelik E. IEGQO-AOA: information-exchanged Gaussian arithmetic optimization algorithm with quasi-opposition learning. Knowl-Based Syst. 2023;260:110169.
Yu S, Heidari AA, He C, Cai Z, Althobaiti MM, Mansour RF, Liang G, Chen H. Parameter estimation of static solar photovoltaic models using Laplacian Nelder-Mead hunger games search. Sol Energy. 2022;242:79–104.
Pandey V, Pandey SK. PYNodes: an ab-initio Python code for searching nodes in a material using Nelder-Mead's simplex approach. Comput Phys Commun. 2023;283:108570.
Xu S, Wang Y, Wang Z. Parameter estimation of proton exchange membrane fuel cells using eagle strategy based on JAYA algorithm and Nelder-Mead simplex method. Energy. 2019;173:457–67.
Wu G, Mallipeddi R, Suganthan PN. Problem definitions and evaluation criteria for the CEC 2017 competition and special session on constrained single objective real-parameter optimization. Technical report; 2017.
Chauhan S, Vashishtha G. A synergy of an evolutionary algorithm with slime mould algorithm through series and parallel construction for improving global optimization and conventional design problem. Eng Appl Artif Intell. 2023;118: 105650.
Houssein EH, Hussain K, Abualigah L, Elaziz MA, Alomoush W, Dhiman G, Djenouri Y, Cuevas E. An improved opposition-based marine predators algorithm for global optimization and multilevel thresholding image segmentation. Knowl-Based Syst. 2021;229:107348.
Yu X, Xu W, Li C. Opposition-based learning grey wolf optimizer for global optimization. Knowl-Based Syst. 2021;226:107139.
Zhao S, Zhang T, Ma S, Chen M. Dandelion Optimizer: a nature-inspired metaheuristic algorithm for engineering applications. Eng Appl Artif Intell. 2022;114:105075.
Seyyedabbasi A, Kiani F. Sand Cat Swarm optimization: a natureinspired algorithm to solve global optimization problems. Eng Comput. 2022. https://doi.org/10.1007/s0036602201604x.
Abualigah L, Yousri D, Elaziz MA, Ewees AA, Al-qaness MAA, Gandomi AH. Aquila optimizer: a novel meta-heuristic optimization algorithm. Comput Ind Eng. 2021;157:107250.
Naik MK, Panda R, Wunnava A, et al. A leader Harris hawks optimization for 2D Masi entropybased multilevel image thresholding. Multimed Tools Appl. 2021;80:35543–83. https://doi.org/10.1007/s11042020104677.
Naik MK, Panda R, Abraham A. Normalized square difference based multilevel thresholding technique for multispectral images using leader slime mould algorithm. J King Saud Univ Comp Inform Sci. 2022;34(7):4524–36.
Abualigah L, Diabat A, Mirjalili S, Elaziz MA, Gandomi AH. The arithmetic optimization algorithm. Comp Met Appl Mech Eng. 2021. https://doi.org/10.1016/j.cma.2020.113609.
Bayzidi H, Talatahari S, Saraee M, Lamarche CP. Social network search for solving engineering optimization problems. Comput Intell Neurosci. 2021. https://doi.org/10.1155/2021/8548639.
Mirjalili S, Lewis A. The whale optimization algorithm. Adv Eng Softw. 2016;95:51–67.
Abualigah L, Elaziz MA, Sumari P, Geem ZW, Gandomi AH. Reptile search algorithm (RSA): a nature-inspired metaheuristic optimizer. Expert Syst Appl. 2022;191:116158.
Dhiman G, Garg M, Nagar A, Kumar V, Dehghani M. A novel algorithm for global optimization: rat swarm optimizer. J Ambient Intell Humaniz Comput. 2021;12:8457–82. https://doi.org/10.1007/s12652-020-02580-0.
Yang Y, Chen H, Heidari AA, Gandomi AH. Hunger games search: visions, conception, implementation, deep analysis, perspectives, and towards performance shifts. Expert Syst Appl. 2021;177:114864.
Mirjalili S. SCA: a sine cosine algorithm for solving optimization problems. Knowl-Based Syst. 2016;96:120–33.
Ong KM, Ong P, Sia CK. A new flower pollination algorithm with improved convergence and its application to engineering optimization. Decision Anal J. 2022;5:100144.
Li S, Chen H, Wang M, Heidari AA, Mirjalili S. Slime mould algorithm: a new method for stochastic optimization. Future Generation Comp Syst. 2020;111:300–23.
Lin X, Yu X, Li W. A heuristic whale optimization algorithm with niching strategy for global multi-dimensional engineering optimization. Comput Ind Eng. 2022;171:108361.
Hu G, Yang R, Qin X, Wei G. MCSA: multi-strategy boosted chameleon-inspired optimization algorithm for engineering applications. Comput Methods Appl Mech Eng. 2023;403:115676.
Jaberipour M, Khorram E. Two improved harmony search algorithms for solving engineering optimization problems. Commun Nonlinear Sci Numer Simul. 2010;15(11):3316–31.
Duan Y, Yu X. A collaboration-based hybrid GWO-SCA optimizer for engineering optimization problems. Expert Syst Appl. 2023;213:119017.
Yang X, Wang R, Zhao D, Yu F, Heidari AA, Xu Z, Chen H, Algarni AD, Elmannai H, Xu S. Multi-level threshold segmentation framework for breast cancer images using enhanced differential evolution. Biomed Signal Process Control. 2023;80:104373.
Yu X, Wu X. Ensemble grey wolf optimizer and its application for image segmentation. Expert Syst Appl. 2022;209:118267.
Wilberforce T, Rezk H, Olabi AG, Epelle EI, Abdelkareem MA. Comparative analysis on parametric estimation of a PEM fuel cell using metaheuristics algorithms. Energy. 2023;262:125530.
Kathiravan K, Rajnarayanan PN. Application of AOA algorithm for optimal placement of electric vehicle charging station to minimize line losses. Electric Power Syst Res. 2023;214:108868.
Yu X, Li C, Zhou JF. A constrained differential evolution algorithm to solve UAV path planning in disaster scenarios. Knowl-Based Syst. 2020;204:106209.
Zhang C, Zhou W, Qin W, Tang W. A novel UAV path planning approach: heuristic crossing search and rescue optimization algorithm. Expert Syst Appl. 2023;215:119243.
Yu X, Jiang N, Wang X, Li M. A hybrid algorithm based on grey wolf optimizer and differential evolution for UAV path planning. Expert Syst Appl. 2023;215:119327.
Hu G, Du B, Wang X, Wei G. An enhanced black widow optimization algorithm for feature selection. Knowl-Based Syst. 2022;235:107638.
Houssein EH, Oliva D, Çelik E, Emam MM, Ghoniem RM. Boosted sooty tern optimization algorithm for global optimization and feature selection. Expert Syst Appl. 2023;213:119015.
Hu G, Dou W, Wang X, Abbas M. An enhanced chimp optimization algorithm for optimal degree reduction of Said-Ball curves. Math Comput Simul. 2022;197:207–52.
Zheng J, Ji X, Ma Z, Hu G. Construction of local-shape-controlled quartic generalized Said-Ball model. Mathematics. 2023;11:2369.
Al-Hyari A, Abu-Faraj M. Hyperparameters optimization of convolutional neural networks using evolutionary algorithms. In: 2022 International Conference on Emerging Trends in Computing and Engineering Applications (ETCEA); 2022. p. 1–6.
Li N, Zhou G, Zhou Y, Deng W, Luo Q. Multi-objective pathfinder algorithm for multi-objective optimal power flow problem with random renewable energy sources: wind, photovoltaic and tidal. Sci Rep. 2023;13:10647.
Luo Q, Yin S, Zhou G, Meng W, Zhao Y, Zhou Y. Multiobjective equilibrium optimizer slime mould algorithm and its application in solving engineering problems. Struct Multidiscip Optim. 2023;66:114.
Zhang T, Zhou Y, Zhou G, Deng W, Luo Q. Discrete Mayfly Algorithm for spherical asymmetric traveling salesman problem. Expert Syst Appl. 2023;221:119765.
Chen H, Li W, Yang X. A whale optimization algorithm with chaos mechanism based on quasi-opposition for global optimization problems. Expert Syst Appl. 2020;158:113612.
Hu G, Guo YX, Wei G, Abualigah L. Genghis Khan shark optimizer: a novel nature-inspired algorithm for engineering optimization. Adv Eng Inform. 2023;58(8):102210.
Hu G, Zheng Y, Abualigah L, Hussien AG. DETDO: an adaptive hybrid dandelion optimizer for engineering optimization. Adv Eng Inform. 2023;57: 102004.
Acknowledgements
None.
Funding
This research received financial support from National Natural Science Foundation of China (72072144, 71672144, 71372173, 70972053); Shaanxi soft Science Research Plan (2019KRZ007); Science and Technology Research and Development Program of Shaanxi Province (2021KRM183, 2017KRM059, 2017KRM057, 2014KRM282); Soft Science Research Program of Xi'an Science and Technology Bureau (21RKYJ0009).
Author information
Contributions
Conceptualisation: JH, HH. Methodology: JH, HH. Formal Analysis and investigation: JH, HH. Writing—original draft preparation: JH, HH. Writing—review and editing: JH, HH. Supervision: HH.
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Huang, J., Hu, H. Hybrid beluga whale optimization algorithm with multistrategy for functions and engineering optimization problems. J Big Data 11, 3 (2024). https://doi.org/10.1186/s40537023008648