
Hybrid beluga whale optimization algorithm with multi-strategy for functions and engineering optimization problems

Abstract

Beluga Whale Optimization (BWO) is a recent metaheuristic algorithm that simulates the social behaviors of beluga whales: swimming, foraging, and whale fall. Compared with other optimization algorithms, BWO shows certain advantages in solving unimodal and multimodal optimization problems. However, its convergence speed and optimization performance still fall short when solving complex, multidimensional problems. Therefore, this paper proposes a hybrid BWO method called HBWO, which combines quasi-oppositional-based learning (QOBL), an adaptive and spiral predation strategy, and the Nelder-Mead simplex search method (NM). Firstly, the QOBL strategy is introduced in the initialization phase. This strategy reconstructs the initial spatial positions of the population by pairwise comparison, yielding a richer and higher-quality initial population. Subsequently, an adaptive and spiral predation strategy is designed for the exploration and exploitation phases. The strategy first learns the optimal individual's position in some dimensions through adaptive learning, to avoid becoming trapped in local optima. At the same time, a spiral movement driven by a cosine factor is introduced to maintain a balance between exploration and exploitation. Finally, the NM simplex search method is added; it corrects individual positions through multiple scaling operations, improving the search speed and accuracy. The performance of HBWO is verified on the CEC2017 and CEC2019 test functions, and its superiority is further verified on six engineering design examples. The experimental results show that HBWO is more feasible and effective than BWO and other optimization methods in solving practical problems.

Introduction

Meta-heuristic algorithms (MAs, for short) have developed rapidly in recent years. They combine stochastic and local search and are widely used to solve global optimization problems in different fields. Compared with traditional algorithms, MAs can better handle the complex, multimodal, non-continuous, and non-differentiable problems [1] of the real world thanks to their "random factors". For example, MAs can be used to solve problems in image processing [2], shape optimization [3], machine learning [4], deep learning [5], path planning [6], clustering [7], engineering [9, 10], and other fields. MAs simulate the relevant behaviors of humans or animals, or apply rules and principles from physics and chemistry, to construct the mathematical model of an optimization algorithm. MAs can therefore be divided into four categories: evolution-based algorithms, human-based algorithms, physics- and chemistry-based algorithms, and swarm-based intelligence algorithms [8].

Evolution-based algorithms derive superior next-generation individuals through Darwinian evolution and the survival of the fittest, thus enabling the population to progress collectively. Representative methods include Differential Evolution (DE) [12] and the Genetic Algorithm (GA) [13], which simulate the genetic laws of nature to find the optimal individual through operations such as selection, crossover, and mutation. More specialized methods include Gene Expression Programming (GEP) [14], Evolutionary Programming (EP) [15], etc.

Human-based algorithms are mainly inspired by human social behaviors such as teaching and learning, competition, and cooperation. Examples include Teaching–Learning-Based Optimization (TLBO) [16]; Soccer League Competition (SLC) [18], which simulates football matches and ranks fixed and substitute players by points to find higher-scoring players; Socio Evolution & Learning Optimization (SELO) [19]; Human Urbanization (HUS) [20], derived from human life and aimed at urbanization and more convenient city living; Growth Optimizer (GO) [21], inspired by learning and reflection during the process of growing up; Artificial Ecosystem-Based Optimization (AEO) [22]; and Harmony Search (HS) [17], which simulates musicians obtaining pleasing music by continually adjusting notes.

Algorithms based on physics and chemistry are inspired by physical theorems and chemical experiments. Classical examples include Simulated Annealing (SA) [23] and the Gravitational Search Algorithm (GSA) [24], which simulates the motion of objects under the law of universal gravitation. Others include Lightning Attachment Procedure Optimization (LAPO) [25]; Young's Double-Slit Experiment Optimizer (YDSE) [27], inspired by the double-slit interference experiment that demonstrated the wave nature of light; Atomic Orbital Search (AOS) [26], inspired by concepts such as quantum mechanics and atomic orbital models; Atom Search Optimization (ASO) [29]; etc.

Swarm-based intelligence algorithms find the best solution by simulating the behavior of a group and learning its collective intelligence. Classical swarm intelligence algorithms include Particle Swarm Optimization (PSO) [30], Ant Colony Optimization (ACO) [31], and the Firefly Algorithm (FA) [11]; due to their simple structure, they may lack convergence accuracy when dealing with complex optimization problems. With the deepening of research, a large number of new swarm intelligence algorithms have emerged, including Harris Hawks Optimization (HHO) [32], which mimics the hunting process of Harris hawks; the Sparrow Search Algorithm (SSA) [35]; and the Chameleon Swarm Algorithm (CSA) [33], which simulates how chameleons change color according to their environment while searching for food. These methods achieve excellent performance on benchmark functions but suffer from slow convergence on real engineering problems. Also included are Jellyfish Search (JS) [34], which simulates jellyfish following ocean currents; the Coati Optimization Algorithm (COA) [36]; and the Gannet Optimization Algorithm (GOA) [37], a numerical model built on the unique foraging behavior of gannets. Although these algorithms have good convergence speed and accuracy, they fall into local optima on some high-dimensional problems and are sensitive to control parameters. In addition, swarm-based intelligence includes Aphid–Ant Mutualism (AAM) [38], which simulates the mutualism between aphids and ants in nature; Shrimp and Goby Association Search (SGA) [39], modeled on the cooperative, win–win relationship between shrimp and gobies; Conscious Neighborhood-Based Crow Search (CCSA) [40]; Manta Ray Foraging Optimization (MRFO) [41]; the Mountain Gazelle Optimizer (MGO) [42], inspired by the herd life of mountain gazelles; Artificial Rabbits Optimization (ARO) [43]; and the Artificial Hummingbird Algorithm (AHA) [44]. These algorithms have good global search capability and can handle continuous optimization problems, but solving discrete optimization problems remains challenging.

Beluga Whale Optimization (BWO) [45] was proposed by Changting Zhong et al. in 2022, and its inspiration comes from three stages of beluga whale behavior: swimming, foraging, and whale fall. Beluga whales are social animals, and many of them usually migrate together every July. In summer, beluga whales gather in estuaries to hunt. Because they do not have sharp teeth, they generally eat salmon, cucurbit fish, cod, and smaller prey such as shrimp, squid, and clams. Whale fall refers to the death of a beluga whale, caused by attacks from natural enemies such as polar bears and killer whales during migration, by harm from human beings, or by other irresistible factors, after which the whale sinks to the bottom of the sea. These three stages correspond to the exploration, exploitation, and whale fall phases of BWO, and a mathematical model is established accordingly.

The two algorithms BWO and WOA differ significantly in how their optimization models are constructed because their inspirations differ [61]. BWO was inspired by the swimming, foraging, and whale-fall processes of beluga whales and builds its mathematical model on these three stages, whereas WOA models the bubble-net foraging of humpback whales. Furthermore, BWO includes a Lévy-flight process in the exploitation phase to simulate the foraging movement of beluga whales and considers whales falling due to natural factors. The two algorithms are therefore substantially different.

Some experiments have strongly demonstrated the competitiveness of BWO in solving optimization problems. However, as real problems grow more complex, the original BWO cannot handle them effectively or achieve suitable results, leaving clear room for enhancement. Therefore, this paper presents an improved BWO called HBWO, which introduces quasi-oppositional-based learning (QOBL) [46], an adaptive and spiral predation strategy, and the Nelder-Mead simplex search method (NM) [47–49]. Firstly, quasi-oppositional-based learning is introduced to obtain an optimal solution with higher probability. Secondly, the adaptive and spiral predation strategies improve the convergence speed while avoiding falling into local optima. Finally, the Nelder-Mead simplex search method is introduced so that better individuals replace the worst ones.

In this paper, the CEC 2017 test function [50] and the CEC 2019 test function [51] are used to verify the superiority of HBWO. The main contributions of this paper are as follows:

  1. In order to improve the performance of BWO, three strategies are added to the original algorithm, and an improved beluga whale optimization algorithm, HBWO, is proposed.

  2. The performance of HBWO is evaluated on the CEC2017 and CEC2019 test functions and compared with nine optimization algorithms and two improved optimization algorithms. The numerical results show that HBWO is competitive.

  3. Six practical engineering application problems are solved by HBWO, further demonstrating its superiority and its efficiency in solving practical problems.

The remainder of this paper is organized as follows: Sect. "Basic BWO" briefly describes the specific process of BWO. Sect. "Proposed HBWO" presents a detailed description of the proposed HBWO. Sect. "Experimental results and discussion" tests the performance of HBWO using the CEC2017 and CEC2019 test functions and analyzes and evaluates the experimental results. Sect. "HBWO for engineering optimization problems" solves six practical engineering examples with HBWO. Sect. "Conclusion and future" summarizes the paper.

Basic BWO

Beluga Whale Optimization (BWO) was proposed by Changting Zhong et al. in 2022. The algorithm is inspired by three phases of beluga whale behavior: swimming, foraging, and whale fall.

The exploration and exploitation stages of BWO are determined by \(B_{f}\), which can be expressed as:

$$ B_{f} = B_{0} (1 - T/2T_{\max } ) $$
(1)

where \(B_{0}\) is a random number in (0, 1). When \(B_{f} > 0.5\), BWO is in the exploration phase; when \(B_{f} \le 0.5\), it is in the exploitation phase. As T increases, the range of \(B_{f}\) shrinks from (0, 1) to (0, 0.5).
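To make the phase switch concrete, Eq. (1) can be computed as below (a minimal Python sketch; the helper name and the per-call redraw of \(B_{0}\) are our assumptions):

```python
import numpy as np

def balance_factor(T, T_max):
    """Balance factor B_f of Eq. (1); B0 is redrawn on every call."""
    B0 = np.random.rand()                 # random number in (0, 1)
    return B0 * (1 - T / (2 * T_max))
```

An individual with `balance_factor(T, T_max) > 0.5` explores; otherwise it exploits.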

Exploration phase

The mathematical model of the BWO exploration phase is inspired by beluga whale swimming. Based on behavioral recordings, pairs of beluga whales swim close together in a synchronized or mirrored fashion (Fig. 1a). Positions are updated as follows:

$$ \left\{ {\begin{array}{*{20}c} {X_{i,j}^{T + 1} = X_{i,pj}^{T} + (X_{r,p1}^{T} - X_{i,pj}^{T} )(1 + r_{1} )\sin (2\pi r_{2} ),j = even} \\ {X_{i,j}^{T + 1} = X_{i,pj}^{T} + (X_{r,p1}^{T} - X_{i,pj}^{T} )(1 + r_{1} )\cos (2\pi r_{2} ),j = odd} \\ \end{array} } \right. $$
(2)

where \(X_{i,j}^{T + 1}\) is the new position of the ith individual on the jth dimension, \(p_{j} (j = 1,2, \cdots ,D)\) is a dimension index randomly selected from the D dimensions, \(X_{i,pj}^{T}\) is the position of the ith individual on the \(p_{j}\)th dimension, \(X_{r,p1}^{T}\) is the current position of the rth individual (r is randomly selected), and \(r_{1}\) and \(r_{2}\) are random numbers in (0, 1).
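A sketch of how Eq. (2) can be implemented (assuming, as one reading of the equation, that even/odd refers to the index j of the random dimension ordering; the helper name is hypothetical):

```python
import numpy as np

def exploration_update(X, i, lb, ub):
    """Exploration-phase update of Eq. (2) for the ith beluga.
    X is the (N_pop, D) population matrix; lb/ub are the variable bounds."""
    N_pop, D = X.shape
    r = np.random.randint(N_pop)        # randomly selected beluga r
    p = np.random.permutation(D)        # random dimension ordering p_j (0-based)
    x_new = X[i].copy()
    for j in range(D):
        r1, r2 = np.random.rand(2)
        trig = np.sin(2 * np.pi * r2) if j % 2 == 0 else np.cos(2 * np.pi * r2)
        x_new[p[j]] = X[i, p[j]] + (X[r, p[0]] - X[i, p[j]]) * (1 + r1) * trig
    return np.clip(x_new, lb, ub)
```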

Fig. 1

Behaviors of beluga whales a Swim b Foraging c Whale fall

Exploitation phase

The BWO exploitation phase was inspired by the beluga whale's predatory behavior (Fig. 1b). It is expressed as follows:

$$ X_{i}^{T + 1} = r_{3} X_{best}^{T} - r_{4} X_{i}^{T} + C_{1} \cdot L_{F} \cdot (X_{r}^{T} - X_{i}^{T} ) $$
(3)

where \(X_{i}^{T}\) and \(X_{r}^{T}\) are the current and a random beluga position, \(X_{i}^{T + 1}\) is the updated beluga position, \(X_{best}^{T}\) is the beluga with the best position, \(r_{3}\) and \(r_{4}\) are random numbers in (0, 1), and \(C_{1} = 2r_{4} (1 - T/T_{\max } )\) measures the random jump strength of the Lévy flight. \(L_{F}\) is calculated as follows:

$$ L_{F} = 0.05 \times \frac{u \times \sigma }{{\left| v \right|^{1/\beta } }} $$
(4)
$$ \sigma = \left( {\frac{\Gamma (1 + \beta ) \times \sin (\pi \beta /2)}{{\Gamma ((1 + \beta )/2) \times \beta \times 2^{(\beta - 1)/2} }}} \right)^{1/\beta } $$
(5)

where u and v are normally distributed random numbers, and β is a constant set to 1.5.
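Eqs. (4)–(5) follow Mantegna's classical procedure for generating Lévy-stable steps; a minimal sketch:

```python
import numpy as np
from math import gamma, pi, sin

def levy_flight(beta=1.5):
    """Levy flight step L_F of Eqs. (4)-(5) via Mantegna's method."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma)    # u ~ N(0, sigma^2)
    v = np.random.normal(0.0, 1.0)      # v ~ N(0, 1)
    return 0.05 * u / abs(v) ** (1 / beta)
```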

Whale fall

Whale fall refers to death caused by attacks from natural enemies such as polar bears and killer whales during migration, as well as harm from humans and other irresistible factors (Fig. 1c). The whale fall is modeled as follows:

$$ X_{i}^{T + 1} = r_{5} X_{i}^{T} - r_{6} X_{i}^{T} + r_{6} X_{step} $$
(6)
$$ X_{step} = (u_{b} - l_{b} )\exp ( - C_{2} T/T_{\max } ) $$
(7)

where \(C_{2} = 2W_{f} \times n\) and n is the population size; \(W_{f}\) can be expressed as follows:

$$ W_{f} = 0.1 - 0.05T/T_{\max } $$
(8)

A whale fall occurs when \(B_{f} < W_{f}\).
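A sketch of the whale-fall update as printed in Eqs. (6)–(8), taking n in \(C_{2}\) as the population size:

```python
import numpy as np

def whale_fall_update(x_i, lb, ub, T, T_max, N_pop):
    """Whale-fall position update of Eqs. (6)-(8)."""
    W_f = 0.1 - 0.05 * T / T_max                  # Eq. (8)
    C2 = 2 * W_f * N_pop                          # whale-fall step factor
    X_step = (ub - lb) * np.exp(-C2 * T / T_max)  # Eq. (7)
    r5, r6 = np.random.rand(2)
    return r5 * x_i - r6 * x_i + r6 * X_step      # Eq. (6)
```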

Proposed HBWO

With the increasing complexity of optimization problems, BWO shows limitations in solving practical problems, such as insufficient solution accuracy. In this section, an improved beluga whale optimization, HBWO, is proposed by combining the following three strategies: (1) quasi-opposition-based learning (QOBL), which improves the distribution quality of the initial population; (2) an adaptive and spiral predation strategy, whose adaptive part makes belugas learn from the best individual instead of relying only on random belugas, and whose spiral predation part extends the exploitation phase, helping the algorithm avoid premature convergence and jump out of local optima; (3) the Nelder-Mead simplex search method (NM), which replaces the worst beluga individuals with better ones through reflection, expansion, and contraction.

Quasi-opposition-based learning (QOBL)

Opposition-based learning (OBL) [52, 53] assumes that considering both a candidate and its opposite gives a higher probability of approaching the optimal solution than random initialization alone. OBL has been extended to quasi-opposition-based learning (QOBL); an initial beluga population obtained with QOBL has better optimization ability than one obtained with OBL or with random initialization. QOBL is denoted by:

$$ \left\{ {\begin{array}{*{20}c} {x_{i,j}^{T + 1} = H_{i,j} + (\tilde{x}_{i,j}^{T} - H_{i,j} ) \times rand(0,1),\;\tilde{x}_{i,j}^{T} < H_{i,j} } \\ {x_{i,j}^{T + 1} = \tilde{x}_{i,j}^{T} + (H_{i,j} - \tilde{x}_{i,j}^{T} ) \times rand(0,1),\;\tilde{x}_{i,j}^{T} \ge H_{i,j} } \\ \end{array} } \right. $$
(9)

where

$$ \tilde{x}_{i,j}^{T} = (lb_{j} + ub_{j} ) - X_{i,j}^{T} $$
(10)
$$ H_{i,j} = (lb_{j} + ub_{j} )/2 $$
(11)

where \(X_{i,j}^{T}\) is the position of the ith beluga whale on the jth dimension generated by random initialization, \(\tilde{x}_{i,j}^{T}\) is its opposite point, and \(H_{i,j}\) is the midpoint of the search interval on the jth dimension.
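A compact sketch of QOBL initialization following Eqs. (9)–(11); drawing a uniform point between \(H_{i,j}\) and the opposite point covers both cases of Eq. (9), and the greedy selection of the \(N_{pop}\) fittest individuals from the combined set is our assumption:

```python
import numpy as np

def qobl_init(N_pop, D, lb, ub, fitness):
    """Quasi-opposition-based initialization (Eqs. 9-11): generate a random
    population and its quasi-opposite points, keep the N_pop fittest."""
    X = lb + (ub - lb) * np.random.rand(N_pop, D)   # random initial belugas
    X_opp = (lb + ub) - X                           # opposite points, Eq. (10)
    H = (lb + ub) / 2                               # interval centers, Eq. (11)
    # uniform point between H and the opposite point covers both cases of Eq. (9)
    X_qo = H + (X_opp - H) * np.random.rand(N_pop, D)
    both = np.vstack([X, X_qo])
    f = np.apply_along_axis(fitness, 1, both)
    return both[np.argsort(f)[:N_pop]]
```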

Adaptive and spiral predation strategies

The adaptive part improves the exploration phase of beluga whale optimization. The original BWO exploration phase ignores learning from the optimal beluga individual, which weakens the algorithm's search ability and leaves room for improvement. HBWO keeps the framework of the original exploration phase, but while learning from random beluga individuals it also learns from the optimal beluga individual in some dimensions, increasing the algorithm's search ability in the exploration phase. The mathematical formulation is as follows.

$$ \left\{ {\begin{array}{*{20}c} {x_{i,s1}^{T + 1} = x_{g,s1}^{T} + (x_{i,s1}^{T} - x_{g,s1}^{T} )(1 + r_{1} )\sin (2\pi r_{2} )} \\ {x_{i,s2}^{T + 1} = x_{g,s2}^{T} + (x_{i,s2}^{T} - x_{g,s2}^{T} )(1 + r_{1} )\cos (2\pi r_{2} )} \\ \end{array} } \right. $$
(12)

where \(s_{1}\) and \(s_{2}\) are two random integers from 1 to D (the number of dimensions), \(x_{i,s1}^{T + 1}\) is the new position of the ith beluga in the \(s_{1}\)th dimension, \(x_{g,s1}^{T}\) is the position of the globally optimal beluga in the \(s_{1}\)th dimension, and \(x_{i,s1}^{T}\) is the current position of the ith beluga in the \(s_{1}\)th dimension; the same applies to \(s_{2}\).
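A sketch of the adaptive step of Eq. (12) (helper name hypothetical; dimensions are zero-based in the code):

```python
import numpy as np

def adaptive_update(x_i, x_best, D):
    """Adaptive exploration step of Eq. (12): move toward the global best
    on two randomly chosen dimensions s1 and s2."""
    s1, s2 = np.random.randint(D, size=2)
    r1, r2 = np.random.rand(2)
    x_new = x_i.copy()
    x_new[s1] = x_best[s1] + (x_i[s1] - x_best[s1]) * (1 + r1) * np.sin(2 * np.pi * r2)
    x_new[s2] = x_best[s2] + (x_i[s2] - x_best[s2]) * (1 + r1) * np.cos(2 * np.pi * r2)
    return x_new
```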

The spiral predation part improves the exploitation phase of beluga whale optimization (Fig. 2). Influenced by the whale optimization algorithm, spiral predation with a cosine factor is introduced into the exploitation stage of BWO to broaden the exploitation ability of beluga whales and enhance the diversity of later populations. It is calculated by Eq. (13).

$$ x_{i}^{T + 1} = r_{3} x_{best}^{T} - r_{4} x_{i}^{T} + C_{1} \cdot L_{F} \cdot x_{i}^{T} \cdot \cos (w \cdot 2 \cdot \pi ) $$
(13)

where \(x_{i}^{T}\) is the current beluga position, \(x_{i}^{T + 1}\) is the updated beluga position, \(x_{best}^{T}\) is the best solution, \(r_{3}\) and \(r_{4}\) are random numbers in (0, 1), \(C_{1} = 2r_{4} (1 - T/T_{\max } )\), and \(w\) is expressed as follows:

$$ w = (h - 1) \cdot r + 1 $$
(14)
$$ h = - T/T_{\max } - 1 $$
(15)
Fig. 2

Beluga whale feeding (a) primitive (b) spiral feeding

where r is a random number in (0, 1), and h decreases linearly from −1 to −2 as T increases.
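A sketch of the spiral exploitation step of Eqs. (13)–(15), reusing the `levy_flight` helper sketched earlier:

```python
import numpy as np

def spiral_update(x_i, x_best, T, T_max, levy_flight):
    """Spiral exploitation step of Eqs. (13)-(15)."""
    r3, r4, r = np.random.rand(3)
    C1 = 2 * r4 * (1 - T / T_max)          # jump-strength factor
    h = -T / T_max - 1                     # Eq. (15): decreases from -1 to -2
    w = (h - 1) * r + 1                    # Eq. (14): spiral shape factor
    return r3 * x_best - r4 * x_i + C1 * levy_flight() * x_i * np.cos(2 * np.pi * w)
```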

Nelder-Mead simplex search method (NM)

The method rescales beluga individuals using reflection, expansion, and compression. By comparing the fitness values of these points, a better beluga individual replaces the worst one according to the NM steps, which helps find the optimal beluga individual (Fig. 3). The steps of NM are described below.

  (1) Reflection. Calculate the reflection point \(r = m + \alpha \cdot (m - w)\), where w is the worst point, m is the centroid of the remaining points, and α is the reflection coefficient; in this paper, \(\alpha = 1\).

  (2) Expansion. If \(f_{r} < f_{w}\), calculate the expansion point \(s = m + \beta \cdot (r - m)\), where β is the expansion factor; in this paper, \(\beta = 2\). If \(f_{s} < f_{r}\), set \(w = s\); otherwise, set \(w = r\).

Fig. 3

NM a reflection b expansion c contraction d shrink

where \(f_{r}\), \(f_{w}\), and \(f_{s}\) denote the fitness values of the corresponding points r, w, and s.

  (3) Compression.

  If \(f_{b} < f_{r} < f_{m}\), set \(w = r\).

  If \(f_{m} < f_{r} < f_{w}\), contract inward: \(c = m + (r - m)/2\); if \(f_{c} < f_{w}\), set \(w = c\).

  Otherwise, shrink outward: \(p = m - (r - m)/2\); if \(f_{p} < f_{w}\), set \(w = p\); otherwise, set \(w = r\).

where \(f_{b}\), \(f_{r}\), \(f_{m}\), \(f_{w}\), \(f_{c}\), and \(f_{p}\) denote the fitness values of the corresponding points b, r, m, w, c, and p.

In order to reduce the error caused by the randomness of NM, it can be run repeatedly; in this paper it is run 5 times. Algorithm 1 gives the pseudo-code.

Algorithm 1:

Nelder-Mead simplex search method.

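A minimal Python rendering of one NM pass over the steps above (a sketch; the expansion test \(f_{r} < f_{b}\) follows the standard method, whereas step (2) above writes \(f_{r} < f_{w}\)):

```python
import numpy as np

def nm_replace_worst(points, f, alpha=1.0, beta=2.0):
    """One NM pass over `points` (a list of position vectors): reflect,
    possibly expand or contract, and replace the worst point."""
    pts = sorted(points, key=f)                 # best ... worst by fitness
    b, w = pts[0], pts[-1]
    m = np.mean(np.asarray(pts[:-1]), axis=0)   # centroid excluding the worst
    r = m + alpha * (m - w)                     # (1) reflection
    if f(r) < f(b):                             # (2) expansion
        s = m + beta * (r - m)
        pts[-1] = s if f(s) < f(r) else r
    elif f(r) < f(m):                           # (3) accept the reflected point
        pts[-1] = r
    elif f(r) < f(w):                           # contract inward
        c = m + (r - m) / 2
        pts[-1] = c if f(c) < f(w) else r
    else:                                       # shrink outward
        p = m - (r - m) / 2
        pts[-1] = p if f(p) < f(w) else r
    return pts
```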

Detailed Steps for HBWO

Combining BWO with the three strategies, quasi-opposition-based learning, the adaptive and spiral predation strategy, and the Nelder–Mead simplex search method, yields an enhanced beluga whale optimization, denoted HBWO.

For HBWO, the three introduced strategies are important means of maintaining balance. First, the quasi-opposition-based learning strategy increases the likelihood that the population searches near the optimal solution by improving the initial distribution of the population, which improves the search efficiency of the exploration phase. The adaptive and spiral predation strategy further increases search efficiency by introducing optimal-position information in the exploration phase, so the population constantly explores toward the optimal solution. In addition, the Nelder-Mead simplex search method relies on the position information of the population to refine the optimal position in the solution space.

The steps of HBWO are as follows (a compact code sketch is given after the list):

Step1. Initialization: determine the parameters of the enhanced beluga whale optimization algorithm;

Step2. Generate an initial population of belugas by Eq. (9);

Step3. Calculate the fitness values and obtain the current optimal individual;

Step4. Calculate \(B_{f}\) according to Eq. (1). If \(B_{f} > 0.5\), update the beluga individual's position according to Eq. (12); otherwise, update it according to Eq. (13);

Step5. Calculate and sort the fitness values to find the current optimal beluga individual;

Step6. Calculate \(W_{f}\) according to Eq. (8). If \(B_{f} < W_{f}\), update the beluga individual's position according to Eq. (6);

Step7. Check whether the position exceeds the upper and lower bounds; if the upper bound is exceeded, replace the position with ub, and if the lower bound is exceeded, replace it with lb;

Step8. Replace the position of the worst beluga individual according to NM;

Step9. Find the current optimal individual position;

Step10. If \(T \le T_{\max }\), return to Step 3; otherwise, go to Step 11;

Step11. Output the global optimal position and fitness value.
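Putting the steps together, a compact sketch of the main loop, composed from the hypothetical helpers sketched in the previous sections:

```python
import numpy as np

def hbwo(fitness, lb, ub, D, N_pop=50, T_max=500):
    """High-level HBWO loop following Steps 1-11 (a sketch)."""
    X = qobl_init(N_pop, D, lb, ub, fitness)                # Steps 1-2
    best = min(X, key=fitness).copy()                       # Step 3
    for T in range(1, T_max + 1):
        for i in range(N_pop):
            Bf = np.random.rand() * (1 - T / (2 * T_max))   # Eq. (1)
            if Bf > 0.5:                                    # Step 4
                X[i] = adaptive_update(X[i], best, D)       # Eq. (12)
            else:
                X[i] = spiral_update(X[i], best, T, T_max, levy_flight)  # Eq. (13)
            if Bf < 0.1 - 0.05 * T / T_max:                 # Step 6, Eq. (8)
                X[i] = whale_fall_update(X[i], lb, ub, T, T_max, N_pop)  # Eq. (6)
            X[i] = np.clip(X[i], lb, ub)                    # Step 7
        X = np.asarray(nm_replace_worst(list(X), fitness))  # Steps 5, 8
        cand = min(X, key=fitness)                          # Step 9
        if fitness(cand) < fitness(best):
            best = cand.copy()
    return best, fitness(best)                              # Step 11
```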

Algorithm 2 gives the pseudo-code of HBWO. Figure 4 shows the flow chart of HBWO.

Fig. 4

Flow chart of HBWO algorithm

Time complexity of HBWO

The computational complexity (O) of HBWO is determined by three processes: algorithm initialization, fitness evaluation, and beluga whale individual updating. First, quasi-opposition-based learning (QOBL) is added, whose computational complexity is O(Npop × D). Second, the adaptive and spiral predation strategies are introduced, with computational complexity O(Npop × Tmax); the complexity of the whale fall is O(Npop × Tmax × 0.1). Finally, the Nelder–Mead simplex search method is added, with computational complexity O(5 × Npop). Therefore, the complexity of the proposed HBWO is O(HBWO) = O(Npop × (D + 1.1 × Tmax + 5)).
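Summing the four parts reproduces the stated bound:

$$ O({\text{HBWO}}) = O(N_{pop} \times D) + O(N_{pop} \times T_{\max } ) + O(0.1 \times N_{pop} \times T_{\max } ) + O(5 \times N_{pop} ) = O(N_{pop} \times (D + 1.1 \times T_{\max } + 5)) $$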

Algorithm 2:

Proposed HBWO Algorithm.


Experimental results and discussion

In this section, the proposed HBWO is compared with the classical Particle Swarm Optimization (PSO); the widely applied Whale Optimization Algorithm (WOA) and Harris Hawks Optimization (HHO); the more recent Sparrow Search Algorithm (SSA), Dandelion Optimizer (DO) [54], Sand Cat Swarm Optimization (SCSO) [55], Aquila Optimizer (AO) [56], and Arithmetic Optimization Algorithm (AOA) [59]; two improved algorithms, Leader Harris Hawks Optimization (LHHO) [57] and the Leader Slime Mould Algorithm (LSMA) [58]; and the original BWO, on the CEC2017 and CEC2019 test sets, to verify the superiority of HBWO. These two test sets contain unimodal, multimodal, hybrid, and composition functions, which are challenging and thus enable a more scientific measure of each algorithm's merit. The initial parameters of the optimization algorithms are shown in Table 1.

Table 1 Parameter settings

Sensitivity analysis of reflection and expansion coefficients

The NM strategy rescales the positions of beluga individuals through reflection, expansion, and compression to prevent the algorithm from stalling at a local candidate solution. The key to how the NM strategy affects position updates lies in selecting the reflection parameter α and the expansion parameter β. In this section, we discuss and analyze the effects of these two parameters on the performance of the algorithm, using the CEC2019 suite of ten functions. The reflection parameter α is varied over [0.5, 1.5] and the expansion parameter β over [1.5, 2.5], each with a step size of 0.1. For each selected α and β, the mean values and rankings of the test-function solutions obtained by HBWO over 20 independent runs are presented in Tables 2 and 3, respectively. Bold indicates the parameter value with the smallest mean.

Table 2 Mean and ranking of the different reflection parameter α
Table 3 Mean and ranking of the different expansion parameter β

From the results in Tables 2 and 3, it can be found that α = 1 obtains the best average in six cases, with the lowest average ranking of 1.7; in general, the closer α is to 1, the better the performance of the HBWO algorithm. In addition, β = 2 obtains the best mean in four cases, with the lowest average ranking of 3. Hence, α = 1 and β = 2 are the most appropriate settings.

Performance benefits of improved strategies

In order to effectively analyze the impact of each introduced strategy on the optimization ability of the BWO algorithm and to demonstrate the synergistic effect of multiple strategies, an ablation analysis is performed. To overcome the drawbacks of BWO and enhance its performance, the proposed HBWO introduces quasi-oppositional-based learning, adaptive and spiral predation, and the Nelder-Mead simplex search. Table 4 lists the BWO variants using one or more of these strategies, where "1" indicates that a strategy is introduced and "0" that it is not. The performance of the BWO variants was analyzed on the CEC2019 test suite, and Table 5 lists the experimental results, including averages and rankings.

Table 4 Various BWO variants with four strategies
Table 5 Results of various BWO variants on the CEC2019 test suite

As can be seen in Table 5, the performance of the BWO variants containing the NM strategy is relatively unstable. The experimental results show that all four strategies help improve the performance of the original algorithm, especially the adaptive and spiral predation strategies, although the NM-based variants lack robustness. In addition, ANMBWO, QABWO, and AHWO, ranked 2, 3, and 4, outperform the other BWO variants in overall optimization performance, which also proves that the adaptive and spiral predation strategy plays the greatest and most consistent role among the four strategies. Compared to BWO, the proposed HBWO mainly improves the exploration and balancing abilities and the ability to jump out of local solutions. Therefore, the combination of quasi-oppositional-based learning, adaptive and spiral predation, and Nelder-Mead simplex search moves HBWO towards the optimal solution.

Experiments and analysis on the CEC2017 test set

The strengths and weaknesses of HBWO are verified using the CEC2017 test set. First, the relevant parameters are set: the population size is 50, the number of dimensions is 30, and each algorithm is run 20 times on each test function. The reported results include the mean, standard deviation, best, worst, and rank.

The average rank of HBWO is 1.1724, as seen in Table 6, which places it first. This indicates that HBWO's overall results on the 30 CEC2017 test functions are the best among the compared algorithms. As can also be seen from Table 6, compared to BWO, HBWO significantly improves the search ability on each CEC2017 test function, indicating the effectiveness of introducing QOBL, the adaptive and spiral predation strategy, and NM. On the unimodal test functions F1 and F3, HBWO is superior to the other algorithms. On the multimodal test functions F4–F10, HBWO ranked second after LSMA on F5 and F7. On the hybrid and composition test functions, HBWO ranked second on F24, third on F27, and first on all other test functions. However, the proposed HBWO requires more runtime than the original BWO, mainly due to the added time complexity of the three strategies.

Table 6 Results of HBWO and the other algorithms on CEC2017

From Table 6, it can be seen that for the 30-dimensional CEC2017 test functions, BWO ranks last, AOA eleventh, LHHO fifth, AO fourth, SSA third, LSMA second, and HBWO first, which fully indicates that the three strategies significantly improve the computational accuracy of BWO. The overall performance ranking of the twelve algorithms is HBWO > LSMA > SSA > AO > LHHO > SCSO > PSO > HHO > DO > WOA > AOA > BWO.

The Wilcoxon rank-sum test values are given in Table 7, with bold data indicating \(p > 0.05\). The results are 0/0/29, 0/2/27, 0/0/29, 0/0/29, 0/0/29, 0/0/29, 0/0/29, 0/0/29, 0/0/29, 0/0/29, and 3/5/21. Compared with SSA, HBWO has \(p > 0.05\) on test functions F13 and F27 and outperforms SSA on the remaining 27 test functions. Compared with LSMA, \(p > 0.05\) holds on the F8, F10, F21, F23, and F26 test functions, and HBWO is better than LSMA on 21 test functions. HBWO outperforms all the other algorithms in their comparisons. Therefore, HBWO has a better search ability on the 30-dimensional CEC2017 test functions.

Table 7 Wilcoxon rank sum test results of other optimization algorithms CEC2017 test set based on HBWO

The convergence curves of each algorithm on the CEC2017 test functions are shown in Fig. 5. HBWO converges better on the unimodal test functions. On the multimodal and hybrid test functions, HBWO has a weaker convergence rate at the beginning of the iterations; however, owing to the added spiral predation, it jumps out of local optima in later iterations, and its curves still trend downward at the later stages. This gives HBWO competitiveness and superiority, with especially good performance on the F6, F9, F12, F14, F18, F19, and F22 test functions. For the composition test functions, HBWO's search ability is significantly better than that of the other algorithms on F26 and F30.

Fig. 5

Convergence curves of HBWO and other algorithms on CEC2017 test set

In addition, box plots are given in Fig. 6. The boxes corresponding to HBWO are smaller and lower in most cases, which indicates that the results solved by HBWO are better and more stable. HBWO's boxes are slightly higher than LSMA's only on the CEC2017 test functions F5, F7, F24, and F27, and on F8 its median is slightly lower than LSMA's, with a smaller box. HBWO's boxes are smaller and lower than those of the other algorithms, especially on the test functions F14, F16, F17, F20, F22, and F29. In general, the box plots show that HBWO is a significant improvement over BWO.

Fig. 6

Boxplot of HBWO and other algorithms on CEC2017 test set

The radar plot drawn from the rankings of the twelve optimization algorithms on the CEC2017 test functions is given in Fig. 7. HBWO has the smallest shaded area and BWO the largest. This further illustrates that combining BWO with the three strategies greatly improves performance and shows the superiority of HBWO.

Fig. 7

Radar graph of HBWO and other algorithms on CEC2017 test set

Tables 8, 9, and 10 give the results of HBWO, OBCWOA [87], and BWO on the CEC2017 test functions in 10, 50, and 100 dimensions. The tables show that HBWO improves performance on each test function in 50 dimensions, with especially obvious improvements on F1, F4, F12, F13, F14, F15, F19, and F30. In 100 dimensions, it shows superiority on F1, F3, F12, F13, F14, F15, F17, F18, F19, and F30. This further illustrates that combining the three strategies lets BWO obtain richer populations and escape local optima in high-dimensional spaces.

Table 8 Results of HBWO, OBCWOA and BWO for solving 10 dimensional CEC2017 test set
Table 9 Results of HBWO, OBCWOA and BWO for solving 50 dimensional CEC2017 test set
Table 10 Results of HBWO, OBCWOA and BWO for solving 100 dimensional CEC2017 test set

Experiments and analysis on the CEC2019 test set

The performance of HBWO was further tested on the CEC2019 test set. The relevant parameters were first set: a population size of 40, with each algorithm run 20 times; the results are shown in Table 11. From Table 11, the average rank of HBWO is 1.1, first overall. HBWO ranked second only on the F2 test function, on which SCSO ranked first; on all other test functions HBWO ranks first, and all its results are better than those of the SCSO algorithm. In general, HBWO is a great improvement over BWO. However, HBWO still needs to improve its accuracy on some unimodal functions, mainly because the way the algorithm balances the exploration and exploitation processes can leave the exploration time too short.

Table 11 Results of HBWO and the other algorithms on CEC2019

From Table 11, it can be seen that HBWO ranks first, LSMA second, SCSO third, HHO fourth, BWO eleventh, and AOA twelfth. Therefore, the performance ranking of the twelve algorithms on CEC2019 is HBWO > LSMA > SCSO > HHO > AO > LHHO > PSO > SSA > DO > WOA > BWO > AOA.

The Wilcoxon rank-sum test values are given in Table 12. According to the last row of Table 12, the results are 0/1/9, 0/1/9, 0/1/9, 0/1/9, 0/1/9, 0/2/8, 0/2/8, 0/0/10, 0/1/9, 0/1/9, and 0/3/7. It can be seen that HBWO is better than PSO, SSA, WOA, HHO, DO, BWO, and LHHO on 9 test functions; HBWO shows better results on the CEC2019 test set.

Table 12 Wilcoxon rank sum test results of other algorithms CEC2019 test set based on HBWO

The convergence curves of HBWO on the CEC2019 test functions are given in Fig. 8. On the test functions F4, F6, F7, and F8, HBWO not only converges quickly but also with high accuracy, showing clear advantages. On F4, F6, F7, F8, and F10, HBWO's convergence is slower than PSO's in the early stage, but it keeps converging and jumps out of local optima in later iterations, and its optimization ability is stronger than PSO's. HBWO differs little from SSA, HHO, DO, SCSO, AO, BWO, LHHO, and LSMA in convergence speed and accuracy on the F1, F2, and F3 test functions, which is a point to improve in the future. On the other test functions, however, HBWO is significantly superior to the other algorithms.

Fig. 8

Convergence curves of HBWO and other algorithms for solving CEC2019 test set

Figure 9 presents the box plots of each algorithm on the CEC2019 test functions. HBWO has lower and smaller boxes on most of the test functions, which indicates the superiority of the algorithm, with especially obvious superiority on F3, F4, and F6. On the test functions F7 and F8, HBWO has a larger box, but its box is clearly lower, with a smaller mean value.

Fig. 9

Box plot of HBWO and other algorithms for solving CEC2019 test set

The radar plot drawn from the rankings of the twelve optimization algorithms on the CEC2019 test functions is given in Fig. 10. HBWO has the smallest shaded area, BWO ranks eleventh in area, and AOA has the largest area. This further illustrates that combining BWO with QOBL, the adaptive and spiral predation strategies, and NM greatly improves performance.

Fig. 10

Radar graph of HBWO and other algorithms for solving CEC2019 test set

Comparative analysis of time and space complexity

In order to increase the persuasiveness of the comparison experiments and to verify other aspects of the proposed HBWO, Table 13 presents the time and space complexity of the comparison algorithms.

Table 13 Time and space complexity of the comparison algorithms

From Table 13, we can find that the time complexity of AO, AOA, PSO, WOA, and LSMA is O(Npop × (D × Tmax + 1)), because these methods consist only of initialization and relatively simple iterative updates. Compared with these methods, the complexity of the proposed algorithm differs only in the whale-fall process and the NM search.

Moreover, for the space complexity, the initialization can be taken as the maximum amount of space occupied by an optimization method at any time. Therefore, the space complexity of the proposed HBWO is O(Npop × D). All the other methods except PSO are also O(Npop × D); PSO is O(2Npop × D) because its initialization must set up velocities and positions simultaneously.

HBWO for engineering optimization problems

To further verify the merit of HBWO, the proposed method is used to solve six practical engineering design problems. In this section, the population size is \(N = 30\) and the maximum number of iterations is \(T_{\max} = 500\).
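All six problems below are constrained minimizations of the form min f(x) subject to \(h_{k}(x) \le 0\) (constraints stated with \(\ge\) can be negated into this form). One common way to hand such problems to a metaheuristic is a static penalty; the paper does not state its constraint-handling scheme, so the following wrapper is only an illustrative assumption:

```python
def penalized(objective, constraints, rho=1e6):
    """Static-penalty wrapper for min f(x) s.t. h_k(x) <= 0.
    `constraints` is a list of callables; `rho` weights the violation."""
    def f(x):
        violation = sum(max(0.0, h(x)) ** 2 for h in constraints)
        return objective(x) + rho * violation
    return f
```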

Corrugated bulkhead design problem

The corrugated bulkhead design problem [60] is to minimize the weight of the corrugated bulkhead of a chemical tanker under six constraints. The design variables are the width (w), depth (d), length (l), and plate thickness (t); let \(x = (x_{1} ,x_{2} ,x_{3} ,x_{4} ) = (w,d,l,t)\). The mathematical model of the corrugated bulkhead design problem is as follows:

$$ \min f\left( x \right) = \frac{5.885t(w + l)}{{w + \sqrt {\left| {l^{2} - d^{2} } \right|} }}, $$
(16)

subject to:

$$ h_{1} (x) = - td(0.4w + \frac{l}{6}) + 8.94(w + \sqrt {\left| {l^{2} - d^{2} } \right|} ) \le 0, $$
$$ h_{2} (x) = - td^{2} (0.2w + \frac{l}{12}) + 2.2(8.94(w + \sqrt {\left| {l^{2} - d^{2} } \right|} ))^{4/3} \le 0, $$
$$ h_{3} (x) = - t + 0.0156w + 0.15 \le 0, $$
$$ h_{4} (x) = - t + 0.0156l + 0.15 \le 0, $$
$$ h_{5} (x) = - t + 0.15 \le 0, $$
$$ h_{6} (x) = - l + d \le 0, $$

variable range: \(0 \le w,d,l \le 100,\quad 0 \le t \le 5.\)

RSA [62], GWO [28], SCSO [55], WOA [61], RSO [63], HHO [32], SCA [65], AOA [59], HGS [64], AO [56], BWO [45], and HBWO were used to solve the problem. The results are shown in Tables 14 and 15; the optimal result of HBWO is 6.842953. The bolded data are the minimum values for each index.

Table 14 Optimal results of each algorithm for solving corrugated bulkhead design problem
Table 15 Statistical results of corrugated bulkhead design problem solved by each algorithm

Tubular column design problem

The tubular column design problem [66] is to design a uniform tubular column (Fig. 11) of minimum cost that can withstand a compressive load \(P\). The mean diameter \(d\) and thickness \(t\) vary in the ranges [2, 14] and [0.2, 0.8], respectively. The parameters are as follows: yield stress \(\sigma_{y} = 500\,kgf/cm^{2}\), modulus of elasticity \(E = 0.85 \times 10^{6}\, kgf/cm^{2}\), and density \(\rho = 0.0025\,kgf/cm^{3}\). The length \(L\) of the column is \(250\,cm\). Let \(x = (x_{1} ,x_{2} ) = (d,t)\).

$$ \min f(x) = 9.82dt + 2d, $$
(17)
Fig. 11

Tubular column design problem

subject to:

$$ h_{1} (x) = \frac{P}{{\pi dt\sigma_{y} }} - 1 \le 0,\,h_{2} (x) = \frac{{8PL^{2} }}{{\pi^{3} Edt(d^{2} + t^{2} )}} - 1 \le 0, $$
$$ h_{3} (x) = \frac{2.0}{d} - 1 \le 0,\,h_{4} (x) = \frac{d}{14} - 1 \le 0, $$
$$ h_{5} (x) = \frac{0.2}{t} - 1 \le 0,\,h_{6} (x) = \frac{t}{0.8} - 1 \le 0. $$

where

$$ 2 \le d \le 14,\quad 0.2 \le t \le 0.8. $$

HBWO is used to solve the tubular column design problem, and the obtained results are compared with other optimization algorithms, including RSA [62], GWO [28], SCSO [55], WOA [61], RSO [63], HHO [32], SCA [65], AOA [59], SMA [67], AO [56], and BWO [45]. The results are shown in Tables 16 and 17. The optimal value of HBWO for the tubular column design is 26.53129, and the results obtained are relatively stable.

Table 16 Optimal results of each algorithm for solving the tubular column design problem
Table 17 Statistical results of the tubular column design problem solved by each algorithm

Three-bar truss design problem

The three-bar truss design problem [68], shown in Fig. 12, is to minimize the volume (and hence the total weight) of the truss structure subject to stress constraints.

$$ \min f(x) = (2\sqrt 2 T_{1} + T_{2} ) \times l, $$

subject to:

$$ h_{1} (x) = \frac{{\sqrt 2 T_{1} + T_{2} }}{{\sqrt 2 T_{1}^{2} + 2T_{1} T_{2} }}P - \sigma \le 0, $$

\(h_{2} (x) = \frac{{T_{2} }}{{\sqrt 2 T_{1}^{2} + 2T_{1} T_{2} }}P - \sigma \le 0,\)

$$ h_{3} (x) = \frac{1}{{T_{1} + \sqrt 2 T_{2} }}P - \sigma \le 0, $$

with bounds: \(0 \le T_{1} ,T_{2} \le 1\), where \(l\) is the bar length, \(P\) the applied load, and \(\sigma\) the allowable stress.
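For illustration, the objective and constraints can be coded as below; the parameter values \(l = 100\,cm\), \(P = 2\,kN/cm^{2}\), and \(\sigma = 2\,kN/cm^{2}\) are those commonly used for this benchmark and are left implicit above:

```python
import numpy as np

def three_bar_truss(x, l=100.0, P=2.0, sigma=2.0):
    """Objective f and constraints h (each h_k <= 0) of the three-bar truss.
    The default l, P, sigma are the usual benchmark values (assumed here)."""
    T1, T2 = x
    f = (2 * np.sqrt(2) * T1 + T2) * l
    h = [
        ((np.sqrt(2) * T1 + T2) / (np.sqrt(2) * T1**2 + 2 * T1 * T2)) * P - sigma,
        (T2 / (np.sqrt(2) * T1**2 + 2 * T1 * T2)) * P - sigma,
        (1.0 / (T1 + np.sqrt(2) * T2)) * P - sigma,
    ]
    return f, h

# paired with the penalized wrapper sketched earlier:
obj = lambda x: three_bar_truss(x)[0]
cons = [lambda x, k=k: three_bar_truss(x)[1][k] for k in range(3)]
```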

Fig. 12

Three-bar truss design problem

To solve this problem, RSA [62], GWO [28], SCSO [55], WOA [61], RSO [63], HHO [32], SCA [65], AOA [59], HGS [64], AO [56], BWO [45], and HBWO are used. Tables 18 and 19 show the experimental results, from which it can be seen that the result of HBWO on this problem is 0.012663, indicating that HBWO can achieve good and stable results.

Table 18 Optimal results of each algorithm for solving the three-bar truss design problem
Table 19 Statistical results of three bar truss design problem solved by each algorithm

Tension/compression spring design problem

The tension/compression spring design problem [69] is shown in Fig. 13. The problem has three design variables: the mean coil diameter (\(D\)), the wire diameter (\(d\)), and the number of active coils (\(N\)). Let \(X = [x_{1} ,x_{2} ,x_{3} ] = [d,D,N]\).

$$ \min f(x) = \left( {N + 2} \right)Dd^{2} , $$

subject to:

$$ h_{1} (x) = 1 - \frac{{D^{3} N}}{{71785d^{4} }} \le 0, $$

\(h_{2} (x) = \frac{{4D^{2} - dD}}{{12566(Dd^{3} - d^{4} )}} + \frac{1}{{5108d^{2} }} - 1 \le 0,\)

$$ h_{3} (x) = 1 - \frac{140.45d}{{D^{2} N}} \le 0, $$

\(h_{4} (x) = \frac{D + d}{{1.5}} - 1 \le 0,\)

Fig. 13

Tension/compression spring design problem

variable range: \(0.05 \le d \le 2,\quad 0.25 \le D \le 1.3,\quad 2 \le N \le 15.\)

BWO [45], RSA [62], PSO [30], SCSO [55], WOA [61], RSO [63], HHO [32], SCA [65], AOA [59], HGS [64], AO [56], and HBWO are used to solve the tension/compression spring design problem. Tables 20 and 21 show that the optimal value of HBWO on this problem is 0.012663. HBWO is strongly competitive compared with the other optimization algorithms.

Table 20 Optimal results of each algorithm for solving tension/compression spring design problem
Table 21 Statistical results of the tension/compression spring design problem solved by each algorithm

Heat exchange design problem

The heat exchange design problem [70] is a constrained benchmark minimization problem with eight variables and six inequality constraints.

$$ \min f(x) = x_{1} + x_{2} + x_{3} $$

subject to:

$$ h_{1} (x) = 0.0025\left( {x_{4} + x_{6} } \right) - 1 \le 0, $$
$$ h_{2} (x) = 0.0025\left( {x_{5} + x_{7} - x_{4} } \right) - 1 \le 0, $$
$$ h_{3} (x) = 1 - 0.01\left( {x_{8} - x_{5} } \right) \ge 0, $$
$$ h_{4} (x) = x_{1} x_{6} - 833.33252x_{4} - 100x_{1} + 83333.333 \ge 0, $$
$$ h_{5} (x) = x_{2} x_{7} - 1250x_{5} - x_{2} x_{4} + 1250x_{4} \ge 0, $$

\(h_{6} (x) = x_{3} x_{8} - x_{3} x_{5} + 2500x_{5} - 1250000 \ge 0,\)

where the ranges of the variables are

$$ 100 \le x_{1} \le 1000,\quad 1000 \le x_{2} ,x_{3} \le 10000,\quad 10 \le x_{i} \le 1000\;(i = 4, \ldots ,8). $$

HBWO is used to solve the heat exchange design problem, and its results are compared with those of RSA [62], GWO [28], SCSO [55], WOA [61], RSO [63], HHO [32], SCA [65], AOA [59], HGS [64], AO [56], and BWO [45]. From Tables 22 and 23, the optimal value of HBWO on this problem is 7060.574. However, the standard deviation of HBWO is still relatively large, so there remains some room for improvement.

Table 22 Optimal results of each algorithm for solving heat exchange design problem
Table 23 Statistical results of heat exchange design problem solved by each algorithm

Welded beam design problem

The welded beam design problem [71] (Fig. 14) is to minimize the manufacturing cost of a welded beam. The problem contains four design variables: the weld thickness \(h\), the weld joint length \(l\), the beam height \(t\), and the beam thickness \(b\). Let \(X = [x_{1} ,x_{2} ,x_{3} ,x_{4} ] = [h,l,t,b]\); the mathematical model is as follows.

$$ \min f(X) = 1.10471x_{1}^{2} x_{2} + 0.04811x_{3} x_{4} (14.0 + x_{2} ) $$

subject to:

$$ \begin{gathered} h_{1} (X) = m(x) - m_{\max } \le 0, \hfill \\ h_{2} (X) = n(x) - n_{\max } \le 0, \hfill \\ h_{3} (X) = p(x) - p_{\max } \le 0, \hfill \\ h_{4} (X) = x_{1} - x_{4} \le 0, \hfill \\ h_{5} (X) = B - B_{c} (x) \le 0, \hfill \\ h_{6} (X) = 0.125 - x_{1} \le 0, \hfill \\ h_{7} (X) = 1.10471x_{1}^{2} + 0.04811x_{3} x_{4} (14.0 + x_{2} ) - 5.0 \le 0, \hfill \\ \end{gathered} $$

where \(0.1 \le x_{1} \le 2\), \(0.1 \le x_{2} \le 10\), \(0.1 \le x_{3} \le 10\), \(0.1 \le x_{4} \le 2\), \(L = 14\,in\), \(m_{\max } = 136{,}000\,psi\), \(\sigma_{\max } = 36{,}600\,psi\), \(B = 6{,}000\,lb\), \(E = 30 \times 10^{6}\,psi\), \(G = 12 \times 10^{6}\,psi\),

Fig. 14

Welded beam design problem

\(m(X) = \sqrt {(m^{\prime})^{2} + 2m^{\prime}m^{\prime\prime}\frac{{x_{2} }}{2R} + (m^{\prime\prime})^{2} }\), \(m^{\prime} = \frac{B}{{\sqrt 2 x_{1} x_{2} }}\), \(m^{\prime\prime} = \frac{AR}{J}\),

\(A = B(L + \frac{{x_{2} }}{2})\), \(R = \sqrt {\frac{{x_{2}^{2} }}{4} + (\frac{{x_{1} + x_{3} }}{2})^{2} }\), \(J = 2\sqrt 2 x_{1} x_{2} \left[ {\frac{{x_{2}^{2} }}{4} + (\frac{{x_{1} + x_{3} }}{2})^{2} } \right]\),

\(n(X) = \frac{6BL}{{x_{4} x_{3}^{2} }}\), \(p(X) = \frac{{6BL^{{3}} }}{{Ex_{3}^{2} x_{4} }}\),

\(B_{c} (X) = \frac{{4.013E\sqrt {\frac{{x_{3}^{2} x_{4}^{6} }}{36}} }}{{L^{2} }}(1 - \frac{{x_{3} }}{2L}\sqrt{\frac{E}{4G}} )\).

HBWO, RSA [62], GWO [28], SCSO [55], WOA [61], RSO [63], HHO [32], SCA [65], AOA [59], HGS [64], AO [56], and BWO [45] were used to solve the welded beam design problem. It can be seen from Tables 24 and 25 that the optimal solution found by HBWO is 1.695252, indicating the superiority of HBWO.

Table 24 Optimal results of each algorithm for solving welded beam design problem
Table 25 Statistical results of welded beam design problem solved by each algorithm

Conclusion and future

In this paper, we propose an improved beluga whale optimization algorithm (HBWO), which introduces quasi-opposition-based learning (QOBL), adaptive and spiral predation, and the Nelder–Mead simplex search method (NM) into the beluga whale optimization algorithm. The CEC2017 and CEC2019 test functions are used to test its performance, comparing it with the original beluga whale optimization algorithm, classical algorithms, new algorithms from 2022, and improved algorithms. The experimental results show that HBWO has clear advantages. Meanwhile, the experimental results on six engineering cases further verify the efficiency of HBWO in solving practical problems. The three introduced strategies thus enable HBWO to obtain a better initial population while substantially improving performance measures such as solution accuracy and convergence speed. However, the experimental results also show that on some CEC2017 multimodal test functions the accuracy of HBWO is still poor, leaving room for improvement, and the added strategies increase the algorithm's complexity. Therefore, in future work, HBWO can be improved by integrating it with other algorithms, or new strategies can be introduced into the exploration phase to further enhance its performance. The improved algorithm can be applied to image segmentation [72, 73], energy problems [74, 75], path optimization [76–78], feature selection [79, 80], curve and surface optimization [81, 82], and other fields [83–86, 88, 89].

Availability of data and materials

All data generated or analyzed during this study were included in this published article.

References

1. Hu G, Chen L, Wang X, Wei G. Differential evolution-boosted sine cosine golden eagle optimizer with Lévy flight. J Bionic Eng. 2022;19(6):1850–85.


2. Melman A, Evsutin O. Comparative study of metaheuristic optimization algorithms for image steganography based on discrete Fourier transform domain. Appl Soft Comput. 2022;132:109847.


3. Hu G, Li M, Wang X, Wei G, Chang C-T. An enhanced manta ray foraging optimization algorithm for shape optimization of complex CCG-Ball curves. Knowl-Based Syst. 2022;240:108071.


  4. Truong D-N, Chou J-S. Fuzzy adaptive jellyfish search-optimized stacking machine learning for engineering planning and design. Autom Constr. 2022;143: 104579.


5. Li Y, Peng T, Hua L, Ji C, Ma H, Nazir MS, Zhang C. Research and application of an evolutionary deep learning model based on improved grey wolf optimization algorithm and DBN-ELM for AQI prediction. Sust Cities Soc. 2022;87:104209.


  6. Dian S, Zhong J, Guo B, Liu J, Guo R. A smooth path planning method for mobile robot using a BES-incorporated modified QPSO algorithm. Expert Syst Appl. 2022;208: 118256.


  7. Wang G, Guo S, Han L, Zhao Z, Song X. COVID-19 ground-glass opacity segmentation based on fuzzy c-means clustering and improved random walk algorithm. Biomed Signal Proc Cont. 2023;79:104159.


8. Hu G, Zhong J, Du B, Wei G. An enhanced hybrid arithmetic optimization algorithm for engineering applications. Comput Methods Appl Mech Eng. 2022;394:114901.


  9. Houssein EH, Çelik E, Mahdy MA, Ghoniem RM. Self-adaptive equilibrium optimizer for solving global, combinatorial, engineering, and multi-objective problems. Expert Syst Appl. 2022;195: 116552.


  10. Houssein EH, Rezk H, Fathy A, Mahdy MA, Nassef AM. A modified adaptive guided differential evolution algorithm applied to engineering applications. Eng Appl Artif Intell. 2022;113: 104920.


11. Yang XS. Firefly algorithm, stochastic test functions and design optimisation. Int J Bio-Inspir Comput. 2010;2(2):78–84.


  12. Storn R, Price K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J Global Optim. 1997;11(4):341–59.


13. Rajeev S, Krishnamoorthy CS. Discrete optimization of structures using genetic algorithms. J Struct Eng. 1992;118(5):1233–50.


  14. Zhong J, Feng L, Ong Y-S. Gene expression programming: a survey [Review Article]. IEEE Comput Intell Mag. 2017;12(3):54–72. https://doi.org/10.1109/MCI.2017.2708618.


15. Fogel D. Artificial intelligence through simulated evolution. Evol Comput. 2009;227–96.

  16. Rao RV, Savsani VJ, Vakharia DP. Teaching–learning-based optimization: a novel method for constrained mechanical design optimization problems. Comput Aided Des. 2011;43(3):303–15.


  17. Lee KS, Geem ZW. A new meta-heuristic algorithm for continuous engineering optimization: harmony search theory and practice. Comp Met App Mech Eng. 2005;194:36–8.


18. Moosavian N, Kasaee Roodsari B. Soccer league competition algorithm: a novel meta-heuristic algorithm for optimal design of water distribution networks. Swarm Evol Comput. 2014;17:14–24.


  19. Kumar M, Kulkarni AJ, Satapathy SC. Socio evolution & learning optimization algorithm: a socio-inspired optimization methodology. Future Generation Comp Syst. 2018;81:252–72.


  20. Ghasemian H, Ghasemian F, Vahdat-Nejad H. Human urbanization algorithm: a novel metaheuristic approach. Math Comput Simul. 2020;178:1–15.


21. Zhang Q, Gao H, Zhan Z-H, Li J, Zhang H. Growth Optimizer: a powerful metaheuristic algorithm for solving continuous and discrete global optimization problems. Knowl-Based Syst. 2023;261:110206.


  22. Weiguo Zhao, Liying Wang, Zhenxing Zhang, Chapter 5 - Engineering applications of artificial ecosystem-based optimization,Editor(s): Weiguo Zhao, Liying Wang, Zhenxing Zhang, New Optimization Algorithms and their Applications, Elsevier 2021 93–121.

23. Nguyen LT, Nestorović T. Unscented hybrid simulated annealing for fast inversion of tunnel seismic waves. Comput Methods Appl Mech Eng. 2016;301:281–99.


  24. Rashedi E, Nezamabadi-pour H, Saryazdi S. GSA: a gravitational search algorithm. Inf Sci. 2009;179(13):2232–48.


  25. Foroughi Nematollahi A, Rahiminejad A, Vahidi B. A novel physical based meta-heuristic optimization method known as lightning attachment procedure optimization. Appl Soft Comput. 2017;59:596–621.


  26. Azizi M. Atomic orbital search: a novel metaheuristic algorithm. Appl Math Model. 2021;93:657–83.


  27. Abdel-Basset M, El-Shahat D, Jameel M, Abouhawwash M. Young’s double-slit experiment optimizer : a novel metaheuristic optimization algorithm for global and constraint optimization problems. Comp Met Appl Mech Eng. 2023;403:115652.


  28. Mirjalili S, Mirjalili SM, Lewis A. Grey Wolf optimizer. Adv Eng Softw. 2014;69:46–61.


  29. Zhao W, Wang L, Zhang Z. Atom search optimization and its application to solve a hydrogeologic parameter estimation problem. Knowl-Based Syst. 2019;163:283–304.


  30. Kennedy J, Eberhart R. Particle swarm optimization. Proc IEEE Int Conf Neural Netw. 1995;4:1942–8.


  31. Dorigo M, Blum C. Ant colony optimization theory: a survey. Theoret Comput Sci. 2005;344(2–3):243–78.


  32. Heidari AA, Mirjalili S, Faris H, Aljarah I, Mafarja M, Chen H. Harris hawks optimization: algorithm and applications. Future Generation Comp Syst. 2019;97:849–72.


  33. Braik MS. Chameleon swarm algorithm: a bio-inspired optimizer for solving engineering design problems. Expert Syst Appl. 2021;174:114685.


  34. Chou J-S, Truong D-N. A novel metaheuristic optimizer inspired by behavior of jellyfish in ocean. Appl Math Comput. 2021;389: 125535.


  35. Xue J, Shen B. A novel swarm intelligence optimization approach: sparrow search algorithm. Syst Sci Control Eng. 2020. https://doi.org/10.1080/21642583.2019.1708830.


  36. Dehghani M, Montazeri Z, Trojovská E, Trojovský P. Coati optimization algorithm: a new bio-inspired metaheuristic algorithm for solving optimization problems. Knowl-Based Syst. 2023;259: 110011.


  37. Pan J-S, Zhang L-G, Wang R-B, Snášel V, Chu S-C. Gannet optimization algorithm: a new metaheuristic algorithm for solving engineering optimization problems. Math Comput Simul. 2022;202:343–73.


  38. Eslami N, Yazdani S, Mirzaei M, Hadavandi E. Aphid-Ant Mutualism: a novel nature-inspired metaheuristic algorithm for solving optimization problems. Math Comput Simul. 2022;201:362–95.


  39. Sang-To T, Le-Minh H, Wahab MA, Thanh C-L. A new metaheuristic algorithm: shrimp and goby association search algorithm and its application for damage identification in large-scale and complex structures. Adv Eng Software. 2023;176:103363.

    Article  Google Scholar 

  40. Zamani H, Nadimi-Shahraki MH, Gandomi AH. CCSA: conscious neighborhood-based crow search algorithm for solving global optimization problems. Appl Soft Comput. 2019;85: 105583.

    Article  Google Scholar 

  41. Zhao W, Zhang Z, Wang L. Manta ray foraging optimization: an effective bio-inspired optimizer for engineering applications. Eng Appl Artif Intell. 2020;87: 103300.

    Article  Google Scholar 

  42. Abdollahzadeh B, Gharehchopogh FS, Khodadadi N, Mirjalili S. Mountain gazelle optimizer: a new nature-inspired metaheuristic algorithm for global optimization problems. Adv Eng Software. 2022;174:103282.

    Article  Google Scholar 

  43. Wang L, Cao Q, Zhang Z, Mirjalili S, Zhao W. Artificial rabbits optimization: a new bio-inspired meta-heuristic algorithm for solving engineering optimization problems. Eng Appl Artif Intell. 2022;114: 105082.

    Article  Google Scholar 

  44. Zhao W, Wang L, Mirjalili S. hummingbird algorithm: a new bio-inspired optimizer with its engineering applications. Comput Methods Appl Mech Eng. 2022;388: 114194.

    Article  MathSciNet  Google Scholar 

  45. Zhong C, Li G, Meng Z. Beluga whale optimization: a novel nature-inspired metaheuristic algorithm. Knowl-Based Syst. 2022;251: 109215.

    Article  Google Scholar 

  46. Çelik E. IEGQO-AOA: information-exchanged gaussian arithmetic optimization algorithm with quasi-opposition learning. Knowl-Based Syst. 2023;260: 110169.

    Article  Google Scholar 

  47. Sudan Yu, Heidari AA, He C, Cai Z, Althobaiti MM, Mansour RF, Liang G, Chen H. Parameter estimation of static solar photovoltaic models using Laplacian Nelder-Mead hunger games search. Solar Energy. 2022;242:79–104.

    Article  Google Scholar 

  48. Pandey V, Pandey SK. PY-Nodes: an ab-initio python code for searching nodes in a material using Nelder-Mead’s simplex approach. Comput Phys Commun. 2023;283: 108570.

    Article  Google Scholar 

  49. Shuhui Xu, Wang Y, Wang Z. Parameter estimation of proton exchange membrane fuel cells using eagle strategy based on JAYA algorithm and Nelder-Mead simplex method. Energy. 2019;173:457–67.

    Article  Google Scholar 

  50. G. Wu, R. Mallipeddi, P.N. Suganthan. Problem definitions and evaluation criteria for the CEC 2017 competition and special session on constrained single objective real-parameter optimization problem definitions and evaluation criteria for the CEC 2017 competition on constrained real parameter optimization (2017).

  51. Chauhan S, Vashishtha G. A synergy of an evolutionary algorithm with slime mould algorithm through series and parallel construction for improving global optimization and conventional design problem. Eng Appl Artif Intell. 2023;118: 105650.

    Article  Google Scholar 

  52. Houssein EH, Hussain K, Abualigah L, Elaziz MA, Alomoush W, Dhiman G, Djenouri Y, Cuevas E. An improved opposition-based marine predators algorithm for global optimization and multilevel thresholding image segmentation. Knowledge-Based Syst. 2021;229:107348.

    Article  Google Scholar 

  53. Xiaobing Yu, WangYing Xu, ChenLiang Li. Opposition-based learning grey wolf optimizer for global optimization. Knowledge-Based Syst. 2021;226:107139.

    Article  Google Scholar 

  54. Zhao S, Zhang T, Ma S, Chen M, Optimizer D. A nature-inspired metaheuristic algorithm for engineering applications. Eng Appl Artif Intell. 2022;114: 105075.

    Article  Google Scholar 

  55. Seyyedabbasi A, Kiani F. Sand Cat Swarm optimization: a nature-inspired algorithm to solve global optimization problems. Eng Comput. 2022. https://doi.org/10.1007/s00366-022-01604-x.

    Article  Google Scholar 

  56. Abualigah L, Yousri D, Elaziz MA, Ewees AA, Al-qaness MA.A., Gandomi AH. Aquila optimizer: a novel meta-heuristic optimization algorithm. Comput Industrial Eng. 2021;157:107250.

    Article  Google Scholar 

  57. Naik MK, Panda R, Wunnava A, et al. A leader Harris hawks optimization for 2-D Masi entropy-based multilevel image thresholding. Multimed Tools Appl. 2021;80:35543–83. https://doi.org/10.1007/s11042-020-10467-7.

    Article  Google Scholar 

  58. Naik MK, Panda R, Abraham A. Normalized square difference based multilevel thresholding technique for multispectral images using leader slime mould algorithm. J King Saud Univ Comp Inform Sci. 2022;34(7):4524–36.

    Google Scholar 

  59. Abualigah L, Diabat A, Mirjalili S, Elaziz MA, Gandomi AH. The arithmetic optimization algorithm. Comp Met Appl Mech Eng. 2021. https://doi.org/10.1016/j.cma.2020.113609.

    Article  MathSciNet  Google Scholar 

  60. Bayzidi H, Talatahari S, Saraee M, Lamarche CP. Social network search for solving engineering optimization problems. Comput Intell Neurosci. 2021. https://doi.org/10.1155/2021/8548639.

    Article  Google Scholar 

  61. Mirjalili S, Lewis A. The whale optimization algorithm. Adv Eng Softw. 2016;95:51–67.

    Article  Google Scholar 

  62. Abualigah L, Elaziz MA, Sumari P, Geem ZW, Gandomi AH. Reptile search algorithm (RSA): a nature-inspired meta-heuristic optimizer. Expert Syst Appl. 2022;191:116158.

    Article  Google Scholar 

  63. Dhiman G, Garg M, Nagar A, Kumar V, Dehghani M. A novel algorithm for global optimization: Rat swarm optimizer. J Ambient Intell Humaniz Comput. 2021;12:8457–82. https://doi.org/10.1007/s12652-020-02580-0.

    Article  Google Scholar 

  64. Yang Yutao, Chen Huiling, Heidari Ali Asghar, Gandomi Amir H. Hunger games search: Visions, conception, implementation, deep analysis, perspectives, and towards performance shifts. Expert Syst Appl. 2021;177:114864.

    Article  Google Scholar 

  65. Seyedali Mirjalili SCA. A Sine cosine algorithm for solving optimization problems. Knowl Based Syst. 2016;96:120–33.

    Article  Google Scholar 

  66. Ong KM, Ong P, Sia CK. A new flower pollination algorithm with improved convergence and its application to engineering optimization. Decision Anal J. 2022;5:100144.

    Article  Google Scholar 

  67. Li S, Chen H, Wang M, Heidari AA, Mirjalili S. Slime mould algorithm: a new method for stochastic optimization. Future Generation Comp Syst. 2020;111:300–23.

    Article  Google Scholar 

  68. Lin X, Xianxing Yu, Li W. A heuristic whale optimization algorithm with niching strategy for global multi-dimensional engineering optimization. Comput Ind Eng. 2022;171: 108361.

    Article  Google Scholar 

  69. Gang H, Yang R, Qin X, Wei G. MCSA: Multi-strategy boosted chameleon-inspired optimization algorithm for engineering applications. Comp Met Appl Mech Eng. 2023;403:115676.

    Article  MathSciNet  Google Scholar 

  70. Jaberipour M, Khorram E. Two improved harmony search algorithms for solving engineering optimization problems. Commun Nonlinear Sci Numer Simul. 2010;15(11):3316–31.

    Article  Google Scholar 

  71. Duan Y, Xiaobing Y. A collaboration-based hybrid GWO-SCA optimizer for engineering optimization problems. Expert Syst Appl. 2023;213:119017.

    Article  Google Scholar 

  72. Yang X, Wang R, Zhao D, Fanhua Y, Heidari AA, Zhangze Xu, Chen H, Algarni AD, Elmannai H, Suling Xu. Multi-level threshold segmentation framework for breast cancer images using enhanced differential evolution. Biomed Signal Proc Cont. 2023;80:104373.

    Article  Google Scholar 

  73. Xiaobing Yu, Xuejing Wu. Ensemble grey wolf Optimizer and its application for image segmentation. Expert Syst Appl. 2022;209: 118267.

    Article  Google Scholar 

  74. Wilberforce Tabbi, Hegazy Rezk AG, Olabi EI, EpelleAbdelkareem MA. Comparative analysis on parametric estimation of a PEM fuel cell using metaheuristics algorithms. Energy. 2023;262:125530.

    Article  Google Scholar 

  75. Kathiravan K, Rajnarayanan PN. Application of AOA algorithm for optimal placement of electric vehicle charging station to minimize line losses. Electric Power Syst Res. 2023;214:108868.

    Article  Google Scholar 

  76. Xiaobing Yu, Li C, Zhou JF. A constrained differential evolution algorithm to solve UAV path planning in disaster scenarios. Knowl-Based Syst. 2020;204: 106209.

    Article  Google Scholar 

  77. Zhang C, Zhou W, Qin W, Tang W. A novel UAV path planning approach: heuristic crossing search and rescue optimization algorithm. Expert Syst Appl. 2023;215: 119243.

    Article  Google Scholar 

  78. Xiaobing Yu, Jiang N, Wang X, Li M. A hybrid algorithm based on grey wolf optimizer and differential evolution for UAV path planning. Expert Syst Appl. 2023;215: 119327.

    Article  Google Scholar 

  79. Gang Hu, Bo Du, Wang X, Wei G. An enhanced black widow optimization algorithm for feature selection. Knowl-Based Syst. 2022;235: 107638.

    Article  Google Scholar 

  80. Essam H, Houssein DO, Emre Ç, Marwa ME, Ghoniem Rania M. Boosted sooty tern optimization algorithm for global optimization and feature selection. Expert Syst Appl. 2023;213:119015.

    Article  Google Scholar 

  81. Gang Hu, Dou W, Wang X, Abbas M. An enhanced chimp optimization algorithm for optimal degree reduction of Said-Ball curves. Math Comput Simul. 2022;197:207–52.

    Article  MathSciNet  Google Scholar 

  82. Zheng J, Ji X, Ma Z, Hu G. Construction of local-shape-controlled quartic generalized said-ball model. Mathematics. 2023;11:2369.

    Article  Google Scholar 

  83. Abeer Al-Hyari, Mua’Ad Abu-Faraj, Hyperparameters Optimization of Convolutional Neural Networks using Evolutionary Algorithms, in: 2022 International Conference on Emerging Trends in Computing and Engineering Applications (ETCEA), 2022, pp. 1-6.

  84. Li N, Zhou G, Yongquan Zhou Wu, Deng QL. Multi-objective pathfinder algorithm for multi-objective optimal power flow problem with random renewable energy sources: wind, photovoltaic and tidal. Sci Rep. 2023;13:10647.

    Article  Google Scholar 

  85. Luo Q, Yin S, Zhou G, Meng W, Zhao Y, Zhou Y. Multi-objective equilibrium optimizer slime mould algorithm and its application in solving engineering problems. Struct Multidiscip Optim. 2023;66:114.

    Article  Google Scholar 

  86. Zhang T, Zhou Y, Guo Zhou Wu, Deng QL. Discrete Mayfly Algorithm for spherical asymmetric traveling salesman problem. Expert Syst Appl. 2023;221: 119765.

    Article  Google Scholar 

  87. Chen H, Li W, Yang X. A whale optimization algorithm with chaos mechanism based on quasi-opposition for global optimization problems. Expert Syst Appl. 2020;158: 113612.

    Article  Google Scholar 

  88. Hu G, Guo YX, Wei G, Abualigah L. Genghis Khan shark optimizer: a novel nature-inspired algorithm for engineering optimization. Adv Eng Inform. 2023;58(8): 102210.

    Article  Google Scholar 

  89. Hu G, Zheng Y, Abualigah L, Hussien AG. DETDO: an adaptive hybrid dandelion optimizer for engineering optimization. Adv Eng Inform. 2023;57: 102004.

    Article  Google Scholar 


Acknowledgements

None.

Funding

This research received financial support from the National Natural Science Foundation of China (72072144, 71672144, 71372173, 70972053); the Shaanxi Soft Science Research Plan (2019KRZ007); the Science and Technology Research and Development Program of Shaanxi Province (2021KRM183, 2017KRM059, 2017KRM057, 2014KRM282); and the Soft Science Research Program of Xi'an Science and Technology Bureau (21RKYJ0009).

Author information


Contributions

Conceptualization: JH, HH. Methodology: JH, HH. Formal analysis and investigation: JH, HH. Writing—original draft preparation: JH, HH. Writing—review and editing: JH, HH. Supervision: HH.

Corresponding author

Correspondence to Jiaxu Huang.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Huang, J., Hu, H. Hybrid beluga whale optimization algorithm with multi-strategy for functions and engineering optimization problems. J Big Data 11, 3 (2024). https://doi.org/10.1186/s40537-023-00864-8


Keywords