An efficient weighted slime mould algorithm for engineering optimization

Abstract

In engineering applications, optimal parameter design is crucial. While the Slime Mould Algorithm (SMA) excels in parameter discovery under constrained conditions, it faces challenges in achieving global convergence and avoiding local optima in complex tasks. This paper introduces an enhanced variant of SMA, termed CCHSMA, which integrates a Chaotic Local Search (CLS) mechanism to improve initial population diversity and combines Covariance Matrix Adaptation (CMA) and Harris Hawks Optimization (HHO) strategies to enhance global search efficiency. CCHSMA aims to improve search quality and reduce the likelihood of getting trapped in local optima. We evaluated CCHSMA's effectiveness by benchmarking it against the standard SMA and its variants using 30 CEC2017 test functions, and compared its performance with seven notable meta-heuristic algorithms and ten advanced swarm intelligence variants. The experimental results demonstrate that CCHSMA outperforms the other algorithms tested on the benchmark functions. To further validate its practical utility, CCHSMA's performance was also benchmarked against leading algorithms in real-world engineering applications. This paper uses detailed statistical methods, including the Wilcoxon signed-rank test and the Friedman test, to validate the comparative results. Our findings show that CCHSMA outperforms other algorithms in solving complex engineering optimization problems such as tension/compression spring design, pressure vessel design, and three-bar truss design, proving to be a robust tool for complex engineering optimization. Its enhanced initial population diversity and improved global search efficiency are essential for effectively addressing diverse engineering challenges.

Introduction

In practical applications, the efficacy of project design significantly influences construction and operational costs. As project scales expand, achieving high-quality, cost-efficient designs becomes increasingly vital. The design process involves formulating complex mathematical models with objectives, constraints, and various characteristics like discontinuity, nonlinearity, and nonconvexity. Researchers have explored diverse methods to address these challenges, each with unique strengths and limitations.

In mathematical modeling, Pucker et al. applied the SIMP method to geotechnical engineering, demonstrating topology optimization's potential and validating it through real-world cases [1]. Hsu et al. optimized the fuzzy proportional-derivative controller using generalized fuzzy rules, proving its robustness and versatility in engineering design through variable monotonicity analysis [2]. Herskovits et al. proposed numerical models for simultaneous analysis and design optimization and multidisciplinary design optimization, employing the feasible arc-in-point algorithm to reduce computational effort [3]. Sanchis et al. improved the normalized normal constraint method with an enhanced utopian hyperplane formulation, facilitating better Pareto boundary solution distribution in mechanical and structural design optimization [4].

In model automation, Esche et al. advanced chemical engineering models to automate simulation and optimization workflows [5]. Li et al. leveraged cloud-based resources to decompose engineering optimization into interconnected sub-tasks, enhancing optimization, virtualization, and computational efficiency [6].

Swarm intelligence algorithms are the core of this paper. Due to their computational efficiency and convergence to the optimal solution, they are very popular in many optimization fields such as networking [7, 8], image segmentation [9], the Internet of Things [10,11,12], economic emission dispatch [13, 14] and engineering design [15,16,17]. In engineering optimization problems, Kashani et al. reviewed various swarm intelligence algorithms in structural engineering, providing benchmarking insights and comparing optimization techniques [18]. Singh et al. examined various aspects of engineering problems, including constraints, objectives, and variables, to facilitate problem-solving [19]. Nasir et al. detailed water cycle algorithms and their applications across engineering fields [20]. Abualigah et al. categorized meta-heuristic optimization methods into basic, improved, and hybrid categories, reviewing their applications in engineering design [21].

Swarm intelligence algorithms, mimicking natural organisms' behavior, include Slime Mould Algorithm (SMA) [22, 23], Particle Swarm Optimization (PSO) [24], Salp Swarm Algorithm (SSA) [25], Bat Algorithm (BA) [26], Rime Optimization Algorithm (RIME) [27], Artificial Bee Colony (ABC) [28], Firefly Algorithm (FA) [29], Colony Predation Algorithm (CPA) [30], Differential Evolution (DE) [31], Harris Hawks Optimization (HHO) [32], Weighted Mean Of Vectors (INFO) [33], Grey Wolf Optimizer (GWO) [34], Polar Lights Optimization (PLO) [35], Hunger Games Search (HGS) [36], Runge Kutta Optimizer (RUN) [37], Liver Cancer Algorithm (LCA) [38], Artemisinin Optimization (AO) [39], Fata Morgana Algorithm (FATA) [40], Parrot Optimizer (PO) [41], Educational Competition Optimizer (ECO) [42], and Thermal Exchange Optimization (TEO) [43], as well as improved algorithms such as Gaussian Barebone Harris Hawks Optimization (GBHHO) [44], Augmented Lagrange Constrained Particle Swarm Optimization (ALCPSO) [45], Simultaneous Sensor Calibration and Deformation Estimation (SCADE) [46], and Enhanced Grasshopper Optimization Algorithm (EGOA) [47]. In practical applications, Sesok et al. optimized pile placement in mesh foundations using simulated annealing, demonstrating resource efficiency [48]. Akay et al. applied the ABC algorithm to various engineering problems, comparing it with other improved swarm intelligence algorithms [49]. Wang et al. introduced the gravitational search strategy-assisted ride optimization algorithm for engineering design [50]. Zhang et al. validated the improved sparrow-search whale optimization algorithm on engineering problems [51]. Arora et al. proposed the modified butterfly optimization algorithm for mechanical optimization [52]. Cheng et al. enhanced numerical search capabilities in structural algorithms with the fuzzy adaptive teaching–learning-based optimization algorithm [53]. Ye et al. combined hybrid optimization with data mining in the sequential approximate optimization method for improved resource utilization [54]. Galvan et al. developed the predictive parameter Pareto genetic algorithm for multi-objective optimization in magneto-fluidic thermal transport systems [55]. Amir et al. used a sand cat swarm optimization algorithm that determines the direction and speed of motion through defined adaptive strategies, achieving promising results on engineering optimization problems [56]. Yang et al. combined two strategies, opposition-based learning and spiral modelling, within an arithmetic optimization algorithm to improve global search and speed up the clustering of solutions in later stages; experiments showed that the algorithm performs well on engineering problems [57]. Hijjawi et al. combined cuckoo search operators into an algorithm called AOACS to enhance its search capability and applied it to engineering design problems [58]. Hussien et al. improved the beluga whale optimization with Gaussian local mutation and transition factor strategies, demonstrating good performance on engineering design problems [59]. Zhang et al. improved the honey badger algorithm using seven mathematical spirals defined by polar diameters and polar angles, demonstrating better convergence while solving engineering design problems [60]. Sait et al. combined a swarm intelligence algorithm with artificial neural networks, achieving more effective results on practical engineering problems [61].

Since SMA was proposed, researchers in various fields have studied and extended it. Mostafa et al. confirmed the extraction of photovoltaic cell parameters using SMA [62]. Gurses et al. compared four different algorithms, including SMA, and proposed the improved HSMA-SA for optimal value search in optimization problems [63]. In the field of wireless communications, Li et al. proposed a novel binary slime mould optimization algorithm for the spectrum allocation model to address spectrum allocation schemes [64]. AlRassas et al. augmented time-series model prediction with an improved ANFIS-SMAOLB to predict oil production [65]. Agarwal et al. proposed SMOA to find the shortest collision-free path for robots, reducing time and monetary costs [66]. Pawani et al. proposed a comprehensive learning wavelet-mutated SMA for finding the optimal solution to the cogeneration scheduling problem with nonlinear and discontinuous constraints [67]. Peng et al. proposed the PDM-TSMA method to enhance node localization within wireless sensor networks [68].

However, SMA faces challenges such as prolonged search duration and diminished search ability over time, including difficulty in escaping local optima. To address these limitations, this paper introduces the Chaotic Covariance Harris Slime Mould Algorithm (CCHSMA), designed to improve convergence speed and optimal solution identification. CCHSMA integrates the Chaotic Local Search (CLS), Covariance Matrix Adaptation (CMA) [69] and Harris Hawks Optimization (HHO) mechanisms. CLS enhances the quality of the initial population, CMA employs covariance computation with controlled step size, and HHO strengthens the mid-search process, synergistically enhancing SMA's performance.

The key contributions of this paper are as follows:

  • Development of CCHSMA, an advanced version of the SMA, demonstrating enhanced search efficiency, local search capabilities, and the ability to avoid local optima.

  • Performance evaluation of CCHSMA against leading swarm intelligence algorithms using the IEEE CEC2017 benchmarks, showcasing its superior efficacy.

  • Application of CCHSMA in engineering design optimization, effectively translating real-world constraints into mathematical models, with successful implementation in the Tension/Compression Spring Design (TCSD), Pressure Vessel Design (PVD), and Three-Bar Truss Design (TBTD).

The experimental results validate CCHSMA's effectiveness in engineering optimization, highlighting its reliability and applicability. This paper is structured as follows: Section "Background of the SMA" introduces the fundamentals of SMA. Section "Proposed CCHSMA method" elaborates on the CLS, CMA, and HHO mechanisms, explaining the framework's structure and rationale. Section "Experiment design and results" compares CCHSMA with other prominent algorithms. Section "Engineering problems" discusses CCHSMA's application in engineering design challenges. Section "Conclusions" concludes with a summary of findings and prospects for future research.

Background of the SMA

The SMA is modeled on the foraging behavior of slime moulds, particularly their complex movement patterns. This algorithm draws an analogy between slime mould foraging dynamics and optimization processes. During foraging, both the spatial distribution and the quantity of food significantly influence the slime mould's path. The distribution is influenced not only by proximity but also by the quantity of food, with greater abundance exerting a stronger pull on the slime mould, effectively increasing its 'weight' in the decision-making process.

Slime moulds autonomously determine their actions by evaluating the food's quantity and density. If highly concentrated food sources are far from the slime mould, their influence on its behavior diminishes. Three key processes encapsulate slime mould behavior, providing a basis for mathematical modeling of their foraging patterns.

Approaching food

The slime mould selects its foraging path by analyzing pheromones, leading to varied weighting in its decision-making process. This behavior is mathematically represented as follows:

$$X\left( {t + 1} \right) = \left\{ {\begin{array}{*{20}c} {X_{b} \left( t \right) + vb \cdot \,\left( {W \cdot X_{A} \,\left( t \right) - X_{B} \,\left( t \right)} \right)} & \quad{r < p} \\ {vc \cdot X\left( t \right)} & \quad{r \ge p} \\ \end{array} } \right.$$
(1)

where \(X\left(t\right)\) represents the current position generated by the slime mould at iteration \(t\), \({X}_{b}\) is the current population optimum, \({X}_{A}\) and \({X}_{B}\) represent two randomly selected solutions, \(W\) denotes the weights influencing the foraging direction, and \(r\) is a random value within \([0, 1]\).

The probability \(p\) is determined by Eq. (2):

$$p = \tanh \,\left| {S\left( i \right) - bF} \right|$$
(2)

where \(i\) ranges from 1 to \(n\), \(S\left(i\right)\) denotes the fitness value of \(X\), and \(bF\) is the best fitness value observed.

The parameter \(vb\) is defined in Eqs. (3) and (4).

$$vb = \left[ { - a,{ }a} \right]$$
(3)
$$a = arctanh\left( { - \left( {\frac{t}{max\_t}} \right) + 1} \right)$$
(4)

where \(max\_t\) indicates the maximum number of iterations, and \(t\) represents the current number of iterations.

The value of \(vb\) operates within the range \([-a,a]\) and gradually converges to 0 as the number of iterations increases. Similarly, the variable \(vc\), defined in Eqs. (5) and (6), oscillates between \([-b,b]\) and eventually converges to 0.

$$vc = \left[ { - b,{ }b} \right]$$
(5)
$$b = 1 - \left( {\frac{t}{max\_t}} \right)$$
(6)

Wrapping food

This section models the slime mould's behavior in response to varying food qualities, introducing different levels of food quality as depicted in Eq. (7):

$$X^{*} = \left\{ {\begin{array}{*{20}c} {rand\, \cdot \,\left( {UB - LB} \right) + LB} & \quad\;{rand < z} \\ {X_{b} \left( t \right) + vb\, \cdot \,\left( {W\, \cdot \,X_{A} \left( t \right) - X_{B} \left( t \right)} \right)} & {r < p} \\ {vc\, \cdot \,X\left( t \right)} & {r \ge p} \\ \end{array} } \right.$$
(7)

Here, \(LB\) and \(UB\) represent the search space's lower and upper bounds, respectively. The variables \(rand\) and \(r\) are random values within the range \([0, 1]\), and \(z\) is set to 0.03.

Finding food

The variable \(W\), representing food concentration, indicates that higher values correspond to higher food concentrations, thus attracting the slime moulds more effectively. The weight \(W\) evolves with each iteration of the algorithm, adapting the attractiveness of the food. The formula for \(W\) is:

$$W\,\left( {SmellIndex\,\left( i \right)} \right) = \left\{ {\begin{array}{*{20}c} {1 + r_{1} \, \cdot \,log\left( {\frac{bF - S\left( i \right)}{{bF - wF}} + 1} \right)} & \quad\;{condition} \\ {1 - r_{1} \, \cdot \,log\left( {\frac{bF - S\left( i \right)}{{bF - wF}} + 1} \right)} & {other} \\ \end{array} } \right.$$
(8)
$$SmellIndex = sort\left( S \right)$$
(9)

In Eqs. (8) and (9), \(SmellIndex\) represents a sorted array of fitness values derived from sorting \(S\). \(bF\) and \(wF\) denote the best and worst fitness values, respectively. The variable \({r}_{1}\) is a random value within [0,1], and \(condition\) refers to the first half of the sequence values in \(S\left(i\right)\).
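To make these update quantities concrete, the following minimal Python sketch computes \(W\), \(p\), \(a\), and \(b\) from Eqs. (2), (4), (6), (8), and (9) for a minimizing population; the function and variable names are ours, not taken from a reference implementation.

```python
import numpy as np

def sma_parameters(fitness, t, max_t, rng):
    """Sketch of the SMA control quantities for one iteration (minimization)."""
    n = len(fitness)
    order = np.argsort(fitness)                      # SmellIndex, Eq. (9)
    bF, wF = fitness[order[0]], fitness[order[-1]]   # best and worst fitness
    denom = (bF - wF) if bF != wF else -np.finfo(float).eps
    ratio = (bF - fitness) / denom                   # lies in [0, 1]
    r1 = rng.random(n)
    better = np.isin(np.arange(n), order[: n // 2])  # "condition": ranked top half
    W = np.where(better, 1 + r1 * np.log(ratio + 1),
                         1 - r1 * np.log(ratio + 1)) # Eq. (8)
    p = np.tanh(np.abs(fitness - bF))                # Eq. (2), per individual
    a = np.arctanh(1 - t / max_t)                    # Eq. (4), finite for t >= 1
    b = 1 - t / max_t                                # Eq. (6)
    return W, p, a, b
```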

Proposed CCHSMA method

This section elaborates on optimizing the SMA to enhance its performance and generalizability.

Chaotic Local Search (CLS) mechanism

Effective initialization is a crucial aspect of optimization algorithms. Chaotic mapping, known for its unpredictability and non-repetitiveness, is a widely used method. It generates random numbers between 0 and 1, facilitating the exploration of diverse regions within the search space and accelerating convergence toward optimal solutions. We employ logistic mapping as a classic example of chaotic initialization, as defined in Eq. (10).

$$X\left( {t + 1} \right) = \mu X\left( t \right)\left( {1 - X\left( t \right)} \right),\quad t = 1, \ldots ,n - 1$$
(10)

Here, \(\mu\) is the control parameter of the map, set to 4; \(t\) denotes the iteration index, with \(X\left(t\right)\) as the current value and \(X\left(t+1\right)\) as the initialized result. Function test results indicate that the CLS mechanism's initialization significantly enhances experimental efficiency. However, because its search capability diminishes in later stages, it requires integration with other mechanisms, such as the CMA detailed next.
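As an illustration, here is a minimal sketch of logistic-map initialization under the assumption that the chaotic values are rescaled into the search bounds; the burn-in length and seeding interval are our choices rather than settings from the paper.

```python
import numpy as np

def chaotic_init(pop_size, dim, lb, ub, mu=4.0, burn_in=50, seed=0):
    """Initialize a population with the logistic map of Eq. (10)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.05, 0.95, size=(pop_size, dim))  # start strictly inside (0, 1)
    for _ in range(burn_in):                           # iterate into the chaotic regime
        x = mu * x * (1.0 - x)                         # Eq. (10)
    return lb + x * (ub - lb)                          # rescale to [lb, ub]
```

With \(\mu = 4\) the iterates wander over almost all of (0, 1), which is what spreads the initial population across the search space.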

Covariance Matrix Adaptation (CMA)

Covariance measures the correlation between two variables, and the covariance matrix represents these correlations across multiple dimensions of a multidimensional random variable, as shown in Eqs. (11) and (12):

$${\varvec{C}} = \left[ {\begin{array}{*{20}c} {c_{11} } & {c_{12} } & {...} & {c_{1n} } \\ {c_{21} } & {c_{22} } & {...} & {c_{2n} } \\ {...} & {...} & {...} & {...} \\ {c_{n1} } & {c_{n2} } & {...} & {c_{nn} } \\ \end{array} } \right]$$
(11)

Here, \({c}_{ij}\) represents the covariance (mixed central moment) between the \({i}^{th}\) and \({j}^{th}\) components of the \(n\)-dimensional variable \(\overrightarrow{X}=({X}_{1},{X}_{2},...,{X}_{n})\):

$$c_{ij} = Cov\left( {X_{i} ,X_{j} } \right) = E\left\{ {\left[ {X_{i} - E\,\left( {X_{i} } \right)} \right]\,\left[ {X_{j} - E\,\left( {X_{j} } \right)} \right]} \right\},i,j = 1,2,...,n$$
(12)

Mutation targeting is facilitated by modifying the covariance matrix of the mutation distribution, increasing the probability of reproducing effective mutation steps. The rate of change is adjusted based on the number of strategy parameters. This adaptive mechanism operates independently of any specific coordinate system. CMA constructs a mutation distribution with exponentially decreasing weights for selected mutations, leveraging past generations' selections to inform the direction and step size of new mutations. The distribution's overall variance and shape should be adjusted on different timescales. This approach mitigates the issue commonly encountered in swarm intelligence algorithms, where optimal solutions may be located near suboptimal individuals.

The role of CMA in the algorithm can be delineated in three distinct steps:

  1. The algorithm begins by generating random initial solutions, forming an overall population through a normal distribution centered on these solutions, as described by Eq. (13):

    $$X_{i} \left( {t + 1} \right) \sim m\left( t \right) + \sigma \left( t \right) \cdot {\varvec{N}}\left( {0,{\varvec{C}}\left( t \right)} \right),i = 1,2, \ldots ,{\varvec{popsize}}$$
    (13)

where \({X}_{i}\) represents the \({i}^{th}\) individual in the population, with \(i\) ranging over the integers from 1 to the population size (popsize). \(t\) denotes the current number of iterations, and \(m\) represents the center of mass of the population. \(\sigma\) is the step size, \(N\) is the multivariate normal distribution, and \(C\) is the \(n \times n\) covariance matrix of the population, where \(n\) is the problem's dimensionality.

  2. This step involves selecting a portion of the optimal solutions for future populations, based on the weighted average operation determined in Eqs. (14) and (15):

    $$m\,\left( {t\, + \,1} \right)\, = \,\sum\limits_{i\, = \,1}^{\mu } {\omega_{i} \, \cdot \,X_{i} \,\left( {t\, + \,1} \right)}$$
    (14)
    $$\sum\limits_{{{\text{i}}\, = \,1}}^{\mu } {w_{i} = 1,{\text{w}}_{1} \ge {\text{w}}_{2} \ge \, \cdots \, \ge {\text{w}}_{\mu } > 0}$$
    (15)

    where \(\mu\) is the number of selected parent populations, less than popsize, and \({w}_{i}\) are the positive weights calculated from the previous generation, which sum to 1. \(m\left(t+1\right)\) becomes the center of mass for the next generation.

  3. Determining the step size \(\sigma \left(t\right)\) is crucial, as described in Eq. (16):

    $$\sigma \left( {t + 1} \right) = \sigma \left( t \right)\exp \left( {\frac{{c_{\sigma } }}{{d_{\sigma } }}\left( {\frac{{\left\| {P_{\sigma } \left( {t + 1} \right)} \right\|}}{{E\left\| {N\left( {0,I} \right)} \right\|}} - 1} \right)} \right)$$
    (16)

Relevant parameters are detailed in Eq. (17) to Eq. (21):

$$\mu_{{{\text{eff}}}} \, = \,\left( {\sum\limits_{i\, = \,1}^{\mu } {w_{i}^{2} } } \right)^{ - 1}$$
(17)
$$c_{\sigma } = \frac{{\mu_{eff} + 2}}{{n + \mu_{eff} + 5}},\quad d_{\sigma } = 1 + 2\max \left( {0,\sqrt {\frac{{\mu_{eff} - 1}}{{n + 1}}} - 1} \right) + c_{\sigma }$$
(18)

Here, \({\mu }_{\text{eff}}\) is the variance effective selection mass, with \(1{\le \mu }_{\text{eff}}\le \mu\); the evolutionary path \({P}_{c}\left(t\right)\) is given by:

$$P_{c} \left( {t + 1} \right) = \left( {1 - c_{c} } \right) \cdot P_{c} \left( t \right) + \sqrt {c_{c} \cdot \left( {2 - c_{c} } \right) \cdot \mu_{eff} } \left[ {\frac{{m\left( {t + 1} \right) - m\left( t \right)}}{{\sigma \left( t \right)}}} \right]$$
(19)

Among them:

$$c_{c} = \frac{{4 + \mu_{eff} /n}}{{n + 4 + 2\mu_{eff} /n}}$$
(20)
$$c_{1} = \frac{2}{{\left( {n + 1.3} \right)^{2} + \mu_{eff} }}$$
(21)

Exponential smoothing, with \({p}_{c}^{\left(0\right)}=0\), is employed to construct the evolutionary path, leading to:

$$p_{c}^{{\left( {t + 1} \right)}} = \left( {1 - c_{c} } \right)p_{c}^{\left( t \right)} + \sqrt {c_{c} \left( {2 - c_{c} } \right)\mu_{eff} } \,\frac{{m^{{\left( {t + 1} \right)}} - m^{\left( t \right)} }}{{\sigma^{\left( t \right)} }}$$
(22)

Finally, the covariance matrix is updated as follows:

$$\begin{gathered} {\mathbf{C}}\left( {t + 1} \right)\, = \,\left( {1 - c_{1} - c_{\mu } } \right) \cdot {\mathbf{C}}\left( t \right) + c_{1} \cdot P_{c} \left( {t + 1} \right) \hfill \\ \quad \quad \quad \cdot \left( {P_{c} \left( {t + 1} \right)} \right)^{T} + c_{\mu } \hfill \\ \quad \quad \quad \cdot \,\sum\limits_{i\, = \,1}^{\mu } {\omega_{i} } \cdot \frac{{\left( {X_{i} \left( {t + 1} \right) - m\left( t \right)} \right)}}{\sigma \left( t \right)} \cdot \left( {\frac{{\left( {X_{i} \left( {t + 1} \right) - m\left( t \right)} \right)}}{\sigma \left( t \right)}} \right)^{T} \hfill \\ \end{gathered}$$
(23)
$$c_{\mu } = \min \left( {1 - c_{1} ,\;\alpha_{\mu } \frac{{\mu_{{{\text{eff}}}} - 2 + 1/\mu_{{{\text{eff}}}} }}{{\left( {n + 2} \right)^{2} + \alpha_{\mu } \mu_{{{\text{eff}}}} /2}}} \right)\quad {\text{with}}\;\alpha_{\mu } = 2$$
(24)

The update process offers two options: one based on relative expectations within the parent generation, and the other based on expectations between successive generations.
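The three steps above can be condensed into one simplified generation. The sketch below covers only the sampling of Eq. (13), the recombination of Eqs. (14) and (15), and the step-size adaptation of Eqs. (16) to (18), using standard log-decreasing weights; the covariance update of Eqs. (19) to (24) is omitted for brevity, and the names are ours rather than the authors' code.

```python
import numpy as np

def cma_generation(m, sigma, C, p_sigma, fitness_fn, popsize, mu, rng):
    """One simplified CMA generation: sample, recombine, adapt step size."""
    n = len(m)
    A = np.linalg.cholesky(C)                           # C = A A^T (C positive definite)
    X = m + sigma * rng.standard_normal((popsize, n)) @ A.T   # Eq. (13)
    idx = np.argsort([fitness_fn(x) for x in X])[:mu]   # keep the mu best solutions
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1)) # decreasing positive weights
    w /= w.sum()                                        # Eq. (15): weights sum to 1
    m_new = w @ X[idx]                                  # Eq. (14): new mass center
    mu_eff = 1.0 / np.sum(w ** 2)                       # Eq. (17)
    c_sig = (mu_eff + 2) / (n + mu_eff + 5)             # Eq. (18)
    d_sig = 1 + 2 * max(0, np.sqrt((mu_eff - 1) / (n + 1)) - 1) + c_sig
    vals, vecs = np.linalg.eigh(C)                      # C^{-1/2} for the sigma path
    C_inv_half = vecs @ np.diag(vals ** -0.5) @ vecs.T
    p_sigma = ((1 - c_sig) * p_sigma
               + np.sqrt(c_sig * (2 - c_sig) * mu_eff)
               * C_inv_half @ (m_new - m) / sigma)
    chi_n = np.sqrt(n) * (1 - 1 / (4 * n) + 1 / (21 * n ** 2))   # E||N(0, I)||
    sigma_new = sigma * np.exp((c_sig / d_sig)
                               * (np.linalg.norm(p_sigma) / chi_n - 1))  # Eq. (16)
    return m_new, sigma_new, p_sigma
```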

Harris Hawks Optimization (HHO)

HHO is inspired by the cooperative hunting behavior of Harris Hawks in nature. It simulates their search and capture strategies, dynamically adapting to different environments. In the algorithm, the optimal solution represents the prey, with the iterative process of HHO mirroring the hawks' pursuit and capture strategies. HHO comprises two main phases: exploration and exploitation. The exploration phase involves two strategies for locating prey, while the exploitation phase uses four strategies for collaborative capture.

Exploration phase

The exploration phase in HHO assigns equal probability to its two search strategies, selected by the random parameter \(q\), so different regions are explored with equal likelihood. The governing equations for this phase are as follows:

$$X\left( {t + 1} \right) = \left\{ {\begin{array}{*{20}c} {X_{rand} \left( t \right) - r_{1} \left| {X_{rand} \left( t \right) - 2r_{2} X\left( t \right)} \right|} & \quad{q \ge 0.5} \\ {\left( {X_{best} \left( t \right) - X_{mean} \left( t \right)} \right) - r_{3} \left( {LB + r_{4} \left( {UB - LB} \right)} \right)} & \quad{q < 0.5} \\ \end{array} } \right.$$
(25)
$$X_{mean} \left( t \right) = \frac{1}{N}\,\sum\limits_{i\, = \,1}^{N} {X_{i} \left( t \right)}$$
(26)

When \(q \ge 0.5\), the Harris hawk searches randomly; when \(q < 0.5\), it bases its search on the results from the previous generation. The iteration number is denoted by \(t\), where \(X(t)\) represents the population at the current iteration, and \({X}_{rand}\left(t\right)\) is the randomly selected location of a Harris hawk in the \({t}^{th}\) iteration. The term \(\left|\cdot \right|\) denotes the absolute value. \({X}_{best}(t)\) is the best solution found up to the current iteration. The parameters \(q\), \(r_{1}\), \(r_{2}\), \(r_{3}\), and \(r_{4}\) are random values within the range \([0, 1]\). \({X}_{mean}\left(t\right)\) is the average position of the current population, calculated as shown in Eq. (26).

Exploitation phase

In the HHO algorithm, once the Harris Hawk narrows down the prey's location, it employs four capture strategies: soft besiege, hard besiege, soft besiege with progressive rapid dives, and hard besiege with progressive rapid dives, adjusting the approach based on the prey's state.

The prey's condition is characterized by two parameters: escape energy (\(E\)) and escape probability (\(r\)). The prey's ability to escape is determined by \(|E|\): if \(|E| \ge 0.5\), the prey has sufficient energy to escape, whereas \(|E| < 0.5\) indicates a lack of energy. The escape probability \(r\) is a random value within \([0, 1]\), with \(r \ge 0.5\) suggesting a failed escape and \(r < 0.5\) a successful escape. \(E\) is calculated as:

$$E = 2E_{0} \left( {1 - \frac{t}{max\_t}} \right)$$
(27)

where \({E}_{0}\) is a random number within \([-1, 1]\), and \(E\) decays over iterations, reflecting the prey's weakening.

  1. Soft besiege (\(r \ge 0.5\) and \(|E| \ge 0.5\)):

This strategy is employed when the prey has enough energy but fails to escape, as shown in Eqs. (28) and (29):

$$X\left( {t + 1} \right) = {\Delta }X\left( t \right) - E\left| {JX_{{{\text{best}}}} \left( t \right) - X\left( t \right)} \right|$$
(28)
$${\Delta }X\left( t \right) = X_{{{\text{best}}}} \left( t \right) - X\left( t \right)$$
(29)

where \(\Delta X(t)\) is the difference between the hunter's and prey's positions, and J is a random number in [0, 2], indicating the prey's random motion intensity and varying between iterations.

  2. Hard besiege (\(r \ge 0.5\) and \(|E| < 0.5\)):

When the prey is exhausted, the hard besiege strategy is used, as per Eq. (30):

$$X\left( {t + 1} \right) = X_{{{\text{best}}}} \left( t \right) - E\left| {{\Delta }X\left( t \right)} \right|$$
(30)

where \(E\) is calculated by Eq. (27) and \(\Delta X\left(t\right)\) by Eq. (29).

  3. Soft besiege with progressive rapid dives (\(r < 0.5\) and \(|E| \ge 0.5\)):

This strategy is utilized when the prey is energetic and likely to escape; it combines soft besiege with rapid dives, as described in Eqs. (31) to (34).

$$Y = X_{{{\text{best}}}} \left( t \right) - E\left| {JX_{{{\text{best}}}} \left( t \right) - X\left( t \right)} \right|$$
(31)
$$Z = Y + S \times LF\left( D \right)$$
(32)
$$X\left( {t + 1} \right) = \left\{ {\begin{array}{*{20}c} Y & \quad{{{\text{if }}F\left( Y \right) < F\left( {X\left( t \right)} \right)}} \\ Z & \quad{{{\text{if }}F\left( Z \right) < F\left( {X\left( t \right)} \right)}} \\ \end{array} } \right.$$
(33)
$$LF\left( x \right) = 0.01 \times \frac{\mu \times \sigma }{{\left| \nu \right|^{{\frac{1}{\beta }}} }},\quad \sigma = \left( {\frac{{\Gamma \left( {1 + \beta } \right) \times \sin \left( {\frac{\pi \beta }{2}} \right)}}{{\Gamma \left( {\frac{1 + \beta }{2}} \right) \times \beta \times 2^{{\left( {\frac{\beta - 1}{2}} \right)}} }}} \right)^{{\frac{1}{\beta }}}$$
(34)

where \(F\) is the fitness function, \(D\) denotes the problem's dimension, and \(S\) is a random vector with components in \([0, 1]\). \(LF\) represents the Levy flight function, with \(\mu\) and \(\nu\) as random values within \([0, 1]\), and \(\beta\) is typically set to 1.5.

  4. Hard besiege with progressive rapid dives (\(r < 0.5\) and \(|E| < 0.5\)):

Applied when the prey has low energy but remains cunning, this strategy follows Eqs. (35) to (37):

$$Y^{\prime} = X_{{{\text{best}}}} \left( t \right) - E\left| {JX_{{{\text{best}}}} \left( t \right) - X_{mean} \left( t \right)} \right|$$
(35)
$$Z^{\prime} = Y^{\prime} + S \times LF\left( D \right)$$
(36)
$$X\left( {t + 1} \right) = \left\{ {\begin{array}{*{20}c} {Y^{\prime}} & \quad{{\text{if }}F\left( {Y^{\prime}} \right) < F\left( {X\left( t \right)} \right)} \\ {Z^{\prime}} & \quad{{\text{if }}F\left( {Z^{\prime}} \right) < F\left( {X\left( t \right)} \right)} \\ \end{array} } \right.$$
(37)

where \({X}_{mean}(t)\) represents the average position of the Harris Hawk in the current iteration, as calculated from Eq. (26).
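To make the dispatch among these four strategies concrete, the condensed Python sketch below selects a move from \((r, |E|)\) for a minimization fitness \(F\); the Levy step draws \(u\) and \(v\) from normal distributions, as in common HHO implementations, and all names are our own.

```python
import math
import numpy as np

def levy(dim, rng, beta=1.5):
    """Levy flight step (Eq. 34); u, v drawn as in common HHO implementations."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u, v = rng.normal(0, sigma, dim), rng.normal(0, 1, dim)
    return 0.01 * u / np.abs(v) ** (1 / beta)

def hho_exploitation(X, X_best, X_mean, E, r, J, F, rng):
    """Choose among the four besiege strategies by (r, |E|), Eqs. (28)-(37)."""
    if r >= 0.5 and abs(E) >= 0.5:                        # soft besiege
        return (X_best - X) - E * np.abs(J * X_best - X)  # Eqs. (28)-(29)
    if r >= 0.5:                                          # hard besiege
        return X_best - E * np.abs(X_best - X)            # Eq. (30)
    if abs(E) >= 0.5:                                     # soft besiege + dives
        Y = X_best - E * np.abs(J * X_best - X)           # Eq. (31)
    else:                                                 # hard besiege + dives
        Y = X_best - E * np.abs(J * X_best - X_mean)      # Eq. (35)
    Z = Y + rng.random(len(X)) * levy(len(X), rng)        # Eqs. (32)/(36)
    for cand in (Y, Z):                                   # greedy accept, Eqs. (33)/(37)
        if F(cand) < F(X):
            return cand
    return X                                              # otherwise keep the position
```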

Improved SMA framework components

The CCHSMA framework integrates various strategies, and its overall flowchart is depicted in Fig. 1. During the initialization phase, the population range is constrained. Recognizing the impact of initial results on the algorithm's search efficiency, a chaotic mechanism is applied to the initial population. This approach ensures a uniform distribution across the entire space, significantly enhancing search efficiency.

Fig. 1 Flowchart of CCHSMA

In the early stages, the SMA demonstrates efficient search capabilities, making it well-suited for iterative search and adaptation. However, as the search progresses into middle and late stages, SMA's efficiency declines. To address this, the CMA and HHO mechanisms are introduced to enhance exploration and exploitation. The experimental integration of SMA with CMA and HHO proves effective, improving the population's search capabilities and broadening the search scope. Additionally, it refines step size and search direction more systematically.

Experiment design and results

This section evaluates the performance of the CCHSMA method through experimental analysis. The benchmark functions from IEEE CEC2017 (referenced in Table 1) are utilized to compare CCHSMA's performance against six SMA variants, seven fundamental metaheuristics, and six advanced swarm intelligence algorithms. Additionally, the algorithm's practical application is tested on engineering design problems. The experiments were conducted on a system with the following specifications: Windows OS, 16 GB RAM, Intel(R) Core (TM) i5-10500 CPU at 3.10 GHz, using MATLAB R2019b.

Table 1 30 Functions of IEEE CEC2017

Section "Validation of the effectiveness of different mechanisms" examines the effects of various mechanisms on the SMA algorithm. Section "Qualitative analysis of the improved algorithm" demonstrates the qualitative analysis of CCHSMA. Section "Comparison with original swarm intelligence algorithms" focuses on validating CCHSMA's superior global search capability and its effectiveness in avoiding local optima, through comparisons with seven basic algorithms. Section “Comparison with swarm intelligence algorithm variants" extends this comparative analysis to include six other advanced improved algorithms, offering a comprehensive assessment of CCHSMA's performance in diverse scenarios. Section "Comparative analysis of algorithms" verifies the robustness of the algorithm in different dimensions with different number of iterations.

Validation of the effectiveness of different mechanisms

In this study, a thorough comparison is conducted between the original SMA and its variants, including the CCHSMA, using IEEE CEC2017 benchmark functions at a 30-dimensional setting.

The variants examined are CMAHHOSMA, CLSHHOSMA, CLSCMASMA, CLSSMA, CMASMA, and HHOSMA. Each variant features a unique integration of mechanisms: CMAHHOSMA combines CMA and HHO without chaotic initialization, CLSCMASMA merges CLS and CMA excluding HHO, and CLSHHOSMA integrates CLS with HHO improvements. CLSSMA, CMASMA, and HHOSMA each apply distinct strategies to enhance SMA.

All algorithms and their variants are tested with a consistent population size of 30 and a maximum of 300,000 function evaluations (MaxFEs). Performance is measured by the mean (Avg.) and standard deviation (Std.) of the optimal solutions, with Avg. assessing global and local search capability and Std. evaluating robustness.

The IEEE CEC2017 functions are selected for testing CCHSMA, with additional statistical comparisons using Friedman's test. This analysis provides algorithm rankings based on their average performance across various functions. The results, along with a visual representation in Table 2 and Fig. 2, underscore CCHSMA's superiority over other algorithms.
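As a pointer to how these comparisons are computed, the snippet below runs both tests with SciPy on hypothetical per-function mean errors; the arrays are placeholders, not values from Table 2.

```python
import numpy as np
from scipy import stats

# Hypothetical mean errors of three algorithms on five benchmark functions
cchsma = np.array([1.2e3, 4.5e2, 3.1e4, 5.0e2, 6.7e2])
sma    = np.array([5.6e3, 6.1e2, 9.8e4, 6.2e2, 7.0e2])
hhosma = np.array([2.3e3, 5.2e2, 4.4e4, 5.5e2, 6.9e2])

# Friedman test: ranks the algorithms per function and tests rank equality
chi2, p_friedman = stats.friedmanchisquare(cchsma, sma, hhosma)

# Wilcoxon signed-rank test: pairwise comparison of CCHSMA against SMA
w_stat, p_wilcoxon = stats.wilcoxon(cchsma, sma)
print(f"Friedman p = {p_friedman:.4f}, Wilcoxon p = {p_wilcoxon:.4f}")
```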

Table 2 Experimental results of CCHSMA, CLSHHOSMA, CLSCMASMA, CMAHHOSMA, HHOSMA, CMASMA, CLSSMA and SMA on CEC2017 benchmark functions
Fig. 2 Convergence curves on CEC2017

Notably, when CLS is used independently with SMA (e.g., in F4), it shows effective initialization but struggles to escape local optima in later stages. In contrast, CMASMA demonstrates stronger convergence later in the search process despite a less effective initialization phase compared to CLSSMA.

The addition of the enhanced HHO strategy is beneficial when CMA underperforms, as seen in F16, where the combination of various strategies leads to improved results. Similarly, in F15, CCHSMA maintains robust exploration even when other algorithms reach a convergence point. In F30, CCHSMA effectively utilizes the CLS mechanism for superior initialization, enhancing efficiency and reducing runtime.

The integration of the three mechanisms in CCHSMA effectively harnesses their individual strengths, resulting in a synergistic effect that surpasses the sum of their separate impacts. As indicated in Table 3, CCHSMA ranks first among its six variants and the original SMA, excelling in global search capability, robustness, and avoiding local optima. Therefore, CCHSMA emerges as the preferred and enhanced algorithm for optimization tasks.

Table 3 Friedman test of CCHSMA, CLSHHOSMA, CLSCMASMA, CMAHHOSMA, HHOSMA, CMASMA, CLSSMA and SMA on CEC2017 benchmark functions

Qualitative analysis of the improved algorithm

To elucidate the fundamental characteristics of the CCHSMA, this section undertakes a qualitative analysis of the algorithm with a focus on three dimensions: search trajectory, average fitness, and population equilibrium. In Fig. 3, the first column presents the three-dimensional graphs of the respective functions, indicating their general topographical features. The second column illustrates the spatiotemporal distribution of the search process, representing the spatial distribution of the population throughout the global search. The third column captures the evolution of the individuals' positions along the first dimension within the range of [−100, 100]. The fourth column quantifies the collective fitness level of the population.

Fig. 3 a Illustration of functions, b two-dimensional location distribution of CCHSMA, c trajectory of CCHSMA in the first dimension, d average fitness of CCHSMA

In the visual representations, functions F1 and F4 are relatively smooth. As shown in column b, the population quickly converges to the vicinity of the optimal solution after initialization, facilitating further exploration. After several iterations, as shown in column c, the search gradually concentrates on a specific value for focused exploitation, ultimately reaching the optimal solution upon completing the iterative process.

Conversely, the search patterns for multimodal functions such as F6 and F10 are notably more complex. After initialization, the search trajectory exhibits continuous mutations and extensive exploration of potential solutions in various directions, avoiding entrapment in local minima. This dynamic is more clearly observed in column b.

Comparison with original swarm intelligence algorithms

This section presents a detailed comparison of CCHSMA with well-known meta-heuristic algorithms, including SSA, BA, PSO, ABC, FA, DE, and HHO.

A comprehensive analysis of Table 4 and Fig. 4 indicates that CCHSMA outperforms its counterparts across most functions. It excels in initialization, achieving favorable positions from the start. For example, in function F30, the CLS mechanism significantly enhances initialization. The figure reveals that algorithms like PSO and BA are highly dependent on their initialization results, which strongly impact their final performance. During the middle exploration phase, as illustrated by F18, while other algorithms may demonstrate slower convergence, CCHSMA's integration of the HHO mechanism ensures rapid and efficient global search due to its advanced exploratory capabilities. Additionally, its use of segmented population classification further extends its scope and versatility.

Table 4 Experimental results of CCHSMA, SSA, BA, PSO, ABC, FA, DE and HHO on CEC2017 benchmark functions
Fig. 4 Convergence curves of CCHSMA and other MAs on CEC2017

The impact of the CMA mechanism on SMA is evident in function F12, where it improves the control over the search's depth and breadth. This enhancement leads to more reliable step generation and optimizes the algorithm's global search capabilities. The careful modulation of the influence from the parent generation ensures optimal utilization of the evolutionary path's length, resulting in significant performance improvements.

Post-evaluation, Table 5 illustrates the rank averages and final standings, with CCHSMA consistently securing the top position. As highlighted in Table 5, CCHSMA not only outperforms other leading population-based intelligence algorithms but also consistently ranks at the top across all functions. It achieves top-tier scores in nearly all fundamental tests for each function, demonstrating its exceptional overall performance and superiority in diverse optimization scenarios.

Table 5 Friedman test of CCHSMA, SSA, PSO, BA, FA, DE, ABC and HHO on CEC2017 benchmark functions

Comparison with swarm intelligence algorithm variants

This section focuses on comparing the CCHSMA with advanced swarm intelligence optimization algorithm variants, namely OMGSCA [70], RDWOA [71], EOBLSSA [72], GBHHO [44], GOTLBO [73], ALCPSO [45].

An analysis of the results in Table 6 and Fig. 5, particularly for function F1, reveals significant insights. While GOTLBO shows promising early-stage results and fast convergence, its performance declines over time, eventually reaching a stagnation point. In contrast, CCHSMA exhibits consistent convergence speed and stable progression throughout the iterations. Notably, in the later stages, CCHSMA avoids stagnation and engages effectively in local exploration, ultimately surpassing GOTLBO's performance.

Table 6 Experimental results of CCHSMA, RDWOA, OMGSCA, EOBLSSA, GOTLBO and GBHHO, ALCPSO on CEC2017 benchmark functions
Fig. 5 Iteration curves of CCHSMA and other improved algorithms on CEC2017

Further examination of functions F3 and F12 highlights CCHSMA's sustained local exploration, even when other algorithms stabilize in later iterations. This persistent exploration allows CCHSMA to achieve optimal results by the end of the iteration process. This capability is particularly valuable for swarm intelligence algorithms in overcoming local optima. On the functions F1, F3, F4, F8, and F21, the CCHSMA algorithm has demonstrated its exceptional capability in local exploration and optimal solution identification, particularly as the algorithms approach the final stages of iterative convergence.

The comparative performance of these algorithms, including CCHSMA, is summarized in Table 7. In this comparison, CCHSMA achieves an average Friedman test result of 2.61333, ranking it highest overall. Although Fig. 5 shows that certain functions, like F18, may perform better with other algorithms, the overall results in Table 7 demonstrate CCHSMA's superior robustness and versatility. This robustness makes CCHSMA more adaptable to a variety of scenarios. In conclusion, CCHSMA not only improves the convergence capabilities of the original SMA but also broadens its application in tackling local convergence challenges. Throughout all functions, CCHSMA consistently outperforms its counterparts, including both basic algorithms and other advanced swarm intelligence options.

Table 7 Friedman test of CCHSMA, OMGSCA, RDWOA, GBHHO, EOBLSSA, GOTLBO and ALCPSO on CEC2017 benchmark functions

Comparative analysis of algorithms

To further demonstrate the robustness of the algorithms, we conducted additional comparative experiments, expanding the set of comparison algorithms from Section "Comparison with swarm intelligence algorithm variants" to include XMACO [74], LXMWOA [75], SCBA [76], and QCSCA [77] for reference and comparison. These experiments used the IEEE CEC2017 test suite across 30, 50, and 100 dimensions, with 300,000, 500,000, and 1,000,000 function evaluations, respectively. This expanded comparison aimed to analyze the strengths and weaknesses of each algorithm in different dimensional settings.

From Table 8, it is evident that CCHSMA consistently ranks at the top across all dimensions and function evaluation counts, demonstrating its robustness. Figure 6 illustrates that CCHSMA performs well throughout the entire process on the F1 test function. While some algorithms, such as GOTLBO, achieve faster convergence initially, CCHSMA excels in the final stages, overtaking the others. Additionally, some algorithms demonstrate a strong ability to catch up as the evaluation budget grows. For instance, OMGSCA, which ranked 11th at 300,000 evaluations, escapes local optima and rises to 4th place at 500,000 evaluations, highlighting its capability to improve with extended budgets. Figure 7 shows that CCHSMA excels particularly in achieving convergence, with its advantages becoming more apparent as the evaluation count increases; similarly, OMGSCA's performance improves with more evaluations.

Table 8 Friedman test of CCHSMA, XMACO, LXMWOA, SCBA, QCSCA, OMGSCA, RDWOA, GBHHO, EOBLSSA, GOTLBO and ALCPSO on CEC2017 benchmark functions
Fig. 6 Convergence curves of each algorithm for different dimensions and numbers of function evaluations on the F1 function

Fig. 7 Convergence curves of each algorithm for different dimensions and numbers of function evaluations on the F18 function

Engineering problems

In engineering, achieving accuracy and cost-effectiveness often begins with developing a precise mathematical model of the product. This section applies the CCHSMA to solve three engineering design problems: TCSD, PVD, and TBTD. These simulations incorporate a variety of constraints, including maximum-minimum, equation, boundary, and others, to find the optimal solution that meets the specified requirements, ensuring the practicality of the results. CCHSMA is compared against other advanced meta-heuristic algorithms like SCADE, ALCPSO, BA, GWO, EGOA, ISSWOA [51], FDA [78], EWOA [79], IGWO [80], HHO-PS [81], hHHO-SCA [82], QL-ADIFA [83], ASOINU [84], and MSFWA [85].

Tension/Compression Spring Design (TCSD)

The objective in spring design encompasses meeting criteria for structure, performance, strength, and axial stability. The goal is to create springs that withstand significant working loads while maintaining their integrity under specified operational conditions, particularly in high-pressure scenarios. Adapting the design to accommodate different materials and wire diameters is crucial for handling various force levels. In the TCSD problem, the primary objective is to minimize the weight of the spring.

The design parameters include the wire diameter (\(d\)), center diameter (\(D\)), number of active coils (\(n\)), pitch (\(p\)), and height (\(H\)). The challenge lies in minimizing the spring weight while ensuring its functional efficacy. This involves optimizing a set of variables related to these parameters:

$${\text{Consider}}\,\vec{y} = \left[ {y_{1} { }y_{2} { }y_{3} } \right] = \left[ {d{ }D{ }n} \right]$$
(38)

The mathematical formulation of TCSD is as follows.

$${\text{Minimize}}\,f\left( {\vec{y}} \right) = (y_{3} + 2)y_{2} y_{1}^{2}$$
(39)
$${\text{Subject to}}\,h_{1} \left( {\vec{y}} \right) = 1 - \frac{{y_{2}^{3} y_{3} }}{{71785y_{1}^{4} }} \le 0$$
(40)
$$h_{2} \left( {\vec{y}} \right) = \frac{{4y_{2}^{2} - y_{1} y_{2} }}{{12566\left( {y_{2} y_{1}^{3} - y_{1}^{4} } \right)}} + \frac{1}{{5108y_{1}^{2} }} - 1 \le 0$$
(41)
$$h_{3} \left( {\vec{y}} \right) = 1 - \frac{{140.45y_{1} }}{{y_{2}^{3} y_{3} }} \le 0$$
(42)
$$h_{4} \left( {\vec{y}} \right) = \frac{{y_{1} + y_{2} }}{1.5} - 1 \le 0$$
(43)
$${\text{Variable range}}\quad 0.05 \le y_{1} \le 2.00,\,0.25 \le y_{2} \le 1.30,\,2.00 \le y_{3} \le 15.0$$
(44)

Equations (38) to (44) capture the complex interplay between the objective function, constraints, and variables in the TCSD problem. A comprehensive analysis and ranking within the same experimental framework were conducted.
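To illustrate how an optimizer consumes this model, the sketch below evaluates the TCSD objective with a simple static penalty for constraint violations; the penalty weight is our choice, not a setting from the paper.

```python
import numpy as np

def tcsd_penalized(y, penalty=1e6):
    """TCSD objective (Eq. 39) plus a static penalty on Eqs. (40)-(43)."""
    d, D, n = y                    # wire diameter, center diameter, active coils
    f = (n + 2) * D * d ** 2       # Eq. (39): spring weight
    g = [
        1 - D ** 3 * n / (71785 * d ** 4),                          # Eq. (40)
        (4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4))
        + 1 / (5108 * d ** 2) - 1,                                  # Eq. (41)
        1 - 140.45 * d / (D ** 3 * n),                              # Eq. (42)
        (d + D) / 1.5 - 1,                                          # Eq. (43)
    ]
    return f + penalty * sum(max(0.0, gi) ** 2 for gi in g)

# Evaluating the reported optimum reproduces f ~ 0.012665
print(tcsd_penalized(np.array([0.051689061, 0.35671773, 11.28896601])))
```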

Table 9 showcases the superior performance of CCHSMA in resolving the TCSD challenge. The optimal solution obtained by CCHSMA is 0.012665233, with the optimal parameters being 0.051689061 for the wire diameter (\(d\)), 0.35671773 for the center diameter (\(D\)), and 11.28896601 for the number of active coils (\(n\)). These results highlight CCHSMA's exceptional global search capabilities in complex optimization scenarios, such as the TCSD problem.

Table 9 Comparison results of the TCSD problem

Pressure Vessel Design (PVD)

Pressure vessels are critical in both industrial production and daily life. In their design phase, factors like material performance, force characteristics, and structural connections are considered to optimize costs, enhance quality, and minimize maintenance expenses. The goal is to minimize the total cost of cylindrical pressure vessel components. The key variables affecting cost include shell thickness (\(S\)), head thickness (\(H\)), inner radius (\(R\)), and length of the cylindrical section without the head (\(CH\)).

The mathematical model for the Pressure Vessel Design (PVD) problem is expressed as follows:

$${\text{Consider}}\,\vec{y} = \left[ {y_{1} { }y_{2} { }y_{3} { }y_{4} } \right] = \left[ {S{ }H{ }R{ }CH} \right]$$
(45)
$${\text{Objective}}:\,f\left( {\vec{y}} \right)_{min} = 0.6224y_{1} y_{3} y_{4} + 1.7781y_{2} y_{3}^{2} + 3.1661y_{4} y_{1}^{2} + 19.84y_{3} y_{1}^{2}$$
(46)

The restrictions are as follows:

$${\text{Subject to}}\,g_{1} \left( {\vec{y}} \right) = - y_{1} + 0.0193y_{3} \le 0,$$
(47)
$$g_{2} \left( {\vec{y}} \right) = - y_{2} + 0.00954y_{3} \le 0,$$
(48)
$$g_{3} \left( {\vec{y}} \right) = - \pi y_{4} y_{3}^{2} - \frac{4}{3}\pi y_{3}^{3} + 1296000 \le 0$$
(49)
$$g_{4} \left( {\vec{y}} \right) = y_{4} - 240 \le 0,$$
(50)

The variables are restricted to:

$$0 \le y_{1} \le 99,$$
(51)
$$0 \le y_{2} \le 99,$$
(52)
$$10 \le y_{3} \le 200,$$
(53)
$$10 \le y_{4} \le 200.$$
(54)
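Analogously to the spring case, a minimal sketch of the PVD cost with a static penalty follows; again the penalty weight is an assumption of ours.

```python
import math

def pvd_penalized(y, penalty=1e6):
    """PVD cost (Eq. 46) plus a static penalty on Eqs. (47)-(50)."""
    S, H, R, CH = y
    f = (0.6224 * S * R * CH + 1.7781 * H * R ** 2
         + 3.1661 * S ** 2 * CH + 19.84 * S ** 2 * R)               # Eq. (46)
    g = [
        -S + 0.0193 * R,                                            # Eq. (47)
        -H + 0.00954 * R,                                           # Eq. (48)
        -math.pi * CH * R ** 2 - 4.0 / 3.0 * math.pi * R ** 3
        + 1296000,                                                  # Eq. (49)
        CH - 240,                                                   # Eq. (50)
    ]
    return f + penalty * sum(max(0.0, gi) ** 2 for gi in g)

# Evaluating the reported optimum reproduces a cost of ~6059.71
print(pvd_penalized((0.8125, 0.4375, 42.09844474, 176.6366069)))
```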

Using a similar analytical approach as in previous problems, this study compares the enhanced CCHSMA with other notable meta-heuristic algorithms. After rigorous testing for reliability, the results, presented in Table 10, demonstrate CCHSMA's superior performance in solving the PVD problem.

Table 10 Results of CCHSMA versus peers in literature for PVD case

The optimal solution obtained by CCHSMA is 6059.71445, with the variables shell thickness (\(S\)), head thickness (\(H\)), inner radius (\(R\)), and length of the cylindrical section (\(CH\)) being 0.8125, 0.4375, 42.09844474, and 176.6366069, respectively. These results highlight CCHSMA's robust local convergence capabilities in addressing this specific type of engineering problem.

Three-Bar Truss Design (TBTD)

The TBTD problem is a significant structural optimization challenge in engineering. Its primary objective is to minimize the weight of the structure while complying with constraints related to stress, deflection, and buckling. A distinctive aspect of this problem is that the two outer bars must be identical, so the design reduces to two variables, \({l}_{1}\) and \({l}_{2}\), the cross-sectional areas of the outer and middle bars.

The mathematical formulation for the TBTD problem is as follows:

$${\text{Consider}}\,\vec{y} = \left[ {y_{1} \;y_{2} } \right] = \left[ {l_{1} \;l_{2} } \right]$$
(55)

Objective function:

$$f\left( {\vec{y}} \right) = \left( {2\sqrt 2 y_{1} + y_{2} } \right) \times l$$
(56)

Subject to:

$$g_{1} \left( y \right) = \frac{{\sqrt 2 y_{1} + y_{2} }}{{\sqrt 2 y_{1}^{2} + 2y_{1} y_{2} }}P - \sigma \le 0$$
(57)
$$g_{2} \left( y \right) = \frac{{y_{2} }}{{\sqrt 2 y_{1}^{2} + 2y_{1} y_{2} }}P - \sigma \le 0$$
(58)
$$g_{3} \left( y \right) = \frac{1}{{\sqrt 2 y_{2} + y_{1} }}P - \sigma \le 0$$
(59)

where:

$$0 \le y_{i} \le 1,\quad i = 1,\,2$$
(60)
$$l = 100\,{\text{cm}},\quad P = 2\,{\text{kN/cm}}^{2} ,\quad \sigma = 2\,{\text{kN/cm}}^{2}$$
(61)
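For completeness, the same penalty treatment applies to the truss model; as before, the penalty weight is our assumption.

```python
import math

def tbtd_penalized(y, penalty=1e6, l=100.0, P=2.0, sigma=2.0):
    """TBTD weight (Eq. 56) plus a static penalty on Eqs. (57)-(59)."""
    y1, y2 = y
    f = (2 * math.sqrt(2) * y1 + y2) * l                            # Eq. (56)
    den = math.sqrt(2) * y1 ** 2 + 2 * y1 * y2
    g = [
        (math.sqrt(2) * y1 + y2) / den * P - sigma,                 # Eq. (57)
        y2 / den * P - sigma,                                       # Eq. (58)
        P / (math.sqrt(2) * y2 + y1) - sigma,                       # Eq. (59)
    ]
    return f + penalty * sum(max(0.0, gi) ** 2 for gi in g)

# Evaluating the reported optimum reproduces a weight of ~263.8958
print(tbtd_penalized((0.788675137, 0.408248283)))
```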

In the analysis of this design problem, a comparison of various swarm intelligence algorithms indicates that the CCHSMA stands out as the most optimal choice, delivering refined results. As shown in Table 11, the optimal solution achieved by CCHSMA is 263.8958434, with the corresponding variable values for \({l}_{1}\) and \({l}_{2}\) being 0.788675137 and 0.408248283, respectively. This comparison demonstrates that CCHSMA possesses robust local search capabilities, making it highly effective for this specific type of engineering problem.

Table 11 Comparison results of three-bar truss design problem between CCHSMA and other approaches

Conclusions

We introduce a novel variant of the SMA, termed CCHSMA, which exhibits robust capabilities for efficient search and controlled step size modulation, significantly improving its local search proficiency. The integration of the CLS strategy during initialization accelerates the identification of well-adapted populations, thereby enhancing search efficiency. Furthermore, the incorporation of an enhanced HHO mechanism bolsters global search capabilities. Additionally, the CMA mechanism is employed to calculate the covariance matrix, controlling step size and search direction, thereby aiding the algorithm in avoiding local optima. Results from IEEE CEC2017 benchmark functions underscore the robust global optimization potential of CCHSMA.

In engineering design optimization, CCHSMA has been applied to problems like Tension/Compression Spring Design, Pressure Vessel Design, and Three-Bar Truss Design. Mathematical models were developed for these cases, and experimental results demonstrate CCHSMA's proficiency in designing products with enhanced precision by effectively utilizing constraint information. This results in improved product quality, enhanced performance, and reduced design cycles. Comparative analyses with other advanced algorithms affirm the robustness and effectiveness of CCHSMA, consistently achieving superior results across various problem types.

Although the current experiments achieve theoretical optima in some cases, deviations still exist in specific scenarios. The problem of local optimality has been mitigated rather than completely resolved, and global search capability can still be enhanced. Future research will categorize different classes of engineering problems to explore varied applications and evaluate the impact of distinct mechanisms on engineering results. The aim is to further understand and enhance CCHSMA's adaptability and performance across a wide range of engineering challenges. Furthermore, integrating single-objective swarm intelligence optimization algorithms with multi-objective strategies or deep learning frameworks could address a wider range of engineering application problems.

Availability of data and materials

No datasets were generated or analysed during the current study.

References

  1. Pucker T, Grabe J. Structural optimization in geotechnical engineering: basics and application. Acta Geotech. 2011;6(1):41–9.

    Article  Google Scholar 

  2. Hsu YL, Liu TC. Developing a fuzzy proportional-derivative controller optimization engine for engineering design optimization problems. Eng Optim. 2007;39(6):679–700.

    Article  MathSciNet  Google Scholar 

  3. Herskovits J, et al. Mathematical programming models and algorithms for engineering design optimization. Comput Methods Appl Mech Eng. 2005;194(30–33):3244–68.

    Article  MathSciNet  Google Scholar 

  4. Sanchis J, et al. A new perspective on multiobjective optimization by enhanced normalized normal constraint method. Struct Multidiscip Optim. 2008;36(5):537–46.

    Article  Google Scholar 

  5. Esche E, et al. MOSAIC - enabling large-scale equation-based flow sheet optimization. Chem Ing Tec. 2017;89(5):620–35.

    Article  Google Scholar 

  6. Li ZJ, et al. A generic cloud platform for engineering optimization based on OpenStack. Adv Eng Softw. 2014;75:42–57.

    Article  Google Scholar 

  7. Chen L, et al. Energy-saving access point configurations in WLANs: a swarm intelligent approach. J Supercomput. 2023;79(17):19332–64.

    Article  Google Scholar 

  8. Yue YG, et al. Improved whale optimization algorithm and its application in heterogeneous wireless sensor networks. Int J Distrib Sens Netw. 2021;17(5):15501477211018140.

    Article  Google Scholar 

  9. Shi MR, et al. A grade-based search adaptive random slime mould optimizer for lupus nephritis image segmentation. Comput Biol Med. 2023;160:106950.

    Article  Google Scholar 

  10. Lakhan A, et al. Efficient deep-reinforcement learning aware resource allocation in SDN-enabled fog paradigm. Autom Softw Eng. 2022;29(1):1.

    Article  Google Scholar 

  11. Lakhan A, et al. Blockchain multi-objective optimization approach-enabled secure and cost-efficient scheduling for the Internet of Medical Things (IoMT) in fog-cloud system. Soft Comput. 2022;26(13):6429–42.

    Article  Google Scholar 

  12. Lakhan A, et al. Federated learning-aware multi-objective modeling and blockchain-enable system for IIoT applications. Comput Electr Eng. 2022;100:107839.

    Article  Google Scholar 

  13. Dong R, et al. Boosted kernel search: framework, analysis and case studies on the economic emission dispatch problem. Knowl-Based Syst. 2021;233:107529.

    Article  Google Scholar 

  14. Dong R, et al. Boosting Kernel search optimizer with slime mould foraging behavior for combined economic emission dispatch problems. J Bionic Eng. 2023. https://doi.org/10.1007/s42235-023-00408-z.

    Article  Google Scholar 

  15. Liang X, et al. Chaotic oppositional sine–cosine method for solving global optimization problems. Eng Comput. 2020. https://doi.org/10.1007/s00366-020-01083-y.

    Article  Google Scholar 

  16. Yu H, et al. Dynamic Gaussian bare-bones fruit fly optimizers with abandonment mechanism: method and analysis. Eng Comput. 2020. https://doi.org/10.1007/s00366-020-01174-w.

    Article  Google Scholar 

  17. Nautiyal B, et al. Improved Salp Swarm Algorithm with mutation schemes for solving global optimization and engineering problems. Eng Comput. 2021. https://doi.org/10.1007/s00366-020-01252-z.

    Article  Google Scholar 

  18. Kashani AR, et al. Population-based optimization in structural engineering: a review. Artif Intell Rev. 2022;55(1):345–452.

    Article  Google Scholar 

  19. Singh HK. Development of optimization methods to deal with current challenges in engineering design optimization. AI Commun. 2016;29(1):219–21.

    Article  Google Scholar 

  20. Nasir M, et al. A comprehensive review on water cycle algorithm and its applications. Neural Comput Appl. 2020;32(23):17433–88.

    Article  Google Scholar 

  21. Abualigah L, et al. Meta-heuristic optimization algorithms for solving real-world mechanical engineering design problems: a comprehensive survey, applications, comparative analysis, and results. Neural Comput Appl. 2022;34(6):4081–110.

    Article  Google Scholar 

  22. Chen H, et al. Slime mould algorithm: a comprehensive review of recent variants and applications. Int J Syst Sci. 2022. https://doi.org/10.1080/00207721.2022.2153635.

    Article  Google Scholar 

  23. Li S, et al. Slime mould algorithm: a new method for stochastic optimization. Futur Gener Comput Syst. 2020;111:300–23.

    Article  Google Scholar 

  24. Kennedy J, Eberhart R. Particle swarm optimization. In Proceedings of ICNN'95-international conference on neural networks. 1995. IEEE.

  25. Mirjalili S, et al. Salp swarm algorithm: a bio-inspired optimizer for engineering design problems. Adv Eng Softw. 2017;114:163–91.

    Article  Google Scholar 

  26. Yang XS. Bat algorithm for multi-objective optimisation. Int J Bio-Inspired Comput. 2011;3(5):267–74.

  27. Su H, et al. RIME: a physics-based optimization. Neurocomputing. 2023. https://doi.org/10.1016/j.neucom.2023.02.010.

  28. Karaboga D, Basturk B. A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. J Global Optim. 2007;39(3):459–71.

  29. Yang X-S. Firefly algorithms for multimodal optimization. Berlin, Heidelberg: Springer; 2009.

  30. Tu J, et al. The colony predation algorithm. J Bionic Eng. 2021;18(3):674–710.

  31. Das S, et al. Differential evolution using a neighborhood-based mutation operator. IEEE Trans Evol Comput. 2009;13(3):526–53.

  32. Heidari AA, et al. Harris hawks optimization: algorithm and applications. Futur Gener Comput Syst. 2019;97:849–72.

  33. Ahmadianfar I, et al. INFO: an efficient optimization algorithm based on weighted mean of vectors. Expert Syst Appl. 2022;195:116516.

  34. Mirjalili S, Mirjalili SM, Lewis A. Grey wolf optimizer. Adv Eng Softw. 2014;69:46–61.

  35. Yuan C, et al. Polar lights optimizer: algorithm and applications in image segmentation and feature selection. Neurocomputing. 2024;607:128427.

  36. Yang Y, et al. Hunger games search: visions, conception, implementation, deep analysis, perspectives, and towards performance shifts. Expert Syst Appl. 2021;177:114864.

  37. Ahmadianfar I, et al. RUN beyond the metaphor: an efficient optimization algorithm based on Runge Kutta method. Expert Syst Appl. 2021;181:115079.

  38. Houssein EH, et al. Liver cancer algorithm: a novel bio-inspired optimizer. Comput Biol Med. 2023;165:107389.

  39. Yuan C, et al. Artemisinin optimization based on malaria therapy: algorithm and applications to medical image segmentation. Displays. 2024;84:102740.

  40. Qi A, et al. FATA: an efficient optimization method based on geophysics. Neurocomputing. 2024;607:128289.

  41. Lian J, et al. Parrot optimizer: algorithm and applications to medical problems. Comput Biol Med. 2024;172:108064.

  42. Lian J, et al. The educational competition optimizer. Int J Syst Sci. 2024. https://doi.org/10.1080/00207721.2024.2367079.

  43. Kaveh A, Dadras A. A novel meta-heuristic optimization algorithm: thermal exchange optimization. Adv Eng Softw. 2017;110:69–84.

  44. Wei Y, et al. Predicting entrepreneurial intention of students: an extreme learning machine with Gaussian barebone Harris hawks optimizer. IEEE Access. 2020;8:76841–55.

  45. Chen WN, et al. Particle swarm optimization with an aging leader and challengers. IEEE Trans Evol Comput. 2013;17(2):241–58.

  46. Alambeigi F, et al. SCADE: simultaneous sensor calibration and deformation estimation of FBG-equipped unmodeled continuum manipulators. IEEE Trans Rob. 2020;36(1):222–39.

  47. Feng HW, et al. An enhanced grasshopper optimization algorithm to the bin packing problem. J Control Sci Eng. 2020. https://doi.org/10.1155/2020/3894987.

  48. Sesok D, et al. Application of GRID computing for optimization of grillages. Mechanika. 2010;2:63–9.

  49. Akay B, Karaboga D. Artificial bee colony algorithm for large-scale problems and engineering design optimization. J Intell Manuf. 2012;23(4):1001–14.

  50. Wang GH, Yuan YL, Guo WW. An improved rider optimization algorithm for solving engineering optimization problems. IEEE Access. 2019;7:80570–6.

  51. Zhang JH, et al. ISSWOA: hybrid algorithm for function optimization and engineering problems. J Supercomput. 2022;79:8789.

  52. Arora S, Singh S, Yetilmezsoy K. A modified butterfly optimization algorithm for mechanical design optimization problems. J Braz Soc Mech Sci Eng. 2018;40(1):1.

  53. Cheng MY, Prayogo D. A novel fuzzy adaptive teaching-learning-based optimization (FATLBO) for solving structural optimization problems. Eng Comput. 2017;33(1):55–69.

  54. Ye PC, Pan G. A novel sequential approximate optimization approach using data mining for engineering design optimization. Optim Methods Softw. 2015;30(6):1255–75.

  55. Galvan E, et al. Performance assessment of a multi-objective parametric optimization algorithm with application to a multi-physical engineering system. Struct Multidiscip Optim. 2018;58(2):489–509.

  56. Seyyedabbasi A, Kiani F. Sand Cat swarm optimization: a nature-inspired algorithm to solve global optimization problems. Eng Comput. 2023;39(4):2627–51.

  57. Yang Y, et al. An opposition learning and spiral modelling based arithmetic optimization algorithm for global continuous optimization problems. Eng Appl Artif Intell. 2022;113:104981.

  58. Hijjawi M, et al. Accelerated arithmetic optimization algorithm by cuckoo search for solving engineering design problems. Processes. 2023;11(5):1380.

  59. Hussien AG, et al. Novel memetic of beluga whale optimization with self-adaptive exploration-exploitation balance for global optimization and engineering problems. Soft Comput. 2023;27(19):13951–89.

  60. Zhang SW, et al. Improved honey badger algorithm based on elementary function density factors and mathematical spirals in polar coordinate systems. Artif Intell Rev. 2024;57(3):55.

  61. Sait SM, et al. Optimal design of structural engineering components using artificial neural network-assisted crayfish algorithm. Mater Test. 2024. https://doi.org/10.1515/mt-2024-0075.

  62. Mostafa M, et al. A new strategy based on slime mould algorithm to extract the optimal model parameters of solar PV panel. Sustain Energy Technol Assess. 2020;42:100849.

  63. Gürses D, et al. Comparison of the arithmetic optimization algorithm, the slime mold optimization algorithm, the marine predators algorithm, the salp swarm algorithm for real-world engineering applications. Mater Test. 2021;63(5):448–52.

  64. Li L, et al. A novel binary slime mould algorithm with AU strategy for cognitive radio spectrum allocation. Int J Comput Intell Syst. 2021;14(1):161.

  65. AlRassas AM, et al. Advance artificial time series forecasting model for oil production using neuro fuzzy-based slime mould algorithm. J Pet Explor Prod Technol. 2022;12(2):383–95.

  66. Agarwal D, Bharti PS. Implementing modified swarm intelligence algorithm based on Slime moulds for path planning and obstacle avoidance problem in mobile robots. Appl Soft Comput. 2021;107:107372.

  67. Pawani K, Singh M. Combined heat and power dispatch problem using comprehensive learning wavelet-mutated slime mould algorithm. Electr Power Compon Syst. 2023;51(1):12–28.

  68. Peng D, Gao YW. Localization algorithm for anisotropic wireless sensor networks based on the adaptive chaotic slime mold algorithm. Neural Comput Appl. 2023;35(36):25291–306.

  69. Hansen N, Ostermeier A. Completely derandomized self-adaptation in evolution strategies. Evol Comput. 2001;9(2):159–95.

  70. Chen H, et al. Advanced orthogonal learning-driven multi-swarm sine cosine optimization: framework and case studies. Expert Syst Appl. 2020;144:113113.

  71. Chen HL, et al. An efficient double adaptive random spare reinforced whale optimization algorithm. Expert Syst Appl. 2020;154:113018.

  72. Ma B, et al. Enhanced sparrow search algorithm with mutation strategy for global optimization. IEEE Access. 2021;9:159218–61.

  73. Chen X, et al. Parameters identification of solar cell models using generalized oppositional teaching learning based optimization. Energy. 2016;99:170–80.

  74. Qi AL, et al. Directional mutation and crossover boosted ant colony optimization with application to COVID-19 X-ray image segmentation. Comput Biol Med. 2022;148:105810.

  75. Qi AL, et al. Directional mutation and crossover for immature performance of whale algorithm with application to engineering optimization. J Comput Des Eng. 2022;9(2):519–63.

  76. Hu HY, et al. Horizontal and vertical crossover of sine cosine algorithm with quick moves for optimization and feature selection. J Comput Des Eng. 2022;9(6):2524–55.

  77. Su H, et al. Kernel extreme learning with harmonized bat algorithm for prediction of pyrene toxicity in rats. Basic Clin Pharmacol Toxicol. 2024;134(2):250–71.

  78. Karami H, et al. Flow Direction Algorithm (FDA): a novel optimization approach for solving optimization problems. Comput Ind Eng. 2021;156:107224.

  79. Tu JZ, et al. Evolutionary biogeography-based whale optimization methods with communication structure: towards measuring the balance. Knowl-Based Syst. 2021;212:106642.

  80. Li Y, Lin X, Liu J. An improved gray wolf optimization algorithm to solve engineering problems. Sustainability. 2021;13(6):3208.

  81. Krishna AB, Saxena S, Kamboj VK. A novel statistical approach to numerical and multidisciplinary design optimization problems using pattern search inspired Harris hawks optimizer. Neural Comput Appl. 2021;33(12):7031–72.

  82. Kamboj VK, et al. An intensify Harris Hawks optimizer for numerical and engineering optimization problems. Appl Soft Comput. 2020;89:106018.

  83. Tan S, Zhao SR, Wu JR. QL-ADIFA: hybrid optimization using Q-learning and an adaptive logarithmic spiral-levy firefly algorithm. Math Biosci Eng. 2023;20(8):13542–61.

  84. Sun P, et al. An intensify atom search optimization for engineering design problems. Appl Math Model. 2021;89:837–59.

  85. Han X, et al. Efficient hybrid algorithm based on moth search and fireworks algorithm for solving numerical and constrained engineering optimization problems. J Supercomput. 2020;76(12):9404–29.


Acknowledgements

We thank the reviewers for their efforts on this manuscript.

Funding

This work was supported in part by the Natural Science Foundation of Zhejiang Province (LZ22F020005), the National Natural Science Foundation of China (62076185), the Wenzhou Major Scientific & Technological Innovation Project (ZF2024004), and the Zhejiang Provincial Education Department under Grant No. jg20230057.

Author information

Contributions

Qibo Sun: writing—original draft, writing—review & editing, software, visualization, investigation. Chaofan Wang: writing—original draft, writing—review & editing, software, visualization, investigation. Yi Chen: writing—original draft, writing—review & editing, software, visualization, investigation. Ali Asghar Heidari: writing—review & editing, software, visualization, investigation. Huiling Chen: conceptualization, methodology, formal analysis, investigation, writing—review & editing, funding acquisition, supervision, project administration. Guoxi Liang: conceptualization, methodology, formal analysis, investigation, writing—review & editing, funding acquisition, supervision, project administration.

Corresponding authors

Correspondence to Huiling Chen or Guoxi Liang.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

All authors have read the journal's policies and submit this manuscript in accordance with them.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.

About this article

Cite this article

Sun, Q., Wang, C., Chen, Y. et al. An efficient weighted slime mould algorithm for engineering optimization. J Big Data 11, 141 (2024). https://doi.org/10.1186/s40537-024-01000-w

Keywords