The primary intention of optimization is to satisfy the given decision constraints and identify the optimal solution. When solving large-scale and complex combinatorial optimization problems, conventional techniques present inherent weaknesses, such as low computational productivity, poor optimization precision, premature convergence, exponentially increasing running time and combinatorial explosion. In contrast, evolutionary algorithms not only have simple framework operations, good parallelism, strong robustness, easy expansion, strong self-organization, high calculation accuracy and strong stability but can also effectively realize complementary advantages between algorithms or optimization strategies to acquire accurate global values. Representative evolutionary algorithms include the walrus optimizer (WO)1, hippopotamus optimization (HO)2, sand cat swarm optimization (SCSO)3, Newton‒Raphson-based optimizer (NRBO)4, black-winged kite algorithm (BKA)5, linear population size reduction-success-history adaptation for differential evolution (L-SHADE)6, ensemble sinusoidal incorporated with L-SHADE (LSHADE-EpSin)7, covariance matrix adaptation evolution strategy (CMA-ES)8, and golden jackal optimization (GJO)9.

Mohapatra et al. utilized a modified GJO to accomplish the functions and engineering designs; this methodology exhibited remarkable reliability and dependability in delivering the solution10. Zhang et al. introduced an enhanced GJO to accomplish an infinite impulse response network; this methodology achieved greater computational efficiency and greater recognition precision11. Zhang et al. designed a revised GJO to segment images, and this methodology displayed good robustness and adaptability to prevent search stagnation and increase segmentation quality12. Hanafi et al. integrated a binary GJO to accomplish intrusion detection in the Internet of Things; this methodology achieved excellent superiority and practically acquired the maximum percentage accuracy and the finest detection effect13. Ghandourah et al. created a GJO to anticipate the thermal properties of solar stills; this methodology deployed extensive investigation and extraction to transmit a higher prediction accuracy and faster calculation efficiency14. Wang et al. mentioned a reinforced GJO to segment COVID-19 images; this methodology revealed good durability and feasibility in promoting the computation rate and segmentation precision15. Das et al. deployed the GJO to predict feature selection of software failure; this methodology has substantial relevance and dependability to achieve high classification precision and optimization efficiency16. Snášel et al. introduced an elite-opposition GJO to accomplish multiobjective engineering issues; this methodology exhibited certain superiority and resiliency in generating globally accurate values17. Houssein et al. devised a modified GJO to segment skin cancer images; this methodology has strong feasibility and practicality18. Zhang et al. utilized binary GJO with a stochastic canvas map and cosine similarity for selecting features, and this methodology demonstrated substantial superiority in terms of the convergence rate and calculation accuracy19. Lu et al. proposed a refined GJO and random configuration network to accomplish fault diagnosis of power transformers; this methodology provided excellent reliability and exploitation to increase the optimization accuracy and calculation efficiency20. Nanda et al. utilized an altered GJO based on the sine cosine algorithm and adopted a scaling factor to design an adaptive fuzzy PIDF controller; this methodology showed significant robustness and superiority to create better control parameters21. Wang et al. provided an adaptive GJO to identify abnormal user behaviour; this methodology exhibited great dependability and superiority in achieving the optimal parameters and the best actual value22. Yang et al. mentioned an upgraded GJO to achieve the finest distribution of parking locations for wind turbines and electric automobiles; this methodology had excellent predictability for decreasing grid energy losses and minimizing the objective value23. Najjar et al. combined GJO with a long short-term memory model to forecast the tribological properties of alumina-coated alumina; this methodology exhibited exceptional durability and superiority to acquire the ideal optimization solution24. Mahdy et al. utilized the GJO to design an integrated wave electricity and photovoltaic system supplying turbocharging stations; this methodology displayed good reliability and stability to model and optimize the problem25. Wang et al. 
described a customized GJO to segment aerial images; this methodology exhibited extensive investigation and exploitation to generate a greater computation rate and superior segmentation quality26. Wang et al. mentioned multistrategy GJO to accomplish function optimization and engineering design; this methodology exhibited strong global detection ability to avoid search stagnation and determine the best solution27. Zhang et al. deployed GJO with a lateral inhibition strategy to accomplish image matching; this methodology has strong reliability and dependability to achieve an accurate registration rate and superior exploration accuracy28. Sundar Ganesh et al. released a modified GJO to accomplish photovoltaic parameter estimation; this methodology utilized exploration and exploitation to increase computational efficiency and determine the optimal parameter29. Bai et al. constructed an enhanced GJO to accomplish function optimization and engineering design; this methodology exhibited remarkable superiority and reliability in efficiently completing a global search and achieved the best convergence accuracy30. Zhong et al. delivered a multiobjective GJO to resolve dynamic economic emission dispatch; this methodology featured strong superiority and robustness to complete the optimal scheduling solution31. Elhoseny et al. implemented a modified multistrategy GJO to accomplish function optimization and engineering design; this methodology exhibited strong adaptability and robustness to increase the solution efficiency and convergence accuracy32. Alharthi et al. introduced a modified GJO with chaotic maps to accomplish chemical data classification; this methodology exhibited superior evaluation efficiency and classification accuracy33. Li et al. integrated a cross-mutation GJO to accomplish function optimization and engineering design; this methodology exhibited strong effectiveness and feasibility to achieve a quicker convergence rate and greater computational precision34. In summary, research on the GJO has focused mainly on algorithm improvement and application. (1) The modified GJO uses efficient exploration strategies, effective encoding forms or hybrid optimization methods to realize supplementary advantages and enhance the optimization efficiency, which are applied to achieve function optimization and engineering design. These modified algorithms can effectively avoid search stagnation and promote solution efficiency, which balances exploration and exploitation to improve the convergence speed and calculation accuracy. (2) The modified GJO exhibits strong stability, robustness, feasibility, scalability, and parallelism to solve various large-scale and complex frontier problems, such as artificial intelligence, systems control, pattern recognition, engineering technology and network communication. The modified GJO method exhibits strong adaptability and robustness to promote computational precision and determine the best solution.

Although the above modified versions of the original GJO have enhanced the convergence speed and calculation accuracy to a certain extent, they still cannot efficiently achieve a balance between global exploration and local exploitation to avoid search stagnation and determine the best solution. The no-free-lunch (NFL) theorem states that there is no specific optimization algorithm that can resolve all optimization issues. More advanced and superior algorithms will continue to emerge in the improvement and application of the GJO, which motivates us to establish a novel CGJO for function optimization and engineering design. The complex-valued encoding mechanism is introduced into the basic GJO to encode the real and imaginary portions of the golden jackal and renew the position information, which increases population diversity, restricts search stagnation, expands the exploration area, promotes information exchange, fosters collaboration efficiency and improves convergence accuracy. The main contributions can be summarized as follows: (1) Complex-valued encoding golden jackal optimization (CGJO) is proposed to resolve the global optimization problem. (2) The complex-valued encoding mechanism increases population diversity, restricts search stagnation, expands the exploration area, promotes information exchange, fosters collaboration efficiency and improves convergence accuracy. (3) CGJO is compared with various optimization algorithms, including recently published, highly cited, and highly performing algorithms. (4) CGJO is tested against the CEC 2022 test suite and six real-world engineering designs by performing simulation experiments and analysing the results. (5) CGJO exhibits strong effectiveness and feasibility and outperforms the other algorithms.

The remainder of this article is organized as follows. Section “Golden jackal optimization” reviews the GJO. Section “Complex-valued encoding golden jackal optimization” describes the CGJO. Section “Simulation evaluation and result analysis” presents the comparative experiments and result analysis on the CEC 2022 test suite. Section “CGJO for engineering design” presents the experiments and result analysis for the engineering design problems. Section “Conclusions and future research” summarizes the findings, research limitations, and recommendations for future research.

Golden jackal optimization

The jackal collaborative hunting procedure is depicted in Fig. 1.

Fig. 1

(A) Pair of golden jackals; (B) Foraging prey; (C) Trespassing and encircling prey; (D,E) Trapping prey.

Search domain

In GJO, the jackal population is initialized randomly, with the search agents distributed uniformly over the search domain. The initial position of each search agent is established as:

$$ Y_{0} = Y_{\min } + rand(Y_{\max } - Y_{\min } ) $$
(1)

where \(Y_{0}\) represents the initial population location of the golden jackal, \(rand \in [0,1]\) is a uniform stochastic value, and \(Y_{\min }\) and \(Y_{\max }\) represent the lower and upper boundaries, respectively.

The optimal and suboptimal search agents represent the jackal pair (male and female), and the prey matrix is established as:

$$ Prey = \left[ {\begin{array}{*{20}c} {Y_{1,1} } & {Y_{1,2} } & \cdots & {Y_{1,d} } \\ {Y_{2,1} } & {Y_{2,2} } & \cdots & {Y_{2,d} } \\ \vdots & \vdots & \vdots & \vdots \\ {Y_{n,1} } & {Y_{n,2} } & \cdots & {Y_{n,d} } \\ \end{array} } \right] $$
(2)

where \(Y_{i,j}\) represents the jth dimension of the ith prey, \(n\) represents the prey size, and \(d\) represents the problem dimension. The fitness matrix \(F_{OA}\) is established as:

$$ F_{OA} = \left[ {\begin{array}{*{20}c} {f(Y_{1,1} ,Y_{1,2} ,...,Y_{1,d} )} \\ {f(Y_{2,1} ,Y_{2,2} ,...,Y_{2,d} )} \\ \vdots \\ {f(Y_{n,1} ,Y_{n,2} ,...,Y_{n,d} )} \\ \end{array} } \right] $$
(3)

where \(f\) portrays the fitness function. The jackal pair renews and captures the prey according to the corresponding prey position.
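To make Eqs. (1)–(3) concrete, the following minimal Python/NumPy sketch initializes a prey matrix and evaluates its fitness vector. The function and variable names (initialize_population, fitness_fn, f_oa) are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def initialize_population(n, d, y_min, y_max, fitness_fn, rng=None):
    """Random prey matrix (Eqs. 1-2) and its fitness vector (Eq. 3)."""
    rng = np.random.default_rng() if rng is None else rng
    # Eq. (1): Y0 = Ymin + rand * (Ymax - Ymin), drawn independently per dimension
    prey = y_min + rng.random((n, d)) * (y_max - y_min)
    # Eq. (3): evaluate the fitness of every row (prey) of the matrix
    f_oa = np.apply_along_axis(fitness_fn, 1, prey)
    return prey, f_oa

# Example: 50 jackals, 20 dimensions, sphere function as a stand-in objective
prey, f_oa = initialize_population(50, 20, -100.0, 100.0, lambda y: float(np.sum(y ** 2)))
```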

Foraging the prey (exploration)

Jackals utilize their distinctive predatory abilities to perceive, track and capture prey; however, the target occasionally cannot be caught quickly and escapes. Hence, the male jackal leads the female jackal to search for prey, thereby facilitating a more efficient hunting process. The positions are established as:

$$ Y_{1} (t) = Y_{M} (t) - E \cdot \left| {Y_{M} (t) - rl \cdot Prey(t)} \right| $$
(4)
$$ Y_{2} (t) = Y_{FM} (t) - E \cdot \left| {Y_{FM} (t) - rl \cdot Prey(t)} \right| $$
(5)

where \(t\) represents the current iteration, \(Prey(t)\) represents the prey location, \(Y_{M} (t)\) and \(Y_{FM} (t)\) represent the locations of the male and female jackals, and \(Y_{1} (t)\) and \(Y_{2} (t)\) represent the renewed locations of the male and female jackals, respectively.

The prey’s evading energy \(E\) is established as:

$$ E = E_{1} \cdot E_{0} $$
(6)

where \(E_{1}\) represents the decreasing energy of the prey, and \(E_{0}\) represents the initial energy of the prey.

$$ E_{0} = 2 \cdot r - 1 $$
(7)

where \(r\) is a stochastic value in [0,1].

$$ E_{1} = c_{1} \cdot \left( {1 - \frac{t}{T}} \right) $$
(8)

where \(T\) represents the maximum number of iterations, \(c_{1} = 1.5\), and \(E_{1}\) linearly decreases from 1.5 to 0.

The stochastic value \(rl\) based on the Levy distribution is established as:

$$ rl = 0.05 \cdot LF(y) $$
(9)

The Levy flight function \(LF\) is established as:

$$ LF(y) = \frac{0.01 \times (\mu \times \sigma )}{\left| v \right|^{1/\beta } };\quad \sigma = \left( {\frac{\Gamma (1 + \beta ) \times \sin (\pi \beta /2)}{\Gamma \left( {\frac{1 + \beta }{2}} \right) \times \beta \times 2^{\frac{\beta - 1}{2}} }} \right)^{1/\beta } $$
(10)

where \(\mu\) and \(v\) are stochastic values in (0,1), and \(\beta = 1.5\).
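The Levy-flight factor of Eqs. (9)–(10) can be sketched as follows. This is a hedged illustration with hypothetical helper names (levy_flight, rl_factor), using math.gamma for the Γ function.

```python
import numpy as np
from math import gamma, pi, sin

def levy_flight(dim, beta=1.5, rng=None):
    """Levy-flight step LF(y) of Eq. (10) for a vector of `dim` dimensions."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = ((gamma(1 + beta) * sin(pi * beta / 2)) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    mu = rng.random(dim)   # stochastic values in (0, 1)
    v = rng.random(dim)
    return 0.01 * (mu * sigma) / np.abs(v) ** (1 / beta)

def rl_factor(dim, rng=None):
    """Eq. (9): rl = 0.05 * LF(y)."""
    return 0.05 * levy_flight(dim, rng=rng)
```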

The jackal pair’s renewed position is established as:

$$ Y(t + 1) = \frac{{Y_{1} (t) + Y_{2} (t)}}{2} $$
(11)

Encircling and trapping the prey (exploitation)

The jackal can pounce, surround and devour the prey. The positions are established as follows:

$$ Y_{1} (t) = Y_{M} (t) - E \cdot \left| {rl \cdot Y_{M} (t) - Prey(t)} \right| $$
(12)
$$ Y_{2} (t) = Y_{FM} (t) - E \cdot \left| {rl \cdot Y_{FM} (t) - Prey(t)} \right| $$
(13)

where \(Prey(t)\) represents the prey location, \(Y_{M} (t)\) and \(Y_{FM} (t)\) represent the locations of the male and female jackals, respectively, and \(Y_{1} (t)\) and \(Y_{2} (t)\) represent the renewed locations of the male and female jackals, respectively. The renewed location is established as Eq. (11).
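A compact sketch of one GJO position update is given below, combining Eqs. (4)–(13) with Eq. (11) and reusing the rl_factor helper sketched after Eq. (10). Since Algorithm 1 is provided only as a figure, the rule that \(|E| > 1\) selects exploration and \(|E| \le 1\) selects exploitation follows the standard GJO description and is an assumption here.

```python
import numpy as np

def gjo_update(prey, y_m, y_fm, t, T, c1=1.5, rng=None):
    """One GJO position update over the prey matrix (Eqs. 4-13 and 11)."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = prey.shape
    new_prey = np.empty_like(prey)
    e1 = c1 * (1 - t / T)                          # Eq. (8): decreasing energy
    for i in range(n):
        e0 = 2 * rng.random(d) - 1                 # Eq. (7): initial energy
        e = e1 * e0                                # Eq. (6): evading energy
        rl = rl_factor(d, rng)                     # Eq. (9), helper sketched above
        explore = np.abs(e) > 1                    # assumed switch: |E| > 1 -> exploration
        y1 = np.where(explore,
                      y_m - e * np.abs(y_m - rl * prey[i]),    # Eq. (4)
                      y_m - e * np.abs(rl * y_m - prey[i]))    # Eq. (12)
        y2 = np.where(explore,
                      y_fm - e * np.abs(y_fm - rl * prey[i]),  # Eq. (5)
                      y_fm - e * np.abs(rl * y_fm - prey[i]))  # Eq. (13)
        new_prey[i] = (y1 + y2) / 2                # Eq. (11)
    return new_prey
```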

The pseudocode of GJO is listed in Algorithm 1.

Algorithm 1

GJO.

Complex-valued encoding golden jackal optimization

In natural ecosystems, the chromosomes of complex organisms are regularly composed of double or multiple strands. Complex-valued encoding mainly uses this diploid structure to describe an allele of a chromosome with two components and to update the real and imaginary portions of a position independently. This technique can enhance algorithmic parallelism, tap population diversity, avoid premature convergence, expand the feature space, improve the search efficiency and increase the information capacity. For a problem with \(M\) decision variables, the structure is established as35,36:

$$ Y_{p} = R_{p} + iI_{p} ,{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} p = 1,2,3,...,M $$
(14)

where \((R_{p} ,I_{p} )\) represents the gene of an organism with a diploid structure, and \(R_{p}\) and \(I_{p}\) represent the real and imaginary parts of the complex-valued encoding, respectively. The chromosome structure of the organism is outlined in Table 1.

Table 1 Chromosome model of the organism.

Initializing the complex-valued encoding population

Assume that the search domain is \(\left[ {A_{k} ,B_{k} } \right],k = 1,2,...,M\); then \(M\) moduli and \(M\) arguments are generated stochastically as:

$$ \rho_{k} \in \left[ {0,\frac{{B_{k} - A_{k} }}{2}} \right],{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} k = 1,2,...,M $$
(15)
$$ \theta_{k} \in \left[ { - 2\pi ,2\pi } \right],{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} k = 1,2,...,M $$
(16)

The \(M\) complex values are established as:

$$ Y_{Rk} + iY_{Ik} = \rho_{k} (\cos \theta_{k} + i\sin \theta_{k} ),{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} k = 1,2,...,M $$
(17)

CGJO then uses the \(M\) real parts and \(M\) imaginary parts to alter the jackal location.
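A minimal sketch of the complex-valued population initialization of Eqs. (15)–(17) follows; the helper name init_complex_population and the bounds format are illustrative assumptions.

```python
import numpy as np

def init_complex_population(n, bounds, rng=None):
    """Sketch of Eqs. (15)-(17): real/imaginary parts for n jackals.

    `bounds` is a list of (A_k, B_k) pairs, one per decision variable.
    """
    rng = np.random.default_rng() if rng is None else rng
    A = np.array([a for a, _ in bounds])
    B = np.array([b for _, b in bounds])
    m = len(bounds)
    # Eq. (15): moduli in [0, (B_k - A_k) / 2]
    rho = rng.random((n, m)) * (B - A) / 2
    # Eq. (16): arguments (phase angles) in [-2*pi, 2*pi]
    theta = rng.uniform(-2 * np.pi, 2 * np.pi, (n, m))
    # Eq. (17): real and imaginary parts of the complex-valued positions
    y_real = rho * np.cos(theta)
    y_imag = rho * np.sin(theta)
    return y_real, y_imag
```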

The altering methodology of CGJO

Foraging prey

  1. Alter the real portions:

    $$ Y_{1R} (t) = Y_{MR} (t) - E \cdot \left| {Y_{MR} (t) - rl \cdot Prey_{R} (t)} \right| $$
    (18)
    $$ Y_{2R} (t) = Y_{FMR} (t) - E \cdot \left| {Y_{FMR} (t) - rl \cdot Prey_{R} (t)} \right| $$
    (19)
    $$ Y_{R} (t + 1) = \frac{{Y_{1R} (t) + Y_{2R} (t)}}{2} $$
    (20)
  2. Alter the imaginary portions:

    $$ Y_{1I} (t) = Y_{MI} (t) - E \cdot \left| {Y_{MI} (t) - rl \cdot Prey_{I} (t)} \right| $$
    (21)
    $$ Y_{2I} (t) = Y_{FMI} (t) - E \cdot \left| {Y_{FMI} (t) - rl \cdot Prey_{I} (t)} \right| $$
    (22)
    $$ Y_{I} (t + 1) = \frac{{Y_{1I} (t) + Y_{2I} (t)}}{2} $$
    (23)

Encircling and trapping prey

  1. Alter the real portions:

    $$ Y_{1R} (t) = Y_{MR} (t) - E \cdot \left| {rl \cdot Y_{MR} (t) - Prey_{R} (t)} \right| $$
    (24)
    $$ Y_{2R} (t) = Y_{FMR} (t) - E \cdot \left| {rl \cdot Y_{FMR} (t) - Prey_{R} (t)} \right| $$
    (25)
    $$ Y_{R} (t + 1) = \frac{{Y_{1R} (t) + Y_{2R} (t)}}{2} $$
    (26)
  2. Alter the imaginary portions:

    $$ Y_{1I} (t) = Y_{MI} (t) - E \cdot \left| {rl \cdot Y_{MI} (t) - Prey_{I} (t)} \right| $$
    (27)
    $$ Y_{2I} (t) = Y_{FMI} (t) - E \cdot \left| {rl \cdot Y_{FMI} (t) - Prey_{I} (t)} \right| $$
    (28)
    $$ Y_{I} (t + 1) = \frac{{Y_{1I} (t) + Y_{2I} (t)}}{2} $$
    (29)

where \(Prey_{R}\) and \(Prey_{I}\) indicate the real and imaginary portions of the prey, respectively. \(Y_{R}\) and \(Y_{I}\) indicate the real and imaginary portions of the golden jackal, respectively.

The methodology for computing the fitness value

The fitness value of the complex value is established as:

$$ \rho_{k} = \sqrt {Y_{Rk}^{2} + Y_{Ik}^{2} } ,{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} k = 1,2,...,M $$
(30)
$$ Y_{k} = \rho_{k} {\text{sgn}} \left( {\sin \left( {\frac{{Y_{Ik} }}{{\rho_{k} }}} \right)} \right) + \frac{{B_{k} + A_{k} }}{2},{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} k = 1,2,...,M $$
(31)

where \(Y_{k}\) represents the converted real-valued variable.
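The conversion of Eqs. (30)–(31) from complex-valued positions back to real decision variables can be sketched as follows; the guard against a zero modulus is an implementation assumption, not part of the paper's formulation.

```python
import numpy as np

def complex_to_real(y_real, y_imag, bounds):
    """Sketch of Eqs. (30)-(31): convert complex-valued positions to real variables."""
    A = np.array([a for a, _ in bounds])
    B = np.array([b for _, b in bounds])
    rho = np.sqrt(y_real ** 2 + y_imag ** 2)              # Eq. (30): modulus
    ratio = y_imag / np.maximum(rho, 1e-300)              # guard against rho == 0
    # Eq. (31): sign of sin(Y_I / rho) selects which half of [A_k, B_k] is used
    return rho * np.sign(np.sin(ratio)) + (B + A) / 2
```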

The solution procedure of CGJO

The CGJO increases the population diversity, improves algorithmic parallelism, accelerates global exploration and promotes optimization. The pseudocode of CGJO is listed in Algorithm 2. The flowchart of the CGJO is shown in Fig. 2.

Fig. 2

Flowchart of the CGJO.

Algorithm 2

CGJO.
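Since Algorithm 2 is given only as a figure, the following compact Python sketch assembles the earlier helper sketches (init_complex_population, complex_to_real, gjo_update) into a CGJO main loop. Boundary handling of the moduli and other refinements are omitted, so this is an illustrative outline under those assumptions rather than the authors' implementation.

```python
import numpy as np

def cgjo(fitness_fn, bounds, n=50, T=1000, rng=None):
    """Compact CGJO main loop, combining the helper sketches above."""
    rng = np.random.default_rng() if rng is None else rng
    y_r, y_i = init_complex_population(n, bounds, rng)     # Eqs. (15)-(17)
    best_x, best_f = None, np.inf
    for t in range(T):
        x = complex_to_real(y_r, y_i, bounds)              # Eqs. (30)-(31)
        f = np.apply_along_axis(fitness_fn, 1, x)
        order = np.argsort(f)
        if f[order[0]] < best_f:                           # track the global best
            best_f, best_x = float(f[order[0]]), x[order[0]].copy()
        m_r, fm_r = y_r[order[0]], y_r[order[1]]           # male / female jackal (real)
        m_i, fm_i = y_i[order[0]], y_i[order[1]]           # male / female jackal (imag)
        y_r = gjo_update(y_r, m_r, fm_r, t, T, rng=rng)    # Eqs. (18)-(20), (24)-(26)
        y_i = gjo_update(y_i, m_i, fm_i, t, T, rng=rng)    # Eqs. (21)-(23), (27)-(29)
    return best_x, best_f
```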

Computational complexity of the CGJO

Time complexity: The time complexity estimates the amount of computational resources consumed when each algorithm operates independently, relating the problem's operational scale directly to the computational time. Big-O notation is a practical methodology for expressing this cost clearly and comparably. The CGJO contains three primary processes: initialization, estimation of the fitness function, and alteration of the jackal position. In CGJO, \(n\) represents the population size, \(T\) represents the maximum number of iterations, and \(d\) represents the problem dimension. The initialization involves \(O(n)\). Over all iterations, estimating the fitness function involves \(O(T \times n)\), and altering the jackal position involves \(O(T \times n \times d)\). Complex-valued encoding enhances algorithmic parallelism, taps population diversity, avoids premature convergence, expands the feature space, improves the search efficiency and increases the information capacity. CGJO exhibits excellent durability and dependability to acquire complementary advantages and regulates exploration and exploitation to promote computational precision. The overall time complexity of CGJO is \(O(n \times (T + T \times d + 1))\).

Space complexity: The space complexity measures the memory consumed by the CGJO. In CGJO, \(n\) represents the population size, and \(d\) represents the problem dimension. CGJO not only exhibits strong adaptability and robustness to achieve complementary advantages and enhance optimization efficiency but also balances global exploration and local exploitation to promote computational precision and determine the best solution. The space complexity of the CGJO is \(O(n \times d)\).

Simulation evaluation and result analysis

Experimental setup

The numerical evaluations were performed on Windows 10 with an Intel Core i7-8750H 2.2 GHz CPU, a GTX 1060 GPU, and 8 GB of memory.

Parameter settings

The parameters of each algorithm are the distinctive experimental values extracted from the original papers.

WO: stochastic value \(rand \in [0,1]\), danger factors \(A \in [0,2]\), \(R \in [ - 1,1]\), stochastic values \(r_{1 - 5} \in (0,1)\), migration step control factor \(\beta \in [0,1]\), distress factor \(p \in (0,1)\), standard deviation \(\sigma_{y} = 1\), control value \(\alpha = 1.5\), range \(\theta \in [0,\pi ]\).

HO: stochastic values \(r_{1 - 6} \in (0,1)\), integer \(I_{1,2} \in [1,2]\), stochastic value \(\omega \in [0,1]\), stochastic value \(\nu \in [0,1]\), constant value \(\theta = 1.5\).

SCSO: constant value \(S_{M} = 2\), sensitivity range \(r_{G} \in (0,2)\), stochastic value \(rand \in [0,1]\), range \(\theta \in [0,2\pi ]\), stochastic value \(R \in [ - 1,1]\).

NRBO: stochastic value \(\delta \in [ - 1,1]\), stochastic values \(a,b \in (0,1)\), stochastic values \(r_{1,2} \in (0,1)\), stochastic value \(\theta_{1} \in ( - 1,1)\), stochastic value \(\theta_{2} \in ( - 0.5,0.5)\).

BKA: stochastic value \(rand \in [0,1]\), constant value \(p = 0.9\), stochastic value \(r \in [0,1]\), Cauchy mutation \(C(0,1)\), \(\delta = 1\), \(\mu = 0\).

L-SHADE: \(Pbest = 0.1\), \(Arc{\kern 1pt} {\kern 1pt} {\kern 1pt} rate = 2\), learning rate \(c = 0.8\), threshold \(\max \_nfes/2\).

LSHADE-EpSin: \(Pbest = 0.1\), \(Arc{\kern 1pt} {\kern 1pt} {\kern 1pt} rate = 2\), learning rate \(c = 0.8\), threshold \(\max \_nfes/2\).

CMA-ES: parent number \(\mu = \left\lfloor {\lambda /2} \right\rfloor\), weight factor \(w = \log (\mu + 0.5) - \log (1{:}\mu )\), step size \(\sigma = 0.3 \times 200\).

GJO: stochastic value \(r \in [0,1]\), constant value \(c_{1} = 1.5\), initial state \(E_{0} \in [ - 1,1]\), energy decrease \(E_{1} \in [0,1.5]\), default factor \(\beta = 1.5\), stochastic values \(u,v \in (0,1)\).

GJO based on a ranking-based mutation operator (RGJO)37: stochastic value \(r \in [0,1]\), constant value \(c_{1} = 1.5\), initial state \(E_{0} \in [ - 1,1]\), energy decrease \(E_{1} \in [0,1.5]\), default factor \(\beta = 1.5\), stochastic values \(u,v \in (0,1)\), scaling factor \(F = 0.7\).

GJO based on the simplex method (SGJO)38: stochastic value \(r \in [0,1]\), constant value \(c_{1} = 1.5\), initial state \(E_{0} \in [ - 1,1]\), energy decrease \(E_{1} \in [0,1.5]\), default factor \(\beta = 1.5\), stochastic values \(u,v \in (0,1)\), stochastic value \(k \in (0,1)\), reflectivity \(\alpha = 1\), expansion factor \(\gamma = 1.5\), compression factor \(\beta_{1} = 0.5\), contraction factor \(\beta_{2} = 0.2\).

CGJO: stochastic value \(r \in [0,1]\), constant value \(c_{1} = 1.5\), initial state \(E_{0} \in [ - 1,1]\), energy decrease \(E_{1} \in [0,1.5]\), default factor \(\beta = 1.5\), stochastic values \(u,v \in (0,1)\).

Benchmark functions

CGJO is implemented to accomplish the CEC 2022 benchmark functions39,40, which confirms its practicality and feasibility. Table 2 outlines the CEC 2022 test suite.

Table 2 Description of the CEC 2022 test suite.

Experimental result analysis

The population size is 50, the maximum number of iterations is 1000, and the number of independent runs is 30. Best, Std, Mean, Median and Worst represent the optimal value, standard deviation, mean value, median value and worst value, respectively41. The ranking is based on the standard deviation. The robustness of the optimization algorithms is their ability to remain stable in the presence of noise and outliers. For noise in data sets, the optimization algorithms can utilize the following strategies: (1) gradient smoothing, (2) the momentum method, (3) an adaptive learning-rate optimizer, (4) smoothing the optimization objective function, (5) a robust loss function. For outliers in data sets, the optimization algorithms can utilize the following strategies: (1) deleting outliers, (2) replacing outliers, (3) converting outliers.

The simulation results of different algorithms for the CEC 2022 test functions are outlined in Table 3. For F1 and F2, compared with those of the basic GJO, the optimal values, standard deviations, mean values, median values and worst values of RGJO, SGJO and CGJO are significantly better. The CGJO method is superior and reliable for identifying the ideal global value. The various evaluation indicators and computational solutions of CGJO are superior to those of WO, HO, SCSO, NRBO, BKA, L-SHADE, LSHADE-EpSin, CMA-ES, GJO, RGJO and SGJO. CGJO has great computational efficiency and global detection ability to minimize premature convergence and acquire the exact solution. The CGJO portrays tremendous stability and reliability in achieving the optimal ranking. The CGJO enhances parallelism, taps population diversity and improves optimization efficiency, and it uses exploration and exploitation to obtain the highest convergence accuracy. For F3 and F5, CGJO not only exhibits obvious superiority and reliability in expanding the optimization space and avoiding search stagnation but also adjusts exploration and exploitation to identify the exact global solutions. The convergence speed and calculation accuracy of the modified GJO are greatly enhanced. The overall optimization ability and evaluation indicators of CGJO are superior to those of WO, HO, SCSO, NRBO, BKA, L-SHADE, LSHADE-EpSin, CMA-ES, GJO, RGJO and SGJO, and CGJO exhibits strong stability and robustness. The smaller standard deviation and higher ranking of the CGJO highlight that CGJO delivers good dependability and reliability to realize complementary advantages and resolve the function optimization problems. For F4 and F6, the overall search efficiency and optimization accuracy of RGJO and SGJO are improved compared with those of the basic GJO. CGJO exhibits strong effectiveness and robustness in achieving suboptimal solutions close to the exact solutions. The optimal values, standard deviations, mean values, median values and worst values of CGJO are superior to those of WO, HO, SCSO, NRBO, BKA, L-SHADE, LSHADE-EpSin, CMA-ES, GJO, RGJO and SGJO. CGJO has better computational accuracy and superior stability. The optimal ranking of CGJO reflects good stability and durability. For F7, the standard deviations of CGJO are worse than those of WO, L-SHADE and CMA-ES, but the optimal, mean, median and worst values of CGJO are superior to those of WO, HO, SCSO, NRBO, BKA, L-SHADE, LSHADE-EpSin, CMA-ES, GJO, RGJO and SGJO. CGJO employs a distinctive encoding methodology to enrich the information capacity and strengthen the detection ability. The ranking of CGJO is slightly lower than those of WO, L-SHADE and CMA-ES. For F8, the various evaluation indicators and computational solutions of CGJO are relatively superior to those of WO, HO, SCSO, NRBO, BKA, L-SHADE, LSHADE-EpSin, CMA-ES, GJO, RGJO and SGJO. The standard deviation of CGJO is superior to those of L-SHADE, BKA and NRBO but inferior to those of WO, HO, SCSO, LSHADE-EpSin, CMA-ES, GJO, RGJO and SGJO. For F9, the standard deviations, mean values, median values and worst values of RGJO, SGJO and CGJO are significantly better than those of the basic GJO. The optimal value of RGJO is worse than that of GJO. The various evaluation indicators and computational solutions of CGJO are superior to those of WO, HO, SCSO, NRBO, BKA, L-SHADE, LSHADE-EpSin, CMA-ES, GJO, RGJO and SGJO. 
CGJO employs a distinctive search structure to extend the optimization field and increase the detection efficiency. The ranking of the CGJO is the smallest, and the stability and reliability of this algorithm are better. For F10 and F11, the various evaluation indicators and computational solutions of RGJO, SGJO and CGJO are superior to those of GJO. The optimal values, median values and worst values of CGJO are better than those of WO, HO, SCSO, NRBO, BKA, L-SHADE, LSHADE-EpSin, CMA-ES, GJO, RGJO and SGJO. The standard deviations, mean values and rankings of CGJO are superior to those of SCSO, NRBO, BKA, L-SHADE, LSHADE-EpSin, GJO and SGJO. The CGJO uses the diploid’s two-dimensional properties to alter the jackal’s position, which not only effectively broadens the exploration area and elevates the population diversity but also inhibits early convergence and realizes the ideal solution. For F12, the optimal values, standard deviations, mean values, median values and worst values of RGJO, SGJO and CGJO are significantly better than those of the basic GJO. The various evaluation indicators and computational solutions of CGJO are superior to those of HO, SCSO, NRBO, LSHADE-EpSin, GJO and SGJO. The CGJO has a relatively lower standard deviation and higher ranking. The CGJO uses the combination of the encoding method and GJO to renew the jackal’s position, which expands the feature space, enhances the optimization efficiency and increases the information capacity. CGJO has good stability and durability for identifying the optimal value.

Table 3 Simulation results of different algorithms for the CEC 2022 test functions.

The Wilcoxon rank-sum test is implemented to evaluate whether there is a noticeable discrepancy between CGJO and other methodologies42,43. \(p < 0.05\) indicates a significant discrepancy, \(p \ge 0.05\) indicates no significant discrepancy, and N/A indicates “not applicable”. The Wilcoxon signed rank-sum test for CEC 2022 between each algorithm and CGJO is outlined in Table 4.
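A minimal sketch of how such a pairwise comparison could be computed with SciPy's two-sided rank-sum test over the 30 run results of two algorithms is shown below; the data in the example are random placeholders, not results from the paper.

```python
import numpy as np
from scipy.stats import ranksums

def compare_to_cgjo(cgjo_runs, other_runs, alpha=0.05):
    """Return the p value and whether the difference is significant at the 5% level."""
    stat, p = ranksums(cgjo_runs, other_runs)
    return p, p < alpha

# Hypothetical example with placeholder run data (30 runs per algorithm)
rng = np.random.default_rng(0)
p, significant = compare_to_cgjo(rng.normal(300, 1, 30), rng.normal(305, 5, 30))
print(f"p = {p:.3e}, significant = {significant}")
```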

Table 4 Wilcoxon signed rank-sum test for CEC 2022 between each algorithm and CGJO.

The convergence graph of each methodology is shown in Fig. 3. The convergence graph directly and objectively reflects the convergence precision. The CGJO has a good detection capacity and optimization efficiency to accomplish the most effective solution. For the unimodal function F1, CGJO exhibits strong superiority and reliability to avoid premature convergence and achieve the exact solution. The optimal, mean, median and worst values of the CGJO change little. The CGJO has a faster convergence speed and higher calculation accuracy. For the basic functions F2, F3, F4 and F5, the various evaluation indicators and computational solutions of the CGJO are superior to those of the WO, HO, SCSO, NRBO, BKA, L-SHADE, LSHADE-EpSin, CMA-ES, GJO, RGJO and SGJO, and the CGJO determines the exact global solutions for some of these functions. The CGJO has good stability and durability. The CGJO has a better convergence rate and computational accuracy, which gives it the superiority and reliability to eliminate search stagnation and acquire an appropriate solution. For the hybrid functions F6, F7 and F8, compared with those of the basic GJO, the optimal values, standard deviations, mean values, median values and worst values of the CGJO are significantly better. The various evaluation indicators and computational solutions of the CGJO are superior to those of WO, HO, SCSO, NRBO, BKA, L-SHADE, LSHADE-EpSin, CMA-ES, GJO, RGJO and SGJO, and CGJO has obvious superiority and stability. CGJO has elevated computational efficiency and an attractive detection capability for discovering the most accurate solution. For the composite functions F9, F10, F11 and F12, CGJO exhibits superior durability and stability for identifying an accurate solution. The various evaluation indicators and computational solutions of the CGJO are substantially better than those of the GJO and are superior to those of the WO, HO, SCSO, NRBO, BKA, L-SHADE, LSHADE-EpSin, CMA-ES, RGJO and SGJO methods. CGJO can enhance algorithmic parallelism, tap population diversity, avoid premature convergence, expand the feature space, improve search efficiency and increase information capacity. CGJO is durable and reliable for identifying the ideal solution.

Fig. 3

Convergence graph of each methodology.

The ANOVA graph of each methodology is shown in Fig. 4. The standard deviation directly and objectively exhibits stability and reliability. A comparatively low standard deviation reveals that the method not only has favourable strength and durability but also integrates exploration and exploitation to obtain an exact or near-optimal solution. The standard deviation, the basis for the ranking, is an accurate measure of optimization efficiency and stability. For the unimodal function F1, CGJO is sufficiently stable and durable to provide complementary advantages and balances exploration and exploitation to increase computational precision and yield the best solution. The CGJO has strong stability and the best ranking. For the basic functions F2, F3, F4 and F5, the various evaluation indicators and computational solutions of the CGJO are superior to those of the WO, HO, SCSO, NRBO, BKA, L-SHADE, LSHADE-EpSin, CMA-ES, GJO, RGJO and SGJO, and the CGJO determines exact or near-optimal global solutions for some of these functions, which highlights that the CGJO exhibits excellent search efficiency to promote the optimization ability. The standard deviation and the ranking of CGJO are superior to those of WO, HO, SCSO, NRBO, BKA, L-SHADE, LSHADE-EpSin, CMA-ES, GJO, RGJO and SGJO, which highlights that CGJO has remarkable reliability and superiority in determining the best solution. For the hybrid functions F6, F7 and F8, the various evaluation indicators and computational solutions of the CGJO are substantially better than those of the GJO. The standard deviations and rankings of the CGJO are relatively good, and the CGJO has great reliability and dependability. For the composite functions F9, F10, F11 and F12, the CGJO not only exhibits great stability and reliability to balance exploration and exploitation and strengthen the optimization performance but also utilizes the diploid mechanism to encode the golden jackal individual and determine the exact solution. The CGJO method exhibits outstanding detection capability and optimization efficiency for identifying the most accurate solution. Compared with those of the GJO, the computational solutions of the CGJO have changed significantly. The standard deviation of CGJO is relatively better than those of WO, HO, SCSO, NRBO, BKA, L-SHADE, LSHADE-EpSin, CMA-ES, GJO, RGJO and SGJO. CGJO uses an efficient search mechanism and stability to acquire accurate values. CGJO employs the diploid’s two-dimensional properties to renew the jackal position and maximize the detection efficiency. CGJO not only combines exploration and exploitation to broaden the optimization area and enrich the information capacity but also exhibits good stability and durability to prevent early convergence and identify accurate solutions.

Fig. 4

ANOVAs of each methodology.

CGJO for engineering design

To demonstrate dependability, CGJO is implemented to accomplish six real-world engineering designs: cantilever beam44, three-bar truss45, tubular column46, piston lever47, tension/compression spring48 and gear train49.

Cantilever beam

The main goal is to minimize the weight of the cantilever beam, as shown in Fig. 5. There are five decision variables, and a constant thickness is maintained (\(t = 2/3\)). The structure is as follows:

Fig. 5

Cantilever beam.

Consider

$$ x = [x_{1} {\kern 1pt} {\kern 1pt} {\kern 1pt} x_{2} {\kern 1pt} {\kern 1pt} {\kern 1pt} x_{3} {\kern 1pt} {\kern 1pt} {\kern 1pt} x_{4} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} x_{5} ] $$
(32)

Minimize

$$ f(x) = 0.6224(x_{1} + x_{2} + x_{3} + x_{4} + x_{5} ) $$
(33)

Subject to

$$ g(x) = \frac{61}{{x_{1}^{3} }} + \frac{37}{{x_{2}^{3} }} + \frac{19}{{x_{3}^{3} }} + \frac{7}{{x_{4}^{3} }} + \frac{1}{{x_{5}^{3} }} \le 1 $$
(34)

Variable range

$$ 0.01 \le x_{1} ,x_{2} ,x_{3} ,x_{4} ,x_{5} \le 100 $$
(35)
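A minimal sketch of the cantilever beam objective (Eqs. 33–35) as a penalized function that a metaheuristic such as CGJO could minimize; the static-penalty scheme, penalty weight and function name are assumptions, since the paper does not state its constraint-handling method.

```python
import numpy as np

def cantilever_cost(x, penalty=1e6):
    """Sketch of Eqs. (33)-(35) with a simple static penalty for the constraint."""
    x = np.asarray(x, dtype=float)
    f = 0.6224 * np.sum(x)                                                   # Eq. (33)
    g = 61/x[0]**3 + 37/x[1]**3 + 19/x[2]**3 + 7/x[3]**3 + 1/x[4]**3 - 1     # Eq. (34)
    return f + penalty * max(0.0, g) ** 2        # penalize any violation g(x) > 0

# The penalized objective can then be minimized over the box 0.01 <= x_i <= 100 of Eq. (35).
```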

The statistical values of the cantilever beam are outlined in Table 5. The CGJO method exhibits remarkable robustness and durability in identifying the best decision variables and the smallest objective value. The statistical values of CGJO outperform those of the other methodologies, which highlights that CGJO has good optimization efficiency and convergence precision.

Table 5 Statistical values of the cantilever beam.

Three-bar truss

The primary intention is to minimize the total weight of the truss, as portrayed in Fig. 6. There are two decision variables: the cross-sectional areas \(A_{1}\) and \(A_{2}\). The structure is as follows:

Fig. 6

Three-bar truss.

Consider

$$ x = [x_{1} {\kern 1pt} {\kern 1pt} {\kern 1pt} x_{2} ] = [A_{1} {\kern 1pt} {\kern 1pt} {\kern 1pt} A_{2} ] $$
(36)

Minimize

$$ f(x) = (2\sqrt 2 x_{1} + x_{2} ) \times l $$
(37)

Subject to

$$ g_{1} (x) = \frac{{\sqrt 2 x_{1} + x_{2} }}{{\sqrt 2 x_{1}^{2} + 2x_{1} x_{2} }}P - \sigma \le 0 $$
(38)
$$ g_{2} (x) = \frac{{x_{2} }}{{\sqrt 2 x_{1}^{2} + 2x_{1} x_{2} }}P - \sigma \le 0 $$
(39)
$$ g_{3} (x) = \frac{1}{{\sqrt 2 x_{2} + x_{1} }}P - \sigma \le 0 $$
(40)
$$ l = 100\,{\text{cm}},\quad P = 2\,{\text{kN/cm}}^{2} ,\quad \sigma = 2\,{\text{kN/cm}}^{2} $$
(41)

Variable range

$$ 0 \le x_{1} ,x_{2} \le 1 $$
(42)
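A corresponding sketch for the three-bar truss (Eqs. 37–42), again with an assumed static penalty; the constants follow Eq. (41) and the function name is illustrative.

```python
import numpy as np

def three_bar_truss_cost(x, l=100.0, P=2.0, sigma=2.0, penalty=1e6):
    """Sketch of Eqs. (37)-(41) with a static penalty for the three constraints."""
    x1, x2 = float(x[0]), float(x[1])
    f = (2 * np.sqrt(2) * x1 + x2) * l                                    # Eq. (37)
    g1 = (np.sqrt(2) * x1 + x2) / (np.sqrt(2) * x1**2 + 2 * x1 * x2) * P - sigma  # Eq. (38)
    g2 = x2 / (np.sqrt(2) * x1**2 + 2 * x1 * x2) * P - sigma              # Eq. (39)
    g3 = 1.0 / (np.sqrt(2) * x2 + x1) * P - sigma                         # Eq. (40)
    violation = sum(max(0.0, g) ** 2 for g in (g1, g2, g3))
    return f + penalty * violation
```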

The statistical values of the three-bar truss are outlined in Table 6. The ideal outcomes of CGJO are superior to those of alternative methodologies. CGJO employs the two-dimensional properties of the diploid mechanism to inhibit premature convergence and acquire the most favourable statistical values, which highlights that CGJO has robust detection ability and reliability.

Table 6 Statistical values of the three-bar truss.

Tubular column

The primary intention is to minimize the installation and material expenses, as portrayed in Fig. 7. There are two decision variables: the mean diameter (\(d\)) and the thickness (\(t\)). The structure is as follows:

Fig. 7

Tubular column.

Consider

$$ x = [x_{1} {\kern 1pt} {\kern 1pt} {\kern 1pt} x_{2} ] = [d{\kern 1pt} {\kern 1pt} {\kern 1pt} t] $$
(43)

Minimize

$$ f(x) = 9.82x_{1} x_{2} + 2x_{1} $$
(44)

Subject to

$$ g_{1} (x) = \frac{P}{{\pi x_{1} x_{2} \sigma_{y} }} - 1 \le 0 $$
(45)
$$ g_{2} (x) = \frac{{8PL^{2} }}{{\pi^{3} Ex_{1} x_{2} (x_{1}^{2} + x_{2}^{2} )}} - 1 \le 0 $$
(46)
$$ g_{3} (x) = \frac{2.0}{{x_{1} }} - 1 \le 0 $$
(47)
$$ g_{4} (x) = \frac{{x_{1} }}{14} - 1 \le 0 $$
(48)
$$ g_{5} (x) = \frac{0.2}{{x_{2} }} - 1 \le 0 $$
(49)
$$ g_{6} (x) = \frac{{x_{2} }}{0.8} - 1 \le 0 $$
(50)
$$ \sigma_{y} = 500\,{\text{kgf/cm}}^{2} ,\quad E = 0.85 \times 10^{6} \,{\text{kgf/cm}}^{2} ,\quad P = 2500\,{\text{kgf}},\quad L = 250\,{\text{cm}} $$
(51)

Variable range

$$ 2 \le x_{1} \le 14,{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} 0.2 \le x_{2} \le 0.8 $$
(52)
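A sketch of the tubular column design (Eqs. 44–52) with the same assumed static-penalty scheme; the function name is illustrative.

```python
import numpy as np

def tubular_column_cost(x, P=2500.0, sigma_y=500.0, E=0.85e6, L=250.0, penalty=1e6):
    """Sketch of Eqs. (44)-(52) with a static penalty for the six constraints."""
    d, t = float(x[0]), float(x[1])
    f = 9.82 * d * t + 2 * d                                            # Eq. (44)
    g = [P / (np.pi * d * t * sigma_y) - 1,                             # Eq. (45)
         8 * P * L**2 / (np.pi**3 * E * d * t * (d**2 + t**2)) - 1,     # Eq. (46)
         2.0 / d - 1,                                                   # Eq. (47)
         d / 14 - 1,                                                    # Eq. (48)
         0.2 / t - 1,                                                   # Eq. (49)
         t / 0.8 - 1]                                                   # Eq. (50)
    return f + penalty * sum(max(0.0, gi) ** 2 for gi in g)
```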

The statistical values of the tubular column design are outlined in Table 7. The decision variables and cost obtained by CGJO are superior to those of alternative methodologies. CGJO promotes algorithmic parallelism and increases population diversity to increase convergence efficiency, highlighting that CGJO is an efficient and predictable approach.

Table 7 Statistical values of the tubular column design.

Piston lever

The primary intention is to minimize the oil volume required when the lever is lifted, by locating the piston components, as portrayed in Fig. 8. There are four decision variables: \(H\), \(B\), \(X\) and \(D\). The structure is as follows:

Fig. 8

Piston lever.

Consider

$$ x = [x_{1} {\kern 1pt} {\kern 1pt} {\kern 1pt} x_{2} {\kern 1pt} {\kern 1pt} {\kern 1pt} x_{3} {\kern 1pt} {\kern 1pt} {\kern 1pt} x_{4} ] = [H{\kern 1pt} {\kern 1pt} {\kern 1pt} B{\kern 1pt} {\kern 1pt} {\kern 1pt} D{\kern 1pt} {\kern 1pt} {\kern 1pt} X] $$
(53)

Minimize

$$ f(x) = \frac{1}{4}\pi x_{3}^{2} (L_{2} - L_{1} ) $$
(54)

Subject to

$$ g_{1} (x) = QL\cos \theta - RF \le 0 $$
(55)
$$ g_{2} (x) = Q(L - x_{4} ) - M_{\max } \le 0 $$
(56)
$$ g_{3} (x) = \frac{6}{5} \times (L_{2} - L_{1} ) - L_{1} \le 0 $$
(57)
$$ g_{4} (x) = \frac{{x_{3} }}{2} - x_{2} \le 0 $$
(58)
$$ R = \frac{{\left| { - x_{4} (x_{4} \sin \theta + x_{1} ) + x_{1} (x_{2} - x_{4} \cos \theta )} \right|}}{{\sqrt {(x_{4} - x_{2} )^{2} + x_{1}^{2} } }} $$
(59)
$$ F = \frac{{\pi Px_{3}^{2} }}{4} $$
(60)
$$ L_{1} = \sqrt {(x_{4} - x_{2} )^{2} + x_{1}^{2} } $$
(61)
$$ L_{2} = \sqrt {(x_{4} \sin \theta + x_{1} )^{2} + (x_{2} - x_{4} \cos \theta )^{2} } $$
(62)
$$ \theta = 45^{ \circ } ,\quad Q = 10000\,{\text{lbs}},\quad L = 240\,{\text{in}},\quad M_{\max } = 1.8 \times 10^{6} \,{\text{lbs}}\,{\text{in}},\quad P = 1500\,{\text{psi}} $$
(63)

Variable range

$$ 0.05 \le x_{1} ,x_{2} ,x_{4} \le 500,{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} 0.05 \le x_{3} \le 120 $$
(64)
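A sketch of the piston lever design (Eqs. 54–64); the intermediate quantities follow Eqs. (59)–(62), and the static penalty and function name are again assumptions.

```python
import numpy as np

def piston_lever_cost(x, penalty=1e6):
    """Sketch of Eqs. (54)-(64) with a static penalty for the four constraints."""
    H, B, D, X = (float(v) for v in x)
    theta = np.deg2rad(45.0)
    Q, L, M_max, P = 10000.0, 240.0, 1.8e6, 1500.0                     # Eq. (63)
    L1 = np.sqrt((X - B) ** 2 + H ** 2)                                # Eq. (61)
    L2 = np.sqrt((X * np.sin(theta) + H) ** 2
                 + (B - X * np.cos(theta)) ** 2)                       # Eq. (62)
    R = abs(-X * (X * np.sin(theta) + H)
            + H * (B - X * np.cos(theta))) / L1                        # Eq. (59)
    F = np.pi * P * D ** 2 / 4                                         # Eq. (60)
    f = 0.25 * np.pi * D ** 2 * (L2 - L1)                              # Eq. (54)
    g = [Q * L * np.cos(theta) - R * F,                                # Eq. (55)
         Q * (L - X) - M_max,                                          # Eq. (56)
         1.2 * (L2 - L1) - L1,                                         # Eq. (57)
         D / 2 - B]                                                    # Eq. (58)
    return f + penalty * sum(max(0.0, gi) ** 2 for gi in g)
```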

The statistical values of the piston lever are outlined in Table 8. The CGJO generates real and imaginary parts to renew the jackal position and broaden the feature space. The statistical values of the CGJO are significant and optimal, which highlights that the CGJO has strong reliability and superiority in determining the best value.

Table 8 Statistical values of the piston lever.

Tension/compression spring

The primary intention is to minimize the weight of the tension/compression spring, as portrayed in Fig. 9. There are three decision variables: the wire diameter (\(d\)), the mean coil diameter (\(D\)) and the number of active coils (\(N\)). The structure is as follows:

Fig. 9

Tension/compression spring.

Consider

$$ x = [x_{1} {\kern 1pt} {\kern 1pt} {\kern 1pt} x_{2} {\kern 1pt} {\kern 1pt} {\kern 1pt} x_{3} {\kern 1pt} ] = [d{\kern 1pt} {\kern 1pt} {\kern 1pt} D{\kern 1pt} {\kern 1pt} {\kern 1pt} N]{\kern 1pt} {\kern 1pt} {\kern 1pt} $$
(65)

Minimize

$$ f(x) = (x_{3} + 2)x_{2} x_{1}^{2} $$
(66)

Subject to

$$ g_{1} (x) = 1 - \frac{{x_{2}^{3} x_{3} }}{{71785x_{1}^{4} }} \le 0 $$
(67)
$$ g_{2} (x) = \frac{{4x_{2}^{2} - x_{1} x_{2} }}{{12566(x_{2} x_{1}^{3} - x_{1}^{4} )}} + \frac{1}{{5108x_{1}^{2} }} - 1 \le 0 $$
(68)
$$ g_{3} (x) = 1 - \frac{{140.45x_{1} }}{{x_{2}^{2} x_{3} }} \le 0 $$
(69)
$$ g_{4} (x) = \frac{{x_{1} + x_{2} }}{1.5} - 1 \le 0 $$
(70)

Variable range

$$ 0.05 \le x_{1} \le 2,{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} 0.25 \le x_{2} \le 1.3,{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} 2 \le x_{3} \le 15 $$
(71)
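A sketch of the tension/compression spring design (Eqs. 66–71) under the same assumed penalty scheme; the function name is illustrative.

```python
import numpy as np

def spring_cost(x, penalty=1e6):
    """Sketch of Eqs. (66)-(71) with a static penalty for the four constraints."""
    d, D, N = (float(v) for v in x)
    f = (N + 2) * D * d ** 2                                           # Eq. (66)
    g = [1 - D ** 3 * N / (71785 * d ** 4),                            # Eq. (67)
         (4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4))
         + 1 / (5108 * d ** 2) - 1,                                    # Eq. (68)
         1 - 140.45 * d / (D ** 2 * N),                                # Eq. (69)
         (d + D) / 1.5 - 1]                                            # Eq. (70)
    return f + penalty * sum(max(0.0, gi) ** 2 for gi in g)
```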

The statistical values of the tension/compression springs are outlined in Table 9. The CGJO implements the encoding technique to strengthen the discovery efficiency and increase the information capability. The CGJO has the finest variables and the lowest objective value, highlighting that the CGJO exhibits good durability and high calculation accuracy.

Table 9 Statistical values of the tension/compression spring.

Gear train

The main goal is to minimize the deviation of the gear ratio from the required value of 1/6.931 by selecting the numbers of teeth, as shown in Fig. 10. There are four decision variables: \(n_{A}\), \(n_{B}\), \(n_{C}\) and \(n_{D}\). The structure is as follows:

Fig. 10

Gear train.

Consider

$$ x = [x_{1} {\kern 1pt} {\kern 1pt} {\kern 1pt} x_{2} {\kern 1pt} {\kern 1pt} {\kern 1pt} x_{3} {\kern 1pt} {\kern 1pt} {\kern 1pt} x_{4} ] = [n_{A} {\kern 1pt} {\kern 1pt} {\kern 1pt} n_{B} {\kern 1pt} {\kern 1pt} {\kern 1pt} n_{C} {\kern 1pt} {\kern 1pt} {\kern 1pt} n_{D} ] $$
(72)

Minimize

$$ f(x) = \left( {\frac{1}{6.931} - \frac{{x_{3} x_{2} }}{{x_{1} x_{4} }}} \right)^{2} $$
(73)

Variable range

$$ 12 \le x_{i} \le 60,{\kern 1pt} {\kern 1pt} {\kern 1pt} i = 1,2,...,4 $$
(74)
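A sketch of the gear train objective (Eqs. 73–74); rounding the tooth numbers to integers is an assumption, since the formulation above treats them as bounded variables in [12, 60] without stating integrality.

```python
def gear_train_cost(x):
    """Sketch of Eq. (73); tooth numbers rounded to integers (assumption)."""
    nA, nB, nC, nD = (round(v) for v in x)
    return (1.0 / 6.931 - (nC * nB) / (nA * nD)) ** 2
```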

The statistical values of the gear train are outlined in Table 10. Compared with those of other methodologies, the decision variables and objective value of the CGJO are substantially improved, which highlights that the CGJO exhibits remarkable reliability and adaptability to acquire an accurate solution.

Table 10 Statistical values of the gear train design.

Conclusions and future research

In this paper, CGJO is established to resolve the CEC 2022 test suite and six real-world engineering designs, and the purpose is to identify the global optimal exact solution of function optimization and the minimum cost of the engineering design. Complex-valued encoding employs the two-dimensional properties of the diploid methodology to describe one allele of a chromosome and renew the real and imaginary portions, which increases population diversity, restricts search stagnation, expands the exploration area, promotes information exchange, fosters collaboration efficiency and improves convergence accuracy. The CGJO has strong stability and robustness to overcome low computational accuracy, premature convergence and poor solution efficiency. The CGJO combines GJO and complex-valued encoding to achieve complementary advantages and enhance computational efficiency. Therefore, CGJO not only exhibits excellent stability and reliability to balance exploration and exploitation and strengthen the optimization performance but also utilizes the diploid mechanism to encode the golden jackal individual and determine the exact solution. The CGJO is compared with WO, HO, SCSO, NRBO, BKA, L-SHADE, LSHADE-EpSin, CMA-ES, GJO, RGJO and SGJO. The experimental results reveal that the effectiveness and feasibility of CGJO are superior to those of other algorithms. CGJO has strong superiority and reliability to achieve a quicker convergence rate, greater computation precision, and greater stability and robustness.

The proposed CGJO has several limitations: (1) CGJO may face potential challenges in terms of computational complexity, mathematical theory analysis, convergence verification, and parameter selection, and its calculation efficiency and solution accuracy may decrease in such cases. (2) When resolving complex, large-scale, high-dimensional multiobjective optimization problems, the CGJO may be limited and unable to balance exploration and exploitation to deliver a superior convergence speed and calculation accuracy. (3) CGJO has strong stability and robustness for resolving the CEC 2022 test suite and six real-world engineering designs. However, the effectiveness and reliability of CGJO still need to be verified on more application datasets.

Future research on CGJO will focus on the following three aspects: (1) We will further study the mathematical theory, verify the algorithm's convergence and select more effective control parameters through extensive experimental simulations. (2) We will introduce more effective search strategies (e.g., orthogonal opposition-based learning, the simplex method, the ranking-based mutation operator), unique encoding forms (e.g., quantum coding, discrete coding or binary coding), and hybrid swarm intelligence algorithms to achieve complementary advantages and avoid search stagnation, thereby increasing the overall convergence speed and calculation accuracy of the basic GJO. (3) According to the current situation of agricultural production in the Dabie Mountains in Anhui Province and the complex geographical environment of characteristic crops (e.g., Dendrobium, tea, rice, Chinese herbal medicine), the CGJO can be used for intelligent detection and intelligent control of distinctive understorey crops. We will develop intelligent harvesting machinery, planting machinery and precision plant-protection equipment for understorey crops, together with portable machinery and equipment, to realize intelligent mechanization of understorey crop production.