Abstract
Improving the efficiency of solving complex optimization problems has been a focus of intelligent-algorithm research in recent years. Complex optimization problems are characterized by high dimensionality, multiple minima, and variable coupling. When intelligent algorithms are applied to such problems, they are prone to premature convergence, high-dimensional failure, and excessive computational cost, which lowers optimization efficiency. In this paper, a double-layer search particle swarm optimization algorithm is proposed. The algorithm derives sub-objective functions by grouping the particle population and the variable dimensions, constructs a two-layer search space, and defines a dynamic parameter matrix to realize information interaction and space reconstruction among the subspaces. For optimization problems with these complex characteristics, bilevel search alternates fine and rough search to reduce computational cost and improve optimization efficiency. The proposed method is evaluated on test samples and compared with other intelligent algorithms to further demonstrate its advantages.
1 Introduction
In the process of innovative and green development in industrial design, numerous technical problems requiring optimization and innovation continue to emerge (Sellali et al. 2022; Soheyli et al. 2016; Sun et al. 2011). With the continuous expansion of industrial systems and the ongoing enhancement of technical requirements, these problems are trending toward larger scale and greater complexity. Industrial development thus gives rise to a category of complex optimization problems characterized by large scale, multiple extrema, variable coupling, and a complex objective-function structure (Tey and Mekhilef 2014; Zhang et al. 2010). Typically, solution models for these problems are built on intelligent algorithms.
For complex optimization problems with multi-extremum characteristics, swarm intelligence algorithms are prone to premature convergence caused by individuals getting trapped in local optima. This significantly restricts the algorithms' performance on such problems and leads to low optimization efficiency (Wah and Ma 2006; Azar et al. 2011; Andre and Siarry 2001). For optimization problems with large-scale characteristics, the increase in problem complexity gives rise to the "curse of dimensionality" (Kerner 1989; Xiang et al. 2006; Li and Xin 2012), which renders conventional optimization algorithms ineffective. In particular, when large-scale optimization problems also possess variable-coupling characteristics, solving them becomes exceedingly complex (Dutta and Gandomi 2020).
The particle swarm optimization (PSO) algorithm is an iterative optimization method widely employed as a mainstream approach for solving complex optimization problems. Compared to other algorithms such as genetic algorithms and simulated annealing, the PSO algorithm demonstrates certain advantages in terms of convergence, diversity, and consistency (Yeh et al. 2010; Lin and Tang 2021; Fu-Shiung et al. 2019; Yan et al. 2021). In the study of asset markets, William R. Zame introduced the concept of "asymptotic inefficiency," referring to the scenario where the limit portfolio and return cannot align or approximate each other, indicating inefficiency in the long run (Zame 1989). In the context of global optimization, the analogous phenomenon of "gradual ineffectiveness" means that the algorithm can approach the optimal solution only at a low rate of progress, and achieving higher accuracy requires more computational resources (Liu and Zeng 2015; Liu et al. 2015, 2017; Liu et al. 2014).
In the process of calculating complex optimization problems, our objective is to find or approach the optimal solution while minimizing the computational cost to improve the efficiency of optimization. However, in actual calculations, as we strive for higher accuracy, the associated computational costs also increase. This phenomenon is referred to as 'Asymptotic Inefficiency' (Liu 2011; Liu et al. 2016, 2017; Zhao et al. 2015). When particle updating narrows down the range of the optimal solution, i.e., enhances the calculation accuracy, the inevitable consequence is an increase in computational cost. Therefore, our aim should be to minimize the impact of “Asymptotic Inefficiency”.
When utilizing particle swarm optimization for various optimization problems, the manifestation of 'Asymptotic Inefficiency' can differ. Particularly, when addressing simple optimization problems, the impact of this phenomenon may be negligible. This paper, however, focuses exclusively on complex optimization problems. It compares the 'Asymptotic Inefficiency' exhibited by different particle swarm optimization algorithms and validates the performance of the bilevel-search particle swarm optimization algorithm (Zame 1993; Skevas and Skevas 2021; Tran et al. 2016).
The rest of this paper is organized as follows. In Sect. 2, the characteristics and influence of complex optimization problems are analyzed, especially the phenomenon of "premature convergence". In Sect. 3, we provide a detailed introduction to the principles of the bilevel-search particle swarm optimization algorithm. Sect. 4 presents an empirical assessment of the optimization performance of the bilevel-search particle swarm optimization algorithm through example calculations and a comparison with three popular particle swarm optimization algorithms. In Sect. 5, we summarize the research findings and outline potential future research directions.
2 Background
Optimization problems are pervasive in various technical domains of social production (Abdullah and Hassan 2020; Kim et al. 2013). The key elements for solving diverse optimization problems are scientific mathematical models and intelligent solution algorithms. However, no specific model or algorithm is uniformly effective across all problems: the effectiveness of a solution is heavily influenced by the inherent characteristics of the problem itself. Factors such as the dimensionality of the objective function, the quantity and distribution of local extremum points, and variable coupling often contribute to the complexity of problem-solving (Lučić and Teodorović 2011; Dereli and Köker 2021). This section analyzes the characteristics of complex optimization problems, including their large-scale nature, multiple extrema, and variable coupling, and investigates the impact of these characteristics on optimization efficiency.
2.1 Complex optimization problems with large-scale characteristics
Taking single-objective unconstrained optimization as an example, the search for the global optimal solution of the objective function within the solution space can be formulated as Eq. (1):

$$\min_{x \in \Omega} f(x), \qquad x = (x_1, x_2, \ldots, x_n), \ \Omega \subseteq \mathbb{R}^n \tag{1}$$

where f is the objective function, Ω is the solution space of the problem, and n is the dimension of the optimization vector x.
The dimensionality of the solution space Ω is directly determined by the number of optimization variables x. Real-world optimization problems, such as large-scale power system control optimization (Cagliari et al. 2021) and large-scale distribution route optimization (Wu 2014; Economics and Wisconsin 1996), often involve a significant number of decision variables. As the dimension of the objective function increases, it becomes increasingly challenging to find feasible solutions, regardless of the computational cost involved.
As shown in Fig. 1, the PSO algorithm is applied to Sphere-function experiments of different dimensions. The experimental results show that the higher the dimension, the worse the calculation effect. Beyond 50 dimensions, even with more computational resources, the algorithm cannot produce a feasible solution, resulting in "negative efficiency" (Ngango and Hong 2022).
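The Sphere function used in this experiment is the standard benchmark f(x) = Σ x_i². The minimal sketch below (not the authors' test harness; the sampling budget, bounds, and `random_search` baseline are illustrative assumptions) shows how a fixed evaluation budget covers the search space ever more thinly as the dimension grows:

```python
import random

def sphere(x):
    """Standard Sphere benchmark: f(x) = sum(x_i^2), global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def random_search(dim, budget=2000, bound=5.12, seed=0):
    """Best Sphere value found by uniform random sampling with a fixed budget."""
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(budget):
        x = [rng.uniform(-bound, bound) for _ in range(dim)]
        best = min(best, sphere(x))
    return best

# The same evaluation budget yields far worse results as the dimension rises.
for d in (2, 10, 50):
    print(d, random_search(d))
```

Even this crude baseline makes the dimensionality effect visible: the 2-dimensional result is close to the optimum, while the 50-dimensional result stays far from it under the same budget.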
2.2 Complex optimization problems with multi extremum characteristics
The multi-extremum characteristic means that, in addition to the global optimum, a large number of local extremum points are distributed over the solution space, as in the Griewank function of Eq. (2):

$$f(x) = 1 + \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\!\left(\frac{x_i}{\sqrt{i}}\right) \tag{2}$$

Optimization problems with many extremum points easily cause "premature convergence" (El-Abd and Mohammed 2009), making the algorithm stop evolving too early.
After the PSO algorithm solves the five-dimensional multimodal Griewank function, the 10 particles with the best fitness are selected; their positions are shown in Table 1. None of the particles escapes the local extrema, and all stay away from the global optimal solution. Table 2 shows the motion speeds of the 10 fastest-moving particles: the speeds are close to 0, i.e., even if more computing resources are consumed, the particles lack the momentum to escape the local dilemma in multi-extremum, multi-dimensional problems.
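The behavior described above can be reproduced with a plain global-best PSO on the Griewank function. The sketch below uses the standard inertia-weight update; the parameter values (w, c1, c2, swarm size, iteration count) are illustrative assumptions, not the paper's settings:

```python
import math
import random

def griewank(x):
    """Griewank benchmark: many local minima, global minimum 0 at the origin."""
    s = sum(xi * xi for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return 1.0 + s - p

def pso(f, dim=5, swarm=30, iters=300, bound=600.0, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Plain global-best PSO with inertia weight; illustrative parameters."""
    rng = random.Random(seed)
    x = [[rng.uniform(-bound, bound) for _ in range(dim)] for _ in range(swarm)]
    v = [[0.0] * dim for _ in range(swarm)]
    pbest = [xi[:] for xi in x]          # personal best positions
    pval = [f(xi) for xi in x]           # personal best fitness values
    g = min(range(swarm), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]   # global best
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            fx = f(x[i])
            if fx < pval[i]:
                pval[i], pbest[i] = fx, x[i][:]
                if fx < gval:
                    gval, gbest = fx, x[i][:]
    return gbest, gval

best, val = pso(griewank)
```

Because Griewank's cosine product creates a lattice of local minima, runs like this frequently stall at a nonzero fitness, which is exactly the "premature convergence" that Tables 1 and 2 document.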
2.3 Complex optimization problems with variable coupling characteristics
The optimization problem with variable-coupling characteristics is also called a non-separable problem: some or all variables depend on each other, and the value of the objective function is affected both by each variable itself and by its coupled variables. A common approach to this kind of problem is to divide the large-scale coupled problem into small-scale, uncoupled subproblems, and finally reconstruct the solution of the original problem from the subproblem results.
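The divide-and-reconstruct idea can be sketched as follows; `split_groups` and `sub_objective` are hypothetical helper names, and the even, contiguous grouping of variables is an assumption for illustration:

```python
def split_groups(dim, k):
    """Partition variable indices 0..dim-1 into k equally sized, contiguous groups."""
    s = dim // k
    return [list(range(g * s, (g + 1) * s)) for g in range(k)]

def sub_objective(f, group, ref):
    """Build the sub-objective for one group: the group's variables are free,
    all other variables are fixed at the reference vector `ref`."""
    def fg(xg):
        x = ref[:]
        for idx, val in zip(group, xg):
            x[idx] = val
        return f(x)
    return fg

# Example: an 8-variable sphere split into 4 two-variable subproblems.
f = lambda x: sum(xi * xi for xi in x)
groups = split_groups(8, 4)
ref = [1.0] * 8
sub = sub_objective(f, groups[0], ref)
# Setting group 0 to its optimum (0, 0) leaves the six fixed reference terms.
```

For coupled problems the quality of `ref` matters, which is why the paper later replaces the single reference vector with a reference matrix.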
The optimization problems encountered in engineering design often involve complex characteristics. The bilevel-search particle swarm optimization method proposed in this paper takes into account the coupling relationships among variables and solves complex high-dimensional problems by reducing their dimensions, thereby improving optimization efficiency.
3 Methods
When employing the bilevel-search framework in the PSO algorithm to address large-scale complex optimization problems with coupling characteristics, several challenges need to be addressed: 1. Hierarchical handling of complex optimization problems; 2. Determination of sub-layer optimization objectives; 3. Construction of a high-level search subpopulation.
3.1 Hierarchical processing method
Hierarchical processing of large-scale complex optimization problems with variable-coupling characteristics divides the high-dimensional space layer into multiple low-dimensional combined subspace layers, realizing the splitting and dimensionality reduction of high-dimensional problems. This section mainly studies the two-layer space; spaces with more layers can be treated similarly.
When the high-dimensional solution space is divided into multiple low-dimensional subspaces, the variables within these subspaces remain coupled. In this case, we employ multiple reference vectors to facilitate information transfer between the subproblems. This approach partially restores the variable-coupling relationships disrupted by the segmentation of the solution space; as a result, it reduces computational costs and enhances the optimization efficiency of complex optimization problems.
The objective function of the bilevel-search space can be expressed as:
The sub-objective functions of the second layer of the double-layer search space can be expressed as:
where s is the dimension of each subspace, s = d/k, and c is the reference value.
Assuming that the d-dimensional problem sets the population number as n, the state of each particle can be expressed by matrix M:
Through dimensionality reduction, the coupled variables are divided into k groups, X = [X1, X2, …, Xk], and the population is likewise divided into k sub-populations, P = [P1, P2, …, Pk]; the population and variables are evenly distributed. The relationship between the population and the sub-populations can be expressed by formula (4). The hierarchical strategy is summarized in Algorithm 1.
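The even split of Algorithm 1 can be sketched as below. Interpreting the split as giving each of the k sub-populations n/k particles and its own block of s = d/k consecutive dimensions is our assumption for illustration, not a detail stated explicitly in the text:

```python
def partition(M, k):
    """Split an n-by-d state matrix M into k sub-populations: sub-population g
    holds n/k particles restricted to its own block of s = d/k consecutive
    dimensions (one plausible reading of the even split of P and X)."""
    n, d = len(M), len(M[0])
    m, s = n // k, d // k
    return [[M[g * m + i][g * s:(g + 1) * s] for i in range(m)]
            for g in range(k)]

# 6 particles, 6 dimensions, k = 3: three 2-particle, 2-dimensional blocks.
M = [[i * 10 + j for j in range(6)] for i in range(6)]
subs = partition(M, 3)
```

Each block can then be evolved independently by its sub-population, with the reference vectors of Sect. 3.3 supplying the missing dimensions during fitness evaluation.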
where s represents the number of variables in each group.
3.2 Hierarchical optimization sub-objectives
Dividing the complex optimization problem into a combination of subproblems implies a bilevel search, in which the search space is divided into two levels. The first-level search, the generalized or fine search, locks onto multiple search regions and narrows the search scope. The second-level search, the regional or rough search, performs local optimization within each subspace. The optimization results of the regional search are then fed back to the first level to obtain the optimal solution.
The first layer search is well-understood and follows a conventional multi-objective function solution method. However, the second layer search involves multiple sub-objective functions. Since these sub-objective functions are still coupled, the optimization process requires not only updating the particle positions but also adjusting the dimensionality of the subpopulations and variable combinations. The hierarchical structure is depicted in Fig. 2, while the optimization sub-objectives strategy is summarized in Algorithm 2.
3.3 Population regeneration
When dealing with an uncoupled variable optimization problem, a reference vector Y is utilized to record and update the optimal solution of each subspace. The initial reference vector is obtained by combining an arbitrary solution from each subspace and is recorded as Y = [y1, y2, …, yk]. Through the iteration of particles within each subspace, the subspace optima y*_i (i = 1, 2, …, k) are found, and the high-dimensional optimal solution is represented as Y = [y*_1, y*_2, …, y*_k].
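For this uncoupled (separable) case, the reference-vector bookkeeping can be sketched as follows; the helper names are hypothetical, and contiguous variable groups are assumed:

```python
def combine(sub_optima):
    """Concatenate the optimum of each subspace into the high-dimensional
    reference vector Y = [y1*, y2*, ..., yk*]."""
    return [v for y in sub_optima for v in y]

def update_reference(Y, group, y_new, f):
    """Replace one contiguous subspace block of Y with a candidate block and
    keep the change only if the full objective improves."""
    trial = Y[:]
    trial[group[0]:group[-1] + 1] = y_new
    return trial if f(trial) < f(Y) else Y

# Example on a separable sphere: refining group [0, 1] of a 4-variable vector.
f = lambda x: sum(v * v for v in x)
Y = update_reference([1.0] * 4, [0, 1], [0.0, 0.0], f)
```

For a separable objective, each block can be accepted or rejected independently, so the concatenated subspace optima form the global optimum.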
However, a single set of reference vectors clearly cannot accommodate high-dimensional problems with coupled variables. To address this, we introduce a reference matrix and update its worst vector every few generations. From each sub-population in the second-layer search, the top-ranked elite particles and second-ranked sub-elite particles are selected to form the sub-population reference matrix. The sub-population reference matrices of the second layer then constitute the overall reference matrix of the first layer, from which the optimal solution of the high-dimensional problem is chosen. The reference matrix at iteration m can be expressed as:
After several population iterations, the reference matrix is updated: first the reference matrix of the second-layer search is updated, then the reference vectors of each group's matrix are randomly combined, and finally the reference matrix of the first-layer search is updated. The optimal solution of the high-dimensional complex problem is obtained from this reference matrix. The population regeneration strategy is summarized in Algorithm 3.
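The reference-matrix upkeep, replacing the worst vector every few generations and randomly recombining group-wise reference vectors, can be sketched as follows (a plausible reading of the text, with hypothetical helper names):

```python
import random

def replace_worst(R, candidate, f):
    """Replace the worst reference vector in matrix R (by fitness) if the
    candidate vector is better; called every few generations."""
    worst = max(range(len(R)), key=lambda i: f(R[i]))
    if f(candidate) < f(R[worst]):
        R[worst] = candidate
    return R

def recombine(blocks, rng):
    """Randomly pick one reference vector from each group's matrix and
    concatenate them into a full-dimensional candidate solution."""
    return [v for mat in blocks for v in rng.choice(mat)]

# Example: a 3-row reference matrix on a 2-variable sphere.
f = lambda x: sum(v * v for v in x)
R = replace_worst([[2.0, 2.0], [0.5, 0.5], [3.0, 3.0]], [0.0, 0.0], f)
```

Random recombination lets good blocks found by different sub-populations meet in one candidate, which is how coupling information crosses subspace boundaries between updates.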
3.4 PSO based on bilevel-search framework algorithm flow
When dealing with high-dimensional complex optimization problems, the PSO algorithm based on the bilevel-search framework first divides the population P and the space vector X, then searches hierarchically, and finally reconstructs the global optimal solution from the optimal solutions of the subproblems. The algorithm flow is as follows:
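The flow above can be outlined as a compact, runnable sketch. Here uniform random sampling stands in for the first-layer swarm and a greedy Gaussian perturbation for the second-layer subspace search, so this illustrates only the two-layer structure, not the authors' implementation; all names and parameters are illustrative:

```python
import random

def bspso_sketch(f, dim, k, iters=200, bound=5.0, seed=0):
    """Two-layer flow: split dimensions into k groups, run a broad first-layer
    search to lock a reference vector, then refine each group locally while the
    other groups stay fixed, and return the reconstructed solution."""
    rng = random.Random(seed)
    s = dim // k
    groups = [list(range(g * s, (g + 1) * s)) for g in range(k)]
    # Layer 1: broad (fine) search over the full space locks a reference vector.
    ref = min(([rng.uniform(-bound, bound) for _ in range(dim)]
               for _ in range(iters)), key=f)
    # Layer 2: (rough) local refinement of each group, other groups fixed at ref.
    for group in groups:
        for _ in range(iters):
            trial = ref[:]
            for idx in group:
                trial[idx] = ref[idx] + rng.gauss(0.0, 0.3)
            if f(trial) < f(ref):
                ref = trial
    # Reconstruction: ref now concatenates each group's refined block.
    return ref, f(ref)

sol, val = bspso_sketch(lambda x: sum(v * v for v in x), dim=6, k=3)
```

In the full algorithm, each stand-in above is a PSO sub-population, and the reference matrix of Sect. 3.3 replaces the single reference vector used here.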
4 Numerical experiment
In this section, numerical simulation experiments are carried out to verify the performance of the bilevel-search particle swarm optimization (BSPSO) algorithm in solving LSGO problems, and it is compared with current mainstream algorithms to demonstrate its effectiveness.
4.1 Selection of test function
For the LSGO problem, this paper selects 10 general LSGO test functions (Bergh and Engelbrecht 2004; Yang et al. 2008; Li and Yao 2012; Zhang and Sanderson 2009; Hedar 2005); the relevant information of test functions f1–f10 is shown in Table 3. Among them, f1–f5 are unimodal functions and f6–f10 are multimodal functions; f1, f4 and f6 are separable problems, while f2, f3, f5 and f7 are complex variable-coupling problems (Ke et al. 2009; Yu et al. 2013).
4.2 Influence of reference matrix on algorithm performance
For the same test function, the results of multiple independent runs under different reference-matrix conditions are shown in Table 4, which lists the BSPSO-p results in turn (p is the order of the reference matrix, p = 2, 3, 5, 10, 20).
The functions f1, f2 and f4 in Table 4 are 30-dimensional, while f3 is 50-dimensional. As observed from Table 4, for the f1 function, increasing the order of the reference matrix improves the algorithm's performance, although the effect is not significant. For the f4 function, performance improves markedly as the order of the matrix increases, particularly with the 10th-order matrix. For the f2 function, the impact of the reference matrix is not ideal: it offers only moderate improvement while consuming substantial computational resources. For the 50-dimensional f3 function, increasing the order of the reference matrix enhances performance, albeit with slightly volatile data; therefore, for high-dimensional functions, multiple experiments should be conducted and the results averaged.
Figure 3 shows the bilevel-search process for the f4 function, with the number of sub-populations set to 5. After the large-scale fine search of the first layer quickly locks the search area, the rough search of the second layer further improves the search ability and locates the optimal value more accurately.
Figure 4 illustrates the bilevel-search process applied to the f3 function, with 3 subpopulations and p representing the order of the reference matrix. Through comparison, it is concluded that the search curves of the first layer are essentially the same. In the second layer search process, the p-10 search yields more accurate results, while the p-2 and p-3 searches exhibit similar performance.
For visualization, the results for the f1, f3 and f4 functions are compared in Fig. 5, which plots the fitness values. The first 1000 iterations represent the first-layer search process, while the last 1000 iterations represent the second-layer search process. It is evident that the second-layer search enhances the algorithm's accuracy.
Generally, increasing the order of the reference matrix can significantly improve the algorithm's performance. However, excessively high orders will result in substantial computational resource consumption. Therefore, the following experiments will be conducted using a reference matrix of order 5.
4.3 Comparative analysis of experimental results
We compare the results of adaptive mutation particle swarm optimization (AMPSO), an improved bare-bones particle swarm optimization based on a learning strategy (BBPSO), adaptive weight-adjustment PSO (AWPSO), and adaptive mutation bilevel-search PSO (AM_BPSO) on functions f1–f10. The results are shown in Table 5, including the average value, standard deviation, and computation time of the fitness function.
For function f1, the AM_BPSO algorithm has obvious advantages, computing accurately and stably, although its computation time is relatively long. For functions f2, f3, f7 and f8, AWPSO performs similarly to AM_BPSO, and in some cases better. For functions f4, f5, f6, f9 and f10, AM_BPSO has certain advantages in both computation time and accuracy. The experimental results for the f4 function are shown in Fig. 6.
The bilevel-search particle swarm optimization algorithm does not have significant effects on all test functions: for some low-dimensional functions it shows no advantage, but for high-dimensional complex problems it has obvious advantages, with strong problem-solving ability, fast convergence, and high calculation accuracy. While ensuring accuracy, stability, and convergence speed, it improves precision at the smallest possible computational cost.
5 Discussion and conclusion
In this paper, we propose a particle swarm optimization algorithm based on a bilevel-search framework. The first-level search, known as fine search, efficiently narrows down the search space to identify extreme regions. The second-level search, referred to as rough search, further enhances the algorithm's accuracy.
During the second-level search, we introduce a reference matrix to record and update the optimal positions of particles within each subspace. This incorporation of the reference matrix significantly accelerates the convergence speed and improves the accuracy of the optimization process. Comparing our algorithm to other optimization algorithms, the double-layer search structure demonstrates significant advantages.
By utilizing the bilevel-search framework, our proposed algorithm establishes a particle swarm optimization approach that combines fine and rough searches. This strategy leads to faster convergence and more precise optimization positions, setting it apart from other optimization algorithms.
Data availability
The datasets generated and analysed during the current study are not publicly available due to the fact that they constitute an excerpt of research in progress, but are available from the corresponding author on reasonable request.
References
Abdullah G, Hassan Z (2020) Using of particle swarm optimization (pso) to addressed reliability allocation of complex network. J Phys: Conf Ser 1664(1):1–12
Andre J, Siarry P (2001) An improvement of the standard genetic algorithm fighting premature convergence in continuous optimization. Adv Eng Softw 32(1):49–60
Azar FE, Perrier M, Srinivasan B (2011) A global optimization method based on multi-unit extremum-seeking for scalar nonlinear systems. Comput Chem Eng 35(3):456–463
Bergh F, Engelbrecht A (2004) A cooperative approach to particle swarm optimization. IEEE Trans Evol Comput 8(3):225–239
Cagliari L, Hicken JE, Mishra S (2021) Using the lss adjoint for simultaneous plant and control optimization of chaotic dynamical systems. Struct Multidiscip Optim 64(4):2361–2376
Dereli S, Köker R (2021) Strengthening the pso algorithm with a new technique inspired by the golf game and solving the complex engineering problem. Complex Intell Syst 1092(5):1515–1526
Dutta S, Gandomi A H (2020) Surrogate model-driven bio-inspired optimization algorithms for large-scale and high-dimensional problems. Biomimicry for Aerospace
Economics DO, Wisconsin UO (1996) Using randomization to break the curse of dimensionality. EconWPA
El-Abd and Mohammed (2009) Preventing premature convergence in a PSO and EDA hybrid. IEEE Congress on Evolutionary Computation, pp 3060–3066
Fu-Shiung H, Yi-Hong G (2019) A discrete cooperatively coevolving particle swarm optimization algorithm for combinatorial double auctions. Appl Intell 49(11):3845–3863
Hedar A (2005) Hedar test set. http://www-optima.amp.i.kyoto-u.ac.jp/member/student/hedar/Hedar_files/TestGO.htm
Ke T, Li X, Suganthan P N, Yang Z, Weise T (2009) Benchmark functions for the cec'2010 special session and competition on large-scale global optimization. Nat Inspired Comput Appl Lab, pp 1–18
Kerner W (1989) Large-scale complex eigenvalue problems. J Comput Phys 85(1):1–85
Kim YR, Park MS, Moore TJ, Varma S (2013) Multiple levels of metacognition and their elicitation through complex problem-solving tasks. J Math Behav 32(3):377–396
Li X, Xin Y (2012) Cooperatively coevolving particle swarms for large scale optimization. Evol Comput IEEE Trans 16(2):210–224
Li XD, Yao X (2012) Cooperatively coevolving particle swarms for large scale optimization. IEEE Trans Evol Comput 16(2):210–224
Lin H, Tang C (2021) Analysis and optimization of urban public transport lines based on multi-objective adaptive particle swarm optimization. IEEE Trans Intell Transp Syst 99(1):1–13
Liu Q (2011) Two minimal positive bases based direct search conjugate gradient methods for computationally expensive functions. Numer Algor 58(4):14357–14374
Liu Q, Cheng W (2014) A modified DIRECT algorithm with bilevel partition. J Global Optim 60(1):483–499
Liu Q, Zeng J (2015) Global optimization by multilevel partition. J Global Optim 61(1):47–69
Liu Q, Zeng J, Yang G (2015) MrDIRECT: a multilevel robust direct algorithm for global optimization problems. J Global Optim 62(1):205–227
Liu Q, Wei W, Yuan H, Zhan ZH, Li Y (2016) Topology selection for particle swarm optimization. Inf Sci 363(1):14357–14374
Liu Q, Yang G, Zhang Z, Zeng J (2017) Improving the convergence rate of the DIRECT global optimization algorithm. J Global Optim 67(1):851–872
Lučić P, Teodorović D (2011) Computing with bees: attacking complex transportation engineering problems. Int J Artif Intell Tools 12(3):375–394
Ngango J, Hong S (2022) Assessing production efficiency by farm size in Rwanda: a zero-inefficiency stochastic frontier approach, pp 2–29
Sellali M, Ravey A, Betka A, Kouzou A, Benbouzid M, Djerdir A, Kennel R, Abdelrahem M (2022) Multi-objective optimization-based health-conscious predictive energy management strategy for fuel cell hybrid electric vehicles. Energies 15(4):1–17
Skevas I, Skevas T (2021) A generalized true random-effects model with spatially autocorrelated persistent and transient inefficiency. Eur J Oper Res 293(3):1131–1142
Soheyli S, Mayam S, Hossein M, Mehrjoo M (2016) Modeling a novel CCHP system including solar and wind renewable energy resources and sizing by a CC-MOPSO algorithm. Appl Energy 184(1):375–395
Sun D, Clinet A, Bayen AM (2011) A dual decomposition method for sector capacity constrained traffic flow optimization. Transp Res Part B Methodol 45(6):880–902
Tey KS, Mekhilef S (2014) Modified incremental conductance algorithm for photovoltaic system under partial shading conditions and load variation. IEEE Trans Industr Electron 61(10):5384–5392
Tran KC, Tsionas MG (2016) Zero-inefficiency stochastic frontier models with varying mixing proportion: a semiparametric approach. Eur J Oper Res 249(3):1113–1123
Wah BW, Ma YW (2006) MANIP—a multicomputer architecture for solving combinatorial extremum-search problems. IEEE Trans Comput 33(5):377–390
Wu Z (2014) Optimization of distribution route selection based on particle swarm algorithm. Int J Simul Model 12(2):230–242
Xiang Z, Chu C, Chen H (2006) A fast heuristic for solving a large-scale static dial-a-ride problem under complex constraints. Eur J Oper Res 174(2):1117–1139
Yan Y, Zhou Q, Cheng S, Liu Q, Li Y (2021) Bilevel-search particle swarm optimization for computationally expensive optimization problems. Optimization 25(1):14357–14374
Yang ZY, Tang K, Yao X (2008) Large scale evolutionary optimization using cooperative coevolution—sciencedirect. Inf Sci 178(5):2985–2999
Yeh WC, Lin YC, Chung YY, Chih M (2010) A particle swarm optimization approach based on monte carlo simulation for solving the complex network reliability problem. IEEE Trans Reliab 59(1):212–221
Yu W, Jin H, Wei SD, Yan JC, Tian CH, Min L, Wen TM (2013) Two-stage based ensemble optimization framework for large-scale global optimization. Eur J Oper Res 228(2):308–320
Zame WR (1993) Asymptotic behavior of asset markets: asymptotic inefficiency. Gen Equilibr Growth Trade 2:203–233
Zame WR (1989) Asymptotic inefficiency. UCLA Economics Working Papers, pp 1–59
Zhang J, Sanderson AC (2009) Adaptive differential evolution with optional external archive. IEEE Trans Evol Comput 13(5):945–958
Zhang Y, Yan J, Geng W, Wu L (2010) Find multi-objective paths in stochastic networks via chaotic immune PSO. Exp Syst Appl Int J 37(3):1911–1919
Zhao FQ, Liu Y, Zhang C, Wang JB (2015) A self-adaptive harmony PSO search algorithm and its performance analysis. Expert Syst Appl 42(21):7436–7455
Funding
This work is supported by the National Natural Science Foundation of China (Grant no. 51567002, 50767001), The National High Technology Research and Development of China (Grant no. 2007AA04Z197), Specialized Research Fund for the Doctoral Program of Higher Education (Grant no. 20094501110002).
Ethics declarations
Conflict of interest
There are no known competing financial interests or personal relationships between authors that could have appeared to influence the work reported in this paper.
About this article
Cite this article
Wang, Y., Lei, Z. & Wu, J. Bilevel-search particle swarm optimization algorithm for solving LSGO problems. J Ambient Intell Human Comput 14, 16891–16901 (2023). https://doi.org/10.1007/s12652-023-04705-7