1 Introduction

In optimization problems, heuristic algorithms try to obtain an optimum or near-optimum solution by using experimental information (Murty 2003). Optimization methods often deal with hard-to-solve problems that have a large number of decision variables, many local optima, and computationally expensive calculations; in these situations, searching for solutions that minimize or maximize the given objective function as efficiently as possible can become extremely challenging (Iacca et al. 2021). Researchers frequently use optimization algorithms to solve real-world problems, and their use is becoming increasingly popular. Many metaheuristic algorithms have been developed for engineering problems in the literature (Baş and Ülker 2020a, b, c; Beşkirli 2021). The original versions of these algorithms often fall short of finding optimum values, which is why they are continually being improved. With the rapid development of big data technologies, many real-world optimization problems show an enormous increase in the number of decision variables, and large-scale optimization has become an active research field in the last decade (Ren et al. 2021). A large-scale optimization problem is one in which the number of decision variables (that is, the problem dimension) exceeds 100 (Li et al. 2021a, b).

There are many large-scale problems in the literature, for example flexible scheduling (Sun et al. 2019), gene array classification (Min et al. 2018), satellite-module layout design (Teng et al. 2010), and overlapping community detection in large-scale networks (Gao et al. 2019). These problems can be formulated as large-scale optimization problems. However, it is still difficult to solve them with traditional evolutionary algorithms. It is generally accepted that the performance of traditional evolutionary algorithms deteriorates significantly on large-scale problems (Ren et al. 2021). The main reason is that the solution space of a problem grows exponentially with increasing dimension, and it is not trivial for a traditional evolutionary algorithm to adequately explore such a large solution space within acceptable computational time (Ren et al. 2021). Large-scale optimization has attracted a large number of researchers and has driven the development of many advanced optimization algorithms in recent years. Beşkirli proposed a new Tree Seed Algorithm based on the roulette wheel strategy (R-TSA) (Beşkirli 2021). With this strategy, the trees selected at the seed production phase of TSA were diversified and the locations of the seeds were updated to prevent the algorithm from becoming stuck in local minima. Li et al. proposed a dynamic sine cosine algorithm (DSCA) in which the exploration and exploitation of SCA are dynamically balanced; they also introduced a dynamic inertia weight strategy to avoid falling into local optima (Li et al. 2021a, b). Deng et al. proposed the ranking-based biased learning swarm optimizer (RBLSO) for large-scale optimization, which contains two types of learning strategies, namely ranking paired learning (RPL) and biased center learning (BCL) (Deng et al. 2019). Deng et al.
proposed quantum differential evolution with a cooperative coevolution framework and a hybrid mutation strategy for large-scale optimization, combining the quantum computing characteristics of the quantum evolutionary algorithm (QEA) with the divide-and-conquer idea of the cooperative coevolution evolutionary algorithm (CCEA) (Deng et al. 2021). Li et al. proposed an adaptive particle swarm optimizer with decoupled exploration and exploitation for large-scale optimization; thanks to its novel balancing strategy, exploration and exploitation are managed independently and simultaneously (Li et al. 2021a, b). Ren et al. proposed an eigenspace divide-and-conquer approach for large-scale optimization (Ren et al. 2021).

Jaya is a population-based algorithm proposed by Rao for solving constrained and unconstrained continuous optimization problems (Rao 2016). It is a simple and efficient population-based metaheuristic (Kaveh et al. 2021). In addition to its simplicity, it has no algorithm-specific parameters. The most prominent feature of Jaya is that, while updating an individual's position in the search space, it uses both the individual with the best fitness value and the individual with the worst fitness value. Thus, Jaya searches both the local and the global search space, which also helps it avoid local minimum traps. Despite these advantages, the Jaya algorithm suffers from some shortcomings, such as undesirable early convergence and the possibility of getting stuck at local minima due to insufficient population diversity. Because of its advantages, Jaya has attracted the attention of many researchers, and many studies based on it have appeared in the literature. Kaveh et al. proposed an improved shuffled Jaya algorithm for sizing optimization of skeletal structures with discrete variables; the proposed algorithm uses a shuffling process to gain superior exploration capability in the search mechanism (Kaveh et al. 2021). Aslan et al. applied the basic Jaya algorithm to binary optimization problems, in which the solution space is structured separately and the decision variables are elements of the set {0, 1} (Aslan et al. 2019). Iacca et al. proposed an improved Jaya optimization algorithm with Lévy flight, applying Lévy flight to Jaya, a simple yet effective swarm intelligence optimization algorithm recently proposed in the literature (Iacca et al. 2021).
Chaudhuri and Sahu introduced a binary Jaya algorithm for the feature selection problem, proposing a hybrid filter–wrapper approach that combines the good characteristics of filter and wrapper methods (Chaudhuri and Sahu 2021). Warid proposed a recently invented metaheuristic optimization solver, the adaptive multiple teams perturbation-guiding Jaya (AMTPG-Jaya) technique, to tackle diverse single-objective optimal power flow (OPF) formulations (Warid 2020). Das et al. introduced a Jaya algorithm-based wrapper method for optimal feature selection in supervised classification (Das et al. xxxx). Caldeira and Gnanavelbabu proposed a discrete Jaya algorithm for multi-objective flexible job-shop scheduling problems (Caldeira and Gnanavelbabu 2021). Nayak et al. introduced a Jaya algorithm with mutation and an extreme learning machine-based approach for sensorineural hearing loss detection (Nayak et al. 2019). Rao and Keesari proposed a multi-team perturbation guiding Jaya algorithm for optimization of wind farm layout (Rao and Keesari 2018). Elattar and ElSayed proposed a modified Jaya algorithm for optimal power flow incorporating renewable energy sources, considering cost, emission, power loss, and voltage profile improvement (Elattar and ElSayed 2019).

When studies on Jaya are reviewed, discrete, continuous, and binary variants of Jaya have been proposed for various problems. Jaya has attracted the attention of many researchers because of its simplicity and parameterless nature. Examination of the literature shows that Jaya's performance has not been compared across different population sizes at large dimensions for constrained continuous optimization problems. In addition, different techniques (for example, the XOR gate and the filter–wrapper approach) have been developed to improve Jaya. In this study, Jaya's ability to explore the search space is improved. In addition, Jaya is applied to real-world problems (three different engineering design problems) and its success is demonstrated. The success of IJaya is also confirmed by statistical tests.

1.1 The main contribution, motivation, and organization

Jaya was proposed for low-dimensional optimization problems and has shown promising results compared to other optimization algorithms. Although the traditional Jaya algorithm can effectively solve several real-world optimization problems, it does not always find the best solution and is therefore somewhat weak by default. In this paper, the original Jaya algorithm is extended and an improved Jaya algorithm (IJaya) is proposed. In the original Jaya algorithm, each individual in the population updates its position using the positions of the individuals with the best and worst fitness values. In Jaya's random walk stage, an individual tries to approach the position of the best individual while moving away from the position of the worst individual; thus, individuals try to approach the optimum. This design emphasizes exploitation of the search space, which causes rapid convergence but prevents Jaya from discovering different optima; the original Jaya may therefore not have sufficient exploration capability.

In this study, the exploration ability of Jaya is improved by using random individuals in the random walking phase. Jaya was proposed for constrained and unconstrained optimization problems and was tested on low-dimensional problems. Real-world problems, however, often have many variables, and as the number of dimensions increases, solving them becomes harder. On large-scale optimization problems, Jaya's search space expands greatly, which reduces the probability of generating initial individuals close to the optimal position, so its performance drops sharply. Even if an optimization algorithm performs well in low dimensions, it may not perform adequately in high dimensions; therefore, the performance of an optimization algorithm should be demonstrated in both. In this paper, the performance of Jaya and IJaya is tested not only on low-dimensional but also on large-scale problems, and Jaya's ability to discover different optima in the search space is enhanced; the exploration ability is developed by applying Jaya to large dimensions for the first time. In addition, the effect of population size on performance is examined: the success of Jaya and IJaya is analyzed in detail for three different population sizes and six different low- and large-scale dimensions. The performances of Jaya and IJaya are compared with algorithms frequently used in the literature, their achievements are confirmed by statistical tests, and both algorithms are also tested on three different engineering design problems. Thus, the success of Jaya and IJaya in solving real-world problems is also demonstrated, and the results are discussed and evaluated.

The organization of the paper is as follows. The original Jaya is described in Sect. 2, and the improved Jaya (IJaya) is detailed in Sect. 3. In Sect. 4, IJaya and Jaya are tested on eighteen benchmark functions for low and large dimensions with different population sizes, the obtained results are compared with well-known heuristic methods, and the results are evaluated. In addition, Jaya and IJaya are tested on three different engineering design problems in this section.

2 The Jaya algorithm

The Jaya algorithm is a swarm-intelligence-based heuristic algorithm developed by Rao in recent years (Rao 2016). Its name comes from the Sanskrit word for ‘victory’ (Iacca et al. 2021). The Jaya algorithm has attracted attention with its simple structure, and it requires as few parameters as possible. Its most distinctive feature, which distinguishes it from other heuristic algorithms, is the random walking phase. In this phase, each individual in the population updates its current position according to the positions of the individuals with the best and worst fitness values in the population. The algorithm always tries to move toward success (i.e., reaching the best solution) and away from failure (i.e., the worst solution) (Rao 2016). Equation 1 shows the random walking stage of the Jaya algorithm (Rao 2016).

$$ X_{i,j}^{^{\prime}} = X_{i,j} + r_{1} \times \left( {Best_{j} - \left| {X_{i,j} } \right|} \right) - r_{2} \times \left( {Worst_{j} - \left| {X_{i,j} } \right|} \right) $$
(1)

where i = (1, 2,..., N) indexes the candidate solution to be processed, j = (1, 2,..., n) indexes the dimensions of the problem, Best represents the solution with the best fitness value, Worst represents the solution with the worst fitness value, and r1 and r2 are random numbers in [0, 1]. MaxFEs represents the termination condition. The work steps of Jaya are shown in Fig. 1.
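One generation of this update, together with the accept-if-better step shown in Fig. 1, can be sketched in Python (a minimal illustration assuming minimization and NumPy arrays; the function name `jaya_step` and the clipping to box bounds are our own conventions, not prescribed by the original description):

```python
import numpy as np

def jaya_step(pop, fitness, func, lb, ub, rng):
    """One generation of Jaya: apply Eq. 1 to every candidate,
    then keep each new position only if it improves the fitness."""
    best = pop[np.argmin(fitness)]    # best solution (minimization)
    worst = pop[np.argmax(fitness)]   # worst solution
    r1 = rng.random(pop.shape)
    r2 = rng.random(pop.shape)
    # Eq. 1: move toward Best and away from Worst, dimension by dimension
    new_pop = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
    new_pop = np.clip(new_pop, lb, ub)          # respect the box constraints
    new_fit = np.array([func(x) for x in new_pop])
    improved = new_fit < fitness                # greedy acceptance
    pop[improved] = new_pop[improved]
    fitness[improved] = new_fit[improved]
    return pop, fitness
```

Calling `jaya_step` in a loop of 500 iterations on a randomly initialized population reproduces the basic search procedure of Fig. 1.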

Fig. 1
figure 1

The work steps of the Jaya algorithm

3 Improved Jaya algorithm (IJaya)

The Jaya algorithm is a population-based metaheuristic proposed by Rao in 2016. Because its structure is simple and it offers effective solutions to problems, it has become a focus of many researchers. It was also designed to need as few parameters as possible; as a result, it requires only two (the population size and the maximum number of generations). The main advantage of the Jaya algorithm is its random walk phase, in which Jaya updates the positions of new candidate solutions using the positions of the population members with the best and worst fitness values. In particular, Jaya tries to approach the best position in the search space and to move away from the worst position. This feature improves the algorithm's local search capability and allows it to escape some local traps in the search space. However, the Jaya algorithm's search-space exploration ability is less developed: in some cases, the original Jaya algorithm may not find the global optimum, due to the presence of local optima that can trap the search (Iacca et al. 2021). The Jaya algorithm is thus at a disadvantage when exploring new points in the search space. Therefore, in this study the random walking phase of the Jaya algorithm is updated, and an improved Jaya (IJaya) is proposed. During the movement of candidate solutions, not only the best and worst positions of the population are used, but also the positions of random individuals. In this way, the risk of the algorithm getting stuck at a local optimum is greatly reduced while sufficient local improvement is still made, and IJaya offers a balance between exploration and exploitation.

The work steps of IJaya are the same as those of the original Jaya algorithm, shown in Fig. 1. The only difference is that Jaya updates the positions of individuals using only Eq. 1, whereas IJaya uses both Eqs. 1 and 2; which of the two is applied is determined by Eq. 3.

Although a simple change to the algorithm, this new random walk process leads to an increase in overall performance in terms of solution quality, as demonstrated in the experimental section. The main reason is that the search jumps to different local minima in the search space instead of getting stuck at one of them. Real-world problems are often large in scale, and as the size of the search space increases, the exploration capability of an optimization algorithm becomes more important. Performing the random walk through the positions of random individuals in some iterations allows IJaya to evaluate local minima across the entire search space.

$$ Y_{i,j}^{^{\prime}} = X_{i,j} + r_{3} \times \left( {X_{i,j} - \left| {X_{r,j} } \right|} \right) $$
(2)

where i = (1, 2,..., N) indexes the candidate solution to be processed, j = (1, 2,..., n) indexes the dimensions of the problem, r denotes a randomly selected solution from the population, and r3 is a random number in [0, 1].

$$ Random\;walking\;\left( {selection} \right) = \left\{ {\begin{array}{*{20}l} {X_{i,j}^{\prime} ,} \hfill & {rand \ge 0.5} \hfill \\ {Y_{i,j}^{\prime} ,} \hfill & {otherwise} \hfill \\ \end{array} } \right. $$
(3)
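Putting Eqs. 1–3 together, the IJaya random walk can be sketched as follows (an illustrative Python fragment assuming minimization; the paper does not specify whether `rand` in Eq. 3 is drawn once per generation or once per candidate, so this sketch draws it once per generation, and the clipping to bounds is our own addition):

```python
import numpy as np

def ijaya_walk(pop, fitness, lb, ub, rng):
    """IJaya random walk: Eq. 3 selects either the original Jaya
    move (Eq. 1) or the random-individual move (Eq. 2)."""
    if rng.random() >= 0.5:
        # Eq. 1: exploit, i.e. approach the best and avoid the worst
        best = pop[np.argmin(fitness)]
        worst = pop[np.argmax(fitness)]
        r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
        new_pop = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
    else:
        # Eq. 2: explore, i.e. move relative to a randomly chosen individual r
        r = rng.integers(len(pop))
        r3 = rng.random(pop.shape)
        new_pop = pop + r3 * (pop - np.abs(pop[r]))
    return np.clip(new_pop, lb, ub)
```

The returned candidates would then pass through the same greedy acceptance step as in the original Jaya.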

4 Experimental results and analysis

The improved Jaya algorithm (IJaya) and the Jaya algorithm have been tested on classical benchmark functions in low, medium, and high dimensions. In addition, the performances of Jaya and IJaya are analyzed with respect to population size. The algorithms are run under the same conditions: the population sizes (N) are 10, 25, and 50, while the dimensions (n) of the problem are 10, 20, 30, 100, 500, and 1000. The maximum number of iterations is set to 500. This parameter setup is shown in Table 1. The experiments are executed on a 2.3 GHz CPU with 4 GB RAM, and each experiment has been run twenty times.

Table 1 Parameters setup for original Jaya and IJaya
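The best, worst, mean, and standard deviation statistics reported in the result tables can be collected from twenty independent runs as sketched below (a generic Python helper; `run_once`, which stands for one complete run of either algorithm returning its final best fitness, is a placeholder of ours, not code from the paper):

```python
import numpy as np

def summarize_runs(run_once, n_runs=20):
    """Repeat an optimizer run n_runs times and collect the
    statistics used in the result tables: best, worst, mean, Std."""
    results = np.array([run_once(seed) for seed in range(n_runs)])
    return {"best": results.min(), "worst": results.max(),
            "mean": results.mean(), "std": results.std()}
```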

4.1 Benchmark functions

Eighteen different benchmark functions are used to compare the performance of the proposed algorithm with other algorithms (Surjanovic and Bingham 2019; Suganthan et al. 2005; Beşkirli 2021; Baş and Ülker 2020d, e). These functions are shown in Table 2.

Table 2 Classical benchmark functions
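As representative examples of the two groups in Table 2, the Sphere function is a classical unimodal benchmark and the Rastrigin function a classical multimodal one (shown here for illustration; which f-numbers they correspond to is defined by Table 2):

```python
import numpy as np

def sphere(x):
    """Unimodal: a single optimum, f(0) = 0."""
    return float(np.sum(np.asarray(x) ** 2))

def rastrigin(x):
    """Multimodal: many local minima; the global minimum is f(0) = 0."""
    x = np.asarray(x)
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))
```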

4.2 Comparison of the performances of the IJaya and Jaya for the population size of 10

The Jaya and IJaya algorithms are run in low, medium, and high dimensions ({10, 20, 30, 100, 500, and 1000}) for a population size of 10 and 500 iterations. Each experiment has been run twenty times. The mean, best, worst, standard deviation (Std), and rank of the results are calculated for both algorithms. Tables 3 and 4 show the results for low dimensions (10, 20, and 30), and Tables 5 and 6 show the results for large dimensions (100, 500, and 1000). The best values obtained by the algorithms are indicated in bold. Figure 2 shows the rank performances of Jaya and IJaya in terms of the mean results for the population size of 10 by dimension.

Table 3 The results of the Jaya and IJaya algorithms for the dimensions of 10, 20, and 30 in unimodal Benchmark functions (Population size = 10)
Table 4 The results of the Jaya and IJaya algorithms for the dimensions of 10, 20, and 30 in multimodal Benchmark functions (Population size = 10)
Table 5 The results of the Jaya and IJaya algorithms for the dimensions of 100, 500, and 1000 in unimodal Benchmark functions (Population size = 10)
Table 6 The results of the Jaya and IJaya algorithms for the dimensions of 100, 500, and 1000 in multimodal Benchmark functions (Population size = 10)
Fig. 2
figure 2

Rank performances of Jaya and IJaya in terms of the mean results for the population size of 10 based on dimension sizes (Dim = dimension)

Tables 3 and 4 show the best, worst, mean, and standard deviation of the fitness values of the Jaya and IJaya algorithms for low dimensions (10, 20, 30). The results show that IJaya is highly successful on the unimodal benchmark functions at low dimensions and outperforms Jaya on the multimodal benchmark functions at low dimensions. IJaya shows superior performance on 100% of the unimodal benchmark functions for dimensions 20 and 30, and on 80% of them for dimension 10. IJaya outperforms Jaya on 100% of the multimodal benchmark functions, i.e., 8 out of 8 functions (f11, f12, f13, f14, f15, f16, f17, and f18).

Tables 5 and 6 show the best, worst, mean, and standard deviation of the fitness values of the Jaya and IJaya algorithms for large dimensions (100, 500, 1000). IJaya is successful on 100% of the unimodal benchmark functions, i.e., 10 out of 10 functions (f1, f2, f3, f4, f5, f6, f7, f8, f9, and f10), for large dimensions, and it outperforms Jaya on the multimodal benchmark functions at large dimensions. IJaya shows superior performance on 87.5% of the multimodal benchmark functions, i.e., 7 out of 8 functions (f11, f13, f14, f15, f16, f17, and f18), for dimensions 500 and 1000, and on 87.5% of them, i.e., 7 out of 8 functions (f12, f13, f14, f15, f16, f17, and f18), for dimension 100.

When the results are examined in general, the performance of IJaya is higher at lower dimensions; as the dimension increases, finding optimum solutions becomes more difficult, yet IJaya remains successful. The main reason for this is its ability to explore the search space. Although Jaya is quite successful at finding local optima, it has difficulty finding global optimum points. Thanks to the newly added random walking phase, different local optima in the search space are also tested. Unimodal benchmark functions have only one optimum, whereas multimodal benchmark functions have many local optima, one of which is the global optimum. Many optimization algorithms show superior performance on unimodal benchmark functions but cannot perform adequately on multimodal ones. The experimental results show that IJaya performs well on both groups of benchmark functions, which again demonstrates its success.

4.3 Comparison of the performances of the IJaya and Jaya for the population size of 25

The Jaya and IJaya algorithms are run in low, medium, and high dimensions ({10, 20, 30, 100, 500, and 1000}) for a population size of 25 and 500 iterations. Each experiment has been run twenty times. The mean, best, worst, standard deviation (Std), and rank of the results are calculated for both algorithms. Tables 7 and 8 show the results for low dimensions (10, 20, and 30), and Tables 9 and 10 show the results for large dimensions (100, 500, and 1000). The best values obtained by the algorithms are indicated in bold. Figure 3 shows the rank performances of Jaya and IJaya in terms of the mean results for the population size of 25 by dimension.

Table 7 The results of the Jaya and IJaya algorithms for the dimensions of 10, 20, and 30 in unimodal Benchmark functions (Population size = 25)
Table 8 The results of the Jaya and IJaya algorithms for the dimensions of 10, 20, and 30 in multimodal Benchmark functions (Population size = 25)
Table 9 The results of the Jaya and IJaya algorithms for the dimensions of 100, 500, and 1000 in unimodal Benchmark functions (Population size = 25)
Table 10 The results of the Jaya and IJaya algorithms for the dimensions of 100, 500, and 1000 in multimodal Benchmark functions (Population size = 25)
Fig. 3
figure 3

Rank performances of Jaya and IJaya in terms of the mean results for the population size of 25 based on dimension sizes (Dim = dimension)

Tables 7 and 8 show the best, worst, mean, and standard deviation of the fitness values of the Jaya and IJaya algorithms for low dimensions (10, 20, 30). IJaya outperforms Jaya on 100% of the unimodal benchmark functions at low dimensions, and on 100% of the multimodal benchmark functions, i.e., 8 out of 8 functions (f11, f12, f13, f14, f15, f16, f17, and f18).

Tables 9 and 10 show the best, worst, mean, and standard deviation of the fitness values of the Jaya and IJaya algorithms for large dimensions (100, 500, 1000). IJaya is successful on 100% of the unimodal benchmark functions, i.e., 10 out of 10 functions (f1, f2, f3, f4, f5, f6, f7, f8, f9, and f10), for large dimensions, and it outperforms Jaya on the multimodal benchmark functions at large dimensions, showing superior performance on 75% of them, i.e., 6 out of 8 functions (f13, f14, f15, f16, f17, and f18).

When the results for low and large dimensions are examined, both Jaya and IJaya are more successful on unimodal benchmark functions than on multimodal ones, which shows that it is easier to find the optimum point of a unimodal function. On the other hand, IJaya is more successful than Jaya on the multimodal benchmark functions, which shows that the updated random walk phase is effective and once again demonstrates the success of the IJaya algorithm.

4.4 Comparison of the performances of the IJaya and Jaya for the population size of 50

The Jaya and IJaya algorithms are run in low, medium, and high dimensions ({10, 20, 30, 100, 500, and 1000}) for a population size of 50 and 500 iterations. Each experiment has been run twenty times. The mean, best, worst, standard deviation (Std), and rank of the results are calculated for both algorithms. Tables 11 and 12 show the results for low dimensions (10, 20, and 30), and Tables 13 and 14 show the results for large dimensions (100, 500, and 1000). The best values obtained by the algorithms are indicated in bold. Figure 4 shows the rank performances of Jaya and IJaya in terms of the mean results for the population size of 50 by dimension.

Table 11 The results of the Jaya and IJaya algorithms for the dimensions of 10, 20, and 30 in unimodal Benchmark functions (Population size = 50)
Table 12 The results of the Jaya and IJaya algorithms for the dimensions of 10, 20, and 30 in multimodal Benchmark functions (Population size = 50)
Table 13 The results of the Jaya and IJaya algorithms for the dimensions of 100, 500, and 1000 in unimodal Benchmark functions (Population size = 50)
Table 14 The results of the Jaya and IJaya algorithms for the dimensions of 100, 500, and 1000 in multimodal Benchmark functions (Population size = 50)
Fig. 4
figure 4

Rank performances of Jaya and IJaya in terms of the mean results for the population size of 50 based on dimension sizes (Dim = dimension)

Tables 11 and 12 show the best, worst, mean, and standard deviation of the fitness values of the Jaya and IJaya algorithms for low dimensions (10, 20, 30). The results show that IJaya outperforms Jaya on 100% of the unimodal benchmark functions for dimensions 10, 20, and 30, and on 100% of the multimodal benchmark functions, i.e., 8 out of 8 functions (f11, f12, f13, f14, f15, f16, f17, and f18), for the same dimensions.

Tables 13 and 14 show the best, worst, mean, and standard deviation of the fitness values of the Jaya and IJaya algorithms for large dimensions (100, 500, 1000). IJaya is successful on 100% of the unimodal benchmark functions, i.e., 10 out of 10 functions (f1, f2, f3, f4, f5, f6, f7, f8, f9, and f10), for large dimensions, and it outperforms Jaya on the multimodal benchmark functions at large dimensions: it shows superior performance on 87.5% of them, i.e., 7 out of 8 functions (f11, f13, f14, f15, f16, f17, and f18), for dimension 100, and on 75%, i.e., 6 out of 8 functions (f13, f14, f15, f16, f17, and f18), for dimensions 500 and 1000.

When the results for low and large dimensions are examined, population size 50 yields results similar to population size 25. The only difference is that IJaya is more successful on the multimodal benchmark functions at large dimensions with population size 50 than with population size 25, which shows that as the population size increases, the success of IJaya increases further.

The results show that IJaya achieved very good results on the eighteen functions in all population sizes for low dimensions (10, 20, 30), while the same performance could not be maintained on all eighteen functions for large dimensions (100, 500, 1000). The Jaya algorithm, on the other hand, achieved a better result on f11 and f12, especially for population sizes 10, 25, and 50. The f3 function was not marked in bold because both algorithms obtained the optimum result. The IJaya algorithm achieved very high success at almost all population sizes, with the best results for population sizes 25 and 50. The improved Jaya algorithm (IJaya) thus ranked better than the Jaya algorithm. This is because the exploration capability of IJaya in the search space is improved: during the random walk phase, search agents use not only the best and worst values but also the position information of a random agent of the system.

Figure 5 shows the performances of the Jaya and IJaya algorithms for different population sizes (10, 25, and 50) at low- and large-scale dimensions (10, 20, 30, 100, 500, and 1000). Four unimodal and multimodal benchmark functions (f1, f5, f14, and f15) were selected at random for the convergence graphs. The convergence graphs in Fig. 5 show that the IJaya algorithm converged much faster than the Jaya algorithm.

Fig. 5
figure 5figure 5

Convergence graphs of the f1, f5, f14, and f15 functions for the Jaya and IJaya algorithms a for dimension = 10, b for dimension = 20, c for dimension = 30, d for dimension = 100, e for dimension = 500, and f for dimension = 1000

Figure 6 shows the rank performances of Jaya and IJaya in terms of the mean results in different population sizes 10, 25, and 50 at low and large scale dimensions (10, 20, 30, 100, 500, and 1000). According to the results, as the population size increases, the success of the relevant algorithm increases.

Fig. 6
figure 6

Rank performances of Jaya and IJaya in terms of the mean results for the population size of 10, 25, and 50 a for dimension = 10, b for dimension = 20, c for dimension = 30, d for dimension = 100, e for dimension = 500, and f for dimension = 1000

Tables 15, 16, and 17 show the Wilcoxon signed-rank test results for the Jaya and IJaya algorithms at low and large dimensions, using a significance level of 0.05. The results show that there is a statistically significant difference between the results of the Jaya and IJaya algorithms; in most cases, a positive sign was obtained.
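The pairwise comparison can be reproduced with SciPy's implementation of the Wilcoxon signed-rank test (a sketch with hypothetical mean values, not the paper's data; the actual per-function results are in Tables 15, 16, and 17):

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired mean results of Jaya vs. IJaya over nine functions
jaya_means = np.array([3.2e-1, 5.1e0, 0.0, 2.4e-3, 7.8e1,
                       1.1e-2, 4.5e0, 6.7e-1, 2.0e1])
ijaya_means = np.array([1.0e-3, 2.0e-1, 0.0, 1.1e-5, 3.5e0,
                        9.0e-4, 1.2e-1, 3.0e-2, 8.8e-1])

# Pairs with identical results (e.g. both reach the optimum) are dropped
stat, p = wilcoxon(jaya_means, ijaya_means, zero_method="wilcox")
print(f"statistic={stat}, p={p:.4f}, significant={p < 0.05}")
```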

Table 15 The statistical results of the Jaya and IJaya algorithms using Wilcoxon signed rank test for dimensions of 10 and 20
Table 16 The statistical results of the Jaya and IJaya algorithms using Wilcoxon Signed Rank Test for dimensions of 30 and 100
Table 17 The statistical results of the Jaya and IJaya algorithms using Wilcoxon Signed Rank Test for dimensions of 500 and 1000

4.5 Comparison of the performances of the IJaya and the other algorithms

Case 1 In this section, alongside the results of the IJaya and Jaya algorithms, algorithms from the literature such as BA (Yang 2011) and PSO (Xinchao 2010) were used to solve the eighteen unimodal and multimodal functions. The original BA and PSO codes were taken from the Matlab library (https://www.mathworks.com). The parameter settings of the comparison algorithms are shown in Table 18, and the results obtained are compared in Tables 19, 20, and 21. All algorithms were run 20 times under the same conditions, with a maximum of 500 iterations and a population size of 25, for dimensions 10, 20, 30, 100, 500, and 1000, and the best, worst, mean, standard deviation (Std), and rank values were obtained. The best values obtained by the algorithms are indicated in bold.

Table 18 Parameters of BA, PSO, Jaya, and IJaya algorithms
Table 19 The results of the IJaya and other algorithms for the dimensions of 10 and 20
Table 20 The results of the IJaya and other algorithms for the dimensions of 30 and 100
Table 21 The results of the IJaya and other algorithms for the dimensions of 500 and 1000

According to the comparison results, the IJaya algorithm showed superior performance. The IJaya algorithm achieved the best results on 17 out of 18 benchmark functions (94.44%; all except f18) for dimensions of 10, 20, 30, and 100, and on 15 out of 18 benchmark functions (83.33%; all except f11, f12, and f18) for dimensions of 500 and 1000. The results show that the newly added random walk phase improved the performance of the Jaya algorithm.

Table 22 shows the results of the Wilcoxon signed-rank test applied to the results of the BA and IJaya algorithms for low and large dimensions, and Table 23 shows the corresponding results for the PSO and IJaya algorithms. The test was performed at a significance level of 0.05. The results show that there is a statistically significant difference between the results of the BA, PSO, and IJaya algorithms, and in most cases the sign is positive in favor of IJaya.

Table 22 The statistical results of the BA and IJaya algorithms using Wilcoxon Signed Rank Test for dimensions of 10, 20, 30, 100, 500, and 1000
Table 23 The statistical results of the PSO and IJaya algorithms using Wilcoxon Signed Rank Test for dimensions of 10, 20, 30, 100, 500, and 1000

Figure 7 shows the performances of the BA, PSO, Jaya, and IJaya algorithms with a population size of 25 at low- and large-scale dimensions (10, 20, 30, 100, 500, and 1000). Four unimodal and multimodal benchmark functions (f1, f5, f14, and f15) were randomly selected for the convergence graphs. The convergence graphs in Fig. 7 show that the IJaya algorithm converged much faster than the other algorithms. Figure 8 shows the rank performances of Jaya, IJaya, BA, and PSO in terms of the mean results for a population size of 25 across the dimension sizes. These results summarize the success of IJaya.

Fig. 7
figure 7

Convergence graphs of the f1, f5, f14, and f15 functions for the BA, PSO, Jaya and IJaya algorithms a for dimension = 10, b for dimension = 20, c for dimension = 30, d for dimension = 100, e for dimension = 500, and f for dimension = 1000

Fig. 8
figure 8

Rank performances of Jaya, IJaya, BA, and PSO in terms of the mean results for the population size of 25 based on dimension sizes (Dim = dimension)

Case 2 In this section, alongside the results obtained from the IJaya and Jaya algorithms, algorithms from the literature such as PSO, CSS, GOA, SSA, and MVO were used to solve eighteen different unimodal and multimodal functions. The results of PSO, CSS, GOA, SSA, and MVO are taken from Beşkirli (2021). The results obtained are compared in Table 24. All these algorithms were run 30 times under the same conditions, with MaxFEs = 500,000 for dimension = 100, and the best, mean, and standard deviation (Std) values were obtained. The best values obtained by the algorithms are indicated in bold.

Table 24 The results of the IJaya and other algorithms for the dimension of 100

Comparisons were made for nine benchmark functions (f1, f3, f5, f7, f9, f13, f15, f16, and f17). According to the comparison results, the IJaya algorithm achieved the best results on 6 out of 9 benchmark functions (66.67%; except for f9 and f13) for dimension = 100. With its new random walk phase, the IJaya algorithm establishes a more stable balance between exploration and exploitation.

4.6 A comparison of Jaya and IJaya on the engineering design problems

The main purpose of optimization algorithms is to minimize the values of the design parameters and the total cost of the engineering design problem (Rather and Bala 2020). Jaya and IJaya were applied to three well-known mechanical engineering design problems: the Welded Beam Design problem (WBD), the Compression Spring Design problem (CSD), and the Pressure Vessel Design problem (PVD). These problems contain both equality and inequality constraints. The penalty function method was used to handle the constraints: in this method, if the optimization algorithm violates any constraint, it is penalized with a high fitness function value (Rather and Bala 2020). The mathematical models of these problems are taken directly from Babalik et al. (2018) and Rather and Bala (2020). In this study, 30,000, 50,000, and 100,000 were selected as the maximum numbers of evaluations (MaxFEs) and 20, 40, and 60 as the population sizes, and the success of Jaya and IJaya was tested on the three engineering design problems. Thirty independent runs were carried out for each problem for both algorithms. The population size and maximum evaluation parameters were chosen to be equal to ensure a fair comparison.
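The penalty method described above can be sketched as follows: any violation of an inequality constraint g(x) ≤ 0 is added to the objective with a large coefficient, so infeasible solutions receive a poor fitness and are driven out of the population. The constraint function and penalty coefficient below are illustrative only, not the WBD/CSD/PVD models from the cited sources.

```python
# Hedged sketch of the penalty function method for constraint handling.
# Toy objective and constraint; the real design problems use the models
# from Babalik et al. (2018) and Rather and Bala (2020).

def penalized_fitness(objective, x, inequality_constraints, penalty=1e6):
    """Objective value plus a large penalty per violated constraint g(x) <= 0."""
    violation = sum(max(0.0, g(x)) for g in inequality_constraints)
    return objective(x) + penalty * violation

# Toy example: minimize x0 + x1 subject to g(x) = 1 - x0*x1 <= 0.
obj = lambda x: x[0] + x[1]
g1 = lambda x: 1.0 - x[0] * x[1]

feasible = penalized_fitness(obj, [2.0, 2.0], [g1])    # g = -3.0 -> no penalty
infeasible = penalized_fitness(obj, [0.5, 0.5], [g1])  # g = 0.75 -> penalized
print(feasible, infeasible)  # 4.0 750001.0
```

With this wrapper, an unconstrained optimizer such as Jaya can be applied to the constrained problems unchanged: it simply minimizes the penalized fitness instead of the raw objective.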

Table 25 compares the Jaya and IJaya algorithms for population sizes of 20, 40, and 60 on the engineering design problems for MaxFEs = 30,000; Table 26 does so for MaxFEs = 50,000, and Table 27 for MaxFEs = 100,000. Table 28 shows the results of the Wilcoxon signed-rank test applied to the results of the Jaya and IJaya algorithms on the engineering design problems. Table 29 compares IJaya and other algorithms on the engineering design problems. Successful results are marked in bold in Tables 25, 26, 27, and 29.

Table 25 Comparing Jaya and IJaya algorithms for population size = 20, 40, and 60 on various engineering design problems (MaxFEs = 30,000)
Table 26 Comparing Jaya and IJaya algorithms for population size = 20, 40, and 60 on various engineering design problems (MaxFEs = 50,000)
Table 27 Comparing Jaya and IJaya algorithms for population size = 20, 40, and 60 on various engineering design problems (MaxFEs = 100,000)
Table 28 The results of the Wilcoxon Signed-Rank Test on the results of Jaya and IJaya algorithms on various engineering design problems
Table 29 Comparing IJaya and other algorithms on various engineering design problems

When the results are examined, the success of both Jaya and IJaya increased as the population size increased, and likewise better results were obtained as MaxFEs increased. According to the results, IJaya produced better solutions than Jaya, so the success of IJaya was confirmed again on the engineering design problems. IJaya achieved this success by improving not only its local search but also its global search capability in the search space. When IJaya and Jaya are compared with other algorithms from the literature, their results are satisfactory.

5 Conclusion

In this paper, the Improved Jaya algorithm (IJaya) is proposed for global continuous optimization. A random walk phase has been added to the original Jaya algorithm, and through it the balance between exploration and exploitation is controlled. This modification has a significant impact on performance because it allows the algorithm to escape local optima by making occasional "jumps" in the search space while preserving Jaya's original simple algorithmic philosophy. This behavior is based on the use of randomly selected individuals during the random walk phase.
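The idea can be illustrated with a short sketch. The first function is the standard Jaya update rule (attraction to the best solution, repulsion from the worst). The second is one possible form of a random-walk step that jumps toward a randomly selected individual; its exact formula here is an assumption for illustration only, since the precise IJaya update is defined earlier in the paper.

```python
# Illustrative sketch: the standard Jaya move plus an assumed random-walk
# step using a randomly selected individual. Not the paper's exact IJaya
# formula; intended only to show the mechanism in the conclusion above.
import random

def jaya_update(x, best, worst):
    """Standard Jaya move: attracted to the best, repelled from the worst."""
    return [xi + random.random() * (b - abs(xi)) - random.random() * (w - abs(xi))
            for xi, b, w in zip(x, best, worst)]

def random_walk_step(x, population):
    """Assumed random-walk phase: jump toward a randomly chosen individual."""
    partner = random.choice(population)
    return [xi + (2.0 * random.random() - 1.0) * (pi - xi)
            for xi, pi in zip(x, partner)]

random.seed(42)
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(5)]
best, worst = min(pop, key=sum), max(pop, key=sum)  # stand-in fitness: sum
candidate = jaya_update(pop[0], best, worst)
print(len(candidate))  # same dimensionality as the input solution
```

Because the walk partner is chosen at random, the step occasionally moves a solution far from the current best, which is what provides the escape from local optima while keeping the parameter-free character of Jaya.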

The performance of the proposed IJaya has been assessed on eighteen unimodal and multimodal benchmark functions. IJaya has been tested not only on low-scale dimensions but also on large-scale dimensions (Dimension = {10, 20, 30, 100, 500, 1000}). In addition to different dimensions, the success of IJaya with different population sizes is shown in this paper. Real-world problems are mostly large-scale, and the success of optimization algorithms changes as the problem dimension increases; for this reason, the performance of Jaya and IJaya was tested not only in low dimensions but also in higher dimensions. Thus, the success of IJaya has been demonstrated on problems of different dimensions. The success of IJaya has also been compared with the PSO, BA, CSS, GOA, MVO, and SSA algorithms, in addition to the original Jaya algorithm, on low- and large-scale optimization problems. Furthermore, Jaya and IJaya have been applied to three well-known mechanical engineering design problems: the Welded Beam Design problem (WBD), the Compression Spring Design problem (CSD), and the Pressure Vessel Design problem (PVD). These problems contain both equality and inequality constraints. Jaya and IJaya were tested in detail on the engineering design problems at three different population sizes (N = 20, 40, and 60) and three different evaluation budgets (MaxFEs = 30,000, 50,000, and 100,000). The comparison results also showed that IJaya performs superiorly and can be used as an alternative algorithm for constrained and unconstrained continuous optimization. These superior results confirm the advantage of the added random walk phase in escaping local optima. Apart from the specific numerical results reported here, one of the main advantages of IJaya is its simplicity: IJaya is much simpler to implement (it has only two parameters, i.e., the population size and the number of generations) than most of the compared state-of-the-art algorithms.

In future research, we plan to test the success of the Jaya algorithm not only on continuous optimization but also on discrete and binary optimization problems, as well as by hybridizing it with different algorithms.