1 Introduction

The Traveling Salesman Problem (TSP) is a classical optimization problem, surveyed early on by Bellmore and Nemhauser (1968), that has gained recognition in the fields of graph theory and operations research. The problem entails finding the shortest route for a salesman to visit a set of cities exactly once before returning to the starting point. However, as the number of cities increases, the complexity of the TSP grows exponentially, and the search for optimal solutions becomes computationally intractable: the problem belongs to the NP-hard class (Garey and Johnson 1979).

Despite its notorious difficulty and classification as NP-hard, the TSP is one of the most widely studied problems in computational mathematics, with applications in various domains including vehicle routing (Adewumi and Adeleke 2018), computer wiring, engineering design (Li et al. 2022), machine sequencing (Raj and Bhattacharyya 2018; Mahapatra et al. 2021; Dehedkar and Raj 2022; Mahapatra and Raj 2023; Raj et al. 2023), scheduling (Bansal and Singh 2022), and frequency assignment in communication networks.

Based on the structure of the distance matrix, the TSP is divided into two types: symmetric and asymmetric. In the Symmetric Traveling Salesman Problem (STSP), the distance between any two cities is the same regardless of the direction of travel, whereas in the Asymmetric Traveling Salesman Problem (ATSP), the distance between two cities may differ depending on the direction of travel. This study focuses on solving the ATSP. Mathematically, the ATSP allows \(d(i, j) \ne d(j, i)\), where d(i, j) denotes the distance from city i to city j and d(j, i) denotes the distance from city j to city i.

The primary objective of the ATSP is to find the most efficient tour that allows a traveling salesman to visit each city exactly once and return to the starting point, taking the asymmetric distances into account. Exact methods such as branch and cut (Ascheuer et al. 2000) exist in the literature for solving ATSP instances, but they have exponential worst-case complexity and require prohibitive computation time or memory to obtain the optimal solution on larger instances.

In contrast to exact algorithms, new metaheuristic approaches are being utilized to address the NP-hard ATSP. Some of these approaches include the genetic algorithm (Nagata and Soler 2012), harmony search algorithm (Boryczka and Szwarc 2019), discrete bat algorithm (Osaba et al. 2016), discrete water cycle algorithm (Osaba et al. 2018) and discrete mayfly algorithm (Zhang et al. 2023).

Even though these metaheuristic algorithms have been applied to the ATSP, there is a gap in the existing literature when it comes to comparing various variants of the genetic algorithm (GA) against the standard GA on real-life instances of the ATSP. This research gap highlights the need for a comprehensive study that systematically evaluates and compares the performance of various GA variants on a real-life ATSP.

Genetic algorithms (Goldberg 1989; Deepa 2008; Katoch et al. 2021) belong to the larger class of evolutionary algorithms inspired by biological evolution. Holland’s GA (1970) is a powerful optimization technique that generates solutions to optimization problems using mechanisms of natural selection and genetics. These algorithms are typically quite simple and have a quick processing time. As a result, genetic algorithms and their variants are appropriate candidates for an NP-hard problem such as the ATSP. That is why this study focuses on evaluating the performance of the simple GA and its variants, namely the adaptive genetic algorithm (AGA), the binary-coded genetic algorithm (BGA), and the real-coded genetic algorithm (RCGA). This evaluation is carried out on a real-life ATSP involving the 22 districts of Haryana, India.

The main contributions of this research are as follows:

  • A novel real-life ATSP problem is formulated by using 22 districts of Haryana, India.

  • Several variants of the GA have been applied to the formulated ATSP problem in MATLAB.

  • One-point crossover and exchange mutation are the two genetic operators used in this study.

  • A comparative analysis has been carried out to evaluate the performance of GAs on different parameters such as the size of the population, the number of iterations, and the rate of crossover.

  • The results show that the binary-coded genetic algorithm performed better with respect to population size and number of iterations, while the real-coded genetic algorithm performed better with respect to the crossover rate.

The subsequent sections of the paper are structured as follows: Sect. 2 explains the related work on the ATSP problem. Section 3 introduces the real-life ATSP problem. Section 4 presents the mechanism of the genetic algorithms. Section 5 presents the genetic operators of GAs. Section 6 presents the implementation of GAs on formulated ATSP. Section 7 contains the results and discussion. Finally, Sect. 8 presents the conclusion of the paper.

2 Related work

Over the past fifty years, researchers have developed numerous algorithms, both exact and heuristic, to solve the TSP. Balas and Christofides (1981) proposed the restricted Lagrangean algorithm, which solves the TSP by judiciously adding linear inequalities to the constraint set. Deep et al. (2018) modeled a traveling salesman problem involving seven cities connected by Indian railways. To solve this problem, a genetic algorithm (GA) was employed with the fourth variant of order crossover (OX4), as proposed in Deep and Mebrahtu’s work, together with two mutation operators, namely inversion mutation and inverted displacement mutation.

For more surveys on solution methods for the TSP, the reader may refer to Fiechter (1994), Carpaneto et al. (1995), Potvin (1996), and Larranaga et al. (1999). We also strongly recommend Tawhid and Savsani (2019), Akhand et al. (2020), Li et al. (2023), Rocha and Subramanian (2023), and Mzili et al. (2023). These algorithms encompass iterative improvement methods, construction procedures, branch-and-bound exact algorithms, as well as popular meta-heuristic approaches like Ant Colony (AC), Genetic Algorithm (GA), Tabu Search (TS), Sine-Cosine Algorithm and Artificial rat optimization algorithm.

Exact and metaheuristic algorithms have been proposed for both the symmetric and asymmetric TSP, but this study focuses on the ATSP. Pekny and Miller (1990) introduced a parallel branch and bound algorithm to solve the ATSP. The algorithm employs various techniques, including an assignment-problem-based lower bounding technique, subtour elimination branching rules, and a subtour patching algorithm for upper bounding. Ascheuer et al. (2000) introduced a branch and cut algorithm for the ATSP with precedence constraints. Apart from these exact algorithms, researchers have also successfully applied metaheuristic methods to solve the ATSP.

Buriol et al. (2004) proposed a new memetic algorithm designed specifically to solve ATSP. The algorithm includes a new and effective local search called the recursive arc insertion (RAI) mechanism along with various innovative features. These features encompass a complete ternary tree structure with thirteen nodes to organize the population topologically, a hierarchical organization of overlapping clusters leading to a unique selection scheme, and the implementation of efficient data structures.

Majumdar and Bhunia (2011) introduced a novel GA approach to address a realistic version of the ATSP focused on time minimization. The problem incorporates inter-city travel times represented as intervals, and the approach combines a local GA (LGA) and a global GA (GGA). The GGA searches for global optima within the main tours, while the LGA is applied to randomly selected sub-tours derived from the main tour obtained by the GGA. This use of the LGA allows the exploration of locally optimal solutions within the sub-tours, contributing to improved overall performance. Nagata and Soler (2012) proposed an adaptation of the edge assembly crossover (EAX) operator to modify the GA and applied it to an ATSP problem.

Osaba et al. (2018) introduced a discrete version of the Water Cycle Algorithm (DWCA) tailored for efficiently solving two well-known optimization problems: TSP and ATSP. DWCA retains its inspiration from hydrological phenomena but incorporates novel elements to address these routing problems effectively. The algorithm uses the Hamming distance to measure differences between routes found during the search process, adapts the movement function based on the estimated inclination of the river, and employs an insertion-based mutation operator that emulates evaporation and raining processes in the discrete solution space encoded by permutations.

Zhang et al. (2023) proposed a Discrete Mayfly Algorithm (DMA), a swarm-based metaheuristic, specifically designed for the spherical ATSP. The DMA utilizes various operators, including inver-over, crossover, and 3-opt, to simplify parameters, enhance population diversity, and improve local search capabilities.

3 Problem statement

The TSP has numerous real-world applications across various domains, including logistics, transportation, manufacturing, and network routing. In road networks, it is common for the travel distance between two locations to differ in opposite directions due to factors like road conditions, traffic flow, and road restrictions. This asymmetry in travel distances creates a more complex and challenging problem, making it essential to explore specialized algorithms for such real-life scenarios. In the specific case considered here, the ATSP over the 22 districts of Haryana, India has different distances between nodes in the two directions, which is of great importance in scenarios such as logistics, transportation planning, and resource allocation.

The state of Haryana, with its 22 districts, represents a significant geographical area with various important locations that require efficient visitation by a traveling salesman. By formulating this problem as an ATSP and using real-world data extracted from Google Maps, we can model and evaluate the most efficient routes for the salesman to visit all the districts while minimizing the overall travel distance. This analysis can lead to insights and decision-making tools that can benefit businesses, government agencies, and other organizations operating in Haryana. With this context in mind, the 22 districts of Haryana, India are modeled as an ATSP in this study.

Table 1 The formulated asymmetric travelling salesman problem in matrix form

Each district is represented by a node numbered from 1 to 22. The shortest distance from node 1 to node 2 is not necessarily the same as the distance from node 2 to node 1. In the matrix notation of this ATSP, the entries above the diagonal represent the maximum distance between the corresponding nodes and the entries below the diagonal represent the minimum distance between them. Faridabad, Gurugram, Panchkula, Karnal, Hisar, Panipat, Ambala, Kurukshetra, Rohtak, Jind, Rewari, Jhajjar, Kaithal, Bhiwani, Fatehabad, Sirsa, Yamunanagar, Sonipat, Palwal, Mahendergarh, Charkhi Dadri, and Nuh are the 22 districts of Haryana, represented by nodes 1 to 22 respectively. The data used in the present study were taken from Google Maps and are presented in Table 1. By exploring this formulated problem, we aim to compare different GA variants and assess their performance in finding the most efficient route for a salesman to visit all the districts of Haryana, India while minimizing the overall travel distance.
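To make the asymmetry concrete, the sketch below (in MATLAB, the environment used later in Sect. 7) shows how such a distance matrix can be stored and queried; the variable names and the placeholder values are ours, not taken from Table 1:

```matlab
% D is a 22-by-22 asymmetric distance matrix in the spirit of Table 1,
% with D(i,j) holding the road distance from district i to district j.
n = 22;
D = randi([30 350], n, n);     % placeholder values only; replace with the Table 1 data
D(1:n+1:end) = 0;              % zero diagonal: no travel from a district to itself

% Asymmetry: the distance from i to j need not equal the distance from j to i.
i = 1; j = 2;                  % node 1 (Faridabad) and node 2 (Gurugram)
fprintf('d(%d,%d) = %d, d(%d,%d) = %d\n', i, j, D(i,j), j, i, D(j,i));
```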

4 Genetic algorithms

In this section, we discuss the genetic algorithm (or simple genetic algorithm) and some of its variants, i.e., the adaptive genetic algorithm, the binary-coded genetic algorithm, and the real-coded genetic algorithm. Since GAs are among the most popular metaheuristics, we introduce only their basic working principles.

4.1 Simple genetic algorithm (SGA)

The mechanism of the simple genetic algorithm is straightforward, involving copying and exchanging bits of partial strings. Reproduction, crossover, and mutation are the three operations of the algorithm. Depending on the fitness values of the chromosomes, a reproduction operator is applied to select parent chromosomes (strings). Offspring are then produced by applying a crossover operator to the parent chromosomes, and a mutation operator is used to keep the population diverse. Pseudocode summarising the steps of the SGA is given below:

(Pseudocode of the simple genetic algorithm.)
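As an illustration of these steps, the following MATLAB sketch outlines one possible SGA loop for the formulated ATSP, under stated assumptions: D is the Table 1 distance matrix, and tourLength, rouletteSelect, onePointCrossover, and exchangeMutation are hypothetical helper names corresponding to the operators described in Sects. 5 and 6 (sketches of these helpers are given in Sect. 6.1).

```matlab
% Minimal SGA loop sketch (not the authors' exact code); popSize is assumed even.
popSize = 100; maxIter = 100; pc = 0.9; pm = 0.05; n = 22;
pop = zeros(popSize, n);
for k = 1:popSize, pop(k,:) = randperm(n); end        % random valid tours

for iter = 1:maxIter
    fit = zeros(popSize, 1);
    for k = 1:popSize, fit(k) = tourLength(pop(k,:), D); end
    newPop = zeros(popSize, n);
    for k = 1:2:popSize                               % build the next generation in pairs
        p1 = rouletteSelect(pop, fit);                % reproduction
        p2 = rouletteSelect(pop, fit);
        if rand < pc, [p1, p2] = onePointCrossover(p1, p2); end   % crossover
        if rand < pm, p1 = exchangeMutation(p1); end              % mutation
        if rand < pm, p2 = exchangeMutation(p2); end
        newPop(k,:) = p1; newPop(k+1,:) = p2;
    end
    pop = newPop;
end
```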

4.2 Binary coded genetic algorithm (BGA)

Depending upon the representation of the chromosomes, genetic algorithms can be classified into binary-coded and real-coded genetic algorithms. The chromosomes in a binary-coded genetic algorithm are encoded as strings of bits 0 and 1, and the length of the bit string is kept fixed. Suitable crossover and mutation operators are then applied to the population. To perform mutation, a random number is generated for each bit of the string, and the bit’s value is swapped from 0 to 1 and vice versa. After the solutions are obtained, they are converted back into actual values. For more details about the algorithm, see Kim et al. (2002) and Mohebifar (2006).
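As a brief illustration of this encoding (our sketch, not the cited implementations), a parameter can be mapped to a fixed-length bit string, mutated by bit flips, and decoded back to an actual value; the bit length and value range below are assumptions:

```matlab
% Encode an integer parameter in [1, 32] as a fixed-length 5-bit string,
% apply bit-flip mutation, and decode it back to an actual value.
nBits = 5; pm = 0.05;
value = 13;
chrom = dec2bin(value - 1, nBits) - '0';   % e.g. 13 -> [0 1 1 0 0]

flip = rand(1, nBits) < pm;                % one random number per bit
chrom(flip) = 1 - chrom(flip);             % swap 0 for 1 and vice versa

decoded = bin2dec(char(chrom + '0')) + 1;  % back to the actual value
```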

4.3 Real coded genetic algorithm (RCGA)

The real-coded genetic algorithm (Goldberg 1991) has strings made of real numbers, unlike those of the binary-coded genetic algorithm, and applies the reproduction, crossover, and mutation operators to real vectors. Hence, the representation of the chromosomes is very close to the natural formulation of many problems, and the use of real parameters allows large and continuous domains to be searched easily, as the RCGA works well in continuous spaces. The chromosomes in the RCGA are bounded depending on the variables they represent. For more details about the algorithm, see Eshelman and Schaffer (1993), Singh et al. (2015), Ali et al. (2018), and Wang et al. (2019).

4.4 Adaptive genetic algorithm (AGA)

The basic idea behind the adaptive genetic algorithm (Lin 2009) is to prevent premature convergence arising from a lack of diversity in the population and an unbalanced exploration–exploitation rate. In the AGA, parameters such as population size, crossover rate, and mutation rate are varied while the GA is running so as to maintain diversity and balance exploitation and exploration in the population. For more details about the algorithm, see Wang et al. (2008) and Saptarini et al. (2020).
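One widely used adaptation rule (shown here only as an illustrative sketch; the cited works may use different schemes) shrinks the crossover and mutation probabilities for better-than-average individuals and keeps them high otherwise, which preserves good tours while still perturbing poor ones. For a minimization problem such as the ATSP:

```matlab
% Adaptive crossover/mutation rates for minimization (illustrative scheme).
% fit  : vector of tour lengths of the current population (assumed available)
% fInd : tour length of the individual (or pair) being processed
fMin = min(fit); fAvg = mean(fit);
pcMax = 0.9; pmMax = 0.1;
if fInd <= fAvg                                   % better than average: smaller rates
    pc = pcMax * (fInd - fMin) / max(fAvg - fMin, eps);
    pm = pmMax * (fInd - fMin) / max(fAvg - fMin, eps);
else                                              % worse than average: full rates
    pc = pcMax;
    pm = pmMax;
end
```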

5 Genetic operators

In this section, we discuss the essential genetic operators used in genetic algorithms: encoding, selection, crossover, and mutation. Each of these operators plays a crucial role in shaping the algorithm’s efficiency, convergence, and exploration of the solution space. By applying these operators intelligently, we aim to find the most efficient routes for a traveling salesman to visit all the districts, thereby minimizing the overall travel distance and optimizing the solution.

5.1 Encoding or design the chromosome

A population is a collection of chromosomes. Each chromosome represents a possible solution and contains several genes. An algorithm’s efficiency and processing requirements depend on the encoding of the problem, so designing a chromosome to represent a problem is a significant architectural decision. The primary focus is therefore on designing the chromosomes, since the representation has a considerable influence on processing speed, convergence, and the ease of crossover and mutation. Besides being an effective and efficient way to encode parameters into chromosomes, the underlying shape of the representation is also crucial. The values of a chromosome are the parameters of a solution and must always be accompanied by an evaluation process that uses those parameters to determine the system’s outcome. Given the values of the chromosome’s parameters, the fitness function measures how much better or worse the corresponding solution is.

5.2 Selection operator

The selection operator chooses the best possible chromosomes from the existing population to create the next generation. In different situations, a distinct selection procedure is used to pick the fittest individuals, parents, or chromosomes for crossover or mutation.

5.3 Crossover

Crossover is the process of creating new solutions from existing ones; it is a form of sexual reproduction. To generate superior offspring, two different strings are randomly selected from the mating pool and recombined. The crossover approach chosen depends on the encoding method.

5.4 Mutation

Mutation is a small random change in a chromosome that produces a new solution. Crossover alone often provides too little variability, making it difficult to fully explore the underlying solution space. To maintain diversity in the population, a mutation operator is used.

6 Implementation

In Sect. 4, the basic ideas of genetic algorithms were introduced. In this section, we discuss the actual working of the algorithms and the time and space complexity of the genetic algorithms used to solve the formulated ATSP problem.

6.1 Basic steps of algorithms

The various steps of implementation of genetic algorithms to our real-life problem are as follows:

Generate the initial population Generate a random initial population of potential solutions, where each chromosome represents a valid tour visiting all 22 districts exactly once. Each chromosome is a permutation of the numbers from 1 to 22, representing the order of city visits.
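A short MATLAB sketch of this step (variable names are ours, not the authors’):

```matlab
% Each row of pop is a random permutation of 1:22, i.e. a valid tour
% visiting every district exactly once.
popSize = 100; n = 22;
pop = zeros(popSize, n);
for k = 1:popSize
    pop(k,:) = randperm(n);
end
```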

Fitness Value Define the fitness function to calculate the total distance of each tour in the population. The fitness value of each individual (chromosome) is the sum of the distances between consecutive cities of the tour, read from the distance matrix, plus the closing leg back to the starting city. The fitness value therefore represents the length of the tour and is used to compute the selection score. The fitness value can be calculated using the formula,

$$\begin{aligned} F_{{Individual}} = \sum _{k=1}^{21} a_{\pi (k)\,\pi (k+1)} + a_{\pi (22)\,\pi (1)} \end{aligned}$$
(1)

where \(\pi\) denotes the tour (a permutation of the 22 districts) and \(a_{ij}\) is the distance between the two adjacent cities i and j of the tour.
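In code, the tour length of a single chromosome can be evaluated as in the sketch below, where D is the Table 1 distance matrix and tour is a permutation of 1:22 (function and variable names are ours):

```matlab
function len = tourLength(tour, D)
% Sum the distances between consecutive districts of the tour,
% including the closing leg back to the starting district.
    len = 0;
    for k = 1:numel(tour) - 1
        len = len + D(tour(k), tour(k+1));
    end
    len = len + D(tour(end), tour(1));   % return to the starting point
end
```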

Selection Operator The roulette wheel selection operator is used in this work for selecting the parent chromosomes. The fitness value and the corresponding selection probability of every individual are calculated, and each individual is assigned a slice of the wheel whose size is proportional to its reversed fitness value. Since the objective is minimization, the fitness is reversed so that an individual with a shorter tour has a higher chance of being selected. The wheel slice of the kth chromosome is calculated as

$$\begin{aligned} W_k=\frac{F^R_k}{\sum _{j=1}^{N} F^R_{j}} \end{aligned}$$
(2)

where

$$\begin{aligned} F^R_k = (F_{\textrm{max}} - F_k) + 1, \end{aligned}$$
(3)

\(W_k\) is the wheel slice of the kth chromosome, \(F^R_k\) is its reversed-magnitude fitness, \(F_k\) is the fitness (tour length) of the kth chromosome, \(F_{\textrm{max}}\) is the maximum fitness over all chromosomes in the population, and N is the number of chromosomes in the population.
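A MATLAB sketch of the reversed-fitness roulette wheel defined by Eqs. (2) and (3), under the assumption that shorter tours should receive larger slices (the function name is ours):

```matlab
function parent = rouletteSelect(pop, fit)
% Reversed-magnitude roulette wheel: shorter tours (smaller fit) get larger slices.
    FR = (max(fit) - fit) + 1;            % Eq. (3)
    Ws = FR / sum(FR);                    % Eq. (2): slice of the wheel per chromosome
    idx = find(rand <= cumsum(Ws), 1);    % spin the wheel once
    parent = pop(idx, :);
end
```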

Crossover In this work, a one-point crossover operator is used to create offspring. In this operator, one cut point is selected randomly at the same position in both parent chromosomes, and the sections to the right (left) of the cut point are exchanged, resulting in two offspring.
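A plain one-point crossover on permutation chromosomes can duplicate cities, so a repair step is normally required; the paper does not describe one, so the sketch below makes an assumption: the section before the cut is kept from one parent and the remaining cities are filled in the order they appear in the other parent.

```matlab
function [c1, c2] = onePointCrossover(p1, p2)
% One-point crossover for tours with a simple repair: copy the segment before
% the randomly chosen cut point and fill the rest from the other parent,
% skipping cities already present, so both offspring remain valid tours.
    n = numel(p1);
    cut = randi(n - 1);
    c1 = [p1(1:cut), p2(~ismember(p2, p1(1:cut)))];
    c2 = [p2(1:cut), p1(~ismember(p1, p2(1:cut)))];
end
```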

Mutation Exchange mutation (Banzhaf 1990) is used in this work to avoid premature convergence by introducing new paths. The exchange mutation operator randomly selects two cities in the tour and exchanges them.
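The corresponding sketch (the function name is ours):

```matlab
function tour = exchangeMutation(tour)
% Randomly pick two distinct positions in the tour and swap their cities.
    pos = randperm(numel(tour), 2);
    tour([pos(1) pos(2)]) = tour([pos(2) pos(1)]);
end
```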

Termination condition The termination condition (or stopping criterion) stops the process of the genetic algorithm. The algorithm finds better solutions in the early iterations, but improvement tends to stall in the later phases when the changes are no longer as large. So, we need to stop the process while making sure that the solution is close to the best one. In most cases, one of the following criteria is used for termination:

  (a) When there hasn’t been any change in the population for a while.

  (b) When we have reached a certain, predetermined number of generations.

  (c) When the objective function has attained a certain value that has already been set.

The number of iterations is used as a termination condition in this study.

6.2 Time and space complexity

The time complexity of the SGA, following the steps mentioned above, is as follows:

Initialization The initialization step involves setting up the parameters such as population size, crossover probability and mutation probability. So, it has a constant time complexity O(1).

Generate an Initial Population Generating the initial population involves creating a fixed number of candidate solutions, which typically has a linear time complexity O(n).

Calculate the Objective Function Evaluating the objective function for each candidate solution in the population requires calling the function n times. Thus, the time complexity for this step is O(n).

Selection Process Fitness proportionate selection or roulette wheel selection can be implemented efficiently with a linear time complexity of O(n). The process involves calculating the cumulative fitness and selecting individuals accordingly.

Crossover Operator The time complexity of a one-point crossover operator depends on the size of the candidate solution representation and hence the time complexity can be approximated as O(n).

Mutation Operator The exchange mutation operation also depends on the size of the candidate solution representation. So, the time complexity can be approximated as O(n).

Update Best Candidate Solution Updating the best candidate solution involves comparing the fitness of newly generated individuals to determine the best. This step has a linear time complexity O(n).

Output Best Candidate Solution The output step has a constant time complexity O(1) as it involves returning the best candidate solution found.

The overall time complexity of the SGA can be approximated as the sum of the complexities of each step, repeated over the iterations. So, the SGA’s overall expected time complexity is O(\(n^2\)).

Similarly, the overall time complexity of the AGA, BGA and RCGA can be approximated as O(\(n^2\)).

7 Results and discussion

In this section, we evaluate the performance of the SGA, AGA, BGA, and RCGA approaches in obtaining the optimal solution. The algorithms were implemented in MATLAB R2021b, and their performance was tested on a laptop with an Intel Core i5-11300H CPU @ 3.10 GHz and 16 GB of RAM. The performance of these four algorithms is analyzed by varying parameters such as population size, crossover rate, and number of iterations. The results obtained by these algorithms for the various parameter settings are listed in Table 2. Each algorithm was run 20 times to obtain the minimum distance (best value), the mean value, the standard deviation (S.D.), and the estimated time (ET).

In evolutionary computation, one of the most important factors to consider is an algorithm’s population size (Mora-Melia et al. 2017), as it has a large effect on how well and how quickly an algorithm works. It is therefore important to investigate the performance of the genetic algorithms (SGA, AGA, BGA, RCGA) at different population sizes with fixed crossover and mutation rates. Figure 1 shows the performance of the GAs at different population sizes. The experiments use a fixed crossover rate of 90%, a mutation rate of 5%, and 100 iterations to evaluate the influence of population size on these algorithms.

As the population size (p) increases, the minimum distance found by each algorithm also increases. For the SGA at p = 100, 150, 200, and 300, the minimum distance is 2139.2, 2143.45, 2150.8, and 2235.9, respectively. For the BGA at p = 100, 150, 200, and 300, the minimum distance is 2002.05, 2019.1, 2067.6, and 2111.7, respectively. Similarly, in the AGA and RCGA, the minimum distance grows with the population size. The behavior of all these algorithms therefore appears similar in that the minimum distance increases with population size, which means that the smallest population size should be used when searching for the minimum distance. Therefore, the performance of these algorithms is compared at a population size of 100 to see which one provides the better optimal solution. At p = 100, the minimum distances of the SGA, AGA, BGA, and RCGA are 2139.2, 2151.6, 2002.05, and 2076.55, respectively. In this case, the BGA provides the best solution (minimum distance) with the path \(4\rightarrow 17\rightarrow 7\rightarrow 9\rightarrow 18\rightarrow 1\rightarrow 22\rightarrow 20\rightarrow 15\rightarrow 16\rightarrow 5\rightarrow 12\rightarrow 11\rightarrow 14\rightarrow 19\rightarrow 2\rightarrow 21\rightarrow 10\rightarrow 6\rightarrow 8\rightarrow 13\rightarrow 3\).

Next, we investigate the effect of the number of iterations on these algorithms by varying it in the range 100–500. A larger number of iterations increases the chance of obtaining a better solution, as the solution space is searched more thoroughly; with each passing iteration, poor-quality solutions are discarded and the better solutions continue to participate in the search. The experiments use a fixed crossover rate of 50%, a mutation rate of 10%, and a population size of 100. Figure 2 shows that as the number of iterations increases, the minimum distance found by each algorithm decreases. In the SGA, at 100, 200, 300, and 500 iterations, the minimum distances are 2044.15, 1792.05, 1653.7, and 1539.15, respectively. In the AGA, at 100, 200, 300, and 500 iterations, the minimum distances are 2009.35, 1822.3, 1685.65, and 1591.2, respectively. The other algorithms behave similarly: as the number of iterations increases, the minimum distance decreases. Hence, to find the minimum distance, a higher number of iterations should be used.

Table 2 Statistical results of the compared algorithms for the formulated problem
Fig. 1 Shortest distance of the different algorithms at different population sizes

Fig. 2 Shortest distance of the different algorithms at different numbers of iterations

Fig. 3 Shortest distance of the different algorithms at different crossover rates

Therefore, the performance of these algorithms is compared at 500 iterations to see which one provides the better optimal solution. At 500 iterations, the minimum distances of the SGA, AGA, BGA, and RCGA are 1539.15, 1591.2, 1517.5, and 1521.8, respectively. In this case, the BGA again provides the best solution (minimum distance). Thus, among all these algorithms, the BGA also gives the better optimal solution in terms of the number of iterations, with the path \(10 \rightarrow 14 \rightarrow 21 \rightarrow 20 \rightarrow 19 \rightarrow 1 \rightarrow 2 \rightarrow 22 \rightarrow 11 \rightarrow 12 \rightarrow 9 \rightarrow 18 \rightarrow 6 \rightarrow 5 \rightarrow 16 \rightarrow 15 \rightarrow 17 \rightarrow 7 \rightarrow 3 \rightarrow 4 \rightarrow 8 \rightarrow 13.\)

Fig. 4 The shortest path highlighted in the formulated ATSP problem

Next, we investigate the effect of the crossover rate on the genetic algorithms by varying it. It is well known that the choice of crossover rate is crucial to the effectiveness of genetic algorithms. Various studies have tried to determine optimal crossover or mutation rates, which remains an open problem, as the best rates depend on the problem and even on the stage of the genetic process. Here we apply the genetic algorithms to our problem to study the effect of the crossover rate. The experiments use a fixed number of iterations (100), population size (100), and mutation rate (5%) with crossover rates of 25%, 50%, 75%, and 95%, i.e., crossover probabilities of 0.25, 0.50, 0.75, and 0.95. The performance of these algorithms is analyzed using Fig. 3 to see which one provides the better optimal solution in terms of crossover rate.

At a crossover rate of 25%, the minimum distances for the SGA, AGA, BGA, and RCGA are 1681.15, 1654, 2102.15, and 1613.1, respectively. In this case, the RCGA provides the best solution (minimum distance). Similarly, at a crossover rate of 50%, the minimum distances of the SGA, AGA, BGA, and RCGA are 1606.85, 1597.95, 2076.4, and 1491.5, respectively. Here the RCGA again provides the best solution. As a result, the RCGA provides the better optimal solution among these algorithms in terms of crossover rate, with the path \(15\rightarrow 4\rightarrow 3\rightarrow 13\rightarrow 16\rightarrow 21\rightarrow 11\rightarrow 19\rightarrow 8\rightarrow 22\rightarrow 9\rightarrow 2\rightarrow 17\rightarrow 7\rightarrow 1\rightarrow 5\rightarrow 12\rightarrow 18\rightarrow 10\rightarrow 14\rightarrow 6\rightarrow 20\).

The results discussed above are based on the mean and standard deviation values. However, the best minimum distance overall is 1264, obtained by the SGA at 500 iterations, and the corresponding path is highlighted in Fig. 4.

8 Conclusion

In the present work, we have applied the genetic algorithm and its variants to an asymmetric TSP, with the aim of comparing the quality of solutions under different parameter settings. Based on the results of the GAs on the given ATSP, the real-coded genetic algorithm is superior in terms of the crossover rate, while the binary-coded genetic algorithm performs better in finding the optimal solution in terms of the number of iterations and the population size. Both major and minor variations in the obtained distances are observed when parameters such as population size and number of iterations are changed. This study presents a comparative analysis of various genetic algorithms on an ATSP that is realistic and relevant, demonstrating the GA’s adaptability to real-world applications. The study also suggests several directions for future research, one of which is developing hybrid or improved versions of the GA and applying them to the ATSP problem.