1 Introduction

Air transport has established itself as one of the most important means of transportation. The last two decades have witnessed an immense growth of air transport [1] for both passengers and cargo. Prior to landing, all aircraft must pass through an approach stage under the guidance of the air traffic control (ATC) tower, which is responsible for securing the flights. This is a complex daily task for ATC towers. The limited number of runways at airports is a major obstacle for the ATC tower, particularly during periods of rapidly increasing air traffic and peak times of landing and takeoff operations. When the demand for flights exceeds the airport's capacity, congestion arises because the airport cannot accommodate all incoming flights, and consequently airports will not be able to meet future needs.

In the optimization context, the literature distinguishes two aircraft sequencing and scheduling problems: (1) the aircraft scheduling problem (ASP) and (2) the aircraft landing problem (ALP) [2]. The two problems differ in their objective functions and in the complexity involved in solving them. Furthermore, the optimization methods proposed in the literature for each are also significantly different. To elaborate, ASP is concerned with determining landing sequences and operational landing times for aircraft arriving at the airport. The main constraints of this problem are to: (1) ensure safe separation between arriving aircraft, (2) utilize the available capacity at the airport and (3) reduce airborne delays. Indeed, ASP is a significant issue in the daily functioning of airports.

ALP, the main concern of this paper, is a combinatorial optimization problem (COP) that belongs to the NP-hard category in almost all of its variations [3]. ALP is tackled by assigning a set of arriving aircraft to runways and landing times in accordance with a set of hard and soft constraints. The hard constraints must be met to arrive at a feasible aircraft scheduling solution. An example of a hard constraint in ALP is the time window, whereby each aircraft has an earliest and a latest time to land. On the other hand, soft constraint satisfaction is preferred, but not mandatory. However, the more soft constraints are met, the better the quality of the ALP solution will be. An example of a soft constraint is that each aircraft records its preferred (target) landing time. Basically, the overall objective is to build a feasible ALP solution of high quality [4]. As the ALP is a combinatorial optimization problem, most researchers build their methods by working on feasible search space regions; therefore, solution feasibility is maintained during the search. In case violations occur during the search, a repair process has to be invoked. Furthermore, the initial solution(s) must also be feasible; therefore, a suitable heuristic technique such as first-come first-serve is used to ensure feasibility [5].

Several methods have been introduced for ALP. The earliest were exact (or calculus-based or deterministic) methods. This type of method is problem-dependent, whereby the optimal solution is reached by constructing it event by event. However, such algorithms are not efficient for large-scale ALP instances with a large number of aircraft, runways and constraints to be met. Large-scale ALP instances fall into the NP-hard class and cannot be solved in polynomial time. Although these methods cannot effectively find high-quality solutions for large-scale ALP, they are nowadays used to handle some small real-world problems or to find initial feasible solutions for more advanced methods. The most popular exact methods applied to ALP are linear programming [2, 6], dynamic programming [7, 8], mixed-integer goal programming [9], mixed-integer programming [4, 10], linear programming-based tree search [11], an exact polynomial algorithm [12] and a two-stage algorithm with heuristics [13]. To emphasize, deterministic methods are very powerful in finding exact solutions for small instances, but they cannot solve large-scale instances in polynomial time. Therefore, approximation methods are employed.

Recently, the emergence of metaheuristic-based algorithms for ALP has captured the attention of the scheduling research community [14]. Metaheuristic-based algorithms are general optimization templates that can be efficiently used for several optimization problems. They search for the optimal solution using their own operators, controlled by a set of parameters, to explore new regions of the problem search space and exploit the accumulated search experience [15]. Several studies have categorized these methods as natural-based or non-natural-based, population-based or single-point search, dynamic or static objective function, memory or memory-less methods, etc. [16]. The common features of metaheuristic-based algorithms are the concepts of exploration and exploitation, which are realized differently from one algorithm to another. Exploration refers to the ability of the algorithm to explore unvisited search space regions, whereas exploitation refers to its ability to deeply exploit the visited search space areas to reach the local optima in those areas. These two concepts are contradictory, and the balance between them is essential during the search [17].

Two types of metaheuristic-based algorithms in particular have addressed ALP as an optimization problem: (1) evolutionary-based algorithms such as scatter search [5], genetic algorithm [9, 18, 19] and population heuristic [20] and (2) swarm-based optimizers such as particle swarm optimization [3], ant colony optimization [21, 22], artificial bee colony [23] and gravitational search algorithm [24]. Local search-based algorithms, which are the main concern of the work presented in this paper, begin with a single provisional solution. At each iteration, that solution undergoes changes through efficient neighborhood moves until a locally optimized solution, in the same search space region as the initial one, is reached. Although trajectory-based algorithms are very powerful in finding the local optimal solution to which they converge, they track through a trajectory in the search space without scanning wider regions; thus, they easily get stuck in local optima. It has to be noted that the search space regions of a scheduling problem like ALP are very rugged and require a very powerful method capable of deep rather than shallow search. Therefore, the research community in the scheduling domain has recently turned its attention toward trajectory-based approaches over other approaches [25].

It is worth mentioning that current research on ALP has turned its attention toward adapting local search-based methods for several reasons: local search-based algorithms are much faster than other methods, and they are also very powerful in deepening the search, which is very useful for scheduling problems. Simulated annealing (SA) is a very powerful local search-based method that can escape the trap of local optima using a particular acceptance criterion [26]. SA has been successfully adapted for ALP in [9]. Furthermore, iterated local search (ILS) is another powerful local search-based method that can also escape the dilemma of local optima by means of a multi-start mechanism that tries several search space regions [27]. As ALP is, traditionally, a COP with a rugged search space, a natural direction is a hybrid local search method that complements the powerful features of two or more local search-based algorithms to enable deep search capability as well as escape from local optima. Recently, ILS and SA have been hybridized in different ways to cope with the search spaces of complex optimization problems [28, 29]. For example, Rajalakshmi et al. [29] integrated SA within the iterated local search algorithm to solve the assignment of cells to switches in cellular mobile networks. Rajalakshmi et al. used SA to refine the solutions generated by the iterated local search algorithm. Experimentally, it was found that the performance of the hybrid algorithm [29] is better than that of the SA and ILS algorithms when implemented individually.

Since there is no free lunch (NFL) in optimization [30], we propose in this paper a hybridization of the successful features of SA and ILS for ALP, called iterated simulated annealing (ISA). The optimization framework of ISA has two main loops: an inner and an outer loop. In the inner loop, SA, with its cooling schedule and relaxed acceptance strategy, allows ISA to escape local optima. In the outer loop, the restart mechanism and perturbation operation of the original ILS empower ISA to navigate different search space regions.

To evaluate the performance of the ISA method, we conducted experiments on 13 benchmark datasets comprising 49 instances of different sizes and complexities. A parameter sensitivity analysis of the proposed ISA is carried out to study the effect of its parameters on convergence behavior. For comparative evaluation, the results obtained by ISA are compared with six well-regarded methods that produced state-of-the-art results. It is worth mentioning that the results yielded by the proposed method surpass those produced by the comparative methods in 4 out of 49 problem instances. Also, ISA is able to match the best-known results achieved by others in 32 problem instances. Indeed, the proposed method can be considered a very efficient addition to the ALP domain, one that can be easily tailored to other similar problems.

The rest of the paper is organized as follows: Sect. 2 provides a detailed description of the mathematical formulation of the ALP. Section 3 then provides a brief summary of both the ILS and SA algorithms. An elaborated illustration of the proposed approach is introduced in Sect. 4. Section 5 describes and analyzes the computational results and time efforts of the proposed method along with other state-of-the-art methods that have addressed ALP. Conclusions and future work are given in Sect. 6.

2 Aircraft landing problem: definition and formulation

To implement a feasible approach for the ALP, all aircraft arrival times should be known beforehand, along with all other information about each aircraft. Each aircraft has a preferred landing time, a predefined landing time window and a set of runways. The main goal is to assign each arriving aircraft a suitable runway and landing time with a minimal overall deviation from the preferred landing time. A penalty cost is assigned to the scheduling solution for each aircraft that lands before or after its preferred landing time. The necessary mathematical notations for the ALP are provided in Table 1. The mathematical formulation of the ALP model is introduced in the following constraint formulation, based on the formulation provided in [11].

Table 1 Notations used to formulate the ALP

2.1 Constraints of the ALP model

In order to preserve the feasibility of the ALP solution, three hard constraints must be fulfilled: time window, separation time and multiple runways. These constraints are formalized as follows:

  • Constraint 1 (Time window): For aircraft i, the landing time \(t_i\) is scheduled to be within its predefined landing time window \([E_i,L_i]\):

    $$\begin{aligned} E_i\le t_i \le L_i, \qquad \forall i \in \{1,2,\ldots , N\}. \end{aligned}$$
    (1)
  • Constraint 2 (Separation time): Separation time constraints are conditioned by the landing order of aircraft and their assigned runways. The landing order signals that either aircraft i lands before aircraft j, denoted \(x_{ij}=1\), or aircraft j lands before aircraft i, denoted \(x_{ji}=1\). This constraint is defined as shown in Eq. (2).

    $$\begin{aligned} x_{ij}+ x_{ji}=1 \qquad \forall i,j \in \{1,2,\ldots , N\} \wedge j>i \wedge x_{ij} \in \{0,1\}. \end{aligned}$$
    (2)

    In some cases, for a particular pair (i, j) of aircraft, it can be immediately decided whether \(x_{ij}=1\) or \(x_{ij}=0\); for example, if \(L_i<E_j\) then \(x_{ij}=1 \text { and } x_{ji}=0\). For any pair (i, j) of successive landings on the same runway, there is a constraint that must be respected, as shown in Eq. (3):

    $$\begin{aligned}&x_{ij} \times t_j\ge x_{ij}\times t_i+S_{ij}\times z_{ij}+t_{ij} (1-z_{ij})\nonumber \\&\qquad \forall i,j \in \{1,2,\ldots , N\} \wedge j>i. \end{aligned}$$
    (3)

    Let (i, j) be a pair of aircraft such that \(i<j\) and assume that aircraft i and j land on the same runway, i.e., \(z_{ij}=1\) and \(1-z_{ij}=0\). If aircraft i lands before aircraft j, then \(x_{ij}=1\) and the constraint in Eq. (3) reduces to Eq. (4):

    $$\begin{aligned} t_j\ge t_i+S_{ij} \qquad \forall i,j \in \{1,2,\ldots , N\}. \end{aligned}$$
    (4)

    Equation (4) asserts that the separation criterion between aircraft i and j is respected. This implies that the time \(S_{ij}\) must elapse after the landing of aircraft i at \(t_i\) before j can land at \(t_j\).

  • Constraint 3 (Multiple runways): The following constraint ensures that aircraft i is assigned to exactly one runway:

    $$\begin{aligned} \sum _{r\in R}y_{ir}=1 \qquad \forall i \in \{1,2,\ldots , N\}. \end{aligned}$$
    (5)

    The constraints formulated in Eqs. (6, 7) are needed to guarantee that \(z_{ij}\) correctly indicates whether aircraft i and j are set to land on the same runway. The constraint in Eq. (6) shows that the matrix \(z_{ij}\) is symmetric:

    $$\begin{aligned} z_{ij}= z_{ji} \qquad \forall i,j \in \{1,2,\ldots , N\} \wedge j>i. \end{aligned}$$
    (6)

    The decision variables \(y_{ir},y_{jr}\) and \(z_{ij}\) are linked together using Eq. (7).

    $$\begin{aligned}&z_{ij} \ge y_{ir}+ y_{jr}-1 \quad \forall i,j \in \{1,2,\ldots ,N\} \wedge \nonumber \\&\quad i \ne j \quad \forall r \in \{1,2, \ldots , R\}. \end{aligned}$$
    (7)

    However, if one of aircraft i or j lands on runway r and the other does not, then \(z_{ij}\) must be assigned 0.
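As a concrete illustration, the three hard constraints above can be checked programmatically. The following Python sketch (function name and data layout are illustrative assumptions, not from the paper) verifies the time window constraint of Eq. (1) and the same-runway separation requirement of Eqs. (3)–(4) for a candidate schedule:

```python
def is_feasible(schedule, windows, sep):
    """schedule: list of (aircraft, runway, time) sorted by landing time.
    windows[i] = (E_i, L_i); sep[i][j] = required separation S_ij."""
    # Constraint 1 (Eq. 1): every landing time lies within its window.
    for i, _, t in schedule:
        E, L = windows[i]
        if not (E <= t <= L):
            return False
    # Constraint 2 (Eqs. 3-4): separation between landings on the same runway.
    for a in range(len(schedule)):
        for b in range(a + 1, len(schedule)):
            i, ri, ti = schedule[a]
            j, rj, tj = schedule[b]
            if ri == rj and tj < ti + sep[i][j]:
                return False
    return True
```

Constraint 3 (one runway per aircraft) is satisfied by construction here, since each tuple carries exactly one runway.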

2.2 Objective function of the ALP model

The objective function, f, considered here for the ALP is formulated as shown in Eq. (8), subject to the three constraints identified before (i.e., time window, separation time and multiple runways).

$$\begin{aligned} \min f(X)= \sum _{i=1}^{N} \alpha _i \times P_{1i} + \beta _i \times P_{2i}. \end{aligned}$$
(8)

The expression for each aircraft i in Eq. (8) defines the total penalty corresponding to that aircraft. If aircraft i lands at its target landing time, then both \(\alpha _i\) and \(\beta _i\) are equal to zero and the cost incurred by its landing is zero. On the other hand, if aircraft i does not land at \(T_i\), then either \(\alpha _i\) or \(\beta _i\) is nonzero and a strictly positive cost is incurred. In practical terms, this objective function is valuable because it can reflect prevailing conditions at an airport. Note that the unit penalty costs \(P_{1i}\) and \(P_{2i}\) are constants whose values are extracted from the dataset used, whereas \(\alpha _i\) and \(\beta _i\) are the earliness and lateness deviations connected with the decision variables in the solution X, as in Eqs. (9)–(13).

$$\begin{aligned} x_i= T_i - \alpha _i + \beta _i \qquad \forall i \in \{1,2,\ldots , N\} \end{aligned}$$
(9)
$$\begin{aligned} 0\le & {} \alpha _i \le T_i - E_i \qquad \forall i \in \{1,2,\ldots , N\} \end{aligned}$$
(10)
$$\begin{aligned} \alpha _i\ge & {} T_i - x_i \qquad \forall i \in \{1,2,\ldots , N\} \end{aligned}$$
(11)
$$\begin{aligned} 0\le & {} \beta _i \le L_i - T_i \qquad \forall i \in \{1,2,\ldots , N\} \end{aligned}$$
(12)
$$\begin{aligned} \beta _i\ge & {} x_i - T_i \qquad \forall i \in \{1,2,\ldots , N\}. \end{aligned}$$
(13)
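The objective of Eq. (8) can be evaluated directly from a schedule. The following Python sketch assumes \(\alpha_i\) and \(\beta_i\) are the earliness and lateness deviations of Eqs. (9)–(13) and \(P_{1i}, P_{2i}\) the unit penalty costs; the dictionary-based layout and function name are illustrative choices, not from the paper:

```python
def landing_cost(times, target, p1, p2):
    """times[i]: scheduled landing time of aircraft i; target[i]: preferred
    time T_i; p1[i]/p2[i]: penalty per unit of earliness/lateness."""
    total = 0.0
    for i, t in times.items():
        alpha = max(0, target[i] - t)   # earliness deviation, cf. Eq. (11)
        beta = max(0, t - target[i])    # lateness deviation, cf. Eq. (13)
        total += alpha * p1[i] + beta * p2[i]
    return total
```

For example, an aircraft landing 5 time units before its target with an earliness penalty of 2 per unit contributes a cost of 10.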

3 Research background

In order to keep the paper self-contained, this section provides the elementary knowledge about the methods hybridized here. In the next subsection, the basic fundamentals of iterated local search (ILS) are illustrated as established in the original version [31]. Thereafter, a discussion of the basic concepts of simulated annealing is provided in Sect. 3.2.

3.1 Fundamentals of the iterated local search algorithm

ILS is an advanced extension of the basic local search method. It attempts to escape local optima by restarting the local search technique several times, each time from a different initial point in the search space. Normally, ILS has two important stages: an improvement stage and a perturbation stage. In the improvement stage, a simple local search mechanism is used to find a locally optimal solution \(X^{\mathrm{opt}}\), which is passed to the perturbation stage in order to shuffle its elements and produce a perturbed solution \(X'^{\mathrm{opt}}\) as a new starting point. Finally, a greedy selection between \(X^{\mathrm{opt}}\) and \(X'^{\mathrm{opt}}\) is used. This process is repeated several times until a "good enough" local optimum is obtained.

The perturbation is usually non-deterministic in order to avoid cycling. The perturbation stage is important for moving the solution from the current local optimum to another point in the search region. The perturbation strength has a great effect on the efficiency of ILS. Too small a perturbation may lead the local search process back to a previously found local optimum, wasting computational resources. Too large a perturbation may cause ILS to behave as a simple local search with multiple random initial points. In other words, a large perturbation in the early stages drives the local search procedure to explore new areas of the search space. The perturbation strength is either static or dynamic. Static perturbation shuffles the local optimum in the same manner throughout the run, from the initial search until the final course of the run. In contrast, dynamic perturbation strength progressively decreases as the local search algorithm advances, shifting the focus toward exploitation of the visited areas of the search space.

The acceptance criterion is a key feature of metaheuristic algorithms because it allows for a more comprehensive search for optimal solutions. The pseudo-code of ILS is described in Algorithm 1.

Algorithm 1 The pseudo-code of the iterated local search (ILS) algorithm

In Algorithm 1, ILS is initiated with a provisional solution X. The historical point (H) is also initially defined and assigned X. ILS consists of two nested loops: the improvement (inner) loop (i.e., lines 9–15) and the perturbation (outer) loop. In the improvement loop, a simple local search strategy is employed. The \(Tweak(\ldots )\) function is used as a neighborhood move with a one-step random improvement method. The output of the improvement stage is a local optimum \(X'\), which is used in the perturbation stage. In the perturbation stage, \(X'\) replaces X, if better, and the historical point (H) is updated. Finally, the Perturb(...) function is used to shuffle H, which serves as the initial point for the next improvement loop. This process is repeated until the ideal solution (i.e., f(X) = 0) or the maximum number of iterations is reached. Note that the ideal solution is the one with zero cost, i.e., the exact solution.

3.2 Simulated annealing

Simulated annealing is another extension of the simple local search method that draws an analogy between the physical process of metal annealing and the optimization process [32]. Like any local search method, SA starts with a random solution. Iteratively, this solution is updated using a neighborhood move associated with the problem to be solved. The updated solution is then accepted based on two acceptance criteria:

  (i)

    Greedy acceptance rule: the updated solution replaces the current one if it is better in terms of solution quality. Formally, let the current solution be \({{\varvec{X}}}=(X_1,X_2,\ldots ,X_n)\) and the updated solution be \({{\varvec{X}}}'=(X'_1,X'_2,\ldots ,X'_n)\). The greedy selection rule accepts the updated solution when \(f({{\varvec{X}}}')<f({{\varvec{X}}})\), where f(.) is the quality function.

  (ii)

    Relaxed acceptance rule: in case the greedy acceptance rule does not recognize any improvement in the updated solution, the relaxed acceptance rule is checked. The relaxed acceptance rule can accept an updated solution \({{\varvec{X}}}'\) that is worse than the current one \({{\varvec{X}}}\) if the following condition holds:

    $$\begin{aligned} \sim U(0,1) < \exp \left( \frac{-(f({{\varvec{X}}}')-f({{\varvec{X}}}))}{T}\right) , \end{aligned}$$

    where \(\sim U(0,1)\) denotes a uniformly distributed random number between 0 and 1.

It is worth noting that the temperature T is exponentially decreased by a fraction \(\alpha \), where \(T(t+1)=\alpha \times T(t)\). In SA, this is called the cooling schedule, in which SA begins with a high temperature value that gradually decreases during the search until it almost reaches 0. Note that when the value of T is high, the updated solution may replace the current one even if it is worse by a large margin. This acceptance mechanism degrades until a stagnation situation is reached where only the greedy acceptance rule remains in effect. Conventionally, this is in line with the main principle of optimization, where exploration is very useful in the initial search but less important in the last course of the run.

The procedure of SA is pseudo-coded in Algorithm 2. As a matter of fact, the performance of SA depends on two factors: the initial temperature value T and the cooling schedule. A high value of T leads to heavy use of the relaxed acceptance rule in the initial search, and SA might require a long time to converge; here, SA behaves as a random search. In contrast, the lower the value of T, the faster the convergence of SA; here, SA behaves as a simple local search method. In the cooling schedule, a large value of \(\alpha \) may lead to slow convergence, whereby SA accepts worse solutions for many iterations of the initial search.
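The two acceptance rules and the geometric cooling schedule can be sketched as follows. The step operator, initial temperature, cooling rate and function names are illustrative assumptions for a toy one-dimensional problem, not the paper's implementation:

```python
import math
import random

def simulated_annealing(f, x0, step, t0=10.0, alpha=0.95, iters=500):
    """Minimal SA sketch: greedy rule, relaxed rule and cooling T <- alpha*T."""
    x, best = x0, x0
    t = t0
    for _ in range(iters):
        cand = step(x)
        delta = f(cand) - f(x)
        # Greedy rule accepts improvements; the relaxed rule accepts a worse
        # candidate with probability exp(-delta / T).
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = cand
        if f(x) < f(best):
            best = x
        t *= alpha          # cooling schedule
    return best
```

With a high t0 the relaxed rule dominates early on (near-random search); as t shrinks, only improving moves survive, recovering simple local search.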

Algorithm 2 The pseudo-code of the simulated annealing (SA) algorithm

4 Iterated simulated annealing (ISA) algorithm for ALP

In this section, the proposed methodology, which hybridizes ILS with SA, is adapted for ALP. Initially, the ALP solution is represented as a vector \(X=(X_1,X_2,\ldots ,X_N)\) of length N, the number of aircraft. Each decision variable \(X_i\) in the ALP solution is assigned a tuple of values: aircraft index, landing runway and landing time. Table 2 provides an example of the ALP solution representation. In the example, \(X_1( \text {aircraft Index})=2\) means the first aircraft to land is aircraft #2, \(X_1( \text {Landing Runway})=3\) means aircraft #2 will land on runway #3 and \(X_1(\text {Landing Time})=98\) means aircraft #2 will land at time unit #98, and so on. Each solution is evaluated using the objective function formulated in Eq. (8).
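The representation described above can be sketched as a list of tuples ordered by landing time; the variable names are illustrative and follow the example of Table 2:

```python
# Each decision variable is an (aircraft index, landing runway, landing time)
# tuple; the list order is the landing sequence.
solution = [
    (2, 3, 98),    # first to land: aircraft #2 on runway #3 at time 98
    (3, 2, 106),   # second to land: aircraft #3 on runway #2 at time 106
    (1, 2, 258),   # last to land: aircraft #1 on runway #2 at time 258
]

# The landing sequence is just the first component of each tuple in order.
landing_sequence = [aircraft for aircraft, _, _ in solution]
```

Keeping the list sorted by landing time makes the sequence implicit, which simplifies the separation-time checks.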

The main role of ILS in the proposed method is not only to avoid local minima, since SA already has a cooling strategy to handle this, but also to act as a multi-restart mechanism that empowers the proposed method to try different search space regions at each iteration and, with the assistance of SA, keep track of their local minima.

In the proposed method, the relaxed acceptance criterion of SA is incorporated into the ILS optimization framework to improve its local exploitation. Additionally, the relaxed acceptance criterion gives ISA another way of avoiding local optima, further empowering its performance. The cooling schedule of ISA is based on the same theory as SA, where it is initiated with a high temperature value. It cools down using a structured cooling schedule with a predetermined cooling rate until an equilibrium (or steady) state is reached.

The feasibility of the ALP solution is maintained during the search, in which the hard constraints (i.e., time window, separation time and multiple runways) must be respected. It is worth mentioning that the proposed method only considers solutions in the feasible search space region. Therefore, any ALP solution that violates a hard constraint must be repaired. If the separation time constraint is violated between two aircraft (say, aircraft A lands before B), the proposed approach first changes the landing time of the earlier aircraft A within its landing time window. If such changes do not remove the violation, the proposed approach restarts the operation all over again.

The proposed ISA method complements the strength of both ILS and SA in single optimization algorithm. ISA has five main steps which will be discussed below. The flowchart of ISA is provided in Fig. 1, and its pseudo-code is provided in Algorithm 3.

Algorithm 3 The pseudo-code of the proposed ISA algorithm

Fig. 1 ISA flowchart

Table 2 Example of ALP solution representation
  • Step I Initialize the parameters of both ALP and ISA: Initially, the parameters of ALP are extracted from the dataset into a suitable data structure. These parameters are the number of arriving aircraft N, the number of available runways R, the earliest possible landing time \(E_i\), the target landing time \(T_i\), the latest possible landing time \(L_i\), etc., as mathematically formulated in Table 1. Furthermore, the parameters of ISA required for tackling ALP are also initialized in this step. These parameters are the temperature (T), the cooling rate \(\alpha \) and the maximum number of iterations (MaxIter).

  • Step II Generation of initial solution: In ISA, the second step randomly generates the initial solution \({{\varvec{X}}}\). As aforementioned, each element \(X_i\) in the solution is initialized with a tuple of values: aircraft index, landing runway and landing time. In order to construct the initial solution, all hard constraints discussed in Sect. 2 must be satisfied to ensure its feasibility. Indeed, ensuring the feasibility of the initial ALP solution is not a trivial task. In the literature, some works proposed heuristic methods based on greedy rules to generate the initial solution [27]. Others proposed heuristic methods based on the first-come first-serve technique to build the initial solution [6].

    Algorithm 4 The pseudo-code of the initial solution heuristic

    In ISA, in order to ensure the feasibility of the initial solution, a heuristic method based on the time window constraint is proposed. This heuristic method is pseudo-coded in Algorithm 4. In the algorithm, \(\sim U(E_i,L_i)\) generates a random number between \(E_i\) and \(L_i\), and \(\sim U(1,R)\) is a function generating a random number between 1 and R. In the initial loop, each decision variable is randomly assigned a group of values. Thereafter, the whole solution is sorted by aircraft landing time. For example, if the random solution after the sorting process becomes \(X=((2,3,98),(3,2,106),\ldots ,(1,2,258))\), then aircraft #2 has sequence position 1 and will land at time 98 s on runway #3. In lines 9–14, the separation time constraint is checked. If it is violated, the heuristic method restarts the entire generation process from scratch.
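The time-window heuristic of Algorithm 4 can be sketched as follows. The dictionary-based inputs, the max_tries safeguard and the function name are illustrative additions, not from the paper:

```python
import random

def random_feasible_solution(windows, sep, n_runways, max_tries=1000):
    """windows[i] = (E_i, L_i); sep[i][j] = separation S_ij. Returns a list of
    (aircraft, runway, landing time) tuples sorted by landing time."""
    for _ in range(max_tries):
        # Assign each aircraft a random runway and a time inside its window.
        sol = [(i, random.randint(1, n_runways), random.uniform(*windows[i]))
               for i in windows]
        sol.sort(key=lambda v: v[2])          # order aircraft by landing time
        # Check the separation constraint on every same-runway pair; restart
        # from scratch on violation, as Algorithm 4 does.
        ok = all(tj >= ti + sep[i][j]
                 for a, (i, ri, ti) in enumerate(sol)
                 for (j, rj, tj) in sol[a + 1:]
                 if ri == rj)
        if ok:
            return sol
    raise RuntimeError("no feasible solution found within max_tries")
```

With wide time windows the restart loop terminates quickly; tight windows may require many tries, which is why some works fall back to first-come first-serve orderings.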

  • Step III Improvement loop: In the improvement loop, the initial solution \({{\varvec{X}}}\) is iteratively tweaked using the Tweak() function (see Algorithm 3) to yield \({{\varvec{X}}}' =(X'_1,X'_2,\ldots ,X'_N)\). If the objective function value of \({{\varvec{X}}}'\) turns out to be better than that of \({{\varvec{X}}}\) (i.e., \(f({{\varvec{X}}}') < f({{\varvec{X}}})\)), the current solution \({{\varvec{X}}}\) is replaced by the tweaked solution \({{\varvec{X}}}'\). If not, the relaxed acceptance criterion is triggered, which might replace the current solution by the tweaked one even if it is worse. This is done based on the value of the temperature (T). When the value of T is large in the initial search, the probability of accepting a worse solution is high. This probability degrades as the value of T decreases, using the equation in Algorithm 3, line 14, until no further worse solutions are accepted. In the \(Tweak({{\varvec{X}}})\) function, four neighborhood search methods are proposed for ALP. These neighborhood search methods are found to be very efficient in navigating the ALP search space while maintaining the feasibility of \({{\varvec{X}}}\) during the search. The new solution \({{\varvec{X}}}'\) is adjusted based on the switching mechanism shown in Eq. (14), where r is randomly generated in the range \(r\in [0,1]\). Each neighborhood search method is thus selected with equal probability (0.25). Each activated neighborhood method performs only a single random move without checking whether this move leads to a better solution.

    $$\begin{aligned} {{\varvec{X}}}'= {\left\{ \begin{array}{ll} NB1({{\varvec{X}}}) &{} r \le 0.25 \\ NB2({{\varvec{X}}}) &{}0.25<r \le 0.50 \\ NB3({{\varvec{X}}}) &{}0.50<r \le 0.75 \\ NB4({{\varvec{X}}}) &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$
    (14)

The new ALP solution (\({{\varvec{X}}}'\)) is randomly adjusted in ISA by one of the following neighborhood methods:

  • NB1 Select an aircraft, \(X_i\), randomly and then randomly change its landing time \(t_i \in [E_i,L_i]\), where the index i denotes aircraft i. The landing sequence is updated based on the landing times. The runway is not changed.

  • NB2 Select an aircraft, \(X_i\), at random and then randomly change both its landing time \(t_i\in [E_i,L_i]\) and its landing runway \(r_i\in [1,R]\).

  • NB3 Randomly select two aircraft, \(X_i\) and \(X_j\), from the same runway, where i and j represent aircraft i and j, respectively. Swap the landing times \(t_i\) and \(t_j\) of \(X_i\) and \(X_j\). The landing times \(t_i\) and \(t_j\) remain restricted to \([E_i,L_i]\) and \([E_j,L_j]\), respectively. The feasibility of \({{\varvec{X}}}'\) must be maintained.

  • NB4 Randomly select aircraft \(X_i\) and \(X_j\) from two different runways \(r_i\) and \(r_j\), where \(i \ne j\). Swap the landing times \(t_i\) and \(t_j\) of aircraft i and j. The feasibility of \({{\varvec{X}}}'\) must be maintained.
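The switching mechanism of Eq. (14) can be sketched as follows, with the four neighborhood operators NB1–NB4 passed in as functions; this dispatch style is an illustrative design choice, not the paper's implementation:

```python
import random

def tweak(x, nb1, nb2, nb3, nb4):
    """Pick one of the four neighborhood moves with equal probability 0.25,
    as in Eq. (14), and apply it once to the solution x."""
    r = random.random()
    if r <= 0.25:
        return nb1(x)
    elif r <= 0.50:
        return nb2(x)
    elif r <= 0.75:
        return nb3(x)
    return nb4(x)
```

Note that the chosen move is applied unconditionally; the acceptance decision is left to the greedy/relaxed rules of the improvement loop.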

As can be noticed from the objective function formulated in Eq. (8), the separation time is not directly computed; it is related to the aircraft sequence. Therefore, if the sequence is updated by any move, the separation times are also affected. Apparently, the four moves update the sequence in different ways with respect to the separation time constraints.

At each iteration of ISA, each new solution \({{\varvec{X}}}'\) replaces the current one \({{\varvec{X}}}\) according to the acceptance rules of SA: greedy and relaxed. The relaxed acceptance rule depends on the value of T, which is iteratively updated using the cooling rate \(\alpha \).

At the end of the improvement loop, the resulting solution (i.e., best) is the local optimal solution, which is passed on to the perturbation step to reshuffle its components randomly.

It should be stressed that a heuristic method for generating feasible solutions is used in two places in the algorithm: to build the initial feasible solution and to preserve the feasibility of generated solutions during the improvement loop. This method randomly chooses a landing time within the time window of each aircraft, sorts the aircraft by landing time and then generates the landing sequence. This method was successfully applied to all instances without problems in generating an initial feasible solution. Some works in the literature use the first-come first-serve (FCFS) method for generating the initial solution [27]. ISA specifically uses the FCFS method with some random components to diversify the initial feasible solution. The same heuristic method is also used to maintain the feasibility of the generated solutions during the search.

  Step IV

    Perturbation: this is an important step in ISA that diversifies the best solution resulting from the improvement loop. The diversification applies the same neighborhood methods used in the improvement loop, but without guidance from the objective function, thereby moving the improved solution (obtained by SA) to a new region of the search space. The perturbed solution serves as the initial solution for the next improvement loop, which is then repeated.

  Step V

    Termination criterion: as shown in Algorithm 3, there are two main loops: the inner improvement loop, based on SA, and the outer loop, which repeats the SA phase according to ILS concepts and conditions. ISA terminates when best reaches the ideal solution or the maximum number of iterations is reached.
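The overall two-loop structure can be sketched as a skeleton in which the SA improvement loop, the perturbation and the objective function are abstracted as callables. The names and the rule of keeping the better of two local optima are illustrative assumptions, not Algorithm 3 verbatim.

```python
def isa(initial, improve, perturb, cost, max_iters, ideal=0.0):
    """ISA skeleton (sketch): an ILS outer loop around an SA-based
    improvement loop. Stops at the ideal cost or the iteration cap."""
    best = improve(initial)                 # inner loop: SA improvement
    for _ in range(max_iters):
        if cost(best) <= ideal:             # termination: ideal solution
            break
        candidate = improve(perturb(best))  # kick, then re-improve
        if cost(candidate) < cost(best):    # keep the better local optimum
            best = candidate
    return best
```

Here perturb plays the role of Step IV (an unguided move to a new region) and the loop condition implements the Step V termination criterion.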

5 Experimental results and discussion

The proposed ISA is implemented in Java under the Microsoft Windows 10 platform. All experiments are run on a machine with an Intel Core i5 CPU at 2.5 GHz and 6 GB of RAM. The experiments are designed to evaluate the effect of the hybridization of ILS and SA. Each experiment is repeated 21 times. The results are recorded in terms of the percentage gap \(\varDelta (\%)\), which refers to the relative difference between the best-known values (BKVs) in the literature and the results obtained by the proposed ISA method. The \(\varDelta (\%)\) measure is defined as follows [27]:

$$\begin{aligned} \varDelta (\%)=\frac{B-{\mathrm{BKV}}}{{\mathrm{BKV}}}\times 100\%, \end{aligned}$$
(15)

where B is the best result returned by an algorithm over a set of runs and the BKV results are taken from [5]. Note that \(\varDelta (\%)=0\) means the proposed algorithm matched the BKV result; \(\varDelta (\%) > 0\) means it obtained a result worse than the BKV; and \(\varDelta (\%) < 0\) means it reached a result superior to the BKV.
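Eq. (15) and its sign conventions translate directly to code; a small sketch (the numeric values used below are hypothetical):

```python
def percentage_gap(best, bkv):
    """Percentage gap of Eq. (15): 0 means the BKV was matched,
    > 0 means a worse result, < 0 means a new best-known value."""
    return (best - bkv) / bkv * 100.0
```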

In this section, the experimental results obtained by the proposed ISA are summarized in terms of the gap to the BKV as formulated in Eq. (15). The parameter settings used to run the proposed method are given in Table 3. These values were determined through a set of comprehensive preliminary experiments.

Table 3 Parameter setting of ISA

Initially, the datasets under consideration are described in Sect. 5.1. The effect of the parameter \(\alpha \) on the convergence behavior of ISA is presented in Sect. 5.2. Thereafter, a comparison between the proposed methods, namely ISA, ILS and SA, is made in Sect. 5.3. A comparison with other state-of-the-art methods is provided in Sect. 5.4. To test the statistical significance of the differences among the comparative methods, Friedman's test and the post hoc Holm method are utilized, as demonstrated in Sect. 5.5. Finally, the computational time of each comparative method is presented and elucidated in Sect. 5.6.

5.1 Dataset used

The performance of ISA was evaluated using thirteen groups of ALP datasets comprising 49 different problem instances. These datasets were introduced in [11] and are publicly available in the OR-Library [33].Footnote 1

The characteristics of the datasets used are summarized in Table 4. The datasets vary in the number of aircraft and runways; each dataset has a different number of runways with the same number of aircraft. The first column shows the dataset name, and the second column shows the number of problem instances in each dataset. Column R in Table 4 refers to the series of runway counts used to differentiate the dataset instances, and the final column N gives the number of aircraft in each dataset. The problem instances can be grouped into small datasets (instances 1 to 25 in datasets 1–8) and large datasets (instances 26 to 49 in datasets 9–13). Instances are categorized as small or large based on the number of aircraft N: instances with \(N \le 50\) are considered small, while those with \(N \ge 100\) are considered large [27].

Table 4 Characteristics of ALP datasets

5.2 Effect of \(\alpha \) on the convergence rate of ISA

To demonstrate the influence of the cooling rate (i.e., \(\alpha \)) on the convergence behavior of ISA, a series of experiments with different values of \(\alpha \) is conducted, specifically \(\alpha =0.5\), \(\alpha =0.90\), \(\alpha =0.97\) and \(\alpha =0.999\). Recall that the cooling rate is an important parameter in ISA, responsible for managing the speed of convergence. A large value of \(\alpha \) close to 1 yields a slow decrease in the temperature T from its initial value to its final value; such smooth changes in T enable ISA to escape local minima and therefore achieve better exploration. With a small value of \(\alpha \) close to 0, ISA returns to a steady state very quickly and focuses on exploitation. The results for the various values of \(\alpha \) are recorded in Table 5. The statistical results of the ALP solutions are summarized in terms of the best (Best), average (Avg.) and worst (Worst) solutions and the standard deviation (STD) of the best results obtained over ten replicated runs. The best results among the various values of \(\alpha \) are shown in bold.
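The influence of \(\alpha \) follows from the geometric cooling schedule \(T \leftarrow \alpha T\): after n iterations the temperature is \(T_0\alpha ^n\). A small sketch (the initial temperature and iteration counts are arbitrary illustrations):

```python
def temperature_after(T0, alpha, iterations):
    """Geometric cooling: T is multiplied by alpha once per iteration,
    so T after n iterations is T0 * alpha ** n."""
    return T0 * alpha ** iterations
```

For example, with \(T_0=1000\), after 1000 iterations \(\alpha =0.999\) still leaves \(T \approx 368\), whereas \(\alpha =0.5\) has long since frozen the search, which matches the exploration/exploitation trade-off described above.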

For the small dataset instances (instances 1 to 25), ISA with \(\alpha =0.999\), \(\alpha =0.97\) and \(\alpha =0.90\) produces almost the same best results, equal to the BKV. However, ISA with \(\alpha =0.5\) reveals different convergence behavior, with the results favoring the larger \(\alpha \) values. In conclusion, a cooling rate \(\alpha \) close to one is preferable for obtaining better outcomes on small instances.

For the large dataset instances (instances 26 to 49), there are noticeable differences among almost all values of \(\alpha \): the closer \(\alpha \) is to 1, the better the obtained results, as borne out by the results recorded in Table 5. In conclusion, a larger cooling rate \(\alpha \) is more useful for the ISA-based ALP method, as it allows the ALP search space to be explored more thoroughly and navigated more efficiently. Thus, \(\alpha =0.999\) is used in the subsequent experiments.

Table 5 Sensitivity analysis for alpha parameter in ISA

5.3 Comparison with the proposed local search methods

To study the effect of combining the different local search methods proposed in this paper (i.e., SA, ILS and ISA), we present here the results of SA, ILS and the combination of both, shortened to ISA. The results of each proposed method are exhibited in Table 6, summarized over 10 replicated runs in terms of Best, Avg., Worst and STD. The best results are highlighted in bold. The starting points of all algorithms are fixed to ensure a fair comparison across the replicated runs.

Notably, for the small ALP instances (instances 1 to 25), the results of ISA and ILS are almost the same, while there is a clear difference in comparison with SA alone. For the large instances (instances 26 to 49), ISA obtains the best results, followed by SA and then ILS. In conclusion, the two local search methods used in this paper have almost the same performance individually, and combining their strengths in ISA yields a powerful algorithm. Some of the comparative experimental cases are visualized as boxplots in Fig. 2; these cases are selected according to the relevance of the information. The main aim of the boxplot representation is to demonstrate the robustness of the proposed ISA algorithm. In a boxplot, the lower edge of the box represents the first quartile (Q1), the line inside the box represents the median (M), and the upper edge represents the third quartile (Q3). The lower whisker is the best solution obtained, the upper whisker is the worst, and the “+” sign indicates outliers. It is clearly observed that the results obtained by the ISA algorithm form a compact box, which indicates a robust algorithm; this is borne out by the very narrow spacing between Q1, M and Q3. Furthermore, the behavior of the ISA algorithm is better than those of both SA and ILS, obtaining almost the same best results over the 10 runs.

Fig. 2
figure 2

Distribution of the best results over 10 runs performed using ISA, ILS and SA algorithms

Figure 3 shows the convergence behavior of the ISA, SA and ILS algorithms over 1500–5000 iterations for four instances (i.e., 14, 23, 35 and 40). These instances are selected randomly to cover different instance sizes and complexities. The number of iterations is selected to clearly visualize the differences between the presented algorithms. In these convergence plots, the x-axis represents the number of iterations and the y-axis denotes the objective function value. Figure 3 shows that the convergence slope of the ISA algorithm is better than those of the other two algorithms on all instances. It is observed that SA performs worst compared to ILS and ISA on instance 14, while ILS performs worst on the remaining instances. This demonstrates that the integration of SA and ILS is able to balance exploration and exploitation and thus achieve reasonable optimization results.

Fig. 3
figure 3

Convergence behavior of ISA, ILS and SA

Table 6 A comparison between ISA, ILS and SA computational results

5.4 Comparison with previous methods

For comparative evaluation, Table 7 summarizes the abbreviations of the comparative methods along with the parameter settings used. These methods produce high-quality results for the ALP on the same problem instances under study. As with the other comparative methods, the results of ISA are reported in terms of \(\varDelta (\%)\) over 21 replicated runs.

Table 7 Parameter values of the competitor algorithms

The comparative results are summarized in Tables 8 and 9: Table 8 reports the small-sized instances (datasets 1–8) and Table 9 the large-sized instances (datasets 9–13). Note that the best-known value (BKV) is given for each problem instance in those tables. The other results are those obtained by the HPSO, ILS, SS, BA, SA1 and SA2 methods, reported as the percentage gap to the BKV. The best results are highlighted in bold (lowest is best).

Table 8 A comparison between the computational results of the ISA method with the results of the BKV and the best-known results of the best existing performing methods for ALP on small-sized instances

As shown in Table 8, the proposed ISA obtained the best results for all 25 small-sized problem instances, matching the results achieved by the HPSO, ILS, SS and BA comparative methods.

Table 9 A comparison between the computational results of the ISA with the BKV results and the results of the best performing methods for ALP in the literature on large-sized instances

The results summarized in Table 9 show that the proposed ISA method outperforms the other comparative methods on 4 of the 24 large-sized instances, as achieved in [3]. Furthermore, ISA achieves results equal to the BKV on problem instances 37, 42, 43 and 49. In some cases, ISA was unable to outperform HPSO, ILS, SA1 and SA2 on single-runway instances (i.e., problem instances 26, 30, 35 and 40). However, the margin to the corresponding BKV results is relatively small. These problem instances are highly complex with a very rugged search space. Since ISA adopts a random-improvement acceptance strategy in the neighborhood search, improving the solution late in the run is very rare. In [27], a best-improvement acceptance strategy is adopted, which is very efficient for complex, rugged search spaces.

From a different perspective, the results recorded in Tables 8 and 9 show that the results obtained by the ISA method are considerably better than those produced by the SS, BA, SA1 and SA2 methods. In particular, the method contributed in [27] previously achieved the best results on the same dataset, including some new BKVs. Our ISA achieved excellent results on almost all problem instances, except for those produced by HPSO. Note that the method proposed in [27] is ILS, and our ISA can be considered an extended improvement of ILS.

5.5 Statistical analysis

To validate the general efficiency of the proposed ISA method more strictly, a statistical analysis using Friedman's test [34] with a significance level of \(\alpha =5\%\) was performed. The test ranks the ISA method along with the competitor methods according to the results obtained on the large-sized instances. Table 10 summarizes the rankings obtained by Friedman's test: HPSO was ranked first (average ranking 2.06), ILS second (2.79) and ISA third (3.00), followed in order by SA1, SA2, SS and, in last place, BA.
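The average rankings reported in Table 10 are the per-instance ranks of each method averaged over all instances; the computation can be sketched as follows, on hypothetical gap values, with tied scores sharing the mean of their rank positions as in Friedman's test.

```python
def average_ranks(results):
    """results[m] is a list of per-instance scores for method m (lower
    is better). Returns each method's rank averaged over instances,
    with ties sharing the mean of their rank positions."""
    methods = list(results)
    n_instances = len(next(iter(results.values())))
    totals = {m: 0.0 for m in methods}
    for i in range(n_instances):
        scores = sorted(results[m][i] for m in methods)
        for m in methods:
            s = results[m][i]
            # mean rank position of all methods sharing this score
            first = scores.index(s) + 1
            last = first + scores.count(s) - 1
            totals[m] += (first + last) / 2
    return {m: totals[m] / n_instances for m in methods}
```

Friedman's test statistic is then computed from these average ranks; the method names and scores above are placeholders, not the values from Table 10.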

The p-value computed by Friedman's test was 0.036E-11, which is below the significance level \(\alpha =5\%\). This means that there is a significant difference among the results of the evaluated algorithms. Thus, we performed a post hoc procedure (Holm's method) to confirm that there are significant differences between the performance of the control algorithm (the best performing one, HPSO) and the remaining methods, and to reject the null hypothesis of equivalent performance between the evaluated methods.

Table 10 Average ranking of the comparative methods calculated by Friedman’s test

As shown in Table 11, Holm's procedure reveals that the control method (HPSO) is statistically better than SA1, SA2, SS and BA. On the other hand, there is no significant difference between the performance of HPSO and that of the ISA and ILS methods. Based on the results obtained from Holm's method, we conclude that the performance of the proposed ISA method is highly comparable to that of the other competitors in the literature, and that ISA can be considered a viable alternative for tackling the ALP.

Table 11 Adjusted p value of the Holm procedure table for \(\alpha =0.05\) (Friedman)

The results show that ISA maintains a high level of performance as the instance size increases, revealing that ISA behaves well on large instances. These findings confirm that the integration of SA with ILS contributes effectively to attaining optimal solutions on both small- and large-sized instances. In short, the results achieved by ISA attest to the benefits of hybridizing the ILS and SA algorithms, which yields a high degree of efficiency in addressing the ALP and marks clear progress for intelligent approaches to this problem.

5.6 Computational time

It is not practical to compare the computational resources of ISA with those of other ALP-based methods, as no standard tests exist and it is difficult to quantitatively compare running times among scheduling approaches that use different machine specifications. In addition, a direct comparison is problematic because specific information about the software environment is often not given: details such as the operating system, programming language, compiler, number of iterations and the programming skill of the implementer are frequently unspecified. Nevertheless, the computational time required to execute each problem instance using ISA is reported in Table 12, in comparison with the times of the comparative methods available to the authors.

Table 12 shows that the mean computational time of ISA compares favorably with that required by the other methods. This confirms that the computational effort of ISA is well within the range of the computational efforts of the comparative methods, and emphasizes that ISA is a computationally efficient method for the ALP, producing high-quality solutions in reasonable time. In a nutshell, Table 12 affirms that the ISA method is capable of handling large-sized instances at high speed on a computer with modest specifications: a Lenovo laptop with an Intel Core i5 processor and 6 GB of RAM, running Windows, on the Java platform.

Table 12 Computational time (in seconds) of ISA and the presented state-of-the-art methods that addressed ALP over all instances

6 Conclusion and future work

This work has demonstrated the use of a hybridization of iterated local search (ILS) and simulated annealing (SA) in a single framework to develop a fast and effective optimization method for the aircraft landing problem (ALP) on 49 benchmark problem instances. This single optimization framework, referred to as iterated simulated annealing (ISA), has been adopted to solve the ALP, which aims to schedule each incoming aircraft to land on a runway within a predetermined time window. The key roles of ILS and SA in ISA are to navigate the large search space and circumvent local optima in order to improve the final ALP outcomes. The flexibility of the ISA scheme in reliably addressing the ALP is shown on benchmark problem instances of various sizes in terms of the number of aircraft and runways. Specifically, the standard problem instances under study span thirteen datasets split into small-sized and large-sized groups. Indeed, this scheduling problem is a combinatorial optimization problem that cannot easily be tackled by basic algorithms.

The statistical results (best, average and standard deviation) compared to the best-known values showed a high degree of performance and reliability. Convergence curves were presented to demonstrate the capacity of the proposed ISA algorithm to provide sensible solutions for the ALP. The experimental results show that the ISA approach outperforms strong state-of-the-art methods on large-sized instances. Notably, it arrived at new best results for 4 of the 24 large-sized instances and reached the best-known results on all small-sized instances with little computational effort.

There are several trends for further work that could be considered:

  • Assessing the adaptability of ISA to datasets with higher levels of complexity and to datasets with very large problem instances.

  • Evaluating the adaptability of ISA to the dynamic case of ALP, with instances of various structures that possess additional features such as a combination of takeoffs and landings on the same or on different runways.

  • Assessing the suitability of the ISA method for scheduling other problems of varying scalability and complexity, such as course timetabling [35], examination timetabling [36], nurse rostering [37], multiple-reservoir scheduling [38], patient admission scheduling [39], economic load dispatch [40] and the traveling salesman problem [41].

  • The proposed ISA could be further improved by hybridizing it with effective features derived from other promising local search algorithms, such as the tabu list from tabu search [42], GRASP operators from GRASP [43] and the \(\beta \) operator from \(\beta \)-hill climbing [44].