1 Introduction

One of the core premises of evolutionary algorithms (EAs) is their reliance on stochasticity: the ability to generate random events, which in turn provides the perturbation that drives the search towards the desired goal. The task of generating this stochasticity generally falls to pseudorandom number generators (PRNGs): structured mathematical formulations that aim to yield well-distributed numbers within a specified range.

A wide variety of such random number generators exists; however, the most commonly used is the Mersenne Twister (Matsumoto and Nishimura 1998). A number of its variants have been designed; for a full listing see Matsumoto (2012). Other common PRNGs are the Mother of All, CryptoAPI, Indirection, Shift, Accumulate, Add, and Count (ISAAC), Keep It Simple Stupid (KISS) and Multiply-With-Carry (MWC) generators.

This paper explores a novel approach to pseudorandom number generation, one with a lineage in chaos theory. The term chaos describes the complex behaviour of simple, well-behaved systems. When casually observed, this behaviour may seem erratic and somewhat random; however, these systems are deterministic, and their future behaviour is fully determined by their initial conditions. The question is then how to reconcile this determinism with the apparent randomness arising from the nonlinearity of these systems.

Sudden and dramatic changes in some nonlinear systems may give rise to the complex behaviour called chaos. The noun chaos and the adjective chaotic are used to describe the time behaviour of a system when that behaviour is aperiodic (it never exactly repeats) and appears random or noisy (Hilborn 2000).

This aperiodic, non-repeating behaviour of chaotic systems is the core foundation of this research. The objective is then to take a valid chaotic system and embed it in an EA as its PRNG. Four general branches of chaotic systems exist: dissipative systems, fractals, dissipative high-dimensional systems and conservative systems. The systems used for this research are the discrete dissipative systems of the Lozi and Delayed Logistic maps. These two systems are relatively simple in terms of period density, and it is therefore easier to obtain data from them through sectional cropping.

Many chaotic maps in the literature possess certainty, ergodicity and the stochastic property. Recently, chaotic sequences have been adopted instead of random sequences, with improved results. They have been used to improve the performance of EAs (Alatas et al. 2009; Caponetto et al. 2003). They have also been used together with some heuristic optimisation algorithms (Davendra et al. 2010; Zuo and Fan 2006) to express optimisation variables. The choice of chaotic sequences is justified theoretically by their unpredictability, i.e. by their spread-spectrum characteristic, non-periodic and complex temporal behaviour, and ergodic properties (Ozer 2010).

A mathematical description of the connection between chaotic systems and random number generators has been given by Herring and Julian (1989), who showed a strong linkage between the Lehmer generator (Lehmer 1951) and the simple chaotic dynamical system of the Bernoulli shift (Palmore and McCauley 1987). By observation, the Bernoulli shift itself is similar in form to the Lehmer generator. It was therefore postulated that prime modulus multiplicative linear congruential generators are implementations of deterministic chaotic processes (Herring and Julian 1989).

A chaotic piecewise-linear one-dimensional (PL1D) map has been utilised as a chaotic random number generator by Stojanovski and Kocarev (2001). The construction of the chaotic random number system is based on exploiting the double nature of chaos: deterministic in microscopic space, by its defining equations, and random in macroscopic space. This system is mathematically proven to overcome the major drawbacks of classical random number systems: reliance on the assumed randomness of a physical process, inability to analyse and optimise the generator, inability to compute its probabilities and entropy, and inconclusiveness of statistical tests.

A family of enhanced chaotic pseudorandom number generators (CPRNGs) has been developed by Lozi (2008), where the main impetus is the generation of very long series of pseudorandom numbers. This is accomplished through what is called the ultra-weak coupling of chaotic systems, such as the Tent map, which is enhanced in order to conceal the genuine chaotic function (Lozi 2009).

Differential evolution (DE), a population-based global optimisation algorithm over continuous spaces applying stochastic state transition rules, first proposed by Price (1999), has been modified using chaotic sequences. Davendra et al. (2010) applied the canonical DE to solve the PID optimisation problem, whereas Ozer (2010) applied a sequence of chaotic maps to optimise a range of benchmark problems, concluding that the Sinus map and Circle map somewhat increased the solution quality and the ability to escape local optima. The economic dispatch problem was solved by Lu et al. (2011), where the Tent map was utilised as a chaotic local search in order to bypass local optima.

Concerning combinatorial optimisation problems, Yuan et al. (2008) developed a chaotic hybrid DE, where the parameter selection and operation are handled by chaotic sequences, whereas Davendra et al. (2010) used chaotic sequences for both population generation and parameter selection in a hybrid DE-Scatter Search algorithm to solve the travelling salesman problem.

The motivation of this work is to ascertain whether any improvement can be attained in an EA by embedding a chaos map. Since the current literature has applied chaos to a multitude of such research with varying applications of the chaos maps (population generation, mutation, crossover, escaping local optima, etc.), we decided to replace the PRNG entirely with a chaotic map. The Lozi map was chosen because it is a very simple map, which can be described completely in a few equations, making it easy to implement. Additionally, the Lozi map proved to be an excellent generator in previous work on DE for real-valued optimisation (Davendra et al. 2010).

This research modifies the discrete self-organising migrating algorithm (DSOMA) (Davendra 2009, 2013; Davendra and Bialic-Davendra 2013), another population-based global optimisation algorithm over discrete spaces, with the Lozi chaotic map, and utilises it to solve the lot-streaming flowshop problem with setup time. The primary aim of this work, therefore, is to ascertain whether there is any improvement when comparing the chaotic variant to the standard DSOMA variant, which utilises the best available PRNG, the Mersenne Twister. Secondly, we have generated two unique data sets using the Lozi and Delayed Logistic systems for testing.

2 Lozi map

The Lozi map is a two-dimensional piecewise-linear map whose dynamics are similar to those of the better-known Hénon map (Hennon 1979), and which admits strange attractors.

The advantage of the Lozi map is that, due to its linearity, one can compute every relevant parameter exactly, and successful control can be demonstrated rigorously.

The Lozi map equations are given in (1) and (2).

$$\begin{aligned}&{X_{n + 1}} = 1 - a\cdot \left| {{X_n}} \right| + b\cdot {Y_n}\end{aligned}$$
(1)
$$\begin{aligned}&{Y_{n + 1}} = {X_n} \end{aligned}$$
(2)

The parameters used in this work are \(a = 1.7\) and \(b = 0.5\) as suggested in Caponetto et al. (2003) and the initial conditions are \(X_0 = -0.1\) and \(Y_0 = 0.1\). The Lozi map is shown in Fig. 1.

Fig. 1
figure 1

Lozi map
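For illustration, the iteration of (1) and (2) with the stated parameters can be sketched in a few lines of Python (an illustrative sketch; the function and variable names are ours, not from any published implementation):

```python
def lozi(n, a=1.7, b=0.5, x0=-0.1, y0=0.1):
    """Iterate the Lozi map n times and return the series X_1, ..., X_n."""
    xs = []
    x, y = x0, y0
    for _ in range(n):
        # Eqs. (1) and (2): Y_{n+1} takes the old X_n
        x, y = 1 - a * abs(x) + b * y, x
        xs.append(x)
    return xs
```

From the stated initial conditions the first iterate is \(X_1 = 1 - 1.7\cdot |-0.1| + 0.5\cdot 0.1 = 0.88\), and, being deterministic, the same seed always reproduces the same sequence.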

3 Delayed logistic map

The Delayed Logistic (DL) map equations are given in (3) and (4).

$$\begin{aligned}&{X_{n + 1}} = A\cdot {X_n}\cdot ( {1 - {Y_n}}) \end{aligned}$$
(3)
$$\begin{aligned}&{Y_{n + 1}} = {X_n} \end{aligned}$$
(4)

The parameter used in this work is \(A = 2.27\) and the initial conditions are \(X_0 = 0.001\) and \(Y_0 = 0.001\). The DL map is shown in Fig. 2.

Fig. 2
figure 2

Delayed logistic
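The DL iteration of (3) and (4) can be sketched analogously (again an illustrative sketch with our own naming):

```python
def delayed_logistic(n, A=2.27, x0=0.001, y0=0.001):
    """Iterate the Delayed Logistic map n times and return X_1, ..., X_n."""
    xs = []
    x, y = x0, y0
    for _ in range(n):
        # Eqs. (3) and (4): Y_{n+1} takes the old X_n
        x, y = A * x * (1 - y), x
        xs.append(x)
    return xs
```

From the stated initial conditions the first iterate is \(X_1 = 2.27\cdot 0.001\cdot (1 - 0.001) = 0.00226773\).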

Generally, two different kinds of random number are required: real and integer. The real number is required within the range 0–1, whereas the integer is generally required between 1 and some upper bound, such as the size of the individual or the population size.

Each point of the chaotic system is a two-dimensional point in space. Generally, either the x or the y axis values can be used as the random number; we have used the data from the x axis.

A sample histogram of the x-axis data values, mapped into the range between 0 and 1, is given in Fig. 3.

Fig. 3
figure 3

Lozi map data sample between [0, 1]

In order to obtain numbers between the specified bounds and in the required order, a large quantity of chaotic numbers is initially generated with the chaotic map and stored in memory. Each number is then modified sequentially, as required, for usage.
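This buffering scheme can be sketched as a small generator class (a minimal sketch; the class name is ours, and the min-max rescaling into [0, 1] is our assumption, since the text does not specify the exact normalisation used):

```python
class ChaoticPRNG:
    """Pregenerates a buffer of Lozi-map x values and serves them in order.

    NOTE: min-max normalisation into [0, 1] is an assumption of this
    sketch; the paper only states that values are mapped into [0, 1].
    """

    def __init__(self, size=100000, a=1.7, b=0.5, x0=-0.1, y0=0.1):
        xs = []
        x, y = x0, y0
        for _ in range(size):
            x, y = 1 - a * abs(x) + b * y, x  # Lozi map, Eqs. (1)-(2)
            xs.append(x)
        lo, hi = min(xs), max(xs)
        self.buf = [(v - lo) / (hi - lo) for v in xs]  # rescale into [0, 1]
        self.pos = 0

    def next_real(self):
        """Return the next chaotic number in [0, 1]."""
        v = self.buf[self.pos]
        self.pos = (self.pos + 1) % len(self.buf)
        return v

    def next_int(self, low, high):
        """Return the next chaotic integer in [low, high]."""
        v = low + int(self.next_real() * (high - low + 1))
        return min(v, high)  # guard the next_real() == 1.0 edge case
```

Replacing every `rand()` call of the algorithm with `next_real()` (or `next_int()`) yields the chaos-driven variant.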

4 Discrete self-organising migrating algorithm

DSOMA (Davendra 2009, 2013; Davendra and Bialic-Davendra 2013) is the discrete version of SOMA (Zelinka 2004), developed to solve permutation-based combinatorial optimisation problems. The SOMA ideology of sampling the space between two individuals is modified for application in the combinatorial domain. Assume that there are two individuals in a search space. The objective for DSOMA is to traverse from one individual to the other, while mapping each discrete space between these two individuals (Fig. 4). Figures 5 and 6 are 3D representations, where the DSOMA mapping is shown as the surface joining the two individuals.

Fig. 4
figure 4

Two individuals in search space

Fig. 5
figure 5

End view of the two individuals in 3D search space

Fig. 6
figure 6

Isometric view of the two individuals in 3D search space

The major input of this algorithm is the sampling of the jump sequence between the individuals in the population, and the procedure of constructing new trial individuals from these sampled jump sequence elements.

The overall outline for DSOMA can be given as:

  1. Initial phase

     (a) Population generation: An initial number of permutative trial individuals is generated for the initial population.

     (b) Fitness evaluation: Each individual is evaluated for its fitness.

  2. DSOMA

     (a) Creating jump sequences: Taking two individuals, a number of possible jump positions is calculated between each corresponding element.

     (b) Constructing trial individuals: Using the jump positions, a number of trial individuals is generated. Each element is selected from a jump element between the two individuals.

     (c) Repairment: The trial individuals are checked for feasibility, and those which contain an incomplete schedule are repaired.

  3. Selection

     (a) New individual selection: The new individuals are evaluated for their fitness, and the best new individual replaces the old individual if it improves upon its fitness.

  4. Generations

     (a) Iteration: Iterate the population until the specified number of migrations is reached.

DSOMA requires a number of parameters, as given in Table 1. The major addition is the parameter \(J_{min}\), which gives the minimum number of jumps (samplings) between two individuals. The SOMA variables PathLength, StepSize and PRT Vector, which are defined a priori by the user based on the recommendations in Zelinka (2004), are not initialised in DSOMA, as they are dynamically calculated using the adjacent elements between the individuals.

Table 1 DSOMA parameters

4.1 Initialisation

The population is initialised as permutative schedules representative of the size of the problem at hand (5). As this is the initial population, the superscript of x is initialised to 0. The rand() function obtains a value between 0 and 1, and the INT() function rounds the real number down to the nearest integer. The if condition ensures that each element within an individual is unique.

$$\begin{aligned} \begin{array}{l} x_{i,j}^0 = \left\{ \begin{array}{l} 1 + \textit{INT}\left( \textit{rand}\left( \right) \cdot \left( {N - 1} \right) \right) \\ \mathrm{{if}}\; x_{i,j}^0 \notin \left\{ {x_{i,1}^0 ,\ldots ,x_{i,j - 1}^0 } \right\} \\ \end{array} \right. \\ \qquad \qquad i = 1,\ldots ,\beta ;\; j = 1,\ldots , N \end{array} \end{aligned}$$
(5)

Each individual is evaluated for its fitness (6), and the best individual, whose index in the population is assigned as L (leader), is designated the leader \(X_{L}^0\), with its fitness given as \(C_{L}^0\).

$$\begin{aligned} C_i^0 = \mathfrak {I}\left( {X_i^0 } \right) ,\quad i = 1,\ldots ,\beta \end{aligned}$$
(6)

The pseudocode for generating a population is given in Fig. 7.

Fig. 7
figure 7

Pseudocode for generating initial population
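The rejection rule of (5) can be sketched in Python as follows (an illustrative sketch; note that for the loop to terminate, the draw must be able to reach all of 1..N, so we scale `rand()` by N, on the assumption that `rand()` returns values in [0, 1)):

```python
import random

def init_population(beta, N, rand=random.random):
    """Generate beta permutative individuals of length N following Eq. (5):
    repeatedly draw a candidate element and accept it only if unique."""
    population = []
    for _ in range(beta):
        individual = []
        while len(individual) < N:
            # scaled by N (not N - 1) so the full range 1..N is reachable
            candidate = 1 + int(rand() * N)
            if candidate not in individual:
                individual.append(candidate)
        population.append(individual)
    return population
```

Passing a chaotic generator as `rand` in place of `random.random` gives the chaos-driven initialisation.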

After the generation of the initial population, the migration counter t is set to 1 where \(t=1,\ldots ,M\) and the individual index i is initialised to 1, where \(i=1,\ldots ,\beta \). Using these values, the following sections (Sects. 4.2, 4.3, 4.4, 4.5) are recursively applied, with the counters i and t being updated in Sects. 4.6 and 4.7 respectively.

4.2 Creating jump sequences

DSOMA operates by calculating the number of discrete jump steps that each individual has to traverse. The parameter of minimum jumps (\(J_{\min }\)), used in lieu of PathLength, states the minimum number of sampled individuals between two individuals.

Taking two individuals in the population, one as the incumbent (\(X_i^t\)) and the other as the leader (\(X_L^t\)), the possible number of jump individuals \(J_{\max }\) is the mode of the difference between the adjacent values of the elements in the individual (7). A vector J of size N is created to store the difference between the adjacent elements in the individuals. The mode() function obtains the most common number in J and designates it as \(J_{max}\).

$$\begin{aligned} \begin{array}{l} J_{j} = {\left| {x_{i,j}^{t - 1} - x_{L,j}^{t - 1} } \right| , \quad j = 1,\ldots ,N}\\ {J_{\max }} = \left\{ \begin{array}{l} \textit{mode} (J) \;\; \textit{if} \;\; \textit{mode} (J) > 0\\ 1\;\;\textit{otherwise} \end{array} \right. \end{array} \end{aligned}$$
(7)

The step size (s) can now be calculated as the integer fraction of the possible jumps over the required jumps (8).

$$\begin{aligned} s = \left\{ \begin{array}{l} \left\lfloor {\frac{{{J_{\max }}}}{{{J_{\min }}}}} \right\rfloor \mathrm{{ \;\;if\;\;}}{J_{\max }} \ge {J_{\min }}\\ 1\mathrm{{ \;\; otherwise}} \end{array} \right. \end{aligned}$$
(8)

Create a jump matrix \(\mathbf{{G}}\), which contains all the possible jump positions, that can be calculated as:

$$\begin{aligned} \begin{array}{l} {\mathbf{{G}}_{l,j}} = \left\{ \begin{array}{l} x_{i,j}^{t - 1} + s \cdot l\quad \mathrm{{ if \;\;\;}}x_{i,j}^{t - 1} + s \cdot l < x_{L,j}^{t - 1}\mathrm{{ \; and \;}} \\ \qquad \qquad \qquad \qquad \quad x_{i,j}^{t - 1} < x_{L,j}^{t - 1} \\ x_{i,j}^{t - 1} - s \cdot l\quad \mathrm{{ if \;\;\;}}x_{i,j}^{t - 1} - s \cdot l > x_{L,j}^{t - 1} \mathrm{{\;and \;}} \\ \qquad \qquad \qquad \qquad \quad x_{i,j}^{t - 1} > x_{L,j}^{t - 1} \\ 0\quad \mathrm{{ otherwise}} \end{array} \right. \\ \qquad \qquad j = 1,\ldots ,N;\;l = 1,\ldots ,{J_{\min }} \end{array} \end{aligned}$$
(9)

The pseudocode for creating jump sequences is given in Fig. 8.

Fig. 8
figure 8

Pseudocode for creating jump sequences
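The computations of (7)–(9) can be sketched as follows (an illustrative sketch with our own naming; the descending branch of (9) is read as requiring \(x_{i,j}^{t-1} - s\cdot l > x_{L,j}^{t-1}\)):

```python
from collections import Counter

def create_jump_matrix(xi, xl, j_min):
    """Build the jump matrix G of Eq. (9) from incumbent xi and leader xl."""
    N = len(xi)
    # Eq. (7): element-wise distances and their mode J_max
    J = [abs(a - b) for a, b in zip(xi, xl)]
    mode = Counter(J).most_common(1)[0][0]
    j_max = mode if mode > 0 else 1
    # Eq. (8): integer step size
    s = j_max // j_min if j_max >= j_min else 1
    # Eq. (9): candidate jump positions between each pair of elements
    G = [[0] * N for _ in range(j_min)]
    for j in range(N):
        for l in range(1, j_min + 1):
            if xi[j] < xl[j] and xi[j] + s * l < xl[j]:
                G[l - 1][j] = xi[j] + s * l      # ascending towards leader
            elif xi[j] > xl[j] and xi[j] - s * l > xl[j]:
                G[l - 1][j] = xi[j] - s * l      # descending towards leader
    return G
```

For example, with incumbent [1, 5, 9], leader [7, 5, 3] and \(J_{\min } = 2\), the distances are [6, 0, 6], so \(J_{\max } = 6\), \(s = 3\), and only the first jump row is populated.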

4.3 Constructing trial individuals

For each pair of individuals, a total of \(J_{\min }\) new individuals can now be constructed from the jump positions. Take a new temporary population H \(( {H = \{ {{Y_1},\ldots ,{Y_{{J_{\min }}}}} \}})\), in which each new individual \(Y_{w}\) (\(w = 1,\ldots ,J_{\min }\)) is constructed piecewise from \(\mathbf {G}\). Each element \(y_{w,j}\) \(( {{Y_w} = \{ {{y_{w,1}},\ldots ,{y_{w,N}}} \}},\; j = 1,2,\ldots ,N )\) in the individual indexes its values from the corresponding \(j\)th column of \(\mathbf{{G}}\). Each \(l\)th (\(l = 1,\ldots ,J_{\min }\)) position for a specific element is sequentially checked in \(\mathbf{{G}}_{l,j}\) to ascertain whether it already exists in the current individual \(Y_{w}\). If it is a new element, it is accepted into the individual, and the corresponding value in the jump matrix is set to zero, \(\mathbf{{G}}_{l,j}=0\). This iterative procedure is given in (10), and the pseudocode for constructing trial individuals is presented in Fig. 9.

$$\begin{aligned} \begin{array}{l} y_{w,j} = \left\{ \begin{array}{l} \mathbf{{G}}_{l,j} \left\{ \begin{array}{l} \mathrm{{if \quad }}\mathbf{{G}}_{l,j} \notin \; \{ {y_{w,1},\ldots ,y_{w,j - 1}} \} \\ \qquad \mathrm{{\; and \;}}\mathbf{{G}}_{l,j} \ne 0; \\ \mathrm{{then \; }}\mathbf{{G}}_{l,j} = 0; \\ \end{array} \right. \\ 0\mathrm{{ \quad otherwise}} \end{array} \right. \\ \qquad \qquad l = 1,\ldots ,J_{\min };\;j = 1,\ldots ,N;\; w = 1,\ldots ,J_{\min } \end{array} \end{aligned}$$
(10)
Fig. 9
figure 9

Algorithm for constructing trial individuals
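The piecewise construction of (10) can be sketched as (an illustrative sketch; note that G is consumed in place, as in the text):

```python
def construct_trials(G, j_min, N):
    """Construct J_min trial individuals piecewise from the jump matrix G,
    following Eq. (10); consumed entries of G are zeroed out."""
    H = []
    for _ in range(j_min):
        Y = []
        for j in range(N):
            elem = 0  # 0 marks a position left for repair
            for l in range(j_min):
                if G[l][j] != 0 and G[l][j] not in Y:
                    elem = G[l][j]
                    G[l][j] = 0  # consume this jump position
                    break
            Y.append(elem)
        H.append(Y)
    return H
```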

4.4 Repairing trial individuals

Some trial individuals may not contain a complete permutative schedule. The jump individuals \(Y_{w}\) \((w = 1,2,\ldots ,J_{min})\) are constructed in such a way that each infeasible element \(y_{w,j}\) is indexed by 0.

Taking each jump individual \(Y_{w}\) iteratively from H, the following set of procedures can be applied recursively.

Take A and B, where A is initialised to the permutative schedule \(A = \{ {1,2,\ldots ,N} \}\) and B is the complement of individual \(Y_{w}\) relative to A as given in (11).

$$\begin{aligned} B = A\backslash Y_w \end{aligned}$$
(11)

If, after the complement operation, B is an empty set (\(B = \{ \}\)), then the individual contains a proper permutative schedule and does not require repairment.

However, if B contains values, then these are the elements missing from individual \(Y_{w}\). The repairment procedure is as follows. First, the positions of the elements in set B are randomised. Then, iterating through the elements \(y_{w,j}\) \((j = 1,\ldots ,N)\) of individual \(Y_w\), each position where \(y_{w,j} = 0\) is replaced by a value from B. Assigning \(B_{size}\) as the total number of elements in B (and hence missing from \(Y_{w}\)), the repairment procedure is given in (12).

$$\begin{aligned}&y_{w,j} = \left\{ \begin{array}{l} B_h \mathrm{{\quad \;\; if \;\;}}y_{w,j} = 0 \\ y_{w,j} \mathrm{{\quad otherwise}} \end{array} \right. \nonumber \\&\quad h = 1,\ldots ,B_{size};\; j = 1,\ldots ,N \end{aligned}$$
(12)

After each individual is repaired in H, it is then evaluated for its fitness value as in (13) and stored in \(\gamma \), the fitness array of size \(J_{min}\).

$$\begin{aligned} \gamma _w = \mathfrak {I}( {Y_w} ),\quad w = 1,\ldots ,J_{\min } \end{aligned}$$
(13)

The pseudocode for repairing trial individuals is given in Fig. 10.

Fig. 10
figure 10

Pseudocode for repairing trial individuals
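The complement and fill-in steps of (11) and (12) can be sketched as (an illustrative sketch; the number of zeros in Y always equals the size of B, since the nonzero entries are distinct elements of 1..N):

```python
import random

def repair(Y, N):
    """Repair a trial individual per Eqs. (11) and (12): find the missing
    elements, shuffle them, and write them into the zero positions."""
    B = [v for v in range(1, N + 1) if v not in Y]  # Eq. (11): B = A \ Y
    random.shuffle(B)                               # randomise positions in B
    it = iter(B)
    return [v if v != 0 else next(it) for v in Y]   # Eq. (12)
```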

4.5 Population update

A 2-opt local search is applied to the best individual \(Y_{best}\), the one with the minimum fitness value (\(\min ( {\gamma _w} )\)). After the local search routine, the new individual is compared with the incumbent individual \(X_i^{t-1}\), and if it improves on the fitness, the new individual is accepted into the population (14).

$$\begin{aligned} X_i^t = \left\{ \begin{array}{l} Y_{best} \mathrm{{\quad if \;}}\mathfrak {I}( {Y_{best}})<C_i^{t-1}\\ X_i^{t - 1} \mathrm{{ \quad otherwise}} \end{array} \right. \end{aligned}$$
(14)

If this individual improves on the overall best individual in the population, it then replaces the best individual in the population (15).

$$\begin{aligned} X_{best}^t = \left\{ \begin{array}{l} Y_{best}\mathrm{{\quad if \;}}\mathfrak {I}( {Y_{best}}) < C_{best}^t\\ X_{best}^{t - 1} \mathrm{{ \quad otherwise}} \end{array} \right. \end{aligned}$$
(15)

4.6 Iteration

Sequentially, incrementing i, the population counter by 1, another individual \(X_{i + 1}^{t - 1}\) is selected from the population, and it begins its own sampling towards the designated leader \(X_L^{t - 1}\) from Sects. 4.2, 4.3, 4.4 and 4.5. It should be noted that the leader does not change during the evaluation of one migration.

4.7 Migrations

Once all the individuals have executed their sampling towards the designated leader, the migration counter t is incremented by 1. The individual iterator i is reset to 1 (the beginning of the population) and the loop in Sects. 4.2, 4.3, 4.4, 4.5 and 4.6 is re-initiated.

4.8 2-opt local search

The local search utilised in DSOMA is the 2-opt local search algorithm. It was chosen because it is the simplest of the k-opt class of routines. As DSOMA sampling occurs between two individuals in k dimensions, the local search refines the individual. This, in turn, provides a better probability of finding a new leader after each jump sequence. The placement of the local search was refined heuristically during experimentation.

The complexity of this local search is \(O( {n^2 })\). As the local search accounts for the majority of the computation time of DSOMA, the overall computational complexity of DSOMA for a single migration is \(O( {n^3 })\).
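A generic 2-opt pass over a permutation can be sketched as follows (an illustrative sketch; the fitness function is a placeholder for the makespan evaluation of the scheduling problem):

```python
def two_opt(perm, fitness):
    """2-opt local search: reverse the segment between two positions and
    keep the reversal whenever it improves (lowers) the fitness."""
    best = list(perm)
    best_fit = fitness(best)
    improved = True
    while improved:
        improved = False
        for i in range(len(best) - 1):
            for k in range(i + 1, len(best)):
                cand = best[:i] + best[i:k + 1][::-1] + best[k + 1:]
                cand_fit = fitness(cand)
                if cand_fit < best_fit:
                    best, best_fit = cand, cand_fit
                    improved = True
    return best
```

One full sweep over the \(i, k\) pairs gives the \(O(n^2)\) cost stated above.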

A schematic of the DSOMA routine is given in Fig. 11, which graphically outlines the procedure for creating jump sequence between two individuals, and constructing trial individuals.

Fig. 11
figure 11

DSOMA schematic

5 Lot-streaming problem

The lot-streaming problem considered in this paper is a subset of the generic flowshop scheduling problem. Whereas in the permutative flowshop problem each of the n jobs is processed as a whole on each of the m machines, in the lot-streaming variant each job is divided into smaller tasks called sub-lots (l) (Pan et al. 2011). Once the processing of a sub-lot on its preceding machine is completed, it can be transferred to the downstream machine immediately. However, all \(l(j)\) sub-lots of job j must be processed continuously, as no intermingling or exchanging is allowed. A separable, sequence-dependent setup time is necessary for the first sub-lot of each job j before it can be processed on any machine k (Pan and Ruiz 2012).

Two different cases of the problem are considered: the idling and the non-idling case. The idling case is the simpler variant of the problem, where only the schedule of the lots is taken into consideration. The non-idling case, on the other hand, is more practical; it arises when the machine is not allowed to be idle. This is beneficial especially when a number of machines are in operation and resources such as electricity would otherwise be wasted. Another practical situation is when expensive machinery is employed, since idling on such equipment is often undesirable. Other examples come from sectors where less expensive machinery is used but where machines cannot be easily stopped and restarted (Yavuz 2010).

A detailed description of the lot-streaming problem is given in Potts and Baker (1989), Chang et al. (2007) and Sarin and Jaiprakash (2007).

5.1 Idling case

The constraint in this case is that at any given time a machine can process only one sub-lot, and each sub-lot can only be processed individually. Let the processing time of each sub-lot of job \(j\) on machine \(m\) be \(P(m,j)\), and let the setup time of job \(j\) on machine \(m\), after job \(j'\) has been processed, be \(s(m,j',j)\); \(s(m,j,j)\) represents the setup time of job \(j\) if it is the first job to be processed on the machine. The objective is to find a sequence with the optimal sub-lot starting and completion times to minimise the makespan.

The permissible job permutation can be presented as \(\pi = \{ {{\pi _1},{\pi _2},\ldots ,{\pi _n}} \}\), and the earliest start and completion time as \(S( m,j,r)\) and \(C( m,j,r)\), where \(r\) represents the specific sub-lot on job \(j\) being processed on machine \(m\).

Initially, the first and third directive of (16) are used to calculate the earliest start time of the first sub-lot of the first job (\({{\pi _1}}\)). The second and fourth directives calculate the completion times, making sure that preemption of jobs is not allowed. Equation (17) controls the earliest start time and the earliest completion time for the successive sub-lots of job \({\pi _1}\), which ensure that sub-lots of the same job are processed continuously.

Equation sets (18)–(20) are used for the calculations of the sub-lots of the following jobs in \(\pi \). When calculating the start time of the first sub-lot of a job in (18), the completion time of the previous job on the current machine must be considered, along with the completion time of the sub-lot on the previous machine and the setup time of the job on the current machine.

The makespan is calculated using (20); it is the completion time of the last sub-lot of the last job \({\pi _n}\) on the last machine \(m\) (Pan and Ruiz 2012).

A schematic of the makespan for the idling case is given in Fig. 12.

$$\begin{aligned}&{S}( {1,{\pi _1},1}) = s( {1,{\pi _1},{\pi _1}}),\nonumber \\&{C}( {1,{\pi _1},1}) = {S}( {1,{\pi _1},1}) + P( {1,{\pi _1}}),\nonumber \\&{S}( {w,{\pi _1},1}) = \max \{ {{C}( {w - 1,{\pi _1},1} ),s( {w,{\pi _1},{\pi _1}})} \},\end{aligned}$$
(16)
$$\begin{aligned}&{C}( {w,{\pi _1},1}) = {S}( {w,{\pi _1},1}) + P( {w,{\pi _1}}),\nonumber \\&\quad w = 2,3,\ldots ,m \nonumber \\&{S}( {1,{\pi _1},r}) = {C}( {1,{\pi _1},r - 1}),\nonumber \\&{C}( {1,{\pi _1},r}) = {S}( {1,{\pi _1},r}) + P( {1,{\pi _1}}),\nonumber \\&\quad r = 2,3,\ldots ,l( {{\pi _1}}); \end{aligned}$$
(17)
$$\begin{aligned}&{S}( {w,{\pi _1},r}) = \max \left\{ {{C}( {w - 1,{\pi _1},r}),{C}( {w,{\pi _1},r - 1})} \right\} ,\nonumber \\&{C}( {w,{\pi _1},r}) = {S}( {w,{\pi _1},r}) + P( {w,{\pi _1}}),\nonumber \\&\quad r = 2,3,\ldots ,l( {{\pi _1}}),\; w = 2,3,\ldots ,m \nonumber \\&{S}( {1,{\pi _i},1}) = {C}( {1,{\pi _{i - 1}},l( {{\pi _{i - 1}}} )} ) + s( {1,{\pi _{i - 1}},{\pi _i}} ),\nonumber \\&{C}( {1,{\pi _i},1}) = {S}( {1,{\pi _i},1}) + P( {1,{\pi _i}}),\quad i = 2,3,\ldots ,n;\nonumber \\&{S}( {w,{\pi _i},1}) = \max \left\{ \begin{array}{l} {C}( {w - 1,{\pi _i},1}),\\ {C}( {w,{\pi _{i - 1}},l( {{\pi _{i - 1}}} )} )\\ \quad + s( {w,{\pi _{i - 1}},{\pi _i}}) \end{array} \right\} ,\end{aligned}$$
(18)
$$\begin{aligned}&{C}( {w,{\pi _i},1}) = {S}( {w,{\pi _i},1}) + P( {w,{\pi _i}}), \;\nonumber \\&\quad i = 2,3,\ldots ,n,\; w = 2,3,\ldots ,m\nonumber \\&{S}( {1,{\pi _i},r}) = {C}( {1,{\pi _i},r - 1} ),\nonumber \\&{C}( {1,{\pi _i},r}) = {S}( {1,{\pi _i},r}) + P( {1,{\pi _i}})\;\end{aligned}$$
(19)
$$\begin{aligned}&\quad i = 2,3,\ldots ,n, \;\; r = 2,3,\ldots ,l( {{\pi _i}});\nonumber \\&{S}( {w,{\pi _i},r}) = \max \{ {{C}( {w - 1,{\pi _i},r} ),{C}( {w,{\pi _i},r - 1})} \},\nonumber \\&{C}( {w,{\pi _i},r}) = {S}( {w,{\pi _i},r}) + P( {w,{\pi _i}} ),\; \nonumber \\&\quad i = 2,3,\ldots ,n, \;\; r = 2,3,\ldots ,l( {{\pi _i}} ),\; w = 2,3,\ldots ,m \nonumber \\&{C_{\max }}( \pi ) = {C}( {m,{\pi _n},l( {{\pi _n}})}) \end{aligned}$$
(20)
Fig. 12
figure 12

Idling case of the lot streaming problem
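The recurrences (16)–(20) can be sketched as a single makespan routine (an illustrative sketch; the data layout and 0-based machine/job indexing are ours, and the recurrences are implemented as we read them, with the first-job setup applied on every machine):

```python
def makespan_idling(perm, P, s, l):
    """Makespan for the idling case, following recurrences (16)-(20).

    perm       : job sequence (0-based job indices)
    P[w][j]    : processing time of one sub-lot of job j on machine w
    s[w][j1][j2]: setup time of job j2 after job j1 on machine w
    l[j]       : number of sub-lots of job j
    """
    m = len(P)
    C = {}          # completion times keyed by (machine, job, sub-lot)
    prev = None     # previously scheduled job
    for j in perm:
        for r in range(1, l[j] + 1):
            for w in range(m):
                ready = 0
                if w > 0:                # sub-lot done on previous machine
                    ready = max(ready, C[(w - 1, j, r)])
                if r > 1:                # previous sub-lot of the same job
                    ready = max(ready, C[(w, j, r - 1)])
                elif prev is not None:   # first sub-lot: previous job + setup
                    ready = max(ready, C[(w, prev, l[prev])] + s[w][prev][j])
                else:                    # very first sub-lot of the first job
                    ready = max(ready, s[w][j][j])
                C[(w, j, r)] = ready + P[w][j]
        prev = j
    # Eq. (20): completion of the last sub-lot of the last job on machine m
    return C[(m - 1, perm[-1], l[perm[-1]])]
```

For a hypothetical two-job, two-machine instance with two sub-lots per job, zero setups and sub-lot times P = [[1, 3], [2, 1]], the sequence (job 0, job 1) yields a makespan of 9.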

5.2 Non-idling case

For the non-idling case, the earliest start time for the first sub-lot is given in (21) and (22), where the start time is the maximum of the setup time of the job on the current machine, the completion time of the first sub-lot on the previous machine, and the difference between the completion time of the whole job on the previous machine and the processing time of all but the last sub-lot of the job on the current machine. This ensures that there is no idle time between two adjacent sub-lots. The last directives of these equations calculate the completion time of the first job.

The subsequent processing times of the following jobs in the sequence are given in (23) and (24).

A schematic of the makespan for the non-idling case is given in Fig. 13.

$$\begin{aligned}&{S}( {1,{\pi _1},1}) = s( {1,{\pi _1},{\pi _1}})\nonumber \\&{C}( {1,{\pi _1},l( {{\pi _1}})}) = {S}( {1,{\pi _1},1}) + l( {{\pi _1}}) \times P( {1,{\pi _1}})\end{aligned}$$
(21)
$$\begin{aligned}&{S}( {w,{\pi _1},1}) = \max \left\{ \begin{array}{l} s( {w,{\pi _1},{\pi _1}}),\; {S}( {w - 1,{\pi _1},1} ) \\ \quad + P( {w - 1,{\pi _1}}),\\ {C}( {w - 1,{\pi _1},l( {{\pi _1}} )} ) \\ \quad -( {l( {{\pi _1}}) - 1}) \times P( {w,{\pi _1}}) \end{array} \right\} ,\end{aligned}$$
(22)
$$\begin{aligned}&{C}( {w,{\pi _1},l( {{\pi _1}})}) = {S}( {w,{\pi _1},1}) + l( {{\pi _1}}) \times P( {w,{\pi _1}}),\nonumber \\&\quad w = 2,3,\ldots ,m\nonumber \\&{S}( {1,{\pi _i},1}) = {C}( {1,{\pi _{i - 1}},l( {{\pi _{i - 1}}})}) + s( {1,{\pi _{i - 1}},{\pi _i}}),\nonumber \\&{C}( {1,{\pi _i},l( {{\pi _i}})} ) = {S}( {1,{\pi _i},1}) + l( {{\pi _i}}) \times P( {1,{\pi _i}}), \end{aligned}$$
(23)
$$\begin{aligned}&\quad i = 2,3,\ldots ,n \nonumber \\&{S}( {w,{\pi _i},1}) = \max \left\{ \begin{array}{l} {S}( {w - 1,{\pi _i},1} ) + P( {w - 1,{\pi _i}} ),\\ {C}( {w - 1,{\pi _i},l( {{\pi _i}})}) - ( {l( {{\pi _i}}) - 1}) \\ \quad \times P( {w,{\pi _i}}),\\ {C}( {w,{\pi _{i - 1}},l( {{\pi _{i - 1}}})} ) \\ \quad + s( {w,{\pi _{i - 1}},{\pi _i}}) \end{array} \right\} ,\nonumber \\&\quad i = 2,3,\ldots ,n, \;\; w = 2,3,\ldots ,m \nonumber \\&{C}( {w,{\pi _i},l( {{\pi _i}} )} ) = {S}( {w,{\pi _i},1}) + l( {{\pi _i}}) \times P( {w,{\pi _i}} ),\; \nonumber \\&\quad i = 2,3,\ldots ,n, \; w = 2,3,\ldots ,m \end{aligned}$$
(24)

The makespan for the non-idling case can then be calculated as (25).

$$\begin{aligned} {C_{\max }}( \pi ) = {C}( {m,{\pi _n},l( {{\pi _n}})}) \end{aligned}$$
(25)

The objective of the lot-streaming flowshop scheduling problem with the makespan criterion is to find a permutation \({\pi ^ * }\) in the set of all permutations \(\Pi \) such that (26) holds (Pan and Ruiz 2012).

$$\begin{aligned} {C_{\max }}( {{\pi ^ * }}) \le {C_{\max }}( \pi ),\quad \forall \pi \in \Pi \end{aligned}$$
(26)
Fig. 13
figure 13

Non-idling case of the lot streaming problem

6 Data sets generation

In keeping with the theme of utilising chaotic maps in lieu of a PRNG, the data sets have been generated using two unique chaotic maps: the Lozi and the Delayed Logistic map. Five sizes of data sets have been generated: 10 jobs \(\times \) 5 machines, 20 jobs \(\times \) 10 machines, 50 jobs \(\times \) 25 machines, 75 jobs \(\times \) 30 machines and 100 jobs \(\times \) 50 machines. There are five instances for each size, giving in total 25 data set instances for each of the Lozi and Delayed Logistic data sets.

In order to have unique data sets, each instance was initialised from a unique start position of the respective chaotic system. Additionally, the map was not reinitialised between instances. Two different maps were used in order to introduce more versatility into the data sets and to remove any particular bias.

The data sets are available at Davendra (2012) for download.

7 Experimentation and analysis

The main emphasis of this work is to assess the improvement obtained by applying the chaotic map to DSOMA. Therefore, the experimentation compares the generic and the chaos-driven DSOMA (DSOMA\(_{\hbox {c}}\)) algorithms on the same problem sets, using identical parameter settings.

The parameter settings for DSOMA and DSOMA\(_{\hbox {c}}\) are given in Table 2. The experiments were conducted on a MacBook Pro, Sandy Bridge 2.3 GHz Intel Core i7, 8 GB 1,333 MHz DDR3 RAM.

Table 2 Parameter settings

7.1 Lozi generated data sets results

The results of the Lozi generated data sets are given in Table 3 for the idling case and Table 4 for the non-idling case. A total of five experiments was performed on each data set, and the results are presented as the summation of the five experiments for the different problem instance sizes. Four different statistical analyses have been performed on the results: the minimum, maximum, average and execution time. Additionally, a t test for the different data sets of the two problems was conducted; the results are given in Tables 5 and 6. For the idling case in Table 3, DSOMA\(_{\hbox {c}}\) obtains a better average minimum (8,684) and average (8,919.84) value, compared to 8,899 and 9,248.48 for DSOMA. Furthermore, it obtains more of the minimum (5) and average (5) values for the individual data sets. In terms of the t test, DSOMA\(_{\hbox {c}}\) is significantly better than DSOMA on the three data sets 20 \(\times \) 10, 50 \(\times \) 25 and 75 \(\times \) 30 in the idling case, and the two algorithms are statistically equivalent on the remaining two data sets.

Table 3 Lozi idling results
Table 4 Lozi non-idling results
Table 5 Paired t test for the Lozi idling results
Table 6 Paired t test for the Lozi non-idling results

For the non-idling case given in Table 4, the DSOMA and DSOMA\(_{\hbox {c}}\) algorithms perform comparably. DSOMA obtains better average minimum (9,863) and average (10,173.24) values compared to 9,878 and 10,179.32 for DSOMA\(_{\hbox {c}}\). In terms of individual instances, the two algorithms have almost identical performance: DSOMA obtains the better average on two and the better minimum on three data sets, compared to three and three for DSOMA\(_{\hbox {c}}\). The t test results for all the data instances show that the two algorithms are not significantly different.

7.2 Delayed Logistic generated data sets results

For the Delayed Logistic problem sets, the results are given in Table 7 for the idling case and Table 8 for the non-idling case; the corresponding t test results are given in Tables 9 and 10. For the idling case, DSOMA\(_{\hbox {c}}\) performs better on all problem instances, obtaining the better average (10,173.24) and minimum (12,176) values. Analysing the t test results, DSOMA\(_{\hbox {c}}\) is significantly better than DSOMA on the two instances of 50 \(\times \) 25 and 100 \(\times \) 50.

Table 7 Delayed Logistic idling results
Table 8 Delayed Logistic non-idling results
Table 9 Paired t test for the Delayed Logistic idling results
Table 10 Paired t test for the Delayed Logistic non-idling results

The non-idling case also affirms the better performance of the DSOMA\(_{\hbox {c}}\) algorithm. Following the trend of the previous experiment, the chaos-based DSOMA obtains better average minimum (13,379.8) and average (13,963.16) values, as well as the better individual minimum on five and the better average on four data sets. From the t test results, DSOMA\(_{\hbox {c}}\) is significantly better than DSOMA on four instances.

8 Conclusion

This research set out to ascertain the effectiveness of applying the Lozi map to the DSOMA algorithm in order to solve the lot-streaming flowshop scheduling problem. In total, 50 data sets were generated using the Lozi and Delayed Logistic maps and solved for both the idling and non-idling cases. With five experiments conducted per data set, a total of 500 individual experiments validate the findings in this paper.

From the summarised results in Table 11, it is evident that the DSOMA\(_{\hbox {c}}\) algorithm outperforms the generic DSOMA algorithm. Across all compared measures, DSOMA\(_{\hbox {c}}\) obtained more minimum values (18 compared to 5) and more better-average values (17 compared to 4) than the DSOMA algorithm. This shows that DSOMA\(_{\hbox {c}}\) is more robust and has a higher probability of finding a better solution. In terms of the t test, DSOMA\(_{\hbox {c}}\) is significantly better than DSOMA on nine data sets and statistically equal on the remaining 11. This result raises a broader issue regarding the importance of the PRNG in stochastic algorithms: whereas the defining argument of the past decades has been the design of better crossover and mutation routines, the generator that drives selection and mutation, the PRNG itself, now gains more prominence.

Table 11 Summarised results

The main downside of the experiments has been execution time, which increased slightly for the chaos-based version. This can, however, be improved significantly by utilising the GPU for computation. This will be pursued as a further direction of this research, alongside the application of different chaotic maps as generators. Comparing different chaos maps is important for deducing which maps are compatible with the different problem classes.