Introduction

The competitive market drives producers to improve their manufacturing systems with more efficient and effective plans within short time horizons. Consequently, designing an efficient assembly line has been an important and widely debated issue in manufacturing system design over the past decades (Baudin 2002). The manufacturing assembly line was first introduced by Henry Ford in the early 1900s (Fonseca et al. 2005). The assembly line balancing problem (ALBP) involves assigning the tasks required to produce a product, in series or in batches, to a set of workstations so that an objective function is optimized subject to constraints (Boysen et al. 2007). From this point of view, the task sequence is the most important issue to consider when developing an assembly line (Kao 1976).

There are numerous reviews of the ALBP in the literature (Boysen et al. 2007; Battaïa and Dolgui 2013; Baybars 1986b; Becker and Scholl 2006; Ghosh and Gagnon 1989; Scholl 1999; Scholl and Becker 2006; Tasan and Tunali 2008), which generally classify it into two main types: the Simple ALBP (SALBP) and the Generalized ALBP (GALBP). GALBP versions have extra features, such as cost objectives, station parallelization, and mixed-model production, compared with SALBPs (Becker and Scholl 2006). The SALBP versions, classified by objective, are presented in Table 1.

SALBP-F is a feasibility problem for a given combination of cycle time and number of workstations. SALBP-1 and SALBP-2 are duals of each other: the goal of SALBP-1 is to minimize the number of workstations for a given cycle time, while the goal of SALBP-2 is to minimize the cycle time for a given number of workstations. In SALBP-E, the cycle time and the number of workstations are minimized simultaneously so that line efficiency is maximized. In addition to this classification, assembly lines can be divided, with respect to their layout, into straight assembly lines and U-shaped assembly lines. The straight assembly line is one of the most important traditional mass-production configurations, and the U-shaped assembly line was later introduced to reduce costs and support Just-In-Time (JIT) production (Monden 1983). Assembly lines can also be divided into single-model and mixed-model lines with respect to the types of products (Boysen et al. 2007). A single-model assembly line produces only one product, whereas lines that can produce more than one product are called mixed-model assembly lines.

The SALB problem is the single-model, straight-line version of assembly line balancing; its U-shaped counterpart is called simple U-line balancing (SULB).

The ALBPs were proven to be NP-hard by Gutjahr and Nemhauser (1964) and Ajenblit and Wainwright (1998). Given this difficulty, considerable effort has been devoted to developing heuristic methods such as the ranked positional weighting technique (RPWT) (Helgeson and Birnie 1961), the COMSOAL technique (Arcus 1966), the MALB technique (Dar-El 1973), the MUST technique (Dar-El and Rubinovitch 1979), the LBHA method (Baybars 1986a), and a critical path method (CPM) based approach (Avikal et al. 2013), as well as meta-heuristic methods such as the genetic algorithm (GA) (Ajenblit and Wainwright 1998; Falkenauer and Delchambre 1992), simulated annealing (SA) (Baykasoglu 2006), tabu search (TS) (Peterson 1993; Lapierre et al. 2006), particle swarm optimization (PSO) (Jian-sha et al. 2009), and ant colony optimization (ACO) (Sabuncuoglu et al. 2009).

Table 1 Versions of SALBP (Scholl and Becker 2006)

A multi-objective GA for the U-shaped assembly line problem was proposed by Hwang et al. (2008), who compared straight and U-shaped assembly lines. Kim et al. (2009) presented a mathematical model and a GA for a two-sided assembly line. Hwang and Katayama (2009) proposed a multi-decision genetic approach for mixed-model U-shaped lines, validated through a case study. A TS algorithm for the two-sided assembly line problem was developed by Özcan and Toklu (2009) and benchmarked against existing approaches. An adaptive GA for the ALBP was offered by Yu and Yin (2010), who demonstrated its efficiency with an example. In another noteworthy work, a hybrid GA was proposed by Akpınar and Mirac Bayhan (2011) and applied to mixed-model ALB with parallel workstations and zoning constraints. Zhang and Gen (2011) used a multi-objective GA to solve mixed-model assembly lines. Kazemi et al. (2011) proposed a two-stage GA for mixed-model U-shaped assembly lines. Nearchou (2011) used a novel PSO-based method for the SALBP and compared it with existing methods. Rabbani et al. (2012) proposed a GA-based heuristic for mixed-model two-sided assembly lines. Chang et al. (2012) focused on productivity in printed circuit board assembly lines and presented a GA with external self-evolving multiple archives to solve this problem. Chutima and Chimklai (2012) used a PSO for the multi-objective two-sided mixed-model assembly line and showed that combining their algorithm with a local search scheme improves the quality of its solution set. Purnomo et al. (2013) offered a mathematical model for the two-sided assembly line, solved it with a GA and an iterative first-fit rule method, and compared the results of these methods. Manavizadeh et al. (2013) proposed an SA for the mixed-model U-line balancing type-I problem and compared the algorithm's results with an exact method. Yuan et al. (2013) proposed an integer programming model and a new meta-heuristic for the mixed-model assembly line problem. Hamzadayi and Yildiz (2013) used an SA algorithm for line balancing and model sequencing problems in U-shaped assembly lines. Dou et al. (2013) proposed a discrete PSO for SALBP-1 and compared their results with a GA. Kalayci and Gupta (2013) used a PSO with a neighborhood-based mutation operator for sequence-dependent disassembly line balancing. Li et al. (2014) developed a mathematical model and a novel multi-objective optimization algorithm for the two-sided assembly line. Delice et al. (2014) proposed a modified PSO for the two-sided assembly line problem. Zha and Yu (2014) proposed a hybrid ant colony algorithm for the U-line balancing and rebalancing problem and compared their results with existing methods. Al-Zuheri et al. (2014) considered the mixed-model assembly line and used a GA to solve it.

Among these meta-heuristic methods, most studies have been devoted to the GA, and this body of research indicates that there is sufficient motivation to apply this popular algorithm to the problem addressed here. The GA performs a controlled random search to identify the optimal solution and was provided as an alternative to traditional optimization techniques in complex circumstances (Tasan and Tunali 2008).

The concentration of many researchers on the GA and its popularity motivated the authors to improve the performance of this meta-heuristic through a modification, as part of the contribution of this paper, and to apply it to the problem described above.

The numerous works reviewed above solve ALBPs under crisp conditions, whereas real-world problems usually involve uncertainty and vagueness. Fuzzy numbers can represent the ambiguity of real data well. Although the ALBP has received considerable attention in the literature, only a few studies address it in a fuzzy environment; in other words, compared with crisp ALBPs, relatively few researchers have focused on the fuzzy ALBP so far (Scholl and Becker 2006; Tasan and Tunali 2008). Among the articles that solve the fuzzy ALBP by exact methods, the works of Toklu and Özcan (2008), Kara et al. (2009), Zhang and Cheng (2010), and La Scalia (2013) are noticeable.

Among the studies in this area, those that use heuristic and meta-heuristic methods to solve the ALBP in a fuzzy environment are rare. In the 1990s, Tsujimura et al. (1995) and Gen et al. (1996) initiated the use of a fuzzy GA for this problem: they solved SALBP-1 with a typical GA in which task processing times were represented by fuzzy numbers. Brudaru and Valmar (2004) proposed a GA combined with a branch and bound method to solve SALBP-1. Fonseca et al. (2005) presented the ranked positional weighting technique and the COMSOAL method modified for fuzzy numbers and applied them to this sort of problem. Hop (2006) proposed a heuristic method to solve a fuzzy mixed-model ALBP. Zhang et al. (2009) developed a heuristic method to solve the SULBP with fuzzy numbers. Özbakır and Tapkan (2010) presented a model for the two-sided ALBP and solved it with the Bees algorithm. Zacharia and Nearchou (2012) introduced a multi-objective GA for SALBP-2 with fuzzy numbers, in which they applied a weighted sum of objectives, and Zacharia and Nearchou (2013) presented a GA-based meta-heuristic for SALBP-E.

As mentioned, motivated by the widespread use and popularity of the GA, this paper aims to improve the performance of this algorithm through a modification. It is also noteworthy that no previous research has solved SULB-1 with meta-heuristic methods in a fuzzy environment. Therefore, this paper considers SALB-1 and SULB-1 and presents a GA modified with the one-fifth success rule, which enhances its performance. A fuzzy controller is developed for better adaptation between the GA and the one-fifth success rule, and the parameters of the proposed algorithm are calibrated by Taguchi design of experiments. Because of the uncertainty of the real world, fuzzy numbers are used to represent the assembly line cycle time and the task processing times.

The rest of the paper is organized as follows. In the “Problem formulation” section, the main characteristics of SALBP and SULBP are presented. In the “Fuzzy numbers arithmetic and ranking” section, fuzzy arithmetic is provided together with a number of criteria for ranking fuzzy numbers. The genetic algorithm, the one-fifth success rule, and the procedure for modifying the genetic algorithm with the one-fifth success rule are presented in the “Genetic algorithm” section. In the “Comparison” section, the parameters of the proposed algorithm are first calibrated using the Taguchi method; the proposed algorithm is then examined on benchmarks and its results are compared with existing methods. Finally, conclusions and some guidelines for future studies are provided in the “Conclusion” section.

Problem formulation

This section presents the main characteristics of SALBP-1 and SULBP-1. As mentioned before, an assembly line is a series of locations called workstations, at which subsets of the tasks required to produce a unit are performed (Gen et al. 1996). For these problems, the available information is as follows (Miltenburg and Wijngaard 1994):

  • A given set of tasks \(J=\left\{ i|i=1,2,\ldots n\right\} \).

  • The set of tasks’ needed time which is shown as \(T=\left\{ \tilde{t}_i |i=1,2, \ldots n\right\} \).

  • Each task's processing time is represented as a triangular fuzzy number (TFN).

  • The set of precedence relations \(P=\{\left( a,b\right) \)|task \(a\) must be completed before task \(b\}\).

  • Maximum allowed fuzzy cycle time \(\left( {\tilde{C}_{max}}\right) \).

Symbols of this article are listed below:

  • \(\tilde{t}_i\): Fuzzy processing time that is represented by TFNs.

  • \(S_k\): Set of tasks performed at workstation \(k\), \(S_k =\{i\)|task \(i\) is done at workstation \(k\}, \forall k=1,\ldots ,m\)

  • \(\tilde{t}\left( {S_k}\right) \): Fuzzy time that every workstation needs to complete all the required tasks.

  • \(\tilde{c}:\) Assembly line’s fuzzy cycle time, i.e. \(\mathop {\max }\limits _k\left\{ {\tilde{t}\left( {S_k}\right) }\right\} \).

  • \(\tilde{C}_{max}\): Maximum allowed fuzzy cycle time.

  • \(\tilde{T}\): Total processing time.

  • \(\tilde{I}_k\): Fuzzy idle time for workstation \(S_k, \left( {k=1,\ldots , m}\right) \).

  • \(\tilde{E}\): Fuzzy balance efficiency.

  • \(\widetilde{ID}\): Fuzzy idle percentage of the assembly line.

In this problem, there are a number of workstations represented by the set \(WS=\left\{ {ws_1,ws_2,\ldots ,ws_m}\right\} \), and each task should be assigned to exactly one of these workstations. In addition, the set \(J\) should be allocated to the workstations so that the constraints of Eqs. (1)–(3) are satisfied (Tsujimura et al. 1995; Gen et al. 1996):

$$\begin{aligned}&\mathop {\bigcup }\limits _{k=1}^m S_k =J\end{aligned}$$
(1)
$$\begin{aligned}&S_k \cap S_l =\emptyset \quad \forall \, k\ne l \end{aligned}$$
(2)
$$\begin{aligned}&\mathop {\sum }\limits _{i\epsilon S_k} \tilde{t}_i \le \tilde{C}_{\max } \quad k=1,2,\ldots ,m \end{aligned}$$
(3)

The first and second constraints guarantee that all tasks are allocated to workstations and that each task is allocated to exactly one workstation. The third constraint ensures that no workstation's cycle time exceeds the maximum allowable fuzzy cycle time. In SALB, the \(j\)th task can be allocated to the \(k\)th workstation only when its prior tasks have been assigned to workstations \(1, 2,\ldots, k\), whilst in SULB the \(j\)th task can be allocated to the \(k\)th workstation only when all its predecessor tasks and/or all its successor tasks have been allocated to workstations \(1,2,\ldots, k\) (Miltenburg and Wijngaard 1994). Thus, in task allocation, constraint (4) should be met for SALB and constraint (5) for SULB (Miltenburg and Wijngaard 1994).

$$\begin{aligned}&\text {If } \left( {a,b}\right) \in P,\; a \in S_k,\; b\in S_l, \text { then } k\le l, \text { for all } a; \end{aligned}$$
(4)
$$\begin{aligned}&\text {If } \left( {a,b}\right) \in P,\; a \in S_k,\; b\in S_l, \text { then } k\le l, \text { for all } a;\nonumber \\&\text {or, if } \left( {b,c}\right) \in P,\; b\in S_k,\; c\in S_r, \text { then } r\le k, \text { for all } c;\nonumber \\ \end{aligned}$$
(5)

Constraint (4) is defined for SALB and ensures compliance with the predecessor constraints. Constraint (5) is defined for SULB and guarantees compliance with at least one of the predecessor or successor constraint sets.

Besides the main goal of SALBP-1 and SULBP-1, which is minimizing the number of workstations, it is possible to define other measures for comparing solutions with the same number of workstations. For this problem, the following quantities are obtained [Eqs. (6)–(11)] (Fonseca et al. 2005; Zhang et al. 2009):

$$\begin{aligned} \tilde{t} \left( {S_k}\right)&= \mathop {\sum }\limits _{i\epsilon S_k} \tilde{t}_i,\quad k=1,\ldots ,m\end{aligned}$$
(6)
$$\begin{aligned} \tilde{c}&= \mathop {\max }\limits _k \left\{ {\tilde{t} \left( {S_k}\right) }\right\} \end{aligned}$$
(7)
$$\begin{aligned} \tilde{I}_k&= \tilde{C}_{max} - \tilde{t} \left( {S_k}\right) ,\quad k=1,\ldots ,m\end{aligned}$$
(8)
$$\begin{aligned} \tilde{T}&= \mathop {\sum }\limits _{k=1}^m \tilde{t} \left( {S_k}\right) \end{aligned}$$
(9)
$$\begin{aligned} \tilde{E}&= \tilde{T}/(m\times \tilde{c})\end{aligned}$$
(10)
$$\begin{aligned} \tilde{ID}&= \mathop {\sum }\nolimits _{k=1}^m (\tilde{C}_{max} - \tilde{t} \left( {S_k}\right) )\big /(m\times \tilde{C}_{max}) \end{aligned}$$
(11)

Figure 1 illustrates the main difference between SALB and SULB. It depicts a SALB and a SULB with a cycle time of 10 min. Each node represents a task, and the number above the node is its processing time. In the SALB, the tasks are allocated to five workstations (with an efficiency of 39/50), whereas in the SULB the tasks are allocated to four workstations (with an efficiency of 39/40).

Fig. 1 Straight assembly line (a), U-shaped assembly line (b)

Equation (6) calculates the fuzzy time of each workstation and Eq. (7) the fuzzy cycle time of the assembly line. Equation (8) calculates the fuzzy idle time of each workstation. Equation (9) gives the total fuzzy processing time, Eq. (10) the fuzzy efficiency of the assembly line, and Eq. (11) the fuzzy idle percentage of the assembly line.

Fuzzy numbers arithmetic and ranking

This section provides fuzzy arithmetic as well as a number of criteria for ranking fuzzy numbers. Because of the vagueness and uncertainty of the real world, the data are fuzzy numbers. In this paper, as shown in Fig. 2, TFNs are used to represent the processing times of the tasks. A TFN is characterized by three parameters, \(\tilde{A} =(A_1, A_2, A_3)\). Triangular data are used in this paper because of their computational simplicity compared with other fuzzy representations; the required operations are given in Eqs. (12)–(15) (Kaufmann and Gupta 1991):

$$\begin{aligned} \tilde{A} + \tilde{B}&= (A_1 +B_1, A_2 +B_2, A_3 +B_3) \end{aligned}$$
(12)
$$\begin{aligned} \tilde{A}-\tilde{B}&= (A_1 -B_1, A_2 -B_2, A_3 -B_3)\end{aligned}$$
(13)
$$\begin{aligned} \tilde{A} \times \tilde{B}&= (A_1 \times B_1, A_2 \times B_2, A_3 \times B_3)\end{aligned}$$
(14)
$$\begin{aligned} \tilde{A}/\tilde{B}&= (A_1 /B_3, A_2 /B_2, A_3 /B_1) \end{aligned}$$
(15)

The operator \(\le \) is used to compare two fuzzy numbers in constraint (3); for the comparison and ranking of TFNs, the following criteria are used for prioritization (Bortolan and Degani 1985):

  • Criterion 1: the TFN with the greater weighted average of its three points (beginning, peak, end) is greater [Eq. (16)].

  • Criterion 2: the TFN with the greater peak (middle) point is greater [Eq. (17)].

  • Criterion 3: the TFN with the greater spread between its beginning and end points is greater [Eq. (18)].

    $$\begin{aligned} F_1&= \frac{A_1 +2\times A_2 +A_3}{4}\end{aligned}$$
    (16)
    $$\begin{aligned} F_2&= A_2 \end{aligned}$$
    (17)
    $$\begin{aligned} F_3&= A_3 -A_1 \end{aligned}$$
    (18)

To compare TFNs, Criterion 1 [Eq. (16)] is applied first; if it cannot determine the greater TFN, Criterion 2 [Eq. (17)] is applied, and so on.
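For illustration, the following sketch (in Python, provided purely as an illustration rather than the paper's MATLAB implementation; a TFN is assumed to be stored as a simple (A1, A2, A3) tuple) shows how the arithmetic of Eqs. (12)–(15) and the lexicographic ranking of Eqs. (16)–(18) could be coded.

```python
# Minimal sketch of TFN arithmetic (Eqs. 12-15) and ranking (Eqs. 16-18).
# Assumption: a TFN is a tuple (a1, a2, a3); the names are illustrative only.

def tfn_add(a, b):
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])          # Eq. (12)

def tfn_sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])          # Eq. (13)

def tfn_mul(a, b):
    return (a[0] * b[0], a[1] * b[1], a[2] * b[2])          # Eq. (14)

def tfn_div(a, b):
    return (a[0] / b[2], a[1] / b[1], a[2] / b[0])          # Eq. (15)

def tfn_rank_key(a):
    """Lexicographic ranking key: Criterion 1, then 2, then 3 (Eqs. 16-18)."""
    f1 = (a[0] + 2 * a[1] + a[2]) / 4.0    # weighted average of the three points
    f2 = a[1]                              # peak point
    f3 = a[2] - a[0]                       # spread between beginning and end
    return (f1, f2, f3)

def tfn_le(a, b):
    """Fuzzy comparison used in constraint (3): a <= b."""
    return tfn_rank_key(a) <= tfn_rank_key(b)

if __name__ == "__main__":
    t1, t2 = (2.9, 3.0, 3.1), (4.9, 5.0, 5.1)
    print(tfn_add(t1, t2))   # (7.8, 8.0, 8.2)
    print(tfn_le(t1, t2))    # True
```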

Fig. 2 Triangular fuzzy number

Genetic algorithm

The genetic algorithm (Holland 1975) is a popular meta-heuristic algorithm. Most GAs consist of the following steps:

  • Step 1. Determine the population size (nPop), the maximum number of iterations (Itr), the migration rate \(\left( {\alpha \%}\right) \), the crossover rate \(\left( {\beta \%}\right) \), and the mutation rate \(\left( {\gamma \%}\right) \), such that \(\alpha +\beta +\gamma =100\,\%\) (\(\alpha \) is the proportion of chromosomes that migrate from one generation into the next, while \(\beta \) and \(\gamma \) are the proportions of chromosomes created in each generation by the crossover and mutation operations, respectively).

  • Step 2. Generate initial population, using random numbers.

  • Step 3. Calculate fitness function for each chromosome.

  • Step 4. If the stopping criteria are satisfied, the algorithm stops; otherwise it goes to Step 5.

  • Step 5. Create the new generation, by the following methods:

    • Select \(\alpha \%\) of the chromosomes from the previous generation (based on fitness, at random, or by other methods) and place them in the next generation.

    • Apply crossover to \(\beta \%\) of the generation and place the offspring in the next generation.

    • Apply mutation to \(\gamma \%\) of the previous generation and place the new chromosomes in the next generation (to escape local optima).

  • Step 6. Return to Step 3 and repeat.

The GA’s general diagram is displayed in Fig. 3.

Fig. 3 General diagram of the genetic algorithm (Holland 1975)
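As a compact illustration of Steps 1–6, the following generic sketch (Python; a maximization problem and user-supplied fitness, chromosome-generation, crossover, and mutation functions are assumed, and none of the names come from the paper's implementation) shows the overall loop.

```python
import random

def run_ga(fitness, new_chromosome, crossover, mutate,
           n_pop=50, itr=100, alpha=0.1, beta=0.7):
    """Generic GA loop following Steps 1-6; the remaining gamma = 1 - alpha - beta
    share of each generation is filled by mutation."""
    pop = [new_chromosome() for _ in range(n_pop)]            # Step 2: initial population
    for _ in range(itr):                                      # Step 4: fixed-iteration stopping rule
        pop.sort(key=fitness, reverse=True)                   # Step 3: evaluate and rank
        elite = pop[: int(alpha * n_pop)]                     # Step 5a: migration of best chromosomes
        children = [crossover(*random.sample(pop, 2))         # Step 5b: crossover offspring
                    for _ in range(int(beta * n_pop))]
        mutants = [mutate(random.choice(pop))                 # Step 5c: mutation offspring
                   for _ in range(n_pop - len(elite) - len(children))]
        pop = elite + children + mutants                      # next generation (Step 6: repeat)
    return max(pop, key=fitness)
```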

If the GA operators are defined properly and are well adapted to the problem, the algorithm can solve it efficiently. Therefore, the algorithm operators must first be defined for the ALBP. To generate the initial chromosomes, random permutations from one to the number of tasks are generated, and to satisfy the precedence restrictions, a generated chromosome is repaired if necessary. For example, suppose there are eight tasks whose sequence is to be optimized. Using random numbers, a permutation from one to eight is generated, for example [1 3 5 4 7 8 2 6], as a chromosome.

Suppose that one of the precedence constraints requires the second task to be done before the fifth task; this makes the generated chromosome infeasible, so it must be repaired. To repair this chromosome, the gene containing the second task is placed before the gene containing the fifth task; after the repair, the initial chromosome is reordered into [1 3 2 5 4 7 8 6].
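A minimal sketch of one possible repair strategy that reproduces this example (Python; the precedence pairs and function names are illustrative assumptions, not the paper's implementation):

```python
def repair(chromosome, precedence):
    """Precedence-preserving repair: walk the chromosome left to right and,
    before placing a task, first place any of its not-yet-placed predecessors
    (kept in chromosome order)."""
    preds = {t: [a for (a, b) in precedence if b == t] for t in chromosome}
    placed, result = set(), []

    def place(task):
        if task in placed:
            return
        for p in sorted(preds[task], key=chromosome.index):   # keep chromosome order
            place(p)                                          # pull predecessors forward
        placed.add(task)
        result.append(task)

    for task in chromosome:
        place(task)
    return result

# Example from the text: task 2 must be done before task 5.
print(repair([1, 3, 5, 4, 7, 8, 2, 6], {(2, 5)}))   # -> [1, 3, 2, 5, 4, 7, 8, 6]
```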

After repairing the chromosomes, the tasks are allocated to workstations. If the current workstation is \(k\), then for the SALB the sorted tasks in the chromosome continue to be assigned to the \(k\)th workstation as long as the workstation's cycle time does not exceed the maximum allowed fuzzy cycle time (if the expression \(\left\{ {\mathop {\sum }\nolimits _{i\epsilon S_k} \tilde{t}_i +\tilde{t}_j \le \tilde{C}_{max}}\right\} \) is satisfied, then task \(j\) is assigned to the current workstation; otherwise a new workstation is opened).

The difference between task allocation in the SALB and in the SULB is that in the SALB the tasks must be selected from the beginning of the chromosome and assigned to the workstations, whilst in the SULB tasks can be selected from the beginning or from the end of the chromosome (a decoding sketch is given below). After allocating the tasks to the workstations, the fitness (cost) of each chromosome is calculated, and a new generation is then produced with the Selection, Crossover, and Mutation operators. For the Selection operator, random selection, roulette-wheel selection, tournament selection, etc. can be used (Haupt and Haupt 2004). In this paper, three methods, single-point crossover, two-point crossover, and uniform crossover, are used for the Crossover operator (Haupt and Haupt 2004). There are different methods for the Mutation operator; in this paper, two genes are selected at random and swapped with each other (Haupt and Haupt 2004). Note that after crossover and mutation, an offspring chromosome may be infeasible (due to the precedence constraints), in which case it must be repaired.
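The following sketch (Python; it reuses the TFN helpers tfn_add and tfn_le from the earlier sketch, assumes every single task fits within the cycle time, and uses a greedy front-or-back choice for the U-line that is an illustrative simplification rather than necessarily the exact rule used in the paper) shows how a repaired chromosome could be decoded into workstations for the straight line and for the U-shaped line.

```python
def fits(load, t, c_max):
    """Check whether adding TFN t to the current fuzzy load stays within C_max (constraint 3)."""
    return tfn_le(tfn_add(load, t), c_max)

def decode_salb(seq, times, c_max):
    """Straight line: tasks are always taken from the front of the repaired chromosome."""
    stations, load = [[]], (0.0, 0.0, 0.0)
    for task in seq:
        if not fits(load, times[task], c_max):   # current station is full: open a new one
            stations.append([])
            load = (0.0, 0.0, 0.0)
        stations[-1].append(task)
        load = tfn_add(load, times[task])
    return stations

def decode_sulb(seq, times, c_max):
    """U-shaped line: a task may be taken from the front or the back of the chromosome.
    Taking the last remaining task is precedence-feasible because all of its successors
    (which appear after it in the repaired chromosome) have already been assigned.
    Assumes every single task fits within C_max on its own."""
    seq, stations, load = list(seq), [[]], (0.0, 0.0, 0.0)
    while seq:
        if fits(load, times[seq[0]], c_max):
            task = seq.pop(0)                    # take from the front
        elif fits(load, times[seq[-1]], c_max):
            task = seq.pop(-1)                   # take from the back
        else:                                    # neither end fits: open a new workstation
            stations.append([])
            load = (0.0, 0.0, 0.0)
            continue
        stations[-1].append(task)
        load = tfn_add(load, times[task])
    return stations
```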

One-fifth success rule

The one-fifth success rule was first introduced by Rechenberg (1973) for the evolution strategies (ES) algorithm, which is itself a meta-heuristic method. Like the GA, ESs use mutation and crossover of chromosomes to evolve the generations. Each chromosome is represented as \((x_1,x_2,\ldots ,x_n,\sigma )\), where the \(x_i\) are the problem's variables, represented by real numbers, and \(\sigma \) is the mutation step length. Rechenberg (1973) proved the one-fifth success rule mathematically for an ES with one parent and one offspring. The rule states that if the ratio of the number of successful mutations to the total number of mutations equals one-fifth, the convergence rate to the optimal solution is maximal. Several mutation methods have been proposed for ESs; one of them, given in formula (19), adds a normally distributed random value to all genes.

$$\begin{aligned} x_i^{\prime }=x_i+N\left( {0,\sigma }\right) \end{aligned}$$
(19)

The value of \(\sigma \) is determined by the one-fifth success rule during the algorithm. The one-fifth success rule follows three modes to achieve the maximum convergence rate:

  • Mode 1. If the probability of success over the past \(k\) populations equals one-fifth, the mutation step length \((\sigma )\) does not change.

  • Mode 2. If the probability of success over the past \(k\) populations is more than one-fifth, the mutation step length \((\sigma )\) is increased.

  • Mode 3. If the probability of success over the past \(k\) populations is less than one-fifth, the mutation step length \((\sigma )\) is decreased.

Among these modes, Mode 2 prevents premature convergence by creating diversity in the generations, and Mode 3 increases the speed of convergence. In fact, this rule considers diversity and convergence simultaneously: if the diversity of the chromosomes is low, the step length is increased so that a wider area of the solution space is searched and the local optimum is escaped, and if convergence is slow, the search space is narrowed by reducing the step length so that the chromosomes converge to the optimal solution. The general diagram of the one-fifth success rule is shown in Fig. 4 (\(C\) is a constant, \(\beta \) is equal to \(\frac{1}{past\;k-populations}\), \(Ps\) is the probability of success in the past \(k\) generations, \(t\) is the iteration number, \({\varvec{X}}^{{\varvec{t}}}\) denotes the chromosomes at time \(t\), \(F({\varvec{X}}^{{\varvec{t}}})\) is the chromosome fitness function at time \(t\), and \(\varvec{\sigma }_\mathbf{0}\) and \(\varvec{\sigma }\) are the standard deviations of the first step and the subsequent steps, respectively).

Fig. 4 General diagram of the one-fifth rule
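A minimal sketch of the step-length update summarized in Fig. 4 (Python; the window \(k\) and the constant \(C\) correspond to the symbols defined above, while the default value of \(C\) is an assumption in the usual range, not a value taken from the paper):

```python
def update_sigma(sigma, successes, k, c=0.85):
    """One-fifth success rule: adapt the mutation step length from the success
    probability Ps observed over the past k mutations (c is the constant C, 0 < c < 1)."""
    ps = successes / k
    if ps > 1 / 5:
        return sigma / c    # Mode 2: too many successes -> enlarge the step (diversify)
    if ps < 1 / 5:
        return sigma * c    # Mode 3: too few successes -> shrink the step (converge)
    return sigma            # Mode 1: exactly one-fifth -> keep the step length
```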

Combined GA with one-fifth success rule

As mentioned before, the one-fifth success rule considers diversity and convergence simultaneously for a better search. In the GA, diversity and convergence can likewise be influenced simultaneously through the selective pressure (the probability of selecting the best member of the population relative to the average selection probability of the other members). In other words, by controlling the selective pressure, diversity and convergence can be balanced simultaneously. To do this, the selection operator must be defined in terms of fitness, with the selective pressure entering the selection operator. In this paper, the Boltzmann method (Goldberg 1990) is used to link the fitness of each chromosome to its selection probability, as in Eq. (20) (SP is the selective pressure, \(f_v\) is the fitness of the \(v\)th chromosome, and \(P_v\) is the probability of selecting the \(v\)th chromosome).

$$\begin{aligned} P_v \propto e^{SP\times f_v} \end{aligned}$$
(20)

According to Eq. (20), the higher the fitness of a chromosome, the higher its selection probability. The sum of all probabilities must equal one; thus, the selection probability of each chromosome is divided by the sum of the probabilities (Eq. (21), where \(N\) is the population size).

$$\begin{aligned} P_v =\frac{{e}^{SP\times f_v}}{\mathop {\sum }\nolimits _{i=1}^N e^{SP \times f_i}} \end{aligned}$$
(21)

In order to apply the one-fifth success rule to these probabilities, the current population is first sorted by chromosome fitness. Then, by increasing or decreasing SP, the total selection probability of the bottom half of the population [the weaker-half probability (WHP)] is driven toward one-fifth; in other words, the one-fifth success rule is invoked to tune SP [formula (22)].

$$\begin{aligned} WHP=\mathop {\sum }\limits _{v=[\frac{N}{2}]+1}^N P_v =\frac{1}{5} \end{aligned}$$
(22)
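The following sketch (Python with NumPy; the variable names are illustrative, and the max-subtraction is only a numerical-stability trick that leaves the probabilities of Eq. (21) unchanged) shows how Eqs. (20)–(22) translate into selection probabilities and the weaker-half probability that is driven toward one-fifth.

```python
import numpy as np

def boltzmann_probabilities(fitness, sp):
    """Eq. (21): selection probability of each chromosome under selective pressure sp.
    `fitness` must already be sorted in descending order (best chromosome first)."""
    f = np.asarray(fitness, dtype=float)
    w = np.exp(sp * (f - f.max()))      # subtracting the max cancels out after normalization
    return w / w.sum()

def weaker_half_probability(fitness, sp):
    """Eq. (22): total selection probability of the weaker (bottom) half of the population."""
    p = boltzmann_probabilities(fitness, sp)
    return p[len(p) // 2:].sum()

# Toy population (fitness sorted, best first): a moderate SP puts the weaker half
# near the one-fifth target, while a large SP starves it.
fit = [10, 9, 8, 7, 6, 5]
print(weaker_half_probability(fit, 0.5))   # ~0.18, close to 1/5
print(weaker_half_probability(fit, 2.0))   # ~0.002, far below 1/5 -> SP should decrease
```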

SP controller

However, due to the continuous nature of the solution space, reaching the value of exactly one-fifth is difficult (or even impossible). Therefore, fuzzy terms and fuzzy rules are applied as the SP controller in this paper. The fuzzy terms used are defined as “Small”, “Good”, and “Big”; their membership functions are presented in Fig. 5.

Fig. 5 Fuzzy terms membership functions

If the WHP is Small, the SP should be reduced [formula (23)]; if it is Big, the SP should be increased [formula (24)] (DoF = degree of firing).

$$\begin{aligned}&\text {If }{\textit{WHP}}\text { is Small, then } SP_{t+1}\nonumber \\&\quad =SP_t \times \left\{ {1+ \left( {{\textit{WHP}}-\frac{1}{5}}\right) \times DoF_{{\textit{Small}}}}\right\} \end{aligned}$$
(23)
$$\begin{aligned}&\text {If }{\textit{WHP}}\text { is Big, then } SP_{t+1}\nonumber \\&\quad =SP_t \times \left\{ {1+\left( {{\textit{WHP}}-\frac{1}{5}}\right) \times DoF_{{\textit{Big}}}}\right\} \end{aligned}$$
(24)

As seen in rules (23) and (24), the greater the distance of WHP from one-fifth (the center of the “Good” membership function), the larger the change in SP in the direction that brings WHP toward one-fifth. The DoF also reduces fluctuation and helps SP converge. Besides, the use of momentum can be useful to increase the convergence speed [formulas (25)–(26)].

$$\begin{aligned}&\text {If }{\textit{WHP}}\text { is Small, then } SP_{t+1}\nonumber \\&\quad =SP_t \times \left\{ {1+ \left( {{\textit{WHP}}-\frac{1}{5}}\right) \times DoF_{Small}}\right\} +\alpha \left( {\Delta SP_t}\right) \nonumber \\\end{aligned}$$
(25)
$$\begin{aligned}&\text {If }{\textit{WHP}}\text { is Big, then } SP_{t+1}\nonumber \\&\quad =SP_t \times \left\{ {1+\left( {{\textit{WHP}}-\frac{1}{5}}\right) \times DoF_{Big}}\right\} +\alpha \left( {\Delta { SP}_t}\right) \nonumber \\ \end{aligned}$$
(26)

\(\alpha \) denotes the momentum in formulas (25)–(26). In effect, the variation of SP and its direction in each iteration \((\Delta SP_t)\) influence SP in the next iteration \((SP_{t+1})\), which speeds up convergence.

The defined fuzzy rules act on SP iteratively until the WHP satisfies the fuzzy term “Good”, that is, until WHP is approximately equal to \(\frac{1}{5}\left( {WHP\cong \frac{1}{5}}\right) \).
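A sketch of the resulting controller step (Python; the triangular membership functions for “Small”, “Good”, and “Big” are assumed shapes chosen for illustration, since Fig. 5 gives them only graphically, and the momentum coefficient is likewise an assumed value):

```python
def tri_membership(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def update_sp(sp, prev_delta, whp, momentum=0.3):
    """One controller step implementing rules (25)-(26): move SP so that WHP approaches 1/5.
    Membership parameters below are illustrative assumptions, not the values of Fig. 5."""
    dof_small = tri_membership(whp, -0.2, 0.0, 0.2)   # "Small": WHP well below 1/5
    dof_good  = tri_membership(whp, 0.1, 0.2, 0.3)    # "Good":  WHP close to 1/5
    dof_big   = tri_membership(whp, 0.2, 0.6, 1.0)    # "Big":   WHP well above 1/5

    if dof_good >= max(dof_small, dof_big):           # WHP is already "Good": leave SP alone
        new_sp = sp
    elif dof_small > dof_big:                         # rule (25): WHP too small -> lower SP
        new_sp = sp * (1 + (whp - 0.2) * dof_small) + momentum * prev_delta
    else:                                             # rule (26): WHP too big -> raise SP
        new_sp = sp * (1 + (whp - 0.2) * dof_big) + momentum * prev_delta
    return new_sp, new_sp - sp                        # ΔSP is fed back as prev_delta next time
```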

Comparison

In this section, the parameter control mechanism is considered first; the proposed modified GA is then benchmarked on standard functions, after which it is examined on SALBP-1 and SULBP-1 benchmarks. Finally, a comparison between the proposed algorithm and existing methods is presented.

Problem parameters control using Taguchi method

There are various methods to calibrate the parameters of meta-heuristic algorithms; some of them are full factorial designs, i.e., they examine all possible combinations (Ruiz et al. 2006; Montgomery 2008), which is inherently time- and cost-consuming. The Taguchi method (Taguchi 1986) uses special designs of orthogonal arrays to study the whole parameter space with a small number of experiments.

The Taguchi method clusters factors into two main groups: controllable factors and noise (uncontrollable) factors. Since noise factors are uncontrollable and their elimination is impractical and almost impossible, the Taguchi method tries to find the controllable factor levels that are most robust against them. In addition to determining the best factor levels, the Taguchi method establishes the relative importance of each factor with respect to its main effect on the objective function (Naderi et al. 2009). To analyze the experimental data and find the optimal factor combination, the Taguchi method uses a criterion called the signal-to-noise (S/N) ratio, which is to be maximized.

The Taguchi method divides objective functions into three groups:

The smaller the better: used when objective function values closer to zero are better. In this situation, the S/N ratio is calculated by formula (27), where \(e\) indexes the experiments, \(Obj_e\) is the objective function value in the \(e\)th experiment, and \(nExp\) is the number of parameter combinations to be examined.

$$\begin{aligned} \hbox {S}/\hbox {N ratio}=-10\log _{10} \left( {\mathop {\sum }\limits _e \frac{Obj_e^2}{nExp}}\right) \end{aligned}$$
(27)

The larger the better: used when higher objective function values are better. In this situation, the S/N ratio is calculated by formula (28).

$$\begin{aligned} \hbox {S}/\hbox {N ratio}=-10\log _{10} \left[ {\mathop {\sum }\limits _e \frac{\left( {1/Obj_e^2}\right) }{nExp}}\right] \end{aligned}$$
(28)

Nominal is best: used when there is a specific target value for the objective function. In this situation, the S/N ratio is calculated by formula (29).

$$\begin{aligned} \hbox {S}/\hbox {N ratio}=10\log _{10} \left( {\overline{Obj}^{2}\big /s^{2}}\right) \end{aligned}$$
(29)
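Since the line-balancing objectives considered here are minimized, the “smaller the better” ratio of formula (27) is the relevant one in this paper; a small sketch (Python, with illustrative objective values) is given below.

```python
import math

def sn_smaller_is_better(obj_values):
    """Formula (27): S/N ratio for 'smaller the better' responses, where obj_values
    are the objective values observed for one factor-level combination."""
    return -10 * math.log10(sum(y ** 2 for y in obj_values) / len(obj_values))

# A level whose runs give smaller objective values yields a larger (less negative) S/N ratio.
print(sn_smaller_is_better([4, 5, 4]))   # ~ -12.8 dB
print(sn_smaller_is_better([7, 8, 7]))   # ~ -17.3 dB (worse level)
```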

The controllable factors selected for this study are the population size, the maximum number of iterations, the crossover rate, and the mutation rate. In addition to the S/N ratio, the mean of means is a useful criterion for finding the best factor combination. As mentioned, the S/N ratio should be maximized regardless of the objective function type, whereas for the means criterion the type of objective function matters; because most objective functions in assembly line problems are to be minimized, a lower means value is clearly better. In summary, a level should be selected for each parameter at which the S/N ratio is maximal and the means criterion is minimal compared with the other levels; if these criteria are not satisfied simultaneously for any level, an additional experiment should be designed for that specific parameter.

The proposed algorithm has been examined with three types of benchmarks (classified into classes A, B, and C according to their size). Therefore, three levels were considered for every factor according to the benchmark size, and each level value was obtained through trial and error (Table 2).

Table 2 Parameters and their levels

Minitab 17 was used to implement the Taguchi method. The Taguchi experiments were carried out separately for each of the three classes A, B, and C. The S/N ratio and means criteria for classes A, B, and C are shown in Figs. 6, 7, and 8, respectively.

Fig. 6 Mean of means and S/N ratio for each parameter in Class A

Fig. 7 Mean of means and S/N ratio for each parameter in Class B

Fig. 8 Mean of means and S/N ratio for each parameter in Class C

As is clear from Fig. 6, the third level is selected for nPopA and \(\gamma A\), because at this level the S/N ratio is maximal and the means is minimal compared with the other levels. For ItrA and \(\beta A\), however, an extra experiment had to be designed. The same analysis applies to Figs. 7 and 8. Using these diagrams and the complementary experiments, the selected levels for the parameters are presented in Table 3.

Table 3 Selected level of the parameters

Numerical results over evolutionary standard benchmarks

The proposed algorithm can be benchmarked against existing algorithms using standard functions; this approach is used here to benchmark the proposed modified GA on the standard functions listed in Table 4 (Molga and Smutnicki 2005).

Table 4 Standard function (Molga and Smutnicki 2005)

The proposed modified GA improves performance through the selection operator, so it is necessary to compare selection methods in the traditional GA with the proposed one. Among the several existing selection methods, Roulette Wheel, Tournament, and Random selection are compared with the proposed method (Table 5).

Table 5 Methods details

As can be observed in Figs. 9, 10, 11, 12 and 13, the proposed method converges to better solutions on the standard functions than the other methods, except for Tournament on the Sphere function. Moreover, its convergence rate is somewhat better: the convergence rate of the proposed algorithm is higher than that of Random and Roulette Wheel selection, though lower than that of Tournament on some functions. As mentioned before, focusing on diversity lowers the convergence rate, while focusing on convergence risks stopping the algorithm at a local optimum. Compared with the proposed algorithm, the Tournament method focuses more on convergence, which boosts its convergence rate but also makes it more likely to stall at a local optimum. In other words, insufficient focus on diversity is the main cause of Tournament's worse results compared with the proposed algorithm.

Fig. 9 Algorithms results for De Jong’s function

Fig. 10 Algorithms results for Rosenbrock’s Valley function

Fig. 11 Algorithms results for Goldstein-Price’s function

Fig. 12 Algorithms results for Ackley’s function

Fig. 13 Algorithms results for Easom’s function

Numerical results over SALBP-1 and SULBP-1 benchmarks

In this section, the proposed algorithm is examined on the benchmarks of SALBP-1 and SULBP-1. More details of these benchmarks are available in Scholl (1993) and at http://alb.mansci.de. These benchmarks are defined in the crisp state; to transfer them to the fuzzy state, it is postulated that the crisp input value equals the peak point \((\hbox {A}_{2})\) of the fuzzy number. The beginning point \((\hbox {A}_{1})\) and the end point \((\hbox {A}_{3})\) are calculated by formula (30). In this paper, the values of \(\psi \) and \(\chi \) are set to 0.1 for the task processing times and to 1 for \(\tilde{C}_{max}\).

$$\begin{aligned} \tilde{A} =\left( {A_2-\chi ,A_2,A_2 +\psi }\right) \end{aligned}$$
(30)

The selected benchmarks are divided into classes A, B, and C according to the number of tasks \((n)\): small-sized benchmarks \(\left( {n<25} \right) \), medium-sized benchmarks \(\left( 25\le n<45\right) \), and large-sized benchmarks \(\left( n\ge 45\right) \), respectively. The selected benchmarks for each class are shown in Table 6.

Table 6 Selected benchmarks for each class (A, B, C)

The proposed algorithm was implemented in MATLAB and run on a Core 2 Duo 2.2 GHz PC. Because of the stochastic behavior of meta-heuristic algorithms, the algorithm was run 10 times on each benchmark, and the best solutions are summarized in Tables 7 and 8. The output solutions are compared with the optima by formula (31).

$$\begin{aligned} \% Deviation=\left\{ \frac{x-x^{*}}{x^{*}} \right\} \times 100 \end{aligned}$$
(31)
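A small sketch of these two steps (Python; \(\psi = \chi = 0.1\) for the task times as stated above, and the function names are illustrative):

```python
def fuzzify(crisp_value, chi=0.1, psi=0.1):
    """Formula (30): turn a crisp benchmark time into a TFN around its peak A2."""
    return (crisp_value - chi, crisp_value, crisp_value + psi)

def percent_deviation(x, x_star):
    """Formula (31): percentage deviation of a solution value x from the optimum x*."""
    return (x - x_star) / x_star * 100

print(fuzzify(7))                   # (6.9, 7, 7.1)
print(percent_deviation(12, 11))    # ~9.09 % above the optimum
```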

Tables 7 and 8 contain the following information:

  • Average of %Deviation: the average percentage of deviation from the optimum.

  • %Optimal solutions: the percentage of benchmark instances solved to optimality.

As shown in Tables 7 and 8, the proposed algorithm reaches the optimal results on all class A benchmarks, which shows its high performance for this class. The larger the solution space, the lower the solution quality, and this is clear in classes B and C. All in all, the proposed algorithm performs well in solving class B and reasonably well in class C. The algorithm obtains better results for SALB-1 than for SULB-1, which stems from the smaller size of the solution space, although it performs well on both SALB-1 and SULB-1, as is apparent from the last row of Tables 7 and 8. More detailed results are given in Tables 9 and 10.

Table 7 Summary of solutions obtained for SALBP-1
Table 8 Summary of solutions obtained for SULBP-1
Table 9 Results of the proposed algorithm for fuzzy SALBP-1
Table 10 Results of the proposed algorithm for fuzzy SULBP-1

The run time clearly depends on the problem size; in other words, the more tasks in the ALBP, the longer the computer run time. The average CPU time for running each benchmark is shown in Fig. 14. As can be seen, the processing time varies between 3 and 360 s, which demonstrates the algorithm's reasonable convergence time.

Fig. 14 CPU time versus problem size

Final comparison between proposed method and existing methods

In this section, the performance of the proposed algorithm is examined using the Tsujimura et al. (1995) test problem. An example of the problem is solved by the proposed algorithm to illustrate the improvements, and the results are compared with those of existing methods for this problem, as presented in Table 11; the predecessor and successor constraints of the test problem are displayed in Fig. 15.

Fig. 15 Predecessor and successor constraints graph (Tsujimura et al. 1995)

Table 11 Fuzzy processing time of tasks (Tsujimura et al. 1995)

Because of the small size of this example and the power of the algorithm, the population and generation sizes of the algorithm are kept small: the population size is 3 and the maximum number of generations is limited to 5. The best obtained solutions for the fuzzy efficiency and the fuzzy idle percentage for the SALB are as follows:

  • Fuzzy efficiency = [0.73, 0.97, 1]

  • Fuzzy idle percentage = [3.92, 16.67, 34.01]

The best solution for the SULB is as follows:

  • Fuzzy efficiency = [0.73, 0.99, 1]

  • Fuzzy idle percentage = [3.92, 16.67, 34.01]

Typically, one out of every three runs of the algorithm converges to the best obtained solution. The results from the proposed algorithm and from other existing methods are shown in Table 12. The first and second rows of Table 12 present the results of the proposed algorithm for the SALB and the SULB, respectively. The third row shows the results of the GA offered by Tsujimura et al. (1995). The fourth and fifth rows show the results of the fuzzy RPWT and the modified COMSOAL proposed by Fonseca et al. (2005); finally, the last row is dedicated to the results of the fuzzy heuristic algorithm for the SULB problem offered by Zhang et al. (2009). As can be observed, none of the existing methods is better than the proposed modified algorithm, whether in terms of the number of workstations, the line efficiency, or the idle percentage, which shows the high performance of the proposed algorithm.

Table 12 Results of existing methods for the test problem

In addition to the detailed example, the proposed algorithm and the other methods were examined on the SALBP-1 and SULBP-1 benchmarks; the results are presented in Tables 13 and 14.

Table 13 Summary of existing methods' results for SALBP-1
Table 14 Summary of existing methods' results for SULBP-1

As presented in Tables 13 and 14, the results of the proposed algorithm are, on average, better than those of the other methods in both the “Average of %Deviation” and “%Optimal solutions” indexes, each of which demonstrates the high performance of the proposed algorithm.

Conclusion

In this paper, the single-model straight and U-shaped assembly line balancing problems with fuzzy processing data have been considered. Owing to the uncertainty, variability, and imprecision of actual systems, the task processing times, as the problem input data, are presented as triangular fuzzy numbers. The main goal of the problem is to minimize the number of workstations subject to the maximum allowed fuzzy cycle time \(\left( \tilde{C}_{max}\right) \) and the problem constraints, while several criteria were presented, in addition to this goal, for comparing results. After the mathematical formulation of the problem in the fuzzy state, a genetic algorithm combined with the one-fifth success rule was proposed; the parameters of the proposed algorithm were then calibrated using the Taguchi method, and finally the algorithm was examined on different benchmarks, with the experimental results demonstrating its powerful capability.

However, this work is limited to the single-model assembly line balancing problem, and it is hoped that future researchers will be able to solve more complex problems, such as mixed-model assembly line balancing, with fuzzy processing times. In addition, the fuzzy SALB-1 and SULB-1 problems could be solved with other meta-heuristic algorithms, such as ACO, and the results compared with the proposed algorithm.