Introduction

Many studies have focused on production scheduling to minimize job tardiness in a make-to-order job-shop. In these studies, it is generally assumed that the capacity at each work station is fixed. In practice, however, capacity often needs to be adjusted dynamically on the basis of numerical or empirical outcomes from production scheduling. For example, when jobs repeatedly incur excessive tardiness even after proper production scheduling, it is necessary to allocate or reallocate capacities at the relevant work stations in order to reduce tardiness in future production (Yeh 1997; Fry and Russell 1993). This paper addresses optimal capacity-allocation planning to support medium- to long-term (several months to years) decisions under a given production scheduling method in a make-to-order job-shop with stochastic orders and processing times.

Most capacity-allocation problems require allocating the capacity of multiple work stations simultaneously; these are complex combinatorial optimization problems. Arakawa et al. (2000, 2003) presented a simulation model for job-shop scheduling incorporating capacity adjustment. In their study, a backward/forward hybrid simulation method is used for production scheduling in the first step; based on the scheduling result, a pattern search method adjusts capacity in the second step. Yang et al. (2005) used the particle swarm optimization (PSO) algorithm to integrate process planning and production scheduling in a job-shop. Some studies use simulation models together with meta-heuristic algorithms in the design of manufacturing systems similar to job-shops. Seshadri and Pinedo (1999) presented a framework consisting of an optimization model and a simulation model to adjust assembly capacity, and applied an iterative algorithm using CPLEX 10.2 to solve the optimization. Shahabudeen and Krishnaiah (1999) set the parameters of a multi-product Kanban system, including the number of machines at each work station, using a genetic algorithm (GA); in a later study, Shahabudeen et al. (2003) set similar parameters using simulated annealing (SA). In all these studies, the meta-heuristic algorithms use neighborhood search to reach an optimal solution from an initial solution. When coupled with simulation models, many alternatives must be examined by simulation during the search procedure, so these approaches often consume too much time on large-scale problems.

In this paper, bottleneck analysis is used to provide approximate discrete gradients of the weighted-tardiness objective function. A modified simulated annealing is also presented, in which neighborhood generation is guided by these gradients in order to accelerate convergence and reduce the run time of the neighborhood search procedure. Our aim is to make the run time short enough for practical use, even though simulation is performed many times during the search.

Optimization Model

In this study, the alternatives for capacity allocation are adding/removing machines or work shifts. The available operating hours in regular time, such as working 8 h during the day, are defined as the capacity of a machine. For example, suppose at most five machines can be allocated at a work station given the available plant space. The possible numbers of machines then provide five discrete capacity alternatives, from 8 to 40 h per day at that work station. Arranging these alternatives in order of increasing capacity, they can be denoted by the integer values 1 to 5.

Therefore, for a general job-shop consisting of m work stations, a linear array $s=[c_1\ c_2\ \cdots\ c_m]$ is the solution vector of the optimization model, where $c_j$ is the alternative number of the capacity level at work station j, for j = 1, 2, …, m. The feasible region of s is then a set of discrete vectors, denoted S.
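
This encoding can be illustrated with a short sketch; the station count and level bounds below are hypothetical:

```python
# A minimal illustration of the solution encoding: one capacity-level
# integer per work station. Station count and bounds are hypothetical.
m = 5                          # number of work stations
max_level = [5, 3, 4, 5, 2]    # highest alternative number at each station
s = [2, 1, 3, 4, 2]            # one candidate solution vector

# s is feasible if every element lies within its station's admissible range.
assert len(s) == m and all(1 <= c <= u for c, u in zip(s, max_level))
```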

For make-to-order production, weighted tardiness is a common performance measure of job-shops. In this study, one purpose of capacity allocation is to meet the due dates of all jobs as far as possible. Suppose n jobs belonging to p product classes are to be manufactured in a job-shop of m work stations within a q-month period. The first objective function, measuring the performance of the job-shop over the q-month planning period, can then be formulated as follows:

$$ z^{\mathrm{T}}(s)=\sum\limits_{l=1}^{p} w_l^{\mathrm{TP}} \sum\limits_{i\in I_l} n_i^{\mathrm{LS}} \max\left(x_i^{\mathrm{C}}(s)-x_i^{\mathrm{D}},0\right), $$
(15.1)

where $w_l^{\mathrm{TP}}$ is the weight on the tardiness penalty per unit product and per unit time of class l, $I_l$ is the set of jobs i belonging to class l, $n_i^{\mathrm{LS}}$ is the lot size of job i, $x_i^{\mathrm{C}}(s)$ is the completion time of job i under solution s, and $x_i^{\mathrm{D}}$ is the due date of job i. In the capacity allocation tool, each $w_l^{\mathrm{TP}}$ is assumed to be a fixed value over the q-month planning period, estimated by the production manager from historical data or practical experience. $n_i^{\mathrm{LS}}$, $x_i^{\mathrm{C}}(s)$, and $x_i^{\mathrm{D}}$ are generated by the simulation model.
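
As a concrete illustration, Eq. (15.1) can be computed directly from the simulation outputs. The job records and class weights below are hypothetical stand-ins for the quantities defined above:

```python
# Sketch of Eq. (15.1): weighted tardiness summed over classes and jobs.
# Each record is (class l, lot size n_LS, completion x_C, due date x_D);
# all values are hypothetical.
jobs = [
    (0, 100, 120.0, 110.0),   # 10 h tardy
    (0,  50,  90.0, 100.0),   # on time
    (1, 200, 310.0, 300.0),   # 10 h tardy
]
w_tp = [0.5, 0.8]             # tardiness penalty weight per product class

z_T = sum(w_tp[l] * n_ls * max(x_c - x_d, 0.0)
          for l, n_ls, x_c, x_d in jobs)
print(z_T)                    # 0.5*100*10 + 0.8*200*10 = 2100.0
```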

Another purpose of capacity allocation is to reduce the fixed cost, which in this study mainly consists of the depreciation of machines and the fixed salaries of operators. The mean monetary value of the depreciation per month and per machine, $w_j^{\mathrm{M}}$, at each work station j was provided by the production manager according to the cost accounting of the workshop. Assuming these values over the q-month planning period will be similar to their historical values, we estimate the fixed cost of the job-shop for any solution s from the number of machines $n_j^{\mathrm{M}}(s)$. The second objective function is then

$$ z^{\mathrm{C}}(s)=q\sum\limits_{j=1}^{m} w_j^{\mathrm{M}} n_j^{\mathrm{M}}(s). $$
(15.2)

Both objective functions are considered in this study in order to obtain a feasible and profitable solution for practical use. Hence, the optimization model with a bi-criteria objective function is

$$ \min\; z^{\mathrm{T}}(s)+z^{\mathrm{C}}(s) $$
(15.3)
$$ \mathrm{subject\ to:}\ s\in S. $$
(15.4)
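
To make the bi-criteria objective concrete, the following sketch evaluates Eqs. (15.2) and (15.3) for a given solution; the depreciation weights, the level-to-machine-count mapping, and the stand-in for the simulated tardiness are all hypothetical:

```python
# Sketch of Eqs. (15.2)-(15.3): fixed cost plus simulated weighted tardiness.
q = 12                                            # planning period (months)
w_M = [3000.0, 2500.0, 4000.0, 3500.0, 2800.0]    # depreciation/machine/month

def n_machines(s):
    # Assumed here: capacity level at station j maps one-to-one to a
    # machine count; the real mapping comes from the alternatives table.
    return s

def objective(s, simulate_z_T):
    z_C = q * sum(w * n for w, n in zip(w_M, n_machines(s)))  # Eq. (15.2)
    return simulate_z_T(s) + z_C                              # Eq. (15.3)

# Usage with a constant stand-in for the simulation-derived z_T:
print(objective([2, 1, 3, 4, 2], lambda s: 2100.0))
```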

Gradient-Based Simulated Annealing

Kirkpatrick et al. (1983) first presented SA. In its neighborhood search, SA accepts inferior solutions with a certain probability in order to escape local optima. In this study, we couple a gradient-based method with SA and present a hybrid method, named GBSA, to optimize capacity allocation. GBSA not only retains the ability to escape local minima, but also converges faster toward a stationary solution than the traditional SA.

  • Step 1: Input the control parameters of the GBSA: Initial Temperature $T_i$, Termination Temperature $T_f$, Cooling Rate α, Freeze Limit Φ, and Accept Limit β. Take $T_i$ as the current temperature T. Generate the initial solution $s_0$ and perform a simulation to compute its objective function value $z_0$. In this study, the initial solution $s_0$ was set to 1.2 times (an empirical value from the practical case) the mean capacity requirement per day in the tested cases.

  • Step 2: Detect the bottlenecks in the job-shop. To detect and measure shifting bottlenecks in a job-shop, Roser et al. (2002) presented a statistical method called the active period method. They proposed that, at any given time, the momentary bottleneck is the machine with the longest uninterrupted active period at that time, and that over any given period the average bottlenecks can be measured by the percentage of time that each work station is the momentary bottleneck; we denote this percentage for work station j by $b_j$. Although this method is not exact, it is robust, easy to apply, and able to detect bottlenecks in both steady-state and non-steady-state systems.

  • Step 3: Suppose there are $n_s$ solutions neighboring the current solution $s_0$ within the feasible region S. They are denoted $h_k$ (k = 1, 2, …, $n_s$). In this step, "neighboring" means that exactly one element differs by +1 or −1. If neighbor $h_k$ adds machines to work station j, let $p_k = b_j$; if it removes machines from work station j, let $p_k = -b_j$. Denote the minimum of the $p_k$ by $p_{\min}$. We select a new solution $s_1$ from the neighbors of $s_0$ according to the following probability (a code sketch of the whole procedure appears after this list):

    $$ \mathrm{P}(s_1=h_k)=\frac{\left(p_k-p_{\min}\right)^{\gamma}}{\sum\limits_{k=1}^{n_s}\left(p_k-p_{\min}\right)^{\gamma}}. $$
    (15.5)

    Therefore, a neighbor with a better estimated objective-function value has a higher probability of being chosen, which accelerates convergence. The parameter γ in (15.5) adjusts the influence of the bottleneck analysis on the search procedure. Pilot experiments show that when the objective-function value improved substantially in the previous iteration, indicating that the gradient guidance is working well at this stage of the search, γ should be set to a larger value to make full use of that guidance; otherwise γ should be set to a smaller value to give the search a better chance of moving from one local-minimum area to another. Accordingly, in this study γ is set to 1 at the beginning of the search procedure and is adjusted at each iteration as stated in Step 4.

  • Step 4: Calculate the objective function value $z_1$ of the new solution $s_1$ through a simulation. Let $\Delta z = z_1 - z_0$. If $\Delta z < 0$, the current solution $s_0$ is replaced by the new solution $s_1$; otherwise, the replacement is performed with probability $\mathrm{P}(A)=e^{-\Delta z/T}$. Set $\gamma=|\Delta z|/(|\Delta z|)_{\max}$, where $(|\Delta z|)_{\max}$ is the maximum of all $|\Delta z|$ values over the past iterations.

  • Step 5: The current temperature T is reduced according to α after every Φ iterations. If T falls below $T_f$, or the solution has not improved for more than β consecutive iterations, stop the search procedure; otherwise, go to Step 2.

  • Step 6: Report $s_0$ and $z_0$ as the final solution and its objective function value, respectively.
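
The whole procedure can be summarized in a compact sketch. Here `evaluate` and `bottlenecks` stand in for the simulation run and the active period analysis, respectively, and all default parameter values are hypothetical:

```python
import math
import random

def gbsa(s0, evaluate, bottlenecks, max_level,
         T_i=100.0, T_f=0.1, alpha=0.9, phi=20, beta=200):
    """Sketch of Steps 1-6. evaluate(s) stands in for a simulation returning
    the objective value; bottlenecks(s) for the active period analysis
    returning one measure b_j per work station."""
    s, z = list(s0), evaluate(s0)                 # Step 1
    T, gamma, dz_max, stale, it = T_i, 1.0, 0.0, 0, 0
    while T > T_f and stale <= beta:
        b = bottlenecks(s)                        # Step 2: measures b_j
        moves = []                                # Step 3: +1/-1 neighbors
        for j in range(len(s)):
            if s[j] < max_level[j]:
                moves.append((j, +1, b[j]))       # add capacity:  p_k =  b_j
            if s[j] > 1:
                moves.append((j, -1, -b[j]))      # drop capacity: p_k = -b_j
        p_min = min(p for _, _, p in moves)
        weights = [(p - p_min) ** gamma for _, _, p in moves]
        if sum(weights) == 0:                     # all p_k equal: uniform
            weights = [1.0] * len(moves)
        j, d, _ = random.choices(moves, weights=weights)[0]   # Eq. (15.5)
        s1 = list(s); s1[j] += d
        z1 = evaluate(s1)                         # Step 4
        dz = z1 - z
        if dz < 0 or random.random() < math.exp(-dz / T):
            s, z = s1, z1                         # replace current solution
        stale = 0 if dz < 0 else stale + 1
        dz_max = max(dz_max, abs(dz))
        gamma = abs(dz) / dz_max if dz_max > 0 else 1.0   # adapt gamma
        it += 1
        if it % phi == 0:                         # Step 5: cool every phi
            T *= alpha
    return s, z                                   # Step 6
```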

In the proposed GBSA, neighborhood generation is not a random procedure as in the traditional SA, but is controlled by the results of the bottleneck analysis, and γ is changed at each iteration according to the improvement in the objective-function value. These modifications speed up the search in the most promising area while still allowing the search to move from one local area to another. Thus, the neighborhood search may stop earlier, as controlled by β, and the computing time is reduced.
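
The bottleneck measures $b_j$ used in Steps 2 and 3 can be estimated along the lines of the active period method. The following is a minimal sketch under the simplifying assumption that each machine's uninterrupted active intervals are available from the simulation trace; the interval data are hypothetical:

```python
def bottleneck_shares(active_periods, horizon):
    """active_periods[j] lists the (start, end) uninterrupted active
    intervals of machine j. At each sampled moment the momentary bottleneck
    is the machine whose current active period is longest; the average
    measure is the share of time each machine holds that role."""
    n, step = len(active_periods), horizon / 1000.0
    share, t = [0.0] * n, 0.0
    while t < horizon:
        lengths = [next((e - s for s, e in ivs if s <= t < e), 0.0)
                   for ivs in active_periods]
        if max(lengths) > 0:
            share[lengths.index(max(lengths))] += step / horizon
        t += step
    return share

# Machine 0 is the momentary bottleneck ~60% of the time, machine 1 ~40%.
print(bottleneck_shares([[(0, 6), (7, 10)], [(0, 4), (5, 10)]], 10.0))
```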

Computational Experiments

In this paper, three case studies are tested using the proposed GBSA. Case 1 consists of 3 types of orders and 5 work stations, Case 2 of 5 types of orders and 10 work stations, and Case 3 of 15 types of orders and 30 work stations. Owing to space constraints, only the data of Case 1 are given in detail.

In Case 1, there are 3–10 machines at each of the five work stations. The scheduling method used in this workshop is a dispatching rule, earliest due date with ties broken by first come first served (EDD/FCFS), because it is easy to apply in a dynamic job-shop with stochastic demand and processing times. Scheduling within a work station is complex in this workshop, and detailed records of it were not available. Following the production manager's suggestion, we therefore assume that a task can always make full use of the capacity within a work station, and that the processing times of tasks at the work station decrease/increase linearly as capacity is added to/removed from the work station.
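
One plausible formal reading of this full-use, linear-scaling assumption (the notation here is ours, introduced for illustration only) is

$$ t_i^{\mathrm{eff}}(c_j)=t_i^{\mathrm{base}}\cdot\frac{h(1)}{h(c_j)}, $$

where $t_i^{\mathrm{base}}$ is the processing time of task i at the lowest capacity level and $h(c_j)$ is the daily operating hours provided by alternative $c_j$; under this reading, doubling the available hours halves the effective processing time.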

In the simulation model, the inter-arrival times of orders and the processing times of tasks are generated from exponential distributions; the lead-time constraints, tardiness penalties per hour, and machine depreciation are set to fixed values. These data are shown in Tables 15.1 and 15.2.
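
A small sketch of how such stochastic inputs can be generated follows; the rate parameters are hypothetical, since Tables 15.1 and 15.2 are not reproduced here:

```python
import random

# Order inter-arrival times and task processing times drawn from
# exponential distributions; the means below are illustrative only.
random.seed(42)
mean_interarrival, mean_processing = 8.0, 3.5      # hours
next_order_in = random.expovariate(1.0 / mean_interarrival)
task_time = random.expovariate(1.0 / mean_processing)
print(round(next_order_in, 2), round(task_time, 2))
```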

Table 15.1 Demand requirements and tardiness penalties in Case 1
Table 15.2 Processing times and depreciation of machines in Case 1

The simulation software was developed in Microsoft SQL Server 2000. In all cases, the simulation for any given solution was run for a duration of 25,000 h. All simulations were performed on a Pentium IV personal computer with a 2.4 GHz CPU and 1 GB of memory. The mean run time of each simulation (including the time for bottleneck analysis) is 35 s in Case 1.

Based on pilot runs, two groups of control parameters were applied to both the traditional SA and GBSA. This yields four algorithm variants with different control parameter values or different neighborhood-generation methods, denoted A1, A2, A3, and A4, which were applied to Cases 1, 2, and 3. Their control parameter values are shown in Table 15.3, and the results of the three cases in Table 15.4.

Table 15.3 Control parameters
Table 15.4 Results of the computational experiments

Conclusions

In this paper, a modified SA, named GBSA, is used as an optimization tool for capacity allocation in make-to-order job-shops. Although all the algorithms reached the same optimum in Case 1, the proposed GBSA required noticeably less computing time than the traditional SA. Moreover, with less computing time, GBSA found better solutions than the traditional SA in Cases 2 and 3. These results show that the proposed method can often find better solutions in a shorter computation time than the traditional method. The resulting capacity-allocation solutions can be very useful in supporting decisions that trade off the tardiness penalty against the cost of capacity allocation.