1 Introduction

We consider a so-called distributed production system, modelled and analysed using tools from the scheduling area, namely an extended open-shop scheduling problem with transport times and the makespan criterion. The extension consists in allowing job-dependent sets of operations and operation-dependent transport times. The formulated problem is more general than those we have considered so far and more general than those considered in the literature. The novelty in the problem formulation is a mix of the open-shop and the job-shop with transport times. The novelty in the solution methods is the graph model as well as a labelling procedure which performs the Critical Path Method (CPM) with a graph-cycle detector. The stated problem has been decomposed into two scheduling sub-problems: processing of jobs in each factory and transfer of jobs between factories. We show that these schedules can be represented by two job orders called, respectively, the “processing order” and the “transfer order”. We propose a labelling procedure which transforms the orders into appropriate non-delay schedules. The goal of optimization is to find the optimal/best job orders. We provide the mathematical model to find the non-delay schedule for given job orders. Next, we formulate a graph model and introduce several special properties, which allow us to check the feasibility of job orders as well as to calculate the makespan value. The properties have been used in a neighborhood dedicated to local search methods. We have implemented a tabu search algorithm and tested its performance experimentally.

In a distributed manufacturing system, the production process is carried out in many mutually distant locations. The transport time between dispersed production stages has a significant influence on the schedule designed for such cases [3]. Currently, many companies are moving away from the traditional production method, in an enterprise located in one place, towards distributed production carried out in many locations. There are many examples of industries in which such a process takes place, ranging from companies producing food products, in which some production stages must be carried out at a safe distance for sanitary reasons, to the automotive industry, in which final products in the form of cars are assembled from components produced in various locations. Another example of distributed production systems is the construction of buildings, in which buildings are erected from prefabricated elements manufactured in many company branches located near the raw materials needed for production [16] and/or near the construction sites. An appropriate location of company branches increases production efficiency while reducing production waiting times and, among others, production costs.

Two basic models of distributed systems are considered in the literature, i.e. distributed flexible flow-shop (DFFS) and distributed flexible job-shop (DFJS) systems. This is due to the fact that in the vast majority of production systems the technological route of each product is precisely defined. The least often considered are production systems with the open-shop policy, in which the order of performing operations within one job can be arbitrary.

Flow production systems are the most common research object, as they model the vast majority of manufacturing systems encountered in practice. For the distributed flowshop scheduling problem there are many algorithms based on advanced local search methods (for example, the tabu search algorithm of Gao, Chen and Deng [8], the scatter search algorithm of Naderi and Ruiz [20], the hybrid genetic algorithm of Gao and Chen [7], and the iterated greedy algorithms of Ruiz, Pan and Naderi [24]).

Determining an efficient schedule for job-shop production systems is much more difficult. This is due to NP-hardness and a huge number of solutions, of which a significant fraction is infeasible. As in the case of flow-shop problems, many heuristic algorithms based on local search have been proposed for medium- and large-size instances. One of the first optimisation algorithms for the distributed job-shop problem is the genetic algorithm proposed by Jia, Fuh, Nee and Zhang [14]. A hybrid genetic algorithm (Hao-Chin and Tung-Kuan [12]) and an agent-based fuzzy algorithm (Hsu, Kao, Ho and Lai [13]) have recently been proposed.

The open-shop scheduling problem has already been studied by many authors. Due to the NP-hardness of the problem for regular criteria, mainly heuristic algorithms based on various methods have been proposed. One of the simplest algorithms of this type, for total completion time minimisation, is the polynomial algorithm proposed by Kyparisis and Koulamas (1997) [15]. Gueret and Prins (1998) [11] developed an original list scheduling heuristic enhanced by a local search improvement procedure.

Nowadays, the best heuristic algorithms provide good solutions for benchmark instances in short computation times. Liaw (2000) [17] introduced a hybrid genetic algorithm incorporating a local improvement procedure based on tabu search, and Sha and Hsu (2008) [26] proposed a particle swarm optimization algorithm.

Other work focuses on criteria such as: late work Blazewicz et al. (2004) [2], total tardiness Naderi et al. (2010) [19], total tardiness and makespan Noori-Darvish and Tavakkoli-Moghaddam (2012) [25], makespan with release dates Bai and Tang (2013) [1] and cycle time Pempera and Smutnicki (2018) [23].

The chapter is organized as follows. Section 2 provides the mathematical description of the problem. Section 3 presents the graph model and some properties of the problem. The optimisation algorithm is presented in Sect. 4, whereas the results of computational experiments are shown in Sect. 5.

Table 1 List of symbols

2 Mathematical Model

We begin with a brief descriptive introduction to the problem. The distributed manufacturing system working under the open-shop policy (DMSO) can be considered as an ensemble of cooperating factories engaged in providing various products. The factories are located far away from each other. A product corresponds to a production job composed of operations, each of which is processed in a different factory. The operations included in a job can be performed in any order, thus the route of a job through the factories is variable. Each factory can perform only one operation at a time.

We introduce the following mathematical model, see Table 1 for the list of symbols and their meaning. We denote by

$${{\varvec{M}}}=\{1,\ldots ,m\}$$

the collection of geographically distributed manufacturing plants (factories). The set of tasks (jobs) which have to be performed using \({{\varvec{M}}}\) is denoted by

$${{\varvec{N}}} = \{1, 2, \ldots , n\}.$$

The job j is represented by the set of \(n_j\) independent operations indexed consecutively as follows

$${{\varvec{O}}}_j=\{l_{j-1} +1,\ldots , l_j\},$$

where

$$l_j = \sum _{s=1}^j{n_s}$$

is the total number of operations of the first j jobs (with \(l_0=0\)). Note that \(n_j\) can differ between jobs. The open-shop policy means that all operations from \(O_j\) have to be processed, in some order, but only one operation at a time. We define the set of all operations in the natural way

$$O=\bigcup _{j\in N}O_j,$$

where the total number of operations is \(|O|=o=l_n\). Operation \(i \in {{\varvec{O}}}\) is processed in factory \(\nu _i \in {{\varvec{M}}}\) with duration \(p_i > 0\); both \(\nu _i\) and \(p_i\) are known. We also know the distances (transfer times) defined by the matrix

$$D=[d_{kl}]_{m\times m},$$

where \(d_{kl}\) is the distance between the k-th and the l-th plant, \(k,l \in {{\varvec{M}}}\). Interruption of an operation performed in a factory is not allowed. Each factory processes at most one operation at a time. Buffers before and after each factory have infinite capacity.

The overall schedule is defined by the vector of start times

$$S=(S_1,S_2,\ldots ,S_o)$$

and completion times

$$C=(C_1,C_2,\ldots ,C_o).$$

Actually, since \(C_i=S_i+p_i\), either S or C is sufficient to represent the schedule. The schedule S can be found once the following processing orders have been defined:

  • order of performing operations inside the job (order in which job visits factories),

  • order of performing operations in each factory.

Both types of orders can be expressed by permutations and/or their compositions. The order in which job \(j \in {{\varvec{N}}}\) visits the factories can be represented by the permutation

$$\tau _j = (\tau _j(1), \tau _j(2), \ldots , \tau _j(n_j))$$

on the set \({{\varvec{O}}}_j\). We denote in the sequel

$$\tau =(\tau _1,\tau _2,\ldots , \tau _n).$$

The processing order of operations in factory \(k \in {{\varvec{M}}}\) can be represented by the permutation

$$\sigma _k = (\sigma _k(1), \sigma _k(2),\ldots , \sigma _k(m_k))$$

of \(m_k\) operations from the set

$${{\varvec{M}}}_k=\{j \in {{\varvec{O}}} : \nu _j = k\}.$$

We denote by analogy

$$\sigma =(\sigma _1,\sigma _2,\ldots ,\sigma _m).$$

The pair \((\sigma ,\tau )\) introduces constraints on time events in the following form

$$\begin{aligned} S_{\tau _j(i)} \ge C_{\tau _j(i-1)}+d_{\nu _{\tau _{j}(i-1)},\nu _{\tau _{j}(i)}}, \;\; i=2,\ldots ,n_j, \;\; j=1,\ldots ,n, \end{aligned}$$
(1)
$$\begin{aligned} S_{\sigma _{k}(i)} \ge C_{\sigma _{k}(i-1)}, \;\; i=2,\ldots ,m_k, \;\; k=1,\ldots ,m, \end{aligned}$$
(2)
$$\begin{aligned} S_i\ge 0, i=1,\ldots ,o, \end{aligned}$$
(3)

Inequality (1) expresses that the start time of operation \(\tau _j(i)\) has to be no earlier than the completion time of its preceding operation \(\tau _j(i-1)\) plus the transport time from factory \(\nu _{\tau _{j}(i-1)}\) to factory \(\nu _{\tau _{j}(i)}\). Inequality (2) describes the relation between the start and completion events of operations processed in the same factory and means that the start of operation \(\sigma _k(i)\) is possible only after the completion of its preceding operation \(\sigma _k(i-1)\). Constraint (3) is obvious. Equality (4) ensures that the processing of operation \(i \in {{\varvec{O}}}\) cannot be interrupted

$$\begin{aligned} C_{i} = S_{i} + p_{i}, \;\; i=1,\ldots ,o. \end{aligned}$$
(4)

We call a processing order \((\sigma ,\tau )\) feasible if there exists a schedule S (and C) such that (1)–(4) hold. For a given \((\sigma ,\tau )\) we define the makespan as follows

$$\begin{aligned} C_{\max }(\sigma , \tau ) = \max _{i \in {{\varvec{O}}}} C_i. \end{aligned}$$
(5)

The overall aim of the distributed open shop scheduling problem is to find \(\sigma ^{*}\) and \(\tau ^{*}\) for which \(C_{\max }(\sigma , \tau )\) is the smallest among all feasible \((\sigma ,\tau )\)

$$\begin{aligned} C_{\max }(\sigma ^{*}, \tau ^{*}) = \min _{\sigma ,\tau } C_{\max }(\sigma , \tau ). \end{aligned}$$
(6)
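To make the model concrete, the following minimal Python sketch (using a hypothetical list-based encoding of an instance, not taken from the chapter) checks whether a schedule S satisfies constraints (1)–(4) for a fixed \((\sigma ,\tau )\) and computes the makespan (5).

```python
# Minimal sketch, assuming a hypothetical list-based instance encoding:
# operations are numbered 0..o-1, p[i] is the duration of operation i,
# nu[i] its factory, d[k][l] the transfer time between factories k and l,
# tau[j] the visiting order of job j, sigma[k] the processing order in factory k.

def is_feasible(S, p, nu, d, tau, sigma, eps=1e-9):
    """Check constraints (1)-(4) for a fixed processing order (sigma, tau)."""
    C = [S[i] + p[i] for i in range(len(S))]              # Eq. (4)
    for order in tau:                                      # constraint (1)
        for a, b in zip(order, order[1:]):
            if S[b] + eps < C[a] + d[nu[a]][nu[b]]:
                return False
    for order in sigma:                                    # constraint (2)
        for a, b in zip(order, order[1:]):
            if S[b] + eps < C[a]:
                return False
    return all(s >= -eps for s in S)                       # constraint (3)

def makespan(S, p):
    """Objective value (5): the latest completion time."""
    return max(S[i] + p[i] for i in range(len(S)))
```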

The problem can be perceived as a mixed open-shop/job-shop scheduling problem. Indeed, assuming \(n_j=m\), \(j\in N\), and zero transport times, we obtain the traditional definition of the open-shop known in the scheduling area. Assuming that transport times are zero and \(\tau _j\) is fixed but may differ between jobs, we obtain the well-known definition of the job-shop problem. Since both special cases, namely the open-shop and the job-shop, are NP-hard, the considered problem is also NP-hard. In order to show the problem in detail, we provide the following example.

Table 2 Data for the example
Table 3 Distances between factories

Example. Four jobs have to be scheduled in a distributed open-shop system consisting of four factories. Each job is processed in each factory. The job indexes, operation indexes, processing times and factories assigned to the operations are given in Table 2 (the data derive from 5th Taillard’s benchmark of the open-shop [27]). The distances between factories are given in Table 3.

Three Gantt charts are displayed in Figs. 1A–C for various processing orders \((\sigma ,\tau )\) and various transport times D. Chart A provides the optimal schedule with the makespan \(C_{\max }(\sigma ,\tau )=295\) for data given in Table 2 and processing orders

$$ \sigma =((11,7,1,14),(3,15,10,8),(13,12,6,2),(5,4,16,9)) $$
$$\begin{aligned} \tau =((3,4,1,2),(5,7,6,8),(11,12,10,9),(13,15,16,14)) \end{aligned}$$
(7)

assuming zero transport time (case \(d_{ij}=0\)). Chart B shows the schedule with the makespan \(C_{\max }(\sigma ,\tau )=320\) built for the processing order (7) (the same as in case A) and distances taken from Table 3. Chart C shows the best found schedule with \(C_{\max }(\sigma ,\tau )=306\) for the job order

$$ \sigma =((11,14,1,7),(3,8,10,15),(13,2,6,12),(5,9,16,4)) $$
$$\begin{aligned} \tau =((3,2,1,4),(5,8,6,7),(11,9,10,12),(13,14,16,15)) \end{aligned}$$
(8)

and distances from Table 3.

Fig. 1 Gantt charts for the example in Sect. 2: A optimal schedule for order (7) and zero transport times (\(d_{kl}=0\)), makespan 295; B schedule for order (7) and transport times from Table 3, makespan 320; C best found schedule for order (8) and transport times from Table 3, makespan 306

3 Application of the Graph

In order to introduce some special properties of the problem we convert the mathematical model (1)–(4) into a graph. For a fixed \((\sigma ,\tau )\) we define a directed planar graph in the Activity-on-Node (AoN) modelling style, namely

$$\begin{aligned} G(\sigma ,\tau )=(O,E(\sigma ) \cup F(\tau )), \end{aligned}$$
(9)

with the set of nodes O and the set of arcs \(E(\sigma ) \cup F(\tau )\). A node \(i\in O\) represents an operation and has weight \(p_i\). The set of arcs \(E(\sigma ) \cup F(\tau ) \subseteq O\times O\) represents the precedence constraints and consists of two subsets:

  • following from the changeable order \(\tau \) of visiting plants by a job,

    $$\begin{aligned} F(\tau )=\bigcup _{j=1}^n \bigcup _{i=2}^{n_j} \{(\tau _j(i-1),\tau _j(i))\}, \end{aligned}$$
    (10)

    which corresponds to constraints (1); each arc \((\tau _j(i-1),\tau _j(i))\in F(\tau )\) has weight \(d_{\nu _{\tau _j(i-1)},\nu _{\tau _j(i)}}\),

  • following from the changeable processing order \(\sigma \) of operations in factories

    $$\begin{aligned} E(\sigma )=\bigcup _{k=1}^m \bigcup _{i=2}^{m_k} \{(\sigma _k(i-1),\sigma _k(i))\}, \end{aligned}$$
    (11)

    which corresponds to constraints (2); each arc \((\sigma _k(i-1),\sigma _k(i)) \in E(\sigma )\) has weight zero.

The graph \(G(\sigma ,\tau )\) is used for checking feasibility and for finding the schedule S and the makespan value for a given processing order \((\sigma ,\tau )\). The appropriate features follow from the properties shown below.
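For illustration, the arc sets (10) and (11) can be assembled directly from \((\sigma ,\tau )\); the following minimal Python sketch does so, using the same hypothetical list-based encoding as before (an assumption for illustration, not the chapter's own data structures).

```python
def build_graph(tau, sigma, nu, d):
    """Arcs of G(sigma, tau) as (u, v, weight) triples; same encoding as above."""
    arcs = []
    for order in tau:                    # F(tau), Eq. (10): consecutive operations of a job
        for a, b in zip(order, order[1:]):
            arcs.append((a, b, d[nu[a]][nu[b]]))   # weight = transfer time between factories
    for order in sigma:                  # E(sigma), Eq. (11): consecutive operations in a factory
        for a, b in zip(order, order[1:]):
            arcs.append((a, b, 0))                 # weight = 0
    return arcs
```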

Property 1

For a processing order \((\sigma ,\tau )\) there exists a feasible schedule S, C satisfying (1)–(4) if and only if the graph \(G(\sigma ,\tau )\) does not contain a cycle.

Property 2

For a feasible processing order \((\sigma ,\tau )\), the starting time \(S_i\) of an operation \(i\in {{\varvec{O}}}\) in the earliest (non-delay) schedule equals the length of the longest path ending at node \(i\in O\) in the graph \(G(\sigma ,\tau )\), not counting the weight \(p_i\) of node i itself.

Checking whether the graph has a cycle (Property 1) can be done in O(o) time. The longest paths to all nodes (Property 2) can be found by running the Critical Path Method (CPM); this also requires O(o) time. Nevertheless, we propose an alternative labelling procedure which combines the features enumerated in Properties 1 and 2. Its running time is O(o). Its pseudo-code is shown in Fig. 2.

Notice that Step 1.2 implements constraint (1) and Step 1.3 implements constraint (2). Thus, before Step 1.4 is performed, \(S_i\) holds the earliest possible start time of operation \(i \in {{\varvec{O}}}\). Step 1.4 implements Eq. (4). Steps 1.2, 1.3 and 1.4 can be performed only for those operations for which the completion times of all predecessors are known. Operations whose predecessors all have known completion times in a given iteration are stored in the queue Q. The queue Q is initialized with the operations having no predecessors (see Step 0.2). For each operation we store in \(np_i\), \(i \in {{\varvec{O}}}\), the number of predecessors with unknown (not yet determined) completion times. Each time the completion time of some operation is determined, the \(np_i\) values of its job successor and factory successor are updated, and a successor is inserted into Q when its \(np_i\) reaches zero (Steps 1.5.1 and 1.6.1). If the completion time is not determined for all operations, then the processing order is infeasible; this fact is detected by counting the number of operations for which the completion time has been determined (see variable itercount).

Fig. 2 Pseudocode of the schedule-determining algorithm for a given \((\sigma ,\tau )\)

Problem (6) involves a huge number of processing orders, theoretically \((n!)^m (m!)^n\), although the majority of them are infeasible. Taking into account the cost of checking feasibility, we are interested in properties allowing us to generate only feasible processing orders. The next two facts, following from the graph model, describe the conditions for generating new feasible and better processing orders.

Let U be an arbitrarily selected longest path in \(G(\sigma ,\tau )\). A maximal sub-path of U consisting of nodes representing operations performed in the same factory will be called an operation block or, briefly, a block. A maximal sub-path of U consisting of nodes representing operations that belong to the same job will be called a job-block.

The pseudocode in Fig. 2 operates on a given processing order \((\sigma ,\tau )\) and uses the following notation.

indirect variables (variables following from \(\sigma \) and \(\tau \)):

  • \(jp_i\) - job predecessor i.e. operation preceding operation i in \(\tau \),

  • \(js_i\) - job successor i.e. operation succeeding operation i in \(\tau \),

  • \(fp_i\) - factory predecessor i.e. operation preceding operation i in \(\sigma \),

  • \(fs_i\) - factory successor i.e. operation succeeding operation i in \(\sigma \),

note that \(jp_i=0\), \(js_i=0\), \(fp_i=0\), \(fs_i=0\) if operation i does not have the respective predecessor and/or successor.

current variables

  • \(np_i\) - number of predecessors of operation \(i \in {{\varvec{O}}}\) with unknown (not yet determined) completion time,

  • Q - queue of operations for which the completion times of all predecessors have been determined,

  • itercount - the number of operations for which the completion time has been determined.
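Using this notation, a minimal Python reconstruction of the labelling procedure is given below. It is an illustrative sketch under the encoding assumed earlier, not the authors' original pseudocode of Fig. 2: jp/js/fp/fs are derived from \((\sigma ,\tau )\), and a cycle is reported when itercount stays below o.

```python
from collections import deque

def label_schedule(p, nu, d, tau, sigma):
    """Return (S, C) for the given (sigma, tau), or None if a cycle is detected."""
    o = len(p)
    jp = [None] * o; js = [None] * o          # job predecessor / successor
    fp = [None] * o; fs = [None] * o          # factory predecessor / successor
    for order in tau:
        for a, b in zip(order, order[1:]):
            jp[b], js[a] = a, b
    for order in sigma:
        for a, b in zip(order, order[1:]):
            fp[b], fs[a] = a, b

    np_ = [(jp[i] is not None) + (fp[i] is not None) for i in range(o)]
    Q = deque(i for i in range(o) if np_[i] == 0)   # Step 0.2: operations without predecessors
    S, C = [0] * o, [None] * o
    itercount = 0
    while Q:
        i = Q.popleft()
        if jp[i] is not None:                 # Step 1.2: constraint (1)
            S[i] = max(S[i], C[jp[i]] + d[nu[jp[i]]][nu[i]])
        if fp[i] is not None:                 # Step 1.3: constraint (2)
            S[i] = max(S[i], C[fp[i]])
        C[i] = S[i] + p[i]                    # Step 1.4: Eq. (4)
        itercount += 1
        for succ in (js[i], fs[i]):           # Steps 1.5.1 / 1.6.1: release successors
            if succ is not None:
                np_[succ] -= 1
                if np_[succ] == 0:
                    Q.append(succ)
    return (S, C) if itercount == o else None  # itercount < o means the graph has a cycle
```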

Property 3

Let \((\sigma ,\tau )\) be a feasible solution. Every processing order created by swapping two adjacent operations within a block is feasible.

Property 3 allows us to generate, from a feasible processing order, a set of new feasible processing orders. It can be applied in algorithms based on the B&B scheme as well as in various metaheuristics.

Property 4

Let \((\sigma ^\prime ,\tau ^\prime )\) be a feasible solution obtained from \((\sigma ,\tau )\) such that

$$\begin{aligned} C_{\max }(\sigma ^\prime ,\tau ^\prime )<C_{\max }(\sigma ,\tau ) \end{aligned}$$
(12)

then at least one of the following conditions holds:

  • at least one operation from at least one block is performed before the first operation of that block,

  • at least one operation from at least one block is performed after the last operation of that block,

  • at least one operation from at least one job-block is performed before the first operation of that job-block,

  • at least one operation from at least one job-block is performed after the last operation of that job-block,

  • the operations of at least one internal job-block are performed in a different order.

Property 4 is a realization of the so-called block approach. It defines conditions necessary for generating a processing order with a better makespan. Block properties have been successfully used to construct efficient algorithms for the flow-shop scheduling problem [22], the job-shop scheduling problem [10], the flexible job-shop scheduling problem [5] and many others.

4 Optimization Algorithm

We refer to the metaheuristic algorithm called tabu search; see [9] for the foundations of the method. Briefly, it is a modification of the well-known descent search approach. The search trajectory is generated by examining successive local neighborhoods. The algorithm uses a short-term memory (called the tabu list) to prevent wandering around in the solution space and to guide the search into promising regions of this space. The best non-forbidden solution in the neighborhood is selected and becomes the current solution for the next iteration.

Since determining the \(C_{\max }\) value for a given processing order consumes a considerable amount of time, in the interest of computation time one should examine only feasible solutions and/or those promising an improvement of the objective function.

The most important element of local search algorithms is the definition of the neighborhood of the current solution. One of the most effective ways to generate the solution neighborhood for the job-shop problem is the method proposed by Nowicki and Smutnicki [21]. Watson, Howe and Whitley (2005) [28] experimentally demonstrated its natural convergence to local minima regardless of the local search method used. Roughly speaking, a neighbor solution is generated by swapping the first operation of a block with the next one, or the last operation of a block with the second-to-last one.

Bearing in mind the similarity of the job-shop scheduling problem to the open-shop scheduling problem (in our formulation, the job-shop arises as a special case when \(\tau \) is fixed), and the fact that this method of neighborhood generation fulfils the conditions of Property 4 (for blocks and job-blocks) and Property 3 (only for blocks), we have decided to implement this type of neighborhood generation. Precisely, let \(B=(B_1,\ldots ,B_b)\) be the sequence of blocks and job-blocks. The neighborhood \({{\varvec{N}}}(\sigma ,\tau )\) of solution \((\sigma ,\tau )\) consists of two subsets:

$$\begin{aligned} {{\varvec{N}}}(\sigma ,\tau )={{\varvec{N}}}_1(\sigma ,\tau ) \cup {{\varvec{N}}}_2(\sigma ,\tau ). \end{aligned}$$
(13)

The former set \({{\varvec{N}}}_1(\sigma ,\tau )\) contains processing orders created from \((\sigma ,\tau )\) by swapping the first two operations of a block or job-block, i.e. operations \(B_s(1)\) and \(B_s(2)\), \(s = 1,\ldots ,b\). The latter set \({{\varvec{N}}}_2(\sigma ,\tau )\) contains processing orders created from \((\sigma ,\tau )\) by swapping the last two operations of a block or job-block, i.e. operations \(B_s(b_s-1)\) and \(B_s(b_s)\), \(s = 1,\ldots ,b\), where \(b_s\) is the number of operations in block or job-block s.
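Assuming the blocks and job-blocks of a longest path are already available (each represented as a list of consecutive operations, a hypothetical representation used only for illustration), the moves defining \({{\varvec{N}}}_1\) and \({{\varvec{N}}}_2\) can be enumerated as in the following sketch.

```python
def neighborhood_moves(blocks):
    """Swap moves defining N1 and N2 of Eq. (13); blocks lists blocks and job-blocks."""
    moves = set()
    for B in blocks:
        if len(B) >= 2:
            moves.add((B[0], B[1]))     # N1: swap the first two operations
            moves.add((B[-2], B[-1]))   # N2: swap the last two operations
    return moves
```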

The tabu list TL is constructed and used in the following way:

  • it contains pairs of operations,

  • it has a limited length L and is serviced by the FIFO rule,

  • a pair (a, b) on TL prohibits swapping operations a and b.

The algorithm terminates after maxiter iterations. In addition, in order to diversify the search process, after each nonimpiter iterations without improving the best found solution, a random “jump” from this solution is performed. The jump is also performed if a search cycle is detected. In the jump, jumpiter iterations are performed; in each of them a neighbor solution is selected randomly and becomes the current solution in the next iteration. The first solution of this procedure is the best solution found so far by the TS algorithm.
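The overall search loop described above can be summarized by the following Python sketch. Here evaluate, neighborhood_moves and apply_move are assumed helpers (e.g. the labelling procedure of Fig. 2 for evaluation), and cycle detection is omitted, so this is an outline under stated assumptions rather than the exact implementation.

```python
import random
from collections import deque

def tabu_search(start, evaluate, neighborhood_moves, apply_move,
                maxiter=10_000, L=7, nonimpiter=1_000, jumpiter=5):
    """Sketch of the TS loop; helpers are assumed, cycle detection is omitted."""
    current = best = start
    best_val = evaluate(best)
    tabu = deque(maxlen=L)                        # FIFO tabu list of operation pairs
    since_improve = 0
    for _ in range(maxiter):
        allowed = [m for m in neighborhood_moves(current)
                   if m not in tabu and (m[1], m[0]) not in tabu]
        if not allowed:
            break
        move = min(allowed, key=lambda m: evaluate(apply_move(current, m)))
        current = apply_move(current, move)       # best non-forbidden neighbor
        tabu.append(move)
        val = evaluate(current)
        if val < best_val:
            best, best_val, since_improve = current, val, 0
        else:
            since_improve += 1
        if since_improve >= nonimpiter:           # diversification: random jump from best
            current = best
            for _ in range(jumpiter):
                jump_moves = list(neighborhood_moves(current))
                if not jump_moves:
                    break
                current = apply_move(current, random.choice(jump_moves))
            since_improve = 0
    return best, best_val
```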

5 Experimental Results

We set three research goals:

  1. evaluate the quality of solutions generated by TS,

  2. evaluate the impact of transport times on the value of the objective function,

  3. measure the calculation time.

The benchmark set consists of 40 modified open-shop Taillard instances. For each original Taillard instance we generated a symmetric distance matrix. The benchmark set contains 4 groups of instances of size \(n\times m=4\times 4,\;5\times 5,\;7\times 7,\;10\times 10\). Each group contains 10 instances. The TS algorithm was implemented in C# in the Visual Studio 2010 environment. The machine used was a PC with an Intel I7 2.4 GHz processor and 8 GB RAM, running the Windows 8.1 operating system.

The TS algorithm has several control parameters. In preliminary tests of the algorithm's efficiency we determined the following parameter values:

  • \(nonimpiter=1,000\),

  • \(L=7,\ldots ,13\) - the tabu list length, cyclically increased by one each time a jump is performed,

  • \(jumpiter=5\).

The TS algorithm was run for \(maxiter = 10,000\) iterations.

The relative percentage deviation is employed to evaluate the quality of solutions generated by the proposed algorithm:

$$\begin{aligned} PRD={\frac{C_{\max }(\sigma ^{TS},\tau ^{TS})-LB}{LB}} \times 100\%, \end{aligned}$$
(14)

where \((\sigma ^{TS},\tau ^{TS})\) is the solution generated by the TS algorithm and LB is a lower bound on the \(C_{\max }\) value for the given instance.

We introduce a simple lower bound calculated as the maximum of the factory workloads, i.e.

$$\begin{aligned} LB = \max _{1\le k\le m} \sum _{i\in {{\varvec{O}}}: \;\nu _i=k} p_i. \end{aligned}$$
(15)
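For completeness, the lower bound (15) and the PRD (14) can be computed as in the following short sketch (hypothetical encoding, as before: p[i] is the processing time and nu[i] the factory of operation i).

```python
def lower_bound(p, nu, m):
    """Maximum factory workload, Eq. (15); nu[i] in 0..m-1 is the factory of operation i."""
    work = [0.0] * m
    for i, dur in enumerate(p):
        work[nu[i]] += dur
    return max(work)

def prd(cmax_ts, lb):
    """Relative percentage deviation, Eq. (14)."""
    return (cmax_ts - lb) / lb * 100.0
```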
Table 4 Relative percentage deviation for open-shop benchmark instances

In the initial experiment we evaluated the quality of solutions generated by our TS using the original Taillard instances of the open-shop scheduling problem. Note that the “classical” open-shop is a special case of the DOSP, i.e. one with distances between factories equal to zero. Optimal or near-optimal values are known for the Taillard instances, so the PRD in these tests can be related to optimal makespans.

The optimal or best found values of \(C_{\max }\) for the Taillard instances are shown in column Ref in Table 4. The columns \(C_{\max }\) report the \(C_{\max }\) values of the solutions generated by TS, and the PRD columns the percentage relative deviations. For the small problem size (group \(4 \times 4\)) the average PRD equals 0.82, and it increases slightly with the size of the problem, especially with the number of machines. The largest average PRD, equal to 4.11, is observed for group \(7 \times 7\). We evaluate the quality of TS on the pure open-shop only briefly, because it is not our main topic. The average PRD of TS over all Taillard instances is 2.64, which is similar to the 2.54 obtained by the best algorithm mentioned in [18]. Nevertheless, for the pure open-shop there still exist more advanced algorithms, with PRD close to zero [6].

Table 5 Average percentage relative deviation and computation time for instance groups

The Taillard benchmarks for the open-shop problem contain mutually diverse jobs and are therefore considered hard for many heuristics. On the other hand, especially in mass production systems, we observe small, medium or large series of identical products. This inclined us to introduce another benchmark set.

In order to examine the impact of travel times between factories and of the series size on the \(C_{\max }\) values, we generated 4 groups of instances with different distances. The distances are generated randomly in the range \((1, frac \cdot LB)\), where \(frac \in \{0\%,5\%,10\%,20\%\}\). Note that the distances for \(frac = 10\%, 20\%\) are proportional to the randomly generated distances for \(frac = 5\%\). For each group we generate five subgroups (series) denoted \(\{1,2,\ldots ,5\}\), respectively; in subgroup x each job from the original Taillard instance is duplicated x times. Finally, we obtain \(20 \times 40\) instances of the DOSP.
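A possible way to derive such instances is sketched below. The exact generator is not given in the chapter, so the helper and its parameters are assumptions; frac is passed as a fraction (e.g. 0.05 for 5%), and the proportionality between the 5%, 10% and 20% groups is not reproduced here.

```python
import random

def make_dosp_instance(jobs, p, nu, m, frac, series, seed=0):
    """Sketch: jobs[j] lists the operation indices of job j; frac is a fraction (0.05 = 5%)."""
    rng = random.Random(seed)
    lb = max(sum(p[i] for i in range(len(p)) if nu[i] == k) for k in range(m))  # Eq. (15)
    d = [[0.0] * m for _ in range(m)]
    for k in range(m):
        for l in range(k + 1, m):                  # symmetric distance matrix
            d[k][l] = d[l][k] = rng.uniform(1.0, frac * lb) if frac > 0 else 0.0
    new_jobs, new_p, new_nu = [], [], []
    for _ in range(series):                        # duplicate every job 'series' times
        for ops in jobs:
            new_jobs.append(list(range(len(new_p), len(new_p) + len(ops))))
            for i in ops:
                new_p.append(p[i]); new_nu.append(nu[i])
    return new_jobs, new_p, new_nu, d
```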

Table 5 presents the results obtained for the different groups of instances. The results for “Series 1” confirm the supposition that transport between factories enlarges the makespan. The average PRD increases with increasing frac value (i.e. with increasing travel time) and with increasing number of factories. For the \(5\%\) group the PRD value varies from \(8.95\%\) to \(20.97\%\), while for the \(20\%\) group it varies from \(25.08\%\) to \(58.23\%\). Taking into account the number of factories, it can easily be seen that the increase of PRD is nearly proportional to the increase in the number of factories. For example, for the \(5\%\) group and 5 factories \(PRD=10.63\%\), and for 10 factories it is nearly two times greater, i.e. \(PRD=20.97\%\). A surprising result can be seen for “Series 2..5”. For the majority of these groups the PRD value is 0. This means that the makespan is limited only by a certain bottleneck factory and the remaining travel times do not affect it. Nonzero PRD values are observed in the \(10\%\) and \(20\%\) groups for a large number of factories. The PRD values decrease quickly as the series index increases.

Starting the discussion of the computation time, it should be noted that this time depends on the number of neighbors in the neighborhood in each iteration. This number, in turn, depends on the distribution of blocks and job-blocks and is peculiar to each particular solution. For “Series 1” we observe that when the distances between factories increase, the calculation time decreases. This is due to the fact that the number of blocks and job-blocks decreases as the impact of travel times on the completion of all jobs increases. In the remaining series, in each group of instances of the same size, the calculation time is approximately the same. This time is relatively short and follows from a small number of blocks, due to the dominance of the bottleneck factory. The computation time increases with the number of jobs. This relationship is approximately linear and shows that, as the number of jobs grows, the average number of blocks changes only slightly.

The largest observed computation time, for the instances of the largest size, does not exceed 4 s, which is acceptable in practical applications.

6 Conclusions

A distributed manufacturing system has been studied, modelled and analysed as a mixed open-shop/job-shop scheduling problem with transport times and the makespan criterion. We decomposed the problem of finding the best schedule into two sub-problems: finding the optimal schedule for a given processing order and finding the best processing order. To solve the former sub-problem we use a specific graph, which allows us to formulate several special properties: feasibility of a processing order, fast computation of the makespan, and elimination of non-promising solutions by so-called block properties. The block property defines necessary conditions that a processing order must satisfy to be better than the current one, without direct calculation of the makespan value. In this way, non-promising solutions can easily be eliminated.

To solve the latter sub-problem we propose a metaheuristic algorithm based on the tabu search (TS) method. Block properties have been used to construct the neighborhood of TS. Computer experiments show that the proposed algorithm provides solutions close to optimal in a short time, already for test instances (taken from the literature) of the open-shop case. For the distributed manufacturing case, the similarity of jobs has an impact on the quality of the generated solutions. For instances with series of identical jobs, the TS algorithm frequently generates an optimal schedule. Moreover, we observe for job series that the transport time has a rather weak impact on the selection of the best solution.

The proposed approach can be extended to cover more complex distributed manufacturing structures and other manufacturing policies. A further natural extension of the research will be parallel variants of the proposed algorithm [4].