1 Introduction and Problem Characteristics

It is widely recognized that the spare parts area is often the most profitable part of a corporation [13], especially in the repair, maintenance and aftersales business. In aftersales, the use of appropriate forecasting methods is one of the key management challenges [12]: it reduces safety stocks and decreases several costs related to the stock level without penalizing the service level. In the automotive aftermarket this is even more critical for wholesalers, as several aspects add layers of complexity to demand forecasting and the subsequent inventory management: the huge number of spare parts with heterogeneous sales patterns, the risk of stock obsolescence for part of the portfolio, and the importance customers attach to their vehicles, which makes them very intolerant of long waiting times [1, 2, 4]. Despite the work published on this matter by many authors, there is a considerable gap between research and practice, as stated in [4] and [1]. A survey on the application of spare parts management methods at a national level is presented in [11], based on the work done by Rodrigues and Sirova in the Czech Republic. In this paper we address the practical implementation, in a Portuguese company, of a stock management process designed to meet the company's constraints w.r.t. hardware and data availability, as well as to deliver accurate outputs and reliable Key Performance Indicators (KPI).

During this work, it was possible to observe very distinct sales behaviours across the references that compose the company portfolio. There are references with sporadic demand, consumed at certain moments followed by long and variable intervals with no demand, as well as references sold in high quantities over long continuous periods of time. Figure 1 presents the sales history for a given sub-family of 460 references (y-axis), across a period of 9 months (x-axis). Each marker placed at (x, y) represents at least one sale of reference y at time x.

Fig. 1

Scatter of sales for a set of spare-parts references, during a 9-month period

Several facts may explain this behaviour. On the one hand, it is well known that aftermarket spare parts sales for a given car model are, among other factors, correlated with its age: some years after the model is launched there is an increase in the number of spare parts sold, followed by an equilibrium phase and a subsequent decay at the end of the car's expected life [8, 9]. However, these trends also depend on many other factors, such as the number of vehicles sold of that particular model, the number of years since the model was released, and the expected life of each vehicle component. On the other hand, the same spare part may be applied in different car models, which may have distinct production start and end periods, and a single car reference may have several different aftermarket suppliers. Finally, there are many additional factors related to the spare parts demand faced by a given company, such as the mileage and type of utilization of each car, or the market share the company owns, which is, in fact, time and reference dependent. All of this makes it very difficult for a wholesaler to accurately estimate the parameters needed to deploy such a model.

For all the reasons stated above, in the next section we present a model that uses several computationally cheap forecasting methods, aiming at: being suitable for the different rotation types of references that compose the company portfolio; reacting quickly to market changes; and keeping the computational complexity at reasonable levels for implementation in a company with regular hardware resources. Table 1 presents the notation used in this paper for the methods described in the next section, whose final subsections present the results of their application to real databases and discuss these intermediate results. This sets the background for the optimization model presented in Sect. 3. Finally, we discuss the overall results in Sect. 4.

Table 1 Notation for the forecasting methods

2 An Adjustable Model for Forecasting Spare Parts Demand

Several methods for forecasting multi-rotation references are presented in the literature [2, 6, 12]. Syntetos and Boylan [2] present a comprehensive review on forecasting spare parts demand, and the work of Romeijnders et al. [12] is a very good survey on the subject of this paper, as it was based on the work done by the authors for Fokker Services, a company dealing with spare parts for the aviation industry, which has many similarities with the automotive business. Kennedy et al. [6] also present an overview of spare parts management that is relevant for the second part of this paper.

2.1 Multirotation Reference Forecasting Methods

This section presents a brief review of methods for demand prediction, based on the description made by Romeijnders et al. in [12].

2.1.1 Zero Forecast

The Zero Forecast method (ZF) sets the forecast for the following period to 0 [12], that is:

$$\displaystyle \begin{aligned} \hat{X}_{t+1}=0 \end{aligned} $$
(1)

2.1.2 Moving Averages

The method of moving averages (MA) is one of the best-known methods for time series [7]. The demand in the next period is taken as the average demand of the previous N periods, possibly with weights 0 ≤ ω_i ≤ 1, where \(\sum _{i=1}^{N}\omega _i =1\). Unless stated otherwise, we use the default \(\omega _i=\frac {1}{N}\):

$$\displaystyle \begin{aligned} \hat{X}_{t+1}=\sum_{i=1}^{N}{\omega_i d_{t-N+i}} \end{aligned} $$
(2)

It is important to notice that the Nors company already used MA to forecast the next period demand, with N = 6; we denote this as MA(6) in further analysis. Additionally, we used a 12-month MA with a combination of weights ω_i that gives more importance to the last three periods, a little less to the remainder of the last semester, and a small weight to the first trimester. This is stated hereafter as the K-Method.
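As an illustration, Eq. (2) and the K-Method can be sketched in a few lines of Python (our choice of language; the paper provides no code). The K-Method weights below are hypothetical, since only their qualitative shape is described:

```python
def moving_average(demand, N=6, weights=None):
    """Eq. (2): forecast the next period as a (weighted) average of
    the last N demands. `demand` lists past demands, most recent last;
    `weights` must have length N and sum to 1 (oldest first),
    defaulting to the uniform 1/N."""
    window = demand[-N:]
    if len(window) < N:
        raise ValueError("need at least N observations")
    if weights is None:
        weights = [1.0 / N] * N
    return sum(w * d for w, d in zip(weights, window))

# Hypothetical K-Method-style weights over 12 months: a small weight on
# the oldest trimester, more on the middle months, most on the last
# trimester (the paper gives only this qualitative description).
raw = [1] * 3 + [2] * 6 + [4] * 3          # oldest -> newest
k_weights = [w / sum(raw) for w in raw]
```

Under these assumptions, MA(6) corresponds to `moving_average(history, N=6)` and the K-Method to `moving_average(history, N=12, weights=k_weights)`.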

2.1.3 Naive Forecast

The so-called Naive Forecast (NF) is a particular case of moving averages with N = 1, as it sets the demand during the last period as the forecast for the following period [12].

$$\displaystyle \begin{aligned} \hat{X}_{t+1}=d_t \end{aligned} $$
(3)

2.1.4 Exponential Smoothing

The Exponential Smoothing method (ES) is robust and very well known, as it adapts rapidly to changes in demand. Unlike MA, this method uses exponentially decreasing combinations of past observations to estimate future demands [3, 12]. In fact, it adjusts the forecast for the last period by the prediction error \(d_t-\hat {X}_t\), through the expression:

$$\displaystyle \begin{aligned} \hat{X}_{t+1}=(1-\alpha)\hat{X}_t+\alpha d_t \end{aligned} $$
(4)
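A minimal Python sketch of Eq. (4); initialising the first forecast from the first observation is our assumption, as the paper does not specify it:

```python
def exponential_smoothing(demand, alpha=0.2, init=None):
    """Eq. (4): x_hat_{t+1} = (1 - alpha) * x_hat_t + alpha * d_t.
    `init` is the starting forecast; defaults to the first observation."""
    forecast = demand[0] if init is None else init
    for d in demand:
        forecast = (1 - alpha) * forecast + alpha * d
    return forecast
```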

2.1.5 Croston Method

In the original paper [5], Croston showed that the ES and MA methods do not fit series with intermittent demand. He proposed to update the demand quantity \(\hat {S}_{t+1}\) and the inter-demand interval \(\hat {K}_{t+1}\) separately, using the ES model to estimate both components in the periods of positive demand. The Croston forecast (CR) for the demand quantity in period t is

$$\displaystyle \begin{aligned} \hat{S}_{t+1}= \left\{ \begin{array}{ll} \hat{S}_t,& d_t=0 \\ (1-\alpha)\hat{S}_t+\alpha d_t, & d_t>0 \end{array} \right. \end{aligned} $$
(5)

and the forecast for the inter-demand interval may be calculated through

$$\displaystyle \begin{aligned} \hat{K}_{t+1}= \left\{ \begin{array}{ll} \hat{K}_t, & d_t=0 \\ (1-\beta)\hat{K}_t+\beta k_t, & d_t>0 \end{array} \right. \end{aligned} $$
(6)

The forecast suggested by the Croston method is obtained through:

$$\displaystyle \begin{aligned} \hat{X}_{t+1}=\frac{\hat{S}_{t+1}}{\hat{K}_{t+1}} \end{aligned} $$
(7)

2.1.6 SBA: Syntetos–Boylan Method

Syntetos and Boylan [14] presented in 2001 an update to the Croston method, the Syntetos–Boylan method (SBA). To correct its bias, they proposed to include the factor \(1-\frac {\alpha }{2}\) in the Croston method, obtaining the following expression:

$$\displaystyle \begin{aligned} \hat{X}_{t+1}=\left(1-\frac{\alpha}{2}\right) \frac{\hat{S}_{t+1}}{\hat{K}_{t+1}} \end{aligned} $$
(8)

where \(\hat {S}_{t+1}\) and \(\hat {K}_{t+1}\) are evaluated using the expressions (5) and (6).
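The recursions (5)–(7) and the SBA correction (8) can be sketched together in Python; initialising both components from the first positive demand is our assumption:

```python
def croston(demand, alpha=0.1, beta=0.1, sba=False):
    """Croston forecast, eqs. (5)-(7); with sba=True, the
    Syntetos-Boylan variant of eq. (8). The demand size S and the
    inter-demand interval K are updated only in periods with
    positive demand."""
    s_hat, k_hat = None, None   # smoothed size and interval
    k = 1                       # periods since last positive demand
    for d in demand:
        if d > 0:
            if s_hat is None:   # initialise at first positive demand
                s_hat, k_hat = d, k
            else:
                s_hat = (1 - alpha) * s_hat + alpha * d
                k_hat = (1 - beta) * k_hat + beta * k
            k = 1
        else:
            k += 1
    if s_hat is None:           # no positive demand observed at all
        return 0.0
    forecast = s_hat / k_hat
    if sba:
        forecast *= 1 - alpha / 2
    return forecast
```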

2.1.7 Teunter Method

The Teunter method (TSB) [15] is an alternative to the Croston method that updates the probability of positive demand, instead of updating the forecast for the interval between positive demands as the CR and SBA methods do. The demand size forecast for period t is obtained through:

$$\displaystyle \begin{aligned} \hat{S}_{t+1}= \left\{ \begin{array}{ll} \hat{S}_t, & d_t=0 \\ (1-\alpha)\hat{S}_t+\alpha d_t, & d_t>0 \end{array} \right. \end{aligned} $$
(9)

and the probability of positive demand is given by:

$$\displaystyle \begin{aligned} \hat{p}_{t+1}=(1-\beta)\hat{p}_t + \beta p_t \end{aligned} $$
(10)

Thus, the forecast given by the Teunter method may be calculated by:

$$\displaystyle \begin{aligned} \hat{X}_{t+1}=\hat{p}_{t+1}\hat{S}_{t+1} \end{aligned} $$
(11)
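A sketch of Eqs. (9)–(11); we read p_t in Eq. (10) as the 0/1 indicator of positive demand in period t, which is the usual TSB convention, and the initial values `s0` and `p0` are assumptions:

```python
def tsb(demand, alpha=0.1, beta=0.1, s0=1.0, p0=0.5):
    """TSB forecast, eqs. (9)-(11): smooth the demand size S only
    when demand occurs, and the demand probability p every period
    towards the 0/1 occurrence indicator. s0 and p0 are assumed
    initial values."""
    s_hat, p_hat = s0, p0
    for d in demand:
        occurred = 1.0 if d > 0 else 0.0
        p_hat = (1 - beta) * p_hat + beta * occurred   # eq. (10)
        if d > 0:
            s_hat = (1 - alpha) * s_hat + alpha * d    # eq. (9)
    return p_hat * s_hat                               # eq. (11)
```

Unlike Croston, the probability estimate decays during runs of zero demand, which lets TSB react to obsolescence.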

2.1.8 SAGA or Holt Method

The SAGA method, also known as the Holt method, is suitable for series with a linear trend. It uses the exponential smoothing technique to estimate the level a and the trend b [3]. This model uses the following update expressions:

$$\displaystyle \begin{aligned} \begin{array}{l} a_t=\alpha d_t+(1-\alpha)(a_{t-1}+b_{t-1}),\\ \end{array} \end{aligned} $$
(12)

and

$$\displaystyle \begin{aligned} \begin{array}{l} b_t=\beta (a_t-a_{t-1})+(1-\beta)b_{t-1} \end{array} \end{aligned} $$
(13)

The forecast for the next period is obtained through:

$$\displaystyle \begin{aligned} \hat{X}_{t+1}=a_t+b_t \end{aligned} $$
(14)
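A sketch of the Holt recursions (12)–(14); initialising the level and trend from the first two observations is our assumption:

```python
def holt(demand, alpha=0.3, beta=0.1):
    """Holt linear-trend forecast, eqs. (12)-(14). Level a and trend b
    are initialised from the first two observations (an assumption);
    needs at least two data points."""
    a, b = demand[1], demand[1] - demand[0]
    for d in demand[2:]:
        a_prev = a
        a = alpha * d + (1 - alpha) * (a + b)     # eq. (12)
        b = beta * (a - a_prev) + (1 - beta) * b  # eq. (13)
    return a + b                                  # eq. (14)
```

On an exactly linear series the recursions preserve the trend, so the one-step forecast continues the line.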

2.1.9 Poisson Method

Under certain well-specified conditions, the Poisson distribution counts events that occur randomly in a given interval of time. This distribution can be used to produce a forecast, as an alternative to methods based on time series [10]. We adapted it to this problem, taking as the counted event the quantity sold in a given period of time. Thus, defining X_i as the random variable that represents the number of pieces sold of a given reference i in the time interval Δ, and supposing that X_i ∼ Po(λ) with \(\lambda =E(X_i)\approx \frac {1}{n}\sum _{j=1}^{n}{d_{ij}}\), where \(\sum _{j=1}^{n}{d_{ij}}\) represents the total number of articles of reference i sold in n periods Δ, the forecast \(\hat {X}_i\) of the Poisson model (PO) is obtained by solving the equation

$$\displaystyle \begin{aligned} P(X_i<\hat{X}_i)=1-\alpha \end{aligned} $$
(15)

where α represents the desired confidence level.
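Eq. (15) can be solved by accumulating the Poisson probability mass until the confidence level is reached; we adopt the discrete-quantile convention (smallest x with P(X ≤ x) ≥ 1 − α), which is one reading of the strict inequality in Eq. (15):

```python
import math

def poisson_forecast(demand, alpha=0.05):
    """PO forecast, eq. (15): the (1 - alpha) quantile of Po(lambda),
    with lambda estimated as the mean demand per period. Returns the
    smallest integer x with P(X <= x) >= 1 - alpha."""
    lam = sum(demand) / len(demand)
    pmf = math.exp(-lam)     # P(X = 0)
    cdf, x = pmf, 0
    while cdf < 1 - alpha:
        x += 1
        pmf *= lam / x       # P(X = x) from P(X = x - 1)
        cdf += pmf
    return x
```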

Besides the methods described, many others could be inserted into this model. Indeed, several other methods were considered for inclusion in this work, but they failed to meet one of the requirements set by the company: that the implementation should deliver results in reasonable time using the existing hardware. The Neural Networks approach and time series methods (like ARMA, ARIMA or SARIMA) did not improve our results significantly in the several tests where this model was used, and/or increased the overall running time considerably, to values that were not compatible with the company's available time to process the orders.

2.2 The Mixed Model: MM

The approach implemented at Nors was based on selecting, in each order period, the best method for each reference. With that purpose, we built a metric that uses recent periods to decide which method best fits the current sales series, and uses it in the forthcoming forecasting period. More precisely, it receives the demand history for a given reference and estimates which method is most suitable for that series. This was done through the following error function (by default, the test period uses N = 18 months):

$$\displaystyle \begin{aligned} \mbox{Error}_t =0.6\sum_{i=1}^{\lfloor N/2\rfloor}{(d_{t-i}-\hat{X}_{t-i})^2}+ 0.4\sum_{i=\lfloor N/2 \rfloor+1}^{N}{(d_{t-i}-\hat{X}_{t-i})^2} \end{aligned} $$
(16)

where ⌊.⌋ represents the floor function.

The methods MA, ES, CR, SBA, TSB and SAGA are parametric, implying that their use depends on the values assigned to each of their parameters. We handled this using a brute-force parametrization test, selecting for the forecast of the next unknown period the combination that minimizes the Mean Squared Error (MSE) over the selected m test periods:

$$\displaystyle \begin{aligned} \mbox{MSE}_t=\frac{1}{m}\sum_{i=1}^{m}{(d_{t-i}-\hat{X}_{t-i})^2} \end{aligned} $$
(17)
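The selection step can be sketched as a rolling backtest scored by Eq. (16): each candidate method forecasts each of the last N periods from the data available at that time, and the method with the smallest weighted error wins. The two-method dictionary in the usage note is only illustrative:

```python
def weighted_error(actual, forecast, N=18):
    """Eq. (16): squared errors over the last N test periods, with the
    most recent half weighted 0.6 and the older half 0.4. Both lists
    are aligned, most recent last."""
    errs = [(a - f) ** 2 for a, f in zip(actual[-N:], forecast[-N:])]
    k = len(errs) // 2                 # size of the most recent half
    if k == 0:
        return sum(errs)
    return 0.6 * sum(errs[-k:]) + 0.4 * sum(errs[:-k])

def select_best(history, methods, N=18):
    """Pick the method minimising eq. (16) over a rolling backtest.
    `methods` maps names to functions history -> one-step forecast."""
    best_name, best_err = None, float("inf")
    for name, method in methods.items():
        actual, forecasts = [], []
        for t in range(len(history) - N, len(history)):
            if t < 1:
                continue
            forecasts.append(method(history[:t]))
            actual.append(history[t])
        err = weighted_error(actual, forecasts, N)
        if err < best_err:
            best_name, best_err = name, err
    return best_name
```

For example, `select_best(history, {"ZF": lambda h: 0.0, "NF": lambda h: h[-1]})` chooses between a zero forecast and a naive forecast for the given history.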

2.2.1 The Dynamic Safety Stock Approach

The methods described in the last subsection deliver the forecasted demand for the next period. Apart from this, another important quantity to determine is the Safety Stock (SS). As stated in [16], this quantity may be understood as an extra stock, important to cover eventual consumption above the predicted demand in the following period. To determine the actual quantities to be ordered for the next period, Nors used a safety margin based on the Economic Order Quantity (EOQ) expression

$$\displaystyle \begin{aligned} \mbox{EOQ}_t=OP_t-Q(0)_t+\hat{X}_{t+1}\times \epsilon \end{aligned} $$
(18)

where OP_t is the order point, 𝜖 is a parameter given as a function of both the reference cost and the next period forecast for each reference, and Q(0)_t represents the existing stock (in units) of the given reference at the time the order is placed. The 𝜖-values, pre-defined by the company's commercial department, are presented in Table 2.

Table 2 Number of extra weeks to order, as function of reference’s cost and demand forecast

Taking advantage of the data obtained from the tests made previously when selecting the best method, and given that the main purpose of the safety stock is to cover the instances where the method underestimates the demand for the next period, we calculate the safety stock quantity in a different way: to evaluate the SS, we only use the past periods where the real demand was above the forecast,

$$\displaystyle \begin{aligned} \displaystyle \mbox{Error}_{med_t}=\frac{1}{\displaystyle \#{A_t}}\sum_{i\in A_t} (d_{t-i}-\hat{X}_{t-i}), \, A_t= \left\lbrace i=1,\cdots,n :d_{t-i} > \hat{X}_{t-i} \right\rbrace \end{aligned} $$
(19)

This value is scaled up by a risk factor,

$$\displaystyle \begin{aligned} \mbox{SS}_t=q_{1-\alpha'/2}\times \mbox{Error}_{med_t} \end{aligned} $$
(20)

where q represents the 1 − α′∕2 quantile of a suitable distribution for the error series. The risk factor parametrization is intended to be set using the company's technical knowledge (e.g., taking into account the critical importance of each reference).

As usual, the order point is defined as \(OP_t=\hat {X}_t\times LT+SS_t\times LT\), where the Lead-Time (LT) is measured in the same temporal scale as the forecast \(\hat {X}_t\).
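Eqs. (19)–(20) and the order point formula can be sketched as follows (the value q = 2 used later in the paper is the default here):

```python
def safety_stock(actual, forecast, q=2.0):
    """Eqs. (19)-(20): average under-forecast error over the periods
    where real demand exceeded the forecast, scaled by the risk
    factor q (the 1 - alpha'/2 quantile; the paper uses q = 2)."""
    under = [a - f for a, f in zip(actual, forecast) if a > f]
    if not under:
        return 0.0
    return q * sum(under) / len(under)

def order_point(next_forecast, ss, lead_time):
    """OP_t = X_hat * LT + SS * LT, with LT measured in the same
    temporal unit as the forecast."""
    return next_forecast * lead_time + ss * lead_time
```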

2.3 Computational Results with Real Data

The previous model was implemented and applied to two real datasets, DS12016 and DS22016, consisting of 1172 and 4032 references, respectively, belonging to two suppliers for which this methodology has been tested and applied. The following results compare the model over the year 2016 (52 weeks) against the older model implemented at Nors, which was based only on MA(6). For this work, we also had access to the previous 30 months of sales history w.r.t. these references. The lead time for these suppliers is 28 days, and the orders were launched weekly, starting on January 4 and ending on December 26.

It should be noticed that although some references had no sales at all during 2016 (145 and 1010 in each dataset, respectively), we kept them in the datasets in order to reproduce real-world conditions in these tests, as it is impossible to know in advance whether a reference will be sold in a given year. These references were deleted only when evaluating the relative errors. For the same reason, we also included the items returned in 2016 from 2015 sales; consequently, the quantity sold per reference may be negative.

In Tables 3 and 4, the descriptive measures characterizing the datasets reinforce the idea, introduced at the beginning of this paper, of the great amplitude of the sales distribution per reference, as high, medium and low rotation products coexist in the same family of products. This may be one of the main factors behind the values presented in the last line of these tables, representing the number of different methods per reference used to forecast at least one period in 2016, which seems to justify by itself the idea presented in Sect. 2.2.

Table 3 Descriptive statistics regarding the application of MM in DS12016
Table 4 Descriptive statistics regarding the application of MM in DS22016

Hence, proposing a single method for each reference is not a good choice: on average, each reference had its best prediction using between 3 and 4 different methods across the whole year. With respect to the performance of each method, Table 5 and Fig. 2 present the main idea, although some comments should be made in that respect.

Fig. 2

Histogram of the number of times each method was selected as the best for a reference, across the whole 52-week history

Table 5 Impact by method in DS12016 and DS22016

First, all the methods are used at some point across the year and, as the computation time is acceptable for the company, there is no reason to remove any of them. Second, the K-Method clearly outperforms the MA method, which was the choice prior to this new implementation at Nors. Finally, the high frequency of ZF may be explained by its usage for references with very scattered sales, as well as for references with no sales at all. Nevertheless, it is impossible to know that in advance, and the decision to eliminate a given reference from the portfolio is a management decision, far beyond the scope of this work. We also highlight that if a reference is predicted by means of ZF, this does not mean that the final ordered quantity will be 0, as the corresponding safety stock quantity may be positive.

For all these reasons, the MM mixed model is an upgrade to Nors' prior state, as Table 6 clearly shows. The plots in Fig. 3 also allow one to visualize the quality of the predictions obtained by the model for both datasets.

Fig. 3

Real errors applying MM to DS12016 (a) and DS22016 (b)

Table 6 Descriptive statistics for errors concerning the application of MM to both datasets

2.4 Financial Results

As usual, the main quality indicators of a forecasting model in this industry are related to the service level and the sales volume. In Table 7 we present those parameters, obtained from simulating MM on both datasets against a simulation of the old methodology, where we set \(q_{1-\frac {\alpha '}{2}}=2\) in Eq. (20).

Table 7 Financial details for new (MM) and old (MA(6)) methodology for both datasets

Given that it is common knowledge in this industry that increasing the service level above 90% is very expensive, and that the presented sales volume is measured in cost price (in particular, the increase in sales has a higher return than the one stated in the third column of Table 7), the previous results are good outputs of this model from a financial point of view. It is also important to notice that in the past the prediction model was not accurate enough, as it did not use the SS notion, but only the safety margin given by the ABC procedure in Table 2. We ran the simulations with the SS approach for all the methodologies; otherwise, the service levels for MA(6) would drop below 85%. In the past, to tackle that problem, the product managers increased the sales prediction by some factor related to their experience. Obviously, this had several negative aspects, such as depending on each manager's experience and making the company's performance very vulnerable to changes in staff. Also, the attempt to protect the company against out-of-stock episodes led to manual reinforcement of the safety margins. In the end, the effects of applying this methodology to the whole set of the company's products in late 2014 led to the global results presented in Fig. 4.

Fig. 4

Evolution of stock value from Jan 2015 till Aug 2016 (top) and some important indicators from the previous year till the year after this model was implemented (bottom)

3 MORS: A Lightweight Model for the Supplier Order Processing System

Due to the success of the MM approach explained in the previous section, it was decided to develop a new mathematical model to optimize the order processing system. One of the reasons to develop this optimization model, denoted as MORS in the forthcoming references, was the fact that product managers often decide to place orders of considerable size when the demand forecasts do not indicate so. Sometimes this behaviour aims to fulfil contracts and to obtain discounts agreed with the supplier when negotiating the rappel at the beginning of the year. This leads to several stock imbalances across the year, with low stock levels during some parts of the year, compensated by large amounts of stock at the end of the year in order to reach the agreed contractual amounts or some discount level. An example may be seen in Fig. 5, where the daily stock value for the MM model (black) is plotted against the real stock (grey).

Fig. 5

Theoretical (black) vs Real Stock (grey) in DS12016 family

Table 8 presents the notation used in the optimization model for a supplier with a set of N references (i = 1, …, N), ordered in a periodic T-day scheme (t = 1, …, T).

Table 8 Notation for the optimization model

As those thresholds are known at the beginning of the year, our idea is to distribute the purchased quantities across the whole year, introducing the total budget (TB) and/or the minimum budget to be spent (tb) as parameters of the optimization model. These quantities are intended to be defined and revised across the year by the buyer manager, allowing him to decide the minimum (and/or maximum) amount to be spent in each order, as defined in constraints (30, 31). In our implementation, we distributed that value uniformly across the year, but this may be easily adapted to other types of requirements. It is also important to notice that, in the event of a considerable loss in sales w.r.t. the goods supplied by a given supplier, or of changes in the contract, the existence of this variable allows a quick adaptation of the buying strategy at any time of the process.

It is also natural for the manager to restrain the number of days for which, under regular conditions, the system is allowed to hold stock. Denoting this quantity by Tmax_i, inequality (32) guarantees this possibility.

Obviously, infeasibility of the model may be a strong indicator that the management decisions on TB and Tmax are incoherent. This may become an additional advantage of the model, as it may alert the managers to strategies that are mutually incompatible.

Another side effect of this model is the minimization of distinct purchases per reference, as decreasing the number of different references purchased in each order brings an additional positive effect at the warehouse level. Indeed, handling and put-away times are important drawbacks in warehouse management, and the economy of scale obtained from deeper and narrower orders may have an important impact on global efficiency. We noticed, prior to the application of this model, that there was a non-negligible number of occurrences where products, although at the warehouse, were not available to be sold because their status was "in conference", leading to worse performance in the level-of-service KPI.

Regarding the number of days in stock, we also added a constraint (33) to assure that, if the forecast model is accurate, the order will provide enough pieces of each reference so that no stockout takes place before the arrival of the order placed in the next period.

From the implementation point of view, we ran a pre-processing analysis on the reference list, splitting the reference set into three different subsets. The mandatory subset, S_M, comprises the references that must be ordered, as their stock is forecasted to run out before the arrival of the next order. The admissible subset, S_A, comprises all the references that do not belong to S_M, but whose stock is forecasted to last no more than a threshold of τ ≥ P + LT days, where P stands for the supplier order periodicity; these references may or may not be ordered within the current order (28, 29). All the references that belong to neither S_M nor S_A are discarded from the optimization model, and we denote the resulting list of retained references by S′. In the scope of this real-world implementation, this may have important effects, as the number of different references per supplier may reach several thousands; it results in a significant reduction in the number of variables, which impacts the resources (memory/time) needed to run the model. We also restricted to Tmax the maximum number of days of stock that may be ordered for a reference. Moreover, we only allow the model to order, for a given reference, the quantity needed for a multiple of the order periodicity. This option allows us to transform the integer decision variables of this type of problem, \(Q_i\in \mathbb {N}\) for i ∈ S′, representing the replenishment quantity of reference i, into the set of binary variables x_ij ∈ {0, 1}, ∀i ∈ S′, indicating the number of days \(j\times P, \; j=0,1,\ldots ,\lfloor \frac {Tmax_{.}}{P}\rfloor \), for which each reference i may be ordered. This transformation allows us to determine the replenishment quantity through the identity

$$\displaystyle \begin{aligned} Q_i=\sum_{j=0}^{\lfloor \frac{Tmax_{.}}{P}\rfloor }jx_{ij}\hat{D}_i, \; \forall i\in S' \end{aligned} $$
(21)

where \(\hat {D}_i\) represents the expected daily demand for reference i. The warehouse space availability when receiving the order may also be added to the model through the inequality (34).

As our goal is to minimize the handling, storing and interest costs, the following components should be included in the objective function:

  • The financial costs,Footnote 1 associated with purchasing the references needed to fulfil the demand throughout a given period. Defining the purchasing cost of reference i (pc i) and the interest rate (IR), this may be represented as a function of the number of days (n) for which we decide to place the order,

    $$\displaystyle \begin{aligned} \displaystyle f_1(n)&=\sum_{j=1}^n\, pc_{i}\, \hat{D}_i\, ((1+IR/360)^{j}-1) \\&= pc_{i}\, \hat{D}_i \, (1+IR/360) \frac{1-(1+IR/360)^n}{1-(1+IR/360)}-(pc_{i}\, \hat{D}_i)n \\&= pc_{i}\, \hat{D}_i \, ((1+IR/360) \frac{(1+IR/360)^n-1}{IR/360} -n)\; \end{aligned} $$
    (22)
  • The fixed cost fc i associated with the administrative process of placing the order,

    $$\displaystyle \begin{aligned} f_2(n)=fc_i \; \end{aligned} $$
    (23)
  • The cost associated with the handling and put-away processes when the goods arrive at the warehouse, where vc i represents the variable cost per unit ordered,

    $$\displaystyle \begin{aligned} f_3(n)=n\, vc_i\, \hat{D}_i \; \end{aligned} $$
    (24)
  • Storage costs, which include warehouse rent and stock insurance, where hc i represents the unitary holding cost of reference i per day (€/day),

    $$\displaystyle \begin{aligned} \displaystyle f_4(n)&=\sum_{j=1}^n\, hc_{i}\, \hat{D}_i\, (1+IR/360)^{j} \\&= hc_{i}\, \hat{D}_i \, (1+IR/360) \frac{1-(1+IR/360)^n}{1-(1+IR/360)} \\&= hc_{i}\, \hat{D}_i \, (1+IR/360) \frac{(1+IR/360)^n-1}{IR/360} \; \end{aligned} $$
    (25)
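As a sanity check on the geometric-series manipulation in Eq. (22), the closed form can be compared against the direct term-by-term sum (a minimal Python sketch; the parameter values in the test are arbitrary):

```python
def f1_direct(n, pc, D, IR):
    """Financing cost of eq. (22), summed term by term."""
    r = IR / 360
    return sum(pc * D * ((1 + r) ** j - 1) for j in range(1, n + 1))

def f1_closed(n, pc, D, IR):
    """Closed form from the last line of eq. (22)."""
    r = IR / 360
    return pc * D * ((1 + r) * ((1 + r) ** n - 1) / r - n)
```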

In order to make the comparison between different strategies simpler, a normalization is made with respect to the number of days for which an order is placed. As a consequence, the cost function w.r.t. a given reference i, in the order placed to fulfil the needs of an n-day period, is defined by the following function:

$$\displaystyle \begin{aligned} C_i(n)=\frac{f_1(n)+f_2(n)+f_3(n)+f_4(n)}{n} \end{aligned} $$
(26)

where, w.r.t. the solution of the optimization model, the constant term \(vc_i\hat {D}_i\) may be dropped.
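Assuming, as the normalization above indicates, that the per-day cost is the sum of the four components divided by n, the per-reference optimal coverage period plotted in Fig. 6 can be located numerically (a sketch; the cost parameters in the test are hypothetical):

```python
def cost_per_day(n, pc, fc, vc, hc, D, IR):
    """Per-day cost of covering n days of demand, assuming
    C_i(n) = (f1 + f2 + f3 + f4)(n) / n, cf. eqs. (22)-(25)."""
    r = IR / 360
    growth = (1 + r) * ((1 + r) ** n - 1) / r   # sum of (1+r)^j, j=1..n
    f1 = pc * D * (growth - n)                  # financing cost, eq. (22)
    f2 = fc                                     # fixed ordering cost, eq. (23)
    f3 = n * vc * D                             # handling cost, eq. (24)
    f4 = hc * D * growth                        # holding cost, eq. (25)
    return (f1 + f2 + f3 + f4) / n

def best_period(pc, fc, vc, hc, D, IR, n_max=364):
    """Coverage period (in days) minimising the per-day cost."""
    return min(range(1, n_max + 1),
               key=lambda n: cost_per_day(n, pc, fc, vc, hc, D, IR))
```

As expected, a lower holding cost pushes the optimal coverage period out, while a higher one pulls it in, which is exactly the spread of optima visible in Fig. 6.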

The plot of C i(n) for a given set of references is presented in Fig. 6, showing that the optimal order period differs markedly across references.

Fig. 6

Cost per day, for a given set of references, if the order is settled to cover the demand for the next x days

Given the previous considerations, the optimization model may be written as:

$$\displaystyle \begin{aligned} \displaystyle \min \sum_{i \in S'}\sum_{j=0}^{\lfloor \frac{Tmax_{.}}{P}\rfloor } C_i(jP)x_{ij} {} \end{aligned} $$
(27)

s.t.

$$\displaystyle \begin{aligned} \sum_{j=0}^{\lfloor \frac{Tmax_{.}}{P}\rfloor } x_{ij} =1, \, \forall i\in S_M {} \end{aligned} $$
(28)
$$\displaystyle \begin{aligned} \sum_{j=0}^{\lfloor \frac{Tmax_{.}}{P}\rfloor } x_{ij} \leq 1, \, \forall i\in S_A {} \end{aligned} $$
(29)
$$\displaystyle \begin{aligned} \sum_{i\in S'}pc_{i}Q_i \geq \frac{tb}{\frac{365}{P}} {} \end{aligned} $$
(30)
$$\displaystyle \begin{aligned} \sum_{i\in S'}pc_{i}Q_i \leq \frac{TB}{\frac{365}{P}} {} \end{aligned} $$
(31)
$$\displaystyle \begin{aligned} Q_i(0)+O_i+Q_i-LT*\hat{D}_i-SS_i\leq Tmax_{i}*\hat{D}_i, \; \forall i \in S' {} \end{aligned} $$
(32)
$$\displaystyle \begin{aligned} Q_i(0)+O_i+Q_i-LT*\hat{D}_i\geq (LT+P)*(\hat{D}_i+\frac{SS_i}{P}), \; \forall i \in S' {}\end{aligned} $$
(33)
$$\displaystyle \begin{aligned} \sum_{i\in S'}v_{i}(Q_i+I_i(LT)) \leq wva {} \end{aligned} $$
(34)
$$\displaystyle \begin{aligned} x_{ij} \in \{0,1\} \end{aligned} $$
(35)
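For intuition, the structure of (27)–(31) can be brute-forced on a toy instance with a handful of references. This is only a sketch: the real model is a MILP over thousands of binaries and would be handed to a solver, and all data below are hypothetical.

```python
from itertools import product

def solve_toy_mors(refs, P, budget_lo, budget_hi):
    """Brute-force the objective (27) under constraints (28)-(31) on a
    toy instance. Each reference is a dict with keys:
      'cost_fn'   -> n-day coverage cost C_i(n),
      'pc'        -> purchasing cost, 'D' -> expected daily demand,
      'mandatory' -> True if i is in S_M (must be ordered),
      'j_max'     -> floor(Tmax_i / P), the largest multiple allowed.
    Returns the tuple of chosen multiples j (Q_i = j * P * D_i, eq. 21)."""
    choices = [range(1 if r["mandatory"] else 0, r["j_max"] + 1)
               for r in refs]
    best, best_cost = None, float("inf")
    for js in product(*choices):
        # budget window, constraints (30)-(31)
        spend = sum(r["pc"] * j * P * r["D"] for r, j in zip(refs, js))
        if not (budget_lo <= spend <= budget_hi):
            continue
        # objective (27); j = 0 means the reference is not ordered
        cost = sum(r["cost_fn"](j * P) for r, j in zip(refs, js) if j > 0)
        if cost < best_cost:
            best, best_cost = js, cost
    return best
```

With a mandatory reference whose cost curve has an interior minimum and an optional one that is never worth ordering under a loose budget, the enumeration picks the cheapest feasible multiple for the first and skips the second.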

4 Computational Study and Implementation Results

This optimization model was implemented using Matlab R2016b with the Optimization Toolbox. For both the DS12016 and DS22016 datasets, real sales and the predictions given by the model described in Sect. 2.2 were used, and the respective results were compared with the existing rules of the Nors group (denoted as MM), as well as with the real orders placed to the suppliers. As in the previous section, we added the MA(6) method in order to check additional quality and financial indicators.

For DS12016, we made 52 orders, with a mean processing time of 20 s per order in a virtual machine running Windows 10 with 8 GB of RAM and a 2.8 GHz Intel Core i7 processor. With the same hardware, the solution for the DS22016 dataset was exported, on average, after 46 s of computation time. These times include the data import and the export of the solution to an Excel worksheet, which is then imported by Nors' native ERP. We set the total budget (TB) to the real amount bought by Nors in 2016 from each of the suppliers, and the order periodicity to 7 days. We did not set the tb parameter in (30), as it was our purpose to demonstrate that the budget for DS12016 was clearly excessive even when Tmax = 364, and that was not an issue for the other dataset. The maximum order value per week was defined as TB/52, although it may be set in other ways (e.g., the remaining yearly budget divided by the number of weeks remaining until the end of the year). The maximum number of days to hold stock was set to 364 or 126 days (two different simulations) for DS12016, because the annual budget for this supplier was clearly excessive (Fig. 5). For DS22016 that value was set to 94 days, as the yearly budget was much tighter w.r.t. the company's needs. Both suppliers had an expected lead time of 28 days. For MM and MA(6), the number of weeks for which each reference is ordered is one month plus the value indicated by the EOQ (Table 2), and all the simulations used \(q_{1-\frac {\alpha '}{2}}=2\) in (20).

In terms of daily stock, the overall behaviour of the method is presented in Figs. 7, 8, and 9, with the following legend abbreviations:

  • Real—Real stock at the company.

  • MORS—Stock behaviour using the MORS optimization process.

  • MM—Stock behaviour using the MM model (cf. Sect. 2.2).

  • MA(6)—Stock behaviour using Nors's original forecast model.

  • S.L.—Service level (%).

  • M.D.S.—Mean Daily Stock (€).

  • #R.B.—Total number of unique reference buys.

Fig. 7 DS12016: Comparative performance of three methods vs real data, using 2× Safety Stock and T max = 364 days

Fig. 8 DS12016: Comparative performance of three methods vs real data, using 2× Safety Stock and T max = 126 days

Fig. 9 DS22016: Comparative performance of three methods vs real data, using 2× Safety Stock and T max = 91 days

Analysing Table 9, where the main KPIs for the whole set of simulations are presented, some interesting facts stand out. The total amount actually bought by Nors w.r.t. the DS12016 dataset is clearly excessive for the defined maximum period to hold stock. At the same time, the comparison between the MORS optimized model and the MM model, both based on the same forecasting model, shows that our model achieves much better service levels. Also, the number of orders of different references made over the year is reduced by a factor of 0.32 (0.54 for DS22016) when compared with the MM model. The purchase patterns of the models, presented in Fig. 10, clearly emphasize advantages at the warehouse level. The same pattern is found when looking at the mean number of times each reference is ordered over the whole year: 2.2 (per reference) for MORS versus 7.0 for the MM method, which underlines that impact, as it represents a very significant reduction in the number of different references that must be handled in the warehouse over the year. These values are 1.6 and 2.9 for DS22016, resulting in the 54% reduction for the optimized model.

Fig. 10 DS12016: Purchases pattern for three methods with T max = 364 days

Table 9 Results main indicators for DS12016 and DS22016
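The ordering-frequency KPI quoted above reduces to simple arithmetic, which can be made explicit. A minimal sketch (hypothetical helper names, not code from the paper):

```python
# The ordering-frequency reduction quoted in the text: the mean number of
# times each reference is ordered per year under each model, and their ratio.

def mean_orders_per_reference(orders_per_ref):
    """orders_per_ref: list with the yearly order count of each reference."""
    return sum(orders_per_ref) / len(orders_per_ref)

def reduction_factor(mors_mean, mm_mean):
    """Ratio of the MORS mean to the MM mean; e.g. the reported
    2.2 vs 7.0 for DS12016 gives roughly the 0.32 factor quoted."""
    return mors_mean / mm_mean

print(round(reduction_factor(2.2, 7.0), 2))  # 0.31
```

The ratio of the per-reference means is what drives the warehouse-handling gain: fewer distinct references picked per ordering cycle.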

It is also important to notice that our model's objective function (27) had a global cost reduction factor between 0.5 (DS12016) and 0.71 (DS22016), which reflects very important savings in logistics costs. The absolute values are not presented for confidentiality reasons. A final remark should be made w.r.t. the stock and/or purchase amounts, as we only had access to the existing stock for those families on a weekly basis (the day the order was placed), as well as to the orders placed in December 2015 that should have been delivered in January 2016. Thus, some minor inconsistencies may occur in this section regarding service levels in the first 28 (lead-time) days of January or real stock levels. Nevertheless, even though some goods were delivered at the beginning of 2017 due to the lead time, we accounted for all the orders placed during 2016 in the financial results summary presented in Table 9. Concerning the lost sales KPI (Figs. 11 and 12), we evaluated these values based on the real sales recorded by the company, because Nors did not keep a record of lost sales in the system, making it impossible to evaluate that KPI accurately afterwards. The lost sales value is reduced by more than 50%, although based on different levels of purchases, as the MM model does not take the available budget into account.

Fig. 11 DS12016: Lost Sales Monetary Value for three methods with T max = 364 days

Fig. 12 DS22016: Lost Sales Monetary Value for three methods
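Reconstructing lost sales from the recorded real sales, as described above, amounts to counting every demanded unit that the simulated stock could not serve. A minimal sketch under that assumption (the function and its signature are ours, not the paper's):

```python
# Lost-sales reconstruction when no lost-sales log exists: replay the
# real recorded sales against the simulated stock trajectory and value
# any unserved units at the unit price.

def lost_sales_value(daily_sales, opening_stock, unit_price):
    """daily_sales: real recorded sales (units/day) for one reference.
    opening_stock: simulated stock (units) at the start of the window.
    Returns the monetary value of demand that could not be served."""
    lost = 0.0
    stock = opening_stock
    for qty in daily_sales:
        served = min(qty, stock)   # serve as much demand as stock allows
        lost += (qty - served) * unit_price
        stock -= served
    return lost

# 3 units demanded on day 2 but only 2 left in stock -> 1 unit lost
print(lost_sales_value([1, 3], opening_stock=3, unit_price=10.0))  # 10.0
```

Because real sales already exclude demand the company failed to record, this estimate is a lower bound on true lost sales, consistent with the caveat in the text.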

Figure 13 presents the order periodicity for the whole set of references, where significant heterogeneity can be seen, resulting from the minimization of ordering costs in the optimization model.

Fig. 13 DS12016: Histogram of relative frequencies vs order periodicity. Only ordered references were considered
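The periodicity statistic behind that histogram is simply the gap, in days, between consecutive orders of each reference. A sketch (hypothetical helper, not from the paper):

```python
# Order periodicity per reference: gaps in days between consecutive orders.

def order_periodicities(order_days):
    """order_days: sorted days of the year on which one reference was ordered.
    Returns the list of gaps between consecutive orders."""
    return [later - earlier for earlier, later in zip(order_days, order_days[1:])]

# A reference ordered on days 7, 21, 28 and 56 has gaps of 14, 7 and 28 days.
print(order_periodicities([7, 21, 28, 56]))  # [14, 7, 28]
```

Pooling these gaps over all ordered references and binning them yields the relative-frequency histogram of Fig. 13.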

It is important to notice that, although the MORS mean daily stock may increase as a direct consequence of the values used for T max, the purchase volume is kept below the maximum allowed (that is, the real volume for 2016). Also, when compared with MM and with the original Nors model, the amount per order is much more uniform across the year, as may be seen in Fig. 14.

Fig. 14 DS12016: Buying amounts per order with T max = 364 days
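The uniformity visible in Fig. 14 can be quantified, for instance, by the coefficient of variation of the per-order amounts. This metric is our own illustrative choice, not one reported in the paper, and the series below are made up:

```python
# Coefficient of variation (population std / mean) of per-order amounts:
# lower values mean a smoother purchasing pattern across the year.
from statistics import mean, pstdev

def coefficient_of_variation(amounts):
    return pstdev(amounts) / mean(amounts)

smooth = [100.0, 110.0, 90.0, 100.0]  # hypothetical MORS-like weekly amounts
spiky = [10.0, 300.0, 5.0, 85.0]      # hypothetical MM-like weekly amounts

print(coefficient_of_variation(smooth) < coefficient_of_variation(spiky))  # True
```

A smoother series of order amounts eases both cash-flow planning and the weekly workload at the warehouse, which is the practical advantage the text points to.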

5 Conclusions

In this work we modelled, developed, and implemented a new methodology for spare parts forecasting in the Nors group, improving prediction accuracy. It was tested with two different suppliers with distinct orders of magnitude regarding their budgets. Those forecasts were then used as input to a new optimization model to decrease the financial costs of the ongoing orders. The proposed methodology had two different outcomes when tested on those two suppliers for the year 2016. On the one hand, regarding forecasting accuracy, it increased the service level by more than 1% for both suppliers. On the other hand, applying the MORS optimization model with the same budget that the company used in 2016 for those suppliers, we found that: the service level increased by 2.4–3.6%; the number of different reference purchases over the whole year was reduced by between 30% and 54%, which has a strong impact on warehouse management; and the stock level was kept smooth across the whole year. Finally, the objective function of the optimization model, that is, the indirect costs of the purchases (handling, interest, and warehouse renting), was reduced by a factor between 0.5 and 0.71. As expected, the mean daily stock value increased, but the cost of that rise was already contemplated in the objective function definition. All of this was achieved with a model that is fully parametrizable by management regarding factors such as the maximum period to hold stock or the total budget amount, making it flexible enough to encompass any further changes in the group's management policy. Despite the considerable gains achieved with this implementation, the forecasting methods described in Sect. 2 are being updated. In fact, the group is now using several other sources of available information and new artificial intelligence techniques, in order to obtain better forecasts for the references' demand.