1 Introduction

In real life, there are several decision-making situations in which a decision maker has to choose one value from a set of possible values of a parameter. Such a situation can be formulated as a mathematical programming model called a multi-choice programming problem (MCPP). In an MCPP, the decision maker assigns multiple choices to each multi-choice parameter. Chang [1] proposed a new approach to multi-objective programming, called multi-choice goal programming, in which the decision maker sets multiple aspiration levels (goals) for each objective; a binary variable is introduced for each goal to handle these multi-choice aspiration levels. In a later paper, Chang [2] handled the same situation with continuous variables instead of binary ones. Liao [3] proposed a multi-segment goal programming model, in which the coefficients of the objective function take multiple aspiration levels; Chang's [1] binary-variable method is used to handle the multi-choice parameters. Biswal and Acharya [4] proposed multi-choice linear programming problems (MCLPP), in which the right-hand-side parameters of the constraints take multiple aspiration levels, together with a transformation technique that yields an equivalent mathematical model. In a subsequent paper [5], they introduced an interpolating polynomial for each multi-choice parameter to transform the model.

A multi-choice programming problem can also be viewed as a combinatorial optimization problem. Since actual decisions are often made on the basis of uncertain data, it is natural to treat the alternative choices of a multi-choice parameter in an MCLPP as random variables. Stochastic programming addresses decision-making problems in which such randomness is involved.

Stochastic programming has been developed in various fields; a bibliographic review can be found in [6, 7]. Two approaches are particularly popular, namely,

  • (i) Chance-constrained programming or probabilistic programming proposed by Charnes and Cooper [8];

  • (ii) Two stage programming or Recourse programming proposed by Dantzig [9].

Both methods are used to handle random parameters in optimization problems. In the chance-constrained programming (CCP) technique, the constraints of the problem may be violated up to given probability levels, which are fixed by the decision maker. The two-stage programming technique also converts the stochastic problem into a deterministic one, but it does not allow any constraint to be violated. A substantial literature exists in the field of stochastic programming [10–16]. Typical areas of application include finance, industrial engineering, water resource management, management of stochastic farm resources, mineral blending, human resource management, energy production, circuit manufacturing, chemical engineering and telecommunications.

Under these circumstances, in this paper we concentrate on finding a suitable methodology for solving the multi-choice linear programming problem in which the alternative choices of a multi-choice parameter are random variables. Since the normal distribution is the most common distribution in nature, we develop the methodology by assuming that all the alternatives of a multi-choice random parameter are independent normal random variables with known mean and variance. In some stochastic programming problems, parameters follow non-normal distributions such as the uniform, exponential, gamma and log-normal distributions; deterministic models for stochastic programming problems with such random variables have been established in the literature [17, 18].

The organization of the paper is as follows: after a brief introduction to the problem in Sect. 1, we present the mathematical model of the multi-choice random linear programming problem in Sect. 2. The deterministic form of the proposed model is established in Sect. 3. In Sect. 4, a case study is presented and the proposed method is illustrated with its results. Some conclusions are presented in Sect. 5.

2 Mathematical model of multi-choice random linear programming problem

The mathematical model of a multi-choice random linear programming problem can be presented as:

$$\begin{aligned} \max : Z = \sum _{j=1}^{n}\left\{ c_j^{(1)},c_j^{(2)},\ldots ,c_j^{(k_j)}\right\} x_j \end{aligned}$$
(1)

subject to

$$\begin{aligned}&\sum _{j=1}^{n}\left\{ a_{ij}^{(1)},a_{ij}^{(2)},\ldots ,a_{ij}^{(p_{ij})}\right\} x_j \le \{b_i^{(1)},b_i^{(2)},\ldots ,b_i^{(r_i)}\}, \qquad i = 1,2, \ldots ,m \end{aligned}$$
(2)
$$\begin{aligned}&x_{j} \ge 0,\quad j=1,2,3,\ldots ,n \end{aligned}$$
(3)

where \(X=(x_1,x_2,\ldots ,x_n)\) is an n-dimensional decision vector. We consider the decision variables to be deterministic. Each alternative value \(c_j^{(l)}\) \((l=1,2, \ldots , k_j)\) of the multi-choice parameter \(c_j\), \(j = 1,2, \ldots , n\); \(a_{ij}^{(s)}\) \((s=1,2, \ldots , p_{ij})\) of the multi-choice parameter \(a_{ij}\), \(i = 1,2, \ldots , m\); \(j = 1,2, \ldots , n\); and \(b_i^{(t)}\) \((t=1,2, \ldots , r_i)\) of the multi-choice parameter \(b_i\), \(i = 1,2, \ldots , m\), is considered an independent random variable. Since each of the alternative choices of the multi-choice parameters of problem (1–3) is a random variable, we cannot apply the usual solution procedures for mathematical programming problems directly. We therefore need to develop a suitable methodology to solve these problems.

We consider that the constraints \(\sum _{j=1}^{n}\{a_{ij}^{(1)},a_{ij}^{(2)},\ldots ,a_{ij}^{(p_{ij})}\}x_j \le \{b_i^{(1)},b_i^{(2)},\ldots ,b_i^{(r_i)}\},\, i = 1,2, \ldots ,m\) of problem (1–3) need not hold almost surely; instead, they may hold with some known probabilities. More precisely, the original m constraints are interpreted as:

$$\begin{aligned} \Pr \left( \sum _{j=1}^{n}\left\{ a_{ij}^{(1)},a_{ij}^{(2)},\ldots ,a_{ij}^{(p_{ij})}\right\} x_j \le \left\{ b_i^{(1)},b_i^{(2)},\ldots ,b_i^{(r_i)}\right\} \right) \ge 1-\gamma _i, \qquad i = 1,2, \ldots ,m \end{aligned}$$
(4)

where \(\Pr\) denotes probability and \(\gamma _i\) is the given probability of the extent to which violations of the i-th constraint are admitted. The inequalities (4) are called chance constraints; they mean that the i-th constraint may be violated, but at most a proportion \(\gamma _i\) of the time. Hence problem (1–3) can be restated as:

$$\begin{aligned} \max : z = \sum _{j=1}^{n}\left\{ c_j^{(1)},c_j^{(2)},\ldots ,c_j^{(k_j)}\right\} x_j \end{aligned}$$
(5)

subject to

$$\begin{aligned}&\Pr \left( \sum _{j=1}^{n}\left\{ a_{ij}^{(1)},a_{ij}^{(2)},\ldots ,a_{ij}^{(p_{ij})}\right\} x_j \le \left\{ b_i^{(1)},b_i^{(2)},\ldots ,b_i^{(r_i)}\right\} \right) \ge 1-\gamma _i; \, i = 1,2, \ldots ,m \end{aligned}$$
(6)
$$\begin{aligned}&x_{j} \ge 0,\quad j=1,2,3,\ldots ,n \end{aligned}$$
(7)
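A chance constraint such as (6) can be checked empirically for a fixed decision by sampling the random coefficients. The following is a minimal Monte Carlo sketch, with a single chosen alternative on each side of the constraint; all distributions and numbers here are hypothetical illustration data, not taken from the model above.

```python
# Monte Carlo check of a single chance constraint
# Pr(a * x <= b) >= 1 - gamma, where a and b are normal random
# variables. All numbers are hypothetical illustration data.
import numpy as np

rng = np.random.default_rng(0)

x = 2.0                              # a fixed decision-variable value
a = rng.normal(3.0, 0.5, 100_000)    # one chosen alternative of a_ij
b = rng.normal(10.0, 1.0, 100_000)   # one chosen alternative of b_i
gamma = 0.05

satisfaction = np.mean(a * x <= b)   # empirical Pr(a x <= b)
print(satisfaction >= 1 - gamma)
```

Here \(ax \sim \mathcal{N}(6,1)\) and \(b \sim \mathcal{N}(10,1)\), so the constraint holds with probability well above 0.95 and the check prints `True`.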

3 Deterministic model formulation

The model established in the previous section is a stochastic programming model. Since the model contains random variables, we need its deterministic form in order to solve the problem. We establish the equivalent deterministic model of problem (5–7) by using the chance-constrained programming technique.

3.1 Interpolating polynomials for the multi-choice parameters

The model (5–7) contains multi-choice parameters, which appear in the objective function as well as in the constraints, and each alternative value of a multi-choice parameter is a random variable. Because multiple choices are present for each parameter, no stochastic programming approach can be applied to the model directly. We therefore first transform the multi-choice parameters by formulating interpolating polynomials. An interpolating polynomial is formed by introducing an integer variable for each multi-choice parameter; if the parameter has k choices, the integer variable takes exactly k nodal values, and each node corresponds to exactly one alternative value (here a random variable) of the parameter. The interpolating polynomial passes through each of these nodes, so evaluating it at a node returns exactly the corresponding alternative. We replace each multi-choice parameter by an interpolating polynomial using Lagrange's formula.

For the multi-choice parameter \(c_j \, (j=1,2,\ldots ,n)\), we introduce an integer variable \(u_{j}\) which takes \(k_{j}\) values. We formulate a Lagrange interpolating polynomial \(f_{c_{j}}(u_{j})\) which passes through the \(k_{j}\) points given in Table 1.

Table 1 Data for multi-choice parameter \(c_{j}\)

Following Lagrange's formula [20], we obtain the interpolating polynomial for the multi-choice parameter \(c_{j}\,(j=1,2,\ldots ,n)\) as:

$$\begin{aligned} f_{c_{j}}(u_{j};c_j^{(1)},c_j^{(2)},\ldots ,c_j^{(k_j)})= & {} \frac{(u_{j}-1)(u_{j}-2)\cdots (u_{j}-k_{j}+1)}{(-1)^{(k_{j}-1)}(k_{j}-1)!}c_{j}^{(1)}\nonumber \\&+\frac{u_{j}(u_{j}-2)(u_{j}-3)\cdots (u_{j}-k_{j}+1)}{(-1)^{(k_{j}-2)}1!(k_{j}-2)!}c_{j}^{(2)}+\cdots \nonumber \\&+\frac{u_{j}(u_{j}-1)(u_{j}-2)\cdots (u_{j}-k_{j}+2)}{(k_{j}-1)!}c_{j}^{(k_{j})};\nonumber \\&j=1,2,\ldots ,n. \end{aligned}$$
(8)
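The behaviour of (8) can be checked numerically: at node \(u_j = l-1\) the polynomial returns exactly the l-th alternative value. A minimal sketch with hypothetical constant choices (in the model these values are random variables, but the Lagrange weights multiplying them are the same):

```python
# Lagrange interpolating polynomial through the nodes 0, 1, ..., k-1,
# where node l-1 carries the l-th alternative value of the parameter.
# The alternative values below are hypothetical illustration data.
import numpy as np

def lagrange_weights(u, k):
    """Weight of each alternative value at the evaluation point u."""
    nodes = np.arange(k)
    w = np.ones(k)
    for l in range(k):
        for s in range(k):
            if s != l:
                w[l] *= (u - nodes[s]) / (nodes[l] - nodes[s])
    return w

choices = [4.0, 7.5, 6.0]   # k = 3 alternatives for one parameter
k = len(choices)

# At each node u = 0, 1, 2 the polynomial reproduces exactly one choice.
for u in range(k):
    print(u, float(np.dot(lagrange_weights(u, k), choices)))
```

At a node, the weight vector is one-hot (1 for the corresponding alternative, 0 for the rest), which is exactly the selection mechanism (8) relies on.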

Similarly, we introduce an integer variable \(w_{ij}\) for the multi-choice parameter \(a_{ij}\,(i=1,2,\ldots,m;\,j=1,2,\ldots,n)\). The integer variable \(w_{ij}\) takes \(p_{ij}\) different values. Following Lagrange's formula, we construct an interpolating polynomial \(f_{a_{ij}}(w_{ij})\) which passes through the \(p_{ij}\) points given in Table 2. The interpolating polynomial can be written as:

$$\begin{aligned} f_{a_{ij}}(w_{ij};a_{ij}^{(1)},a_{ij}^{(2)},\ldots ,a_{ij}^{(p_{ij})})= & {} \frac{(w_{ij}-1)(w_{ij}-2)\cdots (w_{ij}-p_{ij}+1)}{(-1)^{(p_{ij}-1)}(p_{ij}-1)!}a_{ij}^{(1)}\nonumber \\&+\frac{w_{ij}(w_{ij}-2)\cdots (w_{ij}-p_{ij}+1)}{(-1)^{(p_{ij}-2)}1!(p_{ij}-2)!}a_{ij}^{(2)}+\cdots \nonumber \\&+\frac{w_{ij}(w_{ij}-1)(w_{ij}-2)\cdots (w_{ij}-p_{ij}+2)}{(p_{ij}-1)!}a_{ij}^{(p_{ij})};\nonumber \\&i=1,2,\ldots ,m; \quad j=1,2,\ldots ,n. \end{aligned}$$
(9)

Similarly, we introduce a new integer variable \(v_i\) for the multi-choice parameter \(b_i\,(i=1,2,\ldots,m)\) and formulate the corresponding interpolating polynomial \(f_{b_i}(v_i)\), which passes through the data points given in Table 3. The interpolating polynomial can be formulated as:

$$\begin{aligned} f_{b_i}(v_{i};b_i^{(1)},b_i^{(2)},\ldots ,b_i^{(r_i)})= & {} \frac{(v_{i}-1)(v_{i}-2)\cdots (v_{i}-r_{i}+1)}{(-1)^{(r_{i}-1)}(r_i-1)!}b_i^{(1)}\nonumber \\&+\frac{v_{i}(v_{i}-2)(v_{i}-3)\cdots (v_{i}-r_{i}+1)}{(-1)^{(r_{i}-2)}1!(r_i-2)!}b_i^{(2)}+\cdots \nonumber \\&+\frac{v_{i}(v_{i}-1)(v_{i}-2)\cdots (v_{i}-r_{i}+2)}{(r_i-1)!}b_i^{(r_{i})};\nonumber \\&i=1,2,\ldots ,m. \end{aligned}$$
(10)
Table 2 Data for multi-choice parameter \(a_{ij}\)
Table 3 Data for multi-choice parameter \(b_{i}\)

Note that random variables are present in each of the above interpolating polynomials. After replacing the multi-choice parameters by their interpolating polynomials, we obtain a transformed stochastic model which can be stated as:

$$\begin{aligned} \max : z = \sum _{j=1}^{n}f_{c_{j}}\left( u_{j};c_j^{(1)},c_j^{(2)},\ldots ,c_j^{(k_j)}\right) x_j \end{aligned}$$
(11)

subject to

$$\begin{aligned}&\Pr \left( \sum _{j=1}^{n}f_{a_{ij}}(w_{ij};a_{ij}^{(1)},a_{ij}^{(2)},\ldots ,a_{ij}^{(p_{ij})})x_j \le f_{b_i}(v_{i};b_i^{(1)},b_i^{(2)},\ldots ,b_i^{(r_i)})\right) \ge 1-\gamma _i \nonumber \\&0<\gamma _i<1,\quad i = 1,2, \ldots ,m\nonumber \\&x_{j} \ge 0,\quad j=1,2,3,\ldots ,n\nonumber \\&0\le u_{j} \le k_{j}-1\nonumber \\&0\le w_{ij} \le p_{ij}-1\nonumber \\&0\le v_i \le r_i-1\nonumber \\&u_{j},w_{ij},v_i\in {\mathbb {N}}_{0};\quad i=1,2,3,\ldots ,m;\, j=1,2,\ldots ,n. \end{aligned}$$
(12)

3.2 Equivalent deterministic form of the chance constraints

The alternative choices of the multi-choice random parameters may follow any continuous distribution, e.g. uniform, normal or exponential, and these random variables are independent. The interpolating polynomial corresponding to each multi-choice random parameter is a linear combination of independent random variables, so the polynomial itself is a random variable. Its distribution function can be found by using the following theorem:

Theorem 1

Let X and Y be two independent random variables with distribution functions \(F_X(x)\) and \(F_Y(y)\) respectively, defined for all real numbers. Then the sum \(Z=X+Y\) is a random variable with distribution function \(F_Z(z)\) given by:

$$\begin{aligned} F_Z(z)= & {} \int _{-\infty }^{\infty }F_X(z-y)f_Y(y) dy \end{aligned}$$
(13)
$$\begin{aligned}= & {} \int _{-\infty }^{\infty }F_Y(z-x)f_X(x) dx \end{aligned}$$
(14)

where \(f_X(x)\) and \(f_Y(y)\) are the probability density functions of the random variables X and Y respectively.
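Theorem 1 can be verified numerically. The sketch below evaluates the convolution integral (13) for two hypothetical independent normal variables, where the exact CDF of the sum is known in closed form:

```python
# Numerical check of Theorem 1: for independent X and Y, the CDF of
# Z = X + Y equals F_Z(z) = integral of F_X(z - y) f_Y(y) dy.
# Two normals are used because the exact answer is known in closed form.
import numpy as np
from scipy.stats import norm

mu_x, sd_x = 1.0, 0.8
mu_y, sd_y = 2.0, 0.6

# evaluate the convolution integral on a fine grid covering f_Y's mass
y = np.linspace(-4.0, 8.0, 6001)
dy = y[1] - y[0]
z = 3.5

integrand = norm.cdf(z - y, loc=mu_x, scale=sd_x) * norm.pdf(y, loc=mu_y, scale=sd_y)
F_Z_numeric = float(np.sum(integrand) * dy)

# for independent normals, Z ~ N(mu_x + mu_y, sd_x^2 + sd_y^2)
F_Z_exact = float(norm.cdf(z, loc=mu_x + mu_y, scale=np.hypot(sd_x, sd_y)))
print(abs(F_Z_numeric - F_Z_exact) < 1e-4)
```

The numerical integral and the closed-form normal CDF agree to well within the printed tolerance, which is what Theorem 1 asserts.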

In real-life situations, the alternative choices of a multi-choice random parameter are usually drawn from the same distribution. However, if the alternative choices come from different distributions, the equivalent deterministic form of the i-th constraint of the problem can be established in the following way:

$$\begin{aligned}&\Pr \left( \sum _{j=1}^{n}f_{a_{ij}}(w_{ij};a_{ij}^{(1)},a_{ij}^{(2)},\ldots ,a_{ij}^{(p_{ij})})x_j\; \le\; f_{b_i}(v_{i};b_i^{(1)},b_i^{(2)},\ldots ,b_i^{(r_i)})\right) \, \ge \, 1-\gamma _i \end{aligned}$$
(15)
$$\begin{aligned}&\Rightarrow \Pr \left( \sum _{j=1}^{n}f_{a_{ij}}(w_{ij};a_{ij}^{(1)},a_{ij}^{(2)},\ldots ,a_{ij}^{(p_{ij})})x_j- f_{b_i}(v_{i};b_i^{(1)},b_i^{(2)},\ldots ,b_i^{(r_i)})\le 0 \right) \ge 1-\gamma _i \end{aligned}$$
(16)

Let us define \(S_i=\sum _{j=1}^{n}f_{a_{ij}}(w_{ij};a_{ij}^{(1)},a_{ij}^{(2)},\ldots ,a_{ij}^{(p_{ij})})x_j- f_{b_i}(v_{i};b_i^{(1)},b_i^{(2)},\ldots ,b_i^{(r_i)}).\) Then \(S_i\) is a random variable with distribution function \(F_{S_i}\), which can be found by using Theorem 1. Hence from Eq. (16) we have the following:

$$\begin{aligned}&\Pr \left( S_i\le 0 \right) \ge 1-\gamma _i\\&\quad \Rightarrow F_{S_i}(0) \ge 1-\gamma _i\\&\quad \Rightarrow F_{S_i}^{-1}(1-\gamma _i) \le 0\\&\quad \Rightarrow K_{1-\gamma _i} \le 0 \qquad \text{where } K_{1-\gamma _i}=\max F_{S_i}^{-1}(1-\gamma _i) \end{aligned}$$

which leads us to the deterministic form of the i-th chance constraint of the problem.

3.2.1 Chance constraints with normal random variable

Let us consider the case in which all the random variables present in the problem are independent normal random variables: \(c_j^{(l)}\), \(a_{ij}^{(s)}\) and \(b_i^{(t)}\) (\(l=1,2, \ldots , k_j\); \(s=1,2, \ldots , p_{ij}\); \(t=1,2, \ldots , r_i\); \(i = 1,2, \ldots , m\); \(j = 1,2, \ldots , n\)) are normally distributed with known means and variances. Specifically, let

$$\begin{aligned}&c_j^{(l)}\sim \mathcal{N}(\mu _{0j}^{(l)},(\sigma _{0j}^{(l)})^2)\quad l=1,2, \ldots , k_{j};\quad j = 1,2, \ldots , n.\\&a_{ij}^{(s)}\sim \mathcal{N}(\mu _{ij}^{(s)},(\sigma _{ij}^{(s)})^2)\quad s=1,2, \ldots , p_{ij};\quad i = 1,2, \ldots , m;\,j = 1,2, \ldots , n.\\&b_i^{(t)}\sim \mathcal{N}(\mu _{i}^{(t)},(\sigma _{i}^{(t)})^2)\quad t=1,2, \ldots , r_{i};\quad i = 1,2, \ldots , m. \end{aligned}$$

where the mean and variance of the parameters are given by Table 4.

Table 4 Mean and variance of the normal random variables

Now, the interpolating polynomial \(f_{a_{ij}}(w_{ij};a_{ij}^{(1)},a_{ij}^{(2)},\ldots ,a_{ij}^{(p_{ij})})\) is a linear function of the \(p_{ij}\) independent normal random variables \(a_{ij}^{(s)}\) with known means and variances. Likewise, the interpolating polynomial \(f_{b_i}(v_{i};b_i^{(1)},b_i^{(2)},\ldots ,b_i^{(r_i)})\) is a linear function of the \(r_{i}\) independent normal random variables \(b_{i}^{(t)}\) with known means and variances. Let

$$\begin{aligned}&M_{ij}=x_{j}f_{a_{ij}}(w_{ij};a_{ij}^{(1)},a_{ij}^{(2)},\ldots ,a_{ij}^{(p_{ij})}) \end{aligned}$$
(17)
$$\begin{aligned}&N_i=f_{b_i}(v_{i};b_i^{(1)},b_i^{(2)},\ldots ,b_i^{(r_i)}) \end{aligned}$$
(18)

Then \(M_{ij}\) is a random variable with mean \(E(M_{ij})\) and variance \(V(M_{ij})\), defined as,

$$\begin{aligned} E(M_{ij})\, =\, & {} E\left( \frac{(w_{ij}-1)(w_{ij}-2)\cdots (w_{ij}-p_{ij}+1)}{(-1)^{(p_{ij}-1)}(p_{ij}-1)!}x_{j}a_{ij}^{(1)} \right. \\ & +\frac{w_{ij}(w_{ij}-2)(w_{ij}-3)\cdots (w_{ij}-p_{ij}+1)}{(-1)^{(p_{ij}-2)}1!(p_{ij}-2)!}x_{j}a_{ij}^{(2)} \\ &\left. +\cdots +\frac{w_{ij}(w_{ij}-1)(w_{ij}-2)\cdots (w_{ij}-p_{ij}+2)}{(p_{ij}-1)!}x_{j}a_{ij}^{(p_{ij})}\right) \\= & {} \frac{(w_{ij}-1)(w_{ij}-2)\cdots (w_{ij}-p_{ij}+1)}{(-1)^{(p_{ij}-1)}(p_{ij}-1)!}x_{j}E(a_{ij}^{(1)})\\ &+\frac{w_{ij}(w_{ij}-2)(w_{ij}-3)\cdots (w_{ij}-p_{ij}+1)}{(-1)^{(p_{ij}-2)}1!(p_{ij}-2)!}x_{j}E(a_{ij}^{(2)})\\&+\cdots +\frac{w_{ij}(w_{ij}-1)(w_{ij}-2)\cdots (w_{ij}-p_{ij}+2)}{(p_{ij}-1)!}x_{j}E(a_{ij}^{(p_{ij})})\\= & {} \frac{(w_{ij}-1)(w_{ij}-2)\cdots (w_{ij}-p_{ij}+1)}{(-1)^{(p_{ij}-1)}(p_{ij}-1)!}x_{j}\mu _{ij}^{(1)}\\ &+\frac{w_{ij}(w_{ij}-2)(w_{ij}-3)\cdots (w_{ij}-p_{ij}+1)}{(-1)^{(p_{ij}-2)}1!(p_{ij}-2)!}x_{j}\mu _{ij}^{(2)}\\&+\cdots +\frac{w_{ij}(w_{ij}-1)(w_{ij}-2)\cdots (w_{ij}-p_{ij}+2)}{(p_{ij}-1)!}x_{j}\mu _{ij}^{(p_{ij})}\\ V(M_{ij})= & {} \left[ \frac{(w_{ij}-1)(w_{ij}-2)\cdots (w_{ij}-p_{ij}+1)}{(-1)^{(p_{ij}-1)}(p_{ij}-1)!}x_{j}\right] ^{2}(\sigma _{ij}^{(1)})^{2}\\&+\left[ \frac{w_{ij}(w_{ij}-2)(w_{ij}-3)\cdots (w_{ij}-p_{ij}+1)}{(-1)^{(p_{ij}-2)}1!(p_{ij}-2)!}x_{j}\right] ^{2}(\sigma _{ij}^{(2)})^{2}\\&+\cdots +\left[ \frac{w_{ij}(w_{ij}-1)(w_{ij}-2)\cdots (w_{ij}-p_{ij}+2)}{(p_{ij}-1)!}x_{j}\right] ^{2}(\sigma _{ij}^{(p_{ij})})^{2} \end{aligned}$$

Similarly, \(N_i\) is a random variable with mean \(E(N_{i})\) and variance \(V(N_{i})\), given by:

$$\begin{aligned} E(N_{i})= & {} \frac{(v_{i}-1)(v_{i}-2)\cdots (v_{i}-r_{i}+1)}{(-1)^{(r_{i}-1)}(r_i-1)!}\mu _{i}^{(1)}+\frac{v_{i}(v_{i}-2)(v_{i}-3)\cdots (v_{i}-r_{i}+1)}{(-1)^{(r_{i}-2)}1!(r_i-2)!}\mu _{i}^{(2)}\\&+\cdots +\frac{v_{i}(v_{i}-1)(v_{i}-2)\cdots (v_{i}-r_{i}+2)}{(r_i-1)!}\mu _{i}^{(r_{i})}\\ V(N_{i})= & {} \left[ \frac{(v_{i}-1)(v_{i}-2)\cdots (v_{i}-r_{i}+1)}{(-1)^{(r_{i}-1)}(r_i-1)!}\right] ^{2}(\sigma _{i}^{(1)})^{2}+\left[ \frac{v_{i}(v_{i}-2)(v_{i}-3)\cdots (v_{i}-r_{i}+1)}{(-1)^{(r_{i}-2)}1!(r_i-2)!}\right] ^{2}(\sigma _{i}^{(2)})^{2}\\&+\cdots +\left[ \frac{v_{i}(v_{i}-1)(v_{i}-2)\cdots (v_{i}-r_{i}+2)}{(r_i-1)!}\right] ^{2}(\sigma _{i}^{(r_{i})})^{2} \end{aligned}$$

Let us define \(A_i=\sum _{j=1}^{n}M_{ij}-N_{i},\,i=1,2,\ldots ,m\); then each \(A_i\) is also a normally distributed random variable. Hence the constraint (16) can be written as:

$$\begin{aligned}&\Pr \left( \frac{A_i-\mu _{A_i}}{\sigma _{A_i}}\le -\frac{\mu _{A_i}}{\sigma _{A_i}} \right) \ge 1-\gamma _i,\quad i=1,2,\ldots ,m \end{aligned}$$
(19)
$$\begin{aligned}&\Pr \left( \eta _i\le -\frac{\mu _{A_i}}{\sigma _{A_i}} \right) \ge 1-\gamma _i\qquad \left[ \text{taking } \eta _i=\frac{A_i-\mu _{A_i}}{\sigma _{A_i}}\right] \end{aligned}$$
(20)

where \(\mu _{A_i}=\sum _{j=1}^{n}E(M_{ij})-E(N_{i})\), \(\sigma _{A_i}=\sqrt{\sum _{j=1}^{n}V(M_{ij})+V(N_{i})}\) and \(\eta _i\) is a standard normal random variable. Therefore the chance constraint (20) holds if and only if

$$\begin{aligned} \varPhi \left( -\frac{\mu _{A_i}}{\sigma _{A_i}}\right) \ge \varPhi (\eta _{\gamma _i}),\quad i=1,2,\ldots ,m. \end{aligned}$$
(21)

where \(\varPhi (z)=\frac{1}{\sqrt{2\pi }}\int _{-\infty }^{z}e^{-\frac{\eta ^2}{2}}d\eta\) is the cumulative distribution function (C.D.F.) of the standard normal random variable. The above inequality can be simplified as:

$$\begin{aligned} \mu _{A_i}+\sigma _{A_i}\eta _{\gamma _i}\le 0,\quad i=1,2,\ldots ,m. \end{aligned}$$
(22)

Hence the deterministic form of the i-th \((i=1,2,\ldots ,m)\) constraint is the following non-linear constraint:

$$\begin{aligned}&\sum _{j=1}^{n}x_{j}\left( \frac{(w_{ij}-1)(w_{ij}-2)\cdots (w_{ij}-p_{ij}+1)}{(-1)^{(p_{ij}-1)}(p_{ij}-1)!}\mu _{ij}^{(1)} +\frac{w_{ij}(w_{ij}-2)(w_{ij}-3)\cdots (w_{ij}-p_{ij}+1)}{(-1)^{(p_{ij}-2)}1!(p_{ij}-2)!}\mu _{ij}^{(2)}\right. \\&\quad \left. +\cdots +\frac{w_{ij}(w_{ij}-1)(w_{ij}-2)\cdots (w_{ij}-p_{ij}+2)}{(p_{ij}-1)!}\mu _{ij}^{(p_{ij})}\right) +\eta _{\gamma _i}\sqrt{\sum _{j=1}^{n}V(M_{ij})+V(N_{i})}\\&\quad \le \left[ \frac{(v_{i}-1)(v_{i}-2)\cdots (v_{i}-r_{i}+1)}{(-1)^{(r_{i}-1)}(r_i-1)!}\mu _{i}^{(1)} +\frac{v_{i}(v_{i}-2)(v_{i}-3)\cdots (v_{i}-r_{i}+1)}{(-1)^{(r_{i}-2)}1!(r_i-2)!}\mu _{i}^{(2)}\right. \\&\qquad \left. +\cdots +\frac{v_{i}(v_{i}-1)(v_{i}-2)\cdots (v_{i}-r_{i}+2)}{(r_i-1)!}\mu _{i}^{(r_{i})}\right] \end{aligned}$$
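The reduction in (19)–(22) can be verified with the standard normal quantile: taking \(\eta _{\gamma _i} = \varPhi ^{-1}(1-\gamma _i)\), the probabilistic test \(\Pr (A_i \le 0) \ge 1-\gamma _i\) agrees with the deterministic test \(\mu _{A_i} + \sigma _{A_i}\eta _{\gamma _i} \le 0\). A minimal sketch with hypothetical values of the mean and standard deviation:

```python
# Deterministic equivalent of a normal chance constraint:
# Pr(A <= 0) >= 1 - gamma   <=>   mu_A + sigma_A * eta <= 0,
# where eta = Phi^{-1}(1 - gamma). Numbers are hypothetical.
from scipy.stats import norm

mu_A, sigma_A = -4.0, 1.5
gamma = 0.05
eta = norm.ppf(1 - gamma)        # about 1.645 for gamma = 0.05

lhs_prob = norm.cdf(0, loc=mu_A, scale=sigma_A)    # Pr(A <= 0)
deterministic_ok = mu_A + sigma_A * eta <= 0

print(lhs_prob >= 1 - gamma, deterministic_ok)     # the two tests agree
```

Varying `mu_A`, `sigma_A` or `gamma` keeps the two booleans equal, which is exactly the equivalence used to pass from (20) to (22).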

We have thus obtained the deterministic form of the chance constraints, which forms the feasible region (say S) of the model. To establish the deterministic model of problem (1–3), we must also find the deterministic form of the objective function. Depending on the aim of the decision maker, Charnes and Cooper [8] considered three types of decision rules for optimizing objective functions involving random variables:

  • (i) the minimum or maximum expected value model,

  • (ii) the minimum variance model and

  • (iii) the maximum probability model, also called the dependent-chance model.

Moreover, Kataoka [21] and Geoffrion [22] independently proposed the fractile criterion model. We establish the deterministic models by assuming that the random variables are independent normal random variables.

3.3 Expectation maximization model (E-model)

For situations where the decision maker wants to maximize the expected value of the objective function in (11–12), we consider the 'E'-model for the objective function [8]. Substituting the random variables by their expected values, we obtain the model:

$$\begin{aligned}\max : Z' =& \sum _{j=1}^{n}x_j\left[ \frac{(u_{j}-1)(u_{j}-2)\cdots (u_{j}-k_{j}+1)}{(-1)^{(k_{j}-1)}(k_{j}-1)!}\mu _{0j}^{(1)} +\frac{u_{j}(u_{j}-2)(u_{j}-3)\cdots (u_{j}-k_{j}+1)}{(-1)^{(k_{j}-2)}1!(k_{j}-2)!}\mu _{0j}^{(2)}\right. \nonumber \\ &\left.\quad +\cdots +\frac{u_{j}(u_{j}-1)(u_{j}-2)\cdots (u_{j}-k_{j}+2)}{(k_{j}-1)!}\mu _{0j}^{(k_{j})}\right] \end{aligned}$$
(23)

subject to

$$\begin{aligned}&\sum _{j=1}^{n}x_{j}\left( \frac{(w_{ij}-1)(w_{ij}-2)\cdots (w_{ij}-p_{ij}+1)}{(-1)^{(p_{ij}-1)}(p_{ij}-1)!}\mu _{ij}^{(1)} +\frac{w_{ij}(w_{ij}-2)(w_{ij}-3)\cdots (w_{ij}-p_{ij}+1)}{(-1)^{(p_{ij}-2)}1!(p_{ij}-2)!}\mu _{ij}^{(2)}\right. \nonumber \\&\quad \left. +\cdots +\frac{w_{ij}(w_{ij}-1)(w_{ij}-2)\cdots (w_{ij}-p_{ij}+2)}{(p_{ij}-1)!}\mu _{ij}^{(p_{ij})}\right) +\eta _{\gamma _i}\sqrt{\sum _{j=1}^{n}V(M_{ij})+V(N_{i})}\nonumber \\& \le \left[ \frac{(v_{i}-1)(v_{i}-2)\cdots (v_{i}-r_{i}+1)}{(-1)^{(r_{i}-1)}(r_i-1)!}\mu _{i}^{(1)} +\frac{v_{i}(v_{i}-2)(v_{i}-3)\cdots (v_{i}-r_{i}+1)}{(-1)^{(r_{i}-2)}1!(r_i-2)!}\mu _{i}^{(2)}\right. \nonumber \\&\quad \left. +\cdots +\frac{v_{i}(v_{i}-1)(v_{i}-2)\cdots (v_{i}-r_{i}+2)}{(r_i-1)!}\mu _{i}^{(r_{i})}\right] \end{aligned}$$
(24)
$$\begin{aligned}&0<\gamma _i<1,\quad i = 1,2, \ldots ,m \end{aligned}$$
(25)
$$\begin{aligned}&x_{j} \ge 0,\quad j=1,2,3,\ldots ,n \end{aligned}$$
(26)
$$\begin{aligned}&0\le u_{j} \le k_{j}-1 \end{aligned}$$
(27)
$$\begin{aligned}&0\le w_{ij} \le p_{ij}-1 \end{aligned}$$
(28)
$$\begin{aligned}&0\le v_i \le r_i-1 \end{aligned}$$
(29)
$$\begin{aligned}&u_{j},w_{ij},v_i\in {\mathbb {N}}_{0};\quad i=1,2,3,\ldots ,m;\, j=1,2,\ldots ,n. \end{aligned}$$
(30)

Hence we obtain an equivalent deterministic model of problem (1–3), which we solve by a non-linear programming approach.
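Since the integer variables take only a few nodal values, one hedged way to attack a small E-model instance is to enumerate every combination of node values and solve the resulting program for each. The sketch below does this for a toy instance in which only the cost coefficients are multi-choice and the constraint data are deterministic (so each subproblem is a plain LP); all names and numbers are hypothetical illustration data, not the paper's case study.

```python
# Brute-force E-model for a tiny instance: the multi-choice cost
# coefficients are replaced by the means of the chosen alternatives,
# and every combination of choices is enumerated. Constraint data are
# deterministic here for brevity; all numbers are hypothetical.
from itertools import product
from scipy.optimize import linprog

# means of the alternatives of c_1 and c_2 (k_1 = 2, k_2 = 3 choices)
c_means = [[3.0, 5.0], [2.0, 4.0, 1.0]]

# one constraint: 2 x1 + 1 x2 <= 10, x >= 0
A_ub, b_ub = [[2.0, 1.0]], [10.0]

best = None
for choice in product(*(range(len(m)) for m in c_means)):
    c = [c_means[j][choice[j]] for j in range(len(c_means))]
    # linprog minimizes, so negate to maximize the expected objective
    res = linprog([-cj for cj in c], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * len(c))
    if res.success and (best is None or -res.fun > best[0]):
        best = (-res.fun, choice, res.x)

z_star, choice_star, x_star = best
print(z_star, choice_star)
```

For realistic sizes this enumeration grows combinatorially, which is why the paper instead keeps \(u_j\), \(w_{ij}\), \(v_i\) as bounded integer variables inside one non-linear program.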

3.4 Variance minimization model (V-model)

For situations where the decision maker minimizes the variance of the random objective function in (11–12), subject to the expected value of the objective function achieving a certain target fixed by the decision maker, we consider the 'V'-model for the objective function. Let the target value fixed by the decision maker be T. Here we minimize the variance of the objective function. The equivalent deterministic model is presented as:

$$\begin{aligned} \min : Z' =& \sum _{j=1}^{n}x_j^2\left( \left[ \frac{(u_{j}-1)\cdots (u_{j}-k_{j}+1)}{(-1)^{(k_{j}-1)}(k_{j}-1)!}\right] ^2(\sigma _{0j}^{(1)})^{2} +\left[ \frac{u_{j}(u_{j}-2)(u_{j}-3)\cdots (u_{j}-k_{j}+1)}{(-1)^{(k_{j}-2)}1!(k_{j}-2)!}\right] ^2(\sigma _{0j}^{(2)})^{2}\right. \nonumber \\&\quad \left. +\cdots +\left[ \frac{u_{j}(u_{j}-1)(u_{j}-2)\cdots (u_{j}-k_{j}+2)}{(k_{j}-1)!}\right] ^2(\sigma _{0j}^{(k_{j})})^{2}\right) \end{aligned}$$
(31)

subject to

$$\begin{aligned}&\sum _{j=1}^{n}x_j\left[ \frac{(u_{j}-1)(u_{j}-2)\cdots (u_{j}-k_{j}+1)}{(-1)^{(k_{j}-1)}(k_{j}-1)!}\mu _{0j}^{(1)} +\frac{u_{j}(u_{j}-2)(u_{j}-3)\cdots (u_{j}-k_{j}+1)}{(-1)^{(k_{j}-2)}1!(k_{j}-2)!}\mu _{0j}^{(2)}\right. \nonumber \\&\qquad \left. +\cdots +\frac{u_{j}(u_{j}-1)(u_{j}-2)\cdots (u_{j}-k_{j}+2)}{(k_{j}-1)!}\mu _{0j}^{(k_{j})}\right] \ge T\end{aligned}$$
(32)
$$\begin{aligned}&\quad \text{along with the constraints } (24\text{--}30) \end{aligned}$$
(33)

3.5 Probability maximization model (P-model)

For situations where the decision maker wants to maximize the probability that the random objective function in (11–12) is greater than or equal to a certain permissible level, we consider the probability maximization model [23]. Taking the minimum permissible level of the objective function to be z, we establish the objective function of the model as:

$$\begin{aligned} \max : p(X,U)=\Pr \left( \sum _{j=1}^{n}f_{c_{j}}(u_{j};c_j^{(1)},c_j^{(2)},\ldots ,c_j^{(k_j)})x_j \, \ge\, z\right) \end{aligned}$$
(34)

We assume that \(c_j^{(l)}\, (l=1,2,\ldots ,k_j;\,j=1,2,\ldots ,n)\) are independent normal random variables with known means and variances, so \(\sum _{j=1}^{n}f_{c_{j}}(u_{j};c_j^{(1)},c_j^{(2)},\ldots ,c_j^{(k_j)})x_j\) is a normal random variable. Let \(M=\sum _{j=1}^{n}f_{c_{j}}(u_{j};c_j^{(1)},c_j^{(2)},\ldots ,c_j^{(k_j)})x_j\); then the expected value and variance of M are given by,

$$\begin{aligned} E(M)= & {} \sum _{j=1}^{n}x_j\left[ \frac{(u_{j}-1)(u_{j}-2)\cdots (u_{j}-k_{j}+1)}{(-1)^{(k_{j}-1)}(k_j-1)!}\mu _{0j}^{(1)}+\frac{u_{j}(u_{j}-2)(u_{j}-3)\cdots (u_{j}-k_{j}+1)}{(-1)^{(k_{j}-2)}1!(k_j-2)!}\mu _{0j}^{(2)}\right. \\&\left. +\cdots +\frac{u_{j}(u_{j}-1)(u_{j}-2)\cdots (u_{j}-k_{j}+2)}{(k_j-1)!}\mu _{0j}^{(k_{j})}\right] \\ V(M)= & {} \sum _{j=1}^{n}x_j^{2}\left( \left[ \frac{(u_{j}-1)\cdots (u_{j}-k_{j}+1)}{(-1)^{(k_{j}-1)}(k_j-1)!}\right] ^{2}(\sigma _{0j}^{(1)})^{2}+\left[ \frac{u_{j}(u_{j}-2)\cdots (u_{j}-k_{j}+1)}{(-1)^{(k_{j}-2)}1!(k_j-2)!}\right] ^{2}(\sigma _{0j}^{(2)})^{2}\right. \\&\left. +\cdots +\left[ \frac{u_{j}(u_{j}-1)(u_{j}-2)\cdots (u_{j}-k_{j}+2)}{(k_j-1)!}\right] ^{2}(\sigma _{0j}^{(k_{j})})^{2}\right) \end{aligned}$$

Therefore, we have

$$\begin{aligned}&\Pr \left( \sum _{j=1}^{n}f_{c_{j}}(u_{j};c_j^{(1)},c_j^{(2)},\ldots ,c_j^{(k_j)})x_j \, \ge\, z\right) \end{aligned}$$
(35)
$$\begin{aligned}&=\Pr \left( M\ge z\right) =1-\varPhi \left( \frac{z-E(M)}{\sqrt{V(M)}}\right) \end{aligned}$$
(36)

Hence the transformed problem is:

$$\begin{aligned} \max : p(X,U)=1-\varPhi \left( \frac{z-E(M)}{\sqrt{V(M)}}\right) \end{aligned}$$
(37)

subject to

$$\begin{aligned}&\sum _{j=1}^{n}x_{j}\left( \frac{(w_{ij}-1)(w_{ij}-2)\cdots (w_{ij}-p_{ij}+1)}{(-1)^{(p_{ij}-1)}(p_{ij}-1)!}\mu _{ij}^{(1)} +\frac{w_{ij}(w_{ij}-2)(w_{ij}-3)\cdots (w_{ij}-p_{ij}+1)}{(-1)^{(p_{ij}-2)}1!(p_{ij}-2)!}\mu _{ij}^{(2)}\right. \nonumber \\&\quad \left. +\cdots +\frac{w_{ij}(w_{ij}-1)(w_{ij}-2)\cdots (w_{ij}-p_{ij}+2)}{(p_{ij}-1)!}\mu _{ij}^{(p_{ij})}\right) +\eta _{\gamma _i}\sqrt{\sum _{j=1}^{n}V(M_{ij})+V(N_{i})}\nonumber \\&\le \left[ \frac{(v_{i}-1)(v_{i}-2)\cdots (v_{i}-r_{i}+1)}{(-1)^{(r_{i}-1)}(r_i-1)!}\mu _{i}^{(1)} +\frac{v_{i}(v_{i}-2)(v_{i}-3)\cdots (v_{i}-r_{i}+1)}{(-1)^{(r_{i}-2)}1!(r_i-2)!}\mu _{i}^{(2)}\right. \nonumber \\&\quad \left. +\cdots +\frac{v_{i}(v_{i}-1)(v_{i}-2)\cdots (v_{i}-r_{i}+2)}{(r_i-1)!}\mu _{i}^{(r_{i})}\right] \end{aligned}$$
(38)
$$\begin{aligned}&0<\gamma _i<1,\quad i = 1,2, \ldots ,m \end{aligned}$$
(39)
$$\begin{aligned}&x_{j} \ge 0,\quad j=1,2,3,\ldots ,n \end{aligned}$$
(40)
$$\begin{aligned}&0\le u_{j} \le k_{j}-1 \end{aligned}$$
(41)
$$\begin{aligned}&0\le w_{ij} \le p_{ij}-1 \end{aligned}$$
(42)
$$\begin{aligned}&0\le v_i \le r_i-1 \end{aligned}$$
(43)
$$\begin{aligned}&u_{j},w_{ij},v_i\in {\mathbb {N}}_{0};\quad i=1,2,3,\ldots ,m;\, j=1,2,\ldots ,n. \end{aligned}$$
(44)
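For a fixed candidate solution, the P-model objective (37) is simply a normal tail probability. A minimal sketch, assuming the chosen alternatives' means and variances have already been aggregated into \(E(M)\) and \(V(M)\) (the values below are hypothetical):

```python
# P-model objective for a fixed (x, u): the probability that the
# normal objective value M reaches the permissible level z.
# E_M and V_M are assumed precomputed from (hypothetical) data.
import math
from scipy.stats import norm

E_M, V_M = 50.0, 16.0     # mean and variance of M
z = 45.0                  # minimum permissible objective level

p = 1.0 - norm.cdf((z - E_M) / math.sqrt(V_M))   # Pr(M >= z)
print(round(p, 4))
```

An optimizer for the full P-model would maximize this quantity over x and the integer node variables, subject to constraints (38)–(44).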

3.6 Fractile criterion optimization model

For situations where the decision maker is risk-averse and wants to guarantee that the probability of the objective function attaining its target value is greater than or equal to some given threshold, we adopt the fractile criterion model [22]. The fractile criterion model is complementary to the probability maximization model (P-model): the target value of the objective function is maximized, subject to the probability that the objective function value exceeds the target being guaranteed to be larger than a given threshold. Denoting the target variable by \(\lambda\) and the given threshold by \(\beta\), the fractile criterion model is:

$$\begin{aligned} \max : \lambda \end{aligned}$$
(45)

subject to

$$\begin{aligned}&\Pr \left( \sum _{j=1}^{n}f_{c_{j}}(u_{j};c_j^{(1)},c_j^{(2)},\ldots ,c_j^{(k_j)})x_j \ge \lambda \right) \ge 1-\beta \end{aligned}$$
(46)
$$\begin{aligned}&0<\beta <1 \end{aligned}$$
(47)
$$\begin{aligned}&\quad \text{along with the constraints } (24\text{--}30) \end{aligned}$$
(48)

This problem contains an additional probabilistic constraint. Using the chance-constrained programming approach, we obtain the equivalent deterministic model:

$$\begin{aligned} \max : \lambda \end{aligned}$$
(49)

subject to

$$\begin{aligned}&\sum _{j=1}^{n}x_{j}\left( \frac{(u_{j}-1)\cdots (u_{j}-k_{j}+1)}{(-1)^{(k_{j}-1)}(k_{j}-1)!}\mu _{0j}^{(1)} +\frac{u_{j}(u_{j}-2)(u_{j}-3)\cdots (u_{j}-k_{j}+1)}{(-1)^{(k_{j}-2)}1!(k_{j}-2)!}\mu _{0j}^{(2)}\right. \nonumber \\&\qquad \left. +\cdots +\frac{u_{j}(u_{j}-1)(u_{j}-2)\cdots (u_{j}-k_{j}+2)}{(k_{j}-1)!}\mu _{0j}^{(k_{j})}\right) +\eta _{1-\beta }\sqrt{V(M)}\ge \lambda \end{aligned}$$
(50)
$$\begin{aligned}&\sum _{j=1}^{n}x_{j}\left( \frac{(w_{ij}-1)(w_{ij}-2)\cdots (w_{ij}-p_{ij}+1)}{(-1)^{(p_{ij}-1)}(p_{ij}-1)!}\mu _{ij}^{(1)} +\frac{w_{ij}(w_{ij}-2)(w_{ij}-3)\cdots (w_{ij}-p_{ij}+1)}{(-1)^{(p_{ij}-2)}1!(p_{ij}-2)!}\mu _{ij}^{(2)}\right. \nonumber \\&\quad \left. +\cdots +\frac{w_{ij}(w_{ij}-1)(w_{ij}-2)\cdots (w_{ij}-p_{ij}+2)}{(p_{ij}-1)!}\mu _{ij}^{(p_{ij})}\right) +\eta _{\gamma _i}\sqrt{\sum _{j=1}^{n}V(M_{ij})+V(N_{i})}\nonumber \\&\le \left[ \frac{(v_{i}-1)(v_{i}-2)\cdots (v_{i}-r_{i}+1)}{(-1)^{(r_{i}-1)}(r_i-1)!}\mu _{i}^{(1)} +\frac{v_{i}(v_{i}-2)(v_{i}-3)\cdots (v_{i}-r_{i}+1)}{(-1)^{(r_{i}-2)}1!(r_i-2)!}\mu _{i}^{(2)}\right.\\ &\quad\left. +\cdots +\frac{v_{i}(v_{i}-1)(v_{i}-2)\cdots (v_{i}-r_{i}+2)}{(r_i-1)!}\mu _{i}^{(r_{i})}\right] \end{aligned}$$
(51)
$$\begin{aligned}&0<\gamma _i<1,\, i = 1,2, \ldots ,m \end{aligned}$$
(52)
$$\begin{aligned}&0<\beta <1 \end{aligned}$$
(53)
$$\begin{aligned}&x_{j} \ge 0,\quad j=1,2,3,\ldots ,n \end{aligned}$$
(54)
$$\begin{aligned}&0\le u_{j} \le k_{j}-1 \end{aligned}$$
(55)
$$\begin{aligned}&0\le w_{ij} \le p_{ij}-1 \end{aligned}$$
(56)
$$\begin{aligned}&0\le v_i \le r_i-1 \end{aligned}$$
(57)
$$\begin{aligned}&u_{j},w_{ij},v_i\in {\mathbb {N}}_{0};\quad i=1,2,3,\ldots ,m;\, j=1,2,\ldots ,n. \end{aligned}$$
(58)

Hence, we have established the equivalent deterministic model of the proposed problem. The solution of the proposed problem is obtained by solving one of the four models described above, depending on the aim of the decision maker. Standard optimization software can be used to solve the deterministic model.
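As a concrete illustration of the chance-constrained conversion used above, the sketch below checks a single normal chance constraint \(\Pr(a^{T}x \ge b) \ge p\) against its deterministic equivalent \(\mu^{T}x - \Phi^{-1}(p)\sqrt{\sum_j \sigma_j^2 x_j^2} \ge b\), and verifies it by Monte Carlo. The numbers are illustrative only, not taken from the paper's tables:

```python
import numpy as np
from scipy.stats import norm

def deterministic_lhs(x, mu, var, p):
    """Deterministic-equivalent LHS of Pr(a^T x >= b) >= p for
    independent a_j ~ N(mu_j, var_j): the mean part minus the
    safety term Phi^{-1}(p) * sqrt(sum_j var_j x_j^2)."""
    x, mu, var = (np.asarray(v, float) for v in (x, mu, var))
    return mu @ x - norm.ppf(p) * np.sqrt(var @ x**2)

# Illustrative data (hypothetical, not from the paper's tables)
x = np.array([0.5, 0.3, 0.2])
mu = np.array([11.6, 49.2, 11.28])
var = np.array([0.25, 1.1, 0.194])
b, p = 20.7, 0.95

satisfied = deterministic_lhs(x, mu, var, p) >= b

# Monte Carlo check of the original probabilistic constraint
rng = np.random.default_rng(0)
a = rng.normal(mu, np.sqrt(var), size=(200_000, 3))
empirical = np.mean(a @ x >= b)
```

The structure of `deterministic_lhs` mirrors constraint (51): a mean term plus an \(\eta\)-scaled standard-deviation term, with \(\eta_{p} = -\Phi^{-1}(p)\) negative for \(p > 0.5\).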

4 Numerical example and discussion

In this section we present a hypothetical case study (see [24]) to illustrate the problem and the methodology presented in this paper. We model the problem and solve it using the proposed solution procedure.

A cattle feed manufacturing company wants to produce one type of cattle feed mix, which is produced by mixing four types of food materials: Barley, Maize, Sesame Flakes and Groundnut. There are four markets where a sufficient amount of Barley is available; the price per unit and the quality of the Barley differ across these four markets. Similarly, there are five markets where Groundnut meal is available, and the price per unit and the quality of the Groundnut differ across these five markets. Three types of Maize are available in three different markets; the cost, protein content and fat content differ for each type. Two types of Sesame Flakes, made from two different types of Sesame, are available; their cost and their protein and fat contents are different. The purchase costs of these raw materials and their protein and fat contents are presented in Tables 5, 6, 7 and 8. We assume that the cost and protein content of each material are random variables, and that the fat content of each type of Maize is also a random variable. The source of the data for the cost of the different materials is http://164.100.222.56/amb/1/weeklymandiselect.asp. Protein and fat requirements vary greatly because of local food systems, food security and interventions. Therefore the possible requirement of protein is either 20.7, 21.0, 22 or 22.7 %, and the required fat content may be either 4.8, 5.0, 5.3, 5.9 or 6.2 %. The significance level of the probabilistic constraint corresponding to protein is 0.95, and the confidence level for the fat constraint is 0.98.

Table 5 Data table of barley
Table 6 Data table of groundnut
Table 7 Data table of maize
Table 8 Data table of sesame flake

Let \(x_1\), \(x_2\), \(x_3\), \(x_4\) be the quantities of Barley, Groundnut meal, Maize and Sesame Flakes, respectively, mixed in the cattle feed. Let the sum total of the raw materials be equal to one kg. It is also assumed that the quantities of Barley and Maize, i.e., the values of \(x_1\) and \(x_3\), must be at least 0.5 and 0.01 kg respectively. So we have a minimization problem with two chance constraints and two linear constraints.

The above situation can be modeled as a multi-choice probabilistic linear programming problem, expressed as:

$$\begin{aligned} \min : Z =\{c_1^{(1)},c_1^{(2)},c_1^{(3)},c_1^{(4)}\}x_1+\{c_2^{(1)},c_2^{(2)},c_2^{(3)},c_2^{(4)},c_2^{(5)}\}x_2+\{c_3^{(1)},c_3^{(2)},c_3^{(3)}\}x_3+\{c_4^{(1)},c_4^{(2)}\}x_4 \end{aligned}$$
(59)

subject to

$$\begin{aligned}&\Pr \left( \{a_{11}^{(1)},a_{11}^{(2)},a_{11}^{(3)},a_{11}^{(4)}\}x_1+\{a_{12}^{(1)},a_{12}^{(2)},a_{12}^{(3)},a_{12}^{(4)},a_{12}^{(5)}\}x_2+\{a_{13}^{(1)},a_{13}^{(2)},a_{13}^{(3)}\}x_3+\{a_{14}^{(1)},a_{14}^{(2)}\}x_4 \ge b_1\right) \nonumber \\&\quad \ge 0.95,\quad b_1\in \{20.7,21.0,22,22.7\} \end{aligned}$$
(60)
$$\begin{aligned}&\Pr \left( \{2.4,2.3,2.2,2.5\}x_1+\{1.29,1.35,1.28,1.3,1.25\}x_2+\{a_{23}^{(1)},a_{23}^{(2)},a_{23}^{(3)}\}x_3+\{11.1,12\}x_4 \ge b_2\right) \nonumber \\&\quad \ge 0.98,\quad b_2\in \{4.8,5.0,5.3,5.9,6.2\} \end{aligned}$$
(61)
$$\begin{aligned}&x_1+x_2+x_3+x_4=1 \end{aligned}$$
(62)
$$\begin{aligned}&x_1\ge 0.5 \end{aligned}$$
(63)
$$\begin{aligned}&x_3\ge 0.01 \end{aligned}$$
(64)
$$\begin{aligned}&x_{j} \ge 0,\quad j=1,2,3,4 \end{aligned}$$
(65)

where

$$\begin{aligned}&c_{1}^{(1)}\sim \mathcal{N} (11.4,1.16),\,c_{1}^{(2)}\sim \mathcal{N} (12.763,1.5173),\,c_{1}^{(3)}\sim \mathcal{N} (11.9021,0.6597),\,c_{1}^{(4)}\sim \mathcal{N} (11.7062,1.015),\\&c_{2}^{(1)}\sim \mathcal{N} (39.33,7.4),\,c_{2}^{(2)}\sim \mathcal{N} (40.774,18.1734),\,c_{2}^{(3)}\sim \mathcal{N} (39.72,10.12),\,c_{2}^{(4)}\sim \mathcal{N} (39.997,16.625),\\&c_{2}^{(5)}\sim \mathcal{N} (38.25,13.495),\,c_{3}^{(1)}\sim \mathcal{N} (13.673,0.7712),\,c_{3}^{(2)}\sim \mathcal{N} (13.64764,0.489293),\,c_{3}^{(3)}\sim \mathcal{N} (13.07041,0.74055),\\&c_{4}^{(1)}\sim \mathcal{N} (70.911,41.0979),\,c_{4}^{(2)}\sim \mathcal{N} (77.31441,45.36459);\,a_{11}^{(1)}\sim \mathcal{N} (11.6,0.25),\,a_{11}^{(2)}\sim \mathcal{N} (12,0.281),\\&a_{11}^{(3)}\sim \mathcal{N} (11.8,0.27),\,a_{11}^{(4)}\sim \mathcal{N} (11.5,0.26),\,a_{12}^{(1)}\sim \mathcal{N} (49.2,1.1),\,a_{12}^{(2)}\sim \mathcal{N} (52.1,0.624),\\&a_{12}^{(3)}\sim \mathcal{N} (51,0.7),\,a_{12}^{(4)}\sim \mathcal{N} (51.8,0.85),\,a_{12}^{(5)}\sim \mathcal{N} (49,1),\,a_{13}^{(1)}\sim \mathcal{N} (11.28,0.194),\\&a_{13}^{(2)}\sim \mathcal{N} (10.1,0.17),\,a_{13}^{(3)}\sim \mathcal{N} (9.4,0.15),\,a_{14}^{(1)}\sim \mathcal{N} (41.8,20.25),\,a_{14}^{(2)}\sim \mathcal{N} (52,18);\\&a_{23}^{(1)}\sim \mathcal{N} (5.2,0.25),\,a_{23}^{(2)}\sim \mathcal{N}(5,0.24),\,a_{23}^{(3)}\sim \mathcal{N} (4.74,0.2) \end{aligned}$$
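The interpolating polynomials used in the transformation pass exactly through the alternative means at the integer nodes \(0,1,\ldots,k-1\). As a quick numerical check (a sketch using NumPy), fitting a cubic through the four barley cost means of \(c_1\) listed above at \(z_1 = 0,1,2,3\) reproduces the coefficients \(0.481483z_1^3 - 2.5564z_1^2 + 3.4379167z_1 + 11.4\) that appear in the E-model objective:

```python
import numpy as np

# Means of the four barley cost alternatives c_1^(1..4) from the model
means = [11.4, 12.763, 11.9021, 11.7062]
nodes = np.arange(len(means))          # z_1 takes values 0, 1, 2, 3

# A degree-3 fit through 4 points is the Lagrange interpolating polynomial
coeffs = np.polyfit(nodes, means, deg=len(means) - 1)

# Evaluating the polynomial back at the nodes recovers each mean,
# i.e. each integer value of z_1 selects exactly one alternative
recovered = [np.polyval(coeffs, z) for z in nodes]
```

Here `coeffs` is highest power first, so it should agree with \([0.481483, -2.5564, 3.4379167, 11.4]\) up to rounding.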

Using the ‘E’-model, we obtain the deterministic equivalent of the model (59)–(65) as:

$$\begin{aligned}\min : Z'&= (0.481483z_1^3 - 2.5564z_1^2 + 3.4379167z_1 + 11.4)x_1+(-0.2993z_2^4 + 2.434167z_2^3 - 6.456167z_2^2 + 5.7653z_2 + 39.33)x_2\nonumber \\&\quad +(-0.275935z_3^2 + 0.250575z_3 + 13.673)x_3+(70.911 + 6.40341z_4)x_4 \end{aligned}$$
(66)

subject to

$$\begin{aligned}&(0.0833z_1^3-0.55z_1^2+0.8667z_1+11.6)x_1+(-0.475z_2^4+3.833z_2^3-10.175z_2^2+9.717z_2+49.2)x_2+(0.24z_3^2-1.42z_3+11.28)x_3\nonumber \\&\quad +\, (41.8+10.2z_4)x_4-1.645\left[ (0.25-0.9167z_1+4.5048z_1^2-6.9202z_1^3+4.58z_1^4-1.369z_1^5+0.152z_1^6)x_1^2\right. \nonumber \\&\quad +\, (1.1-4.5834z_2+25.842z_2^2-54.70422z_2^3+55.35z_2^4-30.0788z_2^5+9.0116z_2^6-1.4016z_2^7+0.0883z_2^8)x_2^2\nonumber \\&\quad \left. +\, (0.1936-0.5808z_3+1.3467z_3^2-1.0454z_3^3+0.2559z_3^4)x_3^2+(20.25-40.5z_4 +38.25z_4^2)x_4^2\right] ^\frac{1}{2}\nonumber \\&\ge 20.7-0.1667z_5^3+0.85z_5^2-0.3834z_5 \end{aligned}$$
(67)
$$\begin{aligned}&\quad (0.06667z_1^3-0.2z_1^2+0.03333z_1 + 2.4)x_1+(-0.01583z_2^4+0.1317z_2^3-0.34917z_2^2+0.293z_2+1.29)x_2+(-0.03z_3^2-0.17z_3+5.2)x_3\nonumber \\&\quad +\, (11.1 + 0.9z_4)x_4-2.055x_3\sqrt{(0.25-0.75z_3+1.8225z_3^2-1.435z_3^3+0.3525z_3^4)}\nonumber \\&\ge 4.8-0.0333z_6^4+0.233z_6^3-0.4167z_6^2+0.4167z_6 \end{aligned}$$
(68)
$$\begin{aligned}&x_1+x_2+x_3+x_4=1 \end{aligned}$$
(69)
$$\begin{aligned}&x_1\ge 0.5 \end{aligned}$$
(70)
$$\begin{aligned}&x_3\ge 0.01 \end{aligned}$$
(71)
$$\begin{aligned}&x_j\ge 0, \quad j = 1, 2, 3, 4 \end{aligned}$$
(72)
$$\begin{aligned}&z_1\in \{0,1,2,3\},\,z_2\in \{0,1,2,3,4\},\,z_3\in \{0,1,2\},\,z_4\in \{0,1\},\,z_5\in \{0,1,2,3\},\,z_6\in \{0,1,2,3,4\} \end{aligned}$$
(73)

The mathematical model (66)–(73) is a mixed integer nonlinear programming problem, which can be solved using nonlinear programming techniques [25]. We obtain the optimal solution using the MATHEMATICA [26] software. Mathematica uses the “Differential Evolution” algorithm to find the numerical global optimal solution of the nonlinear programming problem. In this method the tolerance for accepting constraint violations is 0.001 (see [27]), so the constraints are violated slightly (see Table 9). To validate the solution obtained by Mathematica, the transformed model was also solved with the GAMS [28] solver, and the same optimal solution was obtained. The optimal solutions of the different models are given in Table 9.
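Since each integer variable \(z_j\) in (73) ranges over a small finite set, the mixed integer nonlinear program can also be sanity-checked by brute force: enumerate every combination of the choice variables and solve the remaining continuous problem in \(x\) for each. A minimal skeleton of that enumeration is sketched below; the per-combination continuous solver `solve_x` is a hypothetical callback, not part of the paper's method:

```python
from itertools import product

# Integer choice variables and their ranges, from constraint (73)
domains = {
    "z1": range(4), "z2": range(5), "z3": range(3),
    "z4": range(2), "z5": range(4), "z6": range(5),
}

combos = list(product(*domains.values()))
n_subproblems = len(combos)  # 4*5*3*2*4*5 = 2400 continuous NLPs in x

def best_over_choices(solve_x):
    """solve_x(z) -> (objective, x) is a hypothetical continuous solver
    for model (66)-(72) with the z variables fixed; keep the minimum."""
    best = None
    for z in combos:
        zdict = dict(zip(domains, z))
        result = solve_x(zdict)
        if result is not None and (best is None or result[0] < best[0]):
            best = result
    return best
```

At 2400 subproblems this enumeration is cheap, which is why the small integer domains make the model tractable for global solvers such as Differential Evolution in the first place.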

Table 9 Optimal solutions of different models

For the ‘V-model’ and the ‘P-model’ the target costs are set as 30 and 35 respectively. The maximum probability of meeting the production cost target of 35 is \(88.59\,\%\). From Table 9 we can see that the chosen market for Barley, the chosen market for Groundnut Meal, and the chosen types of Maize and Sesame Flakes differ across the models in most cases. Hence, from these models and the results obtained, we conclude that considering multi-choice random parameters in the parametric space is logical and helps decision makers make proper decisions.

5 Conclusions

In this paper, a suitable methodology is established for solving a multi-choice linear programming problem in which the decision maker sets a number of random aspiration levels for any parameter present in the problem. The methodology is established by considering the alternative aspiration levels as normal random variables. First, each multi-choice parameter is handled using an interpolating polynomial. Since the interpolating polynomials are linear functions of normal random variables, the transformed problem can be treated as a probabilistic programming problem. Using the chance-constrained programming approach, we establish an equivalent deterministic model. We present four different models for the deterministic form of the objective function, depending on the desire of the decision maker. The proposed methodology provides a novel way of solving multi-choice linear programming problems in which the alternative choices of the parameters are random variables, and it serves as a useful decision-making tool that helps a decision maker find an optimal solution together with the best alternative for each multi-choice parameter. Further study of this problem is warranted in the presence of other random variables, namely uniform, exponential and log-normal random variables. This multi-choice probabilistic programming technique can be applied to several real-life problems in management science, such as production planning, inventory control and scheduling problems.