
1 Introduction

Operations research is a very large field. In this paper, we focus on operations research in connection with the optimization of decisions, with one or more decision makers. The classical analytical methods of optimization and comparative statics analysis, basic economic theory and fundamental linear programming are well presented in Chiang [3].

Mathematical modeling is central to operations research. Usually, in applied problems, there are many different ways to define the mathematical models representing the components of the system under analysis. The reference book of the software package LINGO [1] contains a large number of alternative operations research models and applications with numerical solutions.

A particular applied problem should, if possible, be analyzed with a problem-relevant operations research method, using a problem-relevant set of mathematical models. This may seem obvious to the reader, but it is far from trivial to determine the problem-relevant method and models.

The two books by Winston [16, 17] give a good and rather complete presentation of most operations research methods, algorithms and typical applications. The operations research literature contains a large number of alternative methods and models applied to very similar types of applied problems. In many cases, the optimal decisions that result from these analyses differ considerably.

For instance, if we want to determine the optimal decision in a particular problem, we may define it as a one-dimensional optimization problem, or as a multidimensional problem where we simultaneously optimize several decisions that may be linked in different ways. We may also consider constraints of different sorts. In most problems, present decisions have consequences for the future development of the system under analysis. Hence, multi-period analysis is often relevant. Weintraub et al. [15] contains many dynamic operations research problems and solutions from different natural resource sectors.

Furthermore, the future state of the world can change for several reasons. In resource management problems, for instance, we often want to determine the optimal present extraction of some resource, such as coal or oil. If we take more today, we have to take less in the future. The present and future prices are very important parameters in such decision problems, and we usually have to agree that the future prices are not perfectly known today. Price changes may occur because of technical innovations, political changes and many other reasons. We simply have to accept that future prices can never be perfectly predicted. Hence, the stochastic properties of prices have to be analyzed and used in operations research studies in order to determine optimal present decisions.

Many types of resources can be used continuously, thanks to biological growth. Braun [2] gives a very good presentation of ordinary differential equations, which are key to the understanding and modeling of dynamical systems, including biological resources of all kinds. In agriculture, fishing, forestry, wildlife management and hunting, resources are used for many different purposes, including food, building materials, paper, energy and much more. In order to determine optimal present decisions in such industries, it is necessary to develop and use dynamic models that describe how the biological resources grow and how the growth is affected by present harvesting and other management decisions. Clark [4] contains several examples and solutions of deterministic optimal control theory problems in natural resource sectors.

The degree of unexplained variation in the future state of the resource is often considerable. Many crops are sensitive to extreme rains, heat, floods, parasites and pests. Forests are sensitive to storms, hurricanes, fires and so on. Obviously, risk is of central importance to modeling and applied problem solving in these sectors. Grimmett and Stirzaker [6] contains most of the important theory of probability and random processes. Fleming and Rishel [5] contains the general theory of deterministic and stochastic optimal control. Sethi and Thompson [12] cover a field very similar to [5], but are more focused on applied derivations. Lohmander [8, 9] shows how dynamic and stochastic management decisions can be optimized with different methods, including different versions of stochastic dynamic programming. Lohmander [10] develops methodology for the optimization of large scale energy production under risk, using stochastic dynamic programming with a quadratic programming subroutine.

Deterministic systems are not necessarily predictable. Tung [13] is a fantastic book that contains many kinds of mathematical modeling topics and applications, including modern chaos theory and examples. Such theories and methods are also relevant to rational decision making in resource management problems.

Until now, we have only considered problems with one decision maker. In reality, we often find many decision makers who all influence the development of the same system. In such cases, we can model the situation using game theory. Luce and Raiffa [11] give very good coverage of the classical field. In games without cooperation, the Nash equilibrium concept is very useful: each player maximizes his or her own objective, given that the other players maximize theirs. Washburn [14] focuses on such games and the important and often quite relevant subset of “two person zero sum games”. In such games, linear programming finds many relevant applications. Isaacs [7] describes and analyzes several games of this nature, but in continuous time, using the method of differential games. This manuscript could have been expanded in the direction of dynamic and stochastic games; the present format limitations, however, make this impossible. Let us conclude this section with the observation that mathematical modeling in operations research is a rich field with an almost unlimited number of applications.

2 Analysis

Let us investigate alternative specifications of operations research models and discuss their properties. We may consider (1) as a general representation of linear constraints, as we find them in most logistics problems, manufacturing problems and many other applied problems. We assume that a feasible set exists, and we know that the feasible set obtained with linear constraints is convex. In a production problem, \( x_{k} \) is the production volume of product \( k \) and the constraints are capacity constraints, where \( C_{l} \) is the total capacity of resource \( l \).

$$ \left\{ \begin{array}{c} \alpha_{11} x_{1} + \ldots + \alpha_{1K} x_{K} \le C_{1} \\ \ldots \\ \alpha_{L1} x_{1} + \ldots + \alpha_{LK} x_{K} \le C_{L} \end{array} \right. $$
(1)

If we have a linear objective function, such as the total profit \( \pi \), we may express it as (2).

$$ \pi (x_{1} , \ldots ,x_{K} ) = p_{0} + p_{1} x_{1} + \ldots + p_{K} x_{K} $$
(2)

Linear programming is a relevant optimization method if we want to maximize (2) subject to (1). The simplex algorithm will give the optimal solution in a finite number of iterations. In many applied problems, such as production optimization problems, it is also important to handle the fact that market prices are often decreasing functions of the produced and sold quantities of different products. Furthermore, the production volume of one product may affect the prices of other products, the marginal production costs of different products may be linked, and so on. Then, the objective function of the company may be approximated as a quadratic function (3). (Note that (3) may be further simplified.)

$$ \begin{aligned} \pi (x_{1} , \ldots ,x_{K} ) = \; & p_{0} + p_{1} x_{1} + \ldots + p_{K} x_{K} \\ & + r_{11} x_{1}^{2} + r_{12} x_{1} x_{2} + \ldots + r_{1(K - 1)} x_{1} x_{K - 1} + r_{1K} x_{1} x_{K} \\ & + \ldots \\ & + r_{K1} x_{K} x_{1} + r_{K2} x_{K} x_{2} + \ldots + r_{K(K - 1)} x_{K} x_{K - 1} + r_{KK} x_{K}^{2} \end{aligned} $$
(3)

With a quadratic objective function and linear constraints, we have a quadratic programming problem (4). Efficient quadratic programming computer codes, with several similarities to the simplex algorithm for linear programming, are available. The Kuhn-Tucker conditions can be considered as linear constraints, and many such examples are solved in [1, 16].

$$ \begin{aligned} \max \quad & \pi (x_{1} , \ldots ,x_{K} ) \\ \text{s.t.} \quad & \alpha_{11} x_{1} + \ldots + \alpha_{1K} x_{K} \le C_{1} \\ & \ldots \\ & \alpha_{L1} x_{1} + \ldots + \alpha_{LK} x_{K} \le C_{L} \end{aligned} $$
(4)
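To make this concrete, a small numerical instance of (4) can be solved with standard software. The following Python sketch uses scipy.optimize.minimize; the prices, the \( r \) coefficients and the capacity data are hypothetical, chosen only to make the example runnable.

```python
# A minimal sketch of problem (4) with hypothetical data: two products,
# a quadratic profit function of type (3) and two linear capacity constraints.
import numpy as np
from scipy.optimize import minimize

p0, p = 0.0, np.array([40.0, 30.0])   # hypothetical p_0 and p_k
R = np.array([[-2.0, -0.5],
              [-0.5, -1.0]])          # hypothetical r_{k1 k2}, negative definite
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])            # hypothetical alpha_{lk} as in (1)
C = np.array([20.0, 30.0])            # hypothetical capacities C_l

def neg_profit(x):
    # Negative of the quadratic profit (3), since scipy minimizes.
    return -(p0 + p @ x + x @ R @ x)

res = minimize(
    neg_profit,
    x0=np.zeros(2),
    constraints=[{"type": "ineq", "fun": lambda x: C - A @ x}],  # A x <= C
    bounds=[(0, None), (0, None)],    # nonnegative production volumes
)
print("optimal production:", res.x, "maximal profit:", -res.fun)
```

Since \( R \) is negative definite here, the objective is concave and the local optimum found by the solver is also the global optimum.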

In real applications, we are often interested in handling the sequential nature of information. Market prices usually have to be regarded as partially stochastic. We may influence the price level via our production and sales volumes, but there is usually considerable price variation outside the control of the producer. Then, we can optimize our decisions via stochastic dynamic programming, as shown in the example in (5) and (6). Let us consider the optimal extraction over time from a limited oil reserve. In every period \( t \) until we reach the planning horizon \( T \), we maximize the expected present value, \( f(.) \), for every possible level of the remaining reserve, \( s \), and for every market state, \( m \). \( f(.) = 0 \) for \( t = T + 1 \), as shown in (6). In all earlier periods, the values of \( f(.) \) are maximized for all possible reserve and market levels, via the control \( u \), the extraction level. In a period \( t \) before we reach \( t = T + 1 \), the control \( u \) is selected so that the sum of the present value of instant extraction, \( \pi (.) \), and the expected present value of future extraction, \( \sum\nolimits_{n} \tau (n|m) f(t + 1,s - u,n) \), is maximized. \( \tau (n|m) \) denotes the transition probability from market state \( m \) to market state \( n \) from one period to the next. The control \( u \) has to belong to the set of feasible controls, \( U(.) \), which is a function of \( t \), \( s \) and \( m \). Equations (5) and (6) summarize the principles and the recursive structure.

$$ f(t,s,m) = \max_{u \in U(t,s,m)} \left( \pi (u;t,s,m) + \sum_{n} \tau (n|m) \, f(t + 1,s - u,n) \right) \quad \forall (t,s,m) \mid 0 \le t \le T $$
(5)
$$ f(T + 1,s,m) = 0\quad \forall (s,m) $$
(6)
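The backward recursion (5), (6) is straightforward to implement. The following Python sketch computes \( f(t,s,m) \) for a small oil extraction instance; the horizon, price states, transition probabilities and discount factor are all assumptions made for illustration, not data from [10].

```python
# A minimal sketch of the recursion (5)-(6): optimal extraction from a
# limited reserve under Markovian price states. All data are hypothetical.
import numpy as np

T = 10                      # planning horizon
S = 21                      # reserve levels s = 0, ..., 20
prices = [18.0, 25.0, 32.0] # prices in market states m = 0, 1, 2
M = len(prices)
tau = np.array([[0.6, 0.3, 0.1],    # transition probabilities tau(n|m)
                [0.2, 0.6, 0.2],
                [0.1, 0.3, 0.6]])
d = 1.0 / 1.05              # one-period discount factor

f = np.zeros((T + 2, S, M))         # terminal condition f(T+1, s, m) = 0, Eq. (6)
policy = np.zeros((T + 1, S, M), dtype=int)

for t in range(T, -1, -1):          # backward recursion, Eq. (5)
    for s in range(S):
        for m in range(M):
            best_value, best_u = -np.inf, 0
            for u in range(s + 1):  # feasible extraction: 0 <= u <= s
                instant = d**t * prices[m] * u        # present value pi(u; t, s, m)
                future = tau[m] @ f[t + 1, s - u, :]  # expected continuation value
                if instant + future > best_value:
                    best_value, best_u = instant + future, u
            f[t, s, m] = best_value
            policy[t, s, m] = best_u

print("f(0, s=20, m=1) =", f[0, 20, 1], " optimal extraction:", policy[0, 20, 1])
```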

With the stochastic dynamic programming method as a general tool, we may again consider the detailed production and/or logistics problem (4). Now, we can solve many such problems, (4), as subproblems within the general stochastic dynamic programming formulation (5), (6). Hence, for each state and stage, we solve the relevant subproblems.

Now, the capacity levels (7) may be defined as functions of the control decisions, time, the remaining reserve and the market state. Furthermore, all other “parameters” may be considered as functions, as described in (8), (9) and (10). As a result, we may describe the subproblems as (11) or even as (12).

$$ C_{l} = C_{l} (u,t,s,m)\quad \forall l $$
(7)
$$ \alpha_{lk} = \alpha_{lk} (u,t,s,m)\quad \forall (l,k) $$
(8)
$$ p_{k} = p_{k} (u,t,s,m)\quad \forall k $$
(9)
$$ r_{{k_{1} k_{2} }} = r_{{k_{1} k_{2} }} (u,t,s,m)\quad \forall (k_{1} ,k_{2} ) $$
(10)
$$ \begin{aligned} \max \quad & \pi (x_{1} , \ldots ,x_{K} ;u,t,s,m) \\ \text{s.t.} \quad & \alpha_{11} x_{1} + \ldots + \alpha_{1K} x_{K} \le C_{1} \\ & \ldots \\ & \alpha_{L1} x_{1} + \ldots + \alpha_{LK} x_{K} \le C_{L} \end{aligned} $$
(11)
$$ \begin{aligned} \max \quad & \pi (x_{1} , \ldots ,x_{K} ;u,t,s,m) \\ \text{s.t.} \quad & \alpha_{11} (u,t,s,m)x_{1} + \ldots + \alpha_{1K} (u,t,s,m)x_{K} \le C_{1} (u,t,s,m) \\ & \ldots \\ & \alpha_{L1} (u,t,s,m)x_{1} + \ldots + \alpha_{LK} (u,t,s,m)x_{K} \le C_{L} (u,t,s,m) \end{aligned} $$
(12)

Now, we include the subproblems in the stochastic dynamic programming recursion, Eq. (13). A problem of this kind is defined and numerically solved using LINGO software [1] in Lohmander [10].

$$ f(t,s,m) = \max_{u \in U(t,s,m)} \left( \left( \max_{\substack{\alpha_{11} x_{1} + \ldots + \alpha_{1K} x_{K} \le C_{1} \\ \ldots \\ \alpha_{L1} x_{1} + \ldots + \alpha_{LK} x_{K} \le C_{L} }} \pi (x_{1} , \ldots ,x_{K} ;u,t,s,m) \right) + \sum_{n} \tau (n|m) \, f(t + 1,s - u,n) \right) \quad \forall (t,s,m) \mid 0 \le t \le T $$
(13)
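As an illustration of the structure of (13), the following Python sketch embeds a linear programming subproblem, with coefficients depending on \( (u,t,s,m) \) as in (7), (8), (9) and (10), inside the backward recursion. All functional forms and numbers are hypothetical; [10] solves a full problem of this kind in LINGO.

```python
# A minimal sketch of (13): a linear programming subproblem with
# state-dependent coefficients, embedded in the stochastic dynamic
# programming recursion. All functional forms are hypothetical.
import numpy as np
from scipy.optimize import linprog

T, S, M = 5, 11, 2
tau = np.array([[0.7, 0.3],
                [0.4, 0.6]])

def subproblem(u, t, s, m):
    """Solve max p @ x  s.t.  A x <= C, x >= 0, with (u,t,s,m)-dependent data."""
    p = np.array([10.0 + 2.0 * m, 8.0 + u])    # hypothetical p_k(u,t,s,m), Eq. (9)
    A = np.array([[1.0, 2.0], [2.0, 1.0]])     # hypothetical alpha_lk, Eq. (8)
    C = np.array([5.0 + u, 4.0 + 0.5 * s])     # hypothetical C_l(u,t,s,m), Eq. (7)
    res = linprog(-p, A_ub=A, b_ub=C)          # linprog minimizes, so negate p
    return -res.fun

f = np.zeros((T + 2, S, M))                    # f(T+1, ., .) = 0
for t in range(T, -1, -1):
    for s in range(S):
        for m in range(M):
            f[t, s, m] = max(
                subproblem(u, t, s, m) + tau[m] @ f[t + 1, s - u, :]
                for u in range(s + 1)
            )
print("f(0, s=10, m=0) =", f[0, 10, 0])
```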

Observe that (13) represents a very general and flexible way to formulate and solve applied stochastic multi-period production and logistics problems of many kinds. The true sequential nature of decisions and information is explicitly handled, and stochastic market prices as well as very large numbers of decision variables and constraints may be consistently considered. Furthermore, many other stochastic phenomena may be handled with this approach. Several examples of how different kinds of stochastic disturbances may be included in optimal dynamic decisions are found in Lohmander [8, 9].

In the game theory literature, [7, 11, 14], we find many examples of two-player constant sum games. In (14), we find such an example, with one objective function. The value of the game, \( Z \), is what we obtain when one player maximizes and one player minimizes the same objective function, \( Q(\phi ,\varphi ) \). The maximizing player, \( A \), determines the control \( \varphi \) and the minimizing player, \( B \), determines the control \( \phi \). \( Q(\phi ,\varphi ) \) can, for instance, represent the difference in profit or resources between two companies or countries during a conflict over a particular economic market, a geographical territory or something else. During a period of conflict, it may be relevant to define this as a constant sum game. (In other cases, non-constant sum games are sometimes more relevant, but then it is not always possible to define and explicitly solve a strictly mathematical formulation of the game.) Of course, \( \varphi \) and \( \phi \) may represent vectors.

$$ Z = \min_{\phi} \max_{\varphi} Q(\phi ,\varphi ) = Q(\bar{\phi},\bar{\varphi}) $$
(14)
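For a finite (matrix) version of (14), the game value and an optimal mixed strategy can be computed with linear programming, as mentioned in the introduction. The following Python sketch uses the classical LP transformation of a matrix game; the payoff matrix is hypothetical.

```python
# A minimal sketch of (14) for a finite two-person zero-sum (matrix) game,
# solved by linear programming. The payoff matrix is hypothetical.
import numpy as np
from scipy.optimize import linprog

# Q[i, j]: payoff from minimizing player B to maximizing player A
Q = np.array([[ 3.0, -1.0, 2.0],
              [-2.0,  4.0, 0.0]])

# Shift Q so that all entries are positive; the game value shifts equally.
shift = 1.0 - Q.min()
Qp = Q + shift

# Classical LP for player A: min sum(w)  s.t.  Qp^T w >= 1, w >= 0.
# Then the game value is 1 / sum(w) and A's mixed strategy is w / sum(w).
res = linprog(c=np.ones(Qp.shape[0]), A_ub=-Qp.T, b_ub=-np.ones(Qp.shape[1]))
value = 1.0 / res.x.sum() - shift
strategy_A = res.x / res.x.sum()
print("game value Z:", value, "optimal mixed strategy for A:", strategy_A)
```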

We may develop and analyze constant sum games in a similar way to the problems discussed earlier, via the stochastic dynamic programming framework. In (15) and (16), one player maximizes and one player minimizes the value of the game. The maximizing player \( A \) controls \( u \) and \( x \), and the minimizing player \( B \) controls \( v \) and \( y \). The resources of \( A \) and \( B \) at time \( t \) are \( s_{At} \) and \( s_{Bt} \). Stochastic exogenous disturbances influence the development of the system via the transition probabilities \( \tau (n|m) \). The state in the next period is considered a general function of the decisions of both players and of other variables and parameters. In simple situations, continuous time versions of dynamic game problems can be defined as differential games, as reported by Isaacs [7]. With a higher level of detail, we usually have to use discrete time and state space. Several interesting discrete examples are found in Washburn [14].

$$ \begin{aligned} Z(t,s_{At} ,s_{Bt} ,m) = \; & \min_{v \in V(t,s_{Bt} ,m)} \; \max_{u \in U(t,s_{At} ,m)} \Bigg( \Bigg( \min_{\substack{y \in Y(t,s_{Bt} ,u,v,m) \\ F_{1,f_{1}}(x,y) \le 0 \;\forall f_{1} \\ F_{2,f_{2}}(x,y) \ge 0 \;\forall f_{2} \\ F_{3,f_{3}}(x,y) = 0 \;\forall f_{3} }} \; \max_{x \in X(t,s_{At} ,u,v,m)} Q(x,y;u,v,t,s_{At} ,s_{Bt} ,m) \Bigg) \\ & + \sum_{n} \tau (n|m) \, Z\big(t + 1,\; s_{A(t + 1)} (s_{At} ,t,m,v,u),\; s_{B(t + 1)} (s_{Bt} ,t,m,v,u),\; n\big) \Bigg) \\ & \quad \forall (t,s_{At} ,s_{Bt} ,m) \mid 0 \le t \le T \end{aligned} $$
(15)
$$ Z(T + 1,s_{At} ,s_{Bt} ,m) = 0\quad \forall (s_{At} ,s_{Bt} ,m) $$
(16)
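A minimal discrete sketch of the recursion (15), (16) follows. To keep the example short, the market state and the detailed subproblem controls \( x \) and \( y \) are omitted: at each stage and state, the players mix over hypothetical stage decisions \( u \) and \( v \), and the resulting matrix game is solved by linear programming, reusing the transformation shown above.

```python
# A minimal sketch of (15)-(16): a dynamic two-person zero-sum game where,
# at each stage and state, a hypothetical matrix game over the stage
# decisions (u, v) is solved by linear programming. Market states and the
# detailed controls x and y are omitted for brevity; all payoffs are hypothetical.
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(Q):
    """Value of a zero-sum matrix game (row player maximizes) via LP."""
    shift = 1.0 - Q.min()
    Qp = Q + shift
    res = linprog(c=np.ones(Qp.shape[0]), A_ub=-Qp.T, b_ub=-np.ones(Qp.shape[1]))
    return 1.0 / res.x.sum() - shift

T, S = 3, 4                      # horizon and resource levels per player
Z = np.zeros((T + 2, S, S))      # terminal condition Z(T+1, ., .) = 0, Eq. (16)

for t in range(T, -1, -1):       # backward recursion, Eq. (15)
    for sA in range(S):
        for sB in range(S):
            # Stage payoff matrix: u, v = resource units committed this period;
            # deterministic transitions sA - u and sB - v replace the general
            # transition functions of (15) in this simplified sketch.
            Q = np.array([
                [float(u - v) + Z[t + 1, sA - u, sB - v]
                 for v in range(sB + 1)]
                for u in range(sA + 1)
            ])
            Z[t, sA, sB] = matrix_game_value(Q)

print("Z(0, sA=3, sB=3) =", Z[0, 3, 3])
```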

Note that the specification of the structure described by (15) and (16) can be adjusted to specific applications. This structure can be regarded as a generalization of many problems in [7, 14].

The control decisions \( u \) and \( v \) may represent key decisions, such as the total use of constrained resources. As seen in (15), these decisions also influence the options and game values in future periods. The other control decisions, \( x \) and \( y \), which may be vectors, can represent the decisions of \( A \) and \( B \) in very high resolution. Linear or quadratic programming as a tool in the subproblems makes this possible. Furthermore, the stochastic dynamic main program can provide solutions with almost unlimited resolution in the time dimension. The recursive structure of the problem solving makes it unnecessary to store all results in internal memory. Of course, computation time increases with resolution.

3 Main Results

Operations research contains a large number of alternative approaches. With logically consistent mathematical modeling, relevant method selection and good empirical data, the best possible decisions can be obtained. This paper has presented arguments for using some particular combinations of methods that are often empirically motivated and computationally feasible (Fig. 1).

Fig. 1. The optimal oil industry management problem includes finding the optimal combination of oil extraction in different fields, domestic crude oil transport, refining and international logistics. All of this should be done with consideration of stochastic world market prices and possibly other stochastic events. Source: Lohmander [10]. Equations (13) and (6) are useful for solving this problem.