1 Introduction

Tropical optimization problems constitute an important research and application domain of tropical mathematics. As an applied mathematical discipline that concentrates on the theory and methods of semirings with idempotent addition, tropical (idempotent) mathematics dates back to the works of Pandit (1961), Cuninghame-Green (1962), Hoffman (1963), Vorob’ev (1963) and Romanovskiĭ (1964), at least two of which (Cuninghame-Green 1962; Hoffman 1963) have been motivated and illustrated by optimization problems.

Many subsequent publications that contributed to the development of tropical mathematics, including the monographs by Cuninghame-Green (1979), Zimmermann (1981), Kolokoltsov and Maslov (1997), Gondran and Minoux (2008), Butkovič (2010) and Maclagan and Sturmfels (2015), and a number of contributed papers, were concerned with optimization problems, most of which have been drawn from real-world applications in operations research and management science.

Multidimensional tropical optimization problems are generally formulated in the tropical mathematics setting to minimize or maximize linear and nonlinear functions defined on vectors over idempotent semifields (semirings with multiplicative inverses). The problems may include constraints given by linear and nonlinear equalities and inequalities. Many of the problems that come from real-world applications and, at the same time, admit solutions in the framework of tropical mathematics have nonlinear objective functions defined through multiplicative conjugate transposition of vectors (see, e.g., an overview in Krivulin 2015b).

There are problems with objective functions that involve the tropical algebraic product \(\varvec{x}^{-}\varvec{A}\varvec{x}\), where \(\varvec{A}\) is a given square matrix, \(\varvec{x}\) is the unknown vector, and \(\varvec{x}^{-}\) is the multiplicative conjugate transpose of \(\varvec{x}\). These functions appear in various applications in operations research and management science, including problems in project (machine) scheduling (Cuninghame-Green 1962, 1979; Superville 1978; Krivulin 2015a, c, d), location analysis (Zimmermann 1992; Hudec and Zimmermann 1993; Krivulin 2011), and decision making (Elsner and van den Driessche 2004, 2010; Gursoy et al. 2013), to name only a few.

The problem of minimizing the product in question was examined in early works (Cuninghame-Green 1962; Engel and Schneider 1975; Superville 1978) by using conventional mathematical techniques. It was shown that the minimum in the problem is equal to the tropical spectral radius of the matrix \(\varvec{A}\), and is attained at the corresponding tropical eigenvectors of this matrix. Later, the problem was formulated in the framework of tropical mathematics in Cuninghame-Green (1979), where a complete solution was proposed by reducing the problem to a linear programming problem. Solutions based on tropical mathematics were derived in Elsner and van den Driessche (2004, 2010); the latter work included an implicit description of a complete solution in the form of a vector inequality, together with a computational procedure to solve the inequality. Finally, complete solutions in terms of tropical mathematics to both the problem and its generalizations, which have objective functions of an extended form as well as additional constraints, were given in Krivulin (2014, 2015a, b).

In this paper, we consider a new, rather general optimization problem, which includes known problems as special cases. We provide a complete solution in an explicit form on the basis of the approach developed in Krivulin (2014, 2015a, b), which introduces an additional variable to represent the values of the objective function, and then reduces the initial problem to a parametrized vector inequality. The minimum of the objective function is evaluated by using the solution existence conditions for the inequality. A complete solution to the problem is given by the solutions of the parametrized inequality, provided the parameter is set to the minimum value. We discuss the computational complexity of the result to show that the solution can be obtained in polynomial time. As a consequence, we propose solutions to new special cases of the general problem.

We apply the results obtained to derive a new complete solution of a real-world problem that was drawn from project scheduling (see, e.g. Demeulemeester and Herroelen 2002; Neumann et al. 2003; T’kindt and Billaut 2006, for further details on optimal scheduling), and also served to motivate the study. We consider a project that consists of activities operating in parallel under temporal constraints in various forms, including release dates and time windows. For each activity, the flow time is defined as the time interval between its initiation and completion. The objective is to find an optimal schedule that minimizes the maximum flow time over all activities. This problem is an extended version of that in Krivulin (2015a), where a less complicated system of temporal constraints is considered. To illustrate the solution obtained for the problem, and the computational technique implemented by the solution, we present a representative numerical example.

Note that the problem under examination can be formulated as a linear program, and then solved by one of the known solution techniques of linear programming. However, these techniques usually take the form of iterative algorithms, and do not generally guarantee an explicit closed-form solution. Unlike the algorithmic approaches, the proposed solution provides direct results in a compact vector form suitable for further analysis and practical use. Considering, in addition, that the new solution can be calculated in polynomial time, it can serve as a useful complement to existing solutions.

The paper is organized as follows. Section 2 includes definitions and notation to be used in the subsequent sections. In Sect. 3, we present some preliminary results, including a binomial identity for matrices and solutions to linear inequalities. The main result is provided in Sect. 4, where we first offer examples of known optimization problems, then formulate and solve a new general problem, discuss the computational complexity of the solution, and finally, give solutions to new special cases of the general problem. Section 5 contains an application of the results in project scheduling, and concludes with a numerical example.

2 Basic definitions, notation and observations

We start with a short introduction in the context of tropical (idempotent) algebra to offer a unified and self-contained framework for the formulation and solution of tropical optimization problems in the rest of the paper. Below, we follow the notation and results in Krivulin (2014, 2015a, b), which form a useful basis for the analysis and solution of the problems under study in a systematic manner and in a compact closed form. Further details on tropical mathematics at both introductory and advanced levels can be found in Baccelli et al. (1993), Kolokoltsov and Maslov (1997), Golan (2003), Heidergott et al. (2006), Gondran and Minoux (2008), Butkovič (2010) and Maclagan and Sturmfels (2015).

2.1 Idempotent semifield

An idempotent semifield is an algebraic structure \((\mathbb {X},\oplus ,\otimes ,\mathbb {0},\mathbb {1})\), where \(\mathbb {X}\) is a nonempty set, \(\oplus \) and \(\otimes \) are binary operations, called addition and multiplication, \(\mathbb {0}\) and \(\mathbb {1}\) are distinct elements in \(\mathbb {X}\), called zero and one, such that \((\mathbb {X},\oplus ,\mathbb {0})\) is an idempotent commutative monoid, \((\mathbb {X}{\setminus }\{\mathbb {0}\},\otimes ,\mathbb {1})\) is an Abelian group, and multiplication distributes over addition.

The semifield has idempotent addition, which implies that \(x\oplus x=x\) for each \(x\in \mathbb {X}\), and invertible multiplication, which allows each nonzero x to have its multiplicative inverse \(x^{-1}\) such that \(x\otimes x^{-1}=\mathbb {1}\).

Idempotent addition induces a partial order on \(\mathbb {X}\) such that \(x\le y\) if and only if \(x\oplus y=y\). It follows from this definition that \(x\le x\oplus y\) and \(y\le x\oplus y\). Furthermore, both operations \(\oplus \) and \(\otimes \) are monotone, which implies that the inequality \(x\le y\) yields \(x\oplus z\le y\oplus z\) and \(x\otimes z\le y\otimes z\) for all z. The inversion is antitone, which means that the inequality \(x\le y\) results in \(x^{-1}\ge y^{-1}\) for nonzero x and y. Finally, the inequality \(x\oplus y\le z\) is equivalent to the two inequalities \(x\le z\) and \(y\le z\).

It is assumed that the partial order can be extended to a linear one to take the semifield as linearly ordered. The relation symbols and the optimization objectives are considered below in terms of this order.

Integer powers are routinely used as shorthand for iterated multiplication such that \(x^{0}=\mathbb {1}\) and \(x^{m}=x^{m-1}\otimes x\) for all \(x\in \mathbb {X}\) and integer \(m\ge 1\). Moreover, it is assumed that the equation \(x^{m}=a\) has a solution for any \(a\in \mathbb {X}\) and positive integer m, which extends the power notation to rational exponents, and thus makes the semifield algebraically complete (radicable). In the expressions that follow, the multiplication sign \(\otimes \) is omitted for brevity.

Examples of the semifield include \(\mathbb {R}_{\max ,+}=(\mathbb {R}\cup \{-\infty \},\max ,+,-\infty ,0)\) and \(\mathbb {R}_{\min ,\times }=(\mathbb {R}_{+}\cup \{+\infty \},\min ,\times ,+\infty ,1)\), where \(\mathbb {R}\) is the set of real numbers and \(\mathbb {R}_{+}=\{x\in \mathbb {R}\mid x>0\}\), to list only a few.

The semifield \(\mathbb {R}_{\max ,+}\) is equipped with addition and multiplication defined, respectively, as \(\max \) and \(+\). Furthermore, the number \(-\infty \) is taken as zero, and 0 as one. Each \(x\in \mathbb {R}\) has the inverse \(x^{-1}\), which corresponds to the opposite number \(-x\) in the usual notation. The power \(x^{y}\) exists for any \(x,y\in \mathbb {R}\) and coincides with the ordinary arithmetic product xy. The order defined by idempotent addition is consistent with the conventional total order on \(\mathbb {R}\).

In \(\mathbb {R}_{\min ,\times }\), we have \(\oplus =\min \), \(\otimes =\times \), \(\mathbb {0}=+\infty \) and \(\mathbb {1}=1\). The inversion and exponentiation notations have the usual meaning. The relation \(\le \) defines an order that is opposite to the standard linear order on \(\mathbb {R}\).
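For illustration, the scalar operations of \(\mathbb {R}_{\max ,+}\) translate directly into ordinary arithmetic; the following Python sketch (all helper names are ours and purely illustrative) checks idempotency, inversion and the order induced by addition:

```python
import math

# Scalars of R_max,+ :  x (+) y = max(x, y),  x (x) y = x + y,
# with tropical zero = -inf and tropical one = 0.
ZERO, ONE = -math.inf, 0.0

def t_add(x, y):
    # idempotent addition
    return max(x, y)

def t_mul(x, y):
    # tropical multiplication
    return x + y

def t_inv(x):
    # multiplicative inverse: x^{-1} corresponds to -x
    return -x

assert t_add(3.0, 3.0) == 3.0          # x (+) x = x
assert t_mul(3.0, t_inv(3.0)) == ONE   # x (x) x^{-1} = 1
assert t_add(2.0, 5.0) == 5.0          # 2 <= 5 since 2 (+) 5 = 5
```

The induced partial order here coincides with the usual order on the reals, in agreement with the remarks above.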

2.2 Matrices and vectors

The set of matrices of m rows and n columns over \(\mathbb {X}\) is denoted \(\mathbb {X}^{m\times n}\). A matrix with all entries equal to \(\mathbb {0}\) is the zero matrix denoted by \(\varvec{0}\). A matrix is row- (column-) regular if it has no zero rows (columns).

Matrix addition and multiplication, and scalar multiplication follow the usual rules with the scalar operations \(\oplus \) and \(\otimes \) in place of the ordinary addition and multiplication. The above inequalities, which represent properties of the scalar operations, are extended entry-wise to matrix inequalities.

For any matrix \(\varvec{A}\in \mathbb {X}^{m\times n}\), its transpose is the matrix \(\varvec{A}^{T}\in \mathbb {X}^{n\times m}\).

The square matrices of order n form the set denoted by \(\mathbb {X}^{n\times n}\). A square matrix having \(\mathbb {1}\) along the diagonal and \(\mathbb {0}\) elsewhere is the identity matrix denoted by \(\varvec{I}\). For any square matrix \(\varvec{A}\), the nonnegative integer power is defined as \(\varvec{A}^{0}=\varvec{I}\) and \(\varvec{A}^{m}=\varvec{A}\varvec{A}^{m-1}\) for all integers \(m\ge 1\).

The trace of a matrix \(\varvec{A}=(a_{ij})\in \mathbb {X}^{n\times n}\) is given by

$$\begin{aligned} \mathrm {tr}\varvec{A} = a_{11}\oplus \cdots \oplus a_{nn} = \bigoplus _{i=1}^{n}a_{ii}. \end{aligned}$$

The trace possesses the usual properties given by the equalities

$$\begin{aligned} \mathrm {tr}(\varvec{A}\oplus \varvec{B}) = \mathrm {tr}\varvec{A} \oplus \mathrm {tr}\varvec{B}, \quad \mathrm {tr}(\varvec{A}\varvec{B}) = \mathrm {tr}(\varvec{B}\varvec{A}), \quad \mathrm {tr}(x\varvec{A}) = x\mathrm {tr}\varvec{A}, \end{aligned}$$

which are valid for any matrices \(\varvec{A},\varvec{B}\in \mathbb {X}^{n\times n}\) and scalar \(x\in \mathbb {X}\).
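Over \(\mathbb {R}_{\max ,+}\), the matrix operations and the trace identities above can be checked numerically; a sketch assuming NumPy, with all function names our own:

```python
import numpy as np

def mp_add(A, B):
    # entrywise tropical sum: (A (+) B)_ij = max(a_ij, b_ij)
    return np.maximum(A, B)

def mp_mul(A, B):
    # tropical product: (A (x) B)_ij = max_k (a_ik + b_kj)
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

def mp_tr(A):
    # tropical trace: a_11 (+) ... (+) a_nn
    return np.max(np.diag(A))

A = np.array([[0.0, 3.0], [-1.0, 1.0]])
B = np.array([[2.0, 0.0], [4.0, -2.0]])
x = 1.5

# tr(A (+) B) = tr A (+) tr B
assert mp_tr(mp_add(A, B)) == max(mp_tr(A), mp_tr(B))
# tr(AB) = tr(BA)
assert mp_tr(mp_mul(A, B)) == mp_tr(mp_mul(B, A))
# tr(xA) = x (x) tr A  (scalar multiplication adds x to every entry)
assert mp_tr(x + A) == x + mp_tr(A)
```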

A matrix with only one column (row) is a column (row) vector. In what follows, all vectors are column vectors unless otherwise indicated. The set of column vectors of order n is denoted \(\mathbb {X}^{n}\).

A vector is regular if it has only nonzero elements. Let \(\varvec{x}\in \mathbb {X}^{n}\) be a regular vector and \(\varvec{A}\in \mathbb {X}^{n\times n}\) be a row-regular matrix. Then, the result of the multiplication \(\varvec{A}\varvec{x}\) is a regular vector. If the matrix \(\varvec{A}\) is column-regular, then the row vector \(\varvec{x}^{T}\varvec{A}\) is regular as well.

For any nonzero vector \(\varvec{x}\in \mathbb {X}^{n}\), its multiplicative conjugate transpose is the row vector \(\varvec{x}^{-}=(x_{i}^{-})\), where \(x_{i}^{-}=x_{i}^{-1}\) if \(x_{i}\ne \mathbb {0}\), and \(x_{i}^{-}=\mathbb {0}\) otherwise.

The conjugate transposition exhibits some significant properties to be used later. Specifically, if \(\varvec{x}\) and \(\varvec{y}\) are regular vectors of the same order, then the inequality \(\varvec{x}\le \varvec{y}\) implies \(\varvec{x}^{-}\ge \varvec{y}^{-}\) and vice versa. Furthermore, for any nonzero vector \(\varvec{x}\), the equality \(\varvec{x}^{-}\varvec{x}=\mathbb {1}\) holds. Finally, if the vector \(\varvec{x}\) is regular, then the matrix inequality \(\varvec{x}\varvec{x}^{-}\ge \varvec{I}\) is also valid.
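For \(\mathbb {R}_{\max ,+}\), conjugate transposition of a regular vector is entrywise negation, and the listed properties are easy to verify; a brief illustrative sketch:

```python
import numpy as np

def conj(x):
    # x^- : row vector with entries x_i^{-1} = -x_i (all x_i finite here)
    return -x

x = np.array([1.0, -2.0, 0.5])
y = x + 1.0  # x <= y entrywise

# x <= y implies x^- >= y^-
assert np.all(conj(x) >= conj(y))
# x^- x = 1 (tropical one is 0):  max_i(-x_i + x_i) = 0
assert np.max(conj(x) + x) == 0.0
# x x^- >= I: the matrix (x_i - x_j) has zero diagonal
M = x[:, None] + conj(x)[None, :]
assert np.all(np.diag(M) == 0.0)
```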

A scalar \(\lambda \in \mathbb {X}\) is an eigenvalue of a matrix \(\varvec{A}\in \mathbb {X}^{n\times n}\), if there exists a nonzero vector \(\varvec{x}\in \mathbb {X}^{n}\) such that \(\varvec{A}\varvec{x}=\lambda \varvec{x}\). The maximum eigenvalue is referred to as the spectral radius of \(\varvec{A}\), and is given (Cuninghame-Green 1962; Vorob’ev 1963; Romanovskiĭ 1964) by

$$\begin{aligned} \lambda = \mathrm {tr}\varvec{A}\oplus \cdots \oplus \mathrm {tr}^{1/n}(\varvec{A}^{n}) = \bigoplus _{m=1}^{n}\mathrm {tr}^{1/m}(\varvec{A}^{m}). \end{aligned}$$
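In \(\mathbb {R}_{\max ,+}\), this formula computes the maximum cycle mean of the matrix, since the tropical m-th root divides by m; a direct sketch (helper names are ours):

```python
import numpy as np

def mp_mul(A, B):
    # tropical matrix product over R_max,+
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

def spectral_radius(A):
    # lambda = (+)_{m=1}^{n} tr^{1/m}(A^m); the m-th root divides by m
    n = A.shape[0]
    lam, P = -np.inf, A.copy()
    for m in range(1, n + 1):
        lam = max(lam, np.max(np.diag(P)) / m)
        P = mp_mul(P, A)
    return lam

A = np.array([[0.0, 3.0], [-1.0, 1.0]])
# cycle means: 0 (loop at node 1), 1 (loop at node 2), (3 - 1)/2 = 1 (cycle 1-2-1)
assert spectral_radius(A) == 1.0
```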

3 Preliminary results

We now offer some auxiliary results to be used in the subsequent analysis of optimization problems. We start with binomial identities for square matrices, and then describe solutions to linear vector inequalities.

The inequalities are examined using somewhat different techniques and notation by many authors, including Vorob’ev (1963), Cuninghame-Green (1979), Zimmermann (1981), Baccelli et al. (1993) and Gondran and Minoux (2008). Below, we offer solutions given in a compact vector form that provides a natural basis for solving the optimization problems in a straightforward and concise manner.

3.1 Binomial identities

Let \(\varvec{A}\) and \(\varvec{B}\) be square matrices of the same order, and m be a positive integer. Then, the following binomial identity clearly holds:

$$\begin{aligned} (\varvec{A}\oplus \varvec{B})^{m} = \bigoplus _{k=1}^{m}\mathop {\bigoplus }_{i_{0}+i_{1}+\cdots +i_{k}=m-k}\varvec{B}^{i_{0}}(\varvec{A}\varvec{B}^{i_{1}}\varvec{A}\varvec{B}^{i_{2}}\cdots \varvec{A}\varvec{B}^{i_{k}}) \oplus \varvec{B}^{m}. \end{aligned}$$

As an extension of this identity, we derive the following results. First, after summation over all m and rearrangement of the result to collect terms with the same number of factors \(\varvec{A}\), we obtain the matrix equality

$$\begin{aligned} \bigoplus _{k=1}^{m}(\varvec{A}\oplus \varvec{B})^{k} = \bigoplus _{k=1}^{m}\mathop {\bigoplus }_{0\le i_{0}+i_{1}+\cdots +i_{k}\le m-k}\varvec{B}^{i_{0}}(\varvec{A}\varvec{B}^{i_{1}}\cdots \varvec{A}\varvec{B}^{i_{k}}) \oplus \bigoplus _{k=1}^{m}\varvec{B}^{k}. \end{aligned}$$
(1)

Furthermore, by applying the trace and by using its properties, we rewrite (1) in the form of the scalar equality

$$\begin{aligned} \bigoplus _{k=1}^{m}\mathrm {tr}(\varvec{A}\oplus \varvec{B})^{k} = \bigoplus _{k=1}^{m}\mathop {\bigoplus }_{0\le i_{1}+\cdots +i_{k}\le m-k}\mathrm {tr}(\varvec{A}\varvec{B}^{i_{1}}\cdots \varvec{A}\varvec{B}^{i_{k}}) \oplus \bigoplus _{k=1}^{m}\mathrm {tr}\varvec{B}^{k}. \end{aligned}$$
(2)

Both identities (1) and (2) are used below to expand matrix expressions in evaluating the minimum of the objective function.

3.2 Linear inequalities

Suppose that, given a matrix \(\varvec{A}\in \mathbb {X}^{m\times n}\) and a regular vector \(\varvec{d}\in \mathbb {X}^{m}\), the problem is to find all vectors \(\varvec{x}\in \mathbb {X}^{n}\) that satisfy the inequality

$$\begin{aligned} \varvec{A}\varvec{x} \le \varvec{d}. \end{aligned}$$
(3)

A complete direct solution to the problem under fairly general conditions can be found in the following form (see, e.g. Krivulin 2015a).

Lemma 1

For any column-regular matrix \(\varvec{A}\) and regular vector \(\varvec{d}\), all solutions to (3) are given by

$$\begin{aligned} \varvec{x} \le (\varvec{d}^{-}\varvec{A})^{-}. \end{aligned}$$
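In \(\mathbb {R}_{\max ,+}\), the bound \((\varvec{d}^{-}\varvec{A})^{-}\) reads entrywise as \(x_{j}\le \min _{i}(d_{i}-a_{ij})\), and this greatest element is itself a solution; a quick sketch with illustrative names:

```python
import numpy as np

def mp_mul_vec(A, x):
    # (A (x) x)_i = max_j (a_ij + x_j)
    return np.max(A + x[None, :], axis=1)

def greatest_solution(A, d):
    # (d^- A)^- : the maximal x with A (x) x <= d, computed as x_j = min_i (d_i - a_ij)
    return np.min(d[:, None] - A, axis=0)

A = np.array([[0.0, 3.0], [-1.0, 1.0]])
d = np.array([2.0, 4.0])
x_max = greatest_solution(A, d)
# the bound itself satisfies the inequality...
assert np.all(mp_mul_vec(A, x_max) <= d + 1e-12)
# ...and any slightly larger vector violates it
assert not np.all(mp_mul_vec(A, x_max + 0.1) <= d)
```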

Furthermore, we consider the problem: given a matrix \(\varvec{A}\in \mathbb {X}^{n\times n}\) and a vector \(\varvec{b}\in \mathbb {X}^{n}\), find all regular vectors \(\varvec{x}\in \mathbb {X}^{n}\) that solve the inequality

$$\begin{aligned} \varvec{A}\varvec{x}\oplus \varvec{b} \le \varvec{x}. \end{aligned}$$
(4)

To describe a solution to inequality (4) in a compact form, we introduce a function that maps each matrix \(\varvec{A}\in \mathbb {X}^{n\times n}\) onto the scalar

$$\begin{aligned} \mathrm {Tr}(\varvec{A}) = \mathrm {tr}\varvec{A}\oplus \cdots \oplus \mathrm {tr}\varvec{A}^{n}, \end{aligned}$$

and use the asterate operator (the Kleene star), which takes \(\varvec{A}\) to the matrix

$$\begin{aligned} \varvec{A}^{*} = \varvec{I}\oplus \varvec{A}\oplus \cdots \oplus \varvec{A}^{n-1}. \end{aligned}$$

Presented below is a complete solution proposed in Krivulin (2015b).

Theorem 1

For any matrix \(\varvec{A}\) and vector \(\varvec{b}\), the following statements hold:

  1.

    If \(\mathrm {Tr}(\varvec{A})\le \mathbb {1}\), then all regular solutions to inequality (4) are given by \(\varvec{x}=\varvec{A}^{*}\varvec{u}\), where \(\varvec{u}\) is a regular vector such that \(\varvec{u}\ge \varvec{b}\).

  2.

    If \(\mathrm {Tr}(\varvec{A})>\mathbb {1}\), then there is no regular solution.
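A numerical illustration of Theorem 1 in \(\mathbb {R}_{\max ,+}\), where the condition \(\mathrm {Tr}(\varvec{A})\le \mathbb {1}\) becomes \(\mathrm {Tr}(\varvec{A})\le 0\) (a sketch; the helper names are ours):

```python
import numpy as np

def mp_mul(A, B):
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

def mp_mul_vec(A, x):
    return np.max(A + x[None, :], axis=1)

def mp_star(A):
    # A* = I (+) A (+) ... (+) A^{n-1}
    n = A.shape[0]
    I = np.full((n, n), -np.inf)
    np.fill_diagonal(I, 0.0)
    S, P = I.copy(), I.copy()
    for _ in range(n - 1):
        P = mp_mul(P, A)
        S = np.maximum(S, P)
    return S

def Tr(A):
    # Tr(A) = tr A (+) ... (+) tr A^n
    t, P = -np.inf, A.copy()
    for _ in range(A.shape[0]):
        t = max(t, np.max(np.diag(P)))
        P = mp_mul(P, A)
    return t

A = np.array([[-1.0, 0.0], [-2.0, -0.5]])
b = np.array([1.0, 0.0])
assert Tr(A) <= 0.0                # Tr(A) <= 1, so regular solutions exist
x = mp_mul_vec(mp_star(A), b)      # x = A* u with the choice u = b >= b
# x solves A x (+) b <= x
assert np.all(np.maximum(mp_mul_vec(A, x), b) <= x + 1e-12)
```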

To conclude this section, we present a solution to a system that combines inequality (4) with an upper bound on the vector \(\varvec{x}\) in the form

$$\begin{aligned} \begin{array}{rcl} \varvec{A}\varvec{x} \oplus \varvec{b} &{}\le &{} \varvec{x}, \\ \varvec{x} &{}\le &{}\varvec{d}. \end{array} \end{aligned}$$
(5)

By application of both Lemma 1 and Theorem 1, we arrive at the next solution, which is also a direct consequence of the result obtained in Krivulin (2014) for a slightly more general system.

Lemma 2

For any matrix \(\varvec{A}\), vector \(\varvec{b}\) and regular vector \(\varvec{d}\), we denote \(\Delta =\mathrm {Tr}(\varvec{A})\oplus \varvec{d}^{-}\varvec{A}^{*}\varvec{b}\). Then, the following statements hold:

  1.

    If \(\Delta \le \mathbb {1}\), then all regular solutions to system (5) are given by \(\varvec{x}=\varvec{A}^{*}\varvec{u}\), where \(\varvec{u}\) is a regular vector such that \(\varvec{b}\le \varvec{u}\le (\varvec{d}^{-}\varvec{A}^{*})^{-}\).

  2.

    If \(\Delta >\mathbb {1}\), then there is no regular solution.
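Lemma 2 can likewise be verified over \(\mathbb {R}_{\max ,+}\) on a small instance of system (5); the following sketch (illustrative helper names) computes \(\Delta \), picks an admissible \(\varvec{u}\), and checks both inequalities:

```python
import numpy as np

def mp_mul(A, B):
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

def mp_mul_vec(A, x):
    return np.max(A + x[None, :], axis=1)

def mp_star(A):
    n = A.shape[0]
    I = np.full((n, n), -np.inf)
    np.fill_diagonal(I, 0.0)
    S, P = I.copy(), I.copy()
    for _ in range(n - 1):
        P = mp_mul(P, A)
        S = np.maximum(S, P)
    return S

def Tr(A):
    t, P = -np.inf, A.copy()
    for _ in range(A.shape[0]):
        t = max(t, np.max(np.diag(P)))
        P = mp_mul(P, A)
    return t

A = np.array([[-1.0, 0.0], [-2.0, -0.5]])
b = np.array([-1.0, -2.0])
d = np.array([1.0, 1.0])

S = mp_star(A)
# Delta = Tr(A) (+) d^- A* b = max over i, j of (-d_i + (A*)_ij + b_j)
delta = max(Tr(A), np.max(-d[:, None] + S + b[None, :]))
assert delta <= 0.0  # regular solutions exist

u_hi = -np.max(-d[:, None] + S, axis=0)  # upper bound (d^- A*)^-
u = b                                    # any u with b <= u <= u_hi will do
assert np.all(u <= u_hi)
x = mp_mul_vec(S, u)
assert np.all(np.maximum(mp_mul_vec(A, x), b) <= x + 1e-12)
assert np.all(x <= d + 1e-12)
```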

4 Solution to optimization problems

In this section, we consider optimization problems involving the function \(\varvec{x}^{-}\varvec{A}\varvec{x}\), where \(\varvec{A}\) is a given matrix, and \(\varvec{x}\) is the unknown vector. The unconstrained minimization of this function is examined by different methods in various application contexts (Cuninghame-Green 1962; Engel and Schneider 1975; Superville 1978; Cuninghame-Green 1979; Elsner and van den Driessche 2004, 2010). Complete solutions to some constrained problems are proposed in Krivulin (2014, 2015a, b).

We present examples of both unconstrained and constrained problems, and then formulate and solve a new general constrained optimization problem. As a consequence, we offer solutions for some new special cases of the general problem.

The results are given in the context of an arbitrary idempotent semifield in a common form, which can be readily interpreted in terms of particular semifields. Specifically, for the semifield \(\mathbb {R}_{\max ,+}\), we replace \(\oplus \) by \(\max \) and \(\otimes \) by \(+\), and use the relation symbol \(\le \) in the usual sense. In the framework of \(\mathbb {R}_{\min ,\times }\), we put \(\oplus =\min \) and \(\otimes =\times \), and understand the symbol \(\le \) to indicate the order, which is opposite to the standard linear order on \(\mathbb {R}\).

4.1 Examples of optimization problems

We start with an unconstrained problem that has the objective function written in a basic form. Given a matrix \(\varvec{A}\in \mathbb {X}^{n\times n}\), consider the problem to find regular vectors \(\varvec{x}\in \mathbb {X}^{n}\) that

$$\begin{aligned} \text {minimize} \ \varvec{x}^{-}\varvec{A}\varvec{x}. \end{aligned}$$
(6)

A solution to the problem can be derived in several ways (see, e.g. Krivulin 2014, 2015a, b), and takes the following form.

Lemma 3

Let \(\varvec{A}\) be a matrix with spectral radius \(\lambda >\mathbb {0}\). Then, the minimum value in problem (6) is equal to \(\lambda \), and all regular solutions are given by

$$\begin{aligned} \varvec{x} = (\lambda ^{-1}\varvec{A})^{*}\varvec{u}, \quad \varvec{u}\in \mathbb {X}^{n}. \end{aligned}$$
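Lemma 3 is easy to test over \(\mathbb {R}_{\max ,+}\): every vector of the form \((\lambda ^{-1}\varvec{A})^{*}\varvec{u}\) attains the value \(\lambda \), and no vector does better. A sketch with our own helper names:

```python
import numpy as np

def mp_mul(A, B):
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

def mp_mul_vec(A, x):
    return np.max(A + x[None, :], axis=1)

def mp_star(A):
    n = A.shape[0]
    I = np.full((n, n), -np.inf)
    np.fill_diagonal(I, 0.0)
    S, P = I.copy(), I.copy()
    for _ in range(n - 1):
        P = mp_mul(P, A)
        S = np.maximum(S, P)
    return S

def spectral_radius(A):
    n = A.shape[0]
    lam, P = -np.inf, A.copy()
    for m in range(1, n + 1):
        lam = max(lam, np.max(np.diag(P)) / m)
        P = mp_mul(P, A)
    return lam

def objective(A, x):
    # x^- A x = max over i, j of (-x_i + a_ij + x_j)
    return np.max(-x[:, None] + A + x[None, :])

A = np.array([[0.0, 3.0], [-1.0, 1.0]])
lam = spectral_radius(A)   # the maximum cycle mean, equal to 1.0 here

S = mp_star(A - lam)       # (lambda^{-1} A)* ; tropical scaling subtracts lam
rng = np.random.default_rng(0)
# every x = (lambda^{-1}A)* u attains the minimum value lambda ...
for u in rng.uniform(-5.0, 5.0, size=(20, 2)):
    assert abs(objective(A, mp_mul_vec(S, u)) - lam) < 1e-9
# ... and no randomly sampled vector does better
for x in rng.uniform(-5.0, 5.0, size=(200, 2)):
    assert objective(A, x) >= lam - 1e-9
```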

Some extensions of problem (6) were examined in Krivulin (2014, 2015a, b), where more general forms of the objective function are considered and/or further inequality constraints are added. Specifically, a problem with an extended function is solved in Krivulin (2015a). Given a matrix \(\varvec{A}\in \mathbb {X}^{n\times n}\), vectors \(\varvec{p},\varvec{q}\in \mathbb {X}^{n}\), and a scalar \(r\in \mathbb {X}\), the problem is to obtain regular \(\varvec{x}\in \mathbb {X}^{n}\) that

$$\begin{aligned} \text {minimize} \ \varvec{x}^{-}\varvec{A}\varvec{x}\oplus \varvec{x}^{-}\varvec{p}\oplus \varvec{q}^{-}\varvec{x}\oplus r. \end{aligned}$$
(7)

A complete direct solution to the problem is as follows.

Theorem 2

Let \(\varvec{A}\) be a matrix with spectral radius \(\lambda >\mathbb {0}\), and \(\varvec{q}\) be a regular vector. Then, the minimum value in problem (7) is equal to

$$\begin{aligned} \mu = \lambda \oplus \bigoplus _{m=0}^{n-1} (\varvec{q}^{-}\varvec{A}^{m}\varvec{p})^{1/(m+2)} \oplus r, \end{aligned}$$

and all regular solutions are given by

$$\begin{aligned} \varvec{x} = (\mu ^{-1}\varvec{A})^{*}\varvec{u}, \quad \mu ^{-1}\varvec{p} \le \varvec{u} \le \mu (\varvec{q}^{-}(\mu ^{-1}\varvec{A})^{*})^{-}. \end{aligned}$$
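The value \(\mu \) of Theorem 2 and one solution can be computed numerically over \(\mathbb {R}_{\max ,+}\); a sketch on a small instance (all data and helper names are illustrative):

```python
import numpy as np

def mp_mul(A, B):
    # tropical matrix product over R_max,+
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

def mp_mul_vec(A, x):
    return np.max(A + x[None, :], axis=1)

def mp_star(A):
    n = A.shape[0]
    I = np.full((n, n), -np.inf)
    np.fill_diagonal(I, 0.0)
    S, P = I.copy(), I.copy()
    for _ in range(n - 1):
        P = mp_mul(P, A)
        S = np.maximum(S, P)
    return S

def spectral_radius(A):
    n = A.shape[0]
    lam, P = -np.inf, A.copy()
    for m in range(1, n + 1):
        lam = max(lam, np.max(np.diag(P)) / m)
        P = mp_mul(P, A)
    return lam

def objective(A, p, q, r, x):
    # x^- A x (+) x^- p (+) q^- x (+) r
    return max(np.max(-x[:, None] + A + x[None, :]),
               np.max(-x + p), np.max(-q + x), r)

n = 2
A = np.array([[0.0, 3.0], [-1.0, 1.0]])
p = np.array([0.0, -1.0])
q = np.array([2.0, 2.0])
r = -1.0

# mu = lambda (+) sum over m of (q^- A^m p)^{1/(m+2)} (+) r;
# tropical roots divide by the exponent
mu = max(spectral_radius(A), r)
P = np.full((n, n), -np.inf)
np.fill_diagonal(P, 0.0)
for m in range(n):
    mu = max(mu, np.max(-q[:, None] + P + p[None, :]) / (m + 2))
    P = mp_mul(P, A)

S = mp_star(A - mu)                        # (mu^{-1} A)*
ub = mu - np.max(-q[:, None] + S, axis=0)  # mu (q^- (mu^{-1}A)*)^-
u = p - mu                                 # least admissible u = mu^{-1} p
assert np.all(u <= ub)
x = mp_mul_vec(S, u)
assert abs(objective(A, p, q, r, x) - mu) < 1e-9
```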

Suppose now that, given matrices \(\varvec{A},\varvec{B}\in \mathbb {X}^{n\times n}\), and a vector \(\varvec{g}\in \mathbb {X}^{n}\), we need to find regular solutions \(\varvec{x}\in \mathbb {X}^{n}\) to the problem

$$\begin{aligned} \begin{array}{cl} \text {minimize} &{} \quad \varvec{x}^{-}\varvec{A}\varvec{x}, \\ \text {subject to} &{} \quad \varvec{B}\varvec{x}\oplus \varvec{g} \le \varvec{x}. \end{array} \end{aligned}$$
(8)

The next complete solution to the problem is provided in Krivulin (2015b).

Theorem 3

Let \(\varvec{A}\) be a matrix with spectral radius \(\lambda >\mathbb {0}\), and \(\varvec{B}\) a matrix with \(\mathrm {Tr}(\varvec{B})\le \mathbb {1}\). Then, the minimum value in problem (8) is equal to

$$\begin{aligned} \mu = \lambda \oplus \bigoplus _{k=1}^{n-1}\mathop {\bigoplus }_{1\le i_{1}+\cdots +i_{k}\le n-k}\mathrm {tr}^{1/k}(\varvec{A}\varvec{B}^{i_{1}}\cdots \varvec{A}\varvec{B}^{i_{k}}), \end{aligned}$$

and all regular solutions are given by

$$\begin{aligned} \varvec{x} = (\mu ^{-1}\varvec{A}\oplus \varvec{B})^{*}\varvec{u}, \quad \varvec{u} \ge \varvec{g}. \end{aligned}$$

Below, we offer a solution to a new problem that combines the objective function in (7) with the extended set of constraints in (5).

4.2 New constrained optimization problem

This section includes a complete solution to a constrained problem, which presents an extended version of the problems considered above. We follow the approach developed in Krivulin (2014, 2015a, b) to introduce an additional variable, which represents the minimum value of the objective function, and then to reduce the problem to an inequality, where the new variable plays the role of a parameter.

Suppose that, given matrices \(\varvec{A},\varvec{B}\in \mathbb {X}^{n\times n}\), vectors \(\varvec{p},\varvec{q},\varvec{g},\varvec{h}\in \mathbb {X}^{n}\), and a scalar \(r\in \mathbb {X}\), the problem is to find regular vectors \(\varvec{x}\in \mathbb {X}^{n}\) that

$$\begin{aligned} \begin{array}{cl} \text {minimize} &{} \quad \varvec{x}^{-}\varvec{A}\varvec{x}\oplus \varvec{x}^{-}\varvec{p}\oplus \varvec{q}^{-}\varvec{x}\oplus r, \\ \text {subject to} &{} \quad \varvec{B}\varvec{x}\oplus \varvec{g} \le \varvec{x}, \\ &{} \quad \varvec{x} \le \varvec{h}. \end{array} \end{aligned}$$
(9)

We start with some general remarks and useful notation. It immediately follows from Lemma 2 that the inequality constraints in (9) have regular solutions if and only if the condition \(\mathrm {Tr}(\varvec{B})\oplus \varvec{h}^{-}\varvec{B}^{*}\varvec{g}\le \mathbb {1}\) holds, which is itself equivalent to the two conditions \(\mathrm {Tr}(\varvec{B})\le \mathbb {1}\) and \(\varvec{h}^{-}\varvec{B}^{*}\varvec{g}\le \mathbb {1}\).

Clearly, the constraints can be rearranged to provide another representation of the problem in the form

$$\begin{aligned} \begin{array}{cl} \text {minimize} &{} \quad \varvec{x}^{-}\varvec{A}\varvec{x}\oplus \varvec{x}^{-}\varvec{p}\oplus \varvec{q}^{-}\varvec{x}\oplus r, \\ \text {subject to} &{} \quad \varvec{B}\varvec{x} \le \varvec{x}, \\ &{}\quad \varvec{g} \le \varvec{x} \le \varvec{h}. \end{array} \end{aligned}$$
(10)

To describe the solution in a compact form, we introduce an auxiliary notation for large matrix sums. First, we define the matrices \(\varvec{S}_{0}=\varvec{I}\) and

$$\begin{aligned} \varvec{S}_{k} = \mathop {\bigoplus }_{0\le i_{1}+\cdots +i_{k}\le n-k}\varvec{A}\varvec{B}^{i_{1}}\cdots \varvec{A}\varvec{B}^{i_{k}}, \quad k=1,\ldots ,n; \end{aligned}$$
(11)

and note that they satisfy the inequality \(\varvec{S}_{k}\ge \varvec{A}^{k}\).

For a different type of sums, we introduce the notation \(\varvec{T}_{0}=\varvec{B}^{*}\) and

$$\begin{aligned} \varvec{T}_{k} = \mathop {\bigoplus }_{0\le i_{0}+i_{1}+\cdots +i_{k}\le n-k-1} \varvec{B}^{i_{0}}(\varvec{A}\varvec{B}^{i_{1}}\cdots \varvec{A}\varvec{B}^{i_{k}}), \quad k=1,\ldots ,n-1. \end{aligned}$$
(12)

It is easy to see that the matrices are related by the equality \(\varvec{S}_{k+1}=\varvec{A}\varvec{T}_{k}\), which is valid for all \(k=0,1,\ldots ,n-1\). Finally, note that, under the condition \(\varvec{B}=\varvec{0}\), the matrices reduce to \(\varvec{S}_{k}=\varvec{A}^{k}\) and \(\varvec{T}_{k}=\varvec{A}^{k}\).
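The relation \(\varvec{S}_{k+1}=\varvec{A}\varvec{T}_{k}\) can be checked numerically in \(\mathbb {R}_{\max ,+}\) by enumerating the index tuples in (11) and (12); the enumeration below is exponential in k and serves only as a sanity check (all names are ours):

```python
import itertools
import numpy as np

def mp_mul(A, B):
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

def mp_pow(A, m):
    # tropical matrix power, with A^0 = I
    n = A.shape[0]
    P = np.full((n, n), -np.inf)
    np.fill_diagonal(P, 0.0)
    for _ in range(m):
        P = mp_mul(P, A)
    return P

def S(A, B, k, n):
    # S_k as in (11): sum over 0 <= i_1+...+i_k <= n-k of A B^{i_1} ... A B^{i_k}
    out = np.full((n, n), -np.inf)
    for idx in itertools.product(range(n - k + 1), repeat=k):
        if sum(idx) <= n - k:
            term = mp_pow(A, 0)
            for i in idx:
                term = mp_mul(mp_mul(term, A), mp_pow(B, i))
            out = np.maximum(out, term)
    return out

def T(A, B, k, n):
    # T_k as in (12): sum over 0 <= i_0+...+i_k <= n-k-1 of B^{i_0}(A B^{i_1} ... A B^{i_k})
    out = np.full((n, n), -np.inf)
    for idx in itertools.product(range(n - k), repeat=k + 1):
        if sum(idx) <= n - k - 1:
            term = mp_pow(B, idx[0])
            for i in idx[1:]:
                term = mp_mul(mp_mul(term, A), mp_pow(B, i))
            out = np.maximum(out, term)
    return out

n = 3
rng = np.random.default_rng(1)
A = rng.uniform(-3.0, 3.0, (n, n))
B = rng.uniform(-3.0, 3.0, (n, n))
for k in range(n):
    # S_{k+1} = A T_k for k = 0, 1, ..., n-1
    assert np.allclose(S(A, B, k + 1, n), mp_mul(A, T(A, B, k, n)))
```

Note that the case k = 0 also confirms \(\varvec{T}_{0}=\varvec{B}^{*}\) against the definition of the asterate truncated at \(\varvec{B}^{n-1}\).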

We are now in a position to offer a complete solution to problem (9).

Theorem 4

Let \(\varvec{A}\) be a matrix with spectral radius \(\lambda \), and \(\varvec{B}\) be a matrix such that \(\mathrm {Tr}(\varvec{B})\le \mathbb {1}\). Let \(\varvec{p}\) and \(\varvec{g}\) be vectors, \(\varvec{q}\) and \(\varvec{h}\) be regular vectors, and r be a scalar such that \(\varvec{h}^{-}\varvec{B}^{*}\varvec{g}\le \mathbb {1}\) and \(\lambda \oplus (\varvec{q}^{-}\varvec{p})^{1/2}\oplus r>\mathbb {0}\).

Then, the minimum value in problem (9) is equal to

$$\begin{aligned} \theta&= \bigoplus _{k=1}^{n}\mathrm {tr}^{1/k}(\varvec{S}_{k}) \oplus \bigoplus _{k=1}^{n-1}(\varvec{h}^{-}\varvec{T}_{k}\varvec{g})^{1/k} \\&\quad \oplus \bigoplus _{k=0}^{n-1}(\varvec{q}^{-}\varvec{T}_{k}\varvec{g}\oplus \varvec{h}^{-}\varvec{T}_{k}\varvec{p})^{1/(k+1)} \oplus \bigoplus _{k=0}^{n-1}(\varvec{q}^{-}\varvec{T}_{k}\varvec{p})^{1/(k+2)} \oplus r, \end{aligned}$$

and all regular solutions are given by

$$\begin{aligned} \varvec{x} = (\theta ^{-1}\varvec{A}\oplus \varvec{B})^{*}\varvec{u}, \end{aligned}$$

where \(\varvec{u}\) is any regular vector that satisfies the conditions

$$\begin{aligned} \theta ^{-1}\varvec{p}\oplus \varvec{g} \le \varvec{u} \le ((\theta ^{-1}\varvec{q}^{-}\oplus \varvec{h}^{-})(\theta ^{-1}\varvec{A}\oplus \varvec{B})^{*})^{-}. \end{aligned}$$

Proof

We introduce a parameter to represent the minimum value of the objective function, and then reduce the problem to solving a parametrized system of linear inequalities. The necessary and sufficient conditions for the system to have regular solutions serve to evaluate the parameter, whereas the general solution of the system is taken as a complete solution to the initial optimization problem.

Denote by \(\theta \) the minimum of the objective function over all regular vectors \(\varvec{x}\). Then, all regular solutions to problem (9) are determined by the system

$$\begin{aligned} \begin{array}{rcl} \varvec{x}^{-}\varvec{A}\varvec{x}\oplus \varvec{x}^{-}\varvec{p}\oplus \varvec{q}^{-}\varvec{x}\oplus r &{}\le &{} \theta , \\ \varvec{B}\varvec{x}\oplus \varvec{g} &{}\le &{} \varvec{x}, \\ \varvec{x} &{}\le &{} \varvec{h}. \end{array} \end{aligned}$$
(13)

The first inequality at (13) is equivalent to the four inequalities

$$\begin{aligned} \varvec{x}^{-}\varvec{A}\varvec{x} \le \theta , \quad \varvec{x}^{-}\varvec{p} \le \theta , \quad \varvec{q}^{-}\varvec{x} \le \theta , \quad r \le \theta . \end{aligned}$$
(14)

We use these inequalities to derive a lower bound for \(\theta \) and verify that \(\theta \ne \mathbb {0}\). The first inequality at (14) and Lemma 3 imply that \(\theta \ge \varvec{x}^{-}\varvec{A}\varvec{x}\ge \lambda \). From the next two inequalities and a property of the conjugate transposition, we derive \(\theta ^{2}\ge \varvec{q}^{-}\varvec{x}\varvec{x}^{-}\varvec{p}\ge \varvec{q}^{-}\varvec{p}\), which gives \(\theta \ge (\varvec{q}^{-}\varvec{p})^{1/2}\). Since \(\theta \ge r\) as well, we finally obtain a lower bound for \(\theta \) in the form

$$\begin{aligned} \theta \ge \lambda \oplus (\varvec{q}^{-}\varvec{p})^{1/2}\oplus r, \end{aligned}$$
(15)

where the right-hand side is nonzero by the conditions of the theorem.

We can now multiply the first two inequalities at (14) by \(\theta ^{-1}\), and then apply Lemma 3 to the first three. As a result, we have the inequalities

$$\begin{aligned} \theta ^{-1}\varvec{A}\varvec{x} \le \varvec{x}, \quad \theta ^{-1}\varvec{p} \le \varvec{x}, \quad \varvec{x} \le \theta \varvec{q}. \end{aligned}$$

As the next step, we combine these inequalities with those in the system at (13). Specifically, the first two inequalities together with \(\varvec{B}\varvec{x}\oplus \varvec{g}\le \varvec{x}\) give the inequality \((\theta ^{-1}\varvec{A}\oplus \varvec{B})\varvec{x}\oplus \theta ^{-1}\varvec{p}\oplus \varvec{g}\le \varvec{x}\).

In addition, we take the inequalities \(\varvec{x}\le \theta \varvec{q}\) and \(\varvec{x}\le \varvec{h}\), and put them into the forms \(\varvec{x}^{-}\ge \theta ^{-1}\varvec{q}^{-}\) and \(\varvec{x}^{-}\ge \varvec{h}^{-}\). The last two inequalities are combined into one, which is then rewritten to give \(\varvec{x}\le (\theta ^{-1}\varvec{q}^{-}\oplus \varvec{h}^{-})^{-}\).

By coupling the obtained inequalities, we represent system (13) as

$$\begin{aligned} \begin{array}{rcl} (\theta ^{-1}\varvec{A}\oplus \varvec{B})\varvec{x} \oplus \theta ^{-1}\varvec{p}\oplus \varvec{g} &{}\le &{}\varvec{x},\\ \varvec{x} &{}\le &{} (\theta ^{-1}\varvec{q}^{-}\oplus \varvec{h}^{-})^{-}. \end{array} \end{aligned}$$
(16)

Considering that system (16) has the form of (5), we can apply Lemma 2 to examine this system. By the lemma, the necessary and sufficient condition for (16) to have regular solutions takes the form

$$\begin{aligned} \mathrm {Tr}(\theta ^{-1}\varvec{A}\oplus \varvec{B}) \oplus (\theta ^{-1}\varvec{q}^{-}\oplus \varvec{h}^{-})(\theta ^{-1}\varvec{A}\oplus \varvec{B})^{*}(\theta ^{-1}\varvec{p}\oplus \varvec{g}) \le \mathbb {1}. \end{aligned}$$

To solve this inequality with respect to the parameter \(\theta \), we put it in a more convenient form by expanding the left-hand side in powers of \(\theta \).

As a starting point, we examine the matrix asterate

$$\begin{aligned} (\theta ^{-1}\varvec{A}\oplus \varvec{B})^{*} = \bigoplus _{k=0}^{n-1}(\theta ^{-1}\varvec{A}\oplus \varvec{B})^{k} = \varvec{I} \oplus \bigoplus _{k=1}^{n-1}(\theta ^{-1}\varvec{A}\oplus \varvec{B})^{k}. \end{aligned}$$

After application of (1) to the second term, we rearrange the expression to collect terms with the same power of \(\theta \), and then use (12) to write

$$\begin{aligned} (\theta ^{-1}\varvec{A}\oplus \varvec{B})^{*}&= \bigoplus _{k=1}^{n-1}\mathop {\bigoplus }_{0\le i_{0}+i_{1}+\cdots +i_{k}\le n-k-1}\theta ^{-k}\varvec{B}^{i_{0}}(\varvec{A}\varvec{B}^{i_{1}}\cdots \varvec{A}\varvec{B}^{i_{k}}) \oplus \bigoplus _{k=0}^{n-1}\varvec{B}^{k} \\&= \bigoplus _{k=1}^{n-1}\theta ^{-k}\varvec{T}_{k} \oplus \varvec{T}_{0} = \bigoplus _{k=0}^{n-1}\theta ^{-k}\varvec{T}_{k}. \end{aligned}$$
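As a computational aside, the asterate above is a finite max-plus sum of matrix powers and is directly computable. A minimal numpy sketch (the helper names and the test matrix are our own illustrative assumptions, with \(\mathbb {0}=-\infty \) and \(\mathbb {1}=0\)):

```python
import numpy as np

NEG = -np.inf  # tropical zero of R_max,+

def mp_mul(X, Y):
    """Max-plus matrix product: (XY)_ij = max_k (x_ik + y_kj)."""
    return np.max(X[:, :, None] + Y[None, :, :], axis=1)

def asterate(M):
    """Truncated asterate M* = I (+) M (+) ... (+) M^(n-1), tropical sums."""
    n = M.shape[0]
    I = np.full((n, n), NEG)
    np.fill_diagonal(I, 0.0)        # tropical identity: unit 0 on the diagonal
    total, power = I.copy(), I.copy()
    for _ in range(n - 1):
        power = mp_mul(power, M)    # next power M^k
        total = np.maximum(total, power)
    return total

M = np.array([[NEG, -1.0],
              [0.0, NEG]])
print(asterate(M))  # entries [[0, -1], [0, 0]]
```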

By using (2), (11), (12) and properties of the trace function, we also have

$$\begin{aligned} \mathrm {Tr}(\theta ^{-1}\varvec{A}\oplus \varvec{B})&= \bigoplus _{k=1}^{n}\mathrm {tr}(\theta ^{-1}\varvec{A}\oplus \varvec{B})^{k} \\&= \bigoplus _{k=1}^{n}\mathop {\bigoplus }_{0\le i_{1}+\cdots +i_{k}\le n-k}\theta ^{-k}\mathrm {tr}(\varvec{A}\varvec{B}^{i_{1}}\cdots \varvec{A}\varvec{B}^{i_{k}}) \oplus \bigoplus _{k=1}^{n}\mathrm {tr}(\varvec{B}^{k}) \\&= \bigoplus _{k=1}^{n}\theta ^{-k}\mathrm {tr}(\varvec{S}_{k}) \oplus \mathrm {Tr}(\varvec{B}). \end{aligned}$$

Substitution of these results into the condition for regular solutions yields

$$\begin{aligned} \bigoplus _{k=1}^{n}\theta ^{-k}\mathrm {tr}(\varvec{S}_{k}) \oplus \bigoplus _{k=0}^{n-1}\theta ^{-k}(\theta ^{-1}\varvec{q}^{-}\oplus \varvec{h}^{-})\varvec{T}_{k}(\theta ^{-1}\varvec{p}\oplus \varvec{g}) \oplus \mathrm {Tr}(\varvec{B}) \le \mathbb {1}. \end{aligned}$$

Since \(\mathrm {Tr}(\varvec{B})\le \mathbb {1}\) by the conditions of the theorem, the term \(\mathrm {Tr}(\varvec{B})\) does not affect the solution of the inequality, and hence can be omitted. The remaining inequality is equivalent to the system of inequalities

$$\begin{aligned} \theta ^{-k}\mathrm {tr}(\varvec{S}_{k})&\le \mathbb {1}, \quad k=1,\ldots ,n; \\ \theta ^{-k}(\theta ^{-1}\varvec{q}^{-}\oplus \varvec{h}^{-})\varvec{T}_{k}(\theta ^{-1}\varvec{p}\oplus \varvec{g})&\le \mathbb {1}, \quad k=0,1,\ldots ,n-1; \end{aligned}$$

which can be further split into the system

$$\begin{aligned} \theta ^{-k}\mathrm {tr}(\varvec{S}_{k})&\le \mathbb {1}, \quad k=1,\ldots ,n; \\ \theta ^{-k}\varvec{h}^{-}\varvec{T}_{k}\varvec{g}&\le \mathbb {1}, \\ \theta ^{-k-1}(\varvec{q}^{-}\varvec{T}_{k}\varvec{g}\oplus \varvec{h}^{-}\varvec{T}_{k}\varvec{p})&\le \mathbb {1}, \\ \theta ^{-k-2}\varvec{q}^{-}\varvec{T}_{k}\varvec{p}&\le \mathbb {1}, \quad k=0,1,\ldots ,n-1. \end{aligned}$$

Note that \(\varvec{h}^{-}\varvec{T}_{0}\varvec{g}=\varvec{h}^{-}\varvec{B}^{*}\varvec{g}\le \mathbb {1}\) by the conditions of the theorem, and thus the second inequality in the system is valid at \(k=0\) for all \(\theta >\mathbb {0}\).

By solving the inequalities, we have

$$\begin{aligned} \theta&\ge \mathrm {tr}^{1/k}(\varvec{S}_{k}),&k=1,\ldots ,n; \\ \theta&\ge (\varvec{h}^{-}\varvec{T}_{k}\varvec{g})^{1/k},&k=1,\ldots ,n-1; \\ \theta&\ge (\varvec{q}^{-}\varvec{T}_{k}\varvec{g}\oplus \varvec{h}^{-}\varvec{T}_{k}\varvec{p})^{1/(k+1)}, \\ \theta&\ge (\varvec{q}^{-}\varvec{T}_{k}\varvec{p})^{1/(k+2)},&k=0,1,\ldots ,n-1. \end{aligned}$$

The obtained solutions can be combined into one equivalent inequality

$$\begin{aligned}&\theta \ge \bigoplus _{k=1}^{n}\mathrm {tr}^{1/k}(\varvec{S}_{k}) \oplus \bigoplus _{k=1}^{n-1}(\varvec{h}^{-}\varvec{T}_{k}\varvec{g})^{1/k} \\&\qquad \oplus \bigoplus _{k=0}^{n-1}(\varvec{q}^{-}\varvec{T}_{k}\varvec{g}\oplus \varvec{h}^{-}\varvec{T}_{k}\varvec{p})^{1/(k+1)} \oplus \bigoplus _{k=0}^{n-1}(\varvec{q}^{-}\varvec{T}_{k}\varvec{p})^{1/(k+2)}. \end{aligned}$$

We now couple the lower bound given by (15) with that defined by the last inequality. It is not difficult to verify that the right-hand side of this inequality already takes account of the terms \(\lambda \) and \((\varvec{q}^{-}\varvec{p})^{1/2}\) present in (15). Indeed, considering that \(\varvec{S}_{k}\ge \varvec{A}^{k}\), we have

$$\begin{aligned} \bigoplus _{k=1}^{n}\mathrm {tr}^{1/k}(\varvec{S}_{k}) \ge \bigoplus _{k=1}^{n}\mathrm {tr}^{1/k}(\varvec{A}^{k}) = \lambda . \end{aligned}$$

Moreover, since \(\varvec{T}_{0}=\varvec{B}^{*}\ge \varvec{I}\), it is easy to see that

$$\begin{aligned} \bigoplus _{k=0}^{n-1}(\varvec{q}^{-}\varvec{T}_{k}\varvec{p})^{1/(k+2)} \ge (\varvec{q}^{-}\varvec{T}_{0}\varvec{p})^{1/2} \ge (\varvec{q}^{-}\varvec{p})^{1/2}. \end{aligned}$$

By combining all lower bounds obtained for \(\theta \), we arrive at the inequality

$$\begin{aligned}&\theta \ge \bigoplus _{k=1}^{n}\mathrm {tr}^{1/k}(\varvec{S}_{k}) \oplus \bigoplus _{k=1}^{n-1}(\varvec{h}^{-}\varvec{T}_{k}\varvec{g})^{1/k} \\&\qquad \oplus \bigoplus _{k=0}^{n-1}(\varvec{q}^{-}\varvec{T}_{k}\varvec{g}\oplus \varvec{h}^{-}\varvec{T}_{k}\varvec{p})^{1/(k+1)} \oplus \bigoplus _{k=0}^{n-1}(\varvec{q}^{-}\varvec{T}_{k}\varvec{p})^{1/(k+2)} \oplus r. \end{aligned}$$

Since \(\theta \) is assumed to be the minimal value of the objective function, this inequality must hold as an equality, which yields the desired minimum.

Finally, we take the minimum value of \(\theta \), and then apply Lemma 2 to obtain all solutions of the system at (16) in the form

$$\begin{aligned} \varvec{x} = (\theta ^{-1}\varvec{A}\oplus \varvec{B})^{*}\varvec{u}, \quad \theta ^{-1}\varvec{p}\oplus \varvec{g} \le \varvec{u} \le ((\theta ^{-1}\varvec{q}^{-}\oplus \varvec{h}^{-})(\theta ^{-1}\varvec{A}\oplus \varvec{B})^{*})^{-}. \end{aligned}$$

Because the solution obtained is also a complete solution of the initial optimization problem, this ends the proof of the theorem. \(\square \)

We conclude this section with a brief discussion of the computational complexity of the obtained solution, which shows that the complexity is polynomial in the dimension n. Indeed, the complexity is determined by that of computing the minimum value \(\theta \), as the other components of the solution are given by a finite number of matrix and vector operations, and thus clearly take no more than polynomial time.

Furthermore, it follows directly from the expression for \(\theta \) that, if the evaluation of the matrix sequences \(\varvec{S}_{1},\ldots ,\varvec{S}_{n}\) and \(\varvec{T}_{0},\ldots ,\varvec{T}_{n-1}\) has polynomial complexity, then so does that of \(\theta \). Considering that \(\varvec{S}_{k+1}=\varvec{A}\varvec{T}_{k}\) for all \(k=0,\ldots ,n-1\), we only need to verify that the matrices \(\varvec{T}_{k}\) can be obtained in polynomial time.

To describe a polynomial scheme of calculating \(\varvec{T}_{k}\), we first write

$$\begin{aligned} \varvec{T}_{k} = \bigoplus _{l=0}^{n-k-1}\varvec{Q}_{kl}, \quad \varvec{Q}_{kl} = {\bigoplus }_{i_{0}+i_{1}+\cdots +i_{k}=l} \varvec{B}^{i_{0}}(\varvec{A}\varvec{B}^{i_{1}}\cdots \varvec{A}\varvec{B}^{i_{k}}), \end{aligned}$$

where \(\varvec{Q}_{kl}\) is the sum of all matrix products composed of k factors equal to \(\varvec{A}\) and l factors equal to \(\varvec{B}\), with \(\varvec{Q}_{k0}=\varvec{A}^{k}\), \(\varvec{Q}_{0l}=\varvec{B}^{l}\) and \(\varvec{Q}_{00}=\varvec{I}\). In this case, the evaluation of the matrices \(\varvec{T}_{0},\ldots ,\varvec{T}_{n-1}\) reduces to computing the matrices \(\varvec{Q}_{kl}\) for all \(k=0,\ldots ,n-1\) and \(l=0,\ldots ,n-k-1\).

Furthermore, we note that the recurrence \(\varvec{Q}_{kl}=\varvec{A}\varvec{Q}_{k-1,l}\oplus \varvec{B}\varvec{Q}_{k,l-1}\) holds for all \(k,l=1,2,\ldots \). This relation offers a natural way to obtain all matrices \(\varvec{Q}_{kl}\) successively, using two matrix multiplications and one matrix addition per matrix. Since the overall number of matrices involved in the computation is \(1+2+\cdots +(n-1)=n(n-1)/2\), the computation of all matrices \(\varvec{T}_{k}\) requires polynomial time, and thus the entire solution has polynomial complexity.
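The scheme above admits a direct dynamic-programming implementation. The following numpy sketch (with small 2×2 matrices of our own choosing, not taken from the paper) builds all \(\varvec{Q}_{kl}\) by the recurrence \(\varvec{Q}_{kl}=\varvec{A}\varvec{Q}_{k-1,l}\oplus \varvec{B}\varvec{Q}_{k,l-1}\) and then assembles \(\varvec{T}_{k}\) and \(\varvec{S}_{k+1}=\varvec{A}\varvec{T}_{k}\):

```python
import numpy as np

NEG = -np.inf  # tropical zero

def mp_mul(X, Y):
    """Max-plus matrix product: (XY)_ij = max_k (x_ik + y_kj)."""
    return np.max(X[:, :, None] + Y[None, :, :], axis=1)

def build_T_S(A, B):
    """Return T_0,...,T_{n-1} and S_1,...,S_n via the recurrence
    Q_{k,l} = A Q_{k-1,l} (+) B Q_{k,l-1} over the max-plus semiring."""
    n = A.shape[0]
    I = np.full((n, n), NEG)
    np.fill_diagonal(I, 0.0)
    Q = {(0, 0): I}
    for l in range(1, n):                       # Q_{0,l} = B^l
        Q[0, l] = mp_mul(B, Q[0, l - 1])
    for k in range(1, n):
        Q[k, 0] = mp_mul(A, Q[k - 1, 0])        # Q_{k,0} = A^k
        for l in range(1, n - k):
            Q[k, l] = np.maximum(mp_mul(A, Q[k - 1, l]),
                                 mp_mul(B, Q[k, l - 1]))
    # T_k collects all Q_{k,l} with l = 0,...,n-k-1; S_{k+1} = A T_k
    T = [np.maximum.reduce([Q[k, l] for l in range(n - k)]) for k in range(n)]
    S = [mp_mul(A, Tk) for Tk in T]
    return T, S

A = np.array([[1.0, 0.0], [0.0, 2.0]])
B = np.array([[NEG, 0.0], [0.0, NEG]])
T, S = build_T_S(A, B)   # T[0] = B*, S[0] stands for S_1, and so on
```

Each \(\varvec{Q}_{kl}\) costs two max-plus multiplications and one addition, so the whole family is computed in polynomial time, in line with the count above.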

4.3 Some special cases

As direct consequences of the result obtained, we now derive solutions to special cases of problems (9) and (10) with reduced sets of constraints. To begin with, we eliminate the first constraint in (10) and consider the problem

$$\begin{aligned} \begin{array}{cl} \text {minimize} &{} \quad \varvec{x}^{-}\varvec{A}\varvec{x}\oplus \varvec{x}^{-}\varvec{p}\oplus \varvec{q}^{-}\varvec{x}\oplus r, \\ \text {subject to} &{} \quad \varvec{g} \le \varvec{x} \le \varvec{h}. \end{array} \end{aligned}$$
(17)

Clearly, the solution to this problem can be derived from that of (10) by setting \(\varvec{B}=\mathbb {0}\). Under this condition, we have \(\varvec{S}_{k}=\varvec{A}^{k}\) and \(\varvec{T}_{k}=\varvec{A}^{k}\), and the solution is described as follows.

Corollary 1

Let \(\varvec{A}\) be a matrix with spectral radius \(\lambda \). Let \(\varvec{p}\) and \(\varvec{g}\) be vectors, \(\varvec{q}\) and \(\varvec{h}\) be regular vectors, and r be a scalar such that \(\varvec{h}^{-}\varvec{g}\le \mathbb {1}\) and \(\lambda \oplus (\varvec{q}^{-}\varvec{p})^{1/2}\oplus r>\mathbb {0}\). Then, the minimum value in problem (17) is equal to

$$\begin{aligned} \theta&= \lambda \oplus \bigoplus _{k=1}^{n-1}(\varvec{h}^{-}\varvec{A}^{k}\varvec{g})^{1/k} \oplus \bigoplus _{k=0}^{n-1}(\varvec{q}^{-}\varvec{A}^{k}\varvec{g}\oplus \varvec{h}^{-}\varvec{A}^{k}\varvec{p})^{1/(k+1)} \\&\quad \oplus \bigoplus _{k=0}^{n-1}(\varvec{q}^{-}\varvec{A}^{k}\varvec{p})^{1/(k+2)} \oplus r, \end{aligned}$$

and all regular solutions are given by

$$\begin{aligned} \varvec{x} = (\theta ^{-1}\varvec{A})^{*}\varvec{u}, \quad \theta ^{-1}\varvec{p}\oplus \varvec{g} \le \varvec{u} \le ((\theta ^{-1}\varvec{q}^{-}\oplus \varvec{h}^{-})(\theta ^{-1}\varvec{A})^{*})^{-}. \end{aligned}$$

Furthermore, we consider another special case of (10), which takes the form

$$\begin{aligned} \begin{array}{cl} \text {minimize} &{} \quad \varvec{x}^{-}\varvec{A}\varvec{x}\oplus \varvec{x}^{-}\varvec{p}\oplus \varvec{q}^{-}\varvec{x}\oplus r, \\ \text {subject to} &{} \quad \varvec{B}\varvec{x} \le \varvec{x}. \end{array} \end{aligned}$$
(18)

After slight modification of the proof of Theorem 4, we arrive at the next result, which can also be obtained directly by putting \(\varvec{g}=\varvec{0}\) and \(\varvec{h}^{-}=\varvec{0}^{T}\) in the solution of problem (10).

Corollary 2

Let \(\varvec{A}\) be a matrix with spectral radius \(\lambda \), and \(\varvec{B}\) be a matrix such that \(\mathrm {Tr}(\varvec{B})\le \mathbb {1}\). Let \(\varvec{p}\) be a vector, \(\varvec{q}\) be a regular vector, and r be a scalar such that \(\lambda \oplus (\varvec{q}^{-}\varvec{p})^{1/2}\oplus r>\mathbb {0}\). Then, the minimum value in problem (18) is equal to

$$\begin{aligned} \theta = \bigoplus _{k=1}^{n}\mathrm {tr}^{1/k}(\varvec{S}_{k}) \oplus \bigoplus _{k=0}^{n-1}(\varvec{q}^{-}\varvec{T}_{k}\varvec{p})^{1/(k+2)} \oplus r, \end{aligned}$$

and all regular solutions are given by

$$\begin{aligned} \varvec{x} = (\theta ^{-1}\varvec{A}\oplus \varvec{B})^{*}\varvec{u}, \quad \theta ^{-1}\varvec{p} \le \varvec{u} \le \theta (\varvec{q}^{-}(\theta ^{-1}\varvec{A}\oplus \varvec{B})^{*})^{-}. \end{aligned}$$

Finally, note that eliminating both inequality constraints in (9) leads to the same solution as that provided by Theorem 2.

5 Application to project scheduling

We now apply the result obtained to solve an example problem, which is drawn from project scheduling (Demeulemeester and Herroelen 2002; Neumann et al. 2003; T’kindt and Billaut 2006) and serves to motivate and illustrate the study.

Consider a project consisting of a set of activities that are performed in parallel under various temporal constraints given by precedence relationships, release times and time windows. The precedence relationships are defined for pairs of activities and include the start–finish constraints on the minimum allowed time lag between the initiation of one activity and the completion of another, and the start–start constraints on the minimum lag between the initiations of the activities. Once an activity starts, it continues to its completion, and no interruption is allowed. The activities are completed as soon as possible under the start–finish constraints.

The release time constraints take the form of release dates and release deadlines to specify that the activities cannot be initiated, respectively, before and after prescribed times. The time windows are given by lower and upper boundaries, and determine the minimum time slots preallocated to each activity. The activities have to occupy their time windows entirely. If the initiation time of an activity falls to the right of the lower boundary of its window, this time is adjusted by shifting to this boundary. In a similar way, the completion time is set to the upper boundary if it appears to the left of this boundary.

Each activity in the project has its flow-time defined as the duration of the interval between the adjusted initiation and completion times. A schedule is optimal if it minimizes the maximum flow-times over all activities. The problem of interest is to find the initiation and completion times of the activities to provide an optimal schedule subject to the temporal constraints described above.

5.1 Representation and solution of scheduling problem

Suppose a project involves n activities. For each activity \(i=1,\ldots ,n\), let \(x_{i}\) be the initiation and \(y_{i}\) the completion time. We denote the minimum possible time lags between the initiation of activity \(j=1,\ldots ,n\) and the completion of i by \(a_{ij}\), and between the initiations of j and i by \(b_{ij}\). If a time lag is not specified for a pair of activities, we set it to \(-\infty \).

The start–finish constraints yield the equalities

$$\begin{aligned} y_{i} = \max (a_{i1}+x_{1},\ldots ,a_{in}+x_{n}), \quad i=1,\ldots ,n; \end{aligned}$$

whereas the start–start constraints lead to the inequalities

$$\begin{aligned} x_{i} \ge \max (b_{i1}+x_{1},\ldots ,b_{in}+x_{n}), \quad i=1,\ldots ,n. \end{aligned}$$

Let \(g_{i}\) and \(h_{i}\) be, respectively, the possible earliest and latest initiation times. The release date and release deadline constraints are given by the inequalities

$$\begin{aligned} g_{i} \le x_{i} \le h_{i}, \quad i=1,\ldots ,n. \end{aligned}$$

Then, we denote the lower and upper boundaries of the minimum time window for activity i by \(q_{i}\) and \(p_{i}\), respectively. Let \(s_{i}\) be the adjusted initiation time and \(t_{i}\) the adjusted completion time of the activity. Since the time window must be fully occupied, we have

$$\begin{aligned} s_{i} = \min (x_{i},q_{i}) = - \max (-x_{i},-q_{i}), \quad t_{i} = \max (y_{i},p_{i}), \quad i=1,\ldots ,n. \end{aligned}$$

Finally, the maximum flow-time over all activities is given by

$$\begin{aligned} \max (t_{1}-s_{1},\ldots ,t_{n}-s_{n}). \end{aligned}$$

We are now in a position to represent the optimal scheduling problem of interest as that of finding \(x_{i}\), \(y_{i}\), \(s_{i}\) and \(t_{i}\) for all \(i=1,\ldots ,n\) to

$$\begin{aligned} \begin{array}{cll} \text {minimize} &{} \quad \max _{1\le i\le n}(t_{i}-s_{i}), &{} \\ \text {subject to} &{} \quad s_{i} = - \max (-x_{i},-q_{i}), &{} \quad t_{i} = \max (y_{i},p_{i}), \\ &{} \quad y_{i} = \max _{1\le j\le n}(a_{ij}+x_{j}), &{} \quad x_{i} \ge \max _{1\le j\le n}(b_{ij}+x_{j}), \\ &{} \quad g_{i} \le x_{i} \le h_{i}, &{} \quad i=1,\ldots ,n. \end{array} \end{aligned}$$

It is not difficult to see that this problem can be represented and solved within the framework of linear programming, which generally offers algorithmic solutions rather than a direct complete solution in an explicit form.

To obtain a direct solution, we place the problem in the context of tropical mathematics. Considering that the problem is formulated only in terms of the operations of maximum, ordinary addition, and additive inversion, we can rewrite it in the setting of the semifield \(\mathbb {R}_{\max ,+}\) as follows:

$$\begin{aligned} \begin{array}{cll} \text {minimize} &{} \quad \bigoplus _{i=1}^{n}s_{i}^{-1}t_{i}, &{} \\ \text {subject to} &{} \quad s_{i} = (x_{i}^{-1}\oplus q_{i}^{-1})^{-1}, &{} \quad t_{i} = y_{i}\oplus p_{i}, \\ &{} \quad y_{i} = \bigoplus _{j=1}^{n}a_{ij}x_{j}, &{} \quad x_{i} \ge \bigoplus _{j=1}^{n}b_{ij}x_{j}, \\ &{} \quad g_{i} \le x_{i} \le h_{i}, &{} \quad i=1,\ldots ,n. \end{array} \end{aligned}$$

Furthermore, we put the problem into a compact vector form. We introduce the matrix-vector notation

$$\begin{aligned} \varvec{A} = (a_{ij}), \quad \varvec{B} = (b_{ij}), \quad \varvec{x} = (x_{i}), \quad \varvec{y} = (y_{i}), \quad \varvec{g} = (g_{i}), \quad \varvec{h} = (h_{i}), \end{aligned}$$

and write the start–finish, start–start and release time constraints as

$$\begin{aligned} \varvec{y} = \varvec{A}\varvec{x}, \quad \varvec{x} \ge \varvec{B}\varvec{x}, \quad \varvec{g} \le \varvec{x} \le \varvec{h}. \end{aligned}$$

To take into account the time window boundaries and adjusted times, we use the vector notation

$$\begin{aligned} \varvec{s} = (s_{i}), \quad \varvec{t} = (t_{i}), \quad \varvec{p} = (p_{i}), \quad \varvec{q} = (q_{i}). \end{aligned}$$

The vectors of adjusted initiation and completion times take the form

$$\begin{aligned} \varvec{s} = (\varvec{x}^{-}\oplus \varvec{q}^{-})^{-}, \quad \varvec{t} = \varvec{y}\oplus \varvec{p}. \end{aligned}$$

The optimal scheduling problem to minimize the maximum flow-time subject to the temporal constraints under consideration now becomes

$$\begin{aligned} \begin{array}{cl} \text {minimize} &{} \quad \varvec{s}^{-}\varvec{t}, \\ \text {subject to} &{} \quad \varvec{s}^{-} = \varvec{x}^{-}\oplus \varvec{q}^{-}, \quad \varvec{t} = \varvec{y}\oplus \varvec{p}, \\ &{} \quad \varvec{A}\varvec{x} = \varvec{y}, \quad \varvec{B}\varvec{x} \le \varvec{x}, \\ &{} \quad \varvec{g} \le \varvec{x} \le \varvec{h}. \end{array} \end{aligned}$$
(19)

Note that, in the context of scheduling problems, it is natural to assume that the matrix \(\varvec{A}\) is column-regular, and the vectors \(\varvec{p}\), \(\varvec{q}\) and \(\varvec{h}\) are regular.

A complete solution to the problem is given by the next result.

Theorem 5

Let \(\varvec{A}\) be a column-regular matrix, and \(\varvec{B}\) be a matrix such that \(\mathrm {Tr}(\varvec{B})\le \mathbb {1}\). Let \(\varvec{p}\), \(\varvec{q}\) and \(\varvec{h}\) be regular vectors and \(\varvec{g}\) be a vector such that \(\varvec{h}^{-}\varvec{B}^{*}\varvec{g}\le \mathbb {1}\). Then, the minimum flow-time in problem (19) is equal to

$$\begin{aligned} \theta&= \bigoplus _{k=1}^{n}\mathrm {tr}^{1/k}(\varvec{S}_{k}) \oplus \bigoplus _{k=1}^{n-1}(\varvec{h}^{-}\varvec{T}_{k}\varvec{g})^{1/k} \oplus \bigoplus _{k=1}^{n}(\varvec{q}^{-}\varvec{S}_{k}\varvec{g})^{1/k} \nonumber \\&\qquad \oplus \bigoplus _{k=0}^{n-1}(\varvec{h}^{-}\varvec{T}_{k}\varvec{p})^{1/(k+1)} \oplus \bigoplus _{k=0}^{n}(\varvec{q}^{-}\varvec{S}_{k}\varvec{p})^{1/(k+1)}, \end{aligned}$$
(20)

and the vectors of initiation and completion times are given by

$$\begin{aligned} \varvec{x}&= (\theta ^{-1}\varvec{A}\oplus \varvec{B})^{*}\varvec{u},&\varvec{y}&= \varvec{A}(\theta ^{-1}\varvec{A}\oplus \varvec{B})^{*}\varvec{u}, \end{aligned}$$
(21)
$$\begin{aligned} \varvec{s}&= (((\theta ^{-1}\varvec{A}\oplus \varvec{B})^{*}\varvec{u})^{-}\oplus \varvec{q}^{-})^{-},&\varvec{t}&= \varvec{A}(\theta ^{-1}\varvec{A}\oplus \varvec{B})^{*}\varvec{u}\oplus \varvec{p}, \end{aligned}$$
(22)

where \(\varvec{u}\) is any vector that satisfies the conditions

$$\begin{aligned} \theta ^{-1}\varvec{p}\oplus \varvec{g} \le \varvec{u} \le ((\theta ^{-1}\varvec{q}^{-}\varvec{A}\oplus \varvec{h}^{-})(\theta ^{-1}\varvec{A}\oplus \varvec{B})^{*})^{-}. \end{aligned}$$
(23)

Proof

First, we eliminate the vectors \(\varvec{s}\) and \(\varvec{t}\) from problem (19) by representing the objective function as

$$\begin{aligned} \varvec{s}^{-}\varvec{t} = (\varvec{x}^{-}\oplus \varvec{q}^{-})(\varvec{y}\oplus \varvec{p}) = \varvec{x}^{-}\varvec{y}\oplus \varvec{q}^{-}\varvec{y}\oplus \varvec{x}^{-}\varvec{p}\oplus \varvec{q}^{-}\varvec{p}. \end{aligned}$$

Furthermore, we substitute \(\varvec{y}=\varvec{A}\varvec{x}\) to reduce (19) to the problem

$$\begin{aligned} \begin{array}{cl} \text {minimize} &{} \quad \varvec{x}^{-}\varvec{A}\varvec{x}\oplus \varvec{q}^{-}\varvec{A}\varvec{x}\oplus \varvec{x}^{-}\varvec{p}\oplus \varvec{q}^{-}\varvec{p}, \\ \text {subject to} &{} \quad \varvec{B}\varvec{x} \le \varvec{x}, \\ &{} \quad \varvec{g} \le \varvec{x} \le \varvec{h}, \end{array} \end{aligned}$$

which has the form of (10), where \(\varvec{q}^{-}\) is replaced by \(\varvec{q}^{-}\varvec{A}\) and r by \(\varvec{q}^{-}\varvec{p}\).

To apply Theorem 4, we note that its conditions are satisfied under the given assumptions. Specifically, since both vectors \(\varvec{p}\) and \(\varvec{q}\) are regular, we have \(r=\varvec{q}^{-}\varvec{p}>\mathbb {0}\), which provides the last condition of Theorem 4.

Next, we refine the expression for \(\theta \) by applying the identity \(\varvec{A}\varvec{T}_{k}=\varvec{S}_{k+1}\), which is valid for all \(k=0,\ldots ,n-1\).

After some rearrangement of sums, we arrive at (20). Both the representation for \(\varvec{x}\) at (21) and the condition on \(\varvec{u}\) at (23) are directly obtained from Theorem 4. The other expressions in (21) and (22) are immediate consequences. \(\square \)

As before, the solutions to special cases without constraints are readily derived from the general solution offered by Theorem 5. Specifically, we eliminate the boundary constraint \(\varvec{g}\le \varvec{x}\le \varvec{h}\) by setting \(\varvec{g}=\varvec{0}\) and \(\varvec{h}^{-}=\varvec{0}^{T}\), and/or the linear inequality constraint with matrix in the form \(\varvec{B}\varvec{x}\le \varvec{x}\) by setting \(\varvec{B}=\varvec{0}\), which further yields the substitutions \(\varvec{S}_{k}=\varvec{A}^{k}\) and \(\varvec{T}_{k}=\varvec{A}^{k}\).

5.2 Numerical example

To provide a clear illustration of the above result and of the computational technique, we solve in detail a simple low-dimensional problem. Even though the example under consideration is somewhat artificial, it demonstrates well the applicability of the solution to real-world problems of higher dimension.

Let us examine a project that involves \(n=3\) activities under constraints given by the matrices

$$\begin{aligned} \varvec{A} = \left( \begin{array}{c@{\quad }c@{\quad }r} 4 &{} 0 &{} -\infty \\ 2 &{} 3 &{} 1 \\ 1 &{} 1 &{} 3 \end{array} \right) , \quad \varvec{B} = \left( \begin{array}{r@{\quad }r@{\quad }r} -\infty &{} -1 &{} 1 \\ 0 &{} -\infty &{} 2 \\ -1 &{} -\infty &{} -\infty \end{array} \right) , \end{aligned}$$

and by the vectors

$$\begin{aligned} \varvec{p} = \left( \begin{array}{c} 4 \\ 4 \\ 5 \end{array} \right) , \quad \varvec{q} = \left( \begin{array}{c} 3 \\ 2 \\ 1 \end{array} \right) , \quad \varvec{g} = \left( \begin{array}{c} 0 \\ 0 \\ 1 \end{array} \right) , \quad \varvec{h} = \left( \begin{array}{c} 2 \\ 3 \\ 3 \end{array} \right) . \end{aligned}$$

We start with the verification of the existence conditions for regular solutions in Theorem 5. First note that the matrix \(\varvec{A}\) is obviously column-regular. In what follows, we need the powers of the matrix \(\varvec{A}\), which have the form

$$\begin{aligned} \varvec{A}^{2} = \left( \begin{array}{c@{\quad }c@{\quad }c} 8 &{} 4 &{} 1 \\ 6 &{} 6 &{} 4 \\ 5 &{} 4 &{} 6 \end{array} \right) , \quad \varvec{A}^{3} = \left( \begin{array}{c@{\quad }c@{\quad }c} 12 &{} 8 &{} 5 \\ 10 &{} 9 &{} 7 \\ 9 &{} 7 &{} 9 \end{array} \right) . \end{aligned}$$

Then, we take the matrix \(\varvec{B}\) and calculate

$$\begin{aligned} \varvec{B}^{2} = \left( \begin{array}{r@{\quad }r@{\quad }c} 0 &{} -\infty &{} 1 \\ 1 &{} -1 &{} 1 \\ -\infty &{} -2 &{} 0 \end{array} \right) , \quad \varvec{B}^{3} = \left( \begin{array}{r@{\quad }r@{\quad }c} 0 &{} -1 &{} 1 \\ 0 &{} 0 &{} 2 \\ -1 &{} -\infty &{} 0 \end{array} \right) , \quad \mathrm {Tr}(\varvec{B}) = 0. \end{aligned}$$

Furthermore, we successively obtain

$$\begin{aligned} \varvec{B}^{*} = \left( \begin{array}{r@{\quad }r@{\quad }c} 0 &{} -1 &{} 1 \\ 1 &{} 0 &{} 2 \\ -1 &{} -2 &{} 0 \end{array} \right) , \quad \varvec{h}^{-}\varvec{B}^{*} = \left( \begin{array}{rrr} -2&-3&-1 \end{array} \right) , \quad \varvec{h}^{-}\varvec{B}^{*}\varvec{g} = 0. \end{aligned}$$

Since \(\mathrm {Tr}(\varvec{B})=\varvec{h}^{-}\varvec{B}^{*}\varvec{g}=0\), where \(0=\mathbb {1}\), we conclude that the conditions of Theorem 5 are fulfilled, and thus the problem under study has regular solutions.
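These verifications are easy to reproduce numerically. A short numpy sketch in max-plus conventions (⊕ is entrywise maximum, matrix products replace sums of products by maxima of sums, \(\mathbb {0}=-\infty \), \(\mathbb {1}=0\), and the conjugate \(\varvec{h}^{-}\) is the negated transposed vector; the helper names are our own):

```python
import numpy as np

NEG = -np.inf  # tropical zero; the tropical unit is 0

def mp_mul(X, Y):
    """Max-plus matrix product."""
    return np.max(X[:, :, None] + Y[None, :, :], axis=1)

A = np.array([[4.0, 0.0, NEG], [2.0, 3.0, 1.0], [1.0, 1.0, 3.0]])
B = np.array([[NEG, -1.0, 1.0], [0.0, NEG, 2.0], [-1.0, NEG, NEG]])
g = np.array([[0.0], [0.0], [1.0]])
h = np.array([[2.0], [3.0], [3.0]])

n = 3
I = np.full((n, n), NEG)
np.fill_diagonal(I, 0.0)
B2 = mp_mul(B, B)
B3 = mp_mul(B2, B)
Bstar = np.maximum.reduce([I, B, B2])               # B* = I (+) B (+) B^2
TrB = max(M.diagonal().max() for M in (B, B2, B3))  # Tr(B) = tr B (+) tr B^2 (+) tr B^3
hBg = mp_mul(mp_mul(-h.T, Bstar), g)[0, 0]          # h^- B* g
print(TrB, hBg)  # both equal 0.0, the tropical unit
```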

As the next step, we find the minimum value \(\theta \) by application of (20). The evaluation of \(\theta \) involves the matrices \(\varvec{S}_{0},\ldots ,\varvec{S}_{3}\) and \(\varvec{T}_{0},\varvec{T}_{1},\varvec{T}_{2}\), of which \(\varvec{S}_{0}=\varvec{I}\), \(\varvec{T}_{0}=\varvec{B}^{*}\), \(\varvec{T}_{2}=\varvec{A}^{2}\) and \(\varvec{S}_{3}=\varvec{A}\varvec{T}_{2}=\varvec{A}^{3}\) are already available.

To obtain \(\varvec{S}_{1}\), \(\varvec{S}_{2}\) and \(\varvec{T}_{1}\), we calculate the matrices

$$\begin{aligned} \varvec{A}\varvec{B} = \left( \begin{array}{c@{\quad }c@{\quad }c} 0 &{} 3 &{} 5 \\ 3 &{} 1 &{} 5 \\ 2 &{} 0 &{} 3 \end{array} \right) , \quad \varvec{B}\varvec{A} = \left( \begin{array}{c@{\quad }r@{\quad }r} 2 &{} 2 &{} 4 \\ 4 &{} 3 &{} 5 \\ 3 &{} -1 &{} -\infty \end{array} \right) , \end{aligned}$$

and then the matrices

$$\begin{aligned} \varvec{A}\varvec{B}^{2} = \left( \begin{array}{c@{\quad }r@{\quad }c} 4 &{} -1 &{} 5 \\ 4 &{} 2 &{} 4 \\ 2 &{} 1 &{} 3 \end{array} \right) , \quad \varvec{A}\varvec{B}\varvec{A} = \left( \begin{array}{c@{\quad }c@{\quad }c} 6 &{} 6 &{} 8 \\ 7 &{} 6 &{} 8 \\ 6 &{} 4 &{} 6 \end{array} \right) , \quad \varvec{A}^{2}\varvec{B} = \left( \begin{array}{c@{\quad }c@{\quad }c} 4 &{} 7 &{} 9 \\ 6 &{} 5 &{} 8 \\ 5 &{} 4 &{} 6 \end{array} \right) . \end{aligned}$$

After substitution of these matrices, we have

$$\begin{aligned} \varvec{S}_{1} = \left( \begin{array}{c@{\quad }c@{\quad }c} 4 &{} 3 &{} 5 \\ 4 &{} 3 &{} 5 \\ 2 &{} 1 &{} 3 \end{array} \right) , \quad \varvec{S}_{2} = \left( \begin{array}{c@{\quad }c@{\quad }c} 8 &{} 7 &{} 9 \\ 7 &{} 6 &{} 8 \\ 6 &{} 4 &{} 6 \end{array} \right) , \quad \varvec{T}_{1} = \left( \begin{array}{c@{\quad }c@{\quad }c} 4 &{} 3 &{} 5 \\ 4 &{} 3 &{} 5 \\ 3 &{} 1 &{} 3 \end{array} \right) . \end{aligned}$$

Based on the results obtained, we calculate the sum

$$\begin{aligned} \bigoplus _{k=1}^{3}\mathrm {tr}^{1/k}(\varvec{S}_{k}) = 4. \end{aligned}$$

To evaluate the remaining sums, we first find the vectors

$$\begin{aligned} \varvec{h}^{-}\varvec{T}_{0} = ( \begin{array}{ccc} -2&-3&-1 \end{array} ), \quad \varvec{h}^{-}\varvec{T}_{1} = ( \begin{array}{ccc} 2&1&3 \end{array} ), \quad \varvec{h}^{-}\varvec{T}_{2} = ( \begin{array}{ccc} 6&3&3 \end{array} ), \end{aligned}$$

and then obtain

$$\begin{aligned} \varvec{h}^{-}\varvec{T}_{1}\varvec{g} = 4, \quad \varvec{h}^{-}\varvec{T}_{2}\varvec{g} = 6, \quad \varvec{h}^{-}\varvec{T}_{0}\varvec{p} = 4, \quad \varvec{h}^{-}\varvec{T}_{1}\varvec{p} = 8, \quad \varvec{h}^{-}\varvec{T}_{2}\varvec{p} = 10. \end{aligned}$$

With these results, we get another two sums

$$\begin{aligned} \bigoplus _{k=1}^{2}(\varvec{h}^{-}\varvec{T}_{k}\varvec{g})^{1/k} = \bigoplus _{k=0}^{2} (\varvec{h}^{-}\varvec{T}_{k}\varvec{p})^{1/(k+1)} = 4. \end{aligned}$$

Furthermore, we obtain the vectors

$$\begin{aligned} \varvec{q}^{-}\varvec{S}_{0}&= (\begin{array}{ccc} -3&-2&-1 \end{array} ), \quad \varvec{q}^{-}\varvec{S}_{1} = (\begin{array}{ccc} 2&1&3 \end{array} ),\\ \varvec{q}^{-}\varvec{S}_{2}&=(\begin{array}{ccc} 5&4&6 \end{array} ), \quad \varvec{q}^{-}\varvec{S}_{3} = (\begin{array}{ccc} 9&7&8 \end{array} ), \end{aligned}$$

and then calculate

$$\begin{aligned}&\varvec{q}^{-}\varvec{S}_{1}\varvec{g} = 4, \quad \varvec{q}^{-}\varvec{S}_{2}\varvec{g} = 7, \quad \varvec{q}^{-}\varvec{S}_{3}\varvec{g} = 9, \\&\quad \varvec{q}^{-}\varvec{S}_{0}\varvec{p} = 4, \quad \varvec{q}^{-}\varvec{S}_{1}\varvec{p} = 8, \quad \varvec{q}^{-}\varvec{S}_{2}\varvec{p} = 11, \quad \varvec{q}^{-}\varvec{S}_{3}\varvec{p} = 13. \end{aligned}$$

Finally, we use the above results to find the last two sums

$$\begin{aligned} \bigoplus _{k=1}^{3}(\varvec{q}^{-}\varvec{S}_{k}\varvec{g})^{1/k} = \bigoplus _{k=0}^{3}(\varvec{q}^{-}\varvec{S}_{k}\varvec{p})^{1/(k+1)} = 4. \end{aligned}$$

By combining all sums according to (20), we have

$$\begin{aligned} \theta = 4. \end{aligned}$$
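The whole chain of computations leading to this value can be replayed in a few lines of numpy. In the sketch below (our own helper names), tropical powers \(x^{1/k}\) become ordinary divisions \(x/k\), and the conjugate of a column vector is its negated transpose:

```python
import numpy as np

NEG = -np.inf

def mp_mul(X, Y):
    """Max-plus matrix product."""
    return np.max(X[:, :, None] + Y[None, :, :], axis=1)

A = np.array([[4.0, 0.0, NEG], [2.0, 3.0, 1.0], [1.0, 1.0, 3.0]])
B = np.array([[NEG, -1.0, 1.0], [0.0, NEG, 2.0], [-1.0, NEG, NEG]])
p = np.array([[4.0], [4.0], [5.0]])
q = np.array([[3.0], [2.0], [1.0]])
g = np.array([[0.0], [0.0], [1.0]])
h = np.array([[2.0], [3.0], [3.0]])

n = 3
I = np.full((n, n), NEG)
np.fill_diagonal(I, 0.0)
# T_0 = B*, T_1 = A (+) AB (+) BA, T_2 = A^2;  S_0 = I, S_{k+1} = A T_k
T = [np.maximum.reduce([I, B, mp_mul(B, B)]),
     np.maximum.reduce([A, mp_mul(A, B), mp_mul(B, A)]),
     mp_mul(A, A)]
S = [I] + [mp_mul(A, Tk) for Tk in T]

def bilin(u, M, v):
    """u^- M v: scalar max-plus bilinear form with conjugate u^- = -u^T."""
    return mp_mul(mp_mul(-u.T, M), v)[0, 0]

theta = max(
    max(S[k].diagonal().max() / k for k in (1, 2, 3)),       # tr^{1/k}(S_k)
    max(bilin(h, T[k], g) / k for k in (1, 2)),              # (h^- T_k g)^{1/k}
    max(bilin(q, S[k], g) / k for k in (1, 2, 3)),           # (q^- S_k g)^{1/k}
    max(bilin(h, T[k], p) / (k + 1) for k in (0, 1, 2)),     # (h^- T_k p)^{1/(k+1)}
    max(bilin(q, S[k], p) / (k + 1) for k in (0, 1, 2, 3)),  # (q^- S_k p)^{1/(k+1)}
)
print(theta)  # 4.0
```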

To describe the solution set defined by (21) and (23), we first obtain

$$\begin{aligned} \theta ^{-1}\varvec{q}^{-}\varvec{A} = (\begin{array}{ccc} -3&-3&-2 \end{array}), \quad \theta ^{-1}\varvec{q}^{-}\varvec{A}\oplus \varvec{h}^{-} = (\begin{array}{ccc} -2&-3&-2 \end{array}). \end{aligned}$$

We calculate the matrices

$$\begin{aligned} \theta ^{-1}\varvec{A} \oplus \varvec{B} = \left( \begin{array}{r@{\quad }r@{\quad }r} 0 &{} -1 &{} 1 \\ 0 &{} -1 &{} 2 \\ -1 &{} -3 &{} -1 \end{array} \right) , \quad (\theta ^{-1}\varvec{A} \oplus \varvec{B})^{2} = \left( \begin{array}{r@{\quad }r@{\quad }c} 0 &{} -1 &{} 1 \\ 1 &{} -1 &{} 1 \\ -1 &{} -2 &{} 0 \end{array} \right) , \end{aligned}$$

and then find

$$\begin{aligned} (\theta ^{-1}\varvec{A} \oplus \varvec{B})^{*} = \left( \begin{array}{r@{\quad }r@{\quad }c} 0 &{} -1 &{} 1 \\ 1 &{} 0 &{} 2 \\ -1 &{} -2 &{} 0 \end{array} \right) . \end{aligned}$$

With (21), all solutions \(\varvec{x}=(x_{1},x_{2},x_{3})^{T}\) to the problem are given by

$$\begin{aligned} \varvec{x} = (\theta ^{-1}\varvec{A}\oplus \varvec{B})^{*}\varvec{u}, \quad \varvec{u}_{1} \le \varvec{u} \le \varvec{u}_{2}, \end{aligned}$$

where the bounds for the vector \(\varvec{u}=(u_{1},u_{2},u_{3})^{T}\) in (23) are defined as

$$\begin{aligned} \varvec{u}_{1} = \theta ^{-1}\varvec{p}\oplus \varvec{g} = \left( \begin{array}{c} 0 \\ 0 \\ 1 \end{array} \right) , \quad \varvec{u}_{2} = ((\theta ^{-1}\varvec{q}^{-}\varvec{A}\oplus \varvec{h}^{-})(\theta ^{-1}\varvec{A}\oplus \varvec{B})^{*})^{-} = \left( \begin{array}{c} 2 \\ 3 \\ 1 \end{array} \right) . \end{aligned}$$

Note that the columns in the matrix \((\theta ^{-1}\varvec{A}\oplus \varvec{B})^{*}\) are equal up to constant factors, and therefore, this matrix can be represented as

$$\begin{aligned} \left( \begin{array}{r@{\quad }r@{\quad }c} 0 &{} -1 &{} 1 \\ 1 &{} 0 &{} 2 \\ -1 &{} -2 &{} 0 \end{array} \right) = \left( \begin{array}{c} 1 \\ 2 \\ 0 \end{array} \right) \left( \begin{array}{r@{\quad }r@{\quad }c} -1&-2&0 \end{array} \right) . \end{aligned}$$

We introduce a new scalar variable

$$\begin{aligned} v = \left( \begin{array}{r@{\quad }r@{\quad }c} -1&-2&0 \end{array} \right) \varvec{u}, \end{aligned}$$

and rewrite the solution in the form

$$\begin{aligned} \varvec{x} = \left( \begin{array}{c} 1 \\ 2 \\ 0 \end{array} \right) v, \quad v_{1} \le v \le v_{2}, \end{aligned}$$

where the lower and upper bounds on v are given by

$$\begin{aligned} v_{1} = \left( \begin{array}{r@{\quad }r@{\quad }c} -1&-2&0 \end{array} \right) \varvec{u}_{1} = 1, \quad v_{2} = \left( \begin{array}{r@{\quad }r@{\quad }r} -1&-2&0 \end{array} \right) \varvec{u}_{2} = 1. \end{aligned}$$

Since both bounds coincide, we have the single vector of initiation times

$$\begin{aligned} \varvec{x} = \left( \begin{array}{c} 2 \\ 3 \\ 1 \end{array} \right) . \end{aligned}$$

Finally, using formulas (21) and (22), we obtain the vector of completion times and the vectors of adjusted initiation and completion times

$$\begin{aligned} \varvec{y} = \left( \begin{array}{c} 6 \\ 6 \\ 4 \end{array} \right) , \quad \varvec{s} = \left( \begin{array}{c} 2 \\ 2 \\ 1 \end{array} \right) , \quad \varvec{t} = \left( \begin{array}{c} 6 \\ 6 \\ 5 \end{array} \right) . \end{aligned}$$
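As a final cross-check, the schedule itself can be reproduced numerically from (21)–(23). A numpy sketch in the same max-plus conventions as before (⊕ is entrywise maximum, \(\theta ^{-1}\) acts as subtraction of \(\theta \), and a conjugate is a negated transpose; the helper names are our own):

```python
import numpy as np

NEG = -np.inf

def mp_mul(X, Y):
    """Max-plus matrix product."""
    return np.max(X[:, :, None] + Y[None, :, :], axis=1)

A = np.array([[4.0, 0.0, NEG], [2.0, 3.0, 1.0], [1.0, 1.0, 3.0]])
B = np.array([[NEG, -1.0, 1.0], [0.0, NEG, 2.0], [-1.0, NEG, NEG]])
p = np.array([[4.0], [4.0], [5.0]])
q = np.array([[3.0], [2.0], [1.0]])
g = np.array([[0.0], [0.0], [1.0]])
h = np.array([[2.0], [3.0], [3.0]])
theta = 4.0

n = 3
I = np.full((n, n), NEG)
np.fill_diagonal(I, 0.0)
M = np.maximum(A - theta, B)                 # theta^{-1}A (+) B
W = np.maximum.reduce([I, M, mp_mul(M, M)])  # asterate (theta^{-1}A (+) B)*

u_lo = np.maximum(p - theta, g)                    # theta^{-1}p (+) g
row = np.maximum(mp_mul(-q.T, A) - theta, -h.T)    # theta^{-1}q^- A (+) h^-
u_hi = -mp_mul(row, W).T                           # conjugate back to a column

x = mp_mul(W, u_lo)       # here every u in [u_lo, u_hi] yields the same x
y = mp_mul(A, x)
s = np.minimum(x, q)      # s = (x^- (+) q^-)^-
t = np.maximum(y, p)      # t = y (+) p
print(x.ravel(), y.ravel(), s.ravel(), t.ravel(), (t - s).max())
# x = [2 3 1], y = [6 6 4], s = [2 2 1], t = [6 6 5], flow time 4.0
```

The maximum of \(t_{i}-s_{i}\) equals \(\theta =4\), confirming the minimum flow-time obtained above.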