1 Introduction

Forests provide unique ecosystem services to people around the world, and their conservation is essential: massive deforestation must be avoided and forest degradation reduced. In addition, sustainable forest management is important for the carbon balance and contributes to the mitigation of climate change. However, due to demographic pressure, timber needs are increasing significantly to meet the demand for construction and other uses, which poses a double challenge to sustainable forest management: how can the volumes of wood produced be increased while adapting to current and future global changes?

We study a forest management problem in order to analyze the impact on forest dynamics of different assumptions regarding vital rates (growth, mortality, recruitment), as well as of different environmental conditions. Growth and mortality processes are influenced in a nonlinear way by competition for resources: light (mainly), water and nutrients in the soil. Syntheses of modeling work on these problems can be found in Angulo et al. [2], Calsina and Saldana [7] and [8], Chave [9], and Goetz et al. [11] (see also the references therein).

We are interested in optimal forest harvesting. Several models are used in the literature, from discrete, deterministic or statistical points of view. See, for example, Malo et al. [20], where reinforcement learning algorithms are applied to a discrete stochastic forest model, or Fer et al. [10], where Gaussian models are used. But discrete and statistical approaches are not entirely satisfactory, because they provide little qualitative insight into the behavior of the system.

We propose to study the problem through the formulation and analysis of a system of partial differential equations (PDE), as in [7]. The PDE model makes it possible to consider larger scales of space and time, and to study analytically the temporal evolution (trajectories) of the dynamical system described by the size distribution of the species most representative of the forest composition. Here, the PDE model is a size-structured population dynamics system. Finally, an optimal control problem (timber cutting) defines the optimal cut to be respected. The objective functional includes the benefits from timber production plus an ecological term corresponding to the total population of individuals of small diameter, with the goal of allowing regeneration of the forest. This second term plays a role similar to the one in Hritonenko et al. [13] (see also [14]), where the expenses generated by planting new trees are taken into account.

The existence of a solution to the PDE problem considered in this article is not trivial, due to the non-local nature of the model and to the nonlinear terms present in the renewal, competition and growth processes; it requires methods such as the fixed-point method. Moreover, we deal with the optimal control problem: we maximize a benefit function, depending on the timber price parameter, under environmental constraints. The control function (wood cutting) is characterized by an optimality system. We seek to reconcile two objectives: maintaining the natural regeneration of the forest while meeting the demand for wood from the current and future population. Note that harvesting problems for age-structured populations, i.e., the case where the growth velocity equals 1, have been studied previously in Ainseba et al. [1], Bernhard and Veliov [4], Brokate [5], Gurtin and MacCamy [12], and Murphy and Smith [21]. In that case the characteristic lines of the age-structured model are straight, so that the solution can be viewed as the solution of an ordinary differential equation with delay, and the optimal system can be characterized via an adapted Pontryagin principle.

The paper is organized as follows. In Sect. 2, we present the problem under study and give the main definitions. In Sect. 3, we prove the existence and uniqueness of the solution to the nonlinear problem. In Sect. 4, we study the optimal control problem and its characterization by a necessary optimality condition. We conclude in the last section.

2 Position of the Problem

We investigate a nonlinear size-structured forest model with a tree-harvesting function (control), accounting for harvesting benefit (timber production) and the planting of new trees (forest regeneration). We follow the lines of the article by Calsina and Saldana [7], who considered the case of a growth rate depending on the size as well as on the total population. See also the article by Tahvonen [24] and the papers by Kato [15], Kato et al. [16], and Kato and Torikata [18], where some variants of the model in [7] are studied.

We denote by \(u:=u(t,x)\) the density of the tree population at time t and size (diameter) x, which varies from \(\ell _0\) to \(\ell _{max}\), where \(\ell _0\) is the minimum recruitment diameter of a tree, measured as the diameter at breast height (dbh); in tropical forests, one generally counts all living trees of dbh\(>10\) cm. For simplicity, we set \(\ell _0=0\) and write \(\ell _{max}=\ell \) for the maximum dbh.

Then, we consider the intra-species competition function \(E_u(t,x)\), defined by:

$$\begin{aligned} E_u(t,x)=\frac{\pi }{4}\int _{x}^{\ell } y^2\, u(t,y)\, \mathrm{d}y. \end{aligned}$$
(1)

The function \(E_u\) is called the cumulative basal area of trees greater in size than x. It serves as a shading index and measures the effect of shading of larger trees on a tree of size x (see Kohyama [19] and the references therein). This definition is realistic: the assumption of equal access to the light resource is not suitable, since individuals smaller than a given tree do not compete with it for this resource.
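As a concrete illustration, the integral (1) can be approximated from a sampled density by numerical quadrature. The following is a minimal sketch: the grid, the density profile and the function name are hypothetical illustration choices, not data from this paper.

```python
import numpy as np

def cumulative_basal_area(x_grid, u_vals, x):
    """Approximate E_u(t, x) = (pi/4) * integral_x^l of y^2 u(t, y) dy.

    Uses the trapezoidal rule on the sample points with x_grid[i] >= x.
    """
    mask = x_grid >= x
    y = x_grid[mask]
    f = y**2 * u_vals[mask]
    # trapezoidal rule: sum of (f_i + f_{i+1})/2 * (y_{i+1} - y_i)
    return np.pi / 4.0 * np.sum((f[1:] + f[:-1]) * np.diff(y)) / 2.0

# Hypothetical decreasing size distribution (many small trees, few large ones)
x_grid = np.linspace(0.0, 1.0, 201)      # sizes in [0, l] with l = 1
u_vals = 100.0 * np.exp(-5.0 * x_grid)   # density u(t, .) at a fixed time

E_small = cumulative_basal_area(x_grid, u_vals, 0.0)  # shading felt by the smallest trees
E_large = cumulative_basal_area(x_grid, u_vals, 0.8)  # shading felt by a large tree
```

As expected from the definition, \(E_u\) is non-increasing in x: a large tree is shaded by fewer competitors than a small one.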

We study the following problem:

$$\begin{aligned} \begin{array}{lr} \displaystyle \frac{\partial u}{\partial t} +\frac{\partial }{\partial x}\left( V(E_u(t,x),x)u\right) =-\mu (E_u(t,x),x)\,u -v(t)u&{} (t,x)\in \, Q:=]0,T[\times ]0,\ell [,\\ u(t,0)\, V\!\left( E_u(t,0),0\right) =\displaystyle \int _{0}^{\ell }\beta (E_u(t,x),x)\,u(t,x)\,\mathrm{d}x&{}t\in \, ]0,T[,\\ \displaystyle {u(0,x)=u_0(x)}&{}x\in \, ]0,\ell [. \end{array} \end{aligned}$$
(2)

In (2) we find the classical functions:

  • \(V(E_u(t,x),x) : \) growth rate;

  • v(t) : time-dependent harvesting rate; it represents the control function;

  • \(\mu (E_u(t,x),x) : \) the death rate;

  • \(\displaystyle B(t):=\int _{0}^{\ell }\beta (E_u(t,x),x)\,u(t,x)\,\mathrm{d}x :\) the birth function, \(\beta (E_u(t,x),x) : \) the birth rate.
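To fix ideas on how these components interact, system (2) can be discretized with a simple explicit upwind scheme in which the renewal condition enters as the inflow flux at \(x=0\). This is only a sketch: the coefficient functions V, \(\mu \), \(\beta \), the harvesting rate v and all numerical values below are hypothetical choices, not the ones studied in this paper.

```python
import numpy as np

# Minimal explicit upwind sketch for system (2), with hypothetical coefficients.
nx, l, T = 100, 1.0, 0.5
dx = l / nx
x = (np.arange(nx) + 0.5) * dx               # cell centers in ]0, l[

V    = lambda E, x: 0.1 * (1.0 - x) / (1.0 + E)   # growth slows with shading
mu   = lambda E, x: 0.01 + 0.05 * E               # death increases with shading
beta = lambda E, x: 2.0 * x * np.exp(-E)          # fecundity reduced by shading
v    = lambda t: 0.02                             # constant harvesting rate

u = 50.0 * np.exp(-5.0 * x)                  # initial density u_0

def E_of(u):
    # E_u(x_i) = (pi/4) * integral_{x_i}^{l} y^2 u dy, via a reversed cumulative sum
    w = np.pi / 4.0 * x**2 * u * dx
    tail = np.cumsum(w[::-1])[::-1]          # tail[i] = sum over cells j >= i
    return np.concatenate((tail[1:], [0.0]))

dt = 0.5 * dx / 0.1                          # CFL-type restriction (|V| <= 0.1 here)
t = 0.0
while t < T:
    E = E_of(u)
    births = np.sum(beta(E, x) * u) * dx     # renewal condition: inflow flux at x = 0
    F = V(E, x) * u                          # advective flux at cell centers
    F_in = np.concatenate(([births], F[:-1]))  # upwind flux entering each cell
    u = u + dt * (-(F - F_in) / dx - (mu(E, x) + v(t)) * u)
    t += dt
```

The CFL-type restriction on dt keeps the explicit scheme stable and the density non-negative.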

If the solution u is differentiable with respect to the x variable (see below), we can rewrite the first equation of (2) as follows:

$$\begin{aligned} \frac{\partial u}{\partial t} +V(E_u(t,x),x)\;\frac{\partial u}{\partial x} =-G_v(E_u(t,x),x)\,u, \end{aligned}$$
(3)

where

$$\begin{aligned} G_v(E_u(t,x),x)=\mu _v(E_u(t,x),x)+V_x(E_u(t,x),x), \end{aligned}$$
(4)

and where

$$\begin{aligned} \mu _v(E_u(t,x),x)=\mu (E_u(t,x),x)+v(t) \end{aligned}$$
(5)

(notice that \(\mu _0=\mu \)), and where

$$\begin{aligned} \displaystyle V_{x}(E_u(t,x),x) = \frac{\partial V}{\partial E}(E_u(t,x),x) \frac{\partial E_u}{\partial x}(t,x) \displaystyle + \frac{\partial V}{\partial x}(E_u(t,x),x). \end{aligned}$$
(6)

Remark 1

In fact, the mortality rate \(\mu (E_u(t,x),x)\) is the sum of the mortality function due to competition and the natural mortality function \(\mu _{_N}(x)\), where we have:

$$\begin{aligned} \begin{array}{ll} \displaystyle \lim _{x\rightarrow \ell }\int _0^x \mu _{_N}(y)\,\mathrm{d}y=+\infty , \end{array} \end{aligned}$$

which expresses that all individuals die before reaching the maximal size \(x=\ell \) (see, for example, Gurtin and MacCamy [12]).

Definition 2.1

A non-negative function u is a solution of the IBVP (2) if \(E_u(t,x)\) is a continuous function on Q and if it satisfies:

$$\begin{aligned} D\,u(t,x(t))= & {} \lim _{h\rightarrow 0}\left( \frac{u(t+h,x(t+h))-u(t,x(t))}{h} \right) \\= & {} -G_v(E_u(t,x),x)\, u \qquad (t,x)\in Q, \end{aligned}$$

where the characteristic curve \(x(t)=\varphi (t;t_0,x_0)\) passing through \((t_0,x_0) \in Q\), is solution of the ordinary differential equation (ODE):

$$\begin{aligned} \begin{array}{lcl} \displaystyle \frac{\mathrm{d}x}{\mathrm{d}t}(t) = V(E_u(t,x(t)),x(t)), \quad t \in ]0,T[,\\ x(t_0) = x_{0} \in ]0, \ell [. \end{array} \end{aligned}$$
(7)

In particular, a solution to (7) is given by:

$$\begin{aligned} x(t)=x_0+\int _{t_0}^t V(E_u(s,x(s)),x(s))\,\mathrm{d}s. \end{aligned}$$
(8)
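For a given (frozen) competition field, the characteristic ODE (7) can be integrated numerically; the sketch below uses a classical fourth-order Runge–Kutta step, with hypothetical choices of E and V that are not taken from this paper (in the full model, E depends on u itself).

```python
import numpy as np

# RK4 integration of the characteristic ODE (7), for a frozen competition field.
E = lambda t, x: np.exp(-x)                    # hypothetical shading index
V = lambda e, x: 0.1 * (1.0 - x) / (1.0 + e)   # hypothetical growth rate, zero at x = 1

def characteristic(t0, x0, t_final, n_steps=1000):
    """Return times and sizes x(t) along the characteristic through (t0, x0)."""
    ts = np.linspace(t0, t_final, n_steps + 1)
    h = (t_final - t0) / n_steps
    xs = np.empty_like(ts)
    xs[0] = x0
    f = lambda t, x: V(E(t, x), x)             # right-hand side of (7)
    for i in range(n_steps):
        t, x = ts[i], xs[i]
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h * k1 / 2)
        k3 = f(t + h / 2, x + h * k2 / 2)
        k4 = f(t + h, x + h * k3)
        xs[i + 1] = x + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return ts, xs

ts, xs = characteristic(0.0, 0.2, 10.0)
```

With this choice of V, sizes increase along the characteristic but never exceed the maximal size \(\ell =1\), where the growth rate vanishes.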

Denote by \(\varphi (t;\tau ,\eta )\) the characteristic curve passing through the initial pair \((\tau ,\eta )\), and by \(\phi _t:=\varphi (t;0,0)\) the characteristic curve through (0, 0), which separates the trajectories of the individuals present at the initial time \(\tau =0\) from the trajectories of the individuals born after the initial time. In particular, if \(u(t,x)\) is a solution of the problem (2), the initial condition is given by \(u_0(x)=u_0(\varphi (0;0,x))\).

For any \((t,x)\in Q\) such that \(x<\phi _t\), the individuals are born after \(t=0\): at their initial time \(\tau :=\tau (t,x)>0\), their size equals zero (hence \(\varphi (\tau ;t,x)=0\)). By composition, this is equivalent to:

$$\begin{aligned} \varphi (t;\tau ,0)=x. \end{aligned}$$
(9)

We consider (9) as a definition of the initial time \(\tau \) (when \(x<\phi _t\)); it is called the initial time of the cohort through \((t,x)\). Then, integrating along the characteristics x(t), we obtain an explicit formula for the solution. Indeed, we have the following lemma.

Lemma 2.1

Let \(u(t,x)\) be a solution of the problem (2). Then u has the following representation along the characteristic curves:

$$\begin{aligned} u(t,x)= \left\{ \begin{array}{ll} \displaystyle \frac{B(\tau )}{V(E_u(\tau ,0),0)}\; \\ \exp \!\left( -\int _{\tau }^t G_v(E_u(s,\varphi (s;\tau ,0)), \varphi (s;\tau ,0))\,\mathrm{d}s\right) , &{}\text {for a.e.}\, x \in [0,\phi _t[,\\ \displaystyle u_0(x)\;\\ \exp \!\left( -\int _0^t G_v(E_u(s,\varphi (s;0,x)), \varphi (s;0,x))\,\mathrm{d}s\right) , &{}\text {for a.e.}\, x \in [\phi _t,\ell [, \end{array}\right. \end{aligned}$$
(10)

for any time \(t>0\).

Proof

Let \(u(t,x)\) be a solution of the problem (2). Define \(\overline{u}(t;t^*,x^*) =u(t,x(t;t^*,x^*))\), where \(x(t;t^*,x^*)\) is the characteristic curve taking the value \(x^*\) at time \(t^*\). Then, in view of equations (3)–(4) and (7), \(\overline{u}\) satisfies the initial value problem:

$$\begin{aligned} \begin{array}{rcll} \displaystyle \frac{\mathrm{d}\overline{u}}{\mathrm{d}t} (t;t^*,x^*) &{}=&{} -G_v(E_u (t,x(t;t^*,x^*)), x(t;t^*,x^*))\, \overline{u}(t;t^*,x^*) &{}t > t^*,\\ \overline{u}(t^*;t^*,x^*) &{}=&{} u(t^*,x^*)&{}t=t^*. \end{array} \end{aligned}$$

The solution of this problem is represented by the following formula:

$$\begin{aligned} \overline{u}(t;t^*,x^*)&=u(t,x(t;t^*,x^*))\\&=u(t^*,x^*)\,\exp \! \left( -\int _{t^*}^{t} G_v(E_u(s,x(s;t^*,x^*)), x(s;t^*,x^*))\,\mathrm{d}s \right) . \end{aligned}$$

Considering the case \(x:=x(t;t^*,x^*) <\phi _t\) where the initial data is given by (2\(_b\)) and the case \(x:=x(t;t^*,x^*) >\phi _t\) where it is given by (2\(_c\)), we obtain (10). \(\square \)

Remark 2

The above method of characteristics for nonlinear problems is classical; see the article by Gurtin and MacCamy [12]. The method is also detailed in the book by Webb [25].
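In the linear, competition-free special case (constant growth rate and constant total removal rate, so that \(E_u\) plays no role and \(G_v\) is constant), the initial-data branch of (10) reduces to pure transport with exponential decay, which can be checked directly against the PDE. The constants and the initial profile below are hypothetical:

```python
import numpy as np

# Constant-coefficient check of the initial-data branch of (10):
# with V = V0 and mu + v = m constant, G_v = m, phi(s; 0, x) = x + V0*s, and
#     u(t, x) = u0(x - V0*t) * exp(-m*t)   for x >= phi_t = V0*t,
# which should solve u_t + V0*u_x = -m*u.  (Hypothetical values.)
V0, m, t, x = 0.3, 0.4, 1.0, 0.8
u0 = lambda y: np.exp(-y**2)              # hypothetical smooth initial density

def u(t, x):
    """Representation (10), initial-data branch (valid for x >= V0*t)."""
    return u0(x - V0 * t) * np.exp(-m * t)

# PDE residual u_t + V0*u_x + m*u, computed by central differences
eps = 1e-5
u_t = (u(t + eps, x) - u(t - eps, x)) / (2 * eps)
u_x = (u(t, x + eps) - u(t, x - eps)) / (2 * eps)
residual = u_t + V0 * u_x + m * u(t, x)
```

The residual vanishes up to finite-difference error, confirming the representation in this special case.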

3 Well-Posedness of the Problem

We recall some differentiability properties of the characteristic curves, which are used for changing variables.

Lemma 3.1

Let \(\varphi (t;\,\tau , \,\eta )\) be the characteristic curve through \((\tau ,\eta )\) solution to the ODE (7) with \(x(\tau )=\varphi (\tau ;\tau ,\eta )=\eta \). Then, \(\varphi \) is differentiable with respect to \(\tau \) and \(\eta \), and we have:

$$\begin{aligned} \frac{\mathrm{d}\varphi }{\mathrm{d}\tau }(t;\,\tau , \,\eta ) = -V(E_u(\tau ,\eta ),\,\eta )\, \exp \!\Big (\int _{\tau }^{t}V_{x} (E_u(s,\varphi (s; \tau , \eta )),\, \varphi (s; \tau , \eta ))\, \mathrm{d}s\Big ), \end{aligned}$$
(11)

and,

$$\begin{aligned} \frac{\mathrm{d}\varphi }{\mathrm{d}\eta }(t;\,\tau , \,\eta ) = \exp \!\Big (\int _{0}^{t}V_{x} (E_u(s,\varphi (s; \tau , \eta )),\, \varphi (s; \tau , \eta ))\, \mathrm{d}s\Big ). \end{aligned}$$
(12)

Proof

For the proof, one applies the differentiability of solutions of ODEs with respect to parameters and to the initial condition (see, for example, Pontryagin [23], Chap. 4.6). One can find a detailed proof in [18] (Lemma 3.4). \(\square \)

Lemma 3.2

The intra-species competition function \(E_u(t,x)\) is given by the integral equation:

$$\begin{aligned} \begin{array}{lcl} E_u(t,x) &{} = &{} \displaystyle \frac{\pi }{4} \int _{\varphi ^{-1}(t;0,x)}^{t} \varphi ^2(t;\tau ,0)\,B(\tau )\\ &{}&{}\,\exp \!\left( -\int _{\tau }^{t} \mu _v(E_u(s,\varphi (s;\tau ,0)),\varphi (s;\tau ,0))\,\mathrm{d}s\right) \,\mathrm{d}\tau \\ \\ &{} &{} + \displaystyle \frac{\pi }{4}\int _{0}^{\ell } \varphi ^2(t;0,y)\,u_{0}(y)\\ &{}&{}\, \exp \!\left( -\int _0^t \mu _v(E_u(s,\varphi (s;0,y)), \varphi (s;0,y))\,\mathrm{d}s\right) \, \mathrm{d}y. \end{array} \end{aligned}$$
(13)

Moreover, the birth function \(\displaystyle B(t)\) is given by the integral equation:

$$\begin{aligned} \begin{array}{lcl} B(t) &{}=&{} \displaystyle \int _{0}^{t} \beta (E_u(t,\varphi (t;\tau ,0)),\varphi (t;\tau ,0)) B(\tau )\,\\ &{}&{}\,\exp \left( -\int _{\tau }^{t} \mu _v(E_u(s,\varphi (s;\tau ,0)),\varphi (s;\tau ,0)) \,\mathrm{d}s\right) \mathrm{d}\tau \\ \\ &{} &{} + \displaystyle \int _{0}^{\ell } \beta (E_u(t,\varphi (t;0,x)),\varphi (t;0,x))u_{0}(x)\,\\ &{}&{}\,\exp \left( -\int _0^t \mu _v(E_u(s,\varphi (s;0,x)), \varphi (s;0,x))\,\mathrm{d}s\right) \mathrm{d}x. \end{array} \end{aligned}$$
(14)

Proof

Indeed, we have:

$$\begin{aligned} E_u(t,x)=\frac{\pi }{4} \left[ \int _{x}^{\phi _t} y^{2}u(t,y)\,\mathrm{d}y +\int _{\phi _t}^{\ell }y^{2} u(t,y)\,\mathrm{d}y\right] . \end{aligned}$$

Using (10), we obtain the following integral equation for \(E_u(t,x)\):

$$\begin{aligned} \begin{array}{lcl} E_u(t,x) &{} = &{} \displaystyle \frac{\pi }{4} \int _{x}^{\phi _t} \varphi ^2(t;\tau ,0) \frac{B\!\left( \tau \right) }{V(E_u(\tau ,0),0)}\\ &{}&{}\,\exp \!\left( -\int _{\tau }^t G_v(E_u(s,\varphi (s;\tau ,0)), \varphi (s;\tau ,0))\,\mathrm{d}s\right) \, \mathrm{d}\varphi \\ \\ &{} &{} +\displaystyle \frac{\pi }{4} \int _{\phi _t}^{\ell } \varphi ^2(t;0,y)\, u_{0}(y) \\ &{}&{}\exp \!\left( -\int _0^t G_v(E_u(s,\varphi (s;0,y)), \varphi (s;0,y))\,\mathrm{d}s\right) \, \mathrm{d}\varphi \\ \\ &{} = &{}\displaystyle \frac{\pi }{4} \left( I + J\right) . \end{array} \end{aligned}$$

We use (4) and (11), to have:

$$\begin{aligned} \begin{array}{lcl} I &{} = &{} \displaystyle \int _{x}^{\phi _t} \varphi ^2(t;\tau ,0) \frac{B(\tau )}{V(E_u(\tau ,0),0)}\\ &{}&{} \exp \left( -\int _{\tau }^t G_v(E_u(s,\varphi (s;\tau ,0)), \varphi (s;\tau ,0))\,\mathrm{d}s\right) \, \mathrm{d}\varphi \\ \\ &{} = &{} \displaystyle \int _{x}^{\phi _t} \varphi ^2(t;\tau ,0)B(\tau )\, \exp \left( -\int _{\tau }^t\mu _v (E_u(s,\varphi (s;\tau ,0)), \varphi (s;\tau ,0))\,\mathrm{d}s\right) \\ \\ &{} &{} \quad \quad \quad \quad \quad \quad \quad \quad \times \displaystyle \frac{\mathrm{d}\varphi }{V(E_u(\tau ,0),0)\, \exp \!\left( \displaystyle \int _{\tau }^t V_{x}(E_u(s,\varphi (s;\tau ,0)),\varphi (s;\tau ,0))\,\mathrm{d}s \right) } \\ \\ &{} = &{} \displaystyle \int _{t}^{\varphi ^{-1}(t;0,x)}\varphi ^2(t;\tau ,0)B(\tau ) \exp \left( -\int _{\tau }^{t} \mu _v(E_u(s,\varphi (s;\tau ,0)), \varphi (s;\tau ,0))\,\mathrm{d}s\right) \\ &{}&{}\quad \times (-\mathrm{d}\tau ). \end{array} \end{aligned}$$

For the second integral J, we use (12) and we have:

$$\begin{aligned} \begin{array}{lcl} J &{} = &{} \displaystyle \int _{\phi _t}^{\ell } \varphi ^2(t;0,y) \,u_{0}(y) \,\exp \!\left( -\int _0^t G_v(E_u(s,\varphi (s; 0, y)), \varphi (s; 0, y))\,\mathrm{d}s\right) \,\mathrm{d}\varphi \\ \\ &{} = &{} \displaystyle \int _{\phi _t}^{\ell } \varphi ^2(t;0,y)\, u_{0}(y)\exp \!\left( -\int _0^t \mu _v(E_u(s,\varphi (s; 0, y)), \varphi (s; 0, y))\,\mathrm{d}s\right) \\ \\ &{} &{} \displaystyle \quad \quad \quad \quad \quad \quad \quad \quad \quad \times \displaystyle \frac{\mathrm{d}\varphi }{\exp \left( \displaystyle \int _0^t V_{x}(E_u(s,\varphi (s;0,y)),\varphi (s;0,y))\,\mathrm{d}s\right) }\\ \\ &{} = &{} \displaystyle \int _{0}^{\ell }\varphi ^2(t;0,y)\, u_{0}(y)\exp \!\left( -\int _0^t \mu _v(E_u(s,\varphi (s; 0, y)), \varphi (s; 0, y))\,\mathrm{d}s\right) \,\mathrm{d}y. \end{array} \end{aligned}$$

By adding \(I+J,\) we obtain the desired result.

For the proof of (14), we use the same decomposition. Indeed, as in above we have:

$$\begin{aligned} \begin{array}{lcl} B(t) &{} = &{} \displaystyle \left[ \int _{0}^{\phi _t} \beta (E_u(t,x),x)\,u(t,x)\,\mathrm{d}x +\int _{\phi _t}^{\ell } \beta (E_u(t,x),x)\,u(t,x)\,\mathrm{d}x\right] \\ \\ &{} = &{}\!\! \displaystyle \int _{0}^{\phi _t} \frac{\beta (E_u(t, \varphi (t;\tau ,0)), \varphi (t;\tau ,0))\, B\!\left( \tau \right) }{V(E_u(\tau ,0),0)} \\ &{}&{}\, \exp \left( -\int _{\tau }^tG_v (E_u(s,\varphi (s;\tau ,0)), \varphi (s;\tau ,0))\,\mathrm{d}s\right) \mathrm{d}\varphi \\ \\ &{} &{} +\displaystyle \int _{\phi _t}^{\ell } \beta (E_u(t,\varphi (t;0,x)), \varphi (t;0,x)) \,u_{0}(x)\,\\ &{}&{}\exp \left( -\int _0^t G_v (E_u(s,\varphi (s;0,x)), \varphi (s;0,x))\,\mathrm{d}s\right) \mathrm{d}\varphi . \end{array} \end{aligned}$$

Using (4), (11) and (12), we find the result. \(\square \)

Remark 3

In equation (13), we integrate from \(\tau =\varphi ^{-1}(t;0,x)\), which corresponds to the initial time of the characteristic passing through zero, to t. In the following analysis, we consider non-negative times only, since we work on the time interval [0, T].

Next, we prove the existence of a weak solution to problem (2) using fixed-point arguments. From Lemmas 3.1 and 3.2, we deduce that the existence of a solution \(u(t,x)\) is equivalent to the existence of a couple of functions \(E_u(t,x)\) and B(t) (see [7]). We first make the following hypotheses:

\((H_1) : \) V is bounded in \(E_u\) and x, \(\left| V\right| \le V_0\) (for some \(V_0>0\)), and V is Lipschitz continuous with respect to \(E_u\), with Lipschitz constant \(V_L\);

\((H_2) : \) \(\beta \) is a non-negative function, bounded above by \(\beta _0\), and is Lipschitz continuous with respect to \(E_u\) and x, with Lipschitz constant \(\beta _L\);

\((H_3) : \) \(\mu _v\) is a non-negative function, Lipschitz continuous with respect to \(E_u\) and x, with Lipschitz constant \(\mu _L\).

From (8), we have that:

$$\begin{aligned} \varphi (t;\tau ,0)=\int _{\tau }^t V(E_u(s,\varphi (s;\tau ,0)), \varphi (s;\tau ,0))\,\mathrm{d}s, \end{aligned}$$
(15)

where \(x_0=0\), from which we deduce that \(|\varphi ^2(t;\tau ,0)|\le V_0^2\, |t-\tau |^2\), where \(V_0\) is the upper bound of |V| on \(\mathbb {R}\times [0,\ell ]\). For \(\tau =0\), we have:

$$\begin{aligned} \varphi (t;0,y)=y+\int _0^t V(E_u(s,\varphi (s;0,y)), \varphi (s;0,y))\,\mathrm{d}s. \end{aligned}$$
(16)

We deduce that:

$$\begin{aligned} \frac{\pi }{4}\int _0^{\ell } \varphi ^2(t;0,y)u_0(y)\,\mathrm{d}y \le E_0+P_0V_0^2t^2 +2\ell P_0V_0t \le E_0+C_0P_0V_0T, \end{aligned}$$
(17)

where \(\displaystyle C_0=TV_0+2\ell \) (indeed, \(\varphi (t;0,y)\le y+V_0t\), so \(\varphi ^2(t;0,y)\le y^2+2yV_0t+V_0^2t^2\)) and where we use the following notations:

$$\begin{aligned} E_0:=E_{u_0}=\frac{\pi }{4}\int _0^\ell y^2u_0(y)\,\mathrm{d}y \qquad \text {and}\quad P_0:=P_{u_0}=\frac{\pi }{4}\int _0^\ell u_0(y)\,\mathrm{d}y. \end{aligned}$$

Now, let \(K> \max (E_0,P_0)\) and define:

$$\begin{aligned} M=\left\{ f\in \mathcal {C}\!\left( \overline{Q}\right) ; \quad f(0,0)=E_0,\quad \left\| f\right\| \le K\right\} , \end{aligned}$$

where \(\displaystyle \left\| \cdot \right\| :=\left\| \cdot \right\| _{\mathcal {C} \left( \overline{Q}\right) }\) is the sup norm on \(\mathcal {C} \left( \overline{Q}\right) \) with \(\overline{Q}:=\left[ 0,T\right] \times \left[ 0,\ell \right] \) (the set M is a closed metric subset of \(\mathcal {C}\left( \overline{Q} \right) \)). We define the mapping \(\mathcal {E} : M\rightarrow \mathcal {C}\left( \overline{Q} \right) \) as follows: for any fixed \(E_u \in M\), we compute B(t) as the unique solution of the linear Volterra integral equation (14); then \(\mathcal {E}(E_u)\) is given by the right-hand side of (13), evaluated at these \(E_u\) and B.
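The inner step of this construction, solving a linear Volterra equation of the second kind for B, can be sketched with a standard trapezoidal scheme. The kernel and forcing term below are hypothetical stand-ins with a known closed-form solution, not the actual terms of (14):

```python
import numpy as np

def solve_volterra(g, k, T, n=200):
    """Solve B(t) = g(t) + integral_0^t k(t, tau) B(tau) dtau on [0, T]
    by the trapezoidal rule (a classical scheme for Volterra equations
    of the second kind)."""
    ts = np.linspace(0.0, T, n + 1)
    h = T / n
    B = np.empty(n + 1)
    B[0] = g(ts[0])
    for i in range(1, n + 1):
        # trapezoid weights: h/2 at the endpoints, h in between
        acc = 0.5 * h * k(ts[i], ts[0]) * B[0]
        acc += h * sum(k(ts[i], ts[j]) * B[j] for j in range(1, i))
        diag = 0.5 * h * k(ts[i], ts[i])
        B[i] = (g(ts[i]) + acc) / (1.0 - diag)   # solve for the implicit value
    return ts, B

# Hypothetical data with a known solution: B(t) = e^{2t} solves
# B(t) = e^t + integral_0^t e^{t - tau} B(tau) dtau.
ts, B = solve_volterra(np.exp, lambda t, tau: np.exp(t - tau), 1.0)
```

The fixed-point iteration for \(\mathcal {E}\) would alternate such a solve for B with an update of E; here we only illustrate the Volterra step against its exact solution.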

We then have the following theorem:

Theorem 3.1

Assume the hypotheses \((H_1)\)–\((H_3)\) on the functions V, \(\beta \) and \(\mu _v\), respectively, and let the mapping \(\mathcal {E} : M\rightarrow \mathcal {C}\left( \overline{Q} \right) \) be defined as above. Then, for T small enough, \(\mathcal {E}\) is a contraction from M into itself; hence \(\mathcal {E}\) has a unique fixed point and there exists a unique solution to (2).

Proof

Step 1. We show that \(\mathcal {E}\) maps M into itself.

We follow the classical method of Gurtin and MacCamy [12] (see also [7]), beginning by obtaining a bound on B(t). From (14) and from the hypotheses \((H_2)\) and \((H_3)\) on \(\beta \) and \(\mu _v\), respectively, we have \(\displaystyle B(t)\le \frac{4}{\pi } \beta _0P_0+\beta _0 \int _0^tB(\tau )\,\mathrm{d}\tau \), where \(\displaystyle \beta _0 =\sup _{(E_{u},x)\in \mathbb {R}\times [0,\ell ]} \{\beta (E_u(.,.),x)\}\). Using the Gronwall inequality, we obtain:

$$\begin{aligned} B(t)\le \frac{4}{\pi } \beta _0P_0\,e^{\beta _0 t}. \end{aligned}$$
(18)
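The Gronwall step can be illustrated numerically: any non-negative B satisfying \(B(t)\le A + \beta _0\int _0^t B(\tau )\,\mathrm{d}\tau \) is dominated by \(A\,e^{\beta _0 t}\). The sketch below takes a hypothetical kernel strictly smaller than \(\beta _0\), so the inequality holds with room to spare:

```python
import numpy as np

# Numerical illustration of the Gronwall step: B below solves
#     B(t) = A + beta0 * integral_0^t e^{-(t - tau)} B(tau) dtau,
# whose kernel is <= beta0, so B(t) <= A + beta0 * integral_0^t B, and hence
# B(t) <= A * e^{beta0 * t}.  (All constants are hypothetical.)
A, beta0, T, n = 2.0, 1.5, 2.0, 2000
h = T / n
ts = np.linspace(0.0, T, n + 1)
B = np.empty(n + 1)
B[0] = A
for i in range(1, n + 1):
    # left-rectangle rule for the integral term
    integral = h * np.sum(np.exp(-(ts[i] - ts[:i])) * B[:i])
    B[i] = A + beta0 * integral
```

The computed B stays well below the Gronwall envelope \(A\,e^{\beta _0 t}\), as the inequality predicts.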

Now, we put this in (13) and we use the hypothesis \((H_1)\) for V. Then, we obtain:

$$\begin{aligned} \begin{array}{lcl} \left| \mathcal {E}(E_u)(t,x) \right| &{} \le &{} \displaystyle \beta _0P_0\!\!\int _{0}^{t} \!\varphi ^2(t;\tau ,0) \exp (\beta _0\tau )\\ &{}&{}\,\exp \!\left( -\!\!\int _{\tau }^{t} \mu _v(E_u(s,\varphi (s;\tau ,0)),\varphi (s;\tau ,0))\,\mathrm{d}s\right) \!\mathrm{d}\tau \\ \\ &{} &{} + \displaystyle \frac{\pi }{4}\int _{0}^{\ell } \varphi ^2(t;0,y)\,u_{0}(y)\,\\ &{}&{} \exp \!\left( -\int _0^t \mu _v(E_u(s,\varphi (s;0,y)), \varphi (s;0,y))\,\mathrm{d}s\right) \,\mathrm{d}y\\ \\ &{}\le &{}\displaystyle P_0V_0^2T^2 e^{\beta _0T}+E_0+C_0P_0V_0T \; \le K \end{array} \end{aligned}$$

for T small enough, if necessary. Hence, we have \(\mathcal {E}(E_u)\in M\).

Step 2. We show that \(\mathcal {E}\) is a contraction.

Let \(E_{u_1}\) and \(E_{u_2}\) be two elements of M corresponding to two functions \(u_1\) and \(u_2\) such that \(u_1(0,x)=u_2(0,x)=u_0(x)\). From the expressions (15) and (16), we use the following notations:

$$\begin{aligned} \begin{array}{ll} \displaystyle \varphi _{i\tau }(t) :=\varphi _{E_{u_i}}(t;\tau ,0) =\int _{\tau }^t V(E_{u_i}(s,\varphi (s;\tau ,0)), \varphi (s;\tau ,0))\,\mathrm{d}s, \quad i=1,2,\\ \displaystyle \varphi _{iy}(t) :=\varphi _{E_{u_i}}(t;0,y) =y+ \int _0^t V(E_{u_i}(s,\varphi (s;0,y)), \varphi (s;0,y))\,\mathrm{d}s, \quad i=1,2. \end{array} \end{aligned}$$
(19)

For simplicity, we also use the notations:

$$\begin{aligned} \begin{array}{ll} &{}\displaystyle \mu _{i\tau }(t):=\mu _v\left( E_{u_i}(t,\varphi (t;\tau ,0)), \varphi (t;\tau ,0)\right) ,\\ &{} \displaystyle \beta _{i\tau }(t):=\beta \left( E_{u_i}(t,\varphi (t;\tau ,0)), \varphi (t;\tau ,0)\right) ,\\ &{}\displaystyle \mu _{iy}(t):=\mu _v\left( E_{u_i}(t,\varphi (t;0,y)), \varphi (t;0,y)\right) ,\\ &{}\displaystyle \beta _{iy}(t) :=\beta \left( E_{u_i}(t,\varphi (t;0,y)), \varphi (t;0,y)\right) . \end{array} \end{aligned}$$
(20)

Then, we have:

\( \begin{array}{l} \mathcal {E}(E_{u_1})(t,x) -\mathcal {E}(E_{u_2})(t,x)\\ \displaystyle \quad = \displaystyle -\frac{\pi }{4}\int _{\varphi ^{-1}(t;0,x)}^t \left( \varphi _{1\tau }^2\,\mathcal {B}(E_{u_1})(\tau ) -\varphi _{2\tau }^2\,\mathcal {B}(E_{u_2})(\tau )\right) \exp \left( -\int _{\tau }^t\mu _{1\tau }(s)\,\mathrm{d}s\right) \mathrm{d}\tau \\ \\ \quad \quad \displaystyle -\frac{\pi }{4}\int _{\varphi ^{-1}(t;0,x)}^t \varphi _{2\tau }^2\,\mathcal {B}(E_{u_2})(\tau )\\ \left( \exp \left( -\int _{\tau }^t\mu _{1\tau }(s)\,\mathrm{d}s\right) -\exp \left( -\int _{\tau }^t\mu _{2\tau }(s)\,\mathrm{d}s\right) \right) \mathrm{d}\tau \\ \\ \quad \quad \displaystyle +\frac{\pi }{4}\int _0^\ell \left( \varphi _{1y}^2-\varphi _{2y}^2\right) \,u_0(y)\, \exp \left( -\int _0^t\mu _{1y}(s)\,\mathrm{d}s\right) \mathrm{d}y\\ \\ \quad \quad \displaystyle +\frac{\pi }{4}\int _0^\ell \varphi _{2y}^2\,u_0(y)\, \left( \exp \left( -\int _0^t\mu _{1y}(s)\,\mathrm{d}s\right) -\exp \left( -\int _0^t\mu _{2y}(s)\,\mathrm{d}s\right) \right) \mathrm{d}y, \end{array}\)

where \(\mathcal {B}(E_{u_i})\) denotes the birth function B associated with \(E_{u_i}\) through (14), and where we used the identity \(a_1b_1-a_2b_2 =a_2(b_1-b_2) +b_1(a_1-a_2)\), valid for any functions or reals \(a_1\), \(a_2\), \(b_1\) and \(b_2\).

Recall that we consider a non-negative time variable \(t\in [0,T]\) (see Remark 3). Thus, we have the bounds:

$$\begin{aligned}&\displaystyle \left| \mathcal {E}(E_{u_1})(t,x) -\mathcal {E}(E_{u_2})(t,x)\right| \\&\quad \displaystyle \le \frac{\pi }{4}\int _0^t \left| \varphi _{1\tau }^2\,\mathcal {B}(E_{u_1})(\tau ) -\varphi _{2\tau }^2\,\mathcal {B}(E_{u_2})(\tau )\right| \,\mathrm{d}\tau \\&\quad \quad \displaystyle +\frac{\pi }{4}\int _0^t \varphi _{2\tau }^2\, \left| \mathcal {B}(E_{u_2})(\tau )\right| \left( \int _{\tau }^t\left| \mu _{1\tau }(s) -\mu _{2\tau }(s)\right| \,\mathrm{d}s\right) \mathrm{d}\tau \\&\quad \quad \displaystyle +\frac{\pi }{4}\int _0^\ell \left( \left| \varphi _{1y} -\varphi _{2y}\right| \left| \varphi _{1y} +\varphi _{2y}\right| |u_0(y)|\right) \,\mathrm{d}y\\&\quad \quad \displaystyle +\frac{\pi }{4}\int _0^\ell \varphi _{2y}^2\,|u_0(y)|\, \left( \int _0^t\left| \mu _{1y}(s)-\mu _{2y}(s)\right| \,\mathrm{d}s\right) \mathrm{d}y =\frac{\pi }{4}\left( I_1+I_2+I_3+I_4\right) , \end{aligned}$$

using in particular the fact that \(\displaystyle |e^{-x}-e^{-y}|\le |x-y|\) for every \(x,y>0\). For the first integral \(I_1\), we have:

$$\begin{aligned}&\displaystyle \left| \varphi _{1\tau }^2\,\mathcal {B}(E_{u_1})(\tau ) -\varphi _{2\tau }^2\,\mathcal {B}(E_{u_2})(\tau )\right| \\&\quad \displaystyle \le \displaystyle \beta _0 \int _0^{\tau }\left| \varphi _{1\tau }^2\, \mathcal {B}(E_{u_1})(s) -\varphi _{2\tau }^2\, \mathcal {B}(E_{u_2})(s)\right| \,\mathrm{d}s\\&\qquad \displaystyle +\frac{4}{\pi }\beta _0P_0 e^{\beta _0\tau }\int _0^{\tau } \varphi _{2\tau }^2\,\\&\qquad \times \left| \beta _{1\tau } \exp \left( -\int _{s}^{\tau } \mu _{1s}(\alpha )\, \mathrm{d}\alpha \right) -\beta _{2\tau } \exp \left( -\int _{s}^{\tau } \mu _{2s}(\alpha )\, \mathrm{d}\alpha \right) \right| \mathrm{d}s\\&\qquad \displaystyle +\int _0^\ell \left| \varphi _{1\tau }-\varphi _{2\tau }\right| \left| \varphi _{1\tau }+\varphi _{2\tau }\right| \, u_0(y)\, \beta _{1y}\exp \left( -\int _0^{\tau } \mu _{1y}(\alpha )\,\mathrm{d}\alpha \right) \mathrm{d}y\\&\qquad \displaystyle +\int _0^\ell \varphi _{2\tau }^2\,u_0(y)\, \left| \beta _{1y}\exp \left( -\int _0^{\tau } \mu _{1y}(\alpha )\,\mathrm{d}\alpha \right) -\beta _{2y}\exp \left( -\int _0^{\tau } \mu _{2y}(\alpha )\,\mathrm{d}\alpha \right) \right| \mathrm{d}y \end{aligned}$$

that we write as:

$$\begin{aligned}&\displaystyle \left| \varphi _{1\tau }^2\,\mathcal {B}(E_{u_1})(\tau ) -\varphi _{2\tau }^2\,\mathcal {B}(E_{u_2})(\tau )\right| \\&\quad \displaystyle \le \displaystyle \beta _0\int _0^{\tau }\left| \varphi _{1\tau }^2\,\mathcal {B}(E_{u_1})(s) -\varphi _{2\tau }^2\,\mathcal {B}(E_{u_2})(s)\right| \,\mathrm{d}s\\&\qquad \displaystyle +\frac{4}{\pi }\beta _0P_0e^{\beta _0\tau }\int _0^{\tau }\varphi _{2\tau }^2\, \beta _{1\tau }\left| \exp \left( -\int _{s}^{\tau }\mu _{1s}(\alpha )\, \mathrm{d}\alpha \right) -\exp \left( -\int _{s}^{\tau }\mu _{2s}(\alpha )\, \mathrm{d}\alpha \right) \right| \mathrm{d}s\\&\qquad \displaystyle +\frac{4}{\pi }\beta _0P_0e^{\beta _0\tau }\int _0^{\tau }\varphi _{2\tau }^2\, \left| \beta _{1\tau } - \beta _{2\tau } \right| \exp \left( -\int _{s}^{\tau }\mu _{2s}(\alpha )\, \mathrm{d}\alpha \right) \,\mathrm{d}s\\&\qquad \displaystyle +\int _0^\ell \left| \varphi _{1\tau }-\varphi _{2\tau }\right| \left| \varphi _{1\tau }+\varphi _{2\tau }\right| \, u_0(y)\, \beta _{1y}\exp \left( -\int _0^{\tau } \mu _{1y}(\alpha )\,\mathrm{d}\alpha \right) \mathrm{d}y\\&\qquad \displaystyle +\int _0^\ell \varphi _{2\tau }^2\,u_0(y)\, \beta _{1y}\left| \exp \left( -\int _0^{\tau } \mu _{1y}(\alpha )\,\mathrm{d}\alpha \right) -\exp \left( -\int _0^{\tau } \mu _{2y}(\alpha )\,\mathrm{d}\alpha \right) \right| \mathrm{d}y\\&\qquad \displaystyle +\int _0^\ell \varphi _{2\tau }^2\, u_0(y)\, \left| \beta _{1y}-\beta _{2y} \right| \exp \left( -\int _0^{\tau } \mu _{2y}(\alpha )\,\mathrm{d}\alpha \right) \mathrm{d}y \\&\quad \displaystyle \le \displaystyle \beta _0\int _0^{\tau }\left| \varphi _{1\tau }^2\,\mathcal {B}(E_{u_1})(s) -\varphi _{2\tau }^2\,\mathcal {B}(E_{u_2})(s)\right| \,\mathrm{d}s+J_1+J_2+J_3+J_4 +J_5. \end{aligned}$$

In \(J_3\), we have from (19) the following estimate:

$$\begin{aligned}&\displaystyle \left| \varphi _{1\tau } -\varphi _{2\tau }\right| \\&\quad =\displaystyle \left| \int _{\tau }^t \left( V(E_{u_1}(s,\varphi (s;\tau ,0)), \varphi (s;\tau ,0)) -V(E_{u_2}(s,\varphi (s;\tau ,0)), \varphi (s;\tau ,0))\right) \,\mathrm{d}s \right| \\&\quad \le \displaystyle V_L \int _0^{t}\left| E_{u_1}(s,\varphi (s; \tau ,0)) -E_{u_2}(s,\varphi (s;\tau ,0))\right| \,\mathrm{d}s \le T\, V_L \left\| E_{u_1}-E_{u_2}\right\| , \end{aligned}$$

where \(V_L\) is the Lipschitz constant of V. Thus,

$$\begin{aligned} \begin{array}{ccl} \displaystyle J_3 &{}=&{}\displaystyle \int _0^\ell \left| \varphi _{1\tau }-\varphi _{2\tau }\right| \left| \varphi _{1\tau }+\varphi _{2\tau }\right| \, u_0(y)\, \beta _{1y}\exp \left( -\int _0^{\tau } \mu _{1y}(\alpha )\,\mathrm{d}\alpha \right) \mathrm{d}y\\ &{}\le &{}\displaystyle \frac{8}{\pi } \beta _0P_0V_0 V_LT^2 \left\| E_{u_1}-E_{u_2}\right\| . \end{array} \end{aligned}$$

Now, using the fact that \(\beta \) and \(\mu \) are Lipschitz continuous with respect to E and x, and using (20), we obtain successively:

\( \begin{array}{ccl} \displaystyle J_1 &{}=&{}\displaystyle \frac{4}{\pi }\beta _0P_0e^{\beta _0\tau }\int _0^{\tau }\varphi _{2\tau }^2\, \beta _{1\tau }\left| \exp \left( -\int _{s}^{\tau }\mu _{1s}(\alpha )\, \mathrm{d}\alpha \right) -\exp \left( -\int _{s}^{\tau }\mu _{2s}(\alpha )\, \mathrm{d}\alpha \right) \right| \mathrm{d}s\\ &{}\le &{}\displaystyle \frac{4}{\pi } \beta _0^2P_0e^{\beta _0T} V_0^2T^3\mu _L \left\| E_{u_1}-E_{u_2}\right\| , \end{array}\)

\( \begin{array}{ccl} \displaystyle J_2 &{}=&{}\displaystyle \frac{4}{\pi }\beta _0P_0e^{\beta _0\tau }\int _0^{\tau }\varphi _{2\tau }^2\, \left| \beta _{1\tau } - \beta _{2\tau } \right| \exp \left( -\int _{s}^{\tau }\mu _{2s}(\alpha )\, \mathrm{d}\alpha \right) \,\mathrm{d}s\\ &{}\le &{}\displaystyle \frac{4}{\pi } \beta _0P_0e^{\beta _0T} V_0^2T^3\beta _L \left\| E_{u_1}-E_{u_2}\right\| , \end{array}\)

\( \begin{array}{ccl} \displaystyle J_4 &{}=&{}\displaystyle \int _0^\ell \varphi _{2\tau }^2\,u_0(y)\, \beta _{1y}\left| \exp \left( -\int _0^{\tau } \mu _{1y}(\alpha )\,\mathrm{d}\alpha \right) -\exp \left( -\int _0^{\tau } \mu _{2y}(\alpha )\,\mathrm{d}\alpha \right) \right| \mathrm{d}y\\ &{}\le &{}\displaystyle \frac{4}{\pi } \beta _0P_0V_0^2 T^2\mu _L \left\| E_{u_1}-E_{u_2}\right\| , \end{array} \)

and,

$$\begin{aligned}J_5 =\int _0^\ell \varphi _{2\tau }^2\,u_0(y)\, \left| \beta _{1y}-\beta _{2y}\right| \exp \left( -\int _0^{\tau } \mu _{2y}(\alpha )\,\mathrm{d}\alpha \right) \mathrm{d}y \le \frac{4}{\pi }P_0V_0^2 T^2\beta _L \left\| E_{u_1}-E_{u_2}\right\| . \end{aligned}$$

Summarizing, we obtain:

$$\begin{aligned}&\left| \varphi _{1\tau }^2\,\mathcal {B}(E_{u_1})(\tau ) -\varphi _{2\tau }^2\,\mathcal {B}(E_{u_2})(\tau )\right| \le \beta _0\int _0^{\tau }\left| \varphi _{1\tau }^2\,\mathcal {B}(E_{u_1})(s) -\varphi _{2\tau }^2\,\mathcal {B}(E_{u_2})(s)\right| \,\mathrm{d}s\\&\quad +\frac{4}{\pi }\widetilde{C}T^2 \left\| E_{u_1}-E_{u_2}\right\| , \end{aligned}$$

where

$$\begin{aligned}\widetilde{C} =P_0V_0 \left( \beta _0^2e^{\beta _0T}V_0T\mu _L +\beta _0e^{\beta _0T}V_0T\beta _L +2\beta _0V_L +\beta _0V_0\mu _L +V_0\beta _L\right) . \end{aligned}$$

Using Gronwall’s inequality, we obtain:

$$\begin{aligned} \left| \varphi _{1\tau }^2\,\mathcal {B}(E_{u_1})(\tau ) -\varphi _{2\tau }^2\,\mathcal {B}(E_{u_2})(\tau )\right| \le \frac{4}{\pi }\widetilde{C}T^2 e^{\beta _0\tau } \left\| E_{u_1}-E_{u_2}\right\| . \end{aligned}$$
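The version of Gronwall's lemma used in this step is the standard integral form, applied with \(a=\frac{4}{\pi }\widetilde{C}T^2 \left\| E_{u_1}-E_{u_2}\right\| \) and \(b=\beta _0\):

```latex
% Integral form of Gronwall's lemma: if \phi \ge 0 is continuous on [0,T]
% and satisfies the integral inequality below with constants a, b \ge 0,
% then the exponential bound follows.
\phi(\tau) \le a + b\int_0^{\tau}\phi(s)\,\mathrm{d}s
\quad\Longrightarrow\quad
\phi(\tau) \le a\,e^{b\tau},
\qquad \tau \in [0,T].
```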

We deduce that:

$$\begin{aligned} I_1=\displaystyle \int _0^t\left| \varphi _{1\tau }^2\,\mathcal {B}(E_{u_1})(\tau ) -\varphi _{2\tau }^2\,\mathcal {B}(E_{u_2})(\tau )\right| \mathrm{d}\tau \le \frac{4}{\pi }\widetilde{C}T^3 e^{\beta _0T} \, \left\| E_{u_1}-E_{u_2}\right\| . \end{aligned}$$

For the remaining integrals, we have:

$$\begin{aligned}&I_2=\int _0^t \varphi _{2\tau }^2\, \left| \mathcal {B}(E_{u_2})(\tau )\right| \left( \int _{\tau }^t\left| \mu _{1\tau }(s) -\mu _{2\tau }(s)\right| \,\mathrm{d}s\right) \mathrm{d} \tau \\&\quad \quad \le \frac{4}{\pi } \beta _0P_0e^{\beta _0T}T^4V_0^2 \mu _L\left\| E_{u_1}-E_{u_2}\right\| , \\&I_3=\int _0^\ell \left( \left| \varphi _{1y} -\varphi _{2y}\right| \left| \varphi _{1y} +\varphi _{2y}\right| |u_0(y)|\right) \,\mathrm{d}y \le \frac{8}{\pi } P_0TV_L(TV_0+\ell ) \left\| E_{u_1}-E_{u_2}\right\| , \\&I_4=\int _0^\ell \varphi _{2y}^2\,|u_0(y)|\, \left( \int _0^t\left| \mu _{1y}(s) -\mu _{2y}(s)\right| \,\mathrm{d}s\right) \mathrm{d}y\\&\quad \quad \le \frac{4}{\pi } \left( E_0+C_0P_0V_0T\right) T\mu _L \left\| E_{u_1}-E_{u_2}\right\| , \end{aligned}$$

where we use, in particular, the bound (17).

Finally:

$$\begin{aligned} \left\| \mathcal {E}(E_{u_1}) -\mathcal {E}(E_{u_2})\right\| \le \frac{\pi }{4} \left( I_1+I_2+I_3+I_4\right) =\widetilde{C}' T\left\| E_{u_1}-E_{u_2}\right\| , \end{aligned}$$

so that the map \(\mathcal {E}\) is a contraction for T small enough (i.e., for \(\widetilde{C}'T<1\)). Here,

$$\begin{aligned} \displaystyle \widetilde{C}' =\widetilde{C}T^2e^{\beta _0T} +\beta _0P_0e^{\beta _0T}V_0^2T^3\mu _L +2P_0V_L(TV_0+\ell ) +\left( E_0+C_0P_0V_0T\right) \mu _L. \end{aligned}$$

This ends the proof of the theorem. \(\square \)

4 The Optimal Control Problem

We study optimal timber production where the control is the harvesting rate of trees. The objective functional we want to maximize includes two terms. The first corresponds to the net benefits from timber production, and the second to the total number of individuals of small size \(x \in [0,\ell _{1}]\), where \(\ell _{1}\) is the minimum cutting diameter in forestry, also called the diameter-limit cutting (DLC) (see Nyland [22]). These small trees play the same role as the new trees planted to replace those that have been cut down, as in Hritonenko et al. [13] and [14] (see also Kato [17]).

The optimal harvesting problem consists in maximizing the functional J defined by:

$$\begin{aligned} J(v)=\int _Q k(x)\, v(t)\, u(t,x)\, \mathrm{d}x\mathrm{d}t + \int _0^T\int _0^{\ell _1} \rho (t)\, u(t,x)\, \mathrm{d}x\mathrm{d}t, \end{aligned}$$
(21)

where u is the solution of (2), k(x) is the price function such that:

$$\begin{aligned}&k\in \mathcal {C}^1([0,\ell ]),\quad 0<k_0=k(0)\le k(x)\le k(\ell )=k_M\quad \text{ and }\;\; \nonumber \\&\quad 0 \le k'(x)\le k_2\quad \text{ a.e. }\;\; \text{ in }\; [0,\ell ], \end{aligned}$$
(22)

\(k_2\) being a constant, and where

$$\begin{aligned} \rho \in \mathcal {C}^1([0,T]),\quad \rho (t)>0\;\; \text{ a.e. } \quad \text{ in }\; [0,T]. \end{aligned}$$
(23)

The control function \(v:=v(t)\) is to be found in the set of controls:

$$\begin{aligned} \mathcal {U}=\left\{ v(t)\in \mathcal {C}\!\left[ 0,T\right] ; \; 0\le v(t)\le v_{\text {M}}, \quad \forall \; t\in [0,T]\right\} , \end{aligned}$$
(24)

where \(v_{\text {M}}\) is the maximum harvesting rate.

We use, in a different way, the method of separable models introduced by Busenberg and Iannelli [6] (see also Anita [3]). We write the solution u(t, x) in the form:

$$\begin{aligned} u(t,x)=z(t)\,\tilde{u}(t,x), \end{aligned}$$
(25)

so that we obtain the two problems (26) and (27) below. The first one is the PDE problem describing the state of the forest in the case of no harvesting (\(v=0\)):

$$\begin{aligned} \begin{array}{lr} \displaystyle \frac{\partial \tilde{u}}{\partial t} +V(E_u(t,x),x)\,\frac{\partial \tilde{u}}{\partial x} =-G_0(E_u(t,x),x)\tilde{u} \,\,&{} \displaystyle \text{ in }\,\, Q:=]0,T[\times ]0,\ell [,\\ \displaystyle {\tilde{u}(t,0)\, V\!\left( E_u(t,0),0\right) =\int _{0}^{\ell }\beta (E_u(t,x),x)\,\tilde{u}(t,x)\,\mathrm{d}x}\,\,&{} \displaystyle \text{ in }\,\,]0,T[,\\ \displaystyle {\tilde{u}(0,x)=u_{0}(x)}\,\,\,&{} \displaystyle \text{ in }\, ]0,\ell [. \end{array} \end{aligned}$$
(26)

The second problem is the simple ODE given by:

$$\begin{aligned} \begin{array}{lr} \displaystyle \dot{z}(t) +v(t) \,z(t)=0\,\,&{} \displaystyle \text{ in }\,\, ]0,T[,\\ \displaystyle z(0)=1,\,\,\,&{} \end{array} \end{aligned}$$
(27)

whose solution is:

$$\begin{aligned} z(t)=\exp \left( -\int _0^tv(s)\,\mathrm{d}s\right) , \qquad \forall \; t\in [0,T]. \end{aligned}$$
(28)
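As a quick sanity check, the closed form (28) can be compared against a direct numerical integration of the ODE (27). The sketch below uses a hypothetical harvesting rate v (an assumption for illustration, not part of the model) and an explicit Euler scheme:

```python
import math

# Sanity check (illustrative): z(t) = exp(-int_0^t v(s) ds) solves
# z'(t) + v(t) z(t) = 0 with z(0) = 1.  The control v below is a
# hypothetical example, not from the paper.

def v(t):
    return 0.3 + 0.1 * math.sin(t)  # assumed harvesting rate in [0, v_M]

def z_closed_form(t, n=20000):
    # trapezoid rule for int_0^t v(s) ds, then the closed form (28)
    h = t / n
    integral = h * (0.5 * v(0.0) + 0.5 * v(t) + sum(v(i * h) for i in range(1, n)))
    return math.exp(-integral)

def z_euler(t, n=20000):
    # explicit Euler scheme for z' = -v z, z(0) = 1
    h = t / n
    z = 1.0
    for i in range(n):
        z -= h * v(i * h) * z
    return z

print(abs(z_closed_form(2.0) - z_euler(2.0)))  # small: the two agree
```

Both evaluations agree up to the discretization error, as expected from (28).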

With this decomposition, the criterion J takes the following form: maximize

$$\begin{aligned} \widetilde{J}(v)=\int _0^T v(t)\,z(t)\,\tilde{q}_1(t)\,\mathrm{d}t +\int _0^T \rho (t)\,z(t)\,\tilde{q}_2(t)\,\mathrm{d}t, \end{aligned}$$
(29)

where the functions \(\tilde{q}_1(t)\) and \(\tilde{q}_2(t)\) depend on the state of the forest in the case of no harvesting:

$$\begin{aligned} \tilde{q}_1(t)=\int _0^{\ell }k(x)\tilde{u}(t,x)\,\mathrm{d}x, \qquad \tilde{q}_2(t)=\int _0^{\ell _1}\tilde{u}(t,x)\,\mathrm{d}x. \end{aligned}$$
(30)
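The aggregated quantities (30) are plain integrals of the no-harvest state, so for a tabulated profile \(\tilde{u}(t,\cdot )\) they can be approximated by any quadrature rule. A minimal sketch, where the price function, the size profile and the thresholds are hypothetical placeholders, not data from the paper:

```python
import math

# Trapezoid approximation of q1(t) and q2(t) from (30) for a tabulated
# size profile u(x) at a fixed time t.  All data below are assumed.

ELL, ELL1 = 1.0, 0.2           # assumed maximal size and small-size threshold

def k(x):                      # assumed price function, nondecreasing (cf. (22))
    return 1.0 + x

def u_tilde(x):                # assumed no-harvest size distribution
    return math.exp(-3.0 * x)

def trapezoid(f, a, b, n=1000):
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

q1 = trapezoid(lambda x: k(x) * u_tilde(x), 0.0, ELL)   # price-weighted stock
q2 = trapezoid(u_tilde, 0.0, ELL1)                      # small-size population
print(q1, q2)
```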

Hence, the maximization problem (21) is equivalent to problem (29)–(30). We notice that the control function appears only in the ODE problem (27), while u, which depends on v, appears in the velocity of the PDE (26).

Proposition 4.1

Let \((\tilde{u}(t,x),z(t))\) be the solution pair of problems (26)–(27). Then, there exists at least one optimal control \(v^*\in \mathcal {U}\) to the problem (29)–(30) (and then for problem (21)).

Proof

Let \(\displaystyle d:=\sup _{v\in \mathcal {U}}\widetilde{J}(v) =\sup _{v\in \mathcal {U}}\int _0^Tz(t)\zeta (t)\,\mathrm{d}t\), where

$$\begin{aligned} \zeta (t)=v(t)\tilde{q}_1(t)+\rho (t)\tilde{q}_2(t). \end{aligned}$$
(31)

Then, there is a maximizing sequence \((v_n)_{n\in \mathbb {N}^*}\) such that:

$$\begin{aligned} d-\frac{1}{n}<\int _0^Tz_n(t) \zeta _n(t)\,\mathrm{d}t \le d, \end{aligned}$$
(32)

where \(z_n:=z(v_n)\). Since \((v_n)\) is bounded by hypothesis, we deduce from (28) that \((z_n)\) and the sequence of time derivatives \((\dot{z}_n)\) are bounded. Thus, \((z_n)\subset \mathcal {C}[0,T]\) is relatively compact by the Arzelà–Ascoli theorem, and there is a subsequence, still denoted \((z_n)\), which converges strongly to some \(z^{*}\in \mathcal {C}([0,T])\).

Now, we have to prove that:

$$\begin{aligned} \int _0^Tz_n(t)\zeta _n(t)\,\mathrm{d}t \longrightarrow \int _0^Tz^{*}(t) \zeta ^{*}(t)\,\mathrm{d}t, \end{aligned}$$

where \(\displaystyle \zeta ^*(t)=v^*(t)\tilde{q}_1^*(t) + \rho (t)\tilde{q}_2^*(t)\).

As in the above section, we write \(z_n\zeta _n-z^{*}\zeta ^{*}=z_n\, (\zeta _n-\zeta ^{*}) +\zeta ^{*}\,(z_n-z^{*})\). It then suffices to show the weak convergence of \(\zeta _n(t)\) to \(\zeta ^{*}(t)\). According to (31), we have \(\zeta _n(t)=v_n(t)\tilde{q}_{1n}(t)+\rho (t)\tilde{q}_{2n}(t)\). Since \(\tilde{u}\) is bounded, \(\tilde{q}_{2n}(t)\) converges weakly to \(\displaystyle \tilde{q}_2^*(t)=\int _0^{\ell _1} \tilde{u}^*(t,x)\,\mathrm{d}x\). It remains to show that \(\tilde{q}_{1n}(t)\) converges strongly; for this, it suffices to show that the sequence of time derivatives \(\displaystyle (\dot{\tilde{q}}_{1n})\) is bounded. We have:

$$\begin{aligned} \displaystyle \frac{\mathrm{d}\tilde{q}_{1n}}{\mathrm{d}t}(t)= & {} \displaystyle \int _0^{\ell } k(x)\,\frac{\partial \tilde{u}_n}{\partial t}(t,x)\,\mathrm{d}x\\= & {} \displaystyle -\int _0^{\ell }k(x)\,V(E_n(t,x),x)\, \frac{\partial \tilde{u}_n}{\partial x}(t,x)\,\mathrm{d}x\\&-\int _0^{\ell }k(x)\,G_0(E_n(t,x),x)\,\tilde{u}_n(t,x)\,\mathrm{d}x\\= & {} \displaystyle -k(\ell )\,V(E_n(t,\ell ),\ell )\,\tilde{u}_n(t,\ell ) +k_0\,V(E_n(t,0),0)\,\tilde{u}_n(t,0)\\&\displaystyle +\int _0^{\ell }k'(x)\,V(E_n(t,x),x)\,\tilde{u}_n(t,x)\,\mathrm{d}x\\&+\int _0^{\ell }k(x)\,V_x(E_n(t,x),x)\,\tilde{u}_n(t,x)\,\mathrm{d}x\\&\displaystyle -\int _0^{\ell }k(x)\,G_0(E_n(t,x),x)\,\tilde{u}_n(t,x)\,\mathrm{d}x. \end{aligned}$$

Using the hypotheses \((H_1)\)–\((H_3)\), we obtain:

$$\begin{aligned} \left| \frac{\mathrm{d}\tilde{q}_{1n}}{\mathrm{d}t}(t)\right| \le \left( k_0\,\beta _0+k_2\,V_0+k_M\,V_L\right) \left| \tilde{u}_n(t,\cdot )\right| _{L^1(0,\ell )}\qquad \forall \; t\in [0,T]. \quad \square \end{aligned}$$

4.1 Necessary Conditions

We now give the necessary conditions satisfied by the optimal control \(v^{*}\). For simplicity, we suppose that:

$$\begin{aligned} V:=V(t,x),\qquad \mu :=\mu (t,x),\qquad \beta :=\beta (t,x). \end{aligned}$$
(33)

We have the following result.

Proposition 4.2

Consider the problems (26) and (27) with the cost functional \(\widetilde{J}\) given by (29)–(30). If v(t) maximizes (29), then there exists a continuous function \(\lambda \in \mathcal {C}(0,T; \mathbb {R})\) such that the optimal control is bang–bang:

$$\begin{aligned} v=\left\{ \begin{array}{lll} 0&{}\text {if } &{}\tilde{q}_1(t)-\lambda (t)< 0,\\ v_M&{} \text{ if } &{}\tilde{q}_1(t)-\lambda (t)> 0. \end{array} \right. \end{aligned}$$

Moreover, if \(\tilde{q}'_1(t) +\rho (t)\tilde{q}_2(t)>0\) and,

$$\begin{aligned} \tilde{q}_1(0)<\int _{t^*}^{T}\exp \left( v_M(t^*-s)\right) \left( v_M\tilde{q}_1(s)+\rho (s)\tilde{q}_2(s)\right) \mathrm{d}s+\int _{0}^{t^*}\rho (s)\tilde{q}_2(s)\,\mathrm{d}s, \end{aligned}$$

then we have one switch from the minimal value to the maximal one for the optimal control.

Proof

Applying Pontryagin’s maximum principle, there exist an adjoint variable \(\lambda (t)\) and a control v, to be determined, such that the Hamiltonian

$$\begin{aligned} H(z,v;\lambda ) =v(t)z(t)\tilde{q}_1(t) +\rho (t)z(t)\tilde{q}_2(t) +\lambda (t) (-v(t)z(t)), \end{aligned}$$

is maximized with respect to v along the optimal trajectory. The adjoint equation is given by:

$$\begin{aligned} \dot{\lambda }=-\frac{\partial H}{\partial z} =-v(t)\tilde{q}_1(t) -\rho (t)\tilde{q}_2(t)+\lambda (t)v(t), \end{aligned}$$
(34)

with the transversality condition \(\lambda (T)=0\). The derivative of the Hamiltonian with respect to v gives:

$$\begin{aligned} \frac{\partial H}{\partial v} =z(t)\tilde{q}_1(t)-\lambda (t) z(t). \end{aligned}$$
(35)

Thus, \(\displaystyle \frac{\partial H}{\partial v}\) does not depend on v. Since \(z(t)>0\), the sign of \(\displaystyle \frac{\partial H}{\partial v}\) is that of \(\tilde{q}_1(t)-\lambda (t)\). The control is then bang–bang and we have:

$$\begin{aligned} v=\left\{ \begin{array}{llll} v_M&{} \text{ if } &{}\tilde{q}_1(t)-\lambda (t)> 0,\\ 0&{}\text {if } &{}\tilde{q}_1(t)-\lambda (t)< 0. \end{array} \right. \end{aligned}$$
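This switching law reduces to a sign test on \(\tilde{q}_1(t)-\lambda (t)\); a minimal sketch, where \(v_M\) and the sample arguments are hypothetical values chosen only to illustrate the rule:

```python
# Bang-bang feedback: the optimal control takes only the extreme values
# 0 and v_M, depending on the sign of q1(t) - lambda(t).  The maximal
# rate and the sample values below are assumed, not from the paper.

V_M = 1.0  # assumed maximal harvesting rate v_M

def bang_bang(q1_t, lam_t, v_max=V_M):
    # the singular case q1_t == lam_t is excluded under the standing
    # hypothesis (no singular arcs)
    return v_max if q1_t - lam_t > 0 else 0.0

print(bang_bang(2.0, 1.5))  # q1 > lambda: harvest at the maximal rate
print(bang_bang(1.0, 1.5))  # q1 < lambda: no harvest
```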

Note that one can easily see by contradiction that there are no singular controls under our hypotheses. Assume that \(v^*\) is singular; then there exists an interval I such that \(z(t)(\tilde{q}_1(t)-\lambda (t))=0\) for \(t\in I\). Since z cannot vanish, we have \(\tilde{q}_1(t)-\lambda (t)=0\) for \(t\in I\), and by differentiation \(\lambda '(t)=\tilde{q}'_1(t)\) for \(t\in I\). Using the adjoint equation one gets \(\lambda '(t)=-\rho (t)\tilde{q}_2(t)\) for \(t\in I\), so that \(\tilde{q}'_1(t)=-\rho (t)\tilde{q}_2(t)\) for \(t\in I\), which contradicts the assumption \(\tilde{q}'_1(t)+\rho (t)\tilde{q}_2(t)>0\).

In order to compute the number of switches in the optimal control function, we define the function \(\Psi (t)=z(t)(\tilde{q}_1(t)-\lambda (t))\) and we compute its derivative with respect to t. Thus, using the state and the adjoint equations (27), (34) one gets:

$$\begin{aligned} \Psi '(t)=-v(t)z(t)(\tilde{q}_1(t)-\lambda (t))+z(t) \left( \tilde{q}'_1(t)+v(t)\tilde{q}_1(t) +\rho (t)\tilde{q}_2(t) -\lambda (t)v(t)\right) , \end{aligned}$$

and after simplification:

$$\begin{aligned}\Psi '(t)=z(t)(\tilde{q}'_1(t) +\rho (t)\tilde{q}_2(t)). \end{aligned}$$

Thanks to the transversality condition we have that \(\Psi (T)=z(T) \tilde{q}_1(T)\) is non-negative. This means that close to the final time T the optimal control is at its maximal value.

Then, if \(\tilde{q}'_1(t) +\rho (t)\tilde{q}_2(t)>0\), we have \(\Psi '(t)>0\) and the function \(\Psi \) is increasing. It is now clear that if \(\Psi (0)<0\), then there is a unique switch in the optimal control function, and the optimal strategy consists in doing nothing until a time \(t^*\) and then switching to the maximal effort until the final time T. The time \(t^*\) can be calculated explicitly in this case. The solution of the state equation is given by:

$$\begin{aligned} z=\left\{ \begin{array}{ll} 1&{} \text{ if } t<t^*,\\ \exp (-v_M(t-t^*))&{}\text {if\,\,\,} t^*<t<T. \end{array} \right. \end{aligned}$$

One can also compute the solution of the adjoint equation, and the condition \(\Psi (0)<0\) is then equivalent to \(\tilde{q}_1(0)<\int _{t^*}^{T}\exp \left( v_M(t^*-s)\right) \left( v_M\tilde{q}_1(s)+\rho (s)\tilde{q}_2(s)\right) \mathrm{d}s+\int _{0}^{t^*}\rho (s)\tilde{q}_2(s)\,\mathrm{d}s\).
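Numerically, the switch time \(t^*\) can be located as follows: solving the adjoint equation (34) backward from \(\lambda (T)=0\) with \(v=v_M\) on \([t^*,T]\) yields \(\lambda (t^*)=\int _{t^*}^{T}e^{v_M(t^*-s)}\left( v_M\tilde{q}_1(s)+\rho (s)\tilde{q}_2(s)\right) \mathrm{d}s\), and \(t^*\) is the zero of \(\tilde{q}_1(t)-\lambda (t)\). The sketch below uses hypothetical data for \(\tilde{q}_1\), \(\rho \tilde{q}_2\), \(v_M\) and T (not from the paper), chosen so that \(\tilde{q}'_1+\rho \tilde{q}_2>0\) and \(\Psi (0)<0\):

```python
import math

# Locate the switch time t* where q1(t*) = lambda(t*), with lambda given
# by the backward adjoint solution on [t*, T] under v = v_M.
# All data below are hypothetical, for illustration only.

T, V_M = 5.0, 1.0

def q1(t):       # assumed benefit density q1(t), increasing
    return 1.0 + 0.5 * t

def rho_q2(t):   # assumed product rho(t) * q2(t), constant here
    return 0.5

def lam(t, n=4000):
    # trapezoid rule for lambda(t) = int_t^T exp(v_M (t-s)) (v_M q1 + rho q2) ds
    h = (T - t) / n
    f = lambda s: math.exp(V_M * (t - s)) * (V_M * q1(s) + rho_q2(s))
    return h * (0.5 * f(t) + 0.5 * f(T) + sum(f(t + i * h) for i in range(1, n)))

def g(t):
    return q1(t) - lam(t)

# bisection: g(0) < 0 < g(T), so a unique switch time lies in (0, T)
a, b = 0.0, T
for _ in range(40):
    m = 0.5 * (a + b)
    if g(m) < 0:
        a = m
    else:
        b = m
t_star = 0.5 * (a + b)
print(round(t_star, 3))  # about 3.5 with these data
```

With these data the control is 0 before \(t^*\) and \(v_M\) after, matching the one-switch strategy described above.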

If \(\Psi (0)>0\), then the function \(\Psi \) cannot vanish in [0, T] and thus the optimal strategy consists in taking the control \(v=v_M\) during the whole harvesting period. \(\square \)

Remark 4

The assumptions (33) on V, \(\mu \) and \(\beta \) in the above proposition can be weakened when establishing the necessary conditions for the optimal control, but they are needed for its characterization.

5 Conclusion

In this work, we considered an optimal forest harvesting problem where trees compete for light. The functional we maximize includes the benefits from timber production together with a penalization term accounting for the regeneration of the forest. We proved the existence of a solution to the size-structured, non-local and nonlinear problem using a fixed point argument. Splitting the control problem allowed us to use Pontryagin’s maximum principle, and we proved the existence of an optimal control of bang–bang type with one switch, under some conditions on the state of the forest when no harvesting takes place. The optimal control program always finishes at the maximal harvesting rate, to ensure a maximum benefit to the forest manager. Depending on the state of the forest in the absence of control, one can consider other optimal strategies, with multiple switches or without any switch.