
1 Formulation of the Problem

A large number of works have been devoted to the study of linear and nonlinear waves (see [1,2,3,4,5,6] for an overview). In particular, in [1] Castro, Palacios and Zuazua consider an alternating descent method for the optimal control of the inviscid Burgers equation in the presence of shocks; they compute numerical approximations of minimizers for optimal control problems governed by scalar conservation laws.

Partial differential equations of the first order are locally solved by methods of the theory of ordinary differential equations, by reducing them to a characteristic system. The application of the method of characteristics makes it possible to reduce the study of wave evolution to the analysis of ordinary differential equations along characteristics [7]. In [8, 9], methods for integrating nonlinear partial differential equations of the first order were developed. Subsequently, many papers appeared devoted to questions of the unique solvability of the Cauchy problem for various types of partial differential equations of the first order (see, for example, [10,11,12,13,14]).

The theory of optimal control for systems with distributed parameters is widely used in solving problems of aerodynamics, chemical reactions, diffusion, filtration, combustion, heating, etc. (see [15,16,17,18,19]). Various analytical and approximate methods for solving optimal control problems for systems with distributed parameters are being developed and effectively applied (see, for example, [20,21,22]). Interesting results are obtained in the works [23,24,25,26,27,28].

In this paper we consider the questions of unique solvability and determination of the control function in a nonlinear initial value problem for a Whitham-type partial differential equation with nonlinear impulse conditions. Namely, in the domain \(\varOmega \equiv [0;T]\times \textrm{R}\), for \(t\ne t_{i} , i=1,2, \ldots ,k\), we study the following quasilinear equation

$$\begin{aligned} \frac{\partial u (t, x)}{\partial t} +u(t,x) \frac{\partial u (t, x)}{\partial x} =F \left( t, x,u (t, x),\alpha (t)\right) \end{aligned}$$
(13.1)

with nonlinear initial value condition

$$\begin{aligned} u (t,x)_{ \left| t=0\right. } =\varphi \left( x,\int \limits _{0}^{T}K(\xi )u(\xi ,x)\text {d}\xi \right) ,\quad x\in \textrm{R} \end{aligned}$$
(13.2)

and nonlinear impulsive condition

$$\begin{aligned} u\left( t_{i}^{+},x\right) -u\left( t_{i}^{-},x\right) =G_{i} \left( u\left( t_{i},x\right) \right) ,\quad i=1,2,\ldots ,k, \end{aligned}$$
(13.3)

where \(u(t,x)\) is the unknown state function, \(\alpha (t)\) is the unknown control function, \(t\ne t_{i} , i=1,2,\ldots ,k,\) \(0=t_{0} <t_{1} <\cdots <t_{k} <t_{k+1} =T<\infty \), \(\textrm{R}\equiv (-\infty ,\infty )\), \(F(t,x, u,\alpha )\in C^{ 0, 1, 1,1} (\varOmega \times \textrm{R}\times \varUpsilon ),\) \(\varUpsilon \equiv [0,M^{ *} ],\) \( 0<M^{ *} <\infty ,\) \(\varphi (x,u)\in C^{ 1} (\textrm{R}^{2}),\) \(\int _{0}^{T}\left| K(\xi ) \right| \text {d}\xi <\infty ,\) and \(u\left( t_{i}^{+} ,x\right) ={\mathop {\lim }\nolimits _{\nu \rightarrow 0^{+} }} u\left( t_{i} +\nu ,x\right) ,\) \(u\left( t_{i}^{-},x\right) ={\mathop {\lim }\nolimits _{\nu \rightarrow 0^{+} }} u\big (t_{i} -\nu ,x\big )\) are the right-hand and left-hand limits of the function \(u(t,x)\) at the point \(t=t_{i} \), respectively. Here we note that the points \(t=t_{i}\) (\(i=1,2, \ldots ,k\)) are given constants. In addition, all functions involved in this work are scalar; vector functions are not used.

In much of the literature, the Burgers equation refers to the equation (see, for example, [6, Formula 4.1, Page 106], [7, Formula 14.9, Page 82])

$$\begin{aligned} \frac{\partial u (t, x)}{\partial t} +u(t,x) \frac{\partial u (t, x)}{\partial x} =\mu \frac{\partial ^2 u (t, x)}{\partial x^2}. \end{aligned}$$

When \(\mu =0\) this equation is called the Hopf equation, which describes the Riemann wave (see [7, Formula 4.6, Page 29]). Equation (13.1) has a nonlinear right-hand side; in this form it is called a nonlinear Whitham-type equation (see [9]). One of the features of Eq. (13.1) is that it cannot be integrated along the usual characteristics. Therefore, in this paper an extended characteristic is introduced.
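For the Hopf equation itself, the classical characteristics already suffice: before a shock forms, \(u(t,x)=u_{0}(x_{0})\), where \(x_{0}\) solves \(x_{0}+t\,u_{0}(x_{0})=x\). The following is a minimal sketch of this idea, assuming hypothetical initial data and a bracketing interval; bisection is merely one convenient root finder:

```python
def hopf_solution(u0, t, x, lo=-50.0, hi=50.0, n_iter=200):
    """Solve the Hopf equation u_t + u u_x = 0 by characteristics:
    find the foot x0 of the characteristic through (t, x), i.e. the root
    of x0 + t*u0(x0) = x, by bisection; then u(t, x) = u0(x0).
    Assumes no shock has formed, so the root is unique in [lo, hi]."""
    f = lambda x0: x0 + t * u0(x0) - x
    a, b = lo, hi
    for _ in range(n_iter):
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:   # root lies in the left half-interval
            b = m
        else:
            a = m
    return u0(0.5 * (a + b))
```

For linear data \(u_{0}(x)=ax\) this reproduces the exact solution \(u(t,x)=ax/(1+at)\).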

The differential equation (13.1) also describes the process of regulating the behavior of students within the discipline and moral ethics established at the university for many decades.

We use some Banach spaces: the space \(C\left( \varOmega ,\textrm{R}\right) \) consists of continuous functions \(u(t,x)\) with the norm

$$\begin{aligned}\left\| u \right\| _{C} ={\mathop {\sup }\limits _{(t,x)\in \varOmega }} \left| u(t,x) \right| ;\end{aligned}$$

we also need the linear space

$$\begin{aligned}PC\left( \varOmega ,\textrm{R}\right) =\left\{ u:\varOmega \rightarrow \textrm{R}; u(t,x)\in C\left( \varOmega _{i,i+1} ,\textrm{R}\right) ,\quad i=0,1,\ldots ,k\right\} \end{aligned}$$

with the following norm

$$\begin{aligned}\left\| u \right\| _{PC} =\max \left\{ \left\| u \right\| _{C(\varOmega _{i,i+1} )} ,\quad i=0,1,\ldots ,k\right\} ,\end{aligned}$$

where \(\varOmega _{i,i+1} =\left( t_{i},t_{i+1} \right] \times \textrm{R},\) \(u\left( t_{i}^{+} ,x\right) \) and \(u\left( t_{i}^{-},x\right) \) \((i=0,1,\ldots ,k)\) exist and are bounded; \(u\left( t_{i}^{-},x\right) =u\left( t_{i} ,x\right) \).

To determine the control function \(\alpha (t)\) in the initial value problem (13.1)–(13.3), we use the following quadratic cost function

$$\begin{aligned} J [\alpha (t)]=\int \limits _{0}^{T}\gamma (t)\alpha ^{2} (t)\text {d}t, \end{aligned}$$
(13.4)

where \(\gamma (t)\in C([0;T],\textrm{R}).\)

Problem Statement. Find a state function \(u(t,x)\in PC(\varOmega ,\textrm{R})\) and a control function \(\alpha (t)\in \left\{ \alpha : \left| \alpha (t) \right| \le M^{*} , t\in [0;T] \right\} \) delivering a minimum to the functional (13.4), such that the state function \(u(t,x)\) satisfies the differential equation (13.1) and the initial value condition (13.2) for all \((t,x)\in \varOmega , t\ne t_{i} , i=1,2,\ldots ,k\), and satisfies the nonlinear impulsive condition (13.3) for \((t,x)\in \varOmega , t=t_{i} , i=1,2,\ldots ,k\).

2 Reducing the Problem (13.1)–(13.3) to a Functional-Integral Equation

Since condition (13.2) is specified at the initial point of the interval [0, T], problem (13.1)–(13.3) will be called the nonlinear initial value problem with impulse effects or, in short, the initial value problem. Let the function \(u(t,x)\in PC(\varOmega ,\textrm{R})\) be a solution of the initial value problem (13.1)–(13.3). We write the domain \(\varOmega \) as \(\varOmega =\varOmega _{0,1} \cup \varOmega _{1,2} \cup \cdots \cup \varOmega _{k,k+1} ,\) where \(\varOmega _{i,i+1} =\left( t_{i} ,t_{i+1} \right] \times \textrm{R}.\) On the first domain \(\varOmega _{0,1} \) we rewrite Eq. (13.1) as

$$\begin{aligned} D_{ u} [u]= F \left( t,x, u(t,x),\alpha (t)\right) , \end{aligned}$$
(13.5)

where \(D_{ u}=\left( \frac{\partial }{\partial t} +u(t,x) \frac{\partial }{\partial x} \right) \) is the Whitham operator.

Now we introduce the following notation, which is called the extended characteristic:

$$\begin{aligned}p(t,s,x)=x- \int \limits _{s}^{t}u (\theta ,x) \text {d}\theta , \quad p (t,t,x)=x.\end{aligned}$$

We introduce a function of three arguments, \(w (t,s,x)=u \left( s,p(t,s,x)\right) ,\) such that for \(t=s\) it takes the form \(w (t,t,x)=u \left( t, p (t,t,x)\right) =u (t,x)\). Differentiating the function \(w(t,s,x)\) with respect to the new argument \(s\), we obtain

$$\begin{aligned}w_{s} (t,s,x)=u_{ s} \left( s,p (t,s,x)\right) +u_{p} \left( s,p (t,s,x)\right) \cdot p_{s} (t,s,x).\end{aligned}$$

Then, taking into account the last relation, we rewrite Eq. (13.5) in the following extended form

$$\begin{aligned} \frac{\partial }{\partial s} w(t,s,x)= F\left( s,p(t,s,x), w (t,s,x),\alpha (s)\right) . \end{aligned}$$
(13.6)

Integrating equation (13.6) along the extended characteristic, we obtain

$$\begin{aligned} & \int \limits _{0}^{t_{1} }F \left( s,p (t,s,x), w \left( t,s,x\right) ,\alpha (s)\right) \text {d}s \nonumber \\ & \quad =w (t,t_{1}^{-},x)-w (t,0^{+} ,x),\quad t\in \left( 0,t_{1} \right] , \end{aligned}$$
(13.7)
$$\begin{aligned} & \int \limits _{t_{1} }^{t_{2}} F\left( s,p(t,s,x), w\left( t,s,x\right) ,\alpha (s)\right) \text {d}s \nonumber \\ & \quad =w(t,t_{2}^{-} ,x)-w(t,t_{1}^{+},x),\quad t\in \left( t_{1},t_{2} \right] , \end{aligned}$$
(13.8)
$$\begin{aligned}{\vdots }\end{aligned}$$
$$\begin{aligned} & \int \limits _{t_{k} }^{t_{k+1} }F\left( s,p(t,s,x), w \left( t,s,x\right) ,\alpha (s)\right) \text {d}s \nonumber \\ & \quad =w (t,t_{k+1}^{-},x)-w(t,t_{k}^{+} ,x),\quad t\in \left( t_{k} ,t_{k+1} \right] ,\quad t_{k+1} =T. \end{aligned}$$
(13.9)

Taking \(w(t,0^{+} ,x)=w(t,0,x)\) and \(w(t,t_{p+1}^{-},x)=w(t,s,x)\) into account, from the integral relations (13.7)–(13.9) we have on the interval (0, T]:

$$\begin{aligned} & \int \limits _{0}^{s}F\left( \varsigma ,p (t,\varsigma ,x), w \left( t,\varsigma ,x\right) ,\alpha (\varsigma )\right) \text {d}\varsigma =\left[ w\left( t,t_{1},x\right) -w\left( t,0^{+},x\right) \right] \nonumber \\ & \quad +\left[ w\left( t,t_{2} ,x\right) -w\left( t,t_{1}^{+} ,x\right) \right] +\cdots +\left[ w(t,s,x)-w\left( t,t_{p}^{+} ,x\right) \right] \nonumber \\ & \quad =-w(t,0,x)-\left[ w\left( t,t_{1}^{+},x\right) -w\left( t,t_{1},x\right) \right] -\left[ w\left( t,t_{2}^{+},x\right) -w\left( t,t_{2} ,x\right) \right] \nonumber \\ & \quad -\cdots -\left[ w\left( t,t_{k}^{+},x\right) -w\left( t,t_{k},x\right) \right] +w(t,s,x). \end{aligned}$$
(13.10)

Taking into account the impulsive condition (13.3), we rewrite the last equality (13.10) as

$$\begin{aligned} w(t,s,x)& =w(t,0,x)+\int \limits _{0}^{s}F\left( \varsigma ,p (t ,\varsigma ,x), w \left( t,\varsigma ,x\right) ,\alpha (\varsigma )\right) \text {d} \varsigma \nonumber \\ & \quad +\sum _{0<t_{i} <s}G_{i} \left( w\left( t,t_{i} ,x\right) \right) , \end{aligned}$$
(13.11)

where \(w(t,0,x)\) is an arbitrary function, constant along the extended characteristic, which is to be determined. The initial value condition (13.2) for Eq. (13.11) takes the form

$$w(t,0,x)=\varphi \left( p(t,0,x),\int \limits _{0}^{T}K(\xi )w(t,\xi ,x)\text {d}\xi \right) .$$

Then, taking this initial value condition into account, from (13.11) we obtain

$$\begin{aligned} w (t,s,x)& =\varphi \left( p(t,0,x),\int \limits _{0}^{T}K(\xi )w(t,\xi ,x)\text {d}\xi \right) \nonumber \\ & \quad +\int \limits _{0}^{s}F\left( \varsigma ,p(t,\varsigma ,x), w\left( t,\varsigma ,x\right) ,\alpha (\varsigma )\right) \text {d}\varsigma \nonumber \\ & \quad +\sum _{0<t_{i} <s}G_{i} \left( w\left( t,t_{i},x\right) \right) . \end{aligned}$$
(13.12)

For \(t=s\), from (13.12) we arrive at the nonlinear functional-integral equation

$$\begin{aligned} u (t,x) &=\varTheta (t,x;u)\equiv \varphi \left( p(t,0,x),\int \limits _{0}^{T}K(\xi )u(\xi ,p(t,\xi ,x))\text {d}\xi \right) \nonumber \\ & \quad +\int \limits _{0}^{t}F \left( s, p(t,s,x), u \left( s, p (t,s,x)\right) ,\alpha (s)\right) \text {d} s \nonumber \\ & \quad +\sum _{0<t_{i} <t}G_{i} \left( u\left( t_{i},p (t,t_{i},x)\right) \right) , \end{aligned}$$
(13.13)

where \(p(t,s,x)\) is defined from the integral equation

$$\begin{aligned} p (t,s,x)=x- \int \limits _{s}^{t}u \left( \theta ,p(t,\theta ,x)\right) \text {d}\theta ,\quad p(t,t,x)=x, \end{aligned}$$
(13.14)

\(x\in \textrm{R}\) plays the role of a parameter.
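Equation (13.14) determines the whole curve \(\theta \mapsto p(t,\theta ,x)\), and its Volterra structure makes a Picard iteration natural. The following sketch is a hypothetical numerical illustration of this fixed-point computation; the function `u`, the grid size, and the iteration count are assumptions, not data from the paper:

```python
import numpy as np

def extended_characteristic(u, t, s, x, n=400, n_iter=100):
    """Approximate the curve theta -> p(t, theta, x) on [s, t] from Eq. (13.14),
    p(t, theta, x) = x - int_theta^t u(eta, p(t, eta, x)) d eta,
    by Picard (fixed-point) iteration with the composite trapezoidal rule."""
    thetas = np.linspace(s, t, n)
    p = np.full(n, float(x))                 # initial guess: p identically x
    for _ in range(n_iter):
        vals = u(thetas, p)                  # u evaluated along the current curve
        # cum[k] = int_s^{theta_k} vals (trapezoidal rule)
        cum = np.concatenate(([0.0],
              np.cumsum(0.5 * (vals[1:] + vals[:-1]) * np.diff(thetas))))
        p = x - (cum[-1] - cum)              # tail integral int_{theta_k}^t vals
    return thetas, p
```

For constant \(u\equiv c\) the iteration reproduces the straight characteristic \(p(t,\theta ,x)=x-c\,(t-\theta )\) exactly.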

3 Solvability of the Functional-Integral Equation (13.13)

For fixed values of the control function \(\alpha (t)\), we study the solvability of the functional-integral equation (13.13).

Theorem 1

Let there exist positive quantities \(\varDelta _{\varphi }, \varDelta _{f} (t), \varDelta _{G_{i}},\chi _{1}, \chi _{21}(t),\chi _{22}(t),\chi _{3i}, i=1,2,\ldots ,k,\) for which the following conditions are satisfied:

  1.

    \(0<{\mathop {\sup }\nolimits _{x\in \textrm{R}}} \left| \varphi (x,0) \right| \le \varDelta _{\varphi } <\infty ;\)

  2.

    \({\mathop {\sup }\nolimits _{x\in \textrm{R}}} \left| F(t,x, 0,\alpha ) \right| \le \varDelta _{f} (t), \, 0<\varDelta _{f} (t)\in C[0;T];\)

  3.

    \(0<\left| G_{i} (0) \right| \le \varDelta _{G_{i} } <\infty , i=1,2,\ldots ,k;\)

  4.

    \(\left| \varphi (x_{1},u_{1} )-\varphi (x_{2},u_{2} ) \right| \le \chi _{1} \left( \left| x_{1} -x_{2} \right| +\left| u_{1} -u_{2} \right| \right) , \, 0<\chi _{1} =\textrm{const};\)

  5.

    \(\left| F(t,x_{1}, u_{1},\alpha )-F(t,x_{2} ,u_{ 2} ,\alpha ) \right| \le \chi _{21} (t)\left| x_{ 1} -x_{ 2} \right| +\chi _{22} (t)\left| u_{1} -u_{ 2} \right| ;\)

  6.

    \(\left| G_{i} (u_{1} )-G_{i} (u_{2} ) \right| \le \chi _{3i} \left| u_{ 1} -u_{ 2} \right| , 0<\chi _{3i} =\textrm{const}, i=1,2,\ldots ,k;\)

  7.

    \(\rho ={\mathop {\max }\nolimits _{t\in [0;T]}} \int _{0}^{t}H(t,s) \text {d}s+\sum _{i=1}^{k}\chi _{3i} <1,\) where

$$\begin{aligned}H(t, s)=\chi _{1} \left( 1+\left| K(s) \right| \right) + \chi _{21} (s) (t-s)+\chi _{22} (s).\end{aligned}$$

Then, for fixed values of \(\alpha (t)\), the functional-integral equation (13.13) with (13.14) has a unique solution in the domain \(\varOmega \). This solution can be obtained by the following successive approximations:

$$\begin{aligned} u_{0} (t,x)=0,\quad u_{n+1} (t,x)\equiv \varTheta (t,x; u_{n}, p_{n} ),\quad n=0, 1, 2, \ldots , \end{aligned}$$
(13.15)

where \(p_{n} (t,s,x)\) is defined from the following iteration

$$\begin{aligned}p_{0} (t,s,x)=x,\quad p_{n} (t,s,x)=x- \int \limits _{s}^{t}u_{ n-1} \left( \theta ,p_{ n-1} (t,\theta ,x)\right) \text {d}\theta . \end{aligned}$$

Proof

By virtue of the conditions of the theorem, we obtain that the following estimate holds for the first difference of approximation (13.15):

$$\begin{aligned} \left| u_{1} (t,x)-u_{ 0} (t,x) \right| & \le {\mathop {\sup }\limits _{x\in \textrm{R}}} \left| \varphi (x,0) \right| +\sum _{0<t_{i} <T}\left| G_{i} \left( 0\right) \right| \nonumber \\ & \quad +{\mathop {\max }\limits _{t\in [0;T]}} \int \limits _{0}^{t} \Delta _{f} (s) \text {d} s\le \Delta _{\varphi } +\sum _{i=1}^{k}\Delta _{G_{i} } +\Delta _{1} <\infty , \end{aligned}$$
(13.16)

where

$$\begin{aligned}\Delta _{ 1} ={\mathop {\max }\limits _{t\in [0;T]}} \int \limits _{0}^{t}\Delta _{f} (s) \text {d}s<\infty . \end{aligned}$$

Taking into account estimate (13.16) and the conditions of the theorem, we obtain that for an arbitrary difference of approximation (13.15) the following estimate holds:

$$\begin{aligned} \left| u_{n+1} (t,x)-u_{n} (t,x) \right| & \le \left| \varphi \left( p_{n+1} (t,0,x),\int \limits _{0}^{T}K(\xi )u_{n} (\xi ,p_{n} (t,\xi ,x))\text {d}\xi \right) \right. \nonumber \\ & \quad \left. -\varphi \left( p_{n} (t,0,x),\int \limits _{0}^{T}K(\xi )u_{n-1} (\xi ,p_{n-1} (t,\xi ,x))\text {d}\xi \right) \right| \nonumber \\ & \quad +\int \limits _{0}^{t}\left| F\left( s,p_{n+1} (t,s,x),u_{n} \left( s,p_{n} (t,s,x)\right) ,\alpha (s)\right) \right. \nonumber \\ & \quad \left. -F\left( s,p_{n} (t,s,x),u_{n-1} \left( s,p_{n-1} (t,s,x)\right) ,\alpha (s)\right) \right| \text {d}s \nonumber \\ & \quad +\sum _{0<t_{i} <t}\left| G_{i} \left( u_{n} \left( t_{i} ,p_{n} (t,t_{i} ,x)\right) \right) \right. \nonumber \\ & \quad \left. -G_{i} \left( u_{n-1} \left( t_{i} ,p_{n-1} (t,t_{i} ,x)\right) \right) \right| \nonumber \\ & \le \chi _{1} \Bigl [\int \limits _{0}^{t} \left| u_{n} (s,x)-u_{n-1} (s,x) \right| \text {d}s \nonumber \\ & \quad + \int \limits _{0}^{T} \left| K(s) \right| \cdot \left| u_{n} (s,x)-u_{n-1} (s,x) \right| \text {d}s \Bigr ] \nonumber \\ & \quad +\int \limits _{0}^{t}\Bigl [\chi _{21} (s)\int \limits _{s}^{t} \left| u_{n} (\theta ,x)-u_{n-1} (\theta ,x) \right| \text {d}\theta \nonumber \\ & \quad +\chi _{22} (s) \left| u_{n} (s,x)-u_{n-1} (s,x) \right| \Bigr ] \text {d}s \nonumber \\ & \quad +\sum _{0<t_{i} <t}\chi _{3i} \left| u_{n} \left( t_{i} ,x\right) -u_{n-1} \left( t_{i} ,x\right) \right| \nonumber \\ & \le {\mathop {\max }\limits _{t\in [t_i;t_{i+1}]}} \int \limits _{0}^{t}H(t, s) \left| u_{n} (s,x)-u_{n-1} (s,x) \right| \text {d}s \nonumber \\ & \quad +\sum _{i=1}^{k}\chi _{3i} {\mathop {\max }\limits _{t\in [t_i;t_{i+1}]}} \left| u_{n} \left( t,x\right) -u_{n-1} \left( t,x\right) \right| , \end{aligned}$$
(13.17)

where

$$\begin{aligned}H(t, s)=\chi _{1} \left( 1+\left| K(s) \right| \right) + \chi _{21} (s) (t-s)+\chi _{22} (s).\end{aligned}$$

In estimate (13.17), we pass to the norm in the space \(PC\left( \varOmega ,\textrm{R}\right) \) and arrive at the estimate

$$\begin{aligned} \left\| u_{n+1} (t,x)-u_{n} (t,x) \right\| _{PC} \le \rho \cdot \left\| u_{n} (t,x)-u_{n-1} (t,x) \right\| _{PC} , \end{aligned}$$
(13.18)

where

$$\begin{aligned}\rho ={\mathop {\max }\limits _{t\in [t_i;t_{i+1}]}} \int \limits _{0}^{t}H(t,s) \text {d}s+\sum _{i=1}^{k}\chi _{3i} . \end{aligned}$$

Since \(\rho <1\), it follows from estimate (13.18) that the sequence of functions \(\left\{ u_{n} (t,x)\right\} _{n=1}^{ \infty } \), defined by formula (13.15), converges absolutely and uniformly in the domain \(\varOmega \). In addition, the operator \(\varTheta (t,x;u)\) on the right-hand side of (13.13) has a unique fixed point, so the functional-integral equation (13.13) has a unique solution in the domain \(\varOmega \). The theorem has been proven.
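A standard consequence of (13.18), recorded here for completeness, is the explicit geometric rate: iterating the estimate and using the triangle inequality gives

$$\begin{aligned}\left\| u_{n+1} -u_{n} \right\| _{PC} \le \rho ^{n} \left\| u_{1} -u_{0} \right\| _{PC} ,\qquad \left\| u_{n+m} -u_{n} \right\| _{PC} \le \frac{\rho ^{n} }{1-\rho } \left\| u_{1} -u_{0} \right\| _{PC} ,\end{aligned}$$

so \(\left\{ u_{n} (t,x)\right\} \) is a Cauchy sequence in \(PC\left( \varOmega ,\textrm{R}\right) \), and its limit \(u\) satisfies the a priori error bound \(\left\| u-u_{n} \right\| _{PC} \le \frac{\rho ^{n} }{1-\rho } \left\| u_{1} -u_{0} \right\| _{PC} \).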

Corollary

Let all the conditions of Theorem 1 be satisfied. Then, for fixed values of the control function \(\alpha (t)\), the initial value problem (13.1)–(13.3) with impulse effects has a unique solution in the domain \(\varOmega \).
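To make the successive approximations (13.15) concrete, the following sketch is a hypothetical numerical illustration: the data \(\varphi , K, F, G_{1}\) and the fixed control \(\alpha \) are arbitrary choices that do not depend on \(x\) (so \(\chi _{21}=0\) and the characteristic \(p\) drops out, leaving \(u=u(t)\)), with a single impulse point \(t_{1}\):

```python
import numpy as np

# Hypothetical data, chosen for illustration only; with these choices the
# contraction constant of Theorem 1 (chi_21 = 0, no x-dependence) is
# rho = (chi_1*(1 + |K|) + chi_22)*T + chi_31 = (0.1*2 + 0.2)*1 + 0.1 = 0.5 < 1.
T, t1 = 1.0, 0.4
phi   = lambda m: 0.3 + 0.1 * m                 # chi_1  = 0.1
K     = lambda s: np.ones_like(s)               # |K| = 1
F     = lambda s, u, a: 0.2 * u + a             # chi_22 = 0.2
G1    = lambda u: 0.1 * u                       # chi_31 = 0.1
alpha = lambda s: 0.05 * np.ones_like(s)        # a fixed admissible control

def cumtrap(y, t):
    """Cumulative trapezoidal integral: cum[k] = int_0^{t_k} y."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))))

def picard(n_iter, n=801):
    """Successive approximations (13.15), x-dependence suppressed."""
    t = np.linspace(0.0, T, n)
    u = np.zeros(n)                                      # u_0 = 0
    for _ in range(n_iter):
        memory = cumtrap(K(t) * u, t)[-1]                # int_0^T K(xi) u(xi) dxi
        volterra = cumtrap(F(t, u, alpha(t)), t)         # int_0^t F(s, u, alpha) ds
        jump = np.where(t > t1, G1(np.interp(t1, t, u)), 0.0)   # impulse term
        u = phi(memory) + volterra + jump                # u_{n+1} = Theta(u_n)
    return t, u
```

Here \(\rho =0.5<1\), so by (13.18) successive differences halve at each step; for instance, `picard(60)` agrees with `picard(40)` to roughly \(\rho ^{40}\approx 10^{-12}\), and the computed solution exhibits the jump \(G_{1}(u(t_{1}))\) at \(t_{1}\).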

4 Determination of the Control Function

Let \(\alpha ^{*} (t)\) be the optimal control function, i.e.,

$$\begin{aligned}\Delta J \left[ \alpha ^{*} (t) \right] =J \left[ \alpha ^{*} (t)+\Delta \alpha ^{*} (t)\right] -J \left[ \alpha ^{*} (t) \right] \ge 0,\end{aligned}$$

where \(\alpha ^{*} (t)+\Delta \alpha ^{*} (t)\in C(\varUpsilon ).\) We apply the maximum principle to our problem to find the necessary conditions for optimality. To construct the Pontryagin function we use Eq. (13.6):

$$\begin{aligned} H\left( \alpha (s),w(t,s,x)\right) =\psi (s)F\left( s,p(t,s,x), w(t,s,x),\alpha (s)\right) -\gamma (s)\alpha ^{2} (s), \end{aligned}$$
(13.19)

where the new unknown function \(\psi (s)\) satisfies the following equation, obtained by differentiating (13.19) with respect to the function \(w(t,s,x)\):

$$\begin{aligned} \dot{\psi }(s)=-\psi (s)F_{w} \left( s,p(t,s,x), w (t,s,x),\alpha (s)\right) . \end{aligned}$$
(13.20)

Similarly, differentiating (13.19) with respect to the control function \(\alpha (s)\), we obtain the following equation

$$\begin{aligned} \psi (s)F_{\alpha } \left( s,p(t,s,x), w (t,s,x),\alpha (s)\right) -2\gamma (s)\alpha (s)=0. \end{aligned}$$
(13.21)

Differentiating Eq. (13.21) with respect to the control function \(\alpha (s)\), we obtain the necessary condition for optimality

$$\begin{aligned} \psi (s)F_{\alpha \alpha } \left( s,p(t,s,x), w(t,s,x),\alpha (s)\right) -2\gamma (s)\le 0. \end{aligned}$$
(13.22)

Expressing \(\psi (s)\) from (13.21) and substituting it into (13.22), we have

$$\begin{aligned}F_{\alpha } \left( s,p(t,s,x), w(t,s,x),\alpha (s)\right) \cdot \left( \frac{\alpha (s)}{F_{\alpha } \left( s,p(t,s,x), w (t,s,x),\alpha (s)\right) } \right) _{\alpha } >0.\end{aligned}$$
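For completeness, the substitution behind this inequality can be written out (assuming \(\gamma (s)>0\) and \(F_{\alpha }\ne 0\); the arguments of \(F\) are suppressed):

$$\begin{aligned}\psi (s)=\frac{2\gamma (s)\,\alpha (s)}{F_{\alpha } } \;\Longrightarrow \;\frac{2\gamma (s)\,\alpha (s)}{F_{\alpha } } F_{\alpha \alpha } -2\gamma (s)\le 0\;\Longleftrightarrow \;1-\frac{\alpha (s)\,F_{\alpha \alpha } }{F_{\alpha } } \ge 0,\end{aligned}$$

and since \(\left( \frac{\alpha }{F_{\alpha } } \right) _{\alpha } =\frac{F_{\alpha } -\alpha F_{\alpha \alpha } }{F_{\alpha }^{2} } \), the left-hand side equals \(F_{\alpha } \cdot \left( \frac{\alpha }{F_{\alpha } } \right) _{\alpha } \); the strict inequality corresponds to strict concavity of the Pontryagin function in \(\alpha \).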

We rewrite Eq. (13.21) as

$$\begin{aligned} \alpha (s)=\frac{\psi (s)}{2\gamma (s)} F_{\alpha } \left( s,p(t,s,x), w (t,s,x),\alpha (s)\right) . \end{aligned}$$
(13.23)

Solving the differential equation (13.20) with the end-point condition \(\psi (T)=\psi _{T} \), we obtain

$$\begin{aligned} \psi (s)=\psi _{T} \cdot \exp \left\{ \int \limits _{s}^{T}F_{w} \left( \theta ,p(t,\theta ,x), w (t,\theta ,x),\alpha (\theta )\right) \text {d}\theta \right\} . \end{aligned}$$
(13.24)

Substituting representation (13.24) into (13.23), we have

$$\begin{aligned} \alpha (s) & =\frac{\psi _{T}}{2\gamma (s)} F_{\alpha } \left( s,p(t,s,x), w (t,s,x),\alpha (s)\right) \nonumber \\ & \quad \times \exp \left\{ \int \limits _{s}^{T}F_{w} \left( \theta ,p(t,\theta ,x), w(t,\theta ,x),\alpha (\theta )\right) \text {d}\theta \right\} . \end{aligned}$$
(13.25)

Thus, in the general case, the optimal control problem reduces to solving Eqs. (13.13) and (13.20) together as one system of two equations, which can be solved by the method of successive approximations. However, if we consider a concrete form of the function \(F\), for example,

$$\begin{aligned} F\left( s,p(t,s,x), w (t,s,x),\alpha (s)\right) =2\,s-3p(t,s,x)+\beta (s) w (t,s,x)+\delta (s) \alpha (s), \end{aligned}$$
(13.26)

then the Eq. (13.20) takes the form \(\dot{\psi }(s)=-\psi (s) \beta (s)\) and the Eq. (13.25) takes the form

$$\begin{aligned} \alpha (s)=\frac{\psi _{T} }{2\gamma (s)} \delta (s)\exp \left\{ \int \limits _{s}^{T}\beta (\theta )\text {d}\theta \right\} , \end{aligned}$$
(13.27)

where \(\delta (s)\) and \(\beta (s)\) are given functions, continuous on [0, T].
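Formula (13.27) is an explicit quadrature and is straightforward to evaluate numerically. The sketch below is a hypothetical illustration: the data \(\beta ,\delta ,\gamma \) and the terminal value \(\psi _{T}\) are arbitrary choices, not taken from the text.

```python
import numpy as np

# Illustrative data (assumptions): beta, delta continuous on [0, T],
# gamma(t) > 0 as required by the optimality condition (13.22).
T, psi_T = 1.0, 1.0
beta  = lambda s: 0.3 * np.ones_like(s)
delta = lambda s: np.cos(s)
gamma = lambda s: 1.0 + s ** 2

def alpha_opt(s, n=2001):
    """Evaluate (13.27): alpha(s) = psi_T/(2 gamma(s)) * delta(s)
    * exp(int_s^T beta(theta) d theta), via the trapezoidal rule."""
    theta = np.linspace(s, T, n)
    b = beta(theta)
    tail = np.sum(0.5 * (b[1:] + b[:-1]) * np.diff(theta))   # int_s^T beta
    return psi_T / (2.0 * gamma(s)) * delta(s) * np.exp(tail)
```

With \(\beta \) constant this reduces to \(\alpha (s)=\frac{\psi _{T} \delta (s)}{2\gamma (s)} e^{\beta (T-s)} \), which the sketch reproduces.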

For \(s=t\), we substitute the obtained control function (13.27) into Eq. (13.13):

$$\begin{aligned} u (t,x)& =\varphi \left( p(t,0,x),\int \limits _{0}^{T}K(\xi )u(\xi ,p(t,\xi ,x))\text {d}\xi \right) \nonumber \\ & \quad +\int \limits _{0}^{t}\Bigl [2\,s-3p(t,s,x)+\beta (s) u \left( s, p (t,s,x)\right) \nonumber \\ & \quad +\frac{\psi _{T} }{2\gamma (s)} \delta ^{2} (s)\exp \left\{ \int \limits _{s}^{T}\beta (\theta )\text {d}\theta \right\} \Bigr ]\text {d}s +\sum _{0<t_{i} <t}G_{i} \left( u\left( t_{i},p (t,t_{i},x)\right) \right) \end{aligned}$$
(13.28)

with (13.14). Equation (13.28) contains only one unknown function \(u(t,x)\) and can easily be solved by the method of successive approximations. In this case the necessary condition for optimality (13.22) takes the form \(\gamma (t)>0.\)

5 Conclusion

In this paper the questions of unique solvability and determination of the control function \(\alpha (t)\) in the initial value problem (13.1)–(13.3) for a Whitham-type partial differential equation with impulse effects are studied. The modified method of characteristics allows partial differential equations of the first order to be represented as ordinary differential equations that describe the change of an unknown function along the characteristic. The nonlinear functional-integral equation (13.13) is obtained. The unique solvability of the initial value problem (13.1)–(13.3) is proved by the method of successive approximations and contraction mappings. The determination of the unknown control function \(\alpha (t)\) is reduced to solving Eq. (13.25). In the particular case (13.26), the optimal control problem is solved easily. Finding the state function \(u(t,x)\) reduces to solving the functional-integral equation (13.28) by the method of successive approximations.