1 Introduction

Research on controllability began in the 1960s with the work of Kalman on linear dynamical systems. Since most practical dynamical systems are nonlinear, various controllability problems for different types of nonlinear or semilinear dynamical systems have been considered in recent years [1,2,3,4,5,6,7,8,9].

Many notions of controllability have been studied, such as complete controllability, small controllability, local controllability, regional controllability, near controllability, null controllability and output controllability [4,5,6,8,9,10,11,12,13,14].

In the present paper we investigate the output controllability of a class of nonlinear infinite-dimensional discrete systems. More precisely, we consider the nonlinear system whose state is described by the following difference equation

$$\begin{aligned} (S) \left\{ \begin{array}{lcl} x_{i+1} &{} = &{} Ax_{i} + Ex_{i} +Bu_{i},\quad \; i \in \{0,\ldots ,N-1 \},\\ x_{0},&{} &{} \end{array} \right. \end{aligned}$$

the corresponding output signal is

$$\begin{aligned} y_i = Cx_i, \quad \forall \,i \in \{0,\ldots ,N \}. \end{aligned}$$

The operator \(A : X \longrightarrow X\) is supposed to be bounded on the Hilbert space X (the state space), \(E : X \longrightarrow X\) is a nonlinear operator, \(B \in \mathcal{L}(U,X)\) and \(C \in \mathcal{L}(X,Y)\) where the Hilbert space U is the input space and the Hilbert space Y is the output one.
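As a minimal finite-dimensional sketch of this setup, one may take \(X = \mathbb{R}^2\) and \(U = Y = \mathbb{R}\); the matrices `A`, `B`, `C` and the nonlinear map `E` below are hypothetical stand-ins for the operators of the paper, chosen only to illustrate the state and output recursions.

```python
import numpy as np

# Hypothetical finite-dimensional instances of the operators A, E, B, C.
N = 5
A = np.array([[0.9, 0.1], [0.0, 0.8]])   # bounded linear part A
B = np.array([[0.0], [1.0]])             # input operator B
C = np.array([[1.0, 0.0]])               # output operator C

def E(x):
    # an illustrative nonlinear perturbation (not from the paper)
    return 0.1 * np.tanh(x)

def simulate(x0, u):
    """Iterate x_{i+1} = A x_i + E(x_i) + B u_i and record y_i = C x_i."""
    x = x0
    ys = [C @ x]
    for i in range(N):
        x = A @ x + E(x) + B @ u[i]
        ys.append(C @ x)
    return ys

x0 = np.zeros((2, 1))
u = [np.ones((1, 1))] * N
ys = simulate(x0, u)
print(len(ys))  # N + 1 outputs: y_0, ..., y_N
```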

Given a desired output \( y^d = (y^{d}_{i})_{i \in \{1,\ldots ,N \} } \), we seek the optimal control \( u= (u_{i})_{i \in \{0,1,\ldots ,N-1 \}} \) that minimizes the cost functional

$$\begin{aligned} J(u) = \parallel u \parallel ^{2} \end{aligned}$$

over all controls satisfying

$$\begin{aligned} Cx_i = y_i^d , \quad \forall \,i \in \{1,\ldots ,N \}. \end{aligned}$$

To solve this problem, inspired by [15, 16], we use, in the first part, a state space technique to show that the problem of input retrieval can be viewed as a problem of optimal control with constraints on the final state [17]. In the second part, we use a technique based on the fixed point theorem (see [2, 3, 7, 18,19,20,21]). We establish that the set of admissible controls is completely characterized by the pseudo-inverse corresponding to the linear part of the system and the fixed points of an appropriate mapping. Finally, a numerical example is given to illustrate the obtained results.

Remark 1

The assumption that A is a bounded operator is not very restrictive, even for distributed parameter systems. For example, the discrete system obtained from the evolution equation considered in [22] satisfies this condition.

2 Statement of the problem

We consider the discrete system described by

$$\begin{aligned} (S) \left\{ \begin{array}{lcl} x_{i+1} &{} = &{} Ax_{i} + Ex_{i} + Bu_{i}, \; i \in \{0,\ldots ,N-1 \},\\ x_{0}&{} &{} \end{array} \right. \end{aligned}$$
(1)

and the corresponding output is

$$\begin{aligned} y_i = Cx_i, \quad i \in \{0,\ldots ,N \}, \end{aligned}$$
(2)

where \( x_i \in X \) is the state of system \((S)\), \(u_i \in U \) is the control variable and \(y_i \in Y\) is the output, \(A \in \mathcal{L}(X), \, B \in \mathcal{L}(U,X)\) and \(C \in \mathcal{L}(X,Y)\). Consider the following control problem. Given a desired output \( y^d = (y_1^d,\ldots , y_N^d)\), we seek the optimal control \(u = (u_0,u_1,\ldots , u_{N-1})\) that minimizes the cost functional

$$\begin{aligned} J(u) = \Vert u\Vert ^2, \end{aligned}$$
(3)

over all controls satisfying

$$\begin{aligned} Cx_i= y_i^d,\;\;\; \forall \;\;i \in \{1,\ldots ,N \}. \end{aligned}$$

2.1 An adequate state space approach

In this subsection, we give some technical results which will be used in the sequel. For a finite subset \(\sigma _{r}^{s}=\{r,r+1,\ldots ,s\}\) of \(\mathbb{Z}\), with \(s \ge r\), let \(l^{2}(\sigma ^s_{r},X)\) denote the space of all sequences \((z_i)_{i \in \sigma _{r}^{s}}\), \(z_i \in X\).

Remark 2

\(l^{2}(\sigma ^{s}_{r},X)\) is a Hilbert space with the usual addition, scalar multiplication and with an inner product defined by

$$\begin{aligned}<x,y>_{l^{2}(\sigma _r^{s},X)} = \displaystyle \sum _{i=r}^{s} <x_i,y_i>_{X}. \end{aligned}$$

Let \(L_{1}\;\text{ and }\;F\) be the operators given by

$$\begin{aligned}&\begin{array}{lccl} L_{1} : &{} l^2\left( \sigma _{-N}^{-1};X\right) &{} \longrightarrow &{} l^2\left( \sigma _{-N}^{-1};X\right) , \\ &{} (z_{-N},\ldots ,z_{-1}) &{} \longmapsto &{} (z_{-N+1},\ldots ,z_{-1},0), \end{array}\\&\begin{array}{lccl} F : &{} X &{} \longrightarrow &{} l^2\left( \sigma _{-N}^{-1};X\right) ,\\ &{} x &{} \longmapsto &{} (0,\ldots ,0,x) \end{array} \end{aligned}$$

and define the variables \(z^i \in l^2(\sigma _{-N}^{-1};X)\) by

$$\begin{aligned} \begin{array}{lll} z^i &{}=&{} (z^i_{-N}, \ldots ,z^i_{-1}), \\ z^i_k &{}=&{} \left\{ \begin{array}{lll} x_{i+k},\quad &{} \text{ if } i+k \ge 0 \\ x_0,\quad &{} \text{ else }, \end{array} \right. \end{array} \end{aligned}$$

where \((x_i)_i\) is the solution of system (S). Then the sequence \((z^i)_i\) is the unique solution of the following difference equation

$$\begin{aligned} \begin{array}{lcl} \left\{ \begin{array}{lcl} z^{i+1} &{} = &{} L_{1}z^{i} + Fx_{i},\quad i \in \sigma _{0}^{N-1},\\ z^{0}&{} = &{} (x_0,x_0,\ldots ,x_0). \end{array} \right. \end{array} \end{aligned}$$

Let \(e_i \in X \times l^2(\sigma _{-N}^{-1};X)\) be the signals defined by \(e_{i} = \left( \begin{array}{l} x_i \\ z^i \end{array} \right) \). Then we easily establish the following result.

Proposition 1

\( (e_i)_{i \in \sigma _{0}^{N}} \) is the unique solution of the difference equation described by

$$\begin{aligned} (S_1) \; \left\{ \begin{array}{lcl} e_{i+1} &{} = &{} \varPsi e_{i} + \varPhi e_{i} + \bar{B}u_{i}\;,\; i \in \sigma _{0}^{N-1},\\ e_{0} &{} = &{} \left( \begin{array}{l} x_0 \\ (x_0,\ldots ,x_0) \end{array} \right) , \end{array} \right. \end{aligned}$$

where \( \varPsi \) = \( \left( \begin{array}{cc} A &{} 0 \\ F &{} L_{1} \end{array} \right) \), \( \varPhi \) = \( \left( \begin{array}{cc} E &{} 0 \\ 0 &{} 0 \end{array} \right) \;\) and \(\; \bar{B} \) = \( \left( \begin{array}{c} B \\ 0 \end{array} \right) \).

Remark 3

The equality

$$\begin{aligned} e_N =\left( \begin{array}{l} x_N \\ z^N \end{array} \right) = \left( \begin{array}{l} x_N \\ (x_0,\ldots ,x_{N-1}) \end{array} \right) , \end{aligned}$$

allows us to assimilate the trajectory \((x_0,\ldots ,x_{N-1},x_N)\) of system (S) to the final state \(e_N\;\text{ of }\;(S_1)\). This implies that our problem of input retrieval is equivalent to a problem of optimal control with constraints on the final state \(e_N\).
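The construction above can be checked numerically. The following sketch uses scalar states (\(X = \mathbb{R}\), \(N = 4\)) and a hypothetical scalar system in place of \((S)\), and verifies the identity \(z^N = (x_0,\ldots ,x_{N-1})\) stated in Remark 3:

```python
# Sketch of the state-space augmentation of Sect. 2.1, with X = R (scalar
# states) and N = 4; the scalar system below is a hypothetical stand-in.
N = 4

def L1(z):
    # (z_{-N},...,z_{-1}) -> (z_{-N+1},...,z_{-1},0)
    return z[1:] + [0.0]

def F(x):
    # x -> (0,...,0,x)
    return [0.0] * (N - 1) + [x]

# a trajectory of (S): x_{i+1} = a x_i + e(x_i) + b u_i
a, b, x0 = 0.5, 1.0, 1.0
e_nl = lambda x: 0.1 * x ** 2
u = [1.0, -1.0, 0.5, 0.0]
xs = [x0]
for i in range(N):
    xs.append(a * xs[-1] + e_nl(xs[-1]) + b * u[i])

# iterate the memory dynamics z^{i+1} = L1 z^i + F x_i from z^0 = (x_0,...,x_0)
z = [x0] * N
for i in range(N):
    z = [s + t for s, t in zip(L1(z), F(xs[i]))]

# as in Remark 3, z^N recovers the past trajectory (x_0,...,x_{N-1})
print(z == xs[:N])  # True
```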

2.2 The optimal control expression

Let’s consider the operator \(\varGamma \) defined by

$$\begin{aligned} \begin{array}{lccl} \varGamma : &{} X \times l^{2}\left( \sigma _{-N}^{-1};X\right) &{} \longrightarrow &{} l^{2}\left( \sigma _1^N;Y\right) \\ &{} \left( x,(\xi _i)_{-N \le i \le -1}\right) &{} \longmapsto &{} \left( t_1,\ldots ,t_N\right) \end{array} \end{aligned}$$
(4)

with \( t_i=C\xi _{i-N}, \quad \forall i \in \sigma _{1}^{N-1} \;\;\; \text{ and } \;\;\; t_N=Cx \).

Definition 1

  1. (a)

    The system (S) is said to be exactly output controllable on \(\sigma _{1}^{N}\) if \( \forall x_0 \in X, \;\; \forall y \in l^{2}(\sigma _1^{N};Y),\;\;\exists u \in l^{2}(\sigma _0^{N-1};U)\;\;\text{ such } \text{ that }\;\; Cx_{i} = y_{i},\;\;\;i \in \sigma _{1}^{N}.\)

  2. (b)

The system (S) is said to be weakly output controllable on \(\sigma _{1}^{N}\) if \(\forall \epsilon > 0, \;\;\forall x_0 \in X, \;\; \forall y \in l^{2}( \sigma _1^N;Y),\;\exists u \;\;\text{ such } \text{ that }\;\; \parallel Cx_{i} - y_i \parallel _{Y}\; \le \;\epsilon ,\;\;\forall i \in \sigma _{1}^{N}. \)

Definition 2

  1. (a)

The system \((S_1)\) is said to be \(\varGamma \)-controllable on \(\sigma _{1}^{N}\) if \( \forall e_0 \in \mathcal{X}, \;\; \forall y^{d} \in l^{2}(\sigma _1^{N};Y), \;\;\exists u \in l^{2}(\sigma _0^{N-1};U)\) such that \(\varGamma e_N = y^{d}\).

  2. (b)

The system \((S_1)\) is said to be \(\varGamma \)-weakly controllable on \(\sigma _{1}^{N}\) if \(\forall \epsilon > 0, \;\;\forall e_0 \in \mathcal{X}, \;\; \forall y^{d} \in l^{2}(\sigma _1^N;Y),\;\exists u\) such that \( \parallel \varGamma e_N - y^{d} \parallel _{l^{2} (\sigma _1^N;Y)} \;\le \;\epsilon \).

Remark 4

From the above definition, we can easily establish the following results

  1. (i)

(S) is exactly output controllable on \(\sigma _{1}^{N}\) \(\Longleftrightarrow \) \((S_1)\) is \(\varGamma \)-controllable on \(\sigma _{1}^{N}\).

  2. (ii)

    (S) is weakly output controllable on \(\sigma _{1}^{N}\) \(\Longleftrightarrow \) \((S_1)\) is \(\varGamma \)-weakly controllable on \(\sigma _{1}^{N}\).

Proposition 2

Given a desired output \( y^d = (y_1^d,\ldots , y_N^d)\) in \(l^2(\sigma _1^N;Y)\), the problems \((\mathcal{P}_{1})\) and \((\mathcal{P}_{2})\) defined by

$$\begin{aligned}&\begin{array}{lcl} (\mathcal{P}_1) &{} &{} \left\{ \begin{array}{ll} \text{ Find }\; u^* \; \text{ such } \text{ that } &{} \\ Cx_{i} = y_i^d, \quad \forall \;\;i \in \sigma _1^{N}&{}\;\;\; (i)\\ \parallel u^* \parallel = inf \{\; \parallel v \parallel /v \; \text{ verify } \;\;\; (i) \} &{} \;\;\; (ii) \end{array} \right. \end{array}\\&\begin{array}{lcl} (\mathcal{P}_2) &{} &{} \left\{ \begin{array}{ll} \text{ Find }\; u^* \; \text{ such } \text{ that } &{} \\ \varGamma e_{N} = y^d \;\; \text{ in }\;\; l^2(\sigma _1^N;Y)&{}\;\; (j)\\ \parallel u^* \parallel = inf \{\; \parallel v \parallel /v\;\text{ verify } \;\; (j) \} &{}\;\; (jj) \end{array} \right. \end{array} \end{aligned}$$

have the same solution \( u^* \).

By Proposition 2, solving problems \((\mathcal{P}_{1})-(\mathcal{P}_{2}) \) amounts to finding the control \(u^{*}\) that ensures the \(\varGamma \)-controllability of system \((S_1)\) at minimal cost.

3 Statement of the new problem

We consider the discrete system described by

$$\begin{aligned} (S_1) \; \left\{ \begin{array}{lcl} e_{i+1} &{} = &{} \varPsi e_{i} + \varPhi e_{i} + \bar{B}u_{i}\;,\quad i \in \sigma _0^{N-1}\\ e_{0} &{} &{} \text{ is } \text{ given } \end{array} \right. \end{aligned}$$
(5)

where \( e_i \in \mathcal{X}= X \times l^{2}(\sigma _{-N}^{-1};X) \) is the state of system \((S_1)\), \(u_i \in U\) is the control variable, \(\varPsi \in \mathcal{L}(\mathcal{X})\) and \(\bar{B} \in \mathcal{L}(U,\mathcal{X})\). Consider the following control problem. Given a desired output \( y^d = (y_1^d,\ldots , y_N^d)\), we seek the control \(u^{*}\) that minimizes the cost functional

$$\begin{aligned} J(u) = \Vert u\Vert ^2 \end{aligned}$$
(6)

over all controls satisfying

$$\begin{aligned} \varGamma e_{N}= y^d, \end{aligned}$$

where \(e_{N}\) is the final state of system \((S_1)\) at instant N and \(\varGamma \) is given by (4). We shall call \(u^*\) the sought control. The solution of system \((S_1)\) is

$$\begin{aligned} e_i = \varPsi ^{i}e_{0} + \displaystyle \sum _{j=0}^{i-1} \varPsi ^{j} \varPhi e_{i-1-j} + \displaystyle \sum _{j=0}^{i-1} \varPsi ^{j} \bar{B}u_{i-1-j}, \quad i\in \sigma _1^{N}. \end{aligned}$$
(7)

Let L denote the linear operator defined on \(T=l^{2}\left( \sigma _1^{N};\mathcal{X}\right) \) by

$$\begin{aligned} \begin{array}{lccl} L : &{} T = l^{2}(\sigma _1^{N};\mathcal{X})&{} \longrightarrow &{} T \\ &{} \xi =(\xi _{1},\dots ,\xi _{N}) &{} \longmapsto &{} L\xi =((L\xi )_i)_{1 \le i \le N} \end{array} \end{aligned}$$

where

$$\begin{aligned} \left\{ \begin{array}{ccl} (L\xi )_i &{} = &{} \varPsi ^{i-1} \varPhi e_0+ {\displaystyle \sum _{j=0}^{i-2}} \varPsi ^{j} \varPhi \xi _{i-1-j};\quad 2\le i \le N\\ (L\xi )_1 &{} = &{} \varPhi e_0 \end{array}\right. \end{aligned}$$

and let H denote the linear operator defined on \(\mathcal{U}\) by

$$\begin{aligned} \begin{array}{lccl} H : &{} \mathcal{U}=l^{2}\left( \sigma _0^{N-1};U\right) &{} \longrightarrow &{} T \\ &{} u=(u_{0},\dots ,u_{N-1}) &{} \longmapsto &{} Hu \end{array} \end{aligned}$$

where

$$\begin{aligned} (Hu)_i = \displaystyle \sum _{j=0}^{i-1} \varPsi ^{j} \bar{B}u_{i-1-j}, \;\;\; i\in \sigma _{1}^{N}. \end{aligned}$$

So, Eq. (7) can be rewritten as

$$\begin{aligned} e =(e_1,\ldots ,e_N)= \tilde{\varPsi }e_{0} + Le + Hu \end{aligned}$$
(8)

where

$$\begin{aligned} \tilde{\varPsi } e_{0} = \left( \varPsi ^{i} e_{0}\right) _{1 \le i \le N} \end{aligned}$$

The operator H is not invertible in general. We therefore introduce

$$\begin{aligned} \tilde{H} : x \in (\ker {H})^{\perp } \rightarrow \tilde{H}(x)=H(x) \in Range(H) \end{aligned}$$

This operator is invertible, and its inverse, defined on Range(H), can be extended to \(Range(H)\bigoplus \) \(Range(H)^{\perp }\) as follows:

$$\begin{aligned} H^{\dag } : x+y \in Range(H) \bigoplus Range(H)^{\perp } \rightarrow \tilde{H}^{-1}(x) \in \mathcal{U} \end{aligned}$$

The operator \(H^{\dag }\) is known as the pseudo-inverse of H. If Range(H) is closed, then \(T= Range(H) \bigoplus \) \(Range(H)^{\perp }\) and \(H^{\dag }\) is defined on all of T. The mapping \(H^{\dag }\) satisfies, in particular,

$$\begin{aligned} \begin{array}{lcl} \left\{ \begin{array}{lcl} H H^{\dag }x &{} = &{} x,\quad \forall x \in Range(H) \\ H^{\dag }Hy &{} = &{} y,\quad \forall y \in (\ker {H})^{\perp }. \end{array} \right. \end{array} \end{aligned}$$
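These two identities can be observed numerically with the Moore–Penrose pseudo-inverse of a rank-deficient matrix (the matrix `H` below is hypothetical, chosen so that both \(\ker H\) and \(Range(H)^{\perp }\) are nontrivial):

```python
import numpy as np

# Sketch of the pseudo-inverse identities stated above; numpy's pinv
# computes the Moore-Penrose pseudo-inverse.
H = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])      # rank 1: ker H and Range(H)^perp nontrivial
Hd = np.linalg.pinv(H)

# x in Range(H): H H^dag x = x
x = H @ np.array([1.0, -1.0, 2.0])
print(np.allclose(H @ Hd @ x, x))    # True

# y in (ker H)^perp = Range(H^T): H^dag H y = y
y = H.T @ np.array([0.5, -2.0])
print(np.allclose(Hd @ H @ y, y))    # True

# but H^dag H is only an orthogonal projection, not the identity
print(np.allclose(Hd @ H, np.eye(3)))  # False
```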

4 Fixed point technique

4.1 Characterization of the set of admissible controls

Let \( y^d = (y_1^d,\ldots , y_N^d)\) be a predefined output. The aim of this section is to characterize the set of all admissible controls in terms of the fixed points of an appropriately chosen mapping; that is, we shall characterize the set \(\mathcal{U}_{ad}\) of all controls ensuring the \(\varGamma \)-controllability:

$$\begin{aligned} \mathcal{U}_{ad} =\left\{ u \in l^{2}\left( \sigma _0^{N-1};U\right) \; /\;\varGamma e_{N} = y^{d}\right\} \end{aligned}$$

where \((e_{0},\dots ,e_{N})\) is the trajectory taking the system from the initial state \(e_0\). We suppose that Range(H) is closed, so that \(T= Range(H)\bigoplus Range(H)^{\perp }\) and \(H^{\dag } \) is defined on all of T. Let \(p: T \longrightarrow Range(H)\) be any projection onto Range(H) and let \(\bar{e} \ne 0\) be a fixed element of Range(H); we define

$$\begin{aligned} \begin{array}{lccl} f_{\bar{e}} : &{} T &{} \longrightarrow &{} Range(H) \\ &{}e &{} \longmapsto &{} f_{\bar{e}}(e)= \left\{ \begin{array}{lll} 0, &{}\quad &{} \text{ if } \; \varGamma e_{N} = y^{d} \\ \bar{e}, &{}\quad &{} \text{ otherwise } \end{array} \right. \end{array} \end{aligned}$$

and let

$$\begin{aligned} \begin{array}{lccl} \xi : &{} T &{} \longrightarrow &{} T \\ &{}e &{} \longmapsto &{} \xi (e)= e - \tilde{\varPsi } e_{0} - Le \end{array} \end{aligned}$$
(9)

and we consider the mapping

$$\begin{aligned} \begin{array}{lccl} g : &{} T &{} \longrightarrow &{} T \\ &{}e &{} \longmapsto &{} g(e)= \tilde{\varPsi } e_{0} + Le + p\xi (e) +f_{\bar{e}}(e). \end{array} \end{aligned}$$
(10)

Then, we have the following proposition.

Proposition 3

Let \( P_{g}= \{e \in T \;/\;g(e) = e \}\) denote the set of all fixed points of g. Then

$$\begin{aligned} \mathcal{U}_{ad} = \displaystyle \bigcup _{e \in P_{g}}\left( H^{\dag }\xi (e) + \ker (H)\right) . \end{aligned}$$

Proof

Let \(e^{*} \in P_{g}\); we have

$$\begin{aligned} g(e^{*})=\tilde{\varPsi } e_{0} + Le^{*} + p\xi (e^{*}) +f_{\bar{e}}(e^{*})=e^{*} \end{aligned}$$
(11)

then

$$\begin{aligned} e^{*} - \tilde{\varPsi } e_{0} - Le^{*} = p\xi (e^{*}) + f_{\bar{e}}(e^{*}) \end{aligned}$$

which implies that

$$\begin{aligned} \xi (e^{*}) = p\xi (e^{*}) + f_{\bar{e}}(e^{*}) \;\;\in Range(H) \end{aligned}$$

that means

$$\begin{aligned} p\xi (e^{*}) = \xi (e^{*}) \end{aligned}$$

and \( f_{\bar{e}}(e^{*})=0\), which yields \( \varGamma e_{N}^{*}= y^{d}.\)

Consequently, Eq. (11) becomes

$$\begin{aligned} e^{*} = \tilde{\varPsi } e_{0} + Le^{*} + \xi (e^{*}) = \tilde{\varPsi } e_{0} + Le^{*} + HH^{\dag }\xi (e^{*}).\nonumber \\ \end{aligned}$$
(12)

Let \( u^{*} = H^{\dag }\xi (e^{*}) + \alpha ^{*},\;\text{ with }\;\alpha ^{*} \in \ker (H)\;\text{ and }\;e^{*} \in P_{g} \), then

$$\begin{aligned} Hu^{*} =HH^{\dag }\xi (e^{*}) +H(\alpha ^{*}) \end{aligned}$$

and from (12), we have

$$\begin{aligned} Hu^{*} = HH^{\dag }\xi (e^{*}) = e^{*}- \tilde{\varPsi } e^{*}_{0} - Le^{*} \end{aligned}$$

which implies that

$$\begin{aligned} \begin{array}{lcl} \left\{ \begin{array}{lcl} e^{*} &{} = &{} \tilde{\varPsi } e_{0}^{*} + Le^{*} + Hu^{*} \\ \varGamma e^{*}_{N} &{} = &{} y^{d} \end{array} \right. \end{array} \end{aligned}$$

thus

$$\begin{aligned} u^{*} \in \mathcal{U}_{ad}. \end{aligned}$$

Consequently, for every \(e \in P_{g}\), we have \(H^{\dag }\xi (e) + \ker (H) \subset \mathcal{U}_{ad}\), and hence

$$\begin{aligned} \displaystyle \bigcup _{e \in P_{g}}\left( H^{\dag }\xi (e) + \ker (H)\right) \subset \mathcal{U}_{ad}. \end{aligned}$$

Now, we show the converse inclusion \(\mathcal{U}_{ad} \subset \displaystyle \bigcup _{e \in P_{g}}\left( H^{\dag }\xi (e) + \ker (H)\right) \). Let \(u^{*} \in \mathcal{U}_{ad}\) and let \(e^{u^{*}}=(e_1^{u^{*}},\ldots ,e_{N}^{u^{*}})\) be the trajectory of system \((S_1)\) corresponding to the control \(u^{*}\); then we have

$$\begin{aligned} \begin{array}{lcl} \left\{ \begin{array}{lcl} e^{u^{*}} &{} = &{} \tilde{\varPsi } e_{0} + Le^{u^{*}} + Hu^{*} \\ \varGamma e_N^{u^{*}} &{} = &{} y^{d} \end{array} \right. \end{array} \end{aligned}$$

and

$$\begin{aligned} \begin{array}{lcl} \left\{ \begin{array}{lcl} \xi \left( e^{u^{*}}\right) &{} = &{} Hu^{*} \\ \varGamma e_{N}^{u^{*}} &{} = &{} y^{d}. \end{array} \right. \end{array} \end{aligned}$$

Consequently

$$\begin{aligned} \begin{array}{lcl} \left\{ \begin{array}{lcl} \xi \left( e^{u^{*}}\right) &{} = &{} Hu^{*} \in Range(H)\\ f_{\bar{e}}\left( e^{u^{*}}\right) &{} = &{} 0 \end{array} \right. \end{array} \end{aligned}$$

and

$$\begin{aligned} e^{u^{*}} = \tilde{\varPsi } e_{0} + Le^{u^{*}} + p\xi \left( e^{u^{*}}\right) +f_{\bar{e}}\left( e^{u^{*}}\right) = g\left( e^{u^{*}}\right) . \end{aligned}$$

Then \(e^{u^{*}}\) is a fixed point of the mapping g. Moreover, we can write

$$\begin{aligned} u^{*}=H^{\dag }\xi \left( e^{u^{*}}\right) + \left( u^{*} - H^{\dag }\xi \left( e^{u^{*}}\right) \right) \end{aligned}$$

which implies that

$$\begin{aligned} H\left( u^{*} -H^{\dag }\xi \left( e^{u^{*}}\right) \right) = Hu^{*} - HH^{\dag }\xi \left( e^{u^{*}}\right) =0 \end{aligned}$$

consequently

$$\begin{aligned} u^{*} - H^{\dag }\xi \left( e^{u^{*}}\right) \in \ker (H), \end{aligned}$$

and finally we have

$$\begin{aligned} \mathcal{U}_{ad} \subset \displaystyle \bigcup _{e \in P_{g}}\left( H^{\dag }\xi (e) + \ker (H)\right) . \end{aligned}$$

\(\square \)

Remark 5

The fixed points of g are independent of the choice of the projection p and of the element \(\bar{e}\). Indeed, let \(p_1\) and \(p_2\) be two projections onto \(Im \; H\), and let \(\bar{e}_1\) and \(\bar{e}_2\) be two nonzero elements of \(Im \; H\). Consider the mappings

$$\begin{aligned}&\begin{array}{cccl} g_1 : &{} T &{} \longrightarrow &{} T,\\ &{} e &{} \longrightarrow &{}g_1(e)= \tilde{\varPsi } e_{0} + Le + p_1\xi (e) +f_{\bar{e}_1}(e), \end{array}\\&\begin{array}{cccl} g_2 : &{} T &{} \longrightarrow &{} T,\\ &{} e &{} \longrightarrow &{}g_2(e)= \tilde{\varPsi } e_{0} + Le + p_2\xi (e) +f_{\bar{e}_2}(e). \end{array} \end{aligned}$$

Let e be a fixed point of \(g_1\). By the proof of Proposition 3, we have \(\varGamma e_N=y^d\) and \(\xi (e)\in Im\;H\); it follows that \(p_2\xi (e)=\xi (e)\) and \(f_{\bar{e}_2}(e)=0\), so that

$$\begin{aligned} g_2(e)=\tilde{\varPsi } e_{0} + Le + \xi (e)=e. \end{aligned}$$

This shows that e is a fixed point of \(g_2\). By symmetry, the fixed points of \(g_2\) are also fixed points of \(g_1\).

4.2 Problem of minimization

The above proposition characterizes the set of admissible controls \(\mathcal{U}_{ad}\). Among those controls, we now determine the ones with minimal norm, i.e., we solve the following problem:

$$\begin{aligned} \bar{\mathcal{P}} : \min _{u \in \ \mathcal{U}_{ad}}(J(u) = \parallel u \parallel ^{2}). \end{aligned}$$

If we suppose that \(P_{g}\) is finite, i.e., \(P_{g}= \{e^{1},\ldots ,e^{q} \}\), we have

$$\begin{aligned} \mathcal{U}_{ad} = \displaystyle \bigcup _{i=1}^{q} \mathcal{U}_{ad}^{i} \end{aligned}$$

where

$$\begin{aligned} \mathcal{U}_{ad}^{i} = \left\{ H^{\dag }\xi (e^{i}) + v \;/\; v \in \ker (H)\right\} \end{aligned}$$

then, we obtain

$$\begin{aligned} \bar{\mathcal{P}} \Longleftrightarrow \min _{1 \le i \le q}\left( \min _{u \in \ \mathcal{U}_{ad}^{i}}\left( J(u) = \parallel u \parallel ^{2}\right) \right) . \end{aligned}$$
(13)

Remark 6

Let \(u \in \mathcal{U}_{ad}^{i}\); then \(u = H^{\dagger }\xi (e^{i}) + v\) with \(v \in \ker (H)\). Thus

$$\begin{aligned} \begin{array}{ccl} \parallel u \parallel ^{2} &{} = &{}<u,u>=<H^{\dag }\xi (e^{i})+v,H^{\dag }\xi (e^{i})+v> \\ &{} = &{} \parallel H^{\dag }\xi (e^{i})\parallel ^{2}+\,2<H^{\dag }\xi (e^{i}),v>+ \parallel v \parallel ^{2}\\ &{} = &{} \parallel H^{\dag }\xi (e^{i})\parallel ^{2}+\, J_{i}(v) \end{array} \end{aligned}$$

finally we have

$$\begin{aligned} J(u) = \parallel H^{\dag }\xi (e^{i})\parallel ^{2} + J_{i}(v), \end{aligned}$$

with

$$\begin{aligned} J_i(v)=2<H^{\dag }\xi (e^{i}),v>+\,\Vert v\Vert ^2. \end{aligned}$$
(14)

Lemma 1

The following two problems are equivalent:

  1. (a)

    \(\left\{ \begin{array}{l} {\displaystyle \min _{u \in \mathcal{U}_{ad}^{i}}}J(u) =\,\parallel u^{*} \parallel ^{2} \\ \text{ with }\; u^{*}= H^{\dagger }\xi (e^{i}) + v^{*} \end{array} \right. \)

  2. (b)
\(\left\{ \begin{array}{l} {\displaystyle \min _{v \in \ker (H)}}J_i(v) = J_{i}(v^{*}) \\ \text{ with }\; v^{*} \in \ker (H) \end{array} \right. \)

Proof

(b) \(\Longrightarrow \) (a) Let \(w \in \mathcal{U}_{ad}^{i}\); then \(w = H^{\dagger }\xi (e^{i}) + \bar{w}\) with \(\bar{w} \in \ker (H)\), and we have

$$\begin{aligned} J(w) = \,\parallel w \parallel ^{2} =\,\parallel H^{\dag }\xi (e^{i}) \parallel ^{2} +\,J_{i}(\bar{w}) \end{aligned}$$

thus

$$\begin{aligned} J(w) \ge \,\parallel H^{\dag }\xi (e^{i})\parallel ^{2} +\,J_{i}(v^{*}) \end{aligned}$$

consequently

$$\begin{aligned} J(w) \ge \,\parallel u^{*} \parallel ^{2}\,=\, J(u^{*}). \end{aligned}$$

So, \(\forall w \in \mathcal{U}_{ad}^{i} \), we have \( J(w) \ge J(u^{*})\) and

$$\begin{aligned} \min _{w \in \mathcal{U}_{ad}^{i}}J(w) = J(u^{*}) =\,\parallel u^{*} \parallel ^{2}. \end{aligned}$$

(a) \(\Longrightarrow \) (b) Let \(u^{*}\) be such that \(\parallel u^{*} \parallel ^{2}\,= {\displaystyle \min _{u \in \mathcal{U}_{ad}^{i}}}J(u)\). Since \(\mathcal{U}_{ad}^{i}\) is closed, we have \(u^{*} \in \mathcal{U}_{ad}^{i}\), and there exists \(v^{*} \in \ker (H)\) such that \(u^{*}= H^{\dag } \xi (e^{i}) + v^{*}\).

Let \(w \in \ker (H)\) and consider \(u=H^{\dag }\xi (e^{i}) + w\); then we have

$$\begin{aligned} \parallel u \parallel ^{2}\,\ge \,\parallel u^{*} \parallel ^{2} \end{aligned}$$

that is,

$$\begin{aligned} \parallel H^{\dag }\xi (e^{i})\parallel ^{2} + 2<H^{\dag }\xi (e^{i}), w> + \parallel w \parallel ^{2} \ge \parallel H^{\dag }\xi (e^{i})\parallel ^{2} + 2<H^{\dag }\xi (e^{i}), v^{*}> + \parallel v^{*}\parallel ^{2} \end{aligned}$$

thus, we have

$$\begin{aligned} 2<H^{\dag }\xi (e^{i}), w> + \parallel w \parallel ^{2}\,\ge \,2<H^{\dag }\xi (e^{i}), v^{*}> + \parallel v^{*}\parallel ^{2} \end{aligned}$$

which implies that

$$\begin{aligned} J_{i}(w) \ge J_{i}(v^{*}),\quad \forall w \in \ker (H), \end{aligned}$$

and consequently

$$\begin{aligned} \min _{w \in \ker (H)}J_{i}(w) = J_{i}(v^{*}). \end{aligned}$$

\(\square \)

Theorem 1

If the set \(P_g\) is finite, then the optimal control ensuring the \(\varGamma \)-controllability (and hence the exact output controllability of system (S)) is given by

$$\begin{aligned} u^{*}\,=H^{\dagger }\xi (e^{i_{0}}), \end{aligned}$$

where \( e^{i_{0}}\) is a fixed point of the mapping g given by (10) that satisfies

$$\begin{aligned} \parallel H^{\dagger }\xi (e^{i_{0}}) \parallel ^{2}\,= \inf _{1 \le i \le q} \left\{ \parallel H^{\dagger }\xi (e^{i}) \parallel ^{2} \right\} . \end{aligned}$$

Proof

Let \(P_g=\{e^1,\ldots ,e^q\}\); then by Lemma 1, we have, for each i,

$$\begin{aligned} {\displaystyle \min _{u\in \mathcal{U}_{ad}^{i}}}J(u)=\Vert u^{*}\Vert ^2\;\; \text{ with }\;\; u^{*}= H^{\dagger }\xi (e^{i})+v^{*}, \end{aligned}$$

where \(v^{*}\) is the element of \(\ker (H)\) that achieves the minimum of the functional \(J_i\) given by (14). Since \(H^{\dagger }\xi (e^{i}) \in (\ker H)^{\perp }\), the inner product in (14) vanishes for every \(v \in \ker (H)\), so \(J_i(v)=\Vert v\Vert ^2\) and the minimum is reached for \(v^{*}=0\); therefore, we have

$$\begin{aligned} {\displaystyle \min _{u\in \mathcal{U}_{ad}^{i}}}J(u)=\Vert H^{\dagger }\xi (e^{i})\Vert ^2. \end{aligned}$$

Using the equivalence (13), we deduce that

$$\begin{aligned} {\displaystyle \min _{u\in \mathcal{U}_{ad}}}(J(u)=\Vert u\Vert ^2)={\displaystyle \inf _{1\le i\le q}} \{\Vert H^{\dagger }\xi (e^{i})\Vert ^2\}=\Vert H^{\dagger }\xi (e^{i_0})\Vert ^2 \end{aligned}$$

where \(e^{i_0}\in \{e^{1},\ldots ,e^{q}\}\). \(\square \)
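In the linear case (E = 0), the pseudo-inverse computation behind Theorem 1 reduces to the classical minimum-norm tracking control. The following sketch is a hypothetical finite-dimensional illustration: `G` is the matrix mapping the stacked control to the stacked outputs, and `np.linalg.pinv` yields the minimum-norm control meeting \(Cx_i = y_i^d\), \(i = 1,\ldots ,N\).

```python
import numpy as np

# Linear-case sketch (E = 0): minimum-norm control through a pseudo-inverse.
# All matrices are hypothetical stand-ins for the operators of the paper.
N = 4
A = np.array([[0.9, 0.2], [0.0, 0.7]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])
x0 = np.zeros((2, 1))

# G[i-1, k] = C A^{i-1-k} B: contribution of u_k to y_i = C x_i (k < i)
G = np.zeros((N, N))
for i in range(1, N + 1):
    for k in range(i):
        G[i-1, k] = (C @ np.linalg.matrix_power(A, i-1-k) @ B)[0, 0]

yd = np.array([1.0, 0.5, -0.5, 0.0]).reshape(-1, 1)  # desired output y^d
u = np.linalg.pinv(G) @ yd                            # minimum-norm control

# verify the tracking constraint C x_i = y_i^d by simulating the system
x, ys = x0, []
for i in range(N):
    x = A @ x + B @ u[i:i+1]
    ys.append((C @ x)[0, 0])
print(np.allclose(ys, yd.ravel()))  # True
```

Here G happens to be invertible, so the pseudo-inverse coincides with the inverse; in general, `pinv` selects the minimum-norm control among all admissible ones, mirroring the role of \(H^{\dag }\) in the theorem.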

5 Another mapping characterizing the set of admissible controls

In this section, we give a necessary and sufficient condition for a control to be admissible, based on the fixed points of another appropriately chosen mapping. Indeed, consider the operators L, H, \(\varPsi \) and \(\varGamma \) defined in the preceding sections, define the Hilbert space \(\mathcal{M} = T \times \mathcal{Y}\) with \(\mathcal{Y}=l^2(\sigma _1^N;Y)\), and define the operators \(\mathcal{S}\), \(\mathcal{L}\), \(\mathcal{H}\) and \(\tilde{\xi }\) by

$$\begin{aligned}&\begin{array}{lccllccl} \mathcal{S} : &{} \mathcal{X} &{} \longrightarrow &{} \mathcal{M}, &{} \mathcal{L} : &{} T &{} \longrightarrow &{} \mathcal{M},\\ &{} \alpha &{} {\longmapsto } &{}\!\left( \begin{array}{ll} \tilde{\varPsi }\alpha \\ \varGamma \varPsi ^{N}\alpha \end{array} \right) , &{} &{} x=(x_i)_{i\in \sigma _1^N} &{} \longmapsto &{} \left( \begin{array}{l} Lx\\ \varGamma (Lx)_N \end{array} \right) , \end{array}\\&\begin{array}{lccllccl} \mathcal{H} : &{} \mathcal{U} &{} \longrightarrow &{} \mathcal{M}, &{} \tilde{\xi } : &{} \mathcal{M} &{} \longrightarrow &{} \mathcal{M},\\ &{} u &{} \longmapsto &{}\left( \begin{array}{l} Hu \\ \varGamma (Hu)_N \end{array} \right) , &{} &{}\left( \begin{array}{l} x \\ z \end{array} \right) &{} \longmapsto &{} \left( \begin{array}{l} x \\ z \end{array} \right) - \mathcal{S}e_0 - \mathcal{L}x. \end{array} \end{aligned}$$

We recall that the solution of system (5) can be written in the form

$$\begin{aligned} e=(e_i)_{i\in \sigma _1^N}=\tilde{\varPsi }e_0+Le+Hu \end{aligned}$$

which gives

$$\begin{aligned} \left( \begin{array}{ll} e \\ \varGamma e_N \end{array} \right) =\mathcal{S}e_0+\mathcal{L}e+\mathcal{H}u. \end{aligned}$$
(15)

We suppose that \(Im\;\mathcal{H}\) is closed; then the pseudo-inverse \(\mathcal{H}^{\dagger }\) of \(\mathcal{H}\) is defined on all of \(\mathcal{M}\). Let \(y^d\) be a fixed element of \(\mathcal{Y}\); we define the following mapping

$$\begin{aligned} \begin{array}{lccl} \mathcal{G} : &{} \mathcal{M} &{} \longrightarrow &{} \mathcal{M} \\ &{}\left( \begin{array}{l} x \\ z \end{array} \right) &{} \longmapsto &{} \mathcal{G}\left( \begin{array}{l} x \\ z \end{array} \right) = \mathcal{S}e_0 + \mathcal{L}x + p\tilde{\xi }\left( \begin{array}{l} x \\ y^d \end{array} \right) \\ &{} &{} &{} \qquad \qquad \qquad + \left( \begin{array}{l} 0_{T} \\ z-y^d \end{array} \right) . \end{array} \end{aligned}$$
(16)

with \( p: \mathcal{M} \longrightarrow Im \; \mathcal{H}\) any projection onto \(Im \; \mathcal{H}\). Then we have the following result.

Lemma 2

If \(\left( \begin{array}{l} x \\ z \end{array} \right) \in \mathcal{M}\) is a fixed point of \(\mathcal{G}\), then \(\tilde{\xi }\left( \begin{array}{l} x \\ y^d \end{array} \right) \in Im\;\mathcal{H}\).

Proof

If \(\left( \begin{array}{l} x \\ z \end{array} \right) \) is a fixed point of \(\mathcal{G}\), then we have

$$\begin{aligned} \mathcal{G}\left( \begin{array}{l} x \\ z \end{array} \right) = \mathcal{S}e_0 + \mathcal{L}x + p\tilde{\xi }\left( \begin{array}{l} x \\ y^d \end{array} \right) +\left( \begin{array}{l} 0_{T} \\ z-y^d \end{array} \right) = \left( \begin{array}{l} x \\ z \end{array} \right) , \end{aligned}$$

which implies

$$\begin{aligned} \begin{array}{ccl} p\tilde{\xi }\left( \begin{array}{l} x \\ y^d \end{array} \right) &{} = &{} \left( \begin{array}{l} x \\ z \end{array} \right) - \mathcal{S}e_0 - \mathcal{L}x - \left( \begin{array}{l} 0_{T} \\ z-y^d \end{array} \right) \\ &{} = &{} \left( \begin{array}{l} x \\ y^d \end{array} \right) - \mathcal{S}e_0 - \mathcal{L}x \\ &{} = &{} \tilde{\xi }\left( \begin{array}{l} x \\ y^d \end{array} \right) \end{array} \end{aligned}$$

which show that \(\tilde{\xi }\left( \begin{array}{l} x \\ y^d \end{array} \right) \in Im\;\mathcal{H}\). \(\square \)

Theorem 2

Let \(y^d\) be a desired output. The control \(u^{*}= \mathcal{H}^{\dagger }\tilde{\xi }\left( \begin{array}{l} x^{*} \\ y^d \end{array} \right) + v^{*}\), where \(x^{*} \in T\) and \(v^{*}\in Ker \; \mathcal{H}\), ensures the \(\varGamma \)-controllability of system (5) if and only if \((x^{*}, y^d)\) is a fixed point of the mapping \(\mathcal{G}\) given by (16).

Proof

Let \(x^{*} \in T\) and \(v^{*}\in Ker \; \mathcal{H}\). Suppose that \(u^{*}= \mathcal{H}^{\dagger }\tilde{\xi }\left( \begin{array}{l} x^{*} \\ y^d \end{array} \right) + v^{*}\in \mathcal{U}\) ensures the \(\varGamma \)-controllability of system \((S_1)\), i.e., \(\varGamma e_N^{u^{*}}=y^d\), where \(x^{*}=(e_1^{u^{*}},\ldots ,e_N^{u^{*}})\) is the trajectory of system (5) corresponding to the control \(u^{*}\). Then, by Eq. (15), we have

$$\begin{aligned} \mathcal{H}u^{*} = \left( \begin{array}{l} x^{*} \\ \varGamma e_N^{u^{*}} \end{array} \right) - \mathcal{S}e_0 - \mathcal{L}x^{*} = \tilde{\xi }\left( \begin{array}{l} x^{*} \\ \varGamma e_N^{u^{*}} \end{array} \right) = \tilde{\xi }\left( \begin{array}{l} x^{*} \\ y^d \end{array} \right) \end{aligned}$$

which implies that \(\tilde{\xi }\left( \begin{array}{l} x^{*} \\ y^d \end{array} \right) \in Im\;\mathcal{H}\), consequently

$$\begin{aligned} p\tilde{\xi }\left( \begin{array}{l} x^{*} \\ y^d \end{array} \right) =\tilde{\xi }\left( \begin{array}{l} x^{*} \\ y^d \end{array} \right) . \end{aligned}$$

So, we have

$$\begin{aligned} \mathcal{G} \left( \begin{array}{l} x^{*} \\ y^d \end{array} \right) = \mathcal{S}e_0 + \mathcal{L}x^{*} + \tilde{\xi }\left( \begin{array}{l} x^{*} \\ y^d \end{array} \right) = \left( \begin{array}{l} x^{*} \\ y^d \end{array} \right) \end{aligned}$$

which shows that \((x^{*},y^d)^{\top }\) is a fixed point of \(\mathcal{G}\).

Conversely, if we suppose that \((x^{*},y^d)^{\top }\) is a fixed point of \(\mathcal{G}\), then by Lemma 2, we have \(\tilde{\xi }\left( \begin{array}{l} x^{*} \\ y^d \end{array} \right) \in Im\;\mathcal{H}\). On the other hand, we have

$$\begin{aligned} \left( \begin{array}{l} x^{*} \\ y^d \end{array} \right) = \mathcal{S}e_0 + \mathcal{L}x^{*} + \tilde{\xi }\left( \begin{array}{l} x^{*} \\ y^d \end{array} \right) \end{aligned}$$

Let \(v^{*}\in Ker \; \mathcal{H}\); then, for the control \(u^{*}= \mathcal{H}^{\dagger }\tilde{\xi }\left( \begin{array}{l} x^{*} \\ y^d \end{array} \right) + v^{*}\), we have \(\mathcal{H}u^{*}= \tilde{\xi }\left( \begin{array}{l} x^{*} \\ y^d \end{array} \right) \) and

$$\begin{aligned} \left( \begin{array}{l} x^{*} \\ y^d \end{array} \right) = \mathcal{S}e_0 + \mathcal{L}x^{*} + \mathcal{H}u^{*}. \end{aligned}$$

Replacing \(\mathcal{S}\), \(\mathcal{L}\) and \(\mathcal{H}\) by their expressions, we obtain

$$\begin{aligned} \left( \begin{array}{l} x^{*} \\ y^d \end{array} \right) = \left( \begin{array}{l} \tilde{\varPsi }e_0 + Lx^{*} + Hu^{*}\\ \varGamma \varPsi ^Ne_0 + \varGamma (Lx^{*})_N + \varGamma (Hu^{*})_N \end{array} \right) . \end{aligned}$$

So \(x^{*} = \tilde{\varPsi }e_0 + Lx^{*} + Hu^{*}\), which implies that \(x^{*}\) is the trajectory of system (5) corresponding to the control \(u^{*}\), and

$$\begin{aligned} \begin{array}{ccl} y^d &{} = &{} \varGamma ( \varPsi ^Ne_0 + (Lx^{*})_N + (Hu^{*})_N)\\ &{} = &{} \varGamma (x_N^{*}). \end{array} \end{aligned}$$

Consequently, the control \(u^{*}= \mathcal{H}^{\dagger }\tilde{\xi }\left( \begin{array}{l} x^{*} \\ y^d \end{array} \right) + v^{*}\) ensures the \(\varGamma \)-controllability of system (5), which ends the proof. \(\square \)

6 Example and numerical simulation

Consider the following system

$$\begin{aligned} \left\{ \begin{array}{ccl} \dot{x}(t) &{} = &{} \varDelta x(t) +Mx(t) +Du(t),\quad 0<t<1\\ x(0) &{} = &{} 0, \end{array} \right. \end{aligned}$$
(17)

the output function is given by

$$\begin{aligned} y(t)=<x(t),\phi _1>\;\in \mathbb{R}\end{aligned}$$
(18)

where \(x(t)\in X=L^2(0,1;\mathbb{R})\), \(u(t)\in U=\mathbb{R}\), and D and M are, respectively, the linear and nonlinear maps defined by

$$\begin{aligned} \begin{array}{ccccl} D &{} : &{} \mathbb{R}&{} \longrightarrow &{} X\\ &{} &{} z &{} \longrightarrow &{} z\phi _1 \end{array} \end{aligned}$$

and

$$\begin{aligned} \begin{array}{ccccl} M &{} : &{} X &{} \longrightarrow &{} X\\ &{} &{} x &{} \longrightarrow &{} \sin (<x,\phi _1>)\phi _1 \end{array} \end{aligned}$$

where \(\phi _n=\sqrt{2}\sin (n\pi \cdot ), \; n\ge 1\). The Laplacian \(\varDelta \) is the infinitesimal generator of the strongly continuous semigroup \((S(t))_{t\ge 0}\) defined by

$$\begin{aligned} S(t)x={\displaystyle \sum _{n=1}^{\infty }}e^{-n^2\pi ^2 t}<x,\phi _n>\phi _n. \end{aligned}$$

The operator M satisfies the Lipschitz condition. Indeed, for all \(x,y \in X\), since \(\Vert \phi _1\Vert =1\), we have

$$\begin{aligned} \Vert M(x)-M(y)\Vert= & {} \Vert (\sin (x_1)- \sin (y_1))\phi _1\Vert \\= & {} |\sin (x_1)-\sin (y_1)|\\= & {} 2\left| \sin \left( {\displaystyle \frac{x_1-y_1}{2}}\right) \right| \left| \cos \left( {\displaystyle \frac{x_1+y_1}{2}}\right) \right| \\\le & {} |x_1-y_1| \le \Vert x-y\Vert , \end{aligned}$$

where \(x_1=<x,\phi _1>\) and \(y_1=<y,\phi _1>\).
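As a quick numerical sanity check of the bound above, one can test the inequality \(|\sin (x_1)-\sin (y_1)|\le |x_1-y_1|\) on random first-mode coefficients (a minimal sketch; the helper name `M_gap` is ours):

```python
import math
import random

def M_gap(x1, y1):
    # ||M(x) - M(y)|| = |sin(x1) - sin(y1)|, since ||phi_1|| = 1
    return abs(math.sin(x1) - math.sin(y1))

# Spot-check the Lipschitz bound on random samples
random.seed(0)
for _ in range(1000):
    x1 = random.uniform(-10.0, 10.0)
    y1 = random.uniform(-10.0, 10.0)
    assert M_gap(x1, y1) <= abs(x1 - y1) + 1e-12
```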

Consequently, system (17) has a unique mild solution in \(L^2(0,1;X)\) (see Balakrishnan [23]), given by

$$\begin{aligned} x(t) = {\displaystyle \int _{0}^{t}}S(t-r)Mx(r) dr + {\displaystyle \int _{0}^{t}}S(t-r)Du(r) dr. \end{aligned}$$

Let \(N\in { IN}\) and let \(\delta =\frac{1}{N}\) be the sampling period. Setting \(t_i=i\delta \), \(A=S(\delta )\), \(x_i=x(t_i)\), \(u_i=u(t_i)\) and \(y_i=y(t_i)\) for \(i \in { IN}\), the discrete version of system (17), (18) is the following

$$\begin{aligned}&\left\{ \begin{array}{ccl} x_{i+1} &{} = &{} A x_i +Ex_i +Bu_i, \quad i\ge 0\\ x_0 &{} = &{} 0, \end{array} \right. \end{aligned}$$
(19)
$$\begin{aligned}&y_i=Cx_i, \quad i\ge 0 \end{aligned}$$
(20)

where E, B, and C are given by

\(E={\displaystyle \int _{0}^{\delta }}S(r)M dr\), \(\quad B={\displaystyle \int _{0}^{\delta }}S(r)D dr\) and

\(Cx=<x,\phi _1>\), \(\quad \forall x \in X\).
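In particular, since \(Dz=z\phi _1\) and \(S(r)\phi _1=e^{-\pi ^2 r}\phi _1\), the operator B acts on scalars as

$$\begin{aligned} Bz={\displaystyle \int _{0}^{\delta }}S(r)Dz\, dr = z\left( {\displaystyle \int _{0}^{\delta }}e^{-\pi ^2 r}\, dr\right) \phi _1 = {\displaystyle \frac{1-e^{-\pi ^2 \delta }}{\pi ^2}}\, z\, \phi _1, \end{aligned}$$

and E acts analogously on the first mode; this is the origin of the factor \(\frac{1-e^{-\pi ^2 \delta }}{\pi ^2}\) appearing in the expressions below.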

By a direct calculation, one can verify that the operators H, L and \(\xi \) and the sets \(Ker \;H\), \((Ker \; H)^{\perp }\) and \(Im \;H\) are given by

$$\begin{aligned} \left\{ \begin{array}{l} H: u \in l^2\left( \sigma _{0}^{N-1},{ IR}\right) \longrightarrow Hu \in l^2\left( \sigma _{1}^{N},\mathcal{X}\right) \\ (Hu)_i={\displaystyle \frac{1-e^{-\pi ^2 \delta }}{\pi ^2}} \left( \begin{array}{c} \bar{u}_{i-1}\\ \left( {\displaystyle \underbrace{0,\ldots ,0}_{(N-i+1)-times}},\bar{u}_0,\ldots ,\bar{u}_{i-2}\right) \end{array}\right) \end{array} \right. \end{aligned}$$

where \(\bar{u}_i={\displaystyle \sum \nolimits _{j=0}^{i}}e^{-j\pi ^2 \delta }u_{i-j}\phi _1\) for \(i\in \sigma _{0}^{N-1}\),

$$\begin{aligned} \left\{ \begin{array}{l} L: e\in l^2\left( \sigma _{1}^{N},\mathcal{X}\right) \longrightarrow Le\in l^2\left( \sigma _{1}^{N},\mathcal{X}\right) \\ (Le)_i=\left( {\displaystyle \frac{1-e^{-\pi ^2 \delta }}{\pi ^2}}\right) \left( \begin{array}{c} \bar{x}_{i-1}\\ \left( {\displaystyle \underbrace{0,\ldots ,0}_{(N-i+1)-times}},\bar{x}_{0},\ldots ,\bar{x}_{i-2}\right) \end{array}\right) \end{array} \right. \end{aligned}$$

where \(\bar{x}_i={\displaystyle \sum \nolimits _{j=0}^{i}}e^{-j\pi ^2 \delta }\sin {(<x_{i-j},\phi _1>)}\phi _1\) and \(x_i=Ge_i, \;\;\; i\in \sigma _{1}^{N}\) with

$$\begin{aligned}&\left\{ \begin{array}{ccccl} G &{} : &{} \mathcal{X} &{} \longrightarrow X\\ &{} &{} \left( \begin{array}{c} z\\ (z_{-N},\ldots ,z_{-1}) \end{array} \right)&\longrightarrow z \end{array}\right. \\&\left\{ \begin{array}{l} \xi : e\in l^2\left( \sigma _{1}^{N},\mathcal{X}\right) \longrightarrow \xi e\in l^2\left( \sigma _{1}^{N},\mathcal{X}\right) \\ (\xi e)_i=e_i - {\displaystyle \frac{1-e^{-\pi ^2 \delta }}{\pi ^2}} \left( \begin{array}{c} \bar{x}_{i-1}\\ \left( {\displaystyle \underbrace{0,\ldots ,0}_{(N-i+1)-times}},\bar{x}_0,\ldots ,\bar{x}_{i-2}\right) \end{array}\right) \end{array} \right. \\&Ker \; H=\{0\}, \;\; (Ker \; H)^{\perp }=\mathcal{U}=l^2\left( \sigma _0^{N-1};{ IR}\right) \end{aligned}$$

and

$$\begin{aligned} \begin{array}{l} Im \; H = \left\{ z=(z_i)_{i\in \sigma _{1}^{N}} \in l^2\left( \sigma _{1}^{N},\mathcal{X}\right) , \right. \; z_i=\\ \left. \left( \begin{array}{c} \alpha _i \phi _1\\ \left( {\displaystyle \underbrace{0,\ldots ,0}_{(N-i+1)-times}},\alpha _1 \phi _1,\ldots ,\alpha _{i-1}\phi _1\right) \end{array} \right) , i\in \sigma _{1}^{N}, \alpha _i\in { IR}\right\} \end{array} \end{aligned}$$

Let \(\tilde{H}\) be the operator defined by

$$\begin{aligned}&\begin{array}{ccccl} \tilde{H} &{} : &{} (Ker \; H)^{\perp }=\mathcal{U} &{} \longrightarrow Im \; H\\ &{} &{} v &{} \longrightarrow \tilde{H}v=Hv \end{array}\\&\begin{array}{ccccl} \tilde{H}^{-1} &{} : &{} Im \; H &{} \longrightarrow \mathcal{U}\\ &{} &{} \left( \begin{array}{c} \alpha _i \phi _1\\ \left( {\displaystyle \underbrace{0,\ldots ,0}_{(N-i+1)-times}},\alpha _1 \phi _1,\ldots ,\alpha _{i-1}\phi _1\right) \end{array} \right)&\longrightarrow v \end{array} \end{aligned}$$

where

$$\begin{aligned} \left\{ \begin{array}{l} v_0={\displaystyle \frac{\pi ^2}{1-e^{-\pi ^2\delta }}}\alpha _1\\ v_i={\displaystyle \frac{\pi ^2}{1-e^{-\pi ^2\delta }}}[\alpha _{i+1}-e^{-\pi ^2\delta }\alpha _i], \quad i\in \sigma _{1}^{N-1}. \end{array} \right. \end{aligned}$$
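The inversion formula above can be checked numerically on the top components \(\alpha _i\) of \(Hu\) (a sketch; the helper names `H_top` and `H_pinv` are ours, and `delta` stands for \(\delta \)):

```python
import math

def H_top(u, delta):
    """Top-component coefficients alpha_i of (Hu)_i, i = 1..N:
    alpha_i = c * sum_{j=0}^{i-1} e^{-j pi^2 delta} u_{i-1-j}."""
    a = math.exp(-math.pi**2 * delta)
    c = (1.0 - a) / math.pi**2
    N = len(u)
    return [c * sum(a**j * u[i - 1 - j] for j in range(i)) for i in range(1, N + 1)]

def H_pinv(alpha, delta):
    """Recover v via v_0 = alpha_1/c and v_i = (alpha_{i+1} - a*alpha_i)/c."""
    a = math.exp(-math.pi**2 * delta)
    c = (1.0 - a) / math.pi**2
    v = [alpha[0] / c]
    v += [(alpha[i] - a * alpha[i - 1]) / c for i in range(1, len(alpha))]
    return v
```

The round trip `H_pinv(H_top(u, delta), delta)` recovers `u`, consistent with \(Ker \; H=\{0\}\).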

Let \(\bar{e}=(b_1,0,\ldots ,0)^{\top }\), where \(b_1=\left( \begin{array}{c} \phi _1\\ (0,\ldots ,0) \end{array} \right) \), and define the projection P by

$$\begin{aligned}&\begin{array}{ccccl} P &{} : &{} l^2\left( \sigma _{1}^{N},\mathcal{X}\right) &{} \longrightarrow &{} Im \; H\\ &{} &{} z=(z_1,\ldots , z_N) &{} \longrightarrow &{} Pz \end{array}\\&Pz= \left( \begin{array}{c} \bar{z}_i\\ \left( {\displaystyle \underbrace{0,\ldots ,0}_{(N-i+1)-times}},\bar{z}_1,\ldots , \bar{z}_{i-1}\right) \end{array} \right) _{i\in \sigma _{1}^{N}} \end{aligned}$$

with \(\bar{z}_i=<z_{i}^1,\phi _1>\phi _1\) and \(z_i^1=Gz_i, \;\;i\in \sigma _{1}^{N} \).

The map H is given by \(H : e\in l^2(\sigma _{1}^{N},\mathcal{X}) \longrightarrow H e\in l^2(\sigma _{1}^{N},\mathcal{X})\), where, for every \(i\in \sigma _{1}^{N}\), we have

$$\begin{aligned}&(He)_i = (f_{\bar{e}}(e))_i+ (Le)_i + (P\xi (e))_i\\&\quad = (f_{\bar{e}}(e))_i+\left( \begin{array}{c} \bar{x}_i\\ \left( {\displaystyle \underbrace{0,\ldots ,0}_{(N-i+1)-times}},\bar{x}_1,\ldots ,\bar{x}_{i-1}\right) \end{array}\right) , \end{aligned}$$

where \(\bar{x}_i=<x_{i},\phi _1>\phi _1\).

Let e be a fixed point of H; then

$$\begin{aligned} e_i=(f_{\bar{e}}(e))_i+\left( \begin{array}{c} \bar{x}_i\\ \left( {\displaystyle \underbrace{0,\ldots ,0}_{(N-i+1)-times}},\bar{x}_1,\ldots ,\bar{x}_{i-1}\right) \end{array}\right) \end{aligned}$$

where \(x_i=Ge_i\). Since \((f_{\bar{e}}(e))_i=0\) for \(i\in \sigma _{2}^{N}\), it remains to show that \((f_{\bar{e}}(e))_1=0\). Indeed, if we suppose that \((f_{\bar{e}}(e))_1 \not = 0\), then \((f_{\bar{e}}(e))_1=\bar{e}_1=b_1=\left( \begin{array}{c} \phi _1\\ (0,\ldots ,0) \end{array} \right) \), and thus

$$\begin{aligned}&<e_1,b_1>_{X\times l^2(\sigma _{-N}^{-1},\mathcal{X})} = \Vert b_1\Vert _ {X\times l^2(\sigma _{-N}^{-1},\mathcal{X})}^2 + \\&\quad <\left( \begin{array}{c} \bar{x}_{1}\\ \left( {\displaystyle \underbrace{0,\ldots ,0}_{N-times}}\right) \end{array}\right) ,b_1> \end{aligned}$$

which implies that

$$\begin{aligned}<x_1,\phi _1>=1+<x_1,\phi _1> \end{aligned}$$

which is absurd. Hence \(f_{\bar{e}}(e)=0\) and

$$\begin{aligned}&e_i=\left( \begin{array}{c} \bar{x}_{i}\\ \left( {\displaystyle \underbrace{0,\ldots ,0}_{(N-i+1)-times}},\bar{x}_{1},\ldots , \bar{x}_{i-1}\right) \end{array}\right) , \;\;\; \forall i\in \sigma _{1}^{N}.\\&\begin{array}{ccl} \varGamma e_N=y^d &{} \Longleftrightarrow &{} (<x_1,\phi _1>,\ldots ,<x_{N},\phi _1>)= \\ {} &{} &{} (y_1^d,\ldots ,y_N^d)\\ &{} \Longleftrightarrow &{}<x_i,\phi _1>=y_i^d, \;\;\;i\in \sigma _{1}^{N}. \end{array} \end{aligned}$$

Consequently

$$\begin{aligned} e_i=\left( \begin{array}{c} y_i^d\phi _1\\ \left( {\displaystyle \underbrace{0,\ldots ,0}_{(N-i+1)-times}},y_1^d\phi _1,\ldots ,y_{i-1}^d\phi _1\right) \end{array}\right) , \quad \forall i{\in } \sigma _{1}^{N}. \end{aligned}$$
$$\begin{aligned} (\xi (e))_i= & {} \left( \begin{array}{c} y_i^d\phi _1\\ \left( {\displaystyle \underbrace{0,\ldots ,0}_{(N-i+1)-times}},y_1^d\phi _1,\ldots ,y_{i-1}^d\phi _1\right) \end{array}\right) \\&- \left( {\displaystyle \frac{1-e^{-\pi ^2 \delta }}{\pi ^2}}\right) \left( \begin{array}{c} \bar{y}_{i-1}^d\\ \left( {\displaystyle \underbrace{0,\ldots ,0}_{(N-i+2)-times}},\bar{y}_{1}^d,\ldots , \bar{y}_{i-2}^d\right) \end{array}\right) \end{aligned}$$

where \(\bar{y}_{i}^d={\displaystyle \sum \nolimits _{k=0}^{i}}e^{-k\pi ^2 \delta }\sin (y_{i-k}^d)\phi _1\) for all \(i\in \sigma _{1}^{N}\). Hence

$$\begin{aligned} (H^{\dag }(\xi (e)))_i=\left( {\displaystyle \frac{\pi ^2}{1-e^{-\pi ^2 \delta }}}\right) \left( y_{i+1}^d- e^{-\pi ^2\delta }y_i^d\right) -\sin \left( y_i^d\right) , \quad i\in \sigma _{0}^{N-1}, \end{aligned}$$

with the convention \(y_0^d=0\) (since \(x_0=0\)).

Consequently

$$\begin{aligned} \mathcal{U}_{ad}=\left\{ \left( {\displaystyle \frac{\pi ^2}{1-e^{-\pi ^2 \delta }}}\left( y_{i+1}^d- e^{-\pi ^2\delta }y_i^d\right) -\sin \left( y_i^d\right) \right) _{i\in \sigma _{0}^{N-1}} \right\} . \end{aligned}$$
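On the first mode \(\phi _1\), system (19) reduces to the scalar recursion \(x^1_{i+1}=e^{-\pi ^2\delta }x^1_i + c\sin (x^1_i) + c\,u_i\) with \(c=\frac{1-e^{-\pi ^2 \delta }}{\pi ^2}\) and \(x^1_i=<x_i,\phi _1>\), so the admissible control above can be checked to track \(y^d\) exactly (a minimal Python sketch; the helper names are ours, and we use the convention \(y_0^d=0\)):

```python
import math

def admissible_control(yd, delta):
    """u_i = (y_{i+1}^d - a*y_i^d)/c - sin(y_i^d), i = 0..N-1, with y_0^d = 0."""
    a = math.exp(-math.pi**2 * delta)
    c = (1.0 - a) / math.pi**2
    y = [0.0] + list(yd)                     # prepend y_0^d = 0 (since x_0 = 0)
    return [(y[i + 1] - a * y[i]) / c - math.sin(y[i]) for i in range(len(yd))]

def first_mode_outputs(u, delta):
    """Simulate x1_{i+1} = a*x1_i + c*sin(x1_i) + c*u_i, x1_0 = 0; return (y_1,...,y_N)."""
    a = math.exp(-math.pi**2 * delta)
    c = (1.0 - a) / math.pi**2
    x1, out = 0.0, []
    for ui in u:
        x1 = a * x1 + c * math.sin(x1) + c * ui
        out.append(x1)
    return out
```

For any target sequence `yd`, `first_mode_outputs(admissible_control(yd, delta), delta)` reproduces `yd` up to rounding, illustrating that the control is admissible.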

Numerical simulation. For

$$\begin{aligned} \begin{array}{ccl} y_i^d &{}=&{} \left( {\displaystyle \frac{1-e^{-\pi ^2 \delta }}{\pi ^2}}\right) {\displaystyle \sum _{j=0}^{i-1}} \left( 2\sin \left( 2j\delta -\frac{\pi }{6}\right) +1\right) \left( e^{-\pi ^2\delta }\right. \\ &{} &{}\left. +\,\left( {\displaystyle \frac{1- e^{-\pi ^2 \delta }}{\pi ^2}}\right) \sin (1)\right) ^{i-j-1}, \quad \forall i\in \sigma _{1}^{N}, \end{array} \end{aligned}$$

we obtain the numerical results described in Table 1.

Table 1 The values of the exact control and the approximate control

The corresponding optimal cost is \(J(u^{*})=0.34819\). Some control trajectories for different values of N are depicted in Fig. 1.
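For reference, the closed-form expression for \(y_i^d\) above coincides, term by term, with the solution of the scalar linear recurrence \(y_i= r\,y_{i-1} + c\,u_{j}\) with \(r=e^{-\pi ^2\delta }+c\sin (1)\), \(c=\frac{1-e^{-\pi ^2 \delta }}{\pi ^2}\), \(u_j=2\sin (2j\delta -\frac{\pi }{6})+1\) and \(y_0=0\); this identity can be checked numerically (a sketch, helper names ours):

```python
import math

def yd_closed_form(N):
    """Evaluate the closed-form expression for y_i^d, i = 1..N."""
    d = 1.0 / N
    a = math.exp(-math.pi**2 * d)
    c = (1.0 - a) / math.pi**2
    r = a + c * math.sin(1.0)
    return [c * sum((2.0 * math.sin(2.0 * j * d - math.pi / 6.0) + 1.0) * r**(i - j - 1)
                    for j in range(i))
            for i in range(1, N + 1)]

def yd_recurrence(N):
    """Same sequence via y_i = r*y_{i-1} + c*u_{i-1}, y_0 = 0."""
    d = 1.0 / N
    a = math.exp(-math.pi**2 * d)
    c = (1.0 - a) / math.pi**2
    r = a + c * math.sin(1.0)
    y, out = 0.0, []
    for j in range(N):
        y = r * y + c * (2.0 * math.sin(2.0 * j * d - math.pi / 6.0) + 1.0)
        out.append(y)
    return out
```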

Fig. 1 Graph of the control for different values of N

7 Conclusion

In this paper, we investigated the output controllability problem for a nonlinear discrete distributed system with an energy constraint. Using a technique based on the fixed point theorem, we established that the set of admissible controls can be characterized by the set of fixed points of an appropriate mapping. A numerical example was given to illustrate the obtained results.