Abstract
The Inverse Function Theorem is a powerful tool of local nonlinear analysis. It guarantees the existence of the inverse function and its differentiability. However, the first property is sometimes not needed. This is the case, for example, in optimal control theory and in inverse problems of mathematical physics, where the inverse operator can be interpreted as a control-state mapping. Its existence is a corollary of the properties of the state equation, while the differentiability of the inverse operator is used to differentiate the minimized functional or the discrepancy. We establish a differentiability criterion for the inverse operator. Moreover, we prove a property which can be interpreted as a weak form of operator differentiability. The Dirichlet problem for a nonlinear elliptic equation is considered as an example.
Keywords
- Inverse function theorem
- Operator derivative
- Extended derivative
- Nonlinear elliptic equation
- Necessary conditions of optimality
14.1 Introduction
Consider an operator A: Y→V, where V and Y are Banach spaces. Suppose that it is continuously differentiable in a neighborhood of a point y_0∈Y. Denote by A′(y_0) the derivative of the operator A at the point y_0. It is well known that the following result holds (see, for example, [1]).
The Inverse Function Theorem
Assume that there exists the continuous inverse operator A′(y_0)^{-1}. Then there exists an open neighborhood O of the point y_0 such that the set O′ = A(O) is an open neighborhood of the point v_0 = Ay_0; moreover, there exists the continuously differentiable inverse map A^{-1}: O′→O, and its derivative is defined by the formula

(A^{-1})′(v_0) = [A′(y_0)]^{-1}.
This result has very important applications. It is related to the Implicit Function Theorem [2], the Newton–Kantorovich method [1, 2], the Lusternik smooth manifold approximation theorem [3, 4], the Brouwer fixed point theorem [1, 5], the Morse lemma on singularities of smooth functions [1], the Graves covering theorem [4], etc. Extensions of the Inverse Function Theorem to higher-order differentiability [6], nonsmooth operators [7–9], multivalued maps [1, 7, 10], etc., are also known.
In reality the Inverse Function Theorem involves two different results: the invertibility of the given operator and the differentiability of the corresponding inverse operator. Sometimes only the second property is important. This is the case, for example, in extremum theory and in the theory of inverse problems. In particular, consider a system described by the equation

Ay = v.  (1)

The term v can be interpreted here as a control or an identifiable parameter, and y is a state function. Suppose that (1) has a unique solution y = y(v) in the space Y for every v∈V. Then the operator A is invertible. This result can be proved by tools tailored to the given equation; therefore, it is not necessary to use the Inverse Function Theorem here.
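To make this remark concrete, here is a minimal numerical sketch (our own 1-D toy model, anticipating the elliptic example of Sect. 14.2; the equation, grid, and data are assumptions, not taken from the paper). The discrete state equation is solved by Newton's method, so the control-state map v→y(v) is computed directly, with no appeal to the Inverse Function Theorem:

```python
import numpy as np

# Toy 1-D state equation: -y'' + |y|^rho * y = v on (0,1), y(0) = y(1) = 0.
# The monotonicity of the nonlinearity gives a unique state y(v) for every
# right-hand side v, so v -> y(v) = A^{-1} v is well defined.

def solve_state(v, rho=2.0, tol=1e-10, maxit=50):
    """Newton's method for the finite-difference discretization."""
    n = v.size
    h = 1.0 / (n + 1)
    lap = (np.diag(2.0 / h**2 * np.ones(n))
           + np.diag(-1.0 / h**2 * np.ones(n - 1), 1)
           + np.diag(-1.0 / h**2 * np.ones(n - 1), -1))
    y = np.zeros(n)
    for _ in range(maxit):
        residual = lap @ y + np.abs(y)**rho * y - v
        if np.linalg.norm(residual, np.inf) < tol:
            break
        # Jacobian = discrete Laplacian + diag((rho+1)|y|^rho): the linearized equation
        jac = lap + np.diag((rho + 1.0) * np.abs(y)**rho)
        y -= np.linalg.solve(jac, residual)
    return y

x = np.linspace(0.0, 1.0, 202)[1:-1]      # 200 interior grid points
v = 10.0 * np.sin(np.pi * x)              # a sample control
y = solve_state(v)
print(np.max(np.abs(y)))
```

Here uniqueness of the state comes from the monotonicity of the map y ↦ |y|^ρ y, which is exactly the kind of equation-specific tool mentioned above.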
Let U be a convex closed subset of the space V. The state functional is defined by the formula

I(v) = J(v) + K[y(v)],

where J is a functional on the set V, and K is a functional on the set Y. We have the following optimal control problem.
Problem 1
Minimize the functional I on the set U.
A necessary condition for the minimum of a smooth functional F on a convex set W at a point v_0 is the variational inequality (see [11])

⟨F′(v_0), v − v_0⟩ ≥ 0 for all v∈W,  (2)

where ⟨λ, φ⟩ is the value of the linear continuous functional λ at the point φ.
The functional I is the sum of J and the map v→K[y(v)]. The latter is the superposition of the functional K and the map v→y(v), which is, in fact, the inverse operator A^{-1}. Thus the proof of the differentiability of the given functional requires differentiating the inverse operator. This result can be obtained using the Inverse Function Theorem.
Lemma 1
Suppose that the operator A has a continuous inverse, that A is continuously differentiable in an open neighborhood of the point y_0 = y(v_0), and that there exists the continuous inverse operator A′(y_0)^{-1}. Then the map y(⋅): V→Y is Gateaux differentiable at the point v_0, and its derivative satisfies the formula

⟨μ, y′(v_0)h⟩ = ⟨p_μ(v_0), h⟩ for all h∈V, μ∈Y*,  (3)

where Y* is the adjoint space of Y, and p_μ(v_0) is the solution of the equation

[A′(y_0)]* p = μ.  (4)
Proof
By the Inverse Function Theorem the map y(⋅): V→Y is differentiable at the point v_0, and, moreover, y′(v_0) = [A′(y_0)]^{-1}. Then for all μ∈Y*, h∈V we get

⟨μ, y′(v_0)h⟩ = ⟨μ, [A′(y_0)]^{-1}h⟩.

It is known that a continuous linear operator and its adjoint are invertible at the same time (see p. 460 in [2]). Therefore (4) has a unique solution p_μ(v_0) = ([A′(y_0)]*)^{-1}μ from the space V*. Since ⟨μ, [A′(y_0)]^{-1}h⟩ = ⟨([A′(y_0)]*)^{-1}μ, h⟩ = ⟨p_μ(v_0), h⟩, the equality (3) is true. □
Now we can prove the differentiability of the functional I and obtain necessary conditions of optimality. Let v 0 be the solution of the minimization problem for the functional I on the set U. Define y 0=y(v 0).
Lemma 2
Under the conditions of Lemma 1 suppose that the functional J is Gateaux differentiable at the point v_0, and the functional K is Fréchet differentiable at the point y_0. Then the control v_0 satisfies the variational inequality

⟨J′(v_0) − p_0, v − v_0⟩ ≥ 0 for all v∈U,  (5)

where p_0 is a solution of the adjoint equation

[A′(y_0)]* p = −K′(y_0).  (6)
Proof
Using the Composite Function Theorem (see p. 637 in [2]), we obtain that the map v→K[y(v)] has a Gateaux derivative at v_0 whose value in a direction h∈V is ⟨K′(y_0), y′(v_0)h⟩. By equality (3) with μ = −K′(y_0) we get

⟨K′(y_0), y′(v_0)h⟩ = −⟨p_0, h⟩,

where p_0 is the solution of (4) for μ = −K′(y_0). Thus, we obtain the adjoint equation (6). So the derivative of the map v→K[y(v)] at the point v_0 equals −p_0. Then the functional I has the derivative

I′(v_0) = J′(v_0) − p_0

at this point. Using (2), we obtain the variational inequality (5). □
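The adjoint-based derivative of Lemma 2 can be checked numerically. The sketch below uses our own 1-D finite-difference model (state equation −y″ + |y|^ρ y = v, a quadratic J, and a tracking-type K; all concrete choices are assumptions, not taken from the paper): the directional derivative of I computed through one adjoint solve is compared with a central finite-difference quotient.

```python
import numpy as np

def laplacian(n):
    h = 1.0 / (n + 1)
    return (np.diag(2.0 / h**2 * np.ones(n))
            + np.diag(-1.0 / h**2 * np.ones(n - 1), 1)
            + np.diag(-1.0 / h**2 * np.ones(n - 1), -1))

def solve_state(v, rho, n):
    # Newton's method for -y'' + |y|^rho y = v with zero boundary values
    lap, y = laplacian(n), np.zeros(n)
    for _ in range(50):
        F = lap @ y + np.abs(y)**rho * y - v
        if np.linalg.norm(F, np.inf) < 1e-11:
            break
        y -= np.linalg.solve(lap + np.diag((rho + 1.0) * np.abs(y)**rho), F)
    return y

n, rho, alpha = 100, 2.0, 1e-2
h = 1.0 / (n + 1)
x = np.linspace(0.0, 1.0, n + 2)[1:-1]
y_d = np.sin(np.pi * x)

def I(v):  # I(v) = J(v) + K(y(v)) with quadratic J and tracking K
    y = solve_state(v, rho, n)
    return 0.5 * alpha * h * (v @ v) + 0.5 * h * ((y - y_d) @ (y - y_d))

def grad_I(v):
    # I'(v) = J'(v) - p, where the adjoint state p solves the linearized
    # equation A'(y)* p = -K'(y); here A'(y) is self-adjoint.
    y = solve_state(v, rho, n)
    p = np.linalg.solve(laplacian(n) + np.diag((rho + 1.0) * np.abs(y)**rho),
                        -(y - y_d))
    return alpha * v - p

v0 = np.cos(2.0 * np.pi * x)
dv = np.sin(np.pi * x)
adjoint = h * (grad_I(v0) @ dv)                      # <I'(v0), dv> via adjoint
eps = 1e-6
fd = (I(v0 + eps * dv) - I(v0 - eps * dv)) / (2 * eps)
print(abs(adjoint - fd))
```

Agreement of the two numbers (up to roundoff) illustrates the point of the lemma: one adjoint solve replaces many perturbed state solves.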
Thus the Inverse Function Theorem is a good tool for proving the differentiability of the control-state mapping. This result is the basis for obtaining necessary optimality conditions. Note, however, that we have used the strong assumption of the invertibility of the operator's derivative. It is equivalent to the existence of a unique solution y∈Y of the linearized equation

A′(y_0)y = v  (7)

for all v∈V.
Now we have the following questions:
- How large is the class of operators that satisfy the mentioned assumption?
- What is a criterion for the differentiability of the inverse operator at a concrete point?
- Can we prove the differentiability of the inverse operator without using the Inverse Function Theorem?
- Can we prove a weaker form of the differentiability of the inverse operator, sufficient for obtaining optimality conditions, in the case of non-invertibility of the operator's derivative?
We will try to answer these questions.
14.2 Criterion for the Differentiability of the Inverse Operator
Consider an operator A: Y→V. Let it be continuous and differentiable in a neighborhood of a point y_0∈Y.
Theorem 1
Suppose that there exists an open neighborhood O of the point y_0 such that the set O′ = A(O) is an open neighborhood of the point v_0 = Ay_0. Suppose also that there exists the inverse operator A^{-1}: O′→O, and that (7) has at most one solution. Then this inverse operator is Gateaux differentiable at v_0 if and only if the derivative A′(y_0) is a surjection.
Proof
Let the derivative A′(y_0) be a surjection. Then it is invertible by the assumptions of the theorem. By the Banach Inverse Operator Theorem there exists the continuous inverse operator A′(y_0)^{-1}. Therefore, the differentiability of the operator A^{-1} at the point v_0 follows directly from the Inverse Function Theorem.
Suppose now that the operator A^{-1} has the Gateaux derivative D at the point v_0, and that the derivative A′(y_0) is not a surjection. We have the equality

A[A^{-1}(v_0 + σv)] = v_0 + σv

for all v∈V and all small enough numbers σ. Subtracting A[A^{-1}v_0] = v_0, dividing by σ, and passing to the limit as σ→0, using the Composite Function Theorem and the differentiability of A^{-1}, we get

A′(y_0)Dv = v.

Then for every v∈V there exists a point y = Dv from Y such that A′(y_0)y = v. So the derivative A′(y_0) is a surjection. However, this conclusion contradicts our assumption. Hence, the operator A′(y_0) is a surjection whenever the inverse operator is differentiable. □
Thus the Gateaux differentiability of the inverse operator is equivalent to the following property: the operator A′(y_0) is a surjection. This is called the Lusternik condition [4].
Consider as an example the homogeneous Dirichlet problem for the equation

−Δy + |y|^ρ y = v  (8)

in an n-dimensional bounded set Ω, where ρ>0. Denote the space

Y = H^1_0(Ω) ∩ L_q(Ω),

where q = ρ+2. Using the theory of monotone operators [12], we obtain that this boundary value problem has a unique solution y∈Y for every v from the set V, which is the adjoint space

V = Y* = H^{-1}(Ω) + L_{q′}(Ω),

where 1/q + 1/q′ = 1. Denote by A: Y→V the operator such that Ay equals the left-hand side of the equality (8). The existence of the operator A^{-1} follows from the unique solvability of the boundary value problem. Its differentiability can be studied using the properties of the linearized equation, which is the homogeneous Dirichlet problem for the equation

−Δy + (ρ+1)|y_0|^ρ y = v.  (9)
Corollary 1
The solution of the Dirichlet problem for (8) is Gateaux differentiable with respect to the absolute term at the point v = v_0 if and only if (9) has a solution y∈Y for all v∈V.
Indeed, the continuous differentiability of the given operator A is obvious. The existence of the inverse operator follows from the unique solvability of the given boundary value problem. It is obvious that the Dirichlet problem for the linear equation (9) cannot have two solutions. Then, by Theorem 1, the criterion for the differentiability of the inverse operator is the Lusternik condition.
Now we obtain a criterion for the differentiability of the solution of (8) with respect to the absolute term on the space V.
Corollary 2
The solution of the Dirichlet problem for (8) is Gateaux differentiable with respect to the absolute term at an arbitrary point if and only if the embedding \(H^{1}_{0}(\varOmega )\subset L_{q}(\varOmega )\) is true.
Proof
Multiply equality (9) by the function y and integrate the result over Ω, using Green's formula and the boundary condition. We get

∫_Ω |∇y|² dx + (ρ+1)∫_Ω |y_0|^ρ y² dx = ⟨v, y⟩.

Under the given assumption we have \(Y=H^{1}_{0}(\varOmega )\), hence V = H^{-1}(Ω). So the a priori estimate of the solution of (9) in the norm of Y for all v∈V follows from the obtained equality. Now we get the unique solvability of the linearized equation by means of the standard theory of elliptic equations (see, for example, [11]). Thus the differentiability of the solution of (8) with respect to the absolute term at an arbitrary point follows from Corollary 1.
We now prove that the solution of (8) is not differentiable with respect to its absolute term if the mentioned embedding does not hold. Let y_0 be a continuous function from the space Y. Then the left-hand side of the equality (9) is a point of the space H^{-1}(Ω) for all y∈Y. Therefore, the image of the derivative A′(y_0) is narrower than the set V if the mentioned embedding does not hold. So (9) does not have any solution y∈Y for any function v from the difference V∖H^{-1}(Ω). Therefore, the solution of the homogeneous Dirichlet problem for (8) is not Gateaux differentiable at the point v_0 = Ay_0 by Corollary 1. This completes the proof of Corollary 2. □
By the Sobolev embedding theorem the embedding \(H^{1}_{0}(\varOmega )\subset L_{q}(\varOmega )\) is true if n=2 or ρ≤4/(n−2) for n>2. Then the solution of (8) is differentiable with respect to the absolute term for small enough values of the set dimension n and the nonlinearity parameter ρ. These characteristics determine the degree of difficulty of the given equation. Note that the Inverse Function Theorem can yield the differentiability of the inverse operator, but not the absence of this property. We will show below that there exists another technique for proving this property. It is applicable even in the case of nondifferentiability in the sense of Gateaux: the inverse operator then still satisfies a property which can be interpreted as a weak form of differentiability.
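Corollary 2 together with the Sobolev condition just stated gives a purely arithmetic differentiability test. The helper below is our own summary of that condition, not code from the paper:

```python
def inverse_operator_differentiable(n, rho):
    """Embedding H^1_0(Omega) in L_q(Omega), q = rho + 2: it holds when
    n <= 2, or when rho <= 4/(n - 2) for n > 2 (Sobolev theorem)."""
    if n <= 2:
        return True
    return rho <= 4.0 / (n - 2)

for n in (2, 3, 4, 6):
    print(n, inverse_operator_differentiable(n, 2.0))
# for rho = 2: True for n = 2, 3, 4 and False for n = 6
```

So for a fixed nonlinearity ρ the Gateaux differentiability of the control-state map is lost once the dimension n grows past the Sobolev threshold.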
The obtained result can be used for the analysis of optimal control problems for the system described by (8). Consider as an example the functional with parameters α>0 and y_d∈H^{-1}(Ω), where y(v) is the solution of the Dirichlet problem (8) for the control v, and ∥⋅∥ and ∥⋅∥∗ are the norms of the spaces \(H^{1}_{0}(\varOmega )\) and H^{-1}(Ω). Consider the following optimization problem.
Problem 2
Minimize the functional I on the convex closed subset U of the space V.
The solvability of this problem can be proved by a standard method (see, for example, Chap. 1, Theorem 1.1 in [11]) using the weak continuity of the state function with respect to the absolute term. Note that the functional I possibly being undefined on the whole set U is not an obstacle for the analysis of the optimization problem [13].
Corollary 3
If \(H^{1}_{0}(\varOmega )\subset L_{q}(\varOmega )\), then the solution v 0 of Problem 2 satisfies the inequality
where Λ is the canonical isomorphism of the spaces H −1(Ω) and \(H^{1}_{0}(\varOmega )\), and p 0 is the solution of the homogeneous Dirichlet problem for the equation
Proof
The derivative of the functional J (the first term of the minimizing functional) is defined by the equality
where (⋅,⋅)∗ is the scalar product of the space H^{-1}(Ω). By the Riesz theorem there exists the canonical isomorphism \(\varLambda :H^{-1}(\varOmega )\rightarrow H^{1}_{0}(\varOmega )\). Then we get
The derivative of the functional K (the second term of the minimizing functional) is defined by the equality
where (⋅,⋅) is the scalar product of the space \(H^{1}_{0}(\varOmega )\). Using Green's formula, we obtain
The operator A′(y_0) is self-adjoint. Then the adjoint equation (6) transforms into (11), and the variational inequality (5) transforms into (10). This completes the proof of the corollary. □
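For a problem of this kind the variational inequality is equivalent, for any step s>0, to the fixed-point relation v_0 = P_U(v_0 − sI′(v_0)), where P_U is the projection onto U. The following projected-gradient sketch exploits this (again our own 1-D finite-difference model with box constraints; the grid, ρ, α, step size, and y_d are assumptions, not the paper's data):

```python
import numpy as np

def laplacian(n):
    h = 1.0 / (n + 1)
    return (np.diag(2.0 / h**2 * np.ones(n))
            + np.diag(-1.0 / h**2 * np.ones(n - 1), 1)
            + np.diag(-1.0 / h**2 * np.ones(n - 1), -1))

def solve_state(v, rho, n):
    # Newton's method for -y'' + |y|^rho y = v, zero boundary values
    lap, y = laplacian(n), np.zeros(n)
    for _ in range(50):
        F = lap @ y + np.abs(y)**rho * y - v
        if np.linalg.norm(F, np.inf) < 1e-9:
            break
        y -= np.linalg.solve(lap + np.diag((rho + 1.0) * np.abs(y)**rho), F)
    return y

def grad_I(v, y_d, alpha, rho, n):
    # I'(v) = alpha*v - p with adjoint state p (self-adjoint linearization)
    y = solve_state(v, rho, n)
    p = np.linalg.solve(laplacian(n) + np.diag((rho + 1.0) * np.abs(y)**rho),
                        -(y - y_d))
    return alpha * v - p

n, rho, alpha, s = 80, 2.0, 1e-2, 20.0
x = np.linspace(0.0, 1.0, n + 2)[1:-1]
y_d = np.sin(np.pi * x)

v = np.zeros(n)
for _ in range(500):
    v = np.clip(v - s * grad_I(v, y_d, alpha, rho, n), -1.0, 1.0)  # P_U step

# at a fixed point the discrete variational inequality holds on U
residual = np.max(np.abs(v - np.clip(v - s * grad_I(v, y_d, alpha, rho, n), -1.0, 1.0)))
print(residual)
```

The projection step `np.clip` implements P_U for the box U = {v : |v| ≤ 1}; a (numerically) zero final residual is the discrete counterpart of the variational inequality, with the constraint active where the unconstrained gradient step would leave U.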
14.3 Differentiation of the Inverse Operator
We will try to prove the differentiability of the inverse operator directly, without using the Inverse Function Theorem. Consider again an operator A: Y→V and a point v_0∈V. We assume the following.
Property 1
The operator A is invertible in a neighborhood O of the point v 0.
For each h∈V, choose a small enough positive number σ such that the point v_σ = v_0 + σh lies in O. Denote by y(v) the value A^{-1}v. Using the equalities Ay(v_σ) = v_σ and Ay(v_0) = v_0, we get

Ay(v_σ) − Ay(v_0) = σh.
Assume the following property.
Property 2
The operator A is Gateaux differentiable.
By the Mean Value Theorem we obtain

Ay(v_σ) − Ay(v_0) = G(v_σ)[y(v_σ) − y_0],

where y_0 = y(v_0). Then we have

G(v_σ)[y(v_σ) − y_0] = σh,

where the linear continuous operator G(v): Y→V is defined by the formula

G(v) = ∫_0^1 A′(y_0 + ε[y(v) − y_0]) dε

for all v∈V. We get

⟨G(v_σ)*λ, [y(v_σ) − y_0]/σ⟩ = ⟨λ, h⟩ for all λ∈V*.  (12)
Consider the linear operator equation

G(v)* p = μ.  (13)

It transforms into

A′(y_0)* p = μ  (14)

for v = v_0, since G(v_0) = A′(y_0). We will use the following assumption.
Property 3
Equation (13) has a unique solution p μ (v)∈V ∗ for all μ∈Y ∗, v∈O.
Defining λ = p_μ(v_σ) for small enough σ in (12), we get

⟨μ, [y(v_σ) − y_0]/σ⟩ = ⟨p_μ(v_σ), h⟩.  (15)

Define M = {μ∈Y*: ∥μ∥ ≤ 1}, the unit ball of the space Y*.
Property 4
The convergence p_μ(v_σ)→p_μ(v_0) *-weakly in V* uniformly with respect to μ∈M as σ→0 holds for all h∈V.
Theorem 2
Suppose Properties 1–4 hold. Then the operator A^{-1} has the Gateaux derivative D at the point v_0 such that

⟨μ, Dh⟩ = ⟨p_μ(v_0), h⟩ for all μ∈Y*, h∈V.  (16)
Proof
Let the operator D be defined by (16). It is a linear continuous map from V to Y. Using (15) and (16), we get

∥[y(v_σ) − y_0]/σ − Dh∥ = sup_{μ∈M} ⟨μ, [y(v_σ) − y_0]/σ − Dh⟩ = sup_{μ∈M} ⟨p_μ(v_σ) − p_μ(v_0), h⟩

by the definition of the norm. We have p_μ(v_σ)→p_μ(v_0) *-weakly in V* uniformly with respect to μ∈M for all h∈V because of Property 4. Passing to the limit in the last equality as σ→0, we get the convergence

[y(v_σ) − y_0]/σ → Dh in Y.

So the operator D is the Gateaux derivative of the operator A^{-1} at the point v_0. □
Let us explain applications of this result.
Lemma 3
The operator A for (8) satisfies the Properties 1–4 if \(H^{1}_{0}(\varOmega )\subset L_{q}(\varOmega )\).
Proof
Property 1 is the unique solvability of (8). The differentiability of the operator A (Property 2) is obvious; moreover, its derivative is defined by the equality

A′(y)h = −Δh + (ρ+1)|y|^ρ h.

Thus it remains to verify Properties 3 and 4 using the properties of the adjoint equation (13).
We have

|y(v_σ)|^ρ y(v_σ) − |y_0|^ρ y_0 = (ρ+1)|y_0 + ε[y(v_σ) − y_0]|^ρ [y(v_σ) − y_0],

where ε = ε(x)∈[0,1]. Define

g(v) = (ρ+1)^{1/2} |y_0 + ε[y(v) − y_0]|^{ρ/2},

so that we get

G(v)y = −Δy + g(v)² y.

Then we obtain the equality

⟨G(v)* p, y⟩ = ⟨p, G(v)y⟩

for all y∈Y, p∈V*, v∈V. So we get G(v)* p = −Δp + g(v)² p, and (13) is transformed into

−Δp_μ(v_σ) + g(v_σ)² p_μ(v_σ) = μ.  (17)
Multiplying (17) by p_μ(v_σ) and integrating over Ω, we have

∫_Ω |∇p_μ(v_σ)|² dx + ∫_Ω g(v_σ)² p_μ(v_σ)² dx = ⟨μ, p_μ(v_σ)⟩.
Then we obtain the inequality
where ∥⋅∥_p is the norm in L_p(Ω). So we get
Then (17) has a unique solution p_μ(v_σ)∈V* for all μ∈Y*, h∈V, and small σ; hence Property 3 holds.
The space V is reflexive, so it is sufficient to prove that p_μ(v_σ)→p_μ(v_0) weakly in V* uniformly with respect to μ as σ→0 for all h∈V. The set {p_μ(v_σ)} is bounded in the space \(H^{1}_{0}(\varOmega )\), and the set {g(v_σ)p_μ(v_σ)} is bounded in the space L_2(Ω), uniformly with respect to μ∈M for all h∈V, because of the inequalities (18). Using the Banach–Alaoglu theorem, we get p_μ(v_σ)→p weakly in \(H^{1}_{0}(\varOmega )\) uniformly with respect to μ∈M for all h∈V. Applying the Rellich–Kondrashov theorem, we get p_μ(v_σ)→p strongly in L_2(Ω) and a.e. on Ω. Using the continuity of the solution of (8) with respect to the absolute term, we obtain y(v_σ)→y(v_0) in \(H^{1}_{0}(\varOmega )\) and a.e. on Ω. Then
The sets {p μ (v σ )}, {y(v σ )}, and {g(v σ )2/ρ} are uniformly bounded in L q (Ω). We have
So the set {g(v σ )2 p μ (v σ )} is uniformly bounded in L q′(Ω). Using Lemma 1.3 (see Chap. 1 in [12]), we get
uniformly with respect to μ∈M for all h∈V.
Let us multiply (17) by a function \(\lambda \in H^{1}_{0}(\varOmega )\). After integration we get
Passing to the limit as σ→0, we obtain that the function p = p_μ(v_0) satisfies the equation

−Δp + (ρ+1)|y_0|^ρ p = μ.  (19)

Thus p_μ(v_σ)→p_μ(v_0) weakly in \(H^{1}_{0}(\varOmega )\) uniformly with respect to μ∈M for all h∈V, i.e., Property 4 is true. □
By Lemma 3 the differentiability of the solution of (8) with respect to the absolute term follows from Theorem 2 if the embedding \(H^{1}_{0}(\varOmega )\subset L_{q}(\varOmega )\) holds.
Lemma 4
Properties 1–4 follow from the assumptions of the Inverse Function Theorem.
Proof
The existence of the inverse operator is a corollary of the Inverse Function Theorem. The differentiability of the operator A is an assumption of this theorem. So the main difficulty is the analysis of (13), namely the justification of Properties 3 and 4. Equation (13) can be transformed into

A′(y_0)* p + [G(v_σ) − G(v_0)]* p = μ.

The derivative A′(y_0) is invertible by the assumptions of the Inverse Function Theorem. So its adjoint operator is invertible too. Then (13) can be transformed into the equality

p = L_μ(σh)p,  (20)

where the map L_μ(σh): V*→V* is defined by the formula

L_μ(σh)p = [A′(y_0)*]^{-1}(μ − [G(v_σ) − G(v_0)]* p).

Using properties of the operator norm, we get the inequality

∥L_μ(σh)p_1 − L_μ(σh)p_2∥ ≤ ∥[A′(y_0)*]^{-1}∥ ∥[G(v_σ) − G(v_0)]*∥ ∥p_1 − p_2∥

for all p_1, p_2∈V*. Then we obtain

∥L_μ(σh)p_1 − L_μ(σh)p_2∥ ≤ ∥[A′(y_0)*]^{-1}∥ ∥G(v_σ) − G(v_0)∥ ∥p_1 − p_2∥

because of the equality of the norms of adjoint operators. The operator A^{-1} is continuous at the point v_0 by the Inverse Function Theorem. Therefore we get the convergence y(v_0 + σh)→y_0 in Y as σ→0 for all h∈V. Using the continuous differentiability of the operator A at the point y_0, we get G(v_σ)→G(v_0) in the sense of the corresponding operator norm. The value σ can be chosen small enough that

∥[A′(y_0)*]^{-1}∥ ∥G(v_σ) − G(v_0)∥ ≤ χ,

where 0<χ<1. So we obtain the estimate

∥L_μ(σh)p_1 − L_μ(σh)p_2∥ ≤ χ ∥p_1 − p_2∥.

Thus the operator L_μ(σh) is a contraction. Then (20) has a unique solution p_μ(v_σ)∈V* by the Contraction Mapping Theorem.
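The contraction argument above can be mimicked in finite dimensions. In the sketch below, random small matrices stand in for A′(y_0)* and the perturbation [G(v_σ) − G(v_0)]*; all data are made up for illustration. The fixed-point iteration for (20) converges precisely because the contraction constant χ is below 1:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 5
A_adj = np.eye(m) + 0.05 * rng.standard_normal((m, m))  # stands for A'(y0)*
E_adj = 0.02 * rng.standard_normal((m, m))              # stands for [G(v_sigma)-G(v_0)]*
G_adj = A_adj + E_adj                                   # stands for G(v_sigma)*
mu = rng.standard_normal(m)

# contraction constant chi = ||(A'(y0)*)^{-1}|| * ||G(v_sigma) - G(v_0)||
chi = np.linalg.norm(np.linalg.inv(A_adj), 2) * np.linalg.norm(E_adj, 2)
assert chi < 1.0   # the map L_mu is then a contraction

# fixed-point iteration p -> (A'(y0)*)^{-1} (mu - E* p) for the analogue of (20)
p = np.zeros(m)
for _ in range(200):
    p = np.linalg.solve(A_adj, mu - E_adj @ p)

print(np.linalg.norm(G_adj @ p - mu))   # the limit solves the adjoint equation
```

This mirrors the proof: invertibility of A′(y_0)* plus smallness of G(v_σ) − G(v_0) yields unique solvability of the adjoint equation for small σ.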
We get G(v σ )→G(v 0) as σ→0. So G(v σ )λ→G(v 0)λ in V for all λ∈Y. Using the obtained inequalities, we get
So we have
Then p μ (v σ )→p *-weakly in V ∗ for all h∈V as σ→0.
Using equation (13), we get
As a consequence, {p_μ(v_σ)} converges *-weakly, and {G(v_σ)} converges strongly. After passing to the limit we have A′(y_0)* p = μ, and so p = p_μ(v_0). □
Thus the assumptions of Theorem 2 follow from the assumptions of the Inverse Function Theorem. However, the assertions of Theorem 2 may be true even when the assumptions of the Inverse Function Theorem are not satisfied.
14.4 Extended Differentiation of the Inverse Operator
The solution of (8) is differentiable with respect to the absolute term for small enough values of the set dimension n and the nonlinearity parameter ρ, but it is not differentiable for large values of these parameters. Suppose n≥3. By the Sobolev embedding theorem the embedding \(H^{1}_{0}(\varOmega )\subset L_{q}(\varOmega )\) is true if ρ≤4/(n−2). It guarantees the differentiability of the considered inverse operator. However, this embedding fails as the parameter ρ increases, and the solution of the equation then becomes non-differentiable with respect to the absolute term. This is a strange situation: the properties of the inverse operator would change with a jump in the neighborhood of some value of ρ, the differentiability disappearing as soon as this value is passed. Such a jump seems unlikely. We may instead suppose the existence of a weaker form of operator differentiability than the Gateaux derivative. We would also like to determine an extension of the operator derivative, because the solvability of our optimization problem was proved for all values of the set dimension and the nonlinearity parameter.
There exist extensions of the classical operator differentiation, for example, subdifferential calculus [14], Clarke derivatives [15], and quasidifferential calculus [7]. They are also used for solving nonsmooth optimization problems. These tools are effective for the analysis of operators with nonsmooth terms, for example, the absolute value or the maximum of functions. However, no such terms are present in our case. So we will try to define another form of extension of operator derivatives.
It is known that "the general idea of the differential calculus is a local approximation of a function by a linear function" (see p. 170 in [16]). Differentiation is a tool for the local approximation of the analyzed object. The desired form of an operator derivative can be obtained by weakening the topological approximation properties of the differentiation. Then we get the extended operator derivative (see [17–19]).
Definition
An operator L: V→Y is called (V_0,Y_0;V_1,Y_1)-extended differentiable in the sense of Gateaux at the point v_0∈V if there exist linear topological spaces V_0, Y_0, V_1, Y_1 with continuous embeddings and a linear continuous operator D: V_0→Y_0 such that

[L(v_0 + σh) − Lv_0]/σ → Dh in Y_0 for all h∈V_1

as σ→0.
It is obvious that the (V,Y;V,Y)-derivative is the standard Gateaux derivative. The following result is known (see Theorem 4 in [18]; Theorem 5.4 in [19]).
Lemma 5
The operator A −1 for (8) is (V 0,Y 0;V 1,Y 1)-extended differentiable in the sense of Gateaux at an arbitrary point v 0∈V, where
moreover, its derivative D satisfies the equality
and p μ (v 0) is the solution of the homogeneous Dirichlet problem for (19).
Thus the inverse operator for the given example is extended differentiable for all values of the set dimension and the nonlinearity parameter. Its extended derivative coincides with the Gateaux derivative for small enough values of these characteristics. However, the Gateaux derivative does not exist for large values, that is, when the degree of difficulty of the problem is high enough; moreover, the difference between the standard derivative and the extended one is determined by this degree of difficulty. Thus the inverse operator is extended differentiable without any constraints, but the extended derivative departs from the classical one as the parameters determining the degree of difficulty of the problem grow. We therefore obtain a gradual change of the properties of the inverse operator under a gradual change of its parameters, whereas the standard theory of derivatives only permits a change with a jump.
We will prove that the obtained result is sufficient for the analysis of the given optimization problem without any constraints.
Corollary 4
The solution of the minimization problem of the functional I on the set U for (8) satisfies the variational inequality
where U 1=U∩(v 0+V 1), and p 0 is a solution of (11).
Indeed, if v_0 is a solution of the optimization problem, then
Let us choose v∈v_0+V_1. After division by σ, passing to the limit and using Lemma 5, we get
Then the inequality (22) is true.
If \(H^{1}_{0}(\varOmega )\subset L_{q}(\varOmega )\), then U_1 = U, and the variational inequalities (10) and (22) coincide. Thus necessary conditions of optimality can be obtained without any additional assumptions by means of the theory of extended derivatives. Optimization problems for elliptic equations with power nonlinearity without Gateaux differentiability of the control-state mapping were considered in [18, 19], but the control space there was narrower and the state functional more regular. This technique was used for the analysis of optimization problems for other equations in [20].
Note that Lemma 5 uses the technique of the proof of Theorem 2. We may expect that it is possible to obtain the extended differentiability of the inverse operator in the general case. Consider Banach spaces Y, V, a map A: Y→V, and points y_0∈Y, v_0 = Ay_0. Let V_1 be a Banach subspace of V with a neighborhood O_1 of zero. Then O = v_0+O_1 is a neighborhood of v_0. We assume the following.
Property 5
The operator A is invertible on the set O.
Define y(v) = A^{-1}v. We get the equality

Ay(v_σ) − Ay(v_0) = σh

for all h∈V_1 and small enough σ, where v_σ = v_0+σh. Let G(v) be the operator from the proof of Theorem 2. We have

G(v_σ)[y(v_σ) − y(v_0)] = σh,

so
Consider Banach spaces V(v) and Y(v) such that the embeddings of the spaces Y, Y 1 and Y(v) to V(v), Y(v) and V, respectively, are continuous for all v∈O. Let the following assumption be true.
Property 6
The operator A is Gateaux differentiable, moreover, there exists the continuous extension \(\overline{G}(v)\) of the operator G(v) to Y(v) such that its image is a subset of V(v) for all v∈O.
Using the properties y(v)∈y 0+Y(v) and V(v)∗⊂V ∗ we get
It is an analogue of (12). Consider the linear operator equation
which is an analogue of (13). It can be transformed to
for v=v 0, where \(\overline{A}'(y_{0})=\overline{G}(v_{0})\) is the extension of the operator A′(y 0)=G(v 0) to the set Y(v 0).
Consider a Banach space V_1 such that the embedding Y(v)⊂V_1 is continuous and dense for all v∈O. We assume the following condition.
Property 7
Equation (24) has the unique solution p μ (v)∈V(v)∗ for all v∈O, μ∈Y(v)∗.
Defining in (23) λ=p μ (v σ ) for a small enough σ we get
We will use the additional assumption.
Property 8
The convergence p μ (v σ )→p μ (v 0) holds *-weakly in \(V_{1}^{*}\) uniformly with respect to μ∈M as σ→0 for all h∈V 1.
The extended differentiability of the inverse operator is guaranteed by the following result.
Theorem 3
Let us suppose the Properties 5–8. Then the operator A −1 has the (V(v 0),Y(v 0);V 1,Y 1)-extended Gateaux derivative D at the point v 0 such that
Proof
Then
We have p_μ(v_σ)→p_μ(v_0) *-weakly in \(V_{1}^{*}\) uniformly with respect to μ∈M for all h∈V_1 as σ→0 by Property 8. Passing to the limit in the last equality, we obtain
for all h∈V_1. Thus D is an extended derivative of the inverse operator. □
A result on the extended differentiability of the inverse operator for nonnormalized spaces was obtained in [18].
Let us prove that the assumptions of Theorem 3 hold for the considered example.
Lemma 6
The operator A, which is defined by (8), satisfies the Properties 5–8.
Proof
Property 5 is the solvability of (8) in a neighborhood of the given point; this assumption obviously holds. The differentiability of the operator A is clear. The operator G(v) in our case is determined by the equality
where
Let the spaces Y_1, V_1, Y(v), V(v) be those defined in the proof of Lemma 5. Define the map \(\overline{G}(v)\) by the equality
Then Property 6 is true. The validity of Properties 7 and 8 was established in the proof of Lemma 5. □
Thus the extended differentiability of the inverse operator for (8) follows from Theorem 3.
Lemma 7
Properties 5–8 follow from the assumptions of the Inverse Function Theorem.
Indeed, Property 5 is a direct corollary of this theorem. Let us take V = V_1 and Y = Y_1. Then we get Y(v) = Y and V(v) = V. So the operator \(\overline{G}(v)\) is equal to G(v), and Property 6 is trivial. Then Properties 7 and 8 reduce to Properties 3 and 4, whose validity was proved above.
Thus Theorem 3 is a generalization of Theorem 2. The obtained results can be used in other applications where it is necessary to differentiate an inverse operator. For example, extended differentiable submanifolds of Banach spaces are defined in [21, 22], where optimal control problems on differentiable submanifolds are considered. Analogous results could be obtained for the implicit operator, including the case of nonnormalized spaces (see [23]). Banach spaces with extended differentiable operators form a category, and necessary conditions of optimality have a categorical interpretation (see [24]).
References
Aubin, J.P., Ekeland, I.: Applied Nonlinear Analysis. Wiley, New York (1984)
Kantorovich, L.V., Akilov, G.P.: Functional Analysis. Nauka, Moscow (1977)
Lusternik, L.A.: On conditional extrema of functionals. Mat. Sb. 41(3), 390–401 (1934)
Dmitruk, A.V., Milyutin, A.A., Osmolovsky, N.P.: Lyusternik's theorem and the theory of extrema. Usp. Mat. Nauk 35(6), 11–46 (1980)
Milnor, J.: Analytic proof of the hairy ball theorem and the Brouwer fixed point theorem. Am. Math. Mon. 85, 521–524 (1978)
Frankowska, H.: High order inverse function theorems. Ann. Inst. Henri Poincaré, Anal. Non Linéaire 6, 283–303 (1989)
Dem'yanov, V.F., Rubinov, A.M.: Foundations of Nonsmooth Analysis and Quasidifferential Calculus. Nauka, Moscow (1990)
Radulescu, R., Radulescu, M.: Local inversion theorems without assuming continuous differentiability. J. Math. Anal. Appl. 138, 581–590 (1989)
Cristea, M.: Local inversion theorems without assuming continuous differentiability. J. Math. Anal. Appl. 143, 259–263 (1989)
Aubin, J.P., Frankowska, H.: On inverse function theorems for set-valued maps. J. Math. Pures Appl. 66, 71–89 (1987)
Lions, J.L.: Contrôle Optimal de Systèmes Gouvernés par des Equations aux Derivées Partielles. Dunod, Paris (1968)
Lions, J.L.: Quelques Méthods de Resolution des Problèmes aux Limites non Linèaires. Dunod, Paris (1969)
Lions, J.L.: Contrôle de Systèmes Distribués Singulièrs. Gauthier-Villars, Paris (1983)
Ekeland, I., Temam, R.: Convex Analysis and Variational Problems. North-Holland, Amsterdam (1976)
Clarke, F.H.: Optimization and Nonsmooth Analysis. Wiley, New York (1983)
Dieudonné, J.: Foundations of Modern Analysis. Nauka, Moscow (1964)
Serovajsky, S.: Differentiation of the inverse function for nonnormalised spaces. Funct. Anal. Appl. 27(4), 84–87 (1993)
Serovajsky, S.: Calculation of functional gradients and extended differentiation of operators. J. Inverse Ill-Posed Probl. 13(4), 383–396 (2005)
Serovajsky, S.: Optimization and Differentiation, vol. 1. Print-S, Almaty (2006)
Serovajsky, S.: Necessary conditions of optimality for the case of nondifferentiability of the control-state mapping. Differ. Equ. 31(6), 1055–1059 (1995)
Serovajsky, S.: Extended differentiability of the implicit function in the spaces without norms. Russ. Math. (Izv. VUZ) 12, 55–63 (1991)
Serovajsky, S.: Extremal problems on differentiable manifolds of Banach spaces. Russ. Math. (Izv. VUZ) 5, 83–86 (1996)
Serovajsky, S.: Extended differentiable submanifolds. Russ. Math. (Izv. VUZ) 1, 56–65 (1997)
Serovajsky, S.: Differentiation of operators and conditions of the extremum with category interpretation. Russ. Math. (Izv. VUZ) 2, 66–76 (2010)
© 2013 Springer International Publishing Switzerland
Serovajsky, S.Y. (2013). Differentiability of Inverse Operators. In: Reissig, M., Ruzhansky, M. (eds) Progress in Partial Differential Equations. Springer Proceedings in Mathematics & Statistics, vol 44. Springer, Heidelberg. https://doi.org/10.1007/978-3-319-00125-8_14