Abstract
In this work, we study differential problems in which the reflection operator and the Hilbert transform are involved. We reduce these problems to ODEs in order to solve them. Also, we describe a general method for obtaining the Green’s function of reducible functional differential equations and illustrate it with the case of homogeneous boundary value problems with reflection and several specific examples.
1 Introduction
Some special kinds of functional differential equations, called reducible differential equations, can be solved by performing operations on them which lead to a related problem with an ODE or a system of ODEs (see, for instance, [20, 22]).
To be more specific, if \({\mathbb {R}}[D]\) is the ring of polynomials in the usual differential operator D and \({\mathscr {A}}\) is any operator algebra containing \({\mathbb {R}}[D]\), then an equation \(L x=0\), where \(L\in {\mathscr {A}}\), is said to be a reducible differential equation if there exists \(R\in {\mathscr {A}}\) such that \(\textit{RL}\in {\mathbb {R}}[D]\). A similar definition could be given for non-constant or complex coefficients.
This ODE problem can be solved and, among the solutions obtained for it, some may be solutions of the original problem as well. This approach has recently been extended to obtaining Green’s functions for some of those problems [3–6].
It is important to point out that the transformations necessary to reduce the problem to an ordinary one are of a purely algebraic nature. In this sense, the approach is similar to algebraic analysis theory, which, through the study of Ore algebras and modules, obtains important information about some functional problems, including explicit solutions [2, 9]. Nevertheless, the algebraic structures we deal with here are somewhat different; e.g., they are not in general Ore algebras. We refer the reader to [13, 16–18] for an algebraic approach to the abstract theory of boundary value problems and its applications to symbolic computation.
Among the reducible functional differential equations, those with reflection have gathered great interest, some of it due to their applications to supersymmetric quantum mechanics [10, 15, 19] or to other areas of analysis like topological methods [8].
In this work, we place special emphasis on two operators appearing in the equations: the reflection operator and the Hilbert transform. Both have exceptional algebraic properties which make them well suited to our approach.
In the next section, we study the case of operators with reflection and the algebra generated by them, illustrating its properties. In Sect. 3, we show how we can compute the Green’s function of a problem with reflections in a fairly general setting using the properties studied in Sect. 2. Section 3 introduces a particular case of a more general context. This new setting is studied in Sect. 4, where we outline the theory for abstract linear operators and prove, as a particular case, the result used in Sect. 3 to derive the Green’s function. Finally, in Sect. 5, we show that the application of our results extends beyond equations with reflection, study the case of differential equations in which the Hilbert transform is involved, and give an example of how to compute the solutions of these equations. Also, we show how these kinds of operators relate to complex polynomials and outline an analogous theory for hyperbolic polynomials.
2 Differential Operators with Reflection
In this section, we study a particular family of operators: those that are combinations of the differential operator D, the pullback operator of the reflection \(\varphi (t)=-t\), denoted by \(\varphi ^*(f)(t)=f(-t)\), and the identity operator, \({{\mathrm{Id}}}\). In order to freely apply the operator D without worrying too much about its domain of definition, we will consider that D acts on the set of functions locally of bounded variation on \({\mathbb {R}}\), \({{\mathrm{BV_{loc}}}}({\mathbb {R}})\).Footnote 1
It is well known that any function locally of bounded variation \(f\in {{\mathrm{BV_{loc}}}}({\mathbb {R}})\) can be expressed as
$$\begin{aligned} f(x)=\int _{x_0}^{x}g(s)\text {d}s+h(x) \end{aligned}$$
for any \(x_0\in {\mathbb {R}}\), where \(g\in L^1({\mathbb {R}})\) and h is a function with derivative zero almost everywhere [12]. This implies that the distributional derivative (we will call it weak derivative as shorthand) of f is
$$\begin{aligned} f'=g+\mu _{s}, \end{aligned}$$
where \(\mu _{s}\) is a singular measure with respect to the Lebesgue measure. In this way, we will define \(D\,f:=g\) (we will restate this definition in a more general way further on).
We now consider the real abelian group \({\mathbb {R}}[D,\varphi ^* ]\) generated by \(\{D^k,\varphi ^*D^k\}_{k=0}^\infty \), where the powers \(D^k\) are taken in the sense of composition (by convention, \(D^0={{\mathrm{Id}}}\), and a constant a, regarded as an operator, acts on a function f by returning the product \(a\,f\)). If we take the usual composition for operators in \({\mathbb {R}}[D,\varphi ^* ]\), we observe that \(D\varphi ^*=-\varphi ^*D\), so composition is closed in \({\mathbb {R}}[D,\varphi ^* ]\), which makes it a non-commutative algebra. In general, \(D^k\varphi ^*=(-1)^k\varphi ^*D^k\) for \(k=0,1,\ldots \)
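These commutation rules are easy to verify symbolically. The following SymPy sketch (the test function is an arbitrary choice of ours) checks \(D^k\varphi ^*=(-1)^k\varphi ^*D^k\) for small k:

```python
# SymPy check of the commutation rule D^k φ* = (-1)^k φ* D^k
import sympy as sp

t = sp.symbols('t')
f = sp.exp(t) * sp.sin(2 * t)       # arbitrary smooth test function

def refl(expr):                     # φ*: pullback by the reflection t ↦ -t
    return expr.subs(t, -t)

for k in range(5):
    lhs = sp.diff(refl(f), t, k)                  # D^k φ* f
    rhs = (-1) ** k * refl(sp.diff(f, t, k))      # (-1)^k φ* D^k f
    assert sp.simplify(lhs - rhs) == 0
```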
The elements of \({\mathbb {R}}[D,\varphi ^* ]\) are of the form
$$\begin{aligned} L=\sum _{i}a_i\varphi ^*D^i+\sum _{j}b_jD^j. \end{aligned}$$
(2.1)
For convenience, we consider the sums over i and j with \(i,j\in \{0,1,\ldots \}\), taking into account that the coefficients \(a_i,b_j\) are zero for sufficiently large indices.
Despite the non-commutativity of composition in \({\mathbb {R}}[D,\varphi ^* ]\), there are interesting relations in this algebra.
First notice that \({\mathbb {R}}[D,\varphi ^*]\) is not a unique factorization domain. Take a polynomial \(P=D^2+\beta D+\alpha \), where \(\alpha ,\beta \in {\mathbb {R}}\), and define the following operators.
If \(\beta ^2-4\alpha \ge 0\),
If \(\beta =0\) and \(\alpha \le 0\),
If \(\beta =0\) and \(\alpha \ge 0\),
If \(\beta =0\) and \(\alpha \le 1\),
If \(\beta =0\), \(\alpha \ne 0\) and \(\alpha \le 1\),
Then,
and, when \(\beta =0\),
Observe that only \(L_1\) and \(R_1\) commute in the case of \(\beta \ne 0\).
This raises the question of whether we can decompose every differential polynomial P into the composition of two ‘order one’ elements of \({\mathbb {R}}[D,\varphi ^* ]\), but this is not the case in general. Just take \(Q=D^2+D+1\) (observe that Q is not in any of the aforementioned cases) and consider a decomposition of the kind
where a, b, c, d, e, g, h, and j are real coefficients to be determined. The resulting system
has no solution for real coefficients.
Let \({\mathbb {R}}[D]\) be the ring of polynomials with real coefficients on the variable D. The following result states a very useful property of the algebra \({\mathbb {R}}[D,\varphi ^* ]\).
Theorem 2.1
Take L as defined in (2.1) and take
$$\begin{aligned} R=\sum _{i}a_i\varphi ^*D^i-\sum _{j}(-1)^jb_jD^j. \end{aligned}$$
(2.2)
Then \(\textit{RL}=\textit{LR}\in {\mathbb {R}}[D]\).
Proof
Hence, \(\textit{RL}\in {\mathbb {R}}[D]\).
Observe that, if we take R in the place of L in the hypothesis of the Theorem, we obtain L in the place of R and so, by expression (2.3), \(\textit{LR}\in {\mathbb {R}}[D]\). \(\square \)
Remark 2.1
Some interesting remarks on the coefficients of the operator \(S=\textit{RL}\) defined in Theorem 2.1 can be made.
If we have
then
A closer inspection reveals that
This has some important consequences. If \(L=\sum _{i=0}^n a_i\varphi ^*D^i+\sum _{j=0}^n b_jD^j\) with \(a_n\ne 0\) or \(b_n\ne 0\), we have that \(c_{2n}=(-1)^n(a_n^2-b_n^2)\),Footnote 2 so if \(a_n=\pm b_n\) then \(c_{2n}=0\). This shows that, by composing two elements of \({\mathbb {R}}[D,\varphi ^* ]\), we can obtain another element with simpler terms, in the sense of derivatives of lower order. We illustrate this with two examples.
Take \(n\ge 3\), \(L=\varphi ^*D^n+D^n+D-{{\mathrm{Id}}}\) and \(R=\varphi ^*D^n-(-1 )^nD^n+D+{{\mathrm{Id}}}\). Then \(RL=2D^{\alpha (n)}+D^2-{\text {Id}}\) where \(\alpha (n)=n\) if n is even and \(\alpha (n)=n+1\) if n is odd.
If we take \(n\ge 0 \), \(L=\varphi ^*D^{2n+1}+D^{2n+1} + {{\mathrm{Id}}}\) and \(R=\varphi ^*D^{2n+1}+D^{2n+1}-{{\mathrm{Id}}}\), then \(\textit{RL}= -{{\mathrm{Id}}}\).
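This last composition can be checked symbolically. The following SymPy sketch takes the case \(n=1\), i.e. \(L=\varphi ^*D^3+D^3+{{\mathrm{Id}}}\) and \(R=\varphi ^*D^3+D^3-{{\mathrm{Id}}}\) (our reading of the operators above), and verifies \(\textit{RL}f=-f\) on a test function:

```python
# SymPy check that RL = -Id for L = φ*D³ + D³ + Id, R = φ*D³ + D³ - Id
import sympy as sp

t = sp.symbols('t')

def refl(expr):                     # φ*: pullback by t ↦ -t
    return expr.subs(t, -t)

def L_op(f):
    d3 = sp.diff(f, t, 3)
    return refl(d3) + d3 + f        # φ*D³f + D³f + f

def R_op(f):
    d3 = sp.diff(f, t, 3)
    return refl(d3) + d3 - f        # φ*D³f + D³f - f

f = sp.exp(t) * sp.cos(2 * t)       # arbitrary smooth test function
assert sp.simplify(R_op(L_op(f)) + f) == 0   # (RL)f = -f
```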
Example 2.1
Consider the equation
$$\begin{aligned} x'''(-t)+x'''(t)+x(t)=\sin t. \end{aligned}$$
Applying the operator \(\varphi ^*D^3+D^3-{{\mathrm{Id}}}\) to both sides of the equation, we obtain \(x(t)=\sin t+2\cos t\). This is the unique solution of the equation, even though no extra conditions were imposed.
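A quick symbolic check, assuming the equation in question is \(x'''(-t)+x'''(t)+x(t)=\sin t\) (an assumption on our part, consistent with the operator applied above):

```python
# Check that x(t) = sin t + 2 cos t satisfies x'''(-t) + x'''(t) + x(t) = sin t
# (the equation is our assumption, consistent with the operators of this section)
import sympy as sp

t = sp.symbols('t')
x = sp.sin(t) + 2 * sp.cos(t)
x3 = sp.diff(x, t, 3)
lhs = x3.subs(t, -t) + x3 + x       # x'''(-t) + x'''(t) + x(t)
assert sp.simplify(lhs - sp.sin(t)) == 0
```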
3 Boundary Value Problems
In this section, we obtain the Green’s function of boundary value problems with reflection and constant coefficients. We point out that the approach used in this section is also valid for initial value problems, among other types of conditions.
Let \(I=[a,b]\subset {\mathbb {R}}\) be an interval and \(f\in {{\mathrm{L^1}}}(I)\). Consider now the following problem with the usual derivative.
The following theorem, from [7], states the cases in which we can find a unique solution of problem (3.1).Footnote 3
Theorem 3.1
Assume the following homogeneous problem has a unique solution
Then there exists a unique function, called Green’s function, such that
-
(G1)
G is defined on the square \(I^2\).
-
(G2)
The partial derivatives \(\frac{\partial ^kG}{\partial t^k}\) exist and are continuous on \(I^2\) for \(k=0,\ldots ,n-2\).
-
(G3)
\(\frac{\partial ^{n-1}G}{\partial t^{n-1}}\) and \(\frac{\partial ^nG}{\partial t^n}\) exist and are continuous on \(I^2\backslash \{(t,t)\ :\ t\in I\}\).
-
(G4)
The lateral limits \(\frac{\partial ^{n-1}G}{\partial t^{n-1}}(t,t^+)\) and \(\frac{\partial ^{n-1}G}{\partial t^{n-1}}(t,t^-)\) exist for every \(t\in (a,b)\) and
$$\begin{aligned} \frac{\partial ^{n-1}G}{\partial t^{n-1}}(t,t^-)-\frac{\partial ^{n-1}G}{\partial t^{n-1}}(t,t^+)=\frac{1}{a_n}. \end{aligned}$$ -
(G5)
For each \(s\in (a,b)\) the function \(G(\cdot ,s)\) is a solution of the differential equation \(Su=0\) on \(I\backslash \{s\}\).
-
(G6)
For each \(s\in (a,b)\) the function \(G(\cdot ,s)\) satisfies the boundary conditions \(B_iu=0\ i=1,\ldots ,n\).
Furthermore, the function \(u(t):=\int _a^bG(t,s)f(s)\text {d}s\) is the unique solution of the problem (3.1).
Using the properties (G1)–(G6) and Theorem 2.1 one can prove Theorem 3.2. The proof of this result will be a direct consequence of Theorem 4.1.
Given an operator \({\mathscr {L}}\) for functions of one variable, define the operator \({\mathscr {L}}_\vdash \) as \({\mathscr {L}}_\vdash G(t,s):={\mathscr {L}}(G(\cdot ,s))|_{t}\) for every s and any suitable function G.
Theorem 3.2
Let \(I=[-T,T]\). Consider the problem
where L is defined as in (2.1), \(h\in L^1(I)\) and
Then, there exists \(R\in {\mathbb {R}}[D,\varphi ^* ]\) (as in (2.2)) such that \(S:=\textit{RL}\in {\mathbb {R}}[D]\) and the unique solution of problem (3.2) is given by \(\int _{-T}^TR_\vdash G(t,s)h(s)\text {d}s\), where G is the Green’s function associated with the problem \(Su=0\), \(B_iRu=0\), \(B_iu=0\), \(i=1,\ldots ,n\), assuming it has a unique solution.
For the following example, let us explain some notations. Let \(k,p\in {\mathbb {N}}\). We denote by \(W^{k,p}(I)\) the Sobolev Space defined by
Given a constant \(a\in {\mathbb {R}}\) we can consider the pullback by this constant as a functional \(a^*:{\mathscr {C}}(I)\rightarrow {\mathbb {R}}\) such that \(a^*f=f(a)\) in the same way we defined it for functions.
Example 3.1
Consider the following problem.
where \(h\in W^{2,1}(I)\). The operator we are considering is \(L=D^2+a\,\varphi ^*+b\). If we take \(R:=D^2-a\,\varphi ^*+b\), we have that \(\textit{RL}=D^4+2b\,D^2+b^2-a^2\).
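The reduction \(\textit{RL}=D^4+2b\,D^2+b^2-a^2\) can be verified symbolically; the following SymPy sketch applies L and then R to a test function, with symbolic a and b:

```python
# SymPy check of RL = D⁴ + 2bD² + (b² - a²) for L = D² + aφ* + b, R = D² - aφ* + b
import sympy as sp

t, a, b = sp.symbols('t a b')

def refl(expr):                     # φ*: pullback by t ↦ -t
    return expr.subs(t, -t)

def L_op(f):
    return sp.diff(f, t, 2) + a * refl(f) + b * f

def R_op(f):
    return sp.diff(f, t, 2) - a * refl(f) + b * f

f = sp.exp(t) * sp.sin(2 * t)       # arbitrary smooth test function
target = sp.diff(f, t, 4) + 2 * b * sp.diff(f, t, 2) + (b**2 - a**2) * f
assert sp.simplify(R_op(L_op(f)) - target) == 0
```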
The boundary conditions are \(((T^*)-(-T)^*)u=0\) and \(((T^*)-(-T)^*)Du=0\). Taking this into account, we add the conditions
That is, our new reduced problem is
where \(f(t)=R\,h(t)=h''(t)-a\,h(-t)+b\,h(t)\).
Observe that this problem is equivalent to the system of equations (a chain of order two problems)
Thus, it is clear that
where, \(G_1\) and \(G_2\) are the Green’s functions related to the previous second-order problems. Explicitly, in the case \(b>|a|\) (the study for other cases would be analogous),
and
Hence, the Green’s function G for problem (3.4) is given by
Therefore, using Theorem 3.2, the Green’s function for problem (3.3) is
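The chaining of two second-order Green’s functions used in this example can be illustrated numerically. The sketch below uses a simpler model problem of our choosing (not the operators of this example): composing the Dirichlet Green’s function of \(u''=f\) on \([-1,1]\) with itself yields the Green’s function of \(u''''=f\) with \(u(\pm 1)=u''(\pm 1)=0\):

```python
# Numerical sketch: composing the Green's function of u'' = f, u(±1) = 0 with
# itself yields the Green's function of u'''' = f, u(±1) = u''(±1) = 0.
import numpy as np

n = 801
t = np.linspace(-1.0, 1.0, n)
h = t[1] - t[0]
w = np.full(n, h); w[0] *= 0.5; w[-1] *= 0.5     # trapezoidal quadrature weights

tmin = np.minimum.outer(t, t)
tmax = np.maximum.outer(t, t)
G = (tmin + 1.0) * (tmax - 1.0) / 2.0            # Green's function of u'' = f, u(±1) = 0

u_exact = np.cos(np.pi * t / 2)                  # satisfies u(±1) = 0 and u''(±1) = 0
f = (np.pi / 2) ** 4 * u_exact                   # f = u''''

v = G @ (f * w)                                  # solves v'' = f, v(±1) = 0
u = G @ (v * w)                                  # solves u'' = v, hence u'''' = f
assert np.max(np.abs(u - u_exact)) < 1e-3
```

The composed kernel \(J(t,s)=\int G(t,r)G(r,s)\text {d}r\) is never formed explicitly here; applying G twice is equivalent and cheaper.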
Remark 3.1
We can reduce the regularity assumption on h to \(h\in {{\mathrm{L^1}}}(I)\) just by taking into account the density of \(W^{2,1}(I)\) in \(L^1(I)\).
Remark 3.2
Example 2.1 illustrates the importance of the existence and uniqueness of solution of the problem \(Su=0,\ B_iRu=0,\ B_iu=0\) in the hypotheses of Theorem 3.2. In general, when we compose two linear ODEs of orders m and n, with m and n conditions respectively, we obtain a new problem of order \(m+n\) with \(m+n\) conditions. As we have seen, this is not the case in the reduction of Theorem 3.2. When the order of the reduced problem is less than 2n, anything is possible: we may have an infinite number of solutions, no solution, or a unique solution even when the problem is non-homogeneous. The following example illustrates this last case.
Example 3.2
Consider the problem
where \(h\in W^{4,1}([-1,1])\).
For this case, \(Ru(t):=-u^{(4)}(t)+u^{(4)}(-t)+u''(-t)\) and the reduced equation is \(\textit{RL}u=2u^{(6)}+u^{(4)}=Rh\), which has order \(6<2\times 4 = 8\), so there is a reduction of the order. Now, we have to be careful with the new reduced boundary conditions.
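The order reduction can be checked symbolically. In the sketch below we take \(Lu(t)=u^{(4)}(-t)+u^{(4)}(t)+u''(-t)\) (an assumption of ours, consistent with the operator R given above) and verify \(\textit{RL}u=2u^{(6)}+u^{(4)}\):

```python
# SymPy check of the reduction RLu = 2u⁽⁶⁾ + u⁽⁴⁾ (the form of L is our assumption)
import sympy as sp

t = sp.symbols('t')

def refl(expr):                                  # φ*: pullback by t ↦ -t
    return expr.subs(t, -t)

def L_op(u):                                     # u⁽⁴⁾(-t) + u⁽⁴⁾(t) + u''(-t), assumed
    d4, d2 = sp.diff(u, t, 4), sp.diff(u, t, 2)
    return refl(d4) + d4 + refl(d2)

def R_op(u):                                     # -u⁽⁴⁾(t) + u⁽⁴⁾(-t) + u''(-t), as given
    d4, d2 = sp.diff(u, t, 4), sp.diff(u, t, 2)
    return -d4 + refl(d4) + refl(d2)

u = sp.exp(t) * sp.sin(t)                        # arbitrary smooth test function
target = 2 * sp.diff(u, t, 6) + sp.diff(u, t, 4)
assert sp.simplify(R_op(L_op(u)) - target) == 0
```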
The last two conditions are obtained by applying the original boundary conditions to the original equation. Equation (3.6) is a system of linear equations which can be solved for u and its derivatives as
Consider now the reduced problem
and the change of variables \(v(t):=u^{(4)}(t)\). Now, we look for the solution of
which is given by
where
Now, it is left to solve the problem
The solution is given by
where
Hence, taking \(J(t,s)=\int _{-1}^1H(t,r)G(r,s)\text {d}r\),
Therefore,
4 The Reduced Problem
The usefulness of a theorem like Theorem 3.2 is clear, for it allows obtaining the Green’s function of any problem of differential equations with constant coefficients and involutions, generalizing the works [3–5]. The proof of this theorem relies heavily on the properties \((G1){-}(G6)\), so our main goal now is to consider these properties abstractly in order to apply them in a more general context with different kinds of operators.
Let X be a vector subspace of \({{\mathrm{L_{loc}^1}}}({\mathbb {R}})\), and \(({\mathbb {R}},\tau )\) the real line with its usual topology. Define \(X_U:=\{f|_U\ :\ f\in X\}\) for every \(U\in \tau \) (observe that \(X_U\) is a vector space as well). Assume that X satisfies the following property.
(P) For every partition of \({\mathbb {R}}\), \(\{S_j\}_{j\in J}\cup \{N\}\), consisting of measurable sets, where N has no accumulation points and the \(S_j\) are open, if \(f_j\in X_{S_j}\) for every \(j\in J\), then there exists \(f\in X\) such that \(f|_{S_j}=f_j\) for every \(j\in J\).
Example 4.1
The set of locally absolutely continuous functions \({{\mathrm{AC_{loc}}}}({\mathbb {R}})\subset {{\mathrm{L_{loc}^1}}}({\mathbb {R}})\) does not satisfy (P). To see this, just take the following partition of \({\mathbb {R}}\): \(S_1=(-\infty ,0)\), \(S_2=(0,+\infty )\), \(N=\{0\}\), and consider \(f_1\equiv 0\), \(f_2\equiv 1\). Then \(f_j\in {{\mathrm{AC}}}({\mathbb {R}})_{S_j}\) for \(j=1,2\), but any function f such that \(f|_{S_j}=f_j\), \(j=1,2\), has a discontinuity at 0, so it cannot be absolutely continuous. That is, (P) is not satisfied.
Example 4.2
\(X={{\mathrm{BV_{loc}}}}({\mathbb {R}})\) satisfies (P). Take a partition of \({\mathbb {R}}\), \(\{S_j\}_{j\in J}\cup \{N\}\), consisting of measurable sets where N has no accumulation points and the \(S_j\) are open, and a family of functions \((f_j)_{j\in J}\) such that \(f_j\in X_{S_j}\) for every \(j\in J\). We can further assume, without loss of generality, that the \(S_j\) are connected. Define a function f such that \(f|_{S_j}:=f_j\) and \(f|_N=0\). Take a compact set \(K\subset {\mathbb {R}}\). Then, by the Bolzano–Weierstrass and Heine–Borel Theorems, \(K\cap N\) is finite, since N has no accumulation points. Therefore, \(J_K:=\{j\in J : S_j\cap K\ne \emptyset \}\) is finite as well. To see this, denote by \(\partial S\) the boundary of a set S and observe that \(N\cup K=\cup _{j\in J}\partial (S_j\cap K)\) and that the sets \(\partial (S_j\cap K)\cap \partial (S_k\cap K)\) are finite for every \(j,k\in J\).
Thus, the variation of f in K is \(V_K(f)\le \sum _{j\in J_K}V_{S_j}(f)<\infty \) since f is of bounded variation on each \(S_j\). Hence, X satisfies (P).
Throughout this section, we will consider a function space X satisfying (P) and two families of linear operators \(L=\{L_U\}_{U\in \tau }\) and \(R=\{R_U\}_{U\in \tau }\) that satisfy
- Locality: :
-
\(L_U\in {\mathscr {L}}(X_U,{{\mathrm{L_{loc}^1}}}(U))\), \(R_U\in {\mathscr {L}}({{\mathrm{im}}}(L_U),{{\mathrm{L_{loc}^1}}}(U))\),
- Restriction: :
-
\(L_V(f|_V)=L_U(f)|_V\), \(R_V(f|_V)=R_U(f)|_V\) for every \(U,V\in \tau \) such that \(V\subset U\).Footnote 4
The following definition allows us to give an example of a space that satisfies the properties of locality and restriction.
Definition 1
Let \(f:{\mathbb {R}}\rightarrow {\mathbb {R}}\) and assume there exists a partition \(\{S_j\}_{j\in J}\cup \{N\}\) of \({\mathbb {R}}\), consisting of measurable sets where N has zero Lebesgue measure, such that the weak derivative \(g_j\) of \(f|_{S_j}\) exists for every \(j\in J\). Then a function g such that \(g|_{S_j}=g_j\) for every \(j\in J\) is called the very weak derivative (vw-derivative) of f.
Remark 4.1
The vw-derivative is uniquely defined save for a zero measure set and is equivalent to the weak derivative for absolutely continuous functions.
Nevertheless, the vw-derivative is different from the distributional derivative. For instance, the derivative of the Heaviside function in the distributional sense is the Dirac delta at 0, whereas its vw-derivative is zero. What is more, the kernel of the vw-derivative is the set of functions which are constant on a family of open sets \(\{S_j\}_{j\in J}\) such that \({\mathbb {R}}\backslash (\cup _{j\in J} S_j)\) has Lebesgue measure zero.
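The branchwise behavior of the vw-derivative can be illustrated computationally. The helper below (our own construction) differentiates each piece of a SymPy Piecewise expression, ignoring jumps, so the Heaviside function gets vw-derivative zero:

```python
# Branchwise differentiation of Piecewise expressions as a model of the vw-derivative
import sympy as sp

t = sp.symbols('t')

def vw_diff(pw):
    """Differentiate each branch, ignoring jumps between pieces (our own helper)."""
    return sp.Piecewise(*[(sp.diff(e, t), c) for e, c in pw.args])

heaviside = sp.Piecewise((0, t < 0), (1, True))
d = vw_diff(heaviside)
assert d.subs(t, -1) == 0 and d.subs(t, 2) == 0   # vw-derivative 0: no Dirac delta

abs_t = sp.Piecewise((-t, t < 0), (t, True))      # |t|
d = vw_diff(abs_t)
assert d.subs(t, -2) == -1 and d.subs(t, 3) == 1  # vw-derivative of |t| is the sign
```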
Example 4.3
Take \(X={{\mathrm{BV_{loc}}}}({\mathbb {R}})\) and \(L=D\) to be the very weak derivative. Then L satisfies the locality and restriction hypotheses.
Remark 4.2
The vw-derivative, as defined here, is the D operator defined in Sect. 2 for functions of bounded variation. In other words, the vw-derivative ignores the jumps and considers only those parts with enough regularity.
Remark 4.3
The locality property allows us to treat the maps L and R as if they were just linear operators in \({\mathscr {L}}(X,{{\mathrm{L_{loc}^1}}}({\mathbb {R}}))\) and \({\mathscr {L}}({{\mathrm{im}}}(X),{{\mathrm{L_{loc}^1}}}({\mathbb {R}}))\) respectively, although we must not forget their more complex structure.
Assume \(X_U\subset {{\mathrm{im}}}( L_U)\subset {{\mathrm{im}}}(R_U)\) for every \(U\in \tau \), let \(B_i\in {\mathscr {L}}({{\mathrm{im}}}( R_{\mathbb {R}}),{\mathbb {R}})\), \(i=1,\ldots ,m\), and let \(h\in {{\mathrm{im}}}(L_{\mathbb {R}})\). Consider now the following problem
Let
Z is a vector space.
Let \(f\in {{\mathrm{im}}}(L_{\mathbb {R}})\) and consider the problem
Let \(G\in Z\) and define the operator \(H_G\) such that \(H_G(h)|_t:=\int _{\mathbb {R}}G(t,s)h(s)\text {d}s\). We have now the following theorem relating problems (4.1) and (4.2).
Theorem 4.1
Assume L and R are the aforementioned operators with the locality and restriction properties and let \(h\in {{\mathrm{Dom}}}(R_{\mathbb {R}})\). Assume L commutes with R and that there exists \(G\in Z\) such that
- (I):
-
\((\textit{RL})_\vdash G=0\),
- (II):
-
\(B_{i\,\vdash } G=0,\quad i =1,\ldots ,m\),
- (III):
-
\(( B_iR)_\vdash G=0,\quad i =1,\ldots ,m\),
- (IV):
-
\(\textit{RLH}_Gh=H_{(\textit{RL})_\vdash G}h+h\),
- (V):
-
\(LH_{R_\vdash G}h=H_{L_\vdash R_\vdash G}h+h\).
- (VI):
-
\(B_iH_G= H_{B_{i\,\vdash } G},\quad i =1,\ldots ,m\),
- (VII):
-
\(B_iRH_G= B_iH_{R_\vdash G}=H_{( B_iR)_\vdash G},\quad i =1,\ldots ,m\).
Then, \(v:=H_G(h)\) is a solution of problem (4.2) and \(u:=H_{R_\vdash G}(h)\) is a solution of problem (4.1).
Proof
(I) and (IV) imply that
On the other hand, \((\textit{III})\) and \((\textit{VII})\) imply that, for every \(i=1,\ldots ,m\),
All the same, by \((\textit{II})\) and \((\textit{VI})\),
Therefore, v is a solution to problem (4.2).
Now, using (I) and (V) and the fact that \(\textit{LR}=\textit{RL}\), we have that
Taking into account \((\textit{III})\) and \((\textit{VII})\),
Hence, u is a solution of problem (4.1). \(\square \)
The following Corollary is proved in the same way as the previous Theorem.
Corollary 4.2
Assume \(G\in Z\) satisfies
-
(1)
\(L_\vdash G=0\),
-
(2)
\(B_{i\,\vdash } G=0,\quad i =1,\ldots ,m\),
-
(3)
\(LH_Gh=H_{L_\vdash G}h+h\),
-
(4)
\(B_iH_Gh=H_{ B_{i\,\vdash } G}h\).
Then \(u=H_Gh\) is a solution of problem (4.1).
Proof of Theorem 3.2
Originally, we would need to take \(h\in {{\mathrm{Dom}}}(R)\), but by a simple density argument (\(\mathscr {C}^\infty (I)\) is dense in \({{\mathrm{L^1}}}(I)\)) we can take \(h\in {{\mathrm{L^1}}}(I)\). If we prove that the hypotheses of Theorem 4.1 are satisfied, then the existence of solution will be established. First, Theorem 2.1 guarantees the commutativity of L and R. Now, Theorem 3.1 implies hypotheses \((I){-}(\textit{VII})\) of Theorem 4.1 in terms of the vw-derivative.
Indeed, (I) is straightforward from (G5). \((\textit{II})\) and \((\textit{III})\) are satisfied because \((G1)-(G6)\) hold and \(B_iu=B_iRu=0\). (G2) and (G4) imply \((\textit{IV})\) and (V). \((\textit{VI})\) and \((\textit{VII})\) hold because of (G2), (G5) and the fact that the boundary conditions commute with the integral.
On the other hand, the solution to problem (3.2) must be unique for, otherwise, the reduced problem \(Su=0\), \(B_iRu=0\), \(B_iu=0\), \(i=1,\ldots ,n\) would have several solutions, contradicting the hypotheses. \(\square \)
The following lemma, along the lines of [5], extends the application of Theorem 3.2 to the case of non-constant coefficients, with some restrictions, for problems similar to the one in Example 3.1.
Lemma 4.3
Consider the problem
where \(a\in W^{2,1}_{{\text {loc}}}({\mathbb {R}})\) is non-negative and even,
for some constant \(k\in {\mathbb {R}}\), \(k^2\ne 1\) and b is integrable.
Define \(A(t):= \int _0^t\sqrt{a(s)}{\text {d}}s\), consider
and assume it has a Green’s function G.
Then
is a solution of problem (4.3) where
and \(H(t,\cdot )h(\cdot )\) is assumed to be integrable in \([-T,T]\).
Proof
Let G be the Green’s function of the problem
Now, we show that H satisfies the equation, that is,
Therefore,
The boundary conditions are satisfied as well. \(\square \)
Remark 4.4
The same construction of Lemma 4.3 is valid for the case of the initial value problem. We illustrate this in the following example.
Example 4.4
Let \(a(t)=|t|^p\), \(k>1\). Taking b as in Lemma 4.3,
consider problems
and
Using an argument similar to the one in Example 3.1, and considering \(R=D^2-\varphi ^*+k\), we can reduce problem (4.5) to
which can be decomposed into
which have as Green’s functions, respectively,
where \(\chi _0^t(s)=1\) if \(s\in [0,t]\) and \(\chi _0^t(s)=-1\) if \(s\in [-t,0)\). Then, the Green’s function for problem (4.6) is
Observe that
Hence, considering
the Green’s function of problem (4.4) follows the expression
That is,
5 The Hilbert Transform and Other Algebras
In this section, we devote our attention to new algebras to which we can apply the previous results. To achieve this goal, we recall the definition and remarkable properties of the Hilbert transform (see [11]).
We define the Hilbert transform \(\mathsf {H}\) of a function f as
$$\begin{aligned} {\mathsf {H}}f(t):=\frac{1}{\pi }\lim _{\varepsilon \rightarrow 0^+}\int _{|t-s|>\varepsilon }\frac{f(s)}{t-s}\text {d}s=\frac{1}{\pi }\int _{-\infty }^{+\infty }\frac{f(s)}{t-s}\text {d}s, \end{aligned}$$
where the last integral is to be understood as the Cauchy principal value.
Among its properties, we would like to point out the following.
-
\({\mathsf {H}}:L^p({\mathbb {R}})\rightarrow L^p({\mathbb {R}})\) is a linear bounded operator for every \(p\in (1,+\infty )\) and
$$\begin{aligned} \Vert {\mathsf {H}}\Vert _p={\left\{ \begin{array}{ll} \tan \frac{\pi }{2 p}, &{} p\in (1,2], \\ \cot \frac{\pi }{2 p}, &{} p\in [2,+\infty ), \end{array}\right. } \end{aligned}$$in particular \(\Vert {\mathsf {H}}\Vert _2=1\).
-
\(\mathsf {H}\) is an anti-involution: \({\mathsf {H}}^2=-{{\mathrm{Id}}}\).
-
Let \(\sigma (t)=at+b\) for \(a,b\in {\mathbb {R}}\). Then \({\mathsf {H}}\sigma ^*={{\mathrm{sign}}}(a)\sigma ^*{\mathsf {H}}\) (in particular, \({\mathsf {H}}\varphi ^*=-\varphi ^*{\mathsf {H}}\)). Furthermore, if a linear bounded operator \(\mathsf {O}:L^p({\mathbb {R}})\rightarrow L^p({\mathbb {R}})\) satisfies this property, \(\mathsf {O}=\beta \mathsf H\), where \(\beta \in {\mathbb {R}}\).
-
\({\mathsf {H}}\) commutes with the derivative: \({\mathsf {H}}D=D{\mathsf {H}}\).
-
\({\mathsf {H}}(f*g)=f*{\mathsf {H}}g={\mathsf {H}}f*g\), where \(*\) denotes the convolution.
-
\({\mathsf {H}}\) is an isometry in \(L^2({\mathbb {R}})\): \(\left<{\mathsf {H}}f,{\mathsf {H}}g\right>=\left<f,g\right>\), where \(\left<\ ,\ \right>\) is the scalar product in \(L^2({\mathbb {R}})\). In particular \(\Vert {\mathsf {H}}f\Vert _2=\Vert f\Vert _2\).
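Several of these properties can be checked numerically through the Fourier multiplier \(-i\,{{\mathrm{sign}}}(\xi )\) of \(\mathsf {H}\). The sketch below (a discrete, periodic approximation, exact for pure tones, with the sign convention of [11]) verifies \({\mathsf {H}}\cos (at)=\sin (at)\) for \(a>0\) and \({\mathsf {H}}^2=-{{\mathrm{Id}}}\):

```python
# FFT-based Hilbert transform (multiplier -i·sign(ξ)); exact for pure tones on a
# periodic grid.  Model for the continuous transform, not a general implementation.
import numpy as np

n = 256
t = np.linspace(0, 2 * np.pi, n, endpoint=False)

def hilbert(x):
    xi = np.fft.fftfreq(len(x))                      # only the signs of ξ matter
    return np.fft.ifft(-1j * np.sign(xi) * np.fft.fft(x)).real

x = np.cos(3 * t)
assert np.allclose(hilbert(x), np.sin(3 * t))        # H cos(at) = sin(at), a > 0
assert np.allclose(hilbert(hilbert(x)), -x)          # H² = -Id (zero-mean signals)
```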
Consider now the same construction we did for \({\mathbb {R}}[D,\varphi ^*]\), replacing \(\varphi ^*\) with \({\mathsf {H}}\), and denote this algebra by \({\mathbb {R}}[D,{\mathsf {H}}]\). In this case, we are dealing with a commutative algebra. Actually, this algebra is isomorphic to the complex polynomials \({\mathbb {C}}[D]\): just consider the isomorphism \(\Xi :{\mathbb {R}}[D,{\mathsf {H}}]\rightarrow {\mathbb {C}}[D]\) determined by \(\Xi (D)=D\) and \(\Xi ({\mathsf {H}})=i\,{{\mathrm{Id}}}\).
Observe that \(\Xi |_{{\mathbb {R}}[D]}={{\mathrm{Id}}}|_{{\mathbb {R}}[D]}\).
We now state a result analogous to Theorem 2.1.
Theorem 5.1
Take
and define
Then \(\textit{LR}=\textit{RL}\in {\mathbb {R}}[D]\).
Remark 5.1
Theorem 5.1 is clear from the point of view of \({\mathbb {C}}[D]\). Since \(\Xi (R)=-\overline{\Xi (L)}\),
$$\begin{aligned} \Xi (\textit{RL})=\Xi (R)\,\Xi (L)=-\overline{\Xi (L)}\,\Xi (L)=-|\Xi (L)|^2. \end{aligned}$$
Therefore, since \(|\Xi (L)|^2\in {\mathbb {R}}[D]\), we have \(\textit{RL}\in {\mathbb {R}}[D]\).
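The computation in \({\mathbb {C}}[D]\) can be reproduced symbolically; the sample real polynomials A and B below, standing for the real and imaginary parts of \(\Xi (L)\), are choices of ours:

```python
# Remark 5.1 in C[D]: with Ξ(L) = A + iB and Ξ(R) = -conj(Ξ(L)), the product is real
import sympy as sp

z = sp.symbols('z')
A = z**2 + 2                    # hypothetical real part of Ξ(L)
B = 3 * z - 1                   # hypothetical imaginary part of Ξ(L)
p = A + sp.I * B                # Ξ(L)
r = -(A - sp.I * B)             # Ξ(R) = -conj(Ξ(L))
assert sp.expand(p * r + A**2 + B**2) == 0   # Ξ(R)Ξ(L) = -|Ξ(L)|², real coefficients
```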
Remark 5.2
Since \({\mathbb {R}}[D,{\mathsf {H}}]\) is isomorphic to \({\mathbb {C}}[D]\), the Fundamental Theorem of Algebra also applies to \({\mathbb {R}}[D,{\mathsf {H}}]\), which gives a clear classification of the decompositions of an element of \({\mathbb {R}}[D,{\mathsf {H}}]\), in contrast with those of \({\mathbb {R}}[D,\varphi ^*]\), which was shown in Sect. 2 not to be a unique factorization domain.
In the following example, we will use some properties of the Hilbert transform:
where the integral is considered as the principal value.
Example 5.1
Consider the problem
where \(a>0\). Composing the operator \(L=D+a{\mathsf {H}}\) with the operator \(R=D-a{\mathsf {H}}\), we obtain \(S=\textit{RL}=D^2+a^2\), the harmonic oscillator operator. The extra boundary condition obtained by applying R is \(u'(0)-a{\mathsf {H}}u(0)=0\). The general solution of the problem \(u''(t)+a^2u(t)=Rh(t)=2a\cos at\), \(u(0)=0\), is given by
where \(\alpha \) is a real constant. Hence,
If we impose the boundary conditions \(u'(0)-a{\mathsf {H}}u(0)=0\) then we get \(\alpha =0\). Hence, the unique solution of problem (5.1) is
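A quick check of the reduced problem: the general solution above, with \(\alpha =0\), corresponds (in our reading) to \(u(t)=t\sin at\), which indeed satisfies \(u''+a^2u=2a\cos at\) and \(u(0)=0\):

```python
# Check that u(t) = t·sin(at) solves u'' + a²u = 2a·cos(at) with u(0) = 0
# (our reading of the α = 0 solution of the reduced problem)
import sympy as sp

t, a = sp.symbols('t a', positive=True)
u = t * sp.sin(a * t)
assert sp.simplify(sp.diff(u, t, 2) + a**2 * u - 2 * a * sp.cos(a * t)) == 0
assert u.subs(t, 0) == 0
```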
Remark 5.3
It is easy to check that the kernel of \(D+a{\mathsf {H}}\) (\(a>0\)) is spanned by \(\sin (at)\) and \(\cos (at)\), and therefore the kernel of \(D-a{\mathsf {H}}\) is just 0. This defies, in the line of Remark 3.2, the usual relation between the order of the operator and the dimension of the kernel which holds for ODEs, namely, that the operator of a linear ODE of order n has a kernel of dimension n. In this case, we have the order one operator \(D+a{\mathsf {H}}\) with a dimension two kernel and the injective order one operator \(D-a{\mathsf {H}}\).
Now, we consider operators with reflection and Hilbert transforms, and denote the corresponding algebra by \({\mathbb {R}}[D,{\mathsf {H}},\varphi ^*]\). We can again state a reduction theorem.
Theorem 5.2
Take
and define
Then \(\textit{LR}=\textit{RL}\in {\mathbb {R}}[D]\).
5.1 Hyperbolic Numbers as Operators
Finally, we use the same idea behind the isomorphism \(\Xi \) to construct an operator algebra isomorphic to the algebra of polynomials on the hyperbolic numbers.
The hyperbolic numbersFootnote 5 are defined, in a similar way to the complex numbers, as follows:
$$\begin{aligned} {\mathbb {D}}:=\{x+jy\ :\ x,y\in {\mathbb {R}}\},\quad \text {where } j\notin {\mathbb {R}} \text { and } j^2=1. \end{aligned}$$
The arithmetic in \({\mathbb {D}}\) is that obtained by assuming the commutative, associative, and distributive properties for the sum and product. In a parallel fashion to the complex numbers, if \(w\in {\mathbb {D}}\), with \(w=x+jy\), we can define
$$\begin{aligned} \overline{w}:=x-jy \end{aligned}$$
and, since \(w\overline{w}=x^2-y^2\in {\mathbb {R}}\), we set
$$\begin{aligned} |w|:=\sqrt{|w\overline{w}|}=\sqrt{|x^2-y^2|}, \end{aligned}$$
which is called the Minkowski norm. It is clear that \(|w_1w_2|=|w_1||w_2|\) for every \(w_1,w_2\in {\mathbb {D}}\) and, if \(|w|\ne 0\), then \(w^{-1}=\overline{w}/|w|^2\). If we add the norm
we have that \(({\mathbb {D}}, \Vert \cdot \Vert )\) is a Banach algebra, so the exponential and the hyperbolic trigonometric functions are well defined. Unlike \(\mathbb C\), \({\mathbb {D}}\) is not a division algebra (not every non-zero element has an inverse), but we can derive calculus (differentiation, integration, holomorphic functions...) for \({\mathbb {D}}\) as well [1].
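The arithmetic just described is simple to model. The following minimal Python class (our own sketch; the Minkowski norm is taken as \(\sqrt{|x^2-y^2|}\)) checks the multiplicativity \(|w_1w_2|=|w_1||w_2|\) and the identity \(w\overline{w}=x^2-y^2\):

```python
# Minimal model of the hyperbolic numbers x + jy with j² = 1
import math

class Hyperbolic:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __mul__(self, other):            # (x1 + j y1)(x2 + j y2), using j² = 1
        return Hyperbolic(self.x * other.x + self.y * other.y,
                          self.x * other.y + self.y * other.x)

    def conj(self):                      # w̄ = x - j y
        return Hyperbolic(self.x, -self.y)

    def minkowski(self):                 # |w| = sqrt(|x² - y²|)
        return math.sqrt(abs(self.x**2 - self.y**2))

w1, w2 = Hyperbolic(3.0, 1.0), Hyperbolic(2.0, -5.0)
assert math.isclose((w1 * w2).minkowski(), w1.minkowski() * w2.minkowski())
prod = w1 * w1.conj()
assert prod.y == 0 and prod.x == w1.x**2 - w1.y**2   # w·w̄ = x² - y²
```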
In this setting, we want to derive an operator J, defined on a suitable space of functions, which satisfies the same algebraic properties as the hyperbolic imaginary unit j. In other words, we want the map
to be an algebra isomorphism. This implies:
-
J is a linear operator,
-
\(J\not \in {\mathbb {R}}[D]\).
-
\(J^2={{\mathrm{Id}}}\), that is, J is an involution,
-
\(JD=DJ\).
There is a simple characterization of linear involutions on a vector space: every linear involution J is of the form
where P is a projection operator, that is, \(P^2=P\). It is clear that \(\pm (2P-{{\mathrm{Id}}})\) is, indeed, a linear operator and an involution. On the other hand, it is simple to check that, if J is a linear involution, then \(P:=(\pm J+{{\mathrm{Id}}})/2\) is a projection, so \(J=\pm (2P-{{\mathrm{Id}}})\).
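This characterization can be checked on matrices: any projection \(P\) yields an involution \(J=2P-{{\mathrm{Id}}}\). A small NumPy sketch with a random orthogonal projection:

```python
# Any projection P (P² = P) gives an involution J = 2P - Id
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2))
P = A @ np.linalg.pinv(A)                # orthogonal projection onto col(A)
assert np.allclose(P @ P, P)             # P² = P
J = 2 * P - np.eye(5)
assert np.allclose(J @ J, np.eye(5))     # J² = Id
```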
Hence, it is sufficient to look for a projection P commuting with the derivative.
Example 5.2
Consider the space \(W={{\mathrm{L^2}}}([-\pi ,\pi ])\) and define
that is, P takes only the sum over the even coefficients of the Fourier series of f. Clearly, \(PD=DP\), and \(J:=2P-{{\mathrm{Id}}}\) satisfies the aforementioned properties.
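A discrete version of this example can be tested with the FFT: on a periodic grid, both P (keeping even-frequency modes) and the spectral derivative D are Fourier multipliers, hence they commute. The discretization below is our own and only illustrates the algebra:

```python
# Discrete sketch: P keeps even Fourier modes, D is the spectral derivative; PD = DP
import numpy as np

n = 128
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
k = np.fft.fftfreq(n, d=1.0 / n)             # integer frequencies

def P(x):                                    # keep even-frequency coefficients
    X = np.fft.fft(x)
    X[np.abs(k) % 2 == 1] = 0
    return np.fft.ifft(X).real

def D(x):                                    # spectral derivative
    return np.fft.ifft(1j * k * np.fft.fft(x)).real

x = np.sin(t) + np.cos(2 * t) + np.sin(5 * t)
assert np.allclose(P(P(x)), P(x))            # P² = P
assert np.allclose(P(D(x)), D(P(x)))         # PD = DP
J = lambda y: 2 * P(y) - y
assert np.allclose(J(J(x)), x)               # J² = Id
```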
The algebra \({\mathbb {R}}[D,J]\), being isomorphic to \({\mathbb {D}}[D]\), also satisfies very good algebraic properties (see, for instance, [14]). In order to obtain a theorem analogous to Theorem 2.1 for the algebra \({\mathbb {R}}[D,J]\), it is enough to take, as in the case of \({\mathbb {R}}[D,{\mathsf {H}}]\), \(R=\Theta ^{-1}(\overline{\Theta (L)})\).
Notes
Since we will be working with \({\mathbb {R}}\) as a domain throughout this article, it will be in our interest to take the local versions of the classical function spaces. By local version we mean that, if we restrict the function to a compact set, the restriction belongs to the classical space defined with that compact set as domain for its functions.
This is so because if \(i\in \{0,\ldots ,n-1\}\), then \(2n-i\in \{n+1,\ldots ,2n\}\), and \(a_i\) (respectively, \(b_i\)) is non-zero only for \(i\in \{0,\ldots ,n\}\).
In [7], this result is actually stated for non-constant coefficients, but the case of constant coefficients is enough for our purposes.
The definitions here presented of L and R are deeply related to Sheaf Theory. Since the authors want to make this work as self-contained as possible, we will not deepen into that fact.
References
Antonuccio, F.: Semi-Complex Analysis and Mathematical Physics (2008). http://arxiv.org/pdf/gr-qc/9311032
Bronstein, M., Petkovšek, M.: On Ore rings, linear operators and factorisation 1, 27–44 (1994)
Cabada, A., Tojo, F.A.F.: Comparison results for first order linear operators with reflection and periodic boundary value conditions. Nonlinear Analysis: Theory, Methods and Applications 78, 32–46 (2013)
Cabada, A., Tojo, F.A.F.: Solutions of the first order linear equation with reflection and general linear conditions. Glob. J. Math. Sci. 2(1), 1–8 (2013)
Cabada, A., Tojo, F.A.F.: Existence results for a linear equation with reflection, non-constant coefficient and periodic boundary conditions. J. Math. Anal. Appl. 412(1), 529–546 (2013)
Cabada, A., Tojo, F.A.F.: Solutions and Greens function of the first order linear equation with reflection and initial conditions. Bound. Value Probl. 2014, 99 (2014)
Cabada, A., Cid, J.A., Márquez-Villamarín, B.: Computation of Green’s functions for boundary value problems with Mathematica. Appl. Math. Comput. 219, 1919–1936 (2012)
Cabada, A., Infante, G., Tojo, F.A.F.: Nontrivial solutions of perturbed Hammerstein integral equations with reflections. Bound. Value Probl. 2013, 86 (2013)
Cluzeau, T., Quadrat, A.: Factoring and decomposing a class of linear functional systems. Linear Algebr. Appl. 428, 324–381 (2008)
Gamboa, J., Plyushchay, M., Zanelli, J.: Three aspects of bosonized supersymmetry and linear differential field equation with reflection. Nucl. Phys. B 543(1–2), 447–465 (1999)
King, F.W.: Hilbert Transforms, vol. 1. Encyclopedia of Mathematics and its Applications 124. Cambridge University Press, Cambridge (2009)
Kolmogorov, A.N., Fomin, S.V.: Elements of the Theory of Functions and Functional Analysis, vol. 2: Measure. The Lebesgue Integral. Hilbert Space. Translated from the first (1960) Russian ed. by H. Kamel and H. Komm. Graylock Press, Albany (1961)
Korporal, A., Regensburger, G.: Composing and factoring generalized Green’s operators and ordinary boundary problems. In: Barkatou, M., Cluzeau, T., Regensburger, G., Rosenkranz, M. (eds.) AADIOS 2012. Lecture Notes in Computer Science 8372, pp 116–134. Springer, Berlin (2014)
Poodiack, R., LeClair, K.: Fundamental theorems of algebra for the perplexes. Coll. Math. J. 40(5), 322–335 (2009)
Post, S., Vinet, L., Zhedanov, A.: Supersymmetric quantum mechanics with reflections. J. Phys. A 44(43), 435301 (2011)
Regensburger, G., Rosenkranz, M.: Solving and factoring boundary problems for linear ordinary differential equations in differential algebras. J. Symb. Comput. 43, 515–544 (2008)
Regensburger, G., Rosenkranz, M.: An algebraic foundation for factoring linear boundary problems. Ann. Mat. Pura Appl. 4(188), 123–151 (2009)
Rosenkranz, M., Regensburger, G., Tec, L., Buchberger, B.: Symbolic analysis for boundary problems: from rewriting to parametrized Gröbner bases. In: Langer, U., Paule, P. (eds.) Numerical and Symbolic Scientific Computing: Progress and Prospects, pp. 273–331. Springer, Wien (2012)
Roychoudhury, R., Roy, B., Dube, P.P.: Non-Hermitian oscillator and R-deformed Heisenberg algebra. J. Math. Phys. 54(1), 012104 (2013)
Shah, S.M., Wiener, J.: Reducible functional-differential equations. Int. J. Math. Math. Sci. 8, 1–27 (1985)
Tojo, F.A.F.: A hyperbolic analog of the Phasor Addition Formula (2014). http://arxiv.org/abs/1411.5498
Wiener, J.: Generalized Solutions of Functional-Differential Equations. World Scientific, River Edge (1993)
Acknowledgments
Alberto Cabada’s research was partially supported by FEDER and Ministerio de Educación y Ciencia, Spain, project MTM2010-15314. F. Adrián F. Tojo’s research was supported by an FPU scholarship, Ministerio de Educación, Cultura y Deporte, Spain.
Communicated by Ali Hasan Mohammed.
Cabada, A., Tojo, F.A.F. Green’s Functions for Reducible Functional Differential Equations. Bull. Malays. Math. Sci. Soc. 40, 1071–1092 (2017). https://doi.org/10.1007/s40840-016-0355-x