1 Introduction

Some special kinds of functional differential equations, called reducible differential equations, can be solved by performing operations on them which lead to a related problem with an ODE or a system of ODEs (see, for instance, [20, 22]).

To be more specific, if \({\mathbb {R}}[D]\) is the ring of polynomials in the usual differential operator D and \({\mathscr {A}}\) is any operator algebra containing \({\mathbb {R}}[D]\), then an equation \(L x=0\), where \(L\in {\mathscr {A}}\), is said to be a reducible differential equation if there exists \(R\in {\mathscr {A}}\) such that \(\textit{RL}\in {\mathbb {R}}[D]\). A similar definition could be given for non-constant or complex coefficients.

This ODE problem can then be solved and, among the solutions obtained for it, some may be solutions of the original problem as well. This approach has recently been extended to the obtaining of Green's functions for some of those problems [3–6].

It is important to point out that the transformations necessary to reduce the problem to an ordinary one are of a purely algebraic nature. In this sense, the approach is similar to algebraic analysis, which, through the study of Ore algebras and modules, obtains important information about some functional problems, including explicit solutions [2, 9]. Nevertheless, the algebraic structures we deal with here are somewhat different; e.g., they are not in general Ore algebras. We refer the reader to [13, 16–18] for an algebraic approach to the abstract theory of boundary value problems and its applications to symbolic computation.

Among the reducible functional differential equations, those with reflection have gathered great interest, partly due to their applications to supersymmetric quantum mechanics [10, 15, 19] or to other areas of analysis such as topological methods [8].

In this work, we put special emphasis on two operators appearing in the equations: the reflection operator and the Hilbert transform. Both have exceptional algebraic properties which make them fit for our approach.

In the next section, we study the case of operators with reflection and the algebra generated by them, illustrating its properties. In Sect. 3, we show how we can compute the Green's function of a problem with reflections in a fairly general setting using the properties studied in Sect. 2. Section 3 is, in fact, a particular case of a more general context. This new setting is studied in Sect. 4, where we outline the theory for abstract linear operators and prove, as a particular case, the result used in Sect. 3 to derive the Green's function. Finally, in Sect. 5, we show that the application of our results extends beyond equations with reflection, study the case of differential equations in which the Hilbert transform is involved, and give an example of how to compute the solutions of these equations. We also show how these kinds of operators relate to complex polynomials and outline an analogous theory for hyperbolic polynomials.

2 Differential Operators with Reflection

In this section, we study a particular family of operators: those that are combinations of the differential operator D, the pullback operator of the reflection \(\varphi (t)=-t\), denoted by \(\varphi ^*(f)(t)=f(-t)\), and the identity operator \({{\mathrm{Id}}}\). In order to apply the operator D freely, without worrying too much about its domain of definition, we will consider that D acts on the set of functions locally of bounded variation on \({\mathbb {R}}\), \({{\mathrm{BV_{loc}}}}({\mathbb {R}})\).

It is well known that any function locally of bounded variation \(f\in {{\mathrm{BV_{loc}}}}({\mathbb {R}})\) can be expressed as

$$\begin{aligned} f(x)=f(x_0)+\int _{x_0}^xg(y)\text {d}{y}+h(x), \end{aligned}$$

for any \(x_0\in {\mathbb {R}}\), where \(g\in L^1({\mathbb {R}})\) and h is a function with derivative zero almost everywhere [12]. This implies that the distributional derivative (we will call it the weak derivative for short) of f is

$$\begin{aligned} f'=g +\mu _s, \end{aligned}$$

where \(\mu _{s}\) is a singular measure with respect to the Lebesgue measure. In this way, we will define \(D\,f:=g\) (we will restate this definition in a more general way further on).

We now consider the real abelian group \({\mathbb {R}}[D,\varphi ^* ]\) generated by \(\{D^k,\varphi ^*D^k\}_{k=0}^\infty \), where the powers \(D^k\) are taken in the sense of composition (as notation, \(D^0={{\mathrm{Id}}}\), and a constant, say a, will be considered, insofar as it is an operator, as acting on a function f by returning the product \(a\,f\)). If we take the usual composition of operators in \({\mathbb {R}}[D,\varphi ^* ]\), we observe that \(D\varphi ^*=-\varphi ^*D\), so composition is closed in \({\mathbb {R}}[D,\varphi ^* ]\), which makes it a non-commutative algebra. In general, \(D^k\varphi ^*=(-1)^k\varphi ^*D^k\) for \(k=0,1,\ldots \)
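This commutation rule is easy to verify symbolically. The following sketch (in Python with SymPy; the generic function f is an arbitrary placeholder) checks the basic relation \(D\varphi ^*=-\varphi ^*D\):

```python
import sympy as sp

t = sp.Symbol('t')
f = sp.Function('f')

reflect = lambda e: e.subs(t, -t)   # the pullback phi^*

# D(phi^* f) evaluates to -f'(-t), which is exactly -phi^*(D f)
lhs = sp.diff(reflect(f(t)), t)
rhs = -reflect(sp.diff(f(t), t))
print(sp.simplify(lhs - rhs))       # 0, hence D phi^* = -phi^* D
```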

The elements of \({\mathbb {R}}[D,\varphi ^* ]\) are of the form

$$\begin{aligned} L=\sum _ia_i\varphi ^*D^i+\sum _jb_jD^j\in {\mathbb {R}}[D,\varphi ^* ]. \end{aligned}$$
(2.1)

For convenience, we consider the sums over i and j with \(i,j\in \{0,1,\ldots \}\), taking into account that the coefficients \(a_i,b_j\) are zero for sufficiently large indices.

Despite the non-commutativity of composition in \({\mathbb {R}}[D,\varphi ^* ]\), there are interesting relations in this algebra.

First notice that \({\mathbb {R}}[D,\varphi ^*]\) is not a unique factorization domain. Take a polynomial \(P=D^2+\beta D+\alpha \), where \(\alpha ,\beta \in {\mathbb {R}}\), and define the following operators.

If \(\beta ^2-4\alpha \ge 0\),

$$\begin{aligned} L_1&:=D+\frac{1}{2} \left( \beta -\sqrt{\beta ^2-4 \alpha }\right) ,\\ R_1&:=D+\frac{1}{2} \left( \beta +\sqrt{\beta ^2-4 \alpha }\right) ,\\ L_2&:=\varphi ^*D-\sqrt{2}D+\frac{1}{2} \left( \beta -\sqrt{\beta ^2-4 \alpha }\right) \varphi ^*+\frac{\left( -\beta +\sqrt{\beta ^2-4 \alpha }\right) }{\sqrt{2}},\\ R_2&:=\varphi ^*D-\sqrt{2}D-\frac{1}{2} \left( \beta +\sqrt{\beta ^2-4 \alpha }\right) \varphi ^*-\frac{\left( \beta +\sqrt{\beta ^2-4 \alpha }\right) }{\sqrt{2}},\\ L_3&:=\varphi ^*D-\sqrt{2}D+\frac{1}{2} \left( \beta +\sqrt{\beta ^2-4 \alpha }\right) \varphi ^*-\frac{\left( \beta +\sqrt{\beta ^2-4 \alpha }\right) }{\sqrt{2}},\\ R_3&:=\varphi ^*D-\sqrt{2}D+\frac{1}{2} \left( -\beta +\sqrt{\beta ^2-4 \alpha }\right) \varphi ^*+\frac{\left( -\beta +\sqrt{\beta ^2-4 \alpha }\right) }{\sqrt{2}},\\ L_4&:=\varphi ^*D+\sqrt{2}D+\frac{1}{2} \left( \beta -\sqrt{\beta ^2-4 \alpha }\right) \varphi ^*+\frac{\left( \beta -\sqrt{\beta ^2-4 \alpha }\right) }{\sqrt{2}},\\ R_4&:=\varphi ^*D+\sqrt{2}D-\frac{1}{2} \left( \beta +\sqrt{\beta ^2-4 \alpha }\right) \varphi ^*+\frac{\left( \beta +\sqrt{\beta ^2-4 \alpha }\right) }{\sqrt{2}},\\ L_5&:=\varphi ^*D+\sqrt{2}D+\frac{1}{2} \left( \beta +\sqrt{\beta ^2-4 \alpha }\right) \varphi ^*+\frac{\left( \beta +\sqrt{\beta ^2-4 \alpha }\right) }{\sqrt{2}},\\ R_5&:=\varphi ^*D+\sqrt{2}D+\frac{1}{2} \left( -\beta +\sqrt{\beta ^2-4 \alpha }\right) \varphi ^*+\frac{\left( \beta -\sqrt{\beta ^2-4 \alpha }\right) }{\sqrt{2}}. \end{aligned}$$

If \(\beta =0\) and \(\alpha \le 0\),

$$\begin{aligned} L_6&:=\varphi ^*D+\sqrt{-\alpha }\varphi ^*,\\ L_7&:=\varphi ^*D-\sqrt{-\alpha }\varphi ^*. \end{aligned}$$

If \(\beta =0\) and \(\alpha \ge 0\),

$$\begin{aligned} L_8&:=D+\sqrt{\alpha }\varphi ^*,\\ L_9&:=D-\sqrt{\alpha }\varphi ^*. \end{aligned}$$

If \(\beta =0\) and \(\alpha \le 1\),

$$\begin{aligned} L_{10}&:=\varphi ^*D-\sqrt{1-\alpha }\varphi ^*+1,\\ R_{10}&:=-\varphi ^*D+\sqrt{1-\alpha }\varphi ^*+1,\\ L_{11}&:=\varphi ^*D+\sqrt{1-\alpha }\varphi ^*+1,\\ R_{11}&:=-\varphi ^*D-\sqrt{1-\alpha }\varphi ^*+1. \end{aligned}$$

If \(\beta =0\), \(\alpha \ne 0\) and \(\alpha \le 1\),

$$\begin{aligned} L_{12}&:=\varphi ^*D-\sqrt{1-\alpha }D+\alpha ,\\ R_{12}&:=-\frac{1}{\alpha }\varphi ^*D+\frac{\sqrt{1-\alpha }}{\alpha }D+1,\\ L_{13}&:=\varphi ^*D+\sqrt{1-\alpha }D+\alpha ,\\ R_{13}&:=-\frac{1}{\alpha }\varphi ^*D-\frac{\sqrt{1-\alpha }}{\alpha }D+1. \end{aligned}$$

Then,

$$\begin{aligned} P=L_1R_1=R_1L_1=R_2L_2=R_3L_3=R_4L_4=R_5L_5, \end{aligned}$$

and, when \(\beta =0\),

$$\begin{aligned} P= & {} -L_6^2=-L_7^2=L_8^2=L_9^2=R_{10}L_{10}=L_{10}R_{10}=R_{11}L_{11}=L_{11}R_{11}\\= & {} R_{12}L_{12}=L_{12}R_{12}=R_{13}L_{13}=L_{13}R_{13}. \end{aligned}$$

Observe that only \(L_1\) and \(R_1\) commute in the case of \(\beta \ne 0\).
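These factorizations can be checked symbolically. The following SymPy sketch verifies \(P=R_2L_2\) for the sample values \(\beta =1\), \(\alpha =-2\) (so that \(\sqrt{\beta ^2-4\alpha }=3\)); the test function is an arbitrary choice:

```python
import sympy as sp

t = sp.Symbol('t')
refl = lambda e: e.subs(t, -t)                    # phi^*
D = lambda e: sp.diff(e, t)

# beta = 1, alpha = -2, sqrt(beta^2 - 4 alpha) = 3:
L2 = lambda e: refl(D(e)) - sp.sqrt(2)*D(e) - refl(e) + sp.sqrt(2)*e
R2 = lambda e: refl(D(e)) - sp.sqrt(2)*D(e) - 2*refl(e) - 2*sp.sqrt(2)*e

f = sp.sin(t) + t**2 + sp.exp(t)                  # arbitrary test function
P = lambda e: D(D(e)) + D(e) - 2*e                # P = D^2 + beta D + alpha
print(sp.simplify(R2(L2(f)) - P(f)))              # 0
```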

This raises the question of whether we can decompose every differential polynomial P into the composition of two ‘order one’ elements of \({\mathbb {R}}[D,\varphi ^* ]\), but this is not the case in general. Just take \(Q=D^2+D+1\) (observe that Q is not in any of the aforementioned cases). Consider a decomposition of the kind

$$\begin{aligned} (a\varphi ^*D+bD+c\varphi ^*+d)(e\varphi ^*D+gD+h\varphi ^*+j)=Q, \end{aligned}$$

where a, b, c, d, e, g, h, and j are real coefficients to be determined. The resulting system

$$\begin{aligned} \left\{ \begin{array}{ll} d h + c j &{} = 0,\\ d e - c g + b h + a j &{} = 0 ,\\ b e - a g &{} = 0,\\ -a e + b g &{} = 1,\\ c h + d j &{} = 1,\\ -c e + d g + a h + b j &{} = 1, \end{array}\right. \end{aligned}$$

has no solution for real coefficients.

Let \({\mathbb {R}}[D]\) be the ring of polynomials with real coefficients on the variable D. The following result states a very useful property of the algebra \({\mathbb {R}}[D,\varphi ^* ]\).

Theorem 2.1

Take L as defined in (2.1) and take

$$\begin{aligned} R=\sum _{k}a_k\varphi ^*D^k+\sum _{l}(-1)^{l+1}b_lD^l\in {\mathbb {R}}[D,\varphi ^*]. \end{aligned}$$
(2.2)

Then \(\textit{RL}=\textit{LR}\in {\mathbb {R}}[D]\).

Proof

$$\begin{aligned} \begin{aligned} \textit{RL}&=\sum _{i,k}(-1)^ka_ia_kD^{i+k}+\sum _{j,k}b_ja_k\varphi ^*D^{j+k}+\sum _{i,l}(-1)^l(-1)^{l+1}a_ib_l\varphi ^*D^{i+l}\\&\quad +\sum _{j,l}(-1)^{l+1}b_jb_lD^{j+l}\\&= \sum _{i,k}(-1)^ka_ia_kD^{i+k}+\sum _{j,l}(-1)^{l+1}b_jb_lD^{j+l}. \end{aligned} \end{aligned}$$
(2.3)

Hence, \(\textit{RL}\in {\mathbb {R}}[D]\).

Observe that, if we take R in the place of L in the hypothesis of the theorem, we obtain L in the place of R, and so, by expression (2.3), \(\textit{LR}\in {\mathbb {R}}[D]\). \(\square \)
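Since the computation in the proof is mechanical, it can also be replayed programmatically. The sketch below (Python/NumPy; the random integer coefficients are an arbitrary choice) encodes an element of \({\mathbb {R}}[D,\varphi ^*]\) as a pair of coefficient arrays and composes operators with the rule \(D^k\varphi ^*=(-1)^k\varphi ^*D^k\); the \(\varphi ^*\) part of \(\textit{RL}\) vanishes identically:

```python
import numpy as np

def compose(A, B):
    """Compose A after B in R[D, phi*]. An element
    sum_i a_i phi* D^i + sum_j b_j D^j is stored as a pair (a, b)
    of coefficient arrays; we use D^k phi* = (-1)^k phi* D^k."""
    a1, b1 = A
    a2, b2 = B
    n = len(a1) + len(a2) - 1
    a, b = np.zeros(n), np.zeros(n)
    for i, c in enumerate(a1):              # terms c * phi* D^i of A
        for k, d in enumerate(a2):          # phi*D^i phi*D^k = (-1)^i D^(i+k)
            b[i + k] += c*d*(-1)**i
        for l, d in enumerate(b2):          # phi*D^i D^l = phi* D^(i+l)
            a[i + l] += c*d
    for j, c in enumerate(b1):              # terms c * D^j of A
        for k, d in enumerate(a2):          # D^j phi*D^k = (-1)^j phi* D^(j+k)
            a[j + k] += c*d*(-1)**j
        for l, d in enumerate(b2):          # D^j D^l = D^(j+l)
            b[j + l] += c*d
    return a, b

rng = np.random.default_rng(0)
aL = rng.integers(-3, 4, 4).astype(float)   # random sample coefficients of L
bL = rng.integers(-3, 4, 4).astype(float)
aR = aL.copy()                              # R as in (2.2)
bR = np.array([(-1)**(l + 1)*bL[l] for l in range(len(bL))])
aS, bS = compose((aR, bR), (aL, bL))
print(aS)   # all zeros: the phi* part of RL cancels, so RL lies in R[D]
print(bS)   # the coefficients c_k of formula (2.3)
```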

Remark 2.1

Some interesting remarks on the coefficients of the operator \(S=\textit{RL}\) defined in Theorem 2.1 can be made.

If we have

$$\begin{aligned} S=\sum _k c_kD^k=\textit{RL}=\sum _{i,k}(-1)^ka_ia_kD^{i+k}+\sum _{j,l}(-1)^{l+1}b_jb_lD^{j+l}, \end{aligned}$$

then

$$\begin{aligned} c_k=\sum _{i=0}^k(-1)^i(a_ia_{k-i}-b_ib_{k-i}). \end{aligned}$$

A closer inspection reveals that

$$\begin{aligned} c_k=\left\{ \begin{array}{ll} 0, &{} k \text { odd,} \\ 2\sum _{i=0}^{\frac{k}{2}-1}\left( -1\right) ^i\left( a_ia_{k-i}-b_ib_{k-i}\right) +\left( -1\right) ^{\frac{k}{2}}\left( a_{\frac{k}{2}}^2-b_{\frac{k}{2}}^2\right) , &{} k \text { even.}\end{array}\right. \end{aligned}$$

This has some important consequences. If \(L=\sum _{i=0}^n a_i\varphi ^*D^i+\sum _{j=0}^n b_jD^j\) with \(a_n\ne 0\) or \(b_n\ne 0\), we have that \(c_{2n}=(-1)^n(a_n^2-b_n^2)\), and so, if \(a_n=\pm b_n\), then \(c_{2n}=0\). This shows that, by composing two elements of \({\mathbb {R}}[D,\varphi ^* ]\), we can get another element with simpler terms, in the sense of involving derivatives of lower order. We illustrate this with two examples.

Take \(n\ge 3\), \(L=\varphi ^*D^n+D^n+D-{{\mathrm{Id}}}\) and \(R=\varphi ^*D^n-(-1 )^nD^n+D+{{\mathrm{Id}}}\). Then \(\textit{RL}=2D^{\alpha (n)}+D^2-{\text {Id}}\), where \(\alpha (n)=n\) if n is even and \(\alpha (n)=n+1\) if n is odd.

If we take \(n\ge 0 \), \(L=\varphi ^*D^{2n+1}+D^{2n+1} + {{\mathrm{Id}}}\) and \(R=\varphi ^*D^{2n+1}+D^{2n+1}-{{\mathrm{Id}}}\), then \(\textit{RL}= -{{\mathrm{Id}}}\).

Example 2.1

Consider the equation

$$\begin{aligned} x^{(3)}(t)+x^{(3)}(-t)+x(t)=\sin t. \end{aligned}$$

Applying the operator \(\varphi ^*D^3+D^3-{{\mathrm{Id}}}\) to both sides of the equation, we obtain \(x(t)=\sin t+2\cos t\). This is the unique solution of the equation, even though no extra conditions were imposed on it.
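A direct SymPy check of this solution (a small sketch; the equation is copied from above):

```python
import sympy as sp

t = sp.Symbol('t')
x = sp.sin(t) + 2*sp.cos(t)                            # candidate solution
# residual of x'''(t) + x'''(-t) + x(t) - sin(t):
residual = sp.diff(x, t, 3) + sp.diff(x, t, 3).subs(t, -t) + x - sp.sin(t)
print(sp.simplify(residual))                           # 0
```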

3 Boundary Value Problems

In this section, we obtain the Green's function of boundary value problems with reflection and constant coefficients. We point out that the approach used in this section is also valid for initial value problems, among other types of conditions.

Let \(I=[a,b]\subset {\mathbb {R}}\) be an interval and \(f\in {{\mathrm{L^1}}}(I)\). Consider now the following problem with the usual derivative.

$$\begin{aligned} Su(t):= & {} \sum _{k=0}^na_ku^{(k)}(t)=f(t),\ t\in I,\nonumber \\ B_iu:= & {} \sum _{j=0}^{n-1}\alpha _{ij}u^{(j)}(a)+\beta _{ij}u^{(j)}(b)=0, i=1,\ldots ,n. \end{aligned}$$
(3.1)

The following theorem from [7] states the cases in which we can find a unique solution of problem (3.1).

Theorem 3.1

Assume the following homogeneous problem has a unique solution

$$\begin{aligned} Su(t)=0,\ t\in I,\ B_iu=0,\quad i=1,\ldots ,n. \end{aligned}$$

Then there exists a unique function, called Green’s function, such that

  1. (G1)

    G is defined on the square \(I^2\).

  2. (G2)

    The partial derivatives \(\frac{\partial ^kG}{\partial t^k}\) exist and are continuous on \(I^2\) for \(k=0,\ldots ,n-2\).

  3. (G3)

    \(\frac{\partial ^{n-1}G}{\partial t^{n-1}}\) and \(\frac{\partial ^nG}{\partial t^n}\) exist and are continuous on \(I^2\backslash \{(t,t)\ :\ t\in I\}\).

  4. (G4)

    The lateral limits \(\frac{\partial ^{n-1}G}{\partial t^{n-1}}(t,t^+)\) and \(\frac{\partial ^{n-1}G}{\partial t^{n-1}}(t,t^-)\) exist for every \(t\in (a,b)\) and

    $$\begin{aligned} \frac{\partial ^{n-1}G}{\partial t^{n-1}}(t,t^-)-\frac{\partial ^{n-1}G}{\partial t^{n-1}}(t,t^+)=\frac{1}{a_n}. \end{aligned}$$
  5. (G5)

    For each \(s\in (a,b)\) the function \(G(\cdot ,s)\) is a solution of the differential equation \(Su=0\) on \(I\backslash \{s\}\).

  6. (G6)

    For each \(s\in (a,b)\) the function \(G(\cdot ,s)\) satisfies the boundary conditions \(B_iu=0\ i=1,\ldots ,n\).

Furthermore, the function \(u(t):=\int _a^bG(t,s)f(s)\text {d}s\) is the unique solution of the problem (3.1).

Using the properties (G1)–(G6) and Theorem 2.1 one can prove Theorem 3.2. The proof of this result will be a direct consequence of Theorem 4.1.

Given an operator \({\mathscr {L}}\) for functions of one variable, define the operator \({\mathscr {L}}_\vdash \) as \({\mathscr {L}}_\vdash G(t,s):={\mathscr {L}}(G(\cdot ,s))|_{t}\) for every s and any suitable function G.

Theorem 3.2

Let \(I=[-T,T]\). Consider the problem

$$\begin{aligned} Lu(t)=h(t),\ t\in I,\ B_iu=0,\quad i = 1,\ldots ,n, \end{aligned}$$
(3.2)

where L is defined as in (2.1), \(h\in L^1(I)\) and

$$\begin{aligned} B_iu:=\sum _{j=0}^{n-1}\alpha _{ij}u^{(j)}(-T)+\beta _{ij}u^{(j)}(T). \end{aligned}$$

Then, there exists \(R\in {\mathbb {R}}[D,\varphi ^* ]\) (as in (2.2)) such that \(S:=\textit{RL}\in {\mathbb {R}}[D]\), and the unique solution of problem (3.2) is given by \(\int _{-T}^{T}R_\vdash G(t,s)h(s)\text {d}s\), where G is the Green's function associated to the problem \(Su=0\), \(B_iRu=0\), \(B_iu=0\), \(i=1,\ldots ,n\), assuming that it has a unique solution.

For the following example, let us fix some notation. Let \(k,p\in {\mathbb {N}}\). We denote by \(W^{k,p}(I)\) the Sobolev space defined by

$$\begin{aligned} W^{k,p}(I) = \left\{ u \in L^p(I) : D^{\alpha }u \in L^p(I) \,\, \forall \alpha \le k \right\} . \end{aligned}$$

Given a constant \(a\in {\mathbb {R}}\) we can consider the pullback by this constant as a functional \(a^*:{\mathscr {C}}(I)\rightarrow {\mathbb {R}}\) such that \(a^*f=f(a)\) in the same way we defined it for functions.

Example 3.1

Consider the following problem.

$$\begin{aligned} u''(t)+a\,u(-t)+b\,u(t)=h(t),\ t\in I,\quad u(-T)=u(T),\ u'(-T)=u'(T), \end{aligned}$$
(3.3)

where \(h\in W^{2,1}(I)\). Then, the operator we are considering is \(L=D^2+a\,\varphi ^*+b\). If we take \(R:=D^2-a\,\varphi ^*+b\), we have that \(\textit{RL}=D^4+2b\,D^2+b^2-a^2\).

The boundary conditions are \(((T^*)-(-T)^*)u=0\) and \(((T^*)-(-T)^*)Du=0\). Taking this into account, we add the conditions

$$\begin{aligned} 0= & {} ((T^*)-(-T)^*)Ru=((T^*)-(-T)^*)(D^2-a\,\varphi ^*+b)u\\= & {} ((T^*)-(-T)^*)D^2u,\\ 0= & {} ((T^*)-(-T)^*)RDu=((T^*)-(-T)^*)(D^2-a\,\varphi ^*+b)Du\\= & {} ((T^*)-(-T)^*)D^3u. \end{aligned}$$

That is, our new reduced problem is

$$\begin{aligned}&u^{(4)}(t)+2b\,u''(t)+(b^2-a^2)u(t)=f(t),\ t\in I,\nonumber \\&\quad u^{(k)}(-T)=u^{(k)}(T),\ k=0,\ldots ,3, \end{aligned}$$
(3.4)

where \(f(t)=R\,h(t)=h''(t)-a\,h(-t)+b\,h(t)\).

Observe that this problem is equivalent to the system of equations (a chain of order two problems)

$$\begin{aligned} u''(t)+(b+a)u(t)&= v(t),\ t\in I,\quad u(-T)=u(T),\ u'(-T)=u'(T),\\ v''(t)+(b-a)v(t)&= f(t),\ t\in I,\quad v(-T)=v(T),\ v'(-T)=v'(T). \end{aligned}$$

Thus, it is clear that

$$\begin{aligned} u(t)=\int _{-T}^TG_1(t,s)v(s)\text {d}s,\quad v(t)=\int _{-T}^TG_2(t,s)f(s)\text {d}s, \end{aligned}$$

where \(G_1\) and \(G_2\) are the Green's functions related to the previous second-order problems. Explicitly, in the case \(b>|a|\) (the study of the other cases would be analogous),

$$\begin{aligned} 2\sqrt{b+a}\sin (\sqrt{b+a}\,T)G_1(t,s)={\left\{ \begin{array}{ll} \cos \sqrt{b+a}(T+s-t) &{} \text {if}\; s\le t,\\ \cos \sqrt{b+a}(T-s+t) &{} \text {if}\; s>t.\end{array}\right. } \end{aligned}$$

and

$$\begin{aligned} 2\sqrt{b-a}\sin (\sqrt{b-a}\,T)G_2(t,s)={\left\{ \begin{array}{ll} \cos \sqrt{b-a}(T+s-t) &{} \text {if}\;s\le t,\\ \cos \sqrt{b-a}(T-s+t) &{} \text {if}\;s>t.\end{array}\right. } \end{aligned}$$

Hence, the Green’s function G for problem (3.4) is given by

$$\begin{aligned} G(t,s)=\int _{-T}^TG_1(t,r)G_2(r,s)\text {d}r. \end{aligned}$$

Therefore, using Theorem 3.2, the Green’s function for problem (3.3) is

$$\begin{aligned} \overline{G}(t,s)=R_\vdash G(t,s)=\frac{\partial ^2 G}{\partial t^2}(t,s)-a\,G(-t,s)+b\,G(t,s). \end{aligned}$$
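Although the final expression for \(\overline{G}\) is cumbersome, it can be sanity-checked numerically by brute force. The sketch below uses the ad hoc sample values \(a=1\), \(b=3\), \(T=1\) (so that \(b>|a|\)) and the forcing \(h\equiv 1\), for which the constant \(u\equiv 1/(a+b)\) clearly solves (3.3); all step sizes and tolerances are arbitrary choices:

```python
import numpy as np
from scipy.integrate import quad

a, b, T = 1.0, 3.0, 1.0                          # sample values with b > |a|
w1, w2 = np.sqrt(b + a), np.sqrt(b - a)

# Both second-order kernels reduce to cos(w(T - |t-s|)) / (2 w sin(wT)).
G1 = lambda t, s: np.cos(w1*(T - abs(t - s))) / (2*w1*np.sin(w1*T))
G2 = lambda t, s: np.cos(w2*(T - abs(t - s))) / (2*w2*np.sin(w2*T))

def G(t, s):                                     # kernel of the reduced problem
    return quad(lambda r: G1(t, r)*G2(r, s), -T, T,
                epsabs=1e-10, epsrel=1e-10, limit=200)[0]

def Gbar(t, s, e=1e-2):                          # R_|- G by central differences
    d2 = (G(t + e, s) - 2*G(t, s) + G(t - e, s)) / e**2
    return d2 - a*G(-t, s) + b*G(t, s)

# For h = 1 the unique solution of (3.3) is the constant 1/(a+b).
u = lambda t: quad(lambda s: Gbar(t, s), -T, T, limit=200)[0]
for t0 in (0.3, -0.7):
    print(u(t0) - 1/(a + b))                     # small (about 1e-4 or below)
```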

Remark 3.1

We can reduce the assumptions on the regularity of h to \(h\in {{\mathrm{L^1}}}(I)\) just taking into account the density of \(W^{2,1}(I)\) in \(L^1(I)\).

Remark 3.2

Example 2.1 illustrates the importance of the existence and uniqueness of solution of the problem \(Su=0,\ B_iRu=0,\ B_iu=0\) in the hypotheses of Theorem 3.2. In general, when we compose two linear ODEs of orders m and n, with m and n conditions respectively, we obtain a new problem of order \(m+n\) with \(m+n\) conditions. As we can see, this is not the case in the reduction of Theorem 3.2. In the case where the order of the reduced problem is less than 2n, anything is possible: we may have an infinite number of solutions, no solution, or a unique solution even when the problem is non-homogeneous. The following example illustrates this last case.

Example 3.2

Consider the problem

$$\begin{aligned} Lu(t):= u^{(4)}(t)+u^{(4)}(-t)+u''(-t)=h(t),\ t\in [-1,1],\quad u(1)=u(-1)=0,\end{aligned}$$
(3.5)

where \(h\in W^{4,1}([-1,1])\).

For this case, \(Ru(t):=-u^{(4)}(t)+u^{(4)}(-t)+u''(-t)\), and the reduced equation is \(\textit{RL}u=2u^{(6)}+u^{(4)}=Rh\), which is of order \(6<2\times 4 = 8\), so there is a reduction of the order. Now, we have to be careful with the new reduced boundary conditions:

$$\begin{aligned} \begin{aligned} B_1u(t)&=u(1)=0,\\ B_2u(t)&=u(-1)=0,\\ B_1Ru(t)&=-u^{(4)}(1)+u^{(4)}(-1)+u''(-1)=0,\\ B_2Ru(t)&=-u^{(4)}(-1)+u^{(4)}(1)+u''(1)=0,\\ B_1Lu(t)&=u^{(4)}(1)+u^{(4)}(-1)+u''(-1)=h(1),\\ B_2Lu(t)&=u^{(4)}(-1)+u^{(4)}(1)+u''(1)=h(-1).\\ \end{aligned} \end{aligned}$$
(3.6)

The last two conditions are obtained by applying the original boundary conditions to the original equation. Equation (3.6) is a system of linear equations which can be solved for u and its derivatives, yielding

$$\begin{aligned} u(1)=u(-1)=0,\ u''(-1)=-u''(1)=\frac{1}{2}(h(1)-h(-1)),\ u^{(4)}(\pm 1)=\frac{h(\pm 1)}{2}. \end{aligned}$$
(3.7)

Consider now the reduced problem

$$\begin{aligned}&2u^{(6)}(t)+u^{(4)}(t)=Rh(t)=:f(t),\ t\in [-1,1], \\&u(1)=u(-1)=0,\ u''(-1)=-u''(1)=\frac{1}{2}(h(1)-h(-1)),\ u^{(4)}(\pm 1)=\frac{h(\pm 1)}{2}, \end{aligned}$$

and the change of variables \(v(t):=u^{(4)}(t)\). Now, we look for the solution of

$$\begin{aligned} 2v''(t)+v(t)=f(t),\ t\in [-1,1],\ v(\pm 1)=\frac{h(\pm 1)}{2}, \end{aligned}$$

which is given by

$$\begin{aligned} v(t)=\int _{-1}^1G(t,s)f(s)\text {d}s+\frac{h(1)\csc \sqrt{2}}{2}\sin \left( \frac{t+1}{\sqrt{2}}\right) -\frac{h(-1)\csc \sqrt{2}}{2}\sin \left( \frac{t-1}{\sqrt{2}}\right) , \end{aligned}$$

where

$$\begin{aligned} G(t,s):=\frac{\csc \sqrt{2}}{\sqrt{2}}{\left\{ \begin{array}{ll} \sin \left( \frac{s+1}{\sqrt{2}}\right) \sin \left( \frac{t-1}{\sqrt{2}}\right) , &{} -1\le s\le t\le 1, \\ \sin \left( \frac{s-1}{\sqrt{2}}\right) \sin \left( \frac{t+1}{\sqrt{2}}\right) , &{} -1\le t<s\le 1. \end{array}\right. } \end{aligned}$$

Now, it is left to solve the problem

$$\begin{aligned} u^{(4)}(t)=v(t),\ u(1)=u(-1)=0,\ u''(-1)=-u''(1)=\frac{1}{2}(h(1)-h(-1)). \end{aligned}$$

The solution is given by

$$\begin{aligned} u(t)=\int _{-1}^{1}K(t,s)v(s)\text {d}s+\frac{h(-1)-h(1)}{12}t(t - 1) (t + 1), \end{aligned}$$

where

$$\begin{aligned} K(t,s)=\frac{1}{12}{\left\{ \begin{array}{ll} (s+1) (t-1) \left( s^2+2 s+t^2-2 t-2\right) , &{} -1\le s\le t\le 1, \\ (s-1) (t+1) \left( s^2-2 s+t^2+2 t-2\right) , &{} -1\le t<s\le 1. \end{array}\right. } \end{aligned}$$

Hence, taking \(J(t,s)=\int _{-1}^1K(t,r)G(r,s)\text {d}r\),

$$\begin{aligned}&J(t,s):= \\&\quad \frac{\csc \sqrt{2}}{12\sqrt{2}}{\left\{ \begin{array}{ll} \sqrt{2}\sin (\sqrt{2}) (s+1) (t-1) [s (s+2)+(t-2) t-14]+24 \cos \left( \frac{s-t+2}{\sqrt{2}}\right) -24 \cos \left( \frac{s+t}{\sqrt{2}}\right) , \\ -1\le s\le t\le 1, \\ \sqrt{2}\sin (\sqrt{2}) (s-1) (t+1)[(s-2) s+t (t+2)-14]+24 \cos \left( \frac{s-t-2}{\sqrt{2}}\right) -24 \cos \left( \frac{s+t}{\sqrt{2}}\right) , \\ -1\le t<s\le 1. \end{array}\right. } \end{aligned}$$

Therefore,

$$\begin{aligned} u(t)&=\int _{-1}^{1}J(t,s)f(s)\text {d}s\\&\quad +\frac{h(1)\csc \sqrt{2}}{2}\left[ \frac{1}{6} (t-3) (t+1) (t+5) \sin \left( \sqrt{2}\right) +4 \sin \left( \frac{t+1}{\sqrt{2}}\right) \right] \\&\quad -\frac{h(-1)\csc \sqrt{2}}{2}\left[ \frac{1}{6} (t-5) (t-1) (t+3) \sin \left( \sqrt{2}\right) +4 \sin \left( \frac{t-1}{\sqrt{2}}\right) \right] \\&\quad +\frac{h(-1)-h(1)}{12}t(t - 1) (t + 1). \end{aligned}$$

4 The Reduced Problem

The usefulness of a theorem of the kind of Theorem 3.2 is clear, for it allows us to obtain the Green's function of any problem of differential equations with constant coefficients and involutions, generalizing the works [3–5]. The proof of this theorem relies heavily on the properties \((G1){-}(G6)\), so our main goal now is to consider these properties abstractly in order to apply them in a more general context with different kinds of operators.

Let X be a vector subspace of \({{\mathrm{L_{loc}^1}}}({\mathbb {R}})\), and \(({\mathbb {R}},\tau )\) the real line with its usual topology. Define \(X_U:=\{f|_U\ :\ f\in X\}\) for every \(U\in \tau \) (observe that \(X_U\) is a vector space as well). Assume that X satisfies the following property.

(P) For every partition of \({\mathbb {R}}\), \(\{S_j\}_{j\in J}\cup \{N\}\), consisting of measurable sets, where N has no accumulation points and the \(S_j\) are open, if \(f_j\in X_{S_j}\) for every \(j\in J\), then there exists \(f\in X\) such that \(f|_{S_j}=f_j\) for every \(j\in J\).

Example 4.1

The set of locally absolutely continuous functions \({{\mathrm{AC_{loc}}}}({\mathbb {R}})\subset {{\mathrm{L_{loc}^1}}}({\mathbb {R}})\) does not satisfy (P). To see this just take the following partition of \({\mathbb {R}}\): \(S_1=(-\infty ,0)\), \(S_2=(0,+\infty )\), \(N=\{0\}\) and consider \(f_1\equiv 0\), \(f_2\equiv 1\). \(f_j\in {{\mathrm{AC}}}({\mathbb {R}})_{S_j}\) for \(j=1,2\), but any function f such that \(f|_{S_j}=f_j\), \(j=1,2\) has a discontinuity at 0, so it cannot be absolutely continuous. That is, (P) is not satisfied.

Example 4.2

\(X={{\mathrm{BV_{loc}}}}({\mathbb {R}})\) satisfies (P). Take a partition of \({\mathbb {R}}\), \(\{S_j\}_{j\in J}\cup \{N\}\), consisting of measurable sets, where N has no accumulation points and the \(S_j\) are open, and a family of functions \((f_j)_{j\in J}\) such that \(f_j\in X_{S_j}\) for every \(j\in J\). We can further assume, without loss of generality, that the \(S_j\) are connected. Define a function f such that \(f|_{S_j}:=f_j\) and \(f|_N=0\). Take a compact set \(K\subset {\mathbb {R}}\). Then, by the Bolzano–Weierstrass and Heine–Borel theorems, \(K\cap N\) is finite, for N has no accumulation points. Therefore, \(J_K:=\{j\in J : S_j\cap K\ne \emptyset \}\) is finite as well. To see this, denote by \(\partial S\) the boundary of a set S and observe that \(N\cup K=\cup _{j\in J}\partial (S_j\cap K)\) and that the sets \(\partial (S_j\cap K)\cap \partial (S_k\cap K)\) are finite for every \(j,k\in J\).

Thus, the variation of f in K is \(V_K(f)\le \sum _{j\in J_K}V_{S_j}(f)<\infty \) since f is of bounded variation on each \(S_j\). Hence, X satisfies (P).

Throughout this section, we will consider a function space X satisfying (P) and two families of linear operators \(L=\{L_U\}_{U\in \tau }\) and \(R=\{R_U\}_{U\in \tau }\) that satisfy

Locality:

\(L_U\in {\mathscr {L}}(X_U,{{\mathrm{L_{loc}^1}}}(U))\), \(R_U\in {\mathscr {L}}({{\mathrm{im}}}(L_U),{{\mathrm{L_{loc}^1}}}(U))\),

Restriction:

\(L_V(f|_V)=L_U(f)|_V\), \(R_V(f|_V)=R_U(f)|_V\) for every \(U,V\in \tau \) such that \(V\subset U\).

The following definition allows us to give an example of a space that satisfies the locality and restriction properties.

Definition 1

Let \(f:{\mathbb {R}}\rightarrow {\mathbb {R}}\) and assume there exists a partition \(\{S_j\}_{j\in J}\cup \{N\}\) of \({\mathbb {R}}\), consisting of measurable sets, where N has zero Lebesgue measure, such that the weak derivative \(g_j\) of \(f|_{S_j}\) exists for every \(j\in J\). Then a function g such that \(g|_{S_j}=g_j\) for every \(j\in J\) is called the very weak derivative (vw-derivative) of f.

Remark 4.1

The vw-derivative is uniquely defined save for a zero measure set and is equivalent to the weak derivative for absolutely continuous functions.

Nevertheless, the vw-derivative is different from the distributional derivative. For instance, the derivative of the Heaviside function in the distributional sense is the Dirac delta at 0, whereas its vw-derivative is zero. What is more, the kernel of the vw-derivative is the set of functions which are constant on a family of open sets \(\{S_j\}_{j\in J}\) such that \({\mathbb {R}}\backslash (\cup _{j\in J} S_j)\) has Lebesgue measure zero.
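Incidentally, SymPy's branch-by-branch differentiation of Piecewise expressions behaves exactly like the vw-derivative, which makes the difference easy to see (a small illustration, not part of the theory):

```python
import sympy as sp

t = sp.Symbol('t', real=True)
heaviside = sp.Piecewise((0, t < 0), (1, True))

# Differentiating branch by branch ignores the jump at 0: both branches
# give 0, whereas the distributional derivative would be a Dirac delta.
print(sp.diff(heaviside, t))
# The vw-derivative of |t| is the expected sign-like step:
print(sp.diff(sp.Piecewise((-t, t < 0), (t, True)), t))
```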

Example 4.3

Take \(X={{\mathrm{BV_{loc}}}}({\mathbb {R}})\) and \(L=D\) to be the very weak derivative. Then L satisfies the locality and restriction hypotheses.

Remark 4.2

The vw-derivative, as defined here, is the D operator defined in Sect. 2 for functions of bounded variation. In other words, the vw-derivative ignores the jumps and considers only those parts with enough regularity.

Remark 4.3

The locality property allows us to treat the maps L and R as if they were just linear operators in \({\mathscr {L}}(X,{{\mathrm{L_{loc}^1}}}({\mathbb {R}}))\) and \({\mathscr {L}}({{\mathrm{im}}}(L_{\mathbb {R}}),{{\mathrm{L_{loc}^1}}}({\mathbb {R}}))\), respectively, although we must not forget their more complex structure.

Assume \(X_U\subset {{\mathrm{im}}}( L_U)\subset {{\mathrm{im}}}(R_U)\) for every \(U\in \tau \), let \(B_i\in {\mathscr {L}}({{\mathrm{im}}}( R_{\mathbb {R}}),{\mathbb {R}})\), \(i=1,\ldots ,m\), and let \(h\in {{\mathrm{im}}}(L_{\mathbb {R}})\). Consider now the following problem:

$$\begin{aligned} Lu=h,\ B_iu=0,\quad i =1,\ldots ,m. \end{aligned}$$
(4.1)

Let

$$\begin{aligned} Z:=\{G:{\mathbb {R}}^2\rightarrow {\mathbb {R}}\ :\ G(\cdot ,s)\in X,\ s\in {\mathbb {R}};\ G(t,\cdot )\in {{\mathrm{L^2}}}({\mathbb {R}})\text { and }{{\mathrm{supp}}}\{G(t,\cdot )\}\text { is compact},\ t\in {\mathbb {R}}\}. \end{aligned}$$

Z is a vector space.

Let \(f\in {{\mathrm{im}}}(L_{\mathbb {R}})\) and consider the problem

$$\begin{aligned} \textit{RL}v=f,\ B_iv=0,\ B_iRv=0,\quad i =1,\ldots ,m. \end{aligned}$$
(4.2)

Let \(G\in Z\) and define the operator \(H_G\) such that \(H_G(h)|_t:=\int _{\mathbb {R}}G(t,s)h(s)\text {d}s\). We have now the following theorem relating problems (4.1) and (4.2).

Theorem 4.1

Assume L and R are the aforementioned operators with the locality and restriction properties and let \(h\in {{\mathrm{Dom}}}(R_{\mathbb {R}})\). Assume L commutes with R and that there exists \(G\in Z\) such that

(I):

\((\textit{RL})_\vdash G=0\),

(II):

\(B_{i\,\vdash } G=0,\quad i =1,\ldots ,m\),

(III):

\(( B_iR)_\vdash G=0,\quad i =1,\ldots ,m\),

(IV):

\(\textit{RLH}_Gh=H_{(\textit{RL})_\vdash G}h+h\),

(V):

\(LH_{R_\vdash G}h=H_{L_\vdash R_\vdash G}h+h\),

(VI):

\(B_iH_G= H_{B_{i\,\vdash } G},\quad i =1,\ldots ,m\),

(VII):

\(B_iRH_G= B_iH_{R_\vdash G}=H_{( B_iR)_\vdash G},\quad i =1,\ldots ,m\).

Then, \(v:=H_G(h)\) is a solution of problem (4.2) and \(u:=H_{R_\vdash G}(h)\) is a solution of problem (4.1).

Proof

(I) and (IV) imply that

$$\begin{aligned} \textit{RLv}=\textit{RLH}_Gh=H_{(\textit{RL})_\vdash G}h+h=H_0h+h=h. \end{aligned}$$

On the other hand, \((\textit{III})\) and \((\textit{VII})\) imply that, for every \(i=1,\ldots ,m\),

$$\begin{aligned} B_iRv= B_iRH_Gh=H_{( B_iR)_\vdash G}h=0. \end{aligned}$$

All the same, by \((\textit{II})\) and \((\textit{VI})\),

$$\begin{aligned} B_iv= B_iH_Gh=H_{B_{i\,\vdash } G}h=0. \end{aligned}$$

Therefore, v is a solution to problem (4.2).

Now, using (I) and (V) and the fact that \(\textit{LR}=\textit{RL}\), we have that

$$\begin{aligned} Lu=LH_{R_\vdash G}h=H_{L_\vdash R_\vdash G}h+h=H_{(\textit{LR})_\vdash G}h+h=H_{(\textit{RL})_\vdash G}h+h=h. \end{aligned}$$

Taking into account \((\textit{III})\) and \((\textit{VII})\),

$$\begin{aligned} B_iu= B_iH_{R_\vdash G}(h)=H_{( B_iR)_\vdash G}h=0,\quad i =1,\ldots ,m. \end{aligned}$$

Hence, u is a solution of problem (4.1). \(\square \)

The following Corollary is proved in the same way as the previous Theorem.

Corollary 4.2

Assume \(G\in Z\) satisfies

  1. (1)

    \(L_\vdash G=0\),

  2. (2)

    \(B_{i\,\vdash } G=0,\quad i =1,\ldots ,m\),

  3. (3)

    \(LH_Gh=H_{L_\vdash G}h+h\),

  4. (4)

    \(B_iH_Gh=H_{ B_{i\,\vdash } G}h\).

Then \(u=H_Gh\) is a solution of problem (4.1).

Proof of Theorem 3.2

Originally, we would need to take \(h\in {{\mathrm{Dom}}}(R)\), but by a simple density argument (\(\mathscr {C}^\infty (I)\) is dense in \({{\mathrm{L^1}}}(I)\)) we can take \(h\in {{\mathrm{L^1}}}(I)\). If we prove that the hypotheses of Theorem 4.1 are satisfied, then the existence of solution will be established. First, Theorem 2.1 guarantees the commutativity of L and R. Now, Theorem 3.1 implies hypotheses \((I){-}(\textit{VII})\) of Theorem 4.1 in terms of the vw-derivative.

Indeed, (I) is straightforward from (G5). \((\textit{II})\) and \((\textit{III})\) are satisfied because \((G1)-(G6)\) hold and \(B_iu=B_iRu=0\). (G2) and (G4) imply \((\textit{IV})\) and (V). \((\textit{VI})\) and \((\textit{VII})\) hold because of (G2), (G5) and the fact that the boundary conditions commute with the integral.

On the other hand, the solution to problem (3.2) must be unique for, otherwise, the reduced problem \(Su=0\), \(B_iRu=0\), \(B_iu=0\), \(i=1,\ldots ,n\) would have several solutions, contradicting the hypotheses. \(\square \)

The following lemma, along the lines of [5], extends the application of Theorem 3.2 to the case of non-constant coefficients, with some restrictions, for problems similar to the one in Example 3.1.

Lemma 4.3

Consider the problem

$$\begin{aligned} u''(t)+a(t)\,u(-t)+b(t)\,u(t)=h(t),\quad u(-T)=u(T), \end{aligned}$$
(4.3)

where \(a\in W^{2,1}_{{\text {loc}}}({\mathbb {R}})\) is non-negative and even,

$$\begin{aligned} b=\,k a+\frac{a''}{4\,a}-\frac{5}{16}\left( \frac{a'}{a}\right) ^2, \end{aligned}$$

for some constant \(k\in {\mathbb {R}}\) with \(k^2\ne 1\), and b is integrable.

Define \(A(t):= \int _0^t\sqrt{a(s)}{\text {d}}s\), consider

$$\begin{aligned}u''(t)+u(-t)+k\,u(t)=h(t),\ u(-A(T))=u(A(T)) \end{aligned}$$

and assume it has a Green’s function G.

Then

$$\begin{aligned} u(t)=\int _{-T}^{T}H(t,s)h(s){\text {d}}s \end{aligned}$$

is a solution of problem (4.3) where

$$\begin{aligned}H(t,s):=&\root 4 \of {\frac{a(s)}{a(t)}}G(A(t),A(s)) \end{aligned}$$

and \(H(t,\cdot )h(\cdot )\) is assumed to be integrable in \([-T,T]\).

Proof

Let G be the Green’s function of the problem

$$\begin{aligned} u''(t)+u(-t)+k\,u(t)=h(t),\ u(-A(T)) = u(A(T)), u \in \hbox {W}^{2,1}_{\mathrm{loc}}({\mathbb {R}}). \end{aligned}$$

Now, we show that H satisfies the equation, that is,

$$\begin{aligned}&\frac{\partial ^2 H}{\partial t^2}(t,s)+a(t)H(-t,s)+b(t)H(t,s)=0 \quad \text { for a.}\,\text {e. } t,s\in {\mathbb {R}}.\\&\frac{\partial ^2 H}{\partial t^2}(t,s)=\frac{\partial ^2 }{\partial t^2}\left[ \root 4 \of {\frac{a(s)}{a(t)}}G(A(t),A(s))\right] \\&\quad =\frac{\partial }{\partial t}\left[ -\frac{a'(t)}{4}\root 4 \of {\frac{a(s)}{a^5(t)}}G(A(t),A(s)) +\root 4 \of {a(s)a(t)}\frac{\partial G }{\partial t}(A(t),A(s))\right] \\&\quad = -\frac{a''(t)}{4}\root 4 \of {\frac{a(s)}{a^5(t)}}G(A(t),A(s))+\frac{5}{16}(a'(t))^2\root 4 \of {\frac{a(s)}{a^9(t)}}G(A(t),A(s))\\&\qquad +\root 4 \of {a(s)a^3(t)}\frac{\partial ^2 G }{\partial t^2}(A(t),A(s)). \end{aligned}$$

Therefore,

$$\begin{aligned}&\frac{\partial ^2 H}{\partial t^2}(t,s)+a(t)H(-t,s)+b(t)H(t,s) \\&\quad = \root 4 \of {a(s)a^3(t)}\frac{\partial ^2 G }{\partial t^2}(A(t),A(s)) +a(t)\root 4 \of {\frac{a(s)}{a(t)}}G(-A(t),A(s))\\&\quad \quad +k\,a(t)\root 4 \of {\frac{a(s)}{a(t)}}G(A(t),A(s))\\&\quad = \root 4 \of {a(s)a^3(t)}\left( \frac{\partial ^2 G }{\partial t^2}(A(t),A(s)) +G(-A(t),A(s))+k\,G(A(t),A(s))\right) =0. \end{aligned}$$

The boundary conditions are satisfied as well. \(\square \)

Remark 4.4

The same construction as in Lemma 4.3 is valid for the case of the initial value problem. We illustrate this in the following example.

Example 4.4

Let \(a(t)=|t|^p\) and \(k > 1\). Taking b as in Lemma 4.3,

$$\begin{aligned} b(t)=k|t|^p-\frac{p(p+4)}{16t^2}, \end{aligned}$$

consider problems

$$\begin{aligned} u''(t)+a(t)\,u(-t)+b(t)\,u(t)=h(t),\quad u(0)=u'(0)=0 \end{aligned}$$
(4.4)

and

$$\begin{aligned} u''(t)+u(-t)+k\,u(t)=h(t),\quad u(0)=u'(0)=0. \end{aligned}$$
(4.5)
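The expression for b above can be recovered symbolically from the formula in Lemma 4.3 (a SymPy sketch restricted to \(t>0\), where \(|t|^p=t^p\)):

```python
import sympy as sp

t, p, k = sp.symbols('t p k', positive=True)
a = t**p                                   # |t|^p restricted to t > 0
b = k*a + sp.diff(a, t, 2)/(4*a) - sp.Rational(5, 16)*(sp.diff(a, t)/a)**2
print(sp.simplify(b - (k*t**p - p*(p + 4)/(16*t**2))))   # 0
```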

Using an argument similar to the one in Example 3.1, and considering \(R=D^2-\varphi ^*+k\), we can reduce problem (4.5) to

$$\begin{aligned}&u^{(4)}(t) + 2k\,u''(t) + (k^2-1)u(t)= f(t),\quad u^{(j)} (0) = 0,\ j= 0,\ldots ,3, \end{aligned}$$
(4.6)

which can be decomposed into

$$\begin{aligned} u''(t)+(k+1)u(t)&=v(t),\ t\in I,\quad u(0)=u'(0)=0,\\ v''(t)+(k-1)v(t)&=f(t),\ t\in I,\quad v(0)=v'(0)=0, \end{aligned}$$

which have as Green’s functions, respectively,

$$\begin{aligned}\tilde{G}_1(t,s)=\frac{\sin \left( \sqrt{k+1}\, (t-s)\right) }{\sqrt{k+1}}\chi _0^t(s),\ t\in \mathbb {R},\end{aligned}$$
$$\begin{aligned}\tilde{G}_2(t,s)=\frac{\sin \left( \sqrt{k-1}\, (t-s)\right) }{{\sqrt{k-1}}}\chi _0^t(s),\ t\in \mathbb {R},\end{aligned}$$

where \(\chi _0^t(s)=1\) if \(s\in [0,t]\), \(\chi _0^t(s)=-1\) if \(s\in [t,0)\), and \(\chi _0^t(s)=0\) otherwise. Then, the Green's function for problem (4.6) is

$$\begin{aligned}&G(t,s) = \int _{s}^t\tilde{G}_1(t,r)\tilde{G}_2(r,s){\text {d}}r \\&\quad = \frac{1}{2\sqrt{k^2-1}} \left[ \sqrt{k-1} \sin \left( \sqrt{k+1} (s-t)\right) {-}\sqrt{k+1} \sin \left( \sqrt{k-1} (s{-}t)\right) \right] \chi _0^t(s),\end{aligned}$$

Observe that

$$\begin{aligned} R_\vdash G(t,s)=-\left[ \frac{\sin \left( \sqrt{k-1} (s-t)\right) }{2 \sqrt{k-1}}+\frac{\sin \left( \sqrt{k+1} (s-t)\right) }{2 \sqrt{k+1}}\right] \chi _0^t(s). \end{aligned}$$

Hence, considering

$$\begin{aligned}A(t):=\frac{2}{p+2}|t|^\frac{p}{2}t, \end{aligned}$$

the Green’s function of problem (4.4) follows the expression

$$\begin{aligned}H(t,s):= \root 4 \of {\frac{a(s)}{a(t)}}G(A(t),A(s)), \end{aligned}$$

This is,

$$\begin{aligned}H(t,s)=&{-}\left| \frac{s}{t}\right| ^\frac{p}{4}\left[ \frac{\sin \left( \frac{2 \sqrt{k-1} \left( s \left| s\right| ^{p/2}-t \left| t\right| ^{p/2}\right) }{p+2}\right) }{2 \sqrt{k{-}1}}+\frac{\sin \left( \frac{2 \sqrt{k+1} \left( s \left| s\right| ^{p/2}{-}t \left| t\right| ^{p/2}\right) }{p+2}\right) }{2 \sqrt{k+1}}\right] \chi _0^t(s).\end{aligned}$$

5 The Hilbert Transform and Other Algebras

In this section, we devote our attention to new algebras to which we can apply the previous results. To achieve this goal, we recall the definition and some remarkable properties of the Hilbert transform (see [11]).

We define the Hilbert transform \(\mathsf {H}\) of a function f as

$$\begin{aligned} \mathsf {H} f(t):=\frac{1}{\pi }\lim _{\epsilon \rightarrow 0^+}\int _{\epsilon \le |t-s|\le 1/\epsilon }\frac{f(s)}{t-s}\text {d}{s}\equiv \frac{1}{\pi }\int _{-\infty }^\infty \frac{f(s)}{t-s}\text {d}s, \end{aligned}$$

where the last integral is to be understood as the Cauchy principal value.

Among its properties, we would like to point out the following.

  • \({\mathsf {H}}:L^p({\mathbb {R}})\rightarrow L^p({\mathbb {R}})\) is a linear bounded operator for every \(p\in (1,+\infty )\) and

    $$\begin{aligned} \Vert {\mathsf {H}}\Vert _p={\left\{ \begin{array}{ll} \tan \frac{\pi }{2 p}, &{} p\in (1,2], \\ \cot \frac{\pi }{2 p}, &{} p\in [2,+\infty ), \end{array}\right. } \end{aligned}$$

    in particular \(\Vert {\mathsf {H}}\Vert _2=1\).

  • \(\mathsf {H}\) is an anti-involution: \({\mathsf {H}}^2=-{{\mathrm{Id}}}\).

  • Let \(\sigma (t)=at+b\) for \(a,b\in {\mathbb {R}}\). Then \({\mathsf {H}}\sigma ^*={{\mathrm{sign}}}(a)\sigma ^*{\mathsf {H}}\) (in particular, \({\mathsf {H}}\varphi ^*=-\varphi ^*{\mathsf {H}}\)). Furthermore, if a linear bounded operator \(\mathsf {O}:L^p({\mathbb {R}})\rightarrow L^p({\mathbb {R}})\) satisfies this property, \(\mathsf {O}=\beta \mathsf H\), where \(\beta \in {\mathbb {R}}\).

  • \({\mathsf {H}}\) commutes with the derivative: \({\mathsf {H}}D=D{\mathsf {H}}\).

  • \({\mathsf {H}}(f*g)=f*{\mathsf {H}}g={\mathsf {H}}f*g\), where \(*\) denotes the convolution.

  • \({\mathsf {H}}\) is an isometry in \(L^2({\mathbb {R}})\): \(\left<{\mathsf {H}}f,{\mathsf {H}}g\right>=\left<f,g\right>\), where \(\left<\ ,\ \right>\) is the scalar product in \(L^2({\mathbb {R}})\). In particular \(\Vert {\mathsf {H}}f\Vert _2=\Vert f\Vert _2\).
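Several of these properties can be observed numerically with the FFT-based discrete Hilbert transform in SciPy (a sketch; scipy.signal.hilbert returns the analytic signal \(f+i\,{\mathsf {H}}f\) of a periodically sampled signal, so this is only an approximation of the transform on \({\mathbb {R}}\), and the grid covering exactly 100 periods is an arbitrary choice):

```python
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0, 200*np.pi, 2**16, endpoint=False)   # 100 full periods
H = lambda f: np.imag(hilbert(f))                       # discrete Hilbert transform

print(np.max(np.abs(H(np.cos(t)) - np.sin(t))))         # ~ 1e-13 : H cos = sin
print(np.max(np.abs(H(H(np.cos(t))) + np.cos(t))))      # ~ 1e-13 : H^2 = -Id
```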

Consider now the same construction we did for \({\mathbb {R}}[D,\varphi ^*]\), replacing \(\varphi ^*\) with \({\mathsf {H}}\), and denote the resulting algebra by \({\mathbb {R}}[D,{\mathsf {H}}]\). In this case, we are dealing with a commutative algebra. Actually, this algebra is isomorphic to the algebra of complex polynomials \({\mathbb {C}}[D]\). Just consider the isomorphism \(\Xi :{\mathbb {R}}[D,{\mathsf {H}}]\rightarrow {\mathbb {C}}[D]\) given by

$$\begin{aligned} \Xi \left( \sum _j(a_j{\mathsf {H}}+b_j)D^j\right) =\sum _j(a_j\,i+b_j)D^j. \end{aligned}$$

Observe that \(\Xi |_{{\mathbb {R}}[D]}={{\mathrm{Id}}}|_{{\mathbb {R}}[D]}\).

We now state a result analogous to Theorem 2.1.

Theorem 5.1

Take

$$\begin{aligned} L=\sum \limits _{j}(a_j{\mathsf {H}}+b_j)D^j\in {\mathbb {R}}[D,{\mathsf {H}}] \end{aligned}$$

and define

$$\begin{aligned} R=\sum \limits _{j}(a_j{\mathsf {H}}-b_j)D^j. \end{aligned}$$

Then \(\textit{LR}=\textit{RL}\in {\mathbb {R}}[D]\).

Remark 5.1

Theorem 5.1 is clear from the point of view of \({\mathbb {C}}[D]\). Since \(\Xi (R)=-\overline{\Xi (L)}\),

$$\begin{aligned} \textit{RL}=\Xi ^{-1}(-\Xi (L)\overline{\Xi (L)})=\Xi ^{-1}(-|\Xi (L)|^2). \end{aligned}$$

Therefore, since \(|\Xi (L)|^2\in {\mathbb {R}}[D]\), we conclude \(\textit{RL}\in {\mathbb {R}}[D]\).
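The remark can be replayed directly in \({\mathbb {C}}[D]\) with SymPy (a sketch with arbitrary sample coefficients; D is declared real so that conjugation only affects the coefficients):

```python
import sympy as sp

D = sp.Symbol('D', real=True)
XiL = (1 + 2*sp.I)*D**2 + (3 - sp.I)*D + (1 + sp.I)   # Xi(L), sample element
XiR = -sp.conjugate(XiL)                              # Xi(R) = -conj(Xi(L))
RL = sp.expand(XiR*XiL)                               # = -|Xi(L)|^2
print(RL)                                             # real coefficients only
print([sp.im(RL.coeff(D, j)) for j in range(5)])      # [0, 0, 0, 0, 0]
```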

Remark 5.2

Since \({\mathbb {R}}[D,{\mathsf {H}}]\) is isomorphic to \({\mathbb {C}}[D]\), the Fundamental Theorem of Algebra also applies to \({\mathbb {R}}[D,{\mathsf {H}}]\), which provides a clear classification of the decompositions of an element of \({\mathbb {R}}[D,{\mathsf {H}}]\), in contrast with those of \({\mathbb {R}}[D,\varphi ^*]\), which was shown in Sect. 2 not to be a unique factorization domain.

In the following example, we will use some properties of the Hilbert transform:

$$\begin{aligned} {\mathsf {H}}\cos&=\sin , \\ {\mathsf {H}}\sin&=-\cos , \\ {\mathsf {H}}( tf(t)){(t)}&=t\,{\mathsf {H}}f(t)-\frac{1}{\pi }\int _{-\infty }^\infty f(s)\text {d}s, \end{aligned}$$

where the integral is considered as the principal value.

Example 5.1

Consider the problem

$$\begin{aligned} Lu(t)\equiv u'(t)+a{\mathsf {H}}u(t)=h(t):=\sin a t,\ u(0)=0, \end{aligned}$$
(5.1)

where \(a>0\). Composing the operator \(L=D+a{\mathsf {H}}\) with the operator \(R=D-a{\mathsf {H}}\), we obtain \(S=\textit{RL}=D^2+a^2\), the harmonic oscillator operator. The extra boundary condition obtained by applying R is \(u'(0)-a{\mathsf {H}}u(0)=0\). The general solution of the problem \(u''(t)+a^2u(t)=Rh(t)=2a\cos at\), \(u(0)=0\), is given by

$$\begin{aligned} v(t)=\int _0^t\frac{\sin \left( a\, [t-s]\right) }{a}Rh(s)\text {d}s+\alpha \sin at=(t+\alpha )\sin at, \end{aligned}$$

where \(\alpha \) is a real constant. Hence,

$$\begin{aligned} {\mathsf {H}}v(t)=-(t+\alpha )\cos at. \end{aligned}$$

If we impose the boundary condition \(u'(0)-a{\mathsf {H}}u(0)=0\), then we get \(\alpha =0\). Hence, the unique solution of problem (5.1) is

$$\begin{aligned} u(t)=t\sin at. \end{aligned}$$
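The computations of this example can be verified with SymPy (a sketch; the expression for \({\mathsf {H}}u\) is obtained from the properties listed above, not computed by the library):

```python
import sympy as sp

t, a, alpha = sp.symbols('t a alpha', positive=True)

# v = (t + alpha) sin(at) solves the reduced problem v'' + a^2 v = 2a cos(at):
v = (t + alpha)*sp.sin(a*t)
print(sp.simplify(sp.diff(v, t, 2) + a**2*v - 2*a*sp.cos(a*t)))   # 0

# For u = t sin(at), H u = -t cos(at) (by H(t f) = t Hf - (1/pi) int f,
# H sin = -cos, and a vanishing principal-value integral of sin), hence:
u, Hu = t*sp.sin(a*t), -t*sp.cos(a*t)
print(sp.simplify(sp.diff(u, t) + a*Hu - sp.sin(a*t)))            # 0
```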

Remark 5.3

It is easy to check that the kernel of \(D+a{\mathsf {H}}\) (\(a>0\)) is spanned by \(\sin at\) and \(\cos at\) and, therefore, the kernel of \(D-a{\mathsf {H}}\) is just 0. This defies, in the line of Remark 3.2, the usual relation between the order of the operator and the dimension of the kernel which holds for ODEs, namely, that the operator of a linear ODE of order n has a kernel of dimension n. In this case, we have the order-one operator \(D+a{\mathsf {H}}\) with a two-dimensional kernel, and the injective order-one operator \(D-a{\mathsf {H}}\).

Now, we consider operators combining the reflection and the Hilbert transform, and denote the resulting algebra by \({\mathbb {R}}[D,{\mathsf {H}},\varphi ^*]\). We can again state a reduction theorem.

Theorem 5.2

Take

$$\begin{aligned} L=\sum _{i}a_i\varphi ^*{\mathsf {H}}D^i+\sum _{i}b_i{\mathsf {H}}D^i + \sum _{i}c_i\varphi ^* D^i+\sum _{i}d_i D^i\in {\mathbb {R}}[D,{\mathsf {H}},\varphi ^*] \end{aligned}$$

and define

$$\begin{aligned} R=\sum _{j}a_j\varphi ^*{\mathsf {H}}D^j+\sum _{j}(-1)^jb_j{\mathsf {H}}D^j + \sum _{j}c_j\varphi ^* D^j-\sum _{j}(-1)^jd_j D^j. \end{aligned}$$

Then \(\textit{LR}=\textit{RL}\in {\mathbb {R}}[D]\).

5.1 Hyperbolic Numbers as Operators

Finally, we use the same idea behind the isomorphism \(\Xi \) to construct an operator algebra isomorphic to the algebra of polynomials on the hyperbolic numbers.

The hyperbolic numbers are defined, in a similar way to the complex numbers, as follows:

$$\begin{aligned} {\mathbb {D}}=\{x+jy\ :\ x,y\in {\mathbb {R}},\ j\not \in {\mathbb {R}},\ j^2=1\}. \end{aligned}$$

The arithmetic in \({\mathbb {D}}\) is that obtained assuming the commutative, associative, and distributive properties for the sum and product. In a parallel fashion to the complex numbers, if \(w\in {\mathbb {D}}\), with \(w=x+jy\), we can define

$$\begin{aligned} \overline{w}: =x-jy,\quad \mathfrak {R}(w):=x,\quad \mathfrak {I}(w):= y, \end{aligned}$$

and, since \(w\overline{w}=x^2-y^2\in {\mathbb {R}}\), we set

$$\begin{aligned} |w|:=\sqrt{|w\overline{w}|}, \end{aligned}$$

which is called the Minkowski norm. It is clear that \(|w_1w_2|=|w_1||w_2|\) for every \(w_1,w_2\in {\mathbb {D}}\) and, if \(|w|\ne 0\), then \(w^{-1}=\overline{w}/|w|^2\). If we add the norm

$$\begin{aligned} \Vert w\Vert =\sqrt{2(x^2+y^2)}, \end{aligned}$$

we have that \(({\mathbb {D}}, \Vert \cdot \Vert )\) is a Banach algebra, so the exponential and the hyperbolic trigonometric functions are well defined. Although, unlike \(\mathbb C\), \({\mathbb {D}}\) is not a division algebra (not every non-zero element has an inverse), we can develop a calculus (differentiation, integration, holomorphic functions, etc.) for \({\mathbb {D}}\) as well [1].

In this setting, we want to derive an operator J, defined on a suitable space of functions, which satisfies the same algebraic properties as the hyperbolic imaginary unit j. In other words, we want the map \(\Theta :{\mathbb {R}}[D,J]\rightarrow {\mathbb {D}}[D]\) such that \(\Theta (J)=j\) and \(\Theta |_{{\mathbb {R}}[D]}={{\mathrm{Id}}}|_{{\mathbb {R}}[D]}\) to be an algebra isomorphism. This implies:

  • J is a linear operator,

  • \(J\not \in {\mathbb {R}}[D]\),

  • \(J^2={{\mathrm{Id}}}\), that is, J is an involution,

  • \(JD=DJ\).

There is a simple characterization of linear involutions on a vector space: every linear involution J is of the form

$$\begin{aligned} J=\pm (2P-{{\mathrm{Id}}}), \end{aligned}$$

where P is a projection operator, that is, \(P^2=P\). It is clear that \(\pm (2P-{{\mathrm{Id}}})\) is, indeed, a linear operator and an involution. On the other hand, it is simple to check that, if J is a linear involution, then \(P:=(\pm J+{{\mathrm{Id}}})/2\) is a projection, so \(J=\pm (2P-{{\mathrm{Id}}})\).

Hence, it is sufficient to look for a projection P commuting with the derivative.

Example 5.2

Consider the space \(W={{\mathrm{L^2}}}([-\pi ,\pi ])\) and define

$$\begin{aligned} P f(t):=\frac{1}{\pi }\sum _{n=1}^{\infty }\left[ \int _{-\pi }^\pi f(s)\cos (2\,n\,s)\text {d}s\,\cos (2\,n\,t)+\int _{-\pi }^\pi f(s)\sin (2\,n\,s)\text {d}s\,\sin (2\,n\,t)\right] \text { for every }f\in W, \end{aligned}$$

that is, take only the even-frequency terms of the Fourier series of f (dropping the constant term). Clearly \(PD=DP\), and \(J:=2P-{{\mathrm{Id}}}\) satisfies the aforementioned properties.
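A discrete version of this example (a sketch using the FFT on a uniform grid of \([-\pi ,\pi )\); the test function is arbitrary and the spectral derivative stands in for D) exhibits both \(J^2={{\mathrm{Id}}}\) and the commutation with the derivative:

```python
import numpy as np

N = 256
t = np.linspace(-np.pi, np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, 1.0/N)               # integer frequencies

def P(f):
    """Keep only the even-frequency part of the Fourier series of f."""
    F = np.fft.fft(f)
    F[(k % 2 != 0) | (k == 0)] = 0         # drop odd modes and the constant
    return np.real(np.fft.ifft(F))

J = lambda f: 2*P(f) - f                   # J = 2P - Id, an involution
D = lambda f: np.real(np.fft.ifft(1j*k*np.fft.fft(f)))   # spectral derivative

f = np.exp(np.sin(t))                      # smooth 2*pi-periodic test function
print(np.max(np.abs(J(J(f)) - f)))         # ~ 1e-16 : J^2 = Id
print(np.max(np.abs(J(D(f)) - D(J(f)))))   # ~ 1e-13 : JD = DJ
```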

The algebra \({\mathbb {R}}[D,J]\), being isomorphic to \({\mathbb {D}}[D]\), also satisfies very good algebraic properties (see, for instance, [14]). In order to obtain a theorem analogous to Theorem 2.1 for the algebra \({\mathbb {R}}[D,J]\), it is enough to take, as in the case of \({\mathbb {R}}[D,{\mathsf {H}}]\), \(R=\Theta ^{-1}(\overline{\Theta (L)})\).