1 Introduction

The theory of Mellin transforms as well as Mellin approximation theory was introduced by R.G. Mamedov in his treatise [45], which also includes previous results on this subject obtained in collaboration with G.N. Orudzhev (see [46–48]). In his review (MR1235339–94:44003) Professor H.J. Glaeske writes: "This book deals with the theory of the Mellin transform and its applications to approximation theory based on results of the school of I.M. Dzhrbashyan and the methods of the school of P.L. Butzer on Fourier Analysis and approximation." Somewhat later Mellin transform theory was presented in a systematic form, fully independently of Fourier analysis, by Butzer and Jansche in their papers [18, 19]. Further important developments were then given in [20], and later on in the present line of research in [2, 3, 6–12, 49].

In the papers [22–26] a broad study of fractional Mellin analysis was developed, in which the so-called Hadamard-type integrals, which represent the appropriate extensions of the classical Riemann–Liouville and Weyl fractional integrals, are considered (see also the book [42]). These integrals are also connected with the theory of moment operators (see [11, 12, 14]). The purpose of this article is not only to continue these topics but also to present a new, almost independent approach, one starting from the very foundations. As remarked in [22], in terms of Mellin analysis, the natural operator of fractional integration is not the classical Riemann–Liouville fractional integral of order \(\alpha >0\) on \(\mathbb {R}^+\), namely (see [33, 34, 50, 55])

$$\begin{aligned} \left( I^\alpha _{0+}f\right) (x) = \frac{1}{\Gamma (\alpha )}\int _0^x (x-u)^{\alpha -1}f(u)du\,\,(x>0) \end{aligned}$$
(1)

but the Hadamard fractional integral, introduced essentially by Hadamard in [39],

$$\begin{aligned} \left( J^\alpha _{0+} f\right) (x) = \frac{1}{\Gamma (\alpha )} \int _0^x \bigg ( \log \frac{x}{u} \bigg )^{\alpha -1} f(u) \frac{du}{u}\,\,(x >0). \end{aligned}$$
(2)

Thus the natural operator of integration (anti-differentiation) in the Mellin setting is not \(\int _0^xf(u)du\) but \(\int _0^x f(u)\frac{du}{u}\) (the case \(\alpha =1\)). It is often said that a study of Mellin transforms as an independent discipline is fully superfluous, since one could supposedly reduce its theorems and results to the corresponding ones of Fourier analysis by a simple change of variables and functions. It may be possible to reduce a formula by such a change of operations, but not the precise hypotheses under which the formula is valid. But already the fact that (1) is not the natural operator of integration in the Mellin frame, whereas the Hadamard fractional integral (2) [which is a compact form of the iterated integral (6) (see Sect. 4)] will turn out to be the operator of integration, thus the anti-differentiation corresponding to the operator of differentiation \(D_{0+,0}f\) in (4) (see below), in the sense that the fundamental theorem of the differential and integral calculus must be valid in the Mellin frame, makes the change of operation argument fully obsolete. This will become evident as we proceed, especially in Theorems 3–4 and Theorems 6–12 below. Thus the very foundations of Mellin analysis are quite different from those of classical Fourier analysis. As a final remark, if one goes beyond real variables and considers complex variables, the usual substitution \(w=e^z\) does not work. Indeed, while the range of \(e^x\) is the positive real semiaxis \(\mathbb {R}^+,\) in the complex case the range of \(e^z\) is the set of all non-zero complex numbers. In fact, \(e^z\) may be a negative real number and therefore lie outside the domain of the functions involved.

For the development of the theory, it will be important to consider the following generalization of the fractional integral, known as the Hadamard-type fractional integrals, for \(\mu \in \mathbb {R},\) namely (see [22–26, 42])

$$\begin{aligned} \left( J^\alpha _{0+, \mu } f\right) (x) = \frac{1}{\Gamma (\alpha )} \int _0^x \bigg ( \frac{u}{x}\bigg )^\mu \bigg ( \log \frac{x}{u} \bigg )^{\alpha -1} f(u) \frac{du}{u}\,\,(x>0) \end{aligned}$$
(3)

for functions belonging to the space \(X_c\) of all measurable complex-valued functions defined on \(\mathbb {R}^+,\) such that \((\cdot )^{c-1}f(\cdot ) \in L^1(\mathbb {R}^+).\) As regards the classical Hadamard fractional integrals and derivatives, some introductory material about fractional calculus in the Mellin setting was already treated in [45] and [55].
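The following minimal numerical sketch evaluates the Hadamard-type fractional integral (3) by quadrature and checks it against the closed form \(\left( J^\alpha _{0+,c}\, u^k\right) (x) = (c+k)^{-\alpha }x^k,\) valid when \(c+k>0\) (cf. the series representation recalled in Sect. 4); the helper name, the test function and the parameter values are merely illustrative.

```python
# Hadamard-type fractional integral (3) by quadrature, checked against
# (J^alpha_{0+,c} u^k)(x) = (c+k)^(-alpha) x^k for c + k > 0.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def hadamard_type_integral(f, x, alpha, c):
    """(J^alpha_{0+,c} f)(x) as a Lebesgue-type integral over (0, x)."""
    integrand = lambda u: (u / x)**c * np.log(x / u)**(alpha - 1) * f(u) / u
    value, _ = quad(integrand, 0.0, x)
    return value / gamma(alpha)

alpha, c, k, x = 1.5, 2.0, 3, 1.25
f = lambda u: u**k
print(hadamard_type_integral(f, x, alpha, c), (c + k)**(-alpha) * x**k)
# the two printed values should agree to quadrature accuracy
```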

In Sect. 2 we recall some basic tools and notations of Mellin analysis, namely the Mellin transform, along with its fundamental properties, and the notion of the basic Mellin translation operator, which is now defined via a dilation operator instead of the usual translation (see [18]; for other classical references see [15, 36, 52, 58, 62, 63]).

In Sect. 3 we will introduce and study a notion of strong fractional derivative in the spaces \(X_c,\) which represents an extension of the classical strong derivative of Fourier analysis in \(L^p\)-spaces (see [28]). The present notion is inspired by an analogous construction given in [33, 60] for the Riemann–Liouville fractional derivatives in the strong sense. That method is based on the introduction of certain fractional differences, which make use of the classical translation operator; an important point is that fractional differences are now defined by an infinite series. Our definition here follows this approach, using the Mellin translation operator, and it reproduces the Mellin differences of integral order, as given in [20], in which the sum is finite.

It should be noted that a different approach for spaces \(X_0\) was introduced in [45], pp. 175–176, starting with the incremental ratios of the integral (2). A relevant part of the present paper (Sect. 4) deals with the pointwise fractional derivative of order \(\alpha >0,\) known as the "Hadamard-type fractional derivative", in the local spaces \(X_{c,loc},\) and with its links with the strong derivatives. This notion originates from the analogous concept of Riemann–Liouville theory, and was introduced in [23] using the Hadamard-type fractional integrals. It reads as follows

$$\begin{aligned} \left( D^\alpha _{0+,\mu } f\right) (x) = x^{-\mu } \delta ^m x^\mu \left( J^{m- \alpha }_{0+, \mu } f\right) (x), \end{aligned}$$
(4)

where \(m = [\alpha ] +1\) and \(\delta :=\left( x \frac{d}{dx}\right) \) is the Mellin differential operator \((\delta f)(x) = x f'(x),\) provided \(f'(x)\) exists. For \(\mu = 0\) we have the so-called Hadamard fractional derivative, treated also in [45, 55]. Note that the above definition reproduces exactly the Mellin derivatives \(\Theta _c^kf\) of integral order when \(\alpha = k \in \mathbb {N}.\) Thus \(D^\alpha _{0+,\mu } f\) represents the natural fractional version of the differential operator \(\Theta _c^k,\) in the same way that the Riemann–Liouville fractional derivative is the natural extension of the usual derivative. The paper [41] gives some sufficient conditions for the existence of the pointwise fractional derivative for functions defined on bounded intervals \(I \subset \mathbb {R}^+,\) involving spaces of absolutely continuous functions on \(I.\)

Since the definition of the pointwise fractional derivatives is based on a Hadamard-type integral, it is important to study in depth the domain and the range of these integral operators. As far as we are aware, this has not been sufficiently developed in the literature so far. Here we define the domain of the operator (3) as the subspace of all functions such that the integral exists as a Lebesgue integral. A basic result in this respect is the semigroup property of \(J^\alpha _{0+,c}.\) This was first studied in [45] and [55] for the Hadamard integrals (2) and then developed for the integrals (3) in [24] and [41] (see also the recent books [5, 42]). However, the above property was studied only for functions belonging to suitable subspaces of the domain, namely the space \(X^p_{c}\) of all the functions \(f: \mathbb {R}^+ \rightarrow \mathbb {C}\) such that \((\cdot )^{c-1/p}f(\cdot ) \in L^p(\mathbb {R}^+),\) or for \(L^p(a,b)\) where \(0< a < b < \infty .\)

Here we prove the semigroup property in a more general form, using minimal assumptions. This extension enables us to deduce the following chain of inclusions for the domains of the operators \(J^\alpha _{0+,c}\):

$$\begin{aligned} Dom J^\beta _{0+,c} \subset X_{c,loc} = Dom J^1_{0+, c} \subset Dom J^\alpha _{0+,c}, \end{aligned}$$

for \(\alpha < 1 < \beta ,\) and all inclusions are strict.

Concerning the range, we show that \(J^\alpha _{0+,c}f \in X_{c,loc}\) whenever \(f\in Dom J^{\alpha +1}_{0+,c},\) and that in general \(f \in Dom J^\alpha _{0+,c}\) does not imply that \(J^\alpha _{0+,c}f \in X_{c,loc}.\)

For spaces \(X_c\) we have the surprising result that \(J^\alpha _{0+,c}f \not \in X_c\) for any nontrivial non-negative function \(f \in Dom J^\alpha _{0+,c}.\) This fact creates difficulties for the evaluation of the Mellin transform of the function \(J^\alpha _{0+,c}f.\) In order to avoid this problem, we prove that if \(f \in Dom J^\alpha _{0+,c} \cap \bigcap _{\mu \in [\nu ,c]}X_\mu ,\) then \(J^\alpha _{0+,c}f \in X_\nu \) and so its Mellin transform can be evaluated on the line \(\nu + it,\) with \(\nu < c.\)

We then apply the theory to deduce one of the main results of this paper, namely the fundamental theorem of the fractional differential and integral calculus in the Mellin frame, here established under sharp assumptions. We consider also some more general formulae, involving different orders of fractional integration and differentiation. Similar results were also given in [41, 42], however in restricted subspaces (see the remarks in Sect. 4). In particular, one of the two fundamental formulae is given there under the strong assumption that the function \(f\) belongs to the range \(J^\alpha _{0+, \mu }\left( X^p_{c}\right) ,\) with \(\mu > c.\)

In Sect. 5 we prove an equivalence theorem with four equivalent statements, which connects fractional Hadamard-type integrals, strong and pointwise fractional Mellin derivatives and the Mellin transform (see Theorem 8 below). As far as we know, a fundamental theorem with four equivalent assertions in the form presented here for the Mellin transform in the fractional case has never been stated explicitly for the Fourier transform.

A fundamental theorem in the present sense was first established for \(2\pi \)-periodic functions via the finite Fourier transform in [33], and for the Chebyshev transform (see e.g. [36], pp. 116–122) in [30, 31]. Fractional Chebyshev derivatives were there defined in terms of fractional-order differences of the Chebyshev translation operator, and the Chebyshev integral by an associated convolution product. The next fundamental theorem, after that for Legendre transforms (see e.g. [36], pp. 122–131; [16, 57]), was the one concerned with the Jacobi transform, see e.g. [32]. In their inimitable book [36], Glaeske et al. study the Mellin transform and its essential properties (pp. 55–67), not as an independent discipline but by making use of the corresponding properties of the Fourier transform, the reduction being carried out with unusual precision. In other respects their presentation is standard. Thus their integral is the classical one, i.e. \(F(x) = \int _0^x f(u)du,\) with Mellin transform \(M[F](s) = -s^{-1}M[f](s+1).\) They were not aware of [18]. However, their sections on the Chebyshev, Legendre, Gegenbauer and Jacobi transforms make interesting reading and are unorthodox. Here their chief properties are based on the definitions of an associated translation operator for each transform, an approach carried out systematically for the Chebyshev and Legendre transforms in [31] and [57], which are cited by the three authors. However, they do not continue the process and define the associated derivative concepts in terms of the respective translation operators (probably due to lack of space). This would have led them to the fundamental theorems of the differential and integral calculus in the setting of the respective transforms. Nevertheless the material of these sections has never been treated in book form as yet. The chapter on Mellin transforms in the unique handbook [62], also written in the classical style, bears the individual stamp of the author, Zayed.

In Sect. 6 we describe some special cases of interest in applications, while in Sect. 7 we apply our theory to two fractional partial differential equations. The use of Mellin transforms for solving partial differential equations originates from certain boundary value problems in wedge-shaped regions, see e.g. [43, 63], and, in the fractional frame, was considered by various authors for the study of fractional versions of the diffusion equation (see e.g. [40, 42, 56, 61]). However, the use of Mellin transforms for solving fractional differential equations with Hadamard derivatives is not usual. Also, there are only a few contributions dealing with pure Hadamard derivatives (see e.g. [5, 37, 42, 44, 53]); most fractional equations are studied using different types of fractional derivatives (Riemann–Liouville, Caputo, etc.). Here we apply our theory to an integro-differential equation which can be reduced to a fractional evolution equation with Hadamard fractional derivative. A similar equation was also considered in [42], but with the Caputo fractional derivative. Here we give the exact solution of the evolution equation, using just Mellin transforms and the fractional theory developed in this paper. As a second example, we consider a boundary value problem for a fractional diffusion equation, using the same approach. In both examples the (unique) solution is given in terms of a Mellin convolution operator.

In the very recent book [5] numerical methods for solving fractional differential equations are treated, using mainly the Caputo and Riemann–Liouville fractional theories.

2 Preliminaries

Let \(L^1 = L^1(\mathbb {R}^+)\) be the space of all Lebesgue measurable and integrable complex-valued functions defined on \(\mathbb {R}^+,\) endowed with the usual norm.

Let us consider the space, for some \(c \in \mathbb {R},\)

$$\begin{aligned} X_c = \left\{ f: \mathbb {R}^+ \rightarrow \mathbb {C} : f(x) x^{c-1} \in L^1 (\mathbb {R}^+ )\right\} \end{aligned}$$

endowed with the norm

$$\begin{aligned} \Vert f\Vert _{X_c} = \Vert f(\cdot ) (\cdot )^{c-1} \Vert _{L^1} = \int _0^\infty |f(u)| u^{c-1} du. \end{aligned}$$

More generally by \(X^p_c\) we denote the space of all functions \(f: \mathbb {R}^+ \rightarrow \mathbb {C}\) such that \((\cdot )^{c-1/p} f(\cdot ) \in L^p(\mathbb {R}^+),\) with \(1< p < \infty .\) In particular when \(c=1/p,\) the space \(X^p_c\) coincides with the classical \(L^p(\mathbb {R}^+)\) space.

For \(a,b \in \mathbb {R}\) we define the spaces \(X_{(a,b)},\,X_{[a,b]}\) by

$$\begin{aligned} X_{(a,b)} = \bigcap _{c \in ]a,b[}X_c,\,\,X_{[a,b]} = \bigcap _{c \in [a,b]}X_c \end{aligned}$$

and, for every \(c\) in \((a,b)\) or \([a,b]\), \(\Vert f\Vert _{X_c}\) is a norm on them.

Note that, for any \(a,b \in \mathbb {R},\) with \(a<b,\) if \(f \in X_a \cap X_b\), then \(f \in X_{[a,b]}\) and moreover

$$\begin{aligned} \Vert f\Vert _{X_c} \le \Vert f\Vert _{X_a} + \Vert f\Vert _{X_b}, \end{aligned}$$

for every \(c \in [a,b].\) For these and other results see [18].

In what follows, we denote by \(\chi _A(x)\) the characteristic function of the set \(A \subset \mathbb {R}^+.\)

We define for every \(f\in X_c\) the Mellin transform \([f]^\wedge _M\) of \(f\) by

$$\begin{aligned} M[f](s) \equiv [f]^{\wedge }_M (s) = \int _0^\infty u^{s-1} f(u) du \end{aligned}$$

where \(s=c+ it, t\in \mathbb {R}.\)

The notation \(M[f(\cdot )](s)\) of the Mellin transform signifies the fact that one of its essential roles is to solve analytical problems by transforming them into another function space, solve the problem (which should be simpler) in the transformed state, and then apply a (suitable) Mellin inversion formula to obtain the solution in the original function space.

Basic in this respect are the linearity and boundedness properties, thus

$$\begin{aligned} M[a f(\cdot ) + b g(\cdot )](s) = a M[f(\cdot )] (s)+ b M[g(\cdot )](s)\,\,(f,g \in X_c,\,a,b \in \mathbb {R}) \end{aligned}$$
$$\begin{aligned} |M[f(\cdot )] (s)| \le \Vert f\Vert _{X_c}\,\,\,(s = c+it). \end{aligned}$$

As a consequence of the boundedness property, if \((f_n)_n\) is a sequence of functions in \(X_c\) convergent in \(X_c\) to a function \(f,\) then \(M[f_n]\) converges uniformly to \(M[f]\) on the line \(s=c+it,\,t \in \mathbb {R}.\)
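As a minimal numerical sketch, the Mellin transform on the line \(s=c+it\) can be approximated by quadrature; the fragment below does this for \(f(u)=e^{-u},\) whose Mellin transform is the Gamma function. The helper name and the truncation of the improper integral are merely illustrative.

```python
# Numerical Mellin transform M[f](c + it), split into real and imaginary
# parts, checked against M[e^{-u}](s) = Gamma(s).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def mellin_transform(f, c, t, upper=60.0):
    """Approximate M[f](c + it); 'upper' truncates the improper integral."""
    re, _ = quad(lambda u: u**(c - 1) * np.cos(t * np.log(u)) * f(u), 0.0, upper)
    im, _ = quad(lambda u: u**(c - 1) * np.sin(t * np.log(u)) * f(u), 0.0, upper)
    return complex(re, im)

c, t = 1.5, 2.0
f = lambda u: np.exp(-u)
print(mellin_transform(f, c, t), gamma(complex(c, t)))  # should nearly coincide
```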

We need several operational properties.

The Mellin translation operator \(\tau _h^c\), for \(h \in \mathbb {R}^+,\,c \in \mathbb {R},\) \(f:\mathbb {R}^+ \rightarrow \mathbb {C},\) is defined by

$$\begin{aligned} \left( \tau _h^c f\right) (x) := h^c f\left( hx\right) \,\,\left( x\in \mathbb {R}^+\right) . \end{aligned}$$

Setting \(\tau _h:= \tau ^0_h,\) then

$$\begin{aligned} \left( \tau _h^cf\right) (x) \!=\! h^c (\tau _hf)(x),\,\Vert \tau _h^c f\Vert _{X_c} \!=\! \Vert f\Vert _{X_c},\,\left( \tau _h^c\right) ^j f(x) \!=\! h^{jc} f\left( h^j x\right) \!= \!\left( \tau _{h^j}^c f\right) (x). \end{aligned}$$
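A minimal numerical sketch of the first of these identities, checking \(\Vert \tau _h^c f\Vert _{X_c} = \Vert f\Vert _{X_c}\) by quadrature for \(f(x)=e^{-x};\) the helper name, the parameter values and the truncation of the integral are merely illustrative.

```python
# Check the norm invariance of the Mellin translation operator numerically.
import numpy as np
from scipy.integrate import quad

def x_c_norm(f, c, upper=200.0):
    """||f||_{X_c}, i.e. the integral of |f(u)| u^(c-1) over (0, 'upper')."""
    value, _ = quad(lambda u: abs(f(u)) * u**(c - 1), 0.0, upper)
    return value

c, h = 1.5, 0.4
f = lambda x: np.exp(-x)
tau_h_c_f = lambda x: h**c * f(h * x)          # (tau_h^c f)(x) = h^c f(hx)

print(x_c_norm(f, c), x_c_norm(tau_h_c_f, c))  # both values equal Gamma(1.5) here
```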

Proposition 2 and Lemma 3 in [18] state the following:

Lemma 1

The Mellin translation operator \(\tau _h^{\overline{c}}: X_c \rightarrow X_c\) for \(c,\overline{c} \in \mathbb {R},\,h \in \mathbb {R}^+\) is an isomorphism with \((\tau _h^{\overline{c}})^{-1} = \tau _{1/h}^{\overline{c}}\) and

$$\begin{aligned} \Vert \tau _h^{\overline{c}}f\Vert _{X_c} = h^{\overline{c} - c}\Vert f\Vert _{X_c}\,~(f \in X_c) \end{aligned}$$

having the properties

  1. (i)

    \(M[\tau _h^{\overline{c}}f] (s) =h^{\overline{c} - s}M[f](s),\) in particular \(M[\tau _h f](s) = h^{-s}M[f](s);\)

  2. (ii)

    \(\lim _{h \rightarrow 1} \Vert \tau _h^{\overline{c}}f - f\Vert _{X_c} =0.\)

When \(\overline{c}=0\) Property (ii), in case of continuous functions \(f\), expresses uniform continuity in the Mellin frame, taking the usual \(L^\infty \)-norm, i.e.

$$\begin{aligned} \lim _{h \rightarrow 1}\Vert \tau _hf - f\Vert _{\infty } =0. \end{aligned}$$

It is equivalent to the so-called log-uniform continuity due to Mamedov (see [45], p. 7), which may be expressed as follows: a function \(f:\mathbb {R}^+ \rightarrow \mathbb {C}\) is log-uniformly continuous on \(\mathbb {R}^+\) if for every \(\varepsilon >0\) there exists \(\delta _\varepsilon >0\) such that \(|f(u) - f(v)| < \varepsilon ,\) whenever \(|\log u - \log v| < \delta _\varepsilon .\) Indeed the continuity of the operator \(\tau _h\) implies that \(|f(hx) - f(x)| < \varepsilon ,\) for \(|\log h|< \delta _\varepsilon ,\) uniformly with respect to \(x \in \mathbb {R}^+.\) It should be noted that this notion is different from the usual uniform continuity. For example, the function \(f(u) = \sin u\) is obviously uniformly continuous, but not log-uniformly continuous on \(\mathbb {R}^+\), while the function \(g(u) = \sin (\log u)\) is log-uniformly continuous but not uniformly continuous on \(\mathbb {R}^+.\) However, the two notions are equivalent on every bounded interval \([a,b]\) with \(a>0.\)

The Mellin convolution product, denoted by \(f*g\), of two functions \(f,g :\mathbb {R}^+ \rightarrow \mathbb {C},\) is defined by

$$\begin{aligned} (f*g)(x) := \int _0^{+\infty } g\left( \frac{x}{u}\right) f(u)\frac{du}{u} = \int _0^{+\infty }\left( \tau ^c_{1/u}f\right) (x)g(u)u^c\frac{du}{u}\,\,\,\left( x \in \mathbb {R}^+\right) \end{aligned}$$

in case the integral exists. It has the properties

Lemma 2

  1. (i)

    If \(f,g \in X_c,\) for \(c \in \mathbb {R},\) then \(f*g\) exists (a.e.) on \(\mathbb {R}^+,\) it belongs to \(X_c\), and

    $$\begin{aligned} \Vert f *g\Vert _{X_c} \le \Vert f\Vert _{X_c} \Vert g\Vert _{X_c}. \end{aligned}$$

    If in addition \(x^c f(x)\) is uniformly continuous on \(\mathbb {R}^+,\) then \(f*g\) is continuous on \(\mathbb {R}^+.\)

  2. (ii)

    (Convolution Theorem) If \(f,g \in X_c\) and \(s=c+it,\,t \in \mathbb {R},\) then

    $$\begin{aligned} M[f *g](s) = M[f](s) M[g](s). \end{aligned}$$
  3. (iii)

    (Commutativity and Associativity) The convolution product is commutative and associative, thus for \(f_1,f_2, f_3 \in X_c\) there holds true (a.e.)

    $$\begin{aligned} f_1 *f_2 = f_2 *f_1,\,\,(f_1 *f_2) *f_3 = f_1 *(f_2 *f_3). \end{aligned}$$

    In particular \(X_c\) is a Banach algebra.
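A minimal numerical sketch of the Convolution Theorem (ii): for \(f=g=e^{-u}\) one has \(M[f](c)=\Gamma (c),\) so \(M[f*g](c)\) should equal \(\Gamma (c)^2;\) the double integral below checks this at a real point \(s=c.\) The truncation of the integration domain and the parameter value are merely illustrative.

```python
# Check M[f*g](c) = M[f](c) M[g](c) for f = g = exp(-u), where M[f](c) = Gamma(c):
# M[f*g](c) = int_0^inf x^(c-1) int_0^inf f(x/u) f(u) du/u dx.
import numpy as np
from scipy.integrate import dblquad
from scipy.special import gamma

c = 2.5
f = lambda u: np.exp(-u)

value, _ = dblquad(lambda u, x: x**(c - 1) * f(x / u) * f(u) / u,
                   0.0, 60.0,     # outer variable x, truncated at 60
                   0.0, 60.0)     # inner variable u, truncated at 60
print(value, gamma(c)**2)         # the two numbers should be close
```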

3 The Strong Mellin Fractional Differential Operator

Let us denote by \(I\) the identity operator over the space of all measurable functions on \(\mathbb {R}^+.\)

The Mellin fractional difference of \(f \in X_c\) of order \(\alpha >0,\) defined by

$$\begin{aligned} \Delta _h^{\alpha , c} f(x):= (\tau _h^c - I)^\alpha f(x) = \sum _{j=0}^\infty \left( \begin{array}{c} \alpha \\ j \end{array} \right) (-1)^{\alpha -j} \tau _{h^j}^c f(x) \end{aligned}$$

for \(h>0\) with

$$\begin{aligned} \left( \begin{array}{c} \alpha \\ j \end{array} \right) = \frac{\alpha (\alpha -1)\cdots (\alpha -j +1)}{j!}, \end{aligned}$$

has the following properties

Proposition 1

For \(f \in X_c\) the difference \(\Delta _h^{\alpha ,c} f(x)\) exists a.e. for \(h >0,\) with

  1. (i)

    \(\Vert \Delta _h^{\alpha ,c} f\Vert _{X_c} \le \Vert f\Vert _{X_c}\sum _{j=0}^\infty \bigg |\left( \begin{array}{c} \alpha \\ j \end{array} \right) \bigg |\)

  2. (ii)

    \(M[\Delta _h^{\alpha ,c} f](c+it) = (h^{-it} -1)^\alpha M[f](c+it).\)

  3. (iii)

    The following semigroup property holds for \(\alpha , \beta >0,\)

    $$\begin{aligned} \left( \Delta _h^{\alpha ,c}\Delta _h^{\beta ,c}f\right) (x) = \left( \Delta _h^{\alpha + \beta ,c}f\right) (x). \end{aligned}$$

Proof

At first, we have for \(x >0,\,h>0\)

$$\begin{aligned} \left| \Delta _h^{\alpha ,c} f (x)\right| \le \frac{1}{x^c}\sum _{j=0}^\infty \bigg | \left( \begin{array}{c} \alpha \\ j \end{array} \right) \bigg | h^{cj}x^c|f(h^jx)|; \end{aligned}$$

thus we have to prove the convergence of the latter series. For this purpose, by integration, we have

$$\begin{aligned}&\int _0^\infty \sum _{j=0}^\infty \bigg | \left( \begin{array}{c} \alpha \\ j \end{array} \right) \bigg |(h^j x)^c |f(h^jx)|\frac{dx}{x} = \sum _{j=0}^\infty \bigg | \left( \begin{array}{c} \alpha \\ j \end{array} \right) \bigg | \int _0^\infty (h^j x)^c |f(h^jx)| \frac{dx}{x} := J. \end{aligned}$$

Now, putting in the second integral \(h^jx = t\), we have

$$\begin{aligned} J = \Vert f\Vert _{X_c}\sum _{j=0}^\infty \bigg | \left( \begin{array}{c} \alpha \\ j \end{array} \right) \bigg |. \end{aligned}$$

Thus, since \(\left( \begin{array}{c} \alpha \\ j \end{array} \right) = \mathcal{O}(j^{-\alpha - 1})\) as \(j \rightarrow + \infty ,\) the integral is finite for any \(h>0\) whenever \(f \in X_c,\) and so the integrand is finite almost everywhere. Hence the original series defining the difference converges almost everywhere.

As to (i), we have

$$\begin{aligned}&\Vert \Delta _h^{\alpha ,c} f\Vert _{X_c} = \int _0^{\infty } x^{c-1}\bigg |\sum _{j=0}^\infty \left( \begin{array}{c} \alpha \\ j \end{array} \right) (-1)^{\alpha -j} h^{cj} f(h^j x) \bigg |dx \\&\quad \le \sum _{j=0}^\infty \bigg |\left( \begin{array}{c} \alpha \\ j \end{array} \right) \bigg | h^{cj} \int _0^{\infty } \frac{t^{c-1}}{h^{j(c-1)}} |f(t)| \frac{dt}{h^j} = \Vert f\Vert _{X_c} \sum _{j=0}^\infty \bigg |\left( \begin{array}{c} \alpha \\ j \end{array} \right) \bigg |, \end{aligned}$$

and so the assertion.

An alternative proof makes use of Lemma 1 in the following way. The left hand side of (i) can be estimated by

$$\begin{aligned} \Vert \Delta _h^{\alpha ,c} f\Vert _{X_c} \le \sum _{j=0}^\infty \bigg |\left( \begin{array}{c} \alpha \\ j \end{array} \right) \bigg | \Vert \tau _{h^j}^cf\Vert _{X_c} = \sum _{j=0}^\infty \bigg |\left( \begin{array}{c} \alpha \\ j \end{array} \right) \bigg |\Vert f\Vert _{X_c} \end{aligned}$$

which is independent of \(h>0.\) As to (ii), by the linearity property (which here requires a term-by-term integration of the series, justified below), the Mellin transform on the left equals

$$\begin{aligned} \sum _{j=0}^\infty \left( \begin{array}{c} \alpha \\ j \end{array} \right) (-1)^{\alpha -j}h^{-itj}[f]^\wedge _M(c+it), \end{aligned}$$

which yields (ii). Note that the complex number \(h^{-it}\) has modulus \(1,\) so it lies on the boundary of the circle of convergence of the power series which defines the binomial expansion. But, since the following series are absolutely convergent and bounded,

$$\begin{aligned} \sum _{j=0}^\infty \left( \begin{array}{c} \alpha \\ j \end{array} \right) (-1)^{\alpha -j}h^{-itj}, \quad \sum _{j=0}^\infty \left| \left( \begin{array}{c} \alpha \\ j \end{array} \right) \right| , \end{aligned}$$

using the Abel–Stolz theorem for power series (see e.g. [1]), we obtain

$$\begin{aligned} \sum _{j=0}^\infty \left( \begin{array}{c} \alpha \\ j \end{array} \right) (-1)^{\alpha -j}h^{-itj} = (h^{-it} - 1)^\alpha . \end{aligned}$$

In order to justify the integration by series, we have for \(s = c +it,\)

$$\begin{aligned}&\int _0^\infty |x^{s-1}| \sum _{j=0}^\infty \bigg |\left( \begin{array}{c} \alpha \\ j \end{array} \right) \bigg |h^{cj}|f(h^jx)|dx \\&\quad = \sum _{j=0}^\infty \bigg |\left( \begin{array}{c} \alpha \\ j \end{array} \right) \bigg |h^{cj}|h^{j(1-s) - j}|\int _0^\infty |t^{s-1}| |f(t)| dt = \sum _{j=0}^\infty \bigg |\left( \begin{array}{c} \alpha \\ j \end{array} \right) \bigg | \Vert f\Vert _{X_c} < + \infty . \end{aligned}$$

As to (iii), using (i) and (ii), taking the Mellin transform of both sides of the formula, we obtain

$$\begin{aligned} \left[ \Delta _h^{\alpha ,c}\Delta _h^{\beta ,c}f\right] ^\wedge _M(c+it)= & {} (h^{-it} -1)^\alpha \left[ \Delta _h^{\beta ,c}f\right] ^\wedge _M(c+it)\\= & {} (h^{-it} -1)^{\alpha + \beta }[f]^\wedge _M(c+it)\\= & {} \left[ \Delta _h^{\alpha + \beta ,c}f\right] ^\wedge _M(c+it), \end{aligned}$$

and so the assertion follows from the uniqueness theorem for Mellin transforms (see Theorem 8 in [18]). Note that the fractional differences introduced here depend fundamentally on the Mellin translation operator. In the classical theories of Riemann–Liouville and Grünwald–Letnikov fractional calculus, the corresponding differences were based on the classical translation operator, and were first studied in a precise and systematic form in [33]; see also [42], where property (iii) for these differences is also given, without proof. Moreover, other generalizations of fractional differences, via the Stirling functions of first kind, were also introduced in [26, 27]. \(\square \)
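A minimal numerical sketch of the semigroup property (iii): the fractional difference is approximated by truncating the defining series, and the identity is checked pointwise. The truncation length, the test function and the parameter values are merely illustrative.

```python
# Truncated Mellin fractional difference and a pointwise check of
# Delta_h^{alpha,c} Delta_h^{beta,c} f = Delta_h^{alpha+beta,c} f.
import numpy as np
from scipy.special import binom

def mellin_fractional_difference(f, alpha, c, h, N=200):
    """Return x -> (tau_h^c - I)^alpha f(x), truncated to N terms."""
    def Df(x):
        return sum(binom(alpha, j) * (-1 + 0j)**(alpha - j) * h**(j * c) * f(h**j * x)
                   for j in range(N))
    return Df

alpha, beta, c, h, x = 0.6, 0.9, 1.0, 0.8, 2.0
f = lambda x: np.exp(-x)

lhs = mellin_fractional_difference(
        mellin_fractional_difference(f, beta, c, h), alpha, c, h)(x)
rhs = mellin_fractional_difference(f, alpha + beta, c, h)(x)
print(lhs, rhs)   # the two complex numbers should nearly coincide
```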

For spaces \(X_{[a,b]},\) we have the following

Proposition 2

Let \(f \in X_{[a,b]},\) and let \(c \in ]a,b[.\)

  1. (i)

    If \(0<h\le 1,\) we have \(\Delta _h^{\alpha ,c}f \in X_{[a,c]},\) and for every \(\nu \in [a,c[\)

    $$\begin{aligned} \left\| \Delta _h^{\alpha ,c}f\right\| _{X_\nu } \le \Vert f\Vert _{X_\nu } \sum _{j=0}^\infty \left| \left( \begin{array}{c} \alpha \\ j \end{array} \right) \right| h^{(c-\nu )j}. \end{aligned}$$

    Moreover,

    $$\begin{aligned} M\left[ \Delta _h^{\alpha ,c}f\right] (\nu +it) = (h^{c-\nu -it}-1)^\alpha M[f](\nu +it),\,\,t \in \mathbb {R}. \end{aligned}$$
  2. (ii)

    If \(h> 1,\) we have \(\Delta _h^{\alpha ,c}f \in X_{[c,b]},\) and for every \(\mu \in ]c,b]\)

    $$\begin{aligned} \Vert \Delta _h^{\alpha ,c}f\Vert _{X_\mu } \le \Vert f\Vert _{X_\mu } \sum _{j=0}^\infty \left| \left( \begin{array}{c} \alpha \\ j \end{array} \right) \right| h^{(c-\mu )j}. \end{aligned}$$

    Moreover,

    $$\begin{aligned} M\left[ \Delta _h^{\alpha ,c}f\right] (\mu +it) = (h^{c-\mu -it}-1)^\alpha M[f](\mu +it),\,\,t \in \mathbb {R}. \end{aligned}$$
    (5)

Proof

We prove only (i) since the proof of (ii) is similar. Let \(\nu \in [a,c[\) be fixed. Using an argument analogous to that of Proposition 1, we have

$$\begin{aligned}&\Vert \Delta _h^{\alpha ,c} f\Vert _{X_\nu } \le \int _0^{\infty } x^{\nu -1}\sum _{j=0}^\infty \bigg |\left( \begin{array}{c} \alpha \\ j \end{array} \right) \bigg | h^{cj} |f(h^j x)| dx = \Vert f\Vert _{X_\nu } \sum _{j=0}^\infty \bigg |\left( \begin{array}{c} \alpha \\ j \end{array} \right) \bigg |h^{j(c-\nu )}, \end{aligned}$$

the last series being absolutely convergent for \(0<h\le 1.\) Moreover, as above, we can obtain, for \(\nu \in [a,c[\) the assertion (5). \(\square \)

Definition 1

If for \(f\in X_c\) there exists a function \(g\in X_c\) such that

$$\begin{aligned} \lim _{h\rightarrow 1} \bigg \Vert \frac{\Delta _h^{\alpha ,c} f(x)}{(h-1)^\alpha } - g(x) \bigg \Vert _{X_c} =0 \end{aligned}$$

then \(g\) is called the strong fractional Mellin derivative of \(f\) of order \(\alpha ,\) and it is denoted by \(g(x)\) = s-\(\Theta ^\alpha _c f(x).\) If \(\alpha =0\) it is easy to see that s-\(\Theta ^0_c f(x) =f(x).\)

We introduce now the Mellin Sobolev space \(W^\alpha _{X_c}\) by

$$\begin{aligned} W^\alpha _{X_c} := \left\{ f\in X_c : \text {s-}\Theta ^\alpha _c f\, \text {exists and}\, \text {s-}\Theta _c^\alpha f \in X_c \right\} , \end{aligned}$$

with \(W^0_{X_c} = X_c.\) Analogously, for any interval \(J\) we define the spaces \(W^\alpha _{X_{J}},\) by

$$\begin{aligned} W^\alpha _{X_{J}} =\left\{ f \in X_J: \text {s-}\Theta ^\alpha _c f\, \text {exists for every}\, c \in J\,\text {and}\,\text {s-}\Theta ^\alpha _c f\in X_J\right\} . \end{aligned}$$

For integral values of \(\alpha \) our definition of strong Mellin derivative and the corresponding Mellin–Sobolev spaces reproduce those introduced in [19], since the differences are now given by a finite sum.

Using an approach introduced in [19] for the integer order case, we prove now

Theorem 1

The following properties hold:

  1. (i)

    If \(f\in W^\alpha _{X_c},\) then for \(s=c+it, t\in \mathbb {R}\) we have

    $$\begin{aligned} M[\text {s-}\Theta ^\alpha _c f](s) = (-it)^\alpha M[f](s). \end{aligned}$$
  2. (ii)

    If \(f\in W^\alpha _{X_{[a,b]}},\) then for every \(\nu , c \in [a,b]\) we have

    $$\begin{aligned} M[\text {s-}\Theta ^\alpha _c f] (\nu +it) = (c-\nu -it)^\alpha M[f] (\nu + it)\,\,(t \in \mathbb {R}). \end{aligned}$$

Proof

As to (i), since

$$\begin{aligned} \lim _{h\rightarrow 1} \bigg ( \frac{h^{-it} -1}{h-1} \bigg )^\alpha = (-it)^\alpha , \end{aligned}$$

we have, by Proposition 1(ii),

$$\begin{aligned} \bigg | (-it)^\alpha [f]^{\wedge }_M (s) - [\text {s-}\Theta _c^\alpha f]^{\wedge }_M (s) \bigg |= & {} \lim _{h\rightarrow 1} \bigg | \bigg ( \frac{h^{-it} -1}{h-1} \bigg )^\alpha [f]^{\wedge }_M (s) - [\text {s-}\Theta _c^\alpha f]^{\wedge }_M (s)\bigg |\\= & {} \lim _{h\rightarrow 1}\bigg |\bigg [\frac{\Delta _h^{\alpha ,c} f}{(h-1)^\alpha }\bigg ]^{\wedge }_M (s) - [\text {s-}\Theta _c^\alpha f]^{\wedge }_M (s)\bigg |\\= & {} \lim _{h\rightarrow 1} \bigg | \bigg [\frac{\Delta _h^{\alpha ,c} f}{(h-1)^\alpha } - \text {s-}\Theta _c^\alpha f \bigg ]^{\wedge }_M(s) \bigg |\\\le & {} \lim _{h\rightarrow 1} \bigg \Vert \frac{\Delta _h^{\alpha ,c} f}{(h-1)^\alpha } - \text {s-}\Theta _c^\alpha f \bigg \Vert _{X_c} =0 \end{aligned}$$

and thus (i) holds. As to (ii), we can use the same approach, applying one-sided limits and Proposition 2. \(\square \)
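A minimal numerical sketch of assertion (i) in the integer-order case \(\alpha =1\): the strong derivative then reduces to the Mellin derivative \(\Theta _cf(x) = xf'(x) + cf(x)\) (see Sect. 4.2), and the identity \(M[\Theta _cf](c+it) = (-it)M[f](c+it)\) is checked by quadrature for \(f(x)=e^{-x}.\) Names, parameter values and the truncation are merely illustrative.

```python
# Check M[Theta_c f](c+it) = (-it) M[f](c+it) numerically for f(x) = exp(-x).
import numpy as np
from scipy.integrate import quad

def mellin(f, c, t, upper=60.0):
    """Numerical Mellin transform M[f](c + it), truncated at 'upper'."""
    re, _ = quad(lambda u: u**(c - 1) * np.cos(t * np.log(u)) * f(u), 0.0, upper)
    im, _ = quad(lambda u: u**(c - 1) * np.sin(t * np.log(u)) * f(u), 0.0, upper)
    return complex(re, im)

c, t = 1.5, 1.0
f = lambda x: np.exp(-x)
theta_c_f = lambda x: x * (-np.exp(-x)) + c * f(x)   # x f'(x) + c f(x)

lhs = mellin(theta_c_f, c, t)
rhs = (-1j * t) * mellin(f, c, t)
print(lhs, rhs)   # should agree to quadrature accuracy
```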

4 Mellin Fractional Integrals and the Pointwise Fractional Mellin Differential Operator

In terms of Mellin analysis the natural operator of fractional integration is not the classical Liouville fractional integral of order \(\alpha \in \mathbb {C},\) on \(\mathbb {R}^+,\) with Re \(\alpha >0,\) namely (1), but the integral (2)

$$\begin{aligned} \left( J^\alpha _{0+} f\right) (x) = \frac{1}{\Gamma (\alpha )} \int _0^x \bigg ( \log \frac{x}{u} \bigg )^{\alpha -1} f(u) \frac{du}{u}\,\,\,(x >0). \end{aligned}$$

The above integrals were treated already in Mamedov's book [45], p. 168, in which the fractional integral of order \(\alpha \) is defined by \((-1)^\alpha \left( J^\alpha _{0+} f\right) (x).\) This is due to a different notion of Mellin derivatives (of integral order), see Sect. 4.2. Our approach here is more direct and simpler, since it avoids the use of the complex coefficient \((-1)^\alpha .\)

However, for the development of the theory, it is important to consider the generalization of the fractional integral, for \(\mu \in \mathbb {R},\) in the form (3).

Note that for integer values \(\alpha = r,\) in case \(\mu = c\) and \(f \in X_c\) (see [18], Definition 13), this turns into the iterated representation

$$\begin{aligned} \left( J^r_{0+,c}f\right) (x) = x^{-c}\int _0^x\int _0^{u_1}\ldots \int _0^{u_{r -1}} f(u_r) u_r^c \frac{du_r}{u_r}\ldots \frac{du_2}{u_2}\frac{du_1}{u_1}\,\,\,(x >0). \end{aligned}$$
(6)

Several important properties of the operators \(J^\alpha _{0+, \mu }\) were given by Butzer et al. in [22–24] (see also the recent monographs [5] and [42]). In particular, a boundedness property is given in the space \(X_c\) when the coefficient \(\mu \) is greater than \(c\) (indeed a more general result is given there, for spaces \(X^p_c\)). This is due to the fact that only for \(\mu > c\) (or, in the complex case, Re \(\mu > c\)) can we view \(J^\alpha _{0+, \mu } f\) as a Mellin convolution between two functions \(f,g^*_\mu \in X_c,\) where

$$\begin{aligned} g^*_\mu \left( \frac{x}{u}\right) := \left( \frac{x}{u}\right) ^{-\mu }\frac{\chi _{]0,x]}(u)}{\Gamma (\alpha )}\bigg (\log \left( \frac{x}{u}\right) \bigg )^{\alpha -1}. \end{aligned}$$

Indeed, we have

$$\begin{aligned}&(J^\alpha _{0+,\mu }f)(x) = \frac{1}{\Gamma (\alpha )} \int _0^x \bigg (\frac{u}{x}\bigg )^\mu \bigg (\log \frac{x}{u} \bigg )^{\alpha -1} f(u) \frac{du}{u} \\&\quad = \frac{1}{\Gamma (\alpha )} \int _0^{+\infty } \bigg (\frac{u}{x}\bigg )^\mu \chi _{]0,x]}(u) \bigg (\log \frac{x}{u} \bigg )^{\alpha -1} f(u) \frac{du}{u}\\&\quad =\int _0^{+\infty } g^*_\mu \left( \frac{x}{u}\right) f(u)\frac{du}{u} = (f*g^*_\mu )(x). \end{aligned}$$

Now, for \(\mu >c\) the function:

$$\begin{aligned} g^*_\mu (u) = u^{-\mu }\frac{\chi _{]1,+\infty [}(u)}{\Gamma (\alpha )}(\log u)^{\alpha -1} \end{aligned}$$

belongs to the space \(X_c,\) as it is immediate to verify.
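In fact, substituting \(\log u = z,\)

$$\begin{aligned} \Vert g^*_\mu \Vert _{X_c} = \frac{1}{\Gamma (\alpha )}\int _1^{+\infty } u^{c-\mu -1}(\log u)^{\alpha -1}du = \frac{1}{\Gamma (\alpha )}\int _0^{+\infty } e^{-(\mu -c)z}z^{\alpha -1}dz = \frac{1}{(\mu -c)^{\alpha }}, \end{aligned}$$

which is finite precisely when \(\mu > c.\)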

However, we are interested in properties of \(J^\alpha _{0+, \mu } f\) when \(\mu = c,\) since in the definition of the pointwise fractional Mellin derivative (see Sect. 4.2) we have to compute such an integral with parameter \(c.\) Hence in Sect. 4.1 we will describe properties concerning the domain and the range of these fractional operators. As an example, we will show that for any non-trivial non-negative function \(f\) in the domain of \(J^\alpha _{0+, c}\) the image \(J^\alpha _{0+, c}f\) cannot belong to \(X_c.\) This also depends on the fact that \(g^*_c \not \in X_c,\) which implies that we cannot compute the Mellin transform of \(g^*_c\) in the space \(X_c\).

4.1 The Domain of \(J^\alpha _{0+, c}\) and the Semigroup Property

From now on we consider the case \(\alpha >0;\) the extension to complex \(\alpha \) with Re \(\alpha >0\) is similar but more technical. We define the domain of \(J^\alpha _{0+, c},\) for \(\alpha >0\) and \(c \in \mathbb {R},\) as the class of all the functions \(f:\mathbb {R}^+ \rightarrow \mathbb {C}\) such that

$$\begin{aligned} \int _0^x u^c \bigg ( \log \frac{x}{u} \bigg )^{\alpha -1} |f(u)| \frac{du}{u} < + \infty \end{aligned}$$
(7)

for a.e. \(x \in \mathbb {R}^+.\) In the following we will denote the domain of \(J^\alpha _{0+, c}\) by \(Dom J^\alpha _{0+, c}\).

Recall that \(X_{c, loc}\) is the space of all the functions such that \((\cdot )^{c-1}f(\cdot ) \in L^1(]0,a[)\) for every \(a >0.\)

Proposition 3

We have the following properties:

  1. (i)

    If \(f \in X_{c,loc},\) then the function \((\cdot )^c f(\cdot ) \in X_{1,loc}.\)

  2. (ii)

    If \(c < c',\) then \(X_{c,loc} \subset X_{c', loc}.\)

Proof

(i) Let \(a >0\) be fixed and let \(f \in X_{c,loc}.\) Then

$$\begin{aligned} \int _0^a x^c |f(x)| dx = \int _0^a x x^{c-1}|f(x)| dx \le a \int _0^a x^{c-1}|f(x)| dx \end{aligned}$$

and so the assertion.

(ii) Let \(f \in X_{c,loc}.\) Then, as before, setting \(\alpha = c' -c,\) we can write

$$\begin{aligned} \int _0^a x^{c'-1}|f(x)| dx \le a^{\alpha } \int _0^a x^{c-1}|f(x)|dx, \end{aligned}$$

that is (ii) holds. \(\square \)

Note that the inclusion in (ii) does not hold for spaces \(X_c.\)

Concerning the domain of the operator \(J^\alpha _{0+,c},\) we begin with the following proposition.

Proposition 4

Let \(\alpha >1,\) \(c \in \mathbb {R}\) be fixed. Then \(Dom J^\alpha _{0+,c} \subset X_{c,loc}.\)

Proof

Assume that for a.e. \(x \in \mathbb {R}^+\) the integral \(\left( J^\alpha _{0+, c} |f|\right) (x)\) exists and put \(F(u) = u^{c-1}f(u).\) We have to show that \(F\) is integrable over \(]0,a[,\) for any \(a >0.\) Let \(a >0\) be fixed and let \(x > a\) be such that \(\left( J^\alpha _{0+, c} |f|\right) (x)\) exists. Then, for \(u \in ]0,a[\) we have, since \(\log (x/u)\ge \log (x/a) >0\) and \(\alpha >1,\)

$$\begin{aligned} |F(u)| \le \bigg | \log \frac{x}{u} \bigg |^{\alpha -1} |F(u)| \bigg | \log \frac{x}{a} \bigg |^{1-\alpha }, \end{aligned}$$

and the right-hand side of the inequality is integrable as a function of \(u.\) \(\square \)

Note that for \(\alpha = 1\) we have immediately \(DomJ^1_{0+,c} = X_{c,loc}.\)

The case \(0 < \alpha < 1\) is more delicate. We will show that in this instance \(X_{c,loc} \subset Dom J^\alpha _{0+,c}.\)

In order to give a more precise description of the domain of \(J^\alpha _{0+, c},\) we now give a direct proof of the semigroup property in the domain of fractional integrals. This property is treated in [23, 41, 42], but for the spaces \(X^p_c(a,b)\) of all the functions \(f: (a,b) \rightarrow \mathbb {C}\) such that \((\cdot )^c f(\cdot ) \in L^p(a,b),\) with \(0< a < b \le +\infty ,\quad 1\le p\le \infty .\) However we prove this property under minimal assumptions, working directly in \(Dom J^\alpha _{0+,c}\).

Theorem 2

Let \(\alpha , \beta >0, \,c \in \mathbb {R}\) be fixed. Let \(f \in Dom J^{\alpha + \beta }_{0+,c}.\) Then

  1. (i)

    \(f \in Dom J^{\alpha }_{0+,c}\cap Dom J^{\beta }_{0+,c}\)

  2. (ii)

    \(J^{\alpha }_{0+,c}f \in Dom J^{\beta }_{0+,c}\) and \(J^{\beta }_{0+,c}f \in Dom J^{\alpha }_{0+,c}.\)

  3. (iii)

    \(\left( J^{\alpha + \beta }_{0+,c}f\right) (x) = \left( J^{\alpha }_{0+,c} (J^{\beta }_{0+,c}f)\right) (x),\)  a.e. \(x \in \mathbb {R}^+.\)

  4. (iv)

    If \(\alpha < \beta \) then \(Dom J^\beta _{0+,c} \subset Dom J^\alpha _{0+,c}.\)

Proof

At first, let \(f \in Dom J^{\alpha + \beta }_{0+,c}\) be a positive function. Then the integral

$$\begin{aligned} \left( J^{\alpha + \beta }_{0+, c}f\right) (x) = \frac{1}{\Gamma (\alpha + \beta )} \int _0^x \bigg ( \frac{v}{x} \bigg )^c \bigg ( \log \frac{x}{v} \bigg )^{\alpha + \beta -1} f(v) \frac{dv}{v} \end{aligned}$$

is finite and nonnegative for a.e. \(x \in \mathbb {R}^+.\)

By Tonelli’s theorem on iterated integrals of non-negative functions, and using formula (2.8) concerning the Beta function in [23], namely

$$\begin{aligned} \int _v^x \bigg ( \log \frac{x}{u}\bigg )^{\alpha -1}\bigg ( \log \frac{u}{v}\bigg )^{\beta -1}\frac{du}{u} = B(\beta , \alpha ) \bigg ( \log \frac{x}{v} \bigg )^{\alpha + \beta -1}, \end{aligned}$$

we have

$$\begin{aligned}&(J^{\alpha + \beta }_{0+, c}f)(x) = \frac{1}{\Gamma (\alpha ) \Gamma (\beta )} \frac{\Gamma (\beta ) \Gamma (\alpha )}{\Gamma (\alpha + \beta )} \int _0^x \bigg ( \frac{v}{x} \bigg )^c \bigg ( \log \frac{x}{v} \bigg )^{\alpha + \beta -1} f(v) \frac{dv}{v}\\&\quad = \frac{x^{-c}}{\Gamma (\alpha ) \Gamma (\beta )} \int _0^x v^c f(v) \bigg [ B(\beta , \alpha ) \bigg ( \log \frac{x}{v} \bigg )^{\alpha + \beta -1} \bigg ] \frac{dv}{v}\\&\quad = \frac{x^{-c}}{\Gamma (\alpha ) \Gamma (\beta )} \int _0^{x} v^c f(v) \bigg [\int _v^x \bigg ( \log \frac{x}{u}\bigg )^{\alpha -1}\bigg ( \log \frac{u}{v}\bigg )^{\beta -1}\frac{du}{u} \bigg ] \frac{dv}{v}\\&\quad = \frac{x^{-c}}{\Gamma (\alpha ) \Gamma (\beta )} \int _0^x \int _0^{x} v^c \chi _{]v,x[}(u) \bigg ( \log \frac{x}{u}\bigg )^{\alpha -1}\bigg ( \log \frac{u}{v}\bigg )^{\beta -1} f(v) \frac{dv}{v} \frac{du}{u}\\&\quad = \frac{x^{-c}}{\Gamma (\alpha )\Gamma (\beta )} \int _0^x \int _0^{x} v^c \chi _{]0,u[}(v) \bigg ( \log \frac{x}{u}\bigg )^{\alpha -1}\bigg ( \log \frac{u}{v}\bigg )^{\beta -1} f(v) \frac{dv}{v} \frac{du}{u}\\&\quad = \frac{1}{\Gamma (\alpha )} \int _0^x \bigg ( \frac{u}{x}\bigg )^c \bigg ( \log \frac{x}{u} \bigg )^{\alpha -1} \bigg [ \frac{1}{\Gamma (\beta )}\int _0^u \bigg ( \frac{v}{u} \bigg )^c \bigg (\log \frac{u}{v}\bigg )^{\beta -1} \frac{f(v)}{v} dv \bigg ] \frac{du}{u}\\&\quad = (J^{\alpha }_{0+, c}(J^\beta _{0+, c}f ))(x). \end{aligned}$$

This proves all the assertions (i), (ii), (iii), for positive functions. In the general case, we can apply the above argument to the functions \(f^+,\,f^-\) using the linearity property of the integrals. Property (iv) follows immediately by writing \(\beta = \alpha + (\beta -\alpha )\) and applying (i). \(\square \)
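A minimal numerical sketch of the semigroup property (iii): it compares \(\left( J^{\alpha +\beta }_{0+,c}f\right) (x)\) with \(\left( J^{\alpha }_{0+,c}(J^{\beta }_{0+,c}f)\right) (x)\) by nested quadrature for \(f(u)=e^{-u}.\) The helper name and the parameter values are merely illustrative.

```python
# Numerical check of J^{alpha+beta}_{0+,c} f = J^alpha_{0+,c}(J^beta_{0+,c} f).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def J(f, alpha, c):
    """Return x -> (J^alpha_{0+,c} f)(x), computed by quadrature."""
    def Jf(x):
        integrand = lambda u: (u / x)**c * np.log(x / u)**(alpha - 1) * f(u) / u
        value, _ = quad(integrand, 0.0, x)
        return value / gamma(alpha)
    return Jf

alpha, beta, c, x = 1.3, 1.6, 2.0, 1.0
f = lambda u: np.exp(-u)

one_step = J(f, alpha + beta, c)(x)
two_steps = J(J(f, beta, c), alpha, c)(x)
print(one_step, two_steps)   # the two values should nearly coincide
```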

Corollary 1

Let \(0 < \alpha \le 1, \,c \in \mathbb {R}\) be fixed. Then \(X_{c,loc} \subset Dom J^\alpha _{0+,c}.\)

By this corollary, which is a consequence of (iv), we have, for \(\alpha < 1 < \beta ,\) the inclusions

$$\begin{aligned} Dom J^\beta _{0+,c} \subset X_{c,loc} \subset Dom J^\alpha _{0+,c}. \end{aligned}$$

These inclusions are strict. Indeed

Examples. For any \(c \in \mathbb {R},\,\beta >1,\) consider the function

$$\begin{aligned} f(x) = \frac{x^{-c}}{|\log x|^{\beta }}\chi _{]0,1/2[}(x). \end{aligned}$$

Then \(f \in X_{c,loc}\) but for any \(x >1,\)

$$\begin{aligned}&\Gamma (\beta )\left( J^\beta _{0+,c}f\right) (x)\!=\! x^{-c}\int _0^x\! u^{c}\bigg (\log \frac{x}{u}\bigg )^{\beta -1}f(u) \frac{du}{u} \!=\!x^{-c}\int _0^{1/2}\!\frac{\bigg (\log \frac{x}{u}\bigg )^{\beta -1}}{u |\log u|^\beta }du\\&\quad \ge x^{-c}\int _0^{1/2}\frac{\bigg (\log \frac{1}{u}\bigg )^{\beta -1}}{u |\log u|^\beta }du = x^{-c}\int _0^{1/2} \frac{1}{u |\log u|}du = + \infty . \end{aligned}$$

Moreover, for \(0< \alpha < 1,\) consider the function:

$$\begin{aligned} f(x) = \frac{x^{-c}}{|\log x|^{\gamma }}\chi _{]0,1/2[}(x), \end{aligned}$$
(8)

where \(\alpha <\gamma < 1.\) Then \(f \not \in X_{c,loc},\) but for any \(x > 1/2,\) we have

$$\begin{aligned}&\Gamma (\alpha )\left( J^\alpha _{0+,c}f\right) (x) = x^{-c}\int _0^{1/2}\frac{1}{u \bigg (\log \frac{1}{u}\bigg )^{\gamma -\alpha +1}} \frac{\bigg (\log \frac{1}{u}\bigg )^{ 1-\alpha }}{\bigg (\log \frac{x}{u}\bigg )^{1-\alpha }}du \\&\quad \le \frac{M}{x^c}\int _0^{1/2} \frac{1}{u |\log u|^{\gamma -\alpha +1}}du < + \infty . \end{aligned}$$

Note that, more generally, the inclusion in (iv) of Theorem 2 is strict for any choice of \(\alpha \) and \(\beta .\) It is sufficient to consider the function (8) with \(\alpha < \gamma <\beta .\) The calculations are the same.

We now give some sufficient conditions in order that a function \(f\) belongs to the domain of the fractional integrals of order \(\alpha >1.\) In this respect we have the following:

Proposition 5

Let \(\alpha >1.\) If \(f\in X_{c,loc}\) is such that \(f(u) = \mathcal {O}(u^{-(r +c -1)})\) for \(u \rightarrow 0^+\) and \(0<r<1,\) then \(f \in Dom J^\alpha _{0+,c}.\)

Proof

Let \(x >0\) be fixed. Then we can write

$$\begin{aligned} \int _0^x u^{c-1} |f(u)| \bigg (\log \frac{x}{u} \bigg )^{\alpha -1} du= & {} \bigg ( \int _0^{x/2} + \int _{x/2}^x \bigg ) u^{c-1} |f(u)| \bigg (\log \frac{x}{u} \bigg )^{\alpha -1} du\\:= & {} I_1 + I_2. \end{aligned}$$

The integral \(I_1\) can be estimated by using the growth condition at the point \(0\): the integrand is \(\mathcal {O}(u^{-r}(\log (x/u))^{\alpha -1})\) as \(u \rightarrow 0^+,\) which is integrable near \(0\) since \(0<r<1.\) The estimate of \(I_2\) is easy since the function \(\bigg (\log \frac{x}{u} \bigg )^{\alpha -1}\) is bounded on the interval \([x/2,x]\). \(\square \)

Let us define

$$\begin{aligned} \widetilde{X}_{c,loc} =\left\{ f \in X_{c,loc}: \exists r \in ]0,1[,\, \text {such that}\, f(u) = \mathcal {O}(u^{-(r +c-1)}),\,u \rightarrow 0^+\right\} . \end{aligned}$$

We have the following

Corollary 2

Let \(\alpha > 0,\,c \in \mathbb {R}\) be fixed. Then

$$\begin{aligned} \widetilde{X}_{c,loc} \subset \bigcap _{\alpha >0} Dom J^\alpha _{0+,c}. \end{aligned}$$

Now let \(f\) be a convergent power series of type

$$\begin{aligned} f(x) = \sum _{k=0}^\infty a_k x^k\quad (a_k \in \mathbb {C},\,k \in \mathbb {N}_0), \end{aligned}$$

for \(x \in [0,\ell ],\) \(\ell >0.\) For these functions the following series representation for \(J^\alpha _{0+,c}f\) holds, when \(c >0\) (see Lemmas 4 and 5(i) in [25]):

$$\begin{aligned} \left( J^\alpha _{0+,c}f\right) (x) = \sum _{k=0}^{\infty }(c+k)^{-\alpha }a_kx^k\,\,\,\,(x \in [0,\ell ]). \end{aligned}$$

The assumption \(c >0\) is essential. For \(c=0,\) corresponding to the classical Hadamard integrals, we have the following

Proposition 6

Let \(\alpha >0\) be fixed and let \(f\) be a convergent power series as above. Then \(f \in Dom J^\alpha _{0+}\) if and only if \(f(0)=0.\) In this case we have

$$\begin{aligned} \left( J^\alpha _{0+}f\right) (x) = \sum _{k=1}^\infty a_k k^{-\alpha }x^{k}\,\,\,(0<x<\ell ). \end{aligned}$$
(9)

Proof

Let \(f \in Dom J^\alpha _{0+}.\) Then the integral (7) is finite and

$$\begin{aligned} \int _0^x \bigg (\log \frac{x}{u}\bigg )^{\alpha -1}f(u)\frac{du}{u}= & {} \int _0^x \bigg (\log \frac{x}{u}\bigg )^{\alpha -1}\sum _{k=1}^\infty a_k u^k\frac{du}{u}\\&+\, a_0 \int _0^x \bigg (\log \frac{x}{u}\bigg )^{\alpha -1}\frac{du}{u} = I_1 +I_2. \end{aligned}$$

As to \(I_1\) we obtain

$$\begin{aligned} \int _0^x \bigg (\log \frac{x}{u}\bigg )^{\alpha -1}\sum _{k=1}^\infty |a_k| u^{k-1}du \le \sum _{k=1}^\infty |a_k|x^{k-1}\int _0^x \bigg (\log \frac{x}{u}\bigg )^{\alpha -1}du. \end{aligned}$$

Since, using the change of variables \(\log (x/u) = t,\)

$$\begin{aligned} \int _0^x \bigg (\log \frac{x}{u}\bigg )^{\alpha -1}du = x \Gamma (\alpha ), \end{aligned}$$

we can integrate by series, yielding

$$\begin{aligned} I_1 = \sum _{k=1}^\infty a_k \int _0^x \bigg (\log \frac{x}{u}\bigg )^{\alpha -1} u^{k-1}du < + \infty . \end{aligned}$$

As to \(I_2,\) we get \(I_2 < + \infty \) if and only if \(a_0 = f(0) = 0,\) since

$$\begin{aligned} \int _0^x\bigg (\log \frac{x}{u}\bigg )^{\alpha -1}\frac{du}{u} = + \infty . \end{aligned}$$

As to formula (9),

$$\begin{aligned} \left( J^\alpha _{0+}f\right) (x)= & {} \sum _{k=1}^\infty a_k \frac{1}{\Gamma (\alpha )}\int _0^x \bigg (\log \frac{x}{u}\bigg )^{\alpha -1} u^{k-1}du = \sum _{k=1}^\infty a_k \left( J^\alpha _{0+}t^k\right) (x)\\= & {} \sum _{k=1}^\infty a_k k^{-\alpha }x^k, \end{aligned}$$

where in the last step we have applied the simple Lemma 3 in [25], namely that \(\left( J^\alpha _{0+}t^k\right) (x) = k^{-\alpha }x^k,\) \(k>0.\) \(\square \)
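A minimal numerical sketch of formula (9): it takes \(f(x) = e^x - 1 = \sum _{k\ge 1}x^k/k!,\) which satisfies \(f(0)=0,\) and compares the quadrature value of \(\left( J^\alpha _{0+}f\right) (x)\) with the truncated series \(\sum _{k\ge 1}k^{-\alpha }x^k/k!.\) The truncation length and the parameter values are merely illustrative.

```python
# Check (J^alpha_{0+}(e^u - 1))(x) against sum_{k>=1} k^(-alpha) x^k / k!.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, factorial

alpha, x = 1.5, 0.8
f = lambda u: np.expm1(u)          # e^u - 1, so that f(0) = 0

integrand = lambda u: np.log(x / u)**(alpha - 1) * f(u) / u
quad_value, _ = quad(integrand, 0.0, x)
quad_value /= gamma(alpha)

series_value = sum(k**(-alpha) * x**k / factorial(k) for k in range(1, 60))
print(quad_value, series_value)    # the two values should agree
```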

Concerning the range of the operators \(J^\alpha _{0+,c},\) we have the following important propositions.

Proposition 7

Let \(\alpha >0,\, c \in \mathbb {R}\) be fixed. If \(f \in Dom J^{\alpha +1}_{0+,c},\) then \(J^\alpha _{0+,c}f \in X_{c, loc}.\)

Proof

Let \(f\in Dom J^{\alpha +1}_{0+,c}.\) We can assume that \(f\) is nonnegative; thus, for any \(a >0,\)

$$\begin{aligned}&\Gamma (\alpha )\int _0^a x^{c-1}(J^{\alpha }_{0+,c}f)(x) dx =\int _0^a u^{c-1} f(u)\bigg [\int _u^a \frac{1}{x}\bigg (\log \frac{x}{u}\bigg )^{\alpha -1}dx\bigg ]du\\&\quad =\frac{1}{\alpha } \int _0^a u^{c-1}f(u) \bigg (\log \frac{a}{u}\bigg )^{\alpha }du < + \infty . \end{aligned}$$

\(\square \)

Note that, in view of Proposition 7, if \(f \in Dom J^\alpha _{0+,c},\) then \(J^\alpha _{0+,c}f\) does not necessarily belong to \(X_{c,loc},\) unless \(f \in Dom J^{\alpha +1}_{0+,c},\) which is a proper subspace of \(Dom J^\alpha _{0+,c}.\)

For example, we can take again the function \(f\) of (8) with \(\alpha < \gamma < \alpha +1.\) Then \(f \in Dom J^\alpha _{0+,c}\) but \(f \not \in Dom J^{\alpha +1}_{0+,c}\) and \(J^\alpha _{0+, c}f \not \in X_{c,loc}.\)

For spaces \(X_c\) we have the following

Proposition 8

Let \(\alpha >0,\, c \in \mathbb {R}\) be fixed. If \(f\in Dom J^\alpha _{0+,c}\) is a non-negative function, then \(J^\alpha _{0+, c} f \not \in X_{c},\) unless \(f=0\) a.e. in \(\mathbb {R}^+.\)

Proof

Using an analogous argument as above, assuming \(f \ge 0,\) we write

$$\begin{aligned}&\int _0^{+\infty } x^{c-1} (J^{\alpha }_{0+, c}f)(x) dx\\&\quad =\int _0^{+\infty } x^{-1} \bigg ( \frac{1}{\Gamma (\alpha )} \int _0^{+\infty } u^{c-1} \chi _{]0,x[}(u) \bigg ( \log \frac{x}{u} \bigg )^{\alpha -1} f(u) du\bigg ) dx\\&\quad = \int _0^{+\infty } \frac{1}{\Gamma (\alpha )} \bigg ( \int _0^{+\infty }x^{-1} u^{c-1} \chi _{]u, +\infty [}(x) \bigg ( \log \frac{x}{u} \bigg )^{\alpha -1} f(u) dx\bigg ) du\\&\quad = \frac{1}{\Gamma (\alpha )} \int _0^{+\infty } \bigg ( \int _u^{+\infty } x^{-1} \bigg ( \log \frac{x}{u} \bigg )^{\alpha -1} dx \bigg ) u^{c-1} f(u) du. \end{aligned}$$

Thus \((J^{\alpha }_{0+, c}f) \not \in X_c,\) since for every \(u\)

$$\begin{aligned} \int _u^{+\infty } \frac{1}{x ( \log \frac{x}{u})^{1-\alpha }} dx = +\infty . \end{aligned}$$

\(\square \)

The above result implies that a function \(f \in Dom J^\alpha _{0+,c}\) such that \(J^\alpha _{0+,c}f \in X_c,\) must necessarily change its sign. However the converse is not true in general, as proved by the following example: for a given \(a >1,\) put

$$\begin{aligned} f(x) = -\chi _{[1/a, 1]}(x) + \chi _{]1, a]}(x). \end{aligned}$$

It is easy to see that \(f \in Dom J^\alpha _{0+,c} \cap X_c,\) but \(J^\alpha _{0+,c}f \not \in X_c,\) for any \(c \in \mathbb {R}.\)

The following result, which will be useful later, is well known (see also [22, 42])

Proposition 9

Let \(\alpha >0\) and \(c \in \mathbb {R}\) be fixed and let \(f \in Dom J^\alpha _{0+,c}\cap X_c\) be such that \(J^\alpha _{0+,c}f \in X_c.\) Then

$$\begin{aligned} M[J^\alpha _{0+,c}f](c +it) = (-it)^{-\alpha } M[f](c +it),\,\,t \in \mathbb {R}. \end{aligned}$$

Using Proposition 9 we study the structure of the functions \(f\) such that \(f \in Dom J^\alpha _{0+,c} \cap X_c\) for which \(J^\alpha _{0+,c}f \in X_c.\)

Proposition 10

Let \(f \in Dom J^\alpha _{0+,c} \cap X_c.\) If \(J^\alpha _{0+,c}f \in X_c\) then

$$\begin{aligned} \int _0^{+\infty } x^{c-1}f(x) dx = 0. \end{aligned}$$

Proof

Since \(J^\alpha _{0+,c}f \in X_c,\) we can apply the Mellin transform on the line \(s = c +it,\) obtaining

$$\begin{aligned}{}[J^\alpha _{0+,c}f]^\wedge _M(s) = (-it)^{-\alpha }[f]^\wedge _M(s) \,\,(s= c+it, \,t \in \mathbb {R}) \end{aligned}$$

and this transform is a continuous and bounded function of \(s.\) Therefore, taking \(t=0\) we must have \([f]^\wedge _M(c) = 0,\) i.e. the assertion. \(\square \)

Classes of functions \(f \in Dom J^\alpha _{0+,c} \cap X_c\) for which \(J^\alpha _{0+,c}f \in X_c\) may be easily constructed among the functions (of non-constant sign) with compact support in \(\mathbb {R}^+.\)

However we have the following property (see also [42], Lemma 2.33). We give the proof for the sake of completeness

Proposition 11

Let \(\alpha >0,\, c, \nu \in \mathbb {R},\) \(\nu < c,\) be fixed. If \(f\in Dom J^\alpha _{0+,c}\cap X_{[\nu ,c]},\) then \(J^\alpha _{0+, c} f \in X_{\nu }\) and

$$\begin{aligned} \Vert J^\alpha _{0+,c}f\Vert _{X_\nu } \le \frac{\Vert f\Vert _{X_\nu }}{(c-\nu )^\alpha }. \end{aligned}$$

Moreover, for any \(s = \nu + it,\) we have

$$\begin{aligned}&M[J^\alpha _{0+,c}f](\nu +it) = (c - \nu -it)^{-\alpha }M[f](\nu +it),\,\,t \in \mathbb {R}.\\&|M[J^\alpha _{0+,c}f](s)| \le \frac{\Vert f\Vert _{X_\nu }}{(c-\nu )^\alpha }. \end{aligned}$$

Proof

We have by Tonelli’s theorem,

$$\begin{aligned}&\Gamma (\alpha ) \Vert J^\alpha _{0+,c} f\Vert _{X_\nu } \le \Gamma (\alpha ) \int _0^{+ \infty } u^{\nu -1}|\left( J^\alpha _{0+,c}f\right) (u)| du \\&\quad \le \int _0^{+ \infty } u^{\nu -1}\bigg [\int _0^u \bigg (\frac{y}{u}\bigg )^c \bigg (\log \frac{u}{y}\bigg )^{\alpha -1} |f(y)| \frac{dy}{y}\bigg ] du\\&\quad =\int _0^{+\infty }\bigg [\int _y^{+\infty } u^{\nu -1 -c}\bigg (\log \frac{u}{y}\bigg )^{\alpha -1}du\bigg ]y^{c-1}|f(y)|dy. \end{aligned}$$

For the inner integral, putting \(\log (u/y) = z,\) we have:

$$\begin{aligned} \int _y^{+\infty } u^{\nu -1 -c}\bigg (\log \frac{u}{y}\bigg )^{\alpha -1}du =\int _0^{+\infty } y^{\nu -c} e^{-(c-\nu ) z} z^{\alpha -1} dz = \frac{y^{\nu -c}}{(c-\nu )^\alpha }\Gamma (\alpha ), \end{aligned}$$

and thus

$$\begin{aligned} \Gamma (\alpha )\Vert J^\alpha _{0+,c}f\Vert _{X_\nu } \le \frac{\Gamma (\alpha )}{(c-\nu )^\alpha }\Vert f\Vert _{X_\nu }. \end{aligned}$$

As to the last part, the formula for the Mellin transform is established in [22], noting that the Mellin transform on the line \(s= \nu +it\) of the function \(g^*_c(u) = u^{-c}(\log u)^{\alpha -1}\chi _{]1,+\infty [}(u) (\Gamma (\alpha ))^{-1}\) is given by \([g^*_c](s) = (c-s)^{-\alpha } = (c-\nu -it)^{-\alpha },\) while for the estimate we easily have

$$\begin{aligned} |M[J^\alpha _{0+,c}f](s)| \le \Vert J^\alpha _{0+,c}f\Vert _{X_\nu } = \frac{\Vert f\Vert _{X_\nu }}{(c-\nu )^\alpha }. \end{aligned}$$

\(\square \)

Note that when \(0<\alpha < 1,\) the assumption \(f \in Dom J^\alpha _{0+,c}\cap X_{[\nu ,c]},\) can be replaced by \(f \in X_{[\nu ,c]},\) since \(X_{[\nu ,c]} \subset Dom J^\alpha _{0+,c},\) by Corollary 1.

4.2 The Pointwise Fractional Mellin Differential Operator

The pointwise fractional Mellin derivative of order \(\alpha >0,\) or the Hadamard-type fractional derivative, associated with the integral \(J^\alpha _{0+,c} f\), \(c \in \mathbb {R},\) and \(f \in Dom J^{m- \alpha }_{0+, c},\) is given by

$$\begin{aligned} \left( D^\alpha _{0+,c} f\right) (x) = x^{-c} \delta ^m x^c (J^{m- \alpha }_{0+, c} f)(x) \end{aligned}$$
(10)

where \(m= [\alpha ]+1\) and \(\delta = \displaystyle (x \frac{d}{dx}).\) For \(c=0,\) corresponding to the Hadamard fractional derivative, we put \(\left( D^\alpha _{0+} f\right) (x):=\left( D^\alpha _{0+,0} f\right) (x).\) The above definition was introduced in [22], and then further developed in [41], in which some sufficient conditions for the existence of the pointwise derivative are given in spaces of absolutely continuous type functions on bounded domains. This notion originates from the theory of the classical Mellin differential operator, studied in [18]. We give a short survey concerning this classical operator.

In the frame of Mellin transforms, the natural concept of a pointwise derivative of a function \(f\) is given, as seen, by the limit of the difference quotient involving the Mellin translation; thus if \(f'\) exists,

$$\begin{aligned} \lim _{h \rightarrow 1}\frac{\tau _h^cf(x) - f(x)}{h-1} \!=\! \lim _{h \rightarrow 1}\bigg [h^c x \frac{f(hx) \!-\! f(x)}{hx -x} + \frac{h^c -1}{h-1}f(x)\bigg ] = x f'(x) + cf(x). \end{aligned}$$

This gives the motivation of the following definition: the pointwise Mellin differential operator \(\Theta _c,\) or the pointwise Mellin derivative \(\Theta _cf\) of a function \(f: \mathbb {R}^+ \rightarrow \mathbb {C}\) and \(c \in \mathbb {R},\) is defined by

$$\begin{aligned} \Theta _cf(x) := x f'(x) + c f(x),\,\,x \in \mathbb {R}^+ \end{aligned}$$
(11)

provided \(f'\) exists a.e. on \(\mathbb {R}^+.\) The Mellin differential operator of order \(r \in \mathbb {N}\) is defined iteratively by

$$\begin{aligned} \Theta ^1_c := \Theta _c ,\quad \quad \Theta ^r_c := \Theta _c \left( \Theta _c^{r-1}\right) . \end{aligned}$$
(12)

For convenience set \(\Theta ^r:= \Theta ^r_0\) for \(c=0\) and \(\Theta _c^0 := I,\) \(I\) denoting the identity. For instance, the first three Mellin derivatives are given by:

$$\begin{aligned} \Theta _cf(x)&= xf'(x) + cf(x),\\ \Theta ^2_cf(x)&= x^2f''(x) + (2c+1)xf'(x) + c^2f(x),\\ \Theta ^3_cf(x)&= x^3f'''(x) + 3(c+1)x^2f''(x) + (3c^2+3c+1)xf'(x) + c^3f(x). \end{aligned}$$

Let us return to Mamedov’s book [45]. He defined the Mellin derivative of integral order in case \(c=0\), in a slightly different, but essentially equivalent form, using the quotients

$$\begin{aligned} \frac{f(xh^{-1}) - f(x)}{\log h}, \end{aligned}$$

a definition connected with his notion of log-continuity. It must be emphasised that this was a fully innovative procedure at the time he introduced it. (His translation operator is actually \(\tau _hf(x) = f(xh^{-1}),\) with incremental ratio \(\log h,\) instead of \(\log (1/h) = -\log h\).) His first order derivative \((E^1f)(x)\) turns, noting L'Hospital's rule, into

$$\begin{aligned} (E^1 f)(x)= (-x)f'(x) =: -\Theta _0f(x). \end{aligned}$$

His derivatives of higher orders are defined inductively,

$$\begin{aligned} (E^mf)(x) = (-1)^m \Theta _0^mf(x),\quad m \in \mathbb {N}. \end{aligned}$$

This is also Mamedov’ s motivation of his definition of the fractional Mellin integral. Indeed, he used it to define the fractional derivative (\(c=0\)) for \( \alpha \in ]0,1[,\) by

$$\begin{aligned} (E^\alpha f)(x) := \lim _{h \rightarrow 1} \frac{(-1)^{1-\alpha }\left[ \left( J^{1-\alpha }_{0+}f\right) (xh^{-1}) - \left( J^{1-\alpha }_{0+}f\right) (x)\right] }{\log h}, \end{aligned}$$

and for \(\alpha > 1,\) by

$$\begin{aligned} (E^\alpha f)(x) = E^{[\alpha ]} (E^{\alpha -[\alpha ]}f)(x). \end{aligned}$$

Thus for example, if \(\alpha \in ]0,1[\), we have easily

$$\begin{aligned} (E^\alpha f)(x) = (-1)^{2-\alpha }\Theta _0\left( J^{1-\alpha }_{0+}f\right) (x) = (-1)^{2-\alpha } \left( D^\alpha _{0+}f\right) (x), \end{aligned}$$

which also gives the link between Mamedov's definition and our present one. He proceeds analogously in the case \(\alpha >1.\) Using his definition of the fractional integral, Mamedov then studies the Mellin transforms of the fractional integrals and derivatives of a function \(f\) (see Section 23 of [45]). From these results it would have been possible to deduce a version of the fundamental theorem of the integral and differential calculus in his fractional frame, in the special case when the function \(f,\) its fractional derivative and its fractional integral all belong to the space \(X_0.\) However, he presents it explicitly only for integer values of \(\alpha \) (formula (22.3), p. 169). Nevertheless it is indeed a surprising result, the only comparable result being that for the Chebyshev transform [30] of 1975. For this very reason the late Prof. Mamedov is a true pioneer of Mellin analysis. On the other hand, the approach given in [18] is somewhat more direct and simpler, and the present versions of the fundamental theorem in the local spaces \(X_{c,loc}\), given in Theorems 3 and 4 below, are more general and elegant. But recall that Mamedov's first papers appeared in 1979/81 [46–48], thus almost twenty years earlier than [18].

We have the following

Proposition 12

We have, for \(m \in \mathbb {N},\,x>0,\)

$$\begin{aligned} \delta ^m x^c f(x) = x^c \Theta _c^m f (x). \end{aligned}$$

Proof

For \(m=1\) we have

$$\begin{aligned} \delta x^c f(x) = x(c x^{c-1} f(x) + x^c f' (x)) = x^c (c f(x) + x f'(x)) = x^c \Theta _c f(x). \end{aligned}$$

Now we suppose that the relation holds for \(m\) and prove that it holds for \(m+1.\)

$$\begin{aligned} \delta ^{m+1}(x^c f(x)) = \delta (\delta ^m (x^c f(x))) = \delta (x^c \Theta _c^m f(x)) = x^c \Theta _c(\Theta ^m_c f(x)) = x^c \Theta _c^{m+1}f(x), \end{aligned}$$

and so the assertion follows. \(\square \)

For \(r \in \mathbb {N},\) \(\Theta ^r_cf(x)\) is given by the following proposition, which also gives the connection between Mellin and ordinary derivatives (these relations were also given in [22, 42], but without proofs).

Proposition 13

Let \(f \in X_{c,loc}\) be such that \(\Theta ^r_cf(x)\) exists at the point \(x\) for \(r \in \mathbb {N}.\) Then \((D^r_{0+,c} f)(x)\) exists and

$$\begin{aligned} (D^r_{0+,c} f)(x) = \Theta ^r_cf(x) = \sum _{k=0}^r S_c(r,k) x^k f^{(k)}(x), \end{aligned}$$

where \(S_c(r,k),\) \(0\le k\le r,\) denote the generalized Stirling numbers of second kind, defined recursively by

$$\begin{aligned} S_c(r,0) := c^r,\, S_c(r,r) := 1,\, S_c(r+1,k) = S_c(r, k-1)+ (c+k) S_c(r,k). \end{aligned}$$

In particular for \(c=0\)

$$\begin{aligned} \Theta ^r f(x) = \sum _{k=0}^r S(r,k) x^k f^{(k)}(x) \end{aligned}$$

\(S(r,k):= S_0(r,k)\) being the (classical) Stirling numbers of the second kind.

Proof

For \(r = 1\) (that is \(m=2\)), we have

$$\begin{aligned}&(D^1_{0+,c}f)(x) =x^{-c}\delta ^2 x^c \frac{1}{\Gamma (1)} \int _0^x \bigg (\frac{u}{x}\bigg )^c \bigg (\log \frac{x}{u}\bigg )^{1-1} f(u) \frac{du}{u}\\&\quad = x^{-c}\delta \bigg (x \frac{d}{dx}\bigg ) \int _0^x u^{c-1}f(u) du = x^{-c}\delta x^c f(x) = \Theta _cf(x). \end{aligned}$$

For \(r=2,\) (that is \(m=3\)), we have

$$\begin{aligned}&\left( D^2_{0+,c}f\right) (x) =x^{-c}\delta ^3 x^c \frac{1}{\Gamma (2)} \int _0^x \bigg (\frac{u}{x}\bigg )^c \bigg (\log \frac{x}{u}\bigg )^{1-1} f(u) \frac{du}{u}\\&\quad = x^{-c}\delta \bigg (x \frac{d}{dx}\bigg )x^c f(x) =x^{-c}\delta x(cx^{c-1} f(x) + x^c f'(x))\\&\quad = cxf'(x) + c^2f(x) + (c+1)xf'(x) + x^2 f''(x) \\&\quad = x^2 f''(x) + (2c+1) xf'(x) + c^2 f(x)\\&\quad = \Theta ^2_cf(x). \end{aligned}$$

In the general case, using Proposition 12 we have

$$\begin{aligned}&\left( D^{r}_{0+,c}f\right) (x) = x^{-c}\delta ^{r}\left( \delta \left( x^c \left( J^1_{0+,c}f\right) (x)\right) \right) \\&\quad = x^{-c} \delta ^{r}(x^c f(x)) = x^{-c} x^c \Theta ^r_cf(x) = \Theta ^r_cf(x). \end{aligned}$$

Now in accordance with (11) and (12) we have (see [18])

$$\begin{aligned}&\Theta ^{r+1}_cf(x) = \Theta _c (\Theta ^r_cf)(x) = x\frac{d}{dx}\Theta ^r_cf(x) + c \Theta ^r_cf(x)\\&\quad =\sum _{k=0}^r S_c(r,k)\left( (k+c)x^k f^{(k)}(x) + x^{k+1}f^{(k+1)}(x)\right) = \sum _{k=0}^{r+1}S_c(r+1,k)x^k f^{(k)}(x), \end{aligned}$$

and so the assertion follows. \(\square \)

Note that in Proposition 13 the basic assumption that \(f \in X_{c,loc}\) is essential. Take, for example, \(g(x) = 1\) for every \(x \in \mathbb {R}^+\) and \(c=0.\) Then \(g \not \in X_{0,loc} = Dom J^1_{0+}.\) This implies that we cannot compute \(D^r_{0+}g,\) while obviously we have \(\Theta ^rg(x) = 0,\) for any \(r \in \mathbb {N}.\) Another example is given by the function \(h(x) = \log x,\) \(x \in \mathbb {R}^+.\) In this instance, for \(c=0\) and \(r=1\) we have \(\Theta h(x) = 1,\) while \(h \not \in X_{0, loc}.\)
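The recursion defining the generalized Stirling numbers in Proposition 13 can be cross-checked numerically against the classical closed form \(S_c(r,k) = \frac{1}{k!}\sum _{j=0}^k (-1)^{k-j}\binom{k}{j}(c+j)^r\) (the integer-order instance of the Stirling functions introduced before Proposition 16 below). A minimal sketch, purely for illustration:

```python
from math import comb, factorial

def S_rec(c, r, k):
    # generalized Stirling numbers of the second kind via the recursion of Proposition 13
    if k < 0 or k > r:
        return 0.0
    if k == 0:
        return c ** r
    if k == r:
        return 1.0
    return S_rec(c, r - 1, k - 1) + (c + k) * S_rec(c, r - 1, k)

def S_sum(c, r, k):
    # closed form: (1/k!) sum_j (-1)^(k-j) C(k,j) (c+j)^r
    return sum((-1) ** (k - j) * comb(k, j) * (c + j) ** r for j in range(k + 1)) / factorial(k)

c = 0.7
for r in range(1, 6):
    for k in range(r + 1):
        assert abs(S_rec(c, r, k) - S_sum(c, r, k)) < 1e-9
print("recursion and closed form agree for r = 1, ..., 5")
```

For \(c=0\) the same loop reproduces the classical Stirling numbers of the second kind.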

Now we turn to the fractional case. The above Proposition shows that the notion of Hadamard-type fractional derivative \(D^\alpha _{0+,c}\) is the natural extension of the Mellin derivative \(\Theta _c^kf,\) \(k \in \mathbb {N},\) to the fractional case, just as the Riemann–Liouville fractional derivative extends the ordinary derivative. A simple consequence of Proposition 12 is the following alternative representation of the fractional derivative of \(f\), for \(\alpha >0,\)

$$\begin{aligned} \left( D^\alpha _{0+,c} f\right) (x) = \Theta _c^{m} \left( J^{m- \alpha }_{0+, c} f\right) (x) \end{aligned}$$

where \(m= [\alpha ]+1.\) Using this representation we can obtain the following Proposition.

Proposition 14

Let \(\alpha >0,\) \(c \in \mathbb {R}\) be fixed and \(m-1 \le \alpha < m.\) If \(f\in X_{c, loc}\) is such that \(f^{(m)}\in X_{c, loc},\) then

$$\begin{aligned} \left( D^\alpha _{0+,c} f\right) (x) =\sum _{k=0}^m S_c(m,k) x^k \left( J^{m- \alpha }_{0+, c+k} f^{(k)}\right) (x). \end{aligned}$$

Proof

First note that, by the assumptions, for any \(0<\gamma \le 1\) the derivatives \(f^{(k)},\) \(k=1,\ldots ,m,\) belong to the domain of \(J^\gamma _{0+,c+k}.\) Moreover, by a simple change of variable we can write, for every \(c \in \mathbb {R},\)

$$\begin{aligned} \left( J^\gamma _{0+,c}f\right) (x) = \frac{1}{\Gamma (\gamma )}\int _1^{+ \infty }\frac{1}{v^{c+1}}(\log v)^{\gamma -1} f\left( \frac{x}{v}\right) dv. \end{aligned}$$

Thus differentiating under the integral we easily have

$$\begin{aligned} \left( J^\gamma _{0+,c}f\right) '(x) = \frac{1}{\Gamma (\gamma )}\int _1^{+ \infty }\frac{1}{v^{c+2}}(\log v)^{\gamma -1} f'\left( \frac{x}{v}\right) dv = \left( J^\gamma _{0+,c+1}f'\right) (x) \end{aligned}$$

and by an easy induction we obtain, for \(x >0\) and \(k \in \mathbb {N},\)

$$\begin{aligned} \left( J^\gamma _{0+,c}f\right) ^{(k)}(x) = \left( J^\gamma _{0+,c+k}f^{(k)}\right) (x). \end{aligned}$$

Hence by Lemma 9 in [18], (see also Proposition 13), we have

$$\begin{aligned} \left( D^\alpha _{0+,c}f\right) (x) = \Theta ^m_c\left( J^{m-\alpha }_{0+,c}f\right) (x) = \sum _{k=0}^m S_c(m,k) x^k \left( J^{m-\alpha }_{0+, c+k}f^{(k)}\right) (x), \end{aligned}$$

that is the assertion. \(\square \)

As a first application, let \(f\) be a convergent power series as in Proposition 6. In this instance, we obtain the following formula for the derivative \(D^\alpha _{0+}f\):

Proposition 15

Let \(\alpha >0\) be fixed and \(f\) be as in Proposition 6, such that \(f(0) =0.\) Then for \(0<x<\ell ,\)

$$\begin{aligned} \left( D^\alpha _{0+}f\right) (x) = \sum _{k=1}^\infty a_k k^\alpha x^k. \end{aligned}$$

Proof

Putting \(m = [\alpha ] +1,\) integrating and differentiating the series term by term, and reasoning as in Proposition 6, we have

$$\begin{aligned} \left( D^\alpha _{0+}f\right) (x)= & {} \delta ^m \frac{1}{\Gamma (m-\alpha )}\int _0^x \bigg (\log \frac{x}{u}\bigg )^{m-\alpha -1} \sum _{k=1}^\infty a_k u^k \frac{du}{u}\\= & {} \delta ^m \sum _{k=1}^\infty a_k \left( J^{m -\alpha }_{0+}t^k\right) (x)\\= & {} \delta ^m \sum _{k=1}^\infty a_k k^{-(m-\alpha )}x^k = \sum _{k=1}^\infty a_k k^{-(m-\alpha )}\delta ^m x^k = \sum _{k=1}^\infty a_k k^{\alpha }x^k. \end{aligned}$$

\(\square \)

The above Proposition extends Lemma 5 (ii) in [25] to the case \(c=0.\)
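Proposition 15, and definition (10) itself, can be illustrated numerically. The sketch below (an illustration only; the helper names and parameter values are ours) evaluates \(J^{1-\alpha }_{0+}f\) by quadrature, using the substitution \(u = xe^{-w^{1/(1-\alpha )}}\) to remove the endpoint singularity of the kernel, and then applies \(\delta = x\frac{d}{dx}\) by a central difference; for \(f(x) = x^k,\) \(k \in \mathbb {N},\) the result should match \(k^\alpha x^k\):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def J_hadamard(f, x, beta, c=0.0):
    # (J^beta_{0+,c} f)(x), written after the substitution u = x*exp(-w**(1/beta)),
    # which turns the kernel into a smooth integrand on (0, infinity)
    g = lambda w: f(x * np.exp(-w ** (1.0 / beta))) * np.exp(-c * w ** (1.0 / beta))
    val, _ = quad(g, 0.0, np.inf)
    return val / gamma(beta + 1.0)

def D_hadamard(f, x, alpha, h=1e-5):
    # (D^alpha_{0+} f)(x) for 0 < alpha < 1 (so m = 1, c = 0), following (10):
    # delta = x d/dx applied to J^{1-alpha}_{0+} f, via a central difference
    F = lambda z: J_hadamard(f, z, 1.0 - alpha)
    return x * (F(x * (1.0 + h)) - F(x * (1.0 - h))) / (2.0 * x * h)

alpha, k, x = 0.6, 2, 1.7
f = lambda u: u ** k                 # a_k = 1, all other coefficients zero, f(0) = 0
print(D_hadamard(f, x, alpha))       # numerical value
print(k ** alpha * x ** k)           # k^alpha x^k, as predicted by Proposition 15
```

For \(\alpha \ge 1\) the same scheme applies with \(m\) nested central differences, at the cost of numerical accuracy.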

An interesting representation, for analytic functions, of the derivative \(D^\alpha _{0+,c}f\) is given in terms of infinite series involving the Stirling functions of the second kind \(S_c(\alpha , k),\) which can be defined for \(c \in \mathbb {R}\) by

$$\begin{aligned} S_c(\alpha , k) := \frac{1}{k!}\sum _{j=0}^k (-1)^{k-j} \left( \begin{array}{c} k\\ j \end{array} \right) (c+j)^\alpha \,\,\,(\alpha \in \mathbb {C},\,k \in \mathbb {N}_0). \end{aligned}$$

This representation, given in [25], is as follows:

Proposition 16

Let \(f:\mathbb {R}^+ \rightarrow \mathbb {R}\) be an arbitrarily often differentiable function such that its Taylor series converges and let \(\alpha > 0, \,c >0.\) Then

$$\begin{aligned} \left( D^\alpha _{0+,c}f\right) (x) = \sum _{k=0}^\infty S_c(\alpha , k)x^k f^{(k)}(x)\,\,\,(x>0). \end{aligned}$$

For \(c=0\) also an inverse formula is available, expressing the classical Riemann–Liouville fractional derivative in terms of the Mellin derivatives (see [27]), namely

$$\begin{aligned} x^\alpha \left( {\mathcal D}^\alpha _{0+}f\right) (x) = \sum _{k=0}^\infty s(\alpha , k)\left( D^k_{0+}f\right) (x),\,\,(\alpha >0,\,x>0), \end{aligned}$$

where \({\mathcal D}^\alpha _{0+}f\) denotes the Riemann–Liouville fractional derivative and \(s(\alpha , k)\) the Stirling functions of the first kind.

An analogous representation also holds for the fractional integrals \(J^\alpha _{0+,c}f\) (see [25]):

Proposition 17

Let \(f \in Dom J^\alpha _{0+,c},\) and \(f:\mathbb {R}^+ \rightarrow \mathbb {R}\) satisfy the hypothesis of Proposition 13. Then

$$\begin{aligned} \left( J^\alpha _{0+,c}f\right) (x) = \sum _{k=0}^\infty S_c(-\alpha , k)x^k f^{(k)}(x)\,\,\,(x>0). \end{aligned}$$

Since \(\left( D^\alpha _{0+,c}f\right) (x),\,\left( J^\alpha _{0+,c}f\right) (x),\) for \(\alpha >0,\) and \(S_c(\alpha , k),\) for \(\alpha \in \mathbb {R},\,k \in \mathbb {N}_0,\) are continuous functions of \(c \in \mathbb {R}\) at \(c=0,\) we can let \(c\rightarrow 0\) in the previous Propositions and deduce corresponding representations of Hadamard fractional differentiation and integration in terms of the Stirling functions \(S(\alpha ,k)\) and classical derivatives, provided both \(J^\alpha _{0+}f\) and \(D^\alpha _{0+}f\) exist (for details see [25]).

Now we introduce certain Mellin–Sobolev type spaces which will be useful in the following (see also [18]). Firstly, we define

$$\begin{aligned} AC_{loc}:=\left\{ f: \mathbb {R}^+\rightarrow \mathbb {C}: f(x) = \int _0^xg(t)dt,\, \text {for a given} \, g\in L^1_{loc}(\mathbb {R}^+)\right\} . \end{aligned}$$

Recall that \(L^1_{loc}(\mathbb {R}^+)\) stands for the space of all (Lebesgue) measurable functions \(g:\mathbb {R}^+ \rightarrow \mathbb {C}\) such that

$$\begin{aligned} \int _0^x g(t)dt \end{aligned}$$

exists as a Lebesgue integral for every \(x >0.\) For any \(f \in AC_{loc}\) we have \(f' = g\) a.e., where \(f'\) denotes the usual derivative. For any \(c \in \mathbb {R},\) we define

$$\begin{aligned} AC_{c,loc} :=\left\{ f\in X_{c,loc}: (\cdot )^c f(\cdot ) \in AC_{loc}\right\} . \end{aligned}$$

For any \(c\in \mathbb {R}\) we define \(AC^1_{c,loc} = AC_{c,loc}\) and for \(m \in \mathbb {N},\) \(m\ge 2,\)

$$\begin{aligned} AC_{c,loc}^m :=\left\{ f \in AC_{c,loc}: \delta ^{m-1}((\cdot )^cf(\cdot )) \in AC_{loc}\right\} . \end{aligned}$$

We have the following

Lemma 3

If \(f \in AC_{c,loc}^m,\) then the Mellin derivative \(\Theta _c^mf\) exists and \(\Theta _c^mf \in X_{c,loc}.\)

Proof

Since \(\delta ^{m-1}((\cdot )^cf(\cdot )) \in AC_{loc},\) we have

$$\begin{aligned} \frac{d}{dx}\delta ^{m-1}(x^cf(x)) \in L^1_{loc}(\mathbb {R}^+). \end{aligned}$$

But, using Proposition 12

$$\begin{aligned} \frac{d}{dx}\delta ^{m-1}(x^cf(x)) = x^{-1}\delta ^m(x^c f(x)) = x^{c-1}\Theta _c^mf(x), \end{aligned}$$

and so the assertion follows. \(\square \)

Lemma 4

If \(f \in AC^m_{c,loc},\) \(m \ge 2,\) then \(\delta ^j((\cdot )^c f(\cdot )) \in AC_{loc},\) for \(j=0,1,\ldots ,m-2,\) and

$$\begin{aligned} \lim _{x \rightarrow 0^+}\delta ^j(x^c f(x)) = 0. \end{aligned}$$

Proof

The case \(m=2\) follows immediately from the definitions, while for \(m>2\) one can use the relation

$$\begin{aligned} \delta ^{j-1}(x^cf(x)) = \int _0^x \delta ^j(u^cf(u))\frac{du}{u},\,\,j=1,2,\ldots ,m-2. \end{aligned}$$

\(\square \)

The following result gives a representation of functions in \(AC^m_{c,loc}.\) A similar result for functions defined on a compact interval \([a,b] \subset \mathbb {R}^+\) is given in [41].

Lemma 5

Let \(f \in AC^m_{c,loc},\) \(m\ge 1,\) and let us assume that \(\varphi _m:= \frac{d}{dx}\delta ^{m-1}((\cdot )^c f(\cdot )) \in Dom J^m_{0+,1}.\) If there exists \(\alpha \in ]0,1[\) such that \(\varphi _m (x) = \mathcal{O}(x^{-\alpha }),\,\,x \rightarrow 0^+,\) then we have necessarily

$$\begin{aligned} f(x) = x^{1-c}J^m_{0+,1}\varphi _m(x). \end{aligned}$$

Proof

For \(m=1,\) there exists \(\varphi _1 \in L^1_{loc}(\mathbb {R}^+)\) such that

$$\begin{aligned} f(x) = x^{-c}\int _0^x \varphi _1(t)dt = x^{1-c}J^1_{0+,1}\varphi _1(x), \end{aligned}$$

and so the assertion follows.

For \(m=2,\) \(\delta ((\cdot )^cf(\cdot )) \in AC_{loc},\) and so there exists \(\varphi _2 \in L^1_{loc}(\mathbb {R}^+)\) such that

$$\begin{aligned} \delta (t^cf(t)) = \int _0^t \varphi _2(u)du. \end{aligned}$$

Let \(\varepsilon >0\) be fixed. Integrating the above relation in the interval \([\varepsilon , x]\) we have

$$\begin{aligned} \int _\varepsilon ^x \delta (t^cf(t))\frac{dt}{t} = \int _\varepsilon ^x\bigg (\int _0^t \varphi _2(u)du\bigg )\frac{dt}{t}. \end{aligned}$$

Integrating by parts, we get

$$\begin{aligned}&x^cf(x) - \varepsilon ^c f(\varepsilon ) = \bigg [\log t \int _0^t \varphi _2(u)du \bigg ]_\varepsilon ^x - \int _\varepsilon ^x \log t \varphi _2(t) dt\\= & {} \log x \int _0^\varepsilon \varphi _2(t) dt + \int _\varepsilon ^x \log \frac{x}{t}\,\varphi _2(t) dt - \log \varepsilon \int _0^\varepsilon \varphi _2(t) dt. \end{aligned}$$

Letting \(\varepsilon \rightarrow 0^+,\) since \(\varphi _2 \in Dom J^2_{0+,1},\) by Lemma 4, we obtain

$$\begin{aligned} x^cf(x) = \int _0^x \log \frac{x}{t}\,\varphi _2(t) dt - \lim _{\varepsilon \rightarrow 0^+} \log \varepsilon \int _0^\varepsilon \varphi _2(t) dt. \end{aligned}$$

Since by assumption \(\varphi _2(t) = \mathcal{O}(t^{-\alpha }),\,\,t \rightarrow 0^+,\) using L'Hospital's rule, the limit on the right-hand side of the previous relation is zero. Thus,

$$\begin{aligned} f(x) = x^{-c} \int _0^x \log \frac{x}{t}\,\varphi _2(t) dt = x^{1-c}J^2_{0+,1}\varphi _2(x). \end{aligned}$$

For the general case one can apply the same method, using the binomial formula. \(\square \)

Now, for every \(c \in \mathbb {R}\) and \(m \in \mathbb {N},\) we introduce the Mellin–Sobolev space by

$$\begin{aligned} {\mathcal X}_{c,loc}^m :=\left\{ f \in X_{c,loc}: f=g \,\text {a.e. in}\, \mathbb {R}^+, \text {for}\,g \in AC^m_{c,loc}\right\} \end{aligned}$$

A non-local version of the above space, denoted by \({\mathcal X}_{c}^m\) is defined in [18]. It contains all the functions \(f:\mathbb {R}^+ \rightarrow \mathbb {C}\) such that \(f \in X_c\) and there exists \(g \in AC^m_{c,loc}\) such that \(f=g\) a.e. in \(\mathbb {R}^+\) with \(\Theta ^m_cf \in X_c.\)

Note that, if \(f \in X_c\) is such that \(J^m_{0+,c}f \in X_c\) then \(J^m_{0+,c}f \in {\mathcal X}^m_c\) (see [18], Theorem 11).

In particular, a function \(f \in {\mathcal X}_{c}^m\) is such that \(f\in X_c,\) \(\Theta ^m_{c}f\) exists and \(\Theta ^m_{c}f \in X_c.\) This suggests a way to define the fractional versions of the above spaces. For a given \(\alpha >0,\) we define

$$\begin{aligned} {\mathcal X}_{c}^\alpha :=\left\{ f \in X_{c}: \left( D^\alpha _{0+,c} f\right) (x)\,\text {exists a.e. and}\, D^\alpha _{0+,c} f \in X_{c}\right\} \end{aligned}$$

and its local version

$$\begin{aligned} {\mathcal X}_{c,loc}^\alpha :=\left\{ f \in X_{c,loc}: \left( D^\alpha _{0+,c} f\right) (x)\,\text {exists a.e. and}\, D^\alpha _{0+,c} f \in X_{c,loc}\right\} . \end{aligned}$$

Analogously we can define the spaces \({\mathcal X}^\alpha _{J},\) for any interval \(J\) as

$$\begin{aligned} {\mathcal X}^\alpha _{J}:=\left\{ f \in X_J: \left( D^\alpha _{0+,c} f\right) (x)\, \text {exists a.e. for every}\, c \in J\,\text {and}\, D^\alpha _{0+,c} f \in X_J\right\} \end{aligned}$$

and its local version \({\mathcal X}^\alpha _{J, loc}.\) We begin with the following

Proposition 18

Let \(f \in {\mathcal X}^\alpha _{c,loc}\) be such that \(\Theta _c^mf \in X_{c,loc},\) where \(m = [\alpha ] + 1.\) Then

$$\begin{aligned} \left( D^\alpha _{0+,c} f\right) (x)= \Theta _c^m\left( J^{m-\alpha }_{0+,c}f\right) (x) = J^{m-\alpha }_{0+,c} \left( \Theta _c^m f\right) (x). \end{aligned}$$

Proof

Since \(f\in X_{c,loc}\) and \(0<m-\alpha <1,\) \(f \in Dom J^{m-\alpha }_{0+,c}\) and \(\Theta ^m_cf \in Dom J^{m-\alpha }_{0+,c}\) by Corollary 1. The first equality is already stated as a consequence of Proposition 12, thus we will prove the other equality. We obtain by (10)

$$\begin{aligned}&\left( D^\alpha _{0+,c}f\right) (x) = x^{-c} \delta ^m \left( x^c \left( J^{m-\alpha }_{0+,c}f\right) \right) (x)\\&\quad =x^{-c} \bigg (\delta ^m \bigg [x^c \frac{1}{\Gamma (m-\alpha )}\int _0^x \bigg (\frac{v}{x}\bigg )^c \bigg (\log \frac{x}{v}\bigg )^{m -\alpha -1} f(v) \frac{dv}{v}\bigg ]\bigg )(x)\\&\quad = x^{-c}\bigg (\delta ^m \bigg [ \frac{1}{\Gamma (m-\alpha )}\int _1^{+ \infty } \frac{x^c}{t^{c+1}}(\log t)^{m-\alpha -1}f\left( \frac{x}{t}\right) dt\bigg ]\bigg )(x) \\&\quad = x^{-c}\sum _{k=0}^m S(m,k) x^k \frac{d^k}{dx^k} \bigg [\frac{1}{\Gamma (m-\alpha )}\int _1^{+ \infty } \frac{x^c}{t^{c+1}}(\log t)^{m-\alpha -1}f\left( \frac{x}{t}\right) dt \bigg ]\\&\quad = \frac{x^{-c}}{\Gamma (m-\alpha )}\int _1^{+ \infty } \sum _{k=0}^m S(m,k) x^k \frac{d^k}{dx^k}\left( x^c f\left( \frac{x}{t}\right) \right) (\log t)^{m-\alpha -1} \frac{dt}{t^{c+1}}. \end{aligned}$$

Using the elementary formula for the derivatives of the product, we have

$$\begin{aligned}&\left( D^\alpha _{0+,c}f\right) (x) \\&\quad = \frac{x^{-c}}{\Gamma (m-\alpha )}\int _1^{+ \infty } \sum _{k=0}^m S(m,k) x^k \sum _{j=0}^k \left( \begin{array}{c} k\\ j \end{array} \right) \\&\qquad \times \prod _{\nu =0}^{j-1}(c-\nu )\frac{x^{c-j}}{t^{k-j}}f^{(k-j)}(x/t)(\log t)^{m-\alpha -1} \frac{dt}{t^{c+1}}\\&\quad = \frac{x^{-c}}{\Gamma (m-\alpha )}\int _0^x \sum _{k=0}^m S(m,k) v^k \frac{d^k}{dv^k} (v^c f(v))(\log (x/v))^{m-\alpha -1} \frac{dv}{v}\\&\quad = \frac{x^{-c}}{\Gamma (m-\alpha )}\int _0^x (\delta ^m (v^cf(v)))(\log (x/v))^{m-\alpha -1} \frac{dv}{v}= J^{m-\alpha }_{0+,c} \left( \Theta _c^m f\right) (x), \end{aligned}$$

where we have used Proposition 12. Thus the assertion follows. \(\square \)

In order to prove a new fractional version of the fundamental theorem of the differential and integral calculus in the Mellin frame, first we give the following proposition concerning the case \(\alpha = m\in \mathbb {N}.\) Recall that in this case, using the representation in terms of iterated integrals, \(J^m_{0+,c}f\) is m-times differentiable, whenever \(f \in Dom J^m_{0+,c}.\)

Proposition 19

We have:

  1. (i)

    Let \(f \in {\mathcal X}^1_{c,loc},\) then

    $$\begin{aligned} J^1_{0+,c}(\Theta _cf)(x) = f(x),\,\,a.e.\, x \in \mathbb {R}^+. \end{aligned}$$
  2. (ii)

    Let \(m \in \mathbb {N}, m>1,\) and let \(f \in {\mathcal X}^m_{c, loc}\) be such that \(\Theta ^m_cf \in Dom J^m_{0+,c}.\) Then

    $$\begin{aligned} J^m_{0+,c}\left( \Theta ^m_cf\right) (x) = f(x),\,\,a.e.\, x \in \mathbb {R}^+. \end{aligned}$$
  3. (iii)

    Let \(f \in X_{c,loc},\) then

    $$\begin{aligned} \Theta _c \left( J^1_{0+,c}f\right) (x) = f(x),\,a.e.\, x \in \mathbb {R}^+. \end{aligned}$$
  4. (iv)

    Let \(f \in Dom J^m_{0+,c},\) then

    $$\begin{aligned} \Theta ^m_c \left( J^m_{0+,c}f\right) (x) = f(x),\,\,a.e. \,x \in \mathbb {R}^+. \end{aligned}$$

Proof

As to (i) we have, by the absolute continuity and Lemma 4

$$\begin{aligned} J^1_{0+,c}(\Theta _cf)(x) = \int _0^x \bigg (\frac{u}{x}\bigg )^c (\Theta _cf) (u)\frac{du}{u} = x^{-c}\int _0^x \frac{d}{du}(u^c f(u))du = f(x), \end{aligned}$$

a.e. \(x \in \mathbb {R}^+.\)

For (ii) we can obtain the result using the iterated representation of \(J^m_{0+,c}f,\) applying the absolute continuity \(m\) times, together with Lemma 4 and Proposition 12.

As to (iii) note

$$\begin{aligned} \Theta _c\left( J^1_{0+,c}f\right) (x) = x \frac{d}{dx} \left( J^1_{0+,c}f\right) (x) + c \left( J^1_{0+,c}f\right) (x), \end{aligned}$$

and we have

$$\begin{aligned} x \frac{d}{dx} \left( J^1_{0+,c}f\right) (x) = -cx^{-c}\int _0^x u^{c-1}f(u)du + f(x) \end{aligned}$$

from which we obtain the assertion.

Finally we prove (iv), again using induction; the case \(m=1\) is just (iii). Assuming that (iv) holds for \(m-1,\) we have by Theorem 2 (iii),

$$\begin{aligned} \Theta ^{m}_c\left( J^{m}_{0+,c}f\right) (x)= & {} \Theta _c \left( \Theta ^{m-1}_c \left( J^{m-1}_{0+,c}\left( J^{1}_{0+,c}f\right) \right) \right) (x) = f(x), \end{aligned}$$

a.e. \(x \in \mathbb {R}^+,\) by the induction assumption. \(\square \)

Proposition 19 gives a version of Theorem 11 in [18] for the spaces \(X_{c,loc},\) without the use of Mellin transforms and under sharp assumptions. As a consequence, for spaces \(X_c,\) we deduce again the formula for the Mellin transform of \(J^m_{0+,c}f\) whenever \(J^m_{0+,c}f \in X_c\)

$$\begin{aligned} \left[ J^m_{0+,c}f\right] ^\wedge _M(c+it) = (-it)^{-m}[f]^\wedge _M(c+it). \end{aligned}$$

Now we are ready to prove the fundamental theorem of the fractional differential and integral calculus in the Mellin frame.

Theorem 3

Let \(\alpha >0\) be fixed and \(m= [\alpha ] + 1.\)

  1. a)

    Let \(f \in {\mathcal X}^\alpha _{c,loc} \cap {\mathcal X}^m_{c, loc},\) such that \(D^\alpha _{0+,c}f, \Theta ^{m}_cf \in Dom J^{m}_{0+,c}.\) Then

    $$\begin{aligned} \left( J^\alpha _{0+, c}\left( D^\alpha _{0+,c}f\right) \right) (x) = f(x),\,a.e. \,x \in \mathbb {R}^+. \end{aligned}$$
  2. b)

    Let \(f \in Dom J^{m}_{0+,c},\) be such that \(J^\alpha _{0+,c}f \in X_{c,loc}.\) Then

    $$\begin{aligned} \left( D^\alpha _{0+,c}\left( J^\alpha _{0+,c}f\right) \right) (x) = f(x),\,\,a.e. \,x \in \mathbb {R}^+. \end{aligned}$$

Proof

As to part (a), by Propositions 12, 18, 19 and Theorem 2, we have for a.e. \(x \in \mathbb {R}^+\)

$$\begin{aligned}&\left( J^\alpha _{0+, c}\left( D^\alpha _{0+,c}f\right) \right) (x) = J_{0+, c}^\alpha \bigg (x^{-c}(\delta ^m(x^c J_{0+,c}^{m- \alpha }f))\bigg )(x) \\&=\left( J^\alpha _{0+, c}\left( J_{0+,c}^{m- \alpha }\left( \Theta _c^mf\right) \right) \right) (x) = (J^m_{0+, c}\left( \Theta _c^mf\right) )(x) = f(x). \end{aligned}$$

As to part (b), we have, by Propositions 12, 19 and Theorem 2,

$$\begin{aligned}&\left( D^\alpha _{0+,c}(J_{0+,c}^\alpha f)\right) (x) = x^{-c}\delta ^m\left( x^c J_{0+,c}^{m-\alpha }(J_{0+,c}^\alpha f)\right) (x)\\&= x^{-c}\delta ^m \left( x^c J_{0+,c}^m f\right) (x) = \Theta _c^m \left( J_{0+,c}^mf\right) (x) = f(x), \end{aligned}$$

almost everywhere. \(\square \)

A result related to part (a) is also described in [42], Lemma 2.35, for functions \(f\) belonging to the subspace

$$\begin{aligned} J^\alpha _{0+, \mu }(X^p_c):=\left\{ f= J^\alpha _{0+,\mu }g,\,\text {for}\, g \in X^p_c\right\} \end{aligned}$$

with \(\mu > c.\) In this instance the formula is a simple consequence of part (b), using the integral representation of \(f.\) Related results in spaces \(X^p_\nu \) with \(c> \nu \) are given in [42], Property 2.28. Note that for \(p=1,\) if \(f \in X_\nu ,\) with \(c >\nu ,\) then \(J^\alpha _{0+,c}f \in X_\nu ,\) so that our assumption is satisfied. For bounded intervals \(I\) similar results are also given in [41], for functions belonging to \(L^p(I)\).

More generally, we can, with our approach, also consider compositions between the operators of Hadamard-type fractional integrals and derivatives, in local spaces (for similar results in \(X^p_c\) spaces see [41] on bounded intervals, and [42] in \(\mathbb {R}^+\)).

Theorem 4

Let \(\alpha , \beta >0\) with \(\beta >\alpha \) and \(m= [\alpha ] + 1.\)

  • (a’) Let \(f \in {\mathcal X}^\alpha _{c,loc}\cap {\mathcal X}^m_{c, loc},\) such that \(D^\alpha _{0+,c}f \in Dom J^{\beta }_{0+,c}\) and \(\Theta ^{m}_cf \in Dom J^{m+\beta -\alpha }_{0+,c}.\) Then

    $$\begin{aligned} \left( J^\beta _{0+, c}\left( D^\alpha _{0+,c}f\right) \right) (x) = \left( J^{\beta -\alpha }_{0+,c}f\right) (x),\,a.e. \,x \in \mathbb {R}^+. \end{aligned}$$
  • (b’) Let \(f \in Dom J^{m + \beta - \alpha }_{0+,c}.\) Then

    $$\begin{aligned} \left( D^\alpha _{0+,c}\left( J^\beta _{0+, c}f\right) \right) (x) = \left( J^{\beta - \alpha }_{0+, c}f\right) (x),\,a.e. \,x \in \mathbb {R}^+. \end{aligned}$$

Proof

Regarding (a’), as in the proof of Theorem 3, we have for a.e. \(x \in \mathbb {R}^+\)

$$\begin{aligned} \left( J^\beta _{0+, c}\left( D^\alpha _{0+,c}f\right) \right) (x)= & {} \left( J^\beta _{0+, c}\left( J_{0+,c}^{m- \alpha }\left( \Theta _c^mf\right) \right) \right) (x) = J^{\beta -\alpha }_{0+,c}(J^m_{0+, c}\left( \Theta _c^mf\right) )(x) \\= & {} (J^{\beta -\alpha }_{0+,c}f)(x). \end{aligned}$$

Regarding (b’), we have

$$\begin{aligned} \left( D^\alpha _{0+,c}\left( J_{0+,c}^\beta f\right) \right) (x)= & {} x^{-c}\delta ^m (x^c J_{0+,c}^{m-\alpha +\beta } f)(x) = \Theta _c^m (J_{0+,c}^{m +\beta -\alpha } f)(x)\\= & {} (J^{\beta - \alpha }_{0+,c}f)(x). \end{aligned}$$

\(\square \)

5 A Relation Between Pointwise and Strong Fractional Mellin Derivatives

In this section we will compare the definitions of the Mellin derivative in the strong and pointwise versions. For this purpose, we need some further notations and preliminary results.

For \(h,x \in \mathbb {R}^+\) and \(\widetilde{c} \in \mathbb {R},\) we define

$$\begin{aligned} m^{\widetilde{c}}_h(x) : = \left\{ \begin{array}{ll} x^{-\widetilde{c}}\chi _{[1/h,1]}(x),\,\,\text {if}\,h\ge 1,\\ -x^{-\widetilde{c}}\chi _{[1,1/h]}(x),\,\,\text {if}\,0<h <1. \end{array} \right. \end{aligned}$$

It is clear that \(m^{\widetilde{c}}_h \in X_{(-\infty , \infty )}\) and we have (see [18])

$$\begin{aligned}{}[m^{\widetilde{c}}_h]^\wedge _M(s) = \left\{ \begin{array}{ll} \frac{1}{\widetilde{c} -s} (h^{\widetilde{c} -s}-1),\,\,\text {if}\,s\in \mathbb {C}{\setminus } \{\widetilde{c}\},\\ \log h,\,\,\text {if}\,s = \widetilde{c}. \end{array} \right. \end{aligned}$$

Denoting the \(r\)-fold convolution of \(m^{\widetilde{c}}_h\) with itself by \((m^{\widetilde{c}}_h *)^r,\,r \in \mathbb {N},\) we have, by Theorem 3 in [18]

$$\begin{aligned} \left[ (m^{\widetilde{c}}_h *)^r\right] ^\wedge _M(s) = (\widetilde{c}-s)^{-r}(h^{\widetilde{c}-s}-1)^r,\,\,s \in \mathbb {C}{\setminus } \{\widetilde{c}\}. \end{aligned}$$

We recall that by Proposition 2, one has for \(f \in X_{[a,b]},\) \(c,\nu \in [a,b]\) and \(r \in \mathbb {N},\)

$$\begin{aligned} M\left[ \sum _{k=0}^r (-1)^{r-k} \left( \begin{array}{c} r\\ k \end{array} \right) \tau ^{c}_{h^k}f\right] (\nu + it) = (h^{c-\nu -it} -1)^r M[f](\nu + it). \end{aligned}$$
(13)

In [18] (Proposition 6, formula (8.8)), the following lemma was established:

Lemma 6

If \(f \in {\mathcal X}^r_{[a,b]},\,r \in \mathbb {N},\) then for \(c \in [a,b],\,h>1\) we have, for \(x \in \mathbb {R}^+\)

$$\begin{aligned} \sum _{k=0}^r (-1)^{r-k} \left( \begin{array}{c} r\\ k \end{array} \right) \tau ^{c}_{h^k}f(x) = x^{-c}\bigg ((m^0_h*)^r *(\Theta ^r_cf(\cdot ) (\cdot )^c)\bigg )(x). \end{aligned}$$

Lemma 7

If \(f \in {\mathcal X}^r_{[a,b]},\,r \in \mathbb {N},\) then for \(c, \nu \in [a,b],\) we have

$$\begin{aligned} M[\Theta ^r_cf](\nu +it) = (c- \nu -it)^r M[f](\nu +it). \end{aligned}$$

Proof

Let us put, for \(x \in \mathbb {R}^+\)

$$\begin{aligned} G(x) = \bigg ((m^0_h*)^r *(\Theta ^r_cf(\cdot ) (\cdot )^c)\bigg )(x). \end{aligned}$$

Since \(\Theta ^r_cf \in X_\nu \) by assumption, it is easy to see that \(\Theta ^r_cf(\cdot )(\cdot )^c \in X_{\nu -c}.\) Then \(G \in X_{\nu -c}\) and so \((\cdot )^{-c}G(\cdot ) \in X_\nu .\) Hence by Lemma 6, Proposition 2 and (13) we have

$$\begin{aligned}&M[(\cdot )^{-c}G(\cdot )](\nu +it) = M\left[ \sum _{k=0}^r (-1)^{r-k} \left( \begin{array}{c} r\\ k \end{array} \right) \tau ^{c}_{h^k}f\right] (\nu +it) \\&= (h^{c-\nu -it}-1)^r M[f](\nu +it). \end{aligned}$$

Using Proposition 1(c) in [18] and the convolution theorem (Lemma 2(ii)), we have

$$\begin{aligned} M[(\cdot )^{-c}G(\cdot )](\nu +it)= & {} M[G](\nu -c +it) \\= & {} M[(m^0_h*)^r](\nu -c +it) M[\Theta ^r_cf(\cdot )(\cdot )^c](\nu -c+it)\\= & {} (h^{c -\nu -it} -1)^r (c-\nu -it)^{-r}M[\Theta ^r_cf](\nu +it), \end{aligned}$$

from which we deduce the assertion. \(\square \)
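Lemma 7 is also easy to check numerically for a concrete function. The following sketch (an illustration only, with \(f(x)=e^{-x},\) whose Mellin transform is \(\Gamma (s)\)) computes the Mellin transform of \(\Theta _cf\) by quadrature and compares it with \((c-\nu -it)\Gamma (\nu +it)\):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def mellin(g, s, upper=80.0):
    # numerical Mellin transform: int_0^upper x^(s-1) g(x) dx, real and imaginary
    # parts separately; the truncation is harmless for rapidly decaying g
    re, _ = quad(lambda x: (x ** (s - 1.0) * g(x)).real, 0.0, upper)
    im, _ = quad(lambda x: (x ** (s - 1.0) * g(x)).imag, 0.0, upper)
    return re + 1j * im

c, nu, t = 1.3, 0.8, 2.0
s = nu + 1j * t
f       = lambda x: np.exp(-x)
theta_f = lambda x: -x * np.exp(-x) + c * np.exp(-x)   # Theta_c f = x f' + c f

print(mellin(theta_f, s))       # numerical M[Theta_c f](nu + it)
print((c - s) * gamma(s))       # (c - nu - it) M[f](nu + it), as in Lemma 7 with r = 1
```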

We prove the main theorem of this section.

Theorem 5

Let \(\alpha >0\) be fixed.

  1. (i)

    Let \(f \in {\mathcal X}^\alpha _{c}\) such that \(\Theta ^{m}_c f \in X_{c},\) where \(m = [\alpha ] +1.\) Then \(f \in W^\alpha _{X_{c}}\) and

    $$\begin{aligned} \left( D^\alpha _{0+,c}f\right) (x) = \text {s-}\Theta ^\alpha _cf(x),\,\,a.e. \,x \in \mathbb {R}^+. \end{aligned}$$
  2. (ii)

    Let \(f \in {\mathcal X}^\alpha _{[a,b]}\) such that \(\Theta ^{m}_c f \in X_{[a,b]},\) for \(c \in ]a,b[,\) where \(m = [\alpha ] +1.\) Then \(f \in W^\alpha _{X_{[a,b]}}\) and

    $$\begin{aligned} \left( D^\alpha _{0+,c}f\right) (x) = \text {s-}\Theta ^\alpha _cf(x),\,\,a.e. \,x \in \mathbb {R}^+,\,\,\,c \in ]a,b[. \end{aligned}$$

Proof

(i) By Proposition 18 we have

$$\begin{aligned} \left( D^\alpha _{0+,c}f\right) (x) = (J^{m -\alpha }_{0+,c}(\Theta ^{m}_cf))(x), \end{aligned}$$

which belongs to \(X_c.\) Thus, passing to Mellin transforms, we have, for \(t \in \mathbb {R},\)

$$\begin{aligned}&\left[ D^\alpha _{0+,c}f \right] ^\wedge _M(c + it) = \left[ \left( J^{m -\alpha }_{0+,c}\left( \Theta ^{m}_cf\right) \right) \right] ^\wedge _M(c +it) \\&= (-it)^{\alpha -m}\left[ \Theta _c^{m}f\right] ^\wedge _M(c +it) = (-it)^\alpha [f]^\wedge _M(c +it) = [\text {s-}\Theta ^\alpha _cf]^\wedge _M(c+it). \end{aligned}$$

Hence, \(D^\alpha _{0+,c}f\) and s-\(\Theta ^\alpha _cf\) have the same Mellin transform along the line \(s = c +it\), and so the assertion follows by the identity theorem (see [18]).

(ii) Again, using Proposition 18, and taking the Mellin transform on the line \(s=\nu + it,\) for \(\nu \in ]a,b[\) with \(\nu < c,\) we obtain

$$\begin{aligned}{}[D^\alpha _{0+,c}f ]^\wedge _M(\nu + it) = [J^{m-\alpha }_{0+,c}(\Theta ^m_cf)]^\wedge _M(\nu + it) = (c-\nu -it)^\alpha [f]^\wedge _M(\nu +it), \end{aligned}$$

and so the assertion follows as before. \(\square \)

The above theorem reproduces Theorem 4.3 in [19], for integral values of \(\alpha ,\) i.e. for \(k \in \mathbb {N},\) the pointwise Mellin derivative \(\Theta ^k_cf\) equals the strong derivative s-\(\Theta ^k_cf,\) as defined in [19], for functions belonging to the space \(W^k_{X_c}.\)

As a consequence of Theorem 5, for the spaces \(X_c\) we can give more direct proofs of the fundamental formulae of integral and differential calculus in the fractional Mellin setting, now using the Mellin transform. We begin with the following

Theorem 6

Let \(\alpha >0\) be fixed.

  1. (a)

    Let \(f \in {\mathcal X}^\alpha _c\) be such that \(D^\alpha _{0+,c}f \in Dom J^\alpha _{0+,c},\) and \(J^\alpha _{0+,c}f \in X_c.\) Then

    $$\begin{aligned} \left( J^\alpha _{0+, c}\left( D^\alpha _{0+,c}f\right) \right) (x) = f(x)\,a.e. \,x \in \mathbb {R}^+. \end{aligned}$$
  2. (b)

    Let \(f \in Dom J^{\alpha }_{0+,c}\cap X_{c}\) be such that \(J^{\alpha }_{0+,c}f \in {\mathcal X}^\alpha _c.\) Then we have

    $$\begin{aligned} \left( D^\alpha _{0+,c}J^\alpha _{0+, c}f\right) (x) = f(x),\,a.e. \,x \in \mathbb {R}^+. \end{aligned}$$

Proof

As to part (a), we can compute the Mellin transforms, obtaining

$$\begin{aligned}&\left[ J^\alpha _{0+, c}\left( D^\alpha _{0+,c}f\right) \right] ^\wedge _M(c +it) = (-it)^{-\alpha }\left[ D^\alpha _{0+,c}f\right] ^\wedge _M(c +it)\\= & {} (-it)^{-\alpha }(-it)^{\alpha }[f]^\wedge _M(c +it) = [f]^\wedge _M(c +it) \end{aligned}$$

and so the assertion follows by the uniqueness theorem of Mellin transform.

Part (b) is carried out using the same approach. \(\square \)

In comparison with Theorem 4 we have, under different assumptions, the following

Theorem 7

Let \(\alpha , \beta >0\) with \(\beta > \alpha .\)

  • (a’) Let \(f \in {\mathcal X}^\alpha _{c}.\) If \(D^\alpha _{0+,c}f \in Dom J^\beta _{0+,c},\) and \(J^\beta _{0+, c}\left( D^\alpha _{0+,c}f\right) \in X_c,\) then

    $$\begin{aligned} \left( J^\beta _{0+, c}\left( D^\alpha _{0+,c}f\right) \right) (x) = J^{\beta -\alpha }_{0+,c}f(x),\,a.e. \,x \in \mathbb {R}^+. \end{aligned}$$
  • (b’) Let \(f \in Dom J^{\beta }_{0+,c}\cap X_{c}\) be such that \(J^{\beta }_{0+,c}f \in {\mathcal X}^\alpha _c.\) Then

    $$\begin{aligned} \left( D^\alpha _{0+,c}J^\beta _{0+, c}f\right) (x) = (J^{\beta - \alpha }_{0+, c}f)(x),\,a.e. \,x \in \mathbb {R}^+. \end{aligned}$$

Proof

As to part (a’), using again the Mellin transform, we have

$$\begin{aligned}&\left[ J^\beta _{0+, c}\left( D^\alpha _{0+,c}f\right) \right] ^\wedge _M(c +it) = (-it)^{-\beta }[D^\alpha _{0+,c}f]^\wedge _M(c +it)\\= & {} (-it)^{-\beta }(-it)^{\alpha }[f]^\wedge _M(c +it) = [J^{\beta -\alpha }_{0+,c}f]^\wedge _M(c +it), \end{aligned}$$

and so the assertion follows again by the uniqueness theorem. As to part (b’), the proof is similar to the previous one. \(\square \)

For the special case of spaces \(X_{[a,b]},\) we have the following two further results.

Theorem 8

Let \(\alpha >0\) be fixed.

  • (a”) Let \(f \in {\mathcal X}^\alpha _{[a,b]}\) and \(c \in ]a,b].\) If \(D^\alpha _{0+,c}f \in Dom J^\alpha _{0+,c},\) then

    $$\begin{aligned} \left( J^\alpha _{0+, c}\left( D^\alpha _{0+,c}f\right) \right) (x) = f(x) \,a.e. \,x \in \mathbb {R}^+. \end{aligned}$$
  • (b”) Let \(c,\nu \in [a,b]\) with \(\nu < c.\) If \(f \in Dom J^{\alpha }_{0+,c}\cap X_{[a,b]}\) is such that \(J^{\alpha }_{0+,c}f \in {\mathcal X}^\alpha _\nu ,\) then

    $$\begin{aligned} \left( D^\alpha _{0+,c}J^\alpha _{0+, c}f\right) (x) = f(x),\,a.e. \,x \in \mathbb {R}^+. \end{aligned}$$

Proof

As to part (a”) take \(\nu \in [a,b]\) with \(\nu < c.\) Using the Mellin transform on the line \(s = \nu +it,\) we have by Proposition 11 and Theorem 5

$$\begin{aligned} \left[ J^\alpha _{0+, c}\left( D^\alpha _{0+,c}f\right) \right] ^\wedge _M(\nu +it) = (c-\nu -it)^{-\alpha }\left[ D^\alpha _{0+,c}f\right] ^\wedge _M(\nu +it)\\ =(c- \nu -it)^{-\alpha }(c - \nu -it)^{\alpha }[f]^\wedge _M(\nu +it) = [f]^\wedge _M(\nu +it) \end{aligned}$$

and so the assertion again follows by the uniqueness theorem. Part (b”) is carried out similarly. \(\square \)

Theorem 9

Let \(\alpha , \beta >0\) be fixed with \(\beta > \alpha .\)

  • (a”’) Let \(f \in {\mathcal X}^\alpha _{[a,b]}\) and \(c \in ]a,b].\) If \(D^\alpha _{0+,c}f \in Dom J^\beta _{0+,c},\) then

    $$\begin{aligned} \left( J^\beta _{0+, c}\left( D^\alpha _{0+,c}f\right) \right) (x) = (J^{\beta - \alpha }_{0+,c}f)(x) \,a.e. \,x \in \mathbb {R}^+. \end{aligned}$$
  • (b”’) Let \(c,\nu \in [a,b]\) with \(\nu < c.\) If \(f \in Dom J^{\beta }_{0+,c}\cap X_{[a,b]}\) is such that \(J^{\beta }_{0+,c}f \in {\mathcal X}^\alpha _\nu ,\) then

    $$\begin{aligned} \left( D^\alpha _{0+,c}J^\beta _{0+, c}f\right) (x) = (J^{\beta - \alpha }_{0+,c}f)(x),\,a.e. \,x \in \mathbb {R}^+. \end{aligned}$$

Proof

The proof is essentially the same as in Theorem 8 taking Mellin transforms in the space \(X_\nu .\) \(\square \)

Concerning the strong fractional Mellin derivatives, we have

Theorem 10

Let \(\alpha >0\) be fixed.

  1. (i)

    Let \(f \in W^\alpha _{X_c}\) be such that \(\text {s-}\Theta ^\alpha _cf \in Dom J^\alpha _{0+,c}\) and \(J^\alpha _{0+,c}(\text {s-}\Theta ^\alpha _cf) \in X_c.\) Then

    $$\begin{aligned} J^\alpha _{0+,c}(\text {s-}\Theta ^\alpha _cf)(x) = f(x),\,\,\text {a.e}\,x \in \mathbb {R}^+. \end{aligned}$$
  2. (ii)

    Let \(f \in W^\alpha _{X_{[a,b]}}\) be such that \(\text {s-}\Theta ^\alpha _cf \in Dom J^\alpha _{0+,c},\) for \(c \in ]a,b[.\) Then

    $$\begin{aligned} J^\alpha _{0+,c}(\text {s-}\Theta ^\alpha _cf)(x) = f(x),\,\,\text {a.e}\,x \in \mathbb {R}^+. \end{aligned}$$

Proof

(i) By assumptions, we can compute the Mellin transform of the function \(J^\alpha _{0+,c}(\text {s-}\Theta ^\alpha _cf),\) on the line \(s=c+it,\) obtaining, as before,

$$\begin{aligned} \left[ J^\alpha _{0+,c}\left( \text {s-}\Theta ^\alpha _cf\right) \right] ^\wedge _M (c +it) = [f]^\wedge _M(c +it),\,\,(t \in \mathbb {R}). \end{aligned}$$

Analogously (ii) follows, by taking Mellin transforms on \(s=\nu +it,\) with \(\nu < c.\) \(\square \)

Theorem 11

Let \(\alpha >0\) be fixed.

  1. (i)

    Let \(f \in X_c\) be such that \(J^\alpha _{0+,c}f \in W^\alpha _{X_c}.\) Then

    $$\begin{aligned} \text {s-}\Theta ^\alpha _c\left( J^\alpha _{0+,c}f\right) (x) = f(x),\,\,\text {a.e}\,x \in \mathbb {R}^+. \end{aligned}$$
  2. (ii)

    Let \(f \in X_{[a,b]}\) be such that \(J^\alpha _{0+,c}f \in W^\alpha _{X_{\nu }},\) \(\alpha >0\) and \(c \in ]a,b], \nu < c.\) Then

    $$\begin{aligned} \text {s-}\Theta ^\alpha _c\left( J^\alpha _{0+,c}f\right) (x) = f(x),\,\,\text {a.e}\,x \in \mathbb {R}^+. \end{aligned}$$

Proof

The proof is essentially the same as for the previous theorem. \(\square \)

In order to state an extension to the fractional setting of the equivalence theorem proved in [18] Theorem 10, we introduce the following subspace of \(W^\alpha _{X_c},\) for \(\alpha >0\) and \(c \in \mathbb {R},\)

$$\begin{aligned} \widetilde{W}^\alpha _{X_c} =\{f \in W^\alpha _{X_c} : \text {s-}\Theta ^\alpha _cf \in Dom J^\alpha _{0+,c} \,\text {and}\, J^\alpha _{0+,c}(\text {s-}\Theta ^\alpha _cf) \in X_c\}. \end{aligned}$$

Theorem 12

Let \(f\in X_{c}\) and \(\alpha >0.\) The following four assertions are equivalent

  1. (i)

    \(f\in \widetilde{W}^\alpha _{X_{c}}\).

  2. (ii)

    There is a function \(g_1\in X_{c}\cap Dom J^\alpha _{0+,c}\) with \(J^\alpha _{0+,c}g_1 \in X_c\) such that

    $$\begin{aligned} \lim _{h\rightarrow 1} \bigg \Vert \frac{\Delta ^{\alpha ,c}_h f}{(h-1)^\alpha } - g_1 \bigg \Vert _{X_c} =0. \end{aligned}$$
  3. (iii)

    There is \(g_2\in X_{c} \cap Dom J^\alpha _{0+,c}\) with \(J^\alpha _{0+,c}g_2 \in X_c\) such that

    $$\begin{aligned} (-it)^\alpha M[f](c+it) = M[g_2](c+it). \end{aligned}$$
  4. (iv)

    There is \(g_3\in X_{c}\cap Dom J^\alpha _{0+,c}\) such that \(J^\alpha _{0+,c}g_3 \in X_c,\) and

    $$\begin{aligned} f(x) = \frac{1}{\Gamma (\alpha )} \int _0^x \bigg (\frac{u}{x}\bigg )^c \bigg ( \log \frac{x}{u}\bigg )^{\alpha -1} g_3(u) \frac{du}{u}\quad a.e. \quad x\in \mathbb {R^+}. \end{aligned}$$

If one of the above assertions is satisfied, then \(D^\alpha _{0+,c} f(x) =\) s-\(\Theta ^\alpha _c f(x)= g_1 = g_2= g_3\) a.e. \(x\in \mathbb {R^+}.\)

Proof

It is easy to see that (i) implies (ii) and (ii) implies (iii) by Theorem 1. We prove now (iii) implies (iv). Let \(g_2\in X_{c}\) be such that (iii) holds. Then, putting \(g_3= g_2\) we have, by Proposition 9,

$$\begin{aligned} M[J^\alpha _{0+,c} g_2](c+it) = (-it)^{-\alpha }M[g_3](c+it) = M[f](c + it). \end{aligned}$$

Thus, by (iii), the assertion follows immediately from the identity theorem for Mellin transforms. Finally we prove that (iv) implies (i). By (iv), we have in particular that \(J^\alpha _{0+,c}g_3 \in X_{c,loc}.\) This implies that \(J^\alpha _{0+,c}g_3 \in Dom J^{m-\alpha }_{0+,c},\) since \(0 <m-\alpha < 1.\) So, by the semigroup property (Theorem 2), \(g_3 \in Dom J^m_{0+,c}.\) Therefore the assumptions of part (b) of Theorem 3 are satisfied and we have

$$\begin{aligned} \left( D^\alpha _{0+,c}f\right) (x) = \left( D^\alpha _{0+,c} \left( J^\alpha _{0+,c} g_3\right) \right) (x) = g_3(x) \quad a.e. \quad x\in \mathbb {R^+}. \end{aligned}$$

So the assertion follows. \(\square \)

Analogous equivalence theorems hold for the spaces \(W^\alpha _{X_{[a,b]}}.\)

6 Some Particular Applications

In this section we discuss some basic examples.

  1. 1.

    The first example, also discussed in [25], Lemma 3, and in Property 2.25 of [42], and used in the proof of Propositions 6 and 15, is the following: Consider the function \(g(x) = x^b,\) \(b \in \mathbb {R}.\) Then for any \(c \in \mathbb {R}^+\) such that \(c+b > 0\) we have \(g \in Dom J^\alpha _{0+,c},\) and

    $$\begin{aligned} \left( J^\alpha _{0+,c}g\right) (x) = (c+b)^{-\alpha }x^b. \end{aligned}$$

    In particular, for \(b>0\) and \(c=0\) we get \(\left( J^\alpha _{0+}g\right) (x) = b^{-\alpha }x^b.\) Analogously, we have also

    $$\begin{aligned} \left( D^\alpha _{0+,c}g\right) (x) = (c+b)^{\alpha }x^b. \end{aligned}$$

    This also nicely illustrates the fundamental theorem in the fractional frame (a numerical sketch of these formulae is given at the end of this section). It should be noted that in this case we cannot compute \(J^\alpha _{0+}1\) and \(D^\alpha _{0+}1\) since the function \(g(t) = 1,\) corresponding to \(b=0,\) is not in the domain of \(J^\alpha _{0+}.\) However we can compute \(J^\alpha _{0+,c}1\) and \(D^\alpha _{0+,c} 1,\) with \(c>0,\) obtaining easily \(\left( J^\alpha _{0+,c} 1\right) (x) = c^{-\alpha },\) and \(\left( D^\alpha _{0+,c} 1\right) (x) = c^\alpha .\) The last relation follows from \(\delta ^m(x^c c^{\alpha - m}) = c^\alpha x^c,\) for \(m-1 < \alpha < m,\) which is proved by an easy induction. Moreover we could also calculate \(J^\alpha _{a+}1\) and \(D^\alpha _{a+}1,\) with \(a>0\) in place of \(0\) in the definitions of the Hadamard-type integrals and derivatives (see [42]).

  2. 2.

    As a second example, let us consider the function \(g_k(x) = \log ^kx,\) for \(k \in \mathbb {N}.\) For any \(\alpha >0\) and \(c>0\) we have \(g_k \in DomJ^\alpha _{0+,c}\) and by a change of variables and using the binomial theorem, we can write

    $$\begin{aligned} \left( J^\alpha _{0+,c}g_k\right) (x)= & {} \frac{1}{\Gamma (\alpha )}\int _0^{+\infty } e^{-cv} v^{\alpha -1}(\log x - v)^k dv\\= & {} \frac{1}{\Gamma (\alpha )} \sum _{j=0}^k (-1)^{k-j} \left( \begin{array}{c} k\\ j \end{array} \right) \frac{\Gamma (\alpha +k -j)}{c^{\alpha +k -j}} \log ^jx. \end{aligned}$$

    Putting

    $$\begin{aligned} B_\alpha (k,j) := \frac{\Gamma (\alpha +k-j)}{\Gamma (\alpha )} = \prod _{\nu =1}^{k-j} (\alpha +k -j -\nu ), \end{aligned}$$

    we finally obtain

    $$\begin{aligned} \left( J^\alpha _{0+,c}g_k\right) (x) = \sum _{j=0}^k (-1)^{k-j} \left( \begin{array}{c} k\\ j \end{array} \right) \frac{B_\alpha (k,j)}{c^{\alpha +k -j}} \log ^jx. \end{aligned}$$

    For the fractional derivative, putting \(m = [\alpha ] +1,\) we have

    $$\begin{aligned} \left( D^\alpha _{0+,c} g_k\right) (x) = x^{-c}\sum _{j=0}^k (-1)^{k-j} \left( \begin{array}{c} k\\ j \end{array} \right) \frac{B_{m-\alpha }(k,j)}{c^{m-\alpha +k -j}} \delta ^m(x^c \log ^j x). \end{aligned}$$

    In particular for \(k=1,\) we have

    $$\begin{aligned} \left( D^\alpha _{0+,c}g_1\right) (x)= & {} x^{-c}\bigg [\frac{-B_{m-\alpha }(1,0)}{c^{m-\alpha +1}}\delta ^m x^c + \frac{B_{m-\alpha }(1,1)}{c^{m-\alpha }} \delta ^m (x^c \log x)\bigg ]\\ {}= & {} x^{-c}\bigg [-\frac{m-\alpha }{c^{m-\alpha +1}}\delta ^m x^c + \frac{1}{c^{m-\alpha }} \delta ^m (x^c \log x)\bigg ]. \end{aligned}$$

    Now using an easy induction, \(\delta ^m (x^c \log x) = c^m x^c \log x + mc^{m-1}x^c,\) thus we finally obtain the formula:

    $$\begin{aligned} \left( D^\alpha _{0+,c}g_1\right) (x) = \alpha c^{\alpha -1}+ c^\alpha \log x. \end{aligned}$$

    This again illustrates the fundamental theorem of fractional calculus in the Mellin frame. Indeed, it is easy to see that

    $$\begin{aligned} J^\alpha _{0+,c}\left( D^\alpha _{0+,c}g_1\right) (x) = \log x. \end{aligned}$$

    We can obviously obtain formulae for higher values of \(k.\) Note that the assumption \(c >0\) is essential. Indeed as we remarked earlier, for \(c=0,\) the function \(\log x\) does not belong to the domain of the operator \(J^\alpha _{0+}.\) In [42], Property 2.24, some related examples are treated concerning the Hadamard integrals \(J^\alpha _{a+}f,\) with \(a>0\) in place of \(0.\)

  3. 3.

    Let us consider the function \(g(x) = e^{bt},\) \(b \in \mathbb {R}.\) Then for any \(c>0\) and \(\alpha >0,\) we have \(g \in Dom J^\alpha _{0+,c}\) and using the representation formula proved in [25], Lemma 5(i), we have

    $$\begin{aligned} \left( J^\alpha _{0+,c}g\right) (x) = \sum _{k=0}^\infty (c +k)^{-\alpha }\frac{b^k}{k!}x^k, \quad x \in \mathbb {R}^+ \end{aligned}$$

    and the corresponding formula for the derivative, given in (Lemma 5(ii), [25])

    $$\begin{aligned} \left( D^\alpha _{0+,c}g\right) (x) = \sum _{k=0}^\infty (c +k)^{\alpha }\frac{b^k}{k!}x^k, \quad x \in \mathbb {R}^+. \end{aligned}$$

    As already remarked, the assumption \(c>0\) is essential. Indeed, for \(c=0,\) Propositions 6 and 15 imply that similar representations for \(J^\alpha _{0+}\) and \(D^\alpha _{0+}\) hold only if the analytic function \(f\) satisfies \(f(0) = 0.\) Alternative representations are given by Propositions 16 and 17, in terms of Stirling functions. We have, for \(\alpha >0,\)

    $$\begin{aligned} \left( D^\alpha _{0+,c}e^{bt}\right) (x) = e^{bx}\sum _{k=0}^\infty S_c(\alpha , k) x^k b^k\,\,\,\,(x>0) \end{aligned}$$

    and

    $$\begin{aligned} \left( J^\alpha _{0+,c}e^{bt}\right) (x) = e^{bx}\sum _{k=0}^\infty S_c(-\alpha , k) x^k b^k\,\,\,\,(x>0). \end{aligned}$$
  4. 4.

    Let us consider the “sinc” function, which is analytic over the entire real line. Its Taylor series is given by:

    $$\begin{aligned} \text {sinc} (x) = \frac{\sin \pi x}{\pi x} = \sum _{k=0}^\infty (-1)^k\frac{\pi ^{2k}}{(2k+1)!}x^{2k}. \end{aligned}$$

    Moreover it is easy to see that sinc \( \in X_{c, loc}\) for \(c>0,\) while sinc \(\not \in X_{0, loc}.\) Using Lemma 5 in [25] we have immediately

    $$\begin{aligned} \left( J^\alpha _{0+,c}\text {sinc}\right) (x) = \sum _{k=0}^\infty (-1)^k (c+2k)^{-\alpha } \frac{\pi ^{2k}}{(2k+1)!}x^{2k} \end{aligned}$$

    and

    $$\begin{aligned} \left( D^\alpha _{0+,c}\text {sinc}\right) (x) = \sum _{k=0}^\infty (-1)^k (c+2k)^\alpha \frac{\pi ^{2k}}{(2k+1)!}x^{2k}. \end{aligned}$$

    Another representation, in terms of Stirling functions of the second kind, is a consequence of Proposition 13. A formula for the (classical) derivatives of sinc can be found in [29]. For a given \(s \in \mathbb {N},\) differentiating by series, we have

    $$\begin{aligned} (\text {sinc}\,x)^{(s)}= & {} \sum _{k=s}^\infty (-1)^k \frac{\pi ^{2k}}{(2k+1)!}\frac{d^s}{dx^s}x^{2k} = \sum _{k=s}^\infty (-1)^k \frac{\pi ^{2k}}{(2k+1)!}A_{s,k}x^{2k-s}\\= & {} \sum _{k=0}^\infty (-1)^{k+s} \frac{\pi ^{2k +2s}}{(2k+2s+1)!}A_{s,k+s}x^{2k+s}, \end{aligned}$$

    where

    $$\begin{aligned} A_{s,k} = \prod _{\nu = 0}^{s-1} (2k -\nu ). \end{aligned}$$

    Thus, using again Lemma 5 in [25], we have

    $$\begin{aligned} \left( J^\alpha _{0+,c}(\text {sinc}\,t)^{(s)}\right) (x) = \sum _{k=0}^\infty (-1)^{k+s}(c + 2k +s)^{-\alpha } \frac{\pi ^{2k + 2s}}{(2k+2s +1)!}A_{s, k+s}x^{2k+s} \end{aligned}$$

    and

    $$\begin{aligned} (D^\alpha _{0+,c}(\text {sinc}\,t)^{(s)})(x) = \sum _{k=0}^\infty (-1)^{k+s}(c + 2k +s)^\alpha \frac{\pi ^{2k + 2s}}{(2k+2s +1)!}A_{s, k+s}x^{2k+s}. \end{aligned}$$

    Note that for every odd \(s,\) the above formula is valid also for \(c=0,\) since in this instance \((\text {sinc}\,x)^{(s)} \in X_{0,loc}.\)
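The explicit formulae of Examples 1 and 3 lend themselves to a direct numerical cross-check; this is the sketch referred to in Example 1 (the helper name and the parameter values are ours). The quadrature uses the substitution \(u = xe^{-w^{1/\alpha }},\) which removes the endpoint singularity of the kernel of \(J^\alpha _{0+,c}\):

```python
import math
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def J_hadamard(g, x, alpha, c):
    # (J^alpha_{0+,c} g)(x) after the substitution u = x*exp(-w**(1/alpha))
    f = lambda w: g(x * np.exp(-w ** (1.0 / alpha))) * np.exp(-c * w ** (1.0 / alpha))
    val, _ = quad(f, 0.0, np.inf)
    return val / gamma(alpha + 1.0)

alpha, c, x = 0.7, 1.5, 1.3

# Example 1: g(u) = u^b with c + b > 0; expected value (c+b)^(-alpha) x^b
b = 2.0
print(J_hadamard(lambda u: u ** b, x, alpha, c), (c + b) ** (-alpha) * x ** b)

# Example 3: g(u) = e^(bu); expected value sum_k (c+k)^(-alpha) (bx)^k / k!
b = 0.5
series = sum((c + k) ** (-alpha) * (b * x) ** k / math.factorial(k) for k in range(50))
print(J_hadamard(lambda u: np.exp(b * u), x, alpha, c), series)
```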

7 Applications to Partial Differential Equations

In this section we apply our theory to certain fractional differential equations. We notice here that the use of Mellin analysis in the theory of differential equations was considered in [4], dealing with Cauchy problems for ordinary differential equations involving Mellin derivatives of integral order. In [35], Mellin analysis was applied to the numerical solution of Mellin integral equations. In the fractional case, differential equations have been treated using various types of fractional derivatives, e.g. Riemann–Liouville, Caputo, Hadamard, etc. (see [42]). The use of integral transforms is a very useful and widely used method for certain Cauchy or boundary value problems. However, the use of Mellin transforms in fractional differential equations involving Hadamard derivatives is so far not common.

Here we will examine certain boundary value problems related to an evolution equation and to a diffusion problem, using the Mellin transform approach and using Hadamard derivatives. In the first example, the fractional evolution equation originates from a Volterra integral equation with a special kernel. The second example is a fractional diffusion equation.

7.1 An Integro-Differential Equation

Let \(\alpha \in ]0,1[\) be fixed, and let

$$\begin{aligned} K_\alpha (x,u):=\frac{1}{\Gamma (1-\alpha )}\bigg (\log \frac{x}{u}\bigg )^{-\alpha }\chi _{]0,x[}(u),\quad x >0. \end{aligned}$$

Let us consider the following problem: find a function \(w:\mathbb {R}^+ \times \mathbb {R}^+ \rightarrow \mathbb {C}\) such that \(w(x,0) = f(x),\) for a fixed boundary data \(f:\mathbb {R}^+ \rightarrow \mathbb {C},\) and

$$\begin{aligned} Ax \frac{\partial }{\partial x}\int _0^\infty K_\alpha (x,u) w(u,y)\frac{du}{u} + B \frac{\partial }{\partial y}w(x,y) = 0, \end{aligned}$$
(14)

\(A, B\) being two positive constants.

Now, Eq. (14) can be rewritten as a fractional partial differential evolution equation in the Hadamard sense, as

$$\begin{aligned} A \left( D^\alpha _{0+}w(\cdot , y)\right) (x) + B \frac{\partial }{\partial y}w(x,y) = 0,\,\,(x,y \in \mathbb {R}^+). \end{aligned}$$

Without loss of generality we can assume \(A = B = 1,\) thus

$$\begin{aligned} \left( D^{\alpha }_{0+} w(\cdot , y)\right) (x)= -\frac{\partial }{\partial y} w(x,y),\,\,(x,y \in \mathbb {R}^+) \end{aligned}$$
(15)

with initial data \(w(x,0) = f(x),\,x>0.\) We seek a function \(w: \mathbb {R}^+ \times \mathbb {R}^+ \rightarrow \mathbb {C}\) satisfying the following properties

  1. (1)

    \(w(\cdot ,y) \in {\mathcal X}^{\alpha }_{[a, 0]}\) for every \(y>0\) and for a fixed \(a<0\)

  2. (2)

    there is a function \(K\in X_\nu ,\) \(\nu \in [a,0[,\) such that for every \(x,y>0\)

    $$\begin{aligned} \bigg | \frac{\partial }{\partial y} w(x,y) \bigg | \le K(x) \end{aligned}$$
  3. (3)

    for a fixed \(f \in X_\nu ,\) we have \( \lim _{y\rightarrow 0^+} \Vert w(\cdot , y) - f(\cdot ) \Vert _{X_\nu } =0.\)

Assuming that such a function exists we apply the Mellin transform with respect to the variable \(x\) on the line \(\nu + it\) to both sides of (15), obtaining

$$\begin{aligned} \left[ D^{\alpha }_{0+} w(\cdot , y)\right] ^{\wedge }_M(\nu +it)= -\bigg [\frac{\partial }{\partial y} w(\cdot , y)\bigg ]^{\wedge }_M(\nu +it). \end{aligned}$$

Using Theorems 1 and 5 we have

$$\begin{aligned} \left[ D^{\alpha }_{0+} w(\cdot , y)\right] ^{\wedge }_M(\nu +it)=(-\nu -it)^{\alpha }[w(\cdot , y)]^{\wedge }_M(\nu +it). \end{aligned}$$

Moreover by property (2),

$$\begin{aligned} \bigg [\frac{\partial }{\partial y} w(\cdot , y)\bigg ]^{\wedge }_M(\nu +it)=\int _0^{+\infty } x^{\nu +it-1} \frac{\partial }{\partial y} w(x,y) dx = \frac{\partial }{\partial y} [w(\cdot ,y)]^{\wedge }_M (\nu +it), \end{aligned}$$

thus Eq. (15) is transformed into a first order ordinary differential equation

$$\begin{aligned} (-\nu -it)^{\alpha }[w(\cdot ,y)]^{\wedge }_M(\nu +it) = -\frac{\partial }{\partial y} [w(\cdot ,y)]^{\wedge }_M(\nu +it) \end{aligned}$$

which has the solution

$$\begin{aligned}{}[w(\cdot ,y)]^{\wedge }_M(\nu +it) = A(\nu +it) e^{-(-\nu -it)^{\alpha }y} \end{aligned}$$

where \(A(\nu +it)\) is independent of \(y.\) The determination of \(A(\nu +it)\) follows from condition (3); indeed \( [w(\cdot ,y)]^{\wedge }_M(\nu +it) \rightarrow [f]^{\wedge }_M (\nu +it)\) as \(y\rightarrow 0^+,\) uniformly with respect to \(t\in \mathbb {R},\) and so \(A(\nu +it) = [f]^{\wedge }_M (\nu +it),\) obtaining

$$\begin{aligned}{}[w(\cdot ,y)]^{\wedge }_M(\nu +it)= [f]^{\wedge }_M (\nu +it) e^{-(-\nu -it)^{\alpha }y}. \end{aligned}$$

Now putting \(s = -\nu -it,\) we have Re \(s = -\nu >0\) and so, since \(y>0,\) the inverse Mellin transform of \(e^{-y s^{\alpha }}\) exists and it is given by (see Theorem 6 in [18])

$$\begin{aligned} G(x,y):= \frac{1}{2\pi }\int _{-\infty }^{+\infty } e^{-(-\nu -it)^\alpha y}x^{-\nu -it}dt. \end{aligned}$$
(16)

Thus if the solution of (15) exists, by the Mellin–Parseval formula (see [18]), it has the form

$$\begin{aligned} w(x,y) = \int _0^{+\infty } f(v) G(\frac{x}{v},y)\frac{dv}{v},\,x,y>0. \end{aligned}$$

In order to verify that the function \(w(x,y)\) is actually a solution of the problem we make a direct substitution. We have, by differentiating under the integral

$$\begin{aligned}&-\frac{\partial w}{\partial y}(x,y) = \int _0^{+\infty } f(v) \bigg [\frac{1}{2\pi }\int _{-\infty }^{+\infty }\bigg (\frac{x}{v}\bigg )^{-\nu -it}(-\nu -it)^\alpha e^{-(-\nu -it)^\alpha y}dt\bigg ]\frac{dv}{v}\\&\quad = \frac{1}{2\pi }\int _{-\infty }^{+\infty }(-\nu -it)^\alpha x^{-\nu -it} e^{-(-\nu -it)^\alpha y}[f]^\wedge _M(\nu +it)dt. \end{aligned}$$

Now, let us consider

$$\begin{aligned} (D^{\alpha }_{0+} w(\cdot , y))(x) = \delta \left( J^{1-\alpha }_{0+}w(\cdot ,y)\right) (x). \end{aligned}$$

We have

$$\begin{aligned}&\left( D^{\alpha }_{0+} w(\cdot , y)\right) (x) = (x \frac{\partial }{\partial x})\bigg [\frac{1}{\Gamma (1-\alpha )} \int _0^x \bigg (\log \frac{x}{u}\bigg )^{-\alpha }\\&\quad \times \bigg (\int _0^{\infty } f(v) G(\frac{u}{v},y)\frac{dv}{v}\bigg )\frac{du}{u}\bigg ]\\&\quad =(x \frac{\partial }{\partial x})\bigg [\frac{1}{\Gamma (1-\alpha )} \int _1^{+\infty } (\log z)^{-\alpha }\bigg (\int _0^{\infty } f(v) G(\frac{x}{zv},y)\frac{dv}{v}\bigg )\frac{dz}{z}\bigg ]\\&\quad = \frac{x}{\Gamma (1-\alpha )} \int _1^{+\infty } (\log z)^{-\alpha }\bigg (\int _0^{\infty } f(v) \frac{\partial }{\partial x}G(\frac{x}{zv},y)\frac{dv}{v}\bigg )\frac{dz}{z}. \end{aligned}$$

Since

$$\begin{aligned} \frac{\partial }{\partial x}G(x,y) = \frac{1}{2\pi }\int _{-\infty }^{+ \infty }e^{-(-\nu -it)^\alpha y}(-\nu -it)x^{-\nu -it -1}dt, \end{aligned}$$

putting \(s = -\nu -it,\) we obtain

$$\begin{aligned}&\left( D^{\alpha }_{0+} w(\cdot , y)\right) (x)\\&\quad = \frac{x}{\Gamma (1\!-\!\alpha )} \int _1^{+\infty }\! (\log z)^{-\alpha }\bigg (\int _0^{\infty }\! f(v) \frac{1}{zv} \bigg (\frac{1}{2\pi }\int _{-\infty }^{+ \infty }\!e^{-s^\alpha y}s\bigg (\frac{x}{zv}\bigg )^{s -1}dt\bigg )\frac{dv}{v}\bigg )\frac{dz}{z}\\&\quad =\frac{1}{2\pi }\frac{1}{\Gamma (1-\alpha )} \int _1^{+\infty } (\log z)^{-\alpha }\bigg (\int _{-\infty }^{+\infty } e^{-s^\alpha y}s \bigg (\frac{x}{z}\bigg )^{s}[f]^\wedge _M(-s)dt\bigg ) \frac{dz}{z} \\&\quad = \frac{1}{2\pi }\frac{1}{\Gamma (1-\alpha )} \int _{-\infty }^{+\infty } [f]^\wedge _M(-s) e^{-s^\alpha y}s\bigg (\int _0^{x} \bigg (\log \frac{x}{u}\bigg )^{-\alpha } u^{s} \frac{du}{u}\bigg )dt. \end{aligned}$$

Since Example 1 of Sect. 6 holds for \(c=0\) and complex \(b\) with Re \( b >0,\) we have

$$\begin{aligned} \frac{1}{\Gamma (1-\alpha )} \int _0^x \bigg (\log \frac{x}{u}\bigg )^{-\alpha } u^s \frac{du}{u} = \left( J^{1-\alpha }_{0+}u^s\right) (x) = s^{\alpha -1}x^s, \end{aligned}$$

and so we have

$$\begin{aligned} (D^{\alpha }_{0+} w(\cdot , y))(x) = \frac{1}{2\pi }\int _{-\infty }^{+\infty }s^\alpha x^{s} e^{-s^\alpha y}[f]^\wedge _M(\nu +it)dt, \end{aligned}$$

i.e. the assertion. So we have proved the following

Theorem 13

Under the assumptions imposed, Eq. (15) with the initial data \(f\) has the unique solution given by

$$\begin{aligned} w(x,y) = \int _0^{+\infty } f(v) G(\frac{x}{v},y)\frac{dv}{v},\,x,y>0, \end{aligned}$$

where the function \(G(x,y)\) is defined in (16).

Note that for \(\alpha = 1/2,\) we have a closed form for the function \(G(x,y).\) Indeed, using formula 3.7 p. 174 in [51], we obtain

$$\begin{aligned} G(x,y) = \frac{y}{2\sqrt{\pi }} (\log x)^{-3/2}\text {exp}\bigg (-\frac{y^2}{4 \log x}\bigg )\chi _{]1,+\infty [}(x) \end{aligned}$$

and the solution is then given by

$$\begin{aligned} w(x,y) = \frac{y}{2\sqrt{\pi }} \int _0^{1} f(xv) (-\log v)^{-3/2}\text {exp}\bigg (\frac{y^2}{4 \log v}\bigg )\frac{dv}{v}. \end{aligned}$$
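The closed form above can be compared directly with the contour integral (16). The following sketch is purely illustrative: the line \(\nu = -1\) and the truncation parameters are arbitrary admissible choices; it evaluates (16) for \(\alpha = 1/2\) by a truncated trapezoidal rule and prints the result next to the closed-form kernel for a few values of \(x>1\):

```python
import numpy as np

def G_contour(x, y, nu=-1.0, alpha=0.5, T=800.0, n=400001):
    # numerical evaluation of (16), truncated to |t| <= T; here Re(-nu - it) = -nu > 0
    # and the principal branch of the power is used; plain trapezoidal rule
    t, dt = np.linspace(-T, T, n, retstep=True)
    s = -nu - 1j * t
    g = np.exp(-s ** alpha * y) * x ** s          # integrand e^{-s^alpha y} x^{-nu-it}
    return (np.sum(g) - 0.5 * (g[0] + g[-1])).real * dt / (2.0 * np.pi)

def G_closed(x, y):
    # closed form of the kernel for alpha = 1/2 (supported on x > 1)
    return y / (2.0 * np.sqrt(np.pi)) * np.log(x) ** (-1.5) * np.exp(-y ** 2 / (4.0 * np.log(x)))

for x in (1.5, 2.0, 4.0):
    print(x, G_contour(x, 1.0), G_closed(x, 1.0))
```

Any admissible line \(\nu <0\) should give the same value, since \(e^{-ys^\alpha }\) is analytic in the half plane Re \(s>0.\)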

Equation (15) was also discussed in [42], but using fractional Caputo derivatives. Our treatment, however, contains complete proofs.

7.2 A Diffusion Equation

For \(\alpha >0\) let us consider the fractional diffusion equation

$$\begin{aligned} \left( D^\alpha _{0+} w(\cdot , y)\right) (x) = \frac{\partial ^2}{\partial y^2}w(x,y), \,\,\,(x,y \in \mathbb {R}^+) \end{aligned}$$
(17)

with the initial condition

$$\begin{aligned} \lim _{y \rightarrow 0^+}\Vert w(\cdot ,y) - f(\cdot )\Vert _{X_0} = 0, \end{aligned}$$

for a fixed \(f \in X_0.\)

We look for a function \(w: \mathbb {R}^+\times \mathbb {R}^+ \rightarrow \mathbb {C}\) satisfying the following assumptions:

(1) \(w(\cdot ,y) \in {\mathcal X}^{\alpha }_{0}\) for every \(y>0,\) and there exists \(N>0\) such that \(\Vert w(\cdot ,y)\Vert _{X_0} \le N\) for every \(y \in \mathbb {R}^+;\)

(2) there are functions \(K_1, K_2 \in X_0\) such that for every \(x,y>0\)

$$\begin{aligned} \bigg | \frac{\partial }{\partial y} w(x,y) \bigg | \le K_1(x),\quad \bigg | \frac{\partial ^2 }{\partial y^2} w(x,y) \bigg | \le K_2(x); \end{aligned}$$

(3) for a fixed \(f \in X_0,\) we have \( \lim _{y\rightarrow 0^+} \Vert w(\cdot , y) - f(\cdot ) \Vert _{X_0} =0.\)

Using the same approach as in the previous example, and taking the Mellin transforms of both sides of Eq. (17), we obtain

$$\begin{aligned} \left[ D^{\alpha }_{0+} w(\cdot , y)\right] ^{\wedge }_M(it)= \bigg [\frac{\partial ^2}{\partial y^2} w(\cdot , y)\bigg ]^{\wedge }_M(it). \end{aligned}$$

Using Theorems 1 and 5 we have

$$\begin{aligned}{}[D^{\alpha }_{0+} w(\cdot , y)]^{\wedge }_M(it)=(-it)^{\alpha }[w(\cdot , y)]^{\wedge }_M(it). \end{aligned}$$

Moreover, by assumption (2),

$$\begin{aligned} \bigg [\frac{\partial ^2}{\partial y^2} w(\cdot , y)\bigg ]^{\wedge }_M(it)= \frac{\partial ^2}{\partial y^2} [w(\cdot ,y)]^{\wedge }_M (it), \end{aligned}$$

thus Eq. (17) is transformed into the second-order linear ordinary differential equation

$$\begin{aligned} (-it)^\alpha z_t(y) = z''_t(y),\quad y >0, \end{aligned}$$
(18)

with respect to the function

$$\begin{aligned} z_t(y):= [w(\cdot ,y)]^\wedge _M(it),\,\,\,t \in \mathbb {R}. \end{aligned}$$

If \(t=0\) the solution is the linear function \(z_0(y) = A(0) + B(0)y,\) while for \(t \ne 0,\) the characteristic equation associated with (18)

$$\begin{aligned} \lambda ^2 = \exp (\alpha \log (-it)), \end{aligned}$$

has two complex solutions

$$\begin{aligned} \lambda _1:= |t|^{\alpha /2}\bigg (\cos \frac{\alpha \pi }{4} +i \sin \frac{\alpha \pi }{4}(-\mathop {\mathrm {sgn}}t)\bigg ), \end{aligned}$$
$$\begin{aligned} \lambda _2:= -|t|^{\alpha /2}\bigg (\cos \frac{\alpha \pi }{4} +i \sin \frac{\alpha \pi }{4}(-\mathop {\mathrm {sgn}}t)\bigg ). \end{aligned}$$

Thus, for \(t \ne 0,\) we obtain the general solution

$$\begin{aligned} z_t(y)= & {} A(t) e^{-|t|^{\alpha /2}(\cos (\alpha \pi /4) + i(-\mathop {\mathrm {sgn}}t)\sin (\alpha \pi /4))y} \\&+ B(t) e^{|t|^{\alpha /2}(\cos (\alpha \pi /4) + i(-\mathop {\mathrm {sgn}}t)\sin (\alpha \pi /4))y}. \end{aligned}$$
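That \(\lambda _1\) and \(\lambda _2\) really square to \((-it)^\alpha ,\) computed with the principal branch of the logarithm, can be confirmed numerically; a minimal sketch (Python with mpmath; sample values only):

import mpmath as mp

mp.mp.dps = 30
alpha = mp.mpf('1.5')                        # sample order
for t in (mp.mpf('2.7'), mp.mpf('-0.4')):    # test both signs of t
    lam1 = abs(t)**(alpha/2) * (mp.cos(alpha*mp.pi/4) - 1j*mp.sign(t)*mp.sin(alpha*mp.pi/4))
    print(lam1**2, (-1j*t)**alpha)           # the two values should coincide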

Now, let \(\alpha \) be such that \(\cos (\alpha \pi /4) >0.\) By assumption (3), \(z_t(y)\) converges uniformly to \([f]^\wedge _M(it)\) as \(y\rightarrow 0^+.\) Moreover, by assumption (1), there exists a constant \(N>0\) such that \(|z_t(y)| \le N\) for every \(t \in \mathbb {R}\) and \(y>0.\) This means that we must have \(B(t) = 0\) for every \(t \in \mathbb {R},\) thus

$$\begin{aligned} z_t(y) = [w(\cdot , y)]^\wedge _M(it) = [f]^\wedge _M(it) e^{-|t|^{\alpha /2}(\cos (\alpha \pi /4) + i(-\mathop {\mathrm {sgn}}t)\sin (\alpha \pi /4))y}. \end{aligned}$$

Now, the function

$$\begin{aligned} e^{-|t|^{\alpha /2}(\cos (\alpha \pi /4) + i(-\mathop {\mathrm {sgn}}t)\sin (\alpha \pi /4))y} \end{aligned}$$

is summable as a function of \(t \in \mathbb {R},\) and its inverse Mellin transform is given by

$$\begin{aligned} G(x,y):= \frac{1}{2\pi }\int _{-\infty }^\infty e^{-|t|^{\alpha /2}(\cos (\alpha \pi /4) + i(-\mathop {\mathrm {sgn}}t)\sin (\alpha \pi /4))y} x^{-it}dt. \end{aligned}$$

Then if a solution exists it has the form

$$\begin{aligned} w(x,y) = \int _0^\infty f(u) G(\frac{x}{u}, y)\frac{du}{u},\,\,\,\,x,y >0. \end{aligned}$$

Analogously, if \(\alpha \) is such that \(\cos (\alpha \pi /4) <0,\) then we have \(A(t)=0\) for every \(t \in \mathbb {R},\) and the corresponding function \(G(x,y)\) takes the form

$$\begin{aligned} G(x,y) = \frac{1}{2\pi }\int _{-\infty }^\infty e^{-|t|^{\alpha /2}(|\cos (\alpha \pi /4)| + i(-\mathop {\mathrm {sgn}}t)\sin (\alpha \pi /4))y} x^{-it}dt. \end{aligned}$$

That the above function is really a solution can be proved, as before, by a direct substitution into the differential equation.

The function \(G(x,y)\) can be written in a simpler form. Indeed, using Euler’s formula and putting \(a:= |\cos (\alpha \pi /4)|,\) \(b:= \sin (\alpha \pi /4),\) we can write:

$$\begin{aligned} G(x,y)= & {} \frac{1}{2\pi }\int _0^\infty e^{-t^{\alpha /2}(a-ib)y}(\cos (t \log x) - i\sin (t \log x))dt \\&+ \frac{1}{2\pi }\int _0^\infty e^{-t^{\alpha /2}(a+ib)y}(\cos (t \log x) + i\sin (t \log x))dt\\= & {} \frac{1}{\pi }\int _0^\infty e^{-t^{\alpha /2}ay}[\cos (t \log x)\cos (t^{\alpha /2}by) + \sin (t\log x) \sin (t^{\alpha /2}by)]dt\\= & {} \frac{1}{\pi }\int _0^\infty e^{-t^{\alpha /2}ay}\cos (t \log x - t^{\alpha /2}by)dt. \end{aligned}$$
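The equality between the complex and the real form can also be confirmed numerically for sample parameters; a rough sketch (Python with mpmath), here for \(\alpha =3,\) a case with \(\cos (\alpha \pi /4)<0,\) truncating the integrals at \(|t|=30,\) beyond which the integrands are negligible:

import mpmath as mp

mp.mp.dps = 20
alpha, x, y = mp.mpf('3'), mp.mpf('2'), mp.mpf('1')     # sample values with cos(alpha*pi/4) < 0
a, b = abs(mp.cos(alpha*mp.pi/4)), mp.sin(alpha*mp.pi/4)

# complex form: (1/2pi) int_R exp(-|t|^(alpha/2)(a - i sgn(t) b) y) x^(-it) dt
f = lambda t: mp.exp(-abs(t)**(alpha/2) * (a - 1j*mp.sign(t)*b) * y) * x**(-1j*t)
G_complex = mp.quad(f, [-30, -10, -3, 0, 3, 10, 30]) / (2*mp.pi)

# real form: (1/pi) int_0^inf exp(-t^(alpha/2) a y) cos(t log x - t^(alpha/2) b y) dt
g = lambda t: mp.exp(-t**(alpha/2)*a*y) * mp.cos(t*mp.log(x) - t**(alpha/2)*b*y)
G_real = mp.quad(g, [0, 3, 10, 30]) / mp.pi

print(G_complex)   # imaginary part should be ~ 0
print(G_real)      # should agree with the real part above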

For \(\alpha = 1,\) using Proposition 13, we obtain the (non-fractional) equation

$$\begin{aligned} x\frac{\partial w}{\partial x}(x,y) = \frac{\partial ^2 w}{\partial y^2}(x,y),\,\,\,\,x,y \in \mathbb {R}^+ \end{aligned}$$

and using our approach the corresponding problem has a unique solution of the form

$$\begin{aligned} w(x,y) = \int _0^\infty f(u) G\left( \frac{x}{u}, y\right) \frac{du}{u},\,\,\,\,x,y >0, \end{aligned}$$

where

$$\begin{aligned} G(x,y) = \frac{1}{\pi }\int _0^\infty \exp (-\frac{\sqrt{2t}y}{2})\cos (\frac{\sqrt{2t}y}{2}- t \log x) dt. \end{aligned}$$

This integral has a closed form. Indeed, by the elementary substitution \(t = u^2,\) we can write

$$\begin{aligned} I:= & {} \int _0^\infty \exp \left( -\frac{\sqrt{2t}y}{2}\right) \cos \left( \frac{\sqrt{2t}y}{2}- t \log x\right) dt \\= & {} 2\int _0^\infty u\exp \left( -\frac{\sqrt{2}yu}{2}\right) \cos \left( \frac{\sqrt{2}yu}{2}- u^2 \log x\right) du. \end{aligned}$$

Now, the above integral, depending on the sign of \(\log x,\) can be reduced to one of the integrals (\(p>0\)) (see [38], p. 499)

$$\begin{aligned} \int _0^\infty v e^{-pv}\cos (2v^2 -pv)dv = \frac{p\sqrt{\pi }}{8}\exp (-p^2/4), \end{aligned}$$

if \(\log x>0\) and

$$\begin{aligned} \int _0^\infty v e^{-pv}\cos (2v^2 + pv)dv = 0, \end{aligned}$$

if \(\log x \le 0.\) Indeed, if we put \(u= \sqrt{2/\log x}v\) in the first case, and \(u=\sqrt{2/|\log x|}v\) in the second case, we get easily

$$\begin{aligned} I = \frac{y\sqrt{\pi }}{2(\log x)^{3/2}}\exp \bigg (-\frac{y^2}{4\log x}\bigg ),\,\,\,x>1, \end{aligned}$$

while \(I=0\) for \(0<x<1.\) Therefore,

$$\begin{aligned} G(x,y) = \left\{ \begin{array}{ll} \displaystyle \frac{y}{2\sqrt{\pi }}\,\displaystyle \frac{1}{(\log x)^{3/2}}\exp \bigg (-\displaystyle \frac{y^2}{4\log x}\bigg ), \quad \quad \,x>1,\,y>0 \\ \\ 0, \quad \quad \,0<x\le 1,\,y>0. \end{array} \right. \end{aligned}$$
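The two tabulated integrals from [38] used in this computation can themselves be checked numerically; a minimal sketch (Python with mpmath; the range is split into unit subintervals to resolve the increasing oscillation, and the tail beyond \(v=25\) is negligible for the chosen \(p\)):

import mpmath as mp

mp.mp.dps = 15
p = mp.mpf('1.5')   # sample value, p > 0

f1 = lambda v: v * mp.exp(-p*v) * mp.cos(2*v**2 - p*v)
f2 = lambda v: v * mp.exp(-p*v) * mp.cos(2*v**2 + p*v)
pts = list(range(26))   # unit subintervals 0, 1, ..., 25

print(mp.quad(f1, pts), p*mp.sqrt(mp.pi)/8 * mp.exp(-p**2/4))   # the two values should agree
print(mp.quad(f2, pts))                                         # should be ~ 0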

For \(\alpha = 1/2,\) the equation becomes

$$\begin{aligned} (D^{1/2}_{0+} w(\cdot , y))(x) = \frac{\partial ^2}{\partial y^2}w(x,y),\,\,\,\,(x,y \in \mathbb {R}^+) \end{aligned}$$

and putting \(a:= \cos (\pi /8) = \sqrt{2 + \sqrt{2}}/2, \,b:= \sin (\pi /8) = \sqrt{2 - \sqrt{2}}/2,\) we obtain the following representation of the function \(G(x,y):\)

$$\begin{aligned} G(x,y) = \frac{1}{\pi }\int _0^\infty \exp (-\root 4 \of {t}ay) \cos (t \log x - \root 4 \of {t}by)dt. \end{aligned}$$

For \(\alpha = 4,\) using Proposition 13, our equation has the form

$$\begin{aligned} \sum _{k=0}^4 S_0(4,k) x^k\bigg (\frac{\partial }{\partial x}\bigg )^{(k)}w(x,y) = \frac{\partial ^2 w}{\partial y^2}(x,y),\,\,(x,y \in \mathbb {R}^+) \end{aligned}$$
(19)

i.e.

$$\begin{aligned}&x^4\frac{\partial ^4 w}{\partial x^4}(x,y)+ 6x^3\frac{\partial ^3w}{\partial x^3}(x,y) + 7x^2\frac{\partial ^2 w}{\partial x^2}(x,y) + x\frac{\partial w}{\partial x}(x,y) =\frac{\partial ^2 w}{\partial y^2}(x,y),\\&\quad (x,y \in \mathbb {R}^+). \end{aligned}$$

In this instance we have \(\cos (\alpha \pi /4) = -1,\) and so the unique solution of our problem for Eq. (19) has the form

$$\begin{aligned} w(x,y) = \int _0^\infty f(u) G\left( \frac{x}{u}, y\right) \frac{du}{u},\,\,\,\,x,y >0, \end{aligned}$$

where

$$\begin{aligned} G(x,y) = \frac{1}{\pi }\int _0^\infty e^{-t^2y}\cos (t \log x) dt. \end{aligned}$$

This integral can be reduced, by an elementary substitution, to the classical integral

$$\begin{aligned} g(v) = \int _0^\infty e^{-t^2} \cos (tv) dt = \frac{\sqrt{\pi }}{2}\exp (-v^2/4), \end{aligned}$$

thus obtaining

$$\begin{aligned} G(x,y) = \frac{1}{2\sqrt{\pi y}}\exp (-\log ^2 x/(4y)). \end{aligned}$$
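Again this reduction can be confirmed numerically; a minimal sketch (Python with mpmath; sample values only):

import mpmath as mp

mp.mp.dps = 25
x, y = mp.mpf('3'), mp.mpf('0.5')   # sample values

G_int = mp.quad(lambda t: mp.exp(-t**2*y) * mp.cos(t*mp.log(x)), [0, mp.inf]) / mp.pi
G_closed = mp.exp(-mp.log(x)**2 / (4*y)) / (2*mp.sqrt(mp.pi*y))

print(G_int, G_closed)   # the two values should coincide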

Another example in the fractional case is \(\alpha = 5/2.\) In this case we have \(a = |\cos ((5/8)\pi )|= \sqrt{2 - \sqrt{2}}/2\) and \(b= \sin ((5/8)\pi ) = \sqrt{2 + \sqrt{2}}/2.\) The corresponding function \(G(x,y)\) is given by

$$\begin{aligned} G(x,y) = \frac{1}{\pi }\int _0^\infty \exp \left( -t^{5/4}\sqrt{2-\sqrt{2}}\,y/2\right) \cos \left( t \log x - t^{5/4}\sqrt{2+\sqrt{2}}\,y/2\right) dt. \end{aligned}$$

The above approach works for every value of \(\alpha \) except those for which \(\cos (\alpha \pi /4) = 0.\) For \(\alpha = 2,\) the resulting wave equation in the Mellin setting reads

$$\begin{aligned} x^2 \frac{\partial ^2}{\partial x^2}w(x,y) + x \frac{\partial }{\partial x}w(x,y) = \frac{\partial ^2}{\partial y^2}w(x,y),\,\,(x,y \in \mathbb {R}^+) \end{aligned}$$

This equation is, however, treated in detail in [18], with different boundary conditions. Experts in the evaluation of integrals could surely obtain more elegant representations of the \(G(x,y)\)-functions.

8 A Short Biography of R.G. Mamedov and Some Historical Notes

Rashid Gamid-oglu Mamedov (changed to Mammadov in 1991), born into a peasant family on December 27, 1931, in the village of Dashsalakhly, Azerbaijan SSR, lost his father at the age of 6 and grew up with his mother and three sisters (Fig. 1).

Fig. 1 A photo of Prof. Rashid Mamedov together with his spouse Flora Mamedova, who now takes her husband’s role in keeping alive Azerbaijani customs among her grandchildren. It was taken in the year of his death, 2000.

After spending the school years 1938–1948 in the middle school of his home village, he was admitted to the Azerbaijan Pedagogical Institute (API) in Baku. In 1952, he graduated from its Mathematics Department with a so-called red diploma with honours (i.e. a diploma cum laude). He was immediately accepted for post-graduate study at the Chair of Mathematical Analysis of API, and defended his PhD thesis (”Kandidatskaya”) entitled ”Some questions of approximation by entire functions and polynomials” in 1955. This dissertation was one basis of the monograph ”Extremal Properties of Entire Functions” published in 1962 by his scientific supervisor I.I. Ibragimov. During the years 1953–1960, R.G. Mamedov was affiliated with the Chair of Mathematical Analysis at API in various positions, first as assistant (1953–1956) and senior lecturer (1956–1957), later as docent (assistant professor, 1957–1960).

In 1960–1963, R.G. Mamedov held a position as senior researcher at the Institute of Mathematics and Mechanics of the Azerbaijan Academy of Science. Free of teaching duties, he published in a very short period of time his fundamental contributions to the theory of approximation by linear operators, which made him known both in the former Soviet Union and abroad. These deep results comprised his ”Doktorskaya” (Habilitation degree) ”Some questions of approximation of functions by linear operators”, submitted to the Leningrad State Pedagogical A.I. Herzen Institute in 1964. At the age of 33, R.G. Mamedov was awarded the Dr. of Phys. and Math. degree and was appointed as full professor to the Chair of Higher Mathematics at the Azerbaijan Polytechnic Institute in Baku. Here he started his remarkable career as university teacher and educator, supervising as many as 23 PhD theses over the years; two of his students obtained the Dr. of Phys. and Math. degree themselves. In 1966, he gave a contributed talk at the ICM in Moscow (Fig. 2).

Fig. 2 Private photo of the 30 Azerbaijani participants at the ICM, held in Moscow 1966, and kindly forwarded to the authors by Prof. Boris Golubov. Prof. Mamedov stands with his large briefcase in the first row, on the extreme left; the President of the Azerbaijani Academy of Sciences (in 1966), Acad. Z. Khalilov, stands in the center of the first row, eighth from the left, together with Prof. I.I. Ibragimov (fifth from the left) and the Dean of the Mechanical-Mathematical Department of Azerbaijani State University, Prof. A.I. Guseinov, sixth from the left. Prof. Golubov was invited to be present in this photo since he spent his first 3 years (1956–1958) as a student at their university, and also participated in the Congress. We find him in the second row, third from the right.

In 1967, he published his first monograph ”Approximation of Functions by Linear Operators”, recognised by the international mathematical community although it was written in Azerbaijani. His son Aykhan reported that his father possessed a copy of [28] and recalls him speaking about the authors. In 1969, R.G. Mamedov was appointed head of the Chair of Higher Mathematics at the Azerbaijan State Oil Academy in Baku, a position which he held for 26 years. His cycle of investigations on properties of integral transforms of Mellin type led to the publication of several research monographs, in particular ”On Approximation of Conjugate Functions by Conjugate M-Singular Integrals” (1977), ”On Approximation of Functions by Singular Integrals of Mellin Type” (1979), and ”Mellin Transform and Theory of Approximation” (1991). With equal enthusiasm, he created textbooks for the Azerbaijan institutions of higher education that are still in widespread use. His three-volume ”Course of Higher Mathematics” (1978, 1981, 1984) has gone through several editions. R.G. Mamedov is also the author of 20 booklets and articles popularising mathematics among the general public and raising the standards of mathematics education in his home country.

R.G. Mamedov was not only an outstanding scientist and educator but also impressed everybody who met him with his outgoing character and friendly personality, and by being very accessible and supportive in personal and scientific matters. He married in 1960, two of his three sons being mathematicians themselves. R.G. Mamedov died on May 2, 2000, at the age of 68 after an infarct. He is survived by his spouse Flora Mamedova and his three sons; there are now seven grandchildren, five boys and two girls, four of them born after his death.

Work in the broad area of approximation theory at the University of Perugia was initiated by its former visionary departmental director, C. Vinti (1926–1997), a master of the Calculus of Variations (see [59]). It was decisively influenced by the work of J. Musielak, a chief representative of the Orlicz analysis school at Poznan, the first joint work being in the direction of (nonlinear) integral operators in the setting of modular spaces ([13]), as well as by the work at Aachen, together with P.L. Butzer and R.L. Stens. During recent research at Perugia on asymptotic expansions of certain Mellin-type convolution operators and convergence properties in the spaces of functions of bounded variation (see [2, 3, 6–12]), a MathSciNet search led to the treatise of R.G. Mamedov under discussion. Since it was nowhere to be found, it was finally A. Gadjiev, Academy of Sciences of Azerbaijan, who within a few weeks kindly sent a copy, as a present. It has served us well not only in our local work at Perugia but also in the present joint investigation.

As to the work at Aachen, although we had known since 1949 (through G.G. Lorentz) of the existence of the great school of approximation theory at Leningrad, it was the Second All-Union Conference on Constructive Theory of Functions, held at Baku on October 8–13, 1962, that drew our attention to approximation theory at Baku. That was a couple of years after its proceedings (with p. 638) appeared in 1965. (The Aachen group organised the first conference on approximation in the West (August 4–10, 1963; ISNM, Vol. 5, Birkhaeuser, Basel, 1964)).

It was Aachen’s former student E.L. Stark (1940–1984) who, in view of his fluent knowledge of Russian, kept well aware of approximation-theoretical studies at Leningrad, Moscow and Kiev, and who was surprised when he discovered the Baku proceedings. In fact, Russian approximation theory was a model for us in Aachen, especially in its earlier years, and Stark’s great input benefited us all. We exchanged letters with R.G. Mamedov and in 1974 invited him to participate in our Oberwolfach conference on Linear Operators and Approximation II, held March 30–April 6. But he was unable to attend at the last moment (likewise in the case of S.M. Nikolskii, S.A. Teljakovski and B.S. Mitijagin), as is recorded in its Proceedings (ISNM, Vol. 25, Birkhaeuser, Basel, 1974). In our volume with R.J. Nessel, ”Fourier Analysis and Approximation” (Birkhaeuser/Academic Press, 1971), we cited eight papers of R.G. Mamedov, plus his book ”Approximation of Functions by Linear Operators” (Azerbaijani, Baku, 1966). They played a specific role in our book. The work on Mellin analysis at Aachen, together with S. Jansche (see [18–21]), was independent of that at Baku.

9 Concluding Remarks

The theory of Mellin analysis is a fascinating field of research, one still in a state of development, one which will surely have further important applications in various fields of applied mathematics. As noted in the Introduction, a pioneering contribution in this direction was the treatise of R.G. Mamedov [45]. The translation into English of the main part of Mamedov’s preface reads: In classical approximation theory, approximation of functions by polynomials and entire functions is considered, and relations between the order of best approximation of the functions and their structural and differential properties are studied. In connection with the saturation problem and the P.P. Korovkin theorems on the convergence of linear positive operators, numerous investigations are dedicated to the approximation of functions by linear operators, in particular by linear positive operators, and by various singular integral operators. To this aim some function classes are introduced and studied. Moreover, the saturation classes of different linear operators are investigated by means of the Fourier transform or other integral transforms. Many results in this field and the basis of the theory of the integral Fourier transform were published in the fundamental monograph of P.L. Butzer and R.J. Nessel ”Fourier analysis and approximation”. At present some other integral transforms are also used in studying different function classes and the associated saturation order of approximation by linear operators.

The Mellin transform has important applications in the solution of boundary value problems in wedge-shaped regions. It is also one of the most important methods for the study of classes of functions defined on the positive real line. The theory of the Mellin transform requires the introduction of new concepts of derivative and integral, called M-derivative and M-integral. In recent years many results have been produced in this field. In this monograph we attempt significantly to complement those results and to introduce them from a unified point of view. I have used material written earlier in the book with G.N. Orudzhev, namely ”On the approximation of functions by singular integrals of Mellin type”, Baku, 1979.

After that, Mellin analysis was introduced in a systematic way in [18–20], then developed in [22–26] and later on in [6–12, 49]. Many other results and applications are surely to be discovered and the present paper is a further contribution in this direction.

Our theory of Hadamard-type fractional integrals and derivatives is concerned with real values of the parameter \(\alpha .\) The extension to complex values of \(\alpha \) can be carried out essentially in the same way, assuming Re \(\alpha \ge 0\) in place of \(\alpha \ge 0\) (see also e.g. [25]). For general complex values \(\alpha \in \mathbb {C},\) the theory may be more delicate. As an example, in Theorem 1, the assumption Re \(\alpha >0\) is basic for the application of the Abel–Stolz theorem. Indeed, for complex values of \(\alpha \) such that Re \(\alpha <0\) the convergence of the binomial series on the boundary of its convergence disk may fail. For Re \(\alpha \le -1,\) this convergence fails at every point of the boundary, while for \(-1< Re\, \alpha <0,\) it fails at just one point.