
This volume begins with Part I, an Introduction composed of extended abstracts of all the core chapters contained in Parts II–V. Part II is entitled “Fourier transform, its generalizations and applications”. It begins with the chapter “Characterization of Gevrey regularity by a class of FBI transforms” written by Shiferaw Berhanu and Abraham Hailu. The classical FBI (Fourier-Bros-Iagolnitzer) transform has the form

$$\displaystyle{ \mathcal{F}u(y,\xi ) =\int _{\mathbb{R}^{m}}e^{i\xi \cdot (y-x)-\vert \xi \vert \vert y-x\vert ^{2} }u(x)\,dx,\quad y,\xi \in \mathbb{R}^{m}. }$$
(1)

It was introduced by J. Bros and D. Iagolnitzer in order to characterize the local and microlocal analyticity of functions (or distributions) in terms of appropriate decay of their transforms, in the spirit of the Paley-Wiener theorem. The chapter characterizes local and microlocal Gevrey regularity in terms of appropriate decay of a more general class of FBI transforms introduced recently by S. Berhanu and J. Hounie.
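
As a concrete illustration of this decay philosophy (a numerical sketch of my own, not taken from the chapter), one can discretize the transform (1) in dimension m = 1 and observe that, for a real-analytic function such as a Gaussian, |Fu(y, ξ)| decays rapidly as |ξ| grows:

```python
import numpy as np

def fbi_transform(u, y, xi, grid):
    """Riemann-sum approximation of Fu(y, xi) from formula (1) with m = 1."""
    dx = grid[1] - grid[0]
    integrand = np.exp(1j * xi * (y - grid) - abs(xi) * (y - grid) ** 2) * u(grid)
    return integrand.sum() * dx

grid = np.linspace(-10.0, 10.0, 20001)
u = lambda x: np.exp(-x ** 2)   # a real-analytic test function

# |Fu(0, xi)| shrinks rapidly as |xi| increases for this analytic u.
vals = [abs(fbi_transform(u, 0.0, xi, grid)) for xi in (1.0, 5.0, 20.0)]
print(vals)
```

For this u the transform at y = 0 can be evaluated in closed form, and the printed magnitudes decrease at an exponential rate in |ξ|, as the analyticity criterion predicts.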

The next chapter, “A Novel Mathematical Approach to the Theory of Translation Invariant Linear Systems”, is authored by Hans G. Feichtinger. The chapter is devoted to the theory of translation invariant linear systems (TILSs). It is known that the traditional way of deriving the impulse response, using the so-called sifting property of the Dirac delta “function”, is not consistent with the claim that every such system is a convolution operator with a bounded measure, the so-called impulse response. It was I. Sandberg who constructed a translation invariant linear operator T on \(C_{b}(\mathbb{R}^{d})\) that cannot be defined by convolution with a bounded measure. The main idea of this chapter is to replace \(C_{b}(\mathbb{R}^{d})\), which is not separable, by the separable space \(C_{0}(\mathbb{R}^{d})\), whose dual is the space of finite Borel measures. The author uses this duality to provide a mathematically rigorous way of identifying TILSs while avoiding unnecessary technicalities. The chapter can also be considered a summary of some of the ideas published by the author elsewhere. Overall, it provides a solid view of methods in harmonic analysis with applications to numerical analysis and data processing.

Chapter “Asymptotic behavior of the Fourier transform of functions of bounded variation” is by E. Liflyand. The author establishes new results on the asymptotic behavior of the multidimensional Fourier transform of an arbitrary locally absolutely continuous function of bounded variation. In particular, the results reveal new relations between the Fourier transform of a function of bounded variation and the Hilbert transform of its derivative.

In the chapter “Convergence and regularization of sampling series”, its author, W. R. Madych, reviews some of his own interesting results on the classical cardinal sine series

$$\displaystyle{ \sum _{n\in \mathbb{Z}}c_{n}\frac{\sin \pi (z - n)} {\pi (z - n)}. }$$

A function f(z) is said to be a convergent cardinal series if the partial sums of the cardinal sine series with coefficients \(c_{n} = f(n)\) converge to f(z) uniformly on compact subsets of \(\mathbb{C}\). The author shows that a convergent cardinal series belongs to the class \(E_{\pi }\) of entire functions of exponential type \(\pi\), while the converse is not always true. Various sufficient conditions are provided to guarantee that functions in \(E_{\pi }\) are convergent cardinal sine series and that entire functions can be represented by cardinal sine series. The second part of the chapter considers the regularized cardinal sine series of Bernstein-Boas type

$$\displaystyle{ f_{\epsilon }(z) =\sum _{n\in \mathbb{Z}}f(n)\frac{\sin \pi (z - n)} {\pi (z - n)}\,\phi (\epsilon (z - n)), }$$

where ϕ is an entire function of exponential type that is bounded on the real axis. Sufficient conditions on ϕ and f are established under which the above series converges to f as ε → 0, absolutely and uniformly on compact subsets of \(\mathbb{C}\).

Extended Bernstein-Boas regularization associated with a family of functions is also considered. The author also discusses spline type sampling series, in which piecewise polynomial cardinal splines are defined in terms of the fundamental splines. It is shown that the space of piecewise polynomial cardinal splines of a fixed order is a special case of a shift invariant space with one generator.
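
A minimal numerical sketch of the cardinal sine series is easy to set up (my illustration; the sample function sinc(x/2) is an assumption of mine, chosen because it is entire of exponential type π/2 and square integrable on the real line, hence a convergent cardinal series):

```python
import numpy as np

def cardinal_series(f, z, N):
    """Partial sum over |n| <= N of f(n) * sin(pi(z - n)) / (pi(z - n))."""
    n = np.arange(-N, N + 1)
    # np.sinc(t) implements the normalized sinc sin(pi t)/(pi t).
    return np.sum(f(n) * np.sinc(z - n))

f = lambda x: np.sinc(x / 2)   # entire of exponential type pi/2, in L^2(R)
z = 0.3                         # a non-integer evaluation point
approx = cardinal_series(f, z, 500)
err = abs(approx - f(z))
print(err)  # small truncation error
```

The truncation error decays with N, consistent with the convergence statements reviewed in the chapter.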

Part III “Analysis on non-Euclidean spaces” begins with the chapter “Harmonic Analysis in Non-Euclidean Spaces: Theory and Application” by Stephen D. Casey. It discusses applications of harmonic analysis in the hyperbolic space of dimension two from the point of view of the geometric classification of simply connected Riemann surfaces up to conformal equivalence: the Euclidean plane, the two dimensional sphere, and the hyperbolic plane. It focuses on the geometric analysis tools and techniques relevant to sampling and numerical reconstruction problems. The chapter starts by reviewing background on the geometry of surfaces, then presents the basics of Fourier analysis on the Euclidean plane, the two dimensional sphere, and the hyperbolic plane, and continues with selected aspects of sampling, oriented towards Beurling-Landau densities and Nyquist tiles, in the three corresponding geometric contexts. The chapter finishes with a presentation of some results on network tomography, a discrete context exhibiting network phenomena of hyperbolic type. This chapter can be of interest to readers working on Shannon-type sampling on Riemannian manifolds.

The goal of the chapter “An harmonic analysis of directed graphs from arithmetic functions and primes” by Ilwoo Cho and Palle E. T. Jorgensen is to study the combinatorial structure of directed graphs, encoded into the corresponding graph groupoids, via operator theoretic methods. The chapter brings together and explores a wide range of ideas from algebra and analysis: (i) number theory; (ii) algebraic structures of discrete (finite or infinite) graphs; (iii) free probability spaces. Background information, motivation, and links to previous work are laid down carefully in the first four sections. Given a directed graph G, a graph groupoid action is established on a non-commutative algebra, called the G-arithmetic algebra. Then the Krein-space representation of the G-arithmetic algebra is studied via a tensor product construction. The authors also study an action of the Lie group \((\mathbb{R},+)\) in the above tensor representation, formulated in the context of free probability. At the end, a stochastic calculus is developed.

Chapter “Sheaf and duality methods for analyzing multi-model systems” is written by Michael Robinson. It suggests using the languages of category theory and general topology for the reconstruction of a “big picture” from a number of local samples. Sheaves and cosheaves are the mathematical objects that naturally describe how local data can be assembled into a global model in a consistent manner.

Definition 1.

Suppose \(X = (X;\mathcal{T})\) is a topological space, with \(\mathcal{T}\) being the collection of open sets. A presheaf \(\mathcal{F}\) of sets on \(X = (X;\mathcal{T})\) consists of the following data:

  1. For each open set \(U \in \mathcal{T}\), a set \(\mathcal{F}(U)\), called the stalk at U;

  2. For each pair of open sets \(U \subseteq V\), a function \(\mathcal{F}(U \subseteq V ):\mathcal{F}(V ) \rightarrow \mathcal{F}(U)\), called a restriction function (or just a restriction), such that

  3. For each triple \(U \subseteq V \subseteq W\) of open sets, \(\mathcal{F}(U \subseteq W) =\mathcal{F}(U \subseteq V ) \circ \mathcal{F}(V \subseteq W)\).

Those elements of the stalks that are mutually consistent across the entire space are called sections.

Definition 2.

A presheaf \(\mathcal{F}\) on the topological space \(X = (X;\mathcal{T})\) is called a sheaf on \(X = (X;\mathcal{T})\) if for every open set \(U \in \mathcal{T}\) and every collection of open sets \(\mathcal{U} \subseteq \mathcal{T}\) with \(U = \cup \mathcal{U}\), the stalk \(\mathcal{F}(U)\) is isomorphic to the space of sections over the set of elements of \(\mathcal{U}\).

Sections are what the combined multi-model system produces as output, and amount to the simultaneous solution of a number of equations. The chapter focuses on constructing multi-model systems described by systems of equations using the language of sheaves. The main research objectives are the following:

  (1) to combine different dynamic models into a multi-model system by encoding the topology of the system in a diagram formed by the spaces and maps, using sheaves;

  (2) to study homological invariants in order to obtain information about the states of the system, both locally and globally.

The theory is largely based on the fact that every topological space \(X = (X;\mathcal{T})\) defines a partially ordered set (a poset) \(\mathbf{Open}(X;\mathcal{T}) = (\mathcal{T};\subseteq )\) whose elements are the open sets, ordered by the subset relation.
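
These definitions are concrete enough to be executed. The following toy example (my own, not from the chapter) builds the poset of open sets for a small topology and a presheaf of "functions on U", whose restriction maps satisfy the composition law of Definition 1:

```python
# A small topology on X = {0, 1, 2}: a nested chain of open sets.
X = frozenset({0, 1, 2})
opens = [frozenset(), frozenset({0}), frozenset({0, 1}), X]

# Open(X) as the set of ordered pairs (U, V) with U a subset of V.
poset = [(U, V) for U in opens for V in opens if U <= V]

# Stalk at U: dictionaries assigning a value to each point of U.
def restrict(U, V, s):
    """The restriction F(U ⊆ V): forget the values of s outside U."""
    assert U <= V
    return {p: s[p] for p in U}

U, V, W = opens[1], opens[2], opens[3]
s = {0: "a", 1: "b", 2: "c"}   # an element of the stalk at W

# Composition law: F(U ⊆ W) = F(U ⊆ V) ∘ F(V ⊆ W).
law_holds = restrict(U, W, s) == restrict(U, V, restrict(V, W, s))
print(law_holds)
```

Sections, in this picture, are choices of stalk elements over a cover that agree under all the restriction maps.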

Part IV “Harmonic Analysis and Differential Equations” opens with the chapter “On boundary-value problems for a partial differential equation with Caputo and Bessel operator” by Praveen Agarwal, Erkinjon Karimov, Murat Mamchuev, and Michael Ruzhansky. During the last several decades, fractional differential equations of various kinds have become a subject of intensive research due to both their theoretical and practical importance. This chapter begins with preliminary information on direct and inverse-source problems, the Bessel equation, Fourier-Bessel series, as well as on general solutions of the corresponding two-term fractional differential equation with Caputo derivative. The authors investigate the unique solvability of direct and inverse-source problems for a time-fractional partial differential equation with the Caputo and Bessel operators. Using the spectral expansion method, explicit forms of the solutions to the formulated problems are given in terms of multinomial Mittag-Leffler functions and Bessel functions of the first kind.

Chapter “On the Solvability of the Zaremba Problem in Infinite Sectors and the Invertibility of Associated Singular Integral Operators” is written by Hussein Awala, Irina Mitrea, and Katharine Ott. The Zaremba problem for the Laplacian in a domain Ω in \(\mathbb{R}^{n}\) is a mixed boundary value problem in which one specifies Dirichlet data on part of the boundary ∂Ω and Neumann data on the remainder. This chapter focuses on a mixed boundary value problem in a sector in the plane. It is an excellent introduction to how Hardy kernels and the Mellin transform can be used to treat elliptic mixed boundary value problems in domains with corners. The authors consider the Zaremba problem with \(L^{p}\) data and study it using the method of layer potentials. The operators involved are of Hardy type, and Fourier analysis on the group of multiplicative reals (the Mellin transform) allows for the explicit computation of the spectrum of such an operator on \(L^{p}\)-spaces. Based on the result for the spectrum, they are able to give a complete set of indices p (depending on the angle of the sector) for which the boundary value problem is solvable by the method of layer potentials. The results are of intrinsic interest and can serve as the starting point for studying the Zaremba problem in more general domains.

Chapter “On the Solution of the Oblique Derivative Problem by Constructive Runge-Walsh Concepts” is authored by Willi Freeden and Helga Nutz. The goal of this chapter is to provide the conceptual setup of the Runge-Walsh theorem for the oblique derivative problem of physical geodesy. The Runge-Walsh concept presented in the chapter reflects constructive approximation capabilities of the Earth's gravitational potential for geoscientifically realistic geometries. The force of gravity is generally not perpendicular to the actual Earth's surface, which leads to a model involving an oblique derivative problem. The main focus is on constructive approximation for potential field problems motivated by the Runge-Walsh theorem. This chapter contains an extensive overview of the development and the established state of special function systems and their use for the approximate solution of geodetic boundary value problems. The authors introduce classical spherical function systems like spherical harmonics, inner and outer harmonics, and their connection via the Kelvin transformation. Then they use potential theoretic concepts to transfer closure results for these function systems from the sphere to more general georelevant geometries. Moreover, they formulate generalized Fourier expansions based on function systems like the fundamental solution of the Laplacian, multipole kernels, and more general kernels expressed as series expansions. In the last part of the chapter, they turn to spline methods in a reproducing kernel Hilbert space setup. The latter are illustrated for the exterior Dirichlet problem on general geometries and for the oblique boundary value problem.

The first chapter in Part V “Harmonic Analysis for data science” is called “An Overview of Numerical Acceleration Techniques for Non-Linear Dimension Reduction” and is written by Wojciech Czaja, Timothy Doster, and Avner Halevy. The chapter is an exposition of recent techniques for computationally efficient approaches to non-linear dimension reduction. Recent advances in instrumentation have led to massive, high dimensional data sets being collected in many fields, such as biology, medicine, physics, chemistry, and astronomy. In recent years, alongside a number of more traditional linear methods, so-called non-linear dimension reduction methods were developed to extract important features from high dimensional data sets. However, the computational cost of non-linear dimension reduction methods is usually very high, which can limit their applicability. The authors discuss some of the important numerical techniques that increase the computational efficiency of non-linear dimension reduction methods while preserving much of their representational power. They address Random Projections, Approximate k-Nearest Neighbors, Approximate Kernel methods, and Approximate Matrix Decomposition methods. Several numerical experiments are also provided.
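
To give the flavor of one such technique, here is a minimal random-projection sketch (parameters are my own, purely illustrative): pairwise distances survive a drastic dimension reduction up to small distortion, in the spirit of the Johnson-Lindenstrauss lemma.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 2000, 400          # points, ambient dimension, reduced dimension
X = rng.standard_normal((n, d))
P = rng.standard_normal((d, k)) / np.sqrt(k)   # random projection matrix
Y = X @ P                                      # reduced data, cost O(n d k)

def sqdist(Z):
    """Matrix of squared pairwise Euclidean distances."""
    s = (Z ** 2).sum(axis=1)
    return np.maximum(s[:, None] + s[None, :] - 2.0 * Z @ Z.T, 0.0)

iu = np.triu_indices(n, k=1)
ratios = np.sqrt(sqdist(Y)[iu] / sqdist(X)[iu])   # per-pair distortion
print(ratios.min(), ratios.max())  # concentrated near 1
```

The projection is data-independent and cheap, which is exactly why it serves as a preprocessing accelerator for the more expensive non-linear methods.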

In the chapter “Adaptive Density Estimation on the Circle by Nearly-Tight Frames”, Claudio Durastanti constructs an adaptive estimator of the density function in a nonparametric density estimation problem on the unit circle \(S^{1}\). The chapter contains an adaptive procedure based on a hard thresholding technique with Mexican needlets over Besov spaces. So far, the problem had been tackled with classical needlets. It should be noted that classical needlets and Mexican needlets are wavelet-like frames on spheres and even more general manifolds. It is important for some applications that Besov spaces on manifolds can be characterized in terms of the coefficients with respect to such frames. The reason the author resorts to Mexican needlets is that they enjoy better localization properties than classical needlets. The main result obtained in this chapter is an upper bound for the \(L^{2}\)-risk of the estimator. This bound is optimal up to a logarithmic factor, achieving well-known rates of convergence.
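
The hard thresholding idea itself can be shown in a stripped-down form (a generic sparse-sequence illustration of my own, not the chapter's needlet construction): coefficients below a noise-calibrated threshold are set to zero, which sharply reduces the cumulative error of an estimator built from noisy coefficients.

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma = 1024, 0.1
beta = np.zeros(n)
beta[:10] = rng.uniform(1.0, 2.0, size=10)        # a sparse "true" coefficient vector
noisy = beta + sigma * rng.standard_normal(n)     # empirical coefficients

tau = sigma * np.sqrt(2.0 * np.log(n))            # universal threshold
est = np.where(np.abs(noisy) > tau, noisy, 0.0)   # hard thresholding

err_raw = np.sum((noisy - beta) ** 2)             # keep everything: about n * sigma^2
err_thr = np.sum((est - beta) ** 2)               # thresholded: much smaller risk
print(err_thr, err_raw)
```

Adaptivity in the chapter's sense means the threshold is chosen without knowing the smoothness of the density, yet the resulting risk nearly matches the best rate for each Besov class.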

The contribution “Interactions between Kernels, Frames, and Persistent Homology” by Mijail Guillemard and Armin Iske presents connections between kernel methods for Hilbert space representations, frame analysis, and persistent homology. Interactions between kernels and frames are based on the following observations. Consider a Hilbert space H which is a subspace of the Hilbert space \(L^{2}(\varOmega,d\mu )\) of square integrable functions over a measure space \(\left (\varOmega,d\mu \right )\). If \(\{\phi _{j}\}_{j\in J}\) and \(\{\phi _{j}^{{\ast}}\}_{j\in J}\) are dual frames in H, then

$$\displaystyle{ K_{x}(y) =\sum _{j\in J}\phi _{j}^{{\ast}}(y)\phi _{ j}(x), x,y \in \varOmega, }$$
(2)

is a reproducing kernel in H provided that \(\Vert K_{x}\Vert _{H} < \infty \). Conversely, if the above-mentioned space H is a reproducing kernel Hilbert space with kernel K(x, y) that contains a frame \(\{\phi _{j}\}_{j\in J}\), then K can be expressed by formula (2).
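
In a finite-dimensional toy model (a sketch of mine, with Ω a finite set carrying the counting measure), formula (2) with the canonical dual frame indeed produces a reproducing kernel:

```python
import numpy as np

rng = np.random.default_rng(1)
m, r, J = 6, 3, 5
B = rng.standard_normal((m, r))          # basis of a subspace H of L^2(Omega),
                                         # Omega = {0,...,m-1}, counting measure
Phi = B @ rng.standard_normal((r, J))    # J frame vectors for H (J > dim H)

S = Phi @ Phi.T                          # frame operator on H
Dual = np.linalg.pinv(S) @ Phi           # canonical dual frame phi_j^* = S^+ phi_j

# Formula (2): K[x, y] = sum_j phi_j^*(y) phi_j(x).
K = Phi @ Dual.T

f = B @ rng.standard_normal(r)           # an arbitrary element of H
reproduced = K @ f                       # the function x -> <f, K_x>
print(np.allclose(reproduced, f))
```

The reproducing property ⟨f, K_x⟩ = f(x) holds for every f in H, independently of which frame for H was chosen.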

The goal of computational topology is to extract geometric/topological structure in data (for example, a smooth manifold) from a point cloud (a mesh on a manifold). Clearly, the outcome of such a procedure for a single mesh is not reliable, since it depends on the diameter of the mesh. The idea of persistent homology is to take into account the topological outcomes for a sequence of meshes whose diameters go to zero. One of the main results of the chapter is a theoretical statement concerning stability properties of the so-called persistence diagrams in terms of the frames associated with the corresponding meshes and their diameters.

Chapter “Multi-penalty regularization for detecting relevant variables” by Kateřina Hlaváčková-Schindler, Valeriya Naumova, and Sergiy Pereverzyev Jr. introduces a combined Tikhonov regularization and relevant variable identification method for regression using reproducing kernel Hilbert spaces (RKHS) of functions, assuming a noisy input-output data set \(\{\overrightarrow{x}_{j},\overrightarrow{y}_{j}\}\) with \(\overrightarrow{x}_{j} \in \mathbb{R}^{N}\). The method depends on a generalized linear model, in which the predictor function is formed by a sum of non-linear functions of the single variables. While the method is introduced inductively, the three theorems are devoted to a special case in which there are only two relevant variables. These results are treated as indications of how the general theory develops in a more controlled set of circumstances. The authors' method involves a recursive procedure in which the next relevant variable is identified based on the relative size of the full Tikhonov functional and the discrepancy (the error part of this functional), for different (small and large) values of the regularization parameter.

The predictor function, or regularizer, is a linear combination of non-linear univariate predictor functions in an RKHS. This regularizer is constructed by jointly minimizing the empirical \(L^{2}\) error and the RKHS norms of the functions under Tikhonov regularization. Formulas for kernel-based predictor functions are found by using SVD factorizations of sampling operators from an RKHS into \(\mathbb{R}^{N}\) and a reproducing kernel representation for the dual of the sampling operator.
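
In the simplest single-kernel case, the Tikhonov-regularized predictor reduces to solving a regularized linear system through a spectral factorization of the kernel matrix (the finite-sample face of the sampling operator). The following sketch is mine, with illustrative kernel and parameters, not the authors' multi-penalty scheme:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(-1.0, 1.0, 30))
y = np.sin(2.0 * np.pi * x) + 0.05 * rng.standard_normal(30)

def gauss_kernel(a, b, eps=3.0):
    """Gaussian reproducing kernel k(s, t) = exp(-(eps (s - t))^2)."""
    return np.exp(-(eps * (a[:, None] - b[None, :])) ** 2)

K = gauss_kernel(x, x)
lam = 1e-6

# Tikhonov solution via the spectral (eigen/SVD) factorization of K:
# alpha = (K + lam I)^{-1} y, and f_hat(x) = sum_j alpha_j k(x, x_j).
w, V = np.linalg.eigh(K)
alpha = V @ ((V.T @ y) / (w + lam))

fhat = K @ alpha                       # fitted values at the sample points
residual = np.max(np.abs(fhat - y))
print(residual)
```

The spectral form makes the regularization transparent: each eigencomponent of the data is damped by the factor w/(w + λ), which is the mechanism the multi-penalty method exploits per variable.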

In the applications section, the authors discuss the application of their multi-penalty regularization approach to the inverse problem of detecting causal relationships between genes from time series of their expression levels. They demonstrate that the developed multi-penalty relevant variable method produces better results than the current standard methods.

Chapter “Stable Likelihood Computation for Gaussian Random Fields” by Michael McCourt and Gregory E. Fasshauer brings together a number of mathematical ideas and methods from geostatistics, reproducing kernel Hilbert spaces, and numerical analysis.

Given scattered data realized from a Gaussian random field, unobserved values of the field can be predicted via kriging. Kriging, or Gaussian process regression, is a method of interpolation in which the interpolated values are modeled by a Gaussian process governed by prior covariances; it gives the best linear unbiased prediction of the intermediate values. To implement kriging, the covariance structure of the Gaussian random field must be estimated from the data, which often leads to an ill-conditioned problem. Such estimation also carries a well-known computational burden: for N observation sites, the computational cost is of order \(O(N^{3})\), which becomes prohibitive for large data sets.
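
A stripped-down simple-kriging sketch (zero-mean field, with sites, kernel, and length scale of my own choosing) shows both the prediction step and the log likelihood whose maximization drives parameter estimation:

```python
import numpy as np

N = 20
sites = np.linspace(0.0, 1.0, N)
z = np.sin(2.0 * np.pi * sites)            # observed field values

def cov(a, b, ell=0.2):
    """Gaussian covariance kernel with length scale ell."""
    return np.exp(-((a[:, None] - b[None, :]) / ell) ** 2)

C = cov(sites, sites) + 1e-8 * np.eye(N)   # small nugget for conditioning

xnew = np.array([0.33])
weights = np.linalg.solve(C, cov(sites, xnew))   # the O(N^3) linear solve
pred = float(weights[:, 0] @ z)                  # kriging prediction at xnew
print(pred)  # close to sin(2*pi*0.33)

# Log marginal likelihood, the quantity maximized to estimate kernel parameters;
# it needs both a solve with C and its log-determinant, and is notoriously
# unstable for small length scales.
sign, logdet = np.linalg.slogdet(C)
loglik = -0.5 * (z @ np.linalg.solve(C, z) + logdet + N * np.log(2.0 * np.pi))
```

Both the solve and the log-determinant above are exactly the ill-conditioned ingredients that the chapter's Hilbert-Schmidt SVD is designed to stabilize.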

The chapter presents accurate techniques for parameter estimation for kriging predictors (or approximations based on kernels and radial basis functions) by means of the Hilbert-Schmidt singular value decomposition (SVD) factorization of Gaussian kernels. The main goal is to illustrate alternatives to maximum likelihood estimation of the parameters of the covariance functions.

The authors discuss the use of maximum likelihood estimation to choose optimal kernel parameters for prediction and show how the unstable likelihood function can be stably approximated using the Hilbert-Schmidt SVD. They also introduce the kriging variance as another possible parametrization criterion, along with a criterion that combines the kriging variance with the maximum likelihood criterion. The effectiveness of the Hilbert-Schmidt SVD as a tool to stabilize all the discussed parametrization strategies is demonstrated in numerical experiments.