1 Introduction

In this article we discuss uniform approximation of extremal functions in weighted Bergman spaces. More precisely, we approximate these functions by the solutions of the corresponding extremal problems restricted to spaces of polynomials.

Definition 1.1

For \(1< p < \infty \) and \(-1< \alpha < \infty \) we define the weighted Bergman space \(A^p_\alpha \) to be the space of all analytic functions in \(\mathbb {D}\) such that

$$\begin{aligned} \Vert f\Vert _{p,\alpha } = \left( \int _{\mathbb {D}} |f(z)|^p dA_\alpha (z) \right) ^{1/p} < \infty , \end{aligned}$$

where \(dA_\alpha = (\alpha + 1)\pi ^{-1}(1- |z|^2)^\alpha \, dA(z)\) and dA is the Lebesgue area measure.
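
The norm in Definition 1.1 is easy to evaluate numerically. As a quick illustration (a minimal Python sketch, assuming NumPy and SciPy; the function name is ours), the following computes \(\Vert z^n\Vert _{p,\alpha }\) directly from the definition and, for \(p = 2\), compares it with the closed form \(\Vert z^n\Vert _{2,\alpha }^2 = (\alpha + 1)B(n+1,\alpha +1)\) that we recall in Section 5.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta as beta_fn

def monomial_norm(n, p, alpha):
    """||z^n||_{p,alpha} computed directly from the definition.

    |z^n|^p is radial, so the angular integral contributes a factor of 2*pi,
    which combines with the 1/pi in dA_alpha to give the factor 2r below."""
    integrand = lambda r: r ** (n * p) * (1.0 - r * r) ** alpha * 2.0 * r
    val, _ = quad(integrand, 0.0, 1.0)
    return ((alpha + 1.0) * val) ** (1.0 / p)

n, alpha = 3, -0.5
print(monomial_norm(n, 2, alpha) ** 2)              # quadrature
print((alpha + 1.0) * beta_fn(n + 1, alpha + 1))    # closed form for p = 2
```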

For \(1< p < \infty \), it is known that the dual of \(A^p_\alpha \) is isomorphic to \(A^q_\alpha \), where \(1/p + 1/q = 1\). Also, if \(\phi \in (A^p_\alpha )^*\) and \(k \in A^q_\alpha \) correspond to each other, then \(\Vert \phi \Vert _{(A^p_\alpha )^*} \le \Vert k\Vert _{q,\alpha } \le C \Vert \phi \Vert _{(A^p_\alpha )^*}\), where C is a constant depending only on p and \(\alpha \).

Definition 1.2

Let \(k \in A^q_\alpha \) be given, where \(1< q < \infty \) and k is not identically 0. Let \(F \in A^p_\alpha \) be such that \(\Vert F\Vert = 1\) and \({{\mathrm{Re}}}\int _{\mathbb {D}} F \overline{k} \, dA_\alpha \) is as large as possible, where \(1/p + 1/q = 1\). There is always a unique function F with this property. We say that F is the extremal function for the integral kernel k, and also that F is the extremal function for the functional \(\phi \) defined by \(\phi (f) = \int _{\mathbb {D}} f \overline{k} \, dA_\alpha \).

We do not usually study the case \(p=2\) because in this case F is a scalar multiple of k.

It is known (see [4]) that the spaces \(A^p_\alpha \), since they are subspaces of \(L^p\) spaces, are uniformly convex. In [7], general results are proven about approximating extremal functions in uniformly convex spaces, and a proof is given there of the well-known fact that extremal functions are unique in uniformly convex spaces. See [2, 3] for more information on extremal problems in spaces of analytic functions. See also [8, 10, 12, 14] for more information on regularity questions related to the extremal problems we discuss.

Definition 1.3

Let \(f \in A^p_\alpha \). Suppose

$$\begin{aligned} \Vert f(e^{it} \cdot ) + f(e^{-it} \cdot ) - 2f(\cdot ) \Vert _{p,\alpha } \le C |t|^{\beta } \end{aligned}$$

for some constants \(C > 0\) and \(0 < \beta \le 2\). We then say that \(f \in \Lambda ^*_{\beta , A^p_\alpha }\). Furthermore, we define \(\Vert f\Vert _{\Lambda ^*,\beta ,A^p_\alpha }\) to be the infimum of the constants C such that the above inequality holds.

We refer to functions in the \(\Lambda ^*\) classes as being (mean) Bergman-Hölder continuous (see [8]). We prove several estimates that relate the mean Bergman-Hölder continuity of \(A^p_\alpha \) functions to the minimum error in approximating these functions with polynomials of fixed degree. We apply these results to obtain estimates for how close the solution of an extremal problem is to the solution to the problem with the same linear functional posed over the space of polynomials of degree at most n. By using inequalities related to uniform convexity due to Clarkson [4] and Ball, Carlen and Lieb [1], we are able to obtain quantitative estimates for the distance from approximate extremal functions to the true extremal functions.

The estimates just mentioned are all in the \(A^p_\alpha \) norm. However, our goal is to approximate (in certain cases) extremal functions in the uniform norm (i.e. the \(L^\infty \) norm). To do so, we use results from [8] to obtain bounds on the \(C^{\beta }\) norm of the extremal functions and the functions approximating them for certain \(\beta \), as long as the integral kernels are sufficiently regular. We also use Theorem 4.2, which allows us to conclude that two functions that are each not too large in the \(C^\beta \) norm and that are close in the \(A^p_\alpha \) norm must actually be close in the uniform norm. In stating the theorems, we do not aim for the most general estimates possible; however, the estimates we state do apply to the case where k is a polynomial, or more generally where \(k \in C^2(\overline{\mathbb {D}})\).

We note that in [11], Khavinson and Stessin derive Hölder regularity results for extremal problems in unweighted Bergman spaces. However, they do not state explicit bounds on the exponent \(\beta \) or on the \(C^\beta \) norm of the extremal function, so we cannot use their result to get explicit bounds on extremal functions.

The following lemma about the uniform convexity of \(L^p\) will be needed. The inequality for \(1 < p \le 2\) can be proved from [1, Thm. 1]. The other inequality follows from Eq. (3) in [4, Thm. 2].

Lemma 1.1

Let \(\Vert f\Vert _p = \Vert g\Vert _p = 1\) and \((1/2)\Vert f + g\Vert _p > 1 - \delta \). Let \(\Vert f - g\Vert _p = \epsilon \). If \(1< p \le 2\) then \(\epsilon < \sqrt{\frac{8}{p-1}} \delta ^{1/2}\). If \(p \ge 2\) then \(\epsilon < 2 p^{1/p} \delta ^{1/p}\).

2 Mean Hölder Continuity and Best Polynomial Approximation

In this section we prove several results relating the mean Hölder continuity of functions to their distance from the space of polynomials of degree at most n. Some of these results are used in the rest of the paper. The proofs are similar to those of the analogous results about classical Hölder continuity that can be found in [15, Vol. 1, p. 115 ff.].

Definition 2.1

Let \(f \in A^p_\alpha \). We define

$$\begin{aligned} E_n^{p,\alpha }(f) = \min \{\Vert f - P\Vert _{p,\alpha }: P \text { is a polynomial of degree at most } n\}. \end{aligned}$$

Theorem 2.1

Let \(0< \beta < 1\). Suppose that \(\Vert f\Vert _{\Lambda ^*, \beta , A^p_\alpha } < \infty \). Let

$$\begin{aligned} A_\beta = \frac{2^{1+\beta }}{\pi } \int _0^\infty |\cos (t) - \cos (2t)|t^{\beta - 2} \, dt. \end{aligned}$$

Then

$$\begin{aligned} E_n^{p,\alpha }(f) \le A_{\beta } n^{-\beta } \Vert f\Vert _{\Lambda ^*, \beta , A^p_\alpha }. \end{aligned}$$

Proof

Let \(f_{|r}\) denote the restriction of f to the circle of radius r, and let \(M_p(r,g)\) denote the \(L^p\) mean of g on that circle. Let \(T_n\) be the best polynomial approximant to f of degree at most n, let \(R_n = f - T_n\) be the remainder, and let \(\rho _k\) be the \(k^{\text {th}}\) Cesàro sum of the remainder. Let \(K_m\) be the Fejér kernel for the \(m^{\text {th}}\) Cesàro sum. Then \(K_m\) has \(L^1\) norm 1, and Young’s inequality for convolutions shows that \(M_p(r,\rho _k) = \Vert R_{n|r} * K_k\Vert _p \le M_p(r,R_n) \Vert K_k\Vert _1 = M_p(r,R_n)\). Let \(\sigma _k\) be the \(k^{\text {th}}\) Cesàro sum of f. From [15, Vol. 1, p. 115, eq. (13.4)] we see that

$$\begin{aligned} \left( 1 + \frac{n}{h}\right) \sigma _{n+h-1} - \frac{n}{h}\sigma _{n-1} = T_n + \left( 1 + \frac{n}{h}\right) \rho _{n+h-1} - \frac{n}{h} \rho _{n-1}. \end{aligned}$$

Using this equation with \(h=n\), subtracting f from both sides, and using the fact that \(M_p(r,\rho _k) \le M_p(r,R_n)\) shows that \(M_p\big [r,(2\sigma _{2n-1} - \sigma _{n-1}) - f\big ] \le 4 M_p(r,R_n)\). Raise both sides to the \(p^{\text {th}}\) power, multiply by \((\alpha + 1)2r(1-r^2)^\alpha \), and integrate in r from 0 to 1 to see that

$$\begin{aligned} \Vert (2\sigma _{2n-1} - \sigma _{n-1}) - f\Vert _{p,\alpha } \le 4 \Vert R_n\Vert _{p,\alpha }. \end{aligned}$$

Let \(\tau _n = 2\sigma _{2n-1}-\sigma _{n-1}\). Now

$$\begin{aligned} \tau _m(re^{ix}) - f(re^{ix}) = \frac{2}{\pi } \int _0^\infty \left[ f(re^{i[x+(t/m)]}) + f(re^{i[x-(t/m)]}) - 2f(re^{ix}) \right] \frac{h(t)}{t^2} \, dt \end{aligned}$$
(2.1)

where \(h(t) = (\cos (t) - \cos (2t))/2\). Let \(M = \Vert f\Vert _{\Lambda ^*, \beta , A^p_\alpha }.\) Apply Minkowski’s inequality to see that

$$\begin{aligned} E^{p,\alpha }_{2m-1} \le \Vert \tau _m(re^{ix}) - f(re^{ix}) \Vert _{p,\alpha } \le \frac{2}{\pi } \int _0^\infty t^\beta M m^{-\beta } \frac{|h(t)|}{t^2} \, dt \le A_\beta M (2m)^{-\beta }. \end{aligned}$$

Since \(E^{p,\alpha }_{2m} \le E^{p,\alpha }_{2m-1}\), the theorem follows. \(\square \)
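
The constant \(A_\beta \) is finite for \(0< \beta < 1\) and can be evaluated numerically. The following minimal sketch (assuming NumPy and SciPy; the cutoff T and the function name are our choices) truncates the integral at a finite T; the neglected tail is at most \(\frac{2^{2+\beta }}{\pi }\frac{T^{\beta -1}}{1-\beta }\).

```python
import numpy as np
from scipy.integrate import trapezoid

def A_beta(beta, T=2000.0, n=2_000_001):
    """A_beta = (2^(1+beta)/pi) int_0^inf |cos t - cos 2t| t^(beta-2) dt, truncated at T."""
    t = np.linspace(0.0, T, n)
    with np.errstate(divide="ignore", invalid="ignore"):
        integrand = np.abs(np.cos(t) - np.cos(2.0 * t)) * t ** (beta - 2.0)
    integrand[0] = 0.0    # the integrand behaves like (3/2) t^beta near t = 0
    return 2.0 ** (1.0 + beta) / np.pi * trapezoid(integrand, t)

print(A_beta(0.5))
```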

We can also prove the following theorem. The symbol \(D_\theta ^n\) stands for \(\frac{d^n}{d\theta ^n}\):

Theorem 2.2

Let \(K \ge 0\) be an integer. Suppose that \(\Vert D^K_\theta f(re^{i\theta })\Vert _{p,\alpha } \le M\). Let

$$\begin{aligned} C_j = \frac{4}{\pi } \int _0^\infty |H_j(t)| \, dt \end{aligned}$$

where

$$\begin{aligned} h(t) = (\cos (t) - \cos (2t))/2, \qquad H_0(t) = h(t)/t^2, \qquad H_j(t) = \int _t^\infty H_{j-1}(x) \, dx. \end{aligned}$$

Then \(E_n^{p,\alpha }(f) \le 2^K C_K M n^{-K}\).

Proof

Let \(\displaystyle f^{(n,\theta )}(re^{i\theta }) = \frac{\partial ^n}{\partial \theta ^n} f(re^{i\theta })\). Then integrating by parts in Eq. (2.1) shows that

$$\begin{aligned}&\tau _m(re^{ix}) - f(re^{ix})\\&\quad =\frac{2}{\pi m^K} \int _0^\infty \left[ f^{(K,\theta )}(re^{i[x+(t/m)]}) + (-1)^K f^{(K,\theta )}(re^{i[x-(t/m)]})\right] H_K(t) \, dt. \end{aligned}$$

Applying Minkowski’s inequality shows that

$$\begin{aligned} \Vert \tau _m(x) - f(x)\Vert _{p,\alpha } \le C_K m^{-K} \Vert D_\theta ^K f\Vert _{p,\alpha }. \end{aligned}$$

As above, this implies that \(E_n^{p,\alpha } \le C_K 2^K M n^{-K}\). \(\square \)
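
The constants \(C_K\) may be computed numerically as well. In the sketch below (assuming NumPy and SciPy; the grid and cutoff are our choices and no rigorous error control is attempted) we approximate \(H_1\) by integrating \(H_0\) from the right on a grid and then evaluate \(C_0\) and \(C_1\); both \(H_0\) and (after an integration by parts) \(H_1\) decay like \(1/t^2\), so the neglected tails are small.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid, trapezoid

T, n = 400.0, 400_001
t = np.linspace(0.0, T, n)
h = (np.cos(t) - np.cos(2.0 * t)) / 2.0
with np.errstate(divide="ignore", invalid="ignore"):
    H0 = h / t ** 2
H0[0] = 0.75                                     # limit of h(t)/t^2 as t -> 0

I0 = cumulative_trapezoid(H0, t, initial=0.0)    # int_0^t H_0
H1 = I0[-1] - I0                                 # approximately H_1(t) = int_t^infty H_0

C0 = 4.0 / np.pi * trapezoid(np.abs(H0), t)
C1 = 4.0 / np.pi * trapezoid(np.abs(H1), t)
print(C0, C1)
```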

Define the \(A^p_\alpha \) modulus of continuity for f by

$$\begin{aligned} \omega _{p,\alpha }(\delta , f) = \sup _{|t| \le \delta } \Vert f(e^{it}z) - f(z)\Vert _{p,\alpha }. \end{aligned}$$

Theorem 2.3

Let \(K \ge 0\) be an integer. Suppose \(D_\theta ^K f\) has modulus of continuity \(\omega _{p,\alpha }(\delta )\). Then

$$\begin{aligned} E_n^{p,\alpha }(f) \le B_K \omega _{p,\alpha }\left( \frac{2\pi }{n}\right) n^{-K}, \end{aligned}$$

where \(B_K = 2^{K} C_{K+1}/\pi + 2^K C_K\).

Proof

Let \(f_\delta (z) = \frac{1}{2\delta } \int _{-\delta }^{\delta } f(e^{it}z) \, dt\). Note that \(D_\theta f_\delta = (D_\theta f)_\delta \). Minkowski’s inequality shows that \(\Vert f_\delta - f\Vert _{p, \alpha } \le \omega _{p,\alpha }(\delta , f)\). Write \(f = f_\delta + g\). Then, using the fundamental theorem of calculus, we see that

$$\begin{aligned} \Vert D_\theta ^{K+1} f_\delta \Vert _{p,\alpha } = \frac{\Vert D_\theta ^K f(ze^{i\delta }) - D_\theta ^K f(ze^{-i\delta })\Vert _{p,\alpha }}{2\delta } \le (2\delta )^{-1} \omega _{p,\alpha }(2\delta , D_\theta ^K f). \end{aligned}$$

Also \(\Vert D_\theta ^Kg\Vert _{p,\alpha } \le \omega _{p,\alpha }(\delta ,D_\theta ^K f)\).

Thus by Theorem 2.2,

$$\begin{aligned} E_n^{p,\alpha }(f) \le 2^{K+1} C_{K+1} n^{-(K+1)} (2\delta )^{-1} \omega _{p,\alpha }(2\delta ,D_\theta ^K f) + 2^K C_K n^{-K} \omega _{p,\alpha }(\delta , D_\theta ^K f). \end{aligned}$$

Taking the supremum over \(|t| < \delta \) in the inequality

$$\begin{aligned} \Vert f(\cdot )-f(e^{-2it}\cdot )\Vert _{p,\alpha } \le \Vert f(\cdot )-f(e^{-it}\cdot )\Vert _{p,\alpha } + \Vert f(e^{-it}\cdot )-f(e^{-2it}\cdot )\Vert _{p,\alpha } \end{aligned}$$

shows that \(\omega _{p,\alpha }(2\delta ,f) \le 2 \omega _{p,\alpha }(\delta ,f)\). Thus

$$\begin{aligned} E_n^{p,\alpha }(f) \le 2^{K+1} C_{K+1} n^{-(K+1)} \delta ^{-1} \omega _{p,\alpha }(\delta ,D_\theta ^K f) + 2^K C_K n^{-K} \omega _{p,\alpha }(\delta , D_\theta ^K f). \end{aligned}$$

Now choose \(\delta = 2\pi /n\) to see that

$$\begin{aligned} E_n^{p,\alpha }(f) \le B_K \omega _{p,\alpha }\left( \frac{2\pi }{n}\right) n^{-K} \end{aligned}$$

where \(B_K = 2^{K} C_{K+1}/\pi + 2^K C_K\). \(\square \)

From this it follows that if \(f \in \Lambda ^*_{\beta ,A^p_\alpha }\) for \(0< \beta < 1,\) then

$$\begin{aligned} E_n^{p,\alpha }(f) \le (2\pi )^\beta B_0 \Vert f\Vert _{\Lambda ^*,\beta , A^p_\alpha } n^{-\beta }. \end{aligned}$$

Theorem 2.4

Suppose that \(f^{(\theta ,K)} \in \Lambda ^*_{1,A^p_\alpha }\) and that \(\Vert f^{(\theta ,K)}\Vert _{\Lambda ^*,1,A^p_\alpha } = M\). Then \(E_n^{p,\alpha }(f) \le \widetilde{B_K} M n^{-K-1}\) where

$$\begin{aligned} \widetilde{B_K} = 2^K (C_{K+2}/\pi + \pi C_K). \end{aligned}$$

Proof

Write \(f = f_{\delta \delta } + g\) where \(f_{\delta \delta } = (f_\delta )_\delta \). Then

$$\begin{aligned} \partial _t^{K+2} f_{\delta \delta }(re^{it}) = \frac{f^{(\theta ,K)}(re^{i(t+2\delta )}) + f^{(\theta ,K)}(re^{i(t-2\delta )}) - 2 f^{(\theta ,K)}(re^{it})}{4\delta ^{2}} \end{aligned}$$

as in the last equation on [15, Vol. 1, p. 117]. Thus

$$\begin{aligned} \Vert \partial _t^{K+2} f_{\delta \delta }\Vert _{p,\alpha } \le \frac{M}{2\delta }. \end{aligned}$$

Following the first and second equations on [15, Vol. 1, p. 118] shows that

$$\begin{aligned}&\Vert g^{(\theta ,K)}(z)\Vert _{p,\alpha } \\&\quad = \frac{1}{4\delta ^2} \left\| \int _0^{2\delta } \left[ f^{(\theta ,K)}(ze^{it}) + f^{(\theta ,K)}(ze^{-it}) - 2f^{(\theta ,K)}(z) \right] (2\delta - t) \, dt \right\| _{p,\alpha }, \end{aligned}$$

which shows that \(\Vert g^{(\theta ,K)}(z)\Vert _{p,\alpha } \le (1/2) M \delta \). Applying Theorem 2.2 to g and \(f_{\delta \delta }\) and setting \(\delta = 2\pi /n\) now yields the result. \(\square \)

3 Approximation of Extremal Functions by Polynomials in the Bergman Norm

We now discuss extremal problems restricted to the space of polynomials of degree at most n. Let \(F_n\) denote the extremal polynomial for the problem of maximizing \({{\mathrm{Re}}}\phi (f)\), where f ranges over all polynomials of degree at most n with norm 1. We will need the following theorem from [8]:

Theorem 3.1

Suppose that \(k \in \Lambda ^*_{\beta ,A^{q}_\alpha }\), and let F be the extremal function for k. Then if \(2 \le p < \infty \) we have \(F \in \Lambda ^*_{\beta /p, A^p_\alpha }\) while if \(1 < p \le 2\) we have \(F \in \Lambda ^*_{\beta /2, A^p_\alpha }\).

Furthermore, suppose that \(\int _{\mathbb {D}} F \overline{k} \, dA_{\alpha } = 1\) and \(\Vert k(e^{it}\cdot ) + k(e^{-it} \cdot ) - 2k(\cdot )\Vert _{q,\alpha } \le B|t|^{\beta }\). If \(p \ge 2\) then \(\Vert F\Vert _{\Lambda ^*, \beta /p, A^p_\alpha } \le 2p^{1/p} (B/2)^{1/p} \le 2e^{1/e} (B/2)^{1/p},\) whereas if \(1< p < 2\) then \(\Vert F\Vert _{\Lambda ^*, \beta /2, A^p_\alpha } \le 2(p-1)^{-1/2}B^{1/2}\).

The space of polynomials of degree n is isomorphic with \(\mathbb {R}^{2n+2}\). The set of all \(x \in \mathbb {R}^{2n + 2}\) for which the corresponding polynomial has norm of at most 1 is a convex set. Thus, the extremal problem for finding \(F_n\) can be thought of as a problem of maximizing a (real) linear functional over a convex set in \(\mathbb {R}^{2n + 2}\). This is a convex optimization problem, and many algorithms for approximating the solution are known.
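
The basic ingredient such an algorithm needs is the ability to evaluate \(\Vert P\Vert _{p,\alpha }\) from a coefficient vector. A minimal sketch (assuming NumPy and SciPy; the quadrature scheme and the function name are our choices, and no claim of guaranteed accuracy is made) is:

```python
import numpy as np
from scipy.integrate import quad

def bergman_norm(coeffs, p, alpha, n_theta=512):
    """A^p_alpha norm of P(z) = sum_k coeffs[k] z^k, by quadrature in polar coordinates."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)

    def angular_mean(r):
        # (1/(2 pi)) int_0^{2 pi} |P(r e^{i theta})|^p dtheta, via the trapezoid rule
        z = r * np.exp(1j * theta)
        return np.mean(np.abs(np.polynomial.polynomial.polyval(z, coeffs)) ** p)

    # ||P||_{p,alpha}^p = (alpha + 1) int_0^1 angular_mean(r) (1 - r^2)^alpha 2r dr
    val, _ = quad(lambda r: angular_mean(r) * (1.0 - r * r) ** alpha * 2.0 * r, 0.0, 1.0)
    return ((alpha + 1.0) * val) ** (1.0 / p)

print(bergman_norm([1.0, 0.5], p=4, alpha=-0.5))    # example: P(z) = 1 + z/2
```

With such a routine, the set \(\{x \in \mathbb {R}^{2n+2} : \Vert P_x\Vert _{p,\alpha } \le 1\}\) can be handed to any standard convex optimization routine; in Section 5 we carry this out for \(p = 4\), where the constraint can be written in closed form.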

We first introduce a bound on the rate of convergence of \(F_n\) to F in the Bergman space norm.

Theorem 3.2

Let F be the extremal function for \(\phi \) and let \(F_n\) be the extremal polynomial of degree n, when the problem is posed over polynomials of degree n. Suppose \(k \in \Lambda ^*_{\beta , A^q_\alpha }.\) Then for \(1< p < 2\) we have \(\Vert F - F_n\Vert _{p,\alpha } = O(n^{-\beta /4})\). Similarly if \(2< p < \infty \) we have \(\Vert F - F_n\Vert _{p,\alpha } = O(n^{-\beta /p^2})\).

More precisely, for \(1< p <2\) and \(0< \beta < 2,\)

$$\begin{aligned} \Vert F-F_n\Vert _{p,\alpha } \le 4(p-1)^{-3/4} A_{\beta /2}^{1/2} \Vert k\Vert _{\Lambda ^*, \beta , A^q_\alpha }^{1/4} n^{-\beta /4} ; \end{aligned}$$

for \(1< p <2\) and \(\beta =2\),

$$\begin{aligned} \Vert F-F_n\Vert _{p,\alpha } \le 4(p-1)^{-3/4} \widetilde{B_0}^{1/2} \Vert k\Vert _{\Lambda ^*, \beta , A^q_\alpha }^{1/4} n^{-\beta /4} ; \end{aligned}$$

for \(2< p < \infty \) and \(0 < \beta \le 2,\)

$$\begin{aligned} \Vert F-F_n\Vert _{p,\alpha } \le 2^{1+1/p-1/p^2} p^{1/p+1/p^2} A_{\beta /p}^{1/p} \Vert k\Vert _{\Lambda ^*, \beta , A^q_\alpha }^{1/p} n^{-\beta /p^2} . \end{aligned}$$

Proof

Let \(\Vert \phi \Vert \) denote \(\Vert \phi \Vert _{(A^p_\alpha )^*}\). The argument in [7, Thm. 4.1] shows that, if \(T_n\) is the best polynomial approximant to F of degree at most n, \(E_n^{p,\alpha }(F) < \delta \), and \(\widetilde{T}_n = T_n/\Vert T_n\Vert _{p,\alpha }\), then \({{\mathrm{Re}}}\phi (\widetilde{T}_n) \ge \frac{1-\delta }{1+\delta } \Vert \phi \Vert \). This also shows that \({{\mathrm{Re}}}\phi (F_n) \ge \frac{1-\delta }{1+\delta } \Vert \phi \Vert \). Thus

$$\begin{aligned} {{\mathrm{Re}}}\,\phi ((F_n + F)/2) \ge \Vert \phi \Vert \left( \frac{1}{2} + \frac{1-\delta }{2(1+\delta )} \right) . \end{aligned}$$

Therefore, \((1/2)\Vert F_n + F \Vert \ge \frac{1}{2} + \frac{1-\delta }{2(1+\delta )} \ge 1-\delta \). This shows that \(\Vert F_n - F\Vert \le \sqrt{\frac{8}{p-1}} \delta ^{1/2}\) for \(p < 2\) and \(\Vert F_n - F\Vert \le 2 p^{1/p} \delta ^{1/p}\) for \(p > 2\). \(\square \)

The convergence rate in the previous theorem may be slow, especially for large p. However, a given \(F_n\) may be more accurate than this predicts. The following theorem gives a way to bound the distance of a given function g from F in terms of the distance from \(\mathcal {P}_\alpha (|F|^p/\overline{F})\) to \(\mathcal {P}_\alpha (|g|^p/\overline{g})\). An advantage of the theorem is that it applies to any \(A^p_\alpha \) function g, so we can directly apply it to an approximation of \(F_n\), and not just \(F_n\) itself. In the theorem statement, \(\mathcal {P}_\alpha \) denotes the Bergman projection for \(A^p_\alpha \), which is the orthogonal projection from \(L^2_\alpha \) onto \(A^2_\alpha \). Also \(|F|^p/\overline{F} = F^{p/2} \overline{F^{(p/2)-1}} = |F|^{p-1} {{\mathrm{sgn}}}F\) should be interpreted to equal 0 at the zeros of F. It is known that \(\mathcal {P}_\alpha \) is bounded from \(L^p_\alpha \) to \(A^p_\alpha \) for \(1< p < \infty \) (see [9]).

Lemma 3.1

Suppose that \(F_1\) and \(F_2\) are the \(A^p_\alpha \) extremal functions for \(\phi _1\) and \(\phi _2,\) respectively. Suppose that \(\Vert \phi _1\Vert = \Vert \phi _2\Vert = 1\) and \(\Vert \phi _1 - \phi _2\Vert < \delta \). Then for \(2< p < \infty \)

$$\begin{aligned} \Vert F_1 - F_2\Vert < 2^{1-(1/p)} p^{1/p} \delta ^{1/p}; \end{aligned}$$

for \(1< p < 2\)

$$\begin{aligned} \Vert F_1 - F_2\Vert < 2 (p-1)^{-1/2} \delta ^{1/2}. \end{aligned}$$

Proof

Note that

$$\begin{aligned} |\phi _1(F_1) + \phi _1(F_2)|\ge & {} |\phi _1(F_1) + \phi _2(F_2)| - |(\phi _1 - \phi _2)(F_2)| > \Vert \phi _1\Vert + \Vert \phi _2\Vert - \delta \\= & {} 2 - \delta . \end{aligned}$$

Since \(\phi _1\) has norm 1, this implies that

$$\begin{aligned} \left\| \frac{F_1 + F_2}{2} \right\| > 1 - \frac{\delta }{2}. \end{aligned}$$

The result now follows by Lemma 1.1. \(\square \)

It is known that if k is a positive scalar multiple of \(\mathcal {P}_\alpha (|F|^p/\overline{F})\), where F has unit norm, then F is the extremal function for k. It is also known that any function \(\widetilde{k}\) which also has F for its extremal function must be a positive scalar multiple of k (see [7]). Since \(\int _{\mathbb {D}} F \overline{\mathcal {P}_\alpha (|F|^p/\overline{F})} \, dA_\alpha = \int _{\mathbb {D}} F \overline{|F|^p/\overline{F}} \, dA_\alpha = 1\), we see that if k is scaled so that \(\int _{\mathbb {D}} F \overline{k} \, dA_\alpha = 1\), then \(k = \mathcal {P}_\alpha (|F|^p/\overline{F})\).

Theorem 3.3

Let \(k \in A^q_\alpha \), and let F be the extremal function for k. Let \(\widehat{k}\) be any positive scalar multiple of k (so that \(\widehat{k}\) also has F as extremal function.) Let \(G \in A^p_\alpha \) and suppose that for some \(\delta \) such that \(0< \delta < 1\) the inequality

$$\begin{aligned} \Vert \mathcal {P}_\alpha (|G|^p/\overline{G}) - \widehat{k} \Vert _{q,\alpha } < \delta \end{aligned}$$

is satisfied. Then for \(2< p < \infty \),

$$\begin{aligned} \Vert F - G\Vert < 2 p^{1/p} \delta ^{1/p} \end{aligned}$$

and for \(1< p < 2\)

$$\begin{aligned} \Vert F - G\Vert < 2\sqrt{2} (p-1)^{-1/2} \delta ^{1/2}. \end{aligned}$$

Proof

Let \(\psi \) be the functional of unit norm for which G is the extremal function. Then \(\psi \) has kernel \(\mathcal {P}_\alpha (|G|^p/\overline{G})\) and \(\Vert \psi \Vert =1\). Let \(\phi \) be the functional with kernel \(\widehat{k}\). We then have

$$\begin{aligned} \Vert \phi - \psi \Vert _{(A^p_\alpha )^*} \le \Vert \mathcal {P}_\alpha (|G|^p/\overline{G}) - \widehat{k} \Vert _{q,\alpha } < \delta . \end{aligned}$$

This implies that \(1-\delta< \Vert \phi \Vert < 1 + \delta \). Let \(\widetilde{\phi } = \phi / \Vert \phi \Vert \). Then \( \Vert \phi - \widetilde{\phi } \Vert < \delta \) and thus \(\Vert \widetilde{\phi } - \psi \Vert < 2 \delta \). The conclusion now follows from the previous lemma. \(\square \)

4 Approximation of Extremal Functions by Polynomials in the Supremum Norm

We now show how to use the results in the previous section to bound the distance from a given function to F in the supremum norm. We will use the following theorem found in [8, Cor. 4.3]. The proof of this theorem shows that the same results hold if F is replaced by \(F_n\). However, we may need to multiply k by a positive scalar constant greater than 1 so that the condition \(\int _{\mathbb {D}} F_n \overline{k} \, dA_{\alpha } \ge 1\) holds.

Theorem 4.1

Let \(1< p < \infty \) and let p and q be conjugate exponents. Suppose \(k \in \Lambda ^*_{2,A^q_\alpha }\) and that \(\int _{\mathbb {D}} F \overline{k} \, dA_{\alpha } \ge 1\). If \(-1< \alpha < \min (0,p-2)\), then F has Hölder continuous boundary values.

Let \(B = \Vert k\Vert _{\Lambda ^*,2, A^q_\alpha }\). For \(p > 2\), the Hölder exponent can be taken to be \(-\alpha /p\). The Hölder constant is bounded above by

$$\begin{aligned} 1532 (pB/2)^{1/p} \left( 1-\frac{2}{p}\right) ^{-1} \left( \frac{\Gamma (q-1)}{\Gamma (q/2)^2}\right) ^{1/q} \left( 1 - \frac{2p}{\alpha }\right) . \end{aligned}$$

For \(p < 2\), if we let \(\eta \) be any number greater than 0, then the Hölder exponent can be taken to be \(1-2/p-\alpha /p-\eta \) (if the indicated exponent is positive). The Hölder constant is bounded above by

$$\begin{aligned} 768\left( \frac{B}{p-1}\right) ^{1/2} \left( 1-\frac{2}{p}\right) ^{-1} \left( \frac{\Gamma (q-1)}{\Gamma (q/2)^2}\right) ^{1/q} \left( 1 - \frac{2}{1-2/p-\alpha /p-\eta }\right) . \end{aligned}$$

For ease of notation, we will call the Hölder exponent \(\beta (p, \alpha )\) for \(p > 2\) and \(\beta (p, \alpha , \eta )\) for \(p < 2\). We will denote the constant by \(C(B,p,\alpha )\) and \(C(B,p,\alpha ,\eta ),\) respectively. For \(p > 2\), if we refer to \(\beta (p, \alpha , \eta )\) and \(C(B, p, \alpha , \eta )\), we mean \(\beta (p, \alpha )\) and \(C(B, p, \alpha ),\) respectively.
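
For later reference, these exponents and constants are straightforward to evaluate. The following is a direct transcription (a sketch assuming NumPy and SciPy; the function names are ours):

```python
import numpy as np
from scipy.special import gamma as Gamma

def holder_exponent(p, alpha, eta=0.0):
    """beta(p, alpha) for p > 2, or beta(p, alpha, eta) for p < 2, as in Theorem 4.1."""
    return -alpha / p if p > 2 else 1.0 - 2.0 / p - alpha / p - eta

def holder_constant(B, p, alpha, eta=0.0):
    """The bound on the Hoelder constant from Theorem 4.1, transcribed verbatim."""
    q = p / (p - 1.0)
    common = (1.0 - 2.0 / p) ** (-1.0) * (Gamma(q - 1.0) / Gamma(q / 2.0) ** 2) ** (1.0 / q)
    if p > 2:
        return 1532.0 * (p * B / 2.0) ** (1.0 / p) * common * (1.0 - 2.0 * p / alpha)
    # for p < 2, `common` and the last factor are both negative, so the product is positive
    expo = 1.0 - 2.0 / p - alpha / p - eta
    return 768.0 * (B / (p - 1.0)) ** 0.5 * common * (1.0 - 2.0 / expo)
```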

Since \(\Vert F\Vert _{p,\alpha } = 1\), it follows that \(|F(0)| < 1\). Thus the preceding estimate can be used to bound \(\Vert F\Vert _\infty \). However, the estimates do not allow one to conclude directly that \(\Vert F - F_n\Vert _\infty \) must be small for large n. The following theorem remedies this situation. It says that if a function is Hölder continuous (with control on the exponent and size of the constant) and the function has small \(L^p_\alpha \) norm, then its uniform norm cannot be too large.

Theorem 4.2

Let \(\epsilon > 0\) and \(0 < \beta \le 1\) be given. Suppose that \(f \in L^p_\alpha (\mathbb {D})\) and that for some \(C > 0\) we have \(|f(z) - f(w)| \le C |z-w|^\beta \) for every \(z, w \in \mathbb {D}\). Then there exists a \(\delta > 0\) such that if \(\Vert f\Vert _{p,\alpha } < \delta ,\) then \(\Vert f\Vert _{\infty } < \epsilon \). In fact, we may take \(\delta \) to be

$$\begin{aligned} \left( \frac{(\alpha + 1)\pi }{4}\right) ^{1/p} B(2/\beta , p+1)^{1/p} C^{-2/(\beta p)}\epsilon ^{1 + 2/(\beta p)} \end{aligned}$$
(4.1)

as long as \(\epsilon < 2^{\beta /2} C\). Here \(B(x,y)\) is the Beta function.

For ease of notation we will denote the \(\delta \) in the theorem by \(\delta (\epsilon ; C, \beta , p, \alpha )\). We let \(\epsilon (\delta ; C, \beta , p, \alpha )\) denote the inverse function of \(\delta (\epsilon ) = \delta (\epsilon ; C, \beta , p, \alpha )\), so

$$\begin{aligned} \epsilon (\delta ; C, \beta , p, \alpha ) = \frac{\delta ^{1/[1 + 2/(\beta p)]}}{ \left[ \left( \frac{(\alpha + 1)\pi }{4}\right) ^{1/p} B(2/\beta , p+1)^{1/p} C^{-2/(\beta p)}\right] ^{1/[1+2/(\beta p)]}} \end{aligned}$$
(4.2)

as long as \(\epsilon < 2^{\beta /2} C\).

Proof

Suppose that \(|f(z_0)|> b > 0\). Then \(|f(z)| > b - C |z-z_0|^\beta \) for \(0 \le |z-z_0| \le r_0\), where \(r_0 = (b/C)^{1/\beta }\). So

$$\begin{aligned} \Vert f\Vert _p^p > (\alpha + 1)\int \limits _{\begin{array}{c} z \in \mathbb {D}\\ |z-z_0|<r_0 \end{array}} (b - C|z-z_0|^\beta )^p \, (1-|z|^2)^\alpha \, dA(z). \end{aligned}$$

Now for fixed b, the quantity on the right is a continuous function of \(z_0\) for \(z_0 \in \overline{\mathbb {D}}\), and thus has a minimum; call the minimum \(\delta (b)^p\). Then if \(|f(z_0)| \ge b\) we have \(\Vert f\Vert _p \ge \delta (b)\). So if \(\Vert f\Vert _p < \delta (\epsilon )\) we have \(\Vert f\Vert _\infty < \epsilon \).

We may estimate \(\delta (\epsilon )\) for \(r_0 < \sqrt{2}\) by noting that in this case the region \(\mathbb {D} \cap \{z: |z-z_0|<r_0\}\) contains at least a quarter sector of the disc \(\{z:|z-z_0| < r_0\}\), so

$$\begin{aligned} \delta (b)^p \ge \frac{\alpha + 1}{4} \int _0^{r_0} \int _0^{\pi / 2} (b-Cr^\beta )^p\, 2r \, d\theta \, dr = \frac{(\alpha + 1)\pi }{4} b^{p+(2/\beta )} C^{-2/\beta } B(2/\beta , p+1) \end{aligned}$$

where \(B(x,y)\) is the Beta function. \(\square \)
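
Equations (4.1) and (4.2) are elementary to evaluate. A small sketch (assuming NumPy and SciPy; the function names are ours) is:

```python
import numpy as np
from scipy.special import beta as beta_fn

def _bracket(C, beta, p, alpha):
    # the bracketed constant appearing in Eqs. (4.1) and (4.2)
    return ((alpha + 1.0) * np.pi / 4.0) ** (1.0 / p) \
        * beta_fn(2.0 / beta, p + 1.0) ** (1.0 / p) * C ** (-2.0 / (beta * p))

def delta_of_eps(eps, C, beta, p, alpha):
    """delta(eps; C, beta, p, alpha) of Eq. (4.1); valid for eps < 2**(beta/2) * C."""
    return _bracket(C, beta, p, alpha) * eps ** (1.0 + 2.0 / (beta * p))

def eps_of_delta(delta, C, beta, p, alpha):
    """The inverse function eps(delta; C, beta, p, alpha) of Eq. (4.2)."""
    return (delta / _bracket(C, beta, p, alpha)) ** (1.0 / (1.0 + 2.0 / (beta * p)))
```

For instance, with \(\delta = 0.0383\), \(C = 123821\), \(\beta = 1/8\), \(p = 4\) and \(\alpha = -1/2\) (the values that arise in Example 5.1 below), the inverse function returns approximately \(1.1 \cdot 10^4\), which matches the uniform bound obtained there.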

We may also prove the following theorem. It will not be used in the sequel, but we include it for completeness:

Theorem 4.3

Let \(\epsilon > 0\) and \(0<\gamma < \beta \le 1\) be given. Suppose that \(f \in L^p(\mathbb {D})\) and that for some \(C > 0\) we have \(|f(z) - f(w)| \le C |z-w|^\beta \) for every \(z, w \in \mathbb {D}\). Then there exists a \(\delta > 0\) such that if \(\Vert f\Vert _p < \delta \) then \(|f(z)-f(w)| < \epsilon |z-w|^\gamma \).

Proof

Suppose \(|f(z) - f(w)| \ge \epsilon |z-w|^{\gamma }\) for some z and w. Since \(|f(z) - f(w)| < C |z-w|^{\beta }\) we have \(|z-w|^{\beta -\gamma } > \epsilon / C\). Thus \(|z-w|^{\gamma } > (\epsilon /C)^{\gamma /(\beta -\gamma )}\), and so \(|f(z) - f(w)| > \epsilon ^{\beta /(\beta -\gamma )} C^{-\gamma /(\beta - \gamma )}\). But this contradicts the previous theorem if \(\delta \) is small enough. \(\square \)

Theorem 4.4

Let \(1< p < \infty \) and let p and q be conjugate exponents. Suppose \(k \in \Lambda ^*_{2,A^q_\alpha }\), that \(-1< \alpha < \min (0,p-2)\) and that \(\int _{\mathbb {D}} F \overline{k} \, dA_{\alpha } \ge 1\). If \(\Vert F-F_n\Vert _{p,\alpha } < \delta \), then \(\Vert F-F_n\Vert _\infty < \epsilon \left( \delta ; C, \beta , p, \alpha \right) \), where \(\beta = \beta (p, \alpha , \eta )\) and

$$\begin{aligned} C = C(\Vert k\Vert _{\Lambda ^*, 2, A^q_\alpha }, p, \alpha , \eta ) + C((1-\delta )^{-1} \Vert k\Vert _{\Lambda ^*, 2, A^q_\alpha }, p, \alpha , \eta ). \end{aligned}$$

Here \(\eta \) is any number greater than 0 such that \(1-2/p-\alpha /p-\eta > 0\) and the functions \(C(B,p,\alpha ,\eta )\) and \(\beta (p,\alpha ,\eta )\) are as defined in Theorem 4.1 and the function \(\epsilon (\delta ; C, \beta , p, \alpha )\) is as defined immediately prior to Eq. (4.2).

Proof

This follows from Theorems 4.1 and 4.2. We use the fact that Theorem 4.1 applies to \(F_n\) if k is first multiplied by \(1/(1-\delta )\), which ensures that the condition \(\int _{\mathbb {D}} F_n \overline{k} \, dA_\alpha \ge 1\) holds, and we apply Theorem 4.2 to the function \(F-F_n\). \(\square \)

In practice, we are unlikely to know the function \(F_n\) explicitly. Thus, the following theorem may be more useful. The proof is similar to the proof of the preceding theorem.

Theorem 4.5

Let \(1< p < \infty \) and let p and q be conjugate exponents. Suppose \(k \in \Lambda ^*_{2,A^q_\alpha }\), that \(-1< \alpha < \min (0,p-2)\) and that \(\int _{\mathbb {D}} F \overline{k} \, dA_{\alpha } \ge 1\). Let \(\eta \) be any number greater than 0 such that \(1-2/p-\alpha /p-\eta > 0\) and let the constants \(C(B,p,\alpha ,\eta )\) and \(\beta = \beta (p,\alpha ,\eta )\) be as defined in Theorem 4.1. Let \(\epsilon (\delta ; C, \beta , p, \alpha )\) be as defined immediately prior to Eq. (4.2).

Let \(G \in A^p_\alpha \) with \(M = \Vert G\Vert _{\Lambda ^*,\beta ,A^p_\alpha } < \infty .\) If \(\Vert F-G\Vert _{p,\alpha } < \delta \) then \(\Vert F-G\Vert _\infty < \epsilon \left( \delta ; C, \beta , p, \alpha \right) \), where

$$\begin{aligned} C = C(\Vert k\Vert _{\Lambda ^*, 2, A^q_\alpha }, p, \alpha , \eta ) + \Vert G\Vert _{\Lambda ^*,\beta ,A^p_\alpha }. \end{aligned}$$

5 Approximation of Extremal Functions for Even p

We will give an example of approximating an extremal function. The case where p is even is in some ways easier than the other cases, since then we can explicitly compute \(\mathcal {P}_\alpha (|f|^p/\overline{f}) = \mathcal {P}_\alpha (f^{p/2} \overline{f}^{p/2-1})\) when f is a polynomial (because \(f^{p/2}\) and \(f^{p/2-1}\) are then polynomials), so our example will involve this case.

Define

$$\begin{aligned} \gamma (n,\alpha ) = \Vert z^n\Vert ^2_{2,\alpha } = (\alpha + 1) B(n+1, \alpha + 1) = \frac{\Gamma (\alpha + 2) \Gamma (n+1)}{\Gamma (n + \alpha + 2)}. \end{aligned}$$

Then

$$\begin{aligned} \mathcal {P}_\alpha (z^m \overline{z}^n) = {\left\{ \begin{array}{ll} \frac{\gamma (m, \alpha )}{\gamma (m-n,\alpha )} z^{m-n} &{}\text { if } m \ge n \\ 0 &{}\text { if } m < n \end{array}\right. } \end{aligned}$$
(5.1)

(see [9, Sec. 1.1]).
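
Formula (5.1) and the constants \(\gamma (n,\alpha )\) are simple to implement. The following minimal sketch (assuming NumPy and SciPy; the function names are ours) computes \(\mathcal {P}_\alpha (|f|^p/\overline{f}) = \mathcal {P}_\alpha (f^{p/2}\overline{f}^{p/2-1})\) for a polynomial f and even p, which is exactly the computation used in the example below.

```python
import numpy as np
from scipy.special import gammaln
from numpy.polynomial import polynomial as P

def gamma_na(n, alpha):
    """gamma(n, alpha) = Gamma(alpha+2) Gamma(n+1) / Gamma(n+alpha+2); n need not be an integer."""
    return np.exp(gammaln(alpha + 2.0) + gammaln(n + 1.0) - gammaln(n + alpha + 2.0))

def project_fp_over_fbar(coeffs, p, alpha):
    """Coefficient vector of P_alpha(|f|^p / conj(f)) for f = sum_k coeffs[k] z^k and even p,
    obtained by applying Eq. (5.1) to f^{p/2} * conj(f)^{p/2 - 1} term by term."""
    assert p % 2 == 0 and p >= 2
    g = P.polypow(coeffs, p // 2)          # coefficients of f^{p/2}
    h = P.polypow(coeffs, p // 2 - 1)      # coefficients of f^{p/2 - 1}
    out = np.zeros(len(g), dtype=complex)
    for m, gm in enumerate(g):
        for n, hn in enumerate(h):
            if m >= n:                     # P_alpha(z^m conj(z)^n) = 0 when m < n
                out[m - n] += gm * np.conj(hn) * gamma_na(m, alpha) / gamma_na(m - n, alpha)
    return out
```

Applied to the coefficient vector of \(\widehat{F}\) below with \(p = 4\) and \(\alpha = -1/2\), this routine produces the polynomial \(\widetilde{k}\) of Example 5.1.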

Example 5.1

Let us approximate the solution to the problem of maximizing the real part of the functional \(f \mapsto a_0 + a_1 + a_2\), where the \(a_n\) are the Taylor series coefficients of f about 0, and where \(p = 4\) and \(\alpha = -1/2\) (and where f has unit norm). Then \(k = 1 + z/\gamma (1,-1/2) + z^2/\gamma (2,-1/2) = 1 + (3/2)z + (15/8)z^2\).

This problem is made simpler because the uniqueness of F implies that it must have real coefficients: since k has real coefficients, \(z \mapsto \overline{F(\overline{z})}\) is also extremal, and by uniqueness it equals F. Let us take the approximation of degree \(N = 35\). We thus seek to maximize \(a_0 + a_1 + a_2\) subject to the constraint \( \Vert f\Vert ^4_{4,-1/2} = \Vert f^2\Vert ^2_{2,-1/2} \le 1\), i.e.

$$\begin{aligned} \sum _{n=0}^{2N} \left( \sum _{m=0}^n a_m a_{n-m} \right) ^2 \gamma (n,-1/2) \le 1. \end{aligned}$$

Here we let \(a_n = 0\) for \(n > N\). This is a convex optimization problem, and we are aided by the fact that any local maximum must be a global maximum: if F is any local maximum (necessarily of norm 1), then a variational argument similar to the one in the proof of [6, Ch. 5, Lem. 2] shows that \(\mathcal {P}_{\alpha , 35}(|F|^p/\overline{F})\) is a scalar multiple of k, and thus F is the extremal function (see [13, p. 55]). Here we let \(\mathcal {P}_{\alpha , 35}\) denote the orthogonal projection from \(L^p_\alpha (\mathbb {D})\) onto the subspace of \(A^p_\alpha \) consisting of polynomials of degree at most 35.
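
For \(p = 4\) and \(\alpha = -1/2\) the constraint and the objective can be coded directly from the formula above. A minimal sketch (assuming NumPy and SciPy, and real coefficients; the function names are ours and no rigorous error control is attempted) is:

```python
import numpy as np
from scipy.special import gammaln

def gamma_na(n, alpha=-0.5):
    return np.exp(gammaln(alpha + 2.0) + gammaln(n + 1.0) - gammaln(n + alpha + 2.0))

def norm4_alpha(a):
    """||f||_{4,-1/2}^4 = ||f^2||_{2,-1/2}^2 for f = sum a_n z^n with real coefficients a."""
    c = np.convolve(a, a)                   # coefficients of f^2
    return float(np.sum(c ** 2 * gamma_na(np.arange(len(c)))))

def objective(a):
    """The linear functional a_0 + a_1 + a_2."""
    return a[0] + a[1] + a[2]
```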

When looking at the approximate solution the computer gave for this problem, we noticed that the first coefficient of \(F_{35}\) appears to be positive (unless our approximation is very inaccurate). If we assume this is the case, then an equivalent problem is to maximize

$$\begin{aligned} (1 + a_1 + a_2) \left( \sum _{n=0}^{70} \left( \sum _{m=0}^n a_m a_{n-m} \right) ^2 \gamma (n,-1/2)\right) ^{-1/4}, \end{aligned}$$

where we let \(a_0 = 1\) and \(a_n = 0\) for \(n > 35\). This seemed to be solved more quickly by the computer, so this is the problem we solved to find an unnormalized form of \(\widehat{F}\). An important point is that it does not matter for our computation of error bounds whether our approximation \(\widehat{F}\) is close to the true \(F_{35}\), since we will compute our error bounds using Theorems 3.3 and 4.5. In particular, we do not require the first coefficient of \(F_{35}\) to be positive for these error bounds to be accurate. (Our computations do eventually prove that the first coefficient of F at least is positive, since \(E_2\), defined below, is smaller than the first coefficient of \(\widehat{F}\).)
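
A bare-bones version of this unconstrained formulation, using a general-purpose optimizer (assuming NumPy and SciPy; the serious computation described here would replace this with a dedicated convex solver and rational arithmetic), might look as follows.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def gamma_na(n, alpha=-0.5):
    return np.exp(gammaln(alpha + 2.0) + gammaln(n + 1.0) - gammaln(n + alpha + 2.0))

def norm4_alpha(a):
    c = np.convolve(a, a)                       # coefficients of f^2
    return float(np.sum(c ** 2 * gamma_na(np.arange(len(c)))))

N = 35

def neg_ratio(x):
    # x = (a_1, ..., a_N); a_0 is fixed to 1 and all coefficients are real
    a = np.concatenate(([1.0], x))
    return -(1.0 + x[0] + x[1]) * norm4_alpha(a) ** (-0.25)

res = minimize(neg_ratio, np.zeros(N), method="BFGS")
a = np.concatenate(([1.0], res.x))
F_hat_coeffs = a / norm4_alpha(a) ** 0.25       # normalized so ||F_hat||_{4,-1/2} is (about) 1
print(-res.fun)    # should be close to the extremal value of about 1.788 reported below
```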

Using Sage (for example) to approximate a solution yields a maximum functional value of approximately 1.78785 and

$$\begin{aligned} \widehat{F}= & {} 0.431458 + 0.496143 z + 0.860246 z^2 - 0.341597 z^3 - 0.0225994 z^4 \\&+\, 0.110914 z^5 - 0.052023 z^6 - 0.00952824 z^7 + 0.0235905 z^8 + \cdots \\&+\, 3.15\cdot 10^{-8} z^{35}. \end{aligned}$$

Here, \(\widehat{F}\) is our approximation of \(F_{35}\). We have normalized \(\widehat{F}\) so that its norm is very close to 1. (In fact, the fourth power of its norm is less than 1 by about \(3 \cdot 10^{-15}\): due to rounding error we cannot guarantee that \(\Vert \widehat{F}\Vert _{4,-1/2} = 1\) exactly, so we arrange for the norm to be slightly less than 1. The fact that \(\Vert \widehat{F}\Vert _{4,-1/2} \le 1\) will be important later.) To save space, not all decimals and terms are shown. All of the omitted terms have coefficients of absolute value less than 1/100. Also, note that to avoid loss of precision, once \(\widehat{F}\) was calculated, we stored it as a polynomial with rational coefficients (we checked that its norm was still smaller than 1 after approximating its coefficients by rational numbers).

If we compute \(\widetilde{k} = \mathcal {P}_\alpha (|\widehat{F}|^4/\overline{\widehat{F}})\), we find that it is approximately

$$\begin{aligned} 0.559332 + 0.838998 z + 1.04875 z^2 + \cdots + 4.28 \cdot 10^{-9} z^{40}. \end{aligned}$$

All of the omitted terms have coefficients of absolute value at most \(2 \cdot 10^{-8}\). We must now find a multiple of k close to \(\widetilde{k}\). We could find the closest such multiple by solving an optimization problem, but we will simply choose \(\widehat{k} = k/(a_0 + a_1 + a_2)\), where the \(a_j\) are the coefficients of \(\widehat{F}\). Since we have used rational arithmetic, this guarantees that \(\int _{\mathbb {D}} \widehat{F} \overline{\widehat{k}} \, dA_\alpha = 1\), so \(\int _{\mathbb {D}} F \overline{\widehat{k}} \, dA_\alpha > 1\), where F is the true extremal function. (It is here that we require that \(\Vert \widehat{F}\Vert _{4,-1/2} \le 1\).)

It seems difficult to compute \(\Vert \widetilde{k} - \widehat{k}\Vert _{4/3,-1/2}\) by numerical integration while guaranteeing the accuracy of the computation. However, since \(\widetilde{k} - \widehat{k}\) is a polynomial, we can bound its norm by using the triangle inequality and the fact that \(\Vert z^n\Vert _{4/3, -1/2} = \Vert z^{2n/3}\Vert _{2,-1/2}^{3/2} = \gamma (2n/3, -1/2)^{3/4}\). This shows that \(\Vert \widetilde{k} - \widehat{k}\Vert _{4/3,-1/2}\) is at most \(E_1 \approx 3.3406 \cdot 10^{-8}\). Using Theorem 3.3 shows that \(\Vert F - \widehat{F}\Vert _{4,-1/2}\) is less than \(E_2\), which is approximately 0.0383. In applying the theorem, we take \(\delta = E_1 / \Vert \widehat{F}\Vert _{4,-1/2}^3\), since \(\widehat{F}/\Vert \widehat{F}\Vert _{4,-1/2}\) has norm 1 and \(\widehat{k}/\Vert \widehat{F}\Vert _{4,-1/2}^3\) is a scalar multiple of k. (Recall that \(\Vert \widehat{F}\Vert _{4,-1/2}\) is nearly equal to 1, but we made it slightly less.)

The second \(\theta \) derivative of \(\widehat{k}\) is bounded in modulus by the absolute value of the z coefficient of \(\widehat{k}\) plus 4 times the absolute value of its \(z^2\) coefficient. This bound may be computed exactly since the coefficients of \(\widehat{k}\) are rational. Call it K (it equals approximately 5.034). Thus \(\Vert \widehat{k}\Vert _{\Lambda ^*,2,A^{4/3}_{-1/2}}\) is at most K. A bound M on the first \(\theta \) derivative of \(\widehat{F}\) may be computed in a similar way; it is approximately 1.3824. Now, for any two points \(r e^{i\theta _1}\) and \(r e^{i \theta _2}\) we may choose \(|\theta _1 - \theta _2| \le \pi \), and thus \(|\theta _1 - \theta _2| \le \pi ^{7/8} |\theta _1 - \theta _2|^{1/8}\). Thus

$$\begin{aligned} |\widehat{F}(re^{i\theta _1}) - \widehat{F}(re^{i\theta _2})| \le M |\theta _1 - \theta _2| \le M \pi ^{7/8} |\theta _1 - \theta _2|^{1/8}. \end{aligned}$$

Thus \(\Vert \widehat{F}\Vert _{\Lambda ^*, \beta , A^4_{-1/2}} \le M \pi ^{7/8}.\) We now apply Theorem 4.5. To do so, we first compute \(\beta (4,-1/2) = 1/8\) and \(C(B,4,-1/2)\) from the statement of Theorem 4.1 using \(B = K\). We then add the value \(M \pi ^{7/8}\) to \(C(K,4,-1/2)\) to obtain the value of C in Theorem 4.5 (the value we obtain is approximately 123821). We now compute \(\epsilon \) from Eq. (4.2) using \(\beta = -(-1/2)/4 = 1/8\), \(\delta = E_2\), and \(C \approx 123821\) from above. This shows that \(\Vert F - \widehat{F}\Vert _{\infty } < 11363.28.\) This calculation is valid since \(11363.28 < 2^{\beta /2} C\). I suspect the true error is much smaller. For example, if \(\widehat{F}_{40}\) is the approximation to \(F_{40}\) found using a method similar to the above, then \(\Vert \widehat{F}_{40}-\widehat{F}\Vert _\infty < 8 \cdot 10^{-9}\), and the true error may be of this order of magnitude.
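
The chain of numerical bounds in this example can be reproduced from the quantities quoted above. A short sketch (assuming NumPy and SciPy, and taking \(\Vert \widehat{F}\Vert _{4,-1/2} \approx 1\), so that \(\delta \approx E_1\) in Theorem 3.3) is:

```python
import numpy as np
from scipy.special import beta as beta_fn

p, alpha, beta = 4.0, -0.5, 1.0 / 8.0
E1 = 3.3406e-8     # bound on ||k~ - k^||_{4/3,-1/2} obtained from the triangle inequality
C  = 123821.0      # C(K, 4, -1/2) + M * pi^(7/8), as computed above

E2  = 2.0 * p ** (1.0 / p) * E1 ** (1.0 / p)                   # Theorem 3.3, case p > 2
brk = ((alpha + 1.0) * np.pi / 4.0) ** (1.0 / p) \
      * beta_fn(2.0 / beta, p + 1.0) ** (1.0 / p) * C ** (-2.0 / (beta * p))
eps = (E2 / brk) ** (1.0 / (1.0 + 2.0 / (beta * p)))           # Eq. (4.2)
print(E2, eps)     # roughly 0.0383 and 1.1e4, matching the bounds quoted above
```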

It would be interesting to see if the estimates in this paper can be improved in order to yield better estimates on the approximation of extremal functions in the uniform norm. The example above shows that the estimates in the paper are likely too large by a substantial margin. However, as far as I know, the estimates in this paper are the only ones that allow approximation of these extremal functions in the uniform norm, and they have the advantage of being explicitly computable without great difficulty.

6 Non-zero Extremal Functions

The preceding results can be used to find explicit conditions on k that guarantee that F is non-zero. In Theorem 6.2 we give one such result.

Theorem 6.1

Let \(0< \theta < 2\pi \) and \(\theta < 2\pi (p-1)\). Suppose that \(k \in A^{q}_\alpha \) has range that is a subset of the sector \(-\theta /2< \arg z < \theta /2\), and that \( \Vert k\Vert _{q,\alpha } = 1 \). Let F be the extremal function for k and let

$$\begin{aligned} C_\theta = 2 C_{p,\alpha } \left| \sin \left( \frac{(p-2)\theta }{4(p-1)}\right) \right| , \end{aligned}$$

where \(C_{p,\alpha }\) is the norm of the Bergman projection from \(L^p_\alpha \) onto \(A^p_\alpha \). Then if \(2< p < \infty \) we have \(\Vert F - k^{1/(p-1)}\Vert _{p,\alpha } \le 2p^{1/p} C_\theta ^{1/p}\) and if \(1< p < 2\) we have \(\Vert F - k^{1/(p-1)}\Vert _{p,\alpha } \le 2\sqrt{2}(p-1)^{-1/2} C_\theta ^{1/2}\).

Proof

Note that \(G = k^{1/(p-1)}\) is well defined, where we take the branch with \(1^{1/(p-1)} = 1\). Notice that \(|G|^{p-1}{{\mathrm{sgn}}}G = |k| e^{i\arg (k)/(p-1)}\). Thus

$$\begin{aligned} |k - |G|^{p-1}{{\mathrm{sgn}}}G| = |k|\left| e^{i\arg (k)} - e^{i\arg (k)/(p-1)}\right| \le 2|k| \left| \sin \left( \frac{(p-2)\theta }{4(p-1)}\right) \right| \end{aligned}$$

and, therefore,

$$\begin{aligned} \Vert k - |G|^{p-1}{{\mathrm{sgn}}}G\Vert _{q,\alpha } \le 2\Vert k\Vert _{q,\alpha } \left| \sin \left( \frac{(p-2)\theta }{4(p-1)}\right) \right| . \end{aligned}$$

Let \(C_{p,\alpha }\) be the bound for the Bergman projection from \(L^p_\alpha \) onto \(A^p_\alpha \). Then

$$\begin{aligned} \Vert k - \mathcal {P}_\alpha (|G|^{p-1} {{\mathrm{sgn}}}G)\Vert _{q,\alpha } \le 2 C_{p,\alpha } \left| \sin \left( \frac{(p-2)\theta }{4(p-1)}\right) \right| \Vert k\Vert _{q,\alpha }. \end{aligned}$$

Here we have used that \(\mathcal {P}_\alpha (k) = k\), and that \(\mathcal {P}_\alpha \) has the same norm on \(L^q_\alpha \) as on \(L^p_\alpha \) (it is self-adjoint with respect to the \(dA_\alpha \) pairing). The result now follows from Theorem 3.3. \(\square \)

Theorem 6.2

Let \(0< d < 1\) and \(1< p < \infty \) and \(-1< \alpha < \min (0,p-2)\). Let \(\Vert k\Vert _{q,\alpha } = 1\) and suppose that \(\Vert k\Vert _{\Lambda ^*,2,A^q_\alpha } < B\). Then there exists a \(\theta > 0\) depending only on d, B, p, and \(\alpha \) such that if the range of k is a subset of \(\{z: -\theta /2< \arg z < \theta / 2 \text { and } |z| > d\}\) then F is non-zero.

Proof

Let \(\theta > 0\) be given. This \(\theta \) will make the conclusion of the theorem true if the assumptions imply that \(\Vert F - k^{1/(p-1)}\Vert _\infty < d^{1/(p-1)}\), since \(|k^{1/(p-1)}| > d^{1/(p-1)}\) everywhere in \(\mathbb {D}\), and F would then have no zeros. Let \(\lambda = d^{1/(p-1)}\). For \(p < 2\) choose \(0< \eta <1 - 2/p - \alpha /p\) and let \(\beta = \beta (p,\alpha , \eta )\); otherwise, let \(\beta = \beta (p, \alpha )\).

Note that by [8, Thm. 3.1, Thm. 1.2] and [5, Thm. 5.9, Thm. 5.1], we have \(k \in C^{2-2/p-\alpha /p} \subset C^{\beta }\) with Hölder constant depending only on B, p, and \(\alpha \). Since k is bounded away from 0, we also have that \(k^{1/(p-1)} \in C^{\beta }\) with constant depending only on B, d, p, and \(\alpha \). Let D be the smallest constant such that \(|k(z)^{1/(p-1)} - k(w)^{1/(p-1)}| \le D |z-w|^\beta \).

By Theorem 4.2 we will be done if we can show that

$$\begin{aligned} \Vert F - k^{1/(p-1)}\Vert _{p,\alpha } < \delta , \end{aligned}$$

where \( \delta = \delta (\lambda ; C + D, \beta , p, \alpha ) \) and \(C = C(B, p,\alpha ,\eta )\). But by the previous theorem, this is true if \(\theta \) is small enough. \(\square \)

Notice that, given B, d, p, and \(\alpha \), we could, if we wished, calculate an explicit value for \(\theta \).