1 Introduction

In 1987, Lupaş introduced the first known q-extension of the Bernstein operators. In contrast to the classical Bernstein operators, the Lupaş operators \(R_{n,q}\) produce rational functions rather than polynomials. Owing to the very limited availability of [9], the work went unnoticed for more than a decade, and only after [11] was published did the operator receive the attention it deserves. Apart from bringing to light some noteworthy results on the q-analogue, profound connections of this operator with various fields, such as numerical analysis, quantum physics, computer-aided geometric design, and summation methods, have been discovered. See, for example, [1, 3, 4, 7, 10,11,12,13,14].

Given \(q>0,\) the q-integers \([n]_{q}\) are defined by

$$\begin{aligned}{}[0]_q:=0, \quad [n]_q:=1+q+\ldots + q^{n-1}, \quad n=1,2, \ldots . \end{aligned}$$

The q-factorials and q-binomial coefficients are introduced as natural extensions of the factorials and binomial coefficients. Namely,

$$\begin{aligned}{}[0]_q!:=1, \quad [n]_q!:=[1]_q[2]_q \ldots [n]_q, \quad n=1,2, \ldots , \end{aligned}$$

for the q-factorials and

$$\begin{aligned} \left[ \genfrac{}{}{0.0pt}{}{n}{k}\right] _{q}:=\frac{[n]_q!}{[k]_q! [n-k]_q!}, \end{aligned}$$

for the q-binomial coefficients.

The q-Pochhammer symbol \((x;q)_n\), for \(n \in \mathbb {N}_0\) and \(x \in \mathbb {R}\), is defined by:

$$\begin{aligned} (x;q)_0:=1, \qquad (x;q)_n:=\prod _{j=0}^{n-1}(1-xq^j). \end{aligned}$$

In the sequel, the Gauss q-binomial formula (see [2, Chapter 10, Corollary 10.2.2]) will be used:

$$\begin{aligned} (-x;q)_n=\sum _{k=0}^{n} \left[ \genfrac{}{}{0.0pt}{}{n}{k}\right] _{q} q^{k(k-1)/2}x^k. \end{aligned}$$
(1.1)
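The identity (1.1) is easy to check numerically. Below is a minimal sketch in Python; the helper names (`q_int`, `q_binom`, `pochhammer`) are ours, not from the paper, and follow the definitions above directly.

```python
def q_int(n, q):
    """q-integer [n]_q = 1 + q + ... + q^(n-1)."""
    return sum(q ** j for j in range(n))

def q_binom(n, k, q):
    """q-binomial coefficient [n choose k]_q = prod_{j=1}^{k} [n-k+j]_q / [j]_q."""
    if k < 0 or k > n:
        return 0.0
    out = 1.0
    for j in range(1, k + 1):
        out *= q_int(n - k + j, q) / q_int(j, q)
    return out

def pochhammer(x, q, n):
    """q-Pochhammer symbol (x; q)_n = prod_{j=0}^{n-1} (1 - x q^j)."""
    out = 1.0
    for j in range(n):
        out *= 1.0 - x * q ** j
    return out

# Gauss q-binomial formula (1.1): (-x; q)_n = sum_k [n k]_q q^{k(k-1)/2} x^k
n, q, x = 6, 0.7, 1.3
lhs = pochhammer(-x, q, n)
rhs = sum(q_binom(n, k, q) * q ** (k * (k - 1) // 2) * x ** k
          for k in range(n + 1))
assert abs(lhs - rhs) < 1e-9 * abs(lhs)
```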

Definition 1.1

[9] Let \(q >0\) and \(f \in C[0,1]\). The Lupaş q-analogue of the Bernstein operator, \(R_{n,q}:C[0,1] \rightarrow C[0,1]\), is given by

$$\begin{aligned} (R_{n,q}f)(x)=R_{n,q}(f;x)= \sum _{k=0}^{n} f\left( \frac{[k]_q}{[n]_q}\right) b_{nk}(q;x), \end{aligned}$$
(1.2)

where

$$\begin{aligned} b_{nk}(q;x)=\left[ \genfrac{}{}{0.0pt}{}{n}{k}\right] _{q} \frac{ q^{k(k-1)/2} x^k (1-x)^{n-k}}{(1-x+qx)\ldots (1-x+q^{n-1}x)}, \quad k=0,\ldots ,n. \end{aligned}$$
(1.3)

Observe that, for \(q=1\), (1.2) gives the classical Bernstein polynomials. By tacit agreement, the term ‘Lupaş q-analogue’ is used for \(q \ne 1,\) in which case (1.2) produces rational functions instead of polynomials. It follows directly from the definition that \(R_{n,q}(f;x)\) satisfies the end-point interpolation property,

$$\begin{aligned} R_{n,q}(f;0)=f(0), \quad R_{n,q}(f;1)=f(1), \end{aligned}$$
(1.4)

for all \(q>0\) and all \(n=1,2,\ldots .\) Moreover, the image of \(R_{n,q}\) consists of rational functions of the form \(P_n(x)/\tau _{n-1}(x)\), where \(P_n(x)\) is a polynomial of degree at most n and

$$\begin{aligned} \tau _{n-1}(x)=(1-x+qx)\ldots (1-x+q^{n-1}x), \quad n=1,2,\ldots . \end{aligned}$$
(1.5)

Obviously, \(R_{n,q}(f;x)\) depends solely on the values \(f_k\) of f at the nodes \(x_k:=[k]_q/[n]_q\), \(k=0, \ldots ,n\).
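For concreteness, (1.2)–(1.3) admit a direct numerical implementation. The sketch below (in Python; the helper names `q_int`, `q_binom`, `b_nk`, `lupas` are our choices) also confirms the end-point interpolation property (1.4) and the reduction to the classical Bernstein polynomial at \(q=1\).

```python
from math import comb

def q_int(n, q):
    """q-integer [n]_q."""
    return sum(q ** j for j in range(n))

def q_binom(n, k, q):
    """q-binomial coefficient [n choose k]_q."""
    out = 1.0
    for j in range(1, k + 1):
        out *= q_int(n - k + j, q) / q_int(j, q)
    return out

def b_nk(n, k, q, x):
    """Lupas basis function b_{nk}(q; x) of (1.3)."""
    denom = 1.0
    for s in range(1, n):
        denom *= 1.0 - x + q ** s * x
    return (q_binom(n, k, q) * q ** (k * (k - 1) // 2)
            * x ** k * (1.0 - x) ** (n - k) / denom)

def lupas(f, n, q, x):
    """The Lupas operator R_{n,q}(f; x) of (1.2)."""
    return sum(f(q_int(k, q) / q_int(n, q)) * b_nk(n, k, q, x)
               for k in range(n + 1))

f = lambda t: t ** 3
n, q = 5, 0.6
# end-point interpolation (1.4)
assert abs(lupas(f, n, q, 0.0) - f(0.0)) < 1e-12
assert abs(lupas(f, n, q, 1.0) - f(1.0)) < 1e-12
# q = 1 recovers the classical Bernstein polynomial
x = 0.37
bernstein = sum(f(k / n) * comb(n, k) * x ** k * (1 - x) ** (n - k)
                for k in range(n + 1))
assert abs(lupas(f, n, 1.0, x) - bernstein) < 1e-12
```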

The focus of the present work is on the properties of the image of \(R_{n,q}\). It is worth pointing out that although some of the problems considered here, such as finding moments and establishing diagonalizability, conceptually resemble those in the classical case, all of them differ fundamentally either in methods or in results. What is more, the problem considered in Sect. 3 cannot even be stated for the classical Bernstein operators.

The paper is organized as follows. In Sect. 2, explicit formulae for the images of the monomials under \(R_{n,q}\) are provided. Section 3 investigates the conditions under which the images \(R_{n,q}(f;x)\) of a function f coincide for different values of the parameter q. In Sect. 4, it is shown that the operator \(R_{n,q}\) is diagonalizable. Finally, Sect. 5 presents results demonstrating that the image of a modulus of continuity may fail to be a modulus of continuity.

2 Moments of \(R_{n,q}\)

Since moments of positive linear operators play an important role in approximation theory, the moments for \(k=0,1,2\) were presented in several papers, starting with Lupaş himself [9, 10]. In 2012, Mahmudov and Sabancıgil derived a recurrence relation for the moments and used it to estimate the first four of them. In this work, with a different approach, an explicit formula for all moments is established. To this end, the functions \(\rho _m(x)=(1-(1-q^n)x)^m\), \(m \in \mathbb {Z}\), are used.

Proposition 2.1

For all \(m \in \mathbb {Z}\), there holds

$$\begin{aligned} R_{n,q}(\rho _m;x)=\prod ^{n-1}_{s=0}{\frac{1-x+q^{m+s}x}{1-x+q^sx}}. \end{aligned}$$
(2.1)

Proof

Using (1.2) and the definition of \(\rho _m(x)\), one can write

$$\begin{aligned} R_{n,q}(\rho _m;x)&=\sum _{k=0}^{n} \left( 1-(1-q^n)\frac{[k]_q}{[n]_q} \right) ^m \left[ \genfrac{}{}{0.0pt}{}{n}{k}\right] _{q} \frac{ q^{k(k-1)/2} x^k (1-x)^{n-k}}{(1-x+qx)\ldots (1-x+q^{n-1}x)} \\&=\sum _{k=0}^{n} q^{km}\left[ \genfrac{}{}{0.0pt}{}{n}{k}\right] _{q} \frac{ q^{k(k-1)/2} x^k (1-x)^{n-k}}{(1-x+qx)\ldots (1-x+q^{n-1}x)} \\&=(1-x)^n\sum _{k=0}^{n} \left[ \genfrac{}{}{0.0pt}{}{n}{k}\right] _{q} \frac{ q^{k(k-1)/2} (\frac{q^mx}{1-x})^k }{(1-x+qx)\ldots (1-x+q^{n-1}x)}. \end{aligned}$$

Then, applying the Gauss q-binomial formula (1.1), one derives that

$$\begin{aligned} R_{n,q}(\rho _m;x)&= \frac{(1-x)^n\left( 1+\frac{q^mx}{1-x}\right) \left( 1+\frac{q^{m+1}x}{1-x}\right) \ldots \left( 1+\frac{q^{m+n-1}x}{1-x}\right) }{(1-x+qx)\ldots (1-x+q^{n-1}x)} \\&=\prod ^{n-1}_{s=0}{\frac{1-x+q^{m+s}x}{1-x+q^sx}}. \end{aligned}$$

\(\square \)
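Formula (2.1) lends itself to a quick numerical sanity check, including negative exponents m. Here is a sketch in Python (helper names are ours; `lupas` implements (1.2)–(1.3) directly):

```python
def q_int(n, q):
    """q-integer [n]_q."""
    return sum(q ** j for j in range(n))

def q_binom(n, k, q):
    """q-binomial coefficient [n choose k]_q."""
    out = 1.0
    for j in range(1, k + 1):
        out *= q_int(n - k + j, q) / q_int(j, q)
    return out

def lupas(f, n, q, x):
    """R_{n,q}(f; x) as in (1.2)-(1.3)."""
    denom = 1.0
    for s in range(1, n):
        denom *= 1.0 - x + q ** s * x
    return sum(f(q_int(k, q) / q_int(n, q)) * q_binom(n, k, q)
               * q ** (k * (k - 1) // 2) * x ** k * (1.0 - x) ** (n - k) / denom
               for k in range(n + 1))

n, q, x = 5, 0.6, 0.37
for m in (-2, 0, 3, 7):          # any integer m is allowed in (2.1)
    rho = lambda t, m=m: (1.0 - (1.0 - q ** n) * t) ** m
    rhs = 1.0
    for s in range(n):
        rhs *= (1.0 - x + q ** (m + s) * x) / (1.0 - x + q ** s * x)
    assert abs(lupas(rho, n, q, x) - rhs) < 1e-10
```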

Remark 2.1

When \(0\leqslant m \leqslant n\), formula (2.1) takes the following simpler form

$$\begin{aligned} R_{n,q}(\rho _m;x)=\prod ^{m-1}_{s=0}\frac{(1-x+q^{n+s}x)}{(1-x+q^sx)}. \end{aligned}$$

Proposition 2.1 yields an explicit formula for the moments of all orders, which is presented below.

Corollary 2.2

Let \(q \ne 1\). For all \(m=0,1, \ldots \), the following identity holds:

$$\begin{aligned} R_{n,q}(t^m;x)=\frac{1}{(1-q^n)^m} \sum _{j=0}^m \left( {\begin{array}{c}m\\ j\end{array}}\right) (-1)^j \prod ^{n-1}_{s=0}\frac{(1-x+q^{j+s}x)}{(1-x+q^sx)}. \end{aligned}$$
(2.2)

Proof

By plain calculations, one has

$$\begin{aligned} t^m&=\frac{1}{(1-q^n)^m} [1-(1-(1-q^n)t)]^m=\frac{1}{(1-q^n)^m} \sum _{j=0}^m \left( {\begin{array}{c}m\\ j\end{array}}\right) (-1)^j \rho _j(t). \end{aligned}$$

Applying \(R_{n,q}\) to both sides of the latter identity and invoking (2.1), one obtains

$$\begin{aligned} R_{n,q}(t^m;x)=\frac{1}{(1-q^n)^m} \sum _{j=0}^m \left( {\begin{array}{c}m\\ j\end{array}}\right) (-1)^j R_{n,q}(\rho _j;x). \end{aligned}$$

\(\square \)
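Identity (2.2) can likewise be verified against a direct evaluation of \(R_{n,q}(t^m;x)\). A Python sketch (helper names ours):

```python
from math import comb

def q_int(n, q):
    """q-integer [n]_q."""
    return sum(q ** j for j in range(n))

def q_binom(n, k, q):
    """q-binomial coefficient [n choose k]_q."""
    out = 1.0
    for j in range(1, k + 1):
        out *= q_int(n - k + j, q) / q_int(j, q)
    return out

def lupas(f, n, q, x):
    """R_{n,q}(f; x) as in (1.2)-(1.3)."""
    denom = 1.0
    for s in range(1, n):
        denom *= 1.0 - x + q ** s * x
    return sum(f(q_int(k, q) / q_int(n, q)) * q_binom(n, k, q)
               * q ** (k * (k - 1) // 2) * x ** k * (1.0 - x) ** (n - k) / denom
               for k in range(n + 1))

def moment(n, q, x, m):
    """Right-hand side of (2.2)."""
    total = 0.0
    for j in range(m + 1):
        prod = 1.0
        for s in range(n):
            prod *= (1.0 - x + q ** (j + s) * x) / (1.0 - x + q ** s * x)
        total += comb(m, j) * (-1) ** j * prod
    return total / (1.0 - q ** n) ** m

n, q, x = 5, 0.6, 0.37
for m in range(5):
    assert abs(lupas(lambda t: t ** m, n, q, x) - moment(n, q, x, m)) < 1e-10
```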

3 Uniqueness with Respect to q

Although, in general, the image of f under \(R_{n,q}\) depends on q, there exist functions in C[0, 1] whose images do not vary with q. As can be confirmed from (2.2), linear functions are reproduced by \(R_{n,q}\), that is, \(R_{n,q}(at+b;x)=ax+b\) for every q. The next statement demonstrates that only linear functions possess this property.
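Both facts are easy to witness numerically. In the Python sketch below (reusing a direct implementation of (1.2)–(1.3); names ours), the image of a linear function is the same for every q, while the image of \(t^2\) already varies with q:

```python
def q_int(n, q):
    """q-integer [n]_q."""
    return sum(q ** j for j in range(n))

def q_binom(n, k, q):
    """q-binomial coefficient [n choose k]_q."""
    out = 1.0
    for j in range(1, k + 1):
        out *= q_int(n - k + j, q) / q_int(j, q)
    return out

def lupas(f, n, q, x):
    """R_{n,q}(f; x) as in (1.2)-(1.3)."""
    denom = 1.0
    for s in range(1, n):
        denom *= 1.0 - x + q ** s * x
    return sum(f(q_int(k, q) / q_int(n, q)) * q_binom(n, k, q)
               * q ** (k * (k - 1) // 2) * x ** k * (1.0 - x) ** (n - k) / denom
               for k in range(n + 1))

n, x = 4, 0.37
lin = lambda t: 2.0 * t + 1.0
for q in (0.3, 0.8, 1.0, 2.5):
    # R_{n,q}(at+b; x) = ax+b, independently of q
    assert abs(lupas(lin, n, q, x) - lin(x)) < 1e-12
# for t^2, the images for q = 0.5 and q = 2 visibly differ
sq = lambda t: t * t
assert abs(lupas(sq, n, 0.5, x) - lupas(sq, n, 2.0, x)) > 1e-3
```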

Theorem 3.1

Let \(f \in C[0,1]\). If \(R_{n,q}(f;x)=R_{n,r}(f;x)\) for all \(q,r \in (0, \infty ),\) then f is a linear function.

Proof

First, assume that \(f(0)=f(1)=0\). Applying the substitution \(y=x/(1-x)\), one can write

$$\begin{aligned} R_{n,q}(f;x)=S_q(f;y)=\sum _{k=0}^{n} f_{k,q}\left[ \genfrac{}{}{0.0pt}{}{n}{k}\right] _{q} \frac{ q^{k(k-1)/2} y^k}{(-y;q)_n}, \end{aligned}$$

where \(f_{k,q}=f\left( [k]_q/ [n]_q\right) .\)

With the help of formula [2, Corollary 10.2.2], one obtains, for \(|y|<1\),

$$\begin{aligned} S_q(f;y)=\sum _{k=0}^{n} \sum _{m=0}^{\infty } f_{k,q}\left[ \genfrac{}{}{0.0pt}{}{n}{k}\right] _{q} \left[ \genfrac{}{}{0.0pt}{}{n+m-1}{m}\right] _{q} q^{k(k-1)/2} (-1)^{m} y^{k+m} \end{aligned}$$
(3.1)

By convention, \(\left[ \genfrac{}{}{0.0pt}{}{n}{k}\right] _{q}=0\) whenever \(k>n.\) Hence, (3.1) can be rewritten as

$$\begin{aligned} S_q(f;y)&=\sum _{m=0}^{\infty } \sum _{k=0}^{\infty } f_{k,q}\left[ \genfrac{}{}{0.0pt}{}{n}{k}\right] _{q} \left[ \genfrac{}{}{0.0pt}{}{n+m-1}{m}\right] _{q} q^{k(k-1)/2} (-1)^{m} y^{k+m} \\&=\sum _{m=0}^{\infty } \sum _{k=0}^{m} f_{k,q}\left[ \genfrac{}{}{0.0pt}{}{n}{k}\right] _{q} \left[ \genfrac{}{}{0.0pt}{}{n+m-k-1}{m-k}\right] _{q} q^{k(k-1)/2} (-1)^{m-k} y^{m}. \end{aligned}$$

The condition \(S_q(f;y)=S_r(f;y)\) for all \(q,r \in (0, \infty )\) implies that, for all \(m=0,1, \ldots ,\)

$$\begin{aligned}{} & {} \sum _{k=0}^{m} f_{k,q} \left[ \genfrac{}{}{0.0pt}{}{n}{k}\right] _{q} \left[ \genfrac{}{}{0.0pt}{}{n+m-k-1}{m-k}\right] _{q} q^{k(k-1)/2} (-1)^{k} \\{} & {} \quad =\sum _{k=0}^{m} f_{k,r} \left[ \genfrac{}{}{0.0pt}{}{n}{k}\right] _{r} \left[ \genfrac{}{}{0.0pt}{}{n+m-k-1}{m-k}\right] _{r} r^{k(k-1)/2} (-1)^{k}. \end{aligned}$$

Bearing in mind that \(f_{0,q}=f_{0,r}=0\) and substituting \(m=1\) results in \(f_{1,q}[n]_q=f_{1,r}[n]_r\), whence \(f(1/[n]_q)[n]_q=C\), a constant, for all \(q \in (0, \infty )\). Put \(x=1/[n]_q\). As q varies over the interval \((0, \infty )\), the variable x covers (0, 1). Consequently, \(f(x)/x=C\) on (0, 1), so \(f(x)=Cx\); since \(f(1)=0\), in fact \(f \equiv 0\). For a general \(f \in C[0,1]\), it suffices to apply the above to \(g(x)=f(x)-f(0)-(f(1)-f(0))x\), which satisfies \(g(0)=g(1)=0\) and, since linear functions are reproduced by every \(R_{n,q}\), inherits the hypothesis; hence f is linear. \(\square \)

In what follows \(\mathcal {P}_n\) stands for the vector space of polynomials of degree at most n.

Theorem 3.2

Let \(f \in \mathcal {P}_n\) with \(\deg f\geqslant 2\). Then, \(R_{n,q}(f;x)=R_{n,r}(f;x)\) if and only if \(q=r.\)

Proof

Let f be a polynomial of degree \(m \geqslant 2\). Write

$$\begin{aligned} f(x)= \sum _{k=0}^m \alpha _k \rho _{kq}(x)= \sum _{k=0}^m \beta _k \rho _{kr}(x), \quad \alpha _m \beta _m \ne 0, \end{aligned}$$

where \(\rho _{kq}(x)=(1-(1-q^n)x)^k\). Equality \(R_{n,q}(f;x)=R_{n,r}(f;x)\) implies that

$$\begin{aligned} \sum _{k=0}^m \alpha _k \frac{\prod ^{k-1}_{s=0}{(1-x+q^{n+s}x)}}{\prod ^{k-1}_{s=1}{(1-x+q^sx)}}=\sum _{k=0}^m \beta _k \frac{\prod ^{k-1}_{s=0}{(1-x+r^{n+s}x)}}{\prod ^{k-1}_{s=1}{(1-x+r^sx)}}. \end{aligned}$$

Being a rational function, \(R_{n,q}(f;x)\) can be naturally extended by formulae (1.2) and (1.3) beyond the interval [0, 1]. Obviously, \(R_{n,q}(f;x)\) has singularities at \(x=1/(1-q), \; x=1/(1-q^2), \ldots , x=1/(1-q^{m-1})\). Since

$$\begin{aligned} \lim _{x \rightarrow 1/(1-q^{m-1}) } (1-x+q^{m-1}x)R_{n,q}(f;x)= \alpha _m \frac{\left( 1-\frac{[n]_q}{[m-1]_q}\right) \ldots \left( 1-\frac{[n+m-1]_q}{[m-1]_q}\right) }{\left( 1-\frac{[1]_q}{[m-1]_q}\right) \ldots \left( 1-\frac{[m-2]_q}{[m-1]_q}\right) } \ne 0, \end{aligned}$$

one concludes that \(x=1/(1-q^{m-1})\) is a simple pole of \(R_{n,q}(f;x)\). As \(R_{n,q}(f;x)=R_{n,r}(f;x)\), this pole should be a singularity of \(R_{n,r}(f;x)\). That is,

$$\begin{aligned} \frac{1}{1-q^{m-1}}=\frac{1}{1-r^j} \quad \text {for} \,\text {some} \, j \in \{1, \ldots , m-1 \}. \end{aligned}$$

Similarly, \(x=1/(1-r^{m-1})\) is a simple pole of \(R_{n,r}(f;x)\) and

$$\begin{aligned} \frac{1}{1-r^{m-1}}=\frac{1}{1-q^s} \quad \text {for} \, \text {some} \,s \in \{1, \ldots , m-1 \} . \end{aligned}$$

Juxtaposing \(q^{m-1}=r^j\) and \(q^s=r^{m-1}\), one gets \(q^{(m-1)^2}=r^{j(m-1)}=q^{sj}\), whence \(sj=(m-1)^2\). Since \(s,j \leqslant m-1\), this forces \(s=j=m-1\), which means that \(q=r.\) \(\square \)

4 On the Diagonalizability of \(R_{n,q}\)

To begin with, the following statement shows that every \(R_{n,q}\) has \(\lambda _0=\lambda _1=1\) as an eigenvalue of multiplicity 2.

Lemma 4.1

The linear functions are the only fixed points of \(R_{n,q}\).

Proof

Assume that \(R_{n,q}(f;x)=f(x)\) and f is not a linear function. Then, for \(0 \not \equiv g(x)=f(x)-[f(0)+(f(1)-f(0))x]\), one has \(R_{n,q}(g;x)=g(x)\) and \(g(0)=g(1)=0\). Without loss of generality (replacing g by \(-g\) if necessary), let \(\Vert g\Vert =M=g(x^*)\) for some \(x^* \in (0,1)\). Then,

$$\begin{aligned} M=R_{n,q}(g;x^*)=\sum _{k=1}^{n-1} g\left( \frac{[k]_q}{[n]_q}\right) b_{nk}(q;x^*) \leqslant M \sum _{k=1}^{n-1} b_{nk}(q;x^*). \end{aligned}$$
(4.1)

Using (1.2) with \(f \equiv 1\) and (2.1) with \(m=0\) leads to

$$\begin{aligned} \sum _{k=0}^{n} b_{nk}(q;x)=1. \end{aligned}$$

As \(b_{nk}(q;x)>0\) for \(x \in (0,1)\), (4.1) gives

$$\begin{aligned} M \leqslant M \sum _{k=1}^{n-1} b_{nk}(q;x^*)<M\sum _{k=0}^{n} b_{nk}(q;x^*)=M. \end{aligned}$$

This contradiction shows that f is a linear function. \(\square \)

Theorem 4.2

\(R_{n,q}\) is a diagonalizable linear operator, whose eigenvalues satisfy

$$\begin{aligned} 1=\lambda _0=\lambda _1>\lambda _2> \ldots> \lambda _n>0. \end{aligned}$$

Proof

By virtue of Lemma 4.1, \(R_{n,q}\) has a double eigenvalue \(\lambda =1.\) If \(\lambda \ne 1\) is an eigenvalue with a corresponding eigenfunction f, then the end-point interpolation property (1.4) implies that \(f(0)=f(1)=0.\) To search for nonlinear eigenfunctions, notice that \(V=\{P_n(x)/ \tau _{n-1}(x): P_n(x) \in \mathcal {P}_n\}\), where \(\tau _{n-1}(x)\) is given by (1.5), is an invariant subspace of C[0, 1] for \(R_{n,q}.\) Moreover, V is isomorphic to \(\mathbb {R}^{n+1}.\) Therefore, \(R_{n,q}\) can be viewed as a linear operator \(T_{n,q}\) on \(\mathbb {R}^{n+1}.\) With this in mind, the equality \(R_{n,q}(f;x)=g(x)\) can be understood as \(T_{n,q}([f_0, \ldots , f_n]^T)=[g_0, \ldots , g_n]^T\), where \(f_i=f([i]_q/[n]_q)\) and \(g_i=g([i]_q/[n]_q)\), \(i=0,1, \ldots ,n.\) Also, the matrix of \(T_{n,q}\) in the standard basis of \(\mathbb {R}^{n+1}\) is \( {\textbf {A}}=[b_{nj}(q;[i]_q/[n]_q)]_{0 \leqslant i,j \leqslant n}\). Obviously,

$$\begin{aligned} \textbf{A}=\begin{bmatrix} 1 &{} 0 &{} 0 &{} \ldots &{} 0 \\ * &{} * &{} *&{}\ldots &{} * \\ \vdots &{} \vdots &{} \vdots &{}\ddots &{} \vdots \\ * &{} * &{} *&{}\cdots &{} * \\ 0 &{} 0&{} 0&{}\ldots &{} 1 \end{bmatrix}, \end{aligned}$$

whence \(\textbf{A} \textbf{f}=\lambda \textbf{f}\) with \(\lambda \ne 1\) implies \(f_0=f_n=0.\) Therefore, \(\textbf{A} \textbf{f}=\lambda \textbf{f} \Leftrightarrow \tilde{\textbf{A}} \tilde{\textbf{f}}= \lambda \tilde{\textbf{f}}\), where \(\tilde{\textbf{A}}=[b_{nj}(q;[i]_q/[n]_q)]_{1 \leqslant i,j \leqslant n-1}\) and \(\tilde{f}_i=f_i,\) \(1 \leqslant i \leqslant n-1.\) It has been observed in [3] that \(\tilde{\textbf{A}}\) is a totally positive matrix. This is because

$$\begin{aligned} a_{kj}=\frac{\left[ \genfrac{}{}{0.0pt}{}{n}{k}\right] _{q} q^{k(k-1)/2} (x_j/(1-x_j))^k }{(-x_j/(1-x_j);q)_n}=:\frac{\left[ \genfrac{}{}{0.0pt}{}{n}{k}\right] _{q} q^{k(k-1)/2} y_j^k}{(-y_j;q)_n}>0, \quad 1 \leqslant k,j \leqslant n-1, \end{aligned}$$

\(x \mapsto x/(1-x)\) is increasing on (0, 1), and the Vandermonde matrix \([y_j^k]_{1 \leqslant k,j \leqslant n-1}\) is known to be totally positive [6]. By virtue of the Gantmacher–Krein theorem (see [5, Theorem 6]), the totally positive matrix \(\tilde{\textbf{A}}\) possesses \(n-1\) simple eigenvalues, say,

$$\begin{aligned} \lambda _2>\lambda _3> \ldots> \lambda _n>0. \end{aligned}$$

Since \(\Vert R_{n,q}\Vert =1\), one derives that the eigenvalues of \(R_{n,q}\) are

$$\begin{aligned} 1=\lambda _0=\lambda _1>\lambda _2> \ldots> \lambda _n>0. \end{aligned}$$

\(\square \)

Example 4.1

For \(n=2\), one has \(\lambda _0=\lambda _1=1\), \(\lambda _2=1/2\) and for \(n=3\), the eigenvalues are \(\lambda _0=\lambda _1=1\), \(\lambda _2=(1+q)(1+q+\sqrt{q})/(2q^2+5q+2)\), \(\lambda _3=(1+q)(1+q-\sqrt{q})/(2q^2+5q+2)\).
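The closed forms in Example 4.1 can be cross-checked by building the matrix \(\textbf{A}\) of Theorem 4.2 and comparing its trace and determinant with the claimed eigenvalues: for \(n=2\), \(\lambda _2=\text {tr}\,\textbf{A}-2\), while for \(n=3\), \(\lambda _2+\lambda _3=\text {tr}\,\textbf{A}-2\) and \(\lambda _2\lambda _3=\det \textbf{A}\). A Python sketch (helper names ours):

```python
from math import sqrt

def q_int(n, q):
    """q-integer [n]_q."""
    return sum(q ** j for j in range(n))

def q_binom(n, k, q):
    """q-binomial coefficient [n choose k]_q."""
    out = 1.0
    for j in range(1, k + 1):
        out *= q_int(n - k + j, q) / q_int(j, q)
    return out

def b_nk(n, k, q, x):
    """Lupas basis function b_{nk}(q; x)."""
    denom = 1.0
    for s in range(1, n):
        denom *= 1.0 - x + q ** s * x
    return (q_binom(n, k, q) * q ** (k * (k - 1) // 2)
            * x ** k * (1.0 - x) ** (n - k) / denom)

def matrix(n, q):
    """A = [b_{nj}(q; [i]_q/[n]_q)], the matrix of T_{n,q}."""
    nodes = [q_int(i, q) / q_int(n, q) for i in range(n + 1)]
    return [[b_nk(n, j, q, x) for j in range(n + 1)] for x in nodes]

def det(M):
    """Determinant by Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

q = 0.7
# n = 2: the single nontrivial eigenvalue is trace - 2 = 1/2, for every q
A2 = matrix(2, q)
assert abs(sum(A2[i][i] for i in range(3)) - 2.0 - 0.5) < 1e-12

# n = 3: compare trace and determinant with the closed forms of Example 4.1
A3 = matrix(3, q)
lam2 = (1 + q) * (1 + q + sqrt(q)) / (2 * q ** 2 + 5 * q + 2)
lam3 = (1 + q) * (1 + q - sqrt(q)) / (2 * q ** 2 + 5 * q + 2)
assert abs(sum(A3[i][i] for i in range(4)) - (2.0 + lam2 + lam3)) < 1e-10
assert abs(det(A3) - lam2 * lam3) < 1e-10
```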

Example 4.2

Numerical calculations performed on Matlab reveal, for \(q=0.5\) and some different values of n, the eigenvalues as shown in Table 1. The normalized eigenfunctions for the case \(n=5\) are depicted in Fig. 1.

Table 1 Eigenvalues of \(R_{n,0.5}\) for different values of n

Fig. 1 The eigenfunctions of \(R_{5,0.5}\)

5 On the Image of a Modulus of Continuity

In [8], it is shown that \(B_n(\omega ;t)\) is a modulus of continuity whenever \(\omega (t)\) is a modulus of continuity. This property is not inherited by the Lupaş q-analogue.

Definition 5.1

A function \(\omega \) on [0, 1] is said to be a modulus of continuity provided that

  (i) \(\omega (0)=0\);

  (ii) \(\omega \) is continuous on [0, 1];

  (iii) \(\omega \) is non-decreasing on [0, 1];

  (iv) \(\omega \) is subadditive: \(\omega (t_1+t_2) \leqslant \omega (t_1)+\omega (t_2)\) whenever \(t_1,t_2,t_1+t_2 \in [0,1]\).

Before we prove the respective result, let us present auxiliary lemmas. From here on, it is assumed that \(0<q<1.\)

Lemma 5.1

Let \(t_0 \in \left( [n-1]_q/[n]_q,1 \right) \). Define \(\omega (t)\) to be a continuous piecewise linear function such that \( \omega (0)=0, \, \omega (1)=\beta >0\) and \(\omega (t)=\alpha \) for \(t \in (1-t_0,t_0).\) Then, \(\omega (t)\) is a modulus of continuity if and only if \(\alpha \leqslant \beta \leqslant 2 \alpha \).

Proof

(\(\Rightarrow \)) Suppose that \(\omega (t)\) is a modulus of continuity. Then, by (iii), \(\alpha \leqslant \beta \). Also, for (iv) to be true, \(\omega (t)+\omega (1-t)\geqslant \beta =\omega (1)\) for all \(t \in [0,1].\) Setting \(t=1/2\) yields \(2\alpha \geqslant \beta .\)

(\(\Leftarrow \)) For all \(\alpha \leqslant \beta \), (i), (ii) and (iii) are obvious. Assume that \(\alpha \leqslant \beta \leqslant 2\alpha \), so that the graph of \(\omega (t)\) looks as in Fig. 2.

The slope of \(\omega (t)\) on \([0,1-t_0]\) is not less than that on \([t_0,1].\) Depending on the locations of \(t_1, t_2\) and \(t_1+t_2\), several cases may be considered. For example, if both \(t_1, t_2 \in [0,1-t_0],\) then \(\omega (t_1+t_2)=\omega (t_1)+\omega (t_2) \) if \(t_1+t_2 \in [0,1-t_0]\) and \(\omega (t_1+t_2)<\omega (t_1)+\omega (t_2) \) otherwise. Also, if \(t_1 \in [0,1-t_0],\) \(t_2 \in [1-t_0, t_0]\) and \(t_1+t_2 \in [t_0,1]\), then, since \(t_2-t_0\leqslant 0\) and \(\beta -\alpha \leqslant \alpha ,\) one has

$$\begin{aligned} \omega (t_1+t_2)=\alpha +(t_1+t_2-t_0)\frac{\beta -\alpha }{1-t_0}\leqslant \frac{\alpha }{1-t_0}t_1+\alpha =\omega (t_1)+\omega (t_2). \end{aligned}$$

All other cases can be treated similarly. \(\square \)

Fig. 2 The graph of \(\omega \)

Lemma 5.2

For \(x \in [0,1]\) and \(n \geqslant 2\), one has \(b_{nn}(q;1-x) \leqslant b_{n0}(q;x)\), with equality only at \(x=0\) and \(x=1\).

Proof

Obviously, for \(x=0,1,\) equality holds. For \(x \in (0,1),\) set \(y=x/(1-x) \in (0, \infty )\). Then, \(b_{nn}(q;1-x)=q^{n(n-1)/2} y^{-n}/ \left( -1/y;q\right) _n\) and \(b_{n0}(q;x)=1/(-y;q)_n\). Consequently,

$$\begin{aligned} b_{nn}(q;1-x)-b_{n0}(q;x)=\frac{q^{n(n-1)/2}(-y;q)_n-y^n\left( -1/y;q\right) _n}{y^n\left( -1/y;q\right) _n(-y;q)_n}. \end{aligned}$$

As the denominator is positive for all \(y>0\), it suffices to consider the sign of

$$\begin{aligned} \varphi _n(y)=q^{n(n-1)/2}(-y;q)_n-y^n\left( -1/y;q\right) _n. \end{aligned}$$

By virtue of the Gauss q-binomial formula,

$$\begin{aligned} \varphi _n(y)&=\sum _{k=0}^n \left[ \genfrac{}{}{0.0pt}{}{n}{k}\right] _{q} q^{n(n-1)/2+k(k-1)/2}y^k-\sum _{k=0}^n \left[ \genfrac{}{}{0.0pt}{}{n}{k}\right] _{q} q^{k(k-1)/2}y^{n-k} \\&= \sum _{k=0}^n \left[ \genfrac{}{}{0.0pt}{}{n}{k}\right] _{q} (q^{n(n-1)/2+k(k-1)/2}-q^{(n-k)(n-k-1)/2})y^k \\&=\sum _{k=0}^n \left[ \genfrac{}{}{0.0pt}{}{n}{k}\right] _{q} q^{n(n-1)/2+k(k-1)/2}(1-q^{-k(n-1)})y^k. \end{aligned}$$

As \(0<q<1\), one has \(q^{-k(n-1)}>1\) for \(k>0\) and \(n \geqslant 2\); hence \(\varphi _n(y)<0\) for all \(y>0\). \(\square \)
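A direct grid check of Lemma 5.2, using the basis functions (1.3), reads as follows (Python sketch; helper names ours). The strict inequality in the interior reflects \(\varphi _n(y)<0\) for \(y>0\).

```python
def q_int(n, q):
    """q-integer [n]_q."""
    return sum(q ** j for j in range(n))

def q_binom(n, k, q):
    """q-binomial coefficient [n choose k]_q."""
    out = 1.0
    for j in range(1, k + 1):
        out *= q_int(n - k + j, q) / q_int(j, q)
    return out

def b_nk(n, k, q, x):
    """Lupas basis function b_{nk}(q; x) of (1.3)."""
    denom = 1.0
    for s in range(1, n):
        denom *= 1.0 - x + q ** s * x
    return (q_binom(n, k, q) * q ** (k * (k - 1) // 2)
            * x ** k * (1.0 - x) ** (n - k) / denom)

q = 0.5
for n in (2, 3, 6):
    for i in range(1, 100):
        x = i / 100.0
        # b_{nn}(q; 1-x) < b_{n0}(q; x) strictly for x in (0, 1)
        assert b_nk(n, n, q, 1.0 - x) < b_nk(n, 0, q, x)
```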

Theorem 5.3

For each \(n \geqslant 2\), there exists a modulus of continuity \(\omega (t)\) such that \(R_{n,q}(\omega ;x)\) is not a modulus of continuity.

Proof

Set \(\omega (t)\) as in Lemma 5.1 and take \(\beta =2 \alpha .\) Since \(1-t_0<1-[n-1]_q/[n]_q=q^{n-1}/[n]_q \leqslant 1/[n]_q\), all the interior nodes \([k]_q/[n]_q\), \(k=1,\ldots ,n-1,\) lie in \((1-t_0,t_0)\), where \(\omega \) equals \(\alpha \). Then

$$\begin{aligned} R_{n,q}(\omega ;x)&=\alpha \sum _{k=1}^{n-1} b_{nk}(q;x)+ 2 \alpha b_{nn}(q;x) \\&= \alpha (1-b_{n0}(q;x)-b_{nn}(q;x))+ 2 \alpha b_{nn}(q;x) \\&= \alpha -\alpha b_{n0}(q;x)+\alpha b_{nn}(q;x). \end{aligned}$$

To prove the theorem, it will be shown that, for all \(x \in (0,1)\),

$$\begin{aligned} R_{n,q}(\omega ;x)+R_{n,q}(\omega ;1-x)<2 \alpha =\beta =R_{n,q}(\omega ;1). \end{aligned}$$

Plain calculations lead to

$$\begin{aligned} R_{n,q}(\omega ;x)+R_{n,q}(\omega ;1-x)&=2 \alpha + \alpha (b_{nn}(q;1-x)-b_{n0}(q;x))+\alpha (b_{nn}(q;x)\\&\qquad -b_{n0}(q;1-x)) \\&=:2 \alpha +S_1+S_2. \end{aligned}$$

Observe that \(S_1<0\) for all \(x \in (0,1)\) by Lemma 5.2. Replacing x by \(1-x\), we obtain that \(S_2<0\) for all \(x \in (0,1)\) as well. As a result,

$$\begin{aligned} R_{n,q}(\omega ;x)+R_{n,q}(\omega ;1-x)<2 \alpha = \beta , \end{aligned}$$

which violates subadditivity (iv) with \(t_1=x\) and \(t_2=1-x\); hence \(R_{n,q}(\omega ;x)\) is not a modulus of continuity. \(\square \)
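The failure of subadditivity exhibited in Theorem 5.3 can be observed numerically. In the Python sketch below, the construction of \(\omega \) follows Lemma 5.1 with \(\beta =2\alpha \) (helper names and the particular choice of \(t_0\) are ours):

```python
def q_int(n, q):
    """q-integer [n]_q."""
    return sum(q ** j for j in range(n))

def q_binom(n, k, q):
    """q-binomial coefficient [n choose k]_q."""
    out = 1.0
    for j in range(1, k + 1):
        out *= q_int(n - k + j, q) / q_int(j, q)
    return out

def lupas(f, n, q, x):
    """R_{n,q}(f; x) as in (1.2)-(1.3)."""
    denom = 1.0
    for s in range(1, n):
        denom *= 1.0 - x + q ** s * x
    return sum(f(q_int(k, q) / q_int(n, q)) * q_binom(n, k, q)
               * q ** (k * (k - 1) // 2) * x ** k * (1.0 - x) ** (n - k) / denom
               for k in range(n + 1))

n, q = 4, 0.5
alpha = 1.0
# any t0 in ([n-1]_q/[n]_q, 1) works; take the midpoint
t0 = 0.5 * (q_int(n - 1, q) / q_int(n, q) + 1.0)

def omega(t):
    """Piecewise linear modulus of Lemma 5.1 with beta = 2*alpha."""
    if t <= 1.0 - t0:
        return alpha * t / (1.0 - t0)
    if t <= t0:
        return alpha
    return alpha + alpha * (t - t0) / (1.0 - t0)

# R_{n,q}(omega; x) + R_{n,q}(omega; 1-x) < 2*alpha = R_{n,q}(omega; 1),
# so the image violates subadditivity (iv) and is not a modulus of continuity
for x in (0.1, 0.25, 0.5, 0.75):
    assert lupas(omega, n, q, x) + lupas(omega, n, q, 1.0 - x) < 2.0 * alpha
```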