
1 Introduction

Let L(x) be a slowly varying function (s.v.f.), i.e. a positive measurable function such that, for any fixed v ∈ (0, ∞), one has L(vx) ∼ L(x) as x → ∞:

$$\lim _{x\rightarrow \infty }\frac{L(vx)} {L(x)} = 1.$$
(1)
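
For instance, L(x) = ln x satisfies (1). A quick numerical sanity check (our illustration, not part of the formal development; the helper `ratio` is ours):

```python
import math

def ratio(L, v, x):
    """Compute L(vx)/L(x), the ratio appearing in (1)."""
    return L(v * x) / L(x)

L = math.log  # ln x is a standard example of an s.v.f.

# For fixed v, the ratio tends to 1 as x grows.
for x in (1e3, 1e30, 1e300):
    print(f"x = {x:.0e}:  L(5x)/L(x) = {ratio(L, 5.0, x):.6f}")

# By contrast, the power x^0.1 is NOT slowly varying: its ratio tends to 5^0.1.
P = lambda x: x ** 0.1
print(f"power ratio at x = 1e12: {ratio(P, 5.0, 1e12):.6f}")
```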

Among the most important and often used results on s.v.f.’s are the Uniform Convergence Theorem (see property (U) below) and the Integral Representation Theorem (property (I)), the latter result essentially relying on the former. These theorems, together with their proofs, can be found e.g. in monographs [1] (Theorems 1.2.1 and 1.3.1) and [2] (see §1.1).

(U) :

For any fixed \(0 < v_{1} < v_{2} < \infty \), convergence (1) is uniform in \(v \in [v_{1},v_{2}]\).

(I) :

A function L(x) is an s.v.f. iff the following representation holds true:

$$L(x) = c(x)\exp \left \{\int \limits _{1}^{x}\frac{\epsilon (t)} {t} \,dt\right \},\qquad x \geq 1,$$
(2)

where c(t) > 0 and \(\epsilon (t)\) are measurable functions, c(t) → c ∈ (0, ∞) and \(\epsilon (t) \rightarrow 0\) as t → ∞.
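
For example (our illustration), taking c(x) ≡ 2 and ε(t) = 1/(1 + ln t) in (2) gives ∫₁^x ε(t)/t dt = ln(1 + ln x), i.e. L(x) = 2(1 + ln x), which is indeed slowly varying. A brief numerical confirmation of (1) for this L:

```python
import math

eps = lambda t: 1 / (1 + math.log(t))   # eps(t) -> 0 as t -> infinity, as (2) requires

def L(x, c=2.0):
    """Evaluate (2) in closed form: with the substitution t = e^u,
    int_1^x eps(t)/t dt = int_0^{ln x} du/(1+u) = ln(1 + ln x)."""
    return c * math.exp(math.log1p(math.log(x)))

# L(x) = c (1 + ln x); check the slow-variation property (1) for v = 10:
for x in (1e5, 1e50, 1e300):
    print(f"x = {x:.0e}:  L(10x)/L(x) = {L(10 * x) / L(x):.6f}")
```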

The concept of an s.v.f. is closely related to that of a regularly varying function (r.v.f.) R(x), which is specified by the relation

$$R(x) = {x}^{\alpha }L(x),\qquad \alpha \in \mathbb{R},$$

where L is an s.v.f. and α is called the index of the r.v.f. R. We will denote the class of all r.v.f.’s by \(\mathcal{R}\).

R.v.f.’s are characterised by the relation

$$\lim _{x\rightarrow \infty }\frac{R(vx)} {R(x)} = {v}^{\alpha },\quad v \in (0,\infty ).$$
(3)

For them, convergence (3) is also uniform in v on compact intervals, while representation (2) holds for r.v.f.’s with \(\epsilon (t) \rightarrow \alpha \) as t → ∞.

In Probability Theory there exists a large class of limit theorems on large deviations of sums of random variables whose distributions \(\mathbf{F}\) have the property that their right tails \(F_{+}(x) := \mathbf{F}\left ([x,\infty )\right )\) are r.v.f.’s. The following assertion (see e.g. Theorem 4.4.1 in [2]) is a typical representative of such results. Let \(\xi ,\xi _{1},\xi _{2},\ldots \) be independent identically distributed random variables, Eξ = 0, Eξ² < ∞, \(S_{n} := \sum \limits _{k=1}^{n}\xi _{k}\) and \(\overline{S}_{n} :=\max _{k\leq n}S_{k}\).

Theorem A.

If \(F_{+}(t) = \mathbf{P}(\xi \geq t)\) is an r.v.f. of index α < −2 then, as \(x \rightarrow \infty \) and \(x{(n\ln n)}^{-1/2} \rightarrow \infty \), one has

$$\mathbf{P}(S_{n} \geq x) \sim nF_{+}(x),\qquad \mathbf{P}(\overline{S}_{n} \geq x) \sim nF_{+}(x).$$
(4)

Similar assertions hold true under the assumption that the distributions of the scaled sums \(S_{n}\) tend to a stable law (see Chaps. 2 and 3 in [2]).

There arises the natural question of how essential the condition \(F_{+} \in \mathcal{R}\) is for (4) to hold. It turns out that this condition can be significantly relaxed.

The aim of the present paper is to describe and study classes of functions that are wider than \(\mathcal{R}\) and have the property that the condition that \(F_{+}\) belongs to such a class, together with some other natural conditions, ensures the validity of limit laws of the form (4).

In Sect. 2 of the present note we give the definitions of the above-mentioned broad classes of functions which we call asymptotically ψ-locally constant functions. The section also contains assertions in which conditions sufficient for relations (4) are given in terms of these functions. Section 3 presents the main results on characterisation of asymptotically ψ-locally constant functions. Section 4 contains the proofs of these results.

2 The Definitions of Asymptotically Locally Constant Functions. Applications to Limit Theorems on Large Deviations

Following §1.2 in [2], we will call a positive function g(x) an asymptotically locally constant function (l.c.f.) if, for any fixed v ∈ (−∞, ∞),

$$\lim _{x\rightarrow \infty }\frac{g(x + v)} {g(x)} = 1$$
(5)

(the function g(x), as all the other functions appearing in the present note, will be assumed measurable; assumptions of this kind will be omitted for brevity’s sake).

If one puts x = ln y, v = ln u, then g(x + v) = g(ln (yu)), so that the composition L = g ∘ ln will be an s.v.f. by virtue of (5) and (1). From this and the equality \(g(x) = L({e}^{x})\) it follows that an l.c.f. g will have the following properties:

(U 1 ):

For any fixed \(-\infty \,<\,v_{1}\,<\,v_{2}\,<\,\infty \), convergence (5) is uniform in \(v \in [v_{1},v_{2}]\).

(I 1 ):

A function g(x) is an l.c.f. iff it admits a representation of the form

$$g(x) = c(x)\exp \left \{\int \limits _{1}^{{e}^{x} }\frac{\epsilon (t)} {t} \,dt\right \},\qquad x \geq 1,$$
(6)

where c(t) and \(\epsilon (t)\) have the same properties as in (I).

Probability distributions F on \(\mathbb{R}\) such that \(F_{+}(t) := \mathbf{F}\left ([t,\infty )\right )\) is an l.c.f. are sometimes referred to as long-tailed distributions, or class \(\mathcal{L}\) distributions. Such distributions often appear in papers on limit theorems for sums of random variables with “heavy tails”. Examples of l.c.f.’s are provided by r.v.f.’s and functions of the form \(\exp \{{x}^{\alpha }L(x)\}\), where L is an s.v.f., α ∈ (0, 1).

It is not hard to see that, by virtue of property (U1), definition (5) of an l.c.f. is equivalent to the following one: for any fixed v ∈ (−∞, ∞) and any function v(x) → v as x → ∞, one has

$$\lim _{x\rightarrow \infty }\frac{g(x + v(x))} {g(x)} = 1.$$
(7)

Now we will consider a broader concept, which includes both s.v.f.’s and l.c.f.’s as special cases.

Let ψ(t) > 1 be a fixed non-decreasing function.

Definition 1.

(See also Definition 1.2.7 in [2].) A function g(x) > 0 is said to be an asymptotically ψ-locally constant function (ψ-l.c.f.) if, for any fixed v ∈ (−∞, ∞) such that x + vψ(x) ≥ cx for some c > 0 and all large enough x, one has

$$\lim _{x\rightarrow \infty }\frac{g(x + v\psi (x))} {g(x)} = 1.$$
(8)

If ψ(x) ≡ 1 then the class of ψ-l.c.f.’s coincides with the class of l.c.f.’s, while if ψ(x) ≡ x then it coincides with the class of s.v.f.’s. In the case when ψ(x) → ∞ and ψ(x) = o(x) as x → ∞, the class of ψ-l.c.f.’s occupies, in a sense, an intermediate place (in terms of the width of the zone where its functions are locally constant) between the classes of s.v.f.’s and l.c.f.’s.

Clearly, all functions from \(\mathcal{R}\) are ψ-l.c.f.’s for any function ψ(x) = o(x).

Note that the concept of ψ-l.c.f.’s is closely related to that of h-insensitive functions extensively used in [5] (see Definition 2.18 therein). Our Theorem 1 below shows that, under broad conditions, a ψ-l.c.f. will be h-insensitive with ψ = h.

We will also need the following

Definition 2.

(See also Definition 1.2.20 in [2].)  We will call a function g an upper-power function if it is an l.c.f. and, for any p ∈ (0, 1), there exists a constant c(p), \(\inf _{p\in (p_{1},1)}c(p) > 0\) for any \(p_{1} \in (0,1)\), such that

$$g(t) \geq c(p)g(pt),\qquad t > 0.$$

It is clear that all r.v.f.’s are upper-power functions.

The concept of ψ-l.c.f.’s and that of an upper-power function enable one to substantially extend the assertion of Theorem A. It is not hard to derive from Theorem 4.8.1 in [2] the following result.

Let h(v) > 0 be a non-decreasing function such that \(h(v) \gg \sqrt{v\,\ln v}\) as v → ∞. Such a function always has a generalised inverse \({h}^{(-1)}(t) :=\inf \{ v :\, h(v) \geq t\}\).
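For a monotone h, the generalised inverse \({h}^{(-1)}(t) =\inf \{ v :\, h(v) \geq t\}\) can be evaluated numerically, e.g. by bisection. A minimal sketch (the helper name `gen_inverse` is ours):

```python
def gen_inverse(h, t, lo=0.0, hi=1.0, tol=1e-9):
    """Generalised inverse h^(-1)(t) = inf{v : h(v) >= t} for non-decreasing h."""
    # Grow the bracket until h(hi) >= t.
    while h(hi) < t:
        lo, hi = hi, 2 * hi
    # Bisect, keeping the invariant h(lo) < t <= h(hi).
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if h(mid) >= t:
            hi = mid
        else:
            lo = mid
    return hi

# Example: h(v) = v^2, which indeed grows faster than sqrt(v ln v).
h = lambda v: v * v
print(gen_inverse(h, 9.0))   # inf{v : v^2 >= 9} is approximately 3
```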

Theorem B.

Assume that Eξ = 0, Eξ² < ∞ and that the following conditions are satisfied:

  1.

    \(F_{+}(t) \leq V (t) = {t}^{\alpha }L(t)\) , where α < −2 and L is an s.v.f.

  2.

    The function \(F_{+}(t)\) is upper-power and a ψ-l.c.f. for \(\psi (t) = \sqrt{{h}^{(-1) } (t)}\).

Then relations (4) hold true provided that x →∞, x ≥ h(n) and

$$n{V }^{2}(x) = o\left (F_{ +}(x)\right ).$$
(9)

In particular, if \(x = h(n) \sim c{n}^{\beta }\) as n → ∞, β > 1 ∕ 2, then one can put \(\psi (t) := {t}^{1/(2\beta )}\) (\(\psi (t) := \sqrt{t}\) if x ∼ cn).

Condition (9) is always met provided that \(F_{+}(t) \geq cV (t){t}^{-\epsilon }\) for some \(\epsilon > 0\), \(\epsilon < -\alpha - 2,\) and c = const. Indeed, in this case, for \(x \geq \sqrt{n}\), x → ∞,

$$n{V }^{2}(x) \leq {c}^{-1}{x}^{2+\epsilon }V (x)F_{ +}(x) = o\left (F_{+}(x)\right ).$$

Now consider the case where Eξ² = ∞. Let, as before, V(t) = t^α L(t) be an r.v.f. and set \(\sigma (v) := {V }^{(-1)}(1/v)\). Observe that σ(v) is also an r.v.f. (see e.g. Theorem 1.1.4 in [2]). Further, let h(v) > 0 be a non-decreasing function such that h(v) ≫ σ(v) as v → ∞. Employing Theorem 4.8.6 in [2] (using this opportunity, note that there are a couple of typos in the formulation of that theorem: the text “with \(\psi (t) = \sigma (t) = {V }^{(-1)}(1/t)\)” should be omitted, while the condition “x ≫ σ(n)” must be replaced with “\(x \gg \sigma (n) = {V }^{(-1)}(1/n)\)”), it is not difficult to establish the following result.

Theorem C.

Let Eξ = 0 and the following conditions be met:

  1.

    \(F_{+}(t) \leq V (t) = {t}^{\alpha }L(t)\) , where − α ∈ (1,2) and L is an s.v.f.

  2.

    P(ξ < −t) ≤ cV (t) for all t > 0.

  3.

    The function \(F_{+}\) is upper-power and a ψ-l.c.f. for \(\psi (t) = \sigma \left ({h}^{(-1)}(t)\right )\).

Then relations (4) hold true provided that x →∞, x ≥ h(n) and relation  (9) is satisfied.

If, for instance, \(V (t) \sim c_{1}{t}^{\alpha }\) as t → ∞, \(x \sim c_{2}{n}^{\beta }\) as n → ∞, \(c_{i} =\mathrm{ const}\), i = 1, 2, and β > −1 ∕ α, then one can put \(\psi (t) := {t}^{-1/(\alpha \beta )}\).
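To see where this exponent comes from: with V(t) = t^α and h(v) = v^β one has \({V }^{(-1)}(1/v) = {v}^{-1/\alpha }\) and \({h}^{(-1)}(t) = {t}^{1/\beta }\), so \(\sigma ({h}^{(-1)}(t)) = {t}^{-1/(\alpha \beta )}\). A numerical sanity check (a sketch of ours, with the constants c₁ = c₂ = 1 and illustrative values α = −1.5, β = 1):

```python
# V(t) = t^alpha with alpha < 0; h(v) = v^beta with beta > -1/alpha.
alpha, beta = -1.5, 1.0

V_inv = lambda y: y ** (1 / alpha)        # V^(-1)(y): solve t^alpha = y
sigma = lambda v: V_inv(1 / v)            # sigma(v) = V^(-1)(1/v) = v^(-1/alpha)
h_inv = lambda t: t ** (1 / beta)         # h^(-1)(t) for h(v) = v^beta

t = 1e6
psi_t = sigma(h_inv(t))
# Both expressions agree: here they equal t^(2/3), i.e. about 1e4.
print(psi_t, t ** (-1 / (alpha * beta)))
```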

Condition (9) of Theorem C is always satisfied provided that \(x \geq {n}^{\delta -(1/\alpha )}\) and \(F_{+}(t) \geq cV (t){t}^{-\epsilon }\) for some δ > 0 and \(\epsilon < {\alpha }^{2}\delta /(1 - \alpha \delta )\). Indeed, in this case \(n \leq {x}^{-\alpha /(1-\alpha \delta )}\) and

$$n{V }^{2}(x) \leq {c}^{-1}F_{ +}(x){x}^{\epsilon -\alpha /(1-\alpha \delta )}V (x) = o\left (F_{ +}(x)\right ).$$

Note also that the conditions of Theorems B and C do not stipulate that n → ∞.

The proofs of Theorems B and C basically consist in verifying, for the indicated choice of functions ψ, the conditions of Theorems 4.8.1 and 4.8.6 in [2], respectively. We will omit them.

It is not hard to see (e.g. from the representation theorem on p. 74 in [1]) that Theorems B and C include, as special cases, situations when the right tail of F satisfies the condition of extended regular variation, i.e. when, for any b > 1 and some \(0 < \alpha _{1} \leq \alpha _{2} < \infty \),

$${b}^{-\alpha _{2} } \leq \liminf _{x\rightarrow \infty }\frac{F_{+}(bx)} {F_{+}(x)} \leq \limsup _{x\rightarrow \infty }\frac{F_{+}(bx)} {F_{+}(x)} \leq {b}^{-\alpha _{1} }.$$
(10)

Under the assumption that the random variable ξ was obtained by centering a non-negative random variable (i.e. ξ = η − Eη for some random variable η ≥ 0), the former of the asymptotic relations (4) was established in the above-mentioned case in [3]. One could also mention some further efforts aimed at extending the conditions of Theorem A that ensure the validity of (4), see e.g. [4, 6].

To conclude this section, we will make a remark showing that the presence of the condition that \(F_{+}(t)\) is a ψ-l.c.f. in Theorems B and C is quite natural. Moreover, it also indicates that any further extension of this condition within the class of “sufficiently regular” functions is hardly possible. If we turn, say, to the proof of Theorem 4.8.1 in [2], we will see that, when x ∼ cn, the main term in the asymptotic representation for \(\mathbf{P}(S_{n} \geq x)\) is given by

$$n\int \limits _{-N\sqrt{n}}^{N\sqrt{n}}\mathbf{P}(S_{ n-1} \in dt)F_{+}(x - t),$$
(11)

where N → ∞ slowly enough as n → ∞. It is clear that, by virtue of the Central Limit Theorem, the integral in this expression is asymptotically equivalent to \(F_{+}(x)\) (implying that the former relation in (4) will hold true), provided that \(F_{+}(t)\) is a ψ-l.c.f. for \(\psi (t) = \sqrt{t}\).

Since \(\mathbf{E}S_{n-1} = 0\), one might try to obtain such a result in the case when \(F_{+}(t)\) belongs to a broader class of “asymptotically ψ-locally linear functions”, i.e. such functions that, for any fixed v and t → ∞,

$$F_{+}(t + v\psi (t)) = F_{+}(t)(1 - cv + o(1)),\quad c =\mathrm{ const} > 0.$$

However, such a representation is impossible as 1 − cv < 0 when v > 1 ∕ c.

3 The Characterisation of ψ-l.c.f.’s

The aim of the present section is to prove that, for any ψ-l.c.f. g, convergence (8) is uniform in v on any compact set and, moreover, that g admits an integral representation similar to (2) and (6). To do that, we will need some restrictions on the function ψ.

We assume that ψ is a non-decreasing function such that ψ(x) = o(x) as x → ∞. For such functions, we introduce the following condition:

  • (A) For any fixed v > 0, there exists a value a(v) ∈ (0, ∞) such that

    $$\frac{\psi (x - v\psi (x))} {\psi (x)} \geq a(v)\quad \text{ for all sufficiently large}\ x.$$
    (12)

Letting y : = x + vψ(x) > x and using the monotonicity of ψ, one has

$$\psi (y - v\psi (y)) \leq \psi (x).$$

Therefore, relation (12) implies that, for all large enough x,

$$\frac{\psi (x + v\psi (x))} {\psi (x)} \equiv \frac{\psi (y)} {\psi (x)} \leq \frac{\psi (y)} {\psi (y - v\psi (y))} \leq \frac{1} {a(v)} \in (0,\infty ).$$

Thus, any function ψ satisfying condition (A) will also satisfy the following relation: for any fixed v > 0,

$$\frac{\psi (x + v\psi (x))} {\psi (x)} \leq \frac{1} {a(v)}\quad \text{ for all sufficiently large}\ x.$$
(13)
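
For instance, for ψ(x) = √x one has ψ(x − vψ(x))/ψ(x) → 1 for each fixed v, so condition (A) holds with a(v) arbitrarily close to 1. A small numerical sketch (ours, not part of the text):

```python
import math

psi = math.sqrt  # psi(x) = sqrt(x): non-decreasing, with psi(x) = o(x)

# The ratio psi(x - v psi(x)) / psi(x) appearing in (12) tends to 1
# as x -> infinity, so any fixed a(v) < 1 serves as a lower bound for large x.
v = 4.0
for x in (1e2, 1e4, 1e8):
    r = psi(x - v * psi(x)) / psi(x)
    print(f"x = {x:.0e}:  psi(x - v psi(x))/psi(x) = {r:.6f}")
```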

Observe that the converse is not true: it is not hard to construct an example of a (piecewise linear, globally Lipschitz) non-decreasing function ψ which satisfies a condition of the form (13), but for which condition (A) holds for no finite function a(v).

It is clear that if ψ is a ψ-l.c.f. with ψ(x) = o(x), then ψ satisfies condition (A).

Introduce the class \(\mathcal{K}\) consisting of non-decreasing functions ψ(x) ≥ 1, x ≥ 0, that satisfy condition (A) with a function a(v) such that

$$\int \limits _{0}^{\infty }a(u)\,du = \infty.$$
(14)

We define the class \(\mathcal{K}_{1}\) as the class of continuous r.v.f.’s \(\psi (x) = {x}^{\alpha }L(x)\) with index α < 1 such that \(x/\psi (x) \uparrow \infty \) as x → ∞ and the following “asymptotic smoothness” condition is met: for any fixed v,

$$\psi (x + \Delta ) = \psi (x) + \frac{\alpha \Delta \psi (x)} {x} \,(1 + o(1))\quad \text{ as }x \rightarrow \infty ,\quad \Delta = v\psi (x).$$
(15)

Clearly, \(\mathcal{K}_{1}\,\subset \,\mathcal{K}\). Condition (15) is always met for any \(\Delta \,\geq \,c_{1}\,=\,\mathrm{const}\), \(\Delta \,=\,o(x)\), provided that the function L(x) is differentiable and \(L{^\prime}(x)\,=\,o\left (L(x)/x\right )\) as x → ∞.

In the assertions to follow, it will be assumed that ψ belongs to the class \(\mathcal{K}\) or \(\mathcal{K}_{1}\). We will not dwell on how far the conditions \(\psi \in \mathcal{K}\) or \(\psi \in \mathcal{K}_{1}\) can be extended. The function ψ specifies the “asymptotic local constancy zone width” of the function g under consideration, and what matters for us is just the growth rate of ψ(x) as x → . All its other properties (smoothness, presence of oscillations etc.) are for us to choose, and so we can assume the function ψ to be as smooth as we need. In this sense, the assumption that ψ belongs to the class \(\mathcal{K}\) or \(\mathcal{K}_{1}\) is not restrictive. For example, it is quite natural to assume in Theorems B and C from Sect. 2 that \(\psi \in \mathcal{K}_{1}\).

The following assertion establishes the uniformity of convergence in (8).

Theorem 1.

If g is a ψ-l.c.f. with \(\psi \in \mathcal{K}\) , then convergence in  (8) is uniform: for any fixed real numbers \(v_{1} < v_{2}\) ,

$$(\mathbf{U}_{\psi })\quad \lim _{x\rightarrow \infty }\sup _{v_{1}\leq v\leq v_{2}}\left \vert \frac{g\left (x + v\psi (x)\right )} {g(x)} - 1\right \vert = 0.$$
(16)

Observe that, for monotone g, the condition \(\psi \in \mathcal{K}\) in Theorem 1 is superfluous. Indeed, assume for definiteness that g is a non-decreasing ψ-l.c.f. Then, for any v and v(x) → v, there is a \(v_{0} > v\) such that, for all sufficiently large x, one has \(v(x) < v_{0}\), and therefore

$$\limsup _{x\rightarrow \infty }\,\frac{g(x + v(x)\psi (x))} {g(x)} \leq \limsup _{x\rightarrow \infty }\,\frac{g(x + v_{0}\psi (x))} {g(x)} = 1.$$
(17)

A converse inequality for lim inf is established in a similar way. As a consequence,

$$\lim _{x\rightarrow \infty }\frac{g(x + v(x)\psi (x))} {g(x)} = 1,$$
(18)

which is easily seen to be equivalent to (16) (cf. (7)).

Note also that it is not hard to see that the monotonicity property required to derive (17) and (18) could be somewhat relaxed.

Now set

$$\gamma (x) := \int \limits _{1}^{x} \frac{dt} {\psi (t)}.$$
(19)

Theorem 2.

Let \(\psi \in \mathcal{K}\) . Then g is a ψ-l.c.f. iff it admits a representation of the form

$$(\mathbf{I}_{\psi })\quad g(x) = c(x)\exp \left \{\int \limits _{1}^{{e}^{\gamma (x)} } \frac{\epsilon (t)} {t} \,dt\right \},\qquad x \geq 1,$$
(20)

where c(t) and \(\epsilon (t)\) have the same properties as in  (I) .

Since, for any \(\epsilon > 0\) and all large enough x,

$$\int \limits _{1}^{{e}^{\gamma (x)} } \frac{\epsilon (t)} {t} \,dt < \epsilon \ln {e}^{\gamma (x)} = \epsilon \gamma (x)$$

and a similar lower bound holds true, Theorem 2 implies the following result.

Corollary 1.

If \(\psi \in \mathcal{K}\) and g is a ψ-l.c.f., then

$$g(x) = {e}^{o(\gamma (x))},\qquad x \rightarrow \infty.$$

For \(\psi \in \mathcal{K}_{1}\) we put

$$\theta (x) := \frac{x} {\psi (x)}.$$

Clearly, θ(x) ∼ (1 − α)γ(x) as x → ∞.
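
This relation is easy to confirm numerically for, say, ψ(t) = t^{1/2} (so α = 1/2, γ(x) = 2(√x − 1) and θ(x) = √x). A sketch using a simple trapezoidal evaluation of (19):

```python
alpha = 0.5
psi = lambda t: t ** alpha

def gamma(x, n=200000):
    """Trapezoidal approximation of gamma(x) = int_1^x dt / psi(t), cf. (19)."""
    h = (x - 1) / n
    s = 0.5 * (1 / psi(1) + 1 / psi(x))
    s += sum(1 / psi(1 + k * h) for k in range(1, n))
    return s * h

theta = lambda x: x / psi(x)

x = 1e6
print(theta(x) / gamma(x))   # tends to 1 - alpha = 0.5 as x grows
```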

Theorem 3.

Let \(\psi \in \mathcal{K}_{1}\) . Then the assertion of Theorem  2 holds true with γ(x) replaced by θ(x).

Corollary 2.

If \(\psi \in \mathcal{K}_{1}\) and g is a ψ-l.c.f., then

$$g(x) = {e}^{o(\theta (x))},\qquad x \rightarrow \infty.$$

Since the function θ(x) has a “more explicit” representation in terms of ψ than the function γ(x), the assertions of Theorem 3 and Corollary 2 display the asymptotic properties of ψ-l.c.f.’s in a more graphic way than those of Theorem 2 and Corollary 1. A deficiency of Theorem 3 is that the condition \(\psi \in \mathcal{K}_{1}\) is more restrictive than the condition \(\psi \in \mathcal{K}\). It is particularly essential that the former condition excludes the value α = 1 for the index α of the r.v.f. ψ.

4 Proofs

Proof of Theorem  1. Our proof will use an argument modifying H. Delange’s proof of property (U) (see e.g. p. 6 in [1] or §1.1 in [2]).

Let \(l(x) :=\ln g(x).\) It is clear that (8) is equivalent to the convergence

$$l(x + v\psi (x)) - l(x) \rightarrow 0,\qquad x \rightarrow \infty ,$$
(21)

for any fixed \(v \in \mathbb{R}\). To prove the theorem, it suffices to show that

$$H_{v_{1},v_{2}}(x) :=\sup _{v_{1}\leq v\leq v_{2}}\left \vert l(x + v\psi (x)) - l(x)\right \vert \rightarrow 0,\qquad x \rightarrow \infty.$$

It is not hard to see that the above relation will follow from the convergence

$$H_{0,1}(x) \rightarrow 0,\qquad x \rightarrow \infty.$$
(22)

Indeed, let \(v_{1} < 0\) (for \(v_{1} \geq 0\) the argument will be even simpler) and

$$x_{0} := x + v_{1}\psi (x),\qquad x_{k} := x_{0} + k\psi (x_{0}),\qquad k = 1,2,\ldots $$

By virtue of condition (A), one has \(\psi (x_{0}) \geq a(-v_{1})\psi (x)\) with \(a(-v_{1}) > 0\). Therefore, letting \(n := \lfloor (v_{2} - v_{1})/a(-v_{1})\rfloor + 1,\) where ⌊x⌋ denotes the integer part of x, we obtain

$$H_{v_{1},v_{2}}(x) \leq \sum \limits _{k=0}^{n}H_{ 0,1}(x_{k}),$$

which establishes the required implication.

Assume without loss of generality that ψ(0) = 1. To prove (22), fix an arbitrary small \(\epsilon \in \left (0,a(1)/(1 + a(1))\right )\) and set

$$I_{x} := [x,x + 2\psi (x)],\qquad I_{x}^{{_\ast}} :=\{ y \in I_{ x} :\, \vert l(y) - l(x)\vert \geq \epsilon /2\},$$
$$I_{0,x}^{{_\ast}} :=\{ u \in I_{ 0} :\, \vert l(x + u\psi (x)) - l(x)\vert \geq \epsilon /2\}.$$

One can easily see that all these sets are measurable and

$$I_{x}^{{_\ast}} = x + \psi (x)I_{ 0,x}^{{_\ast}},$$

so that for the Lebesgue measure μ( ⋅) on \(\mathbb{R}\) we have

$$\mu (I_{x}^{{_\ast}}) = \psi (x)\mu (I_{ 0,x}^{{_\ast}}).$$
(23)

It follows from (21) that, for any \(u \in I_{0}\), the value of the indicator \(\mathbf{1}_{I_{0,x}^{{_\ast}}}(u)\) tends to zero as x → ∞. Therefore, by the dominated convergence theorem,

$$\int \limits _{I_{0}}\mathbf{1}_{I_{0,x}^{{_\ast}}}(u)\,du \rightarrow 0,\qquad x \rightarrow \infty.$$

From here and (23) we see that there exists an \(x_{(\epsilon )}\) such that

$$\mu (I_{x}^{{_\ast}}) \leq \frac{\epsilon } {2}\,\psi (x),\qquad x \geq x_{(\epsilon )}.$$

Now observe that, for any s ∈ [0, 1], the set \(I_{x} \cap I_{x+s\psi (x)} = [x + s\psi (x),x + 2\psi (x)]\) has the length \((2 - s)\psi (x) \geq \psi (x).\) Hence for \(x \geq x_{(\epsilon )}\) the set

$$J_{x,s} := \left (I_{x} \cap I_{x+s\psi (x)}\right ) \setminus \left (I_{x}^{{_\ast}}\cup I_{ x+s\psi (x)}^{{_\ast}}\right )$$

will have the length

$$\begin{array}{rl} \mu (J_{x,s})& \geq \psi (x) -\frac{\epsilon } {2}\,\left [\psi (x) + \psi (x + s\psi (x))\right ] \\ & \geq \psi (x) -\frac{\epsilon } {2}\,\left (1 + \frac{1} {a(1)}\right )\psi (x) \geq \frac{1} {2}\psi (x) \geq \frac{1} {2}, \end{array}$$

where we used relation (13) to establish the second inequality. Therefore \(J_{x,s}\neq \varnothing \) and one can choose a point y ∈ J x, s . Then \(y\not\in I_{x}^{{_\ast}}\) and \(y\not\in I_{x+s\psi (x)}^{{_\ast}}\), so that

$$\vert l(x + s\psi (x)) - l(x)\vert \leq \vert l(x + s\psi (x)) - l(y)\vert + \vert l(y) - l(x)\vert < \epsilon.$$

Since this relation holds for any s ∈ [0, 1], the required convergence (22) and hence the assertion of Theorem 1 are proved. □ 

Proof of Theorem  2. First let g be a ψ-l.c.f. with \(\psi \in \mathcal{K}\). Since ψ(t) = o(t), one has γ(x) ↑∞ as x ↑∞ (see (19)). Moreover, the function γ(x) is continuous and so always has an inverse \({\gamma }^{(-1)}(t) \uparrow \infty \) as t → ∞, so that we can consider the composition function

$$g_{\gamma }(t) := (g \circ {\gamma }^{(-1)})(t).$$

If we show that g γ is an l.c.f. then representation (20) will immediately follow from the relation g(x) = g γ(γ(x)) and property (I 1 ).

By virtue of the uniformity property \((\mathbf{U}_{\boldsymbol \psi })\) which holds for g by Theorem 1, for any bounded function r(x) one has

$$g_{\gamma }\left (\gamma (x)\right ) \equiv g(x) \sim g\left (x + r(x)\psi (x)\right ) = g_{\gamma }\left (\gamma (x + r(x)\psi (x))\right ).$$
(24)

Next we will show that, for a given v (let v > 0 for definiteness), there is a value r(x, v), bounded as x → ∞, such that

$$\gamma (x + r(x,v)\psi (x)) = \gamma (x) + v.$$
(25)

Indeed, we have

$$\gamma (x + r\psi (x)) - \gamma (x) = \int \limits _{x}^{x+r\psi (x)} \frac{dt} {\psi (t)} = \int \limits _{0}^{r} \frac{\psi (x)\,dz} {\psi (x + z\psi (x))} =: I(r,x),$$

where, by Fatou’s lemma and relation (13),

$$\liminf _{x\rightarrow \infty }I(r,x) \geq \int \limits _{0}^{r} \liminf _{ x\rightarrow \infty } \frac{\psi (x)} {\psi (x + z\psi (x))}\,dz \geq I(r) := \int \limits _{0}^{r}a(z)dz \uparrow \infty $$

as \(r \uparrow \infty \) (see (14)). Since, moreover, for any x the function I(r, x) is continuous in r, there exists \(r(x,v) \leq r_{v} < \infty \) such that \(I\left (r(x,v),x\right ) = v\), where \(r_{v}\) is the solution of the equation I(r) = v.

Now choosing r(x) in (24) to be the function r(x, v) from (25) we obtain that

$$g_{\gamma }\left (\gamma (x)\right ) \sim g_{\gamma }\left (\gamma (x) + v\right )$$

as x → , which means that g γ is an l.c.f. and hence (20) holds true.

Conversely, let representation (20) be true. Then, for a fixed v ≥ 0, any \(\epsilon > 0\) and x → , one has

$$\begin{array}{rcl} \left \vert \ln \,\frac{g\left (x + v\psi (x)\right )} {g(x)} \right \vert & \leq & \int \limits _{{e}^{\gamma (x)}}^{{e}^{\gamma (x+v\psi (x))} } \frac{\left \vert \epsilon (t)\right \vert } {t} \,dt + o(1) \leq \left (\gamma (x + v\psi (x)) - \gamma (x)\right )\epsilon + o(1) \\ & \leq & \epsilon \int \limits _{0}^{v} \frac{\psi (x)ds} {\psi (x + s\psi (x))} + o(1) \leq \epsilon v + o(1). \end{array}$$
(26)

This clearly means that the left-hand side of this relation is o(1) as x → .

If v = −u < 0 then, bounding in a similar fashion the integral

$$\int \limits _{{e}^{\gamma (x-u\psi (x))}}^{{e}^{\gamma (x)} } \frac{\left \vert \epsilon (t)\right \vert dt} {t} \leq \epsilon \int \limits _{0}^{u} \frac{\psi (x)ds} {\psi (x - s\psi (x))},$$

we will obtain from condition (A) that

$$\limsup _{x\rightarrow \infty }\left \vert \ln \frac{g(x + v\psi (x))} {g(x)} \right \vert \leq \epsilon \int \limits _{0}^{u} \limsup _{ x\rightarrow \infty } \frac{\psi (x)ds} {\psi (x - s\psi (x))} \leq \epsilon \int \limits _{0}^{u} \frac{ds} {a(s)},$$

so that the left-hand side of (26) is still o(1) as x → . Therefore g(x + vψ(x)) ∼ g(x) and hence g is a ψ-l.c.f. Theorem 2 is proved. □ 

It is evident that the assertion of Theorem 2 can also be stated as follows: for \(\psi \in \mathcal{K}\), a function g is a ψ-l.c.f. iff \(g_{\gamma }(x)\) is an l.c.f. (which, in turn, holds iff \(g_{\gamma }(\ln x)\) is an s.v.f.).

Proof of Theorem 3. One can employ an argument similar to the one used to prove Theorem 2.

Since the function θ(x) is continuous and increasing, it has an inverse \({\theta }^{(-1)}(t)\). It is not hard to see that if ψ has property (15), then the function θ(x) = x ∕ ψ(x) also possesses a similar property: for a fixed v and Δ = vψ(x), as x → ∞, one has

$$\theta (x + \Delta ) = \theta (x) + \frac{(1 - \alpha )\Delta \theta (x)} {x} \,(1 + o(1)).$$
(27)

Therefore, as x → ∞,

$$\theta (x + v\psi (x)) = \theta (x) + (1 - \alpha )v(1 + o(1)).$$

As the function θ is monotone and continuous, this relation means that, for any v, there is a function v(x) → v as x → ∞ such that

$$\theta (x + v(x)\psi (x)) = \theta (x) + (1 - \alpha )v.$$
(28)

Let g be a ψ-l.c.f. Then, for the function \(g_{\theta } := g \circ {\theta }^{(-1)}\) we obtain by virtue of (28) that

$$g_{\theta }(\theta (x)) \equiv g(x) \sim g(x+v(x)\psi (x)) = g_{\theta }\left (\theta (x + v(x)\psi (x))\right ) = g_{\theta }(\theta (x)+(1-\alpha )v).$$

Since θ(x) → ∞ as x → ∞, the relation above means that \(g_{\theta }\) is an l.c.f. The direct assertion of the integral representation theorem follows from here and (6).

The converse assertion is proved in the same way as in Theorem 2. Theorem 3 is proved. □ 

Similarly to our earlier argument, it follows from Theorem 3 that if \(\psi \in \mathcal{K}_{1}\) then g is a ψ-l.c.f. iff \(g_{\theta }\) is an l.c.f. (and \(g_{\theta }(\ln x)\) is an s.v.f.).