1 Introduction

The directed polymer model was first introduced by Huse and Henley [23] in the study of the Ising model with random impurities. A later motivation for studying the model, and for generalizing it to arbitrary dimensions, was the observation that when a polymer chain stretches in a medium with impurities or charges, its behavior is influenced by the interaction between the chain and the environment. The polymer chain is modelled by a directed random walk, and a family of random variables in time-space represents the random environment. The first mathematical study of the directed polymer model was due to Imbrie and Spencer [24], which was then followed by many other authors, e.g. [3, 8, 11, 14, 16, 17, 26, 32]. For an early review, see [15], and for a comprehensive introduction to directed polymers in random environments and other related polymer models, see [18].

So far, most of the results in the study of directed polymers are based on the assumption that the polymer chain performs a simple symmetric random walk; in that case, we call the model the nearest-neighbor directed polymer. It is natural to replace the simple random walk by more general random walks to reflect a variety of physical phenomena. In [13], Comets considered long-range random walks, whose increment distribution is in the domain of attraction of some \(\alpha \)-stable law. One reason to consider the long-range model is that it captures superdiffusive motion, unlike the nearest-neighbor model, which only captures diffusive motion. Another motivation for studying the long-range directed polymer is that in recent years, long-range random walks have played an increasingly important role in related fields, such as mathematical finance and statistics, so the directed polymer model may well find applications in those subjects.

In [13], the author extended some early results for the nearest-neighbor directed polymer to the long-range case. Since then, much progress has been made for the nearest-neighbor model. The goal of this paper is to investigate whether these newer results can also be extended to the long-range model, and to identify the important differences between the two cases. We will see that the heavy-tailed increments do create genuine differences between the long-range model and the nearest-neighbor model, which result in some technical difficulties.

Remark 1.1

In [28], Miura, Tawara, and Tsuchida also studied a long-range model. They considered a continuous setting in which the polymer chains are modelled by symmetric Lévy processes and the random environment is given by a time-space Poisson point process. The continuous model is worth investigating, and we mention this reference for completeness, but in this paper we focus on the discrete model.

1.1 Long-Range Directed Polymer Model

Let \(S=(S_{n})_{n\ge 0}\) be a heavy-tailed random walk on \(\mathbbm {Z}\) with i.i.d. increments, starting at 0. The law of S is denoted by \(\mathbf {P}\) and the corresponding expectation is denoted by \(\mathbf {E}\). We assume that the increment distribution of S is in the domain of attraction of some stable law, which is equivalent to

$$\begin{aligned} \mathbf {P}(|S_{1}|\ge n)=n^{-\alpha }L(n),~\forall n\ge 1,~\text{ for } \text{ some }~\alpha \in (0,2), \end{aligned}$$
(1.1)

or

$$\begin{aligned} \mathbf {E}\left[ (S_{1})^{2}\mathbbm {1}_{\{|S_{1}|\le n\}}\right] =L(n),~\forall n\ge 1,~\text{ for }~\alpha =2, \end{aligned}$$
(1.2)

where \(L(\cdot )\) is some positive function slowly varying at infinity (see [21, Theorem 3.2] and [7, Chapter 1]). Under the condition (1.1) or (1.2), the random walk S converges to some \(\alpha \)-stable law after centering and scaling, that is, we can find a sequence of centering factors \(\{b_{n}\}_{n\in \mathbbm {N}}\) and a sequence of scaling factors \(\{a_{n}\}_{n\in \mathbbm {N}}\), such that

$$\begin{aligned} \frac{S_{n}-b_{n}}{a_{n}}\Rightarrow X_{\alpha }~\text{ weakly } \text{ as }~n\rightarrow \infty , \end{aligned}$$
(1.3)

where \(X_{\alpha }\) is some stable law with stable exponent \(\alpha \in (0,2]\). The scaling factor \(a_{n}\) is determined by the stable exponent \(\alpha \) and the slowly varying function L(x), and can be expressed as \(n^{\frac{1}{\alpha }}l(n)\) for some slowly varying function l(n). When \(\alpha \in (0,1)\), the centering factors \(b_{n}\) can be chosen as 0. When \(\alpha \in (1,2]\), \(\mathbf {E}[S_{1}]\) exists and \(b_{n}\) can be chosen as \(n\mathbf {E}[S_{1}]\). For simplicity, we just assume \(\mathbf {E}[S_{1}]=0\); all proofs can be adapted in a straightforward manner to the non-zero mean case. When \(\alpha =1\), \(b_{n}\) can be computed, but we set \(b_{n}=0\) for technical reasons. For details, see [29, Chapter 7]. Therefore, throughout this paper, we assume \(b_{n}=0\).
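For concreteness, here is a minimal simulation sketch (not part of the paper's argument) of a walk satisfying (1.1): symmetric Pareto-type increments have tail \(t^{-\alpha }\), so \(L\equiv 1\), and the rescaled walk \(S_{n}/n^{\frac{1}{\alpha }}\) stays of order one. We use a real-valued walk for simplicity, whereas the model's walk lives on \(\mathbbm {Z}\).

```python
# Sketch: a symmetric Pareto-type increment with P(|S_1| >= t) = t^{-alpha}
# for t >= 1, i.e. (1.1) with L == 1.  For alpha in (1, 2) the mean exists
# and equals 0 by symmetry, so b_n = 0 as in the text.
import numpy as np

rng = np.random.default_rng(0)
alpha, n = 1.5, 100_000

u = rng.random(n)                                 # uniform on (0, 1)
steps = rng.choice([-1, 1], size=n) * u ** (-1.0 / alpha)
S = np.cumsum(steps)

# S_n / a_n with a_n = n^{1/alpha} (here l == 1) should be of order one
print("S_n / n^(1/alpha) =", S[-1] / n ** (1.0 / alpha))
```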

We assume from now on that the random environment is described by a family of i.i.d. random variables \(\omega =(\omega _{i,x})_{(i,x)\in \mathbbm {N}\times \mathbbm {Z}}\), which is independent of the random walk S. The law of \(\omega \) is denoted by \(\mathbbm {P}\) and the corresponding expectation is denoted by \(\mathbbm {E}\). We also assume that the random environment has a finite logarithmic moment generating function, at least for small enough \(|\beta |\),

$$\begin{aligned} \lambda (\beta ):=\log \mathbbm {E}[\exp (\beta \omega _{i,x})]<\infty ,~~\forall \beta \in [-c,c],~\text{ for } \text{ some }~c>0. \end{aligned}$$
(1.4)

Without loss of generality, we can further assume that \(\mathbbm {E}[\omega _{i,x}]=0\) and \(\mathbbm {E}[(\omega _{i,x})^{2}]=1\).

Given the random environment \(\omega \), for any \(N\ge 0\) and \(\beta >0\), we can define the polymer measure through a Gibbs transformation of the law \(\mathbf {P}\) of the random walk up to time N by

$$\begin{aligned} \frac{{\mathrm{d}}\mathbf {P}_{N,\beta }^{\omega }}{{\mathrm{d}}\mathbf {P}}(S):=\frac{1}{Z_{N,\beta }^{\omega }}\exp \left( \sum \limits _{n=1}^{N}\beta \omega _{n,S_{n}}\right) , \end{aligned}$$
(1.5)

where

$$\begin{aligned} Z_{N,\beta }^{\omega }=\mathbf {E}\left[ \exp \left( \sum \limits _{n=1}^{N}\beta \omega _{n,S_{n}}\right) \right] \end{aligned}$$
(1.6)

is the partition function which makes \(\mathbf {P}_{N,\beta }^{\omega }\) a probability measure and \(\beta \) is the inverse temperature. We also denote the Hamiltonian of the system by

$$\begin{aligned} H_{N}^{\omega }(S):=-\sum \limits _{n=1}^{N}\omega _{n,S_{n}}, \end{aligned}$$
(1.7)

which represents the energy of the random walk path. It can be seen from (1.5) that under the polymer measure \(\mathbf {P}_{N,\beta }^{\omega }\), random walk paths with low energy carry more weight.
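To make the definitions concrete, here is a minimal sketch that computes \(Z_{N,\beta }^{\omega }\) of (1.6) by dynamic programming. The truncations (a spatial window \(|x|\le X\) and a step range \(|k|\le K\)) and the specific step law \(q(k)\propto (1+|k|)^{-(\alpha +1)}\) are our own illustrative assumptions; the true long-range walk has unbounded jumps.

```python
# Sketch: exact dynamic programming for Z_{N,beta}^omega of (1.6) on a
# truncated time-space lattice (an approximation: mass leaking out of the
# window |x| <= X is dropped).
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, N, K, X = 1.5, 0.7, 50, 20, 400

ks = np.arange(-K, K + 1)                      # truncated step range
q = 1.0 / (1.0 + np.abs(ks)) ** (alpha + 1)    # heavy-tailed step weights
q /= q.sum()

omega = rng.normal(size=(N + 1, 2 * X + 1))    # i.i.d. environment; row 0 unused

w = np.zeros(2 * X + 1)
w[X] = 1.0                                     # the walk starts at x = 0
for n in range(1, N + 1):
    w = np.convolve(w, q, mode="same")         # one free step of the walk
    w *= np.exp(beta * omega[n])               # Gibbs weight exp(beta*omega_{n,x})
Z = w.sum()                                    # partition function Z_{N,beta}^omega
print("Z =", Z, "  end-point law peak =", (w / Z).max())
```

The end-point distribution \(\mathbf {P}_{N,\beta }^{\omega }(S_{N}=x)=w(x)/Z\) comes for free from the same recursion.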

Remark 1.2

Unlike many other papers concerning the nearest-neighbor model, in this paper, we only consider the model on \(\mathbbm {Z}^{1+1}\) instead of \(\mathbbm {Z}^{d+1}\). The reason is that when we later consider the significant classification of the strong disorder regime and the weak disorder regime, whether the random walk is recurrent or transient plays a key role, see [13, 14]. It is known that for heavy-tailed random walks on \(\mathbbm {Z}\) satisfying (1.1) or (1.2), the random walk is recurrent for \(\alpha \in (1,2]\) and transient for \(\alpha \in (0,1)\), and for the critical case \(\alpha =1\), whether the random walk is recurrent or transient depends on the slowly varying function L(x). In dimension 2, the random walk is transient for \(\alpha \in (0,2)\). And in higher dimension, the random walk is transient for all \(\alpha \in (0,2]\). As we can see, the phase transition mostly occurs in dimension 1. Therefore, most of the interesting behaviors are contained in one dimensional model as we vary \(\alpha \in (0,2]\). We also mention that our Proposition 1.13 can adapts the case \(d=2\), \(\alpha =2\), which might be of interest.

Denote the \(\sigma \)-field generated by the random environment up to time N by \(\mathcal {G}_{N}=\sigma ((\omega _{n,x})_{0\le n\le N, x\in \mathbbm {Z}})\). It is easy to see that the normalized partition function

$$\begin{aligned} \hat{Z}_{N,\beta }^{\omega }:=\frac{Z_{N,\beta }^{\omega }}{\exp (N\lambda (\beta ))} \end{aligned}$$
(1.8)

is a \(\mathbbm {P}\)-martingale with respect to the filtration \((\mathcal {G}_{N})_{N\ge 0}\). Since \(\hat{Z}_{N,\beta }^{\omega }\) is nonnegative, it converges to some random variable \(\hat{Z}_{\infty ,\beta }^{\omega }\) almost surely by the martingale convergence theorem. It can be seen that the event \(\{\hat{Z}_{\infty ,\beta }^{\omega }=0\}\) is in the tail \(\sigma \)-field \(\bigcap \limits _{N=0}^{\infty }\sigma ((\omega _{n,x})_{n\ge N, x\in \mathbbm {Z}})\). By Kolmogorov’s 0-1 law, either \(\mathbbm {P}(\hat{Z}_{\infty ,\beta }^{\omega }=0)=0\) or \(\mathbbm {P}(\hat{Z}_{\infty ,\beta }^{\omega }=0)=1\). We call the first case the weak disorder regime and the second one the strong disorder regime. This simple but significant observation was first made by Bolthausen for a binary random environment (see [8]).

It is believed that in the weak disorder regime, under the polymer measure \(\mathbf {P}_{N,\beta }^{\omega }\), the random walk's behavior is comparable to that under \(\mathbf {P}\), i.e., the random walk fluctuates on the scale \(N^{\frac{1}{\alpha }}\) (up to some extra slowly varying function) as \(N\rightarrow \infty \). In the strong disorder regime, by contrast, under the polymer measure \(\mathbf {P}_{N,\beta }^{\omega }\), there will be some narrow corridors at a distance \({\gg }N^{\frac{1}{\alpha }}\) from the origin, in which the random walk falls with high probability. In particular, the random walk's end-point distribution contains macroscopic atoms. We call this expected phenomenon in the weak disorder regime delocalization, and the one in the strong disorder regime localization.

One important result that connects strong disorder with localization is [14, Theorem 2.1] for the nearest-neighbor model, which was then extended to the long-range model in [13]. We cite that result here:

Theorem 1.3

(Comets [13]) Denote

$$\begin{aligned} I_{N}:=(\mathbf {P}_{N-1,\beta }^{\omega })^{\bigotimes 2}\big (S_{N}^{1}=S_{N}^{2}\big ), \end{aligned}$$
(1.9)

where \(S^{1}\), \(S^{2}\) are two independent copies of the random walk S satisfying (1.1) or (1.2) and \((\mathbf {P}_{N-1,\beta }^{\omega })^{\bigotimes 2}\) can be viewed as the distribution of the couple \((S^{1},S^{2})\) with the same environment \(\omega \). Let \(\beta >0\). Then,

$$\begin{aligned} \mathbbm {P}\big (\hat{Z}_{\infty ,\beta }^{\omega }=0\big )=\mathbbm {P}\left( \sum \limits _{N=1}^{\infty }I_{N}=\infty \right) . \end{aligned}$$
(1.10)

Moreover, if \(\mathbbm {P}(\hat{Z}_{\infty ,\beta }^{\omega }=0)=1\), then \(\mathbbm {P}\)-a.s., there exist \(c_{1}\), \(c_{2}\in (0,\infty )\), such that

$$\begin{aligned} -c_{1}\log \hat{Z}_{N,\beta }^{\omega }\le \sum \limits _{n=1}^{N}I_{n}\le -c_{2}\log \hat{Z}_{N,\beta }^{\omega }~~\text{ for }~N~\text{ large } \text{ enough. } \end{aligned}$$
(1.11)

The quantity \(I_{N}\) can be considered as the end-point overlap of two i.i.d. copies of the polymer at time N. For technical reasons, we consider the probability of \(\{S_{N}^{1}=S_{N}^{2}\}\) under the product measure \((\mathbf {P}_{N-1,\beta }^{\omega })^{\bigotimes 2}\), under which the increments \(S_{N}^{i}-S_{N-1}^{i}\) are distributed as those of the original random walk. This theorem heuristically indicates that in strong disorder, the trajectories should intersect infinitely often.
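Under the same illustrative truncations as in the previous sketch, the overlap \(I_{N}\) of (1.9) can be computed from the dynamic programming: run the recursion up to time \(N-1\), let the walk make one free step, and sum the squared end-point masses.

```python
# Sketch: the end-point overlap I_N of (1.9) (truncated lattice as before).
import numpy as np

rng = np.random.default_rng(2)
alpha, beta, N, K, X = 1.5, 0.7, 50, 20, 400

ks = np.arange(-K, K + 1)
q = 1.0 / (1.0 + np.abs(ks)) ** (alpha + 1)
q /= q.sum()
omega = rng.normal(size=(N, 2 * X + 1))

w = np.zeros(2 * X + 1)
w[X] = 1.0
for n in range(1, N):                       # polymer weights up to time N - 1
    w = np.convolve(w, q, mode="same")
    w *= np.exp(beta * omega[n])
mu = w / w.sum()                            # law of S_{N-1} under P_{N-1,beta}
mu_N = np.convolve(mu, q, mode="same")      # one free step: law of S_N
print("I_N =", np.sum(mu_N ** 2))           # (P_{N-1,beta})^{x2}(S_N^1 = S_N^2)
```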

The free energy of the system is defined by

$$\begin{aligned} F(\beta ):=\lim \limits _{N\rightarrow \infty }\frac{1}{N}\log Z_{N,\beta }^{\omega }. \end{aligned}$$
(1.12)

This limit is known to exist and to be deterministic \(\mathbbm {P}\)-a.s. One can refer to [13, 14] to see that

$$\begin{aligned} F(\beta )=\lim \limits _{N\rightarrow \infty }\frac{1}{N}\mathbbm {E}[\log Z_{N,\beta }^{\omega }]. \end{aligned}$$
(1.13)

We set

$$\begin{aligned} p(\beta ):=\lim \limits _{N\rightarrow \infty }\frac{1}{N}\log \hat{Z}_{N,\beta }^{\omega }=F(\beta )-\lambda (\beta )\le 0, \end{aligned}$$
(1.14)

where the inequality is due to Jensen's inequality. It can be seen that if \(p(\beta )<0\), then \(\hat{Z}_{N,\beta }^{\omega }\) decays exponentially, which implies that strong disorder holds; the converse is not known to hold in general. Thus, the case \(p(\beta )<0\) is called the very strong disorder regime.
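A crude Monte Carlo sketch of \(p(\beta )\): average \(\frac{1}{N}\log \hat{Z}_{N,\beta }^{\omega }\) over independent environments. We take a Gaussian environment, so \(\lambda (\beta )=\beta ^{2}/2\); the truncations are the same illustrative assumptions as before, and a finite N only gives a rough proxy for the limit.

```python
# Sketch: Monte Carlo proxy for p(beta) = lim (1/N) E[log Zhat_{N,beta}],
# Gaussian environment (lambda(beta) = beta^2 / 2), truncated lattice.
import numpy as np

rng = np.random.default_rng(3)
alpha, beta, N, K, X, n_env = 1.5, 0.7, 50, 20, 400, 20

ks = np.arange(-K, K + 1)
q = 1.0 / (1.0 + np.abs(ks)) ** (alpha + 1)
q /= q.sum()

log_Zhat = []
for _ in range(n_env):
    omega = rng.normal(size=(N + 1, 2 * X + 1))
    w = np.zeros(2 * X + 1)
    w[X] = 1.0
    for n in range(1, N + 1):
        w = np.convolve(w, q, mode="same")
        w *= np.exp(beta * omega[n] - beta ** 2 / 2)   # normalized weight
    log_Zhat.append(np.log(w.sum()))

print("p(beta) proxy:", np.mean(log_Zhat) / N)
```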

Since we have the dichotomy between weak disorder and strong disorder, we can draw the phase diagram of the system, which was first established for the nearest-neighbor directed polymer in [17] and then extended to the long-range case in [13]. We summarize their results as follows.

Theorem 1.4

(Comets-Yoshida [17], Comets [13]) For any \(\alpha \in (0,2]\), there exist \(0\le \beta _{c}^{1}:=\beta _{c}^{1}(\alpha )\le \beta _{c}^{2}:=\beta _{c}^{2}(\alpha )\le \infty \), such that

$$\begin{aligned} \mathbbm {P}(\hat{Z}_{\infty ,\beta }^{\omega }=0)= {\left\{ \begin{array}{ll} 0,&{}\quad \text{ if }\;\beta \in \{0\}\cup (0,\beta _{c}^{1}),\\ 1,&{}\quad \text{ if }\;\beta >\beta _{c}^{1}. \end{array}\right. } \end{aligned}$$
(1.15)

Moreover,

$$\begin{aligned} p(\beta ){\left\{ \begin{array}{ll} =0,&{}\quad \text{ if }\;\beta \in [0,\beta _{c}^{2}]\\ <0,&{}\quad \text{ if }\;\beta >\beta _{c}^{2}. \end{array}\right. } \end{aligned}$$
(1.16)

Remark 1.5

The reason that \(p(\beta _{c}^{2})=0\) is that \(p(\beta )\) is continuous in \(\beta \), since \(\frac{1}{N}\log \hat{Z}_{N,\beta }^{\omega }\) is convex in \(\beta \). It is conjectured that there is no intermediate phase between weak disorder and very strong disorder (except at the critical point \(\beta =\beta _{c}^{2}\)), i.e., \(\beta _{c}^{1}=\beta _{c}^{2}\). But so far this conjecture has only been proved for the nearest-neighbor directed polymer on \(\mathbbm {Z}^{1+1}\) in [16] and on \(\mathbbm {Z}^{2+1}\) in [26]. A more refined result for the nearest-neighbor polymer on \(\mathbbm {Z}^{2+1}\) was recently obtained in [3], which gives the exact asymptotic behavior of \(p(\beta )\) at high temperature. Another open question is to determine whether weak or strong disorder holds at the critical point \(\beta =\beta _{c}^{1}\).

To close this subsection, we cite two quantitative results which give sufficient conditions for weak disorder and for very strong disorder, respectively.

Theorem 1.6

(Comets [13]) Suppose that the heavy-tailed random walk S is transient, and denote

$$\begin{aligned} \pi _{p}:=\mathbf {P}^{\bigotimes 2}(\exists n\ge 1, \text{ s.t. }~S_{n}-\tilde{S}_{n}=0)<1, \end{aligned}$$
(1.17)

where \(\tilde{S}\) is an independent copy of S. Then for all \(\beta \) such that

$$\begin{aligned} \lambda (2\beta )-2\lambda (\beta )<-\log {\pi _{p}}, \end{aligned}$$
(1.18)

weak disorder holds.

Theorem 1.7

(Comets [13]) For any \(\alpha \in (0,2]\), if

$$\begin{aligned} \beta \lambda '(\beta )-\lambda (\beta )>-\sum \limits _{x\in \mathbbm {Z}}q(x)\log {q(x)}, \end{aligned}$$
(1.19)

then \(p(\beta )<0\), where \(q(x):=\mathbf {P}(S_{1}=x)\).

Remark 1.8

By Theorem 1.6 and Remark 1.2, there is always a weak disorder regime for \(\alpha \in (0,1)\): since \(\lambda (2\beta )-2\lambda (\beta )\rightarrow 0\) as \(\beta \rightarrow 0\) while \(-\log \pi _{p}>0\), condition (1.18) holds for small enough \(\beta \). In Theorem 1.7, \(\sum \limits _{x\in \mathbbm {Z}}q(x)\log {q(x)}\) is always finite, and for a random environment satisfying \(\mathrm{ess\,sup}|\omega _{1,0}|=\infty \), we have \(\beta \lambda '(\beta )-\lambda (\beta )\rightarrow \infty \) as \(\beta \rightarrow \infty \) (see [13]). Hence, strong disorder holds at low temperature.
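As a worked numeric example of Theorem 1.7, take a Gaussian environment, so that \(\lambda (\beta )=\beta ^{2}/2\) and \(\beta \lambda '(\beta )-\lambda (\beta )=\beta ^{2}/2\); condition (1.19) then reads \(\beta >\sqrt{2H(q)}\) with \(H(q)=-\sum _{x}q(x)\log q(x)\). The sketch below evaluates this threshold for an ad hoc truncated step law \(q(k)\propto (1+|k|)^{-(\alpha +1)}\), our own illustrative choice rather than one singled out in the paper.

```python
# Sketch: the very-strong-disorder threshold of (1.19) for a Gaussian
# environment, where beta*lambda'(beta) - lambda(beta) = beta^2 / 2.
import numpy as np

alpha, K = 1.5, 10_000                 # K truncates the (finite) entropy sum
ks = np.arange(-K, K + 1)
q = 1.0 / (1.0 + np.abs(ks)) ** (alpha + 1)
q /= q.sum()

H = -np.sum(q * np.log(q))             # entropy H(q) = -sum q(x) log q(x)
print("H(q) =", H, "-> p(beta) < 0 for beta >", np.sqrt(2 * H))
```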

1.2 Main Results

We summarize the results of this paper in this subsection. Unless otherwise specified, the random walk S and the random environment \(\omega \) that we consider here are introduced in Sect. 1.1, from (1.1) to (1.4). In Theorem 1.18, we need some extra but mild conditions on S and \(\omega \), which we will mention there.

We first study the path behavior of the long-range directed polymer chain in the weak disorder regime. As in [17, Theorem 1.2], we will establish a stable-law version of the invariance principle under the polymer measure \(\mathbf {P}_{N,\beta }^{\omega }\). For the heavy-tailed random walks introduced in (1.1), (1.2) and (1.3), define the following càdlàg process

$$\begin{aligned} X_{t}^{N}=\frac{S_{n}}{a_{N}},~~\text{ for }~t\in \left[ \frac{n}{N},\frac{n+1}{N}\right) ,~n=0,1,\ldots ,N. \end{aligned}$$
(1.20)

Then \((X_{t}^{N})_{t\in [0,1]}\) converges in distribution to an \(\alpha \)-stable Lévy process \((X_{t})_{t\in [0,1]}\in D[0,1]\) (see [30, Proposition 3.4]), which we call an analogue of the invariance principle for \(\alpha \)-stable processes. Here D[0, 1] is the space of all functions on [0, 1] that are right-continuous with left limits, equipped with the Skorohod topology induced by the metric

$$\begin{aligned} d(x,y)=\inf \limits _{\lambda \in \Lambda }\left\{ \sup \limits _{0\le t\le 1}|\lambda (t)-t|\vee \sup \limits _{0\le t\le 1}|x(t)-y(\lambda (t))|\right\} , \end{aligned}$$
(1.21)

where \(\Lambda \) is the set of all the strictly increasing functions \(\lambda (t)\) on [0, 1] with \(\lambda (0)=0\) and \(\lambda (1)=1\) (see [6, Chapter 3]).
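The rescaled path (1.20) is straightforward to realize numerically; here is a small sketch, with \(L\equiv l\equiv 1\) so that \(a_{N}=N^{\frac{1}{\alpha }}\) (an illustrative simplification).

```python
# Sketch: evaluate the cadlag process X_t^N of (1.20) for a simulated walk.
import numpy as np

rng = np.random.default_rng(4)
alpha, N = 1.5, 1_000

u = rng.random(N)
steps = rng.choice([-1, 1], size=N) * u ** (-1.0 / alpha)
S = np.concatenate(([0.0], np.cumsum(steps)))          # S_0, ..., S_N

def X_path(t: float) -> float:
    """X_t^N = S_n / a_N for t in [n/N, (n+1)/N), with a_N = N^{1/alpha}."""
    n = min(int(np.floor(N * t)), N)
    return S[n] / N ** (1.0 / alpha)

print("X_{1/2}^N =", X_path(0.5), " X_1^N =", X_path(1.0))
```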

Following the notations above, our first result is

Theorem 1.9

For the long-range directed polymer model defined in Sect. 1.1, assume that \(\alpha \in (0,1]\) and weak disorder holds. Then for all bounded continuous functions F on the path space D[0, 1], we have

$$\begin{aligned} \mathbf {E}_{N,\beta }^{\omega }[F((X_{t}^{N})_{t\in [0,1]})]\overset{\mathbbm {P}}{\rightarrow }\mathbf {E}^{X}[F((X_{t})_{t\in [0,1]})]~~\text{ as }~N\rightarrow \infty , \end{aligned}$$
(1.22)

where \(\mathbf {E}^{X}\) denotes the expectation for the \(\alpha \)-stable Lévy process X.

This theorem says that in the weak disorder regime and under the polymer measure \(\mathbf {P}_{N,\beta }^{\omega }\), the polymer chain converges to the same \(\alpha \)-stable Lévy process as S does under the measure \(\mathbf {P}\). It is expected that the convergence in probability in (1.22) can be improved to almost sure convergence, but we are unable to prove this at the moment.

Remark 1.10

In [13], Comets proved a scaling limit result for the long-range directed polymer under the stronger condition (1.18), which implies weak disorder. Here, by applying the procedure developed in [17], we can weaken condition (1.18) and improve the scaling limit result to the analogue of the invariance principle for \(\alpha \)-stable processes in the entire weak disorder regime.

Our second result concerns the phase diagram. We can characterize the phase diagram in Theorem 1.4 in more detail. More precisely, we prove

Theorem 1.11

Following the same notations and assumptions as in Theorem 1.4, we have

  1. (i)

    \(\beta _{c}^{1}=0\) if and only if S is recurrent.

  2. (ii)

    \(\beta _{c}^{1}=\beta _{c}^{2}=0\) for \(\alpha \in (1,2]\).

Remark 1.12

For the nearest-neighbor directed polymer model, \((\mathrm{i})\) has been proved in [14, Theorem 2.3]. Our result is the analogue for the long-range directed polymer.

It can be seen from Theorem 1.6 that transience of S implies the existence of a weak disorder regime. Therefore, to complete the statement of Theorem 1.11 (i), what we need to prove is the following result.

Proposition 1.13

If the heavy-tailed random walk S is recurrent, then strong disorder holds for all \(\beta >0\), i.e., \(\beta _{c}^{1}=0\).

Remark 1.14

As we have mentioned in Remark 1.2, recurrence holds for \(\alpha \in (1,2]\) and transience holds for \(\alpha \in (0,1)\). In the critical case \(\alpha =1\), \(\beta _{c}^{1}\) can be either 0 or positive, depending on the slowly varying function \(L(\cdot )\): for instance, by Proposition 3.1 (ii), \(L(n)=\log n\) gives a recurrent walk and hence \(\beta _{c}^{1}=0\), while \(L(n)=(\log n)^{2}\) gives a transient walk and hence \(\beta _{c}^{1}>0\).

Theorem 1.11 (ii) will be proved by showing that \(p(\beta )<0\) for any \(\beta >0\) if \(\alpha \in (1,2]\). In fact, we can give an upper bound on the free energy that we believe to be sharp up to a multiplicative constant.

Theorem 1.15

If \(\alpha \in (1,2]\), then there exist a slowly varying function \(\varphi \), which can be expressed in terms of \(\alpha \) and \(L(\cdot )\), an inverse temperature \(\beta _{0}>0\), and a constant \(C>0\) (all depending on \(\alpha \) and \(L(\cdot )\)), such that for \(0<\beta \le \beta _{0}\),

$$\begin{aligned} p(\beta )<-C\beta ^{\frac{2\alpha }{\alpha -1}}\varphi \left( \frac{1}{\beta }\right) . \end{aligned}$$
(1.23)

Remark 1.16

It is conjectured that the asymptotic behavior of the free energy of the long-range directed polymer is \(p(\beta )\sim -\mathbf {F}\beta ^{\frac{2\alpha }{\alpha -1}}\varphi (\frac{1}{\beta })\), where \(\mathbf {F}\) is the free energy of a continuum model and \(\varphi \) is some function slowly varying at infinity, although the existence of \(\mathbf {F}\) is still an open question. For more information, see [9, Conjectures 3.5, 3.11], where the authors consider the scaling limits of disordered systems, including the long-range directed polymer. Although in that paper the slowly varying function is ignored in the long-range directed polymer models, it can easily be included, as is done in the conjecture on the critical curve of the pinning models. It is also natural to conjecture that for \(\alpha =1\), \(\beta _{c}^{1}=0\) implies \(\beta _{c}^{2}=0\), which we are currently trying to prove; however, some technical difficulties remain in the critical case. We will discuss this briefly in Remark 3.7 after the proof of Theorem 1.15.

Our next result concerns the phenomenon of localization in the very strong disorder regime. A strong localization result was given by Vargas in [32, Theorem 3.6], who considered \(\epsilon \)-atoms of the polymer measure \(\mathbf {P}_{N,\beta }^{\omega }\) for the nearest-neighbor directed polymer. By some modifications in the proof of his key lemma [32, Lemma 5.3], we can extend his result to the long-range model.

Theorem 1.17

Denote

$$\begin{aligned} \mathcal {A}_{N,\beta }^{\epsilon ,\omega }=\big \{x\in \mathbbm {Z}:\mathbf {P}_{N-1,\beta }^{\omega }(S_{N}=x)>\epsilon \big \}. \end{aligned}$$
(1.24)

If \(p(\beta )<0\), i.e., very strong disorder holds, then \(\mathbbm {P}\)-a.s. there exists an \(\epsilon >0\) such that

$$\begin{aligned} \varliminf \limits _{N\rightarrow \infty }\frac{1}{N}\sum \limits _{n=1}^{N}\mathbf {1}_{\mathcal {A}_{n,\beta }^{\epsilon ,\omega }\ne \emptyset }>0. \end{aligned}$$
(1.25)

This theorem says that in the very strong disorder regime, as the random walk moves in the random environment, there will be atoms carrying mass bigger than \(\epsilon \) in the polymer’s end-point distribution under \(\mathbf {P}_{N,\beta }^{\omega }\) for arbitrarily large N.

Our last result concerns the fluctuations of the polymer in the very strong disorder regime. It has been shown in [5, 27] that under the polymer measure, a Brownian polymer \(B_{t}\) in a continuous Gaussian field fluctuates on a scale no smaller than \(t^{\frac{3}{5}}\) as \(t\rightarrow \infty \), provided the Gaussian field has weak correlations. Since \(t^{\frac{3}{5}}\) exceeds the underlying scale \(t^{\frac{1}{2}}\), this reflects a superdiffusive phenomenon. By adapting the methods of [5, 27], we can establish a similar result for the long-range model. For technical reasons, we will consider a family of heavy-tailed random walks with more regular tails in a Gaussian random environment. We show that for any stable exponent \(\alpha \in (1,2]\), the random walk fluctuates on a scale \(\gg N^{\frac{1}{\alpha }}\) under \(\mathbf {P}_{N,\beta }^{\omega }\) as \(N\rightarrow \infty \). In this case, we say that the random walk exhibits super-\(\alpha \)-stable motion.

Theorem 1.18

Let \((X_{n})_{n\in \mathbbm {N}}\) be a sequence of i.i.d. integer-valued random variables with symmetric distribution

$$\begin{aligned} \mathbf {P}(X_{1}=k)= {\left\{ \begin{array}{ll} \frac{L(|k|)}{|k|^{\alpha +1}},&{}\quad \forall k\in \mathbbm {Z}\setminus \{0\},\\ p_{0}>0,&{}\quad \text{ for }~k=0, \end{array}\right. } \end{aligned}$$
(1.26)

where \(L(\cdot ):(0,\infty )\rightarrow (0,\infty )\) is some slowly varying function and \(\alpha \) is some constant strictly larger than 1 (not necessarily less than 2). We denote the heavy-tailed random walk by

$$\begin{aligned} S_{N}=\sum \limits _{n=1}^{N}X_{n}. \end{aligned}$$
(1.27)

The random environment \(\omega :=(\omega _{i,x})_{(i,x)\in \mathbbm {N}\times \mathbbm {Z}}\) is a family of i.i.d. standard Gaussian random variables and we define the related polymer measure as in (1.5) and (1.6). Then given \(\alpha \) in (1.26) and \(\beta >0\), for any arbitrarily small \(\epsilon >0\), we have

$$\begin{aligned} \lim \limits _{N\rightarrow \infty }\mathbbm {E}\left[ \mathbf {P}_{N,\beta }^{\omega }\left( \max \limits _{1\le n\le N}|S_{n}|\ge \frac{\beta ^{2}N}{4(\alpha +1+\epsilon )^{2}(\log N)^{2}}\right) \right] =1. \end{aligned}$$
(1.28)

Remark 1.19

The condition (1.26) is a bit stronger than (1.1) or (1.2): by [7, Proposition 1.5.8], (1.26) implies that for \(\alpha \in (1,2)\), \(X_{1}\) is in the domain of attraction of the stable law with stable exponent \(\alpha \), and for \(\alpha \ge 2\), \(X_{1}\) is in the domain of attraction of the Gaussian law.
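The law (1.26) is easy to realize numerically when L is a small constant c, with the leftover mass placed at 0; the sketch below (with the support truncated at \(|k|\le K\), an approximation that only perturbs the far tail) samples the walk (1.27).

```python
# Sketch of a sampler for the increment law (1.26) with L == c constant
# (an illustrative assumption): P(X = k) = c |k|^{-(alpha+1)} for k != 0,
# and p_0 = leftover mass at k = 0; support truncated at |k| <= K.
import numpy as np

rng = np.random.default_rng(5)
alpha, c, K = 2.5, 0.2, 100_000          # alpha > 1, not necessarily < 2

ks = np.arange(-K, K + 1)
p = np.where(ks == 0, 0.0, c / np.maximum(np.abs(ks), 1) ** (alpha + 1))
p[K] = 1.0 - p.sum()                     # p_0 = P(X = 0) > 0 for small c
assert p[K] > 0

X = rng.choice(ks, size=1_000, p=p)      # i.i.d. increments X_1, ..., X_n
S = np.cumsum(X)                         # the walk S_N of (1.27)
print("p_0 =", p[K], " S_N =", S[-1])
```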

In summary, in this paper we draw a more detailed phase diagram for the long-range directed polymer model, and we extend the invariance principle in the weak disorder regime and a localization result in the very strong disorder regime from the nearest-neighbor model to the long-range model. We also provide an upper bound on the free energy of the model and a lower bound on the fluctuation scale for \(\alpha \in (1,2]\). We hope that our results lay the foundation for further investigations of the long-range directed polymer model.

1.3 Organization and Strategy of the Proof

In Sect. 2, we will prove Theorem 1.9. The procedure is the same as in the proof of [17, Theorem 5.1] for the nearest-neighbor model; the difference is that we need estimates for heavy-tailed random walks instead of the simple random walk. We will apply some technical lemmas from [17] without proof; those lemmas extend to the long-range directed polymer model after careful checking.

In Sect. 3, we will prove Proposition 1.13 and Theorem 1.15. For Proposition 1.13, we will adapt the method used in the proof of [14, Proposition 2.4(b)]. We will also give equivalent criteria for the recurrence of long-range random walks. For Theorem 1.15, we will use the now-standard fractional moment/coarse graining/change of measure method, developed in the pinning model literature and used in [26].

In Sect. 4, we will prove Theorem 1.17, which is based on the techniques developed by Vargas in [32].

In Sect. 5, we will prove Theorem 1.18. The methods that we will use were developed in [5, 27]. Instead of computing the covariance of the random environment as that in [5], we will apply the change of measure method as that in [27], which is also used in the proof of Theorem 1.15.

Each section is independent and can be read separately.

Remark 1.20

One main difficulty in extending results from the nearest-neighbor model to the long-range one is that, up to time N, the simple random walk can reach at most \((2d)^{N}\) sites of \(\mathbbm {Z}^{d}\), whereas the heavy-tailed random walk can reach infinitely many sites in a single step. As we will see, neither the method in [16] nor the greedy lattice animal argument in [32, Sect. 3.1] can be directly applied to the long-range model.

2 Proof of Theorem 1.9

In this section, we will always assume that weak disorder holds, i.e., \(\mathbbm {P}(\hat{Z}_{\infty ,\beta }^{\omega }>0)=1\). According to the definition of the polymer measure \(\mathbf {P}_{N,\beta }^{\omega }\), we perform a change of measure for \(\mathbf {P}\) at time N with respect to the first N steps of the random walk S. First we introduce the notation

$$\begin{aligned} \hat{Z}_{N,\beta }^{\omega }(i,x):=\mathbf {E}^{x}\left[ \exp \left( \sum \limits _{n=1}^{N}(\beta \omega _{n+i,S_{n}}-\lambda (\beta ))\right) \right] , \end{aligned}$$
(2.1)

where \(\mathbf {E}^{x}[\cdot ]\) denotes the expectation with respect to \(\mathbf {P}^{x}:=\mathbf {P}(\cdot |S_{0}=x)\), the probability measure for the random walk starting at x. Then it is not hard to observe that given \(\beta \) and \(\omega \), \(\mathbf {P}_{N,\beta }^{\omega }\) is an inhomogeneous Markov chain and the transition probabilities are given by

$$\begin{aligned} \begin{array}{ll} &{}\mathbf {P}_{N,\beta }^{\omega }(S_{i+1}=y|S_{i}=x)=\\ &{}\quad {\left\{ \begin{array}{ll} \frac{\exp (\beta \omega _{i+1,y}-\lambda (\beta ))\hat{Z}_{N-i-1,\beta }^{\omega }(i+1,y)}{\hat{Z}_{N-i,\beta }^{\omega }(i,x)} \mathbf {P}(S_{1}=y|S_{0}=x),&{}\quad \text{ for }~0\le i\le N-1,\\ \mathbf {P}(S_{1}=y|S_{0}=x),&{}\quad \text{ for }~i\ge N. \end{array}\right. } \end{array} \end{aligned}$$
(2.2)

Moreover, we can rewrite

$$\begin{aligned} \hat{Z}_{N,\beta }^{\omega }(0,x)=\mathbf {E}^{x}\big [\exp (\beta \omega _{1,S_{1}}-\lambda (\beta ))\hat{Z}_{N-1,\beta }^{\omega }(1,S_{1})\big ]. \end{aligned}$$
(2.3)

It can be seen that

$$\begin{aligned} \hat{Z}_{\infty ,\beta }^{\omega }(0,x):=\lim \limits _{N\rightarrow \infty }\hat{Z}_{N,\beta }^{\omega }(0,x)\ge \mathbf {E}^{x}\big [\exp (\beta \omega _{1,S_{1}}-\lambda (\beta ))\hat{Z}_{\infty ,\beta }^{\omega }(1,S_{1})\big ],~~\mathbbm {P}\text{-a.s. },\qquad \end{aligned}$$
(2.4)

where the first limit exists by the martingale convergence theorem, and the inequality is due to Fatou's lemma. Notice that \((\hat{Z}_{\infty ,\beta }^{\omega }(i,x))_{i\ge 0, x\in \mathbbm {Z}}\) are identically distributed since \(\omega \) is i.i.d., and \(\omega _{1,S_{1}}\) is independent of \(\hat{Z}_{\infty ,\beta }^{\omega }(1,0)\). Hence, taking expectations on both sides of (2.4) and switching the order of \(\mathbbm {E}\) and \(\mathbf {E}^{x}\), we have

$$\begin{aligned} \mathbbm {E}[\hat{Z}_{\infty ,\beta }^{\omega }(0,x)]\ge \mathbbm {E}\big [\mathbf {E}^{x}[\exp (\beta \omega _{1,S_{1}}-\lambda (\beta ))\hat{Z}_{\infty ,\beta }^{\omega }(1,S_{1}) ]\big ]=\mathbbm {E}[\hat{Z}_{\infty ,\beta }^{\omega }(1,x)]. \end{aligned}$$
(2.5)

Since \(\hat{Z}_{\infty ,\beta }^{\omega }(0,x)\) and \(\hat{Z}_{\infty ,\beta }^{\omega }(1,x)\) have the same distribution, both sides of (2.5) are in fact equal; an almost sure inequality between variables with equal expectations must be an almost sure equality, so (2.4) yields

$$\begin{aligned} \hat{Z}_{\infty ,\beta }^{\omega }(0,x)=\mathbf {E}^{x}[\exp (\beta \omega _{1,S_{1}}-\lambda (\beta ))\hat{Z}_{\infty ,\beta }^{\omega }(1,S_{1})], ~~\mathbbm {P}\text{-a.s. } \end{aligned}$$
(2.6)

Next, for all \(A\in \mathcal {F}_{\infty }=\sigma \left( \bigcup \limits _{N=1}^{\infty }\mathcal {F}_{N}\right) \), where \(\mathcal {F}_{N}\) is the \(\sigma \)-field generated by the first N steps of the random walk S, the limit

$$\begin{aligned} \mathbf {P}_{\infty ,\beta }^{\omega }(A):=\lim \limits _{N\rightarrow \infty }\mathbf {P}_{N,\beta }^{\omega }(A)=\lim \limits _{N\rightarrow \infty } \frac{\mathbf {E}\left[ \mathbbm {1}_{A}\exp \left( \sum \limits _{n=1}^{N}\beta \omega _{n,S_{n}}-N\lambda (\beta )\right) \right] }{\hat{Z}_{N,\beta }^{\omega }} \end{aligned}$$
(2.7)

exists \(\mathbbm {P}\)-a.s., by applying the martingale convergence theorem to both the numerator and the denominator, together with the positivity of \(\hat{Z}_{\infty ,\beta }^{\omega }\).

Motivated by the argument above, we can define a random, inhomogeneous Markov chain with transition probabilities

$$\begin{aligned} \mathbf {P}_{\beta ,\text{ mc }}^{\omega }(S_{i+1}=y|S_{i}=x)=\frac{\exp (\beta \omega _{i+1,y}-\lambda (\beta ))\hat{Z}_{\infty ,\beta }^{\omega }(i+1,y)}{\hat{Z}_{\infty ,\beta }^{\omega }(i,x)}\mathbf {P}(S_{1}=y|S_{0}=x).\nonumber \\ \end{aligned}$$
(2.8)

Note that (2.8) is obtained by taking limits in both the numerator and the denominator of (2.2), and is well-defined by (2.6). The reason we define \(\mathbf {P}_{\beta ,\mathrm{mc}}^{\omega }\) is that \(\mathbf {P}_{\infty ,\beta }^{\omega }\) is not known to be countably additive on \(\mathcal {F}_{\infty }\), while \(\mathbf {P}_{\beta ,\mathrm{mc}}^{\omega }\) is indeed a probability measure on \(\mathcal {F}_{\infty }\) and coincides with \(\mathbf {P}_{\infty ,\beta }^{\omega }\) on \(\bigcup \limits _{n=1}^{\infty }\mathcal {F}_{n}\). The probability measure \(\mathbf {P}_{\beta ,\mathrm{mc}}^{\omega }\) will play an important role in the proof of Theorem 1.9.
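The backward recursion behind (2.1)-(2.2) is easy to implement under the illustrative truncations used in Sect. 1.1; the sketch below computes the fields \(\hat{Z}_{N-i,\beta }^{\omega }(i,x)\) for a Gaussian environment and reads off the transition row of (2.2) out of the origin.

```python
# Sketch: backward dynamic programming for Zhat_{N-i}(i, x) of (2.1) and
# the transition probabilities (2.2) (Gaussian environment, truncated
# lattice |x| <= X, symmetric truncated step law as in Sect. 1.1).
import numpy as np

rng = np.random.default_rng(6)
alpha, beta, N, K, X = 1.5, 0.7, 30, 20, 200
lam = beta ** 2 / 2                        # lambda(beta) for Gaussian omega

ks = np.arange(-K, K + 1)
q = 1.0 / (1.0 + np.abs(ks)) ** (alpha + 1)
q /= q.sum()                               # symmetric step law q(k)
omega = rng.normal(size=(N + 1, 2 * X + 1))

R = np.ones((N + 1, 2 * X + 1))            # R[i, x] = Zhat_{N-i}(i, x); R[N] = 1
for i in range(N - 1, -1, -1):
    g = np.exp(beta * omega[i + 1] - lam) * R[i + 1]
    R[i] = np.convolve(g, q, mode="same")  # E^x[exp(...) R[i+1, S_1]], q symmetric

# transition row of (2.2) out of (i, x) = (0, 0); lattice index of 0 is X
g = np.exp(beta * omega[1] - lam) * R[1]
row = np.zeros(2 * X + 1)
row[X - K: X + K + 1] = q * g[X - K: X + K + 1] / R[0, X]
print("transition row out of the origin sums to", row.sum())   # ~ 1
```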

We now cite several results from [17], which we will use without proof.

2.1 Useful Preliminary Result

Proposition 2.1

([17, Proposition 4.1]) Assume weak disorder.

$$\begin{aligned} \mathbf {P}_{\beta ,\mathrm{mc}}^{\omega }(A)=\mathbf {P}_{\infty ,\beta }^{\omega }(A),~~\mathbbm {P}\text{-a.s. } \text{ for } \text{ all }~A\in \bigcup \limits _{N=1}^{\infty }\mathcal {F}_{N}. \end{aligned}$$
(2.9)

Moreover,

$$\begin{aligned}&\mathbbm {E}\mathbf {P}_{\beta ,\mathrm{mc}}^{\omega }(A)=\mathbbm {E}\mathbf {P}_{\infty ,\beta }^{\omega }(A)~~\forall A\in \mathcal {F}_{\infty },\end{aligned}$$
(2.10)
$$\begin{aligned}&\mathbf {P}\ll \mathbbm {E}\mathbf {P}_{\beta ,\mathrm{mc}}^{\omega }\ll \mathbf {P}~~\text{ on }~\mathcal {F}_{\infty }. \end{aligned}$$
(2.11)

It is not hard to deduce Proposition 2.1 from [17, Lemma 4.2]. We state a weaker version of [17, Lemma 4.2] here, which will be helpful later in the proof of Proposition 2.5.

Lemma 2.2

([17, Lemma 4.2]) Suppose \(\{A_{N}\}_{N\ge 1}\subset \mathcal {F}_{\infty }\) such that

$$\begin{aligned} \lim \limits _{N\rightarrow \infty }\mathbf {P}(A_{N})=0. \end{aligned}$$
(2.12)

Then

$$\begin{aligned} \lim \limits _{N\rightarrow \infty }\mathbbm {E}\big [\mathbf {P}_{N,\beta }^{\omega }(A_{N})\big ]=\lim \limits _{N\rightarrow \infty }\mathbbm {E} \big [\mathbf {P}_{\infty ,\beta }^{\omega }(A_{N})\big ]=0. \end{aligned}$$
(2.13)

The next proposition we cite concerns the total variation distance between the polymer measure \(\mathbf {P}_{N+k,\beta }^{\omega }\) and the Markov chain \(\mathbf {P}_{\beta ,\mathrm{mc}}^{\omega }\). We introduce the total variation norm

$$\begin{aligned} ||\mu -\nu ||_{\mathcal {F}_{N}}:=2\sup \{\mu (A)-\nu (A):A\in \mathcal {F}_{N}\}. \end{aligned}$$
(2.14)

Proposition 2.3

([17, Proposition 4.3]) In the weak disorder regime,

$$\begin{aligned} \lim \limits _{k\rightarrow \infty }\sup \limits _{N}\mathbbm {E}\left[ ||\mathbf {P}_{N+k,\beta }^{\omega }-\mathbf {P}_{\beta ,\mathrm{mc}}^{\omega }||_{\mathcal {F}_{N}}\right] =0. \end{aligned}$$
(2.15)

The last result we cite here is the following lemma, which is a key ingredient to deduce our main result Theorem 2.7.

Lemma 2.4

([17, Lemma 5.3]) For all \(B\in \mathcal {F}_{\infty }^{\bigotimes 2}\), the following limit exists \(\mathbbm {P}\)-a.s. in the weak disorder regime:

$$\begin{aligned} \big (\mathbf {P}_{\infty ,\beta }^{\omega }\big )^{(2)}(B):=\lim \limits _{N\rightarrow \infty }\big (\mathbf {P}_{N,\beta }^{\omega }\big )^{\bigotimes 2}(B), \end{aligned}$$
(2.16)

where \((\mathbf {P}_{N,\beta }^{\omega })^{\bigotimes 2}\) is defined as in Theorem 1.3.

Moreover,

$$\begin{aligned} \big (\mathbf {P}_{\infty ,\beta }^{\omega }\big )^{(2)}(B)&=\big (\mathbf {P}_{\beta ,\mathrm{mc}}^{\omega }\big )^{\bigotimes 2}(B),~~\forall B\in \bigcup \limits _{N=1}^{\infty }\mathcal {F}_{N}^{\bigotimes 2}\end{aligned}$$
(2.17)
$$\begin{aligned} \mathbbm {E}\big [\big (\mathbf {P}_{\infty ,\beta }^{\omega }\big )^{(2)}(B)\big ]&=\mathbbm {E}\big [\big (\mathbf {P}_{\beta ,\mathrm{mc}}^{\omega }\big )^{\bigotimes 2}(B)\big ],~~\forall B\in \mathcal {F}_{\infty }^{\bigotimes 2}\end{aligned}$$
(2.18)
$$\begin{aligned} \mathbbm {E}\big (\mathbf {P}_{\beta ,\mathrm{mc}}^{\omega }\big )^{\bigotimes 2}&\ll \mathbf {P}^{\bigotimes 2},~~\text{ on }~\mathcal {F}_{\infty }^{\bigotimes 2}. \end{aligned}$$
(2.19)

Note that by [17, Remark 5.3], we cannot identify \((\mathbf {P}_{\infty ,\beta }^{\omega })^{(2)}\) with \((\mathbf {P}_{\infty ,\beta }^{\omega })^{\bigotimes 2}\), because we do not know whether \(\mathbf {P}_{\infty ,\beta }^{\omega }\) is a countably additive product measure. Although Lemma 2.4 looks similar to Proposition 2.1, its proof is much more technical, involving Doob's decomposition of a submartingale, since \(\mathbf {E}^{\bigotimes 2}\left[ \exp \left( \sum \limits _{n=1}^{N}\beta (\omega _{n,S_{n}^{1}}+\omega _{n,S_{n}^{2}})-2N\lambda (\beta )\right) \right] \) is no longer a \(\mathbbm {P}\)-martingale with respect to the filtration \((\mathcal {G}_{N})_{N\ge 0}\).

2.2 End of the Proof of Theorem 1.9

Now we can prove Theorem 1.9. First, under the probability measure \(\mathbf {P}_{\beta ,\mathrm{mc}}^{\omega }\), we establish an analogue of the averaged invariance principle for the càdlàg process \((X_{t}^{N})_{t\in [0,1]}\) via a second moment computation and Proposition 2.1. Since the Markov chain and the limit of the polymer measure \(\mathbf {P}_{N,\beta }^{\omega }\) coincide on the \(\sigma \)-field generated by the random walk S up to any finite time, we can apply Proposition 2.3 to transfer the analogue of the averaged invariance principle from \(\mathbf {P}_{\beta ,\mathrm{mc}}^{\omega }\) to the polymer measure \(\mathbf {P}_{N,\beta }^{\omega }\). By the same procedure, we then establish the analogue of the averaged invariance principle for the i.i.d. couple \(((X_{t}^{N})_{t\in [0,1]},(\tilde{X}_{t}^{N})_{t\in [0,1]})\) under the product measure \((\mathbf {P}_{\beta ,\mathrm{mc}}^{\omega })^{\bigotimes 2}\) via Lemma 2.4. Finally, since \((X_{t}^{N})_{t\in [0,1]}\) and \((\tilde{X}_{t}^{N})_{t\in [0,1]}\) are i.i.d., \(\mathbf {E}_{\beta ,\mathrm{mc}}^{\omega }[F((X_{t}^{N})_{t\in [0,1]})]\) converges in \(L^{2}\), and thus in probability. The convergence in probability of \(\mathbf {E}_{N,\beta }^{\omega }[F((X_{t}^{N})_{t\in [0,1]})]\) then follows by applying Proposition 2.1 again.

More precisely, our first step is to establish the following proposition.

Proposition 2.5

Assume that \(\alpha \in (0,1]\) and weak disorder holds. Then the path measures

$$\begin{aligned} \mathbbm {E}\mathbf {P}_{N,\beta }^{\omega }\big (\big (X_{t}^{N}\big )_{t\in [0,1]}\in \cdot \big )&\Rightarrow \mathbf {P}^{X}((X_{t})_{t\in [0,1]}\in \cdot )~~\text{ weakly } \text{ as }~N\rightarrow \infty , \end{aligned}$$
(2.20)
$$\begin{aligned} \mathbbm {E}\mathbf {P}_{\beta ,\mathrm{mc}}^{\omega }\big (\big (X_{t}^{N}\big )_{t\in [0,1]}\in \cdot \big )&\Rightarrow \mathbf {P}^{X}((X_{t})_{t\in [0,1]}\in \cdot )~~\text{ weakly } \text{ as }~N \rightarrow \infty . \end{aligned}$$
(2.21)

Remark 2.6

The analogue of Proposition 2.5 was proved for the nearest-neighbor model in [17]. To extend it to the long-range model, we use the observation that under the Skorohod distance, \(X_{t}^{N}\) and \(X_{t}^{N-k}\) are close for fixed k and large enough N.

Applying Proposition 2.5, we will then prove

Theorem 2.7

Assume that \(\alpha \in (0,1]\) and weak disorder holds. Then, for all bounded continuous functions F on the path space D[0, 1],

$$\begin{aligned} \mathbf {E}_{N,\beta }^{\omega }\big [F\big (\big (X_{t}^{N}\big )_{t\in [0,1]}\big )\big ]&\overset{\mathbbm {P}}{\rightarrow }\mathbf {E}^{X}[F((X_{t})_{t\in [0,1]})]~~\text{ as }~N\rightarrow \infty .\end{aligned}$$
(2.22)
$$\begin{aligned} \mathbf {E}_{\beta ,\mathrm{mc}}^{\omega }\big [F\big (\big (X_{t}^{N}\big )_{t\in [0,1]}\big )\big ]&\overset{\mathbbm {P}}{\rightarrow }\mathbf {E}^{X}[F((X_{t})_{t\in [0,1]})]~~\text{ as }~N\rightarrow \infty . \end{aligned}$$
(2.23)

Proof of Proposition 2.5

We first prove (2.21). Since the path space D[0, 1] is separable, by [19, Theorem 11.3.3], it suffices to show that

$$\begin{aligned} \lim \limits _{N\rightarrow \infty }\mathbbm {E}\big [\mathbf {E}_{\beta ,\mathrm{mc}}^{\omega }[F((X_{t}^{N})_{t\in [0,1]})]\big ]=\mathbf {E}^{X}[F((X_{t})_{t\in [0,1]})],~~\forall F\in \mathrm{BL}(D[0,1]), \end{aligned}$$
(2.24)

where BL(D[0, 1]) is the set of all the bounded Lipschitz functionals on D[0, 1]. To simplify the notations, we denote \(F((X_{t}^{N})_{t\in [0,1]})\) by \(f_{N}\) and \(F((X_{t})_{t\in [0,1]})\) by f.

Our first claim is that for any sequence \((N_{k})_{k\ge 1}\) satisfying \(\frac{N_{k+1}}{N_{k}}\ge \rho >1\) for all \(k\ge 1\), we have

$$\begin{aligned} \frac{1}{n}\sum \limits _{k=1}^{n}f_{N_{k}}\overset{\mathbf {P}}{\rightarrow }\mathbf {E}^{X}[f],~~\text{ as }~n\rightarrow \infty . \end{aligned}$$
(2.25)

To prove (2.25), we start by observing that

$$\begin{aligned} \begin{array}{ll} \mathbf {P}\left( \left| \frac{1}{n}\sum \limits _{k=1}^{n}f_{N_{k}}-\mathbf {E}^{X}[f]\right|>\epsilon \right) \le &{}\mathbf {P}\left( \left| \frac{1}{n}\sum \limits _{k=1}^{n}\left( f_{N_{k}}-\mathbf {E}[f_{N_{k}}]\right) \right|> \frac{\epsilon }{2}\right) \\ &{}+\,\mathbf {P}\left( \left| \frac{1}{n}\sum \limits _{k=1}^{n}\mathbf {E}[f_{N_{k}}]-\mathbf {E}^{X}[f]\right| >\frac{\epsilon }{2}\right) . \end{array} \end{aligned}$$
(2.26)

The second term on the right-hand side vanishes as n tends to infinity by the analogue of the invariance principle for stable laws. For the first term,

$$\begin{aligned}&\mathbf {P}\left( \left| \frac{1}{n}\sum \limits _{k=1}^{n}\left( f_{N_{k}}-\mathbf {E}[f_{N_{k}}]\right) \right| >\frac{\epsilon }{2}\right) \le \frac{4}{n^{2}\epsilon ^{2}}\mathbf {E}\left| \sum \limits _{k=1}^{n}\left( f_{N_{k}}-\mathbf {E}[f_{N_{k}}] \right) \right| ^{2}\nonumber \\&\quad \le \frac{4}{n^{2}\epsilon ^{2}}\sum \limits _{k=1}^{n}\mathbf {E}\left( f_{N_{k}}-\mathbf {E}[f_{N_{k}}]\right) ^{2} +\frac{8}{n^{2}\epsilon ^{2}}\sum \limits _{k=1}^{n}\sum \limits _{j=k+1}^{n}\left| \mathbf {E}\left[ \left( f_{N_{k}}-\mathbf {E}[f_{N_{k}}]\right) \left( f_{N_{j}}-\mathbf {E}[f_{N_{j}}]\right) \right] \right| . \end{aligned}$$
(2.27)

The first term on the right-hand side is bounded by \(\mathcal {O}(\frac{1}{n})\), since F is bounded. For the second term on the right-hand side, by the method used in [2, p. 99], each term in the summation is bounded by \(C(p)\left( \frac{a_{N_{k}}}{a_{N_{j}}}\right) ^{p}\) for any \(p<\alpha \), and further bounded by \(C(\delta )\rho ^{-(\frac{1}{\alpha }-\delta )(j-k)}\) for some \(0<\delta <\frac{1}{\alpha }\) (see the Potter bounds in [7]), where C(p) and \(C(\delta )\) are constants depending only on p and \(\delta \), respectively (one can find the full details in [25, Theorem 4.18]). Therefore, the summation in the second term is also bounded by \(\mathcal {O}(\frac{1}{n})\). Combining (2.26) and (2.27), we obtain (2.25). By (2.11), the convergence in (2.25) also holds in \(\mathbbm {E}\mathbf {P}_{\beta ,\mathrm{mc}}^{\omega }\)-probability.

Denote \(E_{N}=\mathbbm {E}[\mathbf {E}_{\beta ,\mathrm{mc}}^{\omega }[f_{N}]]\). For any converging subsequence \(E_{N_{k}}\), we can find a sub-subsequence \(E_{N_{k_{j}}}\), such that \(\inf \limits _{j}(N_{k_{j+1}}/N_{k_{j}})=\rho >1\), and then by (2.25) and bounded convergence theorem, \(\lim \limits _{n\rightarrow \infty }\frac{1}{n}\sum \limits _{j=1}^{n}E_{N_{k_{j}}}=\mathbf {E}^{X}[f]\). Therefore we conclude that (2.24) holds.

Next we prove (2.20). The basic idea is the same as in the proof of (2.21); we only need to prove that for all \(F\in \mathrm{BL}(D[0,1])\),

$$\begin{aligned} \lim \limits _{N\rightarrow \infty }\mathbbm {E}[\mathbf {E}_{N,\beta }^{\omega }[F((X_{t}^{N})_{t\in [0,1]})]]=\mathbf {E}^{X}[F((X_{t})_{t\in [0,1]})]. \end{aligned}$$
(2.28)

For \(0\le k\le N\),

$$\begin{aligned} \begin{array}{ll} \left| \mathbbm {E}\left[ \mathbf {E}_{N,\beta }^{\omega }\left[ f_{N}-\mathbf {E}^{X}[f]\right] \right] \right| \le &{}\mathbbm {E}\left[ \mathbf {E}_{N,\beta }^{\omega }\left| f_{N}-f_{N-k}\right| \right] \\ &{}+\,\mathbbm {E}\left| \mathbf {E}_{N,\beta }^{\omega }[f_{N-k}]-\mathbf {E}_{\beta ,\mathrm{mc}}^{\omega }[f_{N-k}]\right| \\ &{}+\,\left| \mathbbm {E}\left[ \mathbf {E}_{\beta ,\mathrm{mc}}^{\omega }[f_{N-k}]\right] -\mathbf {E}^{X}[f]\right| . \end{array} \end{aligned}$$
(2.29)

For any fixed k, let N tend to infinity; then by (2.24), the last term vanishes. For the first term, denote \(d((X_{t}^{N})_{t\in [0,1]}, (X_{t}^{N-k})_{t\in [0,1]})\) by d(N,k), where \(d(\cdot ,\cdot )\) is the Skorohod metric on D[0, 1] introduced in (1.21). Then for any \(\delta >0\), we have

$$\begin{aligned} \mathbbm {E}\left[ \mathbf {E}_{N,\beta }^{\omega }\left| f_{N}-f_{N-k}\right| \right]\le & {} L\mathbbm {E}[\mathbf {E}_{N,\beta }^{\omega }[d(N,k)\mathbbm {1}_{d(N,k)\le \delta }]]\nonumber \\&\quad +\,2\left( \sup \limits _{x\in D[0,1]}|F(x)|\right) \mathbbm {E} [\mathbf {E}_{N,\beta }^{\omega }[\mathbbm {1}_{d(N,k)>\delta }]], \end{aligned}$$
(2.30)

where L is the Lipschitz norm of F. The first term on the right-hand side of (2.30) can be made sufficiently small by choosing \(\delta \) sufficiently small. The expectation in the second term can be bounded by

$$\begin{aligned} \mathbbm {E}\left[ \mathbf {P}_{N,\beta }^{\omega }\left( \left\{ \sup \limits _{1\le j\le N-k}\left| \frac{S_{j}}{a_{N}}-\frac{S_{j}}{a_{N-k}}\right|>\delta \right\} \bigcup \left\{ \sup \limits _{1\le j\le k}\left| \frac{S_{N-k+j}}{a_{N}}-\frac{S_{N-k}}{a_{N-k}}\right| >\delta \right\} \right) \right] ,\nonumber \\ \end{aligned}$$
(2.31)

since the Skorohod distance allows us to align the jumps of two different càdlàg functions. To be specific, in (1.21) we can choose \(\lambda (t)=\frac{N}{N-k}t\) on \(\left[ 0,\frac{N-k-1}{N}\right] \) and linear on \(\left[ \frac{N-k-1}{N},1\right] \) with \(\lambda (1)=1\), so that the first \(N-k-1\) jumps of \(X_{t}^{N}\) and \(X_{t}^{N-k}\) occur at the same times, which gives the upper bound (2.31). We observe that

$$\begin{aligned} \begin{array}{ll} &{}\mathbf {P}\left( \left\{ \sup \limits _{1\le j\le N-k}\left| \frac{S_{j}}{a_{N}}-\frac{S_{j}}{a_{N-k}}\right|>\delta \right\} \bigcup \left\{ \sup \limits _{1\le j\le k}\left| \frac{S_{N-k+j}}{a_{N}}-\frac{S_{N-k}}{a_{N-k}}\right|>\delta \right\} \right) \\ &{}\quad \le \mathbf {P}\left( \sup \limits _{1\le j\le N-k}\left| \frac{S_{j}}{a_{N-k}}\right|>\frac{\delta }{|1-\frac{a_{N-k}}{a_{N}}|}\right) +\mathbf {P}\left( \left| \frac{S_{N-k}}{a_{N-k}}\right|>\frac{\delta }{2|1-\frac{a_{N-k}}{a_{N}}|}\right) \\ &{}\qquad +\,\mathbf {P}\left( \sup \limits _{1\le j\le k}\left| \frac{S_{N-k+j}-S_{N-k}}{a_{N}}\right| >\frac{\delta }{2}\right) . \end{array} \end{aligned}$$
(2.32)

Note that \(\frac{a_{N-k}}{a_{N}}\rightarrow 1\), as \(N\rightarrow \infty \). By weak convergence of \(a_{N-k}^{-1}S_{N-k}\), the continuous mapping theorem and the fact that \(\sup \limits _{0\le t\le 1}|X_{t}|<\infty \) a.s., the first two terms on the right-hand side of (2.32) tend to 0 as N tends to infinity. The last term also tends to 0, since \(S_{N-k+j}-S_{N-k}\overset{d}{=}S_{j}\). Denote

$$\begin{aligned} \left\{ \sup \limits _{1\le j\le N-k}\left| \frac{S_{j}}{a_{N}}-\frac{S_{j}}{a_{N-k}}\right|>\delta \right\} \bigcup \left\{ \sup \limits _{1\le j\le k}\left| \frac{S_{N-k+j}}{a_{N}}-\frac{S_{N-k}}{a_{N-k}}\right| >\delta \right\} \end{aligned}$$
(2.33)

by \(A_{N,k}\) for \(N>k\). We have \(\lim \limits _{N\rightarrow \infty }\mathbf {P}(A_{N,k})=0\). Then by Lemma 2.2, (2.31) tends to 0 as N tends to infinity. Therefore, the first term on the right-hand side of (2.29) vanishes.

Finally, the second term on the right-hand side of (2.29) is bounded by \(\sup \limits _{N}\mathbbm {E}\left| \mathbf {E}_{N+k,\beta }^{\omega }[f_{N}]-\mathbf {E}_{\beta ,\mathrm{mc}}^{\omega }[f_{N}]\right| \). Note that \(f_{N}\) is measurable w.r.t. \(\mathcal {F}_{N}\). Hence

$$\begin{aligned} \sup \limits _{N}\mathbbm {E}\left| \mathbf {E}_{N+k,\beta }^{\omega }[f_{N}]-\mathbf {E}_{\beta ,\mathrm{mc}}^{\omega }[f_{N}]\right| \le \left( \sup \limits _{x\in D[0,1]}|F(x)|\right) \sup \limits _{N}\mathbbm {E}\left[ ||\mathbf {P}_{N+k,\beta }^{\omega }-\mathbf {P}_{\beta ,\mathrm{mc}}^{\omega }|| _{\mathcal {F}_{N}}\right] .\nonumber \\ \end{aligned}$$
(2.34)

Let k tend to infinity and apply Proposition 2.3. The right-hand side of (2.34) tends to 0. This completes the proof of (2.20). \(\square \)

Proof of Theorem 2.7

By the same procedure as in the proof of (2.21), but using Lemma 2.4 instead of Proposition 2.1, for any \(G\in \mathrm{C}_{b}(D[0,1]\times D[0,1])\), we have

$$\begin{aligned} \lim \limits _{N\rightarrow \infty }\mathbbm {E}\left[ (\mathbf {E}_{\beta ,\mathrm{mc}}^{\omega })^{\bigotimes 2}[G((X_{t}^{N})_{t\in [0,1]},(\tilde{X}_{t}^{N})_{t\in [0,1]})]\right] =(\mathbf {E}^{X})^{\bigotimes 2}[G((X_{t})_{t\in [0,1]},(\tilde{X}_{t})_{t\in [0,1]})].\nonumber \\ \end{aligned}$$
(2.35)

If we choose \(G(x,\tilde{x})=(F(x)-\mathbf {E}^{X}[f])(F(\tilde{x})-\mathbf {E}^{X}[f])\), then it follows that

$$\begin{aligned} \lim \limits _{N\rightarrow \infty }\mathbbm {E}\left[ \left( \mathbf {E}_{\beta ,\mathrm{mc}}^{\omega }[f_{N}-\mathbf {E}^{X}[f]]\right) ^{2}\right] =0, \end{aligned}$$
(2.36)

which proves (2.23). To prove (2.22), it suffices to show that for all \(F\in \mathrm{C}_{b}(D[0,1])\),

$$\begin{aligned} \lim \limits _{N\rightarrow \infty }\mathbbm {E}\left| \mathbf {E}_{N,\beta }^{\omega }[f_{N}-\mathbf {E}^{X}[f]]\right| =0. \end{aligned}$$
(2.37)

The proof of (2.37) is the same as that of (2.28). \(\square \)

3 Proof of Proposition 1.13 and Theorem 1.15

3.1 Proof of Proposition 1.13

We will first give some equivalent conditions for the recurrence of heavy-tailed random walks, which will be used later.

Proposition 3.1

Suppose that \(S=(S_{n})_{n\ge 0}\) is a heavy-tailed random walk satisfying (1.3) with \(b_{n}=0\).

  1. (i)

S is recurrent if and only if \(\sum \limits _{n=1}^{\infty }\frac{1}{a_{n}}=\infty \), where \(a_{n}=n^{\frac{1}{\alpha }}l(n)\) is the scaling factor in (1.3) and l(n) is slowly varying.

  2. (ii)

If \(\mathbf {P}\) is in the domain of attraction of the Cauchy distribution, i.e., \(\alpha =1\), then S is recurrent if and only if \(\sum \limits _{n=1}^{\infty }\frac{1}{nL(n)}=\infty \), where L(n) is the slowly varying function in (1.1).

Proof

  1. (i)

For \(\alpha \in (0,1)\), the random walk S is always transient (see Remark 1.2), and \(\sum \limits _{n=1}^{\infty }\frac{1}{a_{n}}=\sum \limits _{n=1}^{\infty }\frac{1}{n^{\frac{1}{\alpha }}l(n)}<\infty \) since \(\frac{1}{\alpha }>1\). Hence, the result is obvious. For \(\alpha \in [1,2]\), \(S_{1}\) must take both positive and negative values. Let k be the greatest common divisor of \(\{n\in \mathbbm {N}:\mathbf {P}(S_{n}=0)>0\}\), which is finite; returns to 0 can occur only at multiples of k. By Gnedenko's local limit theorem (see [7, Theorem 8.4.1]), we have

    $$\begin{aligned} \mathbf {P}(S_{nk}=0)\sim \frac{g_{\alpha }(0)h}{a_{nk}}~\mathrm{as}~n\rightarrow \infty , \end{aligned}$$
    (3.1)

where h is the largest integer such that \(\{z+h\mathbbm {Z}\}\) contains all the values of \(S_{1}\) for some integer z, and \(g_{\alpha }\) is the density of the limiting stable distribution \(X_{\alpha }\). Note that S is recurrent if and only if \(\sum \limits _{n=0}^{\infty }\mathbf {P}(S_{nk}=0)=\sum \limits _{n=0}^{\infty }\mathbf {P}(S_{n}=0)=\infty \), and \(\sum \limits _{m=0}^{k-1}\frac{1}{a_{(n-1)k+m}}\) has the same order as \(\frac{k}{a_{nk}}\) as \(n\rightarrow \infty \) by the uniform convergence theorem for slowly varying functions ([7, Theorem 1.2.1]); the result then follows from (3.1).

  2. (ii)

Again, by [7, Theorem 1.2.1], for any slowly varying function L(x), there exist two constants \(C_{1}\) and \(C_{2}\) such that \(C_{1}<\frac{L(x)}{L(n)}<C_{2}\) for \(x\in [n, n+1)\). Hence, \(\sum \limits _{n=1}^{\infty }\frac{1}{nL(n)}=\infty \Leftrightarrow \int _{1}^{\infty }\frac{dt}{tL(t)}=\infty \). By [7, Proposition 1.3.4], we can extend \(a_{n}\) to a regularly varying function \(a(t)=tl(t)\) for \(t\in (0,\infty )\) and further assume that a(t) is non-decreasing and differentiable. By [7, Proposition 1.5.8], \(\frac{d}{dt}a(t)\sim \frac{a(t)}{t}\). Note that \(a_{n}\sim nL(a_{n})\), since \(a_{n}\) can be chosen such that \(n\mathbf {P}(|S_{1}|>a_{n})\sim 1\) (see [29, Chapter 7]); we then obtain

    $$\begin{aligned} \int _{1}^{\infty }\frac{dt}{a(t)}=\infty \Leftrightarrow \int _{1}^{\infty }\frac{dt}{tL(a(t))}=\infty \Leftrightarrow \int _{1}^{\infty } \frac{ds}{sL(s)}=\infty , \end{aligned}$$
    (3.2)

    where the last equivalence follows from the change of variables \(s=a(t)\). Now the result holds by part (i). \(\square \)

Remark 3.2

By [12, Theorem 8.3.4], a random walk S whose expectation \(\mathbf {E}[S_{1}]\) exists is recurrent if and only if \(\mathbf {E}[S_{1}]=0\). For \(\alpha \in (1,2]\), \(S_{1}-\mathbf {E}[S_{1}]\) has expectation 0 and \(\sum \limits _{n=1}^{\infty }\frac{1}{a_{n}}=\infty \) always holds; hence, setting \(b_{n}=0\) does not lose much generality.

To prove Proposition 1.13, we apply the fractional moment method as in the proof of [14, Theorem 2.3(b)]. We cite two lemmas [14, Lemmas 3.1, 4.2] here without proof.

Lemma 3.3

Let \((\xi _{i})_{i\ge 1}\) be positive, non-constant i.i.d. random variables such that \(\mathbbm {E}[\xi _{1}]=1\) and \(\mathbbm {E}[\xi _{1}^{3}+\log ^{2}\xi _{1}]<\infty \). For \((\alpha _{i})_{i\ge 1}\in [0,1]^{\mathbbm {N}}\) such that \(\sum \limits _{i=1}^{\infty }\alpha _{i}=1\), define a centered random variable \(U>-1\) by \(U=\sum \limits _{i=1}^{\infty }\alpha _{i}\xi _{i}-1\). Then there exists a constant \(c\in (0,\infty )\), independent of \((\alpha _{i})_{i\ge 1}\), such that

$$\begin{aligned} \frac{1}{c}\sum \limits _{i=1}^{\infty }\alpha _{i}^{2}\le \mathbbm {E}\left[ \frac{U^{2}}{2+U}\right] . \end{aligned}$$
(3.3)

Remark 3.4

In [14], the authors considered sequences \((\alpha _{i})_{1\le i\le n}\) for any finite n. It can be seen that the proof for a countable sequence \((\alpha _{i})_{i\ge 1}\) follows the same lines as that for finite \((\alpha _{i})_{1\le i\le n}\). Note that U is a well defined random variable by monotone convergence theorem.

Lemma 3.5

Recall the overlap \(I_{N}\) from (1.9). For \(\theta \in [0,1]\) and any finite \(\Lambda \subset \mathbbm {Z}\),

$$\begin{aligned} \mathbbm {E}[(\hat{Z}_{N-1,\beta }^{\omega })^{\theta }I_{N}]\ge \frac{1}{|\Lambda |}\mathbbm {E}[(\hat{Z}_{N-1,\beta }^{\omega })^{\theta }]-\frac{2}{|\Lambda |} \mathbf {P}(S_{N}\notin \Lambda )^{\theta }. \end{aligned}$$
(3.4)

Proof of Proposition 1.13

We will show \(\lim \limits _{N\rightarrow \infty }\mathbbm {E}[(\hat{Z}_{N,\beta }^{\omega })^{\theta }]=0\) for some \(\theta \in (0,1)\) via a recursive inequality between \(\mathbbm {E}[(\hat{Z}_{N,\beta }^{\omega })^{\theta }]\) and \(\mathbbm {E}[(\hat{Z}_{N-1,\beta }^{\omega })^{\theta }]\).

We first establish the connection between \(\hat{Z}_{N,\beta }^{\omega }\) and \(\hat{Z}_{N-1,\beta }^{\omega }\) by writing

$$\begin{aligned} \frac{\hat{Z}_{N,\beta }^{\omega }}{\hat{Z}_{N-1,\beta }^{\omega }}=U_{N,\beta }^{\omega }+1, \end{aligned}$$
(3.5)

where it can be seen that

$$\begin{aligned} U_{N,\beta }^{\omega }=\mathbf {E}_{N-1,\beta }^{\omega }[\exp (\beta \omega _{N,S_{N}}-\lambda (\beta ))]-1. \end{aligned}$$
(3.6)

Therefore, conditionally on \(\mathcal {G}_{N-1}\), the \(\sigma \)-field generated by \((\omega _{i,x})_{0\le i\le N-1, x\in \mathbbm {Z}}\), the variable \(U_{N,\beta }^{\omega }\) satisfies the definition of U in Lemma 3.3. Then we have

$$\begin{aligned} \mathbbm {E}[(\hat{Z}_{N,\beta }^{\omega })^{\theta }|\mathcal {G}_{N-1}]=(\hat{Z}_{N-1,\beta }^{\omega })^{\theta }\mathbbm {E} [(U_{N,\beta }^{\omega }+1)^{\theta }|\mathcal {G}_{N-1}]. \end{aligned}$$
(3.7)

To deal with the right-hand side of (3.7), we define an auxiliary function. Assume \(\theta \in (0,1)\) and set \(f:(-1,\infty )\rightarrow [0,\infty )\) by

$$\begin{aligned} f(u)=1+\theta u-(1+u)^{\theta }. \end{aligned}$$
(3.8)

It is easy to see that there exist \(c_{1},c_{2}\in (0,\infty )\) such that for all \(u\in (-1,\infty )\), we have

$$\begin{aligned} \frac{c_{1}u^{2}}{2+u}\le f(u)\le c_{2}u^{2}. \end{aligned}$$
(3.9)

Notice that the left-hand side of (3.9) has the form of the right-hand side of (3.3).
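As a quick numeric side check of (3.9) (with \(\theta =\frac{1}{2}\) and the illustrative, non-optimal constants \(c_{1}=\frac{1}{8}\), \(c_{2}=1\)), one can verify the squeeze on a grid:

```python
# Sketch: sanity check of (3.9) for theta = 1/2 on a grid of u in (-1, 1e6];
# the constants c1 = 1/8 and c2 = 1 below are illustrative, not optimal.
import numpy as np

theta = 0.5
u = np.concatenate((np.linspace(-0.999, 10.0, 10_000), np.logspace(1, 6, 100)))
f = 1.0 + theta * u - (1.0 + u) ** theta

assert np.all(f >= 0.125 * u ** 2 / (2.0 + u) - 1e-12)   # lower bound in (3.9)
assert np.all(f <= 1.0 * u ** 2 + 1e-12)                 # upper bound in (3.9)
print("bounds (3.9) hold on the grid")
```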

Then

$$\begin{aligned} \begin{array}{ll} &{}(\hat{Z}_{N-1,\beta }^{\omega })^{\theta }\mathbbm {E} [(U_{N,\beta }^{\omega }+1)^{\theta }|\mathcal {G}_{N-1}]\\ &{}\quad =(\hat{Z}_{N-1,\beta }^{\omega })^{\theta }\mathbbm {E}[1+\theta U_{N,\beta }^{\omega }-f(U_{N,\beta }^{\omega })|\mathcal {G}_{N-1}]\\ &{}\quad =(\hat{Z}_{N-1,\beta }^{\omega })^{\theta }-(\hat{Z}_{N-1,\beta }^{\omega })^{\theta }\mathbbm {E}[f(U_{N,\beta }^{\omega })|\mathcal {G}_{N-1}]\\ &{}\quad \le (\hat{Z}_{N-1,\beta }^{\omega })^{\theta }-(\hat{Z}_{N-1,\beta }^{\omega })^{\theta }\mathbbm {E}\left[ \frac{c_{1}(U_{N,\beta }^{\omega })^{2}}{2+U_{N,\beta }^{\omega }}\big |\mathcal {G}_{N-1}\right] \\ &{}\quad \le (\hat{Z}_{N-1,\beta }^{\omega })^{\theta }-c_{3}(\hat{Z}_{N-1,\beta }^{\omega })^{\theta }I_{N}, \end{array} \end{aligned}$$
(3.10)

where the last inequality is due to Lemma 3.3, applied with \((\alpha _{x})_{x\in \mathbbm {Z}}=(\mathbf {P}_{N-1,\beta }^{\omega }(S_{N}=x))_{x\in \mathbbm {Z}}\) and \(c_{3}=c_{1}/c\), noticing that \(I_{N}=\sum \limits _{x\in \mathbbm {Z}}(\mathbf {P}_{N-1,\beta }^{\omega }(S_{N}=x))^{2}\). Taking expectations on both sides of (3.7) and (3.10) and using Lemma 3.5, we obtain

$$\begin{aligned} \mathbbm {E}[(\hat{Z}_{N,\beta }^{\omega })^{\theta }]\le \left( 1-\frac{c_{3}}{|\Lambda _{N}|}\right) \mathbbm {E}[(\hat{Z}_{N-1,\beta }^{\omega })^{\theta }] +\frac{2c_{3}}{|\Lambda _{N}|}\mathbf {P}(S_{N}\notin \Lambda _{N})^{\theta } \end{aligned}$$
(3.11)

for any sequence of bounded sets \((\Lambda _{i})_{i\ge 1}\).

For a recurrent S, by Proposition 3.1 (i), we have \(\sum \limits _{n=1}^{\infty }\frac{1}{a_{n}}=\infty \). Then we can always find a sequence \((b_{n})_{n\ge 1}\) such that

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\frac{b_{n}}{a_{n}}=\infty ,~\mathrm{and}~\sum \limits _{n=1}^{\infty }\frac{1}{b_{n}}=\infty . \end{aligned}$$
(3.12)
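One explicit choice, recorded here for the reader's convenience, comes from the classical Abel-Dini theorem: writing \(A_{n}=\sum _{k=1}^{n}a_{k}^{-1}\) (a notation used only in this paragraph), we may take \(b_{n}=a_{n}A_{n}\), since

$$\begin{aligned} \frac{b_{n}}{a_{n}}=A_{n}\rightarrow \infty ,\qquad \sum \limits _{n=1}^{\infty }\frac{1}{b_{n}}=\sum \limits _{n=1}^{\infty }\frac{a_{n}^{-1}}{A_{n}}=\infty , \end{aligned}$$

where the divergence of the last series is the Abel-Dini theorem applied to the divergent series \(\sum a_{n}^{-1}\).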

Hence, with \(\Lambda _{N}=(-b_{N},b_{N})\), the probability \(\mathbf {P}(S_{N}\notin \Lambda _{N})\) tends to 0, since \(a_{N}^{-1}S_{N}\) converges in distribution to an \(\alpha \)-stable law and \(b_{N}/a_{N}\rightarrow \infty \). For any \(\epsilon >0\) and all large enough N, we then have \(2\mathbf {P}(S_{N}\notin \Lambda _{N})^{\theta }<\epsilon \), and hence

$$\begin{aligned} \mathbbm {E}[(\hat{Z}_{N,\beta }^{\omega })^{\theta }]-\epsilon\le & {} \left( 1-\frac{c_{3}}{2b_{N}}\right) (\mathbbm {E}[(\hat{Z}_{N-1,\beta }^{\omega })^{\theta }]-\epsilon )\nonumber \\\le & {} \exp \left( -\frac{c_{3}}{2b_{N}}\right) (\mathbbm {E}[(\hat{Z}_{N-1,\beta }^{\omega })^{\theta }] -\epsilon ). \end{aligned}$$
(3.13)

Iterating this inequality and using Fatou’s lemma, we obtain

$$\begin{aligned} \mathbbm {E}[(\hat{Z}_{\infty ,\beta }^{\omega })^{\theta }]-\epsilon\le & {} \varliminf \limits _{M\rightarrow \infty }\mathbbm {E}[(\hat{Z}_{M,\beta }^{\omega })^{\theta }]-\epsilon \nonumber \\\le & {} \varliminf \limits _{M\rightarrow \infty }\exp \left( -\sum \limits _{n=N}^{M}\frac{c_{3}}{2b_{n}}\right) (\mathbbm {E}[(\hat{Z}_{N-1,\beta }^{\omega })^{\theta }]-\epsilon )=0. \end{aligned}$$
(3.14)

Since \(\epsilon \) is arbitrary, it follows that \(\mathbbm {E}[(\hat{Z}_{\infty ,\beta }^{\omega })^{\theta }]=0\), i.e., strong disorder holds. \(\square \)

3.2 Proof of Theorem 1.15

In this subsection, we prove Theorem 1.15, which gives bounds on the free energy for \(\alpha \in (1,2]\). The technique used here has been developed in many articles; see [20, 26, 31]. We only give a proof for the Gaussian environment; it is not hard to deduce the result for a general environment from the Gaussian case, see [26, p. 481].

Proof of Theorem 1.15 in Gaussian environment

We start with a simple observation. By Jensen’s inequality, for \(\theta \in (0,1)\),

$$\begin{aligned} p(\beta )=\lim \limits _{N\rightarrow \infty }\frac{1}{N}\mathbbm {E}[\log \hat{Z}_{N,\beta }^{\omega }]\le \varliminf \limits _{N\rightarrow \infty }\frac{1}{\theta N}\log \mathbbm {E} [(\hat{Z}_{N,\beta }^{\omega })^{\theta }]. \end{aligned}$$
(3.15)

Hence, we only need to show that the fractional moment \(\mathbbm {E}[(\hat{Z}_{N,\beta }^{\omega })^{\theta }]\), for some power \(\theta \in (0,1)\) to be determined later, decays exponentially in N.

To conclude (1.23), by (3.15) it is sufficient to bound \(\mathbbm {E}[(\hat{Z}_{N,\beta }^{\omega })^{\theta }]\) along a subsequence. We use the coarse-graining method in this step. Consider the sequence \(N=mn\), where m will tend to infinity and n is fixed once chosen; n will be determined by \(\beta \) later. The idea is that we only investigate the heavy-tailed random walk S at times \(n, 2n,\ldots , mn\). For each time in, \(i=1,\ldots ,m\), we can find a time-space window in \(\mathbbm {N}\times \mathbbm {Z}\) in which \(S_{in}\) falls with high probability, thanks to the convergence to the stable law.

Let \((a_{n})_{n\ge 1}\) be the scaling sequence such that \(a_{N}^{-1}S_{N}\) converges to an \(\alpha \)-stable law in distribution. Notice that we can choose \((a_{n})_{n\ge 1}\) to be non-decreasing and integer-valued, which will simplify our argument. Denote \(I_{k}=[ka_{n},(k+1)a_{n})\) and we make the decomposition

$$\begin{aligned} \hat{Z}_{N,\beta }^{\omega }=\sum \limits _{y_{1},\ldots ,y_{m}\in \mathbbm {Z}}\hat{Z}_{(y_{1},\ldots ,y_{m})}^{\beta ,\omega }, \end{aligned}$$
(3.16)

where

$$\begin{aligned} \hat{Z}_{(y_{1},\ldots ,y_{m})}^{\beta ,\omega }=\mathbf {E}\left[ \exp \left\{ \sum \limits _{i=1}^{N}\left( \beta \omega _{i,S_{i}}-\frac{\beta ^{2}}{2}\right) \right\} \mathbbm {1}_{\{S_{in}\in I_{y_{i}}, \forall i=1,\ldots ,m\}}\right] . \end{aligned}$$
(3.17)

Then

$$\begin{aligned} \mathbbm {E}[(\hat{Z}_{N,\beta }^{\omega })^{\theta }]\le \sum \limits _{y_{1},\ldots ,y_{m}\in \mathbbm {Z}}\mathbbm {E}[(\hat{Z}_{(y_{1},\ldots ,y_{m})}^{\beta ,\omega }) ^{\theta }], \end{aligned}$$
(3.18)

since the inequality \((\sum a_{n})^{\theta }\le \sum a_{n}^{\theta }\) holds for any countable sequence of non-negative numbers and any \(\theta \in (0,1]\). Note that the length of each interval \(I_{k}\) is chosen to match the scaling of \(S_{n}\); if the event \(\{S_{in}\in I_{y_{i}}~\forall i=1,\ldots ,m\}\) occurs, we call \((y_{1},\ldots ,y_{m})\) the coarse-grained version of the trajectory of S.
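The inequality \((\sum a_{n})^{\theta }\le \sum a_{n}^{\theta }\) has a one-line verification: for non-negative \(a_{n}\) with \(A=\sum a_{n}\in (0,\infty )\),

$$\begin{aligned} \sum \limits _{n}\left( \frac{a_{n}}{A}\right) ^{\theta }\ge \sum \limits _{n}\frac{a_{n}}{A}=1, \end{aligned}$$

since \(\frac{a_{n}}{A}\le 1\) and \(\theta \in (0,1]\).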

Next, to estimate \(\mathbbm {E}[(\hat{Z}_{(y_{1},\ldots ,y_{m})}^{\beta ,\omega })^{\theta }]\), we use a change of measure procedure, which we now explain. We will define a new law for the random environment, which shifts the expectation of \(\omega _{j,x}\) down to a negative value at the sites that the random walk S visits with relatively high probability. This significantly decreases the expectation of \(\hat{Z}_{(y_{1},\ldots ,y_{m})}^{\beta ,\omega }\) under the new law of \(\omega \), while the cost of the change of measure can be kept small.

For any \(Y=(y_{0},\ldots ,y_{m-1})\), we introduce the set

$$\begin{aligned} J_{Y}=\{(kn+i,y_{k}a_{n}+z):~k=0,\ldots ,m-1,~i=1,\ldots ,n,~|z|\le C_{1}a_{n}\}, \end{aligned}$$
(3.19)

where \(y_{0}=0\) for convenience and \(C_{1}\) is a large integer to be determined later. Note that \(|J_{Y}|=2C_{1}a_{n}mn\), where |A| denotes the cardinality of a set A. We can view the choice of \(J_{Y}\) as follows: suppose that the random walk S reaches \(y_{k}a_{n}\) at time kn. Then for the next n steps, its path will probably stay in the set

$$\begin{aligned} B_{k}=\{(kn+i,y_{k}a_{n}+z):~i=1,\ldots ,n,~|z|\le C_{1}a_{n}\}. \end{aligned}$$
(3.20)

Note that \((B_{k})_{0\le k\le m-1}\) are disjoint and \(J_{Y}=\bigcup \limits _{k=0}^{m-1}B_{k}\). According to the argument above (3.19), we will perform the change of measure on \(J_{Y}\) (see Fig. 1).

Fig. 1
This figure represents the coarse-grained version of a trajectory of the random walk S. We investigate the random walk S at times in, \(i=1,\ldots ,m\). The bold vertical line segments indicate that at time in, the random walk S falls in the interval \(I_{y_{i}}\), where \(y_{i}\) is the vertical coordinate of the lower endpoint of the \((i+1)\)-th bold vertical line segment. The rectangles \(B_{k}\), each containing \(n\times 2C_{1}a_{n}\) sites, are defined in (3.20); on them we will perform the change of measure

We define the new measure \(\mathbbm {P}_{Y}\), under which \((\omega _{i,x})_{i\ge 0,x\in \mathbbm {Z}}\) are independent Gaussian random variables with variance 1 and expectation \(\mathbbm {E}_{Y}[\omega _{i,x}]=-\delta (n)\mathbbm {1}_{\{(i,x)\in J_{Y}\}}\), where \(\delta (n)\) is a small number to be determined later. A direct computation shows that

$$\begin{aligned} \frac{d\mathbbm {P}_{Y}}{d\mathbbm {P}}=\exp \left\{ -\sum \limits _{(i,x)\in J_{Y}}\left( \delta (n)\omega _{i,x}+\frac{\delta (n)^{2}}{2}\right) \right\} . \end{aligned}$$
(3.21)

Then by Hölder’s inequality,

$$\begin{aligned} \begin{array}{ll} \mathbbm {E}[(\hat{Z}_{(y_{1},\ldots ,y_{m})}^{\beta ,\omega })^{\theta }]&{}=\mathbbm {E}_{Y}\left[ \frac{d\mathbbm {P}}{d\mathbbm {P}_{Y}}(\hat{Z}_{(y_{1},\ldots ,y_{m})} ^{\beta ,\omega })^{\theta }\right] \\ &{}\le \left( \mathbbm {E}_{Y}\left[ \left( \frac{d\mathbbm {P}}{d\mathbbm {P}_{Y}}\right) ^{\frac{1}{1-\theta }}\right] \right) ^{1-\theta }\left( \mathbbm {E}_{Y}[\hat{Z} _{(y_{1},\ldots ,y_{m})}^{\beta ,\omega }]\right) ^{\theta }. \end{array} \end{aligned}$$
(3.22)

Here

$$\begin{aligned} \left( \mathbbm {E}_{Y}\left[ \left( \frac{d\mathbbm {P}}{d\mathbbm {P}_{Y}}\right) ^{\frac{1}{1-\theta }}\right] \right) ^{1-\theta }=\exp \left( \frac{|J_{Y}|\theta \delta (n)^{2}}{2(1-\theta )}\right) =\exp \left( \frac{C_{1}a_{n}mn\theta \delta (n)^{2}}{1-\theta }\right) . \end{aligned}$$
(3.23)

To make this term independent of n, we can set \(\delta (n)=(C_{1}na_{n})^{-\frac{1}{2}}\), so that (3.23) becomes \(\exp (\frac{\theta m}{1-\theta })\).
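For completeness, the computation behind (3.23) is the standard Gaussian exponential moment: under \(\mathbbm {P}_{Y}\), each \(\omega _{i,x}\) with \((i,x)\in J_{Y}\) is Gaussian with mean \(-\delta (n)\) and variance 1, so that, writing \(\delta =\delta (n)\),

$$\begin{aligned} \mathbbm {E}_{Y}\left[ \exp \left( \frac{\delta \omega _{i,x}+\frac{\delta ^{2}}{2}}{1-\theta }\right) \right] =\exp \left( \frac{\delta ^{2}}{2(1-\theta )}-\frac{\delta ^{2}}{1-\theta }+\frac{\delta ^{2}}{2(1-\theta )^{2}}\right) =\exp \left( \frac{\theta \delta ^{2}}{2(1-\theta )^{2}}\right) . \end{aligned}$$

Multiplying over the \(|J_{Y}|=2C_{1}a_{n}mn\) sites and raising to the power \(1-\theta \) gives (3.23), and the choice \(\delta (n)=(C_{1}na_{n})^{-\frac{1}{2}}\) turns it into \(\exp (\frac{\theta m}{1-\theta })\).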

To estimate

$$\begin{aligned} \mathbbm {E}_{Y}[\hat{Z}_{(y_{1},\ldots ,y_{m})}^{\beta ,\omega }]=\mathbf {E}[\exp (-\beta \delta (n)|\{i:(i,S_{i})\in J_{Y}\}|)\mathbbm {1}_{\{S_{kn}\in I_{y_{k}}, 1\le k\le m\}}], \end{aligned}$$
(3.24)

we define

$$\begin{aligned} J&=\{(i,x):i=1,\ldots ,n,~|x|\le (C_{1}-1)a_{n}\}, \end{aligned}$$
(3.25)
$$\begin{aligned} \bar{J}&=\{(i,x):i=1,\ldots ,n,~|x|\le (C_{1}-2)a_{n}\}. \end{aligned}$$
(3.26)

Recall that \(J_{Y}=\bigcup \limits _{k=0}^{m-1}B_{k}\) and \(B_{k}\cap B_{l}=\emptyset \) for \(k\ne l\) by (3.20). We have

$$\begin{aligned} \begin{array}{ll} &{}\mathbf {E}\left[ \exp (-\beta \delta (n)|\{i:(i,S_{i})\in J_{Y}\}|)\mathbbm {1}_{\{S_{kn}\in I_{y_{k}}, 1\le k\le m\}}\right] \\ &{}\quad =\mathbf {E}\left[ \prod \limits _{k=1}^{m}\exp (-\beta \delta (n)|\{i:(i,S_{i})\in B_{k-1}\}|)\mathbbm {1}_{S_{kn}\in I_{y_{k}}}\right] \\ &{}\quad \le \prod \limits _{k=1}^{m}\max \limits _{x\in I_{y_{k-1}}}\mathbf {E}^{x}\left[ \exp (-\beta \delta (n)|\{i:(i+(k-1)n,S_{i})\in B_{k-1}\}|)\mathbbm {1}_{S_{n}\in I_{y_{k}}}\right] \\ &{}\quad \le \prod \limits _{k=1}^{m}\max \limits _{x\in I_{0}}\mathbf {E}^{x}\left[ \exp (-\beta \delta (n)|\{i:(i,S_{i})\in J\}|)\mathbbm {1}_{S_{n}\in I_{y_{k}-y_{k-1}}}\right] , \end{array} \end{aligned}$$
(3.27)

where the first inequality is due to the Markov property and the last inequality is due to our definitions of \(I_{k}\) and J. Combining (3.18), (3.22), (3.23), (3.24), and (3.27), it follows that

$$\begin{aligned} \begin{array}{ll} &{}\log \mathbbm {E}[(\hat{Z}_{N,\beta }^{\omega })^{\theta }]\\ &{}\quad \le \log \sum \limits _{y_{1},\ldots ,y_{m}\in \mathbbm {Z}}\exp \left( \frac{\theta m}{1-\theta }\right) \left( \mathbbm {E}_{Y}[\hat{Z}_{(y_{1},\ldots ,y_{m})}^{\beta ,\omega }]\right) ^{\theta }\\ &{}\quad \le \frac{\theta m}{1-\theta }+\log \sum \limits _{y_{1},\ldots ,y_{m}\in \mathbbm {Z}}\left( \prod \limits _{k=1}^{m}\max \limits _{x\in I_{0}}\mathbf {E}^{x}[\exp (-\beta \delta (n)|\{i:(i,S_{i})\in J\}|)\mathbbm {1}_{S_{n}\in I_{y_{k}-y_{k-1}}}]\right) ^{\theta }\\ &{}\quad =m\left[ \frac{\theta }{1-\theta }+\log \sum \limits _{z\in \mathbbm {Z}}(\max \limits _{x\in I_{0}}\mathbf {E}^{x}[\exp (-\beta \delta (n)|\{i:(i,S_{i})\in J\}|) \mathbbm {1}_{S_{n}\in I_{z}}])^{\theta }\right] . \end{array}\qquad \end{aligned}$$
(3.28)

If we can show that the quantity in the square brackets is smaller than \(-1\), then \(p(\beta )\le -\frac{1}{\theta n}\) by (3.15), which will imply very strong disorder. It suffices to show that

$$\begin{aligned} \sum \limits _{z\in \mathbbm {Z}}\max \limits _{x\in I_{0}}\mathbf {E}^{x}[\exp (-\beta \delta (n)|\{i:(i,S_{i})\in J\}|)\mathbbm {1}_{S_{n}\in I_{z}}]^{\theta } \end{aligned}$$
(3.29)

can be made sufficiently small.

Observe that

$$\begin{aligned} \begin{array}{ll} &{}\sum \limits _{z\in \mathbbm {Z}}\max \limits _{x\in I_{0}}\mathbf {E}^{x}[\exp (-\beta \delta (n)|\{i:(i,S_{i})\in J\}|)\mathbbm {1}_{S_{n}\in I_{z}}]^{\theta }\\ &{}\quad \le \sum \limits _{|y|\ge K}\max _{x\in I_{0}}\mathbf {P}^{x}(S_{n}\in I_{y})^{\theta }+2K\max \limits _{x\in I_{0}}\mathbf {E}^{x}[\exp (-\beta \delta (n)|\{i:(i,S_{i})\in J\}|)]^{\theta }. \end{array} \end{aligned}$$
(3.30)

For the first term,

$$\begin{aligned} \begin{array}{ll} \sum \limits _{|y|\ge K}\max \limits _{x\in I_{0}}\mathbf {P}^{x}(S_{n}\in I_{y})^{\theta }&{}\le 2\sum \limits _{y=K-2}^{\infty }\mathbf {P}\left( y\le \frac{S_{n}}{a_{n}} <y+2\right) ^{\theta }\\ &{}\le 2\sum \limits _{y=K-2}^{\infty }\left( \frac{1}{y^{\gamma }}\mathbf {E}\left| \frac{S_{n}}{a_{n}}\right| ^{\gamma }\right) ^{\theta }\\ &{}\le 2C\sum \limits _{y=K-2}^{\infty }y^{-\gamma \theta }. \end{array} \end{aligned}$$
(3.31)

The last inequality follows from [25, Theorem 2.14] by choosing some \(\gamma \in (1,\alpha )\). Therefore, we can fix \(\theta \) such that \(\gamma \theta >1\) and then choose K large enough such that (3.31) is small enough.

For the second term,

$$\begin{aligned} \begin{array}{ll} &{}2K\max \limits _{x\in I_{0}}\mathbf {E}^{x}\left[ \exp (-\beta \delta (n)|\{i:(i,S_{i})\in J\}|)\right] ^{\theta }\\ &{}\quad \le 2K\mathbf {E}\left[ \exp (-\beta \delta (n)|\{i:(i,S_{i})\in \bar{J}\}|)\right] ^{\theta }\\ &{}\quad \le 2K\left[ \exp (-n\beta \delta (n))+\mathbf {P}(\text{ the } \text{ random } \text{ walk } \text{ leaves }~\bar{J}~\text{ before } \text{ time }~n)\right] ^{\theta }. \end{array} \end{aligned}$$
(3.32)

By choosing a large \(C_{1}\), the second term inside the brackets can be made small by the analogue of the invariance principle for heavy-tailed random walks. For the first term, notice that

$$\begin{aligned} n\beta \delta (n)=\frac{\beta }{\sqrt{C_{1}}}\sqrt{{\frac{n}{a_{n}}}}=\frac{\beta n^{\frac{\alpha -1}{2\alpha }}}{\sqrt{C_{1}l(n)}}. \end{aligned}$$
(3.33)

We can choose the smallest \(n=n(\beta )\) such that \(\beta n^{\frac{\alpha -1}{2\alpha }}l(n)^{-\frac{1}{2}}\ge C_{2}\), where \(C_{2}\) is a large constant chosen so that \(\exp (-\frac{C_{2}}{\sqrt{C_{1}}})\) is small enough. By our choice of n, it follows that

$$\begin{aligned} \lim \limits _{\beta \rightarrow 0}\beta n^{\frac{\alpha -1}{2\alpha }}l(n)^{-\frac{1}{2}}=C_{2}. \end{aligned}$$
(3.34)

Therefore,

$$\begin{aligned} n^{\frac{\alpha -1}{2\alpha }}\frac{1}{\sqrt{l(n)}}\sim \frac{C_{2}}{\beta },~~\text{ as }~\beta \rightarrow 0. \end{aligned}$$
(3.35)

Define

$$\begin{aligned} l_{\alpha }(x):=\frac{1}{\sqrt{l(x^{\frac{2\alpha }{\alpha -1}})}}. \end{aligned}$$
(3.36)

Then \(l_{\alpha }(x)\) is also a slowly varying function. We then have

$$\begin{aligned} n^{\frac{\alpha -1}{2\alpha }}l_{\alpha }(n^{\frac{\alpha -1}{2\alpha }})\sim \frac{C_{2}}{\beta },~~\text{ as }~\beta \rightarrow 0. \end{aligned}$$
(3.37)

By [7, Theorem 1.5.13], we can find a slowly varying function \(l^{\#}_{\alpha }(x)\), such that

$$\begin{aligned} l^{\#}_{\alpha }(xl_{\alpha }(x))\sim \frac{1}{l_{\alpha }(x)},~~\text{ as }~x\rightarrow \infty . \end{aligned}$$
(3.38)

Therefore,

$$\begin{aligned} \frac{1}{l_{\alpha }(n^{\frac{\alpha -1}{2\alpha }})}\sim l^{\#}_{\alpha }\left( \frac{C_{2}}{\beta }\right) ,~~\text{ as }~\beta \rightarrow 0, \end{aligned}$$
(3.39)

that is,

$$\begin{aligned} \sqrt{l(n)}\sim l^{\#}_{\alpha }\left( \frac{C_{2}}{\beta }\right) ,~~\text{ as }~\beta \rightarrow 0. \end{aligned}$$
(3.40)

Combining (3.35) and (3.40), and recalling that if l(x) is a slowly varying function, then l(ax) and \((l(x))^{\gamma }\) are both slowly varying for any \(a>0\) and \(\gamma \in \mathbbm {R}\) (see [7]), we set \(\varphi =(l^{\#}_{\alpha })^{-\frac{2\alpha }{\alpha -1}}\) and obtain

$$\begin{aligned} \frac{1}{n}\sim C'\beta ^{\frac{2\alpha }{\alpha -1}}\varphi \left( \frac{1}{\beta }\right) ,~~\text{ as }~\beta \rightarrow 0, \end{aligned}$$
(3.41)

where \(C'\in (0,\infty )\) is a constant and \(\varphi \) is a slowly varying function. Then for some constant C,

$$\begin{aligned} p(\beta )\le -\frac{1}{\theta n}\le -C\beta ^{\frac{2\alpha }{\alpha -1}}\varphi \left( \frac{1}{\beta }\right) , \end{aligned}$$
(3.42)

which completes the proof. \(\square \)

Remark 3.6

In [26, Proposition 1.5], Lacoin also gave a lower bound for the free energy of the 1-dimensional nearest-neighbor directed polymer, with an extra logarithmic term. Later, in [1], the authors proved that the logarithmic term can be removed, so that the lower bound matches the upper bound up to a constant prefactor. The proof of the lower bound involves site percolation. To extend their lower bound to the long-range model, some properties of long-range percolation may be needed, which, however, have not been systematically studied. Besides, it is the negativity of the upper bound that implies very strong disorder and thus reflects the qualitative behavior of the polymer chain. Therefore, the upper bound is more significant than the lower bound, and we leave out the lower bound in this paper. Recently, in [10], the authors identified the sharp high temperature asymptotic behavior of the shift of the critical point for the pinning model with exponent \(\alpha \in (\frac{1}{2},1)\). We expect that their approach is also applicable to the long-range directed polymer with \(\alpha \in (1,2]\).

Remark 3.7

We continue the discussion in Remark 1.14. Although we have given some equivalent conditions for the recurrence of heavy-tailed random walks in Proposition 3.1, for stable exponent \(\alpha =1\) we have not deduced very strong disorder for all \(\beta >0\) from the recurrence of the random walk S. The reason is that the slowly varying function L(x) is subtle and the tail distribution of the random walk S decays much more slowly than that of a simple random walk, so some more delicate techniques are needed. In their recent papers [3, 4], Berger and Lacoin developed a more elaborate change of measure procedure. By that method, they identified the sharp high temperature asymptotic behavior for the nearest-neighbor directed polymer in \(\mathbbm {Z}^{2+1}\) in [3], and the sharp asymptotics of the critical point shift for the pinning of the one-dimensional simple random walk. Note that \(d=2\) is the critical dimension for the existence of the weak disorder regime in the nearest-neighbor directed polymer model on \(\mathbbm {Z}^{d+1}\), and the case \(\alpha =\frac{1}{2}\) for the pinning model is critical for whether the disorder is relevant. Hence, we believe that their new method can also provide the asymptotic behavior of the free energy for the long-range directed polymer model in the critical case \(\alpha =1\). This paper does not include that case, since it is quite involved and should be treated separately.

4 Proof of Theorem 1.17

We will first extend the key lemma [32, Lemma 5.3] so that it holds not only for finitely many \((\eta _{i})_{i=1}^{n}\), but also for countably many \((\eta _{i})_{i\ge 1}\).

Lemma 4.1

Denote \(\Lambda =\{(\lambda _{i})_{i\ge 1}\in [0,1]^{\mathbbm {N}}:\sum \limits _{i=1}^{\infty }\lambda _{i}=1\}\), and let \((\eta _{i})_{i\ge 1}\) be an i.i.d. sequence of positive random variables such that \(\mathbbm {E}[|\log \eta _{1}|]<\infty \). Then for any positive integer k, we have

$$\begin{aligned} \inf \limits _{\begin{array}{c} (\lambda _{i})\in \Lambda \\ \sup (\lambda _{i})\le \frac{1}{k} \end{array}}\mathbbm {E}\left[ \log \left( \sum \limits _{i=1}^{\infty }\lambda _{i} \eta _{i}\right) \right] =\mathbbm {E}\left[ \log \left( \frac{1}{k}\sum \limits _{i=1}^{k}\eta _{i}\right) \right] . \end{aligned}$$
(4.1)

Proof

We prove the lemma by contradiction. Assume that

$$\begin{aligned} \inf \limits _{\begin{array}{c} (\lambda _{i})\in \Lambda \\ \sup (\lambda _{i})\le \frac{1}{k} \end{array}}\mathbbm {E}\left[ \log \left( \sum \limits _{i=1}^{\infty } \lambda _{i}\eta _{i}\right) \right] <\mathbbm {E}\left[ \log \left( \frac{1}{k}\sum \limits _{i=1}^{k}\eta _{i}\right) \right] , \end{aligned}$$
(4.2)

then we can find a sequence \((\bar{\lambda }_{i})\) such that

$$\begin{aligned} \mathbbm {E}\left[ \log \left( \sum \limits _{i=1}^{\infty }\bar{\lambda }_{i}\eta _{i}\right) \right] <\mathbbm {E}\left[ \log \left( \frac{1}{k}\sum \limits _{i=1}^{k}\eta _{i}\right) \right] . \end{aligned}$$
(4.3)

Note that only finitely many \(\bar{\lambda }_{i}\)’s can equal \(\frac{1}{k}\), and by continuity, we can adjust those \(\bar{\lambda }_{i}\)’s if necessary so that \(\sup _{i}\bar{\lambda }_{i}=\epsilon <\frac{1}{k}\) and (4.3) still holds. For any fixed integer n large enough that \(\Lambda _{n}=\sum \limits _{i=1}^{n}\bar{\lambda }_{i}>\epsilon k\), we set \(\tilde{\lambda }_{i}=\frac{\bar{\lambda }_{i}}{\Lambda _{n}}\) for \(1\le i\le n\), so that \(\sup \limits _{1\le i\le n}\tilde{\lambda }_{i}\le \frac{1}{k}\). Then,

$$\begin{aligned} \mathbbm {E}\left[ \log \left( \sum \limits _{i=1}^{\infty }\bar{\lambda }_{i}\eta _{i}\right) \right] \ge \mathbbm {E}\left[ \log \left( \sum \limits _{i=1}^{n}\tilde{\lambda }_{i}\eta _{i}\right) \right] +\log \Lambda _{n} \ge \mathbbm {E}\left[ \log \left( \frac{1}{k}\sum \limits _{i=1}^{k}\eta _{i}\right) \right] +\log \Lambda _{n},\nonumber \\ \end{aligned}$$
(4.4)

where the first inequality is due to the positivity of \(\eta _{i}\) and the second inequality holds by [32, Lemma 5.3], since \(\sup \limits _{1\le i\le n}\tilde{\lambda }_{i}\le \frac{1}{k}\) and \(\sum \limits _{i=1}^{n}\tilde{\lambda }_{i}=1\). Letting n tend to infinity, \(\log \Lambda _{n}\) tends to 0, so (4.4) contradicts (4.3). \(\square \)

Proof of Theorem 1.17

We follow the same strategy of proof as that for [32, Theorem 3.7] in the nearest-neighbor case. We will decompose \(N^{-1}\log Z_{N,\beta }^{\omega }\) to construct martingales by successively conditioning on the \(\sigma \)-fields \(\mathcal {G}_{j}\), where \(\mathcal {G}_{j}\) is generated by \((\omega _{i,x})_{1\le i\le j,x\in \mathbbm {Z}}\). First, define

$$\begin{aligned} A_{N,\beta }^{\epsilon }=\{\omega :\sup \limits _{x\in \mathbbm {Z}}\mathbf {P}_{N-1,\beta }^{\omega }(S_{N}=x)>\epsilon \}. \end{aligned}$$
(4.5)

Then

$$\begin{aligned} \begin{array}{ll} \frac{\log Z_{N,\beta }^{\omega }}{N}&{}=\frac{1}{N}\sum \limits _{j=1}^{N}\log \frac{Z_{j,\beta }^{\omega }}{Z_{j-1,\beta }^{\omega }}\\ &{}=\frac{1}{N}\sum \limits _{j=1}^{N}\mathbbm {1}_{A_{j,\beta }^{\epsilon }}\log \left( \sum \limits _{x\in \mathbbm {Z}}\mathbf {P}_{j-1,\beta }^{\omega }(S_{j}=x)\exp (\beta \omega _{j,x})\right) \\ &{}\quad +\,\frac{1}{N}\sum \limits _{j=1}^{N}\mathbbm {1}_{(A_{j,\beta }^{\epsilon })^{c}}\log \left( \sum \limits _{x\in \mathbbm {Z}}\mathbf {P}_{j-1,\beta }^{\omega }(S_{j}=x)\exp (\beta \omega _{j,x})\right) \end{array} \end{aligned}$$
(4.6)
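The second equality in (4.6) uses the one-step identity, already implicit in (3.5) and (3.6):

$$\begin{aligned} \frac{Z_{j,\beta }^{\omega }}{Z_{j-1,\beta }^{\omega }}=\mathbf {E}_{j-1,\beta }^{\omega }\left[ \exp (\beta \omega _{j,S_{j}})\right] =\sum \limits _{x\in \mathbbm {Z}}\mathbf {P}_{j-1,\beta }^{\omega }(S_{j}=x)\exp (\beta \omega _{j,x}). \end{aligned}$$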

Note that in the second term on the right-hand side of (4.6), we have \(\sup \limits _{x\in \mathbbm {Z}}\mathbf {P}_{j-1,\beta }^{\omega }(S_{j}=x)\le \epsilon \) on \((A_{j,\beta }^{\epsilon })^{c}\). Hence, we can apply Lemma 4.1 to this term later.

Define \(\mathcal {G}_{N}\)-martingales

$$\begin{aligned} \begin{array}{ll} M_{N}:=&{}\sum \limits _{j=1}^{N}\mathbbm {1}_{(A_{j,\beta }^{\epsilon })^{c}}\log \left( \sum \limits _{x\in \mathbbm {Z}}\mathbf {P}_{j-1,\beta }^{\omega }(S_{j}=x) \exp (\beta \omega _{j,x})\right) \\ &{}-\sum \limits _{j=1}^{N}\mathbbm {1}_{(A_{j,\beta }^{\epsilon })^{c}}\mathbbm {E}\left[ \left. \log \left( \sum \limits _{x\in \mathbbm {Z}}\mathbf {P}_{j-1,\beta }^{\omega } (S_{j}=x)\exp (\beta \omega _{j,x})\right) \right| \mathcal {G}_{j-1}\right] , \end{array} \end{aligned}$$
(4.7)

and

$$\begin{aligned} \begin{array}{ll} L_{N}:=&{}\sum \limits _{j=1}^{N}\mathbbm {1}_{A_{j,\beta }^{\epsilon }}\log \left( \sum \limits _{x\in \mathbbm {Z}}\mathbf {P}_{j-1,\beta }^{\omega }(S_{j}=x) \exp (\beta \omega _{j,x})\right) \\ &{}-\sum \limits _{j=1}^{N}\mathbbm {1}_{A_{j,\beta }^{\epsilon }}\mathbbm {E}\left[ \left. \log \left( \sum \limits _{x\in \mathbbm {Z}}\mathbf {P}_{j-1,\beta }^{\omega }(S_{j}=x) \exp (\beta \omega _{j,x})\right) \right| \mathcal {G}_{j-1}\right] . \end{array} \end{aligned}$$
(4.8)

Then

$$\begin{aligned} \begin{array}{ll} &{}\frac{1}{N}\sum \limits _{j=1}^{N}\mathbbm {1}_{A_{j,\beta }^{\epsilon }}\log \left( \sum \limits _{x\in \mathbbm {Z}}\mathbf {P}_{j-1,\beta }^{\omega }(S_{j}=x)\exp (\beta \omega _{j,x})\right) \\ &{}\quad =\frac{L_{N}}{N}+\frac{1}{N}\sum \limits _{j=1}^{N}\mathbbm {1}_{A_{j,\beta }^{\epsilon }}\mathbbm {E}\left[ \left. \log \left( \sum \limits _{x\in \mathbbm {Z}} \mathbf {P}_{j-1,\beta }^{\omega }(S_{j}=x)\exp (\beta \omega _{j,x})\right) \right| \mathcal {G}_{j-1}\right] \\ &{}\quad \ge \frac{L_{N}}{N}+\beta \mathbbm {E}[\omega _{1,0}]\cdot \frac{1}{N}\sum \limits _{j=1}^{N}\mathbbm {1}_{A_{j,\beta }^{\epsilon }}=\frac{L_{N}}{N} \end{array} \end{aligned}$$
(4.9)

by Jensen’s inequality, since \(\mathbbm {E}[\omega _{1,0}]=0\). Similarly,

$$\begin{aligned} \begin{array}{ll} &{}\frac{1}{N}\sum \limits _{j=1}^{N}\mathbbm {1}_{(A_{j,\beta }^{\epsilon })^{c}}\log \left( \sum \limits _{x\in \mathbbm {Z}}\mathbf {P}_{j-1,\beta }^{\omega }(S_{j}=x)\exp (\beta \omega _{j,x})\right) \\ &{}\quad =\frac{M_{N}}{N}+\frac{1}{N}\sum \limits _{j=1}^{N}\mathbbm {1}_{(A_{j,\beta }^{\epsilon })^{c}}\mathbbm {E}\left[ \left. \log \left( \sum \limits _{x\in \mathbbm {Z}} \mathbf {P}_{j-1,\beta }^{\omega }(S_{j}=x)\exp (\beta \omega _{j,x})\right) \right| \mathcal {G}_{j-1}\right] \\ &{}\quad \ge \frac{M_{N}}{N}+\frac{1}{N}\sum \limits _{j=1}^{N}\mathbbm {1}_{(A_{j,\beta }^{\epsilon })^{c}}\mathbbm {E}\left[ \log \left( \epsilon \sum \limits _{i=1} ^{\frac{1}{\epsilon }}\exp (\beta \omega _{i,0})\right) \right] \end{array} \end{aligned}$$
(4.10)

by (4.1), with \((\exp (\beta \omega _{j,x}))_{x\in \mathbbm {Z}}\) and \((\mathbf {P}_{j-1,\beta }^{\omega }(S_{j}=x))_{x\in \mathbbm {Z}}\) playing respectively the roles of \((\eta _{i})_{i\ge 1}\) and \((\lambda _{i})_{i\ge 1}\) in (4.1). We obtain

$$\begin{aligned} \frac{\log Z_{N,\beta }^{\omega }}{N}-\frac{M_{N}}{N}-\frac{L_{N}}{N}\ge \left( \frac{1}{N}\sum \limits _{j=1}^{N}\mathbbm {1}_{(A_{j,\beta }^{\epsilon })^{c}}\right) \mathbbm {E}\left[ \log \left( \epsilon \sum \limits _{i=1}^{\frac{1}{\epsilon }}\exp (\beta \omega _{i,0})\right) \right] . \end{aligned}$$
(4.11)

We will then prove that \(\frac{M_{N}}{N}\) and \(\frac{L_{N}}{N}\) tend to 0 as N tends to infinity by applying the following theorem [22, Theorem 2.19].

Theorem 4.2

(Hall-Heyde [22]) Let \((Y_{n})_{n\ge 1}\) be a sequence of random variables and \((\mathcal {F}_{n})_{n\ge 1}\) an increasing sequence of \(\sigma \)-fields with \(Y_{n}\) measurable with respect to \(\mathcal {F}_{n}\) for each n. Let Y be a random variable and c a constant such that \(\mathbbm {E}|Y|<\infty \) and \(\mathbbm {P}(|Y_{n}|>x)\le c\mathbbm {P}(|Y|>x)\) for each \(x>0\) and \(n\ge 1\). Then

$$\begin{aligned} n^{-1}\sum \limits _{i=1}^{n}[Y_{i}-\mathbbm {E}[Y_{i}|\mathcal {F}_{i-1}]]\overset{\mathbbm {P}}{\rightarrow }0~~\text{ as }~n\rightarrow \infty . \end{aligned}$$
(4.12)

If \(\mathbbm {E}[|Y|\log ^{+}|Y|]<\infty \), then the convergence in probability in (4.12) can be strengthened to almost sure convergence.

First, by Jensen’s inequality, we have

$$\begin{aligned} \beta \sum \limits _{x\in \mathbbm {Z}}\mathbf {P}_{j-1,\beta }^{\omega }(S_{j}=x)\omega _{j,x}\le \log \left( \sum \limits _{x\in \mathbbm {Z}}\mathbf {P}_{j-1,\beta }^{\omega } (S_{j}=x)\exp (\beta \omega _{j,x})\right) . \end{aligned}$$
(4.13)

Using that \(\log x\le x^{\frac{1}{\theta }}\) for \(x>0\) and \(1<\theta \le e\), we have

$$\begin{aligned} \log \left( \sum \limits _{x\in \mathbbm {Z}}\mathbf {P}_{j-1,\beta }^{\omega }(S_{j}=x)\exp (\beta \omega _{j,x})\right) \le \left( \sum \limits _{x\in \mathbbm {Z}} \mathbf {P}_{j-1,\beta }^{\omega }(S_{j}=x)\exp (\beta \omega _{j,x})\right) ^{\frac{1}{\theta }}.\qquad \end{aligned}$$
(4.14)

Then, applying (4.13) when \(\log \left( \sum \limits _{x\in \mathbbm {Z}}\mathbf {P}_{j-1,\beta }^{\omega } (S_{j}=x)\exp (\beta \omega _{j,x})\right) <0\) and (4.14) when it is non-negative, it follows that for all j,

$$\begin{aligned}&\mathbbm {E}\left| \log \left( \sum \limits _{x\in \mathbbm {Z}}\mathbf {P}_{j-1,\beta }^{\omega }(S_{j}=x)\exp (\beta \omega _{j,x})\right) \right| ^{\theta }\nonumber \\&\quad \le \beta ^{\theta }\mathbbm {E}\left( \sum \limits _{x\in \mathbbm {Z}}\mathbf {P}_{j-1,\beta }^{\omega }(S_{j}=x)|\omega _{j,x}|\right) ^{\theta } +\mathbbm {E}\left[ \sum \limits _{x\in \mathbbm {Z}}\mathbf {P}_{j-1,\beta }^{\omega }(S_{j}=x)\exp (\beta \omega _{j,x})\right] \nonumber \\&\quad \le \beta ^{\theta }\mathbbm {E}|\omega _{1,0}|^{\theta }+\exp (\lambda (\beta ))=C. \end{aligned}$$
(4.15)

Let \(\mathbbm {1}_{(A_{j,\beta }^{\epsilon })^{c}}\log \left( \sum \limits _{x\in \mathbbm {Z}}\mathbf {P}_{j-1,\beta }^{\omega }(S_{j}=x) \exp (\beta \omega _{j,x})\right) \) and \(\mathbbm {1}_{A_{j,\beta }^{\epsilon }}\log \left( \sum \limits _{x\in \mathbbm {Z}}\mathbf {P}_{j-1,\beta }^{\omega }(S_{j}=x) \exp \right. \left. (\beta \omega _{j,x})\right) \) play the role of \(Y_{j}\) in Theorem 4.2, and define a random variable Y such that for all \(x>C^{\frac{1}{\theta }}\),

$$\begin{aligned} \mathbbm {P}(|Y|>x)=\frac{C}{x^{\theta }}, \end{aligned}$$
(4.16)

where C is the same as that in (4.15). Then,

$$\begin{aligned} \lim \limits _{N\rightarrow \infty }\frac{M_{N}}{N}=\lim \limits _{N\rightarrow \infty }\frac{L_{N}}{N}=0,~~\text{ in }~\mathbbm {P}\text{-probability }. \end{aligned}$$
(4.17)
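The stronger moment condition in Theorem 4.2 can be checked directly from (4.16): with \(x_{0}=C^{\frac{1}{\theta }}\) and using \(\mathbbm {E}[g(|Y|)]=\int _{0}^{\infty }g'(x)\mathbbm {P}(|Y|>x)\mathrm{d}x\) for \(g(x)=x\log ^{+}x\),

$$\begin{aligned} \mathbbm {E}[|Y|\log ^{+}|Y|]=\int _{1}^{\infty }(1+\log x)\mathbbm {P}(|Y|>x)\mathrm{d}x\le \int _{1}^{x_{0}\vee 1}(1+\log x)\mathrm{d}x+C\int _{x_{0}\vee 1}^{\infty }\frac{1+\log x}{x^{\theta }}\mathrm{d}x, \end{aligned}$$

which is finite since \(\theta >1\).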

As checked above, \(\mathbbm {E}[|Y|\log ^{+}|Y|]<\infty \) since \(\theta >1\). Therefore, the convergence in (4.17) can be strengthened to almost sure convergence. Taking limits on both sides of (4.11), we have

$$\begin{aligned} \varlimsup \limits _{N\rightarrow \infty }\frac{1}{N}\sum \limits _{j=1}^{N}\mathbbm {1}_{(A_{j,\beta }^{\epsilon })^{c}}\le \frac{F(\beta )}{\mathbbm {E}\left[ \log \left( \epsilon \sum \limits _{i=1}^{\frac{1}{\epsilon }}\exp (\beta \omega _{i,0})\right) \right] },~\mathbbm {P}\text{-a.s. }, \end{aligned}$$
(4.18)

where \(F(\beta )\) is the free energy of the system, defined in (1.10). Let \(\epsilon \) tend to 0 along the sequence \((\frac{1}{k})_{k\ge 1}\). By Jensen’s inequality, the law of large numbers, and Fatou’s lemma, it is not hard to see that

$$\begin{aligned} \lim \limits _{\epsilon \rightarrow 0}\mathbbm {E}\left[ \log \left( \epsilon \sum \limits _{i=1}^{\frac{1}{\epsilon }}\exp (\beta \omega _{i,0})\right) \right] =\lambda (\beta )>F(\beta ). \end{aligned}$$
(4.19)

The last inequality is due to our very strong disorder assumption.

Hence, we can choose \(\epsilon \) small enough such that

$$\begin{aligned} \mathbbm {E}\left[ \log \left( \epsilon \sum \limits _{i=1}^{\frac{1}{\epsilon }}\exp (\beta \omega _{i,0})\right) \right] >F(\beta ). \end{aligned}$$
(4.20)

Then, by (4.18) and (4.20), \(\mathbbm {P}\)-a.s.,

$$\begin{aligned} \varlimsup \limits _{N\rightarrow \infty }\frac{1}{N}\sum \limits _{j=1}^{N}\mathbbm {1}_{(A_{j,\beta }^{\epsilon })^{c}}<1\Leftrightarrow \varliminf \limits _{N\rightarrow \infty }\frac{1}{N}\sum \limits _{j=1}^{N}\mathbbm {1}_{(A_{j,\beta }^{\epsilon })}>0 \end{aligned}$$
(4.21)

Recall the definitions of \(\mathcal {A}_{N,\beta }^{\epsilon ,\omega }\) and \(A_{N,\beta }^{\epsilon }\) in (1.24) and (4.5); then (4.21) implies (1.25). \(\square \)

5 Proof of Theorem 1.18

The basic idea of the proof is to compare the entropy cost and the energy gain when the heavy-tailed random walk introduced in (1.26) and (1.27) stays at a distance of order \(\mathcal {O}\left( \frac{N}{(\log N)^{2}}\right) \) from the origin. It can be seen that

$$\begin{aligned} Z_{N,\beta }^{\omega }=\sum \limits _{S}\exp (-\beta H_{N}^{\omega }(S))\mathbf {P}(S), \end{aligned}$$
(5.1)

where \(H_{N}^{\omega }(S)\) is the energy introduced in (1.7). For technical reasons, we study the second half of the trajectory of the random walk, i.e., \((S_{N/2},\ldots ,S_{N})\). On the one hand, if the second half of the trajectory stays at distance \(N/(\log N)^{2}\) from the origin, which is \(\gg N^{\frac{1}{\alpha }}\) for \(\alpha \in (1,2]\), then \(\mathbf {P}(S)\) is very small and there is a significant entropy cost. On the other hand, for some random variable Y, we may write \(\exp (-\beta H_{N}^{\omega }(S))\approx \exp (-\sqrt{N}Y)\), which fluctuates dramatically. Therefore, it is possible to find some block on \(\mathbbm {Z}\), at a distance of order \(\mathcal {O}\left( \frac{N}{(\log N)^{2}}\right) \) from the origin, with very high energy; if the energy gain beats the entropy cost, then the random walk is likely to stay in that block instead of somewhere near the origin.

Our proof consists of two parts. We will first investigate the energy gain. However, we will not estimate the energy directly. Instead, we will compare the contributions to the partition function from the environment on different blocks. To do so, we will use a change of measure argument developed in [27], since it is more likely to extend to models with a general environment and it is much shorter than the method used in [5]. Then we need to compute the entropy cost, which will be done by an estimate on a Radon-Nikodym derivative, although this estimate is not as accurate as the Girsanov theorem used in [5, 27].

Proof of Theorem 1.18

Without loss of generality, we assume throughout the proof that the integer N is even, so that we can omit many “\(\lfloor \cdot \rfloor \)” symbols and make the proof more readable.

For any given \(\epsilon >0\), to be consistent with (1.28), we denote

$$\begin{aligned} J_{N}=\left( -\frac{\beta ^{2}N}{4(\alpha +1+\epsilon )^{2}(\log N)^{2}},\frac{\beta ^{2}N}{4(\alpha +1+\epsilon )^{2}(\log N)^{2}}\right) \cap \mathbbm {Z}. \end{aligned}$$
(5.2)

Then we can define a change of measure from \(\mathbbm {P}\) to a new probability measure \({\hat{\mathbb {P}}}\) with Radon-Nikodym derivative

$$\begin{aligned} \frac{\mathrm{d}{{{\hat{\mathbb {P}}}}}}{\mathrm{d}\mathbbm {P}}:=\exp \left( -W-\frac{1}{2}\right) , \end{aligned}$$
(5.3)

where

$$\begin{aligned} W=\frac{\sum \limits _{n=\frac{N}{2}+1}^{N}\sum \limits _{x\in J_{N}}\omega _{n,x}}{\sqrt{\frac{N}{2}|J_{N}|}}. \end{aligned}$$
(5.4)

It is not hard to check that \(\hat{\omega }:=(\hat{\omega }_{i,x})_{(i,x)\in \mathbbm {N}\times \mathbbm {Z}}\) defined by

$$\begin{aligned} \hat{\omega }_{i,x}=\omega _{i,x}+\mathbbm {1}_{\left\{ (i,x)\in \left[ \frac{N}{2}+1,N\right] \times J_{N}\right\} }\left( \frac{N}{2}|J_{N}|\right) ^{-\frac{1}{2}} \end{aligned}$$
(5.5)

is a family of i.i.d. standard Gaussian random variables under \({\hat{\mathbb {P}}}\). The probability measure \({\hat{\mathbb {P}}}\) has two important properties. First, it makes the random environment on \(\left[ \frac{N}{2}+1,N\right] \times J_{N}\) less attractive to the random walk. Second, it does not differ too much from \(\mathbbm {P}\), as the following application of the Cauchy-Schwarz inequality shows:

$$\begin{aligned} \mathbbm {P}(A)={\hat{\mathbb {E}}}\left[ \frac{\mathrm{d}\mathbbm {P}}{\mathrm{d}{{{\hat{\mathbb {P}}}}}}\mathbbm {1}_{A}\right] \le \sqrt{\mathbbm {E} \left[ \frac{\mathrm{d}\mathbbm {P}}{\mathrm{d}{{{\hat{\mathbb {P}}}}}}\right] }\sqrt{{{{\hat{\mathbb {P}}}}}(A)}\le \sqrt{e{{{\hat{\mathbb {P}}}}}(A)}. \end{aligned}$$
(5.6)
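The last inequality in (5.6) uses that W defined in (5.4) is a standard Gaussian random variable under \(\mathbbm {P}\), so that

$$\begin{aligned} \mathbbm {E}\left[ \frac{\mathrm{d}\mathbbm {P}}{\mathrm{d}{{{\hat{\mathbb {P}}}}}}\right] =\mathbbm {E}\left[ \exp \left( W+\frac{1}{2}\right) \right] =\exp \left( \frac{1}{2}+\frac{1}{2}\right) =e. \end{aligned}$$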

Then for any \(p\in (0,1]\), we have

$$\begin{aligned} \begin{array}{ll} &{}\mathbbm {P}\left( \mathbf {P}_{N,\beta }^{\omega }\left( \max \limits _{1\le n\le N}|S_{n}|<\frac{1}{2}|J_{N}|\right) \ge p\right) \le \sqrt{e{{{\hat{\mathbb {P}}}}}\left( \mathbf {P}_{N,\beta }^{\omega }\left( \max \limits _{1\le n\le N}|S_{n}|<\frac{1}{2}|J_{N}|\right) \ge p\right) }\\ &{}\quad =\sqrt{e{{{\hat{\mathbb {P}}}}}\left( \frac{\mathbf {E}\left[ \exp \left( \beta \sum \limits _{n=1}^{N}\omega _{n,S_{n}}\right) \mathbbm {1}_{\{|S_{n}| <\frac{1}{2}|J_{N}|,~\forall n\in [1,N]\}}\right] }{\mathbf {E}\left[ \exp \left( \beta \sum \limits _{n=1}^{N}\omega _{n,S_{n}}\right) \right] }\ge p\right) }. \end{array} \end{aligned}$$
(5.7)

In order to deal with the last term in (5.7), we partition \(\mathbbm {Z}\) into the blocks

$$\begin{aligned} I_{N}^{k}=\left[ (2k-1)L, (2k+1)L\right) \cap \mathbbm {Z},~\forall k\in \mathbbm {Z}, \end{aligned}$$
(5.8)

where

$$\begin{aligned} L=\left\lfloor \frac{\beta ^{2}N}{4(\alpha +1+\epsilon _{0})^{2}(\log N)^{2}}\right\rfloor \end{aligned}$$
(5.9)

with some \(\epsilon _{0}\in (0,\epsilon )\), so that \(J_{N}\subset I_{N}^{0}\) when N is large enough. Note that under this partition, only those \(\omega _{i,x}\)’s with \((i,x)\in [\frac{N}{2}+1,N]\times I_{N}^{0}\) are affected by the change of measure from \(\mathbbm {P}\) to \({{{\hat{\mathbb {P}}}}}\). We define

$$\begin{aligned} Z_{N,\beta }^{\omega }(k):=\mathbf {E}\left[ \exp \left( \beta \sum \limits _{n=1}^{N}\omega _{n,S_{n}}\right) \mathbbm {1}_{\left\{ S_{n}\in I_{N}^{k},~\forall n\in \left[ \frac{N}{2}+1,N\right] \right\} }\right] \end{aligned}$$
(5.10)

and

$$\begin{aligned} \hat{Z}_{N,\beta }^{\omega }:=\mathbf {E}\left[ \exp \left( \beta \sum \limits _{n=1}^{N}\omega _{n,S_{n}}\right) \mathbbm {1}_{\{|S_{n}| <\frac{1}{2}|J_{N}|,~\forall n\in [1,N]\}}\right] . \end{aligned}$$
(5.11)

Since \(I_{N}^{k}\) and \(I_{N}^{j}\) are disjoint for \(k\ne j\), we have for any positive integer M,

$$\begin{aligned} Z_{N,\beta }^{\omega }\ge \sum \limits _{k\in \{-M,\ldots ,M\}\setminus \{0\}}Z_{N,\beta }^{\omega }(k). \end{aligned}$$
(5.12)

Then we can bound the last term in (5.7) by

$$\begin{aligned} \begin{array}{ll} &{}{{{\hat{\mathbb {P}}}}}\left( \frac{\mathbf {E}\left[ \exp \left( \beta \sum \limits _{n=1}^{N}\omega _{n,S_{n}}\right) \mathbbm {1}_{\{|S_{n}| <\frac{1}{2}|J_{N}|,~\forall n\in [1,N]\}}\right] }{\mathbf {E}\left[ \exp \left( \beta \sum \limits _{n=1}^{N}\omega _{n,S_{n}}\right) \right] }\ge p\right) \\ &{}\quad \le {{{\hat{\mathbb {P}}}}}\left( \frac{\hat{Z}_{N,\beta }^{\omega }}{\sum \limits _{k\in \{-M,\ldots ,M\}\setminus \{0\}}Z_{N,\beta }^{\omega }(k)}\ge p\right) \\ &{}\quad ={{{\hat{\mathbb {P}}}}}\left( \exp \left( -\beta \frac{N}{2}\left( \frac{N}{2}|J_{N}|\right) ^{-\frac{1}{2}}\right) \frac{\hat{Z}_{N,\beta }^{\hat{\omega }}}{\sum \limits _{k\in \{-M,\ldots ,M\}\setminus \{0\}}Z_{N,\beta }^{\omega }(k)}\ge p\right) \\ &{}\quad =\mathbbm {P}\left( \exp \left( -\beta \frac{N}{2}\left( \frac{N}{2}|J_{N}|\right) ^{-\frac{1}{2}}\right) \frac{\hat{Z}_{N,\beta }^{\omega }}{\sum \limits _{k\in \{-M,\ldots ,M\}\setminus \{0\}}Z_{N,\beta }^{\omega }(k)}\ge p\right) , \end{array} \end{aligned}$$
(5.13)

where in the first equality we rewrite \(\omega \) in terms of \(\hat{\omega }\), and the last equality follows from \(\mathcal {L}_{\mathbbm {P}}(\omega )=\mathcal {L}_{{{{\hat{\mathbb {P}}}}}}(\hat{\omega })\). The proof will be completed by the following proposition, whose proof is given later.

Proposition 5.1

For any \(\epsilon >0\), there exists some constant \(C>0\), such that for any positive integer M and large enough even integer N, we have

$$\begin{aligned} \sum \limits _{k\in \{-M,\ldots ,M\}\setminus \{0\}}Z_{N,\beta }^{\omega }(k)\ge C(MN)^{-(\alpha +1+\frac{\epsilon }{2})}Z_{N,\beta }^{\omega }(0) \end{aligned}$$
(5.14)

with \(\mathbbm {P}\)-probability greater than \(1-\frac{1}{2M}\).

By \(J_{N}\subset I_{N}^{0}\) and Proposition 5.1,

$$\begin{aligned} \frac{\hat{Z}_{N,\beta }^{\omega }}{\sum \limits _{k\in \{-M,\ldots ,M\}\setminus \{0\}}Z_{N,\beta }^{\omega }(k)}\le \frac{Z_{N,\beta }^{\omega }(0)}{\sum \limits _{k\in \{-M,\ldots ,M\}\setminus \{0\}}Z_{N,\beta }^{\omega }(k)}\le C(MN)^{\alpha +1+\frac{\epsilon }{2}} \end{aligned}$$
(5.15)

with \(\mathbbm {P}\)-probability greater than \(1-\frac{1}{2M}\). Note that by our choice of \(J_{N}\),

$$\begin{aligned} \exp \left( -\beta \frac{N}{2}\left( \frac{N}{2}|J_{N}|\right) ^{-\frac{1}{2}}\right) \sim N^{-(\alpha +1+\epsilon )}. \end{aligned}$$
(5.16)
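Indeed, by (5.2) we have \(|J_{N}|\sim \frac{\beta ^{2}N}{2(\alpha +1+\epsilon )^{2}(\log N)^{2}}\), so that

$$\begin{aligned} \beta \frac{N}{2}\left( \frac{N}{2}|J_{N}|\right) ^{-\frac{1}{2}}\sim \beta \frac{N}{2}\cdot \frac{2(\alpha +1+\epsilon )\log N}{\beta N}=(\alpha +1+\epsilon )\log N, \end{aligned}$$

which gives (5.16) after exponentiating.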

Combining (5.7), (5.13), (5.15), and (5.16), and choosing \(p=N^{-\frac{\epsilon }{4}}\) in (5.7), we obtain

$$\begin{aligned} \mathbbm {P}\left( \mathbf {P}_{N,\beta }^{\omega }\left( \max \limits _{1\le n\le N}|S_{n}|<\frac{1}{2}|J_{N}|\right) \ge N^{-\frac{\epsilon }{4}}\right) \le \sqrt{\frac{e}{2M}} \end{aligned}$$
(5.17)

when N is large enough. Thus

$$\begin{aligned} \mathbbm {E}\left[ \mathbf {P}_{N,\beta }^{\omega }\left( \max \limits _{1\le n\le N}|S_{n}|<\frac{1}{2}|J_{N}|\right) \right] \le \sqrt{\frac{e}{2M}}+N^{-\frac{\epsilon }{4}}. \end{aligned}$$
(5.18)

By sending N to infinity and then sending M to infinity, we finish the proof of Theorem 1.18. \(\square \)

Now we prove Proposition 5.1, which estimates the entropy cost for the random walk to stay in blocks far away from the origin.

Proof of Proposition 5.1

For all \(k\in \{-M,\ldots ,M\}\), by recalling L in (5.9) and \(I_{N}^{k}\) in (5.8), we define

$$\begin{aligned} h_{N}(n,k)= {\left\{ \begin{array}{ll} 0,~&{}\text{ for }~1\le n\le \frac{N}{2},\\ 2kL,~&{}\text{ for }~\frac{N}{2}+1\le n\le N, \end{array}\right. } \end{aligned}$$
(5.19)

and

$$\begin{aligned} \overline{Z}_{N,\beta }^{\omega }(k):=\mathbf {E}\left[ \exp \left( \beta \sum \limits _{n=1}^{N}\omega _{n,S_{n}+h_{N}(n,k)}\right) \mathbbm {1}_{\left\{ S_{n}\in I_{N}^{0},~\forall n\in \left[ \frac{N}{2}+1,N\right] \right\} }\right] . \end{aligned}$$
(5.20)

When \(S_{n}\in I_{N}^{0}\) for all \(n\in \left[ \frac{N}{2}+1,N\right] \), the families \(\left\{ (\omega _{n,S_{n}+h_{N}(n,k)})_{n\in \left[ \frac{N}{2}+1,N\right] }\right\} _{k\in \{-M,\ldots ,M\}}\) are independent for different k. Hence, it is easy to show that \(\mathbbm {P}(\overline{Z}_{N,\beta }^{\omega }(k)=\overline{Z}_{N,\beta }^{\omega }(j))=0\) for \(k\ne j\), and that \((\overline{Z}_{N,\beta }^{\omega }(k))_{k\in \{-M,\ldots ,M\}}\) is an exchangeable sequence. Therefore,

$$\begin{aligned} \mathbbm {P}\left( \overline{Z}_{N,\beta }^{\omega }(0)=\max \limits _{k\in \{-M,\ldots ,M\}}\overline{Z}_{N,\beta }^{\omega }(k)\right) =\frac{1}{2M+1} \end{aligned}$$
(5.21)

Note that \(\overline{Z}_{N,\beta }^{\omega }(0)=Z_{N,\beta }^{\omega }(0)\), so we need to compare \(\overline{Z}_{N,\beta }^{\omega }(k)\) with \(Z_{N,\beta }^{\omega }(k)\) for \(k\ne 0\). Writing \(\overline{S}_{n}=S_{n}-h_{N}(n,k)\), we have

$$\begin{aligned} Z_{N,\beta }^{\omega }(k)=\mathbf {E}\left[ \exp \left( \beta \sum \limits _{n=1}^{N}\omega _{n,\overline{S}_{n}+h_{N}(n,k)}\right) \mathbbm {1}_{\left\{ \overline{S}_{n}\in I_{N}^{0},~\forall n\in \left[ \frac{N}{2}+1,N\right] \right\} }\right] . \end{aligned}$$
(5.22)

We can complete the proof with the help of the following lemma.

Lemma 5.2

Define a sequence of random variables \((\overline{X}_{n})_{1\le n\le N}\) by

$$\begin{aligned} \overline{X}_{n}:= {\left\{ \begin{array}{ll} X_{n},&{}~\mathrm{for}~n\ne \frac{N}{2}+1,\\ X_{n}-h_{N}(n,k),&{}~\mathrm{for}~n=\frac{N}{2}+1. \end{array}\right. } \end{aligned}$$
(5.23)

We change the measure from \(\mathbf {P}\) to a new probability measure \({\overline{\mathbf {P}}}\) such that \(\mathcal {L}_{{\overline{\mathbf {P}}}}((\overline{X}_{n})_{1\le n\le N})=\mathcal {L}_{\mathbf {P}}((X_{n})_{1\le n\le N})\). Then for any \(\delta >0\), we can find a constant \(C>0\) such that for any \(k\in \{-M,\ldots ,M\}\) and large enough integer N, we have

$$\begin{aligned} \frac{\mathrm{d}\mathbf {P}}{\mathrm{d}{{{\overline{\mathbf {P}}}}}}\ge C(|k|N)^{-(\alpha +1+\delta )}. \end{aligned}$$
(5.24)

Proof of Lemma 5.2

It is obvious that \({\overline{\mathbf {P}}}\) and \(\mathbf {P}\) only differ in the distribution of \(X_{\frac{N}{2}+1}\). We use the notation

$$\begin{aligned} \mathbf {P}\left( X_{\frac{N}{2}+1}=x\right) =p_{x},~\forall x\in \mathbbm {Z} \end{aligned}$$
(5.25)

and write \(h=h_{N}(\frac{N}{2}+1,k)=2kL\) for short. Then

$$\begin{aligned} {\overline{\mathbf {P}}}\left( X_{\frac{N}{2}+1}=x\right) ={\overline{\mathbf {P}}}\left( \overline{X}_{\frac{N}{2}+1}=x-h\right) =p_{x-h}. \end{aligned}$$
(5.26)

The Radon-Nikodym derivative can be written explicitly as

$$\begin{aligned} \frac{\mathrm{d}\mathbf {P}}{\mathrm{d}{{{\overline{\mathbf {P}}}}}}=\sum \limits _{x\in \mathbbm {Z}\setminus \{0,h\}}\mathbbm {1}_{\left\{ X_{\frac{N}{2}+1}=x \right\} }\frac{p_{x}}{p_{x-h}}+\mathbbm {1}_{\left\{ X_{\frac{N}{2}+1}=0\right\} }\frac{p_{0}}{p_{-h}}+\mathbbm {1}_{\left\{ X_{\frac{N}{2}+1}=h \right\} }\frac{p_{h}}{p_{0}}. \end{aligned}$$
(5.27)

The general summand in the sum on the right-hand side of (5.27) equals

$$\begin{aligned} \frac{L(|x|)}{L(|x-h|)}\left| 1-\frac{h}{x}\right| ^{\alpha +1}. \end{aligned}$$
(5.28)

By Potter’s bound (see [7, Theorem 1.5.6]), given a slowly varying function L(x), for any \(\delta >0\) and \(A\ge 1\), there exists some constant \(C=C(\delta ,A)\) such that for all \(x,y\ge A\),

$$\begin{aligned} \frac{L(x)}{L(y)}\ge C\min \left\{ \left( \frac{x}{y}\right) ^{\delta },\left( \frac{x}{y}\right) ^{-\delta }\right\} . \end{aligned}$$
(5.29)
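To illustrate how (5.24) arises from (5.28) and (5.29), consider the single term \(x=h\) in (5.27), say with \(h>0\); recall \(h=2kL\le C|k|N\) for some constant C depending on \(\beta \). Since \(p_{h}\) is of the form \(L(h)h^{-(\alpha +1)}\) while \(p_{0}\) does not depend on N, Potter’s bound gives, with constants \(c,c',C\) depending only on the increment law and \(\delta \),

$$\begin{aligned} \frac{p_{h}}{p_{0}}\ge c\,L(h)h^{-(\alpha +1)}\ge c'h^{-(\alpha +1+\delta )}\ge C(|k|N)^{-(\alpha +1+\delta )}. \end{aligned}$$

The remaining terms are bounded below in the same way on each of the ranges described below.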

We can partition the summation range into \((-\infty ,-1]\), \([1,h-1]\), and \([h+1,\infty )\) (say for \(h>0\); the case \(h<0\) is symmetric) to remove the absolute value in (5.28), and then apply (5.29) to obtain (5.24). Note that the \((\log N)^{2}\) term in L (see (5.9)) can be absorbed by a small adjustment of the power \(\delta \), since it is slowly varying. \(\square \)

Now by (5.22) and Lemma 5.2, for \(\delta =\frac{\epsilon }{2}\) and some constant C, we have

$$\begin{aligned} \begin{array}{ll} Z_{N,\beta }^{\omega }(k)&{}={\overline{\mathbf {E}}}\left[ \frac{\mathrm{d}\mathbf {P}}{\mathrm{d}{{{\overline{\mathbf {P}}}}}}\exp \left( \beta \sum \limits _{n=1}^{N} \omega _{n,\overline{S}_{n}+h_{N}(n,k)}\right) \mathbbm {1}_{\left\{ \overline{S}_{n}\in I_{N}^{0},~\forall n\in \left[ \frac{N}{2}+1,N\right] \right\} }\right] \\ &{}\ge C(MN)^{-(\alpha +1+\frac{\epsilon }{2})}\overline{Z}_{N,\beta }^{\omega }(k), \end{array} \end{aligned}$$
(5.30)

where in the last inequality we use Lemma 5.2 and the property that \(\mathcal {L}_{{{{\overline{\mathbf {P}}}}}}((\overline{X}_{n})_{1\le n\le N})=\mathcal {L}_{\mathbf {P}}((X_{n})_{1\le n\le N})\). Combining (5.21) and (5.30) finishes the proof of Proposition 5.1. \(\square \)

Remark 5.3

In [27], the author also showed that for a Brownian polymer \(B_{t}\) in a continuous Gaussian field, \(B_{t}\) cannot fluctuate on a scale larger than \(\mathcal {O}(N^{\frac{3}{4}})\). However, in Theorem 1.16, we have shown that if the one-step distribution of the random walk has polynomial decay, then even though it is in the domain of attraction of the Gaussian law, it fluctuates on a scale larger than \(\mathcal {O}(N^{1-\epsilon })\) for arbitrarily small \(\epsilon >0\), which is much larger than \(N^{\frac{3}{4}}\). This is a remarkable difference between the long-range model and short-range models such as the Brownian polymer, whose behavior is comparable to that of the nearest-neighbor model.