1 Introduction and Main Results

Recently, there has been substantial work on Ising models on random graphs, as a paradigmatic model for dependent random variables on complex networks. While much work exists on random graphs with independent randomness on the edges or vertices, such as percolation and first-passage percolation (see [25] for a substantial overview of results for these models on random graphs), the dependence between the random variables on the vertices raises many interesting new questions. We refer to [4, 5, 8, 11, 12, 13, 18, 19] for recent results on the Ising model on random graphs, as well as to [25, Chapter 5] and [9] for overviews. The crux of the Ising model is that the variables assigned to the vertices of the random graph prefer to be aligned, thus creating positive dependence. Since the Ising model lives on a random graph, we are dealing with a non-trivial double randomness, of both the spin system and the random environment. While [8, 11, 12, 18] study the quenched setting, in which the random graph is either fixed (random-quenched) or the Boltzmann–Gibbs measure is averaged out with respect to the random medium (averaged–quenched), recently the annealed setting, in which the partition function and the Boltzmann weight are averaged out separately, has attracted substantial attention [4, 5, 13, 19]. The random graph models investigated are rank-1 inhomogeneous random graphs [13, 19], as well as random regular graphs and configuration models [4, 5, 18]. Depending on the model, the annealed critical temperature may differ from the quenched one. However, as predicted by the non-rigorous physics works [15, 22], the annealed Ising model turns out to be in the same universality class as the quenched model in all settings investigated [5, 12, 13].

In this paper, we extend the analysis of the annealed Ising model on inhomogeneous random graphs to its large deviation properties. We investigate the large deviations of the total spin, a classical problem dating back at least to Ellis [16, 17], but we also consider the large deviation properties, under the annealed measure, of purely graph quantities such as the number of edges or the vertex degrees. Such problems are in general difficult, since the strong interaction causes the rate function to become non-convex at low temperatures (\(\beta >\beta _c\)), so the Gärtner–Ellis theorem cannot be used directly.

Our main results provide a formula for the large deviation function of the total spin that holds true even when the hypotheses of the theorem are not satisfied, i.e., at low temperatures. This formula is thus valid for all values of the parameters determining the phase diagram. To overcome the lack of differentiability of the annealed pressure at low temperatures (differentiability being a necessary condition for the application of the Gärtner–Ellis theorem), we use the key property that the annealed Ising model on the generalized random graph can be mapped to an inhomogeneous mean-field (Curie–Weiss) model. As a consequence, the large deviation function of the total spin can be deduced from classical results for independent variables and an application of Varadhan’s lemma. Using this analysis, we also obtain alternative forms of the annealed pressure. In [14], similar techniques have been used to derive the annealed pressure for models with more general spins, including continuous spins on a bounded interval.

The study of large deviations for the number of edges brings to light the fact that, if one focuses solely on graph observables and properties, then annealing can be described in terms of a modified law for the graph. Our results show that in the annealed setting, the typical number of edges present is substantially larger than the typical value under the original law of the graph, thus quantifying the effect that the annealing has on the structure of the random graph involved. As explained in more detail below, one can think of the annealed Ising model on a random graph as giving rise to a random graph with an interesting correlation structure between the edges. To gain more understanding of this correlation structure, we also investigate the degree distribution under the annealed Ising measure. Again we find that the degree of a fixed vertex (or the degree of a uniformly chosen vertex) under the modified graph law has a distribution with a larger mean.

1.1 The Annealed Ising Model on Generalized Random Graphs

We now introduce the model. We first define the specific random graph model, the so-called generalized random graph, and then define the (annealed) Ising model.

1.1.1 Generalized Random Graph

To construct the generalized random graph [3], let \(I_{ij}\) denote the Bernoulli indicator that the edge between vertex i and vertex j is present and let \(p_{ij} = \mathbb {P}\left( I_{ij} = 1\right) \) be the edge probability, where different edges are present independently. Further, consider a sequence of non-negative weights \(\varvec{w} = (w_i)_{i \in [n]}\) whose label i runs through the vertex set \([n]=\{1,\ldots ,n\}\). Then, the generalized random graph, denoted by \(\mathrm {GRG}_{n}(\varvec{w})\), is defined by

$$\begin{aligned} p_{ij} \, = \, \frac{w_i w_j}{\ell _{n} + w_i w_j}, \end{aligned}$$
(1.1)

where \(\ell _{n}= \sum _{i\in [n]} w_i\) is the total weight of all vertices. Denote the law of \(\mathrm {GRG}_{n}(\varvec{w})\) by \(\mathbb P\) and its expectation by \(\mathbb E\). There are many related random graph models (also called rank-1 inhomogeneous random graphs [2]), such as the random graph with specified expected degrees or Chung–Lu model [6, 7] and the Poisson random graph or Norros–Reittu model [23]. Janson [20] shows that many of these models are asymptotically equivalent. Even though his results do not apply to the large deviation properties of these random graphs, all our results also apply to these other models.
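The edge probabilities (1.1) make \(\mathrm {GRG}_{n}(\varvec{w})\) straightforward to simulate. The following Python sketch is purely illustrative and not part of the analysis; the weight sequence is an arbitrary choice of ours, not taken from any of the references. It draws one realization of the graph and records the empirical mean degree:

```python
import random

def sample_grg(w, rng=random):
    """Draw one realization of GRG_n(w): edge {i,j} is present independently
    with probability p_ij = w_i * w_j / (l_n + w_i * w_j), cf. (1.1)."""
    n = len(w)
    l_n = sum(w)  # total weight of all vertices
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            p_ij = w[i] * w[j] / (l_n + w[i] * w[j])
            if rng.random() < p_ij:
                edges.append((i, j))
    return edges

# Illustrative weight sequence (our own choice): weights cycling through {1, 2, 3}.
w = [1.0 + (i % 3) for i in range(200)]
rng = random.Random(0)
edges = sample_grg(w, rng)
mean_degree = 2 * len(edges) / len(w)  # empirical mean degree
```

Since \(p_{ij}\approx w_iw_j/\ell _n\), vertex i has expected degree close to \(w_i\), so the empirical mean degree should be close to \(\mathbb {E}[W_{n}]=2\) for this weight choice.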

We need to assume that the vertex weight sequence \(\varvec{w} = (w_i)_{i \in [n]}\) is sufficiently nicely behaved. Let \(U_{n} \in [n]\) denote a uniformly chosen vertex in \(\mathrm {GRG}_{n}(\varvec{w})\) and \(W_{n} = w_{U_{n}}\) its weight. Then, the following condition defines the asymptotic weight W and sets the convergence properties of \((W_{n})_{n\ge 1}\) to W:

Condition 1.1

(Weight regularity) There exists a random variable W such that, as \(n\rightarrow \infty \),

(a) \(W_{n} {\mathop {\longrightarrow }\limits ^\mathcal{D}} W\), where \( {\mathop {\longrightarrow }\limits ^\mathcal{D}}\) denotes convergence in distribution;

(b) \(\mathbb {E}[W_{n}]=\frac{1}{n}\sum _{i\in [n]} w_{i} \rightarrow \mathbb {E}[W]< \infty \);

(c) \(\mathbb {E}[W_{n}^2]=\frac{1}{n}\sum _{i\in [n]} w_{i}^2 \rightarrow \mathbb {E}[W^2]< \infty \).

Further, we assume that \(\mathbb {E}[W]>0\).

As explained in more detail in [24, Chapter 6], conditions (a) and (b) imply that the empirical degree distribution of the random graph converges to a mixed Poisson distribution with mixing distribution W, i.e., the proportion of vertices with degree k is close to the probability that a Poisson random variable with random parameter W equals k.

We note also that, by uniform integrability, Conditions 1.1(a) and (c) together imply (b).

Notation Throughout this paper, we denote the average w.r.t. the probability measure \({\mu }\) by \(\mathbb E_{\mu }\).

1.1.2 Annealed Ising Model

Let \(\sigma =(\sigma _i)_{i\in [n]} \in \{-1,+1\}^{n} =: \Omega _{n}\) be a spin configuration. Then, for a given graph \(G_{n}=([n],E_{n})\), where \(E_n \subset [n] \times [n]\) denotes the edge set, the Ising model is defined by the following Boltzmann–Gibbs measure

$$\begin{aligned} \mu ^\mathrm{qe}_n(\sigma ) = \frac{1}{Z^{\mathrm {qe}}_{n}(\beta ,B)} \exp \left\{ \beta \sum _{(i,j)\in E_{n}} \sigma _i \sigma _j + B\sum _{i\in [n]} \sigma _i\right\} , \end{aligned}$$
(1.2)

where

$$\begin{aligned} Z^{\mathrm {qe}}_{n}(\beta ,B) = \sum _{\sigma \in \Omega _{n}} \exp \left\{ \beta \sum _{(i,j)\in E_{n}} \sigma _i \sigma _j + B\sum _{i\in [n]} \sigma _i\right\} \end{aligned}$$

is the quenched partition function. Here \(\beta \ge 0\) is the inverse temperature and \(B\in \mathbb {R}\) is the external field. When \(G_{n}\) is a random graph, this is known as the random-quenched Ising model [18].

To obtain the annealed model, we take expectations with respect to the random graph measure in both the numerator and denominator of (1.2), i.e., we define the annealed Ising measure by

$$\begin{aligned} {\mu }^\mathrm{{an}}_{n}(\sigma ) = \frac{\mathbb E\biggl [\exp \biggl \{\beta \sum _{(i,j)\in E_{n}} \sigma _i \sigma _j + B\sum _{i\in [n]} \sigma _i\biggr \}\biggr ]}{Z^{\mathrm {an}}_{n}(\beta ,B)} , \end{aligned}$$
(1.3)

where the annealed partition function \(Z^{\mathrm {an}}_{n}(\beta ,B)\) is equal to

$$\begin{aligned} Z^{\mathrm {an}}_{n}(\beta ,B)= \mathbb E[Z^{\mathrm {qe}}_{n}(\beta ,B)] = \sum _{\sigma \in \Omega _{n}} \mathbb E\biggl [\exp \biggl \{\beta \sum _{(i,j)\in E_{n}} \sigma _i \sigma _j + B\sum _{i\in [n]} \sigma _i\biggr \}\biggr ]. \end{aligned}$$

1.1.3 Previous Results for the Annealed Ising Model on the Generalized Random Graph

In this section, we describe some important results about the annealed Ising model that have been derived previously. An important quantity in the study of the annealed Ising model is the annealed pressure defined by

$$\begin{aligned} \psi ^{\mathrm {an}}_{n}(\beta ,B) = \frac{1}{n}\log Z^{\mathrm {an}}_{n}(\beta ,B). \end{aligned}$$

The thermodynamic limit of this quantity \(\psi ^{\mathrm {an}}(\beta ,B):=\lim _{n\rightarrow \infty } \psi ^{\mathrm {an}}_{n}(\beta ,B)\) is determined in the following theorem:

Theorem 1.2

(Annealed pressure [19]) Suppose that Condition 1.1 holds. Then for all \(0\le \beta < \infty \) and all \(B \in \mathbb R\),

$$\begin{aligned} \psi ^{\mathrm {an}}(\beta ,B) = \log 2 + \alpha (\beta ) + \mathbb E\biggl [\log \cosh \biggl (\sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}} W z^*(\beta ,B) + B\biggr )\biggr ] - z^*(\beta ,B)^2/2, \end{aligned}$$
(1.4)

where \(\alpha (\beta )=\lim _{n\rightarrow \infty } \alpha _{n}(\beta )\), with \(\alpha _{n}(\beta )\) defined in (1.19) below, is given by

$$\begin{aligned} \alpha (\beta ) = \tfrac{1}{2}(\cosh (\beta )-1)\mathbb {E}[W], \end{aligned}$$
(1.5)

and \(z^*(\beta ,B)\) is, for \(B\ne 0\), given by the unique solution with the same sign as B of the fixed-point equation

$$\begin{aligned} z=\mathbb E\biggl [\tanh \biggl (\sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}} W z + B\biggr )\sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}} W\biggr ], \end{aligned}$$
(1.6)

whereas for \(B=0\), \(z^*(\beta ,0) = \lim _{B\searrow 0}z^*(\beta ,B)\).

This theorem is proved in [19, Theorem 1.1]. In Sect. 2.2 we provide an alternative expression for the annealed pressure that is instrumental for our large deviation analysis.

In [19, Theorem 1.1] it is also proved that the annealed Ising model on the generalized random graph has a second-order phase transition at a critical inverse temperature \(\beta ^{\mathrm {an}}_c\) given by

$$\begin{aligned} \beta ^{\mathrm {an}}_c= \mathrm{asinh}\left( \frac{\mathbb {E}[W]}{\mathbb {E}[W^2]}\right) \;. \end{aligned}$$
(1.7)
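The fixed-point equation (1.6) and the critical point (1.7) can be explored numerically. The sketch below is purely illustrative; the two-point weight distribution is our own choice. It solves (1.6) by fixed-point iteration and illustrates that, for small \(B>0\), the solution \(z^*\) is nearly zero for \(\beta <\beta ^{\mathrm {an}}_c\) and strictly positive for \(\beta >\beta ^{\mathrm {an}}_c\):

```python
import math

def z_star(beta, B, w_vals, w_probs, tol=1e-12, max_iter=10_000):
    """Solve the fixed-point equation (1.6) by iteration, for a weight
    distribution W taking values w_vals with probabilities w_probs."""
    EW = sum(p * w for w, p in zip(w_vals, w_probs))
    a = math.sqrt(math.sinh(beta) / EW)
    z = 1.0  # start positive to select the branch with the sign of B
    for _ in range(max_iter):
        z_new = sum(p * math.tanh(a * w * z + B) * a * w
                    for w, p in zip(w_vals, w_probs))
        if abs(z_new - z) < tol:
            return z_new
        z = z_new
    return z

# Illustrative weight distribution (our own choice): W = 1 or 3, each w.p. 1/2.
w_vals, w_probs = (1.0, 3.0), (0.5, 0.5)
EW, EW2 = 2.0, 5.0
beta_c = math.asinh(EW / EW2)  # critical inverse temperature (1.7)

z_high = z_star(0.5 * beta_c, 1e-8, w_vals, w_probs)  # high temperature: z* ~ 0
z_low = z_star(2.0 * beta_c, 1e-8, w_vals, w_probs)   # low temperature: z* > 0
```

For \(\beta <\beta ^{\mathrm {an}}_c\) the iteration map has derivative \(\sinh (\beta )\mathbb {E}[W^2]/\mathbb {E}[W]<1\) at the origin, so the iteration contracts to \(z^*\approx 0\) as \(B\searrow 0\), while above \(\beta ^{\mathrm {an}}_c\) a strictly positive solution survives.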

Denote by

$$\begin{aligned} S_{n} = \sum _{i\in [n]} \sigma _i, \end{aligned}$$

the total spin, and by

$$\begin{aligned} M^{\mathrm {an}}_n(\beta ,B)=\mathbb {E}_{{\mu }^\mathrm{{an}}_{n}} \biggl (\frac{S_{n}}{{n}} \biggr ), \end{aligned}$$

the finite-volume annealed magnetization. It is shown in [19, Theorems 1.2 and 1.3] that a strong law of large numbers (SLLN) and a central limit theorem (CLT) hold for the total spin:

Theorem 1.3

(SLLN and CLT [19]) Suppose that Condition 1.1 (a)–(c) hold. Define the uniqueness regime of the parameters \((\beta ,B)\) by

$$\begin{aligned} \mathcal {U} = \left\{ (\beta ,B) \,:\, \beta \ge 0, B\ne 0 \mathrm{\ or\ } 0<\beta <\beta ^{\mathrm {an}}_c, B=0 \right\} , \end{aligned}$$

and suppose that \((\beta ,B)\in \mathcal {U} \). Then, for all \(\varepsilon >0\) there exists a constant \(L=L(\varepsilon )>0\) such that, for all n sufficiently large,

$$\begin{aligned} \mathbb {P}_{{\mu }^\mathrm{{an}}_{n}} \biggl (\Bigl |\frac{1}{n} S_{n} - M^{\mathrm {an}} \Bigr |>\varepsilon \biggr ) \le {\mathrm e}^{-nL}, \end{aligned}$$

where

$$\begin{aligned} M^{\mathrm {an}}(\beta ,B) =\mathbb E\biggl [\tanh \biggl (\sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}} W z^*(\beta ,B) + B\biggr )\biggr ], \end{aligned}$$

with \(z^*(\beta ,B)\) the solution of (1.6), equals the annealed magnetization, that is, \(M^{\mathrm {an}}(\beta ,B)=\lim _{n\rightarrow \infty } M^{\mathrm {an}}_n(\beta ,B)\).

Furthermore,

$$\begin{aligned} \frac{S_{n} - \mathbb {E}_{{\mu }^\mathrm{{an}}_n} (S_{n})}{\sqrt{n}} {\mathop {\longrightarrow }\limits ^\mathcal{D}} \mathcal {N}(0,\chi ^{\mathrm {an}}), \qquad \mathrm{w.r.t.\ }\quad {\mu }^\mathrm{{an}}_{n} \mathrm{\ as\ } n\rightarrow \infty , \end{aligned}$$

where \(\chi ^{\mathrm {an}}(\beta ,B)=\frac{\partial }{\partial B} M^{\mathrm {an}}(\beta ,B)\) is the annealed susceptibility and \(\mathcal {N}(0,\sigma ^2)\) denotes a centered normal random variable with variance \(\sigma ^2\).

Analogously one can define the random quenched pressure:

$$\begin{aligned} \psi ^{\mathrm {qe}}(\beta ,B) = \lim _{n\rightarrow \infty } \psi ^{\mathrm {qe}}_{n}(\beta ,B) = \lim _{n\rightarrow \infty } \frac{1}{n} \log Z^{\mathrm {qe}}_{n}(\beta ,B). \end{aligned}$$

This has been determined for the GRG as well as other locally tree-like random graph models in [8, 11], where it is also proven that \(\psi ^{\mathrm {qe}}(\beta ,B)\) is a non-random quantity. An SLLN and CLT for the total spin w.r.t. \(\mu ^\mathrm{qe}_n\) have been obtained in [18]. In general, the quenched and annealed pressures are different, and also the critical temperatures of the models are different. The only exception that we are aware of is the random regular graph (see [4]). The critical temperature in the quenched setting will be denoted by \(\beta ^{\mathrm {qe}}_c\).

1.2 Main Results

In this paper, we study the spin sum in more detail (i.e., beyond the CLT scale) and prove a large deviation principle for \(S_{n}\), as well as for a weighted version that plays a crucial role in the annealed Ising model. Let us start by recalling what a large deviation principle is. Given a sequence of random variables \((X_n)_{n\ge 1}\) taking values in the measurable space \((\mathcal{X}, \mathcal{B})\), with \(\mathcal{X}\) a topological space and \(\mathcal{B}\) a \(\sigma \)-field of subsets of \(\mathcal{X}\), the large deviation principle (LDP) is defined as follows:

Definition 1.4

(Large deviation principle [10]) We say that \((X_n)_{n\ge 1}\) satisfies an LDP with rate function I(x) and speed \(n\) w.r.t. a sequence of probability measures \({(\mathbb P_{n})_{n\ge 1}}\) if, for all \(F\in \mathcal{B}\),

$$\begin{aligned} -\inf _{x\in F^o} I(x)\le \liminf _{n\rightarrow \infty } \frac{1}{n} \log \mathbb P_{{n}}(X_n\in F)\le \limsup _{n\rightarrow \infty } \frac{1}{n} \log \mathbb P_{{n}}(X_n\in F)\le -\inf _{x\in \bar{F}} I(x), \end{aligned}$$

where \(F^o\) denotes the interior of F and \(\bar{F}\) its closure.

In this definition \(I:\mathcal{X}\rightarrow [0, \infty ]\) is a lower semicontinuous function. Our first main result is an LDP for the total spin in the high-temperature regime for both the random quenched and the annealed Ising model.

Theorem 1.5

(Total spin LDPs in high-temperature regime) In the annealed Ising model, under Condition 1.1, the total spin \(S_{n}\) satisfies an LDP w.r.t. \({\mu }^{\mathrm {an}}_{n}\) for \(\beta \le \beta ^{\mathrm {an}}_c\) and \(B\in \mathbb {R}\), with rate function

$$\begin{aligned} {{I}^{\mathrm {an}} }(x)= \sup _{t}\left\{ x\, t - {\psi ^{\mathrm {an}}(\beta ,B+t)}\right\} + \psi ^{\mathrm {an}}(\beta ,B). \end{aligned}$$
(1.8)

In the random quenched Ising model, under Condition 1.1, the total spin \(S_{n}\) also satisfies an LDP w.r.t. \(\mu ^\mathrm{qe}_n\) for \(\beta \le \beta ^{\mathrm {qe}}_c\) and \(B\in \mathbb {R}\), with rate function

$$\begin{aligned} {{I}^{\mathrm {qe}}}(x)= \sup _{t}\left\{ x\, t - {\psi ^{\mathrm {qe}}(\beta ,B+t)}\right\} + \psi ^{\mathrm {qe}}(\beta ,B). \end{aligned}$$

The proof of Theorem 1.5 is highly general, and applies to settings where the pressure is known to exist and to be differentiable. As such, the proof is essentially identical for the annealed and quenched Ising models on \(\mathrm {GRG}_{n}(\varvec{w})\).

For the annealed Ising model we also prove an LDP for all positive temperatures. For this, we also introduce the total weighted spin

$$\begin{aligned} S^{\scriptscriptstyle (w)}_{n}=\sum _{i\in [n]} w_{i}\sigma _{i}. \end{aligned}$$

Theorem 1.6

(LDPs for the annealed Ising model and alternative forms of the pressure) For all \(\beta \ge 0\) and \(B\in \mathbb {R}\), under Condition 1.1, the pair \((S_{n}, S^{\scriptscriptstyle (w)}_{n})\) satisfies an LDP w.r.t. \({\mu }^\mathrm{{an}}_{n}\) with rate function

$$\begin{aligned} I^{\mathrm {an}}_{\beta ,B}(x_1,x_2)=I(x_1,x_2) - \frac{\sinh (\beta )}{2\mathbb E[W]}x_2^{2}- B x_1 - \log 2 - \alpha (\beta )+ \psi ^{\mathrm {an}}(\beta ,B), \end{aligned}$$
(1.9)

where

$$\begin{aligned} I(x_1,x_2)= \sup _{(t_1,t_2)} \left( t_1 x_1 + t_2 x_2 - \mathbb E[\log \cosh (t_1 + W t_2)]\right) , \end{aligned}$$

and \(\psi ^{\mathrm {an}}(\beta ,B)\) is an alternative form of the annealed pressure given by

$$\begin{aligned} \psi ^{\mathrm {an}}(\beta ,B) = -\inf _{(x_1,x_2)} \left( I(x_1,x_2) - \frac{\sinh (\beta )}{2\mathbb E[W]}x_2^{2}- B x_1 - \log 2 - \alpha (\beta )\right) . \end{aligned}$$
(1.10)

Furthermore, \((S_{n}, S^{\scriptscriptstyle (w)}_{n})\) satisfies an LDP whose rate function has the alternative expression

$$\begin{aligned} I^{{\mathrm {an}}\scriptscriptstyle (B)}_{\beta ,B}(x_1,x_2)= I^{\scriptscriptstyle (B)}(x_1,x_2) - \frac{\sinh (\beta )}{2\mathbb E[W]} x_2^{2} -\log \cosh B-\log 2-\alpha (\beta ) + \psi ^{\mathrm {an}}(\beta ,B), \end{aligned}$$
(1.11)

where

$$\begin{aligned} I^{\scriptscriptstyle (B)}(x_1,x_2) = \sup _{(t_1,t_2)} \left( t_1 x_1+t_2 x_2 - \mathbb E[ \log \cosh (B + t_1 + W t_2)] \right) + \log \cosh B, \end{aligned}$$

and \(\psi ^{\mathrm {an}}(\beta ,B)\) is an alternative form of the annealed pressure given by

$$\begin{aligned} \psi ^{\mathrm {an}}(\beta ,B) = -\inf _{(x_1,x_2)} \left( I^{\scriptscriptstyle (B)}(x_1,x_2) - \frac{\sinh (\beta )}{2\mathbb E[W]} x_2^{2} -\log \cosh B-\log 2-\alpha (\beta ) \right) . \end{aligned}$$
(1.12)

Naturally, in the high-temperature setting, the large deviation rate functions in (1.8) and (1.9) [or (1.11)] coincide after the application of a contraction principle. Combining Theorems 1.2 and 1.6 we see that the annealed pressure is either given by the optimization of a real function [as in (1.4)] or it can be expressed as the solution of a two-dimensional variational problem [as in (1.10) or (1.12)]. In Sect. 2.2 we shall prove Theorem 1.2 starting from Theorem 1.6, thus obtaining that the expressions for the annealed pressure do coincide.

We next discuss the LDP for the total number of edges in the annealed Ising model on \(\mathrm {GRG}_{n}(\varvec{w})\):

Theorem 1.7

(LDPs for the edges in the annealed Ising model) Suppose that Condition 1.1 holds. For all \(\beta \ge 0\) and \(B\in \mathbb {R}\), the total number of edges \(|E_{n}|\) satisfies an LDP w.r.t. \({\mu }^\mathrm{{an}}_{n}\), with rate function given by the Legendre transform of the function that is explicitly computed in (3.18) below. Further, the number of edges under the annealed Ising model on \(\mathrm {GRG}_{n}(\varvec{w})\) satisfies

$$\begin{aligned} \frac{1}{n}|E_{n}|{\mathop {\longrightarrow }\limits ^{\scriptscriptstyle {\mathbb P}}}\frac{1}{2} {z^*(\beta ,B)}^2 + \frac{1}{2} \cosh (\beta ) \mathbb E{[W]}. \end{aligned}$$
(1.13)

We continue by investigating the limiting distribution of the degrees of vertices. Our main result is as follows:

Theorem 1.8

(Degrees in the annealed Ising model) Suppose that Condition 1.1 holds. For all \(\beta \ge 0\) and \(B\in \mathbb {R}\), the moment generating function of the degree \(D_j\) of vertex j under \({\mu }^\mathrm{{an}}_n\) satisfies

$$\begin{aligned} \mathbb E_{{\mu }^\mathrm{{an}}_n}\big [{\mathrm e}^{tD_j}\big ]= (1+o(1)) {\mathrm e}^{\cosh (\beta ) w_j ({\mathrm e}^t-1)}\frac{\cosh \Big (z^*(\beta ,B){\mathrm e}^t w_j \sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}} +B \Big )}{\cosh \Big (z^*(\beta ,B) w_j \sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}} +B \Big )}. \end{aligned}$$
(1.14)

Consequently, the degree \(D_{U}\) of a uniformly chosen vertex satisfies

$$\begin{aligned} \lim _{n\rightarrow \infty } \mathbb E_{{\mu }^\mathrm{{an}}_n}\big [{\mathrm e}^{tD_{U}}\big ]= \mathbb E\bigg [{\mathrm e}^{\cosh (\beta ) W ({\mathrm e}^t-1)}\frac{\cosh \Big (z^*(\beta ,B) {\mathrm e}^t W \sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}} +B \Big )}{\cosh \Big (z^*(\beta ,B) W\sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}} +B \Big )}\bigg ]. \end{aligned}$$
(1.15)

In the above, \(z^*(\beta ,B)\) is the solution to (1.6).

We remark that in (1.15) we take the average w.r.t. both the annealed measure \({\mu }^\mathrm{{an}}_n\) and the uniform vertex \(U\in [n]\).

Remark 1.9

(Degree distribution annealed Ising model) We can restate (1.15) as

$$\begin{aligned} \frac{1}{n}\sum _{v\in [n]} \mathbb E_{{\mu }^\mathrm{{an}}_n}\big [{\mathrm e}^{tD_{v}}\big ]\rightarrow \mathbb E\bigg [{\mathrm e}^{\cosh (\beta ) W ({\mathrm e}^t-1)}\frac{\cosh \Big (z^*(\beta ,B) {\mathrm e}^t W \sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}} +B \Big )}{\cosh \Big (z^*(\beta ,B) W\sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}} +B \Big )}\bigg ]. \end{aligned}$$
(1.16)

In (1.14), we see that the moment generating function of the degree of a vertex with weight w is close to

$$\begin{aligned} {\mathrm e}^{\cosh (\beta ) w ({\mathrm e}^t-1)}\frac{\cosh \Big (z^*(\beta ,B){\mathrm e}^t w \sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}} +B \Big )}{\cosh \Big (z^*(\beta ,B) w\sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}} +B \Big )}. \end{aligned}$$

We recognize \({\mathrm e}^{\cosh (\beta ) w ({\mathrm e}^t-1)}\) as the moment generating function of a Poisson random variable with mean \(\cosh (\beta ) w\), which is multiplied by another factor. This factor, however, turns out not to be a moment generating function itself.

Setting \(a(\beta )=\sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}}\) and \(z^*=z^*(\beta ,B)\) to lighten notation, we can rewrite the product of the second and third factors on the r.h.s. of (1.14) as

$$\begin{aligned} \frac{ {\mathrm e}^{ w_j a(\beta ) z^* + B} \,{\mathrm e}^{w_j (\cosh (\beta ) + a(\beta )z^*) ({\mathrm e}^t-1)} + {\mathrm e}^{ -(w_j a(\beta ) z^* + B)} \,{\mathrm e}^{w_j (\cosh (\beta ) - a(\beta )z^*) ({\mathrm e}^t-1)} }{2 \cosh \Big ( w_j a(\beta ) z^* + B \Big )}. \end{aligned}$$

This shows that the limiting distribution of \(D_j\) is that of a mixed Poisson random variable with random parameter \(w_j (\cosh (\beta ) +Ya(\beta )z^*)\), where

$$\begin{aligned} \mathbb P(Y=1)=1-\mathbb P(Y=-1)=\frac{{\mathrm e}^{ w_j a(\beta ) z^* + B}}{2 \cosh \Big ( w_j a(\beta ) z^* + B \Big )}, \end{aligned}$$

provided \(w_j (\cosh (\beta )\pm a(\beta )z^*)\) are both positive. We have, as yet, no more detailed interpretation of these two mixture components.
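The mixture representation above can be checked numerically: the limiting MGF in (1.14) coincides with a two-component mixture of Poisson MGFs with means \(w(\cosh (\beta )\pm a(\beta )z^*)\) and mixing probability \({\mathrm e}^{wa(\beta )z^*+B}/(2\cosh (wa(\beta )z^*+B))\). The sketch below uses arbitrary illustrative values of \(\beta \), B, w and \(z^*\) (the identity is algebraic, so \(z^*\) need not solve (1.6) for this check):

```python
import math

# Arbitrary illustrative parameter values (not from the paper).
beta, B, EW, w, z = 0.7, 0.3, 2.0, 1.5, 0.9
a = math.sqrt(math.sinh(beta) / EW)
A = a * z * w  # shorthand for a(beta) * z* * w

def mgf_direct(t):
    """Limiting MGF of the degree of a weight-w vertex, as in (1.14)."""
    return (math.exp(math.cosh(beta) * w * (math.exp(t) - 1))
            * math.cosh(A * math.exp(t) + B) / math.cosh(A + B))

def mgf_mixture(t):
    """Mixture of two Poisson MGFs with means w*(cosh(beta) ± a*z)."""
    p = math.exp(A + B) / (2 * math.cosh(A + B))  # P(Y = 1)
    lam_plus = w * (math.cosh(beta) + a * z)
    lam_minus = w * (math.cosh(beta) - a * z)
    return (p * math.exp(lam_plus * (math.exp(t) - 1))
            + (1 - p) * math.exp(lam_minus * (math.exp(t) - 1)))

diffs = [abs(mgf_direct(t) - mgf_mixture(t)) for t in (-1.0, 0.0, 0.5, 1.0)]
```

The two expressions agree to machine precision, confirming the mixed Poisson reading of (1.14).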

Let us next relate Theorem 1.8 to Theorem 1.7. We can use (1.15) to show that, as in (1.13),

$$\begin{aligned} \mathbb E_{{\mu }^\mathrm{{an}}_n} \left[ \frac{1}{n}|E_{n}|\right] \rightarrow \frac{1}{2} {z^*(\beta ,B)}^2 + \frac{1}{2} \cosh (\beta ) \mathbb E{[W]}. \end{aligned}$$

Indeed, note that

$$\begin{aligned} \mathbb E_{{\mu }^\mathrm{{an}}_n} \left[ \frac{1}{n}|E_{n}|\right] =\tfrac{1}{2}\mathbb E_{{\mu }^\mathrm{{an}}_n}[D_{U}]=\frac{1}{2} \frac{d}{dt} \mathbb E_{{\mu }^\mathrm{{an}}_n}\big [{\mathrm e}^{tD_{U}}\big ]\Big |_{t=0}. \end{aligned}$$

Here, in the middle formula, we again take the average w.r.t. both \({\mu }^\mathrm{{an}}_n\) as well as the uniform vertex \(U\in [n]\). Convergence of the moment-generating function implies convergence of all moments, so that

$$\begin{aligned} \lim _{n\rightarrow \infty } \mathbb E_{{\mu }^\mathrm{{an}}_n} \left[ \frac{1}{n}|E_{n}|\right]= & {} \frac{1}{2}\frac{d}{dt} \mathbb E\left[ {\mathrm e}^{\cosh (\beta ) W ({\mathrm e}^t-1)}\frac{\cosh \Big (z^*(\beta ,B){\mathrm e}^t W\sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}}+B\Big )}{\cosh \Big (z^*(\beta ,B) W\sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}}+B\Big )}\right] \Big |_{t=0}\nonumber \\= & {} \frac{1}{2} \cosh (\beta ) \mathbb E{[W]} +\frac{1}{2}z^*(\beta ,B)\mathbb E\left[ W\sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}}\tanh \left( z^*(\beta ,B)\right. \right. \nonumber \\&\times \, \left. \left. W\sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}}+B\right) \right] \nonumber \\= & {} \frac{1}{2} \cosh (\beta ) \mathbb E{[W]} + \frac{1}{2} z^*(\beta ,B)^2, \end{aligned}$$
(1.17)

as required, where we have made use of (1.6) in the last step. Thus, for (1.13), it suffices to prove that \(\frac{1}{n}|E_{n}|\) is concentrated. \(\square \)
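The computation in (1.17) can be verified numerically by differentiating the limiting MGF (1.15) at \(t=0\) with finite differences and comparing with \(\tfrac{1}{2}\cosh (\beta )\mathbb E[W]+\tfrac{1}{2}z^*(\beta ,B)^2\). The sketch below uses an illustrative two-point weight distribution of our own choosing and solves (1.6) by fixed-point iteration:

```python
import math

# Illustrative weight law (our own choice): W = 1 or 3, each w.p. 1/2.
w_vals, w_probs = (1.0, 3.0), (0.5, 0.5)
EW = sum(p * w for w, p in zip(w_vals, w_probs))
beta, B = 0.8, 0.2
a = math.sqrt(math.sinh(beta) / EW)

# Solve the fixed-point equation (1.6) by iteration.
z = 1.0
for _ in range(10_000):
    z = sum(p * math.tanh(a * w * z + B) * a * w for w, p in zip(w_vals, w_probs))

def mgf_degree(t):
    """Limiting MGF of the degree of a uniform vertex, as in (1.15)."""
    return sum(p * math.exp(math.cosh(beta) * w * (math.exp(t) - 1))
               * math.cosh(a * z * w * math.exp(t) + B) / math.cosh(a * z * w + B)
               for w, p in zip(w_vals, w_probs))

h = 1e-6
mean_degree = (mgf_degree(h) - mgf_degree(-h)) / (2 * h)  # d/dt at t = 0
edge_density = mean_degree / 2                            # limit of |E_n| / n
predicted = 0.5 * math.cosh(beta) * EW + 0.5 * z ** 2     # r.h.s. of (1.17)
```

The numerical derivative reproduces \(\tfrac{1}{2}\cosh (\beta )\mathbb E[W]+\tfrac{1}{2}z^*(\beta ,B)^2\), in line with the use of (1.6) in the last step of (1.17).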

In the next theorem, we extend Theorem 1.8 to several vertices:

Theorem 1.10

(Degrees of m vertices in the annealed Ising model) Suppose that Condition 1.1 holds. For all \(\beta \ge 0\) and \(B\in \mathbb {R}\) and \(m\in \mathbb {N}\), the moment generating function of the degrees \((D_1,D_2,\ldots , D_m)\) under \({\mu }^\mathrm{{an}}_n\) satisfies

$$\begin{aligned} \mathbb E_{{\mu }^\mathrm{{an}}_n}\Big [{\mathrm e}^{\sum _{i=1}^m t_i D_i} \Big ] = \prod _{i=1}^m {\mathrm e}^{ \cosh (\beta ) w_i ({\mathrm e}^{t_i}-1) }\prod _{i=1}^m\frac{\cosh \Big (z^*(\beta ,B) {\mathrm e}^{t_i} w_i\sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}} +B \Big )}{\cosh \Big (z^*(\beta ,B) w_i\sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}} +B \Big )} (1+o(1)). \end{aligned}$$

Theorem 1.10 implies that the degrees of different vertices under the annealed measure are asymptotically independent.

1.3 Discussion

In this section, we discuss our results and state some further conjectures.

1.3.1 Random-Quenched LDP

For the random-quenched model we only obtain an LDP in the high-temperature regime. The difficulty in this analysis is that the rate function is non-convex at low temperature. This means that the usual technique relying on the Gärtner–Ellis theorem, i.e., taking the Legendre transform of the cumulant generating function, does not work. The cumulant generating function can easily be expressed in terms of the difference of the pressure for different values of the external field B. However, its Legendre transform is convex, and hence can only recover the convex envelope of the rate function. This raises the question of how to obtain an LDP for all inverse temperatures \(\beta \).

1.3.2 Averaged–Quenched LDP

The averaged–quenched measure is defined as \(\mathbb {E}[\mu ^\mathrm{qe}_n(\sigma )]\) (recall (1.2)). Here, even in the high-temperature regime, difficulties arise since the averaged–quenched cumulant generating function is not a difference of pressures. Independently of the explicit computation, an interesting question is whether it is possible to relate the random-quenched and the averaged–quenched large deviation rate functions.

1.3.3 Large Deviations of Random Graph Quantities

As already mentioned in the introduction, if one is interested only in graph quantities, then the effect of the annealing amounts to changing the graph law from \(\mathbb {P}\) (the law of \(\mathrm {GRG}_{n}(\varvec{w})\)) to a new law \(\mathbb {P}_{\beta ,B}\) depending on the two parameters \(\beta \) and B. Evidently \(\lim _{\beta \rightarrow 0, B\rightarrow 0} \mathbb {P}_{\beta ,B} = \mathbb {P}\). We know that under the law \(\mathbb {P}\) a uniform vertex has an asymptotic degree with a mixed Poisson distribution with mixing distribution W, see e.g. [24, Theorem 6.10]. In Theorem 1.8, we derive the asymptotic moment generating function of a uniform degree under \(\mathbb {P}_{\beta ,B}\), see (1.15). From this formula, we see that a single degree is still mixed Poisson distributed (provided the technical condition discussed in Remark 1.9 holds true), however with important differences. In particular, in zero external field \(B=0\), the moment generating function of a uniform degree changes in two ways: firstly, in the high-temperature regime, the mixing distribution changes to \(W\cosh (\beta )\) (since \(z^{*}(\beta ,0)=0\) there); secondly, in the low-temperature regime a new effect appears due to the non-zero value of \(z^{*}(\beta ,0)\). It would be of interest to invert the moment generating function (1.15) and thus explicitly characterize the distribution of a uniform degree at low temperatures. This can be done once we know that \(\cosh (\beta )- {\sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}}}z^*(\beta ,0)\) is non-negative (see Remark 1.9), but we do not know this to be true in general. Also, as of yet, we have no interpretation for this novel mixed Poisson distribution for the degrees. It might also be interesting to investigate other properties of the random graph under the annealed Ising model.
An example would be the distribution of triangles, for which the positive dependence of edges enforced by the annealed Ising model might have a pronounced effect. A further interesting problem is to identify the large deviation rate function in a joint LDP for both the spin as well as the total number of edges.

1.3.4 Organisation of this Paper

We start in Sect. 1.4 by describing an enlightening computation that is at the heart of our analysis. In Sect. 2, we derive the LDP for the total spin and the total weighted spin. In Sect. 3, we investigate the large deviation properties, as well as the weak convergence, of the number of edges in the annealed Ising model, thus quantifying the statement that under the annealed Ising model, there are more edges in the graph than for the typical graph. In Sect. 4, we investigate the degree distribution under the annealed Ising model. Finally in the Appendix we re-derive the LDP for the total spin by combinatorial arguments.

1.4 Preliminaries: An Enlightening Computation

Our large deviation results are obtained from exact expressions for moment generating functions of spin or of edge variables under the annealed \(\mathrm {GRG}_{n}(\varvec{w})\) measure. Such exact expressions follow from the observation (already contained in [19, Sect. 2.1]) that the annealed \(\mathrm {GRG}_{n}(\varvec{w})\) measure can be identified with an inhomogeneous Ising model on the complete graph, which is called the rank-1 inhomogeneous Curie–Weiss model in [19]. In this paper, we extend such computations significantly, for example by also including the edge statuses. We can write the numerator in the definition (1.3) of \({\mu }^\mathrm{{an}}_{n}\) as

$$\begin{aligned} \mathbb E\biggl [\exp \biggl \{\beta \sum _{(i,j)\in E_{n}} \sigma _i \sigma _j + B\sum _{i\in [n]} \sigma _i\biggr \}\biggr ]&=\mathbb E\biggl [\exp \biggl \{\beta \sum _{1\le i< j \le {n}}I_{ij} \sigma _i \sigma _j + B\sum _{i\in [n]} \sigma _i\biggr \}\biggr ]\\&= {\mathrm e}^{B\sum _{i\in [n]} \sigma _i} \prod _{i<j} \mathbb E\biggl [{\mathrm e}^{\beta I_{ij} \sigma _i \sigma _j }\biggr ] \\&= {\mathrm e}^{B\sum _{i\in [n]} \sigma _i} \prod _{i<j} \biggl [{\mathrm e}^{\beta \sigma _i \sigma _j }p_{ij} +1-p_{ij}\biggr ], \end{aligned}$$

where we have used the independence of the edges in the second equality. Define

$$\begin{aligned} \beta _{ij} = \frac{1}{2} \log \frac{1+p_{ij}({\mathrm e}^\beta -1)}{1+p_{ij}({\mathrm e}^{-\beta }-1)}, \qquad \mathrm{and} \qquad C_{ij} =\frac{1+p_{ij}(\cosh (\beta )-1)}{\cosh (\beta _{ij})}. \end{aligned}$$
(1.18)

Then, we can write

$$\begin{aligned} {\mathrm e}^{\beta \sigma _i \sigma _j }p_{ij} +1-p_{ij} = C_{ij}{\mathrm e}^{\beta _{ij}\sigma _i\sigma _j}. \end{aligned}$$
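Since \(\sigma _i\sigma _j\in \{-1,+1\}\), the choice of \(\beta _{ij}\) and \(C_{ij}\) in (1.18) is exactly what makes this identity hold for both values. A quick numerical verification (the test values of \(\beta \) and \(p_{ij}\) are arbitrary illustrative choices):

```python
import math

def beta_C(beta, p):
    """Compute beta_ij and C_ij from (1.18) for edge probability p."""
    beta_ij = 0.5 * math.log((1 + p * (math.exp(beta) - 1))
                             / (1 + p * (math.exp(-beta) - 1)))
    C_ij = (1 + p * (math.cosh(beta) - 1)) / math.cosh(beta_ij)
    return beta_ij, C_ij

errors = []
for beta in (0.3, 1.0, 2.5):
    for p in (0.05, 0.4, 0.9):
        b_ij, C = beta_C(beta, p)
        for s in (+1, -1):  # s plays the role of sigma_i * sigma_j
            lhs = math.exp(beta * s) * p + 1 - p
            rhs = C * math.exp(b_ij * s)
            errors.append(abs(lhs - rhs))
```

The two sides agree to machine precision: the ratio of the two cases \(s=\pm 1\) determines \(\beta _{ij}\) and their sum determines \(C_{ij}\), which is how (1.18) arises.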

Hence, also using the symmetry \(\beta _{ij}=\beta _{ji}\),

$$\begin{aligned} \mathbb E\left[ \exp \left\{ \beta \sum _{(i,j)\in E_{n}} \sigma _i \sigma _j + B\sum _{i\in [n]} \sigma _i\right\} \right]&= G_n(\beta ) \; {\mathrm e}^{\frac{1}{2} \sum _{i,j\in [n]} \beta _{ij} \sigma _i\sigma _j+B\sum _{i\in [n]} \sigma _i}, \end{aligned}$$

where

$$\begin{aligned} G_n(\beta ) = \left( \prod _{1\le i<j \le n} C_{ij}\right) \left( \prod _{i\in [n]}{\mathrm e}^{-\beta _{ii}/2}\right) \end{aligned}$$

and \(\beta _{ii}\) is defined as in (1.18) with \(p_{ii} = w_i^2 / (\ell _{n}+w_i^2)\). Defining

$$\begin{aligned} \alpha _{n}(\beta ) = \frac{1}{n} \log G_n(\beta ) \end{aligned}$$
(1.19)

one has

$$\begin{aligned} \mathbb E\left[ \exp \left\{ \beta \sum _{(i,j)\in E_{n}} \sigma _i \sigma _j + B\sum _{i\in [n]} \sigma _i\right\} \right] = {\mathrm e}^{n\alpha _{n}(\beta )} {\mathrm e}^{\frac{1}{2} \sum _{i,j\in [n]} \beta _{ij} \sigma _i\sigma _j+B\sum _{i\in [n]} \sigma _i}. \end{aligned}$$
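Before the \(o(n)\) approximation in (1.20) is made, the identity above is exact for every spin configuration. A brute-force check for a small system (the weights and parameters below are illustrative) confirms it:

```python
import math
from itertools import product

def beta_ij(beta, p):
    # Coupling from (1.18)
    return 0.5 * math.log((1 + p * (math.exp(beta) - 1))
                          / (1 + p * (math.exp(-beta) - 1)))

def C_ij(beta, p):
    # Prefactor from (1.18)
    return (1 + p * (math.cosh(beta) - 1)) / math.cosh(beta_ij(beta, p))

beta, B = 0.7, 0.2          # illustrative values
w = [1.0, 2.0, 1.5]         # illustrative weights
n, ell = len(w), sum(w)
# p_ij = w_i w_j / (ell_n + w_i w_j); the diagonal matches p_ii in the text
p = [[w[i] * w[j] / (ell + w[i] * w[j]) for j in range(n)] for i in range(n)]

# G_n(beta) as in the text, including the diagonal correction e^{-beta_ii/2}
G = math.prod(C_ij(beta, p[i][j]) for i in range(n) for j in range(i + 1, n))
G *= math.prod(math.exp(-beta_ij(beta, p[i][i]) / 2) for i in range(n))

for sigma in product((-1, 1), repeat=n):
    lhs = math.exp(B * sum(sigma)) * math.prod(
        math.exp(beta * sigma[i] * sigma[j]) * p[i][j] + 1 - p[i][j]
        for i in range(n) for j in range(i + 1, n))
    quad = 0.5 * sum(beta_ij(beta, p[i][j]) * sigma[i] * sigma[j]
                     for i in range(n) for j in range(n))
    rhs = G * math.exp(quad + B * sum(sigma))
    assert abs(lhs - rhs) < 1e-10 * lhs
```

The diagonal terms \(\beta_{ii}\) contribute \(\frac12\sum_i \beta_{ii}\) to the double sum (since \(\sigma_i^2=1\)), which is exactly cancelled by the factor \(\prod_i {\mathrm e}^{-\beta_{ii}/2}\) in \(G_n(\beta)\).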

We observe that the exponent \(\frac{1}{2} \sum _{i,j\in [n]} \beta _{ij} \sigma _i\sigma _j+B\sum _{i\in [n]} \sigma _i\) can be regarded as the Hamiltonian of an inhomogeneous Curie–Weiss model with couplings given by \((\beta _{ij})_{ij}\). Thus, the annealed Ising model on the \(\mathrm {GRG}_{n}(\varvec{w})\) is equivalent to such an inhomogeneous model, see [13, 19]. Moreover, since \(\beta _{ij}\) is close to factorizing into contributions due to i and to j, one can prove [13, 19] that:

$$\begin{aligned} \mathbb E\left[ \exp \left\{ \beta \sum _{(i,j)\in E_{n}} \sigma _i \sigma _j + B\sum _{i\in [n]} \sigma _i\right\} \right] = {\mathrm e}^{n\alpha _n(\beta )} {\mathrm e}^{\frac{1}{2} \frac{\sinh (\beta )}{\ell _n} \left( \sum _{i} w_i \sigma _i\right) ^2 +B\sum _{i\in [n]} \sigma _i + o(n)}. \end{aligned}$$
(1.20)

This computation shows that, in the large-n limit, the annealed measure \({\mu }^\mathrm{{an}}_{n}\) at inverse temperature \(\beta \) is close to the Boltzmann–Gibbs measure \({{\mu }}^{\scriptscriptstyle {\mathrm {ICW}}}_n\) of the rank-1 inhomogeneous Curie–Weiss model at inverse temperature \(\tilde{\beta }=\sinh (\beta )\), given by

$$\begin{aligned} {{\mu }}^{\scriptscriptstyle {\mathrm {ICW}}}_n(\sigma ) = \frac{\exp (H^{{\scriptscriptstyle {\mathrm {ICW}}}}_n(\sigma ))}{Z^{\scriptscriptstyle \mathrm {ICW}}_{n}(\tilde{\beta },B)} \end{aligned}$$
(1.21)

with Hamiltonian

$$\begin{aligned} H^{{\scriptscriptstyle {\mathrm {ICW}}}}_n(\sigma )= \frac{1}{2} \frac{\tilde{\beta }}{\ell _n} \left( \sum _{i} w_i \sigma _i\right) ^2 +B\sum _{i\in [n]} \sigma _i \end{aligned}$$
(1.22)

and normalizing partition function

$$\begin{aligned} Z^{\scriptscriptstyle \mathrm {ICW}}_{n}(\tilde{\beta },B)=\sum _{\sigma \in \Omega _{n}} {\mathrm e}^{B \sum _{i \in [n]} {\sigma _i}}{\mathrm e}^{\frac{1}{2}\frac{ \tilde{\beta }}{\ell _{n}}\left( \sum _{i\in [n]}w_i \sigma _i\right) ^2}. \end{aligned}$$
(1.23)

The above analysis extends easily to moment generating functions involving (some of) the edge variables \((I_{ij})_{1\le i<j\le n}\), since these can be incorporated into the exponential term, after which the expectation w.r.t. them can again be taken. Of course, in such settings, the connection to the rank-1 inhomogeneous Curie–Weiss model changes as well, and a large part of our paper deals precisely with the description of such changes, as well as their effects.

2 LDP for the Total Spin

2.1 LDP in the High-Temperature Regime

We first prove the LDP in the high-temperature regime for the annealed Ising model using the Gärtner–Ellis theorem.

Proof of Theorem 1.5

To apply the Gärtner–Ellis theorem we need the thermodynamic limit of the cumulant generating function of \(S_{n}\) w.r.t. \({\mu }^\mathrm{{an}}_{n}\), given by

$$\begin{aligned} {c}(t) = \lim _{n\rightarrow \infty } \frac{1}{n} \log {\mathbb {E}_{{\mu }^\mathrm{{an}}_{n}} \left[ \exp \left( t S_{n}\right) \right] }. \end{aligned}$$

Observe that

$$\begin{aligned} \mathbb {E}_{{\mu }^\mathrm{{an}}_{n}} \left[ \exp \left( t S_{n}\right) \right]&=\frac{Z^\mathrm {an}_{n}(\beta ,B+t)}{Z^\mathrm {an}_{n}(\beta ,B)}. \end{aligned}$$

Hence,

$$\begin{aligned} {c}(t) = \lim _{n\rightarrow \infty } \frac{1}{n} \log {\frac{Z^\mathrm {an}_{n}(\beta ,B+t)}{Z^\mathrm {an}_{n}(\beta ,B)}}= \psi ^{\mathrm {an}}(\beta ,B+t) - \psi ^{\mathrm {an}}(\beta ,B), \end{aligned}$$

where the existence of the limit follows from Theorem 1.2. We know that, for \(B\ne 0\),

$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}B} \psi ^{\mathrm {an}}(\beta ,B) = M^\mathrm {an}(\beta ,B). \end{aligned}$$

For \(\beta \le \beta ^{\mathrm {an}}_c\),

$$\begin{aligned} \lim _{B\searrow 0} M^\mathrm {an}(\beta ,B) = \lim _{B\nearrow 0} M^\mathrm {an}(\beta ,B) =0, \end{aligned}$$

so that c(t) is differentiable in t. Hence, it follows from the Gärtner–Ellis theorem [10, Thm. 2.3.6] that \(S_{n}\) satisfies an LDP with rate function given by the Legendre transform of c(t), which is given by (1.8). The proof for the random quenched Ising model is analogous. \(\square \)

Let us now elaborate on the interpretation of the above results. The stationarity condition for (1.8) is

$$\begin{aligned} x= {M^\mathrm {an}}(\beta , B+t), \end{aligned}$$
(2.1)

which defines a function \(\check{t}=\check{t}(x;\beta , B)\) such that

$$\begin{aligned} {{I^\mathrm {an}}}(x)= x\, \check{t}(x;\beta , B) - {{\psi ^{\mathrm {an}}}(\beta ,B+ \check{t}(x;\beta , B) )}+ \psi ^{\mathrm {an}}(\beta ,B). \end{aligned}$$

Given \((\beta ,B)\), the total spin per particle will concentrate around its typical value \(M^\mathrm {an}(\beta ,B)\), coinciding with the magnetization. To observe the atypical value x, the field must be changed from B to \(B+t\), where t is determined by requiring that x is the magnetization \(M^\mathrm {an}(\beta ,B+t)\). Note that we have not made use of any specifics about the graph sequence, or whether we are in the annealed or quenched setting. Hence, the above holds for Ising models on any graph sequence, as long as the appropriate thermodynamic limit of the pressure exists. It also shows that in the high-temperature regime, the Gärtner–Ellis theorem can be applied, since the spins in \(S_{n}\) are weakly dependent.

For \(\beta >\beta ^{\mathrm {an}}_c\),

$$\begin{aligned} m^+ := \lim _{B\searrow 0} M^\mathrm {an}(\beta ,B)> 0 > \lim _{B\nearrow 0} M^\mathrm {an}(\beta ,B) = -m^+, \end{aligned}$$

and hence c(t) is not differentiable for \(t=-B\) and the Gärtner–Ellis theorem can no longer be applied directly. This is caused by the strong interaction, and hence dependence, between the spins at low temperature. Since the spontaneous magnetization is not zero, it is not possible to find a t such that (2.1) holds for \(-m^+<x<m^+\). Therefore, the Legendre transform (1.8) has a flat piece. By the Gärtner–Ellis theorem, this Legendre transform still gives a lower bound on the rate function, but it is only an upper bound for so-called exposed points of the Legendre transform, i.e., for x outside this flat piece. In fact, we show that the Legendre transform in general does not give the correct rate function, since the Legendre transform of the pressure is convex and we show that the rate function in the low temperature regime in general is not.

2.2 LDPs for the Total Spin and Weighted Spin

In this section we prove Theorem 1.6 and then we deduce from it a new proof of Theorem 1.2 (thus by a method different from that of [19]). Following Ellis’ approach [16], we can compute the annealed pressure \(\psi ^{\mathrm {an}}(\beta , B)\) and the large deviation function of \(\mathbf{Y}_{n}(\sigma ):=(m_{n}(\sigma ),m^{\scriptscriptstyle (w)}_{n}(\sigma )) \equiv {(S_n(\sigma )/n,S^{\scriptscriptstyle (w)}_n(\sigma )/n)}\) w.r.t. the annealed measure \({\mu }^\mathrm{{an}}_{n}\), starting from the LDP of \((m_{n},m^{\scriptscriptstyle (w)}_{n})\) w.r.t. the product measure

$$\begin{aligned} P_{n}=\bigotimes _{i=1}^{n}\left( \frac{1}{2} \delta _{-1}+\frac{1}{2} \delta _{+1} \right) . \end{aligned}$$
(2.2)

The large deviations of \(\mathbf{Y}_{n}=(m_{n},m^{\scriptscriptstyle (w)}_{n})\) w.r.t. \(P_{n}\) can easily be obtained by applying the Gärtner–Ellis theorem.

Proof of Theorem 1.6

Let \(\mathbf{t}=(t_1,t_2)\) and compute

$$\begin{aligned} \mathbb E_{P_{n}} [\exp (n\, \mathbf{t} \cdot \mathbf{Y}_{n})]= & {} \mathbb E_{P_{n}} [\exp ( t_1 S_{n}+ t_2 S^{\scriptscriptstyle (w)}_{n})]\\= & {} \mathbb E_{P_{n}} \biggl [ \prod _{i\in [n]} \exp \bigl ( (t_1 + w_i t_2)\sigma _i\bigr ) \biggr ]\\= & {} \prod _{i\in [n]} \cosh (t_1 + w_i t_2), \end{aligned}$$

where \(\mathbb E_{P_{n}}\) denotes average w.r.t. \(P_{n}\). Thus, the cumulant generating function of the vector \(\mathbf{Y}_{n}=(m_{n},m^{\scriptscriptstyle (w)}_{n})\) w.r.t. \(P_{n}\) equals

$$\begin{aligned} c_{n}(\mathbf{t})= \frac{1}{n} \log \mathbb E_{P_{n}} [\exp (n\, \mathbf{t} \cdot \mathbf{Y}_{n})] = \frac{1}{n}\sum _{i\in [n]} \log \cosh (t_1 + w_i t_2)=\mathbb E[\log \cosh (t_1 + W_{n} t_2)], \end{aligned}$$

where \(\mathbb E\) denotes the average w.r.t. the weight \(W_{n}\) of a uniformly chosen vertex. Since \(|\log \cosh (t_1 + W_{n} t_2)| \le |t_1 + W_{n} t_2| \le |t_1| + W_{n} |t_2|\) it follows from Condition 1.1(b) and the dominated convergence theorem that

$$\begin{aligned} c(\mathbf{t}):=\lim _{n\rightarrow \infty } c_{n}(\mathbf{t})= \mathbb E[ \log \cosh (t_1 + W t_2) ], \end{aligned}$$

with W the limiting weight distribution of the graph. By the Gärtner–Ellis theorem, we conclude that \(\mathbf{Y}_{n}\) satisfies an LDP with rate function

$$\begin{aligned} I(x_1,x_2)= \sup _{(t_1,t_2)} \left( t_1 x_1 + t_2 x_2 - \mathbb E[\log \cosh (t_1 + W t_2)]\right) . \end{aligned}$$

We have

$$\begin{aligned} I(x_1,x_2)=\left\{ \begin{array}{ll} {t^*_1} x_1 + {t^*_2} x_2 - \mathbb E[\log \cosh ({t^*_1} + W {t^*_2})],&{}\quad \text{ if }\; |x_1|< 1, |x_2|< \mathbb E[W],\\ +\infty , &{} \quad \text{ otherwise, } \end{array} \right. \end{aligned}$$

where \(t^*_1=t^{*}_1(x_1,x_2)\) and \(t^*_2=t^*_2(x_1,x_2)\) are given by the stationarity condition

$$\begin{aligned} \left\{ \begin{array}{l} x_1= \mathbb E[ \tanh (t_1 + W t_2)],\\ x_2 = \mathbb E[ W \tanh (t_1 + W t_2)], \end{array} \right. \end{aligned}$$
(2.3)

for \( |x_1|< 1, |x_2|< \mathbb E[W]\).
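The stationarity system (2.3) has a positive-definite Jacobian whenever W is non-degenerate (by the Cauchy–Schwarz inequality, \(\mathbb E[W\mathrm{sech}^2]^2 < \mathbb E[\mathrm{sech}^2]\,\mathbb E[W^2\mathrm{sech}^2]\)), so it can be solved numerically. A minimal sketch using Newton's method; the helper name, the two-point weight distribution, and the target values \((x_1,x_2)\) are all illustrative assumptions:

```python
import math

W_VALS, W_PROBS = [1.0, 2.0], [0.5, 0.5]  # illustrative weight law, E[W] = 1.5

def solve_stationarity(x1, x2, iters=50):
    """Newton's method for (2.3): find (t1, t2) with
       x1 = E[tanh(t1 + W t2)], x2 = E[W tanh(t1 + W t2)]."""
    t1 = t2 = 0.0
    for _ in range(iters):
        g1 = sum(q * math.tanh(t1 + w * t2) for w, q in zip(W_VALS, W_PROBS)) - x1
        g2 = sum(q * w * math.tanh(t1 + w * t2) for w, q in zip(W_VALS, W_PROBS)) - x2
        # Jacobian entries: E[sech^2], E[W sech^2], E[W^2 sech^2]
        a = sum(q / math.cosh(t1 + w * t2) ** 2 for w, q in zip(W_VALS, W_PROBS))
        b = sum(q * w / math.cosh(t1 + w * t2) ** 2 for w, q in zip(W_VALS, W_PROBS))
        d = sum(q * w * w / math.cosh(t1 + w * t2) ** 2 for w, q in zip(W_VALS, W_PROBS))
        det = a * d - b * b
        t1 -= (d * g1 - b * g2) / det
        t2 -= (a * g2 - b * g1) / det
    return t1, t2

# Target (x1, x2) inside the admissible region |x1| < 1, |x2| < E[W]
t1, t2 = solve_stationarity(0.3, 0.5)
assert abs(sum(q * math.tanh(t1 + w * t2) for w, q in zip(W_VALS, W_PROBS)) - 0.3) < 1e-10
assert abs(sum(q * w * math.tanh(t1 + w * t2) for w, q in zip(W_VALS, W_PROBS)) - 0.5) < 1e-10
```

With \((t_1^*,t_2^*)\) in hand, \(I(x_1,x_2)\) is evaluated directly from the formula above.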

For any function \(f \,:\, \Omega _{n} \rightarrow \mathbb {R}\) we can write

$$\begin{aligned} \sum _{\sigma \in \Omega _{n}} f(\sigma ) = 2^{n} \int _{\Omega _{n}} f(\sigma ) \mathrm{d}P_{n}(\sigma ). \end{aligned}$$

Hence, also using (1.20),

$$\begin{aligned} Z^\mathrm {an}_{n}(\beta ,B)&= 2^{n} {\mathrm e}^{n\alpha _{n}} \int _{\Omega _{n}} {\mathrm e}^{\frac{1}{2} \frac{\sinh (\beta )}{n\mathbb E[W_{n}]} \left( \sum _{i} w_i \sigma _i\right) ^2 +B\sum _{i\in [n]} \sigma _i + o(n)}\mathrm{d}P_{n}(\sigma )\\&= 2^{n} {\mathrm e}^{n\alpha _{n}} \int _{\Omega _{n}} {\mathrm e}^{\frac{n}{2} \frac{\sinh (\beta )}{\mathbb E[W_{n}]} (m^{\scriptscriptstyle (w)}_{n})^2 +nB m_{n} + o(n)}\mathrm{d}P_{n}(\sigma ) \end{aligned}$$

and, similarly,

$$\begin{aligned} {\mu }^\mathrm{{an}}_{n}(\cdot ) = \frac{2^{n} {\mathrm e}^{n\alpha _{n}}}{Z_{n}^\mathrm {an}(\beta ,B)}\int _{\Omega _{n}} (\cdot ) \; {\mathrm e}^{\frac{n}{2} \frac{\sinh (\beta )}{\mathbb E[W_{n}]} (m^{\scriptscriptstyle (w)}_{n})^2 +nB m_{n} + o(n)}\mathrm{d}P_{n}(\sigma ) . \end{aligned}$$

Then, by applying Varadhan’s lemma [17, Thm. II.7.1],

$$\begin{aligned} \psi ^{\mathrm {an}}(\beta ,B)=\lim _{n\rightarrow \infty }\frac{1}{n} \log Z_{n}^\mathrm {an}(\beta ,B) =\log (2) + \alpha (\beta )+ \sup _{(x_1,x_2)} \left[ \frac{\sinh (\beta )}{2\mathbb E[W]}x_2^{2}+Bx_1 - I(x_1,x_2) \right] \end{aligned}$$

which is equivalent to (1.10), and the rate function of \((m_{n},m^{\scriptscriptstyle (w)}_{n})\) w.r.t. the annealed measure is [17, Thm. II.7.2]

$$\begin{aligned} I^\mathrm {an}_{\beta ,B}(x_1,x_2)=I(x_1,x_2) - \frac{\sinh (\beta )}{2\mathbb E[W]}x_2^{2}- B x_1 - \log (2) - \alpha (\beta )+ \psi ^\mathrm {an}(\beta ,B). \end{aligned}$$

This shows that indeed \((S_{n}, S^{\scriptscriptstyle (w)}_{n})\) satisfies an LDP w.r.t. \({\mu }^\mathrm{{an}}_{n}\) with rate function given by (1.9). By applying the contraction principle, we obtain the rate functions \(I^\mathrm {an}_{\beta ,B}\) of \(m_{n}\) and \(J^\mathrm {an}_{\beta ,B}\) of \(m^{\scriptscriptstyle (w)}_{n}\) as

$$\begin{aligned} \quad I^\mathrm {an}_{\beta ,B}(x_1)= \inf _{x_2} I^\mathrm {an}_{\beta ,B}(x_1,x_2), \qquad J^\mathrm {an}_{\beta ,B}(x_2)= \inf _{x_1} I^\mathrm {an}_{\beta ,B}(x_1,x_2). \end{aligned}$$
(2.4)

In a similar way, we can also immediately obtain an LDP by incorporating the magnetic field in the a priori measure on the spins. For this, define

$$\begin{aligned} P^{\scriptscriptstyle (B)}_{n}=\bigotimes _{i=1}^{n}\left( \frac{{\mathrm e}^{-B}}{{\mathrm e}^B+{\mathrm e}^{-B}} \delta _{-1}+\frac{{\mathrm e}^{B}}{{\mathrm e}^B+{\mathrm e}^{-B}} \delta _{+1}\right) . \end{aligned}$$

Then

$$\begin{aligned} \mathbb E_{P^{\scriptscriptstyle (B)}_{n}} [\exp (n\, \mathbf{t} \cdot \mathbf{Y}_{n})] = \mathbb E_{P^{\scriptscriptstyle (B)}_{n}} \biggl [ \prod _{i\in [n]} \exp \bigl ( (t_1 + w_i t_2)\sigma _i\bigr ) \biggr ] = \prod _{i\in [n]} \frac{\cosh (B + t_1 + w_i t_2)}{\cosh (B)}, \end{aligned}$$

where \(\mathbb E_{P^{\scriptscriptstyle (B)}_{n}}\) denotes average w.r.t. \(P_{n}^{\scriptscriptstyle (B)}\). Hence, the cumulant generating function is given by

$$\begin{aligned} {c_{n}^{\scriptscriptstyle (B)}}(\mathbf{t}) = \mathbb E[\log \cosh (B + t_1 + W_{n} t_2) ]- \log \cosh B, \end{aligned}$$

(with \(\mathbb E\) the average w.r.t. the weight \(W_{n}\) of a uniformly chosen vertex) which, as in the previous case, converges to

$$\begin{aligned} c{^{\scriptscriptstyle (B)}}(\mathbf{t}) = \mathbb E[ \log \cosh (B + t_1+W t_2)] - \log \cosh B. \end{aligned}$$

We can apply the Gärtner–Ellis theorem to obtain that \((m_{n},m^{\scriptscriptstyle (w)}_{n})\) satisfies an LDP w.r.t. \(P^{\scriptscriptstyle (B)}_{n}\) with rate function

$$\begin{aligned} I^{\scriptscriptstyle (B)}(x_1,x_2) = \sup _{t_1,t_2} \left( t_1 x_1+t_2 x_2 - \mathbb E[ \log \cosh (B + t_1 + W t_2)] \right) + \log \cosh B. \end{aligned}$$
(2.5)

The stationarity conditions are given by

$$\begin{aligned} \left\{ \begin{array}{l} x_1= \mathbb E[ \tanh (B+t_1 + W t_2)],\\ x_2 = \mathbb E[ W \tanh (B+t_1 + W t_2)]. \end{array} \right. \end{aligned}$$
(2.6)

Note that

$$\begin{aligned} \sum _{\sigma \in \Omega _{n}} f(\sigma ){\mathrm e}^{B\sum _{i\in [n]}\sigma _i} = (2\cosh B)^{n} \int _{\Omega _{n}} f(\sigma ) \mathrm{d}P^{(B)}_{n}(\sigma ). \end{aligned}$$

Hence,

$$\begin{aligned} {\mu }^\mathrm{{an}}_{n}(\cdot ) = \frac{(2\cosh B)^{n} {\mathrm e}^{n\alpha _{n}}}{{Z^{\mathrm {an}}_{n}(\beta ,B)}}\int _{\Omega _{n}} (\cdot ) \; {\mathrm e}^{\frac{n}{2} \frac{\sinh (\beta )}{\mathbb E[W_{n}]} (m^{\scriptscriptstyle (w)}_{n})^2 + o(n)}\mathrm{d}P^{(B)}_{n}(\sigma ) \end{aligned}$$
(2.7)

where

$$\begin{aligned} Z^\mathrm {an}_{n}(\beta ,B) = (2\cosh B)^{n} {\mathrm e}^{n\alpha _{n}}\int _{\Omega _{n}} {\mathrm e}^{\frac{n}{2} \frac{\sinh (\beta )}{\mathbb E[W_{n}]} (m^{\scriptscriptstyle (w)}_{n})^2 + o(n)}\mathrm{d}P^{(B)}_{n}(\sigma ). \end{aligned}$$

As above, it immediately follows that \((m_{n},m^{\scriptscriptstyle (w)}_{n})\) satisfies an LDP w.r.t. the annealed measure with rate function

$$\begin{aligned} I^{\mathrm {an}\scriptscriptstyle (B)}_{\beta ,B}(x_1,x_2) = I^{\scriptscriptstyle (B)}(x_1,x_2) - \frac{\sinh (\beta )}{2\mathbb E[W]} x_2^{2} -\log \cosh B-\log 2-\alpha (\beta ) + \psi ^{\mathrm {an}}(\beta ,B), \end{aligned}$$

where the pressure is given by

$$\begin{aligned} \psi ^{\mathrm {an}}(\beta ,B) = \sup _{x_1,x_2} \left( \frac{\sinh (\beta )}{2\mathbb E[W]}x_2^{2}- I^{\scriptscriptstyle (B)}(x_1,x_2) \right) +\log \cosh B+\log 2+\alpha (\beta ). \end{aligned}$$
(2.8)

This proves that also (1.11) is a rate function for the LDP of \((S_{n}, S^{\scriptscriptstyle (w)}_{n})\). The uniqueness of the rate function [17, Thm. II.3.2] implies that (1.11) and (1.9) coincide. \(\square \)

We can rewrite the pressure in (2.8) to prove Theorem 1.2:

Proof of Theorem 1.2

Note that (2.8) is equivalent to

$$\begin{aligned} \psi ^{\mathrm {an}}(\beta ,B) = \sup _{x_2} \left( \frac{\sinh (\beta )}{2\mathbb E[W]} x_2^{2}- \inf _{x_1}I^{\scriptscriptstyle (B)}(x_1,x_2) \right) +\log \cosh B+\log 2+\alpha (\beta ), \end{aligned}$$
(2.9)

where it should be noted that, by the contraction principle, \(\inf _{x_1}I^{\scriptscriptstyle (B)}(x_1,x_2)\) is equal to the rate function \(I^{\scriptscriptstyle (w)}\) for the LDP of \(m^{\scriptscriptstyle (w)}_{n}\) w.r.t. \(P^{\scriptscriptstyle (B)}_{n}\). Setting \(t_1=0\) in the above computations, this can be proved to be

$$\begin{aligned} I^{\scriptscriptstyle (w)}(x) = \sup _{t} \left( t x - \mathbb E[ \log \cosh (B + W t)] \right) + \log \cosh B, \end{aligned}$$
(2.10)

so that

$$\begin{aligned} \psi ^{\mathrm {an}}(\beta ,B) = \sup _{x_2} \left( \frac{\sinh (\beta )}{2\mathbb E[W]} x_2^{2}-I^{\scriptscriptstyle (w)}(x_2) \right) +\log \cosh B+\log 2+\alpha (\beta ). \end{aligned}$$
(2.11)

The supremum in (2.10) is attained for t satisfying

$$\begin{aligned} x = \mathbb E[W \tanh (B+Wt)] =: f(t). \end{aligned}$$

Since f(t) is strictly increasing, its inverse \(f^{-1}\) is well defined. Hence,

$$\begin{aligned} I^{\scriptscriptstyle (w)}(x) = f^{-1}(x) x - \mathbb E[ \log \cosh (B + W f^{-1}(x))] + \log \cosh B, \end{aligned}$$

and

$$\begin{aligned} \frac{d}{dx}\left( \frac{\sinh (\beta )}{2\mathbb E[W]}x^{2}- I^{\scriptscriptstyle (w)}(x) \right)&=\frac{\sinh (\beta )}{\mathbb E[W]}x- f^{-1}(x) - \left( x-\mathbb E[W\tanh (B+W f^{-1}(x))]\right) \\&\quad \times \, \frac{d}{dx}f^{-1}(x)\\&=\frac{\sinh (\beta )}{\mathbb E[W]}x- f^{-1}(x). \end{aligned}$$

Hence, the supremum in (2.11) is attained for x satisfying \( f^{-1}(x)=\frac{\sinh (\beta )}{\mathbb E[W]}x\), or equivalently,

$$\begin{aligned} x = f\left( \frac{\sinh (\beta )}{\mathbb E[W]}x\right) = \mathbb E\left[ W\tanh \left( B+\frac{\sinh (\beta )}{\mathbb E[W]}W x\right) \right] . \end{aligned}$$
(2.12)
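The solution of (2.12) can be computed by direct iteration, which converges monotonically for \(B>0\) since f is increasing, bounded and concave there. A minimal sketch; the solver name and the two-point weight distribution are illustrative assumptions:

```python
import math

def solve_x_star(beta, B, weights, probs, tol=1e-12):
    """Iterate x -> E[W tanh(B + sinh(beta)/E[W] * W x)], i.e. Eq. (2.12)."""
    EW = sum(w * q for w, q in zip(weights, probs))
    x = math.copysign(1.0, B) if B != 0 else 0.0  # start with the sign of B
    for _ in range(10_000):
        x_new = sum(q * w * math.tanh(B + math.sinh(beta) / EW * w * x)
                    for w, q in zip(weights, probs))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# W = 1 or 3 with equal probability (illustrative), so E[W] = 2
x_star = solve_x_star(beta=0.5, B=0.1, weights=[1.0, 3.0], probs=[0.5, 0.5])
assert x_star > 0  # same sign as B, as the text requires
# Check that x_star indeed satisfies (2.12)
EW = 2.0
rhs = 0.5 * 1.0 * math.tanh(0.1 + math.sinh(0.5) / EW * 1.0 * x_star) \
    + 0.5 * 3.0 * math.tanh(0.1 + math.sinh(0.5) / EW * 3.0 * x_star)
assert abs(x_star - rhs) < 1e-9
```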

For any solution \(x^*\) of (2.12),

$$\begin{aligned} F(x^*)&:= \frac{\sinh (\beta )}{2\mathbb E[W]} x^{*2}- I^{\scriptscriptstyle (w)}(x^*) +\log \cosh B+\log 2+\alpha (\beta ) \\&\,\,= - \frac{\sinh (\beta )}{2\mathbb E[W]} x^{*2}+\mathbb E\left[ \log \cosh \left( B + \frac{\sinh (\beta )}{\mathbb E[W]} W x^*\right) \right] +\log 2+\alpha (\beta ). \end{aligned}$$

For \(B>0\), f(t) is an increasing, bounded and concave function for \(t\ge 0\) with \(f(0)>0\), and hence there is a unique positive solution \(x^+\) to (2.12). For any negative solution to (2.12), \(x^-\) say,

$$\begin{aligned} F(x^-) < F(-x^-)\le F(x^+), \end{aligned}$$

since \(x^+\) is the unique positive local maximum. An analogous argument holds for \(B<0\). Hence,

$$\begin{aligned} \psi ^{\mathrm {an}}(\beta ,B) = - \frac{\sinh (\beta )}{2\mathbb E[W]} x^{*2}+\mathbb E\left[ \log \cosh \left( B + \frac{\sinh (\beta )}{\mathbb E[W]} W x^*\right) \right] +\log 2+\alpha (\beta ), \end{aligned}$$

where \(x^*\) is the unique solution to (2.12) with the same sign as B. The value for \(B=0\) follows from Lipschitz continuity. This is equivalent to the formulation in (1.4) by making a change of variables \(z^*= \sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}} x^*\). \(\square \)

3 LDP for the Number of Edges: Proof of Theorem 1.7

So far we have considered large deviations of the total spin. We now consider observables that depend only on the graph and investigate their large deviation properties w.r.t. the annealed Ising measure. Such an analysis sheds light on what graph structures optimize the Ising Hamiltonian.

3.1 Strategy of the Proof

In this section, we investigate the large deviation properties for the number of edges \(|E_{n}| = \sum _{i<j} I_{ij} \) under the annealed Ising model on the generalized random graph, where we recall that \((I_{ij})_{1\le i<j\le n}\) denote the independent Bernoulli indicators of the event that the edge ij is present in the graph, which occurs with probability \(p_{ij}\) in (1.1). We aim to apply the Gärtner–Ellis theorem, for which we need to compute the moment generating function of \(|E_{n}|\) w.r.t. the annealed measure \({\mu }^\mathrm{{an}}_{n}\), given by

$$\begin{aligned} \mathbb E_{{\mu }^\mathrm{{an}}_n}\Big [{\mathrm e}^{t |E_{n}|}\Big ] = \frac{\mathbb E\left[ \sum _{\sigma } {\mathrm e}^{\sum _{i<j} I_{ij} (t+\beta \sigma _i\sigma _j) + B \sum _{i\in [n]} \sigma _i} \right] }{\mathbb E\left[ \sum _{\sigma } {\mathrm e}^{\sum _{i<j} I_{ij} (\beta \sigma _i\sigma _j) + B \sum _{i\in [n]} \sigma _i} \right] }. \end{aligned}$$
(3.1)

For later purposes, we generalize the above computation: introducing the variables \(t_{ij}\), we instead compute the joint moment generating function of the Bernoulli indicators \((I_{ij})_{ij}\), defined for \(\mathbf{t}=(t_{ij})_{ij}\in {\mathbb R}^{n(n-1)/2}\) by

$$\begin{aligned} R_{\beta , B,n}({\mathbf t}) := \mathbb E_{{\mu }^\mathrm{{an}}_n}\Big [{\mathrm e}^{\sum _{1\le i<j\le n} t_{ij} I_{ij}} \Big ] =\frac{\mathbb E\left[ \sum _{\sigma } {\mathrm e}^{\sum _{i<j} I_{ij} (t_{ij}+\beta \sigma _i\sigma _j) + B \sum _{i\in [n]} \sigma _i} \right] }{\mathbb E\left[ \sum _{\sigma } {\mathrm e}^{\sum _{i<j} I_{ij} (\beta \sigma _i\sigma _j) + B \sum _{i\in [n]} \sigma _i} \right] }. \end{aligned}$$
(3.2)

This can be carried out in a similar way as in [19]. Let us focus on the numerator in the previous display, which we denote by \(\mathcal{A}_{n}({\mathbf t},\beta , B)\), so that

$$\begin{aligned} R_{\beta , B,n}({\mathbf t}) = \frac{\mathcal{A}_{n}({\mathbf t},\beta , B)}{\mathcal{A}_{n}(\mathbf{0},\beta , B)}\;. \end{aligned}$$
(3.3)

We have

$$\begin{aligned} \mathcal{A}_{n}({\mathbf t},\beta , B)\,&= \, \sum _{\sigma \in \Omega _{n}} {\mathrm e}^{B \sum _{i \in [n]} {\sigma _i}}\mathbb E\left[ {\mathrm e}^{ \sum _{i<j}{I_{ij}(t_{ij}+\beta \sigma _i \sigma _j})} \right] \\&= \, \sum _{\sigma \in \Omega _{n}} {\mathrm e}^{B \sum _{i \in [n]} {\sigma _i}}\prod _{i<j} \mathbb E\big [{\mathrm e}^{I_{ij}(t_{ij}+\beta \sigma _i \sigma _j)}\big ]\\&= \, \sum _{\sigma \in \Omega _{n}}{\mathrm e}^{B \sum _{i \in [n]} {\sigma _i}} \prod _{i<j} \left( {\mathrm e}^{t_{ij}+\beta \sigma _i \sigma _j}p_{ij} + \left( 1 - p_{ij}\right) \right) . \end{aligned}$$

We rewrite

$$\begin{aligned} {\mathrm e}^{t_{ij}+\beta \sigma _i \sigma _j}p_{ij} + \left( 1 - p_{ij}\right) \, = \, C_{ij}(t_{ij}){\mathrm e}^{\beta _{ij}(t_{ij})\sigma _i \sigma _j}, \end{aligned}$$

where \(\beta _{ij}(t_{ij})\) and \(C_{ij}(t_{ij})\) are chosen such that

$$\begin{aligned}&{\mathrm e}^{t_{ij}-\beta }p_{ij} + \left( 1 - p_{ij}\right) \, =\, C_{ij}(t_{ij}){\mathrm e}^{-\beta _{ij}(t_{ij})} \qquad \text {and}\\ \qquad&{\mathrm e}^{t_{ij}+\beta }p_{ij} + \left( 1 - p_{ij}\right) \, =\, C_{ij}(t_{ij}){\mathrm e}^{\beta _{ij}(t_{ij})}. \end{aligned}$$

From the above system, we get

$$\begin{aligned} \beta _{ij}(t_{ij})\, = \, \frac{1}{2} \log \frac{{\mathrm e}^{t_{ij}+\beta }p_{ij} + \left( 1 - p_{ij}\right) }{{\mathrm e}^{t_{ij}-\beta }p_{ij} + \left( 1 - p_{ij}\right) }, \qquad \quad C_{ij}(t_{ij}) \, = \, \frac{{\mathrm e}^{t_{ij}} p_{ij} \cosh (\beta ) + \left( 1 - p_{ij}\right) }{\cosh \left( \beta _{ij}(t_{ij})\right) }. \end{aligned}$$
(3.4)

By symmetry, \(\beta _{ij}(t_{ij})=\beta _{ji}(t_{ji})\) once we set \(t_{ji} = t_{ij}\) for \(1\le i < j \le n\). Furthermore, defining

$$\begin{aligned} \beta _{ii}(t_{ii})=\frac{1}{2} \log \frac{{\mathrm e}^{t_{ii}+\beta }p_{ii} + \left( 1 - p_{ii}\right) }{{\mathrm e}^{t_{ii}-\beta }p_{ii} + \left( 1 - p_{ii}\right) } \qquad \text {with} \qquad p_{ii}=w_i^2/(\ell _{n}+w_i^2) \end{aligned}$$
(3.5)

we obtain

$$\begin{aligned} \mathcal{A}_{n}({\mathbf {t}},\beta ,B)&= \, G_{n}({\mathbf {t}},\beta ) \sum _{\sigma \in \Omega _{n}} {\mathrm e}^{B \sum _{i \in [n]} {\sigma _i}}{\mathrm e}^{\frac{1}{2}\sum _{i,j \in [n]}{\beta _{ij}(t_{ij})\sigma _i \sigma _j}}, \end{aligned}$$
(3.6)

where

$$\begin{aligned} G_{n}({\mathbf {t}},\beta )=\prod _{1\le i<j\le n}C_{ij} (t_{ij})\prod _{1\le i \le n}{\mathrm e}^{-\beta _{ii}(t_{ii})/2}. \end{aligned}$$
(3.7)

Equations (3.3) and (3.6) give us an explicit formula for the moment generating function of the edge variables \((I_{ij})_{ij}\) in the annealed \(\mathrm {GRG}_{n}(\varvec{w})\) that will prove useful throughout the remainder of this paper.

3.2 Moment Generating Function for the Number of Edges

Since the moment generating function for the number of edges in (3.1) can be obtained from \(R_{\beta , B,n}({\mathbf t})\) in (3.2) by choosing \(t_{ij}=t\) for all \(1\le i<j\le n\), we continue by studying the asymptotics of \(\mathcal{A}_{n}({\mathbf {t}},\beta ,B)\) in this case, which we denote by \(\mathcal{A}_{n}(t,\beta ,B)\). By a Taylor expansion of \(x\mapsto \log (1+x)\),

$$\begin{aligned} \beta _{ij} (t)\,&= \, \frac{1}{2}\log \left( 1 + p_{ij}({\mathrm e}^{t+\beta } -1)\right) - \frac{1}{2}\log \left( 1 + p_{ij}({\mathrm e}^{t-\beta } -1)\right) \nonumber \\&= \,\frac{1}{2}p_{ij}({\mathrm e}^{t+\beta } -1) - \frac{1}{2}p_{ij}({\mathrm e}^{t-\beta } -1) + O(p_{ij}^2({\mathrm e}^{t+\beta } -1)^2 )+ O(p_{ij}^2({\mathrm e}^{t-\beta } -1)^2 )\nonumber \\&= \, {\mathrm e}^t\sinh (\beta )p_{ij}+ O(p_{ij}^2({\mathrm e}^{t\pm \beta } -1)^2), \end{aligned}$$
(3.8)
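The leading-order term in (3.8) can be checked numerically: the error of the approximation \(\beta _{ij}(t)\approx {\mathrm e}^t\sinh (\beta )p_{ij}\) shrinks quadratically in \(p_{ij}\). A small sketch (parameter values are illustrative):

```python
import math

def beta_ij_t(t, beta, p):
    # beta_ij(t) from (3.4), rewritten as in the first line of (3.8)
    return 0.5 * math.log((1 + p * (math.exp(t + beta) - 1))
                          / (1 + p * (math.exp(t - beta) - 1)))

t, beta = 0.4, 0.8  # illustrative values
for p in (1e-3, 1e-4, 1e-5):
    approx = math.exp(t) * math.sinh(beta) * p
    # The remainder is O(p^2); the constant 10 comfortably bounds it here
    assert abs(beta_ij_t(t, beta, p) - approx) < 10 * p * p
```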

therefore

$$\begin{aligned} \mathcal{A}_{n}(t,\beta ,B)&= \, G_{n}(t,\beta ) \sum _{\sigma \in \Omega _{n}} {\mathrm e}^{B \sum _{i \in [n]} {\sigma _i}}\exp \left\{ \frac{1}{2} {\mathrm e}^{t}\sinh (\beta )\sum _{i,j \in [n]}{ p_{ij}\sigma _i \sigma _j}\right. \\&\left. \quad +\,O\left( \sum _{i,j\in [n]}p_{ij}^2({\mathrm e}^{t \pm \beta } - 1)^2\right) \right\} . \end{aligned}$$

For any fixed t, the term \( O( \sum _{i,j\in [n]} p_{ij}^2({\mathrm e}^{t\pm \beta } -1)^2)\) can be controlled by using \(p_{ij}\le {w_i w_j/\ell _{n}}\) and Condition 1.1(c), which implies that

$$\begin{aligned} \Big |\sum _{i,j \in [n]}p_{ij}^2\Big |\le \sum _{i,j \in [n]}\Big (\frac{w_i w_j}{\ell _{n}}\Big )^2 \, = \, \left( \frac{\sum _{i\in [n]} w_i^2}{\ell _{n}}\right) ^2 \, = \, o(n), \end{aligned}$$

so that

$$\begin{aligned} \mathcal{A}_{n}(t,\beta ,B)&= \, G_{n}(t,\beta ) {\mathrm e}^{o(n)} \sum _{\sigma \in \Omega _{n}} {\mathrm e}^{B \sum _{i \in [n]} {\sigma _i}}\exp \Big \{\frac{1}{2} {\mathrm e}^{t}\sinh (\beta )\sum _{i,j \in [n]}{ p_{ij}\sigma _i \sigma _j}\Big \}. \end{aligned}$$

We can proceed further and write

$$\begin{aligned} \mathcal{A}_{n}(t,\beta ,B) \,&= \,G_{n}(t,\beta ) {\mathrm e}^{o(n)} \sum _{\sigma \in \Omega _{n}}{\mathrm e}^{B \sum _{i \in [n]} {\sigma _i}}{\mathrm e}^{\frac{1}{2}{\mathrm e}^t \sinh (\beta ) \sum _{i,j \in [n]}\frac{w_i w_j}{\ell _{n}}\sigma _i \sigma _j}\\&= \, G_{n}(t,\beta ) {\mathrm e}^{o(n)} \sum _{\sigma \in \Omega _{n}} {\mathrm e}^{B \sum _{i \in [n]} {\sigma _i}}{\mathrm e}^{\frac{1}{2}\frac{{\mathrm e}^t\sinh (\beta )}{\ell _{n}}\left( \sum _{i\in [n]}w_i \sigma _i\right) ^2}, \end{aligned}$$

where we have also used that, under Condition 1.1(c),

$$\begin{aligned} \sum _{i\in [n]} \frac{w_i^2}{\ell _{n}}=o(n), \qquad \sum _{i,j\in [n]} \Big [\frac{w_iw_j}{\ell _{n}}-p_{ij}\Big ]=\sum _{i,j\in [n]} \frac{w_i^2w_j^2}{\ell _{n}(\ell _{n}+w_iw_j)}=o(n). \end{aligned}$$

Recalling the definition of the partition function of the inhomogeneous Curie–Weiss model we can thus rewrite

$$\begin{aligned} \mathcal{A}_{n}(t,\beta ,B)=G_{n}(t,\beta ) {\mathrm e}^{o(n)} \, Z^{\scriptscriptstyle \mathrm {ICW}}_{n}({\mathrm e}^t \sinh (\beta ),B)\, , \end{aligned}$$

while the denominator in (3.1) equals

$$\begin{aligned} \mathcal{A}_{n}(0,\beta ,B)=G_{n}(0,\beta ) {\mathrm e}^{o(n)} \, Z^{\scriptscriptstyle \mathrm {ICW}}_{n}( \sinh (\beta ),B). \end{aligned}$$

Therefore, the annealed cumulant generating function of the number of edges is

$$\begin{aligned} \varphi _{\beta ,B,n}(t):= & {} \frac{1}{n} \log \mathbb E_{{\mu }^\mathrm{{an}}_n}\big [{\mathrm e}^{t |E_{n}|} \big ]\nonumber \\= & {} \frac{1}{n}\log Z^{\scriptscriptstyle \mathrm {ICW}}_{n}({\mathrm e}^t \sinh (\beta ),B) - \frac{1}{n} \log Z^{\scriptscriptstyle \mathrm {ICW}}_{n}( \sinh (\beta ),B)\nonumber \\&+\,\frac{1}{n}\log \frac{\,G_{n}(t,\beta ) }{G_{n}(0,\beta )}+o(1). \end{aligned}$$
(3.9)

In order to apply the Gärtner–Ellis theorem, we need to compute the limit of \(\varphi _{\beta ,B,n}(t)\). We can deal with the first and second terms on the r.h.s. of (3.9) by using the results obtained in [19], in which the thermodynamic limit of the pressure of the inhomogeneous Curie–Weiss model has been computed. Indeed, by [19],

$$\begin{aligned} \psi ^{\scriptscriptstyle \mathrm {ICW}}(\sinh (\beta ),B):= & {} \lim _{n\rightarrow \infty }\,\frac{1}{n} \log Z^{\scriptscriptstyle \mathrm {ICW}}_{n} \left( \sinh (\beta ), B \right) \nonumber \\= & {} \log 2 + \mathbb {E}\left[ \log \cosh \left( \sqrt{\frac{\sinh (\beta )}{\mathbb {E}\left[ W\right] }}W z^*(\beta ,B) + B\right) \right] - \frac{z^*(\beta ,B)^2}{2},\nonumber \\ \end{aligned}$$
(3.10)

with \(z^*(\beta ,B)\) defined in Theorem 1.2. Similarly

$$\begin{aligned} \psi ^{\scriptscriptstyle \mathrm {ICW}}({\mathrm e}^t \sinh (\beta ),B):= & {} \lim _{n\rightarrow \infty }\, \frac{1}{n} \log Z^{\scriptscriptstyle \mathrm {ICW}}_{n} \left( {\mathrm e}^t \sinh (\beta ), B \right) \nonumber \\= & {} \log 2 + \mathbb {E}\left[ \log \cosh \left( \sqrt{\frac{{\mathrm e}^t\sinh (\beta )}{\mathbb {E}\left[ W\right] }}W z^*(t,\beta ,B) + B\right) \right] \nonumber \\&\quad - \frac{z^*(t,\beta ,B)^2}{2}, \end{aligned}$$
(3.11)

with \(z^*(t,\beta ,B)\) the unique solution, with the same sign as B, of the fixed-point equation

$$\begin{aligned} z= \mathbb {E}\left[ \tanh \left( \sqrt{\frac{{\mathrm e}^t\sinh (\beta )}{\mathbb {E}[W]}}W z + B\right) \sqrt{\frac{{\mathrm e}^t\sinh (\beta )}{\mathbb {E}[W]}}\,W\right] . \end{aligned}$$
(3.12)

Next, we have to deal with the third term in (3.9) which, recalling (3.7) and (3.4), we write explicitly as

$$\begin{aligned} \frac{1}{n}\log \frac{\,G_{n}(t,\beta ) }{G_{n}(0,\beta )}&= \frac{1}{n} \sum _{i<j} \log \left( \frac{{\mathrm e}^t p_{ij}\cosh (\beta ) +1-p_{ij}}{p_{ij}\cosh (\beta )+1-p_{ij} } \right) + \frac{1}{n} \sum _{i<j} \log \left( \frac{\cosh (\beta _{ij}(0))}{\cosh (\beta _{ij} (t))} \right) \nonumber \\&\qquad + \frac{1}{n} \sum _{i\in [n]} \left( \frac{\beta _{ii}(0) - \beta _{ii}(t)}{2}\right) . \end{aligned}$$
(3.13)

We start by computing the first term on the r.h.s. of (3.13), and then show that the remaining terms vanish in the limit. Recall that, on the basis of the Weight Regularity Condition 1.1(a) and (c), \(\ell _{n}=n(\mathbb E{[W]}+o(1))=O(n)\) and \(\sum _{1\le i<j\le n}p_{ij}^2=O(1)\). Thus, we write the first term in (3.13) as

$$\begin{aligned} \frac{1}{n} \sum _{i<j} \log \left( \frac{{\mathrm e}^t p_{ij}\cosh (\beta ) +1-p_{ij}}{p_{ij}\cosh (\beta )+1-p_{ij} } \right)= & {} \frac{1}{n} \sum _{i<j} \log \left( 1+ \frac{({\mathrm e}^t-1) p_{ij}\cosh (\beta ) }{1+p_{ij}(\cosh (\beta )-1)} \right) \\= & {} \frac{1}{n} \sum _{i<j} \log \left( 1+ ({\mathrm e}^t-1) p_{ij}\cosh (\beta ) +O(p_{ij}^2)\right) \\= & {} ({\mathrm e}^t-1) \cosh (\beta ) \frac{1}{n} \sum _{i<j} p_{ij} +O(n^{-1}), \end{aligned}$$

where the Taylor expansions of \(1/(1+x)\) and \(\log (1+x)\) have been used. Therefore,

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{1}{n} \sum _{i<j} \log \left( \frac{{\mathrm e}^t p_{ij}\cosh (\beta ) +1-p_{ij}}{p_{ij}\cosh (\beta )+1-p_{ij} } \right) = \frac{1}{2} ({\mathrm e}^t-1) \cosh (\beta ) \mathbb E{[W]}, \end{aligned}$$
(3.14)

since \(\frac{1}{n} \sum _{i<j} p_{ij} \rightarrow \frac{1}{2} \mathbb E[W]\). By (3.8) and a Taylor expansion

$$\begin{aligned} \log \left( \frac{\cosh (\beta _{ij}(0))}{\cosh (\beta _{ij} (t))} \right) = O(p^2_{ij}). \end{aligned}$$

Then, by Condition 1.1(c) and \(p_{ij}\le w_iw_j/\ell _{n}\),

$$\begin{aligned} \frac{1}{n} \sum _{i<j} \log \left( \frac{\cosh (\beta _{ij}(0))}{\cosh (\beta _{ij} (t))} \right) = \frac{1}{n}\sum _{i<j} O({p^2_{ij}}) = O(n^{-1}). \end{aligned}$$
(3.15)

Furthermore,

$$\begin{aligned} \frac{1}{n} \sum _{i\in [n]} \left( \frac{\beta _{ii}(0) - \beta _{ii}(t)}{2}\right) = \frac{1}{n} \sum _{i\in [n]} O(p_{ii}) = O(n^{-1}), \end{aligned}$$
(3.16)

where the definition of \(\beta _{ii}(t)\) in (3.5) has been used. Combining (3.13) with the estimates in (3.14), (3.15) and (3.16) leads to

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{1}{n}\log \frac{\,G_{n}(t,\beta ) }{G_{n}(0,\beta )} = \frac{1}{2} ({\mathrm e}^t-1) \cosh (\beta ) \mathbb E{[W]}. \end{aligned}$$
(3.17)
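The convergence in (3.14) and (3.17) can be checked numerically at finite \(n\). The sketch below uses an illustrative deterministic weight sequence (\(w_i\) alternating between 1 and 3, so \(\mathbb E[W]=2\)) and \(p_{ij}=w_iw_j/(\ell_n+w_iw_j)\); these choices are assumptions for illustration only, not part of the proof.

```python
import math

# Numerical sanity check of (3.14)/(3.17): hypothetical weight sequence with
# w_i alternating between 1 and 3, so that E[W] = 2 (illustrative assumption).
n = 1000
w = [1.0 if i % 2 == 0 else 3.0 for i in range(n)]
ell = sum(w)                      # ell_n = n(E[W] + o(1))
t, beta = 0.3, 0.5

lhs = 0.0
for i in range(n):
    for j in range(i + 1, n):
        p = w[i] * w[j] / (ell + w[i] * w[j])
        lhs += math.log((math.exp(t) * p * math.cosh(beta) + 1 - p)
                        / (p * math.cosh(beta) + 1 - p))
lhs /= n

# limiting value (1/2)(e^t - 1) cosh(beta) E[W]
limit = 0.5 * (math.exp(t) - 1) * math.cosh(beta) * 2.0
print(lhs, limit)   # the two values agree up to O(1/n) corrections
```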

Taking the limit \(n\rightarrow \infty \) in (3.9) and using (3.17), (3.11) and (3.10), we conclude that

$$\begin{aligned} \varphi _{\beta , B}(t)&:=\lim _{n\rightarrow \infty } \varphi _{\beta , B, n}(t) =\mathbb {E}\left[ \log \cosh \left( \sqrt{\frac{{\mathrm e}^t\sinh (\beta )}{\mathbb {E}\left[ W\right] }}W z^*(t,\beta ,B) + B\right) \right] \nonumber \\&\qquad - \mathbb {E}\left[ \log \cosh \left( \sqrt{\frac{\sinh (\beta )}{\mathbb {E}\left[ W\right] }}W z^*(\beta ,B) + B\right) \right] +\,\frac{1}{2} \left( z^*(\beta ,B)^2 -\,z^*(t,\beta ,B)^2 \right) \nonumber \\&\qquad + \frac{1}{2} ({\mathrm e}^t-1) \cosh (\beta ) \mathbb E{[W]}. \end{aligned}$$
(3.18)

3.3 Conclusion of the Proof

With (3.18) in hand, we are finally ready to prove Theorem 1.7. Equation (3.18) identifies the infinite-volume limit of the cumulant generating function of the number of edges. By the Gärtner–Ellis theorem, this also identifies the rate function as its Legendre transform, provided that \(t\mapsto \varphi _{\beta , B}(t)\) is differentiable.

We compute the derivative of \(t\mapsto \varphi _{\beta , B}(t)\) in (3.18) explicitly as

$$\begin{aligned} \frac{d}{dt}\varphi _{\beta , B}(t)&=\mathbb {E}\left[ \tanh \left( \sqrt{\frac{{\mathrm e}^t\sinh (\beta )}{\mathbb {E}[W]}}W z^*(t,\beta ,B) + B\right) \sqrt{\frac{{\mathrm e}^t\sinh (\beta )}{\mathbb {E}[W]}}W\right] \nonumber \\&\quad \times \left( \frac{1}{2} z^*(t,\beta ,B)+\frac{d}{dt}z^*(t,\beta ,B)\right) \nonumber \\&\qquad - z^*(t,\beta ,B)\frac{d}{dt}z^*(t,\beta ,B) + \frac{1}{2} {\mathrm e}^t\cosh (\beta ) \mathbb E{[W]}. \end{aligned}$$
(3.19)

Since \(z^*(t,\beta ,B)\) is the fixed point for the ICW with \(\tilde{\beta }={\mathrm e}^t \sinh (\beta )\), which is an analytic function of t, it holds that \(z^*(t,\beta ,B)\) is analytic in t for \(B\ne 0\) and hence \(\frac{d}{dt}z^*(t,\beta ,B)\) exists. By (3.12), the first expectation equals \(z^*(t,\beta ,B)\), so that the two terms containing the factors \(\frac{d}{dt}z^*(t,\beta ,B)\) cancel, and

$$\begin{aligned} \frac{d}{dt}\varphi _{\beta , B}(t)&=\frac{1}{2} z^*(t,\beta ,B)^2 + \frac{1}{2} {\mathrm e}^t\cosh (\beta ) \mathbb E{[W]}. \end{aligned}$$
(3.20)

For \(B=0\), \(\frac{d}{dt}z^*(t,\beta ,B)\) might not exist at the critical point \({\mathrm e}^t \sinh (\beta ) = \tilde{\beta }_c\). However, since the specific heat is finite, both the left and right derivatives exist. Therefore, the above argument can be repeated for the left and right derivatives, which both give the r.h.s. of (3.20), so that this equation also holds for \(B=0\).

This shows that \(t\mapsto \varphi _{\beta , B}(t)\) is differentiable, which concludes the proof of the main statement in Theorem 1.7 about the large deviation function for the number of edges in the annealed \(\mathrm {GRG}_{n}(\varvec{w})\). Formula (1.13) for the expected number of edges is immediately obtained by evaluating (3.20) at \(t=0\).

Finally, we note that, by the LDP derived in the previous section and the fact that the limiting cumulant generating function is strictly convex (both terms on the r.h.s. of (3.20) are strictly increasing in t), the rate function has a unique minimum. This immediately shows that \(|E_{n}|/n\) is concentrated around its mean, which has already been derived in (1.17) as well as in (3.20).\(\square \)
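The identities (3.18) and (3.20) can also be probed numerically. The sketch below assumes a hypothetical two-point weight law \(W\in \{1,3\}\) with equal probabilities (so \(\mathbb E[W]=2\)): it computes \(z^*(t,\beta ,B)\) by fixed-point iteration, evaluates \(\varphi _{\beta ,B}(t)\) from (3.18), and checks that a finite-difference derivative matches the r.h.s. of (3.20), which is moreover increasing in t, consistent with strict convexity.

```python
import math

# Sketch of (3.18)/(3.20) for a hypothetical two-point weight law W in {1,3}
# with equal probability (E[W] = 2); beta, B fixed for illustration.
W_vals, W_probs = (1.0, 3.0), (0.5, 0.5)
EW = sum(w * p for w, p in zip(W_vals, W_probs))
beta, B = 0.5, 0.2

def z_star(t, n_iter=500):
    # Fixed point z = E[tanh(a W z + B) a W] with a = sqrt(e^t sinh(beta)/E[W]);
    # for B > 0 the iteration converges monotonically to the solution with the
    # same sign as B.
    a = math.sqrt(math.exp(t) * math.sinh(beta) / EW)
    z = B
    for _ in range(n_iter):
        z = sum(p * math.tanh(a * w * z + B) * a * w
                for w, p in zip(W_vals, W_probs))
    return z

def phi(t):
    # r.h.s. of (3.18)
    zt, z0 = z_star(t), z_star(0.0)
    at = math.sqrt(math.exp(t) * math.sinh(beta) / EW)
    a0 = math.sqrt(math.sinh(beta) / EW)
    Et = sum(p * math.log(math.cosh(at * w * zt + B)) for w, p in zip(W_vals, W_probs))
    E0 = sum(p * math.log(math.cosh(a0 * w * z0 + B)) for w, p in zip(W_vals, W_probs))
    return Et - E0 + 0.5 * (z0 ** 2 - zt ** 2) \
        + 0.5 * (math.exp(t) - 1) * math.cosh(beta) * EW

def dphi(t):
    # r.h.s. of (3.20)
    return 0.5 * z_star(t) ** 2 + 0.5 * math.exp(t) * math.cosh(beta) * EW

h = 1e-5
fd = (phi(0.3 + h) - phi(0.3 - h)) / (2 * h)   # finite-difference derivative
print(fd, dphi(0.3))                           # should agree
print(dphi(0.1) < dphi(0.2) < dphi(0.3))       # monotone, hence phi convex
```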

Remark 3.1

(Moment generating function of total degree for \(\mathrm {GRG}_{n}(\varvec{w})\)) At zero magnetic field \(B=0\) and infinite temperature \(\beta =0\), the annealed average of any function of the graph coincides with the average with respect to the law of the graph. Then, \(\varphi _{0,0,n}(t)\) is the cumulant generating function of the number of edges of the \(\mathrm {GRG}_{n}(\varvec{w})\). In this case, (3.18) gives

$$\begin{aligned} \varphi _{0, 0}(t)=\frac{1}{2} ({\mathrm e}^t-1) \mathbb E{[W]}, \end{aligned}$$
(3.21)

because \(z^*(t,0,0)=z^*(0,0)=0\) for all t, which can also be seen by direct computation.
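Consistently with Remark 3.1, at \(\beta =B=0\) the annealed measure reduces to the law of the graph, and differentiating (3.21) at \(t=0\) gives the limiting edge density \(\frac{1}{2}\mathbb E[W]\). A minimal simulation sketch, again with illustrative weights alternating between 1 and 3 (so \(\frac{1}{2}\mathbb E[W]=1\)):

```python
import math
import random

# Illustration of Remark 3.1 at beta = B = 0: the annealed measure is the
# graph law, so |E_n|/n concentrates around E[W]/2. Hypothetical weights
# w_i alternating between 1 and 3 (E[W] = 2), an assumption for this sketch.
random.seed(0)
n = 2000
w = [1.0 if i % 2 == 0 else 3.0 for i in range(n)]
ell = sum(w)

edges = 0
for i in range(n):
    for j in range(i + 1, n):
        p = w[i] * w[j] / (ell + w[i] * w[j])
        if random.random() < p:          # edge ij present with probability p_ij
            edges += 1

print(edges / n)   # close to E[W]/2 = 1
```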

4 Degree Distribution Under Annealed Measure: Proof of Theorem 1.8

Given \((D_i)_{i\in [n]}\), the degree sequence of the \(\mathrm {GRG}_{n}(\varvec{w})\), we want to compute its moment generating function with respect to the annealed measure \({\mu }^\mathrm{{an}}_{{n}}\), i.e.,

$$\begin{aligned} {{\texttt {\textit{g}}}}_{\beta ,B,n}(\mathbf{s})=\mathbb E_{{\mu }^\mathrm{{an}}_n}\Big [{\mathrm e}^{\sum _{i \in [n]} s_i D_i} \Big ], \end{aligned}$$

for \(\mathbf{s}=(s_1,s_2,\ldots , s_{n}) \in {\mathbb R}^{n}\). Since \(D_i=\sum _{j \ne i} I_{ij}\), where \((I_{ij})_{1\le i < j \le n}\) are independent Bernoulli variables with parameters \(p_{ij}\) representing the indicators that the edges ij exist, and \(I_{ji}=I_{ij}\), we can write \( \sum _{i \in [n]} s_i D_i= \sum _{i<j} I_{ij}(s_i+s_j)\). Then, recalling (3.2), we have

$$\begin{aligned} {{\texttt {\textit{g}}}}_{\beta ,B,n}(\mathbf{s})= R_{n,\beta ,B}(\mathbf{t}(\mathbf{s})) \end{aligned}$$
(4.1)

where we define \(t_{ij}(\mathbf{s}) := s_i +s_j\) for \(1\le i < j \le n\). Furthermore, by (3.3),

$$\begin{aligned} {{\texttt {\textit{g}}}}_{\beta ,B,n}(\mathbf{s}) =\frac{\mathcal{A}_{n}(\mathbf{t}(\mathbf{s}),\beta , B)}{\mathcal{A}_{n}({\mathbf {0}},\beta , B)}, \end{aligned}$$
(4.2)

where we recall that \(\mathcal{A}_{n}(\mathbf{t},\beta , B)\) was defined in (3.6). This is the starting point of our analysis. In Sect. 4.1 we simplify the expression for the moment generating function of the degrees by using the mapping of the annealed Ising measure to the rank-1 inhomogeneous Curie–Weiss model. We then investigate the degree of a fixed vertex under the annealed Ising model in Sect. 4.2 and we consider finitely many degrees in Sect. 4.3.

4.1 Moment Generating Function of the Degrees

We start by rewriting the generating function of the degree \({{\texttt {\textit{g}}}}_{\beta ,B,n}(\mathbf{s})\). To this aim, due to (4.2), we need to rewrite \(\mathcal{A}_{n}({\mathbf {t}}(\mathbf{s}),\beta , B)\). This can be done by again using the Hubbard–Stratonovich identity. Introducing a standard Gaussian variable Z, we extend the arguments in [19] to show that

$$\begin{aligned} \mathcal{A}_{n}({\mathbf {t}}(\mathbf{s}),\beta , B)= & {} G_{n}(\mathbf{t}(\mathbf{s}),\beta )\, 2^{n}\, {\mathrm e}^{-\kappa (\mathbf{s})} \,\mathbb E_Z \left[ \exp \left\{ \sum _{i=1}^{n} \log \cosh \left( a_{n}(\beta ) {\mathrm e}^{s_i} w_i Z+B \right) \right\} \right] \nonumber \\&\times \,(1+o(1)), \end{aligned}$$
(4.3)

where \(a_{n}(\beta ) =\sqrt{{\sinh (\beta )/\ell _{n}}}\), \(\kappa (\mathbf{s})\) is an appropriate constant (identified in Lemma 4.1 below) and \(\mathbb E_Z\) denotes the expectation w.r.t. the Gaussian variable Z. This boils down to proving convergence of the moment generating function, which requires sharp asymptotics for \(\mathcal{A}_{n}({\mathbf {t}}(\mathbf{s}),\beta , B)\), while in [19] it sufficed to study the logarithmic asymptotics.

To see (4.3), we define the \(\mathbf{s}\)-dependent rank-1 inhomogeneous Curie–Weiss model measure as

$$\begin{aligned} \mu ^{\scriptscriptstyle {\mathrm {ICW}}}_{n, {\mathbf {s}}}(\sigma )=\frac{1}{Z^{\scriptscriptstyle {\mathrm {ICW}}}_{n,\mathbf {s}}(\sinh (\beta ),B)}{\mathrm e}^{\frac{1}{2} \sum _{i,j} \sinh (\beta ) {\mathrm e}^{s_i}{\mathrm e}^{s_j}\frac{w_iw_j}{\ell _{n}} \sigma _i\sigma _j +B\sum _{i\in [n]} \sigma _i}, \end{aligned}$$

with \(Z^{\scriptscriptstyle {\mathrm {ICW}}}_{n,\mathbf {s}}(\sinh (\beta ),B)\) the appropriate partition function. Then, using (3.6), we can follow [13, (4.64)] to obtain that

$$\begin{aligned} \mathcal{A}_{n}({\mathbf {t}(\mathbf{s})},\beta , B)=G_{n}({\mathbf {t}}(\mathbf{s}),\beta ) Z^{\scriptscriptstyle {\mathrm {ICW}}}_{n,\mathbf {s}}(\sinh (\beta ),B) \,\mathbb {E}_{\mu ^{\scriptscriptstyle {\mathrm {ICW}}}_{n, {\mathbf {s}}}}\Big [{\mathrm e}^{F_{n}({\mathbf {s}})}\Big ], \end{aligned}$$
(4.4)

where now

$$\begin{aligned} F_{n}(\mathbf{s})=\frac{1}{2} \sum _{i,j} \big [\beta _{ij}(s_i+s_j)-{\mathrm e}^{s_i + s_j}\sinh (\beta )\frac{w_iw_j}{\ell _{n}}\big ]\sigma _i\sigma _j, \end{aligned}$$

and we have adapted notation from \(E_{n}\) in [13, (4.64)] to \(F_{n}\) here to avoid confusion with the total number of edges. To further simplify (4.4), we observe that, following the proof of [13, Lemma 4.1], one has

$$\begin{aligned} Z^{\scriptscriptstyle {\mathrm {ICW}}}_{n,\mathbf {s}}(\sinh (\beta ),B) = 2^{n}\mathbb E_Z \left[ \exp \left\{ \sum _{i=1}^{n} \log \cosh \left( a_{n}(\beta ) {\mathrm e}^{s_i} w_i Z+B \right) \right\} \right] . \end{aligned}$$

Further, under Condition 1.1(a)–(c), we can follow the proof of [13, Lemma 4.7] to identify the limit of \(\mathbb {E}_{\mu ^{\scriptscriptstyle {\mathrm {ICW}}}_{n, {\mathbf {s}}}}\Big [{\mathrm e}^{F_{n}(\mathbf{s})}\Big ]\), as formulated in the next lemma:

Lemma 4.1

(Asymptotics of the correction term) Define \(W_{n}(\mathbf{{s}})=w_U{\mathrm e}^{s_U}\), where \(U\in [n]\) is a uniform vertex. Assume that \(\mathbf{s}\) is such that \(W_{n}(\mathbf{{s}}){\mathop {\longrightarrow }\limits ^{\scriptscriptstyle {\mathcal D}}}W(\mathbf{{s}})\) and \(\mathbb E[W_{n}(\mathbf{{s}})^2]\rightarrow \mathbb E[W(\mathbf{{s}})^2]\). Then, there exists \(\kappa (\mathbf {s})\ge 0\) such that

$$\begin{aligned} \lim _{n\rightarrow \infty } \mathbb {E}_{\mu ^{\scriptscriptstyle {\mathrm {ICW}}}_{n, {\mathbf {s}}}}\Big [{\mathrm e}^{F_{n}(\mathbf{s})}\Big ]={\mathrm e}^{-\kappa (\mathbf {s})}. \end{aligned}$$

In particular, \(\kappa (\mathbf {s})=\kappa (\mathbf {0})\) when \(\mathbf{s}=(s_1,\ldots ,s_n)\) only contains finitely many non-zero coordinates.

Proof of Lemma 4.1

We follow the proof of [13, Lemma 4.7] to obtain that

$$\begin{aligned} F_{n}(\mathbf{s})=-\frac{1}{2} \sinh (\beta )\cosh (\beta ) \left( \sum _{i\in [n]} {\mathrm e}^{s_{i}} \sigma _i\frac{w_i^2}{\ell _{n}}\right) ^2+o(1). \end{aligned}$$

Due to the negativity of this term, Lemma 4.1 follows when we prove that, for some \(\bar{\kappa }(\mathbf{s})\),

$$\begin{aligned} \sum _{i\in [n]} {\mathrm e}^{s_{i}} \sigma _i\frac{w_i^2}{\ell _{n}}{\mathop {\longrightarrow }\limits ^{\scriptscriptstyle {\mathbb P}}}\bar{\kappa }(\mathbf{s}), \end{aligned}$$
(4.5)

and then Lemma 4.1 follows with \(\kappa (\mathbf{s})=\tfrac{1}{2}(\bar{\kappa }(\mathbf{s}))^2 \sinh (\beta )\cosh (\beta )\). We proceed to prove (4.5), which, in turn, is equivalent to proving that, for every fixed \(r\in {\mathbb R}\), as \(n\rightarrow \infty \)

$$\begin{aligned} \mathbb {E}_{\mu ^{\scriptscriptstyle {\mathrm {ICW}}}_{n, {\mathbf {s}}}} \Big [{\mathrm e}^{r\sum _{i\in [n]} {\mathrm e}^{s_{i}} \sigma _i\frac{w_i^2}{\ell _{n}}}\Big ]\rightarrow {\mathrm e}^{r\bar{\kappa }(\mathbf{s})}. \end{aligned}$$

Following [19, (4.71)], we again apply the Hubbard–Stratonovich identity, which gives

$$\begin{aligned} \mathbb {E}_{\mu ^{\scriptscriptstyle {\mathrm {ICW}}}_{n, {\mathbf {s}}}} \Big [{\mathrm e}^{r\sum _{i\in [n]} {\mathrm e}^{s_{i}} \sigma _i\frac{w_i^2}{\ell _{n}}}\Big ]= & {} \frac{\sum _{\sigma \in \Omega _n}\mathbb {E}_Z\Big [\exp \Big \{\sum _i\Big (\frac{r}{\ell _n}{\mathrm e}^{s_{i}}w_i^2 + \sqrt{\frac{\sinh (\beta )}{\ell _n}}{\mathrm e}^{s_{i}} w_i Z + B \Big )\sigma _i\Big \}\Big ]}{\sum _{\sigma \in \Omega _n}\mathbb {E}_Z\Big [\exp \Big \{\sum _i\Big ( \sqrt{\frac{\sinh (\beta )}{\ell _n}}{\mathrm e}^{s_{i}} w_i Z + B \Big )\sigma _i\Big \}\Big ]}. \end{aligned}$$

The sum over the spins can now be performed, yielding

$$\begin{aligned} \mathbb {E}_{\mu ^{\scriptscriptstyle {\mathrm {ICW}}}_{n, {\mathbf {s}}}} \Big [{\mathrm e}^{r\sum _{i\in [n]} {\mathrm e}^{s_{i}} \sigma _i\frac{w_i^2}{\ell _{n}}}\Big ]= & {} \frac{\mathbb {E}_Z\Big [\exp \Big \{\sum _i \log \cosh \Big (\frac{r}{\ell _n}{\mathrm e}^{s_{i}}w_i^2 + \sqrt{\frac{\sinh (\beta )}{\ell _n}}{\mathrm e}^{s_{i}} w_i Z + B \Big )\Big \}\Big ]}{\mathbb {E}_Z\Big [\exp \Big \{\sum _i \log \cosh \Big ( \sqrt{\frac{\sinh (\beta )}{\ell _n}}{\mathrm e}^{s_{i}} w_i Z + B \Big )\Big \}\Big ]}. \end{aligned}$$

Recalling the random variable \(W_{n}(\mathbf{{s}})=w_U{\mathrm e}^{s_U}\), where \(U\in [n]\) is a uniform vertex, we can rewrite the previous expression as

$$\begin{aligned}&\mathbb {E}_{\mu ^{\scriptscriptstyle {\mathrm {ICW}}}_{n, {\mathbf {s}}}} \Big [{\mathrm e}^{r\sum _{i\in [n]} {\mathrm e}^{s_{i}} \sigma _i\frac{w_i^2}{\ell _{n}}}\Big ] \nonumber \\&\quad = \frac{\int _{\mathbb {R}} \exp \Big \{-z^2/2 + n \mathbb {E}\Big [\log \cosh \Big (\frac{r}{\ell _n} W_n^2(\mathbf{s}/2)+ \sqrt{\frac{\sinh (\beta )}{\ell _n}}W_n(\mathbf{s}) z + B \Big )\Big ]\Big \} dz}{\int _{\mathbb {R}} \exp \Big \{-z^2/2 + n \mathbb {E}\Big [\log \cosh \Big (\sqrt{\frac{\sinh (\beta )}{\ell _n}}W_n(\mathbf{s}) z + B \Big )\Big ]\Big \} dz}. \end{aligned}$$

We perform the change of variables \(z\mapsto \sqrt{n}\, z\) (replacing \(z/\sqrt{n}\) by z), so that

$$\begin{aligned}&\mathbb {E}_{\mu ^{\scriptscriptstyle {\mathrm {ICW}}}_{n, {\mathbf {s}}}} \left[ {\mathrm e}^{r\sum _{i\in [n]} {\mathrm e}^{s_{i}} \sigma _i\frac{w_i^2}{\ell _{n}}}\right] \\&= \frac{\int _{\mathbb {R}} \exp \Big \{-n z^2/2 + n \mathbb {E}\Big [\log \cosh \Big (\frac{r}{\ell _n} W_n^2(\mathbf{s}/2)+ \sqrt{\frac{\sinh (\beta )}{\mathbb {E}[W_n]}}W_n(\mathbf{s}) z + B \Big )\Big ]\Big \} dz}{\int _{\mathbb {R}} \exp \Big \{-n z^2/2 + n \mathbb {E}\Big [\log \cosh \Big (\sqrt{\frac{\sinh (\beta )}{\mathbb {E}[W_n]}}W_n(\mathbf{s}) z + B \Big )\Big ]\Big \} dz}. \end{aligned}$$

Assuming that \(W_{n}(\mathbf{{s}}){\mathop {\longrightarrow }\limits ^{\scriptscriptstyle {\mathcal D}}}W(\mathbf{{s}})\) for some limiting distribution, as well as \(\mathbb E[W_{n}(\mathbf{{s}})^2]\rightarrow \mathbb E[W(\mathbf{{s}})^2]\) (which in fact is a condition on \(\mathbf{{s}}\)), an application of the Laplace method yields

$$\begin{aligned} \mathbb {E}_{\mu ^{\scriptscriptstyle {\mathrm {ICW}}}_{n, {\mathbf {s}}}} \left[ {\mathrm e}^{r\sum _{i\in [n]} {\mathrm e}^{s_{i}} \sigma _i\frac{w_i^2}{\ell _{n}}}\right]= & {} \exp \left[ {r\mathbb E\left[ \tanh \left( \sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}}W(\mathbf{{s}})z^*(\mathbf{s}, \beta ,B) + B\right) \frac{W(\frac{\mathbf{{s}}}{2})^2}{\mathbb E[W]}\right] }\right] \\&\times \,(1+o(1)), \end{aligned}$$

where \(z^*(\mathbf{s}, \beta ,B)\) is the solution with the same sign as B of

$$\begin{aligned} z= \mathbb {E}\left[ \tanh \left( \sqrt{\frac{\sinh (\beta )}{\mathbb {E}[W]}}W(\mathbf{s}) z + B\right) \sqrt{\frac{\sinh (\beta )}{\mathbb {E}[W]}}\,W(\mathbf{s})\right] . \end{aligned}$$

All in all, the previous computation shows that (4.5) holds with

$$\begin{aligned} \bar{\kappa }(\mathbf{s})=\mathbb E\left[ \tanh \left( \sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}}W(\mathbf{{s}})z^*(\mathbf{s}, \beta ,B) + B\right) \frac{W({\mathbf{{s}}/2})^2}{\mathbb E[W]}\right] . \end{aligned}$$

When \(\mathbf{{s}}\) only has a finite number of non-zero coordinates, it holds that \(W_{n}(\mathbf{{s}}){\mathop {\longrightarrow }\limits ^{\scriptscriptstyle {\mathcal D}}}W\) and \(\mathbb E[W_{n}(\mathbf{{s}})^2]\rightarrow \mathbb E[W^2]\), so that \(\bar{\kappa }(\mathbf{s})=\bar{\kappa }(\mathbf{0})\), as required. \(\square \)

Armed with (4.3), we recall (4.2) and thus conclude that the moment generating function of the degrees is given by

$$\begin{aligned} {{\texttt {\textit{g}}}}_{\beta ,B,n}(\mathbf{s})= (1+o(1)){\mathrm e}^{\kappa (\mathbf {0})-\kappa (\mathbf {s})} \frac{G_{n}(\mathbf{t}(\mathbf{s}),\beta )\mathbb E_Z \left[ \exp \sum _{i=1}^{n} \log \cosh \left( a_{n}(\beta ) {\mathrm e}^{s_i} w_i Z+B \right) \right] }{G_{n}(\mathbf{0},\beta )\mathbb E_Z \left[ \exp \sum _{i=1}^{n} \log \cosh \left( a_{n}(\beta ) w_i Z+B \right) \right] }, \end{aligned}$$
(4.6)

with

$$\begin{aligned} a_{n}(\beta ) =\sqrt{\frac{\sinh (\beta )}{\ell _{n}}}=O\left( n^{-\frac{1}{2} }\right) . \end{aligned}$$

4.2 Degree of a Fixed Vertex: Proof of Theorem 1.8

We want to study the distribution of the degree of a fixed vertex. Without loss of generality, we fix vertex \(i=1\). Thus, we choose \(\mathbf{s}=\mathbf{s}_1\) with \(\mathbf{s}_1=(s,0,\ldots ,0)\), and write

$$\begin{aligned} \exp \left[ \sum _{i=1}^{n} \log \cosh \left( a_{n}(\beta ) {\mathrm e}^{s_i} w_i Z+B \right) \right]= & {} \frac{\cosh (a_{n}(\beta ) {\mathrm e}^s w_1 Z+B)}{\cosh (a_{n}(\beta ) w_1 Z+B)}\\&\times \exp \left[ \sum _{i=1}^{n} \log \cosh \left( a_{n}(\beta ) w_i Z+B \right) \right] . \end{aligned}$$

Defining

$$\begin{aligned} h_{n}(Z;\beta ,B):= & {} \exp \left\{ \sum _{i=1}^{n} \log \cosh \left( a_{n}(\beta ) w_i Z+B \right) \right\} \nonumber \\= & {} \exp \left\{ n\mathbb E_{W_n} \left[ \log \cosh \left( a_{n}(\beta ) W_{n} Z+B \right) \right] \right\} , \end{aligned}$$
(4.7)

where \(\mathbb E_{W_n}\) denotes the average w.r.t. \(W_{n}=w_U\), with U a uniformly chosen vertex in \([n]\), we can introduce the probability measure on \(\mathbb R\) given by

$$\begin{aligned} \gamma _{\beta ,B,n}(\cdot ):= \frac{ \mathbb E_Z [\; \cdot \; h_{n}(Z;\beta ,B) ] }{ \mathbb E_Z [ h_{n}(Z;\beta ,B) ] }, \end{aligned}$$

and write (4.6) as

$$\begin{aligned} {{\texttt {\textit{g}}}}_{\beta ,B,n}(\mathbf{s}_1)=(1+o(1)) \frac{ G_{n}(\mathbf{t}(\mathbf{s}_1),\beta )}{G_{n}(\mathbf{0},\beta )} \mathbb {E}_{\gamma _{\beta ,B,n}}\left( \frac{\cosh \left( a_{n}(\beta ) {\mathrm e}^s w_1 Z+B \right) }{\cosh \left( a_{n}(\beta ) w_1 Z+B \right) } \right) , \end{aligned}$$
(4.8)

since, by Lemma 4.1, \(\kappa (\mathbf {s}_1)=\kappa (\mathbf {0})\).

Now, under the measure \(\gamma _{\beta ,B,n}\), \(Z/\sqrt{n}{\mathop {\longrightarrow }\limits ^{\scriptscriptstyle {\mathbb P}}}z^*(\beta ,B)\), which can be seen by applying the Laplace method to the integral

$$\begin{aligned} \mathbb E_Z [\; \cdot \; h_{n}(Z;\beta ,B) ] =\int _{-\infty }^{+\infty } \; \cdot \; \exp \left[ \sum _{i=1}^{n} \log \cosh \left( a_{n}(\beta ) w_i z+B \right) \right] {\mathrm e}^{-z^2/2}\frac{dz}{\sqrt{2\pi }}. \end{aligned}$$

In fact, this is precisely the interpretation of \(z^*(\beta ,B)\) in Theorem 1.2. As a result,

$$\begin{aligned} \mathbb {E}_{\gamma _{\beta ,B,n}}\left( \frac{\cosh \left( a_{n}(\beta ) {\mathrm e}^s w_1 Z+B \right) }{\cosh \left( a_{n}(\beta ) w_1 Z+B \right) } \right) \rightarrow \frac{\cosh \Big (z^*(\beta ,B) {\mathrm e}^s w_1\sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}} +B \Big )}{\cosh \Big (z^*(\beta ,B) w_1\sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}} +B \Big )}. \end{aligned}$$

Thus,

$$\begin{aligned} \mathbb E_{{\mu }^\mathrm{{an}}_n}\big [{\mathrm e}^{s D_1}\big ]= (1+o(1)) \frac{G_{n}(\mathbf{t}(\mathbf{s}_1),\beta )}{G_{n}(\mathbf{0},\beta )} \frac{\cosh \Big (z^*(\beta ,B){\mathrm e}^s w_1 \sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}} +B \Big )}{\cosh \Big (z^*(\beta ,B)w_1\sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}} +B \Big )}, \end{aligned}$$
(4.9)

and we are left with the problem of studying the limit of \(G_{n}(\mathbf{t}(\mathbf{s}_1),\beta )/G_{n}(\mathbf{0},\beta )\). We have

$$\begin{aligned} \frac{ G_{n}(\mathbf{t}(\mathbf{s}_1),\beta ) }{G_{n}(\mathbf{0},\beta )\, }= & {} \frac{{\mathrm e}^{-\beta _{11}(2s)/2} \prod _{j>1} C_{1j}(s)\, \prod _{1<i<j} C_{ij}(0)\, \prod _{i>1} {\mathrm e}^{-\beta _{ii}(0)/2} }{\, {\mathrm e}^{-\beta _{11}(0)/2} \prod _{j>1} C_{1j}(0)\prod _{1<i<j} C_{ij}(0)\, \prod _{i>1} {\mathrm e}^{-\beta _{ii}(0)/2} }\nonumber \\= & {} \frac{{\mathrm e}^{-\beta _{11}(2s)/2}}{{\mathrm e}^{-\beta _{11}(0)/2}}\prod _{j>1} \left( \frac{C_{1j}(s)}{C_{1j}(0)} \right) , \end{aligned}$$
(4.10)

where (3.7) has been used. From the definition of the \(C_{ij}\)'s, we get

$$\begin{aligned} \prod _{j>1} \left( \frac{C_{1j}(s)}{C_{1j}(0)} \right) = \prod _{j>1} \frac{{\mathrm e}^{s} \cosh (\beta ) p_{1j}+1-p_{1j} }{ \cosh (\beta ) p_{1j}+1-p_{1j}} \cdot \prod _{j>1} \frac{\cosh (\beta _{1j}(0) )}{\cosh (\beta _{1j}(s))}. \end{aligned}$$
(4.11)

Putting \(p_{ij}=w_i w_j/(\ell _{n}+w_i w_j)\), the first factor in the r.h.s. is rewritten as

$$\begin{aligned} \prod _{j>1} \frac{{\mathrm e}^s \cosh (\beta ) p_{1j}+1-p_{1j} }{ \cosh (\beta ) p_{1j}+1-p_{1j}}= \prod _{j>1} \frac{\ell _{n} + {\mathrm e}^s \cosh (\beta )w_1 w_j }{\ell _{n} + \cosh (\beta ) w_1 w_j} = {\mathrm e}^{\cosh (\beta ) w_1 ({\mathrm e}^s-1)}(1+o(1)) \end{aligned}$$

as \(n\rightarrow \infty \). Next, we consider the second factor in the r.h.s. of (4.11). Arguing as in (3.15) in the previous section,

$$\begin{aligned} \sum _{j>1} \log \left( \frac{\cosh (\beta _{1j}(0))}{\cosh (\beta _{1j} (s))} \right) = \sum _{j>1} O(p_{1j}^2)\le w_1^2\sum _{j>1} \frac{w_j^2}{\ell _{n}^2}=o(1), \end{aligned}$$
(4.12)

since \(\max _{j\in [n]} w_j=o(n)\). Taking the exponential of the previous relation, we obtain

$$\begin{aligned} \prod _{j>1} \frac{\cosh (\beta _{1j}(0) )}{\cosh (\beta _{1j}(s))} =1+o(1), \end{aligned}$$

as \(n\rightarrow \infty \). Finally, since \(\beta _{11}(s)=o(1)\) as \(n\rightarrow \infty \) (because \(p_{11}\rightarrow 0\) in the same limit), the remaining factor \({\mathrm e}^{-\beta _{11}(2s)/2}/{\mathrm e}^{-\beta _{11}(0)/2}\) in the r.h.s. of (4.10) is also \(1+o(1)\). This proves that

$$\begin{aligned} \frac{ G_{n}(\mathbf{t}(\mathbf{s}_1),\beta ) }{G_{n}(\mathbf{0},\beta )\, } = {\mathrm e}^{\cosh (\beta ) w_1 ({\mathrm e}^s-1)}(1+o(1)), \end{aligned}$$

and from (4.9), we finally obtain

$$\begin{aligned} \mathbb E_{{\mu }^\mathrm{{an}}_n}\big [{\mathrm e}^{sD_1}\big ]= (1+o(1)) {\mathrm e}^{\cosh (\beta ) w_1 ({\mathrm e}^s-1)}\frac{\cosh \Big (z^*(\beta ,B) {\mathrm e}^s w_1 \sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}} +B \Big )}{\cosh \Big (z^*(\beta ,B) w_1 \sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}} +B \Big )}, \end{aligned}$$

as required.\(\square \)
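As a check of the formula just derived, note that at \(\beta =B=0\) (so that \(\cosh (\beta )=1\) and \(z^*=0\)) the limit reduces to the Poisson MGF \({\mathrm e}^{w_1({\mathrm e}^s-1)}\), while the exact MGF of \(D_1=\sum _{j>1}I_{1j}\) is a product over independent Bernoulli edges. A sketch with illustrative weights (an assumption, as before):

```python
import math

# Check of Theorem 1.8 at beta = B = 0: the claimed limit of E[e^{s D_1}] is
# the Poisson(w_1) MGF exp(w_1 (e^s - 1)); the exact MGF of a sum of
# independent Bernoulli(p_1j) variables is the product below. Hypothetical
# weights alternating between 1 and 3 (so w_1 = 1).
n = 5000
w = [1.0 if i % 2 == 0 else 3.0 for i in range(n)]
ell = sum(w)
s = 0.4

log_exact = 0.0
for j in range(1, n):
    p = w[0] * w[j] / (ell + w[0] * w[j])
    log_exact += math.log(1 + p * (math.exp(s) - 1))   # log E[e^{s I_{1j}}]
exact = math.exp(log_exact)

limit = math.exp(w[0] * (math.exp(s) - 1))
print(exact, limit)   # close for large n
```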

4.3 Degree of a Fixed Number of Vertices: Proof of Theorem 1.10

We can generalize the previous computation by considering the degrees \((D_1,D_2,\ldots , D_m)\), with \(m\in [n]\) fixed. The generating function of this random vector can be obtained by plugging \(\mathbf{s}= \mathbf{s}_m\) with \(\mathbf{s}_m=(s_1,s_2,\ldots ,s_m,0,\ldots ,0)\) into (4.6). By the same arguments as in the previous section, we obtain

$$\begin{aligned} {g}_{\beta ,B,n}(\mathbf{s}_m)=(1+o(1)) \frac{ G_{n}({\mathbf{t}}(\mathbf{s}_m),\beta )}{G_{n}(\mathbf{0},\beta )} \mathbb {E}_{\gamma _{\beta ,B,n}}\left( \prod _{i=1}^m \frac{\cosh \left( a_{n}(\beta ) {\mathrm e}^{s_i} w_i Z+B \right) }{\cosh \left( a_{n}(\beta ) w_i Z+B \right) } \right) , \end{aligned}$$
(4.13)

with

$$\begin{aligned} \mathbb {E}_{\gamma _{\beta ,B,n}}\left( \prod _{i=1}^m \frac{\cosh \left( a_{n}(\beta ) {\mathrm e}^{s_i} w_i Z+B \right) }{\cosh \left( a_{n}(\beta ) w_i Z+B \right) } \right) \rightarrow \prod _{i=1}^m\frac{\cosh \Big (z^*(\beta ,B) {\mathrm e}^{s_i} w_i\sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}} +B \Big )}{\cosh \Big (z^*(\beta ,B) w_i\sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}} +B \Big )} \end{aligned}$$
(4.14)

as \(n\rightarrow \infty \). Now we have to study the limit of \( G_{n}({\mathbf{t}}(\mathbf{s}_m),\beta )/ G_{n}(\mathbf{0},\beta )\). From the definition of \(G_{n}({\mathbf {t}},\beta )\) given in (3.7) and recalling that \(t_{ij}(\mathbf{s})=s_i+s_j\),

$$\begin{aligned} \frac{ G_{n}({\mathbf{t}}(\mathbf{s}_m),\beta ) }{G_{n}(\mathbf{0},\beta )} = \prod _{ 1\le i < j \le m} \left( \frac{C_{ij}(s_i+s_j)}{C_{ij}(0)} \right) \cdot \prod _{\substack{1 \le i \le m\\ j > m}}\left( \frac{C_{ij} (s_{i})}{ C_{ij}(0) } \right) \cdot \prod _{i=1}^m \left( \frac{{\mathrm e}^{-\beta _{ii}(2s_i)/2}}{{\mathrm e}^{-\beta _{ii}(0)/2}} \right) . \end{aligned}$$
(4.15)

We analyze the three factors separately:

\(\rhd \) First and third factors of (4.15). By the definition of \(C_{ij} (t_{ij})\),

$$\begin{aligned} \prod _{ 1\le i< j \le m} \left( \frac{C_{ij}(s_i+s_j)}{C_{ij}(0)} \right)= & {} \prod _{ 1\le i< j \le m} \frac{{\mathrm e}^{s_i} {\mathrm e}^{s_j} \cosh (\beta ) p_{ij}+1-p_{ij} }{ \cosh (\beta ) p_{ij}+1-p_{ij}}\nonumber \\&\cdot \prod _{ 1\le i < j \le m} \frac{\cosh (\beta _{ij}(0) )}{\cosh (\beta _{ij}(s_i+s_j))}, \end{aligned}$$
(4.16)

where, by definition of \(p_{ij}\),

$$\begin{aligned} \prod _{ 1\le i< j \le m} \frac{{\mathrm e}^{s_i} {\mathrm e}^{s_j} \cosh (\beta ) p_{ij}+1-p_{ij} }{ \cosh (\beta ) p_{ij}+1-p_{ij}} = \prod _{1\le i < j \le m} \frac{\ell _{n} + {\mathrm e}^{s_i} {\mathrm e}^{s_j} \cosh (\beta )w_i w_j }{\ell _{n} + \cosh (\beta ) w_i w_j}. \end{aligned}$$

We show that this factor is \(1+o(1)\). Indeed, following [24], we expand \(\log (1+x)\) to obtain

$$\begin{aligned} \log \prod _{1\le i< j \le m} \frac{\ell _{n} + {\mathrm e}^{s_i} {\mathrm e}^{s_j} \cosh (\beta )w_i w_j }{\ell _{n} + \cosh (\beta ) w_i w_j}= & {} \frac{\cosh (\beta )}{\ell _{n}} \sum _{1\le i< j \le m} w_i w_j ({\mathrm e}^{s_i} {\mathrm e}^{s_j} -1)\\&+ \frac{\cosh (\beta )}{\ell _{n}^2} \, O\Big (\sum _{1\le i < j \le m} w^2_i w^2_j \Big )\\= & {} O(n^{-1}), \end{aligned}$$

since \(\ell _{n}=O(n)\) and m is fixed. The second factor in the r.h.s. of (4.16) and the third factor of (4.15) converge to 1. Thus, we have shown that the first and third factors of (4.15) are \(1+o(1)\).

\(\rhd \) Second factor of (4.15). For any fixed \(1\le i \le m\),

$$\begin{aligned} \prod _{ j>m } \left( \frac{C_{ij}(s_{i})}{ C_{ij}(0) } \right) = \prod _{j> m} \frac{\ell _{n} + {\mathrm e}^{s_i} \cosh (\beta )w_i w_j }{\ell _{n} + \cosh (\beta ) w_i w_j} \cdot \prod _{j > m} \frac{\cosh (\beta _{ij}(0) )}{\cosh (\beta _{ij}(s_i))}. \end{aligned}$$

The second factor in the r.h.s. of the previous display can be treated as in (4.12), showing that it is \(1+o(1)\), while the first factor is close to the generating function of \(D_i\) in a GRG with vertex set \(\{i, m+1,\ldots , n\}\) and with the weight of vertex i given by \(\cosh (\beta ) w_i\). We can deal with this term as before, that is,

$$\begin{aligned} \log \prod _{j> m} \frac{\ell _{n} + {\mathrm e}^{s_i} \cosh (\beta )w_i w_j }{\ell _{n} + \cosh (\beta ) w_i w_j} =\cosh (\beta ) w_i ({\mathrm e}^{s_i}-1) \frac{1}{\ell _{n}} \sum _{j> m} w_j + \frac{\cosh (\beta )}{\ell ^2_{n}}\, O\left( \sum _{j>m} w_j^2\right) . \end{aligned}$$

Since m is fixed, \( \frac{1}{\ell _{n}} \sum _{j > m} w_j =1+o(1)\), and \(\frac{1}{\ell ^2_{n}}\, O\big ( \sum _{j>m} w_j^2\big )\le \max _{i\in [n]} w_i/\ell _n=o(1)\) by Condition 1.1. Then,

$$\begin{aligned} \prod _{j > m} \frac{\ell _{n} + {\mathrm e}^{s_i} \cosh (\beta )w_i w_j }{\ell _{n} + \cosh (\beta ) w_i w_j} = {\mathrm e}^{ \cosh (\beta ) w_i ({\mathrm e}^{s_i}-1) }(1+o(1)), \end{aligned}$$

and the second factor in (4.15) is \( \prod _{i=1}^m {\mathrm e}^{ \cosh (\beta ) w_i ({\mathrm e}^{s_i}-1) }(1+o(1)). \) Thus we conclude that

$$\begin{aligned} \frac{ G_{n}({\mathbf{t}}(\mathbf{s}_m),\beta ) }{G_{n}(\mathbf{0},\beta )} = \prod _{i=1}^m {\mathrm e}^{ \cosh (\beta ) w_i ({\mathrm e}^{s_i}-1) }(1+o(1)). \end{aligned}$$

Going back to (4.13), we finally obtain that

$$\begin{aligned} \mathbb E_{{\mu }^\mathrm{{an}}_n}\Big [{\mathrm e}^{\sum _{i=1}^m s_i D_i} \Big ] = \prod _{i=1}^m {\mathrm e}^{ \cosh (\beta ) w_i ({\mathrm e}^{s_i}-1) } \prod _{i=1}^m\frac{\cosh \Big (z^*(\beta ,B) {\mathrm e}^{s_i} w_i\sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}} +B \Big )}{\cosh \Big (z^*(\beta ,B) w_i\sqrt{\frac{\sinh (\beta )}{\mathbb E[W]}} +B \Big )} (1+o(1)), \end{aligned}$$

as required.\(\square \)
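As for Theorem 1.8, the result can be checked at \(\beta =B=0\), where the claimed limit factorizes into Poisson\((w_i)\) MGFs, while the exact joint MGF for independent Bernoulli edges is \(\prod _{i<j}(1+p_{ij}({\mathrm e}^{s_i+s_j}-1))\) with \(s_j=0\) for \(j>m\). A sketch with illustrative weights (an assumption for illustration):

```python
import math

# Check of Theorem 1.10 at beta = B = 0: the joint MGF of (D_1, ..., D_m)
# factorizes asymptotically into Poisson(w_i) MGFs. Only edges touching
# [m] = {1, ..., m} contribute, since s_j = 0 for j > m. Hypothetical
# weights alternating between 1 and 3.
n, m = 4000, 3
w = [1.0 if i % 2 == 0 else 3.0 for i in range(n)]
ell = sum(w)
s = [0.3, -0.2, 0.1] + [0.0] * (n - m)

log_exact = 0.0                 # log prod_{i<j} (1 + p_ij (e^{s_i+s_j} - 1))
for i in range(m):              # pairs with both i, j > m contribute a factor 1
    for j in range(i + 1, n):
        p = w[i] * w[j] / (ell + w[i] * w[j])
        log_exact += math.log(1 + p * (math.exp(s[i] + s[j]) - 1))

log_limit = sum(w[i] * (math.exp(s[i]) - 1) for i in range(m))
print(log_exact, log_limit)   # close for large n
```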