1 Introduction

A network (graph) consists of a set of agents and a set of pairwise interactions among the agents. Networks are canonical models that capture relations within or between data sets. Due to the increasing popularity of relational data, network data analysis has been a primary research topic in statistics, machine learning and many other scientific fields (Abbe 2018; Bickel and Sarkar 2016; Goldenberg et al. 2010; Kolaczyk 2009; Newman 2009). One of the fundamental problems in network data analysis is to understand the structural properties of a given network. The structure of a small network can be easily described by its visualization. However, larger networks can be difficult to envision and describe. It is thus important to have several summary statistics that provide meaningful insight into the structure of a network. Based on these statistics, we are able to compare networks or classify them according to the properties that they exhibit. There is a wealth of descriptive statistics that measure some aspect of the structure or characteristics of a network. For example, the diameter of a network measures the maximum distance between two individuals; the global clustering coefficient measures the extent to which individuals in a graph tend to cluster together; the modularity measures the strength of division of a network into subgroups.

Summary statistics of networks are sometimes termed topological indices, especially in chemical or pharmacological science (Ma et al. 2018). One of the most popular topological indices is the Randić index invented in Randić (1975). The Randić index measures the extent of branching of a network (Bonchev and Trinajstic 1978; Randić 1975). It was observed that the Randić index is strongly correlated with a variety of physico-chemical properties of alkanes (Randić 1975). The Randić index plays a central role in understanding quantitative structure–property and structure–activity relations in chemistry and pharmacology (Randić 2008; Randic et al. 2016). In subsequent years, the Randić index has found countless applications. For instance, it is used to characterize and quantify the similarity between different networks or subgraphs of the same network (Fourches and Tropsha 2013), it serves as a quantitative characterization of network heterogeneity (Estrada 2010), and graph robustness can be easily estimated by the Randić index (De Meo et al. 2018; Dattola et al. 2021). Moreover, the Randić index possesses a wealth of non-trivial and interesting mathematical properties (Bollobás and Erdos 1998; Bollobás et al. 1999; Cavers et al. 2010; Das et al. 2017; Li and Shi 2008). Motivated by the Randić index, various Randić-type indices have been introduced and have attracted great interest in recent years. Among them, the harmonic index is a well-known one (Fajtlowicz 1987; Favaron et al. 1993; Rodríguez and Sigarreta 2017; Zhong 2012).

One of the popular research topics in the study of topological indices is to derive bounds on the indices and study their asymptotic properties. Recently, Martinez-Martinez et al. (2020, 2021) performed numeric and analytic analyses of the Randić index and the harmonic index in the Erdős–Rényi random graph. Analytic upper and lower bounds on the two indices are obtained, and simulation studies show that the indices converge to one half of the number of nodes. Additionally, De Meo et al. (2018), Doslic et al. (2020) and Li et al. (2021) derive the expectations of variants of the Randić index in the Erdős–Rényi random graph. However, these results only apply to the Erdős–Rényi random graph, and the exact limits of the indices have not been studied theoretically.

In this paper, we shall derive the limits of the general Randić index and the general sum-connectivity index in an inhomogeneous Erdős–Rényi random graph. The general Randić index and the general sum-connectivity index contain the Randić index and the harmonic index as special cases, respectively. Thus our results theoretically validate the empirical observations in Martinez-Martinez et al. (2020, 2021) that the indices of the Erdős–Rényi random graph converge to one half of the number of nodes. In addition, our results explicitly describe how network heterogeneity affects the indices. We also observe that the limits of the Randić index and the harmonic index do not depend on the sparsity of a network, while the limits of their variants do. In this sense, the Randić index and the harmonic index are preferable to their variants as measures of network structure.

The structure of the article is as follows. In Sect. 2 we present the main results. Section 3 summarizes simulation results and real data application. The proof is deferred to Sect. 4.

Notations: Let \(c_1,c_2\) be positive constants and \(n_0\) be a positive integer. For two positive sequences \(a_n\), \(b_n\), denote \(a_n\asymp b_n\) if \(c_1\le \frac{a_n}{b_n}\le c_2\) for \(n\ge n_0\); denote \(a_n=O(b_n)\) if \(\frac{a_n}{b_n}\le c_2\) for \(n\ge n_0\); denote \(a_n=o(b_n)\) if \(\lim _{n\rightarrow \infty }\frac{a_n}{b_n}=0\). Let \(X_n\) be a sequence of random variables. \(X_n=O_P(a_n)\) means \(\frac{X_n}{a_n}\) is bounded in probability. \(X_n=o_P(a_n)\) means \(\frac{X_n}{a_n}\) converges to zero in probability. Denote \(a_+=\max \{a,0\}\).

2 The Randić index and its variants

A graph is a mathematical model of a network that consists of nodes (vertices) and edges. Let \(\mathcal {V}=[n] :=\{1,2,\dots ,n\}\) for a given positive integer n. An undirected graph on \(\mathcal {V}\) is a pair \(\mathcal {G}=(\mathcal {V},\mathcal {E})\) in which \(\mathcal {E}\) is a collection of subsets of \(\mathcal {V}\) such that \(|e|=2\) for every \(e\in \mathcal {E}\). Elements in \(\mathcal {E}\) are called edges. A graph can be conveniently represented by its adjacency matrix A, where \(A_{ij}=1\) if \(\{i,j\}\) is an edge, \(A_{ij}=0\) otherwise, and \(A_{ii}=0\). It is clear that A is symmetric, since \(\mathcal {G}\) is undirected. A graph is said to be random if the \(A_{ij}\ (1\le i<j\le n)\) are random.

Let \(f=(f_{ij})\), \((1\le i<j\le n)\) be a vector of numbers between 0 and 1. The inhomogeneous Erdős–Rényi random graph \(\mathcal {G}(n,p_n, f)\) is defined as

$$\begin{aligned} \mathbb {P}(A_{ij}=1)=p_n f_{ij}, \end{aligned}$$

where \(p_n\in [0,1]\) and \(A_{ij}\ (1\le i<j\le n)\) are independent. If all \(f_{ij}\) are the same, then \(\mathcal {G}(n,p_n, f)\) is the Erdős–Rényi random graph. For a non-constant vector f, \(\mathcal {G}(n,p_n, f)\) is an inhomogeneous version of the Erdős–Rényi random graph. This model covers several random graphs that have been extensively studied in random graph theory and algorithm analysis (Chakrabarty et al. 2020a, b, 2021; Chiasserini et al. 2016; Yu et al. 2021).
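For concreteness, the model can be simulated directly from its definition. The following sketch (ours, not from the paper; it assumes Python with numpy, and the name `sample_graph` is illustrative) draws the independent upper-triangular entries \(A_{ij}\sim \text {Bernoulli}(p_nf_{ij})\) and symmetrizes:

```python
import numpy as np

def sample_graph(n, p_n, f, rng):
    """Draw an adjacency matrix from the inhomogeneous Erdos-Renyi model:
    P(A_ij = 1) = p_n * f[i, j], independently for i < j."""
    U = rng.random((n, n)) < p_n * f   # independent Bernoulli draws
    A = np.triu(U, k=1)                # keep only the i < j entries
    return (A | A.T).astype(int)       # symmetrize; diagonal stays zero

rng = np.random.default_rng(0)
n = 6
f = np.ones((n, n))   # constant f recovers the Erdos-Renyi graph G(n, p_n)
A = sample_graph(n, 0.5, f, rng)
```

Taking a non-constant `f` in the same call yields the inhomogeneous version of the model.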

Given a constant \(\alpha \), the general Randić index of a graph \(\mathcal {G}\) is defined as (Bollobás and Erdos 1998)

$$\begin{aligned} \mathcal {R}_{\alpha }=\sum _{\{i,j\}\in \mathcal {E}}d_i^{\alpha }d_j^{\alpha }, \end{aligned}$$
(1)

where \(d_k\) is the degree of node k, that is, \(d_k=\sum _{j\ne k}A_{kj}\). The index \(\mathcal {R}_{\alpha }\) generalizes the well-known Randić index \(\mathcal {R}_{-\frac{1}{2}}\) invented in Randić (1975). When \(\alpha =-1\), the index \(\mathcal {R}_{-1}\) corresponds to the modified second Zagreb index (Cavers et al. 2010; Nikolic et al. 2003).
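Definition (1) translates directly into code. The sketch below (ours; numpy assumed, and `randic_index` is an illustrative name) sums \(d_i^{\alpha }d_j^{\alpha }\) over the edges of an adjacency matrix:

```python
import numpy as np

def randic_index(A, alpha):
    """General Randic index: sum over edges {i,j} of (d_i * d_j)^alpha."""
    d = A.sum(axis=1)                    # degree sequence
    i, j = np.triu_indices_from(A, k=1)  # each pair {i,j} counted once
    e = A[i, j] == 1                     # mask selecting the edges
    return float(np.sum((d[i[e]] * d[j[e]]) ** float(alpha)))

# path graph on 3 nodes: degrees (1, 2, 1), edges {1,2} and {2,3}
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
print(randic_index(A, -0.5))   # 2 / sqrt(2) = sqrt(2)
```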

Another popular variant of the Randić index is the general sum-connectivity index (Zhou and Trinajstic 2009, 2010) defined as

$$\begin{aligned} \chi _{\alpha }=\sum _{\{i,j\}\in \mathcal {E}}(d_i+d_j)^{\alpha }. \end{aligned}$$
(2)

An important special case is the harmonic index \(\mathcal {H}=2\chi _{-1}\) (Fajtlowicz 1987; Favaron et al. 1993; Zhong 2012).
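Analogously, definition (2) is a sum of \((d_i+d_j)^{\alpha }\) over edges. The sketch below (ours; numpy assumed, names illustrative) computes \(\chi _{\alpha }\) and recovers the harmonic index \(\mathcal {H}=2\chi _{-1}\):

```python
import numpy as np

def sum_connectivity_index(A, alpha):
    """General sum-connectivity index: sum over edges {i,j} of (d_i + d_j)^alpha."""
    d = A.sum(axis=1)
    i, j = np.triu_indices_from(A, k=1)
    e = A[i, j] == 1
    return float(np.sum((d[i[e]] + d[j[e]]) ** float(alpha)))

# path graph on 3 nodes: degrees (1, 2, 1), edge degree sums (3, 3)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
chi = sum_connectivity_index(A, -1.0)   # 1/3 + 1/3 = 2/3
H = 2 * chi                             # harmonic index = 4/3
```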

Recently, Martinez-Martinez et al. (2020, 2021) conducted a simulation study of the Randić index \(\mathcal {R}_{-\frac{1}{2}}\) and the harmonic index \(\mathcal {H}=2\chi _{-1}\) in the Erdős–Rényi random graph and observed that the indices converge to n/2. Moreover, De Meo et al. (2018), Doslic et al. (2020) and Li et al. (2021) derive analytical expressions of the expectations of the indices \({\mathcal {R}}_{-1}\), \({{\chi }}_{1}\), \({{\chi }}_{2}\) of the Erdős–Rényi random graph. In this paper, we shall derive the exact limits of the general Randić index \(\mathcal {R}_{\alpha }\) and the general sum-connectivity index \(\chi _{\alpha }\) in \(\mathcal {G}(n,p_n,f)\). Our results significantly improve the results in De Meo et al. (2018), Doslic et al. (2020), Martinez-Martinez et al. (2020, 2021) and Li et al. (2021) and provide new insights about the Randić index and its variants.

Theorem 2.1

Let \(\alpha \) be a fixed constant and \(\mathcal {G}(n,p_n,f)\) be the inhomogeneous Erdős–Rényi random graph. Suppose \( np_n\log 2\ge \log n\) and \(\min _{1\le i<j\le n}\{f_{ij}\}>\epsilon \) for some positive constant \(\epsilon \in (0,1)\). Then

$$\begin{aligned} \mathcal {R}_{\alpha }= & {} \left[ 1+O_P\left( \frac{(\log (np_n))^{4(1-\alpha )_+}}{\sqrt{np_n}}\right) \right] p_n^{2\alpha +1}\sum _{i<j}f_i^{\alpha }f_j^{\alpha }f_{ij}, \end{aligned}$$
(3)
$$\begin{aligned} {\chi }_{\alpha }= & {} \left[ 1+O_P\left( \frac{(\log (np_n))^{2(1-\alpha )_+}}{\sqrt{np_n}}\right) \right] p_n^{\alpha +1}\sum _{i<j}(f_i+f_j)^{\alpha }f_{ij}, \end{aligned}$$
(4)

where \(f_i=\sum _{j\ne i}^nf_{ij}\).

The condition \(\min _{1\le i<j\le n}\{f_{ij}\}>\epsilon \) implies the minimum expected degree scales with \(np_n\). The condition \( np_n\log 2\ge \log n\) means that the graph is relatively dense. A similar condition is assumed in Chakrabarty et al. (2020a) to study the maximum eigenvalue of the inhomogeneous random graph.

Note that the expected total degree of \(\mathcal {G}(n,p_n,f)\) has order \(n^2p_n\). Thus \(p_n\) controls the sparsity of the network: a graph with smaller \(p_n\) would have fewer edges. By (3) and (4), the limits of the Randić index \(\mathcal {R}_{-\frac{1}{2}}\) and the harmonic index \({\chi }_{-1}\) do not depend on \(p_n\), while the limits of their variants do involve \(p_n\). Asymptotically, the Randić index and the harmonic index are uniquely determined by the network structure parametrized by f. In this sense, they are superior to their variants as measures of the global structure of networks.
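This distinction can be checked empirically. The following sketch (ours; numpy assumed, the helper name `er_randic` is illustrative) computes \(\mathcal {R}_{-\frac{1}{2}}\) and \(\mathcal {R}_{-1}\) on Erdős–Rényi graphs with two values of \(p_n\): the former stays near \(n/2\) for either value, while the latter tracks \(1/(2p_n)\) (cf. Corollary 2.2 below):

```python
import numpy as np

def er_randic(n, p, alpha, rng):
    """Sample G(n, p) and return the general Randic index R_alpha."""
    A = np.triu(rng.random((n, n)) < p, k=1)
    A = A | A.T
    d = A.sum(axis=1).astype(float)
    i, j = np.triu_indices(n, k=1)
    e = A[i, j]                      # boolean mask of edges
    return float(np.sum((d[i[e]] * d[j[e]]) ** alpha))

rng = np.random.default_rng(1)
n = 1000
for p in (0.1, 0.4):
    r_half = er_randic(n, p, -0.5, rng)
    r_one = er_randic(n, p, -1.0, rng)
    # R_{-1/2} ~ n/2 regardless of p; R_{-1} ~ 1/(2p) depends on p
    assert abs(r_half / (n / 2) - 1) < 0.05
    assert abs(r_one * 2 * p - 1) < 0.05
```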

Now we present two examples of \(\mathcal {G}(n,p_n,f)\). The simplest example is the Erdős–Rényi random graph, that is, \(f_{ij}\equiv 1\). We denote the graph as \(\mathcal {G}(n,p_n)\).

Corollary 2.2

Let \(\alpha \) be a fixed constant. For the Erdős–Rényi random graph \(\mathcal {G}(n,p_n)\) with \( np_n\log 2\ge \log n\), we have

$$\begin{aligned} \mathcal {R}_{\alpha }= & {} \frac{n^{2(1+\alpha )}p_n^{2\alpha +1}}{2}\left[ 1+O_P\left( \frac{(\log (np_n))^{4(1-\alpha )_+}}{\sqrt{np_n}}\right) \right] , \end{aligned}$$
(5)
$$\begin{aligned} {\chi }_{\alpha }= & {} 2^{\alpha -1}n^{\alpha +2}p_n^{\alpha +1}\left[ 1+O_P\left( \frac{(\log (np_n))^{2(1-\alpha )_+}}{\sqrt{np_n}}\right) \right] . \end{aligned}$$
(6)

In particular, the Randić index \(\mathcal {R}_{-\frac{1}{2}}\) is equal to

$$\begin{aligned} \mathcal {R}_{-\frac{1}{2}}=\frac{n}{2}\left[ 1+O_P\left( \frac{(\log (np_n))^{6}}{\sqrt{np_n}}\right) \right] , \end{aligned}$$

the modified second Zagreb index \(\mathcal {R}_{-1}\) is equal to

$$\begin{aligned} \mathcal {R}_{-1}=\frac{1}{2p_n}\left[ 1+O_P\left( \frac{(\log (np_n))^{8}}{\sqrt{np_n}}\right) \right] , \end{aligned}$$

and the harmonic index \(\mathcal {H}\) is equal to

$$\begin{aligned} \mathcal {H}=\frac{n}{2}\left[ 1+O_P\left( \frac{(\log (np_n))^{4}}{\sqrt{np_n}}\right) \right] . \end{aligned}$$

According to Corollary 2.2, the ratio \( \frac{2}{n}\mathcal {R}_{-\frac{1}{2}}\) or \( \frac{2}{n}\mathcal {H}\) converges in probability to 1 when \( np_n\log 2\ge \log n\). This theoretically confirms the empirical observation in Martinez-Martinez et al. (2020, 2021) that the Randić index \(\mathcal {R}_{-\frac{1}{2}}\) or the harmonic index \(\mathcal {H}\) is approximately equal to \(\frac{n}{2}\). The expectations of the indices \(\mathcal {R}_{-1},{\chi }_{1},{\chi }_{2}\) are derived in De Meo et al. (2018), Doslic et al. (2020) and Li et al. (2021). Our results show that the indices are asymptotically equal to their expectations. Moreover, Corollary 2.2 clearly quantifies how \(p_n\) affects the convergence rates: the larger \(p_n\) is, the faster the convergence.
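A quick simulation (ours, separate from the studies of Martinez-Martinez et al.; numpy assumed) illustrates this convergence for the harmonic index on a single dense Erdős–Rényi graph:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 1200, 0.2          # np_n = 240, well above log(n)/log(2)
A = np.triu(rng.random((n, n)) < p, k=1)
A = A | A.T
d = A.sum(axis=1).astype(float)
i, j = np.triu_indices(n, k=1)
e = A[i, j]               # boolean mask of edges
H = 2 * float(np.sum(1.0 / (d[i[e]] + d[j[e]])))   # harmonic index 2*chi_{-1}
print(2 * H / n)          # ratio should be close to 1
```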

In addition, (5) and (6) explicitly characterize how the leading terms of \(\mathcal {R}_{\alpha }\) and \( {\chi }_{\alpha }\) depend on \(\alpha \). Note that

$$\begin{aligned} \frac{n^{2(1+\alpha )}p_n^{2\alpha +1}}{2}= & {} \frac{n}{2}(np_n)^{2\alpha +1}, \\ 2^{\alpha -1}n^{\alpha +2}p_n^{\alpha +1}= & {} 2^{\alpha -1}n(np_n)^{\alpha +1}. \end{aligned}$$

For given \(n,p_n\) such that \( np_n\log 2\ge \log n\), the leading terms are increasing functions of \(\alpha \). The indices would be extremely large or small for large \(|\alpha |\) and large n. In this sense, it is preferable to use \(\mathcal {R}_{\alpha }\) or \( {\chi }_{\alpha }\) with small \(|\alpha |\) (for instance, \(|\alpha |\le 1\)).

Next, we provide a non-trivial example. Let \(f_{ij}=e^{-\kappa \frac{i}{n}}e^{-\kappa \frac{j}{n}}\) with a positive constant \(\kappa \). Then \(e^{-2\kappa }\le f_{ij}\le 1\) for \(1\le i<j\le n\). In this case, \(\min _{1\le i<j\le n}\{f_{ij}\}>\epsilon \) holds with \(\epsilon =e^{-2\kappa }\). Straightforward calculation yields \( f_i=ne^{-\kappa \frac{i}{n}}\frac{(1-e^{-\kappa })}{\kappa }(1+o(1))\) and

$$\begin{aligned} \sum _{i<j}f_i^{-1}f_j^{-1}f_{ij}= & {} \frac{\kappa ^2}{2(1-e^{-\kappa })^{2}}+o(1),\\ \sum _{i<j}f_i^{\alpha }f_j^{\alpha }f_{ij}= & {} \frac{n^{2(\alpha +1)}\left( 1-e^{-\kappa }\right) ^{2\alpha }\left( 1-e^{-(1+\alpha )\kappa }\right) ^2}{2(1+\alpha )^2\kappa ^{2(\alpha +1)}}(1+o(1)),\ \ \alpha \ne -1,\\ \sum _{i<j}(f_i+f_j)^{\alpha }f_{ij}= & {} \frac{n^{\alpha +2}}{2}\left( \frac{1-e^{-\kappa }}{\kappa }\right) ^{\alpha }\int _0^1\int _0^1\frac{\left( e^{-\kappa x}+e^{-\kappa y}\right) ^{\alpha }}{e^{\kappa (x+y)}}dxdy+o(1). \end{aligned}$$

Then

$$\begin{aligned} \mathcal {R}_{-1}= & {} \left[ 1+O_P\left( \frac{(\log (np_n))^{2}}{\sqrt{np_n}}\right) \right] \frac{1}{2p_n} \frac{\kappa ^2}{(1-e^{-\kappa })^{2}}, \end{aligned}$$
(7)
$$\begin{aligned} \mathcal {R}_{\alpha }= & {} \left[ 1+O_P\left( \frac{(\log (np_n))^{2}}{\sqrt{np_n}}\right) \right] \frac{n^{2(\alpha +1)} p_n^{2\alpha +1}}{2} \frac{ \left( 1-e^{-\kappa }\right) ^{2\alpha }\left( 1-e^{-(1+\alpha )\kappa }\right) ^2}{(1+\alpha )^2\kappa ^{2(\alpha +1)}},\ \ \nonumber \\{} & {} \alpha \ne -1, \end{aligned}$$
(8)
$$\begin{aligned} {\chi }_{\alpha }= & {} \left[ 1+O_P\left( \frac{(\log (np_n))^{2}}{\sqrt{np_n}}\right) \right] \frac{n^{\alpha +2}p_n^{\alpha +1}}{2}\left( \frac{1-e^{-\kappa }}{\kappa }\right) ^{\alpha }\int _0^1\nonumber \\{} & {} \int _0^1\frac{\left( e^{-\kappa x}+e^{-\kappa y}\right) ^{\alpha }}{e^{\kappa (x+y)}}dxdy. \end{aligned}$$
(9)

Since larger \(\kappa \) makes the expected degrees more heterogeneous, the parameter \(\kappa \) can be regarded as the heterogeneity level of the graph. As \(\kappa \) increases, \(\mathcal {R}_{\alpha }\) and \({\chi }_{\alpha }\) decrease if \(\alpha >-1\), and increase if \(\alpha \le -1\). This shows the effect of heterogeneity on \(\mathcal {R}_{\alpha }\) and \({\chi }_{\alpha }\). The indices could thus be used as indicators of whether a network follows the Erdős–Rényi random graph model.
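The displayed limit for \(\sum _{i<j}f_i^{-1}f_j^{-1}f_{ij}\) can be checked numerically. The sketch below (ours; numpy assumed) exploits the product form \(f_{ij}=g_ig_j\) with \(g_i=e^{-\kappa i/n}\) to reduce the double sum to single sums:

```python
import numpy as np

kappa, n = 1.0, 2000
g = np.exp(-kappa * np.arange(1, n + 1) / n)   # g_i = e^{-kappa i/n}, so f_ij = g_i g_j
f = g * (g.sum() - g)                          # f_i = sum_{j != i} f_ij
s = g / f
# with f_ij = g_i g_j:  sum_{i<j} f_i^{-1} f_j^{-1} f_ij = ((sum_i s_i)^2 - sum_i s_i^2) / 2
total = 0.5 * (s.sum() ** 2 - (s ** 2).sum())
limit = kappa ** 2 / (2 * (1 - np.exp(-kappa)) ** 2)
print(total, limit)   # the two agree up to o(1)
```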

3 Real data application

In this section, we apply the general Randić index and the general sum-connectivity index to the following real-world networks: ‘karate’, ‘macaque’, ‘UKfaculty’, ‘enron’, ‘USairports’, ‘immuno’, ‘yeast’. These networks are available in the ‘igraphdata’ package of R.

Table 1 The Randić index and harmonic index of real networks

For each network, the indices \(\mathcal {R}_{-\frac{1}{2}}\), \(\mathcal {R}_{-1}\), \({\chi }_{-\frac{1}{2}}\), \({\chi }_{-1}\) and the bound \(\log n/(n\log 2)\) are calculated. Here, \(\log n/(n\log 2)\) is the sparsity lower bound required by Theorem 2.1 and Corollary 2.2. In addition, we also compute several descriptive statistics: the number of nodes (n), the edge density, the maximum degree (\(d_{max}\)), the mean degree (\(d_{mean}\)) and the minimum degree (\(d_{min}\)). These results are summarized in Table 1. The edge densities of the networks ‘macaque’, ‘UKfaculty’, ‘enron’ and ‘USairports’ are greater than \(\log n/(n\log 2)\), which indicates that our theoretical results are applicable. The Randić indices \(\mathcal {R}_{-\frac{1}{2}}\) and the harmonic indices \(2{\chi }_{-1}\) of ‘enron’ and ‘USairports’ are much smaller than \(\frac{n}{2}\), the limiting value for the Erdős–Rényi random graph. Thus the Erdős–Rényi random graph may not be a good model for these two networks. The networks ‘macaque’ and ‘UKfaculty’ have indices close to \(\frac{n}{2}\). In this sense, they can be considered as samples from the Erdős–Rényi random graph model. For the networks ‘karate’, ‘immuno’ and ‘yeast’, the edge densities are slightly smaller than the bound \(\log n/(n\log 2)\). Note that the condition \(p_n>\log n/(n\log 2)\) is a sufficient condition for Theorem 2.1 and Corollary 2.2 to hold and cannot be relaxed based on the current proof technique. We conjecture that Theorem 2.1 and Corollary 2.2 still hold if \(np_n\rightarrow \infty \). At present, it is unclear whether our theoretical results can be applied to the networks ‘karate’, ‘immuno’ and ‘yeast’. For sparse networks, that is, \(np_n=O\left( 1\right) \), the Randić index \(\mathcal {R}_{-\frac{1}{2}}\) could assume any value between 0 and \(\frac{n}{2}\), which is empirically verified in Martinez-Martinez et al. (2020).
Therefore, a Randić index \(\mathcal {R}_{-\frac{1}{2}}\) far less than \(\frac{n}{2}\) does not necessarily imply that the network is not generated from the Erdős–Rényi random graph model. We point out that a statistical hypothesis test is needed to decide whether the Randić index is equal to some given value. To the best of our knowledge, no such test is available in the literature. It is an interesting future topic to propose a test for the Randić index.

4 Proof of main results

In this section, we provide the detailed proofs of the main results. Recall that \(A_{ij}=1\) if and only if \(\{i,j\}\) is an edge. Then the general Randić index in (1) and the general sum-connectivity index in (2) can be written as

$$\begin{aligned} \mathcal {R}_{\alpha }= & {} \sum _{1\le i<j\le n}A_{ij}d_i^{\alpha }d_j^{\alpha },\\ \chi _{\alpha }= & {} \sum _{1\le i<j\le n}A_{ij}(d_i+d_j)^{\alpha }. \end{aligned}$$

Note that the degrees \(d_i\) are not independently and identically distributed. Moreover, \(\mathcal {R}_{\alpha }\) and \(\chi _{\alpha }\) are non-linear functions of the \(d_i\). These facts make it a non-trivial task to derive the limits of \(\mathcal {R}_{\alpha }\) and \(\chi _{\alpha }\) for general \(\alpha \). The proof strategy is as follows: (a) use the Taylor expansion to expand \(\mathcal {R}_{\alpha }\) or \(\chi _{\alpha }\) as a sum of a leading term and remainder terms; (b) find the order of the leading term and of the remainder terms.

Proof of Theorem 2.1

(I) We first prove the result for the general Randić index. For convenience, let

$$\begin{aligned} \mathcal {R}_{-\alpha }=\sum _{1\le i<j\le n}A_{ij}d_i^{-\alpha }d_j^{-\alpha }. \end{aligned}$$
(10)

We provide the proof in two cases: \(\alpha >-1\) and \(\alpha \le -1\). Denote \(\mu _i=\mathbb {E}(d_i)=p_nf_i\).

Let \(\alpha >-1\). Applying the mean value theorem to the mapping \(x\rightarrow x^{-\alpha }\), we have

$$\begin{aligned} \frac{1}{d_i^{\alpha }}=\frac{1}{\mu _i^{\alpha }}-\alpha \frac{d_i-\mu _i}{X_i^{\alpha +1}}, \end{aligned}$$

where \(d_i\le X_i\le \mu _i\) or \(\mu _i\le X_i\le d_i\). Since \(A_{ii}=0\) \((i=1,2,\dots ,n)\) and the adjacency matrix A is symmetric, by (10) one has

$$\begin{aligned} \mathcal {R}_{-\alpha }= & {} \frac{1}{2}\sum _{1\le i,j\le n}\frac{A_{ij}}{d_i^{\alpha }d_j^{\alpha }}\nonumber \\= & {} \frac{1}{2}\sum _{1\le i,j\le n}\frac{A_{ij}}{\mu _i^{\alpha }\mu _j^{\alpha }}-\frac{\alpha }{2} \sum _{1\le i,j\le n}\frac{A_{ij}(d_i-\mu _i)}{X_i^{\alpha +1}\mu _j^{\alpha }}-\frac{\alpha }{2}\sum _{1\le i,j\le n}\frac{A_{ij}(d_j-\mu _j)}{X_j^{\alpha +1}\mu _i^{\alpha }}\nonumber \\{} & {} +\frac{\alpha ^2}{2}\sum _{1\le i,j\le n}\frac{A_{ij}(d_i-\mu _i)(d_j-\mu _j)}{X_i^{\alpha +1}X_j^{\alpha +1}}. \end{aligned}$$
(11)

Next we show the first term in (11) is the leading term. To this end, we will find the exact order of the first term and show the remaining terms are of smaller order.

Firstly, we show the first term in (11) is asymptotically equal to its expectation. By the assumption \(\min _{1\le i,j\le n}\{f_{ij}\}>\epsilon \), it is clear that \(np_n\epsilon \le \mu _i\le np_n\) for all \(i\in [n]\) and \(\epsilon n^2\le \sum _{1\le i,j\le n}f_{ij}\le n^2\). Note that \(A_{ij}\ (1\le i<j\le n)\) are independent and \(\mathbb {E}(A_{ij})=p_nf_{ij}\). Then

$$\begin{aligned} \mathbb {E}\left[ \sum _{1\le i<j\le n}\frac{A_{ij}-p_nf_{ij}}{\mu _i^{\alpha }\mu _j^{\alpha }}\right] ^2= & {} \sum _{1\le i<j\le n}\mathbb {E}\left[ \frac{A_{ij}-p_nf_{ij}}{\mu _i^{\alpha }\mu _j^{\alpha }}\right] ^2=O\left( \frac{n^2p_n}{(np_n)^{4\alpha }}\right) . \end{aligned}$$

By Markov’s inequality, it follows that

$$\begin{aligned} \left| \sum _{1\le i<j\le n}\frac{A_{ij}}{\mu _i^{\alpha }\mu _j^{\alpha }}-\sum _{1\le i<j\le n}\frac{p_nf_{ij}}{\mu _i^{\alpha }\mu _j^{\alpha }}\right| =\left| \sum _{1\le i<j\le n}\frac{A_{ij}-p_nf_{ij}}{\mu _i^{\alpha }\mu _j^{\alpha }}\right| =O_P\left( \frac{\sqrt{n}\sqrt{np_n}}{(np_n)^{2\alpha }}\right) . \end{aligned}$$

Then we get

$$\begin{aligned} \sum _{1\le i<j\le n}\frac{A_{ij}}{\mu _i^{\alpha }\mu _j^{\alpha }}{} & {} =\sum _{1\le i<j\le n}\frac{p_nf_{ij}}{\mu _i^{\alpha }\mu _j^{\alpha }}+O_P\left( \frac{\sqrt{n}\sqrt{np_n}}{(np_n)^{2\alpha }}\right) \nonumber \\{} & {} =\sum _{1\le i<j\le n}\frac{p_nf_{ij}}{\mu _i^{\alpha }\mu _j^{\alpha }}\left( 1+O_P\left( \frac{1}{\sqrt{n}\sqrt{np_n}}\right) \right) . \end{aligned}$$
(12)

Now we find a bound for the second term in (11). The idea is to bound the expectation of its absolute value and then apply Markov’s inequality. Note that

$$\begin{aligned} \mathbb {E}\left[ \left| \sum _{1\le i,j\le n}\frac{A_{ij}(d_i-\mu _i)}{X_i^{\alpha +1}\mu _j^{\alpha }}\right| \right]= & {} \mathbb {E}\left[ \left| \sum _{1\le i\le n}\left( \sum _{1\le j\le n}\frac{A_{ij}}{\mu _j^{\alpha }}\right) \frac{(d_i-\mu _i)}{X_i^{\alpha +1}}\right| \right] \nonumber \\\le & {} \mathbb {E}\left[ \sum _{1\le i\le n}\left( \sum _{1\le j\le n}\frac{A_{ij}}{\mu _j^{\alpha }}\right) \frac{|d_i-\mu _i|}{X_i^{\alpha +1}}\right] . \end{aligned}$$
(13)

Let \(\delta _n=[\log (np_n)]^{-2}\). Recall that \(X_i\) lies between \(d_i\) and \(\mu _i\). If \(X_i<\delta _n\mu _i\) and \(X_i<d_i\), then \(X_i\) is smaller than both \(d_i\) and \(\mu _i\) (since \(\delta _n<1\)), so \(X_i\) cannot lie between \(d_i\) and \(\mu _i\). Therefore, \(X_i<\delta _n\mu _i\) implies \(d_i\le X_i\), and hence \(I[X_i<\delta _n\mu _i]= I[d_i\le X_i<\delta _n\mu _i]\). Note that \(np_n\epsilon \le \mu _i\le np_n\) for all \(i\in [n]\); then we have

$$\begin{aligned} \mathbb {E}\left[ \left| \sum _{1\le i,j\le n}\frac{A_{ij}(d_i-\mu _i)}{X_i^{\alpha +1}\mu _j^{\alpha }}\right| \right]{} & {} \le O\left( \frac{1}{( np_n)^{\alpha }}\right) \sum _{1\le i\le n}\mathbb {E}\left[ \frac{d_i|d_i-\mu _i|}{X_i^{\alpha +1}}\right] \nonumber \\{} & {} =O\left( \frac{1}{( np_n)^{\alpha }}\right) \sum _{1\le i\le n}\mathbb {E}\left[ \frac{d_i|d_i-\mu _i|}{X_i^{\alpha +1}}I[\delta _n\mu _i\le X_i]\right] \nonumber \\{} & {} \quad +O\left( \frac{1}{( np_n)^{\alpha }}\right) \sum _{1\le i\le n}\mathbb {E}\left[ \frac{d_i|d_i-\mu _i|}{X_i^{\alpha +1}}I[\delta _n\mu _i>X_i]\right] ,\nonumber \\{} & {} =O\left( \frac{1}{( np_n)^{\alpha }}\right) \sum _{1\le i\le n}\mathbb {E}\left[ \frac{d_i|d_i-\mu _i|}{X_i^{\alpha +1}}I[\delta _n\mu _i\le X_i]\right] \nonumber \\{} & {} \quad +O\left( \frac{1}{( np_n)^{\alpha }}\right) \sum _{1\le i\le n}\mathbb {E}\left[ \frac{d_i|d_i-\mu _i|}{X_i^{\alpha +1}}I[d_i\le X_i<\delta _n\mu _i]\right] .\nonumber \\ \end{aligned}$$
(14)

Note that \(\alpha >-1\). If \(\delta _n\mu _i\le X_i\), then

$$\begin{aligned} \frac{1}{X_i^{\alpha +1}}\le \frac{1}{(\delta _n\mu _i)^{\alpha +1}}=O\left( \frac{1}{(\delta _nnp_n)^{\alpha +1}}\right) . \end{aligned}$$

Hence we have

$$\begin{aligned}{} & {} \frac{1}{( np_n)^{\alpha }}\sum _{1\le i\le n}\mathbb {E}\left[ \frac{d_i|d_i-\mu _i|}{X_i^{\alpha +1}}I[\delta _n\mu _i\le X_i]\right] \nonumber \\{} & {} \quad \le O\left( \frac{1}{(\delta _nnp_n)^{\alpha +1}(np_n)^{\alpha }}\right) \sum _{1\le i\le n}\mathbb {E}\left[ d_i|d_i-\mu _i|I[\delta _n\mu _i\le X_i]\right] \nonumber \\{} & {} \quad \le O\left( \frac{1}{(\delta _nnp_n)^{\alpha +1}(np_n)^{\alpha }}\right) \sum _{1\le i\le n}\mathbb {E}\left[ d_i|d_i-\mu _i|\right] . \end{aligned}$$
(15)

By definition, the second moment of degree \(d_i\) is equal to

$$\begin{aligned} \mathbb {E}[d_i^2]=\mathbb {E}\left[ \sum _{j\ne k}A_{ij}A_{ik}+\sum _{j}A_{ij}\right] =p_n^2\sum _{j\ne k}f_{ij}f_{ik}+p_n\sum _{j}f_{ij}, \end{aligned}$$

and \(Var(d_i)=\sum _{j\ne i}p_nf_{ij}(1-p_nf_{ij})\), then by the Cauchy-Schwarz inequality, one has

$$\begin{aligned}{} & {} \sum _{1\le i\le n}\mathbb {E}\left[ d_i|d_i-\mu _i|\right] \le \sum _{1\le i\le n}\sqrt{\mathbb {E}[d_i^2]\mathbb {E}[(d_i-\mu _i)^2]}\nonumber \\{} & {} \quad =\sum _{1\le i\le n}\sqrt{\left( p_n^2\sum _{j\ne k}f_{ij}f_{ik}+p_n\sum _{j}f_{ij}\right) \sum _{j}p_nf_{ij}(1-p_nf_{ij})}\nonumber \\{} & {} \quad =O\left( n\sqrt{n^3p_n^3}\right) . \end{aligned}$$
(16)

Combining (15) and (16) yields

$$\begin{aligned} \frac{1}{( np_n)^{\alpha }}\sum _{1\le i\le n}\mathbb {E}\left[ \frac{d_i|d_i-\mu _i|}{X_i^{\alpha +1}}I[\delta _n\mu _i\le X_i]\right]= & {} O\left( \frac{n\sqrt{n^3p_n^3}}{(\delta _nnp_n)^{\alpha +1}(np_n)^{\alpha }}\right) \nonumber \\= & {} \frac{n^2p_n}{(np_n)^{2\alpha }}O\left( \frac{1}{\delta _n^{\alpha +1}\sqrt{np_n}}\right) \nonumber \\= & {} \frac{n^2p_n}{(np_n)^{2\alpha }}O\left( \frac{(\log (np_n))^{2(\alpha +1)}}{\sqrt{np_n}}\right) .\nonumber \\ \end{aligned}$$
(17)

Now we bound the second term of (14). Note that if \(d_i\le X_i<\delta _n\mu _i\), then \(d_i<\mu _i\) and \(\frac{d_i}{X_i^{\alpha +1}}\le \frac{1}{d_i^{\alpha }}\). Since \(d_i\) is the degree of node i, it can only take integer values between 0 and \(n-1\). Moreover, \(d_i=0\) implies \(A_{ij}=0\) for any \(j\in [n]\). By the definition of the Randić index (1), the terms with \(d_i=0\) vanish in (10) and (11). Therefore, we only consider the terms with \(d_i\ge 1\) and \(d_j\ge 1\). Then the second term of (14) can be bounded by

$$\begin{aligned}{} & {} \frac{1}{( np_n)^{\alpha }}\sum _{1\le i\le n}\mathbb {E}\left[ \frac{d_i|d_i-\mu _i|}{X_i^{\alpha +1}}I[d_i\le X_i<\delta _n\mu _i]\right] \nonumber \\{} & {} \quad \le \frac{1}{(np_n)^{\alpha }} \sum _{1\le i\le n}\mathbb {E}\left[ \frac{\mu _i-d_i}{d_i^{\alpha }}I[d_i<\delta _n\mu _i]\right] \nonumber \\{} & {} \quad =\frac{1}{( np_n)^{\alpha }}\sum _{1\le i\le n}\sum _{k=1}^{\delta _n\mu _i}\frac{\mu _i-k}{k^{\alpha }}\mathbb {P}(d_i=k). \end{aligned}$$
(18)

Next we obtain an upper bound of \(\mathbb {P}(d_i=k)\). Note that the degree \(d_i\) follows the Poisson-Binomial distribution \(PB(p_nf_{i1},p_nf_{i2},\dots ,p_nf_{in})\). Then

$$\begin{aligned} \mathbb {P}(d_i=k)= & {} \sum _{S\subset [n]\setminus \{i\},|S|=k}\prod _{j\in S}p_nf_{ij}\prod _{j\in S^C\setminus \{i\}}(1-p_nf_{ij})\nonumber \\\le & {} \sum _{S\subset [n]\setminus \{i\},|S|=k}\prod _{j\in S}p_n\prod _{j\in S^C\setminus \{i\}}(1-p_n\epsilon )\nonumber \\= & {} \left( {\begin{array}{c}n\\ k\end{array}}\right) p_n^k(1-p_n\epsilon )^{n-k}. \end{aligned}$$
(19)

Note that \(\left( {\begin{array}{c}n\\ k\end{array}}\right) \le e^{k\log n-k\log k+k}\) and \((1-p_n\epsilon )^{n-k}=e^{(n-k)\log (1-p_n\epsilon )}\). Then by (19) we get

$$\begin{aligned} \mathbb {P}(d_i=k)\le & {} \exp \left( k\log (np_n)-k\log k+k+(n-k)\log (1-p_n\epsilon )\right) . \end{aligned}$$
(20)
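The elementary bound \(\left( {\begin{array}{c}n\\ k\end{array}}\right) \le e^{k\log n-k\log k+k}\), i.e. \(\binom{n}{k}\le (en/k)^k\), used in (20) can be verified numerically; the check below (ours, plain Python) compares logarithms to avoid overflow:

```python
import math

# verify binom(n, k) <= exp(k*log(n) - k*log(k) + k) on the log scale
for n in (10, 100, 1000):
    for k in range(1, n + 1):
        lhs = math.log(math.comb(n, k))
        rhs = k * math.log(n) - k * math.log(k) + k
        assert lhs <= rhs + 1e-9
print("bound verified")
```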

Let \(g(k)=k\log (np_n)-k\log k+k+(n-k)\log (1-p_n\epsilon )\). Then

$$\begin{aligned} g^{\prime }(k)=\log \left( \frac{np_n}{1-p_n\epsilon }\right) -\log k. \end{aligned}$$

For \(k<\frac{np_n}{1-p_n\epsilon }\), \(g^{\prime }(k)>0\). For \(k>\frac{np_n}{1-p_n\epsilon }\), \(g^{\prime }(k)<0\). Hence g(k) is increasing on \(\left( 0,\frac{np_n}{1-p_n\epsilon }\right) \) and achieves its maximum at \(k=\frac{np_n}{1-p_n\epsilon }\). Since \(\delta _nnp_n<\frac{np_n}{1-p_n\epsilon }\), for \(k\le \delta _nnp_n\) we have \(g(k)\le g(\delta _nnp_n)\). Hence

$$\begin{aligned} \mathbb {P}(d_i=k){} & {} \le \exp \left( \delta _nnp_n\log \frac{1}{\delta _n(1-p_n\epsilon )}+\delta _nnp_n+n\log (1-p_n\epsilon )\right) \\{} & {} \le \exp \left( -np_n\epsilon (1+o(1))\right) . \end{aligned}$$

Note that \(\mu _i\le np_n\). Then for \(k\le \delta _n\mu _i\le \delta _nnp_n\), by (18), (19), (20), one has

$$\begin{aligned}{} & {} \mathbb {E}\left[ \frac{d_i|d_i-\mu _i|}{X_i^{\alpha +1}}I[d_i\le X_i<\delta _n\mu _i]\right] \nonumber \\{} & {} \quad \le \exp \left( \log (\delta _nnp_n)\right) \exp \left( \log (np_n)\right) \exp \left( -np_n\epsilon (1+o(1))\right) \nonumber \\{} & {} \quad =\exp \left( -np_n\epsilon (1+o(1))\right) . \end{aligned}$$
(21)

Hence, we get

$$\begin{aligned}{} & {} \frac{1}{( np_n)^{\alpha }}\sum _{1\le i\le n}\mathbb {E}\left[ \frac{d_i|d_i-\mu _i|}{X_i^{\alpha +1}}I[d_i\le X_i<\delta _n\mu _i]\right] =\frac{1}{( np_n)^{\alpha }}ne^{-\epsilon np_n(1+o(1))}\nonumber \\{} & {} \quad =\frac{n^2p_n}{(np_n)^{2\alpha }}e^{-\epsilon np_n(1+o(1))}. \end{aligned}$$
(22)

Recall that \(np_n\log 2\ge \log n\). Then \(\frac{(\log (np_n))^{s}}{(np_n)^k}e^{-\epsilon np_n(1+o(1))}=o(1)\) for any fixed positive constants \(k,s,\epsilon \). By (13), (14), (17), (22) and Markov’s inequality, one has

$$\begin{aligned} \sum _{1\le i,j\le n}\frac{A_{ij}(d_i-\mu _i)}{X_i^{\alpha +1}\mu _j^{\alpha }}=O_P\left( \frac{n^2p_n}{(np_n)^{2\alpha }}\frac{(\log (np_n))^{2(\alpha +1)}}{\sqrt{np_n}}\right) . \end{aligned}$$
(23)

The third term in (11) can be bounded similarly to the second term. Now we consider the last term in (11). Note that

$$\begin{aligned}{} & {} \sum _{1\le i,j\le n}\frac{A_{ij}(d_i-\mu _i)(d_j-\mu _j)}{X_i^{\alpha +1}X_j^{\alpha +1}}\nonumber \\{} & {} \quad =\sum _{1\le i,j\le n}\frac{A_{ij}(d_i-\mu _i)(d_j-\mu _j)}{X_i^{\alpha +1}X_j^{\alpha +1}}I[X_i\ge \delta _n\mu _i,X_j\ge \delta _n\mu _j]\nonumber \\{} & {} \qquad +\sum _{1\le i,j\le n}\frac{A_{ij}(d_i-\mu _i)(d_j-\mu _j)}{X_i^{\alpha +1}X_j^{\alpha +1}}I[X_i<\delta _n\mu _i,X_j\ge \delta _n\mu _j]\nonumber \\{} & {} \qquad +\sum _{1\le i,j\le n}\frac{A_{ij}(d_i-\mu _i)(d_j-\mu _j)}{X_i^{\alpha +1}X_j^{\alpha +1}}I[X_i\ge \delta _n\mu _i,X_j<\delta _n\mu _j]\nonumber \\{} & {} \qquad +\sum _{1\le i,j\le n}\frac{A_{ij}(d_i-\mu _i)(d_j-\mu _j)}{X_i^{\alpha +1}X_j^{\alpha +1}}I[X_i<\delta _n\mu _i,X_j<\delta _n\mu _j]. \end{aligned}$$
(24)

We shall bound each term in (24). The first term can be bounded as follows.

$$\begin{aligned}{} & {} \mathbb {E}\left[ \left| \sum _{1\le i,j\le n}\frac{A_{ij}(d_i-\mu _i)(d_j-\mu _j)}{X_i^{\alpha +1}X_j^{\alpha +1}}I[X_i\ge \delta _n\mu _i,X_j\ge \delta _n\mu _j]\right| \right] \nonumber \\{} & {} \quad \le \frac{1}{\delta _n^{2(\alpha +1)}}\sum _{1\le i,j\le n}\mathbb {E}\left[ \frac{A_{ij}|d_i-\mu _i||d_j-\mu _j|}{\mu _i^{\alpha +1}\mu _j^{\alpha +1}}I[X_i\ge \delta _n\mu _i,X_j\ge \delta _n\mu _j]\right] \nonumber \\{} & {} \quad \le \frac{1}{\delta _n^{2(\alpha +1)}}O\left( \frac{1}{(np_n)^{2(\alpha +1)}}\right) \sum _{1\le i,j\le n}\mathbb {E}\left[ A_{ij}|d_i-\mu _i||d_j-\mu _j|\right] . \end{aligned}$$
(25)

Denote \(\tilde{d}_i=\sum _{k\ne j,i}A_{ik}\), \(\tilde{d}_j=\sum _{k\ne j,i}A_{jk}\), \(\tilde{\mu }_i=\mathbb {E}(\tilde{d}_i)\) and \(\tilde{\mu }_j=\mathbb {E}(\tilde{d}_j)\). Then \(\tilde{d}_i\) and \(\tilde{d}_j\) are independent, \(d_i=\tilde{d}_i+A_{ij}\) and \(d_j=\tilde{d}_j+A_{ij}\). It is easy to get that

$$\begin{aligned} |d_i-\mu _i|= & {} |\tilde{d}_i-\tilde{\mu }_i+A_{ij}-p_nf_{ij}|\le |\tilde{d}_i-\tilde{\mu }_i|+|A_{ij}-p_nf_{ij}|\le |\tilde{d}_i-\tilde{\mu }_i|+1,\\{} & {} \mathbb {E}[|\tilde{d}_i-\tilde{\mu }_i|]\le \sqrt{\mathbb {E}[\big (\tilde{d}_i-\tilde{\mu }_i\big )^2]}= \sqrt{\sum _{k\ne j,i}p_nf_{ik}(1-p_nf_{ik})}=O(\sqrt{np_n}). \end{aligned}$$

Similarly, \(|d_j-\mu _j|\le |\tilde{d}_j-\tilde{\mu }_j|+1\) and \(\mathbb {E}[|\tilde{d}_j-\tilde{\mu }_j|]=O(\sqrt{np_n})\). Then we have

$$\begin{aligned} \mathbb {E}\left[ A_{ij}|d_i-\mu _i||d_j-\mu _j|\right]\le & {} \mathbb {E}[A_{ij}]+\mathbb {E}[A_{ij}|\tilde{d}_i-\tilde{\mu }_i||\tilde{d}_j-\tilde{\mu }_j|]\nonumber \\{} & {} +\mathbb {E}[A_{ij}|\tilde{d}_i-\tilde{\mu }_i|]+\mathbb {E}[A_{ij}|\tilde{d}_j-\tilde{\mu }_j|]\nonumber \\= & {} p_nf_{ij}+p_nf_{ij}\mathbb {E}[|\tilde{d}_i-\tilde{\mu }_i|]\mathbb {E}[|\tilde{d}_j-\tilde{\mu }_j|] \nonumber \\{} & {} +p_nf_{ij}\mathbb {E}[|\tilde{d}_i-\tilde{\mu }_i|]+p_nf_{ij}\mathbb {E}[|\tilde{d}_j-\tilde{\mu }_j|] \nonumber \\= & {} O\left( np_n^2\right) . \end{aligned}$$
(26)

Combining (25) and (26) yields

$$\begin{aligned}{} & {} \mathbb {E}\left[ \left| \sum _{1\le i,j\le n}\frac{A_{ij}(d_i-\mu _i)(d_j-\mu _j)}{X_i^{\alpha +1}X_j^{\alpha +1}}I[X_i\ge \delta _n\mu _i,X_j\ge \delta _n\mu _j]\right| \right] \nonumber \\{} & {} \quad \le \frac{1}{\delta _n^{2(\alpha +1)}}O\left( \frac{n^3p_n^2}{(np_n)^{2(\alpha +1)}}\right) \nonumber \\{} & {} \quad =\frac{n^2p_n}{(np_n)^{2\alpha }}O\left( \frac{1}{\delta _n^{2(\alpha +1)}np_n}\right) \nonumber \\{} & {} \quad =\frac{n^2p_n}{(np_n)^{2\alpha }}O\left( \frac{(\log (np_n))^{4(\alpha +1)}}{np_n}\right) . \end{aligned}$$
(27)

The two cross terms in (24) are symmetric in \(i\) and \(j\); we bound the one with \(X_i\ge \delta _n\mu _i\) and \(X_j<\delta _n\mu _j\) as follows.

$$\begin{aligned}{} & {} \mathbb {E}\left[ \left| \sum _{1\le i,j\le n}\frac{A_{ij}(d_i-\mu _i)(d_j-\mu _j)}{X_i^{\alpha +1}X_j^{\alpha +1}}I[X_i\ge \delta _n\mu _i,d_j\le X_j<\delta _n\mu _j]\right| \right] \nonumber \\{} & {} \quad \le \frac{1}{\delta _n^{\alpha +1}}\sum _{1\le i,j\le n}\mathbb {E}\left[ \frac{A_{ij}|d_i-\mu _i||d_j-\mu _j|}{\mu _i^{\alpha +1}d_j^{\alpha +1}}I[X_i\ge \delta _n\mu _i,d_j\le X_j<\delta _n\mu _j]\right] \nonumber \\{} & {} \quad \le \frac{1}{\delta _n^{\alpha +1}(np_n)^{\alpha +1}}\sum _{1\le i,j\le n}\mathbb {E}\left[ \frac{A_{ij}|d_i-\mu _i||d_j-\mu _j|}{d_j^{\alpha +1}}I[d_j<\delta _n\mu _j]\right] . \end{aligned}$$
(28)

Recall that

$$\begin{aligned} |d_i-\mu _i|=|\tilde{d}_i-\tilde{\mu }_i+A_{ij}-p_nf_{ij}|,\ \ \ \ |d_j-\mu _j|=|\tilde{d}_j-\tilde{\mu }_j+A_{ij}-p_nf_{ij}|. \end{aligned}$$

Moreover, \(d_j<\delta _n\mu _j\) implies \(\tilde{d}_j<\delta _n\mu _j\). Then we have

$$\begin{aligned}{} & {} \mathbb {E}\left[ \frac{A_{ij}|d_i-\mu _i||d_j-\mu _j|}{d_j^{\alpha +1}}I[d_j<\delta _n\mu _j]\right] \nonumber \\{} & {} \quad =\mathbb {E}\left[ \frac{A_{ij}|\tilde{d}_i-\tilde{\mu }_i+A_{ij}-p_nf_{ij}||\tilde{d}_j-\tilde{\mu }_j+A_{ij}-p_nf_{ij}|}{d_j^{\alpha +1}}\right. \nonumber \\{} & {} \qquad \left. I[d_j<\delta _n\mu _j]\Big |A_{ij}=1\right] \mathbb {P}(A_{ij}=1)\nonumber \\{} & {} \quad \le p_n\mathbb {E}\left[ \frac{|\tilde{d}_i-\tilde{\mu }_i+1-p_nf_{ij}||\tilde{d}_j-\tilde{\mu }_j+1-p_nf_{ij}|}{(\tilde{d}_j+1)^{\alpha +1}}I[\tilde{d}_j<\delta _n\mu _j]\right] . \end{aligned}$$
(29)

Since \(\tilde{d}_i\) and \(\tilde{d}_j\) are independent and \(\mathbb {E}[|\tilde{d}_i-\tilde{\mu }_i|]=O(\sqrt{np_n})\), by a similar argument as in (18)–(22), it follows that

$$\begin{aligned}{} & {} p_n\mathbb {E}\left[ \frac{|\tilde{d}_i-\tilde{\mu }_i+1-p_nf_{ij}||\tilde{d}_j-\tilde{\mu }_j+1-p_nf_{ij}|}{(\tilde{d}_j+1)^{\alpha +1}}I[\tilde{d}_j<\delta _n\mu _j]\right] \nonumber \\{} & {} \quad \le p_n\sqrt{np_n}\mathbb {E}\left[ \frac{|\tilde{d}_j-\tilde{\mu }_j+1-p_nf_{ij}|}{(\tilde{d}_j+1)^{\alpha +1}}I[\tilde{d}_j<\delta _n\mu _j]\right] \nonumber \\{} & {} \quad \le p_n\sqrt{np_n}e^{-\epsilon np_n(1+o(1))}. \end{aligned}$$
(30)

Combining (21), (29) and (30) yields

$$\begin{aligned}{} & {} \mathbb {E}\left[ \left| \sum _{1\le i,j\le n}\frac{A_{ij}(d_i-\mu _i)(d_j-\mu _j)}{X_i^{\alpha +1}X_j^{\alpha +1}}I[X_i\ge \delta _n\mu _i,d_j\le X_j<\delta _n\mu _j]\right| \right] \nonumber \\{} & {} \quad \le \frac{p_n\sqrt{np_n}}{\delta _n^{\alpha +1}(np_n)^{\alpha +1}}n^2e^{-\epsilon np_n(1+o(1))}\nonumber \\{} & {} \quad =\frac{n^2p_n}{(np_n)^{2\alpha }}e^{-\epsilon np_n(1+o(1))}. \end{aligned}$$
(31)

The other cross term in (24), with the roles of \(i\) and \(j\) interchanged, can be bounded in the same way. Now we consider the last term in (24). By a similar argument as in (21)–(31), one gets

$$\begin{aligned}{} & {} \mathbb {E}\left[ \left| \sum _{1\le i,j\le n}\frac{A_{ij}(d_i-\mu _i)(d_j-\mu _j)}{X_i^{\alpha +1}X_j^{\alpha +1}}I[d_i\le X_i<\delta _n\mu _i,d_j\le X_j<\delta _n\mu _j]\right| \right] \nonumber \\{} & {} \quad \le \sum _{1\le i,j\le n}\mathbb {E}\left[ \frac{A_{ij}|d_i-\mu _i||d_j-\mu _j|}{d_i^{\alpha +1}d_j^{\alpha +1}}I[d_i\le \delta _n\mu _i,d_j\le \delta _n\mu _j]\right] \nonumber \\{} & {} \quad \le \sum _{1\le i,j\le n}\mathbb {E}\left[ \frac{A_{ij}|\tilde{d}_i-\tilde{\mu }_i+A_{ij}-p_nf_{ij}||\tilde{d}_j-\tilde{\mu }_j+A_{ij}-p_nf_{ij}|}{(\tilde{d}_i+A_{ij})^{\alpha +1}(\tilde{d}_j+A_{ij})^{\alpha +1}}\right. \nonumber \\{} & {} \quad \qquad \left. I[\tilde{d}_i\le \delta _n\mu _i,\tilde{d}_j\le \delta _n\mu _j]\right] \nonumber \\{} & {} \quad \le p_n\sum _{1\le i,j\le n}\mathbb {E}\left[ \frac{(|\tilde{d}_i-\tilde{\mu }_i|+1)(|\tilde{d}_j-\tilde{\mu }_j|+1)}{(\tilde{d}_i+1)^{\alpha +1}(\tilde{d}_j+1)^{\alpha +1}}I[\tilde{d}_i\le \delta _n\mu _i,\tilde{d}_j\le \delta _n\mu _j]\right] \nonumber \\{} & {} \quad =p_n\left( \sum _{1\le i\le n}\mathbb {E}\left[ \frac{|\tilde{d}_i-\tilde{\mu }_i|+1}{(\tilde{d}_i+1)^{\alpha +1}}I[\tilde{d}_i\le \delta _n\mu _i]\right] \right) ^2\nonumber \\{} & {} \quad \le p_nn^2e^{-2\epsilon np_n(1+o(1))}=\frac{n^2p_n}{(np_n)^{2\alpha }}e^{-2\epsilon np_n(1+o(1))}. \end{aligned}$$
(32)

By (24)–(32) and Markov's inequality, it follows that

$$\begin{aligned} \sum _{1\le i,j\le n}\frac{A_{ij}(d_i-\mu _i)(d_j-\mu _j)}{X_i^{\alpha +1}X_j^{\alpha +1}} =O_P\left( \frac{n^2p_n}{(np_n)^{2\alpha }}\frac{(\log (np_n))^{4(\alpha +1)}}{np_n}\right) . \end{aligned}$$
(33)

It is easy to verify that \(\sum _{1\le i<j\le n}\frac{p_nf_{ij}}{\mu _i^{\alpha }\mu _j^{\alpha }} \ge \frac{\epsilon n(n-1)p_n}{2(np_n)^{2\alpha }}\). Then combining (11), (12), (23) and (33) yields the limit of \(\mathcal {R}_{-\alpha }\) with \(\alpha >-1\).
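The lower bound asserted above can be checked termwise. Assuming, as in the standing conditions of this proof, that \(\epsilon \le f_{ij}\le 1\) (so that \(\epsilon np_n\le \mu _i\le np_n\)), for \(\alpha \ge 0\) each summand obeys

```latex
$$\begin{aligned} \frac{p_nf_{ij}}{\mu _i^{\alpha }\mu _j^{\alpha }}\ge \frac{\epsilon p_n}{(np_n)^{2\alpha }}, \qquad \text {so}\qquad \sum _{1\le i<j\le n}\frac{p_nf_{ij}}{\mu _i^{\alpha }\mu _j^{\alpha }}\ge \frac{n(n-1)}{2}\cdot \frac{\epsilon p_n}{(np_n)^{2\alpha }}=\frac{\epsilon n(n-1)p_n}{2(np_n)^{2\alpha }}. \end{aligned}$$
```

For \(-1<\alpha <0\) the same bound holds with \(\epsilon \) replaced by \(\epsilon ^{1+2|\alpha |}\), using \(\mu _i\ge \epsilon np_n\) instead of \(\mu _i\le np_n\).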

Next, we consider \(\mathcal {R}_{-\alpha }\) for \(\alpha \le -1\). In this case, replacing \(-\alpha \) by \(\alpha \), we rewrite the general Randić index as

$$\begin{aligned} \mathcal {R}_{\alpha }=\sum _{1\le i<j\le n}A_{ij}d_i^{\alpha }d_j^{\alpha }, \alpha \ge 1. \end{aligned}$$
(34)

By Taylor expansion, we have

$$\begin{aligned} d_i^{\alpha }=\mu _i^{\alpha }+\alpha X_i^{\alpha -1}(d_i-\mu _i), \end{aligned}$$

where \(X_i\) is between \(d_i\) and \(\mu _i\). Then

$$\begin{aligned} \mathcal {R}_{\alpha }= & {} \frac{1}{2}\sum _{1\le i,j\le n}A_{ij}d_i^{\alpha }d_j^{\alpha }\nonumber \\= & {} \frac{1}{2}\sum _{1\le i,j\le n}A_{ij}\mu _i^{\alpha }\mu _j^{\alpha }\nonumber \\{} & {} +\frac{\alpha }{2}\sum _{1\le i,j\le n}A_{ij}(d_i-\mu _i)X_i^{\alpha -1}\mu _j^{\alpha }+\frac{\alpha }{2}\sum _{1\le i,j\le n}A_{ij}(d_j-\mu _j)X_j^{\alpha -1}\mu _i^{\alpha }\nonumber \\{} & {} +\frac{\alpha ^2}{2}\sum _{1\le i,j\le n}A_{ij}(d_i-\mu _i)(d_j-\mu _j)X_i^{\alpha -1}X_j^{\alpha -1}. \end{aligned}$$
(35)

We shall show that the first term in (35) is the leading term and the remaining terms are of smaller order. As in (12), it is easy to get

$$\begin{aligned} \sum _{1\le i<j\le n}A_{ij}\mu _i^{\alpha }\mu _j^{\alpha }=\sum _{1\le i<j\le n}p_nf_{ij}\mu _i^{\alpha }\mu _j^{\alpha }\left( 1+O_P\left( \frac{1}{\sqrt{n}\sqrt{np_n}}\right) \right) . \end{aligned}$$
(36)

Since the second term and the third term in (35) have the same order, we only need to bound the second term and the last term. Let \(M=\frac{4}{\epsilon (1-p_n\epsilon )}\). Clearly \(M\) is bounded and \(M>4\). The expectation of the absolute value of the second term in (35) can be bounded by

$$\begin{aligned}{} & {} \mathbb {E}\left[ \left| \sum _{1\le i,j\le n}A_{ij}(d_i-\mu _i)X_i^{\alpha -1}\mu _j^{\alpha }\right| \right] \nonumber \\{} & {} \le \mathbb {E}\left[ \sum _{1\le i,j\le n}A_{ij}\left| d_i-\mu _i\right| X_i^{\alpha -1}\mu _j^{\alpha }I[M\mu _i\le X_i\le d_i]\right] \nonumber \\{} & {} \quad +\mathbb {E}\left[ \sum _{1\le i,j\le n}A_{ij}\left| d_i-\mu _i\right| X_i^{\alpha -1}\mu _j^{\alpha }I[X_i\le M\mu _i]\right] . \end{aligned}$$
(37)

Note that

$$\begin{aligned}{} & {} \mathbb {E}\left[ \sum _{1\le i,j\le n}A_{ij}\left| d_i-\mu _i\right| X_i^{\alpha -1}\mu _j^{\alpha }I[X_i\le M\mu _i]\right] \nonumber \\{} & {} \quad \le M^{\alpha -1}(np_n)^{2\alpha -1}\sum _{1\le i,j\le n}\mathbb {E}\left[ A_{ij}\left| \tilde{d}_i-\mu _i+A_{ij}\right| \right] \nonumber \\{} & {} \quad =(np_n)^{2\alpha }n^2p_nO\left( \frac{1}{\sqrt{np_n}}\right) , \end{aligned}$$
(38)

and

$$\begin{aligned}{} & {} \mathbb {E}\left[ \sum _{1\le i,j\le n}A_{ij}\left| d_i-\mu _i\right| X_i^{\alpha -1}\mu _j^{\alpha }I[M\mu _i\le X_i\le d_i]\right] \nonumber \\{} & {} \quad \le O((np_n)^{\alpha })\mathbb {E}\left[ \sum _{1\le i,j\le n}A_{ij}\left| d_i-\mu _i\right| d_i^{\alpha -1}I[M\mu _i\le d_i]\right] \nonumber \\{} & {} \quad =O((np_n)^{\alpha }p_n)\sum _{1\le i,j\le n}\mathbb {E}\left[ \left| \tilde{d}_i-\tilde{\mu }_i+1-p_nf_{ij}\right| \tilde{d}_i^{\alpha -1}I[M\mu _i-1\le \tilde{d}_i]\right] \nonumber \\{} & {} \quad =O((np_n)^{\alpha }p_n)\sum _{1\le i,j\le n}\sum _{k=M\mu _i-1}^{n-2}k^{\alpha -1}(k-\tilde{\mu }_i+1-p_nf_{ij})\mathbb {P}(\tilde{d}_i=k). \end{aligned}$$
(39)

By a similar argument as in (20), it follows that

$$\begin{aligned} \sum _{k=M\mu _i-1}^{n-2}k^{\alpha -1}(k-\tilde{\mu }_i+1-p_nf_{ij})\mathbb {P}(\tilde{d}_i=k){} & {} \le \sum _{k=M\mu _i-1}^{n-2}k^{\alpha }\left( {\begin{array}{c}n\\ k\end{array}}\right) p_n^k(1-p_n\epsilon )^{n-k}\nonumber \\{} & {} \le \sum _{k=M\mu _i-1}^{n-2}\exp \left( \alpha \log k+g(k)\right) .\nonumber \\ \end{aligned}$$
(40)

Let \(h(k)=\alpha \log k+g(k)\). Then

$$\begin{aligned} h^{\prime }(k)=\frac{\alpha }{k}+\log \left( \frac{np_n}{1-p_n\epsilon }\right) -\log k. \end{aligned}$$

Hence \(h(k)\) is decreasing for \(k>\frac{1.1np_n}{1-p_n\epsilon }\) and large \(n\). Since \(k\ge M\mu _i-1\ge M\epsilon np_n-1\ge \frac{2np_n}{1-p_n\epsilon }\) for large \(n\), it follows that

$$\begin{aligned} h(k){} & {} \le h\left( \frac{2np_n}{1-p_n\epsilon }\right) =\alpha \log \left( \frac{2np_n}{1-p_n\epsilon }\right) -\frac{2np_n\log 2}{1-p_n\epsilon }\\{} & {} \quad +n\log (1-p_n\epsilon )\le -\frac{np_n\log 2}{1-p_n\epsilon }-\epsilon np_n. \end{aligned}$$

By the assumption \(np_n\log 2\ge \log n\), it is easy to get \(\log n-\frac{np_n\log 2}{1-p_n\epsilon }<0\). Then

$$\begin{aligned} \sum _{k=M\mu _i-1}^{n-2}k^{\alpha -1}(k-\tilde{\mu }_i+1-p_nf_{ij})\mathbb {P}(\tilde{d}_i=k)\le & {} n\exp \left( -\frac{np_n\log 2}{1-p_n\epsilon }-\epsilon np_n\right) \nonumber \\\le & {} \exp \left( -\epsilon np_n(1+o(1))\right) . \end{aligned}$$
(41)
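The role of the assumption \(np_n\log 2\ge \log n\) in the last step can be made explicit. Since \(0<1-p_n\epsilon <1\),

```latex
$$\begin{aligned} \frac{np_n\log 2}{1-p_n\epsilon }>np_n\log 2\ge \log n, \qquad \text {so}\qquad n\exp \left( -\frac{np_n\log 2}{1-p_n\epsilon }-\epsilon np_n\right) =e^{\log n-\frac{np_n\log 2}{1-p_n\epsilon }}\,e^{-\epsilon np_n}\le e^{-\epsilon np_n}, \end{aligned}$$
```

which is \(e^{-\epsilon np_n(1+o(1))}\), as claimed in (41).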

Hence (37) is bounded by \((np_n)^{2\alpha }n^2p_nO\left( \frac{1}{\sqrt{np_n}}\right) \).

Now we bound the last term in (35). Note that

$$\begin{aligned}{} & {} \sum _{1\le i,j\le n}\big |A_{ij}(d_i-\mu _i)(d_j-\mu _j)X_i^{\alpha -1}X_j^{\alpha -1}\big |\nonumber \\{} & {} \quad \le \sum _{1\le i,j\le n}\big |A_{ij}(d_i-\mu _i)(d_j-\mu _j)X_i^{\alpha -1}X_j^{\alpha -1}\big |I\left[ X_i\le M\mu _i,X_j\le M\mu _j\right] \nonumber \\{} & {} \qquad +\sum _{1\le i,j\le n}\big |A_{ij}(d_i-\mu _i)(d_j-\mu _j)X_i^{\alpha -1}X_j^{\alpha -1}\big |I\left[ X_i\le M\mu _i,X_j\ge M\mu _j\right] \nonumber \\{} & {} \qquad +\sum _{1\le i,j\le n}\big |A_{ij}(d_i-\mu _i)(d_j-\mu _j)X_i^{\alpha -1}X_j^{\alpha -1}\big |I\left[ X_i\ge M\mu _i,X_j\le M\mu _j\right] \nonumber \\{} & {} \qquad +\sum _{1\le i,j\le n}\big |A_{ij}(d_i-\mu _i)(d_j-\mu _j)X_i^{\alpha -1}X_j^{\alpha -1}\big |I\left[ X_i\ge M\mu _i,X_j\ge M\mu _j\right] .\nonumber \\ \end{aligned}$$
(42)

Since \(X_i\) is between \(d_i\) and \(\mu _i\), then \(X_i\le M\mu _i\) implies \(d_i\le X_i\le M\mu _i\), and \(X_i\ge M\mu _i\) implies \(d_i\ge X_i\ge M\mu _i\). Similar results hold for \(X_j\). Then by (42) we have

$$\begin{aligned}{} & {} \sum _{1\le i,j\le n}\big |A_{ij}(d_i-\mu _i)(d_j-\mu _j)X_i^{\alpha -1}X_j^{\alpha -1}\big |\nonumber \\{} & {} \quad \le \sum _{1\le i,j\le n}\big |A_{ij}(d_i-\mu _i)(d_j-\mu _j)X_i^{\alpha -1}X_j^{\alpha -1}\big |I\left[ X_i\le M\mu _i, X_j\le M\mu _j\right] \nonumber \\{} & {} \qquad +\sum _{1\le i,j\le n}\big |A_{ij}(d_i-\mu _i)(d_j-\mu _j)X_i^{\alpha -1}X_j^{\alpha -1}\big |I\left[ X_i\le M\mu _i,d_j\ge X_j\ge M\mu _j\right] \nonumber \\{} & {} \qquad +\sum _{1\le i,j\le n}\big |A_{ij}(d_i-\mu _i)(d_j-\mu _j)X_i^{\alpha -1}X_j^{\alpha -1}\big |I\left[ d_i\ge X_i\ge M\mu _i,X_j\le M\mu _j\right] \nonumber \\{} & {} \qquad +\sum _{1\le i,j\le n}\big |A_{ij}(d_i-\mu _i)(d_j-\mu _j)X_i^{\alpha -1}X_j^{\alpha -1}\big |I\left[ d_i\ge X_i\ge M\mu _i,d_j\ge X_j\ge M\mu _j\right] . \end{aligned}$$
(43)

Now we bound the expectation of each term in (43). Since the second term and the third term have the same order, it suffices to bound the first, the second, and the last terms. By a similar argument as in (39) and (41), we get the following results.

$$\begin{aligned}{} & {} \mathbb {E}\left[ \sum _{1\le i,j\le n}A_{ij}|d_i-\mu _i||d_j-\mu _j|X_i^{\alpha -1}X_j^{\alpha -1}I\left[ X_i\le M\mu _i,X_j\le M\mu _j\right] \right] \nonumber \\{} & {} \quad \le O\big ((np_n)^{2(\alpha -1)}p_n\big )\sum _{1\le i,j\le n}\mathbb {E}|\tilde{d}_i-\tilde{\mu }_i+1-p_nf_{ij}||\tilde{d}_j-\tilde{\mu }_j+1-p_nf_{ij}|\nonumber \\{} & {} \quad =O\big ((np_n)^{2(\alpha -1)}p_nn^2np_n\big )\nonumber \\{} & {} \quad =(np_n)^{2\alpha }n^2p_nO\left( \frac{1}{np_n}\right) , \end{aligned}$$
(44)
$$\begin{aligned}{} & {} \mathbb {E}\left[ \sum _{1\le i,j\le n}A_{ij}|d_i-\mu _i||d_j-\mu _j|X_i^{\alpha -1}X_j^{\alpha -1}I\left[ d_i\ge X_i\ge M\mu _i,d_j\ge X_j\ge M\mu _j\right] \right] \nonumber \\{} & {} \quad \le \mathbb {E}\left[ \sum _{1\le i,j\le n}A_{ij}d_i^{\alpha }d_j^{\alpha }I\left[ d_i\ge M\mu _i,d_j\ge M\mu _j\right] \right] \nonumber \\{} & {} \quad \le p_n\sum _{1\le i,j\le n}\mathbb {E}\left[ (\tilde{d}_i+1)^{\alpha }(\tilde{d}_j+1)^{\alpha }I\left[ \tilde{d}_i\ge M\mu _i-1,\tilde{d}_j\ge M\mu _j-1\right] \right] \nonumber \\{} & {} \quad =p_n\left( \sum _{1\le i\le n}\mathbb {E}\left[ (\tilde{d}_i+1)^{\alpha }I\big [\tilde{d}_i\ge M\mu _i-1\big ]\right] \right) ^2\nonumber \\{} & {} \quad = O\left( n^2p_n\right) \exp \left( -2\epsilon np_n(1+o(1))\right) , \end{aligned}$$
(45)

and

$$\begin{aligned}{} & {} \mathbb {E}\left[ \sum _{1\le i,j\le n}A_{ij}|d_i-\mu _i||d_j-\mu _j|X_i^{\alpha -1}X_j^{\alpha -1}I\big [X_i\le M\mu _i,d_j\ge X_j\ge M\mu _j\big ]\right] \nonumber \\{} & {} \quad \le O\left( (np_n)^{\alpha -1}\right) \mathbb {E}\left[ \sum _{1\le i,j\le n}A_{ij}|d_i-\mu _i|d_j^{\alpha }I\big [d_j\ge M\mu _j\big ]\right] \nonumber \\{} & {} \quad \le O\left( (np_n)^{\alpha -1}p_n\right) \sum _{1\le i,j\le n}\mathbb {E}\left[ |\tilde{d}_i-\tilde{\mu }_i+1-p_nf_{ij}|(\tilde{d}_j+1)^{\alpha }I\big [\tilde{d}_j\ge M\mu _j-1\big ]\right] \nonumber \\{} & {} \quad =O\left( (np_n)^{\alpha -1}p_nn^2\sqrt{np_n}\right) \exp \left( -\epsilon np_n(1+o(1))\right) . \end{aligned}$$
(46)

Combining (35)–(46) yields the desired result. This completes the proof for the general Randić index.

(II). Now we prove the result for the general sum-connectivity index. We split the proof into two cases: \(\alpha <1\) and \(\alpha \ge 1\).

First we work on \(\chi _{-\alpha }\) with \(\alpha >-1\). By the mean value theorem, we have

$$\begin{aligned} \chi _{-\alpha }{} & {} =\frac{1}{2}\sum _{1\le i,j\le n}\frac{A_{ij}}{(d_i+d_j)^{\alpha }}=\frac{1}{2}\sum _{1\le i,j\le n}\frac{A_{ij}}{(\mu _i+\mu _j)^{\alpha }}\nonumber \\{} & {} \quad -\frac{\alpha }{2}\sum _{1\le i,j\le n}\frac{A_{ij}}{X_{ij}^{\alpha +1}}(d_i-\mu _i+d_j-\mu _j), \end{aligned}$$
(47)

where \(X_{ij}\) is between \(\mu _i+\mu _j\) and \(d_i+d_j\). We shall show that the first term is the leading term and that the second term is of smaller order.

By a similar argument as in (12), it is easy to get

$$\begin{aligned} \sum _{i<j}\frac{A_{ij}}{(\mu _i+\mu _j)^{\alpha }}=\sum _{i<j}\frac{p_nf_{ij}}{(\mu _i+\mu _j)^{\alpha }}\left( 1+O_P\left( \frac{1}{\sqrt{n^2p_n}}\right) \right) . \end{aligned}$$
(48)

Hence the first term of (47) is asymptotically equal to \(\sum _{i<j}\frac{p_nf_{ij}}{(\mu _i+\mu _j)^{\alpha }}\).

Let \(\delta _n=[\log (np_n)]^{-2}\). Since \(X_{ij}\) is between \(\mu _i+\mu _j\) and \(d_i+d_j\), \(X_{ij}\le \delta _n(\mu _i+\mu _j)\) implies \(d_i+d_j\le X_{ij}\le \delta _n(\mu _i+\mu _j)\). Then

$$\begin{aligned}{} & {} \sum _{i,j}\left| \frac{A_{ij}}{X_{ij}^{\alpha +1}}(d_i-\mu _i+d_j-\mu _j)\right| \nonumber \\{} & {} \quad \le \sum _{i,j}\left| \frac{A_{ij}}{X_{ij}^{\alpha +1}}(d_i-\mu _i+d_j-\mu _j)\right| I\left[ d_i+d_j\le X_{ij}\le \delta _n(\mu _i+\mu _j)\right] \nonumber \\{} & {} \qquad +\sum _{i,j}\left| \frac{A_{ij}}{X_{ij}^{\alpha +1}}(d_i-\mu _i+d_j-\mu _j)\right| I\left[ X_{ij}\ge \delta _n(\mu _i+\mu _j)\right] . \end{aligned}$$
(49)

Next we bound the expectation of each term in (49). For the second term, the expectation can be bounded as follows.

$$\begin{aligned}{} & {} \mathbb {E}\left[ \sum _{i,j}\frac{A_{ij}}{X_{ij}^{\alpha +1}}(|d_i-\mu _i|+|d_j-\mu _j|)I\left[ X_{ij}\ge \delta _n(\mu _i+\mu _j)\right] \right] \nonumber \\{} & {} \le O\left( \frac{1}{\delta _n^{\alpha +1}(np_n)^{\alpha +1}}\right) \sum _{i,j}\mathbb {E}\left[ A_{ij}(|\tilde{d}_i-\mu _i+A_{ij}|+|\tilde{d}_j-\mu _j+A_{ij}|)\right] \nonumber \\{} & {} =O\left( \frac{n^2p_n\sqrt{np_n}}{\delta _n^{\alpha +1}(np_n)^{\alpha +1}}\right) =\frac{n^2p_n}{(np_n)^{\alpha }}O\left( \frac{[\log (np_n)]^{2(\alpha +1)}}{\sqrt{np_n}}\right) . \end{aligned}$$
(50)

Next we focus on the first term in (49). It is clear that

$$\begin{aligned}{} & {} \mathbb {E}\left[ \sum _{i,j}\frac{A_{ij}}{X_{ij}^{\alpha +1}}(|d_i-\mu _i|+|d_j-\mu _j|)I[d_i+d_j\le X_{ij}< \delta _n(\mu _i+\mu _j)]\right] \\{} & {} \quad \le \mathbb {E}\left[ \sum _{i,j}\frac{A_{ij}(|d_i-\mu _i|+|d_j-\mu _j|)}{(d_i+d_j)^{\alpha +1}}I\left[ d_i+d_j< \delta _n(\mu _i+\mu _j)\right] \right] . \end{aligned}$$

Note that \(d_i+d_j< \delta _n(\mu _i+\mu _j)\) implies \(d_i< \delta _n(\mu _i+\mu _j)\) and \(d_j< \delta _n(\mu _i+\mu _j)\), and

$$\begin{aligned} \frac{|d_i-\mu _i|+|d_j-\mu _j|}{(d_i+d_j)^{\alpha +1}}=\frac{|d_i-\mu _i|}{(d_i+d_j)^{\alpha +1}}+\frac{|d_j-\mu _j|}{(d_i+d_j)^{\alpha +1}}\le \frac{|d_i-\mu _i|}{d_i^{\alpha +1}}+\frac{|d_j-\mu _j|}{d_j^{\alpha +1}}. \end{aligned}$$

Then we have

$$\begin{aligned}{} & {} \mathbb {E}\left[ \sum _{i,j}\frac{A_{ij}}{X_{ij}^{\alpha +1}}(|d_i-\mu _i|+|d_j-\mu _j|)I\left[ d_i+d_j\le X_{ij}< \delta _n(\mu _i+\mu _j)\right] \right] \nonumber \\{} & {} \quad \le \mathbb {E}\left[ \sum _{i,j}\frac{A_{ij}|d_i-\mu _i|}{d_i^{\alpha +1}}I\left[ d_i< \delta _n(\mu _i+\mu _j)\right] \right] \nonumber \\{} & {} \qquad +\mathbb {E}\left[ \sum _{i,j}\frac{A_{ij}|d_j-\mu _j|}{d_j^{\alpha +1}}I\left[ d_j< \delta _n(\mu _i+\mu _j)\right] \right] \nonumber \\{} & {} \quad \le 2p_n \mathbb {E}\left[ \sum _{i,j}\frac{|\tilde{d}_i-\mu _i+1|}{(\tilde{d}_i+1)^{\alpha +1}}I\left[ \tilde{d}_i< \delta _n(\mu _i+\mu _j)\right] \right] \nonumber \\{} & {} \quad =n^2p_ne^{-\epsilon np_n(1+o(1))}=\frac{n^2p_n}{(np_n)^{\alpha }}e^{-\epsilon np_n(1+o(1))}. \end{aligned}$$
(51)

Combining (47)–(51) yields

$$\begin{aligned} \chi _{-\alpha }=p_n^{1-\alpha }\sum _{i<j}\frac{f_{ij}}{(f_i+f_j)^{\alpha }}\left( 1+O_P\left( \frac{[\log (np_n)]^{2(\alpha +1)}}{\sqrt{n^2p_n}}\right) \right) , \alpha >-1. \end{aligned}$$

Now we work on \(\chi _{\alpha }\) with \(\alpha \ge 1\). When \(\alpha =1\), the proof is trivial. We will focus on \(\alpha >1\). By the mean value theorem, one has

$$\begin{aligned} \chi _{\alpha }{} & {} =\frac{1}{2}\sum _{i,j}A_{ij}(d_i+d_j)^{\alpha }=\frac{1}{2}\sum _{i,j}A_{ij}(\mu _i+\mu _j)^{\alpha }\nonumber \\{} & {} \quad +\frac{\alpha }{2}\sum _{i,j}A_{ij}X_{ij}^{\alpha -1}(d_i-\mu _i+d_j-\mu _j), \end{aligned}$$
(52)

where \(X_{ij}\) is between \(\mu _i+\mu _j\) and \(d_i+d_j\).
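As an aside on the remark above that the case \(\alpha =1\) is trivial: no expansion is needed there, since the index collapses to a sum of squared degrees,

```latex
$$\begin{aligned} \chi _{1}=\sum _{1\le i<j\le n}A_{ij}(d_i+d_j)=\sum _{1\le i\le n}d_i^2, \end{aligned}$$
```

because each vertex \(i\) contributes \(d_i\) to each of its \(d_i\) incident edge terms; the stated asymptotics then follow from the concentration of the degrees \(d_i\) around \(\mu _i\).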

The remaining proof is similar to that of the case \(\alpha <1\). Let \(M=\frac{4}{\epsilon (1-p_n\epsilon )}\). Clearly \(M\) is bounded and \(M>4\). Note that

$$\begin{aligned}{} & {} \sum _{i,j}\mathbb {E}\left[ A_{ij}X_{ij}^{\alpha -1}(|d_i-\mu _i|+|d_j-\mu _j|)I[X_{ij}\le M(\mu _i+\mu _j)]\right] \nonumber \\{} & {} =(np_n)^{\alpha }n^2p_nO\left( \frac{1}{\sqrt{np_n}}\right) , \end{aligned}$$
(53)

and

$$\begin{aligned}{} & {} \sum _{i,j}\mathbb {E}\left[ A_{ij}X_{ij}^{\alpha -1}(|d_i-\mu _i+d_j-\mu _j|)I[d_i+d_j\ge X_{ij}> M(\mu _i+\mu _j)]\right] \nonumber \\{} & {} \quad \le O(1)\sum _{i,j}\mathbb {E}\left[ A_{ij}(\tilde{d}_i+\tilde{d}_j+2A_{ij})^{\alpha -1}(|\tilde{d}_i+\tilde{d}_j-\mu _i-\mu _j+2A_{ij}|)\right. \nonumber \\{} & {} \quad \qquad \left. I[\tilde{d}_i+\tilde{d}_j>M(\mu _i+\mu _j-1)]\right] \nonumber \\{} & {} \quad \le O(1)p_n\sum _{i,j}\mathbb {E}\left[ (\tilde{d}_i+\tilde{d}_j+2)^{\alpha -1}(\tilde{d}_i+\tilde{d}_j)I[\tilde{d}_i+\tilde{d}_j>M(\mu _i+\mu _j-1)]\right] \nonumber \\{} & {} \quad \le O(1)p_n\sum _{i,j}\sum _{k=M(\mu _i+\mu _j-1)}^{2(n-2)}(k+2)^{\alpha -1}k\mathbb {P}(\tilde{d}_i+\tilde{d}_j=k)\nonumber \\{} & {} \quad =n^2p_nne^{-\epsilon np_n(1+o(1))}=(np_n)^{\alpha }n^2p_ne^{-\epsilon np_n(1+o(1))}, \end{aligned}$$
(54)

where the second-to-last step follows from an argument similar to that in (41), noting that \(\tilde{d}_i+\tilde{d}_j\) follows a Poisson–binomial distribution.
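To spell out the analogue of (40) used here: \(\tilde{d}_i+\tilde{d}_j\) is a sum of at most \(2(n-2)\) independent Bernoulli variables with success probabilities in \([p_n\epsilon ,p_n]\), so its point probabilities admit the same binomial-type bound,

```latex
$$\begin{aligned} \mathbb {P}(\tilde{d}_i+\tilde{d}_j=k)\le \left( {\begin{array}{c}2(n-2)\\ k\end{array}}\right) p_n^k(1-p_n\epsilon )^{2(n-2)-k}, \end{aligned}$$
```

and the argument following (40) applies with \(n\) replaced by \(2(n-2)\).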

Combining (52), (53) and (54) yields

$$\begin{aligned} \chi _{\alpha }=\left( 1+O_P\left( \frac{1}{\sqrt{np_n}}\right) \right) p_n^{\alpha +1}\sum _{i<j}(f_i+f_j)^{\alpha }f_{ij}, \alpha \ge 1. \end{aligned}$$

Then the proof is complete. \(\square \)