1 Introduction

Recent years have witnessed explosive growth in social media and online social networks, which have increasingly become an important part of our lives (Smith and Christakis 2008). For example, online social networks can increase the diversity of opinions, ideas, and information available to individuals (Kim 2011; Lee et al. 2014). At the same time, people may use online social networks to broadcast information about their lives and their opinions on various topics or issues to a large audience. It has been reported that social networks and social media have fundamentally changed the ways in which people share and shape opinions (Das et al. 2014; Fotakis et al. 2016; Auletta et al. 2018). Recently, there has been a concerted effort to model opinion dynamics in social networks, in order to understand the effects of various factors on the formation dynamics of opinions (Jia et al. 2015; Dong et al. 2018; Anderson and Ye 2019).

One of the popular opinion dynamics models is the Friedkin-Johnsen (FJ) model (Friedkin and Johnsen 1990). Although simple and succinct, the FJ model can capture complex behavior of real social groups by incorporating French’s “theory of social power” (French 1956), and thus has been extensively studied. A sufficient condition for the stability of this standard model was obtained in (Ravazzi et al. 2015), the average innate opinion was estimated in (Das et al. 2013), and the unique equilibrium expressed opinion vector was derived in (Das et al. 2013; Bindel et al. 2015). Several interpretations of this natural model were subsequently explored from different perspectives (Ghaderi and Srikant 2014; Bindel et al. 2015). In addition, based on the FJ opinion dynamics model, some social phenomena have been quantified and studied (Xu et al. 2021), including polarization (Matakos et al. 2017; Musco et al. 2018), disagreement (Musco et al. 2018), conflict (Chen et al. 2018), and controversy (Chen et al. 2018). Moreover, some optimization problems (Abebe et al. 2018) for the FJ model were also investigated, such as opinion maximization (Gionis et al. 2013).

Beyond studying the properties, interpretations, and related quantities of the FJ model, many extensions or variants of this popular model have been developed (Jia et al. 2015). In (Abebe et al. 2018), the impact of susceptibility to persuasion on opinion dynamics was analyzed by introducing a resistance parameter into the FJ model. In (Semonsen et al. 2019), a varying peer-pressure coefficient was introduced to the FJ model, aiming to explore the role of increasing peer pressure on opinion formation. In (Chitra and Musco 2020), the FJ model was augmented to include algorithmic filtering, in order to analyze the effect of filter bubbles on polarization. Several multidimensional extensions of the FJ model were also developed (Friedkin 2015; Parsegov et al. 2015, 2017; Friedkin et al. 2016), generalizing scalar opinions to vector-valued opinions over several topics, either independent (Friedkin 2015) or interdependent (Parsegov et al. 2015, 2017; Friedkin et al. 2016).

The above works on opinion dynamics models provide deep insights into the understanding of opinion formation, since they capture various important aspects affecting opinion shaping, including individuals’ attributes, interactions among individuals, and opinion update mechanisms. However, existing models consider only interactions among nearest neighbors, allowing interactions with higher-order neighbors only indirectly via immediate neighbors, despite the fact that direct higher-order interactions are commonly encountered in real natural (Schunack et al. 2002) and social (Ghasemiesfeh et al. 2013; Lyu et al. 2022) networks. In a natural example (Schunack et al. 2002), long jumps spanning multiple lattice spacings were shown to play a dominating role in the diffusion of molecules on a surface. In a social network example (Ghasemiesfeh et al. 2013; Lyu et al. 2022), an individual can make use of the local, partial, or global knowledge corresponding to his direct, second-order, and even higher-order neighbors to search for opinions about an issue of concern, or to diffuse information and opinions efficiently. Many existing theories and models suggest that long ties are more likely to persist than other social ties, and that many of them constantly function as social bridges (Lyu et al. 2022). Schawe and Hernández (2022) defined a higher-order Deffuant model (Deffuant et al. 2000), generalizing the original pairwise-interaction bounded-confidence opinion dynamics model to interactions involving a group of agents. To date, however, there is still a lack of a comprehensive higher-order FJ opinion dynamics model on social networks, although it has been observed that long-range non-nearest-neighbor interactions can play a fundamental role in opinion dynamics.

In this paper, we make a natural extension of the classical FJ opinion dynamics model to explicitly incorporate the higher-order interactions between individuals and their non-nearest neighbors by leveraging higher-order random walks. We prove that the higher-order model converges to a unique equilibrium opinion vector, provided that each individual has a non-zero resistance parameter measuring his susceptibility to persuasion. We show that the equilibrium opinions of the higher-order FJ model differ greatly from those of the classical FJ model, demonstrating that higher-order interactions have a significant impact on opinion dynamics.

Basically, the equilibrium opinions of the higher-order FJ model on a graph are the same as those of the standard FJ model on a corresponding dense graph with a loop at each node. That is, at each time step, every individual updates his opinion according to his innate opinion, as well as the currently expressed opinions of his nearest neighbors on the dense graph. Since the transition matrix of the dense graph is a combination of powers of the transition matrix of the original graph, directly constructing it is computationally expensive. To reduce the computational cost, we construct a sparse matrix that is spectrally close to the dense matrix, in time and space nearly linear in the number of edges of the original graph. This sparsified matrix preserves the information of the dense graph, so that the difference between the equilibrium opinions on the dense graph and on the sparsified graph is negligible.

Based on the obtained sparsified matrix, we further introduce an iteration algorithm, which has guaranteed convergence and can quickly approximate the equilibrium opinions of the higher-order FJ model. Finally, we perform extensive experiments on different networks of various scales, showing that the new algorithm achieves both high efficiency and effectiveness. In particular, the algorithm is scalable: it can approximate the equilibrium opinions of the second-order FJ model on large graphs with millions of nodes. It is expected that the new model sheds light on the further understanding of opinion formation, and that the new algorithm will be helpful for various applications, such as computing polarization and disagreement in opinion dynamics.

A preliminary version of our work has been published in (Zhang et al. 2020). In this paper, we extend our preliminary results in several directions. First, we present proof details previously omitted in (Zhang et al. 2020) for several important theorems, including the convergence analysis and approximation error bound of the proposed algorithm. Second, we add an illustrative example in Sect. 3.2, in order to better understand and demonstrate the difference between the traditional FJ model and the higher-order model. Finally, we provide additional experimental results for different innate opinion distributions and provide a thorough parameter analysis in Sect. 6.

2 Preliminaries

In this section, some basic concepts in graph and matrix theories, as well as the Friedkin–Johnsen (FJ) opinion dynamics model are briefly reviewed.

2.1 Graphs and related matrices

Consider a simple, connected, undirected social network (graph) \(\mathcal {G}=(\mathcal {V},\mathcal {E})\), where \(\mathcal {V}=\{1,2,\ldots ,n\}\) is the set of n agents and \(\mathcal {E}=\{(i,j)|i,j\in \mathcal {V}\}\) is the set of m edges describing relations among nearest neighbors. The topological and weighted properties of \(\mathcal {G}\) are encoded in its adjacency matrix \(\varvec{ A }=(a_{ij})_{n\times n}\), where \(a_{ij}=a_{ji}=w_e\) if i and j are linked by an edge \(e=(i,j)\in \mathcal {E}\) with weight \(w_e\), and \(a_{ij}=0\) otherwise. Let \(\mathcal {N}_i=\{j|(i,j)\in \mathcal {E}\}\) denote the set of neighbors of node i and \(d_i=\sum _{j\in \mathcal {N}_i} a_{ij}\) denote the degree of i. The diagonal degree matrix of graph \(\mathcal {G}\) is defined to be \(\varvec{ D }= \textrm{diag}(d_1,d_2,\ldots ,d_n)\), and the Laplacian matrix of \(\mathcal {G}\) is \(\varvec{ L }=\varvec{ D }-\varvec{ A }\). Let \(\varvec{ e }_i\) denote the i-th standard basis vector of appropriate dimension. Let \(\textbf{1}\) (\(\textbf{0}\)) be the vector with all entries being ones (zeros). Then, it can be verified that \(\varvec{ L }\textbf{1}=\textbf{0}\). The random walk transition matrix for \(\mathcal {G}\) is defined as \(\varvec{ P }=\varvec{ D }^{-1}\varvec{ A }\), which is row-stochastic (i.e., each row-sum equals 1).
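As a concrete illustration, these matrices can be assembled for a toy weighted graph (our own example, not from the paper) and the stated identities checked numerically:

```python
import numpy as np

# A weighted triangle with one pendant node, illustrating L*1 = 0 and the
# row-stochasticity of P = D^{-1} A.
A = np.array([[0, 2, 1, 0],
              [2, 0, 1, 0],
              [1, 1, 0, 3],
              [0, 0, 3, 0]], dtype=float)   # symmetric weighted adjacency
D = np.diag(A.sum(axis=1))                  # diagonal degree matrix
L = D - A                                   # Laplacian matrix
P = np.linalg.inv(D) @ A                    # random-walk transition matrix

assert np.allclose(L @ np.ones(4), 0)       # L*1 = 0
assert np.allclose(P.sum(axis=1), 1)        # each row of P sums to 1
```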

2.2 Norms of a vector or matrix

For a non-negative vector \(\varvec{ x }\), \(x_{\max }\) and \(x_{\min }\) denote its maximum and minimum entries, respectively. For an \(n\times n\) matrix \(\varvec{ A }\), \(\sigma _i(\varvec{ A })\), \(i=1,2,\ldots ,n\), denote its singular values. Given a vector \(\varvec{ x }\), its 2-norm is defined as \(\Vert \varvec{ x } \Vert _2=\sqrt{\sum _i{|x_i|^2}}\) and its \(\infty\)-norm is defined as \(\Vert \varvec{ x } \Vert _{\infty }=\max _i{|x_i|}\). It is easy to verify that \(\Vert \varvec{ x } \Vert _{\infty }\le \Vert \varvec{ x } \Vert _2\le \sqrt{n}\Vert \varvec{ x } \Vert _{\infty }\) for any n-dimensional vector \(\varvec{ x }\). For a matrix \(\varvec{ A }\), its 2-norm is defined as \(\Vert \varvec{ A } \Vert _2=\max _{\varvec{ x }\ne \textbf{0}} \Vert \varvec{ A }\varvec{ x } \Vert _2/\Vert \varvec{ x } \Vert _2\). By definition, \(\Vert \varvec{ A }\varvec{ x } \Vert _{2}\le \Vert \varvec{ A } \Vert _2\Vert \varvec{ x } \Vert _2\). It is known that the 2-norm of matrix \(\varvec{ A }\) equals its maximum singular value \(\sigma _{\max }\), which satisfies \(|\varvec{ x }^\top \varvec{ A }\varvec{ y }|\le \sigma _{\max }\Vert \varvec{ x } \Vert _2\Vert \varvec{ y } \Vert _2\) for any vectors \(\varvec{ x }\) and \(\varvec{ y }\) (Golub and Van Loan 2012).
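A quick numerical check of these norm facts, on random toy data of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(5)                  # a random 5-dimensional vector
A = rng.standard_normal((5, 5))             # a random 5x5 matrix

# ||x||_inf <= ||x||_2 <= sqrt(n) ||x||_inf
assert np.max(np.abs(x)) <= np.linalg.norm(x) <= np.sqrt(5) * np.max(np.abs(x))
# ||A||_2 equals the largest singular value of A
assert np.isclose(np.linalg.norm(A, 2), np.linalg.svd(A, compute_uv=False).max())
```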

2.3 Friedkin-Johnsen opinion dynamics model

The Friedkin-Johnsen (FJ) model is a classic opinion dynamics model (Friedkin and Johnsen 1990). For a specific topic, the FJ model assumes that each agent \(i\in \mathcal {V}\) is associated with an innate opinion \(s_i\in [0,1]\), where higher values signify more favorable opinions, and a resistance parameter \(\alpha _i\in (0,1]\) quantifying the agent’s stubbornness, with a higher value corresponding to a lower tendency to conform with his neighbors’ opinions. Let \(\varvec{ x }^{(t)}\) denote the opinion vector of all agents at time t, with element \(x_i^{(t)}\) representing the opinion of agent i at that time. At every timestep, each agent updates his opinion by taking a convex combination of his innate opinion and the weighted average of the expressed opinions of his neighbors in the previous timestep. Mathematically, the opinion of agent i evolves according to the following rule:

$$\begin{aligned} x_i^{(t+1)}=\alpha _i s_i+(1-\alpha _i)\frac{\sum _{j\in \mathcal {N}_i}a_{ij}\cdot x_j^{(t)}}{d_i}. \end{aligned}$$
(1)

The evolution rule can be rewritten in matrix form as

$$\begin{aligned} \varvec{ x }^{(t+1)}=\varvec{\varLambda }\varvec{ s }+(\varvec{ I }-\varvec{\varLambda })\varvec{ P }\varvec{ x }^{(t)}, \end{aligned}$$
(2)

where \(\varvec{\varLambda }\) denotes the diagonal matrix \(\textrm{diag}(\alpha _1,\alpha _2,\ldots ,\alpha _n)\), and \(\varvec{ I }\) is the identity matrix.

It has been proved (Das et al. 2013) that the above opinion formation process converges to a unique equilibrium \(\varvec{ z }\) when \(\alpha _i>0\) for all \(i\in \mathcal {V}\). The equilibrium vector \(\varvec{ z }\) can be obtained as the unique fixed point of equation (2), i.e.,

$$\begin{aligned} \varvec{ z }=\left( \varvec{ I }-\left( \varvec{ I }-\varvec{\varLambda }\right) \varvec{ P }\right) ^{-1}\varvec{\varLambda }\varvec{ s }. \end{aligned}$$
(3)

The ith entry \(z_i\) of \(\varvec{ z }\) is the expressed opinion of agent i.

A straightforward way to calculate the equilibrium vector \(\varvec{ z }\) requires inverting a matrix, which is expensive and intractable for large networks. In (Chan et al. 2019), the iteration process of the opinion dynamics model is used to obtain an approximation of the vector \(\varvec{ z }\), with a theoretical guarantee of convergence. The method is very efficient and scales to networks with millions of nodes.
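The two routes to the equilibrium can be compared in a few lines; the graph, innate opinions, and resistance parameters below are illustrative choices of ours:

```python
import numpy as np

# Iterating the FJ update (2) on a small graph converges to the
# closed-form equilibrium (3).
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)      # unweighted triangle
P = A / A.sum(axis=1, keepdims=True)        # transition matrix D^{-1} A
s = np.array([1.0, 0.0, 0.5])               # innate opinions
Lam = np.diag([0.9, 0.3, 0.5])              # resistance parameters alpha_i
I = np.eye(3)

z = np.linalg.solve(I - (I - Lam) @ P, Lam @ s)   # equilibrium, Eq. (3)

x = s.copy()
for _ in range(200):                        # iterative update, Eq. (2)
    x = Lam @ s + (I - Lam) @ P @ x
assert np.allclose(x, z, atol=1e-10)        # iteration reaches equilibrium
```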

3 Higher-order opinion dynamics model

The classical FJ model has many advantages; for example, it captures some complex human behavior in social networks. However, this model considers only the interactions among nearest neighbors, without explicitly considering the higher-order interactions existing in social networks and social media. To fix this deficiency, in this section, we generalize the FJ model to a higher-order setting by using the random walk matrix polynomials describing higher-order random walks.

3.1 Random walk matrix polynomial

For a network \(\mathcal {G}\), its random walk matrix polynomial is defined as follows (Cheng et al. 2015):

Definition 1

Let \(\varvec{ A }\) and \(\varvec{ D }\) be, respectively, the adjacency matrix and diagonal degree matrix of a graph \(\mathcal {G}\). For a non-negative vector \(\varvec{\beta }=(\beta _1,\beta _2,\ldots ,\beta _T)\) satisfying \(\sum _{r=1}^T \beta _r=1\), the matrix

$$\begin{aligned} \varvec{ L }_{\varvec{\beta }}(\mathcal {G})=\varvec{ D }-\sum _{r=1}^T \beta _r\varvec{ D }\left( \varvec{ D }^{-1}\varvec{ A }\right) ^r \end{aligned}$$
(4)

is a T-degree random walk matrix polynomial of \(\mathcal {G}\).

The Laplacian matrix \(\varvec{ L }\) is a particular case of \(\varvec{ L }_{\varvec{\beta }}(\mathcal {G})\), which can be obtained from \(\varvec{ L }_{\varvec{\beta }}(\mathcal {G})\) by setting \(T=1\) and \(\beta _1=1\). In fact, it can be proved that, for any \(\varvec{\beta }\), there always exists a graph \(\mathcal {G}'\) with loops, whose Laplacian matrix is \(\varvec{ L }_{\varvec{\beta }}(\mathcal {G})\), as characterized by the following theorem.

Theorem 1

(Proposition 25 in Cheng et al. (2015)) The random walk matrix polynomial \(\varvec{ L }_{\varvec{\beta }}(\mathcal {G})\) is a Laplacian matrix.

Define matrix \(\varvec{ L }_{\mathcal {G}_r}=\varvec{ D }-\varvec{ D }\left( \varvec{ D }^{-1}\varvec{ A }\right) ^r\), which is a particular case of matrix \(\varvec{ L }_{\varvec{\beta }}(\mathcal {G})\) corresponding to \(T=r\) and \(\beta _r=1\). In fact, \(\varvec{ L }_{\mathcal {G}_r}\) is the Laplacian matrix of graph \(\mathcal {G}_r\), constructed from graph \(\mathcal {G}\) by performing r-step random walks on graph \(\mathcal {G}\). The ij-th element of the adjacency matrix \(\varvec{ A }_{\mathcal {G}_r}\) for graph \(\mathcal {G}_r\) is equal to the product of the degree \(d_i\) for node i in \(\mathcal {G}\) and the probability that a walker starts from node i and ends at node j after performing r-step random walks in \(\mathcal {G}\). Thus, the matrix polynomial \(\varvec{ L }_{\varvec{\beta }}(\mathcal {G})\) is a combination of matrices \(\varvec{ L }_{\mathcal {G}_r}\) for \(r=1,2,\ldots ,T\).
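The claim that each \(\varvec{ L }_{\mathcal {G}_r}\) is itself a graph Laplacian can be verified numerically on a toy graph (our own example): it is symmetric, has zero row sums, and has non-positive off-diagonal entries.

```python
import numpy as np

# Check Laplacian properties of L_{G_r} = D - D(D^{-1}A)^r on a 4-cycle.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
P = np.linalg.inv(D) @ A

for r in (1, 2, 3):
    L_r = D - D @ np.linalg.matrix_power(P, r)
    assert np.allclose(L_r, L_r.T)                  # symmetric
    assert np.allclose(L_r @ np.ones(4), 0)         # zero row sums
    off = L_r - np.diag(np.diag(L_r))
    assert np.all(off <= 1e-12)                     # off-diagonals <= 0
```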

Based on the random walk matrix polynomials, one can define a generalized transition matrix \(\varvec{ P }^*=\varvec{ P }^*_{\varvec{\beta }}\) for graph \(\mathcal {G}\) as follows.

Definition 2

Given an undirected weighted graph \(\mathcal {G}\) and a coefficient vector \(\varvec{\beta }=(\beta _1,\beta _2,\ldots ,\beta _T)\) with \(\sum _{r=1}^T\beta _r=1\), the matrix

$$\begin{aligned} \varvec{ P }^*_{\varvec{\beta }}=\sum _{r=1}^T\beta _r\varvec{ P }^r =\varvec{ I }-\varvec{ D }^{-1}\varvec{ L }_{\varvec{\beta }}(\mathcal {G}) \end{aligned}$$
(5)

is a T-order transition matrix of \(\mathcal {G}\) with respect to vector \(\varvec{\beta }\).

Note that the generalized transition matrix \(\varvec{ P }^*\) for graph \(\mathcal {G}\) is actually the transition matrix of another graph \(\mathcal {G}'\), which is denser than \(\mathcal {G}\) and contains a loop at each node.
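A small sketch (toy graph and coefficients of our own choosing) verifying that the two expressions in Eq. (5) agree, and that \(\varvec{ P }^*\) is again row-stochastic:

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
P = np.linalg.inv(D) @ A
beta = [0.5, 0.3, 0.2]                      # example coefficients, sum to 1

# P* as a convex combination of powers of P ...
P_star = sum(b * np.linalg.matrix_power(P, r + 1) for r, b in enumerate(beta))
# ... and via the random walk matrix polynomial L_beta of Eq. (4)
L_beta = D - sum(b * D @ np.linalg.matrix_power(P, r + 1)
                 for r, b in enumerate(beta))

assert np.allclose(P_star, np.eye(4) - np.linalg.inv(D) @ L_beta)  # Eq. (5)
assert np.allclose(P_star.sum(axis=1), 1)   # row-stochastic
```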

3.2 Higher-order FJ model

To introduce the higher-order FJ model, first modify the update rule in equation (2) by replacing \(\varvec{ P }\) with \(\varvec{ P }^*\). In other words, the opinion vector evolves as follows:

$$\begin{aligned} \varvec{ x }^{(t+1)}&=\varvec{\varLambda }\varvec{ s }+(\varvec{ I }-\varvec{\varLambda })\varvec{ P }^*\varvec{ x }^{(t)} \nonumber \\&=\varvec{\varLambda }\varvec{ s }+(\varvec{ I }-\varvec{\varLambda })\left[ \beta _1\varvec{ P }+\beta _2\varvec{ P }^2+\ldots +\beta _T\varvec{ P }^T\right] \varvec{ x }^{(t)}. \end{aligned}$$
(6)

In this way, individuals update their opinions by incorporating those of their higher-order neighborhoods at each timestep. Moreover, by adjusting the coefficient vector \(\varvec{\beta }\), one can assign different weights to neighbors of different orders.

Note that for the case of \(\varvec{ P }^*=\varvec{ P }\), the higher-order FJ model reduces to the classic FJ model. For \(\varvec{ P }^*\ne \varvec{ P }\), however, the higher-order FJ model can lead to very different results in comparison with the standard FJ model, as shown in the following example.

Fig. 1: A tree with ten nodes

Example. Consider the tree shown in Fig. 1. The nodes from the center to the periphery are colored red, yellow and blue, respectively. Suppose that the red node has \((s_i,\alpha _i)=(1, 1)\), implying that the center node holds a favorable opinion and is insusceptible to persuasion. Suppose further that the yellow and blue nodes have values (0, 0.6) and (0, 0.01), respectively. We calculate the equilibrium opinion vector for the following three cases:

I. \(\beta _1=1,\beta _2=0\). This case corresponds to the classic FJ model. At every timestep, the opinion of each node is influenced by the opinions of its nearest neighbors. At equilibrium, the expressed opinions of the red, yellow and blue nodes are 1, 0.181 and 0.179, respectively. This is consistent with intuition: the red and yellow nodes are stubborn and thus stay close to their innate opinions, while the blue nodes are susceptible to their neighboring (yellow) nodes.

II. \(\beta _1=0,\beta _2=1\). This case corresponds to the second-order FJ model, where only the influences of second-order neighbors are considered. The equilibrium expressed opinions of the red, yellow and blue nodes are 1, 0 and 0.971, respectively. This can be explained as follows. Since the second-order neighbors of any yellow node are only yellow nodes, the yellow nodes influence only each other and all stick to their innate opinion. In contrast, the blue nodes have the center node among their second-order neighbors and are highly affected by it.

III. \(\beta _1=\beta _2=0.5\). This is a hybrid of the above two cases, where each node interacts with both its first- and second-order neighbors with equal weights. In this case, the equilibrium opinions of the red, yellow and blue nodes are 1, 0.142 and 0.351, respectively. The opinion of each node lies between its opinions in the above two cases.

In addition to individual expressed opinions, the sums of the expressed opinions over all nodes also differ significantly across the three cases, being equal to 2.617, 6.826 and 3.532, respectively.
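The three cases can be reproduced numerically. The sketch below assumes the tree of Fig. 1 is a star of stars (one red center, three yellow nodes, two blue leaves per yellow node), consistent with the description; the node indexing and variable names are ours.

```python
import numpy as np

n = 10
A = np.zeros((n, n))
for y in (1, 2, 3):                         # red center 0 -- yellow nodes
    A[0, y] = A[y, 0] = 1
for i, b in enumerate(range(4, 10)):        # two blue leaves per yellow node
    y = 1 + i // 2
    A[y, b] = A[b, y] = 1
P = A / A.sum(axis=1, keepdims=True)
s = np.array([1.0] + [0.0] * 9)             # innate opinions
Lam = np.diag([1.0] + [0.6] * 3 + [0.01] * 6)   # resistance parameters
I = np.eye(n)

def equilibrium(b1, b2):
    """Equilibrium of the higher-order FJ model with T = 2."""
    P_star = b1 * P + b2 * (P @ P)
    return np.linalg.solve(I - (I - Lam) @ P_star, Lam @ s)

z1 = equilibrium(1.0, 0.0)                  # case I: classic FJ
z2 = equilibrium(0.0, 1.0)                  # case II: second-order only
z3 = equilibrium(0.5, 0.5)                  # case III: hybrid

# red (node 0), yellow (node 1), blue (node 4) match the reported values
assert np.allclose(z1[[0, 1, 4]], [1, 0.181, 0.179], atol=1e-3)
assert np.allclose(z2[[0, 1, 4]], [1, 0.0, 0.971], atol=1e-3)
assert np.allclose(z3[[0, 1, 4]], [1, 0.142, 0.351], atol=1e-3)
```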

This example demonstrates that the interactions between nodes and their higher-order neighbors can have substantial impact on network opinions. Moreover, as will be seen in Sect. 6.2, the higher-order interactions also strongly affect the opinion dynamics on real-world social networks.

4 Convergence analysis

In this section, the convergence of the higher-order FJ model is analyzed. It will be shown that if all \(\alpha _i\) are positive, the model has a unique equilibrium and will converge to that equilibrium after sufficiently many iterations. The high-level ideas of our proof are adapted from Das et al. (2013).

First, recall the Gershgorin Circle Theorem.

Lemma 1

(Gershgorin Circle Theorem Bell (1965)) Given a square matrix \(\varvec{ A }\in {\mathbb {R}}^{n\times n}\), let \(R_i=\sum _{j\ne i}|a_{ij}|\) be the sum of the absolute values of the non-diagonal entries in the i-th row and \(D(a_{ii},R_i)\in {\mathbb {C}}\) be a closed disc centered at \(a_{ii}\) with radius \(R_i\). Then, every eigenvalue of \(\varvec{ A }\) lies in at least one of the discs \(D(a_{ii},R_i)\).

Now, the following main result is established.

Theorem 2

The higher-order FJ model defined in (6) has a unique equilibrium if \(\alpha _i>0\) for all \(i \in \mathcal {V}\).

Proof

Any equilibrium \(\varvec{ z }^*\in {\mathbb {R}}^n\) of (6) must be a solution of the following linear system:

$$\begin{aligned} \left( \varvec{ I }-(\varvec{ I }-\varvec{\varLambda })\varvec{ P }^*\right) \varvec{ z }^*=\varvec{\varLambda }\varvec{ s }. \end{aligned}$$
(7)

Let \(\varvec{ M }=\varvec{ I }-(\varvec{ I }-\varvec{\varLambda })\varvec{ P }^*\). It suffices to show that \(\varvec{ M }\) is non-singular. First, every diagonal entry of \(\varvec{ M }\) is positive and every non-diagonal entry is non-positive, since every entry of \(\varvec{ P }^*\) lies in the interval [0, 1]. Thus, for any row i of \(\varvec{ M }\), the sum of the absolute values of its non-diagonal elements is

$$\begin{aligned} R_i=\sum _{j\ne i} |M_{ij}| =-\varvec{ e }_i^\top \varvec{ M }\textbf{1}+ M_{ii} =M_{ii}-\alpha _i\ge 0, \end{aligned}$$

where \(M_{ij}\) denotes the (ij)th element of \(\varvec{ M }\), and the last equality uses the fact that the i-th row of \(\varvec{ M }\) sums to \(\alpha _i\). Then, according to Lemma 1, every eigenvalue \(\lambda\) of \(\varvec{ M }\) lies in at least one of the discs \(\{z:|z-M_{ii}|\le M_{ii}-\alpha _i\}\). Since \(\alpha _i>0\) for all i, none of these discs contains 0, so 0 is not an eigenvalue of \(\varvec{ M }\). Therefore, matrix \(\varvec{ M }\) is invertible, and thus the equilibrium is unique. \(\square\)
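As a numerical sanity check of this Gershgorin argument (toy cycle and \(\alpha _i\) values of our own choosing), the eigenvalues of \(\varvec{ M }\) indeed stay at least \(\alpha _{\min }\) away from the origin:

```python
import numpy as np

n = 5
A = np.zeros((n, n))
for i in range(n):                          # a 5-cycle
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
P = A / A.sum(axis=1, keepdims=True)
P_star = 0.5 * P + 0.5 * (P @ P)            # beta = (0.5, 0.5)
alpha = np.array([0.05, 0.2, 0.4, 0.6, 0.8])
M = np.eye(n) - (np.eye(n) - np.diag(alpha)) @ P_star

eigs = np.linalg.eigvals(M)
# every Gershgorin disc excludes the ball of radius alpha_min around 0
assert np.min(np.abs(eigs)) >= alpha.min() - 1e-12
```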

Hence, \(\varvec{ z }^*= \left( \varvec{ I }-(\varvec{ I }-\varvec{\varLambda })\varvec{ P }^*\right) ^{-1}\varvec{\varLambda }\varvec{ s }\) is the unique equilibrium of the opinion dynamics model defined by (6). Next, it will be proved that after sufficiently many iterations, the new higher-order FJ model will converge to this equilibrium.

Theorem 3

If \(\alpha _i>0\) for all \(i\in \mathcal {V}\), then the higher-order FJ model converges to its unique equilibrium \(\varvec{ z }^*= \left( \varvec{ I }-(\varvec{ I }-\varvec{\varLambda })\varvec{ P }^*\right) ^{-1}\varvec{\varLambda }\varvec{ s }\).

Proof

Define the error vector \(\varvec{\epsilon }^{(t)}\) at the t-th iteration as \(\varvec{\epsilon }^{(t)}=\varvec{ x }^{(t)}-\varvec{ z }^*\). We show that \(\varvec{\epsilon }^{(t)}\) tends to zero as t approaches infinity. Substituting (6) into the formula of \(\varvec{\epsilon }^{(t)}\), one obtains \(\varvec{\epsilon }^{(t+1)}=(\varvec{ I }-\varvec{\varLambda })\varvec{ P }^*\varvec{\epsilon }^{(t)}\). In addition, because each row of \(\varvec{ P }^*\) sums to one and \((1-\alpha _i)\in [0,1)\), the entries of \(\varvec{\epsilon }^{(t)}\) shrink as t grows. This can be seen as follows. Let \(\epsilon ^{(t+1)}_{\max }\) be the element of vector \(\varvec{\epsilon }^{(t+1)}\) with the largest absolute value. Then,

$$\begin{aligned} \left| \epsilon ^{(t+1)}_{\max }\right|&= \max _{i=1,2,\ldots ,n}\left\{ (1-\alpha _i)\left| \sum _{j=1}^n P^*_{ij}\epsilon ^{(t)}_{j}\right| \right\} \\&\le \max _{i=1,2,\ldots ,n}\left\{ (1-\alpha _i)\sum _{j=1}^n P^*_{ij}\left| \epsilon ^{(t)}_{\max }\right| \right\} \\&= \max _{i=1,2,\ldots ,n}\left\{ (1-\alpha _i)\left| \epsilon ^{(t)}_{\max }\right| \right\} < \left| \epsilon ^{(t)}_{\max }\right| , \end{aligned}$$

which completes the proof of the theorem. \(\square\)
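The contraction established in this proof can be observed numerically; the graph and parameters below are illustrative choices of ours. Each iteration shrinks the \(\infty\)-norm error by at least the factor \(1-\alpha _{\min }\).

```python
import numpy as np

n = 4
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)
P_star = 0.6 * P + 0.4 * (P @ P)            # beta = (0.6, 0.4)
s = np.array([0.9, 0.1, 0.4, 0.7])
alpha = np.array([0.3, 0.5, 0.2, 0.6])
Lam, I = np.diag(alpha), np.eye(n)

z = np.linalg.solve(I - (I - Lam) @ P_star, Lam @ s)  # exact equilibrium
x, err = s.copy(), []
for _ in range(30):                          # iterate the update rule (6)
    x = Lam @ s + (I - Lam) @ P_star @ x
    err.append(np.max(np.abs(x - z)))        # ||epsilon^(t)||_inf

# each step contracts the error by at least (1 - alpha_min)
for e0, e1 in zip(err, err[1:]):
    assert e1 <= (1 - alpha.min()) * e0 + 1e-12
```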

5 Fast estimation of equilibrium opinion vector

Computing the equilibrium expressed opinion vector \(\varvec{ z }^*\) requires calculating the matrix \(\varvec{ P }^*\) and inverting a matrix, both of which are time consuming. In general, for a network \(\mathcal {G}\), sparse or dense, its r-step random walk graph \(\mathcal {G}_r\) can be very dense. In particular, for a small-world network with a moderately large r, the r-step random walk graph \(\mathcal {G}_r\) is a weighted and almost complete graph. This makes it infeasible to compute the generalized transition matrix \(\varvec{ P }^*\) for huge networks.

In this section, the spectral graph sparsification technique is utilized to obtain an approximation of matrix \(\varvec{ P }^*\). Then, a fast convergent algorithm is developed to approximate the expressed opinion vector \(\varvec{ z }^*\), which avoids matrix inverse operation. The pseudocode of this new algorithm is shown in Algorithm 1.

5.1 Random-walk matrix polynomial sparsification

First, we introduce the concept of spectral similarity and the technique of random-walk matrix polynomial sparsification.

Definition 3

(Spectral Similarity of Graphs Spielman and Srivastava (2011)) Consider two weighted undirected networks \(\mathcal {G}=(\mathcal {V},\mathcal {E})\) and \({\tilde{\mathcal{G}}} = ({\mathcal{V}},{\tilde{\mathcal{E}}})\). Let \(\varvec{ L }\) and \(\varvec{\tilde{ L }}\) denote, respectively, their Laplacian matrices. Graphs \(\mathcal {G}\) and \({\tilde{\mathcal{G}}}\) are \((1+\epsilon )\)-spectrally similar if

$$\begin{aligned} (1-\epsilon )\cdot \varvec{ x }^\top \varvec{\tilde{ L }}\varvec{ x }\le \varvec{ x }^\top \varvec{ L }\varvec{ x }\le (1+\epsilon )\cdot \varvec{ x }^\top \varvec{\tilde{ L }}\varvec{ x },\quad \forall \varvec{ x }\in {\mathbb {R}}^n. \end{aligned}$$
(8)

Next, recall the sparsification algorithm of Cheng et al. (2015). For a given graph \(\mathcal {G}=(\mathcal {V},\mathcal {E})\), start from an empty graph \({\tilde{\mathcal{G}}}\) with the same node set \(\mathcal {V}\) and an empty edge set. Then add M edges to the sparsifier \({\tilde{\mathcal{G}}}\) iteratively by a sampling technique. At each iteration, randomly pick an edge \(e=(u,v)\) from \(\mathcal {E}\) as an intermediate edge and an integer r from \(\{1,2,\ldots ,T\}\) as the length of the random-walk path. Then run the PathSampling(e, r) algorithm (Cheng et al. 2015) to sample an edge by performing r-step random walks, and add the sampled edge, together with its corresponding weight, to the sparsifier \({\tilde{\mathcal{G}}}\). Note that multiple edges between the same pair of nodes are merged into a single edge by summing up their weights. Finally, the algorithm generates a sparsifier \({\tilde{\mathcal{G}}}\) of the original graph \(\mathcal {G}\) with no more than M edges.

In Cheng et al. (2015), an algorithm is designed to obtain a sparsifier \({\tilde{\mathcal{G}}}\) with \(O(n\epsilon ^{-2}\log n)\) edges for \(\varvec{ L }_{\varvec{\beta }}(\mathcal {G})\), which consists of two steps: The first step uses random walk path sampling to get an initial sparsifier with \(O(Tm\epsilon ^{-2}\log n)\) edges. The second step utilizes the standard spectral sparsification algorithm proposed in Spielman and Srivastava (2011) to further reduce the edge number to \(O(n\epsilon ^{-2}\log n)\). Since a sparsifier with \(O(Tm\epsilon ^{-2}\log n)\) edges is sparse enough for the present purposes, only the first step will be taken, while skipping the second step, to avoid unnecessary computations.

Algorithm 1: HODynamic\((\mathcal {G},M,\varvec{ s },\varvec{\beta },t)\)

The PathSampling procedure (Cheng et al. 2015), which samples an edge by performing r-step random walks, is characterized in Lines 5–9 of Algorithm 1. To sample an edge, first draw a random integer k from \(\{1,2,\ldots ,r\}\), and then perform, respectively, a \((k-1)\)-step and an \((r-k)\)-step walk starting from the two end nodes of the edge \(e=(u,v)\). This process samples a length-r path \(\varvec{ p }=(u_0,u_1,\ldots ,u_r)\). At the same time, compute

$$\begin{aligned} Z(\varvec{ p })=\sum _{i=1}^r \frac{2}{a_{u_{i-1},u_i}}. \end{aligned}$$
(9)

The algorithm returns the two endpoints of path \(\varvec{ p }\) as the sampled edge \((u_0,u_r)\), together with the quantity \(Z(\varvec{ p })\) used for calculating its weight.
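The sampling procedure described above can be sketched as follows. This is our own illustrative Python rendering, not the implementation of Cheng et al. (2015); for simplicity, neighbors are chosen uniformly, which corresponds to unit edge weights (in the weighted case, neighbors would be sampled proportionally to edge weight).

```python
import random

def path_sampling(adj, u, v, r, weight=lambda a, b: 1.0, rng=random):
    """Sample a length-r path through the edge (u, v); return its two
    endpoints and Z(p) = sum_i 2 / a_{u_{i-1}, u_i} as in Eq. (9)."""
    k = rng.randint(1, r)                       # uniform position of (u, v)
    left, right = [u], [v]
    for _ in range(k - 1):                      # (k-1)-step walk from u
        left.append(rng.choice(adj[left[-1]]))
    for _ in range(r - k):                      # (r-k)-step walk from v
        right.append(rng.choice(adj[right[-1]]))
    path = left[::-1] + right                   # p = (u_0, u_1, ..., u_r)
    Z = sum(2.0 / weight(path[i], path[i + 1]) for i in range(r))
    return path[0], path[-1], Z

adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}   # a 4-cycle
u0, ur, Z = path_sampling(adj, 0, 1, r=3, rng=random.Random(0))
assert Z == 6.0                                 # unit weights: Z(p) = 2r
```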

Theorem 4

(Spectral Sparsifiers of Random-Walk Matrix Polynomials Cheng et al. (2015)) For a graph \(\mathcal {G}\) with random-walk matrix polynomial

$$\begin{aligned} \varvec{ L }_{\varvec{\beta }}(\mathcal {G}) = \varvec{ D }- \sum _{r=1}^T \beta _r \varvec{ D }\left( \varvec{ D }^{-1}\varvec{ A }\right) ^r, \end{aligned}$$
(10)

where \(\sum _{r=1}^T \beta _r=1\) and \(\beta _r\) are non-negative, one can construct, in time \(O(T^2m\epsilon ^{-2}\log ^2 n)\), a \((1+\epsilon )\)-spectral sparsifier, \(\varvec{\tilde{ L }}\), with \(O(n\epsilon ^{-2}\log n)\) non-zeros.

Now one can approximate the generalized transition matrix using the Laplacian \({\tilde{\varvec{L}}}( {\tilde{\mathcal{G}}} )\) of the sparse graph \({\tilde{\mathcal{G}}}\):

$$\begin{aligned} \varvec{ P }^*= \varvec{ I }-\varvec{ D }^{-1}\varvec{ L }_{\varvec{\beta }}(\mathcal {G}) \approx \varvec{ I }-\varvec{ D }^{-1}{\tilde{\varvec{L}}}( {\tilde{\mathcal{G}}} ) =\tilde{\varvec{ P }}^*. \end{aligned}$$
(11)

Complexity Analysis Regarding the time and space complexity of the sparsification process of Algorithm 1 (Lines 2–9), the main time cost is the M calls of the PathSampling routine. In PathSampling, it takes \(O(\log n)\) time to sample a neighbor in a weighted network, and thus \(O(r\log n)\) time to sample a length-r path. In total, the time complexity of the sparsification is \(O(MT\log n)\). As for space complexity, it takes \(O(n+m)\) space to store the original graph \(\mathcal {G}\) and an additional O(M) space to store the sparsifier \({\tilde{\mathcal{G}}}\). Thus, for an appropriate size M, the sparsifier is computable.

5.2 Approximating the equilibrium opinion vector via the iteration method

With the spectral graph sparsification technique, it is possible to approximate \(\varvec{ P }^*\) with a sparse matrix. Nevertheless, directly computing the equilibrium still involves a matrix inversion, which is computationally expensive for large networks, such as those with millions of nodes. In this section, we develop a convergent algorithm that approximates the equilibrium vector \(\varvec{ z }^*\) by iterating the recurrence defined in (6). For this purpose, an important lemma is first introduced.

Lemma 2

(Lemma 4 in Qiu et al. (2019)) Let \(\varvec{\mathcal { L }}=\varvec{ D }^{-1/2}\varvec{ L }\varvec{ D }^{-1/2}\), \(\varvec {{\tilde{\mathcal{L}}}} =\varvec{ D }^{-1/2}{\tilde{\varvec{L}}}\varvec{ D }^{-1/2}\) and \(\epsilon <0.5\). Then all the singular values of \(\varvec {{\tilde{\mathcal{L}}}} -\varvec{\mathcal { L }}\) satisfy that for all \(i\in \{1,2,\ldots ,n\}\), \(\sigma _i(\varvec {{\tilde{\mathcal{L}}}} -\varvec{\mathcal { L }})<4\epsilon\).

Now, we are in a position to introduce a new iteration method for approximating the equilibrium vector \(\varvec{ z }^*\). First, set \(\varvec{\tilde{ x }}^{(0)}=\varvec{ x }^{(0)}=\varvec{ s }\). Then, at every timestep, update the opinion vector with the approximate transition matrix \(\varvec{\tilde{ P }}^*\), i.e., \(\varvec{\tilde{ x }}^{(t+1)}=\varvec{\varLambda }\varvec{ s }+(\varvec{ I }-\varvec{\varLambda })\varvec{\tilde{ P }}^*\varvec{\tilde{ x }}^{(t)}\). Let \(\alpha _{\min }\) denote the smallest \(\alpha _i\) among all \(i\in \mathcal {V}\), and let \(d_{\max }\) and \(d_{\min }\) denote the maximum and minimum degrees of \(\mathcal {G}\), respectively.

Lemma 3

For every \(t\ge 0\),

$$\begin{aligned} \left\| \varvec{\tilde{ x }}^{(t)}-\varvec{ x }^{(t)} \right\| _{\infty } \le \frac{4\epsilon \sqrt{nd_{\max }d^{-1}_{\min }}\cdot \left( 1-\alpha _{\min }\right) \left[ 1-\left( 1-\alpha _{\min }\right) ^t\right] }{\alpha _{\min }}. \end{aligned}$$
(12)

Proof

Inequality (12) is proved by induction on t. The base case \(t=0\) is trivial, since \(\varvec{\tilde{ x }}^{(0)}=\varvec{ x }^{(0)}=\varvec{ s }\). Assume that (12) holds for some integer t. It remains to show that (12) also holds for \(t+1\). To this end, split \(\Vert \varvec{\tilde{ x }}^{(t+1)}-\varvec{ x }^{(t+1)} \Vert _{\infty }\) into two terms by the triangle inequality:

$$\begin{aligned}&\quad \left\| \varvec{\tilde{ x }}^{(t+1)}-\varvec{ x }^{(t+1)} \right\| _{\infty }\end{aligned}$$
(13)
$$\begin{aligned}&\le \left\| \left( \varvec{ I }-\varvec{\varLambda }\right) \varvec{ D }^{-1}\left( \varvec{ L }_{\varvec{\beta }}-\varvec{\tilde{ L }}_{\varvec{\beta }}\right) \varvec{ x }^{(t)} \right\| _{\infty }\nonumber \\&+\left\| \left( \varvec{ I }-\varvec{\varLambda }\right) \left( \varvec{ I }-\varvec{ D }^{-1}\varvec{\tilde{ L }}_{\varvec{\beta }}\right) \left( \varvec{\tilde{ x }}^{(t)}-\varvec{ x }^{(t)}\right) \right\| _{\infty }. \end{aligned}$$
(14)

For every coordinate of the first term in (14), an upper bound can be derived as follows:

$$\begin{aligned}&\left| \varvec{ e }_i^\top \left( \varvec{ I }-\varvec{\varLambda }\right) \varvec{ D }^{-1}\left( \varvec{ L }_{\varvec{\beta }}-{\tilde{\varvec{L}}}_{\varvec{\beta }}\right) \varvec{ x }^{(t)}\right| \nonumber \\ \le&(1-\alpha _{\min })\cdot \left| \varvec{ e }_i^\top \varvec{ D }^{-1}\left( \varvec{ L }_{\varvec{\beta }}-\varvec{\tilde{ L }}_{\varvec{\beta }}\right) \varvec{ x }^{(t)}\right| \nonumber \\ \le&(1-\alpha _{\min })\cdot \sigma _{\max }\left( \varvec{ D }^{-1}\left( \varvec{ L }_{\varvec{\beta }}-{\tilde{\varvec{L}}}_{\varvec{\beta }}\right) \right) \left\| \varvec{ e }_i\right\| _2\left\| \varvec{ x }^{(t)}\right\| _2 \nonumber \\ \le&(1-\alpha _{\min })\cdot \sigma _{\max }(\varvec{ D }^{-1/2})\cdot \sigma _{\max }(\varvec{ D }^{1/2})\cdot \sigma _{\max }\left( \varvec{\mathcal { L }}_{\varvec{\beta }}-\varvec {{\tilde{\mathcal{L}}}} _{\varvec{\beta }}\right) \left\| \varvec{ e }_i\right\| _2\left\| \varvec{ x }^{(t)}\right\| _2\nonumber \\ \le&\frac{4\epsilon (1-\alpha _{\min })d^{1/2}_{\max }}{d^{1/2}_{\min }}\sqrt{n}, \end{aligned}$$
(15)

where the second inequality is obtained by using the inequality \(\varvec{ x }^\top \varvec{ A }\varvec{ y }\le \sigma _{\max }(\varvec{ A }) \Vert \varvec{ x }\Vert _2 \Vert \varvec{ y }\Vert _2\) for any matrix \(\varvec{ A }\), the third follows from \(\Vert \varvec{ D }^{-1}\left( \varvec{ L }_{\varvec{\beta }}-{\tilde{\varvec{L}}}_{\varvec{\beta }}\right) \Vert _2\le \Vert \varvec{ D }^{-1/2}\Vert _2\Vert \varvec{ D }^{1/2}\Vert _2\Vert \varvec{\mathcal { L }}_{\varvec{\beta }}-\varvec {{\tilde{\mathcal{L}}}} _{\varvec{\beta }}\Vert _2\) together with Lemma 2, and the last inequality follows from the fact that \(x^{(t)}_i\le 1\) for all \(i\in \{1,2,\ldots ,n\}\), so that \(\Vert \varvec{ x }^{(t)}\Vert _2\le \sqrt{n}\).

Next, consider the second term in (14). One has

$$\begin{aligned}&\left\| \left( \varvec{ I }-\varvec{\varLambda }\right) \left( \varvec{ I }-\varvec{ D }^{-1}\varvec{\tilde{ L }}_{\varvec{\beta }}\right) \left( \varvec{\tilde{ x }}^{(t)}-\varvec{ x }^{(t)}\right) \right\| _{\infty } \nonumber \\ \le&\left\| \varvec{ I }-\varvec{\varLambda } \right\| _{\infty }\left\| \varvec{ I }-\varvec{ D }^{-1}\varvec{\tilde{ L }}_{\varvec{\beta }} \right\| _{\infty }\left\| \varvec{\tilde{ x }}^{(t)}-\varvec{ x }^{(t)} \right\| _{\infty }\nonumber \\ =&\left( 1-\alpha _{\min }\right) \cdot \left\| \varvec{\tilde{ x }}^{(t)}-\varvec{ x }^{(t)} \right\| _{\infty }, \end{aligned}$$
(16)

where the equality is due to the fact that \(\Vert \varvec{ I }-\varvec{ D }^{-1}\varvec{\tilde{ L }}_{\varvec{\beta }} \Vert _{\infty }=1\), which can be understood as follows. Since every entry of \(\varvec{ I }-\varvec{ D }^{-1}\varvec{\tilde{ L }}_{\varvec{\beta }}\) is non-negative and \(\varvec{\tilde{ L }}_{\varvec{\beta }}\textbf{1}=\textbf{0}\), one has

$$\begin{aligned} \left\| \varvec{ I }-\varvec{ D }^{-1}\varvec{\tilde{ L }}_{\varvec{\beta }} \right\| _{\infty } =\left\| \left( \varvec{ I }-\varvec{ D }^{-1}\varvec{\tilde{ L }}_{\varvec{\beta }}\right) \textbf{1} \right\| _{\infty } =1. \end{aligned}$$

Substituting (15) and (16) into (14), one obtains

$$\begin{aligned}&\quad \left\| \varvec{\tilde{ x }}^{(t+1)}-\varvec{ x }^{(t+1)} \right\| _{\infty }&\\&\le \frac{4\epsilon (1-\alpha _{\min })d^{1/2}_{\max }}{d^{1/2}_{\min }}\sqrt{n} + \left( 1-\alpha _{\min }\right) \cdot \left\| \varvec{\tilde{ x }}^{(t)}-\varvec{ x }^{(t)} \right\| _{\infty }\\&\le \frac{4\epsilon \sqrt{nd_{\max }d^{-1}_{\min }}\cdot \left( 1-\alpha _{\min }\right) \left[ 1-\left( 1-\alpha _{\min }\right) ^{t+1}\right] }{\alpha _{\min }}, \end{aligned}$$

as required. \(\square\)

To establish the convergence of this method, we need to show that, after sufficiently many iterations, the error between \(\varvec{ x }^{(t)}\) and \(\varvec{ z }^*\) becomes sufficiently small, as characterized by the following lemma.

Lemma 4

For every \(t\ge 0\),

$$\begin{aligned} \left\| \varvec{ x }^{(t)}-\varvec{ z }^* \right\| _{\infty } \le \frac{(1-\alpha _{\min })^t}{\alpha _{\min }}. \end{aligned}$$
(17)

Proof

Expanding with the series \(\left( \varvec{ I }-\left( \varvec{ I }-\varvec{\varLambda }\right) \varvec{ P }^*\right) ^{-1}=\sum \limits _{j=0}^{\infty } \left( \left( \varvec{ I }-\varvec{\varLambda }\right) \varvec{ P }^*\right) ^j\) leads to

$$\begin{aligned} \varvec{ z }^*-\varvec{ x }^{(t)}=\sum _{j=t}^{\infty }\left( \left( \varvec{ I }-\varvec{\varLambda }\right) \varvec{ P }^*\right) ^j\varvec{\varLambda }\varvec{ s }- \left( \left( \varvec{ I }-\varvec{\varLambda }\right) \varvec{ P }^*\right) ^t\varvec{ s }. \end{aligned}$$
(18)

Below, by induction, it will be shown that for any \(\varvec{ x }\in [0,1]^n\), the relation \(\Vert \left( \left( \varvec{ I }-\varvec{\varLambda }\right) \varvec{ P }^*\right) ^j\varvec{ x } \Vert _{\infty }\le (1-\alpha _{\min })^j\) holds for all \(j\ge 0\). Since every coordinate of \(\varvec{ x }\) lies in the interval [0, 1], it is obvious that the above relation is true for the case of \(j=0\). Suppose that, for some \(j\ge 0\), every coordinate of \(\varvec{ y }=\left( \left( \varvec{ I }-\varvec{\varLambda }\right) \varvec{ P }^*\right) ^j\varvec{ x }\) has magnitude at most \((1-\alpha _{\min })^j\). Since \(\varvec{ P }^*\) is row-stochastic, it follows that \(\left\| \varvec{ P }^*\varvec{ y } \right\| _{\infty }\le (1-\alpha _{\min })^j\). In addition, because \(\alpha _i\ge \alpha _{\min }\) for all \(i\in \mathcal {V}\), one has \(\left\| \left( \varvec{ I }-\varvec{\varLambda }\right) \varvec{ P }^*\varvec{ y } \right\| _{\infty }\le (1-\alpha _{\min })^{j+1}\), completing the induction proof.

Finally, since both \(\sum _{j=t}^{\infty }\left( \left( \varvec{ I }-\varvec{\varLambda }\right) \varvec{ P }^*\right) ^j\varvec{\varLambda }\varvec{ s }\) and \(\left( \left( \varvec{ I }-\varvec{\varLambda }\right) \varvec{ P }^*\right) ^t\varvec{ s }\) have non-negative coordinates, one has

$$\begin{aligned} \left\| \varvec{ x }^{(t)}-\varvec{ z }^* \right\| _{\infty }&\le \max \Bigg \{\left\| \sum _{j=t}^{\infty }\left( \left( \varvec{ I }-\varvec{\varLambda }\right) \varvec{ P }^*\right) ^j\varvec{\varLambda }\varvec{ s } \right\| _{\infty }, \left\| \left( \left( \varvec{ I }-\varvec{\varLambda }\right) \varvec{ P }^*\right) ^t\varvec{ s } \right\| _{\infty }\Bigg \} \\ \le \sum _{j=t}^{\infty } (1-\alpha _{\min })^j&=\frac{(1-\alpha _{\min })^t}{\alpha _{\min }}, \end{aligned}$$

as claimed by the lemma. \(\square\)
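The geometric bound of Lemma 4 is easy to check numerically. The following Python snippet (a toy 3-node example with hypothetical data, not taken from the paper) verifies that the iteration error never exceeds \((1-\alpha _{\min })^t/\alpha _{\min }\):

```python
import numpy as np

P = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])    # row-stochastic transition matrix
alpha = np.array([0.3, 0.5, 0.7])  # resistance parameters
s = np.array([0.2, 0.8, 0.5])      # innate opinions in [0, 1]
a_min = alpha.min()

# Exact equilibrium z* = (I - (I - Lam) P)^{-1} Lam s
z_star = np.linalg.solve(np.eye(3) - (1 - alpha)[:, None] * P, alpha * s)

x = s.copy()
for t in range(1, 31):
    x = alpha * s + (1 - alpha) * (P @ x)
    bound = (1 - a_min) ** t / a_min
    assert np.max(np.abs(x - z_star)) <= bound  # Lemma 4's bound holds
```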

Combining Lemmas 3 and 4, a convergent approximate iteration method can be summarized as stated in the following theorem.

Theorem 5

(Approximation Error) For every \(t\ge 0\),

$$\begin{aligned} \left\| \varvec{\tilde{ x }}^{(t)}-\varvec{ z }^* \right\| _{\infty } \le \frac{4\epsilon \sqrt{nd_{\max }d^{-1}_{\min }}\cdot \left( 1-\alpha _{\min }\right) \left[ 1-\left( 1-\alpha _{\min }\right) ^t\right] + (1-\alpha _{\min })^t}{\alpha _{\min }}. \end{aligned}$$

In the sequel, this approximate iteration algorithm is referred to as Approx. It should be mentioned that Theorem 5 provides only a rough upper bound. The experiments in Sect. 6.3 show that Approx works well in practice, leading to very accurate results for real networks.

6 Experiments on real networks

In this section, we conduct extensive experiments on real-world social networks to evaluate the performance of the algorithm Approx.

Table 1 Statistics of real networks used in experiments and comparison of running time (seconds, s) between Exact and Approx for three innate opinion distributions (uniform distribution, exponential distribution, and power-law distribution)

6.1 Setup

Machine Configuration and Reproducibility. Our extensive experiments run on a Linux box with 16-core 3.00GHz Intel Xeon E5-2690 CPU and 64GB of main memory. All algorithms are programmed in Julia v1.3.1. The source code is publicly available at https://github.com/HODynamic/HODynamic.

Datasets. We test the algorithm on a large set of realistic networks, all of which are collected from the Koblenz Network Collection (Kunegis 2013) and Network Repository (Rossi and Ahmed 2015). For those networks that are originally disconnected, we perform experiments on their largest connected components. The statistics of these networks are summarized in the first three columns of Table 1, where we use \(n'\) and \(m'\) to denote, respectively, the numbers of nodes and edges in their largest connected components. The smallest network consists of 4,991 nodes, while the largest network has more than one million nodes. In Table 1, the networks are listed in increasing order of the number of nodes in their largest connected components.

Input Generation. For each dataset, we use the network structure to generate the input parameters in the following way. The innate opinions are generated according to three different distributions: the uniform distribution, the exponential distribution, and the power-law distribution, where the latter two are generated by the randht.py file in (Clauset et al. 2009). For the uniform distribution, we generate the opinion \(s_i\) of node i uniformly at random in the range [0, 1]. For the exponential distribution, we use the probability density \(\textrm{e}^{ x_{\min }}\textrm{e}^{-x}\) to generate \(n'\) positive real numbers x with minimum value \(x_{\min }>0\). Then, we normalize these \(n'\) numbers to the range [0, 1] by dividing each x by the maximum observed value. Similarly, for the power-law distribution, we choose the probability density \((\alpha -1)x_{\min }^{\alpha -1}x^{-\alpha }\) with \(\alpha =2.5\) to generate \(n'\) positive real numbers, and then normalize them to the interval [0, 1] as the innate opinions. In practice, we set \(x_{\min }=1\) for both the exponential and power-law distributions. We note that, due to the normalization, there is always a node with innate opinion 1 for the latter two distributions. The resistance parameters are generated uniformly at random within the interval (0, 1).
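As a concrete illustration, all three opinion distributions can be produced by inverse-CDF sampling followed by the normalization described above. The Python sketch below is a stand-in for the paper's use of randht.py (Clauset et al. 2009); the function name and interface are hypothetical:

```python
import numpy as np

def innate_opinions(n, dist="uniform", x_min=1.0, a=2.5, rng=None):
    """Generate n innate opinions in [0, 1] under one of three distributions."""
    rng = rng or np.random.default_rng()
    if dist == "uniform":
        return rng.uniform(0.0, 1.0, n)
    u = rng.uniform(0.0, 1.0, n)
    if dist == "exponential":
        # inverse CDF of density e^{x_min} e^{-x}, x >= x_min
        x = x_min - np.log(1.0 - u)
    elif dist == "powerlaw":
        # inverse CDF of density (a-1) x_min^{a-1} x^{-a}, x >= x_min
        x = x_min * (1.0 - u) ** (-1.0 / (a - 1.0))
    else:
        raise ValueError(dist)
    return x / x.max()  # normalize: the maximum value maps to opinion 1
```

The final division reproduces the observation above: after normalization, some node always holds innate opinion exactly 1 under the latter two distributions.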

6.2 Comparison between standard FJ model and second-order FJ model

To show the impact of higher-order interactions on the opinion dynamics, we compare the equilibrium expressed opinions between the second-order FJ model and the standard FJ model on four real networks: PagesTVshow, PagesCompany, Gplus, and GemsecRO. For both models, we generate innate opinions and resistance parameters for each node according to the uniform distribution. We set \(\beta _1=1, \beta _2=0\) for the standard FJ model, and \(\beta _1=0, \beta _2=1\) for the second-order FJ model.

Fig. 2

Distribution for difference of equilibrium expressed opinions between the standard FJ model and the second-order FJ model on four real networks

Figure 2 illustrates the distribution of the per-node difference between the final expressed opinions of the classic and second-order FJ models on the four considered real networks. It can be observed that, for each of these four networks, the difference of expressed opinions between the two models exceeds 0.01 for more than half of the nodes. In particular, for over \(10\%\) of the agents, the difference of equilibrium opinions is greater than 0.1, which may place them on opposite sides under the two models. Thus, the opinion dynamics of the second-order FJ model differs substantially from that of the classic FJ model, indicating that the effects of higher-order interactions are not negligible.

6.3 Performance evaluation

To evaluate the performance of the new algorithm Approx, we implement it on various real networks and compare its running time and accuracy with those of the standard Exact algorithm. Exact computes the equilibrium vector by calculating the random-walk matrix polynomials via matrix multiplication and directly inverting the matrix \(\varvec{ I }-(\varvec{ I }-\varvec{\varLambda })\varvec{ P }^*\). Here, we use the second-order random-walk matrix polynomial to simulate the opinion dynamics with \(\beta _1=\beta _2=0.5\). For Approx, we set the number M of samples to \(10\times T\times m\) and approximate the equilibrium vector with 100 iterations. By using the maximum number of iterations as the stopping criterion, we avoid the numerical errors and instability caused by other stopping criteria, thereby making the efficiency comparison more stable. To evaluate the running time objectively, we force both Exact and Approx to run on a single thread on all considered networks, except the last seven marked with asterisks, for which we cannot run Exact due to its very high space and time cost.
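For reference, a dense-algebra sketch of Exact for the second-order model is given below (Python in place of the paper's Julia; names are illustrative). It mirrors the description above: form the second-order matrix polynomial, then solve the linear system directly:

```python
import numpy as np

def exact_equilibrium(P, alpha, s, beta1=0.5, beta2=0.5):
    """Exact equilibrium of the second-order FJ model:
    form P* = beta1 P + beta2 P^2 by matrix multiplication, then
    solve (I - (I - Lam) P*) z* = Lam s with a direct dense solver."""
    P_star = beta1 * P + beta2 * (P @ P)   # second-order matrix polynomial
    n = P.shape[0]
    A = np.eye(n) - (1.0 - alpha)[:, None] * P_star
    return np.linalg.solve(A, alpha * s)
```

The \(O(n^3)\) cost of the matrix product and the direct solve is what makes Exact infeasible on the largest networks.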

Efficiency. We present the running time of algorithms Approx and Exact for all networks in Table 1. For the last seven networks, we only run algorithm Approx, since Exact would take an extremely long time. For each of the three innate opinion distributions on different networks, we record the running time of Approx and Exact. From Table 1, we observe that, for small networks with fewer than 10,000 nodes, the running time of Approx is slightly longer than that of Exact; thus, Approx shows no advantage on small networks. However, for networks with more than twenty thousand nodes, Approx significantly improves the computational efficiency compared with Exact. For example, on the moderately large network GemsecRO with 41,773 nodes, Approx is \(60\times\) faster than Exact. Finally, on large graphs, Approx shows a very clear efficiency advantage. Table 1 indicates that, for networks with over 100 thousand nodes, Approx completes within 12 minutes, whereas Exact fails to run. We note that, for large networks, the running time of Approx grows nearly linearly with \(m'\), consistent with the above complexity analysis, while the running time of Exact grows cubically with \(n'\).

Accuracy. In addition to its high efficiency, the new algorithm Approx provides a good approximation of the equilibrium opinion vector \(\varvec{ z }^* =( \varvec{ z }_1^*,\varvec{ z }_2^*,\ldots , \varvec{ z }_{n}^*)^\top\) in practice. To show this, we compare the approximate results of Approx for the second-order FJ model with the exact results obtained by Exact, on all the examined networks in Table 1 except the last seven, which are too large for Exact to handle. For each of the three distributions of the innate opinions, Table 2 reports the mean absolute error \(\sigma = \sum \nolimits _{i=1}^{n'} |\varvec{ z }^*_i - \varvec{\tilde{ z }}^*_i|/n'\), where \(\varvec{\tilde{ z }}^*=( \varvec{\tilde{ z }}_1^*,\varvec{\tilde{ z }}_2^*,\ldots , \varvec{\tilde{ z }}_{n}^*)^\top\) is the estimated vector obtained by Approx. From Table 2, we observe that the actual mean absolute errors \(\sigma\) are all less than 0.008 and thus negligible. Furthermore, for all networks tested, the mean absolute errors \(\sigma\) are smaller than the theoretical bounds provided by Theorem 5. Therefore, the new algorithm Approx provides a highly accurate approximation of the equilibrium opinion vector in applications.

Table 2 Mean absolute error (\(\times 10^{-3}\)) for estimated expressed opinions of the second-order FJ model with respect to three innate opinion distributions

6.4 Parameter analysis

Finally, we discuss how the parameters affect the accuracy and efficiency of Approx. We report all the parameter analyses on four networks, namely HamstersterFriends, HamstersterFull, PagesTVshow, and Facebook (NIPS). All experiments here are performed for the second-order FJ opinion dynamics model with \(\beta _1=\beta _2=0.5\).

Fig. 3

Mean absolute error vs. the number of non-zeros M

The number of non-zeros M. As shown in Sect. 5.1, \(M = O(Tm\epsilon ^{-2}\log n)\) is required to guarantee the approximation error in theory. We first explore how the number of non-zeros influences the performance of the algorithm Approx in practice. We empirically set M to \(k\times T\times m\), with k being 1, 10, 100, 200, 500, 1000, and 2000, respectively. In Fig. 3, we report the mean absolute error of Approx, which drops as the number of samples M increases, because the matrix \(\varvec{ P }^*\) is approximated more accurately for larger M. On the other hand, although increasing M has a positive influence on the accuracy of Approx, this marginal benefit diminishes gradually. Figure 3 shows that \(M = 10\times T\times m\) is in fact a desirable choice, balancing the trade-off between the effectiveness and efficiency of Approx.

Fig. 4

Mean absolute error vs. the number of iterations

The number of iterations on small networks. In Sect. 5.2, we established the convergence of the iteration method Approx, whose accuracy depends on the number of iterations. In Fig. 4, we plot the mean absolute error of Approx as a function of the number of iterations. In all experiments, M is set to \(10\times T\times m\). As demonstrated in Fig. 4, the second-order FJ model converges within several iterations on all four networks tested.

Fig. 5

Mean absolute error between two consecutive iterations vs. the number of iterations

The number of iterations on large networks. We also analyze the influence of the number of iterations on large networks. Since we cannot run Exact on these four large networks, we instead use the mean absolute error between two consecutive iterations of Approx as an indicator of convergence in Fig. 5. In all experiments, M is set to \(10\times T\times m\). As demonstrated in Fig. 5, the second-order FJ model converges within dozens of iterations on all four networks tested. Therefore, one hundred iterations are enough to obtain accurate approximation results for networks with millions of nodes. For all the experiments shown in Table 1, the difference between the opinion vectors of two consecutive iterations of the second-order FJ model becomes insignificant after dozens of iterations.

7 Conclusion

In this paper, we presented a significant extension of the classic Friedkin-Johnsen (FJ) model by considering not only nearest-neighbor interactions but also long-range interactions, leveraging higher-order random walks. We showed that the proposed model has a unique equilibrium expressed opinion vector, provided that each individual holds an innate opinion. We also demonstrated that the resulting expressed opinion vector of the new model may differ significantly from that of the FJ model, indicating the important impact of higher-order interactions on opinion dynamics.

The expressed opinion vector of the new model can be regarded as the expressed opinion vector of the FJ model on a dense graph with a self-loop at every node, whose transition matrix is a convex combination of powers of the transition matrix of the original graph. However, direct computation of the transition matrix of this dense graph is computationally expensive, involving multiple matrix multiplications and an inversion operation. As a remedy, we leveraged the state-of-the-art Laplacian sparsification technique and the nearly linear-time algorithm in Cheng et al. (2015) to obtain a sparse matrix that is spectrally similar to the original dense matrix and thereby preserves its essential information. Based on the obtained sparse matrix, we further proposed a convergent iteration algorithm, which approximates the equilibrium opinion vector in linear space and time. We finally conducted extensive experiments on diverse social networks, demonstrating that the new algorithm achieves both high efficiency and good effectiveness.

It should be mentioned that, in this paper, we focused only on the impact of higher-order interactions on the sum of expressed opinions. In addition to the opinion sum, there are many other related quantities for opinion dynamics, including the convergence rate, polarization Dandekar et al. (2013); Matakos et al. (2017); Musco et al. (2018), disagreement Musco et al. (2018); Zhu et al. (2021), and so on. It is expected that higher-order interactions also have important influences on these quantities. Although these subjects are beyond the scope of this paper, we provide a heuristic explanation for why higher-order interactions affect polarization. As shown in Theorem 3, the vector of expressed opinions is determined simultaneously by three factors: the innate opinion \(s_i\) and resistance parameter \(\alpha _i\) of every agent i, as well as the higher-order interactions encoded in the matrix \(P^*\). Thus, higher-order interactions play a significant role in opinion polarization, since this quantity is also simultaneously affected by these three factors Musco et al. (2018). Finally, it is worth emphasizing that, although our model incorporates higher-order interactions and thus generates an opinion vector different from that of the classic FJ model, it is difficult to judge which opinion vector is superior or more compelling. In fact, the two models are not mutually exclusive; the choice between them depends on the specific aim of the application, such as minimizing polarization Musco et al. (2018), disagreement Zhu et al. (2021), or conflict Zhu and Zhang (2022); Wang and Kleinberg (2023).