Abstract
Here, an eigenpair means a pair consisting of an eigenvalue and its corresponding eigenvector. This talk introduces the three steps of our study on computing the maximal eigenpair. In the first two steps, we construct efficient initials for a known but dangerous algorithm, first for tridiagonal matrices and then for irreducible matrices with nonnegative off-diagonal elements. In the third step, we present two global algorithms which are still efficient and work well for a rather large class of matrices, including complex ones.
1 Introduction
This paper is a continuation of [4]. For the reader’s convenience, we briefly review the first part of [4], especially the story of the 1000-to-2 ratio of iteration counts between two different algorithms.
The most famous result on the maximal eigenpair is the Perron-Frobenius theorem. For a (pointwise) nonnegative and irreducible A with Trace\((A)>0\), the theorem says that there exists a unique maximal eigenvalue \(\rho (A)>0\) with positive left-eigenvector u and positive right-eigenvector g:
\(u A=\rho (A)\, u,\qquad A g=\rho (A)\, g.\)
These eigenvectors are also unique up to a constant. Before going to the main body of the paper, let us make two remarks.
(1) It suffices to study the right-eigenvector g only. Otherwise, use the transpose \(A^*\) instead of A.
(2) The matrix A is required to be irreducible with nonnegative off-diagonal elements; its diagonal elements can be arbitrary. Otherwise, use a shift \(A + m I\) for large m:
their eigenvectors remain the same, and the maximal eigenvalues differ only by the shift m.
Consider the following matrix.
The main character of the matrix is the sequence \(\{k^2\}\). The sum of each row equals zero except for the last row. Actually, this matrix is truncated from the corresponding infinite one, for which it is known that the maximal eigenvalue is \(-1/4\) (refer to ([2]; Example 3.6)).
Example 1
Let \(N=7\). Then the maximal eigenvalue is \(-0.525268\) with eigenvector:
where \(v^*\) denotes the transpose of v.
We now want to practice the standard algorithms of matrix eigenvalue computation. The first method for computing the maximal eigenpair is the Power Iteration, introduced in 1929. Start from a vector \(v_0\) having a nonzero component in the direction of g, normalized with respect to a norm \(\Vert \cdot \Vert \). At the kth step, iterate \(v_k\) by the formula
\(v_k={A v_{k-1}}/{\Vert A v_{k-1}\Vert },\qquad k\geqslant 1.\)
Then we have the convergence \(v_k\rightarrow g\) (first pointwise and then uniformly) and \(z_k\rightarrow \rho (Q)\) as \(k\rightarrow \infty \). If we rewrite \(v_k\) as
\(v_k={A^k v_0}/{\Vert A^k v_0\Vert },\)
one sees where the name “power” comes from. For our example, to use the Power Iteration, we adopt the \(\ell ^1\)-norm and choose \(v_0={{\tilde{v}}_0}/{\Vert {\tilde{v}}_0\Vert }\), where
This initial comes from a formula to be given in the next section. In Fig. 1 below, the upper curve is g; the lower one is modified from \(\tilde{v}_0\), renormalized so that its last component becomes one. Clearly, these two functions are quite different, so one may worry about the effectiveness of the choice of \(v_0\). Anyhow, having the experience of computing its eigensystem, I expected to finish the computation in a few seconds. Unexpectedly, I had a difficult time computing the maximal eigenpair for this simple example. Altogether, I computed it 180 times, not in one day, using 1000 iterations. The printed pdf-file of the outputs runs to 64 pages. Figure 2 shows the outputs.
The figure shows that the convergence of \(z_k\) is fast at the beginning of the iterations, which means that our initial \(v_0\) is good enough. Afterwards, however, the convergence becomes very slow, which means that the Power Iteration Algorithm itself converges very slowly.
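As a minimal sketch of the method just described (in NumPy rather than the MatLab used for the experiments in this paper; the function name and tolerances are our own), the Power Iteration with the \(\ell ^1\)-norm reads:

```python
import numpy as np

def power_iteration(A, v0, tol=1e-10, max_iter=100000):
    """Power Iteration: v_k = A v_{k-1} / ||A v_{k-1}||_1.

    For a nonnegative irreducible A and a positive v0,
    z_k = ||A v_{k-1}||_1 converges to rho(A) and v_k to the
    maximal eigenvector g.
    """
    v = v0 / np.linalg.norm(v0, 1)   # l^1-normalize the initial vector
    z_old = np.inf
    for _ in range(max_iter):
        w = A @ v
        z = np.linalg.norm(w, 1)     # current approximation of rho(A)
        v = w / z
        if abs(z - z_old) < tol:     # stop once z stabilizes
            break
        z_old = z
    return z, v
```

For a matrix such as Q above, whose maximal eigenvalue is negative, one first applies a shift as in Remark (2) so that the iterated matrix is nonnegative.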
Let us have a look at the convergence of the power iteration. Suppose for simplicity that the eigenvalues are all distinct. Denote by \((\lambda _j, g_j)\) the eigenpairs, with the maximal one being \((\lambda _0, g_0)\). Write \(v_0=\sum _{j=0}^N c_j g_j\) for some constants \((c_j)\). Then \(c_0\ne 0\) by assumption and
Since \(|\lambda _j/\lambda _0|<1\) for each \(j\geqslant 1\) and \(\Vert g_0\Vert =1\), we have
where \(|\lambda _1|:=\max \{|\lambda _j|: j>0\}\). Since \(|\lambda _1/\lambda _0|\) can be very close to 1, this explains why the convergence of the method can be very slow.
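In display form, the expansion behind this estimate (a standard reconstruction, consistent with the notation above) is:

```latex
A^k v_0=\sum_{j=0}^{N}c_j\lambda_j^k g_j
       =c_0\lambda_0^k\bigg(g_0+\sum_{j=1}^{N}\frac{c_j}{c_0}
        \Big(\frac{\lambda_j}{\lambda_0}\Big)^{k}g_j\bigg),
\qquad
\bigg\|\frac{A^k v_0}{c_0\lambda_0^k}-g_0\bigg\|
   =O\bigg(\Big|\frac{\lambda_1}{\lambda_0}\Big|^{k}\bigg).
```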
Before moving further, let us mention that the power method can also be used to compute the minimal eigenvalue \(\lambda _{\min }(A)\): simply replace A by \(A^{-1}\). That is the Inverse Iteration, introduced in 1944:
It is interesting to note that the equivalent assertion on the right-hand side is exactly the input-output method in economics.
To come back to computing the maximal \(\rho (A)\) rather than \(\lambda _{\min }(A)\), we add a shift z to A: replace A by \(A-z I\). Actually, it is even better to use \(zI-A\) instead, since we will often use \(z>\rho (A)\) rather than \(z<\rho (A)\); the details are explained at the beginning of Sect. 4 below. When z is close enough to \(\rho (A)\), the leading eigenvalue of \((z I-A)^{-1}\) becomes \((z-\rho (A))^{-1}\). Furthermore, we can even use a varying shift \(z_{k-1}I\) to accelerate the convergence. Thus, we arrive at the second algorithm for computing the maximal eigenpair, the Rayleigh Quotient Iteration (RQI), a variant of the Inverse Iteration. From now on, unless otherwise stated, we use the \(\ell ^2\)-norm. Starting from an approximating pair \((z_0, v_0)\) of the maximal one \((\rho (A), g)\) with \(v_0^*v_0=1\), use the following iteration:
\(v_k=\frac{(z_{k-1}I-A)^{-1}v_{k-1}}{\Vert (z_{k-1}I-A)^{-1}v_{k-1}\Vert },\qquad z_k=v_k^* A v_k,\qquad k\geqslant 1.\)
If \((z_0, v_0)\) is close enough to \((\rho (A), g)\), then
Since \(v_k^* v_k=1\) for each \(k\geqslant 1\), we have \(z_k={v_{k}^*Av_k}/{(v_{k}^* v_k)}\). That is where the name “Rayleigh Quotient” comes from. Unless otherwise stated, \(z_0\) is set to \(v_0^* A v_0\).
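A minimal NumPy sketch of one such scheme (the function name and stopping rule are our own; each step solves \((z_{k-1}I-A)w=v_{k-1}\), normalizes, and updates the Rayleigh quotient):

```python
import numpy as np

def rqi(A, v0, z0=None, tol=1e-12, max_iter=50):
    """Rayleigh Quotient Iteration.

    Each step solves (z_{k-1} I - A) w = v_{k-1}, then sets
    v_k = w / ||w||_2 and z_k = v_k^* A v_k.  Convergence is very fast
    (cubic in the symmetric case) but only local: a rough initial pair
    may converge to a non-maximal eigenpair (a "pitfall").
    """
    n = A.shape[0]
    v = v0 / np.linalg.norm(v0)
    z = v @ A @ v if z0 is None else z0
    for _ in range(max_iter):
        try:
            w = np.linalg.solve(z * np.eye(n) - A, v)
        except np.linalg.LinAlgError:
            break                    # z hit an eigenvalue exactly: done
        v = w / np.linalg.norm(w)
        z_new = v @ A @ v
        if abs(z_new - z) < tol:
            return z_new, v
        z = z_new
    return z, v
```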
After the hard time spent on the first algorithm, I wondered how many iterations this algorithm would require. Of course, I could no longer bear 1000 iterations; to be honest, I hoped to finish the computation within 100 iterations. What happens now?
Example 2
For the same matrix Q and \({\tilde{v}}_0\) as in Example 1, RQI needs only two iterations:
The result did not merely surprise me; I was shocked indeed. It shows not only the power of the second method but also the effectiveness of my initial \(v_0\). The examples above tell the story of the 1000-to-2 ratio.
For simplicity, from now on, we often write \(\lambda _j:=\lambda _j(-Q)\). In particular \(\lambda _0=-\rho (Q)>0\). Instead of our previous \(v_0\), we adopt the uniform distribution:
This is somehow fair since we usually have no knowledge about g in advance.
Example 3
Let Q be the same as above. Use the uniform distribution \(v_0\) and set \(z_0=v_0^*(-Q)v_0\). Then
The computation becomes stable at the 4th iteration. Unfortunately, the limit is not the wanted \(\lambda _0\) but \(\lambda _2\). In other words, the algorithm converges to a pitfall. Very often, there are \(n-1\) pitfalls for a matrix having n eigenvalues. This shows once again that our initial \({\tilde{v}}_0\) is efficient and that the RQI is quite dangerous.
Hopefully, everyone here has heard of Google’s PageRank. Google’s search is based on the maximal left-eigenvector. On this topic, the book [8] was published 11 years ago; it covers the Power Iteration but not the RQI. It should be clear that for PageRank we need to consider not only large systems but also fast algorithms.
This may be the right place to mention a part of the motivations for the present study.
-
Google’s search–PageRank.
-
Input–output method in economics. In this and the previous case, the computation of the maximal eigenvector is required.
-
Stability speed of stochastic systems. Here, for the stationary distribution of a Markov chain, we need to compute the eigenvector; and for the stability rate, we need to study the maximal (or the first nontrivial) eigenvalue.
-
Principal component analysis for big data. One choice is to study the so-called five-diagonal matrices. A second approach is to use the maximal eigenvector to analyze the roles played by the components, somewhat similar to PageRank.
-
For image recognition, one often uses Poisson or Toeplitz matrices, which are more or less the same as the quasi-birth-death matrices studied in queueing theory. The discrete difference equations of elliptic partial differential equations fall into this class: the block-tridiagonal matrices.
-
The effectiveness of a random algorithm, say Markov Chain Monte Carlo for instance, is described by its convergence speed. This is also related to algorithms for machine learning.
-
As in the last item, a mathematical tool for describing phase transitions is the first nontrivial eigenvalue (the next eigenpair in general). This is the place where the author was originally attracted to the topic.
Given the wide range of applications of the topic, there is a large number of publications. The author is unable to present a carefully chosen list of references here; instead, here are two randomly selected ones: [8, 11].
Up to now, we have discussed only a small \(8\times 8 \,(N=7)\) matrix. How about large N? In computational mathematics, one often expects the number of iterations to grow polynomially, say as \(N^{\alpha }\) for some \(\alpha \geqslant 1\). In our efficient case, since \(2=8^{1/3}\), we would expect about \(10000^{1/3}\approx 22\) iterations for \(N+1= 10^4\). The next table completely subverted my imagination.
Here \(z_0\) is defined by
where \({v}_0\) and \(\delta _1\) are computed by our general formulas to be given in the next section. We computed the matrices of order \(8, 100,\ldots , 10^4\) using MatLab on a notebook, each in no more than 30 s; the iterations finish at the second step. This means that the outputs starting from \(z_2\) are the same and coincide with \(\lambda _0\). See the first row, for instance, which becomes stable at the first step already. We did not believe such a result for some days, so we checked it in different ways. First, since \(\lambda _0=1/4\) when \(N=\infty \), the answers for \(\lambda _0\) given in the fourth column are reasonable. More essentially, using the output \(v_2\), we can deduce upper and lower bounds for \(\lambda _0\) (using ([2]; Theorem 2.4 (3))); the ratio upper/lower is presented in the last column. In each case, the algorithm is accurate up to 6 digits. For the large-scale matrices here and in Sect. 4, the computations were carried out by Yue-Shuang Li.
2 Efficient Initials: Tridiagonal Case
It is now time to write down the formulas for \(v_0\) and \(\delta _1\). Our initial \(z_0\) used in Table 1 is then a small modification of \(\delta _1^{-1}\): a convex combination of \(\delta _1^{-1}\) and \(v_0^*(-Q)v_0\).
Let us consider the tridiagonal matrix (cf. ([3]; Sect. 3) and ([6]; Subsect. 4.4)). Fix \(N\geqslant 1\), denote by \(E=\{0, 1, \ldots , N\}\) the set of indices. By a shift if necessary, we may reduce A to Q with negative diagonals: \(Q^c=A-m I\), \( m:=\max _{i\in E} \sum _{j\in E} a_{ij},\)
Thus, we have three sequences \(\{a_i>0\}\), \(\{b_i>0\}\), and \(\{c_i\geqslant 0\}\). Our main assumption here is that the first two sequences are positive and \(c_i\not \equiv 0\). In order to define our initials, we need three new sequences, \(\{h_k\}\), \( \{\mu _k\}\), and \(\{\varphi _k\}\).
First, we define the sequence \(\{h_k\}\):
here we need another sequence \(\{r_k\}\):
Here and in what follows, our iterations are often one-step. Note that if \(c_k= 0\) for every \(k<N\), then we do not need the sequence \(\{h_k\}\): simply set \(h_k\equiv 1\). An easier way to remember \((h_i)\) is the following: it is nearly harmonic for \(Q^c\), except at the last point N:
where \(B^{\setminus \text {the last row}}\, \) means the matrix obtained from B by removing its last row.
We now use the H-transform, which is designed to remove the sequence \((c_i)\):
Then
for some modified \(\{a_i>0\}\), \(\{b_i>0\}\), and \(c_N^{}> 0\). Of course, \(Q^c\) and \({\widetilde{Q}}\) have the same spectrum. In particular, under the H-transform,
From now on, for simplicity, we denote by Q the matrix replacing \(c_N^{}\) by \(b_N^{}\) in \({\widetilde{Q}}\).
Next, we define the second sequence \(\{\mu _k\}\):
And then define the third one \(\{\varphi _k\}\) as follows.
We are now ready to define \(v_0\) and \(\delta _1\) (or \(z_0\)) using the sequences \((\mu _i)\) and \((\varphi _i)\).
with a convention \(\sum _{\emptyset }=0\).
Finally, having constructed the initials \((v_0, z_0)\), the RQI goes as follows. Solve \(w_k\):
and define
Then
Before moving further, let us mention that there is an explicit representation of the solution \((w_i)\) to Eq. (12). Assume that we are given \(v:=v_{k-1}\) and \(z:=z_{k-1}\). Set
Define two independent sequences \(\{A(s)\}\) and \(\{B(s)\}\) recursively:
Set
Then the required solution \(w_k:=\{w(s): s\in E\}\) can be expressed as \(w(s)=A(s)+ x B(s)\,(s\in E)\).
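Since each step of the RQI here solves a tridiagonal linear system, the solve costs only O(N) operations; the \(A(s), B(s)\) recursion above is one such scheme. For comparison, a generic O(N) tridiagonal solver (the standard Thomas algorithm, not the paper's scheme; the function name is ours) looks as follows:

```python
import numpy as np

def solve_tridiagonal(lower, diag, upper, rhs):
    """Solve T w = rhs in O(N), where T is tridiagonal with
    T[j+1, j] = lower[j], T[j, j] = diag[j], T[j, j+1] = upper[j].

    Plain forward elimination / back substitution without pivoting,
    which is adequate when T is (nearly) diagonally dominant.
    """
    n = len(diag)
    c = np.empty(n - 1)          # modified upper diagonal
    d = np.empty(n)              # modified right-hand side
    c[0] = upper[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):        # forward elimination
        denom = diag[i] - lower[i - 1] * c[i - 1]
        if i < n - 1:
            c[i] = upper[i] / denom
        d[i] = (rhs[i] - lower[i - 1] * d[i - 1]) / denom
    w = np.empty(n)
    w[-1] = d[-1]
    for i in range(n - 2, -1, -1):   # back substitution
        w[i] = d[i] - c[i] * w[i + 1]
    return w
```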
To finish the algorithm, we return to the estimates of \(\big (\lambda _{\min }(-Q^c), g(Q^c)\big )\) (\(g(Q^c)=g(-Q^c)\)), or further of \((\rho (A), g(A))\) if necessary, where g(A), for instance, denotes the maximal eigenvector of A. Suppose that the iterations stop at \(k=k_0\) and set \((\bar{z}, \bar{v})=\big (z_{k_0}^{}, v_{k_0}^{}\big )\) for simplicity. Then, we have
and so
Because \(\lambda _{\min } (-Q^c)=m-\rho (A)\), we obtain
Now, the question is whether one can pass from the tridiagonal case to the general one.
3 Efficient Initials: The General Case (([3]; Subsect. 4.2) and ([6]; Subsect. 4.5))
When we first look at the question just mentioned, the distance from the special tridiagonal case to the general one seems quite long. However, in eigenvalue computation theory there is the so-called Lanczos tridiagonalization procedure for exactly this job, as discussed in ([3]; Appendix of Sect. 3). Nevertheless, what we adopted in ([3]; Sect. 4) is a completely different approach. Here is the main idea. Note that the initials \(v_0\) and \(\delta _1\) constructed in the last section are expressed explicitly in terms of the new sequences: we used the three new sequences \(\{h_k\}\), \( \{\mu _k\}\), and \(\{\varphi _k\}\) instead of the original three \(\{a_i\}\), \(\{b_i\}\), and \(\{c_i\}\) to describe our initials. Very fortunately, the former three sequences do have a clear probabilistic meaning, which shows us the way to the general setup. In short, we construct these sequences by solving three linear equations (usually, no explicit solution is available in the general setup), use them to construct the initials, and then apply the RQI algorithm.
Let \(A=(a_{ij}: i,j\in E)\) be the same as given at the beginning of the paper. Set \(A_i=\sum _{j\in E} a_{ij}\) and define
We can now state the probabilistic/analytic meaning of the required three sequences \((h_i)\), \((\mu _i)\), and \((\varphi _i)\).
-
\((h_i)\) is the harmonic function of \(Q^c\) except at the right endpoint N, as mentioned in the last section.
-
\((\mu _i)\) is the invariant measure (stationary distribution) of the matrix \(Q^c\) with the sequence \((c_i)\) removed.
-
\((\varphi _i)\) is the tail series related to transience; refer to ([3]; Lemma 24 and its proof).
We now begin with our construction. Let \({h}=(h_0, h_1,\ldots , h_N)^*\) (with \(h_0=1\)) solve the equation
and define
Then we have
This is very similar to the tridiagonal case.
Next, let \(Q={\widetilde{Q}}\). Let \({\varphi }=(\varphi _0, \varphi _1,\ldots , \varphi _N)^*\) (with \(\varphi _0=1)\) solve the equation
where
Thirdly, assume that \({\mu }:=(\mu _0, \mu _1, \ldots , \mu _N)\) with \(\mu _0=1\) solves the equation
Having these sequences at hand, we can define the initials
Then, go to the RQI as usual. For \(k\geqslant 1\), let \(w_k\) solve the equation
and set
Then \((z_k, v_k)\rightarrow (\lambda _0, g)\) as \(k\rightarrow \infty \).
We remark that there is an alternative (safer) choice of \(z_0\):
which is almost a copy of the one used in the last section.
The procedure for returning to the estimates of \(\big (\lambda _{\min }(-Q^c), g(Q^c)\big )\) or further \((\rho (A), g(A))\) is very much the same as in the last section.
To conclude this section, we introduce two examples to illustrate the efficiency of the extended initials for tridiagonally dominant matrices. The next two examples were computed by Xu Zhu, a master student in Shanghai.
Example 4
(Block-tridiagonal matrix). Consider the matrix
where \(A_k, B_k, C_k\) are \(40\times 40\) matrices, the \(B_k\) and \(C_k\) are identity matrices, and the \(A_k\) are tridiagonal. For this model, two iterations are enough to arrive at the required results (Table 2).
Example 5
(Toeplitz matrix). Consider the matrix
For this model, three iterations are enough to arrive at the required results (Table 3).
As mentioned before, the extended algorithm should be powerful for tridiagonally dominant matrices. How about the more general case? Two questions are often asked of me by specialists in computational mathematics: do you allow more negative off-diagonal elements? How about complex matrices? My answer was: they are too far away from me; since such matrices cannot be the generator of a Markov chain, I have no tool to handle them. Alternatively, I have studied some matrices more general than the tridiagonal ones: block-tridiagonal matrices, lower triangular plus upper-diagonal, upper triangular plus lower-diagonal, and so on. Certainly, a lot can be done case by case, but this is still a long way from a global algorithm. So we do need a different idea.
4 Global Algorithms
Several months ago, AlphaGo came to my attention, and from it I learnt about machine learning. After some days, I suddenly thought: since we are doing computational mathematics, why not let the computer help us find a highly efficient initial value? Why not leave this hard task to the computer? If so, then we can start from a relatively simple and common initial value and let the computer gradually improve it.
The first step is easy: simply choose the uniform distribution as our initial \(v_0\):
As mentioned before, this initial vector is fair and universal. One may find the word “global” in the title of this section strange at first sight. However, with this universal \(v_0\), the power iteration is already a global algorithm. Unfortunately, its convergence is too slow to be practical. To speed it up, we should add a shift, which now carries a very heavy duty in our algorithm. The main trouble is that the usual Rayleigh quotient \({v_0^*Av_0}/({v_0^*v_0})\) cannot be used as \(z_0\); otherwise, it often leads to a pitfall, as illustrated by Example 3. The reason is that our \(v_0\) is too rough, and so the \(z_0\) deduced from it is also too rough. Now, how do we choose \(z_0\) and, further, \(z_n\)?
Clearly, to avoid the pitfalls, we have to choose \(z_0\) outside the spectrum of A (denoted by Sp(A)), and as close to \(\rho (A)\) as possible to speed up convergence. For nonnegative A, Sp(A) is contained in a disc of radius \(\rho (A)\) in the complex plane. Thus, the safe region is outside Sp(A). Since \(\rho (A)\) lies on the right-hand boundary of the disc, the effective area is on the real axis to the right of, but a little away from, \(\rho (A)\).
For the matrix Q used in this paper, since \(\rho (Q)<0\), its spectrum Sp(Q) lies to the left of the origin. Then one can simply choose \(z_0=0\) as an initial. See Fig. 3.
Having this idea in mind, we can now state two of our global algorithms. Each of them uses the same initials:
where for two vectors f and g, \((f/g)(i)=f_i/g_i\).
Algorithm 1
(Specific Rayleigh quotient iteration). At step \(k\geqslant 1\), for given \(v:=v_{k-1}\) and \(z:=z_{k-1}\), let w solve the equation
Set \(v_k={w}/{\Vert w\Vert }\) and let \(z_k=v_k^* A v_k.\)
This algorithm goes back to ([3]; Subsect. 4.1) with Choice I.
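A minimal NumPy sketch of Algorithm 1 (the function name is ours; the linear system solved at each step is taken to be \((z_{k-1}I-A)w=v_{k-1}\), in line with the shifted inverse iteration of Sect. 1):

```python
import numpy as np

def specific_rqi(A, z0, tol=1e-12, max_iter=100):
    """Algorithm 1 sketch: start from the uniform distribution v0 and a
    safe shift z0 chosen outside Sp(A), to the right of rho(A); at each
    step solve (z_{k-1} I - A) w = v_{k-1}, then set v_k = w/||w||_2
    and z_k = v_k^* A v_k."""
    n = A.shape[0]
    v = np.full(n, 1.0) / np.sqrt(n)     # uniform initial, l^2-normalized
    z = z0
    for _ in range(max_iter):
        w = np.linalg.solve(z * np.eye(n) - A, v)
        v = w / np.linalg.norm(w)
        z_new = v @ A @ v                # specific Rayleigh quotient update
        if abs(z_new - z) < tol:
            return z_new, v
        z = z_new
    return z, v
```

For a matrix Q with \(\rho (Q)<0\), as in Fig. 3, one may simply take \(z_0=0\).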
Algorithm 2
(Shifted inverse iteration). Everything is the same as in Algorithm 1, except redefine \(z_k\) as follows:
for \(k\geqslant 1\) (or equivalently, \(k\geqslant 0\)).
The comparison between these algorithms is as follows: with an unknown small probability, Algorithm 1 is less safe than Algorithm 2, but the former converges faster than the latter, with probability 1/5 for instance.
With these concerns about safety and convergence speed in mind, we examine two non-symmetric examples.
The first example below is lower triangular plus upper-diagonal. It is far away from the tridiagonal case; we want to see what happens.
Example 6
([6]; Example 7). Let
For this matrix, we have computed several cases:
Among them, the first one is the hardest and is hence presented below.
For different N, the outputs of our algorithm are given in Table 4.
The next example is upper triangular plus lower-diagonal. It is motivated by the classical branching process. Denote by \((p_k: k\geqslant 0)\) a given probability measure with \(p_1=0\). Let
The matrix is defined on \(E:=\{1, 2, \ldots , N\}\). Set \(M_1=\sum _{k\in E}k p_k\). When \(N=\infty \), the process is subcritical iff \(M_1<1\), in which case the maximal eigenvalue should be positive. Otherwise, the convergence rate should be zero.
Now, we fix
Then \(M_1=3(2-\alpha )/2\) and hence we are in the subcritical case iff \(\alpha \in (4/3, 2)\).
Example 7
([6]; Example 9). Set \(\alpha =7/4\). We want to know how fast the local (\(N<\infty \)) maximal eigenvalue becomes stable (i.e., close enough to the convergence rate at \(N=\infty \)). Up to \(N=10^{4}\), no more than 6 iterations are needed. To speed up the convergence, we adopt an improved algorithm. The outputs of the approximation of the minimal eigenvalue of \(-Q\) for different N are given in Table 5.
The computation in each case costs no more than one minute. Moreover, starting from \(N=50\), the final outputs are all the same: 0.625, which can then be regarded as a very good approximation of \(\lambda _{\min }(-Q)\) at \(N=\infty \).
It is now time to compare our global algorithm with the one given in the last section. At first sight, in the two examples above we need about 6 iterations, double the number needed in the last section. Note, however, that for the initials of the previous algorithm we must solve three additional linear equations, which cost more or less the same as three additional iterations. Hence the efficiencies of the two algorithms are very close. Actually, the computation time of the algorithm in the last section is much longer than that of the new one here.
It is quite surprising that our new algorithms work for a much more general class of matrices, beyond the scope of [3]. Here we consider the maximal eigenpair only.
The example below allows partially negative off-diagonal elements.
Example 8
(([9]; Example (7)), ([6]; Example 12)). Let
Then the eigenvalues of A are as follows.
The corresponding maximal eigenvector is
which is positive.
Here are the outputs of our algorithms. Both algorithms are started at \(z_0=24\) (Table 6).
Furthermore, we can even consider some complex matrices.
Example 9
(([10]; Example 2.1), ([6]; Example 15)). Let
where the coefficients are all accurate to four decimal digits. Then A has eigenvalues
with maximal eigenvector
The outputs \((y_n)\) (but not \((z_n)\)) of ([6]; Algorithm 14), a variant of Algorithm 2, are as follows (Table 7).
We mention that a simple sufficient condition for the use of our algorithms is the following:
Then we have the Perron–Frobenius property: there exists the maximal eigenvalue \(\rho (A)>0\) having simple left- and right-eigenvectors.
Hopefully, the reader will now accept the use of “global” for our new algorithms. They are indeed very efficient. One may ask about the convergence speed of the algorithms. Even though we do not have a universal estimate for every model in such a general setup, it is known that the shifted inverse algorithm converges cubically, and hence it should be fast enough in practice. This explains why our algorithms are fast in the general setup. Certainly, in the tridiagonally dominant case, one can use the algorithms presented in the previous sections. In the tridiagonal situation especially, we have basic analytic estimates which guarantee the efficiency of the algorithms. See [4] for the long road to the present level.
When talking about eigenvalues, the first reaction of many people (at least mine, 30 years ago) is: well, we already know a great deal about the subject. However, this is not true. One may ask oneself: for eigenvalues, how large a matrix have you ever computed by hand? As far as I know, only \(2\times 2\) in analytic computation by hand; it is not so easy to compute the eigenvalues of a \(3\times 3\) matrix without a computer. Even though I have worked on the topic for about 30 years, I have not been brave enough to compute the maximal eigenvector; we used only a mimic of it to estimate the maximal eigenvalue (or, more generally, the first nontrivial eigenvalue). The first paper I wrote on numerical computation is [3]. It is known that most algorithms in computational mathematics are local, for instance the Newton algorithm (which is quadratic). Hence, our global algorithms are somewhat unusual.
About three years ago, I heard a lecture on a circuit board optimization problem. The speaker used the Newton method. I said it was too dangerous and could fall into a trap. The speaker answered: yes, it is dangerous, but no one in the world can solve this problem otherwise. Could we try an annealing algorithm, I asked. He replied that it was too slow. We all know that in global optimization, a big problem (not yet cracked) is how to escape from local traps. The story we are telling today seems to have opened a small door for algorithms and optimization problems, and perhaps you will be the one to create a new field here.
References
Chen, M.F.: Eigenvalues, Inequalities, and Ergodic Theory. Springer, Heidelberg (2005). https://doi.org/10.1007/b138904
Chen, M.F.: Speed of stability for birth-death processes. Front. Math. China 5, 379–515 (2010)
Chen, M.F.: Efficient initials for computing the maximal eigenpair. Front. Math. China 11, 1379–1418 (2016)
Chen, M.F.: The charming leading eigenpair. To appear in Advances in Mathematics (China) (2017)
Chen, M.F.: Efficient Algorithm for Principal Eigenpair of Discrete \(p\)-Laplacian. Preprint
Chen, M.F.: Global Algorithms for Maximal Eigenpair. Preprint
Golub, G.H., van der Vorst, H.A.: Eigenvalue computation in the 20th century. J. Comput. Appl. Math. 123, 35–65 (2000)
Langville, A.N., Meyer, C.D.: Google’s PageRank and Beyond: The Science of Search Engine Rankings. Princeton University, Princeton (2006)
Noutsos, D.: Perron Frobenius Theory and Some Extensions (2008). http://www.pdfdrive.net/perron-frobenius-theory-and-some-extensions-e10082439.html
Noutsos, D., Varga, R.S.: On the Perron-Frobenius theory for complex matrices. Linear Algebra Appl. 473, 1071–1088 (2012)
Solomon, J.: Numerical Algorithms: Methods for Computer Vision, Machine Learning, and Graphics. CRC Press, Boca Raton (2015)
Acknowledgments
This paper is based on a series of talks: Central South University (2017/6), 2017 IMS-China ICSP (2017/6), Summer School on Stochastic Processes at BNU (2017/7), the 9th Summer Camp for Excellent College Students at BNU (2017/7), Sichuan University (2017/7), the 12th International Conference on Queueing Theory and Network Applications (2017/8), the 2nd Sino-Russian Seminar on Asymptotic Methods in Probability Theory and Mathematical Statistics & the 10th Probability Limit Theory and Statistic Large Sample Theory Seminar (2017/9). The author thanks professors Zhen-Ting Hou, Zai-Ming Liu, Zhen-Qing Chen, Elton P. Hsu, Jing Yang, Xiao-Jing Xu, An-Min Li, Lian-Gang Peng, Quan-Lin Li, Zhi-Dong Bai, Ning-Zhong Shi, Jian-Hua Guo, and Zheng-Yan Lin for their invitations and hospitality. The author also thanks Ms Jing-Yu Ma for her help in editing the paper. Research supported in part by the National Natural Science Foundation of China (Nos. 11131003 and 11626245), the “985” Project from the Ministry of Education in China, and the Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions.
© 2017 Springer International Publishing AG
Chen, MF. (2017). Trilogy on Computing Maximal Eigenpair. In: Yue, W., Li, QL., Jin, S., Ma, Z. (eds) Queueing Theory and Network Applications. QTNA 2017. Lecture Notes in Computer Science(), vol 10591. Springer, Cham. https://doi.org/10.1007/978-3-319-68520-5_19