Abstract
The alternate row and column scaling algorithm applied to a positive \(n\times n\) matrix A converges to a doubly stochastic matrix S(A), sometimes called the Sinkhorn limit of A. For every positive integer n, a two parameter family of row but not column stochastic \(n\times n\) positive matrices is constructed that become doubly stochastic after exactly one column scaling.
1 The Alternate Scaling Algorithm
A positive matrix is a matrix with positive coordinates. A nonnegative matrix is a matrix with nonnegative coordinates. Let \(D = \text{ diag }(x_1,\ldots , x_n)\) denote the \(n\times n\) diagonal matrix with coordinates \(x_1,\ldots , x_n\) on the main diagonal. The diagonal matrix D is positive if its coordinates \(x_1,\ldots , x_n\) are positive. If \(A = (a_{i,j})\) is an \(m\times n\) positive matrix, if \(X = \text{ diag }(x_1,\ldots , x_m)\) is an \(m\times m\) positive diagonal matrix, and if \(Y= \text{ diag }(y_1,\ldots , y_n)\) is an \(n\times n\) positive diagonal matrix, then \(XA = (x_ia_{i,j})\), \(AY = (a_{i,j} y_j)\), \(XAY = (x_ia_{i,j} y_j)\) are \(m\times n\) positive matrices.
Let \(A = (a_{i,j})\) be an \(n \times n\) matrix. The ith row sum of A is
\[
\text{rowsum}_i(A) = \sum_{j=1}^n a_{i,j}.
\]
The jth column sum of A is
\[
\text{colsum}_j(A) = \sum_{i=1}^n a_{i,j}.
\]
The matrix A is row stochastic if it is nonnegative and \( \text{ rowsum }_i(A) = 1\) for all \(i \in \{1,\ldots , n\}\). The matrix A is column stochastic if it is nonnegative and \( \text{ colsum }_j(A) = 1\) for all \(j \in \{1,\ldots , n\}\). The matrix A is doubly stochastic if it is both row stochastic and column stochastic.
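As a concrete illustration (not part of the paper), these definitions translate directly into checks on row and column sums. A minimal NumPy sketch:

```python
import numpy as np

def is_row_stochastic(A, tol=1e-12):
    """Nonnegative with every row sum equal to 1."""
    return bool(np.all(A >= 0) and np.allclose(A.sum(axis=1), 1.0, atol=tol))

def is_col_stochastic(A, tol=1e-12):
    """Nonnegative with every column sum equal to 1."""
    return bool(np.all(A >= 0) and np.allclose(A.sum(axis=0), 1.0, atol=tol))

def is_doubly_stochastic(A, tol=1e-12):
    """Both row stochastic and column stochastic."""
    return is_row_stochastic(A, tol) and is_col_stochastic(A, tol)

# The 2x2 matrix with all coordinates 1/2 is doubly stochastic.
assert is_doubly_stochastic(np.full((2, 2), 0.5))
```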
Let \(A = (a_{i,j})\) be a nonnegative \(n \times n\) matrix such that \( \text{ rowsum }_i(A) > 0\) and \( \text{ colsum }_j(A)>0\) for all \(i,j \in \{1,\ldots , n\}\). Define the \(n \times n\) positive diagonal matrix
\[
X(A) = \text{diag}\left( \frac{1}{\text{rowsum}_1(A)}, \ldots, \frac{1}{\text{rowsum}_n(A)} \right).
\]
Multiplying A on the left by X(A) multiplies each coordinate in the ith row of A by \( 1/ \text{ rowsum }_i(A)\), and so
\[
X(A)A = \left( \frac{a_{i,j}}{\text{rowsum}_i(A)} \right)
\]
and
\[
\text{rowsum}_i(X(A)A) = \sum_{j=1}^n \frac{a_{i,j}}{\text{rowsum}_i(A)} = 1
\]
for all \(i \in \{1,2,\ldots , n\}\). The process of multiplying A on the left by X(A) to obtain the row stochastic matrix X(A)A is called row scaling. We have \(X(A) A = A\) if and only if A is row stochastic if and only if \(X(A) = I\). Note that the row stochastic matrix X(A)A is not necessarily column stochastic.
Similarly, we define the \(n \times n\) positive diagonal matrix
\[
Y(A) = \text{diag}\left( \frac{1}{\text{colsum}_1(A)}, \ldots, \frac{1}{\text{colsum}_n(A)} \right).
\]
Multiplying A on the right by Y(A) multiplies each coordinate in the jth column of A by \(1/ \text{ colsum }_j(A)\), and so
\[
AY(A) = \left( \frac{a_{i,j}}{\text{colsum}_j(A)} \right)
\]
and
\[
\text{colsum}_j(AY(A)) = \sum_{i=1}^n \frac{a_{i,j}}{\text{colsum}_j(A)} = 1
\]
for all \(j \in \{1,2,\ldots , n\}\). The process of multiplying A on the right by Y(A) to obtain a column stochastic matrix AY(A) is called column scaling. We have \(AY(A) = A\) if and only if \(Y(A) = I\) if and only if A is column stochastic. The column stochastic matrix AY(A) is not necessarily row stochastic.
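As an illustration (not from the paper), both scalings are one-line operations on the array of coordinates; the sketch below, using NumPy, checks that X(A)A is row stochastic while its column sums need not equal 1.

```python
import numpy as np

def row_scale(A):
    """Return X(A) A: divide each row of A by its row sum."""
    return A / A.sum(axis=1, keepdims=True)

def col_scale(A):
    """Return A Y(A): divide each column of A by its column sum."""
    return A / A.sum(axis=0, keepdims=True)

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

# Row sums of X(A)A are all 1; its column sums need not be.
assert np.allclose(row_scale(A).sum(axis=1), 1.0)
# Column sums of AY(A) are all 1; its row sums need not be.
assert np.allclose(col_scale(A).sum(axis=0), 1.0)
```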
Let A be a positive \(n \times n\) matrix. Alternately row scaling and column scaling the matrix A produces an infinite sequence of matrices that converges to a doubly stochastic matrix. This result (due to Brualdi, Parter, and Schneider [1], Letac [3], Menon [4], Sinkhorn [7], Sinkhorn–Knopp [8], Tverberg [9], and others) is classical.
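For readers who want to experiment, the alternate scaling algorithm is a few lines of code. The following NumPy sketch (an illustration, not code from any of the cited papers) alternates row and column scaling until the matrix is doubly stochastic to within a tolerance:

```python
import numpy as np

def alternate_scaling(A, tol=1e-12, max_iter=10_000):
    """Alternately row scale and column scale a positive matrix
    until both row and column sums are within tol of 1."""
    S = np.array(A, dtype=float)
    for _ in range(max_iter):
        S = S / S.sum(axis=1, keepdims=True)   # row scaling
        S = S / S.sum(axis=0, keepdims=True)   # column scaling
        # After column scaling the column sums are exactly 1,
        # so only the row sums need to be checked.
        if np.allclose(S.sum(axis=1), 1.0, atol=tol):
            break
    return S

A = np.array([[1.0, 2.0], [3.0, 4.0]])
S = alternate_scaling(A)
assert np.allclose(S.sum(axis=0), 1.0) and np.allclose(S.sum(axis=1), 1.0)
```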
Nathanson [5, 6] proved that if A is a \(2\times 2\) positive matrix that is not doubly stochastic but becomes doubly stochastic after a finite number L of scalings, then L is at most 2, and computed explicitly the \(2\times 2\) row stochastic matrices that become doubly stochastic after exactly one column scaling. An open question was to describe \(n \times n\) matrices with \(n \ge 3\) that are not doubly stochastic but become doubly stochastic after finitely many scalings. Ekhad and Zeilberger [2] discovered the following row stochastic but not column stochastic \(3\times 3\) matrix, which requires exactly one column scaling to become doubly stochastic:
Column scaling A produces the doubly stochastic matrix
The following construction generalizes this example. For every \(n \ge 3\), there is a two parameter family of row stochastic \(n\times n\) matrices that require exactly one column scaling to become doubly stochastic.
Let \(A = \left( \begin{matrix} a_{i,j} \end{matrix} \right) \) be an \(m \times n\) matrix. For \(i=1,\ldots , m\), we denote the ith row of A by
Theorem 1
Let k and \(\ell \) be positive integers, and let \(n > \max (2k, 2\ell )\). Let x and z be positive real numbers such that
and let
The \(n \times n\) matrix A such that
is row stochastic but not column stochastic. The matrix obtained from A after one column scaling is doubly stochastic.
Proof
If
then
If
then
Thus, the matrix A is row stochastic.
If
then
If
then
Thus, the matrix A is not column stochastic.
The column scaling matrix for A is the positive diagonal matrix
For the column scaled matrix AY(A), we have the following row sums. If
then
If
then
By construction, the matrix AY(A) is column stochastic, and the computation above shows that it is also row stochastic. Thus, the matrix AY(A) is doubly stochastic. This completes the proof. \(\square \)
For example, let \(k = \ell = 1\) and \(n = 3\), and let w, x, y, z be positive real numbers such that
The matrix
is row stochastic but not column stochastic. By Theorem 1, column scaling A produces a doubly stochastic matrix. Choosing \(x = 1/5\) and \(z = 3/5\), we obtain the matrix (1).
Here is another example. Let \(k = 2\), \(\ell = 3\), and \(n = 7\). Choosing
we obtain the row but not column stochastic matrix
Column scaling produces the doubly stochastic matrix
Theorem 2
Every \(n\times n\) matrix A constructed in Theorem 1 satisfies \(\det (A) = 0\).
Proof
There are three cases.
If \(k > 1\) or \(n-2k > 1\), then A has two equal columns and \(\det (A) = 0\).
If \(\ell > 1\) or \(n-2 \ell > 1\), then A has two equal rows and \(\det (A) = 0\).
If \(k = \ell = 1\) and \(n = 3\), then
and
This completes the proof. \(\square \)
Theorem 2 is of interest for the following reason. Let \(A = \left( \begin{matrix} a_{i,j} \end{matrix} \right) \) be an \(n \times n\) matrix. If \(\det (A) \ne 0\), then the system of linear equations
\[
\sum_{i=1}^n a_{i,j} t_i = 1 \qquad \text{for } j \in \{1,\ldots , n\}
\]
has a unique solution \((t_1,\ldots , t_n)\). Equivalently, if \(\det (A) \ne 0\), then there exists a unique \(n \times n\) diagonal matrix \(T = \text{ diag }(t_1,\ldots , t_n)\) such that the matrix \(B = TA\) is column stochastic.
Suppose that the matrix A is positive and row stochastic. If \(t_i > 0\) for all \(i \in \{1,\ldots , n\}\), then T is invertible and \(B = TA\) is a positive column stochastic matrix. Setting \(X = T^{-1}\), we have \(XB = A\). Moreover, X is the row scaling matrix associated to B. Thus, if A is a row stochastic matrix such that column scaling A produces a doubly stochastic matrix, then we have pulled A back to a column stochastic matrix B, and we have increased by 1 the number of scalings needed to get a doubly stochastic matrix.
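To make the pull-back concrete: \(TA\) column stochastic means \(\sum_i t_i a_{i,j} = 1\) for every j, i.e. the linear system \(A^{\mathsf T} t = \mathbf{1}\). A NumPy sketch, using an arbitrary illustrative matrix (not one of the matrices from Theorem 1, which have determinant 0):

```python
import numpy as np

# A positive, row stochastic (but not column stochastic) matrix
# with nonzero determinant -- an arbitrary example, not from the paper.
A = np.array([[0.6, 0.2, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.3, 0.5]])

# TA column stochastic means sum_i t_i a_{i,j} = 1 for every j,
# i.e. the linear system A^T t = (1, ..., 1).
t = np.linalg.solve(A.T, np.ones(3))
assert np.all(t > 0)          # for this A the solution is positive

B = np.diag(t) @ A            # B = TA is column stochastic
assert np.allclose(B.sum(axis=0), 1.0)

# Since A is row stochastic, rowsum_i(B) = t_i, so X(B) = T^{-1}
# and row scaling B recovers A: B pulls A back one scaling step.
X = np.diag(1.0 / t)
assert np.allclose(X @ B, A)
```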
Unfortunately, the matrices constructed in Theorem 1 have determinant 0.
2 Open Problems
1. Does there exist a positive \(3\times 3\) row stochastic but not column stochastic matrix A with nonzero determinant such that A becomes doubly stochastic after one column scaling?

2. Let A be a positive \(3\times 3\) row stochastic but not column stochastic matrix that becomes doubly stochastic after one column scaling. Does \(\det (A) = 0\) imply that A has the shape of matrix (4)?

3. Here is the inverse problem: Let A be an \(n \times n\) row stochastic matrix. Does there exist a column stochastic matrix B such that row scaling B produces A (equivalently, such that \(X(B) B = A\))? Compute B.

4. Modify the above problems so that the matrices are required to have rational coordinates.

5. Determine if, for positive integers \(L \ge 3\) and \(n \ge 3\), there exists a positive \(n \times n\) matrix that requires exactly L scalings to reach a doubly stochastic matrix.

6. Classify all matrices for which the alternate scaling algorithm terminates in finitely many steps.
References
R. A. Brualdi, S. V. Parter, and H. Schneider, The diagonal equivalence of a nonnegative matrix to a stochastic matrix, J. Math. Anal. Appl. 16 (1966), 31–50.
S. B. Ekhad and D. Zeilberger, Answers to some questions about explicit Sinkhorn limits posed by Mel Nathanson, arXiv:1902.10783, 2019.
G. Letac, A unified treatment of some theorems on positive matrices, Proc. Amer. Math. Soc. 43 (1974), 11–17.
M. V. Menon, Reduction of a matrix with positive elements to a doubly stochastic matrix, Proc. Amer. Math. Soc. 18 (1967), 244–247.
M. B. Nathanson, Alternate minimization and doubly stochastic matrices, arXiv:1812.11935, 2018.
M. B. Nathanson, Matrix scaling and explicit doubly stochastic limits, Linear Algebra Appl. 578 (2019), 111–132; https://doi.org/10.1016/j.laa.2019.05.004
R. Sinkhorn, A relationship between arbitrary positive matrices and doubly stochastic matrices, Ann. Math. Statist. 35 (1964), 876–879.
R. Sinkhorn and P. Knopp, Concerning nonnegative matrices and doubly stochastic matrices, Pacific J. Math. 21 (1967), 343–348.
H. Tverberg, On Sinkhorn’s representation of nonnegative matrices, J. Math. Anal. Appl. 54 (1976), no. 3, 674–677.
Acknowledgements
Supported in part by a grant from the PSC-CUNY Research Award Program.
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Nathanson, M.B. (2020). Matrix Scaling Limits in Finitely Many Iterations. In: Nathanson, M. (eds) Combinatorial and Additive Number Theory III. CANT 2018. Springer Proceedings in Mathematics & Statistics, vol 297. Springer, Cham. https://doi.org/10.1007/978-3-030-31106-3_12