1 Introduction

Let \(A=[a_{ij}]\) be an \(m\times n\) matrix, and let \(A^{\dagger }=[a_{ij}^{\dagger }]\) denote the \(m\times n\) matrix obtained from A by a rotation of 180\(^{\circ }\). Thus \(a_{ij}^{\dagger }=a_{m+1-i,n+1-j}\) for all i and j. The matrix A is centrosymmetric provided \(A^{\dagger }=A\), that is, provided

$$ a_{m+1-i,n+1-j}=a_{ij} \text{ for all } i \text{ and } j. $$

If the \(m\times n\) matrix A is centrosymmetric and m (resp., n) is odd, then row \((m+1)/2\) (resp., column \((n+1)/2\)) is palindromic, that is, it reads the same forward and backward. A centrosymmetric matrix A is determined by its first \(\lceil n/2\rceil \) columns. Centrosymmetric matrices have occurred in many investigations; see e.g. [1, 2, 6].

As usual, \(A^t=[a_{ij}^t]\) denotes the \(n\times m\) transpose of A, so that \(a_{ij}^t=a_{ji}\) for all i and j. In addition, we use \(A^h=[a_{ij}^h]\) to denote the Hankel transpose [5] of A. This is the \(n\times m\) matrix obtained from A by interchanging rows and columns as with the transpose, but using the reverse order in both cases. Thus \(a_{ij}^h=a_{m+1-j,n+1-i}\) for all i and j. For example, if

$$ A=\left[ \begin{array}{ccc} a&b&c\\ d&e&f\end{array}\right] ,$$

then

$$A^{h}=\left[ \begin{array}{cc} f&c\\ e&b\\ d&a\end{array}\right] . $$

If A is a square matrix, then \(A^{h}\) is obtained from A by transposing across the Hankel diagonal, that is, across the diagonal of A running from upper right to lower left.

A matrix A is symmetric provided that \(A^t=A\). We say that a matrix is Hankel-symmetric provided that \(A^h=A\). Symmetric and Hankel-symmetric matrices must be square matrices. An example of a Hankel-symmetric matrix is

$$ \left[ \begin{array}{ccc} 2&1&0\\ 4&5&1\\ 3&4&2\end{array}\right] . $$
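These operations are easy to experiment with. The following sketch (plain Python; the function names are ours) implements \(A^{\dagger }\), \(A^t\), and \(A^h\) for a matrix stored as a list of rows, and checks the two examples above.

```python
def rotate180(A):
    """A-dagger: the matrix A rotated by 180 degrees."""
    return [row[::-1] for row in A[::-1]]

def transpose(A):
    """A^t: the usual transpose."""
    return [list(col) for col in zip(*A)]

def hankel_transpose(A):
    """A^h: rows and columns interchanged in reverse order;
    equivalently, the transpose of the 180-degree rotation."""
    return transpose(rotate180(A))

A = [["a", "b", "c"],
     ["d", "e", "f"]]
assert hankel_transpose(A) == [["f", "c"], ["e", "b"], ["d", "a"]]

M = [[2, 1, 0],
     [4, 5, 1],
     [3, 4, 2]]
assert hankel_transpose(M) == M    # M is Hankel-symmetric
assert transpose(M) != M           # but not symmetric
assert rotate180(M) != M           # and not centrosymmetric
```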

If a square matrix is both symmetric and Hankel-symmetric, then it is centrosymmetric. This is because consecutive reflections across two perpendicular lines (here the main diagonal and the Hankel diagonal) amount to a rotation by 180\(^{\circ }\). More precisely, we have the following basic result.

Proposition 1

Let A be an \(n\times n\) matrix. Then any two of the following three properties imply the third:

  1. (s)

    A is symmetric: \(A^t=A\).

  2. (hs)

    A is Hankel-symmetric: \(A^h=A\).

  3. (cs)

    A is centrosymmetric: \(A^{\dagger }=A\).

Proof

  1. (i)

(s) and (hs) \(\Rightarrow \) (cs): \(a_{ij}\mathop {=}\limits ^\mathrm{(hs)}a_{n+1-j,n+1-i}\mathop {=}\limits ^\mathrm{(s)}a_{n+1-i,n+1-j}\).

  2. (ii)

(s) and (cs) \(\Rightarrow \) (hs): \(a_{ij}\mathop {=}\limits ^{(cs)}a_{n+1-i,n+1-j} \mathop {=}\limits ^{(s)}a_{n+1-j,n+1-i}\).

  3. (iii)

    (hs) and (cs) \(\Rightarrow \) (s): \(a_{ij}\mathop {=}\limits ^{(cs)}a_{n+1-i,n+1-j}\mathop {=}\limits ^{(hs)} a_{ji}\).

\(\square \)

Centrosymmetric permutation matrices are studied in [1, 2]. Centrosymmetric graphs are considered in [6].

A matrix, even a permutation matrix, may be centrosymmetric but not Hankel-symmetric, or Hankel-symmetric but not centrosymmetric. For example,

$$ \left[ \begin{array}{c|c|c|c} &&1&\\ \hline &&&1\\ \hline &1&&\\ \hline 1&&&\end{array}\right] \quad \text{(Hankel-symmetric but not centrosymmetric),} $$

and

$$ \left[ \begin{array}{c|c|c|c} &1&&\\ \hline &&&1\\ \hline 1&&&\\ \hline &&1&\end{array}\right] \quad \text{(centrosymmetric but not Hankel-symmetric).} $$

Let \(R=(r_1,r_2,\ldots ,r_m)\) and \(S=(s_1,s_2,\ldots ,s_n)\) be two nonnegative vectors with equal component sums:

$$ \tau : = r_1+r_2+\cdots +r_m=s_1+s_2+\cdots +s_n. $$

We denote by \({\mathcal T}(R,S)\) the set of all nonnegative real matrices with row sum vector R and column sum vector S. Matrices in \({\mathcal T}(R,S)\) are often called transportation matrices because of their connection to the well-known transportation problem of transporting goods from m sources with supplies given by R to n destinations with demands given by S. \({\mathcal T}(R,S)\) is a convex polytope, and is nonempty since the \(m\times n\) matrix \(T=[t_{ij}]\) with

$$ t_{ij}=\frac{r_is_j}{\tau } \text{ for all } i \text{ and } j $$

is in \({\mathcal T}(R,S)\). If R and S are also integral vectors, then \({\mathcal T}(R,S)\) is an integral transportation polytope. An integral transportation polytope always contains an integral matrix. In fact, the following transportation matrix algorithm always produces such a matrix \(A=[a_{ij}]\) (see e.g. [4], pp. 26–27):

  1. (1)

    Choose any i and j and set \(a_{ij}=\min \{r_i,s_j\}\).

    1. (a)

      If \(\min \{r_i,s_j\}=r_i\), set \(a_{il}=0\) for all \(l\ne j\).

    2. (b)

      If \(\min \{r_i,s_j\}=s_j\), set \(a_{kj}=0\) for all \(k\ne i\).

  2. (2)

    Reduce \(r_i\) and \(s_j\) by \(\min \{r_i,s_j\}\), and proceed inductively.

In fact, the matrices produced by this algorithm, carried out in all possible ways, give all the extreme points of the convex polytope \({\mathcal T}(R,S)\). Note that if R and S are integral vectors, then the algorithm always produces integral matrices. We denote the class of integral matrices in \({\mathcal T}(R,S)\) by \({\mathcal T}_Z(R,S)\). The class \({\mathcal T}_Z(R,S)\) may fail to contain a (0, 1)-matrix even when the components of R and S are small. For instance, if \(R=(4,3,2,1)\) and \(S=(4,4,1,1)\), then there does not exist a (0, 1)-matrix in \({\mathcal T}_Z(R,S)\).
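As a sketch (in Python; the function name is ours), the transportation matrix algorithm above can be implemented by scanning rows and columns in order, which is one possible way of choosing i and j:

```python
def transportation_matrix(R, S):
    """Build a matrix in T(R,S): repeatedly set a_ij = min(r_i, s_j),
    which exhausts row i or column j, and reduce r_i and s_j accordingly.
    Rows and columns are scanned in order (the 'northwest corner' choice)."""
    m, n = len(R), len(S)
    r, s = list(R), list(S)
    A = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            A[i][j] = min(r[i], s[j])
            r[i] -= A[i][j]
            s[j] -= A[i][j]
    return A

A = transportation_matrix([4, 3, 2, 1], [4, 4, 1, 1])
assert [sum(row) for row in A] == [4, 3, 2, 1]        # row sum vector R
assert [sum(col) for col in zip(*A)] == [4, 4, 1, 1]  # column sum vector S
```

Carried out with other choices of i and j, the same scheme yields the other extreme points of \({\mathcal T}(R,S)\).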

Again with the assumption that R and S are nonnegative integral vectors, a much studied class of matrices is the class \({\mathcal A}(R,S)\) consisting of all \(m\times n\) (0, 1)-matrices in \({\mathcal T}_Z(R,S)\). The Gale-Ryser theorem (see e.g. [4], p. 27) characterizes the nonemptiness of the class \({\mathcal A}(R,S)\) as follows.

Let \(R=(r_1,r_2,\ldots ,r_m)\) and \(S=(s_1,s_2,\ldots ,s_n)\) be nonnegative integral vectors with

$$ r_1+r_2+\cdots +r_m=s_1+s_2+\cdots +s_n. $$

Assume without loss of generality (by permuting rows and columns if necessary) that

$$ r_1\ge r_2\ge \cdots \ge r_m \text{ and } s_1\ge s_2\ge \cdots \ge s_n. $$

Let \(R^*=(r_1^*,r_2^*,\ldots ,r_n^*)\) be the conjugate of R, that is,

$$ r_j^*=|\{i: r_i\ge j\}|. $$

Then \({\mathcal A}(R,S)\ne \emptyset \) if and only if S is majorized by R, that is,

$$\begin{aligned} s_1+s_2+\cdots +s_j\le r_1^*+r_2^*+\cdots +r_j^*\quad (j=1,2,\ldots ,n), \text{ with equality for } j=n. \end{aligned}$$
(1)
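Condition (1) is easy to test directly. The sketch below (the function name is ours; it assumes \(r_i\le n\) for all i, so that the conjugate has n components) sorts both vectors decreasingly, forms the conjugate \(R^*\), and compares partial sums:

```python
def gale_ryser_nonempty(R, S):
    """Test condition (1): A(R,S) is nonempty iff the partial sums of S
    (sorted decreasingly) never exceed those of the conjugate R*, with
    equality overall.  Assumes r_i <= n for all i."""
    n = len(S)
    R = sorted(R, reverse=True)
    S = sorted(S, reverse=True)
    # conjugate vector: r*_j = number of i with r_i >= j, for j = 1..n
    Rstar = [sum(1 for ri in R if ri >= j) for j in range(1, n + 1)]
    if sum(R) != sum(S):          # the equality required at j = n
        return False
    ps = pr = 0
    for sj, rj in zip(S, Rstar):
        ps += sj
        pr += rj
        if ps > pr:
            return False
    return True

assert gale_ryser_nonempty([2, 1, 1], [2, 1, 1])
assert not gale_ryser_nonempty([4, 3, 2, 1], [4, 4, 1, 1])   # the earlier example
```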

Assuming that \(R=S=(r_1,r_2,\ldots ,r_n)\), one can also consider the existence of symmetric matrices in the classes \({\mathcal T}(R,R)\) and, if R is integral, \({\mathcal T}_Z(R,R)\) and \({\mathcal A}(R,R)\). The classes \({\mathcal T}(R,R)\) and \({\mathcal T}_Z(R,R)\) always contain a symmetric matrix since they contain the diagonal matrix with \(r_1,r_2,\ldots ,r_n\) on the main diagonal. It is a consequence of a theorem of Fulkerson, Hoffman, and McAndrew that the class \({\mathcal A}(R,R)\) contains a symmetric matrix if and only if it is nonempty (see e.g. [3], pp. 179–182).

In this note, we consider certain subsets of the above matrix classes defined by imposing the structural conditions of centrosymmetry, or of symmetry and Hankel-symmetry together, and we obtain results analogous to those described above.

2 Existence Theorems

In this section we obtain nonemptiness criteria for the classes introduced in the previous section. Since centrosymmetry does not require that a matrix be square, we need not assume our matrices are square in that case; symmetry and Hankel-symmetry, however, do require square matrices. We shall adapt three well-known algorithms to the centrosymmetric case and to the symmetric and Hankel-symmetric case.

Let \(R=(r_1,r_2,\ldots ,r_m)\) and \(S=(s_1,s_2,\ldots ,s_n)\) be nonnegative vectors. Let \({\mathfrak C}(R,S)\) denote the class of all centrosymmetric, nonnegative matrices with row sum vector R and column sum vector S. If R and S are also integral, let \({\mathfrak C}_Z(R,S)\) denote the class of all centrosymmetric, nonnegative integral matrices with row sum vector R and column sum vector S. Recall that a vector \((d_1,d_2,\ldots , d_k)\) is palindromic provided that \(d_i=d_{k+1-i}\) for all i.

Theorem 2

The class \({\mathfrak C}(R,S)\) is nonempty if and only if

$$\begin{aligned} \sum _{i=1}^mr_i=\sum _{j=1}^n s_j. \end{aligned}$$
(2)

and

$$\begin{aligned} R \text{ and } S \text{ are palindromic}. \end{aligned}$$
(3)

If R and S are integral vectors, the class \({\mathfrak C}_Z(R,S)\) is nonempty if and only if (2) and (3) hold.

Proof

The conditions (2) and (3) are certainly necessary in order that \({\mathfrak C}(R,S)\ne \emptyset \) and, if R and S are integral, in order that \({\mathfrak C}_Z(R,S)\ne \emptyset \). Now assume that (2) and (3) hold. We modify the transportation matrix algorithm to show the two classes are nonempty.

Since R and S are palindromic, it follows from (2) that, when R and S are integral, if m is even and n is odd then \(s_{(n+1)/2}\) is even; similarly, if m is odd and n is even, then \(r_{(m+1)/2}\) is even; and if m and n are both odd, then \(r_{(m+1)/2}\) and \(s_{(n+1)/2}\) have the same parity. We proceed as in the transportation algorithm but with the modifications given below. If R and S are integral, the result will be an integral matrix.

If m and n are both odd and, say, \(r_{(m+1)/2}\ge s_{(n+1)/2}\), then we put \(a_{(m+1)/2,(n+1)/2}=s_{(n+1)/2}\), put the remaining entries in column \((n+1)/2\) equal to zero, and adjust \(r_{(m+1)/2}\) to \(r_{(m+1)/2}-s_{(n+1)/2}\). Then we are left to construct a centrosymmetric \(m\times (n-1)\) matrix, where m is odd and \(n-1\) is even, with palindromic row and column sum vectors whose entries have equal sums.

If m is even and n is odd, then there are two possibilities (first we use row 1, although any row \(i\le m/2\) can be used): If \(2r_1\le s_{(n+1)/2}\), then we put \(a_{1,(n+1)/2}=a_{m,(n+1)/2}=r_1\) and set the remaining entries in rows 1 and m equal to zero. Then, adjusting the sum of column \((n+1)/2\), we are left to construct a centrosymmetric \((m-2)\times n\) matrix with palindromic row and column sum vectors whose entries have equal sums. If \(2r_1> s_{(n+1)/2}\), then we set \(a_{1,(n+1)/2}=a_{m,(n+1)/2}=\frac{s_{(n+1)/2}}{2}\), and set the remaining entries in column \((n+1)/2\) equal to zero. If needed, we next consider row 2 and continue until column \((n+1)/2\) is specified. We are then left to construct a centrosymmetric \(m\times (n-1)\) matrix with palindromic row and column sum vectors whose entries have equal sums.

If m is odd and n is even, we proceed in a similar way.

Finally, if m and n are both even, then we proceed as in the transportation matrix algorithm, additionally setting \(a_{m+1-i,n+1-j}=a_{ij}\) and adjusting two row or two column sums as needed.\(\square \)

Example 3

Let \(R=(2,4,5,4,2)\) and \(S=(5,2,3,2,5)\). Then one way to carry out the above procedure to obtain a matrix in \({\mathfrak C}(R,S)\) is the following:

$$ \left[ \begin{array}{c|c|c|c|c} &&0&&\\ \hline &&0&&\\ \hline &&3&&\\ \hline &&0&&\\ \hline &&0&&\end{array}\right] \rightarrow \left[ \begin{array}{c|c|c|c|c} &&0&&\\ \hline &&0&&\\ \hline 1&0&3&0&1\\ \hline &&0&&\\ \hline &&0&&\end{array}\right] \rightarrow \left[ \begin{array}{c|c|c|c|c} 2&0&0&0&0\\ \hline &&0&&\\ \hline 1&0&3&0&1\\ \hline &&0&&\\ \hline 0&0&0&0&2\end{array}\right] \rightarrow $$
$$\left[ \begin{array}{c|c|c|c|c} 2&0&0&0&0\\ \hline 2&&0&&0\\ \hline 1&0&3&0&1\\ \hline 0&&0&&2\\ \hline 0&0&0&0&2\end{array}\right] \rightarrow \left[ \begin{array}{c|c|c|c|c} 2&0&0&0&0\\ \hline 2&2&0&0&0\\ \hline 1&0&3&0&1\\ \hline 0&0&0&2&2\\ \hline 0&0&0&0&2\end{array}\right] .$$

\(\Box \)
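The modified transportation algorithm used in the proof of Theorem 2 can be sketched as follows (plain Python; the function name is ours, and the sketch assumes conditions (2) and (3), i.e. that R and S are palindromic nonnegative integral vectors with equal sums). It fills the middle cell, the middle column, and the middle row, where present, in mirror pairs, and then runs the transportation algorithm on the remaining even part. On the data of Example 3 it reproduces the matrix found there.

```python
def centro_transport(R, S):
    """Centrosymmetric nonnegative integral matrix with row sums R and
    column sums S.  Assumes conditions (2) and (3): equal sums, R and S
    palindromic (and the parity facts these imply in the integral case)."""
    m, n = len(R), len(S)
    A = [[0] * n for _ in range(m)]
    r, s = list(R), list(S)

    def place(i, j, v):
        A[i][j] += v; r[i] -= v; s[j] -= v

    if m % 2 and n % 2:                      # middle cell
        place(m // 2, n // 2, min(r[m // 2], s[n // 2]))
    if n % 2:                                # middle column, in mirror row pairs
        j = n // 2
        for i in range(m // 2):
            v = min(r[i], s[j] // 2)
            place(i, j, v); place(m - 1 - i, j, v)
    if m % 2:                                # middle row, in mirror column pairs
        i = m // 2
        for j in range(n // 2):
            v = min(s[j], r[i] // 2)
            place(i, j, v); place(i, n - 1 - j, v)
    for i in range(m // 2):                  # even part: quadrant plus its mirror
        for j in range(n // 2):
            v = min(r[i], s[j])
            place(i, j, v); place(m - 1 - i, n - 1 - j, v)
            w = min(r[i], s[n - 1 - j])
            place(i, n - 1 - j, w); place(m - 1 - i, j, w)
    return A

A = centro_transport([2, 4, 5, 4, 2], [5, 2, 3, 2, 5])   # the data of Example 3
assert A == [[2, 0, 0, 0, 0],
             [2, 2, 0, 0, 0],
             [1, 0, 3, 0, 1],
             [0, 0, 0, 2, 2],
             [0, 0, 0, 0, 2]]
```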

We now consider the possibility of the existence of an \(m\times n\) (0, 1)-matrix \(A=[a_{ij}]\) in \({\mathfrak C}(R,S)\). Let \({\mathfrak C}_{(0,1)}(R,S)\) denote the set of (0, 1)-matrices in \({\mathfrak C}(R,S)\). Then

$$\begin{aligned} {\mathfrak C}_{(0,1)}(R,S)={\mathfrak C}(R,S)\cap {\mathcal A}(R,S). \end{aligned}$$

Hence, necessary conditions for the nonemptiness of \({\mathfrak C}_{(0,1)}(R,S)\) are that both \({\mathfrak C}(R,S)\) and \( {\mathcal A}(R,S)\) are nonempty. For \({\mathfrak C}(R,S)\ne \emptyset \), conditions (2) and (3) must hold and hence R and S must be palindromic with the same sum of entries. Thus if \(A=[a_{ij}]\in {\mathfrak C}_{(0,1)}(R,S)\) and m and n are both odd, then \(r_{(m+1)/2}\) and \(s_{(n+1)/2}\) have the same parity, and \(a_{(m+1)/2,(n+1)/2}=1\) if this parity is odd and \(a_{(m+1)/2,(n+1)/2}=0\) if this parity is even.

Before showing \({\mathfrak C}(R,S)\ne \emptyset \) and \( {\mathcal A}(R,S)\ne \emptyset \) are also sufficient for the class \({\mathfrak C}_{(0,1)}(R,S)\) to be nonempty, we consider a property of this class analogous to a basic property of \({\mathcal A}(R,S)\). This property of \({\mathcal A}(R,S)\) is the following.

Let \(A=[a_{ij}]\in {\mathcal A}(R,S)\). Let i, j, k, l be indices with \(i<j\) and \(k<l\) such that the \(2\times 2\) submatrix of A determined by rows i and j and columns k and l is

$$\begin{aligned} A[i,j|k,l]=\begin{array}{c|cc} &k&l\\ \hline i&1&0\\ j&0&1\end{array}. \end{aligned}$$
(4)

Replacing this \(2\times 2\) submatrix equal to \(I_2\) with

$$ L_2=\left[ \begin{array}{cc}0&1\\ 1&0\end{array}\right] $$

gives another matrix in \({\mathcal A}(R,S)\). Similarly, replacing an \(L_2\) with an \(I_2\) in a matrix in \({\mathcal A}(R,S)\) always gives another matrix in \({\mathcal A}(R,S)\). Either of these replacements is called an interchange. It is a basic fact (see e.g. [4], pp. 52–57) that any matrix in \({\mathcal A}(R,S)\) can be transformed into any other by a sequence of interchanges.

Now suppose that A is also centrosymmetric. Then if (4) holds in A, we also have

$$ A[m+1-j,m+1-i|n+1-l,n+1-k]=\begin{array}{c|cc} &n+1-l&n+1-k\\ \hline m+1-j&1&0\\ m+1-i&0&1\end{array}. $$

Replacing each of these \(2\times 2\) submatrices equal to \(I_2\) with

$$ L_2=\left[ \begin{array}{cc}0&1\\ 1&0\end{array}\right] $$

gives another matrix in \({\mathfrak C}_{(0,1)}(R,S)\). We call this pair of substitutions, and the one going in the reverse direction (so two submatrices equal to \(L_2\) are replaced with matrices equal to \(I_2\)), a centrosymmetric double-interchange. Note that if \(i+j=m+1\) and \(k+l=n+1\), only one \(2\times 2\) submatrix is involved in a centrosymmetric double-interchange.
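In code, a centrosymmetric double-interchange looks as follows (0-indexed Python; the function name is ours). Collecting the positions in sets makes the coincident case, where the mirrored submatrix is the original one, come out right:

```python
def centro_double_interchange(A, i, j, k, l):
    """Replace the I_2 submatrix A[i,j|k,l] (rows i<j, columns k<l, with
    a_ik = a_jl = 1 and a_il = a_jk = 0) by L_2, and do the same in the
    mirrored submatrix, preserving centrosymmetry and all line sums.
    Indices are 0-based; sets absorb the self-mirrored case."""
    m, n = len(A), len(A[0])
    ones = {(i, k), (j, l), (m - 1 - j, n - 1 - l), (m - 1 - i, n - 1 - k)}
    zeros = {(i, l), (j, k), (m - 1 - j, n - 1 - k), (m - 1 - i, n - 1 - l)}
    for p, q in ones:
        A[p][q] = 0
    for p, q in zeros:
        A[p][q] = 1
    return A

# the centrosymmetric permutation matrix from the earlier example
A = [[0, 1, 0, 0],
     [0, 0, 0, 1],
     [1, 0, 0, 0],
     [0, 0, 1, 0]]
A = centro_double_interchange(A, 0, 1, 1, 3)
assert all(A[i][j] == A[3 - i][3 - j] for i in range(4) for j in range(4))
assert [sum(row) for row in A] == [1, 1, 1, 1]
assert [sum(col) for col in zip(*A)] == [1, 1, 1, 1]
```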

Lemma 4

Assume that at least one of m and n is odd and \(A=[a_{ij}]\in {\mathfrak C}_{(0,1)}(R,S)\ne \emptyset \). Then by a sequence of centrosymmetric double-interchanges we can obtain a matrix \(C=[c_{ij}]\in {\mathfrak C}_{(0,1)}(R,S)\) such that, except for \(c_{(m+1)/2,(n+1)/2}\) in the case that both m and n are odd, the 1s in row \((m+1)/2\) if m is odd occur in the positions corresponding to the largest column sums, and the 1s in column \((n+1)/2\) if n is odd occur in the positions corresponding to the largest row sums.

Proof

This lemma follows easily using centrosymmetric double-interchanges. First, if m and n are both odd, then \(a_{(m+1)/2,(n+1)/2}\) equals 1 if \(r_{(m+1)/2}\) and \(s_{(n+1)/2}\) have odd parity, and equals 0 if they have even parity. If there are two columns k and l with \(1\le k,l\le (n+1)/2\) such that \(s_k<s_l\) but \(a_{(m+1)/2,k}=1\) and \(a_{(m+1)/2,l}=0\), then for some i we must have \(a_{ik}=0\) and \(a_{il}=1\). Then there is a centrosymmetric double-interchange that replaces \(a_{(m+1)/2,k}\) with 0 and \(a_{(m+1)/2,l}\) with 1. A similar argument works for rows. It follows that by centrosymmetric double-interchanges we can arrive at C with the desired properties.\(\square \)

Lemma 5

Let A and B be any two matrices in \({\mathfrak C}_{(0,1)}(R,S)\). Then there is a sequence of centrosymmetric double-interchanges which transforms A into B with all intermediate matrices in \({\mathfrak C}_{(0,1)}(R,S)\).

Proof

We may assume that both \(A=[a_{ij}] \) and \(B=[b_{ij}]\) have the properties of C specified in Lemma 4. By deleting row \((m+1)/2\) if m is odd and column \((n+1)/2\) if n is odd, we may assume that both m and n are even. By permutations of rows and of columns in a way that preserves centrosymmetry, we may also assume that \(r_1\ge r_2\ge \cdots \ge r_{m/2}\) and that \(s_1\ge s_2\ge \cdots \ge s_{n/2}\). Consider columns 1 and n of A, and suppose they differ from columns 1 and n of B, respectively. Then there exist k and l with \(k\ne l\) such that \(a_{kn}=1\) and \(a_{ln}=0\), and \(b_{kn}=0\) and \(b_{ln}=1\) (or the other way around). Suppose there did not exist a p such that either \(a_{kp}=0\) and \(a_{lp}=1\), or \(b_{kp}=1\) and \(b_{lp}=0\). Since A and B have the same row sum vector R, from A we see that \(r_k>r_l\), and from B we see that \(r_k<r_l\), a contradiction. Without loss of generality, assume that \(a_{kp}=0\) and \(a_{lp}=1\) for some p. Then a centrosymmetric double-interchange applied to A results in a matrix C whose column n (and column 1) has more in common with the corresponding columns of B. Replacing A with C and proceeding recursively, we see that there is a sequence of centrosymmetric double-interchanges applied to A and a sequence of centrosymmetric double-interchanges applied to B such that the resulting matrices \(A'\) and \(B'\) are in \({\mathfrak C}_{(0,1)}(R,S)\) and their corresponding columns n and corresponding columns 1 agree. Proceeding recursively, we see that there is a sequence of centrosymmetric double-interchanges applied to A and a sequence of centrosymmetric double-interchanges applied to B which result in the same matrix. Since centrosymmetric double-interchanges are reversible, this completes the proof.\(\square \)

Since by Lemma 4, if at least one of m and n is odd, we can reduce the nonemptiness of \({\mathfrak C}_{(0,1)}(R,S)\) to the case where both m and n are even, we now assume that both m and n are even. By Theorem 2, \({\mathfrak C}(R,S)\ne \emptyset \) if and only if (2) and (3) are satisfied. Necessary and sufficient conditions for \({\mathcal A}(R,S)\ne \emptyset \) are given by the Gale-Ryser theorem (see e.g. [4]). Assuming without loss of generality that the monotonicity conditions \(r_1\ge r_2\ge \cdots \ge r_{m/2}\) and \(s_1\ge s_2\ge \cdots \ge s_{n/2}\) hold, the Gale-Ryser conditions applied to the monotone rearrangements of R and S, that is, to

$$ (r_1,r_1,r_2,r_2,\ldots ,r_{m/2},r_{m/2}) \text{ and } (s_1,s_1,s_2,s_2,\ldots ,s_{n/2},s_{n/2}) $$

reduce to

$$\begin{aligned} \sum _{j=1}^k s_j\le \sum _{j=1}^k r_j^* \quad (k=1,2,\ldots ,n/2), \end{aligned}$$
(5)

with equality for \(k=n/2\). Here \(\tilde{R}^*=(r_1^*,r_2^*,\ldots ,r_{m/2}^*)\) is the conjugate of \(\tilde{R}=(r_1,r_2,\ldots ,r_{m/2})\).

We then have the following theorem.

Theorem 6

Let m and n be even. The class \({\mathfrak C}_{(0,1)}(R,S)\) is nonempty if and only if both of the classes \({\mathfrak C}(R,S)\) and \({\mathcal A}(R,S)\) are nonempty, thus if and only if the conditions (2), (3), and (5) hold.

Proof

If \({\mathfrak C}_{(0,1)}(R,S)\) is nonempty, then clearly \({\mathfrak C}(R,S)\) and \({\mathcal A}(R,S)\) are nonempty, and (2), (3), and (1) hold.

Now assume that (2), (3), and (5) hold. We prove the existence (along with an algorithm for construction) of a matrix in \({\mathfrak C}_{(0,1)}(R,S)\) by modifying the Gale-Ryser algorithm to construct a matrix in \({\mathcal A}(R,S)\) (again see [4]), and thus we do not make explicit use of (1). In the Gale-Ryser algorithm, the row and column sums are assumed to be monotone, but this is just for ease of description to establish that the algorithm produces a matrix in \({\mathcal A}(R,S)\) or that at least one of the Gale-Ryser conditions fails. The Gale-Ryser algorithm is recursive and proceeds as follows.

  1. (i)

    Choose a column with the smallest prescribed column sum.

  2. (ii)

Put the prescribed number of 1s in that column in those rows with the largest remaining row sums (there may be ties, in which case the rows can be chosen arbitrarily among those with the same sum), and put 0s in all other positions of that column.

  3. (iii)

    Adjust the prescribed row sums and proceed recursively with a column with the next smallest sum.
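Steps (i)–(iii) can be sketched compactly (Python; the function name is ours, and the sketch assumes \({\mathcal A}(R,S)\ne \emptyset \) and does not detect infeasible data):

```python
def gale_ryser_matrix(R, S):
    """Gale-Ryser algorithm: fill the column with the smallest prescribed
    sum first, placing its 1s in the rows with the largest remaining row
    sums (ties broken arbitrarily), then recurse on the remaining columns."""
    m, n = len(R), len(S)
    A = [[0] * n for _ in range(m)]
    r = list(R)
    for j in sorted(range(n), key=lambda j: S[j]):        # smallest column first
        for i in sorted(range(m), key=lambda i: -r[i])[:S[j]]:
            A[i][j] = 1                                   # largest rows get the 1s
            r[i] -= 1
    return A

A = gale_ryser_matrix([3, 2, 2, 1], [3, 3, 1, 1])
assert [sum(row) for row in A] == [3, 2, 2, 1]
assert [sum(col) for col in zip(*A)] == [3, 3, 1, 1]
assert all(x in (0, 1) for row in A for x in row)
```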

Since R and S are palindromic, we have that

$$ R=(r_1,r_2,\ldots ,r_2,r_1) \text{ and } S=(s_1,s_2,\ldots ,s_2,s_1). $$

By reordering the first half of the entries of R (resp., S) with the corresponding reordering of the second half of the entries, as above we assume that

$$\begin{aligned} r_1\ge r_2\ge \cdots \ge r_{m/2} \text{ and } s_1\le s_2\le \cdots \le s_{n/2} . \end{aligned}$$
(6)

We now adjust the Gale-Ryser algorithm in the case that R and S are palindromic, staying within the constraints of the algorithm, in order to produce a centrosymmetric matrix.

Let \(c_0,c_1,c_2,\ldots ,c_k\), where \(c_0=0\) and \(c_1+c_2+\cdots +c_k=m/2\), be defined by

$$ r_1=\cdots =r_{c_1}>r_{c_1+1}=\cdots =r_{c_1+c_2}>\cdots >r_{c_1+\cdots +c_{k-1}+1}= \cdots =r_{m/2}. $$

Let \(s_1\) satisfy \(2(c_0+c_1+\cdots +c_p)\le s_1<2(c_0+c_1+\cdots +c_p+c_{p+1})\). In column 1 we put 1s in rows \(1,2,\ldots ,c_1+\cdots +c_p\) and in rows \(m,m-1,\ldots ,m+1-(c_1+\cdots +c_p)\). We also put 1s in rows \(c_1+\cdots +c_p+1,\ldots ,c_1+\cdots +c_{p+1}\) and rows \(m-(c_1+\cdots +c_p),\ldots ,m+1-(c_1+\cdots +c_{p+1})\), in the order listed, until a total of \(s_1\) 1s have been placed in column 1. This is in agreement with a possible way to carry out the Gale-Ryser algorithm to produce a matrix in \({\mathcal A}(R,S)\). Adjusting the needed row sums as a result of these 1s in column 1, we see that if we rotate this column 1 by 180\(^{\circ }\) and take it as column n, then this is also a next possible step in the Gale-Ryser algorithm. Adjusting the needed row sums again, we see that they form a palindromic sequence. We now delete columns 1 and n, and proceed recursively. In order to keep the assumed monotonicity conditions on row and column sums, we may have to reorder the rows, keeping, as we do so, the palindromic property. We conclude that this way of carrying out the Gale-Ryser algorithm produces a centrosymmetric (0, 1)-matrix with row sum vector R and column sum vector S.\(\square \)

Example 7

Let \(R=(4,4,3,3,3,3,4,4)\) and \(S=(5,3,3,3,3,3,3,5)\). Then the algorithm in the proof of Theorem 6 produces the following matrix, with the resulting row sum vectors after each pair of steps, including the initial row sum vector, indicated on the right.

$$ \begin{array}{||c|c|c|c|c|c|c|c||c|c|c|c|c|} 1&1&&1&&&&1&4&2&1&1&0\\ \hline 1&&1&1&&&&1&4&2&2&1&0\\ \hline 1&&1&&&1&&&3&2&2&0&0\\ \hline &1&&1&&&1&&3&3&1&1&0\\ \hline &1&&&1&&1&&3&3&1&1&0\\ \hline &&1&&&1&&1&3&2&2&0&0\\ \hline 1&&&&1&1&&1&4&2&2&1&0\\ \hline 1&&&&1&&1&1&4&2&1&1&0\end{array} $$

\(\Box \)
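The matrix produced in Example 7 can be checked mechanically (blanks written as 0s):

```python
# the 8 x 8 matrix of Example 7, blanks written as 0s
A = [[1, 1, 0, 1, 0, 0, 0, 1],
     [1, 0, 1, 1, 0, 0, 0, 1],
     [1, 0, 1, 0, 0, 1, 0, 0],
     [0, 1, 0, 1, 0, 0, 1, 0],
     [0, 1, 0, 0, 1, 0, 1, 0],
     [0, 0, 1, 0, 0, 1, 0, 1],
     [1, 0, 0, 0, 1, 1, 0, 1],
     [1, 0, 0, 0, 1, 0, 1, 1]]
assert [sum(row) for row in A] == [4, 4, 3, 3, 3, 3, 4, 4]        # R
assert [sum(col) for col in zip(*A)] == [5, 3, 3, 3, 3, 3, 3, 5]  # S
# centrosymmetry: each entry equals its 180-degree rotation
assert all(A[i][j] == A[7 - i][7 - j] for i in range(8) for j in range(8))
```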

We know that a centrosymmetric matrix need not be symmetric or Hankel-symmetric. So Theorem 6 does not directly address the question of the existence of a symmetric and Hankel-symmetric matrix with a specified row sum vector (which, by symmetry, equals its column sum vector). The existence of a symmetric (or Hankel-symmetric) nonnegative integral matrix with a prescribed row sum vector follows from the theorem of Erdős and Gallai (the (0, 1)-case with all zeros on the main diagonal) and its generalizations (see again [4]). We show, using a technique of Fulkerson, Hoffman, and McAndrew (see e.g. [3], pp. 179–182), that the existence of a symmetric and Hankel-symmetric (0, 1)-matrix (hence centrosymmetric) can be deduced from Theorem 6. The row and column sum vectors of such a matrix are equal to the same palindromic vector.

We now make use of the digraph associated with a (0, 1)-matrix. Let \(\Gamma \) be a digraph with vertices \(\{1,2,\ldots ,n\}\). The outdegree sequence (resp., indegree sequence) of \(\Gamma \) records the number of edges leaving (resp., entering) each of its vertices. The adjacency matrix of \(\Gamma \) is the \(n\times n\) (0, 1)-matrix \(A=[a_{ij}]\) where \(a_{ij}=1\) if and only if there is an edge from vertex i to vertex j. We write \(\Gamma =D(A)\) to indicate that the digraph associated with A equals \(\Gamma \). A digraph is a centrosymmetric digraph provided after possible reordering of its vertices its adjacency matrix is centrosymmetric.

Recall that \({\mathcal A}(R,R)\) denotes the class of all (0, 1)-matrices with both row and column sum vectors equal to R.

Theorem 8

Let \(R=(r_1,r_2,\ldots ,r_n)\) be a vector of nonnegative integers.

  1. (i)

    If the class \({\mathcal A}(R,R)\) contains a centrosymmetric matrix, then it contains a Hankel-symmetric, symmetric matrix.

  2. (ii)

    Necessary and sufficient conditions that \({\mathcal A}(R,R)\) contains a Hankel-symmetric, symmetric matrix are that R is palindromic and the Gale-Ryser conditions (1) are satisfied.

Proof

The assertion (ii) follows from Theorem 6 and assertion (i).

To prove assertion (i), let \(A=[a_{ij}]\) be a centrosymmetric matrix in \({\mathcal A}(R,R)\) with digraph D(A). The indegree and outdegree sequences of D(A) are both equal to R. If A is symmetric, then by Proposition 1, A is also Hankel-symmetric. Now assume that A is not symmetric. Let \(A^*=[a_{ij}^*]\) be the matrix obtained from A by replacing all pairs of symmetrically opposite 1s (including 1s on the main diagonal) with zeros. The matrix \(A^*\) is also centrosymmetric; moreover, for each i, \(a^*_{i,n+1-i} =0\), for otherwise we would have that \(a^*_{i,n+1-i}=1\) and, by centrosymmetry, \(a^*_{n+1-i,i}=1\), a contradiction. The digraph \(D(A^*)\) has vertex set \(\{1,2,\ldots ,n\}\) with an edge \(i\rightarrow j\) from vertex i to vertex j if and only if \(a_{ij}=1\) but \(a_{ji}=0\). Since A is centrosymmetric, \(i\rightarrow j\) is an edge of \(D(A^*)\) if and only if \(n+1-i\rightarrow n+1-j\) is also an edge. Since R is both the indegree and outdegree sequence of D(A), the indegree of each vertex of \(D(A^*)\) also equals its outdegree. Since A is not symmetric, \(D(A^*)\) has at least one edge. These facts imply that \(D(A^*)\) has a simple cycle of distinct vertices

$$\begin{aligned} \gamma : i_1\rightarrow i_2\rightarrow \cdots \rightarrow i_k\rightarrow i_1 \end{aligned}$$

for some \(k\ge 3\). Since A is centrosymmetric,

$$\begin{aligned} \gamma ^{\dagger }:n+1-i_1\rightarrow n+1-i_2\rightarrow \cdots \rightarrow n+1-i_k\rightarrow n+1-i_1 \end{aligned}$$

is also a cycle of \(D(A^*)\), which we call the palindromic mate of \(\gamma \). No edge of \(\gamma \) can also be an edge of \(\gamma ^{\dagger }\), for that would imply that \(a^*_{i,n+1-i}= a^*_{n+1-i,i}=1\) for some i, a contradiction. It follows that the edges of \(D(A^*)\) can be partitioned into cycles, and these cycles come in palindromic pairs of cycles without common edges, or are self-palindromic cycles (that is, they are cycles equal to their palindromic mates).

Consider a palindromic pair \(\gamma \) and \(\gamma ^{\dagger }\) of these cycles. If the length of \(\gamma \) and \(\gamma ^{\dagger }\) is even, we delete the first, third, fifth, ... edges of these cycles and add the reverse of the second, fourth, sixth, ... edges. If the length of \(\gamma \) and \(\gamma ^{\dagger }\) is odd, then there are two possibilities. The first possibility is that some vertex a of \(\gamma \) (and the corresponding vertex \(n+1-a\) of \( \gamma ^{\dagger }\)) does not contain a loop in D(A) (\(D(A^*)\) does not contain any loops since \(A^*\) has only zeros on its main diagonal). We then put a loop at vertices a and \(n+1-a\) and delete every other edge of \(\gamma \) starting with an edge meeting vertex a, and similarly delete every other edge of \(\gamma ^{\dagger }\) starting with the corresponding edge at vertex \(n+1-a\); and we also insert the reverse of the remaining edges of \(\gamma \) and \(\gamma ^{\dagger }\). If every vertex of \(\gamma \) has a loop at it in D(A) (and then so does every vertex of \(\gamma ^{\dagger }\)), we remove the loop at some vertex a of \(\gamma \) (and remove the loop at the corresponding vertex \(n+1-a\) of \(\gamma ^{\dagger }\)), insert the reverse of every other edge of \(\gamma \) starting with an edge at vertex a, and similarly insert the reverse of every other edge of \(\gamma ^{\dagger }\) starting with the corresponding edge at vertex \(n+1-a\), and delete all other edges of \(\gamma \) and \(\gamma ^{\dagger }\). If \(\gamma =\gamma ^{\dagger }\), then we follow a similar procedure as above but we have only to consider \(\gamma \). In the case of n even, a self-palindromic cycle has even length and so we can follow the procedure above for pairs of cycles of even length. 
In the case of odd n, a self-palindromic cycle may have even or odd length; if it has even length, we follow the above procedure; if odd, then \((n+1)/2\) must be a vertex of the cycle, and we follow the above procedure taking \((n+1)/2\) as the first vertex of the cycle. The resulting digraph has a centrosymmetric adjacency matrix with fewer nonsymmetric edges.

Repeating for each pair of palindromic cycles in the partition of the edges of \(D(A^*)\), we obtain a digraph whose adjacency matrix B is symmetric and centrosymmetric. Putting the symmetric 1s that were deleted from A to get \(A^*\) back into B, we obtain a symmetric, centrosymmetric (0, 1)-matrix with row sum vector R, and hence a Hankel-symmetric, symmetric matrix in \({\mathcal A}(R,R)\).\(\square \)

Example 9

Consider the \(11\times 11\) centrosymmetric (0, 1)-matrix with palindromic \(R=S= (1,3,3,2,1,2,1,2,3,3,1)\):

$$ A=\left[ \begin{array}{c|c|c|c|c|c|c|c|c|c|c} &&&&&1&&&&&\\ \hline &1&1&&&&&&1&&\\ \hline &1&1&1&&&&&&&\\ \hline &&&1&&&&&&1&\\ \hline &&&&&&&&&&1\\ \hline &&&&1&&1&&&&\\ \hline 1&&&&&&&&&&\\ \hline &1&&&&&&1&&&\\ \hline &&&&&&&1&1&1&\\ \hline &&1&&&&&&1&1&\\ \hline &&&&&1&&&&&\end{array}\right] . $$

The matrix \(A^*\) is centrosymmetric and is given by

$$ A^{*}=\left[ \begin{array}{c|c|c|c|c|c|c|c|c|c|c} &&&&&1&&&&&\\ \hline &&&&&&&&1&&\\ \hline &&&1&&&&&&&\\ \hline &&&&&&&&&1&\\ \hline &&&&&&&&&&1\\ \hline &&&&1&&1&&&&\\ \hline 1&&&&&&&&&&\\ \hline &1&&&&&&&&&\\ \hline &&&&&&&1&&&\\ \hline &&1&&&&&&&&\\ \hline &&&&&1&&&&&\end{array}\right] . $$
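The passage from A to \(A^*\) is mechanical: a 1 is kept in position (i, j) exactly when \(a_{ij}=1\) and \(a_{ji}=0\). The sketch below (Python; the function names are ours) rebuilds both matrices of this example from their 1-positions and confirms the displays above:

```python
def asymmetric_part(A):
    """A*: keep a 1 in (i,j) only when a_ij = 1 and a_ji = 0, i.e. delete
    all symmetrically opposite pairs of 1s and all diagonal 1s (loops)."""
    n = len(A)
    return [[1 if A[i][j] and not A[j][i] else 0 for j in range(n)]
            for i in range(n)]

def from_positions(n, ones):
    """(0,1)-matrix of order n with 1s in the given 1-indexed positions."""
    M = [[0] * n for _ in range(n)]
    for i, j in ones:
        M[i - 1][j - 1] = 1
    return M

A = from_positions(11, [(1, 6), (2, 2), (2, 3), (2, 9), (3, 2), (3, 3),
                        (3, 4), (4, 4), (4, 10), (5, 11), (6, 5), (6, 7),
                        (7, 1), (8, 2), (8, 8), (9, 8), (9, 9), (9, 10),
                        (10, 3), (10, 9), (10, 10), (11, 6)])
Astar = from_positions(11, [(1, 6), (2, 9), (3, 4), (4, 10), (5, 11),
                            (6, 5), (6, 7), (7, 1), (8, 2), (9, 8),
                            (10, 3), (11, 6)])
assert asymmetric_part(A) == Astar
# A* is again centrosymmetric and has a zero main diagonal
assert all(Astar[i][j] == Astar[10 - i][10 - j]
           for i in range(11) for j in range(11))
assert all(Astar[i][i] == 0 for i in range(11))
```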

The decomposition of \(D(A^*)\) into palindromic pairs of cycles is

$$ 1\rightarrow 6\rightarrow 7\rightarrow 1, 11 \rightarrow 6\rightarrow 5\rightarrow 11 \text{ and } 2\rightarrow 9\rightarrow 8\rightarrow 2, 10\rightarrow 3\rightarrow 4\rightarrow 10, $$

and this corresponds to the following decomposition of \(A^{*}\):

[figure omitted: the decomposition of \(A^{*}\) corresponding to these cycles]

Replacing \(A^{*}\) with

[figure omitted]

(the \(-1\) corresponds to a loop that is deleted but is not part of the cycle), we then get the symmetric, centrosymmetric (0, 1)-matrix with row and column sum vector \(R=(1,3,3,2,1,2,1,2,3,3,1)\):

$$ \left[ \begin{array}{c|c|c|c|c|c|c|c|c|c|c} 1&&&&&&&&&&\\ \hline &&1&&&&&1&1&&\\ \hline &1&1&&&&&&&1&\\ \hline &&&1&&&&&&1&\\ \hline &&&&&1&&&&&\\ \hline &&&&1&&1&&&&\\ \hline &&&&&1&&&&&\\ \hline &1&&&&&&1&&&\\ \hline &1&&&&&&&1&1&\\ \hline &&1&1&&&&&1&&\\ \hline &&&&&&&&&&1\end{array}\right] . $$

\(\Box \)
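The three matrix operations used throughout, the transpose, the Hankel transpose, and the rotation by 180\(^{\circ }\), are easy to check numerically. The following sketch (an illustration using NumPy, not part of the construction above; the function names are ours) encodes the definitions from the introduction in 0-based indexing:

```python
import numpy as np

def hankel_transpose(A):
    """The Hankel transpose A^h: the ordinary transpose with both
    axes reversed, so (A^h)[i, j] = A[m-1-j, n-1-i] (0-based)."""
    return A.T[::-1, ::-1]

def is_symmetric(A):
    return np.array_equal(A, A.T)

def is_hankel_symmetric(A):
    return np.array_equal(A, hankel_transpose(A))

def is_centrosymmetric(A):
    # Rotation by 180 degrees: reverse both the row and column order.
    return np.array_equal(A, A[::-1, ::-1])
```

As a check, the \(3\times 3\) Hankel-symmetric example from the introduction passes `is_hankel_symmetric` but not `is_symmetric`, and any matrix that is both symmetric and Hankel-symmetric also passes `is_centrosymmetric`, in agreement with the basic result stated there.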

Let \({\mathcal H}(R)\) denote the class of all Hankel-symmetric, symmetric (0, 1)-matrices with row and column sum vectors equal to \(R=(r_1,r_2,\ldots ,r_2,r_1)\). Theorem 8 contains necessary and sufficient conditions on R in order that \({\mathcal H}(R) \ne \emptyset \). We now show how to generate all matrices in a nonempty class \({\mathcal H}(R)\) from any one matrix in \({\mathcal H}(R)\). This is the analogue of Lemma 5 for \({\mathcal H}(R)\). According to Corollary 7.2.4 in [4], any two symmetric (0, 1)-matrices with row and column sum vector R can be obtained from one another by a sequence of symmetric interchanges. These symmetric interchanges are of three types, transforming a \(2\times 2\), \(3\times 3\), or \(4\times 4\) principal submatrix into another as shown below:

  1. (i)

    \(\left[ \begin{array}{cc} 1&{}0\\ 0&{}1\end{array}\right] \leftrightarrow \left[ \begin{array}{cc} 0&{}1\\ 1&{}0\end{array}\right] .\)

  2. (ii)

    \(\left[ \begin{array}{ccc} 0&{}1&{}0\\ 1&{}0&{}0\\ 0&{}0&{}1\end{array}\right] \leftrightarrow \left[ \begin{array}{ccc} 0&{}0&{}1\\ 0&{}1&{}0\\ 1&{}0&{}0\end{array}\right] \).

  3. (iii)

    \(\left[ \begin{array}{cccc}0&{}0&{}1&{}0\\ 0&{}0&{}0&{}1\\ 1&{}0&{}0&{}0\\ 0&{}1&{}0&{}0\end{array}\right] \leftrightarrow \left[ \begin{array}{cccc}0&{}1&{}0&{}0\\ 1&{}0&{}0&{}0\\ 0&{}0&{}0&{}1\\ 0&{}0&{}1&{}0\end{array}\right] \).
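In matrix terms, each symmetric interchange simply toggles a fixed set of entries, so it is straightforward to experiment with. A minimal sketch (with hypothetical helper names and 0-based indices, not taken from [4]) of interchanges (i) and (iii) acting on a symmetric (0, 1)-matrix:

```python
import numpy as np

def interchange_i(A, p, q):
    """Type (i): trade the loops at p and q for the edge {p, q}, or back.
    Toggles A[p,p], A[q,q], A[p,q], A[q,p]; row sums are unchanged."""
    B = A.copy()
    for i, j in [(p, p), (q, q), (p, q), (q, p)]:
        B[i, j] ^= 1
    return B

def interchange_iii(A, u, v, w, z):
    """Type (iii): trade the edges {u, v}, {w, z} for {u, w}, {v, z},
    or back, toggling each entry together with its transposed mate."""
    B = A.copy()
    for i, j in [(u, v), (w, z), (u, w), (v, z)]:
        B[i, j] ^= 1
        B[j, i] ^= 1
    return B
```

Both operations preserve symmetry and the row and column sums, which is why a sequence of them can move between matrices of the same class.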

In terms of the graph whose adjacency matrix is a symmetric (0, 1)-matrix, these transformations are:

  1. (i)

    Interchange (a) a configuration consisting of two distinct vertices u and v with a loop at u and a loop at v with (b) the configuration consisting of an edge \(\{u,v\}\) joining u and v.

  2. (ii)

    Interchange (a) a configuration consisting of an edge joining distinct vertices u and v and a loop at a third vertex w with (b) the configuration consisting of an edge joining u and w and a loop at v.

  3. (iii)

Interchange (a) a configuration consisting of four distinct vertices \(u, v, w, z\) and an edge joining u and v and one joining w and z with (b) the configuration consisting of an edge joining u and w and one joining v and z.

Every symmetric interchange has a corresponding complementary interchange with each index i of the corresponding principal submatrix (so a vertex of the associated graph) replaced by the complementary index \(n+1-i\) (the complementary vertex of the associated graph).

We can use these symmetric interchanges to transform each matrix in \({\mathcal H}(R)\) to any other matrix in \({\mathcal H}(R)\). First we note that if n is odd, then all matrices in \({\mathcal H}(R)\) agree in position \(((n+1)/2,(n+1)/2)\) and hence we never have to consider loops at vertex \((n+1)/2\). Whenever a symmetric interchange is required, we also perform the corresponding complementary symmetric interchange, unless the symmetric interchange is self-complementary, in which case only one symmetric interchange is performed. We call the simultaneous application of both of these a symmetric double-interchange. This then gives the following result.
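The symmetric double-interchange can be sketched in the same illustrative style (hypothetical helper names, 0-based indices, so the complementary index of i is \(n-1-i\)); note that the self-complementary case, where the complementary interchange would coincide with the original, is skipped as described above:

```python
import numpy as np

def interchange_i(A, p, q):
    # Type (i) symmetric interchange: toggle the loops at p, q and the edge {p, q}.
    B = A.copy()
    for i, j in [(p, p), (q, q), (p, q), (q, p)]:
        B[i, j] ^= 1
    return B

def double_interchange_i(A, p, q):
    """Type (i) interchange at (p, q) together with its complementary
    interchange at (n-1-p, n-1-q), unless the two coincide."""
    n = A.shape[0]
    B = interchange_i(A, p, q)
    pc, qc = n - 1 - p, n - 1 - q
    if {pc, qc} != {p, q}:   # skip the second step when self-complementary
        B = interchange_i(B, pc, qc)
    return B
```

Starting from a matrix that is both symmetric and Hankel-symmetric, the result remains symmetric and Hankel-symmetric with the same row sums, which is the point of pairing each interchange with its complement.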

Theorem 10

Let A and B be any two matrices in \({\mathcal H}(R)\). Then there is a sequence of symmetric double-interchanges which transforms A into B with all intermediate matrices in \({\mathcal H}(R)\).

3 Concluding Remarks

We have investigated two natural subclasses, namely the centrosymmetric subclass and the symmetric, Hankel-symmetric subclass, of several well-known classes of matrices. We have presented existence theorems, construction algorithms, and a means to generate all matrices in a subclass starting from any one matrix contained in it. Combinatorial questions concerning the number of matrices in these subclasses (or good and simple upper bounds on that number) are of interest but presumably very difficult. Formulas for the number of matrices in nonempty classes \({\mathcal A}(R,S)\) and \({\mathcal T}_Z(R,S)\) are available in terms of the Kostka numbers for the number of Young tableaux of given shape and size (see [4], p. 147). It may be possible to express the number of centrosymmetric, or symmetric and Hankel-symmetric, matrices in these classes using Kostka numbers. There is also a (basically symbolic) generating polynomial in \(m+n\) variables for the number of matrices in a class \({\mathcal A}(R,S)\). It would be of interest to have useful generating polynomials for any of the classes and subclasses considered in this paper.