For our purposes, a digraph product is a binary operation \(D*D'\) on digraphs, for which \(V(D*D')\) is the Cartesian product \(V(D)\times V(D')\) of the vertex sets of the factors. There are many ways to define such products. But if we insist on the algebraic property of associativity, and demand that the projections to factors respect adjacency, then we are left with just four products, known as the standard products. One of these is the Cartesian product, introduced in Chapter 1. We review it now, and introduce the three other products.

10.1 The Four Standard Associative Products

The four standard digraph products are the Cartesian product \(D\,\Box \,D'\), the direct product \(D\times D'\!\), the strong product \(D{\,\boxtimes \,}D'\!\), and the lexicographic product \(D\circ D'\!\). Each has vertex set \(V(D)\times V(D')\). Their arcs are

$$ \begin{array}{rcl} A(D\,\Box \,D') &=& \{\, (x,x')(y,y') \mid xy\in A(D),\ x'=y', \ \mathbf{or}\ x=y,\ x'y'\in A(D')\,\},\\ A(D\times D') &=& \{\, (x,x')(y,y') \mid xy\in A(D)\ \mathbf{and}\ x'y'\in A(D')\,\},\\ A(D{\,\boxtimes \,}D') &=& A(D\,\Box \,D')\cup A(D\times D'),\\ A(D\circ D') &=& \{\, (x,x')(y,y') \mid xy\in A(D),\ \mathbf{or}\ x=y \text{ and } x'y'\in A(D')\,\}. \end{array} $$
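To experiment with these definitions, the following minimal Python sketch (ours, for illustration only; helper names such as `cartesian` are invented, and a digraph is stored as a pair consisting of its vertex set and its arc set) builds the four arc sets directly from the formulas above.

```python
from itertools import product

def cartesian(VD, AD, VE, AE):
    """Arc set of the Cartesian product D □ D' on the vertex set V(D) x V(D')."""
    return {((x, u), (y, v))
            for (x, u), (y, v) in product(product(VD, VE), repeat=2)
            if ((x, y) in AD and u == v) or (x == y and (u, v) in AE)}

def direct(VD, AD, VE, AE):
    """Arc set of the direct product D x D'."""
    return {((x, u), (y, v))
            for (x, u), (y, v) in product(product(VD, VE), repeat=2)
            if (x, y) in AD and (u, v) in AE}

def strong(VD, AD, VE, AE):
    """Arc set of the strong product: the union of the Cartesian and direct arcs."""
    return cartesian(VD, AD, VE, AE) | direct(VD, AD, VE, AE)

def lexicographic(VD, AD, VE, AE):
    """Arc set of the lexicographic product D ∘ D'."""
    return {((x, u), (y, v))
            for (x, u), (y, v) in product(product(VD, VE), repeat=2)
            if (x, y) in AD or (x == y and (u, v) in AE)}
```

For instance, with \(\overrightarrow{P}_2\) stored as `({0, 1}, {(0, 1)})`, a call such as `strong({0, 1}, {(0, 1)}, {0, 1}, {(0, 1)})` returns the arc set of \(\overrightarrow{P}_2{\,\boxtimes \,}\overrightarrow{P}_2\).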
Figure 10.1 The three standard associative, commutative products

In each case D and \(D'\) are called factors of the product. In drawing products, we often align the factors roughly horizontally and vertically (like x- and y-coordinate axes) below and to the left of the vertex set \(V(D)\times V(D')\), so that \((x,x')\) projects vertically to \(x\in V(D)\) and horizontally to \(x'\in V(D')\). This is illustrated in Figure 10.1, showing examples of the Cartesian, direct and strong products. The lexicographic product is illustrated in Figure 10.2.

Figure 10.2 The lexicographic product. Note \(D\circ D'\not \cong D'\circ D\).

The definitions reveal immediately that the Cartesian, direct and strong products are commutative in the sense that the map \((x,x')\mapsto (x',x)\) yields isomorphisms \(D\,\Box \,D'\rightarrow D'\,\Box \,D\), \(D\times D'\rightarrow D'\times D\), and \(D{\,\boxtimes \,}D'\rightarrow D'{\,\boxtimes \,}D\). However, Figure 10.2 shows that the lexicographic product is not commutative.

It is also easy to check that all four standard products are associative in the sense that the identification \((x,(y,z)) = ((x,y),z)\) yields equalities

$$ \begin{array}{rcl} D_1\,\Box \,(D_2\,\Box \,D_3) &=& (D_1\,\Box \,D_2)\,\Box \,D_3,\\ D_1\times (D_2\times D_3) &=& (D_1\times D_2)\times D_3,\\ D_1{\,\boxtimes \,}(D_2{\,\boxtimes \,}D_3) &=& (D_1{\,\boxtimes \,}D_2){\,\boxtimes \,}D_3,\\ D_1\circ (D_2\circ D_3) &=& (D_1\circ D_2)\circ D_3. \end{array} $$

Thus we may unambiguously define products of more than two factors without regard to grouping. The product definitions extend as follows.

The vertex set of the Cartesian product \(D_1\,\Box \,\cdots \,\Box \,D_n\) is the Cartesian product of sets \(V(D_1)\times \cdots \times V(D_n)\). The arcs of the product are all pairs \((x_1,\ldots ,x_n)(y_1,\ldots ,y_n)\), where \(x_iy_i\in A(D_i)\) for some index \(i\in [n]\), and \(x_j=y_j\) for all \(j\ne i\).

The direct product \(D_1\times \cdots \times D_n\) has vertices \(V(D_1)\times \cdots \times V(D_n)\) and arcs \((x_1,\ldots ,x_n)(y_1,\ldots ,y_n)\), where \(x_iy_i\in A(D_i)\) for all \(i\in [n]\).

The strong product \(D_1{\,\boxtimes \,}\cdots {\,\boxtimes \,}D_n\) has vertices \(V(D_1)\times \cdots \times V(D_n)\), and \((x_1,\ldots ,x_n)(y_1,\ldots ,y_n)\) is an arc provided \(x_i=y_i\) or \(x_iy_i\in A(D_i)\) for all \(i\in [n]\), and \(x_iy_i\in A(D_i)\) for at least one \(i\in [n]\). Note the containment

$$A(D_1\,\Box \,\cdots \,\Box \,D_n) \cup A(D_1\times \cdots \times D_n)\subseteq A(D_1{\,\boxtimes \,}\cdots {\,\boxtimes \,}D_n),$$

which is guaranteed to be an equality only when \(n=2\) (as in the definition on page 467).

Extending the lexicographic product to more than two factors, we see that \(D_1\circ \cdots \circ D_n\) has vertices \(V(D_1)\times \cdots \times V(D_n)\) and \((x_1,\ldots ,x_n)(y_1,\ldots ,y_n)\) is an arc of the product provided that there is an index \(i\in [n]\) for which \(x_iy_i\in A(D_i)\), while \(x_j=y_j\) for any \(1\le j<i\).
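The first-differing-coordinate rule for the n-fold lexicographic product translates into a short membership test; the sketch below is ours, assuming the same (vertex set, arc set) representation used in the earlier sketch.

```python
def lex_is_arc(xs, ys, arc_sets):
    """Test whether (xs, ys) is an arc of D_1 ∘ ... ∘ D_n.

    xs, ys are vertex tuples and arc_sets[i] is the arc set of D_{i+1}:
    (xs, ys) is an arc iff some coordinate i has x_i y_i in A(D_i) while
    x_j = y_j for every j < i (later coordinates are unconstrained).
    """
    for x, y, A in zip(xs, ys, arc_sets):
        if (x, y) in A:
            return True       # first coordinate carrying an arc of its factor
        if x != y:
            return False      # coordinates differ without an arc of the factor
    return False              # xs == ys, and we exclude loops
```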

We define the nth powers with respect to the four products as

$$ \begin{array}{ll} D^{\,\Box \,n} = D\,\Box \,D\,\Box \,\cdots \,\Box \,D, \qquad & D^{\times n} = D\times D\times \cdots \times D,\\ D^{\boxtimes \,n} = D{\,\boxtimes \,}D{\,\boxtimes \,}\cdots {\,\boxtimes \,}D, \qquad & D^{\circ n} = D\,\circ \,D\,\circ \,\cdots \,\circ \,D, \end{array} $$

where there are n factors in each case.

A digraph homomorphism \(\varphi :D\rightarrow D'\) is a map \(\varphi :V(D)\rightarrow V(D')\) for which \(xy\in A(D)\) implies \(\varphi (x)\varphi (y)\in A(D')\). We call \(\varphi \) a weak homomorphism if \(xy\in A(D)\) implies \(\varphi (x)\varphi (y)\in A(D')\) or \(\varphi (x)=\varphi (y)\). A homomorphism is a weak homomorphism, but not conversely. For each \(k\in [n]\), define the projection \(\pi _k:V(D_1)\times \cdots \times V(D_n)\rightarrow V(D_k)\) as \(\pi _k(x_1,\ldots ,x_n)=x_k\). It is straightforward to verify that each projection \(\pi _k:D_1\times \cdots \times D_n\rightarrow D_k\) is a homomorphism, and \(\pi _k:D_1\,\Box \,\cdots \,\Box \,D_n\rightarrow D_k\) and \(\pi _k:D_1{\,\boxtimes \,}\cdots {\,\boxtimes \,}D_n\rightarrow D_k\) are weak homomorphisms. In general, only the first projection \(\pi _1:D_1\circ \cdots \circ D_n\rightarrow D_1\) of a lexicographic product is a weak homomorphism. Although we will not undertake such a demonstration here, it can be shown that \(\,\Box \,\), \(\times \), \({\,\boxtimes \,}\) and \(\circ \) are the only associative products for which at least one projection is a weak homomorphism (or homomorphism) and each arc of each factor is a projection of an arc in the product. See [18] for details in the class of graphs. (The arguments apply equally well to digraphs.)
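The following small check (ours; `cartesian` is the helper sketched after the product definitions above) illustrates the distinction: the first projection of a Cartesian product is a weak homomorphism, but in general not a homomorphism, since arcs inside a fiber project to single vertices.

```python
def is_weak_hom(AD, AH, phi):
    """Check that xy in A(D) implies phi(x)phi(y) in A(H) or phi(x) == phi(y)."""
    return all((phi(x), phi(y)) in AH or phi(x) == phi(y) for x, y in AD)

P2 = ({0, 1}, {(0, 1)})                      # the directed path on two vertices
prod_arcs = cartesian(*P2, *P2)              # arcs of P2 □ P2
assert is_weak_hom(prod_arcs, P2[1], lambda v: v[0])          # weak homomorphism
assert not all((v[0], w[0]) in P2[1] for v, w in prod_arcs)   # not a homomorphism
```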

For products written as \(D\,\Box \,H\), we write the projections as \(\pi _D\) and \(\pi _H\).

We continue with some algebraic properties of the four products. Denote the disjoint union of digraphs D and H as \(D+H\). The following distributive laws are immediate:

$$ \begin{array}{rclcrcl} (D+ H)\,\Box \,K &=& D\,\Box \,K+ H\,\Box \,K, & \quad & (D+ H)\times K &=& D\times K+ H\times K,\\ (D+ H){\,\boxtimes \,}K &=& D{\,\boxtimes \,}K+ H{\,\boxtimes \,}K, & & (D+ H)\circ K &=& D\circ K+ H\circ K. \end{array} $$

The corresponding left-distributive laws also hold, except in the case of the lexicographic product, where generally \(D\circ (H+K) \ne D\circ H+D\circ K\). Regarding this, the next proposition tells the whole story.

Proposition 10.1.1

We have \(D\circ (H+K) \cong D\circ H+D\circ K\) if and only if D has no arcs.

Proof:

If D has no arcs, then the definition of the lexicographic product shows that both \(D\circ (H+K)\) and \(D\circ H+D\circ K\) consist of |V(D)| disjoint copies of \(H+K\), so they are isomorphic.

Conversely, suppose \(D\circ (H+K)\cong D\circ H+D\circ K\), so both digraphs have the same number of arcs. Note that in general

$$\begin{aligned} |A(D\circ H)|=|A(D)|\cdot |V(H)|^2+ |V(D)|\cdot |A(H)|, \end{aligned}$$
(10.1)

where the first term counts arcs \((x,x')(y,y')\) with \(xy\in A(D)\), and the second term counts such arcs with \(x=y\). Using this to count the arcs of \(D\circ (H+K)\), and again to count those of \(D\circ H+D\circ K\), we find that the two counts differ by \(2\,|A(D)|\cdot |V(H)|\cdot |V(K)|\), so \(|A(D)|=0\). \(\square \)
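A quick numerical check of Equation (10.1), using the `lexicographic` helper sketched in Section 10.1 (an illustration of ours, not part of the argument):

```python
VD, AD = {0, 1, 2}, {(0, 1), (1, 2), (2, 0)}     # directed 3-cycle D
VH, AH = {0, 1}, {(0, 1)}                        # directed path H on two vertices
arcs = lexicographic(VD, AD, VH, AH)
# |A(D ∘ H)| = |A(D)||V(H)|^2 + |V(D)||A(H)| = 3*4 + 3*1 = 15
assert len(arcs) == len(AD) * len(VH) ** 2 + len(VD) * len(AH)
```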

The trivial digraph \(K_1\) is a unit for \(\,\Box \,\), \({\,\boxtimes \,}\) and \(\circ \), that is, \(K_1\,\Box \,D=D\), \(\,K_1{\,\boxtimes \,}D=D\), and \(\,K_1\circ D=D= D\circ K_1\) (by identifying \((1,x)=x=(x,1)\) for all \(x\in V(D)\)). However, this does not work for the direct product because \(K_1\times D\) has no arcs, even if D does. But if we admit loops and let \(K_1^*\) be a loop on one vertex, then \(K_1^*\) is the unique digraph for which \(K_1^*\times D=D\) for every digraph D. For this reason we often regard the direct product as a product on the class of digraphs with loops allowed, especially when dealing with issues of unique prime factorization, where the existence of a unit is crucial.

As mentioned above, the lexicographic product is the only one of the four standard products that is not commutative. However, if \(D=H^{\,\circ n}\) and \(D'=H^{\,\circ m}\) are lexicographic powers of the same digraph H, then we do of course get \(D\circ D'=D'\circ D\). Another way that D and \(D'\) can commute is if they are both transitive tournaments, in which case we have

$$\begin{aligned} TT_n\circ TT_m = TT_{mn}=TT_m\circ TT_n. \end{aligned}$$
(10.2)

To verify this, order the vertices of \(TT_n\) as \(v_1,v_2,\ldots , v_n\) with \(v_iv_j\in A(TT_n)\) provided \(i<j\). Order those of \(TT_m\) as \(w_1,w_2,\ldots , w_m\) with \(w_kw_\ell \in A(TT_m)\) provided \(k<\ell \). Order the set \(V(TT_n)\times V(TT_m)\) lexicographically, that is, \((v_i,w_k)<(v_j,w_\ell )\) if \(i<j\), or \(i=j\) and \(k<\ell \). The definition of \(\circ \) reveals that \(TT_n\circ TT_m\) has an arc \((v_i,w_k)(v_j,w_\ell )\) if and only if \((i,k)<(j,\ell )\). Therefore \(TT_n\circ TT_m=TT_{mn}\).
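The argument can be spot-checked mechanically; the snippet below (ours, reusing the `lexicographic` helper from Section 10.1) confirms that the arcs of \(TT_3\circ TT_4\) are exactly the strictly increasing pairs in the lexicographic order, i.e. a transitive tournament on 12 vertices.

```python
from itertools import product

def tt(n):
    """Transitive tournament TT_n on 0, ..., n-1: arc i -> j iff i < j."""
    V = set(range(n))
    return V, {(i, j) for i in V for j in V if i < j}

(Vn, An), (Vm, Am) = tt(3), tt(4)
arcs = lexicographic(Vn, An, Vm, Am)
assert arcs == {(p, q) for p in product(Vn, Vm) for q in product(Vn, Vm) if p < q}
```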

Certainly also \({\mathop {K}\limits ^{\leftrightarrow }}_n\circ {\mathop {K}\limits ^{\leftrightarrow }}_m \;=\; {\mathop {K}\limits ^{\leftrightarrow }}_{mn} \;=\; {\mathop {K}\limits ^{\leftrightarrow }}_m\circ {\mathop {K}\limits ^{\leftrightarrow }}_n\) where \({\mathop {K}\limits ^{\leftrightarrow }}_n\) is the complete biorientation of \(K_n\). And if \(D_n\) is its complement (i.e., the arcless digraph on n vertices) then \(D_n\circ D_m = D_{mn}=D_m\circ D_n\). In fact, these are the only situations in which the lexicographic product commutes, as discovered by Dörfler and Imrich [8].

Theorem 10.1.2

Two digraphs commute with respect to the lexicographic product if and only if they are both lexicographic powers of the same digraph, or both transitive tournaments, or both complete symmetric digraphs, or both arcless digraphs.

We close this section with another property of the lexicographic product. Denote by \(\overline{D}\) the complement of the digraph D, that is, the digraph on V(D) with \(xy\in A(\overline{D})\) if and only if \(xy\notin A(D)\). The equation

$$\begin{aligned} \overline{D\circ H}=\overline{D}\circ \overline{H} \end{aligned}$$
(10.3)

is easily confirmed. No other standard product has this property.

The remainder of the chapter is organized as follows. Sections 10.2 and 10.3 treat distance and connectedness for the four products. Sections 10.4, 10.5 and 10.6 deal with kings and kernels, Hamiltonian issues, and invariants. The final five sections consider algebraic questions of cancellation and unique prime factorization. Section 10.7 covers some preliminary material on homomorphisms and quotients that is used in the following section on cancellation. Section 10.9 covers prime factorization for \(\,\Box \,\) and \(\circ \). The cases \(\times \) and \(\boxtimes \) are treated in Sections 10.10 and 10.11.

10.2 Distance

Recall that the distance \(\mathrm{dist}_D(x,y)\) between two vertices \(x,y\in V(D)\) is the length of the shortest directed path from x to y, or \(\infty \) if no such path exists. This is not a metric in the usual sense, as generally \(\mathrm{dist}_D(x,y)\ne \mathrm{dist}_D(y,x)\). Let \(\mathrm{dist}'_D(x,y)\) be the length of the shortest (xy)-path in D (not necessarily directed). This is a metric.

We begin by recording the distance formulas for each of the four products. These formulas are nearly identical to the corresponding formulas for graphs; here we adapt the proofs of Chapter 5 of Hammack, Imrich and Klavžar [18] to digraphs. Our proofs will use the fact that if \(p:D\rightarrow H\) is a weak homomorphism, then \(\mathrm{dist}_D(x,y)\ge \mathrm{dist}_H\big (p(x),p(y)\big )\), and similarly for \(\mathrm{dist}'\). This holds because if P is an (xy)-dipath (or path) in D, then the projection of any arc of P is either an arc of H or a single vertex of H. The projections that are arcs constitute a (p(x), p(y))-diwalk (or walk) in H of length not greater than the length of P. (In fact, its length is the length of P minus the number of its arcs that are mapped to single vertices.)

Proposition 10.2.1

In a Cartesian product \(D=D_1\,\Box \,\cdots \,\Box \,D_n\), the distance between vertices \((x_1,\ldots ,x_n)\) and \((y_1,\ldots ,y_n)\) is

$$\begin{aligned} \mathrm{dist}_D\big ( (x_1,\ldots ,x_n),(y_1,\ldots ,y_n)\big )=\sum _{1\le i\le n}\mathrm{dist}_{D_i}(x_i,y_i). \end{aligned}$$

For the strong product \(D=D_1{\,\boxtimes \,}\cdots {\,\boxtimes \,}D_n\), the distance is

$$\begin{aligned} \mathrm{dist}_D\big ( (x_1,\ldots ,x_n),(y_1,\ldots ,y_n)\big )=\max _{1\le i\le n}\left\{ \mathrm{dist}_{D_i}(x_i,y_i)\right\} . \end{aligned}$$

The same formulas hold when \(\mathrm{dist}\) is replaced with \(\mathrm{dist}'\).

Proof:

By associativity, it suffices to prove the statements for the case \(n=2\).

First consider the Cartesian product \(D=D_1\,\Box \,D_2\). To begin, suppose \(\mathrm{dist}_D((x_1,x_2),(y_1,y_2))\) is finite. Take a \(((x_1,x_2),(y_1,y_2))\)-dipath P of length \(\mathrm{dist}_D((x_1,x_2),(y_1,y_2))\). By definition of the Cartesian product, any arc of P is mapped to an arc in \(D_1\) or \(D_2\) by one of the two projections \(\pi _1\) and \(\pi _2\), and to a single vertex by the other. It follows that \(\pi _1\) maps P to an \((x_1,y_1)\)-diwalk in \(D_1\) of length (say) \(d_1\), and \(\pi _2\) maps P to an \((x_2,y_2)\)-diwalk in \(D_2\) of length \(d_2\), with \(\mathrm{dist}_D((x_1,x_2),(y_1,y_2))=d_1+d_2\ge \mathrm{dist}_{D_1}(x_1,y_1)+\mathrm{dist}_{D_2}(x_2,y_2)\).

In particular this means the proposition holds if \(\mathrm{dist}_{D_1}(x_1,y_1)=\infty \) or \(\mathrm{dist}_{D_2}(x_2,y_2)=\infty \). If they are both finite, take a shortest \((x_1,y_1)\)-dipath \(P_1\) in \(D_1\) and a shortest \((x_2,y_2)\)-dipath \(P_2\) in \(D_2\). Then \(D_1\,\Box \,D_2\) has a dipath

$$ \left( P_1\times \{x_2\}\right) + \left( \{y_1\}\times P_2\right) $$

from \((x_1,x_2)\) to \((y_1,y_2)\), of length \(\mathrm{dist}_{D_1}(x_1,y_1)+\mathrm{dist}_{D_2}(x_2,y_2)\). Therefore \(\mathrm{dist}_D((x_1,x_2),(y_1,y_2))\le \mathrm{dist}_{D_1}(x_1,y_1)+\mathrm{dist}_{D_2}(x_2,y_2)\). Equality holds by the previous paragraph.

Now consider the strong product \(D_1{\,\boxtimes \,}D_2\). As each \(\pi _i:D_1{\,\boxtimes \,}D_2\rightarrow D_i\) is a weak homomorphism, it follows that \(\mathrm{dist}_D((x_1,x_2),(y_1,y_2))\ge \mathrm{dist}_{D_i}(x_i,y_i)\) for \(i=1,2\), so \(\mathrm{dist}_D((x_1,x_2),(y_1,y_2))\ge \max _{1\le i\le 2}\{\mathrm{dist}_{D_i}(x_i,y_i)\}\).

Thus, if at least one \(\mathrm{dist}_{D_i}(x_i,y_i)\) is infinite, then \(\mathrm{dist}_D((x_1,x_2),(y_1,y_2))=\infty \), and the proposition follows. Otherwise, take a shortest \((x_1,y_1)\)-dipath \(x_1a_1a_2a_3\ldots a_py_1\) in \(D_1\) and a shortest \((x_2,y_2)\)-dipath \(x_2b_1b_2b_3\ldots b_qy_2\) in \(D_2\). Say \(p\ge q\). We get the following \(((x_1,x_2),(y_1,y_2))\)-dipath in \(D_1{\,\boxtimes \,}D_2\):

$$ (x_1,x_2)(a_1,b_1)(a_2,b_2)(a_3,b_3)\ldots (a_q,b_q)(a_{q+1},y_2)(a_{q+2},y_2)\ldots (a_p,y_2)(y_1,y_2). $$

Its length is \(\mathrm{dist}_{D_1}(x_1,y_1)=\max \{\mathrm{dist}_{D_i}(x_i,y_i)\}\ge \mathrm{dist}_D((x_1,x_2),(y_1,y_2))\). The reverse inequality was established in the previous paragraph.

The arguments for \(\mathrm{dist}'\) are identical, but replacing each occurrence of the word “diwalk” with “walk,” and “dipath” with “path.” \(\square \)
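For readers who want to verify the formulas experimentally, here is a small sketch of ours: a breadth-first search distance routine together with a check on \(\overrightarrow{C}_3\,\Box \,\overrightarrow{C}_4\) and \(\overrightarrow{C}_3{\,\boxtimes \,}\overrightarrow{C}_4\) (the `cartesian` and `strong` helpers are the ones sketched in Section 10.1).

```python
from collections import deque
from math import inf

def dist(arcs, s, t):
    """Directed distance from s to t, by breadth-first search over an arc set."""
    succ = {}
    for u, v in arcs:
        succ.setdefault(u, []).append(v)
    seen, queue = {s: 0}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return seen[u]
        for v in succ.get(u, []):
            if v not in seen:
                seen[v] = seen[u] + 1
                queue.append(v)
    return inf

def dicycle(n):
    """The directed cycle on vertices 0, ..., n-1."""
    return set(range(n)), {(i, (i + 1) % n) for i in range(n)}

(V1, A1), (V2, A2) = dicycle(3), dicycle(4)
# Cartesian distance is the sum of the factor distances, strong distance the maximum
assert dist(cartesian(V1, A1, V2, A2), (0, 0), (1, 2)) == 1 + 2
assert dist(strong(V1, A1, V2, A2), (0, 0), (1, 2)) == max(1, 2)
```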

The situation for the direct product is quite different. It requires the following useful result concerning directed walks in a direct product.

Proposition 10.2.2

A direct product \(D=D_1\times \cdots \times D_n\) has a diwalk of length k from \((x_1,\ldots ,x_n)\) to \((y_1,\ldots ,y_n)\) if and only if each \(D_i\) has a diwalk of length k from \(x_i\) to \(y_i\).

Proof:

Suppose D has a diwalk W from \((x_1,x_2,\ldots ,x_n)\) to \((y_1,y_2,\ldots ,y_n)\), of length k. As each projection \(\pi _i:D\rightarrow D_i\) is a homomorphism, W projects to an \((x_i,y_i)\)-diwalk of length k in each \(D_i\).

Conversely, if each factor \(D_i\) has a diwalk \(x_i x_i^1 x_i^2 x_i^3 \ldots x_i^{k-1} y_i\) of length k, then by the definition of the direct product, D has a diwalk

$$ (x_1,\ldots ,x_n) (x_1^1,\ldots ,x_n^1) (x_1^2,\ldots ,x_n^2)\ldots (x_1^{k-1},\ldots ,x_n^{k-1}) (y_1,\ldots ,y_n) $$

of length k. \(\square \)

Proposition 10.2.3

In a direct product \(D=D_1\times \cdots \times D_n\), the distance between two vertices \((x_1,\ldots ,x_n)\) and \((y_1,\ldots ,y_n)\) is

$$\begin{aligned} \mathrm{dist}_D\big ( (x_1,\ldots ,x_n),(y_1,\ldots ,y_n)\big )=\min \left\{ \, k\in \mathbb {Z} \;\,\left| \;\, \begin{array}{c} \text {each } D_i \text { has an}\\ (x_i,y_i)\text {-diwalk}\\ \text {of length } k \end{array}\right. \right\} , \end{aligned}$$

or \(\infty \) if no such k exists.

Proof:

Let \(\mathrm{dist}_D\big ( (x_1,\ldots ,x_n), (y_1,\ldots ,y_n)\big )=d\). Let \(d'\) equal the smallest k for which each \(D_i\) has an \((x_i,y_i)\)-diwalk of length k, or \(\infty \) if no such k exists. We must show \(d=d'\).

If \(d=\infty \), then \(d\ge d'\). But also \(d\ge d'\) when \(d<\infty \), by Proposition 10.2.2.

On the other hand, if \(d'=\infty \), then \(d\le d'\). And again \(d\le d'\) when \(d'<\infty \), by Proposition 10.2.2. Thus \(d=d'\). \(\square \)
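In practice the formula of Proposition 10.2.3 can be evaluated by listing, for each factor, the lengths of diwalks between the relevant vertices; the sketch below (ours, for illustration) does this up to a cut-off length.

```python
def walk_lengths(arcs, s, t, max_len):
    """All k with 1 <= k <= max_len such that an (s,t)-diwalk of length k exists."""
    succ = {}
    for u, v in arcs:
        succ.setdefault(u, set()).add(v)
    reach, lengths = {s}, set()
    for k in range(1, max_len + 1):
        reach = set().union(*(succ.get(u, set()) for u in reach))
        if t in reach:
            lengths.add(k)
    return lengths

C2 = {(0, 1), (1, 0)}                  # directed 2-cycle
C3 = {(0, 1), (1, 2), (2, 0)}          # directed 3-cycle
# distance from (0,0) to (1,0) in C2 x C3: the least common diwalk length,
# namely the least odd k (for C2) that is divisible by 3 (for C3), so k = 3
common = walk_lengths(C2, 0, 1, 30) & walk_lengths(C3, 0, 0, 30)
assert min(common) == 3
```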

Distance in the lexicographic product requires a new definition. Given a vertex x of a digraph D, let \(\xi _D(x)\) be the length of a shortest non-trivial dicycle containing x, or \(\infty \) if no such dicycle exists. Let \(\xi _D'(x)\) be the length of the shortest non-trivial cycle containing x. We first state the distance formula for lexicographic products \(D_1\circ D_2\) having just two factors (a consequence of Theorem 4 of Szumny, Włoch and Włoch [54]).

Proposition 10.2.4

The distance formula for the lexicographic product is

$$\begin{aligned} \mathrm{dist}_{D_1\circ D_2}\big ((x_1,x_2),(y_1,y_2)\big )= \left\{ \begin{array}{ll} \mathrm{dist}_{D_1}(x_1,y_1) & \text {if } x_1\ne y_1, \\ \min \big \{\, \xi _{D_1}(x_1),\, \mathrm{dist}_{D_2}(x_2,y_2)\big \} & \text {if } x_1=y_1. \end{array} \right. \end{aligned}$$

The formula also holds with \(\mathrm{dist}\) and \(\xi \) replaced with \(\mathrm{dist}'\) and \(\xi '\).

Proof:

Suppose \(x_1\ne y_1\). Then, as the projection \(\pi _1\) is a weak homomorphism, we have \(\mathrm{dist}_{D_1\circ D_2}\big ((x_1,x_2),(y_1,y_2)\big )\ge \mathrm{dist}_{D_1}(x_1,y_1)\). On the other hand, given a shortest \((x_1,y_1)\)-dipath \(x_1a_1a_2\ldots a_py_1\) in \(D_1\), we construct a dipath \((x_1,x_2)(a_1,y_2)(a_2,y_2)(a_3,y_2)\ldots (y_1,y_2)\) in \(D_1\circ D_2\) of length \(\mathrm{dist}_{D_1}(x_1,y_1)\), so \(\mathrm{dist}_{D_1\circ D_2}\big ((x_1,x_2),(y_1,y_2)\big )= \mathrm{dist}_{D_1}(x_1,y_1)\).

Now suppose \(x_1= y_1\). Take a shortest \(((x_1,x_2),(y_1,y_2))\)-dipath P in \(D_1\circ D_2\). Because \(\pi _1\) is a weak homomorphism, \(\pi _1(P)\) is either a closed diwalk in \(D_1\) beginning and ending at \(x_1\) that is no longer than P, or it is the single vertex \(x_1\). In the first case, \(\mathrm{dist}((x_1,x_2),(y_1,y_2))\ge \xi _{D_1}(x_1)\). In the second, P lies in the fiber \(\{x_1\}\circ D_2\cong D_2\), and its length is no less than \(\mathrm{dist}_{D_2}(x_2,y_2)\). Thus \(\mathrm{dist}_{D_1\circ D_2}\big ((x_1,x_2),(y_1,y_2)\big )\ge \) \(\min \big \{\, \xi _{D_1}(x_1),\, \mathrm{dist}_{D_2}(x_2,y_2)\big \}\).

Conversely, if \(D_1\) has a dicycle \(x_1a_1a_2\ldots a_px_1\) through \(x_1\), then \(D_1\circ D_2\) has a dipath \((x_1,x_2)(a_1,y_2)(a_2,y_2)(a_3,y_2)\ldots (x_1,y_2)\) of the same length. And if \(D_2\) has an \((x_2,y_2)\)-dipath P, then \(\{x_1\}\circ P\) is a \(((x_1,x_2),(y_1,y_2))\)-dipath in \(D_1\circ D_2\). Thus \(\mathrm{dist}_{D_1\circ D_2}\big ((x_1,x_2),(y_1,y_2)\big )\le \) \(\min \big \{\, \xi _{D_1}(x_1),\, \mathrm{dist}_{D_2}(x_2,y_2)\big \}\).

The proof is the same for \(\mathrm{dist}'\). \(\square \)

Corollary 10.2.5

Suppose \((x_1,x_2,\ldots , x_n)\) and \((y_1,y_2,\ldots , y_n)\) are distinct vertices of \(D=D_1\circ D_2\circ \cdots \circ D_n\), and let \(k\in [n]\) be the smallest index for which \(x_k\ne y_k\). Then

$$\begin{aligned} \mathrm{dist}_D\big ((x_1,x_2,&\ldots , x_n),(y_1,y_2,\ldots , y_n)\big )=\\&\min \left\{ \xi _{D_1}(x_1), \,\xi _{D_2}(x_2),\, \ldots , \,\xi _{D_{k-1}}(x_{k-1}), \,\mathrm{dist}_{D_k}(x_k,y_k) \right\} . \end{aligned}$$

(For \(k=1\) this is \(\mathrm{dist}_D\big ((x_1,x_2,\ldots , x_n),(y_1,y_2,\ldots , y_n)\big )=\) \(\mathrm{dist}_{D_1}(x_1,y_1)\). In any case, the distance does not depend on any factor \(D_i\) with \(k<i\le n\).) The formula also holds with \(\mathrm{dist}\) and \(\xi \) replaced with \(\mathrm{dist}'\) and \(\xi '\).

Proof:

If \(n=2\), this is just a restatement of Proposition 10.2.4. If \(n>2\), then applying Proposition 10.2.4 to \(D_1\circ (D_2\circ \cdots \circ D_n)\) yields

$$\begin{aligned} \mathrm{dist}_D\big ((x_1,x_2,&\ldots , x_n),(y_1,y_2,\ldots , y_n)\big )=\\&\min \left\{ \xi _{D_1}(x_1),\; \mathrm{dist}_{D_2\circ \cdots \circ D_n}\big ((x_2,\ldots ,x_n),(y_2,\ldots ,y_n)\big ) \right\} , \end{aligned}$$

and we proceed inductively. \(\square \)

10.3 Connectivity

We now apply the results of the previous section to connectivity of the four products. Our first result characterizes connectivity and strong connectivity for three of the four products. The proofs are straightforward, with appeals to the distance formulas of Section 10.2 as needed. The parenthetical words (strongly) and (strong) in the theorem can be deleted to obtain parallel results on connectedness. (Recall that a digraph is connected if any two of its vertices can be joined by a [not necessarily directed] path.)

Theorem 10.3.1

Suppose \(D_1,\ldots , D_n\) are digraphs. Then:

  1.

    The Cartesian product \(D_1\,\Box \,\cdots \,\Box \,D_n\) is (strongly) connected if and only if each factor \(D_i\) is (strongly) connected. More generally, the (strong) components of a product \(D_1\,\Box \,\cdots \,\Box \,D_n\) are the subgraphs \(X_1\,\Box \,\cdots \,\Box \,X_n\) for which each \(X_i\) is a (strong) component of \(D_i\).

  2.

    The strong product \(D_1{\,\boxtimes \,}\cdots {\,\boxtimes \,}D_n\) is (strongly) connected if and only if each factor \(D_i\) is (strongly) connected. More generally, the (strong) components of a product \(D_1{\,\boxtimes \,}\cdots {\,\boxtimes \,}D_n\) are the subgraphs \(X_1{\,\boxtimes \,}\cdots {\,\boxtimes \,}X_n\) for which each \(X_i\) is a (strong) component of \(D_i\).

  3.

    The lexicographic product \(D_1\circ \cdots \circ D_n\) of non-trivial digraphs is (strongly) connected if and only if the first factor \(D_1\) is (strongly) connected. More generally, the (strong) components of a product \(D_1\circ \cdots \circ D_n\) are the subgraphs \(X_1\circ D_2\circ \cdots \circ D_n\), where \(X_1\) is a non-trivial strong component of \(D_1\), as well as

    $$ X_1\circ X_2 \circ \cdots \circ X_k \circ D_{k+1}\circ \cdots \circ D_n, $$

    where \(X_i\) is a trivial (strong) component of \(D_i\) for \(1\le i <k\), and \(X_k\) is a non-trivial strong component of \(D_k\) (unless \(k=n\), in which case \(X_k\) is allowed to be trivial).

Theorem 10.3.1 is a key to understanding the interconnections between the strong components of products. Recall that the strong component digraph of a digraph D is the acyclic digraph \(\mathrm{SC}(D)\) whose vertices are the strong components of D, with an arc directed from X to Y precisely when D has an arc from X to Y. Thus \(\mathrm{SC}(D)\) carries information on the interconnections between the various strong components. The \(\mathrm{SC}\) operator respects the Cartesian and strong products in the sense that \({\mathrm{SC}}(D\,\Box \,H)={\mathrm{SC}}(D)\,\Box \,{\mathrm{SC}}(H)\) and \({\mathrm{SC}}(D\boxtimes H)={\mathrm{SC}}(D)\boxtimes {\mathrm{SC}}(H)\). Indeed, the pairwise projection map \(X\mapsto (\pi _D(X), \pi _H(X))\) sending strong components X in the product to pairs of strong components in the factors is an isomorphism in both cases \(\,\Box \,\) and \({\,\boxtimes \,}\) (as is easily checked).

Also, if every strong component of D is non-trivial, then \({\mathrm{SC}}(D\circ H)={\mathrm{SC}}(D)\). This is so because Theorem 10.3.1 says the strong components of \(D\circ H\) have the form \(X\circ H\), where X is a strong component of D. From the definition of \(\circ \), the projection \(X\circ H\mapsto X\) is an isomorphism \({\mathrm{SC}}(D\circ H)\rightarrow \mathrm{SC}(D)\). (But this breaks down if D has a trivial strong component \(X=\{x_0\}\) and H has at least two strong components Y and Z, because then the distinct strong components \(X\circ Y\) and \(X\circ Z\) are both mapped to X.)

There is no result analogous to Theorem 10.3.1 for the direct product. Indeed, Figure 10.3 shows a direct product of two strong digraphs that is not even connected: Here \(\overrightarrow{C}_4\times \overrightarrow{C}_6=2\overrightarrow{C}_{12}\), where the coefficient 2 means the product is 2 disjoint copies of \(\overrightarrow{C}_{12}\). In fact, it is easy to verify the formula

$$\begin{aligned} \overrightarrow{C}_m\times \overrightarrow{C}_n=\gcd (m,n)\overrightarrow{C}_{\mathrm{lcm}(m,n)} \end{aligned}$$
(10.4)

(which is an instance of Theorem 10.3.2 below).
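Formula (10.4) is easy to confirm computationally; the check below is ours, reusing `direct` from Section 10.1 and `dicycle` from the distance sketch in Section 10.2, and counting components of the underlying graph (which suffices here, since a direct product of dicycles is a disjoint union of dicycles).

```python
from math import gcd

def weak_components(vertices, arcs):
    """Number of connected components of the underlying graph."""
    adj = {v: set() for v in vertices}
    for u, v in arcs:
        adj[u].add(v)
        adj[v].add(u)
    seen, count = set(), 0
    for v in vertices:
        if v not in seen:
            count += 1
            stack = [v]
            while stack:
                u = stack.pop()
                if u not in seen:
                    seen.add(u)
                    stack.extend(adj[u] - seen)
    return count

m, n = 4, 6
(Vm, Am), (Vn, An) = dicycle(m), dicycle(n)
assert weak_components({(x, y) for x in Vm for y in Vn},
                       direct(Vm, Am, Vn, An)) == gcd(m, n)   # 2 copies of C_12
```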

Despite the fact that a direct product of strongly connected digraphs need not be strongly connected, the converse is true: if \(D_1\times \cdots \times D_n\) is strongly connected, then each \(D_i\) must be strongly connected. This is a consequence of the fact that the projection maps are homomorphisms. Given two vertices \(x_i,y_i\) of \(D_i\), take \((x_1,\ldots ,x_n),(y_1,\ldots , y_n) \in V(D_1\times \cdots \times D_n)\). Any diwalk joining these two vertices projects to a diwalk joining \(x_i\) to \(y_i\).

Figure 10.3 The direct product of strongly connected digraphs is not necessarily strongly connected.

Additional conditions on the factors that guarantee the product is strongly connected were first spelled out by McAndrew [37]. For a digraph D, let d(D) be the greatest common divisor of the lengths of all closed diwalks in D.
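For a strongly connected digraph with at least one arc, d(D) can be computed without enumerating closed diwalks: fix a root r, take BFS distances from r, and form the gcd of \(\mathrm{dist}(r,u)+1-\mathrm{dist}(r,v)\) over all arcs uv (every closed diwalk length is a sum of these quantities, and each of them is a difference of two closed diwalk lengths). The sketch below is ours and is not part of the proof that follows.

```python
from collections import deque
from math import gcd

def period(vertices, arcs):
    """d(D) for a strongly connected digraph D with at least one arc."""
    succ = {v: [] for v in vertices}
    for u, v in arcs:
        succ[u].append(v)
    root = next(iter(vertices))
    level = {root: 0}                       # BFS distances from the root
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in succ[u]:
            if v not in level:
                level[v] = level[u] + 1
                queue.append(v)
    g = 0
    for u, v in arcs:
        g = gcd(g, level[u] + 1 - level[v])
    return g

# e.g. period(range(6), {(i, (i + 1) % 6) for i in range(6)}) == 6
```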

Theorem 10.3.2

If \(D_1,D_2,\ldots ,D_n\) are strongly connected digraphs, then the number of strong components of the direct product \(D_1\times D_2\times \cdots \times D_n\) is

$$ \frac{d(D_1)\cdot d(D_2)\cdots d(D_n)}{\text { lcm}\big (d(D_1),d(D_2),\ldots ,d(D_n)\big )}. $$

Consequently, \(D_1\times D_2\times \cdots \times D_n\) is strongly connected if and only if each \(D_i\) is strongly connected and the numbers \(d(D_1),\ldots ,d(D_n)\) are relatively prime.

Notice how this theorem agrees with Equation (10.4) and Figure 10.3. The proof is constructive and gives a neat description of the strong components.

Proof:

We need only prove the first statement. Assume each factor \(D_i\) is strongly connected, and let \(D=D_1\times \cdots \times D_n\).

For each index \(i\in [n]\), put \(d_i=d(D_i)\), and fix a vertex \(a_i\in V(D_i)\). Define functions \(f_i:V(D_i)\rightarrow \{0,1,2,\ldots , d(D_i)-1\}\) so that \(f_i(v)\) is the length (mod \(d_i\)) of an \((a_i,v)\)-diwalk W. To see that this does not depend on W, let \(W'\) be any other \((a_i,v)\)-diwalk. Let Z be a \((v,a_i)\)-diwalk. Then the concatenations \(W+Z\) and \(W'+Z\) are closed \((a_i,a_i)\)-diwalks, and \(d_i\) divides both of their lengths \(|W+Z|\) and \(|W'+Z|\). Thus \(d_i\) divides the difference \(|W|-|W'|\), so \(|W|\equiv |W'| \pmod {d_i}\). Hence \(f_i\) is well defined.

Regard \(f_i(v)\) as a coloring of vertex v, so \(D_i\) is \(d_i\)-colored. Now, to each vertex \(x=(x_1,\ldots ,x_n)\) of D, assign the n-tuple \(f(x)=(f_1(x_1),\ldots ,f_n(x_n))\). Regard the distinct n-tuples as colors, so D is colored with \(d_1d_2\cdots d_n\) colors.

Take a vertex \(b=(b_1,\ldots ,b_n)\) of D, and let \(X_b\) be the strong component of D that contains b. If \(x=(x_1,\ldots ,x_n)\) is in \(X_b\), then D has a (bx)-diwalk of length (say) k. By Proposition 10.2.2, each \(D_i\) has a \((b_i,x_i)\)-diwalk of length k. As \(b_i\) is colored \(f_i(b_i)\), it follows from the definition of \(f_i\) that \(x_i\) is colored \(f_i(x_i)=f_i(b_i)+k\) (mod \(d_i\)). Thus every vertex x of \(X_b\) has a color of form \(f(x)=(f_1(b_1)+k, \ldots , f_n(b_n)+k)\) for some non-negative integer k. (Where the arithmetic in the ith coordinate is done modulo \(d_i\).)

Suppose for the moment that the converse is true: If \(x\in V(D)\) and \(f(x)=(f_1(b_1)+k, \ldots , f_n(b_n)+k)\) for some non-negative k, then x belongs to \(X_b\). (We will prove this shortly.) Combined with the previous paragraph, this means \(V(X_b)\) consists precisely of those vertices colored \((f_1(b_1)+k, \ldots , f_n(b_n)+k)\) for some non-negative integer k. There are precisely \(\text {lcm}\big (d_1, \ldots , d_n\big )\) such colors. In summary, D has \(d_1d_2\cdots d_n=d(D_1)\cdot d(D_2)\cdot \cdots \cdot d(D_n)\) color classes, and any strong component of D is the union of \(\text {lcm}\big (d(D_1), \ldots , d(D_n)\big )\) of them. Thus D has

$$ \frac{d(D_1)\cdot d(D_2)\cdots d(D_n)}{\text {lcm}\big (d(D_1),d(D_2),\ldots ,d(D_n)\big )} $$

strong components, and the theorem follows.

It remains to prove the assertion made above, namely that if the vertex \(b=(b_1,\ldots ,b_n)\) belongs to a strong component \(X_b\), then any vertex colored \((f_1(b_1)+k, \ldots , f_n(b_n)+k)\) belongs to \(X_b\). Thus let \(x=(x_1,\ldots ,x_n)\) be colored \(f(x)=(f_1(b_1)+k, \ldots , f_n(b_n)+k)\). That is, each \(x_i\) has color \(f_i(x_i)=f_i(b_i)+k\) (mod \(d_i\)). We need to prove that D has both a (bx)-diwalk and an (xb)-diwalk. By Proposition 10.2.2, it suffices to show that there is a positive integer K for which each \(D_i\) has a \((b_i,x_i)\)-diwalk of length K. (And also a \(K'\) for which each \(D_i\) has a \((x_i,b_i)\)-diwalk of length \(K'\).) The following claim assures that this is possible.

Claim. Suppose vertices \(b_i,x_i\in V(D_i)\) have colors \(f_i(b_i)\) and \(f_i(b_i)+k\), respectively. Then there is an integer \(M_i\) such that for all \(m_i\ge M_i\) there is a \((b_i,x_i)\)-diwalk of length \(m_id_i+k\). Also there is an integer \(M_i'\) such that for all \(m_i\ge M_i'\) there is an \((x_i,b_i)\)-diwalk of length \(m_id_i-k\).

Once the claim is established, we can put \(m_i=Ld_1d_2 \cdots d_n/d_i\), where L is large enough that each \(m_i\) exceeds the maximum of all the \(M_i\) and \(M_i'\). Then \(m_id_i=Ld_1d_2 \cdots d_n\) for each \(i\in [n]\), and the claim then gives the required diwalks of lengths \(K=Ld_1d_2 \cdots d_n+k\) and \(K'=Ld_1d_2 \cdots d_n-k\).

To prove the claim, let vertices \(b_i\) and \(x_i\) of \(D_i\) have colors \(f_i(b_i)\) and \(f_i(b_i)+k\), respectively. Because \(D_i\) is strongly connected, \(D_i\) has a \((b_i,x_i)\)-diwalk W. Moreover, we may assume \(|W|\ge k\), by concatenating with W (if necessary) arbitrarily many closed \((x_i,x_i)\)-diwalks. Because \(b_i\) has color \(f_i(b_i)\) and \(x_i\) has color \(f_i(b_i)+k\), it follows that W has length \(\ell d_i+k\) for some non-negative integer \(\ell \).

By definition of \(d_i\), there are dicycles \(C_1,C_2,\ldots , C_s\) in \(D_i\) for which \(d_i=\gcd (|C_1|, |C_2|, \ldots , |C_s|)\). Select a vertex \(c_j\) of each \(C_j\). Let \(P_0\) be a \((b_i,c_1)\)-diwalk, let \(P_s\) be a \((c_s,x_i)\)-diwalk, and for each \(j\in [s-1]\) let \(P_j\) be a \((c_j,c_{j+1})\)-diwalk. See Figure 10.4. By the same reasoning used for W, the diwalk \(W'=P_0+P_1+\cdots +P_s\) has length \(\ell 'd_i+k\) for some non-negative \(\ell '\).

Figure 10.4 The diwalk W has length \(\ell d_i+k\), and the diwalk \(W'=P_0+P_1+\cdots +P_s\) has length \(\ell ' d_i+k\).

By choice of the \(C_i\), there are integers \(u_j\) for which \(\sum _{j=1}^s u_j|C_j|=d_i\). Let \(u=\max \{|u_1|, \ldots , |u_s|\}\). Put \(w=\sum _{j=1}^{s}\frac{|C_j|}{d_i}\), which is a positive integer because \(d_i\) divides each \(|C_j|\). We will show that \(M_i=\ell '+w+w^2u\) satisfies the requirements of the claim: Let \(m_i\ge M_i\). By the division algorithm

$$\begin{aligned} m_i-\ell ' = qw +r \;\;\;\; \text { with } \;\;\;\; 0\le r < w. \end{aligned}$$
(10.5)

For each \(j\in [s]\), put \(v_j=q+ru_j\). Note that each \(v_j\) is positive because

$$\begin{aligned} v_j&= \left( \frac{qw+r}{w}-\frac{r}{w}\right) +ru_j\\&> \left( \frac{m_i-\ell '}{w}-1\right) -wu \qquad \text {(by (10.5) and } u\ge |u_j|\text {)}\\&\ge \frac{M_i-\ell '}{w}-1 -wu \qquad \text {(because } m_i\ge M_i\text {)}\\&= 0 \qquad \text {(by definition of } M_i\text {).} \end{aligned}$$

Thus we may construct a diwalk

$$ W''=P_0+v_1C_1+P_1+v_2C_2+P_2+v_3C_3+P_3+\cdots + v_sC_s+P_s, $$

where \(v_jC_j\) is \(C_j\) concatenated with itself \(v_j\) times. The length of \(W''\) is

$$\begin{aligned} |W''| = |W'|+\sum _{j=1}^{s} v_j|C_j| &= \ell 'd_i+k+ q\sum _{j=1}^{s}|C_j| + r\sum _{j=1}^{s}u_j|C_j|\\ &= \ell 'd_i+k+qwd_i+rd_i = (\ell '+qw+r)d_i+k = m_id_i+k. \end{aligned}$$

Thus for any \(m_i\ge M_i\) we have constructed a \((b_i,x_i)\)-diwalk \(W''\) in \(D_i\) of length \(m_id_i+k\), and this completes the first part of the claim. By a like construction (reversing the walks in Figure 10.4, which is possible because \(D_i\) is strong) there is also an \((x_i,b_i)\)-diwalk \(W'''\) in \(D_i\) of length \(m_id_i-k\). This completes the proof of the claim, and also the proof of the theorem. \(\square \)

The issue of connectedness of direct products is even more subtle than that of strong connectedness. Despite the contributions [3], [21] and [22], more than 50 years elapsed between McAndrew’s result on strong connectedness (Theorem 10.3.2) and the eventual characterization of connectedness by Chen and Chen [5], which we now examine. To begin the discussion, note that because all projections of \(D=D_1\times \cdots \times D_n\) to factors are homomorphisms, if D is connected, then each factor \(D_i\) is connected too. The converse is generally false, as demonstrated by \(\overrightarrow{P}_2\times \overrightarrow{P}_2\). Laying out the exact additional conditions on the factors that ensure that the product is connected requires several definitions.

A matrix A is chainable if its entries are non-negative, it has no zero rows or columns, and there are no permutation matrices M and N for which MAN has the block form

$$ MAN=\left[ \begin{array}{cc}A_1 & 0\\ 0 & A_2 \end{array} \right] . $$

For a positive integer \(\ell \), we say A is \(\ell \)-chainable if \(A^\ell \) is chainable. A digraph is \(\ell \)-chainable if its adjacency matrix is \(\ell \)-chainable.

Given a walk W from x to y in a digraph D, its weight w(W) is the integer \(m-n\), where in traversing W from x to y, we encounter m arcs in forward orientation and n arcs in reverse orientation. The weight w(D) of the digraph D is the greatest common divisor of the weights of all closed walks in D, or 0 if all closed walks have weight 0.

Space limitations prevent inclusion of the proof of the following theorem. It can be found in [5].

Theorem 10.3.3

Suppose \(D_1,\ldots , D_n\) are connected digraphs. Then:

  1.

    If no \(w(D_i)\) is zero, then \(D_1\times \cdots \times D_n\) is connected if and only if both of the following conditions hold:

    • \(\gcd \big (w(D_1), \ldots ,w(D_n) \big )=1\),

    • If some \(D_i\) has a vertex of in-degree 0 (respectively out-degree 0) then no \(D_j\) (\(j\ne i\)) has a vertex of out-degree 0 (respectively in-degree 0).

  2.

    If some \(w(D_i)\) is zero, then \(D_1\times \cdots \times D_n\) is connected if and only if the other \(D_j\) (\(j\ne i\)) are \(\ell \)-chainable, where \(\ell =\text {diam}(D_i)\).

We conclude this section with characterizations of unilateral connectedness of the four products. Recall that a digraph is unilaterally connected if for any two of its vertices x and y there exists an (xy)-diwalk or a (yx)-diwalk. (This relation on vertices is not transitive, and thus not an equivalence relation, so there is no notion of unilateral components.) Note that strongly connected digraphs are unilaterally connected, but not conversely.

Theorem 10.3.4

A Cartesian product of digraphs is unilaterally connected if and only if one factor is unilaterally connected and the others are strongly connected. This is also true for the strong product.

For a proof, see the solution of Exercise 32.4 of Hammack, Imrich and Klavžar [18]. See the solution of Exercise 32.5 for a proof of the next result.

Theorem 10.3.5

A lexicographic product of digraphs is unilaterally connected but not strongly connected if and only if each factor is unilaterally connected, and the first factor is not strongly connected.

Finally, we have a characterization of unilaterally connected direct products due to Harary and Trauth [21].

Theorem 10.3.6

A direct product \(D_1\times \cdots \times D_n\) is unilaterally connected if and only if each of the following holds:

  • At most one factor \(D_i\) is unilaterally connected but not strongly connected.

  • \(D_1\times \cdots \times D_{i-1}\times D_{i+1}\times \cdots \times D_n\) is strongly connected.

  • \(D_1\times \cdots \times D_{i-1}\times C \times D_{i+1}\times \cdots \times D_n\) is strongly connected for each strong component C of \(D_i\).

10.4 Neighborhoods, Kings and Kernels

The structures of vertex neighborhoods in digraph products are clear from the definitions. For instance, \(N^+_{D\,\Box \,D'}(x,y)=\left( N^+_D(x)\times \{y\}\right) \cup \left( \{x\}\times N^+_{D'}(y)\right) \), etc. For future reference we record two particularly useful formulas, namely

$$\begin{aligned} N^+_{D\times D'}(x,y) = N^+_D(x)\times N^+_{D'}(y),\end{aligned}$$
(10.6)
$$\begin{aligned} N^+_{D\boxtimes D'}[(x,y)] = N^+_D[x]\,\times \,N^+_{D'}[y]. \end{aligned}$$
(10.7)

These also hold with the out-neighborhoods \(N^+\) replaced by in-neighborhoods \(N^-\), and extend to arbitrarily many factors.
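A spot check of Equation (10.6), with `direct` as sketched in Section 10.1 (the code is ours, for illustration):

```python
def out_nbrs(arcs, x):
    """Out-neighborhood N^+(x), read off an arc set."""
    return {v for (u, v) in arcs if u == x}

A1 = {(0, 1), (1, 2), (2, 0)}                    # directed 3-cycle
A2 = {(0, 1), (1, 0)}                            # directed 2-cycle
P = direct({0, 1, 2}, A1, {0, 1}, A2)
# N^+_{D x D'}(x, y) = N^+_D(x) x N^+_{D'}(y)
assert out_nbrs(P, (0, 0)) == {(y, v) for y in out_nbrs(A1, 0)
                                        for v in out_nbrs(A2, 0)}
```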

Recall that a k-king in a digraph is a vertex x for which there is an (xy)-dipath of length no greater than k for all vertices y of the digraph. The next proposition follows from the distance properties in Section 10.2.

Proposition 10.4.1

Let \(D_1\) and \(D_2\) be digraphs. Then:

  1.

    \((x_1,x_2)\) is a k-king in \(D_1\boxtimes D_2\) if and only if each \(x_i\) is a k-king in \(D_i\).

  2.

    \((x_1,x_2)\) is a k-king in \(D_1\,\Box \,D_2\) if and only if each \(x_i\) is a \(k_i\)-king in \(D_i\), where \(k_1+k_2=k\).

  3.

    \((x_1,x_2)\) is a k-king in \(D_1\circ D_2\) if and only if \(x_1\) is a k-king in \(D_1\), and \(x_2\) is a k-king in \(D_2\) or \(\xi _{D_1}(x_1)\le k\) (where \(\xi \) is as defined on page 473).

  4.

    If \((x_1,x_2)\) is a k-king in \(D_1\times D_2\), then each \(x_i\) is a k-king in \(D_i\).

This proposition is due to students P. LaBarr, M. Norge and I. Sanders, directed by D. Taylor [40]. Concerning statement 4, no characterization of kings in direct products is known.

Recall that a (k, l)-kernel of a digraph D is a subset \(J\subseteq V(D)\) for which \(\mathrm{dist}_D(x,y)\ge k\) for all distinct \(x,y\in J\), and to any \(x\notin J\) there is a \(y\in J\) with \(\mathrm{dist}_D(x,y)\le l\). Szumny, Włoch and Włoch [54] explored (k, l)-kernels in so-called D-joins. Their Theorem 8 implies the following characterization for the lexicographic product. (They also enumerate all (k, l)-kernels in \(D_1\circ D_2\).)

Proposition 10.4.2

Let \(l\ge k\ge 2\). Then \(J^*\subseteq V(D_1\circ D_2)\) is a (k, l)-kernel if and only if \(D_1\) has a (k, l)-kernel J with \(J^*=\bigcup _{x\in J}\{x\}\times J_x\), where

  • \(J_x\) is a (k, l)-kernel of \(D_2\) if \(\xi _{D_1}(x)>l\) and \(\mathrm{dist}_{D_2}(y,x)> l\) for \(y\ne x\), or

  • \(J_x\) is a single vertex of \(D_2\) if \(\xi _{D_1}(x)<k\), or

  • \(\mathrm{dist}_{D_2}(x,y)\ge k\) for all distinct \(x,y\in J_x\) otherwise.

The case \(k>l\) is open. No characterizations are known   for the other products, though Kwaśnik [33] proved the following.

Proposition 10.4.3

Let \(D_1\) and \(D_2\) be digraphs, and let \(J_i\) be a \((k_i,l_i)\)-kernel of \(D_i\) for each \(i=1,2\).

  1.

    \(J_1\times J_2\) is a \(\big (\min \{k_1,k_2\}, \, l_1+l_2\big )\)-kernel of \(D_1\,\Box \,D_2\) (for \(k_1,k_2\ge 2\)).

  2.

    \(J_1\times J_2\) is a \(\big (\min \{k_1,k_2\},\max \{l_1,l_2\}\big )\)-kernel of \(D_1{\,\boxtimes \,}D_2\).

See [59] for corresponding results for generalized products. Finally, we remark that Lakshmi and Vidhyapriya [34] characterize kernels in Cartesian products of tournaments with directed paths and cycles.

10.5 Hamiltonian Properties

Hamiltonian properties of digraphs have been studied extensively. The following four theorems are among the results proved in the book [44] by Schaar, Sonntag and Teichert.

Theorem 10.5.1

If \(D_1\) and \(D_2\) are Hamiltonian digraphs, then \(D_1\boxtimes D_2\) and \(D_1\circ D_2\) are Hamiltonian. If, in addition, \(D_1\) is Hamiltonian connected, and \(|D_1|\ge 3\) and \(|D_2|\ge 4\), then \(D_1\,\Box \,D_2\) is Hamiltonian.

The above additional conditions on the factors of a Cartesian product are necessary, as evidenced by the next theorem of Erdős and Trotter [57].

Theorem 10.5.2

The Cartesian product \(\overrightarrow{C}_p\,\Box \,\overrightarrow{C}_q\) is Hamiltonian if and only if there are non-negative integers \(d_1,d_2\) for which \(d_1+d_2 =\gcd (p,q)\ge 2\) and \(\gcd (p,d_1)=\gcd (q,d_2)=1\).

Recall that a digraph is traceable if it has a Hamiltonian path. It is homogeneously traceable if each of its vertices is the initial point of some Hamiltonian path.

Theorem 10.5.3

If digraphs \(D_1\) and \(D_2\) are homogeneously traceable, then so are \(D_1\,\Box \,D_2\), \(D_1\boxtimes D_2\) and \(D_1\circ D_2\).

Theorem 10.5.4

If \(D_1\) is homogeneously traceable and \(D_2\) is traceable, then \(D_1\,\Box \,D_2\) and \(D_1\boxtimes D_2\) are traceable. If \(D_1\) and \(D_2\) are traceable, then so is \(D_1\circ D_2\).

A digraph is Hamiltonian decomposable if it has a family of Hamiltonian dicycles such that every arc of the digraph belongs to exactly one of the dicycles. Ng [39] gives the most complete result among digraph products.

Theorem 10.5.5

If \(D_1\) and \(D_2\) are Hamiltonian decomposable digraphs, and \(|V(D_1)|\) is odd, then \(D_1\circ D_2\) is Hamiltonian decomposable.

At present it is not known if the assumption of odd order can be removed.

Conjecture 10.5.6

If \(D_1\) and \(D_2\) are Hamiltonian decomposable digraphs, then \(D_1\circ D_2\) is Hamiltonian   decomposable.

By Theorem 10.5.2, a Cartesian product of Hamiltonian decomposable digraphs is not necessarily Hamiltonian decomposable. This is also the case for the strong product, as is illustrated by \(\overleftrightarrow {K_2}{\,\boxtimes \,}\overleftrightarrow {K_2}=\overleftrightarrow {K_4}\).

Problem 10.5.7

Determine conditions under which a Cartesian   or strong product of digraphs is Hamiltonian decomposable.

A solution to this problem may shed light on the longstanding conjecture that a Cartesian product of Hamiltonian decomposable graphs is Hamiltonian decomposable. See Section 30.2 of [18] and the references therein.

Despite these difficulties, there has been progress on Cartesian products of biorientations of graphs. Stong [52] proved that the complete biorientation of the hypercube of odd dimension \(2m+1\) decomposes into \(2m+1\) Hamiltonian dicycles, and that the same is true for \(\overleftrightarrow {C}_{n_1}\,\Box \,\cdots \,\Box \,\overleftrightarrow {C}_{n_m}\,\Box \,\overleftrightarrow {K}_{2}\) provided \(n_i\ge 3\) and \(m>2\).

Hamiltonian results for direct products of digraphs are scarce. Keating [28] proves that if \(D_1\) and \(D_2\times \overrightarrow{C}_{|D_1|}\) are Hamiltonian decomposable, then so is \(D_1\times D_2\). Paulraja and Sivasankar [41] establish Hamiltonian decompositions of direct products of biorientations of special classes of graphs.

10.6 Invariants

Here we collect various results on invariants of digraph products, beginning with the chromatic number and proceeding to domination and independence.

The chromatic number \(\chi (D)\) of a digraph D is the chromatic number of the underlying graph of D. For the Cartesian and lexicographic products, the underlying graph of the product is the product of the underlying graph of the factors. Thus for \(\,\Box \,\) and \(\circ \), the chromatic number of products of digraphs coincides with that of products of graphs. This has been well-studied. See Chapter 26 of [18] for a survey.

The situation for the direct and strong products is different. For example, \(\chi (G\times H)\le \min \{\chi (G),\chi (H)\}\) is straightforward, whether G and H are graphs or digraphs. The celebrated Hedetniemi conjecture asserts that \(\chi (G\times H)= \min \{\chi (G),\chi (H)\}\) for all graphs G and H. But if G and H are digraphs, then it is quite possible that \(\chi (G\times H)< \min \{\chi (G),\chi (H)\}\), as was first noted by Poljak and Rödl [43]. More recently, Bessy and Thomassé [2] exhibit a 5-chromatic digraph D for which \(\chi (D\times TT_5)=3\), and Tardif [55] gives digraphs \(G_n\) and \(H_n\) for which \(\chi (G_n)=n\), \(\chi (H_n)=4\) and \(\chi (G_n\times H_n)=3\). Poljak and Rödl introduced the functions

$$\begin{aligned} f(n)&=\min \{\chi (G\times H) \mid G \text { and } H \text { are } n\text {-chromatic digraphs}\},\\ g(n)&=\min \{\chi (G\times H) \mid G \text { and } H \text { are } n\text {-chromatic graphs}\}, \end{aligned}$$

and showed that if g is bounded above, then the bound is at most 16. This bound was improved to 3 in [42].

Notice that \(f(n)\le g(n)\le n\), and Hedetniemi’s conjecture is equivalent to the assertion \(g(n)=n\). Certainly if g is bounded, then so is f. Interestingly, the converse is true. Tardif [56] proved that f and g are either both bounded or both unbounded. Thus Hedetniemi’s conjecture is false if f is bounded.

There is an oriented version of the chromatic number, defined on oriented graphs, that is, digraphs with no 2-cycles. An oriented k-coloring of such a digraph D is a map \(c:V(D)\rightarrow [k]\) with the property that \(c(x)\ne c(y)\) whenever \(xy\in A(D)\), and, in addition, the existence of an arc from one color class \(X_1\) to another color class \(X_2\) implies that there are no arcs from \(X_2\) to \(X_1\). The smallest such k is called the oriented chromatic number of D, denoted \(\chi _o(D)\). Equivalently, this is the smallest k for which there is a homomorphism from D to an oriented graph of order k. The oriented chromatic number \(\chi _o(G)\) of a graph G is the maximum oriented chromatic number of all orientations of G. For a survey, see Sopena [51]. Tight bounds on this invariant are rare, even for simple classes of graphs. Aravind, Narayanan and Subramanian [1] show \(\chi _o(G\,\Box \,P_n)\le (2n-1)\chi _o(G)\), and \(\chi _o(G\,\Box \,C_n)\le 2n\chi _o(G)\), as well as \(8\le \chi _o(P_2{\,\boxtimes \,}P_n)\le 11\) and \(10\le \chi _o(P_3{\,\boxtimes \,}P_n)\le 67\). Apart from some progress on grids [10, 53], there appears to have been no other work on this invariant for products.

A dominating set in a digraph D is a subset \(S\subseteq V(D)\) with the property that for any \(y\in V(D)-S\) there exists some \(x\in S\) for which \(xy\in A(D)\). The domination number \(\gamma (D)\) is the size of a smallest dominating set. Domination in digraphs has not been studied as extensively as in graphs. As computing the domination number of a graph is \(\mathcal{NP}\)-hard [13], the same is true for digraphs. (Consider the complete biorientation of an arbitrary graph.) Thus we can expect exact formulas only for products of special classes of digraphs. Liu et al. [35] and Shaheen [45,46,47] consider the case of Cartesian products of directed paths and cycles. For example, Shaheen proves

$$\begin{aligned} {\gamma \left( \!\overrightarrow{P}_m\,\Box \,\overrightarrow{P}_n\!\right) {=} \displaystyle { m{+}\left\lceil \! \frac{m-1}{3}\!\right\rceil \left\lceil \!\frac{n-2}{3}\!\right\rceil \left\lceil \! \frac{m}{3}\!\right\rceil {+}\left\lceil \!\frac{m}{3}\right\rceil \left\lceil \! \frac{n-3}{3}\!\right\rceil {+}\left\lceil \! \frac{m+1}{3}\!\right\rceil \left\lceil \! \frac{n-4}{3}\!\right\rceil },} \end{aligned}$$

provided \(m,n>3\), and separate formulas are given for \(m\le 3\). Similar results for the strong product of grid graphs are considered in [48].

Concerning independence, note that (as for the chromatic number) questions of independence in Cartesian and lexicographic products of digraphs coincide with the same questions for graphs. So only the direct and strong product of digraphs are not covered by the theory of graph products. Despite this, there appears to have been little work done with them. But one interesting application deserves mention. The Gallai–Milgram theorem [12] says that the vertices of any digraph with independence number n can be partitioned into n parts, each of which is the vertex set of a directed path (see also Theorem 1.8.4). Hahn and Jackson [14] conjectured that this theorem is the best possible in the sense that for each positive n there is a digraph with independence number n, and such that removing the vertices of any \(n-1\) directed paths still leaves a digraph with independence number n. Bondy, Buchwalder and Mercier [4] used lexicographic products to construct such digraphs for \(n=2^a3^b\). (The general conjecture was proved by Fox and Sudakov [11].)

Finally, we briefly visit the notion of the exponent \(\exp (D)\) of a digraph D, which is the least positive integer k for which any two vertices of D are joined by a diwalk of length exactly k (or \(\infty \) if no such k exists). We say D is primitive if its exponent is finite. Wielandt [58] proved that the exponent of a primitive digraph on n vertices is bounded above by \((n-1)^2+1=n^2-2n+2\), and established a family \(W_n\) of digraphs for which this bound is attained. Kim, Song and Hwang [32] showed that if \(D_1\) and \(D_2\) have order \(n_1\) and \(n_2\), respectively, then \(\exp (D_1\,\Box \,D_2)\le n_1n_2-1\), and this upper bound can be attained only if \(\gcd (n_1,n_2)=1\). Moreover, if \(n_1=n_2=n\), then \(\exp (D_1\,\Box \,D_2)\le n^2-n+1\), and the bound is attained only for \(D_1=\overrightarrow{C}_n\) and \(D_2= W_n\). In [30] they compute the exponents of Cartesian products of cycles, and also show that if \(D_1\) is a primitive digraph and \(D_2\) is a strong digraph, then

$$ \exp (D_1\,\Box \,D_2)=\exp (D_1)+\mathrm{diam}(D_2). $$
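The exponent of a small digraph can be computed directly from the definition by Boolean matrix powers, stopping at the Wielandt bound; the routine below is a sketch of ours, not taken from the cited papers.

```python
def exponent(vertices, arcs):
    """exp(D): least k such that every ordered pair of vertices is joined by a
    diwalk of length exactly k, or None if D is not primitive."""
    vs = list(vertices)
    n = len(vs)
    idx = {v: i for i, v in enumerate(vs)}
    A = [[False] * n for _ in range(n)]
    for u, v in arcs:
        A[idx[u]][idx[v]] = True
    M = A                                           # reachability by diwalks of length k
    for k in range(1, n * n - 2 * n + 3):           # Wielandt bound (n-1)^2 + 1
        if all(all(row) for row in M):
            return k
        M = [[any(M[i][t] and A[t][j] for t in range(n))   # Boolean product M*A
              for j in range(n)] for i in range(n)]
    return None

# exponent({0, 1}, {(0, 1), (1, 0), (1, 1)}) == 2, while a dicycle alone is not primitive
```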

This work continues in [29], which proves \(\exp (D_1{\,\boxtimes \,}D_2)\le n_1+n_2-2\), with equality for dicycles. Concerning the direct product, the same authors [31] show that for a primitive digraph D there is an integer m for which

$$\begin{aligned} \mathrm{diam}(D)<\mathrm{diam}(D^{\times 2})<\mathrm{diam}(D^{\times 3})<\cdots<\mathrm{diam}(D^{\times m})=\mathrm{diam}(D^{\times (m+1)}) = \cdots = \exp (D). \end{aligned}$$

10.7 Quotients and Homomorphisms

Here we set up the notions needed in the subsequent sections on cancellation and prime factorization of digraphs. Some of that material is most naturally phrased within the class of digraphs in which loops are allowed. With this in mind, let \(\mathscr {D}\) denote the set of (isomorphism classes of) digraphs without loops, and let \(\mathscr {D}_0\) be the set of digraphs in which loops are allowed. Thus \(\mathscr {D}\subset \mathscr {D}_0\). We admit as an element of \(\mathscr {D}\) the empty digraph O with \(V(O)=\emptyset \).

This section’s main theme is that a digraph is completely determined, up to isomorphism, by the number of homomorphisms into it. Recall that a homomorphism \(f:D\rightarrow D'\) between digraphs \(D,D'\in \mathscr {D}_0\) is a map \(f:V(D)\rightarrow V(D')\) for which \(xy\in A(D)\) implies \(f(x)f(y)\in A(D')\). Also f is a weak homomorphism if \(xy\in A(D)\) implies \(f(x)f(y)\in A(D')\) or \(f(x)=f(y)\).

The set of all homomorphisms \(D\rightarrow D'\) is denoted \(\text {Hom}(D,D')\), and the set of weak homomorphisms \(D\rightarrow D'\) is \(\text {Hom}_w(D,D')\). A homomorphism is injective if it is injective as a map from V(D) to \(V(D')\). We denote the set of all injective homomorphisms \(D\rightarrow D'\) as \(\text {Inj}(D,D')\). (Necessarily \(\text {Inj}(D,D')\) is also the set of injective weak homomorphisms \(D\rightarrow D'\).) Let \(\text {hom}(D,D')=|\text {Hom}(D,D')|\) be the number of homomorphisms \(D\rightarrow D'\). Similarly, \(\text {hom}_w(D,D')=|\text {Hom}_w(D,D')|\), and \(\text {inj}(D,D')=|\text {Inj}(D,D')|\) .
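For small digraphs these counts can be computed by brute force; the helper below is ours (not from [36] or [7]) and will be handy for spot checks later in the chapter.

```python
from itertools import product

def hom_count(VD, AD, VG, AG, weak=False, injective=False):
    """Brute-force hom(D,G); set weak=True for hom_w(D,G), injective=True for inj(D,G)."""
    VD = list(VD)
    count = 0
    for image in product(VG, repeat=len(VD)):
        if injective and len(set(image)) < len(VD):
            continue
        f = dict(zip(VD, image))
        if all((f[x], f[y]) in AG or (weak and f[x] == f[y]) for x, y in AD):
            count += 1
    return count

# e.g. hom(P_2, C_3) = 3: the single arc of P_2 must map onto one of the 3 arcs
assert hom_count({0, 1}, {(0, 1)}, {0, 1, 2}, {(0, 1), (1, 2), (2, 0)}) == 3
```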

We will need several notions of digraph quotients. For a digraph D in \(\mathscr {D}\) and a partition \(\varOmega \) of V(D), the quotient \(D/\varOmega \) in \(\mathscr {D}\) is the digraph in \(\mathscr {D}\) whose vertices are the partition parts \(U\in \varOmega \), and with an arc from U to V if \(U\ne V\) and D has an arc uv with \(u\in U\) and \(v\in V\). Notice the map \(D\rightarrow D/\varOmega \) sending u to the element \(U\in \varOmega \) with \(u\in U\) is a weak homomorphism.

On the other hand, if \(D\in \mathscr {D}_0\), then the quotient \(D/\varOmega \) in \(\mathscr {D}_0\) is as above, but with a loop \(UU\in A(D/\varOmega )\) whenever D has an arc with both endpoints in U. The map \(D\rightarrow D/\varOmega \) sending u to the element \(U\in \varOmega \) that contains u is a homomorphism. See Figure 10.5.

The remaining results in this section (at least in the class \(\mathscr {D}_0\)) are from Lovász [36]. See also Hell and Nešetřil [23] for a very readable account. The statements concerning weak homomorphisms were developed by Culp in [7].

Lemma 10.7.1

For a digraph D, let \(\mathscr {P}\) be the set of all partitions of V(D).

  1.

    If \(D,G\in \mathscr {D}_0\), then \(\;\displaystyle {\mathrm{hom}(D,G)\,=\,\sum _{\varOmega \in \mathscr {P}}\mathrm{inj}(D/\varOmega ,G)}\) (quotients in \(\mathscr {D}_0\)).

  2.

    If \(D,G\in \mathscr {D}\), then \(\;\displaystyle {\mathrm{hom}_w(D,G)=\sum _{\varOmega \in \mathscr {P}}\mathrm{inj}(D/\varOmega ,G)}\) (quotients in \(\mathscr {D}\)).

Proof:

For the first part, put \(\varUpsilon =\left\{ (\varOmega , f)\ |\ \;\varOmega \in \mathscr {P}, \;f\in \text {Inj}(D/\varOmega , G)\right\} \), so \(|\varUpsilon |=\) \(\sum _{\varOmega \in \mathscr {P}}\text {inj}(D/\varOmega ,G)\). It suffices to show a bijection \(\theta :\text {Hom}(D,G)\rightarrow \) \(\varUpsilon \). Define \(\theta \) by \(\theta (f)=(\varOmega ,f^*)\), where \(\varOmega =\{f^{-1}(x) \mid x \in V(G)\}\in \mathscr {P}\), and \(f^*:D/\varOmega \rightarrow G\) is defined as \(f^*(U)=f(u)\), for \(u\in U\). By construction \(\theta \) is an injective map to \(\varUpsilon \). For surjectivity, take any \((\varOmega ,f)\in \varUpsilon \), and note that \(\theta \) sends the composition \(D\rightarrow D/\varOmega {\mathop {\rightarrow }\limits ^{\small f}}G\) to \((\varOmega ,f)\); here the injectivity of f guarantees that the partition induced by the composition is exactly \(\varOmega \).

The proof of part 2 is the same, except that \(\text {Hom}(D,G)\) and \(\hom (D,G)\) are replaced by \(\text {Hom}_w(D,G)\) and \(\hom _w(D,G)\), and quotients are in \(\mathscr {D}\). \(\square \)

Figure 10.5 Left: a digraph D and a partition \(\varOmega \) of V(D). Center: the quotient \(D/\varOmega \) in \(\mathscr {D}\). Right: the quotient \(D/\varOmega \) in \(\mathscr {D}_0\).

Proposition 10.7.2

The isomorphism class of a digraph is determined by the number of homomorphisms into it, in the following senses.

  1.

    If \(G,H\in \mathscr {D}_0\) and \(\hom (X,G)=\hom (X,H)\) for all \(X\in \mathscr {D}_0\), then \(G\cong H\).

  2.

    If \(G,H\in \mathscr {D}\) and \(\hom _w(X,G)\!=\!\hom _w(X,H)\) for all \(X\in \mathscr {D}\), then \(G\cong H\).

  3.

    If \(G,H\in \mathscr {D}\) and \(\hom (X,G)\,=\,\hom (X,H)\) for all \(X\in \mathscr {D}\), then \(G\cong H\).

Proof:

For the first statement, say \(\text {hom}(X,G)=\text {hom}(X,H)\) for all \(X\in \mathscr {D}_0\). Our strategy is to show that this implies \(\text {inj}(X,G)=\) \(\text {inj}(X,H)\) for every X. Then the theorem will follow because we get \(\text {inj}(H,G)=\) \(\text {inj}(H,H)>0\) and \(\text {inj}(G,H)=\) \(\text {inj}(G,G)>0\), so there are injective homomorphisms \(G\rightarrow H\) and \(H\rightarrow G\), whence \(G\cong H\).

We use induction on |X| to show \(\text {inj}(X,G)=\) \(\text {inj}(X,H)\). If \(|X|=1\), then

$$ \text {inj}(X,G)=\hom (X,G)=\hom (X,H)=\text {inj}(X,H). $$

If \(|X|>1\), Lemma 10.7.1 (1) applied to \(\hom (X,G)=\hom (X,H)\) yields

$$ \sum _{\varOmega \in \mathscr {P}}\text {inj}(X/\varOmega ,G)= \sum _{\varOmega \in \mathscr {P}}\text {inj}(X/\varOmega ,H). $$

Let T be the trivial partition of V(X) consisting of |X| singleton sets. Then \(X/T=X\) and the above equation becomes

$$ \text {inj}(X,G)+\!\!\!\!\sum _{\varOmega \in \mathscr {P}-T}\text {inj}(X/\varOmega ,G)\;\;=\;\; \text {inj}(X,H)+\!\!\!\!\sum _{\varOmega \in \mathscr {P}-T}\text {inj}(X/\varOmega ,H). $$

By the induction hypothesis, \(\text {inj}(X/\varOmega ,G)=\text {inj}(X/\varOmega ,H)\) for all non-trivial partitions \(\varOmega \). Consequently \(\text {inj}(X,G)=\) \(\text {inj}(X,H)\), completing the proof.

The second statement is proved in exactly the same way, but using \(\hom _w\) instead of \(\hom \), and part 2 of Lemma 10.7.1 instead of part 1.

Finally, part 3 follows immediately from part 1, because if \(G,H\in \mathscr {D}\) and \(X\in \mathscr {D}_0-\mathscr {D}\), then X has a loop, but neither G nor H has one, so \(\hom (X,G)=0=\hom (X,H)\). \(\square \)

Observe that \(\hom \) and \(\hom _w\) factor neatly over the direct and strong products:

Proposition 10.7.3

Suppose X, D and G are digraphs.

  1.

    If \(X,D,G\in \mathscr {D}_0\), then \(\mathrm{hom}(X,\,D\times G) = \mathrm{hom}(X,D)\cdot \mathrm{hom}(X,G)\).

  2.

    If \(X,D,G\in \mathscr {D}\), then \(\mathrm{hom}_w(X,D{\,\boxtimes \,}G) = \mathrm{hom}_w(X,D)\cdot \mathrm{hom}_w(X,G)\).

Proof:

The map \(\mathrm{Hom}(X,D\times G)\rightarrow \mathrm{Hom}(X,D)\times \mathrm{Hom}(X,G)\) given by \(f\mapsto (\pi _Df,\pi _Gf)\) is injective. And it is surjective because any \((f_D, f_G)\) in the codomain is the image of \(x\mapsto (f_D(x), f_G(x))\), which is a homomorphism by definition of the direct product. This establishes the first statement, and the second follows analogously. \(\square \)
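Identities of this kind are easy to sanity-check on small examples by exhaustive counting. The following sketch is our own brute-force illustration of part (1): it enumerates every vertex map and keeps those sending arcs to arcs; the particular digraphs X, D and G used at the end are arbitrary small choices, not examples from the text.

from itertools import product

def hom_count(XV, XA, YV, YA):
    """Number of homomorphisms X -> Y, by checking every map V(X) -> V(Y)."""
    XV, YV = list(XV), list(YV)
    total = 0
    for images in product(YV, repeat=len(XV)):
        f = dict(zip(XV, images))
        if all((f[u], f[v]) in YA for (u, v) in XA):
            total += 1
    return total

def direct_product(DV, DA, GV, GA):
    """Vertex and arc sets of the direct product D x G."""
    V = {(d, g) for d in DV for g in GV}
    A = {((d, g), (d2, g2)) for (d, d2) in DA for (g, g2) in GA}
    return V, A

# X = directed path on 3 vertices; D = directed 3-cycle with a loop at 0;
# G = directed 2-cycle.  All are digraphs of D_0.
XV, XA = {0, 1, 2}, {(0, 1), (1, 2)}
DV, DA = {0, 1, 2}, {(0, 1), (1, 2), (2, 0), (0, 0)}
GV, GA = {0, 1}, {(0, 1), (1, 0)}

PV, PA = direct_product(DV, DA, GV, GA)
assert hom_count(XV, XA, PV, PA) == hom_count(XV, XA, DV, DA) * hom_count(XV, XA, GV, GA)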

As an application, we get a quick result for direct and strong powers.

Corollary 10.7.4

If \(D,G\in \mathscr {D}_0\), then \(D^{\times n}\cong G^{\times n}\) if and only if \(D\cong G\). Also, if \(D,G\in \mathscr {D}\), then \(D^{\boxtimes n}\cong G^{\boxtimes n}\) if and only if \(D\cong G\).

Proof:

If \(D\cong G\), then clearly \(D^{\times n}\cong G^{\times n}\). Conversely, if \(D^{\times n}\cong G^{\times n}\), then Proposition 10.7.3 gives \(\hom (X,D)^n=\hom (X,G)^n\), so \(\hom (X,D)=\hom (X,G)\) for any \(X\in \mathscr {D}_0\). Thus \(D\cong G\), by Proposition 10.7.2. Apply a parallel argument to the strong product. \(\square \)

10.8 Cancellation

Given a product \(*\in \{\,\Box \,,{\,\boxtimes \,}, \times , \circ \}\), the cancellation problem seeks the conditions under which \(D*G\cong D*H\) implies \(G\cong H\) for digraphs D, G and H. If this is the case, we say that cancellation holds; otherwise it fails. Obviously cancellation fails if D is the empty digraph, for then \(D*G= O=D*H\) for any G and H. We will see that cancellation holds for each of the products \(\,\Box \,, {\,\boxtimes \,}\) and \(\circ \) provided \(D\ne O\). The situation for the direct product is much more subtle; it is reserved for the end of the section.

As in the previous section, \(\mathscr {D}\) is the class of digraphs (without loops) and \(\mathscr {D}_0\) is the class of digraphs that may have loops. Our first result concerns the strong product. The proof approach is from Culp [7].

Theorem 10.8.1

Let D, G and H be digraphs (without loops), with \(D\ne O\). If \(D{\,\boxtimes \,}G\cong D{\,\boxtimes \,}H\), then \(G\cong H\).

Proof:

Let \(D{\,\boxtimes \,}G\cong D{\,\boxtimes \,}H\). Proposition 10.7.3 says that for any digraph X,

$$ {\mathrm{hom}}_w(X,D)\cdot {\mathrm{hom}}_w(X,G)= {\mathrm{hom}}_w(X,D)\cdot {\mathrm{hom}}_w(X,H). $$

If \(D\ne O\), then \(\mathrm{hom}_w(X,D)>0\) (constant maps are weak homomorphisms), so \(\mathrm{hom}_w(X,G)=\mathrm{hom}_w(X,H)\). Proposition 10.7.2 (2) yields \(G\cong H\). \(\square \)

Theorem 10.8.1 applies only to \(\mathscr {D}\). Indeed, cancellation over \({\,\boxtimes \,}\) fails in \(\mathscr {D}_0\). Consider the case where D is a single vertex with a loop, and \(H=K_1\). Then \(D{\,\boxtimes \,}D=D=D{\,\boxtimes \,}H\), but \(D\not \cong H\).

Echoing Theorem 10.8.1, we get a partial cancellation result for the direct product [36]. The proof is the same but uses part (1) of Proposition 10.7.2 instead of part (2), plus the fact that any constant map from X to a vertex with a loop is a homomorphism. The result is due to Lovász [36].

Theorem 10.8.2

Suppose \(D,G,H\in \mathscr {D}_0\), and D has a loop. If \(D\times G\cong D\times H\), then \(G\cong H\).

Proposition 10.7.3 has no analogue for the Cartesian product, so to deduce cancellation for it we must count our homomorphisms indirectly. The proof of the next theorem is new. A different approach uses unique prime factorization; see the remarks in Chapter 23 of [18].

Theorem 10.8.3

Let D, G and H be digraphs (without loops), with \(D\ne O\). If \(D\,\Box \,G\cong D\,\Box \,H\), then \(G\cong H\).

Proof:

The proof has two parts. First we derive a formula for \(\hom (X, D\,\Box \,G)\). Then we use it to show \(D\,\Box \,G\cong D\,\Box \,H\) implies \(\hom (X,G)=\hom (X,H)\) for all \(X\in \mathscr {D}\), whence Proposition 10.7.2 yields \(G\cong H\).

Our counting formula uses an arc 2-coloring scheme, shown in Figure 10.6. Given a 2-coloring of A(X) by the colors dashed and bold, let \(X_d\) be the spanning subdigraph of X whose arcs are the dashed arcs, and let \(X_b\) be the spanning subdigraph whose arcs are bold. Let \(X_b/X_d\) be the contraction in \(\mathscr {D}_0\) of \(X_b\) in which each connected component of \(X_d\) is collapsed to a vertex. Specifically, \(V(X_b/X_d)\) is the set of connected components of \(X_d\), and

$$ A(X_b/X_d)=\{ UV \mid X \text { has a } \mathbf{bold} \text { arc from } U \text { to } V\}. $$

Define \(X_d/X_b\) analogously, as the contraction of \(X_d\) by the connected components of \(X_b\). Note that \(X_b/X_d\) (resp. \(X_d/X_b\)) has a loop at U if and only if the subdigraph of X induced on U has a bold (resp. dashed) arc.
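The contractions \(X_b/X_d\) and \(X_d/X_b\) can be computed mechanically from an arc 2-coloring. The sketch below is our own illustration; it takes "connected component" to mean weakly connected component (orientations ignored), which is the reading used here. Swapping the roles of the bold and dashed arc sets gives \(X_d/X_b\).

def components(vertices, arcs):
    """Weakly connected components of a spanning subdigraph, via union-find."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for (u, w) in arcs:
        ru, rw = find(u), find(w)
        if ru != rw:
            parent[ru] = rw
    groups = {}
    for v in vertices:
        groups.setdefault(find(v), set()).add(v)
    return [frozenset(g) for g in groups.values()]

def contraction(vertices, bold_arcs, dashed_arcs):
    """X_b/X_d: collapse each component of the dashed subdigraph to a vertex.

    The arcs come from the bold arcs; loops may appear, so the result lives in D_0.
    """
    parts = components(vertices, dashed_arcs)
    part_of = {v: P for P in parts for v in P}
    q_arcs = {(part_of[u], part_of[w]) for (u, w) in bold_arcs}
    return set(parts), q_arcs

# Example: X a directed path 0->1->2->3 with arc (1,2) bold and the rest dashed.
verts = {0, 1, 2, 3}
bold, dashed = {(1, 2)}, {(0, 1), (2, 3)}
print(contraction(verts, bold, dashed))   # one arc from part {0,1} to part {2,3}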

Figure 10.6

Each homomorphism \(X\rightarrow D\,\Box \,G\) is encoded as an arc 2-coloring of X with colors dashed and bold, and homomorphisms \(X_d/X_b\rightarrow D\) and \(X_b/X_d\rightarrow G\).

Let \(\mathscr {C}\) be the set of all arc 2-colorings of X by colors dashed and bold. We claim that there is a disjoint union

$$\begin{aligned} \text {Hom}(X,D\,\Box \,G)=\bigcup _{\mathscr {C}}\,\text {Hom}(X_d/X_b, D)\times \text {Hom}(X_b/X_d, G). \end{aligned}$$

Indeed, any \(f\in \text {Hom}(X,D\,\Box \,G)\) corresponds to a 2-coloring in \(\mathscr {C}\) and a pair \((\pi _D f,\pi _G f)\in \text {Hom}(X_d/X_b, D)\times \text {Hom}(X_b/X_d, G)\), as follows. For any \(xy\in A(X)\), either \(\pi _Df(x)=\pi _Df(y)\) and \(\pi _Gf(x)\pi _Gf(y)\in A(G)\), or \(\pi _Df(x)\pi _Df(y)\in A(D)\) and \(\pi _Gf(x)=\pi _Gf(y)\). Color xy bold in the first case and dashed in the second. One verifies that \(\pi _Df\) is a well-defined homomorphism \(X_d/X_b\rightarrow D\), and similarly for \(\pi _Gf:X_b/X_d\rightarrow G\), and it is easy to check that \(f\mapsto (\pi _Df,\pi _Gf)\) is injective. For surjectivity, note that for any arc 2-coloring of X and pair \((f_D, f_G)\in \mathrm{Hom}(X_d/X_b, D)\times \mathrm{Hom}(X_b/X_d, G)\), there is an associated \(f\in \text {Hom}(X,D\,\Box \,G)\) defined as \(f(x)=(f_D(U), f_G(V))\), where \(U\in V(X_d/X_b)\) and \(V\in V(X_b/X_d)\) are the parts containing x.

It follows that we can count the homomorphisms from X to \(D\,\Box \,G\) as

$$\begin{aligned} \hom (X,D\,\Box \,G)=\sum _{\mathscr {C}}\hom (X_d/X_b, D)\cdot \hom (X_b/X_d, G). \end{aligned}$$
(10.8)

This completes the first part of the proof.

For the second step, suppose \(D\,\Box \,G\cong D\,\Box \,H\) and X is arbitrary. We will show \(\hom (X,G)=\hom (X,H)\) by induction on |X|. If \(|X|=1\), then \(\hom (X,G)=|G|=|H|=\hom (X,H)\) (note \(|G|=|H|\) because \(|D|\,|G|=|D\,\Box \,G|=|D\,\Box \,H|=|D|\,|H|\) and \(D\ne O\)). Otherwise, by Equation (10.8),

$$ \sum _{\mathscr {C}}\hom (X_d/X_b, D)\cdot \hom (X_b/X_d, G)= \sum _{\mathscr {C}}\hom (X_d/X_b, D)\cdot \hom (X_b/X_d, H). $$

By induction, \(\hom (X_b/X_d, G)=\hom (X_b/X_d, H)\) for all colorings with at least one dashed arc. Thus, for the coloring in which all arcs are bold, we get

$$ \hom (X_d/X_b, D)\cdot \hom (X_b/X_d, G)=\hom (X_d/X_b, D)\cdot \hom (X_b/X_d, H). $$

But then \(X_d/X_b\) has no arcs, so \(\hom (X_d/X_b, D)> 0\). Also, \(X_b/X_d=X\), so we get \(\hom (X, G)=\hom (X, H)\). Finally, Proposition 10.7.2 says \(G\cong H\). \(\square \)

Next we aim our homomorphism-counting program at the lexicographic product and bag a particularly strong cancellation law. We use a coloring scheme like that in Figure 10.6. For a homomorphism \(X\rightarrow D\circ G\), arcs mapping to fibers over vertices of D are colored bold, and all other arcs are colored dashed. Equation (10.8) adapts as

$$\begin{aligned} \hom (X,D\circ G)=\sum _{\mathscr {C}}\,\hom (X_d/X_b, D)\cdot \hom (X_b, G). \end{aligned}$$
(10.9)

Verification is left as an exercise. Using this, we can prove right- and left-cancellation for the lexicographic product.

Lemma 10.8.4

Suppose D, G and H are digraphs (without loops) and \(D\ne O\). If \(G\circ D\cong H\circ D\), then \(G\cong H\). If \(D\circ G\cong D\circ H\), then \(G\cong H\).

Proof:

Say \(G\circ D\cong H\circ D\). We will get \(G\cong H\) by showing \(\hom (X,G)=\) \(\hom (X, H)\) for any X. If \(|X|=1\), then \(\hom (X,G)=|G|=|H|=\hom (X, H)\). Let \(|X|>1\) and assume \(\hom (X',G)=\hom (X', H)\) whenever \(|X'|<|X|\). As \(\hom (X,G\circ D)=\hom (X,H\circ D)\), Equation 10.9 gives

$$ \sum _{\mathscr {C}}\,\hom (X_d/X_b, G)\cdot \hom (X_b, D)= \sum _{\mathscr {C}}\,\hom (X_d/X_b, H)\cdot \hom (X_b, D). $$

Now, \(\hom (X_d/X_b, G)=\hom (X_d/X_b, H)\) unless all arcs of X are dashed, in which case \(X_d/X_b=X\) and \(X_b\) is the arcless digraph on V(X). From this, the above equation reduces to \(\hom (X,G)\cdot |D|^{|X|}=\hom (X,H)\cdot |D|^{|X|}\). As \(D\ne O\), we may cancel \(|D|^{|X|}\) to get \(\hom (X,G)=\hom (X,H)\), and then \(G\cong H\) by Proposition 10.7.2. For the second statement, Equation 10.9 gives

$$ \sum _{\mathscr {C}}\,\hom (X_d/X_b, D)\cdot \hom (X_b, G)= \sum _{\mathscr {C}}\,\hom (X_d/X_b, D)\cdot \hom (X_b, H), $$

and we reason as in the first case. \(\square \)

We now discuss a notion that leads to a much stronger cancellation law. A subdigraph X of D is said to be externally related if for each \(b\in V(D)-V(X)\) the following holds: if there is an arc from b to a vertex of X, then there are arcs from b to every vertex in X; and if there is an arc from a vertex of X to b, then there are arcs from every vertex of X to b. (In the context of graphs, see Section 10.2 of [18], and the references therein.)

Given a vertex \(a=(x_1,x_2)\in V(G\circ D)\), let \(D^a\) denote the subdigraph of \(G\circ D\) induced on the vertices \(\{(x_1,x)\mid x\in V(D)\}\). We call \(D^a\) the D-layer through a. The definition of the lexicographic product implies that \(D^a\cong D\), and that each \(D^a\) is externally related in \(G\circ D\); moreover, being induced, each \(D^a\) contains every arc of \(G\circ D\) between its vertices. All of these ideas are used in the proof of the next theorem, which was first proved by Dörfler and Imrich [8].

Theorem 10.8.5

Let D, G, H and K be non-empty digraphs (without loops). If \(G\circ D\cong H\circ K\) and \(|D|=|K|\), then \(G\cong H\) and \(D\cong K\).

Proof:

We prove this under the assumption that either D is disconnected, or that both D and its complement \(\overline{D}\) are connected. Once proved, this implies the general result, because if D is connected and \(\overline{D}\) is disconnected, then we can use Equation 10.3 to get \(\overline{G}\circ \overline{D}\cong \overline{H}\circ \overline{K}\). Then \(\overline{G}\cong \overline{H}\) and \(\overline{D}\cong \overline{K}\), and the theorem follows.

Take an isomorphism \(\varphi :G\circ D\rightarrow H\circ K\).

We first claim that for any D-layer \(D^a\), the image \(\pi _H\varphi (D^a)\) is either an arcless subdigraph of H (i.e., one or more vertices of H), or it is a single arc of H. Indeed, suppose it has an arc. Then \(\varphi (D^a)\) has an arc cd with \(\pi _H(c)\ne \pi _H(d)\). We will show that if \(\varphi (D^a)\) has a vertex x with \(\pi _H(x)\notin \{\pi _H(c),\pi _H(d)\}\), then all arcs \(x''y, yx''\) are present in \(\varphi (D^a)\), for any vertex \(y\in \varphi (D^a)\cap (K^c\cup K^d)\) and \(x''\in K^x\). This will contradict our assumption about D, because it implies that \(\varphi (D^a)\) (hence also D) is connected, but its complement is disconnected, for in \(\overline{\varphi (D^a)}\) it is impossible to find a path from x to c or d. Thus let x be as stated above. Select vertices \(c',d',x'\) in \(H\circ K-\varphi (D^a)\) with \(\pi _H(c')=\pi _H(c)\), \(\pi _H(d')=\pi _H(d)\) and \(\pi _H(x')=\pi _H(x)\). (Possible because \(|\varphi (D^a)|=|K|\), and the existence of the arc cd means that no K-layer is contained in \(\varphi (D^a)\).) The definition of \(\circ \) implies \(cd',c'd\in A(H\circ K)\). In turn, \(c'x, xd'\in A(H\circ K)\) because \(\varphi (D^a)\) is externally related. By definition of \(\circ \) we get \(cx',x'd\in A(H\circ K)\), and then also \(x'c, dx'\in A(H\circ K)\) because \(\varphi (D^a)\) is externally related. From this, the definition of \(\circ \) implies that for any vertex \(x''\) of \(K^x\) and y of \(K^c\cup K^d\) we have \(yx'', \,x''y\in A(H\circ K)\). The claim is proved. Now we break into cases.

Case 1. Suppose D is disconnected. Then \(\pi _H\varphi (D^a)\) is never an arc because then every vertex of \(\varphi (D^a)\) in the fiber over the tail of the arc would be adjacent to every vertex in the fiber over the tip, making \(\varphi (D^a)\) connected. It follows that \(\varphi \) maps components of D-layers into components of K-layers. Further, \(\varphi \) maps each component of a D-layer onto a component of a K-layer: Suppose to the contrary that C is a component of \(D^a\) and \(\varphi (C)\) is a proper subdigraph of a component of \(K^{\varphi (a)}\). Take a vertex x of \(K^{\varphi (a)}-\varphi (C)\) that is adjacent to or from \(\varphi (C)\). Then \(\varphi ^{-1}(x)\) is adjacent to or from \(C\subseteq D^a\), which is externally related, so \(\varphi ^{-1}(x)\) is adjacent to or from every vertex of \(D^a\). Consequently x is adjacent to or from every vertex of \(\varphi (D^a)\). But then any vertex y of \(\varphi (D^a)\) must be contained in \(K^{\varphi (a)}\), for otherwise it is adjacent to or from \(x\in V(K^{\varphi (a)})\), and hence also to or from \(\varphi (C)\), which is impossible. Thus \(K^{\varphi (a)}\) contains \(\varphi (D^a)\) as well as x, contradicting \(|K^{\varphi (a)}|=|\varphi (D^a)|\).

Thus each component of a D-layer is isomorphic to a component of a K-layer, and conversely, as \(\varphi \) is bijective. As there are |G| D-layers (all isomorphic to D), and just as many K-layers (isomorphic to K), we conclude \(D\cong K\). Thus \(G\circ D\cong H\circ D\), and Lemma 10.8.4 implies \(G\cong H\).

Case 2. Suppose D is connected and its complement is connected. If \(\varphi \) maps a D-layer to a K-layer, then \(D\cong K\) and Lemma 10.8.4 implies \(G\cong H\). Otherwise \(\pi _H\varphi (D^a)\) is an arc for every layer \(D^a\). Thus we can define a map \(f:G\rightarrow H\) by declaring f(x) to be the tail of the arc \(\pi _H\varphi (\pi _G^{-1}(x))\). We will finish the proof by showing f is an isomorphism. (For then \(G\cong H\), and \(D\cong K\), by Lemma 10.8.4.) We will show that f is injective; once this is done the isomorphism properties are simple consequences of the definitions. Suppose to the contrary that f is not injective, which means that for some \(a\ne b\) we have \(\pi _H\varphi (D^a)=wy\) and \(\pi _H\varphi (D^b)=wz\). Say the vertex set of \(\varphi (D^a)\) is \(A_w\cup A_y\) with \(\pi _H(A_w)=w\) and \(\pi _H(A_y)=y\). Likewise the vertex set of \(\varphi (D^b)\) is \(B_w\cup B_z\) with \(\pi _H(B_w)=w\) and \(\pi _H(B_z)=z\). Then there are arcs from each vertex of \(B_w\) to each vertex of \(B_z\), and then by definition of \(\circ \) there are arcs from each vertex of \(A_w\) to each vertex of \(B_z\). For the same reasons there are arcs from \(A_w\) to \(A_y\), and thus from \(B_w\) to \(A_y\). From this we conclude that in \(G\circ D\) there are arcs from every vertex of \(D^a\) to every vertex of \(D^b\), and arcs from every vertex of \(D^b\) to every vertex of \(D^a\). Hence there are arcs from \(B_z\) to \(A_w\), so there is an arc from \(B_z\) to \(B_w\). Thus \(\pi _H\varphi (D^b)\) contains two arcs wz and zw, contradicting the fact that this projection is a single arc. \(\square \)

We get a quick corollary concerning lexicographic powers.

Corollary 10.8.6

If \(G,H\in \mathscr {D}\), then \(G^{\circ n}\cong H^{\circ n}\) if and only if \(G\cong H\).

Having cancellation laws for the strong, Cartesian and lexicographic products, we devote the remainder of this section to the direct product. The next result due to Lovász [36] is useful in this context.

Proposition 10.8.7

Let D, C, G and H be digraphs in \(\mathscr {D}_0\). If \(C\times G\cong C\times H\) and there is a homomorphism \(D\rightarrow C\), then \(D\times G\cong D\times H\).

Proof:

As \(C\times G\cong C\times H\), Proposition 10.7.3 says \(\hom (X,C)\cdot \hom (X,G)= \hom (X,C)\cdot \hom (X,H)\) for any X. The homomorphism \(D\rightarrow C\) guarantees \(\hom (X,D)=0\) whenever \(\hom (X,C)=0\). Thus \(\hom (X,D)\cdot \hom (X,G)= \hom (X,D)\cdot \hom (X,H)\), so \(\hom (X,D\times G)=\hom (X, D\times H)\) by Proposition 10.7.3, and then Proposition 10.7.2 gives \(D\times G\cong D\times H\). \(\square \)

Now observe that cancellation can fail over the direct product. Figure 10.7 shows digraphs \(D,G,H\in \mathscr {D}_0\) for which \(D\times G\cong 3\overrightarrow{C_3} \cong D\times H\), but \(G\not \cong H\). Cancellation can also fail in the class of loopless digraphs. For example, note that for graphs we have \(K_2\times 2C_3 = 2C_6 = K_2\times C_6\), so \(\overleftrightarrow {K_2}\times 2\overleftrightarrow {C_3} \cong \overleftrightarrow {K_2}\times \overleftrightarrow {C_6}\).

A digraph D is called a zero divisor if there are digraphs \(G\not \cong H\) for which \(D\times H\cong D\times G\). For example, Figure 10.7 shows that \(D=\overrightarrow{C_3}\) is a zero divisor, and the equation above shows \(\overleftrightarrow {K_2}\) is a zero divisor. The following characterization of zero divisors is due to Lovász [36].

Theorem 10.8.8

A digraph D is a zero divisor if and only if there exists a homomorphism \(D\rightarrow \overrightarrow{C}_{p_1}+\overrightarrow{C}_{p_2}+ \cdots +\overrightarrow{C}_{p_k}\) into a disjoint union of directed cycles of distinct prime lengths \(p_1,p_2,\ldots , p_k\).

Proof:

We will prove only one (the easier) direction. See [36] for the other.

Suppose there is a homomorphism \(D\rightarrow C=\overrightarrow{C}_{p_1}+\overrightarrow{C}_{p_2}+ \cdots +\overrightarrow{C}_{p_k}\), where the \(p_i\) are distinct primes. Our plan is to produce non-isomorphic digraphs G and H for which \(C\times G=C \times H\), for then Proposition 10.8.7 will insure \(D\times G\cong D\times H\), showing D is a zero divisor.

Put \(n=p_1p_2\cdots p_k\). Let \(\mathscr {G}\) be the set of positive divisors of n that are products of an even number of the \(p_i\)’s, whereas \(\mathscr {H}\) is the set of divisors that are products of an odd number of the \(p_i\)’s. Let G and H be the disjoint unions

$$ G=\sum _{d\in \mathscr {G}} d\, \overrightarrow{C}_{\frac{n}{d}} \qquad \text { and } \qquad H=\sum _{d\in \mathscr {H}} d\, \overrightarrow{C}_{\frac{n}{d}}. $$

Clearly \(G\not \cong H\). As the direct product distributes over disjoint unions, \(C\times G=C \times H\) will follow provided \(\overrightarrow{C}_{p_i}\times G=\overrightarrow{C}_{p_i} \times H\) for each \(p_i\). We establish this with the aid of Equation (10.4), as follows:

$$\begin{aligned} \overrightarrow{C}_{p_i}\times G = \sum _{d\in \mathscr {G}} \overrightarrow{C}_{p_i}\times d\overrightarrow{C}_{\frac{n}{d}} &= \sum _{\begin{array}{c} d\in \mathscr {G}\\ p_i\mid d \end{array}} \overrightarrow{C}_{p_i}\times d \overrightarrow{C}_{\frac{n}{d}} \;+\; \sum _{\begin{array}{c} d\in \mathscr {G}\\ p_i\,\not \mid \,d \end{array}} \overrightarrow{C}_{p_i}\times d \overrightarrow{C}_{\frac{n}{d}}\\ &= \sum _{\begin{array}{c} d\in \mathscr {G}\\ p_i\mid d \end{array}} d \overrightarrow{C}_{\frac{p_in}{d}} \;+\; \sum _{\begin{array}{c} d\in \mathscr {G}\\ p_i\,\not \mid \,d \end{array}} p_i d \overrightarrow{C}_{\frac{p_in}{p_id}}\\ &= \sum _{\begin{array}{c} d\in \mathscr {H}\\ p_i\,\not \mid \,d \end{array}} p_i d \overrightarrow{C}_{\frac{n}{d}} \;+\; \sum _{\begin{array}{c} d\in \mathscr {H}\\ p_i\,\mid \,d \end{array}} d \overrightarrow{C}_{\frac{p_in}{d}}\\ &= \sum _{\begin{array}{c} d\in \mathscr {H}\\ p_i\,\not \mid \,d \end{array}} \overrightarrow{C}_{p_i}\times d \overrightarrow{C}_{\frac{n}{d}} \;+\; \sum _{\begin{array}{c} d\in \mathscr {H}\\ p_i\,\mid \,d \end{array}} \overrightarrow{C}_{p_i}\times d \overrightarrow{C}_{\frac{n}{d}}\\ &= \sum _{d\in \mathscr {H}} \overrightarrow{C}_{p_i}\times d \overrightarrow{C}_{\frac{n}{d}} \;=\; \overrightarrow{C}_{p_i}\times H. \end{aligned}$$

From this, \(C\times G=C\times H\), and hence \(D\times G=D\times H\), as noted above. \(\square \)
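The construction in this proof is concrete enough to check by machine. The sketch below is our own illustration; it assumes the product rule \(\overrightarrow{C}_m\times \overrightarrow{C}_k=\gcd (m,k)\,\overrightarrow{C}_{\mathrm{lcm}(m,k)}\) for directed cycles, which is how Equation (10.4) is applied above, and it records a disjoint union of directed cycles as a multiset of cycle lengths.

from math import gcd, prod
from collections import Counter
from itertools import combinations

def times_cycle(p, union):
    """C_p x (disjoint union of cycles), using C_p x C_k = gcd(p,k) copies of C_{lcm(p,k)}."""
    out = Counter()
    for length, mult in union.items():
        g = gcd(p, length)
        out[p * length // g] += mult * g
    return out

primes = [2, 3, 5]
n = prod(primes)

G, H = Counter(), Counter()      # unions of directed cycles, as {length: multiplicity}
for r in range(len(primes) + 1):
    for combo in combinations(primes, r):
        d = prod(combo)          # d is a product of r of the p_i's (C_1 is a looped vertex)
        (G if r % 2 == 0 else H)[n // d] += d

for p in primes:                 # checking each C_{p_i} suffices, since x distributes over +
    assert times_cycle(p, G) == times_cycle(p, H)
print(G, H)                      # G and H are visibly non-isomorphic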

Figure 10.7

Failure of cancellation over the direct product.

For example, \(\overrightarrow{C}_n\) is a zero divisor when \(n>1\), as there is a homomorphism \(\overrightarrow{C}_n\rightarrow \overrightarrow{C}_p\) for any prime divisor p of n. Also, each \(\overrightarrow{P}_n\) is a zero divisor, as there are homomorphisms \(\overrightarrow{P}_n\rightarrow \overrightarrow{C}_p\).

Paraphrasing Theorem 10.8.8: if there is no homomorphism from D into a disjoint union of directed cycles of distinct prime lengths, then \(D\times G\cong D\times H\) necessarily implies \(G\cong H\). But if there is such a homomorphism, then D is a zero divisor and there exist non-isomorphic digraphs G and H for which \(D\times G\cong D\times H\), as constructed in the proof of Theorem 10.8.8.

Given a digraph G and a zero divisor D, a natural problem is to determine all digraphs H for which \(G\times D\cong H\times D\). If there is only one such H, then necessarily \(H\cong G\), and cancellation holds. Thus it is meaningful to ask if there are conditions on G and D that force cancellation to hold, even if D is a zero divisor. For example, if \(G=K_1^*\), then \(G\times D\cong H\times D\) implies \(G\cong H\), regardless of whether D is a zero divisor. What other digraphs have this property? We now turn our attention to this type of question, adopting the approach of [15, 19, 20].

For a digraph G, let \(S_{V(G)}\) denote the symmetric group on V(G), that is, the set of bijections from V(G) to itself. For \(\sigma \in S_{V(G)}\), define the permuted digraph \(G^{\sigma }\) by \(V(G^\sigma )=V(G)\) and \(A(G^\sigma )=\{x \sigma (y) \mid xy\in A(G)\}\). Thus \(xy\in A(G)\) if and only if \(x\sigma (y)\in A(G^\sigma )\), and \(xy\in A(G^\sigma )\) if and only if \(x\sigma ^{-1}(y)\in A(G)\). Figure 10.8 shows several examples. The upper part shows a digraph G and two of its permuted digraphs. In the lower part, the cyclic permutation (0245) of the vertices of \(\overrightarrow{C_6}\) yields a permuted digraph \(\overrightarrow{C_6}^{(0245)}=2\overrightarrow{C_3}\). The permuted digraph \(\overrightarrow{C_6}^{(01)}\) is also shown. For another example, note that \(G^{\text {id}}=G\) for any digraph G. It is possible that \(G^\sigma \cong G\) for some non-identity permutation \(\sigma \). For instance, \(\overrightarrow{{C_6}}^{(024)}\cong \overrightarrow{C_6}\).
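The permuted digraph is a one-line construction, and the example \(\overrightarrow{C_6}^{(0245)}=2\overrightarrow{C_3}\) can be confirmed directly. The following sketch is our own illustration, with the vertices of \(\overrightarrow{C_6}\) taken to be \(0,\ldots ,5\) and arcs stored as ordered pairs.

def permuted(arcs, sigma):
    """A(G^sigma) = { (x, sigma(y)) : (x, y) in A(G) }."""
    return {(x, sigma[y]) for (x, y) in arcs}

C6 = {(i, (i + 1) % 6) for i in range(6)}
sigma = {0: 2, 2: 4, 4: 5, 5: 0, 1: 1, 3: 3}                # the cyclic permutation (0 2 4 5)
two_C3 = {(0, 1), (1, 4), (4, 0), (2, 3), (3, 5), (5, 2)}   # two disjoint directed triangles
assert permuted(C6, sigma) == two_C3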

Figure 10.8

Upper: A digraph G and permuted digraphs \(G^\sigma \) for transpositions \(\sigma =(2 3)\) and \(\sigma =(1 3)\). Lower: two permutations of a directed cycle.

The significance of permuted digraphs is given by the next proposition, asserting that \(D\times G\cong D\times H\) implies that H is a permuted digraph of G.

Proposition 10.8.9

Let G, H and D be digraphs, where D has at least one arc. If \(D\times G\cong D\times H\), then \(H\cong G^\sigma \) for some permutation \(\sigma \in S_{V(G)}\). As a partial converse, \(D\times G\cong D\times G^\sigma \) for all \(\sigma \in S_{V(G)}\), provided there is a homomorphism \(D\rightarrow \overrightarrow{P}_2\).

Proof:

Suppose \(D\times G\cong D\times H\), and D has at least one arc. Then there is a homomorphism \(\overrightarrow{P}_2\rightarrow D\), and Proposition 10.8.7 yields an isomorphism \(\varphi : \overrightarrow{P_2}\times G\rightarrow \overrightarrow{P_2}\times H\). We may assume \(\varphi \) has the form \((\varepsilon ,x)\mapsto (\varepsilon , \varphi _\varepsilon (x))\), where \(\varepsilon \in \{0,1\}=\) \(V(\overrightarrow{P_2})\), and each \(\varphi _\epsilon \) is a bijection \(V(G)\rightarrow V(H)\). (That a \(\varphi \) of such form exists is a consequence of Theorem 3 of [36]. However, it is also easily verified in the present setting, when the common factor is \(\overrightarrow{P}_2\).) Hence \(\varphi _0^{-1}\varphi _1:V(G)\rightarrow V(G)\) is a permutation of V(G). We now show that the map \(\varphi _0:G^{\varphi _0^{-1}\varphi _1}\rightarrow H\) is an isomorphism. Simply observe that

$$\begin{aligned} \begin{array}{rrrrll} xy \in A(G^{\varphi _0^{-1}\varphi _1}) &{}\Longleftrightarrow &{} x\, (\varphi _0^{-1}\varphi _1)^{-1}(y) &{}\in &{} A(G) \\ &{} \Longleftrightarrow &{} x\,\varphi _1^{-1}\varphi _0(y) &{}\in &{} A(G) \\ &{} \Longleftrightarrow &{} (0,x) \,(1,\varphi _1^{-1}\varphi _0(y)) &{}\in &{} A( \overrightarrow{P_2}\times G)\\ &{} \Longleftrightarrow &{} (0,\varphi _0(x))\,(1,\varphi _1\varphi _1^{-1}\varphi _0(y)) &{}\in &{} A(\overrightarrow{P_2}\times H) \quad \text {(apply} \varphi \text {)}\\ &{}\Longleftrightarrow &{} (0,\varphi _0(x)) \,(1,\varphi _0(y)) &{}\in &{} A(\overrightarrow{P_2}\times H) \\ &{} \Longleftrightarrow &{} \varphi _0(x)\,\varphi _0(y) &{}\in &{} A(H). \end{array} \end{aligned}$$

Conversely, let \(\sigma \in S_{V(G)}\). Note that the map \(\varphi \) defined as \(\varphi (0,x)=(0,x)\) and \(\varphi (1,x)=(1,\sigma (x))\) is an isomorphism \(\overrightarrow{P}_2\times G\rightarrow \overrightarrow{P}_2\times G^\sigma \) because \((0,x)(1,y)\in A(\overrightarrow{P}_2\times G)\) if and only if \((0,x)(1,\sigma (y))\in A(\overrightarrow{P}_2\times G^\sigma )\). If there is a homomorphism \(D\rightarrow \overrightarrow{P}_2\), Proposition 10.8.7 gives \(D\times G\cong D\times G^\sigma \). \(\square \)

In general, the full converse of Proposition 10.8.9 is (as we shall see) false. If there is no homomorphism \(D\rightarrow \overrightarrow{P}_2\), then not every \(\sigma \) will yield a digraph \(H=G^\sigma \) for which \(D\times G\cong D\times H\). In addition, it is possible that \(\sigma \ne \tau \) but \(G^\sigma \cong G^\tau \). Towards clarifying these issues, we next introduce a group action on \(S_{V(G)}\) whose orbits correspond to isomorphism classes of permuted digraphs.

The factorial of a digraph G is a digraph G!, defined as \(V(G!)=S_{V(G)}\), and \(\alpha \beta \in A(G!)\) provided that \(xy\in A(G)\Longleftrightarrow \) \(\alpha (x)\beta (y)\in A(G)\) for all pairs \(x,y\in V(G)\). To avoid confusion with composition, we will denote arcs \(\alpha \beta \) of G! as \([\alpha ,\beta ]\). Note that A(G!) has a group structure as a subgroup of \(S_{V(G)}\times S_{V(G)}\), that is, we can multiply arcs as \([\alpha ,\beta ][\gamma ,\delta ]=\) \([\alpha \gamma , \beta \delta ]\).

Observe that the definition implies that there is a loop \([\alpha , \alpha ]\) at \(\alpha \in V(G!)\) if and only if \(\alpha \) is an automorphism of G. In particular, any G! has a loop at the identity \(\text {id}\).

Our first example explains the origins of our term “factorial.” Let \(K_n^*\) be the complete symmetric digraph with a loop at each vertex, and note that

$$ K_n^*!\cong K_{n!}^*\cong K_n^*\times K_{n-1}^*\times K_{n-2}^*\times \cdots \times K_3^*\times K_2^*\times K_1^*. $$

For less obvious computations, it is helpful to keep in mind the following interpretation of A(G!). Any arc \([\alpha ,\beta ]\in A(G!)\) is a permutation of the arcs of G, where \([\alpha ,\beta ](xy)=\alpha (x)\beta (y)\). This permutation preserves in-incidences and out-incidences in the following sense: Given two arcs xyxz of G that have a common tail, \([\alpha ,\beta ]\) carries them to the two arcs \(\alpha (x)\beta (y)\), \(\alpha (x)\beta (z)\) of G with a common tail. Given two arcs xyzy with a common tip, \([\alpha ,\beta ]\) carries them to the two arcs \(\alpha (x)\beta (y)\), \(\alpha (z)\beta (y)\) of G with a common tip.

Bear in mind, however, that even though the head of xy meets the tail of yz, the arcs \([\alpha ,\beta ](xy)\) and \([\alpha ,\beta ](yz)\) need not meet; they can be quite far apart in G. To illustrate these ideas, Figure 10.9 shows the effect of a typical \([\alpha ,\beta ]\) on the arcs incident with a typical vertex z of G.

Figure 10.9

Action of an arc \([\alpha ,\beta ]\) of G! on the neighborhood of a vertex \(z\in V(G)\).

Let’s use these ideas to compute the factorial of the transitive tournament \(TT_n\), which has distinct out- and in-degrees \(0,1,\ldots , n-1\). The above discussion implies that if \([\alpha ,\beta ]\in A(TT_n!)\), then the out-degree of any \(x\in V(TT_n)\) equals the out-degree of \(\alpha (x)\). Hence \(\alpha =\mathrm {id}\). The same argument involving in-degrees gives \(\beta =\mathrm {id}\). Therefore \(TT_n!\) has n! vertices but only one arc \([\mathrm {id},\mathrm {id}]\). Figure 10.10 shows \(TT_3!\), plus two other examples.
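Small factorials can be computed by brute force, since there are only \(|V(G)|!^2\) candidate arcs to test against the defining condition. The following sketch is our own illustration; it confirms that \([\mathrm {id},\mathrm {id}]\) is the only arc of \(TT_3!\).

from itertools import permutations

def factorial_arcs(vertices, arcs):
    """All arcs [alpha, beta] of G!, found by testing every pair of permutations."""
    verts = sorted(vertices)
    perms = [dict(zip(verts, p)) for p in permutations(verts)]
    result = []
    for alpha in perms:
        for beta in perms:
            if all(((x, y) in arcs) == ((alpha[x], beta[y]) in arcs)
                   for x in verts for y in verts):
                result.append((alpha, beta))
    return result

TT3 = {(0, 1), (0, 2), (1, 2)}
identity = {0: 0, 1: 1, 2: 2}
assert factorial_arcs({0, 1, 2}, TT3) == [(identity, identity)]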

Figure 10.10

Some digraphs (left) and their factorials (right).

The group A(G!) acts on \(S_{V(G)}\) as \([\alpha ,\beta ]\cdot \sigma =\alpha \sigma \beta ^{-1}\), and this determines the situation in which \(G^\sigma \cong G^\tau \).

Proposition 10.8.10

If \(\sigma ,\tau \) are permutations of the vertices of a digraph G, then \(G^\sigma \cong G^\tau \) if and only if \(\sigma \) and \(\tau \) are in the same A(G!)-orbit.

Proof:

If there is an isomorphism \(\varphi :G^\sigma \rightarrow G^\tau \), then for any \(x,y\in V(G)\),

$$\begin{aligned} xy\in A(G) \Longleftrightarrow x\sigma (y)\in A(G^\sigma )&\Longleftrightarrow \varphi (x)\varphi \sigma (y)\in A(G^\tau )\\ {}&\Longleftrightarrow \varphi (x)\tau ^{-1}\varphi \sigma (y)\in A(G). \end{aligned}$$

This means \([\varphi , \tau ^{-1}\varphi \sigma ]\in A(G!)\). Then \([\varphi , \tau ^{-1}\varphi \sigma ]\cdot \sigma =\tau \), so \(\sigma \) and \(\tau \) are indeed in the same orbit.

Conversely, suppose \(\sigma \) and \(\tau \) are in the same orbit. Take \([\alpha ,\beta ]\in A(G!)\) with \(\tau =[\alpha ,\beta ]\cdot \sigma =\alpha \sigma \beta ^{-1}\). Then \(\alpha :G^\sigma \rightarrow G^{\alpha \sigma \beta ^{-1}}=G^\tau \) is an isomorphism:

$$\begin{aligned} xy\in A(G^\sigma ) \Longleftrightarrow x\sigma ^{-1}(y)\in A(G)&\Longleftrightarrow \alpha (x)\beta \sigma ^{-1}(y)\in A(G)\\&\Longleftrightarrow \alpha (x)\alpha \sigma \beta ^{-1}\beta \sigma ^{-1}(y)\in A(G^{\alpha \sigma \beta ^{-1}})\\&\Longleftrightarrow \alpha (x)\alpha (y)\in A(G^{\alpha \sigma \beta ^{-1}}).\qquad \quad {\Box } \end{aligned}$$

Given an arc \([\alpha ,\beta ]\in A(G!)\), we have \([\alpha ,\beta ]\cdot \beta =\alpha \). The previous proposition then assures \(G^\alpha \cong G^\beta \), and therefore yields the following corollary.

Corollary 10.8.11

If two permutations \(\sigma ,\tau \) are in the same component of G!, then \(G^\sigma \cong G^\tau \).

For a given digraph G, the next theorem and corollary characterize the complete set of digraphs H for which \(D\times G\cong D\times H\), provided D is a zero divisor that admits a homomorphism into a directed path. Space limitations prohibit inclusion of a proof of the theorem, as well as inclusion of the characterization for general zero divisors D. For a full treatment, see Hammack [15].

Theorem 10.8.12

Suppose G and H are digraphs, and D is a zero divisor that admits a homomorphism \(D\rightarrow \overrightarrow{P}_n\). Assume \(n\ge 2\) is the smallest such integer. Then \(D\times G\cong D\times H\) if and only if \(H\cong G^\sigma \), where \(\sigma \) is a vertex of a diwalk of length \(n-2\) in G!.

Given a digraph G and a zero divisor D that admits a homomorphism \(D\rightarrow \overrightarrow{P}_n\), Theorem 10.8.12 describes a complete collection of digraphs H for which \(D\times G\cong D\times H\). Of course it is possible that some (possibly all) of these H are isomorphic. We next describe a means of constructing the exact set of isomorphism classes of such H. Combining the previous theorem with Proposition 10.8.10 yields the following.

Corollary 10.8.13

Suppose G and D are digraphs, and D is a zero divisor that admits a homomorphism \(D\rightarrow \overrightarrow{P}_n\). Assume \(n\ge 2\) is the smallest such integer. Then the set of distinct (up to isomorphism) digraphs H for which \(D\times G\cong D\times H\) can be obtained as follows: Let \(\varUpsilon _{n-2}\) denote the set of vertices of G! that lie on a directed walk of length \(n-2\). Select a maximal set of elements \(\sigma _1,\sigma _2,\ldots ,\sigma _k\in \varUpsilon _{n-2}\) that are in distinct orbits of the A(G!)-action on \(S_{V(G)}\). Then the digraphs H for which \(D\times G\cong D\times H\) are precisely \(H\cong G^{\sigma _1}, G^{\sigma _2}, \ldots , G^{\sigma _k}\).

Cancellation holds (\(D\times G\cong D\times H\) implies \(G\cong H\)) if and only if \(k=1\).

According to Theorem 10.8.12, if D admits a homomorphism into \(\overrightarrow{P}_2\), then \(D\times G\cong D\times H\) if and only if \(H\cong G^\sigma \), where \(\sigma \) is a vertex of G! on a diwalk of length 0. In this case there are no restrictions whatsoever on \(\sigma \); it can be any permutation of V(G). Consequently, there can be potentially |V(G)|! different \(H\cong G^\sigma \).

We close with an application of these results that illustrates an extreme failure of cancellation involving the transitive tournament \(TT_n\). We remarked earlier that \(TT_n!\) has n! vertices and a single arc \([\mathrm {id},\mathrm {id}]\). Therefore each \(A(TT_n!)\)-orbit of \(S_{V(TT_n)}\) consists of a single permutation. Also \(\varUpsilon _0=S_{V(TT_n)}\). Thus, if D is a zero divisor that admits a homomorphism to \(\overrightarrow{P}_2\), then there are exactly n! distinct digraphs \(TT_n^\sigma \) for which \(D\times TT_n\cong D\times TT_n^\sigma \). By Proposition 10.8.9, this is the maximum number possible.

But notice that if we merely replace D with a zero divisor for which the smallest m admitting a homomorphism \(D\rightarrow \overrightarrow{P}_m\) satisfies \(m>2\), then \(\varUpsilon _{m-2}=\{\mathrm{{id}}\}\) and cancellation holds!
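For \(n=3\) this extreme failure of cancellation is small enough to verify exhaustively: the \(3!=6\) permuted digraphs \(TT_3^\sigma \) are pairwise non-isomorphic. The sketch below is our own check, trying every vertex bijection between every pair of them.

from itertools import permutations

V = [0, 1, 2]
TT3 = {(0, 1), (0, 2), (1, 2)}

def permuted(arcs, sigma):
    return {(x, sigma[y]) for (x, y) in arcs}

def isomorphic(A1, A2):
    """Digraph isomorphism on vertex set V, by brute force over bijections."""
    for p in permutations(V):
        f = dict(zip(V, p))
        if {(f[x], f[y]) for (x, y) in A1} == A2:
            return True
    return False

variants = [permuted(TT3, dict(zip(V, p))) for p in permutations(V)]
assert all(not isomorphic(variants[i], variants[j])
           for i in range(6) for j in range(i + 1, 6))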

10.9 Prime Factorization

We mentioned in Section 10.1 that the trivial digraph \(K_1\) is a unit for \(\,\Box \,\), \({\,\boxtimes \,}\) and \(\circ \) in the sense that \(K_1\,\Box \,D=D\), \(K_1{\,\boxtimes \,}D=D\) and \(K_1\circ D=D\) for any digraph D. If \(*\in \{\,\Box \,, {\,\boxtimes \,}, \circ \}\), we say a digraph D is prime over \(*\) if D is non-trivial, and for any factoring \(D=D_1*D_2\), one factor \(D_i\) is isomorphic to D and the other is \(K_1\).

Certainly any non-trivial digraph D has a factoring \(D=D_1*D_2*\cdots *D_n\), where each \(D_i\) is prime (possibly \(n=1\)). We call any such factoring a prime factoring over \(*\). (Note that \(n\le \log _2|V(D)|\) because a product \(D_i*D_j\) always has at least twice as many vertices as either of its factors.)

It is natural to ask whether any prime factoring of a given digraph D is unique up to order and isomorphism of the factors. In general this is false. For \(\,\Box \,\) and \({\,\boxtimes \,}\), the standard counterexamples arise from the equation

$$\begin{aligned} (1+x+x^2)(1+x^3)=(1+x^2+x^4)(1+x), \end{aligned}$$
(10.10)

giving two distinct prime factorings of the polynomial \(1+x+x^2+x^3+x^4+x^5\) in the semiring \(\mathbb {Z}^+[x]\). Let \(\overleftrightarrow {Q}_n\) be the complete biorientation of the n-cube \(Q_n\). For typographical efficiency, let us denote \(\overleftrightarrow {Q}_n\) simply as \(Q_n\). Then \(Q_n= \overleftrightarrow {K}_2^{\,\Box \,n}\) (the nth Cartesian power of \(\overleftrightarrow {K}_2\)), and \(Q_0=K_1\). Substituting \(Q_1\) for x in Equation (10.10) yields two factorings

$$ (Q_0+Q_1+Q_2)\,\Box \,(Q_0+Q_3)=(Q_0+Q_2+Q_4)\,\Box \,(Q_0+Q_1), $$

of the digraph \(Q_0+Q_1+Q_2+Q_3 +Q_4+Q_5\). It is routine to check that the above factors are prime.

The same idea applies to the strong product. Denote the complete biorientation \(\overleftrightarrow {K}_n\) of \(K_n\) simply as \(K_n\) (a convention we will adhere to for the rest of this section). Note that \(K_m{\,\boxtimes \,}K_n=\) \(K_{mn}\). Then, as above,

$$ (K_1+K_{2}+K_{4}){\,\boxtimes \,}(K_1+K_8)=(K_1+K_4+K_{16}){\,\boxtimes \,}(K_1+K_2) $$

are two distinct prime factorings of \(K_1+K_2+K_4+K_8 +K_{16}+K_{32}\).

Despite these failures of unique prime factorization, connected digraphs do factor uniquely over the Cartesian and strong products. For the Cartesian product, this was first proved by Feigenbaum [9], who also gives a polynomial algorithm for finding the prime factors. (More recently, Crespelle and Thierry [6] give a linear algorithm.) Our approach adapts that of Imrich, Klavžar and Rall [25], whose proof is for graphs, to digraphs.

Convexity is the central ingredient of the proof. A subdigraph H of D is convex if any shortest path (not necessarily directed) in D that joins two vertices of H is itself a path in H. (There are other notions of convexity. For example, it could be phrased in terms of directed paths; however the one given here is best suited for our present purposes.) The next lemma makes use of \(\mathrm{dist}'_D(x,y)\), the length of a shortest (xy)-path in D. (See Proposition 10.2.1.)

Lemma 10.9.1

A subdigraph H of \(D=D_1\,\Box \,\cdots \,\Box \,D_k\) is convex if and only if \(H=H_1\,\Box \,\cdots \,\Box \,H_k\), where each \(H_i\) is a convex subdigraph of \(D_i\).

Proof:

Suppose \(H=H_1\,\Box \,\cdots \,\Box \,H_k\), with each \(H_i\) a convex subdigraph of \(D_i\). We claim that any shortest path P joining two vertices \(a=(a_1,\ldots ,a_k)\) and \(b=\) \((b_1,\ldots ,b_k)\) in H lies entirely in H. By Proposition 10.2.1, the length of P is the sum of the lengths of the shortest \((a_i,b_i)\)-paths \(P_i\) in \(D_i\) for \(i\in [k]\). Because each arc of P projects to an arc in only one factor (and to single vertices in all the others) it follows that each projection \(\pi _i(P)\) is a shortest \((a_i,b_i)\) path in \(D_i\), and therefore lies entirely in \(H_i\), by convexity. Thus P lies entirely in \(H=H_1\,\Box \,\cdots \,\Box \,H_k\), so H is convex.

Conversely, suppose H is convex in D. Note \(H\subseteq \pi _1(H)\,\Box \,\cdots \,\Box \,\pi _k(H)\). We complete the proof by showing that the inclusion is equality, and each \(\pi _i(H)\) is convex in \(D_i\).

To see that \(\pi _i(H)\) is convex in \(D_i\), take vertices \(a_i,b_i\in \pi _i(H)\). Let \(x_i\) be on a shortest \((a_i,b_i)\)-path in \(D_i\). We must show \(x_i\in \pi _i(H)\). Choose vertices \(a=\) \((a_1,\ldots ,a_k)\) and \(b=(b_1,\ldots ,b_k)\) of H with \(\pi _i(a)=a_i\) and \(\pi _i(b)=b_i\). Define \(x=(x_1,\ldots ,x_k)\) as follows. For each index \(j\ne i\), let \(x_j\) be on a shortest \((a_j,b_j)\)-path in \(D_j\). Thus \(\mathrm{dist}'_{D_s}(a_s,b_s)=\) \(\mathrm{dist}'_{D_s}(a_s,x_s)+\mathrm{dist}'_{D_s}(x_s,b_s)\) for each \(s\in [k]\), and Proposition 10.2.1 gives \(\mathrm{dist}'_D(a,b)=\) \(\mathrm{dist}'_D(a,x)+\mathrm{dist}'_D(x,b)\). It follows that x is on a shortest (ab)-path in D, so \(x\in H\) by convexity of H. Hence \(x_i=\pi _i(x)\in \pi _i(H)\).

Finally, we prove that the inclusion \(H\subseteq \pi _1(H)\,\Box \,\cdots \,\Box \,\pi _k(H)\) is an equality. Since both sides are connected, it suffices to show that any vertex v of \(\pi _1(H)\,\Box \,\cdots \,\Box \,\pi _k(H)\) at distance 1 from a vertex \(x\in V(H)\) is also in H. Let \(v=(v_1,\ldots , v_i,\ldots ,v_k)\) and \(x=(v_1,\ldots , v_{i-1},x_i,v_{i+1},\ldots ,v_k)\) be such vertices. As v is in the product of the projections of H, there is a \(u=(x_1,\ldots , x_{i-1},v_i, x_{i+1},\ldots ,x_k)\in V(H)\). Proposition 10.2.1 says \(\mathrm{dist}'(x,v)+\mathrm{dist}'(v,u)=\mathrm{dist}'(x,u)\), meaning v is on a shortest path joining \(x,u\in V(H)\), so \(v\in V(H)\) by convexity of H. \(\square \)

Given a vertex \(a=(a_1,\ldots , a_k)\) of \(D_1\,\Box \,\cdots \,\Box \,D_k\), and an \(i\in [k]\), we define \(D_i^a\) to be the subdigraph of the product induced on the vertices \((a_1,\ldots ,a_{i-1}, x, a_{i+1},\ldots ,a_k)\), where \(x\in V(D_i)\). That is,

$$ D_i^a = a_1\,\Box \,\cdots \,\Box \,a_{i-1}\,\Box \,D_i\,\Box \,a_{i+1}\,\Box \,\cdots \,\Box \,a_k. $$

Thus \(D_i^a\cong D_i\), and it is a convex subdigraph of the product, by Lemma 10.9.1. We call \(D_i^a\) the \(D_i\) -layer through a. We are ready for our main results on prime factorization of digraphs over the Cartesian product.

Theorem 10.9.2

Connected digraphs factor uniquely into primes over the Cartesian product, up to order and isomorphism of the factors. Specifically, if a digraph D factors into primes as

$$ D=D_1\,\Box \,\cdots \,\Box \,D_k \quad \text {and} \quad D=G_1\,\Box \,\cdots \,\Box \,G_\ell , $$

then \(k=\ell \), and \(D_i\cong G_{\sigma (i)}\) for some permutation \(\sigma \) of [k].

Proof:

As remarked earlier, D has a prime factorization \(D=D_1\,\Box \,\cdots \,\Box \,D_k\). Now suppose D has two prime factorings \(D=D_1\,\Box \,\cdots \,\Box \,D_k\) and \(D=G_1\,\Box \,\cdots \,\Box \,G_\ell \). We may assume \(k\ge \ell \). Take an isomorphism

$$ \varphi :D_1\,\Box \,\cdots \,\Box \,D_k\rightarrow G_1\,\Box \,\cdots \,\Box \,G_\ell . $$

Fix \(a=(a_1,\ldots ,a_k)\), and say \(\varphi (a)=b=(b_1,\ldots ,b_\ell )\). It suffices to show \(k = \ell \), and there is a permutation \(\sigma \) of [k] for which \(\varphi (D_i^a) = G_{\sigma (i)}^b\) for \(1 \le i \le k\). (Recall \(D_i^a\cong D_i\) and \(G_{\sigma (i)}^b\cong G_{\sigma (i)}\).)

To this end, fix \(i\in [k]\). As mentioned above, any \(D_i^a\) is convex in \(D_1\,\Box \,\cdots \,\Box \,D_k\), so \(\varphi (D_i^a)\) is convex in \(G_1\,\Box \,\cdots \,\Box \,G_\ell \). Using Lemma 10.9.1,

$$ (b_1,\ldots ,b_\ell )\in \varphi (D_i^a)=H_1\,\Box \,\cdots \,\Box \,H_\ell , $$

where each \(H_j\) is a convex subdigraph of \(G_j\). But \(D_i\cong D_i^a\cong \varphi (D_i^a)\) is prime, so \(H_j=\{b_j\}\) for all but one index j, call it \(\sigma (i)\). This means \(\varphi (D_i^a)\subseteq G_{\sigma (i)}^{b}\). But then \(D_i^a \subseteq \varphi ^{-1}\big (G_{\sigma (i)}^{b}\big )\). Now, \(G_{\sigma (i)}^{b}\) is prime, and convex in \(G_1\,\Box \,\cdots \,\Box \,G_\ell \), so also \(\varphi ^{-1}\big (G_{\sigma (i)}^{b}\big )\) is prime, and convex in \(D_1\,\Box \,\cdots \,\Box \,D_k\). Lemma 10.9.1 gives

$$\begin{aligned} D_i^a\subseteq \varphi ^{-1}\big (G_{\sigma (i)}^{b}\big )=H_1'\,\Box \,\cdots \,\Box \,H_k', \end{aligned}$$
(10.11)

where each \(H_j'\) is a convex subdigraph of \(D_j\), containing \(a_j\). Primeness assures all but one \(H_j'\) is trivial, and necessarily it is \(H_i'\) that is nontrivial. Therefore (10.11) implies \(D_i^a\subseteq \varphi ^{-1}\big (G_{\sigma (i)}^{b}\big )\subseteq D_i^a\), whence \(\varphi (D_i^a)= G_{\sigma (i)}^{b}\).

We claim that the map \(\sigma :[k]\rightarrow [\ell ]\) is injective. If \(\sigma (i)=\sigma (j)\), then

$$ \varphi (D_i^a)= G_{\sigma (i)}^{b}=\varphi (D_j^a). $$

Because \(G_{\sigma (i)}^{b}\) is nontrivial (it is prime), it follows that \(D_i^a\) and \(D_j^a\) have a nontrivial intersection. This means \(i=j\), so \(\sigma \) is injective. Thus \(k\le \ell \). We have assumed \(k\ge \ell \), so \(k=\ell \), so \(\sigma \) is a permutation. \(\square \)

Theorem 10.9.2 implies our next result, which describes the structure of isomorphisms between digraphs. The proof uses the notation from the proof of Theorem 10.9.2.

Theorem 10.9.3

Let D and G be isomorphic connected digraphs with prime factorizations \(D=D_1\,\Box \,\cdots \,\Box \,D_k\) and \(G=G_1\,\Box \,\cdots \,\Box \,G_k\). Then for any isomorphism \(\varphi :D\rightarrow G\), there is a permutation \(\sigma \) of [k] and isomorphisms \(\varphi _i:D_{\sigma (i)}\rightarrow G_i\) for which

$$\begin{aligned} \varphi (x_1,x_2,\ldots ,x_k)= \big (\varphi _1(x_{\sigma (1)}), \varphi _2(x_{\sigma (2)}), \ldots , \varphi _k(x_{\sigma (k)})\big ). \end{aligned}$$
(10.12)

Proof:

By Theorem 10.9.2, there is a permutation \(\sigma \) of [k] for which \(\varphi \) restricts to an isomorphism \(D_i^a\rightarrow G_{\sigma (i)}^{b}\) for each index i. Replacing \(\sigma \) with \(\sigma ^{-1}\), we can say that, for each i, \(\varphi \) restricts to an isomorphism \(D_{\sigma (i)}^a\rightarrow G_i^{b}\).

To finish the proof, we show that \(\pi _i\varphi (x_1,\ldots ,x_k)\) depends only on \(x_{\sigma (i)}\). Then we can put \(\varphi _i(x_{\sigma (i)})=\) \(\pi _i\varphi (x_1,\ldots ,x_k)\), which yields Equation (10.12), and it is immediate that the \(\varphi _i\) are isomorphisms.

For any \(x_{\sigma (i)}\in V(D_{\sigma (i)})\), define the “hyperplane” subdigraph

$$ B[x_{\sigma (i)}]:=D_1\,\Box \,D_2\,\Box \,\cdots \,\Box \,x_{\sigma (i)}\,\Box \,\cdots \,\Box \,D_k \;\subseteq \;D_1\,\Box \,\cdots \,\Box \,D_k, $$

whose \(\sigma (i)\)th factor is the single vertex \(x_{\sigma (i)}\). This subdigraph is convex, so Lemma 10.9.1 says \(\varphi (B[x_{\sigma (i)}])=U_1\,\Box \,\cdots \,\Box \,U_k\), with each \(U_j\) convex in \(G_j\).

Now, \(B[x_{\sigma (i)}]\cap D_{\sigma (i)}^a=\) \(\{(a_1,a_2,\ldots ,x_{\sigma (i)},\ldots ,a_k)\}\). Thus \(\varphi (B[x_{\sigma (i)}])=U_1\,\Box \,\cdots \,\Box \,U_k\) meets \(\varphi (D_{\sigma (i)}^a)= G_i^{b}=b_1\,\Box \,\cdots \,\Box \,G_i\,\Box \,\cdots \,\Box \,b_k\) at the single vertex \(\varphi (a_1,a_2,\ldots ,x_{\sigma (i)},\ldots ,a_k)\). This means all vertices in \(\varphi (B[x_{\sigma (i)}])\) have the same ith coordinate \(\pi _i\varphi (a_1,a_2,\ldots ,x_{\sigma (i)},\ldots ,a_k)\), so

$$ \pi _i\big (\varphi (B[x_{\sigma (i)}])\big )=\pi _i\varphi (a_1,a_2,\ldots ,x_{\sigma (i)},\ldots ,a_k). $$

Now, any \((x_1,\ldots ,x_{\sigma (i)},\ldots , x_k)\in V(D)\) belongs to \(B[x_{\sigma (i)}]\). Consequently \(\pi _i\varphi (x_1,\ldots ,x_{\sigma (i)},\ldots , x_k)\) \(= \pi _i\varphi (a_1,\ldots ,x_{\sigma (i)},\ldots ,a_k)\), which depends only on \(x_{\sigma (i)}\). \(\square \)

In Theorem 10.9.3, we may relabel each vertex x of \(G_i\) with its preimage under the isomorphism \(\varphi _i:D_{\sigma (i)}\rightarrow G_i\) to make this isomorphism an identity map. We record this observation as a useful corollary.

Corollary 10.9.4

For an isomorphism \(\varphi : D_1\,\Box \,\cdots \,\Box \,D_k \rightarrow G_1\,\Box \,\cdots \,\Box \,G_k\) where each \(D_i\) and \(G_i\) is prime, the vertices of the \(G_i\) can be relabeled so that \(\varphi (x_1,x_2,\ldots , x_k)=(x_{\sigma (1)},x_{\sigma (2)},\ldots , x_{\sigma (k)})\) for some permutation \(\sigma \) of [k].

We turn now to the lexicographic product. It is not commutative, so we should not expect a prime factorization to be unique up to order of the factors. Indeed this is not so, but there is a fascinating relationship between different prime factorings. Explaining it requires the idea of the join \(D\oplus G\) of two digraphs with disjoint vertex sets, which is the digraph obtained from \(D+G\) by adding arcs from each vertex of D to every vertex of G, and from each vertex of G to every vertex of D.

Recall that the right-distributive law holds for \(\circ \), but there is no general left-distributive law. However, if \(K_n\) is the biorientation of the complete graph on n vertices, and \(D_n\) (the arcless digraph) is its complement, we do have

$$\begin{aligned} D_n\circ (G+H)&= D_n\circ G + D_n\circ H,\\ K_n\circ (G\oplus H)&= K_n\circ G \oplus K_n\circ H. \end{aligned}$$

The first equation follows from Proposition 10.1.1. The second follows from the first, with the observation that \(G\oplus H=\overline{\overline{G}+\overline{H}}\) and \(\overline{D\circ D'}=\overline{D}\circ \overline{D'}\) (Equation (10.3)), where the bar denotes the complement.

We see now that unique prime factorization over the lexicographic product can fail in at least two ways: If q is prime and if \(D_q \circ G + D_m\) is prime, then

$$ (D_q \circ G+ D_m) \circ D_q = D_q \circ (G \circ D_q + D_m) $$

are two different prime factorizations of the same graph. We say they are related by a transposition of a totally disconnected graph. Analogously, if \(K_q \circ G \oplus K_m\) is prime, then

$$ (K_q \circ G \oplus K_m) \circ K_q = K_q \circ (G \circ K_q \oplus K_m) $$

are two different prime factorizations of the same graph, and we say they are related by a transposition of a complete graph. Also, we call the transition from \(TT_m \circ TT_n\) to \(TT_n \circ TT_m\) a transposition of transitive tournaments. (Recall that transitive tournaments commute, by Equation (10.2).)

Our final theorem of the section is due to Dörfler and Imrich [8].

Theorem 10.9.5

Any prime factorization of a digraph over the lexicographic product can be transformed into any other prime factorization by transpositions of totally disconnected graphs, transpositions of complete graphs, and transpositions of transitive tournaments.

10.10 Cartesian Skeletons

The previous section developed prime factorization results for the Cartesian and lexicographic products. In order to get analogous results for the direct and strong products, we first need to define what is called the Cartesian skeleton of a digraph. This is an operator S that transforms a digraph D into a symmetric digraph S(D), and, under suitable conditions, obeys \(S(D\times G)=S(D)\,\Box \,S(G)\). In the subsequent section we will use it to transform questions about factorizations over \(\times \) to the more manageable product \(\,\Box \,\) (which was treated in the previous section).

Our exposition is a generalization to digraphs of Hammack and Imrich [16], which developed S in the setting of graphs. We also draw inspiration from Hellmuth and Marc [24], who devised a similar skeleton operator for which \(S(D{\,\boxtimes \,}G)=S(D)\,\Box \,S(G)\). The present development is from [17].

We need several definitions. An antiwalk in a digraph is a walk in which the orientations of the arcs alternate as the walk is traversed. An out-antiwalk is an antiwalk for which the first and last arcs are directed away from the end-vertices of the walk. An in-antiwalk is one for which the first and last arcs are directed towards the end-vertices. See Figure 10.11. Notice that in- and out-antiwalks necessarily have even length.

Figure 10.11

An out-antiwalk (top) and an in-antiwalk (bottom).

For a digraph D, let \(D^+\) be the symmetric digraph on V(D) for which \(xy, yx\in A(D^+)\) whenever D has an out-antiwalk of length 2 from x to y, that is, if \(N_D^+(x)\cap N_D^+(y)\ne \emptyset \). See Figure 10.12 (left), where a dotted line between x and y represents two arcs xy and yx in \(D^+\). It is immediate from the definitions that \((D\times G)^+=D^+\times G^+\). Note that \(D^+\) has a loop at each vertex of positive out-degree. We define \(D^-\) similarly, where \(xy, yx\in A(D^-)\) provided \(N_D^-(x)\cap N_D^-(y)\ne \emptyset \). Again, \((D\times G)^-=D^-\times G^-\). Because they are symmetric digraphs, \(D^+\) and \(D^-\) can be regarded as graphs. (In a different context [49, 50], \(D^+\) is also called the competition graph of D.)

Observe that \(D^+\) is connected if and only if any two vertices of D are joined by an out-antiwalk in D, and \(D^-\) is connected if and only if any two vertices of D are joined by an in-antiwalk in D.

We now explain how to construct Cartesian skeletons \(S^+(D)\) and \(S^-(D)\) of a digraph D by removing strategic edges from \(D^+\) and \(D^-\). Given a factoring \(D=H\times K\), we say an arc \((h,k)(h',k')\) of \(D^+\) is diagonal relative to the factoring if it is a loop, or \(h\ne h'\) and \(k\ne k'\); otherwise it is Cartesian. For example, in Figure 10.12, arcs xz and zy of \(D^+\) are Cartesian, and arcs xy and yy of \(D^+\) are diagonal. We note two intrinsic criteria that tell us if a non-loop arc of \(D^+\) is diagonal relative to some factoring of D.

  1.

    In Figure 10.12, arc xy of \(D^+\) is not Cartesian, and there is a \(z\in V(D)\) with \(N_D^+(x)\cap N_D^+(y)\subset N_D^+(x)\cap N_D^+(z)\) and \(N_D^+(x)\cap N_D^+(y)\subset N_D^+(y)\cap N_D^+(z)\).

  2.

    In Figure 10.12, arc \(x'y'\) of \(D^+\) is not Cartesian, and there is a \(z'\in V(D)\) with \(N_D^+(x')\subset N_D^+(z')\subset N_D^+(y')\).

We will get \(S^+(D)\) by removing from \(D^+\) all loops, and arcs that meet one of these criteria. Now, these criteria are somewhat dependent on one another. Note \(N_D^+(x)\subset N_D^+(z)\subset N_D^+(y)\) implies \(N_D^+(y)\cap N_D^+(x)\subset N_D^+(y)\cap N_D^+(z)\). Also, \(N_D^+(y)\subset N_D^+(z)\subset N_D^+(x)\) implies \(N_D^+(x)\cap N_D^+(y)\subset \) \(N_D^+(x)\cap N_D^+(z)\). This allows us to pack the above criteria into the following definition.

Definition 10.10.1

An arc xy of \(D^+\) is dispensable in \(D^+\) if it is a loop, or if there is some \(z\in V(D)\) for which both of the following statements hold:

  1.

    \(N_D^+(x)\cap N_D^+(y)\subset N_D^+(x)\cap N_D^+(z)\;\;\) or \( \;\;N_D^+(x)\subset N_D^+(z)\!\subset N_D^+(y)\),

  2.

    \(N_D^+(y)\cap N_D^+(x)\subset N_D^+(y)\cap N_D^+(z)\;\;\) or \(\;\;N_D^+(y)\subset N_D^+(z)\!\subset N_D^+(x)\).

Similarly, an arc xy of \(D^-\) is dispensable in \(D^-\) if the above conditions hold with \(N_D^-\) used in the place of \(N_D^+\).

Figure 10.12

Left: Digraphs H, K, \(H\times K\) (bold), and \(H^+\!\), \(K^+\!\), \((H\times K)^+\) (dotted). Right: Digraphs H, K, \(H\times K\) (bold), and \(S^+(H)\), \(S^+(K)\), \(S^+(H\times K)\) (dotted). Note that \((H\times K)^+=H^+\times K^+\), and \(S^+(H\times K)=S^+(H)\,\Box \,S^+(K)\).

Note that the above statements (1) and (2) are symmetric in x and y. The next remark follows from the paragraph preceding the definition. It will be used often.

Remark 10.10.2

An arc xy of \(D^+\) is dispensable in \(D^+\) if and only if there is a \(z\in V(D)\) with \(N_D^+(x)\subset N_D^+(z)\!\subset N_D^+(y)\), or \(N_D^+(y)\subset N_D^+(z)\!\subset N_D^+(x)\), or \(N_D^+(x)\cap N_D^+(y)\subset N_D^+(x)\cap N_D^+(z)\) and \(N_D^+(y)\cap N_D^+(x)\subset N_D^+(y)\cap N_D^+(z)\). The same remark holds for dispensability in \(D^-\) (replacing \(N^+\) with \(N^-\)).

Now we come to the main definition of this section.

Definition 10.10.3

The Cartesian out-skeleton \(S^+(D)\) of a digraph D is the spanning subgraph of \(D^+\) obtained by deleting all arcs that are dispensable in \(D^+\). The Cartesian in-skeleton \(S^-(D)\) of D is the spanning subgraph of \(D^-\) obtained by deleting all arcs that are dispensable in \(D^-\). The Cartesian skeleton S(D) of D is the graph with vertices V(D) and arcs \(A(S(D))=A(S^+(D))\cup A(S^-(D))\).

Note that each of \(D^+\!\), \(D^-\!\), \(S^+(D)\), \(S^-(D)\) and S(D) is a symmetric digraph. We thus tend to refer to them as graphs, and call their arcs edges.

As an example, the right side of Figure 10.12 is the same as the left, except that all dispensable edges of \(H^+\), \(K^+\), and \((H\times K)^+\) are deleted. Thus the remaining dotted edges are \(S^+(H), S^+(K)\), and \(S^+(H\times K)\). Note that although \(S^+(D)\) was defined without regard to the factoring \(D=H\times K\), we nonetheless have \(S^+(H\times K)=S^+(H) \,\Box \,S^+(K)\). In fact, we will shortly prove that this equation holds for each of \(S^+, S^-\) and S, under mild restrictions.
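Definition 10.10.1 is entirely neighborhood-based, so \(S^+(D)\) can be computed directly from the out-neighborhoods of D. The following minimal sketch is our own illustration; \(S^-(D)\) is obtained in the same way from in-neighborhoods, and S(D) is the union of the two edge sets.

def out_neighborhoods(vertices, arcs):
    N = {v: set() for v in vertices}
    for (u, w) in arcs:
        N[u].add(w)
    return N

def proper(a, b):
    return a < b                                    # proper subset of sets

def skeleton_plus(vertices, arcs):
    """Edges of the Cartesian out-skeleton S^+(D), per Definition 10.10.1."""
    N = out_neighborhoods(vertices, arcs)
    # Edges of D^+: pairs with a common out-neighbor (loops included).
    d_plus = {(x, y) for x in vertices for y in vertices if N[x] & N[y]}
    def dispensable(x, y):
        if x == y:                                  # loops are dispensable
            return True
        for z in vertices:
            c1 = proper(N[x] & N[y], N[x] & N[z]) or (proper(N[x], N[z]) and proper(N[z], N[y]))
            c2 = proper(N[y] & N[x], N[y] & N[z]) or (proper(N[y], N[z]) and proper(N[z], N[x]))
            if c1 and c2:
                return True
        return False
    return {(x, y) for (x, y) in d_plus if not dispensable(x, y)}

Applying skeleton_plus to two factors and to their direct product gives a quick way to test the equality \(S^+(H\times K)=S^+(H)\,\Box \,S^+(K)\) of the next proposition on small examples.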

These restrictions involve certain equivalence relations on the vertex set of a digraph. Define an equivalence relation \(R^+\) on V(D) by declaring \(xR^+y\) whenever \(N_D^+(x)=N_D^+(y)\). A digraph is called \(\mathbf{R}^+\)-thin if \(N_D^+(x)=N_D^+(y)\) implies \(x=y\) for all \(x,y\in V(D)\), that is, if each \(R^+\)-class contains exactly one vertex. Similarly, we define \(R^-\) and \(\mathbf{R}^-\)-thinness as above, but replacing \(N_D^+\) with \(N_D^-\). Finally, we say D is R-thin if it is both \(R^+\)-thin and \(R^-\)-thin. We will need the following.

Lemma 10.10.4

Let H and K be digraphs for which all vertices have positive in- and out-degrees. Then H and K are \(R^+\!\)-thin (respectively \(R^-\!\)-thin) if and only if \(H\times K\) is \(R^+\!\)-thin (respectively \(R^-\!\)-thin). Consequently H and K are R-thin if and only if \(H\times K\) is R-thin.

Proof:

Immediate from \(N_{H\times K}^+(x,y)=N_H^+(x)\times N_K^+(y)\) (Equation 10.6) and its companion \(N_{H\times K}^-(x,y)=N_H^-(x)\times N_K^-(y)\), combined with the fact that no neighborhoods are empty. \(\square \)
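The thinness conditions are equally easy to test. A small sketch in the same style as before (names ours):

```python
def is_R_plus_thin(D):
    """R^+-thin: distinct vertices have distinct out-neighborhoods."""
    nbhds = [frozenset(D[v]) for v in D]
    return len(set(nbhds)) == len(nbhds)

def is_R_thin(D):
    """R-thin: both R^+-thin and R^--thin."""
    in_nbrs = {v: {u for u in D if v in D[u]} for v in D}
    return is_R_plus_thin(D) and is_R_plus_thin(in_nbrs)
```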

The next lemma and proposition show \(S^+(H\times K)=S^+(H) \,\Box \,S^+(K)\) and \(S^-(H\times K)=S^-(H) \,\Box \,S^-(K)\) for \(R^+\!\)- and \(R^-\!\)-thin digraphs. The proofs frequently use the fact that for \(D=H\times K\),

$$\begin{aligned} N^+_D(h,k)\cap N^+_D(h',k')= & {} \big (N^+_H(h)\cap N^+_H(h') \big )\times \big (N^+_K(k)\cap N^+_K(k') \big ), \end{aligned}$$

which follows from \(N_D^+(h,k)=N_H^+(h)\times N_K^+(k)\) and simple set theory.
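The "simple set theory" here is just the identity
$$ (A\times B)\cap (A'\times B') = (A\cap A')\times (B\cap B'), $$
valid for arbitrary sets, because \((a,b)\) lies in both \(A\times B\) and \(A'\times B'\) precisely when \(a\in A\cap A'\) and \(b\in B\cap B'\); apply it with \(A=N_H^+(h)\), \(A'=N_H^+(h')\), \(B=N_K^+(k)\), and \(B'=N_K^+(k')\).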

Lemma 10.10.5

Suppose D is a digraph with a factorization \(D=H\times K\). If D is \(R^+\)-thin, then every arc of \(S^+(D)\) is Cartesian with respect to the factorization. Similarly, if D is \(R^-\)-thin, then every arc of \(S^-(D)\) is Cartesian with respect to the factorization.

Proof:

We prove only the first statement. The proof of the second is identical, but replaces \(N^+\) with \(N^-\), and the notion of \(R^+\!\)-thinness with \(R^-\!\)-thinness.

Let \((h,k)(h',k')\) be a non-Cartesian edge of \(D^+\). We need only show that it is dispensable. It is certainly dispensable if it is a loop. Otherwise \(h\ne h'\) and \(k\ne k'\). Observe:

$$\begin{aligned} N_D^+(h,k)\cap N_D^+(h',k')= & {} \big (N_H^+(h)\cap N_H^+(h')\big )\times \big (N_K^+(k)\cap N_K^+(k')\big )\\\subseteq & {} \;N_H^+(h) \times \big (N_K^+(k)\cap N_K^+(k')\big )\\= & {} N_D^+(h,k)\cap N_D^+(h,k')\,,\\[6pt] N_D^+(h',k')\cap N_D^+(h,k)= & {} \big (N_H^+(h')\cap N_H^+(h)\big )\times \big (N_K^+(k')\cap N_K^+(k)\big )\\\subseteq & {} \;\big (N_H^+(h')\cap N_H^+(h)\big )\times N_K^+(k')\\= & {} N_D^+(h',k')\cap N_D^+(h,k')\,. \end{aligned}$$

If both of these inclusions are proper, then \((h,k)(h',k')\) is dispensable. If one inclusion is equality, then \(N_H^+(h)\cap N_H^+(h')=N_H^+(h)\) in the first case or \(N_K^+(k')\cap N_K^+(k)=N_K^+(k')\) in the second. From this, \(N_H^+(h)\subseteq N_H^+(h')\) or \(N_K^+(k')\subseteq N_K^+(k)\). By \(R^+\!\)-thinness,

$$\begin{aligned} N_H^+(h)\subset N_H^+(h') \;\;\;\text { or }\;\;\; N_K^+(k')\subset N_K^+(k). \end{aligned}$$
(10.13)

Repeating this argument but interchanging h with \(h'\), and k with \(k'\),

$$\begin{aligned} N_H^+(h')\subset N_H^+(h) \;\;\; \text{ or } \;\;\; N_K^+(k)\subset N_K^+(k'). \end{aligned}$$
(10.14)

Inclusions (10.13) and (10.14) show \(N_H^+(h)\subset N_H^+(h')\) and \(N_K^+(k)\subset N_K^+(k')\), or \(N_K^+(k')\subset N_K^+(k)\) and \(N_H^+(h')\subset N_H^+(h)\). The first case gives

$$\begin{aligned} N_H^+(h)\times N_K^+(k)\subset N_H^+(h)\times N_K^+(k') \subset N_H^+(h')\times N_K^+(k'), \end{aligned}$$

that is, \(N_D^+(h,k)\subset N_D^+(h,k')\subset N_D^+(h',k')\), so \((h,k)(h',k')\) is dispensable. The second case yields \(N_D^+(h',k')\subset N_D^+(h,k')\subset N_D^+(h,k)\), with the same conclusion. \(\square \)

Proposition 10.10.6

If H and K are \(R^+\!\)-thin digraphs with no vertices of zero out-degree, then \(S^+(H\times K)=S^+(H)\,\Box \,S^+(K)\). If H and K are \(R^-\!\)-thin, with no vertices of zero in-degree, then \(S^-(H\times K)=S^-(H)\,\Box \,S^-(K)\).

Proof:

Again, we prove only the first statement; the proof of the second is entirely analogous.

First we show \(S^+(H\times K) \subseteq S^+(H)\,\Box \,S^+(K)\). By Lemma 10.10.5, all arcs of \(S^+(H\times K)\) are Cartesian, so we need only show \((h,k)(h',k)\in S^+(H\times K)\) implies \(hh'\in S^+(H)\). (The same argument will work for arcs \((h,k)(h,k')\).) Thus suppose \(hh'\notin S^+(H)\). Then \(hh'\) is dispensable in \(H^+\), so there is a \(z'\) in V(H) for which both of the following conditions hold:

$$\begin{aligned} \begin{array}{rcl} N_H^+(h)\cap N_H^+(h')\subset N_H^+(h)\cap N_H^+(z') &{} \text { or } &{} N_H^+(h)\subset N_H^+(z')\!\subset N_H^+(h')\\ N_H^+(h')\cap N_H^+(h)\subset N_H^+(h')\cap N_H^+(z') &{} \text { or } &{} N_H^+(h')\subset N_H^+(z')\!\subset N_H^+(h). \end{array} \end{aligned}$$

Because there are no vertices of zero out-degree, \(N_K^+(k)\ne \emptyset \). Thus we can multiply each neighborhood \(N_H^+(u)\) above by \(N_K^+(k)\) on the right and still preserve the proper inclusions. Then the fact \(N_H^+(u)\times N_K^+(k) = N^+_{H \times K}(u,k)\) yields the dispensability conditions (1) and (2), where \(x = (h,k)\), \(y = (h',k)\) and \(z = (z', k)\). Thus \((h,k)(h',k)\notin S^+(H\times K)\).

Now we show \(S^+(H)\,\Box \,S^+(K)\subseteq S^+(H\times K)\). Take an arc in \(S^+(H)\,\Box \,S^+(K)\), say \((h,k)(h',k)\) with \(hh'\in S^+(H)\). We must show that \((h,k)(h',k)\) is not dispensable in \((H\times K)^+\). Suppose it were. Then there would be a vertex \(z = (z', z'')\) in \(H\times K\) such that the dispensability conditions (1) and (2) hold for \(x = (h,k)\), \(y = (h',k)\), and \(z = (z',z'')\). The various cases are considered below. Each leads to a contradiction.

Suppose \(N_D^+(x)\subset N_D^+(z)\subset N_D^+(y)\). This means

$$ N_H^+(h)\times N_K^+(k) \subset N_H^+(z')\times N_K^+(z'')\subset N_H^+(h')\times N_K^+(k), $$

so \(N_K^+(z'') = N_K^+(k)\). Then the fact that \(N_K^+(k)\ne \emptyset \) permits cancellation of the common factor \(N_K^+(k)\), so \(N_H^+(h)\subset N_H^+(z')\subset N_H^+(h')\), and \(hh'\) is dispensable. We reach the same contradiction if \(N_D^+(y)\subset N_D^+(z)\!\subset N_D^+(x)\).

Finally, suppose there is a \(z=(z',z'')\) for which both \(N_D^+(x)\cap N_D^+(y)\subset N_D^+(x)\cap N_D^+(z)\) and \(N_D^+(y)\cap N_D^+(x)\subset N_D^+(y)\cap N_D^+(z)\). Rewrite this as

$$ \begin{array}{rcl} N_D^+(h,k) \cap N_D^+(h',k) &{} \subset &{} N_D^+(h,k) \cap N_D^+(z',z'')\\ N_D^+(h',k) \cap N_D^+(h,k) &{} \subset &{} N_D^+(h',k) \cap N_D^+(z',z''), \end{array} $$

which is the same as

$$ \begin{array}{rcl} \big (N_H^+(h)\cap N_H^+(h')\big ) \times N_K^+(k) &{} \subset &{} \big (N_H^+(h)\cap N_H^+(z')\big ) \times \big (N_K^+(k)\cap N_K^+(z'')\big )\\ \big (N_H^+(h')\cap N_H^+(h)\big ) \times N_K^+(k) &{} \subset &{} \big (N_H^+(h')\cap N_H^+(z')\big ) \times \big (N_K^+(k)\cap N_K^+(z'')\big ). \end{array} $$

Because \(hh'\) is an edge of \(H^+\!\), we have \(N_H^+(h)\cap N_H^+(h')\ne \emptyset \), so comparing second factors gives \(N_K^+(k)\subseteq N_K^+(k)\cap N_K^+(z'')\). Hence \(N_K^+(k)= N_K^+(k)\cap N_K^+(z'')\), whence

$$ \begin{array}{rcl} N_H^+(h)\cap N_H^+(h') &{} \subset &{} N_H^+(h)\cap N_H^+(z')\\ N_H^+(h')\cap N_H^+(h) &{} \subset &{} N_H^+(h')\cap N_H^+(z')\,. \end{array} $$

Thus \(hh'\) is dispensable, a contradiction. \(\square \)

The next corollary follows from Proposition 10.10.6 and Definition 10.10.3, together with the definition of the Cartesian product. (Recall that a digraph is R-thin if it is both \(R^+\!\)-thin and \(R^-\!\)-thin.)

Corollary 10.10.7

Suppose K and H are R-thin digraphs, no vertices of which have zero in- or out-degree. Then \(S(K\times H)=S(K)\,\Box \,S(H)\).
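As an illustrative check (not from the text), the corollary can be verified by computer on small examples. The sketch below reuses `skeleton` from the earlier sketch; the helpers `direct_product` and `box_edges` and the two example digraphs are our own, chosen only so that both factors are R-thin with all in- and out-degrees positive.

```python
def direct_product(H, K):
    """Direct product H x K: arc (h,k)(h',k') iff hh' in A(H) and kk' in A(K)."""
    return {(h, k): {(h2, k2) for h2 in H[h] for k2 in K[k]}
            for h in H for k in K}

def box_edges(E_H, E_K, VH, VK):
    """Edge set of S(H) □ S(K), given the skeleton edge sets E_H and E_K."""
    edges = set()
    for e in E_H:                                   # an S(H)-edge in every K-layer
        h, h2 = tuple(e)
        edges |= {frozenset(((h, k), (h2, k))) for k in VK}
    for e in E_K:                                   # an S(K)-edge in every H-layer
        k, k2 = tuple(e)
        edges |= {frozenset(((h, k), (h, k2))) for h in VH}
    return edges

H = {0: {1, 2}, 1: {2}, 2: {0}}          # R-thin, all in- and out-degrees positive
K = {'a': {'b'}, 'b': {'a'}}             # the symmetric pair of arcs a <-> b

lhs = skeleton(direct_product(H, K))                           # S(H x K)
rhs = box_edges(skeleton(H), skeleton(K), list(H), list(K))    # S(H) □ S(K)
print(lhs == rhs)                        # expected: True, by Corollary 10.10.7
```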

Because the various skeletons are defined entirely in terms of adjacency structure, we have the following immediate consequence of Definition 10.10.3.

Proposition 10.10.8

Any isomorphism \(\varphi : D\rightarrow D'\) between digraphs, as a map \(V(D)\rightarrow V(D')\), is also an isomorphism \(\varphi :S(D)\rightarrow S(D')\).

We next consider connectivity of S(D). The following lemma is needed.

Lemma 10.10.9

Suppose a digraph D has no vertex of zero out-degree, and \(x,y\in V(D)\). If \(N_D^+(x)\subset N_D^+(y)\), then \(D^+\) has an \((x,y)\)-path consisting of edges that are non-dispensable in \(D^+\). Similarly, if no vertex of D has zero in-degree and \(N_D^-(x)\subset N_D^-(y)\), then \(D^-\) has an \((x,y)\)-path consisting of edges that are non-dispensable in \(D^-\).

Proof:

We prove the first statement; the second follows analogously.

Consider the following maximal chain of neighborhoods between \(N_D^+(x)\) and \(N_D^+(y)\), ordered by proper inclusion. (It is possible that \(y_1=y\).)

$$\begin{aligned} N_D^+(x)\subset N_D^+(y_1) \subset N_D^+(y_2) \subset N_D^+(y_3) \subset \cdots \subset N_D^+(y_k) \subset N_D^+(y). \end{aligned}$$

We claim that \(xy_1\) is non-dispensable in \(D^+\). Certainly \(N_D^+(x)\subset N_D^+(y_1)\) implies \(xy_1\) is an edge of \(D^+\), because \(N_D^+(x)\ne \emptyset \). Also, there is no z for which \(N_D^+(x)\cap N_D^+(y_1)\subset N_D^+(x)\cap N_D^+(z)\); otherwise the condition \(N_D^+(x)\subset N_D^+(y_1)\) would yield \(N_D^+(x) \subset N_D^+(x)\cap N_D^+(z)\), which is impossible. As the chain is maximal, there is no z for which \(N_D^+(x)\subset N_D^+(z)\subset N_D^+(y_1)\). Further, \(N_D^+(y_1)\subset N_D^+(z)\subset N_D^+(x)\) is impossible, so \(xy_1\) is non-dispensable in \(D^+\).

The same argument shows that each \(y_iy_{i+1}\) is a non-dispensable edge of \(D^+\), as is \(y_ky\). Thus we have the required path \(xy_1y_2\ldots y_ky\). \(\square \)

Let us define a digraph to be anti-connected if any two of its vertices are joined by an antiwalk of even length. It should be clear that a direct product of digraphs is anti-connected if and only if all of its factors are anti-connected.

Proposition 10.10.10

If D is anti-connected, then S(D) is connected.

Proof:

Take \(x_1,x_2\in V(S(D))=V(D)\). Suppose first that they are joined by an (even) out-antiwalk W in D. As \(E(S(D)) = E(S^+(D))\cup E(S^-(D))\), and because \(D^+\) has an \((x_1,x_2)\)-path P on alternate vertices of W, it suffices to show that for any dispensable edge xy of P, there is an \((x,y)\)-path in \(D^+\) consisting of non-dispensable edges. In fact, we will prove this for any edge xy of \(D^+\). Given such an edge xy, define the integer

$$ k_{xy}=\max \bigl \{\,|N_D^+(u)\cap N_D^+(v)|-|N_D^+(x)\cap N_D^+(y)| \;:\; u, v\in V(D),\ u\ne v\,\bigr \}. $$

Notice \(k_{xy}\ge 0\). (Put \(u=x\) and \(v=y\).) If \(k_{xy}=0\), then the definition of \(k_{xy}\) implies that there is no z for which \(N_D^+(x)\cap N_D^+(y)\subset N_D^+(x)\cap N_D^+(z)\) or \(N_D^+(y)\cap N_D^+(x)\subset N_D^+(y)\cap N_D^+(z)\). Then \(N_D^+(x)\subset N_D^+(z)\subset N_D^+(y)\) is also impossible, as it implies \(N_D^+(y)\cap N_D^+(x)\subset N_D^+(y)\cap N_D^+(z)\). Therefore xy is not dispensable if \(k_{xy}=0\).

Take \(N>0\), and assume that whenever \(D^+\) has an edge xy with \(k_{xy}<N\), there is an \((x,y)\)-path in \(D^+\) composed of non-dispensable edges. Now suppose xy is dispensable and \(k_{xy}=N\). If \(N_D^+(x)\subset N_D^+(y)\) or \(N_D^+(y)\subset N_D^+(x)\), then we are done by Lemma 10.10.9, so assume \(N_D^+(x)\not \subset N_D^+(y)\) and \(N_D^+(y)\not \subset N_D^+(x)\). As xy is dispensable, there is a vertex z with

$$\begin{aligned} N_D^+(x)\cap N_D^+(y)\subset N_D^+(x)\cap N_D^+(z)&\text {and }&N_D^+(y)\cap N_D^+(x)\subset N_D^+(y)\cap N_D^+(z). \end{aligned}$$

This implies \(N_D^+(x)\cap N_D^+(z)\ne \emptyset \ne N_D^+(y)\cap N_D^+(z)\), so \(xz,yz\in E(D^+)\). But it also means

$$ |N_D^+(u)\cap N_D^+(v)|-|N_D^+(x)\cap N_D^+(z)|\;<\;|N_D^+(u)\cap N_D^+(v)|-|N_D^+(x)\cap N_D^+(y)| $$

for all u, v, so \(k_{xz}<k_{xy}\). Similarly, \(k_{zy}<k_{xy}\). The induction hypothesis guarantees \((x,z)\)- and \((z,y)\)-paths of non-dispensable edges in \(D^+\), so we have an \((x,y)\)-path of non-dispensable edges in \(D^+\).

To finish the proof, we must treat the case where \(x_1\) and \(x_2\) are joined by an in-antiwalk. Just repeat the above argument with \(N^-\) and \(D^-\). \(\square \)

10.11 Prime Factorings of Direct and Strong Products

Now we turn to prime factorings over the direct product. Recall that the one-vertex digraph \(K_1^*\) with a loop is the unit for the direct product, that is, \(K_1^*\times D=D\) for every digraph D, and \(K_1^*\) is the unique digraph with this property. Thus we say a digraph D is prime over the direct product if it has more than one vertex, and in any factoring \(D=G\times H\), one factor is \(K_1^*\) and the other is isomorphic to D. For this reason, the entire discussion of prime factorization over the direct product takes place in the class \(\mathscr {D}_0\) of digraphs that may have loops.

This section adopts the approach of Hammack and Imrich [17]. The next lemma uses the Cartesian skeleton and unique prime factorization over \(\,\Box \,\) to deliver a key ingredient to the proof of our unique prime factorization theorem for the direct product (Theorem 10.11.2).

Lemma 10.11.1

Suppose \(\varphi :D_1\times \cdots \times D_k\rightarrow G_1\times \cdots \times G_\ell \) is an isomorphism, where all the factors are anti-connected and R-thin, and that we have \(\varphi (x_1,\ldots ,x_k)= \big ( \varphi _1(x_1,\ldots ,x_k), \varphi _2(x_1,\ldots ,x_k),\ldots ,\varphi _\ell (x_1,\ldots ,x_k)\big )\). If a factor \(D_i\) is prime, then exactly one of the functions \(\varphi _1,\varphi _2,\ldots ,\varphi _\ell \) depends on \(x_i\).

Proof:

By commutativity and associativity, it suffices to prove the lemma for the case \(k=\ell =2\), and with \(D_1\) prime. Thus take an isomorphism \(\varphi :D_1\times D_2\rightarrow G_1\times G_2\), where \(\varphi (x_1,x_2)=\) \(\big ( \varphi _1(x_1,x_2), \varphi _2(x_1,x_2)\big )\). We will prove the lemma by showing that if it is not the case that exactly one of \(\varphi _1\) and \(\varphi _2\) depends on \(x_1\), then \(D_1\) is not prime.

Certainly if neither \(\varphi _1\) nor \(\varphi _2\) depends on \(x_1\), then the fact that \(\varphi \) is bijective means that \(|V(D_1)|=1\), so \(D_1\) is not prime. Thus assume that both \(\varphi _1\) and \(\varphi _2\) depend on \(x_1\). This means each of \(D_1,G_1\), and \(G_2\) has more than one vertex. If \(D_2\) had only one vertex, then \(D_1\cong G_1\times G_2\), and \(D_1\) would not be prime. Thus each factor \(D_1,D_2,G_1\), and \(G_2\) has more than one vertex. Taking skeletons, and applying Proposition 10.10.8, we see that \(\varphi \) is also an isomorphism \(\varphi :S(D_1\times D_2)\rightarrow S(G_1\times G_2)\). Because all factors are R-thin (and anti-connectedness implies that all vertices have positive in- and out-degrees), Corollary 10.10.7 applies, and we have an isomorphism

$$\begin{aligned} \varphi : S(D_1)\,\Box \,S(D_2)\rightarrow S(G_1)\,\Box \,S(G_2). \end{aligned}$$
(10.15)

Note that \(\varphi \) is simultaneously an isomorphism \(\varphi :D_1\times D_2\rightarrow G_1\times G_2\) and an isomorphism \(\varphi : S(D_1)\,\Box \,S(D_2)\rightarrow S(G_1)\,\Box \,S(G_2)\). Because each of \(D_1, D_2, G_1\), and \(G_2\) is anti-connected, each factor \(S(D_1), S(D_2), S(G_1)\), and \(S(G_2)\) is connected, by Proposition 10.10.10. Consider prime factorizations

$$ \begin{array}{lcl} S(D_1)=H_1\,\Box \,H_2\,\Box \,\cdots \,\Box \,H_k, &{} \;\; &{} S(G_1)=L_1\, \,\Box \,\, L_2\,\Box \,\cdots \,\Box \,\,L_\ell ,\\ S(D_2)=K_1\,\Box \,K_2\,\Box \,\cdots \,\Box \,K_m, &{}\;\; &{} S(G_2)=M_1\,\Box \,M_2\,\Box \,\cdots \,\Box \,M_n, \end{array}$$

where each factor is prime over \(\Box \). Our isomorphism (10.15) becomes

$$\begin{aligned} \varphi : (H_1\,\Box \,\cdots \,\Box \,H_k)\,\Box \,&(K_1\,\Box \,\cdots \,\Box \,K_m)\rightarrow \nonumber \\&(L_1\,\Box \,\cdots \,\Box \,L_\ell )\,\Box \,(M_1\,\Box \,\cdots \,\Box \,M_n). \end{aligned}$$
(10.16)

Corollary 10.9.4 applies here. In fact, in using it, we may order the factors \(H_i\) and \(K_i\) and relabel the vertices of the \(L_i\) and \(M_i\) so that, for some \(0\le s\le k\) and \(0\le t\le m\), the isomorphism (10.16) has form

$$\begin{aligned} \varphi :&(H_1\,\Box \,\cdots \,\Box \,H_k)\,\Box \,(K_1\,\Box \,\cdots \,\Box \,K_m)\rightarrow \\&\big (H_1\,\Box \,\cdots \,\Box \,H_s\,\Box \,K_1\,\Box \,\cdots \,\Box \,K_t\big )\,\Box \,\big (H_{s+1}\,\Box \,\cdots \,\Box \,H_k\,\Box \,K_{t+1}\,\Box \,\cdots \,\Box \,K_m\big ) \end{aligned}$$

and where

$$\begin{aligned} \varphi ((h_1,\dots ,h_k),&(k_1,\ldots ,k_m))=\\&((h_1,\ldots , h_s,k_1,\ldots , k_t),(h_{s+1},\ldots , h_k,k_{t+1},\ldots ,k_m)). \end{aligned}$$

Our assumption that both \(\varphi _1\) and \(\varphi _2\) depend on \(x_1\in V(D_1)\) forces \(0<s<k\).

We have now labeled the vertices of \(D_1\) with \(V(H_1\,\Box \,\cdots \,\Box \,H_k)\), and those of \(D_2\) with \(V(K_1\,\Box \,\cdots \,\Box \,K_m)\). We have labeled vertices of \(G_1\) with \(V(H_1\,\Box \,\cdots \,\Box \,H_s\,\Box \,K_1\,\Box \,\cdots \,\Box \,K_t)\), and we have labeled the vertices of \(G_2\) with \(V(H_{s+1}\,\Box \,\cdots \,\Box \,H_k\,\Box \,\) \(K_{t+1}\,\Box \,\cdots \,\Box \,K_m)\). To tame the notation, we denote a vertex \((h_1,\ldots ,h_s,h_{s+1},\ldots , h_k)\in V(D_1)\) as (xy), where \(x=(h_1,\ldots ,h_s)\) and \(y=(h_{s+1},\ldots , h_k)\). Similarly, any \((k_1,\ldots ,k_t,k_{t+1},\ldots , k_m)\in V(D_2)\) is denoted (uv), where \(u=(k_1,\ldots , k_t)\) and \(v=(k_{t+1},\ldots ,k_m)\). With this convention we regard vertices of \(G_1\) and \(G_2\) as (xu) and (yv), respectively, and we have

$$ \varphi ((x,y),(u,v))=((x,u),(y,v)). $$

Remember that this is the same isomorphism \(\varphi :D_1\times D_2\rightarrow G_1\times G_2\) that we began the proof with; all we have done is relabel the vertices of the factors to put \(\varphi \) into a more convenient form.

Now we display a nontrivial factorization \(D_1=S\times S'\). Define digraphs S and \(S'\) as follows:

$$\begin{aligned} V(S)= & {} \{x\mid \big ((x,y),(u,v)\big )\in V(D_1\times D_2)\}\,,\\ A(S)= & {} \{xx' \mid \big ((x,y),(u,v)\big )\big ((x',y'),(u',v')\big )\in A(D_1\times D_2)\}\,, \end{aligned}$$
$$\begin{aligned} V(S')= & {} \{y \mid \big ((x,y),(u,v)\big )\in V(D_1\times D_2)\}\,,\\ A(S')= & {} \{yy' \mid \big ((x,y),(u,v)\big )\big ((x',y'),(u',v')\big )\in A(D_1\times D_2)\}\,. \end{aligned}$$

We claim \(D_1=S\times S'\), that is, \((x,y)(x',y')\in A(D_1)\) if and only if \((x,y)(x',y')\in A(S\times S')\). Certainly if \((x,y)(x',y')\in A(D_1)\), there is an arc

$$ \big ((x,y),(u,v)\big )\big ((x',y'),(u',v')\big )\in A(D_1\times D_2). $$

The definitions of S and \(S'\) then imply \((x,y)(x',y')\in A(S\times S')\).

Conversely, suppose \((x,y)(x',y')\in A(S\times S')\). Then \(xx'\in A(S)\) and \(yy'\in A(S')\). By definition of S and \(S'\), this means \(D_1\times D_2\) has arcs

$$ \big ((x,y''),(u,v)\big )\big ((x',y'''),(u',v')\big ) \text{ and } \big ((x'',y),(u'',v'')\big )\big ((x''',y'),(u''',v''')\big ). $$

Applying the isomorphism \(\varphi \), we see that \(G_1\times G_2\) has arcs

$$ \big ((x,u),(y'',v)\big )\big ((x',u'),(y''',v')\big ) \text{ and } \big ((x'',u''),(y,v'')\big )\big ((x''',u'''),(y',v''')\big ). $$

Then \((x,u)(x',u')\in A(G_1)\) and \((y,v'')(y',v''')\in A(G_2)\). Thus \(G_1\times G_2\) has an arc \(\big ((x,u),(y,v'')\big )\big ((x',u'),(y',v''')\big )\). Applying \(\varphi ^{-1}\) to this, we get

$$ \big ((x,y),(u,v'')\big )\big ((x',y'),(u',v''')\big )\in A(D_1\times D_2), $$

hence \((x,y)(x',y')\in A(D_1)\). Thus \(D_1=S\times S'\), and the lemma is proved. \(\square \)

We now can easily prove that anti-connected R-thin digraphs factor uniquely into primes over the direct product, up to order and isomorphism of the factors.

Theorem 10.11.2

Take any isomorphism \(\varphi :D_1\times \cdots \times D_k\rightarrow G_1\times \cdots \times G_\ell \), where all factors \(D_i\) and \(G_i\) are anti-connected, R-thin, and prime. Then \(k=\ell \), and there is a permutation \(\sigma \) of [k] and isomorphisms \(\varphi _i:D_{\sigma (i)}\rightarrow G_i\) for which \(\varphi (x_1,x_2,\ldots ,x_k)=\) \(\big ( \varphi _1(x_{\sigma (1)}), \varphi _2(x_{\sigma (2)}),\ldots ,\varphi _k(x_{\sigma (k)})\big )\).

Proof:

Assume the hypothesis. Note that Lemma 10.11.1 implies that for each \(i\in [k]\), exactly one \(\varphi _j\) depends on \(x_i\). But no \(\varphi _j\) is constant, because \(\varphi \) is surjective and each \(G_i\) has more than one vertex (it is prime). Thus \(k\ge \ell \). The same argument applied to \(\varphi ^{-1}\) gives \(\ell \ge k\), therefore \(k=\ell \).

Thus each \(\varphi _j\) depends on only one \(x_i\), call it \(x_{\sigma (j)}\). The result follows. \(\square \)

To see that unique prime factorization may fail if the hypotheses of this theorem are not met, let D be a closed antiwalk on six vertices, which is not anti-connected. Indeed, we have the non-unique prime factorization

$$ D\cong \overrightarrow{P}_2\times K_3\cong \overrightarrow{P}_2\times H, $$

where H is the symmetric path \(\overleftrightarrow {P_3}\) of length two with loops at each end.
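This non-uniqueness is easy to confirm by brute force. In the sketch below (our own, with \(\overrightarrow{P}_2\) taken to be the single arc \(a\rightarrow b\) and \(K_3\) the complete symmetric loopless digraph on three vertices), the two products are built and compared by trying all bijections of their vertex sets.

```python
from itertools import permutations

def direct_product(H, K):                # as in the earlier sketches
    return {(h, k): {(h2, k2) for h2 in H[h] for k2 in K[k]}
            for h in H for k in K}

def isomorphic(D1, D2):
    """Brute-force digraph isomorphism test (fine for six vertices)."""
    V1, V2 = list(D1), list(D2)
    if len(V1) != len(V2):
        return False
    arcs2 = {(u, v) for u in D2 for v in D2[u]}
    return any({(f[u], f[v]) for u in D1 for v in D1[u]} == arcs2
               for f in (dict(zip(V1, p)) for p in permutations(V2)))

P2 = {'a': {'b'}, 'b': set()}                                   # the arc a -> b
K3 = {i: {j for j in range(3) if j != i} for i in range(3)}     # complete symmetric, loopless
H  = {1: {1, 2}, 2: {1, 3}, 3: {2, 3}}                          # symmetric path 1-2-3, loops at 1 and 3

print(isomorphic(direct_product(P2, K3), direct_product(P2, H)))   # expected: True
```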

A careful examination of its proof shows that Theorem 10.11.2 still holds if R-thinness is replaced by \(R^+\!\)-thinness (respectively, \(R^-\!\)-thinness) and the assumption of anti-connectivity is replaced with the condition that any two vertices are joined by an out-antiwalk (respectively, an in-antiwalk), in which case we say the graph is out-anti-connected (respectively, in-anti-connected). Imrich and Klöckl [26] present a polynomial algorithm that computes the prime factorization of any out-anti-connected \(R^+\!\)-thin digraph. In [27] they weaken (but do not entirely eliminate) the \(R^+\!\)-thinness condition.

We can remove the condition of R-thinness in Theorem 10.11.2 if we strengthen the connectivity condition. The fundamental work of McKenzie [38] on relational structures yields the following corollary.

Theorem 10.11.3

Suppose each pair of vertices of a digraph is joined by both an in-antiwalk and an out-antiwalk. Then it has a unique prime factorization over the direct product, up to isomorphism and order of the factors.

It is not known whether the hypotheses of this theorem can be relaxed to anti-connectivity, nor is there currently an algorithm that finds the prime factors. Any progress would be a welcome contribution.

Problem 10.11.4

Find an efficient algorithm that computes the prime factors of a digraph meeting the conditions of Theorem 10.11.3.

Note that Hellmuth and Marc [24] develop such an algorithm for connected strong products.

Theorem 10.11.3 yields a parallel theorem for the strong product. For a digraph D, let \(\mathscr {L}(D)\) be the digraph obtained from D by adding a loop to each vertex. If \(D_1,\ldots ,D_k\) are digraphs without loops, then

$$\begin{aligned} \mathscr {L}(D_1\boxtimes \cdots \boxtimes D_k)= \mathscr {L}(D_1)\times \cdots \times \mathscr {L}(D_k), \end{aligned}$$
(10.17)

which follows immediately from the definitions. Notice that if D is connected, then \(\mathscr {L}(D)\) is automatically anti-connected. In fact, any two of its vertices can be joined by an in-antiwalk and an out-antiwalk, so Theorem 10.11.3 applies to it. And clearly if D and G are digraphs without loops, then \(D\cong G\) if and only if \(\mathscr {L}(D)\cong \mathscr {L}(G)\).
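Identity (10.17) is also easy to check mechanically on small instances. A sketch in the same style as before (helper names ours; the example digraphs are arbitrary loopless choices):

```python
def add_loops(D):
    """The operator L: attach a loop at every vertex."""
    return {v: D[v] | {v} for v in D}

def strong_product(H, K):
    """Strong product H ⊠ K of loopless digraphs H and K."""
    return {(h, k): {(h2, k2)
                     for h2 in H for k2 in K
                     if (h2 == h or h2 in H[h])
                     and (k2 == k or k2 in K[k])
                     and (h2, k2) != (h, k)}
            for h in H for k in K}

def direct_product(H, K):                # as in the earlier sketches
    return {(h, k): {(h2, k2) for h2 in H[h] for k2 in K[k]}
            for h in H for k in K}

H = {0: {1}, 1: {2}, 2: set()}           # directed path 0 -> 1 -> 2
K = {'a': {'b'}, 'b': {'a'}}             # the symmetric pair of arcs a <-> b

print(add_loops(strong_product(H, K)) ==
      direct_product(add_loops(H), add_loops(K)))   # expected: True, illustrating (10.17)
```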

Theorem 10.11.5

Every connected digraph (without loops) has a unique prime factorization over \({\,\boxtimes \,}\), up to isomorphism and order of the factors.

Proof:

Let D be a connected digraph without loops. Then, as noted above, Theorem 10.11.3 applies to \(\mathscr {L}(D)\), so it has a unique prime factorization over the direct product. Because \(\mathscr {L}(D)\) has a loop at each vertex, each of its prime factors also has loops at all of its vertices. Thus each prime factor has the form \(\mathscr {L}(D_i)\) for some \(D_i\) (without loops). Write the prime factorization as

$$\begin{aligned} \mathscr {L}(D)=\mathscr {L}(D_1)\times \mathscr {L}(D_2)\times \cdots \times \mathscr {L}(D_n)\,, \end{aligned}$$
(10.18)

where the \(\mathscr {L}(D_i)\) (and hence also each \(D_i\)) are uniquely determined by D.

Now consider any prime factorization

$$\begin{aligned} D=G_1\boxtimes G_2\boxtimes \cdots \boxtimes G_k \end{aligned}$$
(10.19)

over the strong product. From this, Equation (10.17) yields

$$\begin{aligned} \mathscr {L}(D)=\mathscr {L}(G_1)\times \mathscr {L}(G_2)\times \cdots \times \mathscr {L}(G_k). \end{aligned}$$
(10.20)

Observe that each \(\mathscr {L}(G_i)\) is prime over \(\times \). Indeed, any factoring of it must have the form \(\mathscr {L}(G_i)=\mathscr {L}(H)\times \mathscr {L}(H')\) for digraphs \(H, H'\) (without loops), and Equation (10.17) gives \(\mathscr {L}(G_i)=\mathscr {L}(H\boxtimes H')\). Hence \(G_i\cong H\boxtimes H'\) and primeness of \(G_i\) implies one of H or \(H'\) is \(K_1\), and therefore one of the factors \(\mathscr {L}(H)\) or \(\mathscr {L}(H')\) is \(\mathscr {L}(K_1)\). Thus \(\mathscr {L}(G_i)\) is prime.

Comparing prime factorizations (10.18) and (10.20), and applying Theorem 10.11.3, we get \(n=k\), and we may assume the ordering is such that \(\mathscr {L}(D_i)\cong \mathscr {L}(G_i)\) for each \(1\le i\le n\). Consequently, \(D_i\cong G_i\) for each \(i\in [k]\). But, as was noted above, the \(D_i\) are uniquely determined by D, so the factorization (10.19) is unique up to isomorphism and order of the factors. \(\square \)

A different approach is taken by Hellmuth and Marc [24], who design and apply a skeleton operator \(\mathbb {S}\) satisfying \(\mathbb {S}(D\boxtimes D')=\) \(\mathbb {S}(D)\,\Box \,\mathbb {S}(D')\).