1 Introduction

Cuntz and Krieger introduced the Cuntz–Krieger algebras in [11], and Cuntz showed in [9] that if we restrict to the matrices satisfying the modest Condition (II), then the stabilized Cuntz–Krieger algebras are an invariant of shifts of finite type up to flow equivalence. Shortly after Franks had made a successful classification of irreducible shifts of finite type up to flow equivalence [16], Cuntz raised the question of whether this invariant or the \(K_0\)-group alone classifies simple Cuntz–Krieger algebras up to stable isomorphism. He sketched in [10] that it was enough to answer whether \(\mathcal {O}_2\) and \(\mathcal {O}_{2_-}\) are isomorphic, where \(\mathcal {O}_2\) and \(\mathcal {O}_{2_-}\) are the Cuntz–Krieger algebras associated with the matrices

$$\begin{aligned} \begin{pmatrix} 1 &{} 1 \\ 1 &{} 1 \end{pmatrix}\quad \text {and}\quad \begin{pmatrix} 1 &{} 1 &{} 0 &{} 0 \\ 1 &{} 1 &{} 1 &{} 0 \\ 0 &{} 1 &{} 1 &{} 1 \\ 0 &{} 0 &{} 1 &{} 1 \end{pmatrix}, \end{aligned}$$

respectively. This question remained open until Rørdam in [26] showed that \(\mathcal {O}_2\) and \(\mathcal {O}_{2_-}\) are in fact isomorphic and elaborated on the arguments of Cuntz to show that the \(K_0\)-group is a complete invariant of the stabilized simple Cuntz–Krieger algebras. The concept of Cuntz–Krieger algebras has since been generalized to the so-called graph \(C^*\)-algebras, and this procedure of gluing the graph corresponding to the former matrix above onto another graph has since been known as Cuntz splicing a graph at a certain vertex.

Since the K-theory of a Cuntz–Krieger algebra or a graph \(C^*\)-algebra can not distinguish the \(C^*\)-algebra associated to a graph E from its Cuntz spliced version \(E_-\), any ambition of classifying \(C^*\)-algebras in this class by K-theoretical invariants contains the challenge of proving that the \(C^*\)-algebras associated to E and \(E_-\) are stably isomorphic, as indeed established in the seminal case described above in [26]. But in fact, in a series of results [18, 19, 25, 26], it has emerged that when such invariance has been established, it can be used to prove classification by extensive elaborations of Cuntz’ original idea. Indeed, in the case considered by Rørdam, Franks showed that the corresponding shift spaces are flow equivalent if and only if \(U(I-A)V=I-A'\) for some SL-matrices U and V, where we have made the sizes of the adjacency matrices A and \(A'\) (for E and \(E'\), respectively) equal by adding isolated vertices. On the other hand, K-theory isomorphism is shown to be equivalent to the same condition only with U and V being GL-matrices instead. Philosophically speaking, it transpires from the sequence of papers listed above that for the cases considered (which are all of real rank zero), it suffices to use the Cuntz splice to—if necessary—change one of the graphs so that the sign of the determinant changes in such a way that we may assume that U and V are SL-matrices to reach from K-theory to stable isomorphism.

Trying to push this strategy further, we show in this paper that Cuntz splicing a vertex that supports two distinct return paths yields stably isomorphic graph \(C^*\)-algebras —only assuming that the graph is countable. In a similar way as in the above mentioned cases, this result is a key step in the recent development in the geometric classification of general Cuntz–Krieger algebras and of unital graph \(C^*\)-algebras [13, 14] as well as in the question of strong classification of general Cuntz–Krieger algebras and of unital graph \(C^*\)-algebras [7, 14]. In fact, using the results as well as the proof methods of the present paper, these papers close the classification problem for all Cuntz–Krieger algebras and for all unital graph \(C^*\)-algebras. The results and methods of this paper have also further gained attention from other perspectives; e.g., from the perspective of classifying Leavitt path algebras, the analogous question is open and of paramount interest (see [20]).

To prove the results of this paper we recast Rørdam’s now classical idea using the recent concept of relative graph algebras and appeal to several results from graph \(C^*\)-algebra theory. The results we appeal to apply only in certain configurations, but we shall see that these suffice to establish such a highly specialized isomorphism result in complete generality. One may think of this approach as localizing the classification problem to the Cuntz splice situation. We mention that even though it is at present not at all clear if classification of non-unital graph \(C^*\)-algebras is obtainable by a route similar to the one leading to our complete resolution in the unital case, the fact that Cuntz splice invariance holds in general at least provides evidence that a classification result by K-theory may be possible.

With the recent work on the relation between move equivalence of graphs and stable isomorphism of the corresponding graph \(C^*\)-algebras, the question of whether Cuntz splicing yields stably isomorphic \(C^*\)-algebras has become of great interest. Bentmann has established this for purely infinite graph \(C^*\)-algebras with finitely many ideals [4], and Gabe recently has generalized the result to also cover general purely infinite graph \(C^*\)-algebras [17]. Their methods are very different from the ones we will use here, as they use classification and depend heavily on the result of Kirchberg on lifting invertible ideal-related \({\text {KK}}\)-elements to equivariant isomorphisms for strongly purely infinite \(C^*\)-algebras [21], which is not available in general.

We proved invariance of the Cuntz splice in the special case of unital graph \(C^*\)-algebras in an arXiv preprint (1505.06773) posted in May 2015. Bentmann’s recent paper showed us how to reduce the general question to the row-finite case, and we proceeded to discover that our arguments applied with only minor changes to that case.

2 Preliminaries

Definition 2.1

A graph E is a quadruple \(E = (E^0 , E^1 , r, s)\) where \(E^0\) and \(E^1\) are sets, and r and s are maps from \(E^1\) to \(E^0\). The elements of \(E^0\) are called vertices, the elements of \(E^1\) are called edges, the map r is called the range map, and the map s is called the source map.

When working with several graphs at the same time, to avoid confusion, we will denote the range map and source map of a graph E by \(r_E\) and \(s_E\) respectively.

All graphs considered will be countable, i.e., there are countably many vertices and edges.

Definition 2.2

A loop is an edge with the same range and source.

A path \(\mu \) in a graph is a finite sequence \(\mu = e_1 e_2 \cdots e_n\) of edges satisfying \(r(e_i)=s(e_{i+1})\), for all \(i=1,2,\ldots , n-1\), and we say that the length of \(\mu \) is n. We extend the range and source maps to paths by letting \(s(\mu ) = s(e_1)\) and \(r(\mu ) = r(e_n)\). Vertices in E are regarded as paths of length 0 (also called empty paths).

A cycle is a nonempty path \(\mu \) such that \(s(\mu ) = r(\mu )\). We call a cycle \(e_1e_2\cdots e_n\) a vertex-simple cycle if \(r(e_i)\ne r(e_j)\) for all \(i\ne j\). A vertex-simple cycle \(e_1e_2\cdots e_n\) is said to have an exit if there exists an edge f such that \(s(f)=s(e_k)\) for some \(k=1,2,\ldots ,n\) with \(e_k\ne f\). A return path is a cycle \(\mu = e_1 e_2 \cdots e_n\) such that \(r(e_i) \ne r(\mu )\) for \(i < n\).

For a loop, cycle or return path, we say that it is based at the source vertex of its path. We also say that a vertex supports a certain loop, cycle or return path if it is based at that vertex.

Note that in [2, 28], the authors use the term loop where we use cycle.

Definition 2.3

A vertex v in E is called regular if \(s^{-1}(v)\) is finite and nonempty. We denote the set of regular vertices by \(E_{\mathrm {reg}}^0\).

A vertex v in E is called a sink if \(s^{-1}(v)=\emptyset \). A graph E is called row-finite if for each \(v \in E^0\), v is either a sink or a regular vertex.

It is essential for our approach to graph \(C^*\)-algebras to be able to shift between a graph and its adjacency matrix. In what follows, we let \(\mathbb {N}\) denote the set of positive integers, while \(\mathbb {N} _0\) denotes the set of nonnegative integers.

Definition 2.4

Let \(E = (E^0 , E^1 , r, s)\) be a graph. We define its adjacency matrix \(\mathsf {A}_E\) as an \(E^0\times E^0\) matrix with the (uv)'th entry being

$$\begin{aligned} \mathsf {A}_E(u,v) = \left| s^{-1}(u)\cap r^{-1}(v)\right| , \end{aligned}$$

i.e., the number of edges from u to v.

As we only consider countable graphs, \(\mathsf {A}_E\) will be a finite matrix or a countably infinite matrix, and it will have entries from \(\mathbb {N} _0\sqcup \{\infty \}\).

Let X be a set. If A is an \(X \times X\) matrix with entries from \(\mathbb {N} _0\sqcup \{\infty \}\), we let \(\mathsf {E}_{A}\) be the graph with vertex set X such that between two vertices \(x,x' \in X\) we have \(A(x,x')\) edges from x to \(x'\).

It will be convenient for us to alter the adjacency matrix of a graph in a very specific way, subtracting the identity, so we introduce notation for this.

Notation 2.5

Let E be a graph and \(\mathsf {A}_E\) its adjacency matrix. Let \(\mathsf {B}_E\) denote the matrix \(\mathsf {A}_{E} - I\).
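It will often be convenient to carry out such bookkeeping concretely. The following minimal Python/NumPy sketch (the helper function and the sample graph are ours, chosen purely for illustration) passes from an edge list to \(\mathsf {A}_E\) and \(\mathsf {B}_E\):

```python
import numpy as np

def adjacency_matrix(num_vertices, edges):
    """A_E(u, v) = number of edges from u to v (vertices are 0-indexed)."""
    A = np.zeros((num_vertices, num_vertices), dtype=int)
    for u, v in edges:
        A[u, v] += 1
    return A

# Two vertices, a loop at each, and an edge in each direction,
# i.e. the graph of the first matrix displayed in the introduction.
A = adjacency_matrix(2, [(0, 0), (0, 1), (1, 0), (1, 1)])
B = A - np.eye(2, dtype=int)   # B_E = A_E - I (Notation 2.5)
print(A)   # [[1 1] [1 1]]
print(B)   # [[0 1] [1 0]]
```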

2.1 Graph \(C^*\)-algebras

We follow the notation and definition for graph \(C^*\)-algebras in [15]; this is not the convention used in Raeburn’s monograph [24].

Definition 2.6

Let \(E = (E^0,E^1,r,s)\) be a graph. The graph \(C^*\)-algebra \(C^*(E)\) is defined as the universal \(C^*\)-algebra generated by a set of mutually orthogonal projections \(\{ p_v \mid v \in E^0 \}\) and a set of partial isometries \(\{ s_e \mid e \in E^1 \}\) satisfying the relations

  • \(s_e^* s_f = 0\) if \(e,f \in E^1\) and \(e \ne f\),

  • \(s_e^* s_e = p_{r(e)}\) for all \(e \in E^1\),

  • \(s_e s_e^* \le p_{s(e)}\) for all \(e \in E^1\), and,

  • \(p_v = \sum _{e \in s^{-1}(v)} s_e s_e^*\) for all \(v \in E^0\) with \(0< |s^{-1}(v)| < \infty \).

Whenever we have a set of mutually orthogonal projections \(\{ p_v \mid v \in E^0 \}\) and a set of partial isometries \(\{ s_e \mid e \in E^1 \}\) in a \(C^*\)-algebra satisfying the relations, then we call these elements a Cuntz–Krieger E-family.
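For example, if E is the graph with a single vertex v and \(n\ge 2\) loops \(e_1,\ldots ,e_n\), the relations say exactly that \(p_v\) is a unit, that each \(s_{e_i}\) is an isometry with \(s_{e_i}^*s_{e_i}=p_v\), and that \(\sum _{i=1}^{n}s_{e_i}s_{e_i}^*=p_v\); hence \(C^*(E)\) is the Cuntz algebra \(\mathcal {O}_n\).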

We will also need moves on graphs as defined in [27]. In the case of graphs with finitely many vertices the basic moves are outsplitting (Move (O)), insplitting (Move (I)), reduction (Move (R)), and removal of a regular source (Move (S)). It turns out that in the general setting, move (R) must be replaced by the following

Definition 2.7

(Collapse a regular vertex that does not support a loop, Move (Col)) Let \(E = (E^0 , E^1 , r , s )\) be a graph and let v be a regular vertex in E that does not support a loop. Define a graph \(E_{COL}\) by

$$\begin{aligned} E_{COL}^0&= E^0 {\setminus } \{ v \}, \\ E_{COL}^1&= \left( E^1 {\setminus } \left( s^{-1}(v)\cup r^{-1}(v)\right) \right) \sqcup \{ [ef] \mid e\in r^{-1}(v)\text { and } f\in s^{-1}(v) \}, \end{aligned}$$

where the range and source maps extend those of E, and satisfy \(r_{E_{COL}}([ef ]) = r (f )\) and \(s_{E_{COL}} ([ef ]) = s (e)\).

Move (Col) was defined in [27, Theorem 5.2] for graphs with finitely many vertices as an auxiliary move, and proved there to be realized by moves (I), (O) and (R).
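On adjacency matrices, Move (Col) has a simple description: since every pair (e, f) with \(r(e)=v=s(f)\) contributes one new edge [ef] from s(e) to r(f), we have \(\mathsf {A}_{E_{COL}}(x,y)=\mathsf {A}_E(x,y)+\mathsf {A}_E(x,v)\mathsf {A}_E(v,y)\) for all \(x,y\ne v\). The following small Python sketch (ours, purely for illustration) implements this formula:

```python
import numpy as np

def collapse(A, w):
    """Move (Col) on adjacency matrices: collapse the regular vertex w
    (0-indexed), which must not support a loop.  Each pair (e, f) with
    r(e) = w = s(f) contributes one new edge [ef] from s(e) to r(f)."""
    assert A[w, w] == 0 and A[w].sum() > 0   # no loop at w, and w is not a sink
    keep = [i for i in range(len(A)) if i != w]
    return A[np.ix_(keep, keep)] + np.outer(A[keep, w], A[w, keep])

# Tiny example: collapse vertex 1 of a 3-vertex graph.
A = np.array([[1, 2, 0],
              [0, 0, 1],
              [1, 0, 1]])
print(collapse(A, 1))   # [[1 2] [1 1]]
```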

Definition 2.8

The equivalence relation generated by the moves (O), (I), (S), (Col) together with graph isomorphism is called move equivalence, and denoted by \(\sim _{ME}\).

Let X be a set and let A and \(A'\) be \(X \times X\) matrices with entries from \(\mathbb {N} _0\sqcup \{\infty \}\). Then we say that A and \(A'\) are move equivalent, and we write \(A\sim _{ME} A'\), if \(\mathsf {E}_{A}\sim _{ME} \mathsf {E}_{A'}\).

Remark 2.9

By [27, Theorem 5.2], the above definition is equivalent to the definition in [27, Section 4] for graphs with finitely many vertices.

These moves have been considered by other authors, and were previously noted to preserve the Morita equivalence class of the associated graph \(C^*\)-algebra. The moves (O) and (I) induce stably isomorphic \(C^*\)-algebras due to the results in [3], and by [8], moves (R), (S), (Col) preserve the Morita equivalence class of the associated graph \(C^*\)-algebras (see also [27, Propositions 3.1, 3.2 and 3.3 and Theorem 3.5]). Therefore, we get the following theorem.

Theorem 2.10

Let \(E_1\) and \(E_2\) be graphs such that \(E_1\sim _{ME} E_2\). Then \(C^*(E_1)\otimes \mathbb {K} \cong C^*(E_2) \otimes \mathbb {K} \).

We now recall the definition of the Cuntz splice (see Notation 4.1 and Example 4.2 for illustrations).

Definition 2.11

(Move (C): Cuntz splicing at a regular vertex supporting two return paths) Let \(E = (E^0 , E^1 , r , s )\) be a graph and let \(v \in E^0\) be a regular vertex that supports at least two return paths. Let \({E_{v,-}}\) denote the graph \((E_{v,-}^0 , E_{v,-}^1 , r_{v,-}, s_{v,-})\) defined by

$$\begin{aligned} {E^0_{v,-}}:= & {} E^0\sqcup \{u_1 , u_2 \} \\ {E^1_{v,-}}:= & {} E^1\sqcup \{e_1 , e_2 , f_1 , f_2 , h_1 , h_2 \}, \end{aligned}$$

where \(r_{v,-}\) and \(s_{v,-}\) extend r and s, respectively, and satisfy

$$\begin{aligned} s_{v,-} (e_1 ) = v,\quad s_{v,-} (e_2 ) = u_1 ,\quad s_{v,-} (f_i ) = u_1 ,\quad s_{v,-} (h_i ) = u_2 , \end{aligned}$$

and

$$\begin{aligned} r_{v,-} (e_1 ) = u_1 ,\quad r_{v,-} (e_2 ) = v,\quad r_{v,-} (f_i ) = u_i ,\quad r_{v,-} (h_i ) = u_i . \end{aligned}$$

We call \({E_{v,-}}\) the graph obtained by Cuntz splicing E at v, and say \({E_{v,-}}\) is formed by performing Move (C) to E.
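At the level of adjacency matrices, Cuntz splicing at v simply appends two new vertices and the six edges listed above. Here is a minimal Python sketch (ours, for illustration; it does not verify that v is regular and supports two return paths):

```python
import numpy as np

def cuntz_splice(A, v):
    """Adjacency matrix of E_{v,-}: append u_1, u_2 (as the last two
    vertices) and the edges e_1, e_2, f_1, f_2, h_1, h_2 of Definition 2.11."""
    n = len(A)
    S = np.zeros((n + 2, n + 2), dtype=int)
    S[:n, :n] = A
    u1, u2 = n, n + 1
    S[v, u1] = 1    # e_1 : v   -> u_1
    S[u1, v] = 1    # e_2 : u_1 -> v
    S[u1, u1] = 1   # f_1 : loop at u_1
    S[u1, u2] = 1   # f_2 : u_1 -> u_2
    S[u2, u1] = 1   # h_1 : u_2 -> u_1
    S[u2, u2] = 1   # h_2 : loop at u_2
    return S

# Splicing at the only vertex of the graph with one vertex and two loops.
print(cuntz_splice(np.array([[2]]), 0))   # [[2 1 0] [1 1 1] [0 1 1]]
```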

The aim of this paper is to prove that \(C^*(E)\otimes \mathbb {K} \cong C^*({E_{v,-}})\otimes \mathbb {K} \) for any graph E. In fact, we prove slightly more, since our proof allows for Cuntz splicing also at infinite emitters supporting at least two return paths.

3 Elementary matrix operations preserving move equivalence

In this section we perform row and column additions on \(\mathsf {B}_E\) without changing the move equivalence class of the associated graphs. Our setup is slightly different from what was considered in [27, Section 7], so we redo the proofs from there in our setting. There are no substantial changes in the proof techniques, which essentially go back to [16].

Lemma 3.1

Let \(E = (E^0, E^1, r_E,s_E)\) be a graph. Let \(u,v \in E^0\) be distinct vertices. Suppose the (uv)'th entry of \(\mathsf {B}_E\) is nonzero (i.e., there is an edge from u to v), and that the sum of the entries in the u'th row of \(\mathsf {B}_E\) is strictly greater than 0 (i.e., u emits at least two edges). If \(B'\) is the matrix formed from \(\mathsf {B}_E\) by adding the u'th column into the v'th column, then \(\mathsf {E}_{B'+I}\sim _{ME} E\).

Proof

Fix an edge f from u to v. Form a graph G from E by removing f but adding, for each edge \(e \in r_E^{-1}(u)\), an edge \(\bar{e}\) with \(s_G(\bar{e}) = s_E(e)\) and \(r_G(\bar{e}) = v\). We claim that \(B' = \mathsf {B}_G\). Outside the v'th column nothing has changed, and for \(x\ne u\) the (xv)'th entry has in both matrices been increased by the number of edges in E from x to u. At the (uv)'th entry of \(\mathsf {B}_G\) we have

$$\begin{aligned} (|s_E^{-1}(u) \cap r_E^{-1}(v)| - 1) + |s_E^{-1}(u) \cap r_E^{-1}(u)|&= \mathsf {B}_E(u,v) + \mathsf {B}_E(u,u) = B'(u,v). \end{aligned}$$

Thus to prove this lemma it suffices to show \(E\sim _{ME} G\).

Partition \(s_E^{-1}(u)\) as \(\mathcal {E}_1 = \{ f \}\) and \(\mathcal {E}_2 = s_E^{-1}(u){\setminus }\{ f \}\). By assumption \(\mathcal {E}_2\) is not empty, so we can use Move (O). Doing so yields a graph just as E but where u is replaced by two vertices, \(u_1\) and \(u_2\). The vertex \(u_1\) receives a copy of everything u did and it emits only one edge. That edge has range v. The vertex \(u_2\) also receives a copy of everything u did, and it emits everything u did, except f. Since \(u_1\) is regular and not the base of a loop, we can collapse it. The resulting graph is G (after we relabel \(u_2\) as u), so \(E\sim _{ME} G\). \(\square \)
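The identity \(B'=\mathsf {B}_G\) used above is pure bookkeeping and can be tested numerically. The following sketch (ours; it only checks the matrix identity, not the move equivalence) compares the column addition with the graph operation of the proof on a small random example:

```python
import numpy as np

rng = np.random.default_rng(0)

A = rng.integers(0, 3, size=(4, 4))   # a random graph on 4 vertices
A[0, 1] += 1                          # ensure there is an edge f from u = 0 to v = 1
B = A - np.eye(4, dtype=int)          # B_E

Bp = B.copy()
Bp[:, 1] += B[:, 0]                   # B': add the u'th column into the v'th column

AG = A.copy()                         # the graph G of the proof:
AG[0, 1] -= 1                         #   remove the edge f, and
AG[:, 1] += A[:, 0]                   #   add an edge s(e) -> v for each edge e into u

assert np.array_equal(AG - np.eye(4, dtype=int), Bp)
print("B_G = B'")
```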

We can also add columns along a path.

Proposition 3.2

Let \(E = (E^0, E^1, r_E,s_E)\) be a graph and let \(u,v \in E^0\) be distinct vertices with a path from u to v going through distinct vertices \(u = u_0, u_1, u_2, \ldots , u_n = v\) (labelled so there is an edge from \(u_i\) to \(u_{i+1}\) for \(i=0,1,2, \ldots ,n-1\)). Suppose further that u supports a loop. If \(B'\) is the matrix formed from \(\mathsf {B}_E\) by adding the u'th column into the v'th column, then \(\mathsf {E}_{B'+I}\sim _{ME} E\).

Proof

That u supports a loop guarantees that \(B'+I\) is the adjacency matrix of a graph \(E'=\mathsf {E}_{B'+I}\).

The vertex \(u_i\) emits exactly one edge in E if and only if it emits exactly one edge in \(E'\), for \(i=1,\ldots ,n-1\). So by collapsing all regular vertices \(u_i\), \(i=1,2,\ldots ,n-1\), emitting exactly one edge both in E and in \(E'\), we get two new graphs \(E_1\sim _{ME} E\) and \(E_1'\sim _{ME} E'\). In \(E_1\), there is a path from u to v through vertices that all emit at least two edges. Moreover, \(\mathsf {B}_{E_1'}\) is obtained from \(\mathsf {B}_{E_1}\) by adding the u'th column into the v'th column. Therefore, we may without loss of generality assume that all the vertices \(u_i\), \(i=0,1,2,\ldots ,n-1\), emit at least two edges.

By repeated applications of Lemma 3.1, we first add the \(u_{n-1}\)'th column into the \(u_n\)'th column of \(\mathsf {B}_E\), which we can since there is an edge from \(u_{n-1}\) to \(u_n\). Then we add the \(u_{n-2}\)'th column into the \(u_n\)'th column, which we can since there now is an edge from \(u_{n-2}\) to \(u_n\). Continuing this way, we end up with a matrix C which is formed from \(\mathsf {B}_E\) by adding all the columns \(u_i\), for \(i = 0,1,2,\ldots , n-1\), into the \(u_n\)'th column. We have that \(\mathsf {E}_{C+I}\sim _{ME} E\).

Now consider the matrix \(B'=\mathsf {B}_{E'}\). By repeated applications of Lemma 3.1, we first add the \(u_{n-1}\)'th column into the \(u_n\)'th column of \(B'=\mathsf {B}_{E'}\), which we can since there is an edge from \(u_{n-1}\) to \(u_n\). Then we add the \(u_{n-2}\)'th column into the \(u_n\)'th column, which we can since there now is an edge from \(u_{n-2}\) to \(u_n\). Continuing this way, we end up with a matrix D which is formed from \(B'=\mathsf {B}_{E'}\) by adding all the columns \(u_i\), for \(i = 1,2,\ldots , n-1\), into the \(u_n\)'th column. We have that \(\mathsf {E}_{D+I}\sim _{ME} E'\).

But it is clear from the construction that \(C=D\). \(\square \)

Remark 3.3

Similar to how we used Lemma 3.1 in the above proof, we can use Proposition 3.2 “backwards” to subtract columns in \(\mathsf {B}_E\) as long as the addition that undoes the subtraction would be legal.

We now turn to row additions.

Lemma 3.4

Let \(E = (E^0, E^1, r_E,s_E)\) be a graph. Let \(u,v \in E^0\) be distinct vertices. Suppose the (vu)'th entry of \(\mathsf {B}_E\) is nonzero (i.e., there is an edge from v to u), and that u is a regular vertex. If \(B'\) is the matrix formed from \(\mathsf {B}_E\) by adding the u'th row into the v'th row, then \(\mathsf {E}_{B'+I}\sim _{ME} E\).

Proof

Let \(E'=\mathsf {E}_{B'+I}\) denote the graph with adjacency matrix \(B'+I\).

First assume that u only receives one edge in E (which necessarily is the edge from v). Then u is a regular vertex not supporting a loop, so we can collapse it, obtaining a graph \(E''\). Note that the vertex u is a regular source in \(E'\), so we may remove it. It is clear that the resulting graph is exactly \(E''\). Hence \(E\sim _{ME} E''\sim _{ME} E'\).

Now assume instead that u receives at least two edges. Fix an edge f from v to u. Form a graph G from E by removing f but adding, for each edge \(e \in s_E^{-1}(u)\), an edge \(\bar{e}\) with \(s_G(\bar{e}) = v\) and \(r_G(\bar{e}) = r_E(e)\). We claim that \(B'=\mathsf {B}_G\). Arguing as in the proof of Lemma 3.1 we see that this is equivalent to proving \(E\sim _{ME} G\).

Partition \(r_E^{-1}(u)\) as \(\mathcal {E}_1 = \{ f \}\) and \(\mathcal {E}_2 = r_E^{-1}(u) {\setminus } \{ f \}\). By our assumptions on u, \(\mathcal {E}_2\) is nonempty, and u is regular, so we can use Move (I). Doing so replaces u with two new vertices, \(u_1\) and \(u_2\). The vertex \(u_1\) only receives one edge, and that edge comes from v; the vertex \(u_2\) receives the edges u received except f. Since \(u_1\) is regular and not the base of a loop we can collapse it. The resulting graph is G (after we relabel \(u_2\) as u), so \(E\sim _{ME} G\). \(\square \)

We can also add rows along a path of vertices.

Proposition 3.5

Let \(E = (E^0, E^1, r_E,s_E)\) be a graph and let \(u,v \in E^0\) be distinct vertices with a path from v to u going through distinct vertices \(v = v_0, v_1, v_2, \ldots , v_n = u\) (labelled so there is an edge from \(v_i\) to \(v_{i+1}\) for \(i=0,1,2,\ldots ,n-1\)). Suppose further that the vertex u is regular and supports at least one loop. If \(B'\) is the matrix formed from \(\mathsf {B}_E\) by adding the u'th row into the v'th row, then \(\mathsf {E}_{B'+I}\sim _{ME} E\).

Proof

That u supports a loop guarantees that \(B'+I\) is the adjacency matrix of a graph \(E'=\mathsf {E}_{B'+I}\).

First we prove the special case where all the vertices \(v_1,\ldots ,v_n\) are regular. By repeated applications of Lemma 3.4, we first add the \(v_{1}\)'st row into the \(v_0\)'th row of \(\mathsf {B}_E\), which we can since there is an edge from \(v_{0}\) to \(v_1\) and \(v_1\) is regular. Then we add the \(v_{2}\)'nd row into the \(v_0\)'th row, which we can since there now is an edge from \(v_{0}\) to \(v_2\) and \(v_2\) is regular. Continuing this way, we end up with a matrix C which is formed from \(\mathsf {B}_E\) by adding all the rows \(v_i\), for \(i = 1,2,\ldots , n\), into the \(v_0\)'th row. We have that \(\mathsf {E}_{C+I}\sim _{ME} E\).

Now consider the matrix \(B'=\mathsf {B}_{E'}\). By repeated applications of Lemma 3.4, we first add the \(v_{1}\)'st row into the \(v_0\)'th row of \(B'=\mathsf {B}_{E'}\), which we can since there is an edge from \(v_{0}\) to \(v_1\). Then we add the \(v_{2}\)'nd row into the \(v_0\)'th row, which we can since there now is an edge from \(v_{0}\) to \(v_2\). Continuing this way, we end up with a matrix D which is formed from \(B'=\mathsf {B}_{E'}\) by adding all the rows \(v_i\), for \(i = 1,2,\ldots , n-1\), into the \(v_0\)'th row. We have that \(\mathsf {E}_{D+I}\sim _{ME} E'\). But it is clear from the construction that \(C=D\).

Now we prove that the general case, when only u is assumed to be regular, can be reduced to the case where \(v_1,\ldots ,v_n\) are regular. Choose a path \(e_0e_1\cdots e_{n-1}\) going through the distinct vertices \(v_1,\ldots ,v_n\). For each singular vertex \(v_{i}\), \(i=1,\ldots ,n-1\), we outsplit according to the partition \(\mathcal {E}_i^1=\{e_i\}\) and \(\mathcal {E}_i^2=s_E^{-1}(v_i){\setminus }\{e_i\}\), and call the corresponding vertices \(v_i^1\) and \(v_i^2\), respectively. Denote the split graph by \(E_1\), and denote the vertices \(v_i\), \(i=1,\ldots ,n-1\), that were not split by \(v_i^1\). Note that we now have a path from v to u through distinct regular vertices. Note also that since all vertices along the path are distinct, what happens to the \(v_i\)'th entry of row u and v is that it gets doubled for each vertex \(v_i\) that gets split and stays unchanged for the vertices \(v_i=v_i^1\in E^0\) that are regular. Let \(E'\) be the graph \(\mathsf {E}_{B'+I}\), and let \(E_1'\) be the graph constructed using exactly the same outsplittings as in the graph above. Now it is clear that the graph we get from \(E_1\) by adding row u into row v is exactly \(E_1'\). Thus the general case now follows from the above. \(\square \)

Remark 3.6

We can also use Proposition 3.5 “backwards” to subtract rows in \(\mathsf {B}_E\) (cf. Remark 3.3).

4 Cuntz splice implies stable isomorphism

In this section, we show that the Cuntz splice gives stably isomorphic graph \(C^*\)-algebras. We first set up some notation.

Notation 4.1

Let \(\mathbf {E}_*\) and \(\mathbf {E}_{**}\) denote the following graphs: \(\mathbf {E}_*\) has vertices \(v_1, v_2\) and edges \(e_1, e_2, e_3, e_4\), where each vertex supports a loop and there is an edge in each direction between \(v_1\) and \(v_2\), with \(e_1\) and \(e_2\) the edges emitted by \(v_1\); \(\mathbf {E}_{**}\) has vertices \(w_1, w_2, w_3, w_4\) and edges \(f_1, f_2, \ldots , f_{10}\), where each vertex supports a loop and there is an edge in each direction between \(w_1\) and \(w_2\), between \(w_1\) and \(w_3\), and between \(w_3\) and \(w_4\), with \(f_1, f_2, f_5\) the edges emitted by \(w_1\). Thus, up to reordering the vertices, \(\mathsf {A}_{\mathbf {E}_*}\) and \(\mathsf {A}_{\mathbf {E}_{**}}\) are the two matrices displayed in the Introduction.

The graph \(\mathbf {E}_*\) is what we attach when we Cuntz splice. If we instead attach the graph \(\mathbf {E}_{**}\), we have Cuntz spliced twice.

Let \(E = ( E^{0}, E^{1} , r_{E}, s_{E} )\) be a graph and let u be a vertex of E. Then \(E_{u, -}\) can be described as follows (up to canonical isomorphism):

$$\begin{aligned} E_{u,-}^{0}&= E^{0} \sqcup \mathbf {E}_{*}^{0} \\ E_{u,-}^{1}&= E^{1} \sqcup \mathbf {E}_{*}^{1} \sqcup \{ d_1, d_2 \} \end{aligned}$$

with \(r_{E_{u,-}} \vert _{E^{1}} = r_{E}\), \(s_{E_{u,-}} \vert _{ E^{1} } = s_{E}\), \(r_{E_{u,-}} \vert _{\mathbf {E}_{*}^{1}} = r_{\mathbf {E}_{*}}\), \(s_{E_{u,-}} \vert _{\mathbf {E}_{*}^{1}} = s_{\mathbf {E}_{*}}\), and

$$\begin{aligned} s_{E_{u,-}}(d_1)&= u&r_{E_{u,-}}(d_1)&= v_{1} \\ s_{E_{u,-}}(d_2)&= v_1&r_{E_{u,-}}(d_2)&= u. \end{aligned}$$

Moreover, \(E_{u,--}\) can be described as follows (up to canonical isomorphism):

$$\begin{aligned} E_{u,--}^{0}&= E^{0} \sqcup \mathbf {E}_{**}^{0} \\ E_{u,--}^{1}&= E^{1} \sqcup \mathbf {E}_{**}^{1} \sqcup \{ d_1, d_2 \} \end{aligned}$$

with \(r_{E_{u,--}} \vert _{E^{1}} = r_{E}\), \(s_{E_{u,--}} \vert _{ E^{1} } = s_{E}\), \(r_{E_{u,--}} \vert _{\mathbf {E}_{**}^{1}} = r_{\mathbf {E}_{**}}\), \(s_{E_{u,--}} \vert _{\mathbf {E}_{**}^{1}} = s_{\mathbf {E}_{**}}\), and

$$\begin{aligned} s_{E_{u,--}}(d_1)&= u&r_{E_{u,--}}(d_1)&= w_{1} \\ s_{E_{u,--}}(d_2)&= w_1&r_{E_{u,--}}(d_2)&= u. \end{aligned}$$

Example 4.2

Consider the graph

Then

and

The strategy for obtaining the result is as follows. By [26], the graph \(C^*\)-algebras \(C^*(\mathbf {E}_*)\) and \(C^*(\mathbf {E}_{**})\) are isomorphic. We first show in Proposition 4.3 that \(C^*(\mathbf {E}_*)\) and \(C^*(\mathbf {E}_{**})\) are still isomorphic if we do not enforce the summation relation at \(v_1\) and \(w_1\) respectively, by appealing to general classification results. In fact, we need to establish (Lemma 4.4) that they are isomorphic in a way sending prescribed elements of the nonstable K-theory to other prescribed elements. Using this, we prove in Theorem 4.5 by use of Kirchberg’s Embedding Theorem that Cuntz splicing once and twice yields isomorphic graph \(C^*\)-algebras. Finally, we establish in Proposition 4.7 that the graph obtained by Cuntz splicing twice is move equivalent to the original, and the desired conclusion follows.

Proposition 4.3

The relative graph \(C^*\)-algebras (in the sense of Muhly–Tomforde [23]) \(C^*(\mathbf {E}_{*}, \{v_2\})\) and \(C^*(\mathbf {E}_{**}, \{ w_2,w_3,w_4 \})\) are isomorphic.

Proof

Following [23, Definition 3.6] we define a graph

Then by [23, Theorem 3.7] we have that \(C^*(\mathbf {E}_{*}, \{v_2\}) \cong C^*((\mathbf {E}_*)_{\{v_2\}})\). Similarly we define a graph

Using [23, Theorem 3.7] again, we have that \(C^*(\mathbf {E}_{**}, \{ w_2,w_3,w_4 \})\) is isomorphic to \(C^*((\mathbf {E}_{**})_{\{w_2,w_3,w_4\}})\).

Both the graphs \((\mathbf {E}_*)_{\{v_2\}}\) and \((\mathbf {E}_{**})_{\{w_2,w_3,w_4\}}\) satisfy Condition (K). Using the well developed theory of ideal structure and K-theory for graph \(C^*\)-algebras, we see that both have exactly one nontrivial ideal, that this ideal is the compact operators, and that their six-term exact sequences are

Furthermore, in \(K_0(C^*((\mathbf {E}_*)_{\{v_2\}}))\) we have

$$\begin{aligned} {[}p_{v_1}]&= -[p_{v_1'}] = [p_{v_2}], \end{aligned}$$

and in \(K_0(C^*((\mathbf {E}_{**})_{\{w_2,w_3,w_4\}}))\) we have

$$\begin{aligned} {[}p_{w_1}]&= -[p_{w_1'}] = [p_{w_2}], \\ {[}p_{w_3}]&= 0 = [p_{w_4}]. \end{aligned}$$

Therefore the class of the unit is \(-[p_{v_1'}]\) and \(-[p_{w_1'}]\), respectively. It now follows from [6, Theorem 2] (see also [12, Corollary 4.20]) that \(C^*((\mathbf {E}_*)_{\{v_2\}}) \cong C^*((\mathbf {E}_{**})_{\{w_2,w_3,w_4\}})\) and hence that \(C^*(\mathbf {E}_{*}, \{v_2\}) \cong C^*(\mathbf {E}_{**}, \{ w_2,w_3,w_4 \})\). \(\square \)

We also need a technical result about the projections in \(\mathcal {E} = C^*(\mathbf {E}_{*}, \{v_2\})\).

Lemma 4.4

Let \(\mathcal {E} = C^*(\mathbf {E}_{*}, \{v_2\})\) and choose an isomorphism between \(\mathcal {E}\) and \(C^*(\mathbf {E}_{**}, \{ w_2,w_3,w_4 \})\) according to the previous proposition. Let \(p_{v_1}\), \(p_{v_2}\), \(s_{e_1}\), \(s_{e_2}\), \(s_{e_3}\), \(s_{e_4}\) be the canonical generators of \(C^*(\mathbf {E}_{*}, \{v_2\})=\mathcal {E}\) and let \(p_{w_1}\), \(p_{w_2}\), \(p_{w_3}\), \(p_{w_4}\), \(s_{f_1}\), \(s_{f_2}, \ldots , s_{f_{10}}\) denote the image of the canonical generators of \(C^*(\mathbf {E}_{**}, \{ w_2,w_3,w_4 \})\) in \(\mathcal {E}\) under the chosen isomorphism. Then

$$\begin{aligned} s_{e_1} s_{e_1}^* + s_{e_2} s_{e_2}^*&\sim s_{f_1} s_{f_1}^* + s_{f_2} s_{f_2}^* +s_{f_5} s_{f_5}^*, \\ p_{v_1} - \left( s_{e_1} s_{e_1}^* + s_{e_2} s_{e_2}^* \right)&\sim p_{w_1} - \left( s_{f_1} s_{f_1}^* + s_{f_2} s_{f_2}^* +s_{f_5} s_{f_5}^* \right) , \\ 1_\mathcal {E}-p_{v_1} = p_{v_2}&\sim p_{w_2}+p_{w_3}+p_{w_4}=1_\mathcal {E}-p_{w_1} \end{aligned}$$

in \(\mathcal {E}\), where \(\sim \) denotes Murray-von Neumann equivalence. Thus there exists a unitary \(z_0\) in \(\mathcal {E}\) such that

$$\begin{aligned} z_0\left( s_{e_1} s_{e_1}^* + s_{e_2} s_{e_2}^*\right) z_0^*&= s_{f_1} s_{f_1}^* + s_{f_2} s_{f_2}^* +s_{f_5} s_{f_5}^*, \\ z_0\left( p_{v_1} - \left( s_{e_1} s_{e_1}^* + s_{e_2} s_{e_2}^* \right) \right) z_0^*&= p_{w_1} - \left( s_{f_1} s_{f_1}^* + s_{f_2} s_{f_2}^* +s_{f_5} s_{f_5}^* \right) , \\ z_0p_{v_1}z_0^*&= p_{w_1} \\ z_0p_{v_2}z_0^*&= p_{w_2}+p_{w_3}+p_{w_4}. \end{aligned}$$

Proof

By [1, Corollary 7.2], row-finite graph \(C^*\)-algebras have stable weak cancellation, so by [23, Theorem 3.7], \(\mathcal {E}\) has stable weak cancellation. Hence any two projections in \(\mathcal {E}\) are Murray-von Neumann equivalent if they generate the same ideal and have the same K-theory class.

As in the proof of Proposition 4.3, we will use [23, Theorem 3.7] to realize our relative graph \(C^*\)-algebras as graph \(C^*\)-algebras of the graphs \((\mathbf {E}_*)_{\{v_2\}}\) and \((\mathbf {E}_{**})_{\{w_2,w_3,w_4\}}\). Denote the image of the vertex projections of \(C^*((\mathbf {E}_*)_{\{v_2\}})\) inside \(\mathcal {E}\) under this isomorphism by \(q_{v_1}, q_{v_2}, q_{v_1'}\) and denote the image of the vertex projections of \(C^*((\mathbf {E}_{**})_{\{w_2,w_3,w_4\}})\) inside \(\mathcal {E}\) under the isomorphisms \(C^*((\mathbf {E}_{**})_{\{w_2,w_3,w_4\}})\cong C^*(\mathbf {E}_{**}, \{ w_2,w_3,w_4 \}) \cong \mathcal {E}\) by \(q_{w_1}, q_{w_2}, q_{w_3}, q_{w_4}, q_{w_1'}\). Using the description of the isomorphism in [23, Theorem 3.7], we see that we need to show that \(q_{v_1} \sim q_{w_1}\), \(q_{v_1'} \sim q_{w_1'}\) and \(q_{v_2}\sim q_{w_2}+q_{w_3}+q_{w_4}\).

Since \((\mathbf {E}_*)_{\{v_2\}}\) satisfies Condition (K) and the smallest hereditary and saturated subset containing \(v_1\) is all of \((\mathbf {E}_*)_{\{v_2\}}^0\), we have that \(q_{v_1}\) is a full projection ([2, Theorem 4.4]). Similarly \(q_{w_1}\), \(q_{v_2}\) and \(q_{w_2}+q_{w_3}+q_{w_4}\) are full. In \(K_0(\mathcal {E})\) we have, using our calculations from the proof of Proposition 4.3, that

$$\begin{aligned} {[}q_{v_1}]&= [1] = [q_{w_1}], \\ {[}q_{v_2}]&= [1] = [q_{w_2}]=[q_{w_2}]+[q_{w_3}]+[q_{w_4}]. \end{aligned}$$

So by stable weak cancellation \(q_{v_1} \sim q_{w_1}\) and \(q_{v_2}\sim q_{w_2}+q_{w_3}+q_{w_4}\).

Both \(q_{v_1'}\) and \(q_{w_1'}\) generate the only nontrivial ideal \(\mathfrak {I}\) of \(\mathcal {E}\) ([2, Theorem 4.4]). Since that ideal is isomorphic to the compact operators and both \([q_{v_1'}]\) and \([q_{w_1'}]\) are positive generators of \(K_0(\mathfrak {I})\cong K_0(\mathbb {K})\cong \mathbb {Z} \), they must both represent the same class in \(K_0(\mathfrak {I})\), and thus also in \(K_0(\mathcal {E})\). Therefore \(q_{v_1'} \sim q_{w_1'}\).

Let u, v and w be partial isometries realizing the Murray-von Neumann equivalences. Then \(z_0=u+v+w\) is a unitary that satisfies the required conditions. \(\square \)

Theorem 4.5

Let E be a graph and let u be a vertex of E. Then \(C^{*}(E_{u,-}) \cong C^{*}(E_{u,--})\).

Proof

As above, we let \(\mathcal {E}\) denote the \(C^*\)-algebra \(C^*(\mathbf {E}_{*}, \{v_2\})\), and we choose an isomorphism between \(\mathcal {E}\) and \(C^*(\mathbf {E}_{**}, \{ w_2,w_3,w_4 \})\), which exists according to Proposition 4.3.

Since \(C^*(E_{u,-})\) and \(\mathcal {E}\) are separable, nuclear \(C^*\)-algebras, by the Kirchberg Embedding Theorem [22], there exists an injective \(^*\)-homomorphism

$$\begin{aligned} C^*(E_{u,-}) \oplus \mathcal {E} \hookrightarrow \mathcal {O}_2. \end{aligned}$$

We will suppress this embedding in our notation.

In \(\mathcal {O}_2\), we denote the vertex projections and the partial isometries coming from \(C^*(E_{u,-})\) by \(p_v, v\in E_{u,-}^0\) and \(s_e,e\in E_{u,-}^1\), respectively, and we denote the vertex projections and the partial isometries coming from \(\mathcal {E}=C^*(\mathbf {E}_*, \{v_2\})\) by \(p_1,p_2\) and \(s_1, s_2, s_3, s_4\), respectively. Since we are dealing with an embedding, it follows from Szymański’s Generalized Cuntz–Krieger Uniqueness Theorem ([28, Theorem 1.2]) that for any vertex-simple cycle \(\alpha _1 \alpha _2 \cdots \alpha _n\) in \(E_{u,-}\) without any exit, we have that the spectrum of \(s_{\alpha _1} s_{\alpha _2} \cdots s_{\alpha _n}\) contains the entire unit circle.

We will define a new Cuntz–Krieger \(E_{u,-}\)-family. We let

$$\begin{aligned} q_v&=p_v \quad \text {for each }v\in E^0, \\ q_{v_1}&=p_1, \\ q_{v_2}&=p_2. \end{aligned}$$

Since any two nonzero projections in \(\mathcal {O}_2\) are Murray-von Neumann equivalent, we can choose partial isometries \(x_1, x_2 \in \mathcal {O}_2\) such that

$$\begin{aligned} x_1 x_1^*&= s_{d_1} s_{d_1}^*&x_1^* x_1&= p_1 \\ x_2 x_2^*&= p_1 - (s_1 s_1^* + s_2 s_2^*)&x_2^* x_2&= p_u. \end{aligned}$$

We let

$$\begin{aligned} t_e&=s_e \quad \text {for each } e\in E^1, \\ t_{e_i}&=s_i \quad \text {for } i=1,2,3,4, \\ t_{d_1}&=x_1, \qquad t_{d_2}=x_2. \end{aligned}$$

By construction \(\{ q_v \mid v\in E_{u,-}^0 \}\) is a set of mutually orthogonal projections, and \(\{ t_e \mid e\in E_{u,-}^1 \}\) is a set of partial isometries. Furthermore, by the choice of \(t_e\) for \(e\in E^1\sqcup \mathbf {E}_*^1\), the relations are clearly satisfied at all vertices other than \(v_1\) and u. The choice of \(x_1, x_2\) ensures that the relations hold at u and \(v_1\) as well. Hence \(\{q_v, t_e\}\) does indeed form a Cuntz–Krieger \(E_{u,-}\)-family. Denote this family by \(\mathcal {S}\).

Using the universal property of graph \(C^*\)-algebras, we get a \(*\)-homomorphism from \(C^*(E_{u,-})\) onto \(C^*(\mathcal {S}) \subseteq \mathcal {O}_2\). Let \(\alpha _1 \alpha _2 \cdots \alpha _n\) be a vertex-simple cycle in \(E_{u,-}\) without any exit. Since u is where the Cuntz splice is glued on, no vertex-simple cycle without any exit uses edges connected to \(u, v_1\) or \(v_2\). Hence \(t_{\alpha _1} t_{\alpha _2} \cdots t_{\alpha _n} = s_{\alpha _1} s_{\alpha _2} \cdots s_{\alpha _n}\) and so its spectrum contains the entire unit circle. It now follows from [28, Theorem 1.2] that the \(^*\)-homomorphism from \(C^*(E_{u,-})\) to \(C^*(\mathcal {S})\) is in fact a \(^*\)-isomorphism.

Let \(\mathfrak {A} _0\) be the \(C^*\)-algebra generated by \(\{ p_v, s_e \mid v\in E^0, e\in E^1 \}\), and let \(\mathfrak {A}\) be the subalgebra of \(\mathcal {O}_2\) generated by \(\mathfrak {A} _0\) and \(\mathcal {E}\). Note that \(\mathfrak {A} =\mathfrak {A} _0\oplus \mathcal {E}\).

Let us denote by \(r_{w_1}, r_{w_2}, r_{w_3}, r_{w_4}\) and \(y_{f_1}, y_{f_2}, \ldots , y_{f_{10}}\) the image of the canonical generators of \(C^*(\mathbf {E}_{**}, \{ w_2,w_3,w_4 \})\) in \(\mathcal {O}_2\) under the chosen isomorphism between \(C^*(\mathbf {E}_{**}, \{ w_2,w_3,w_4 \})\) and \(\mathcal {E}\) composed with the embedding into \(\mathcal {O}_2\).

By Lemma 4.4, certain projections in \(\mathcal {E}\) are Murray-von Neumann equivalent, so choose a unitary \(z_0 \in \mathcal {E}\) according to this lemma, and set \(z=z_0+\sum _{v\in E^0}p_v \in \mathcal {M}(\mathfrak {A})\). Clearly z is a unitary in \(\mathcal {M}( \mathfrak {A})\). Since the approximate identity of \(\mathfrak {A} \) given by

$$\begin{aligned} \left\{ \sum _{ k = 1 }^n p_{v_k} + 1_\mathcal {E} \right\} _{n \in \mathbb {N}}, \end{aligned}$$

where \(\{ v_k \mid k\in \mathbb {N} \}\) is an enumeration of \(E^0\), is an approximate identity of \(C^*( \mathcal {S} )\), we have a canonical unital \(^*\)-homomorphism from \(\mathcal {M} ( \mathfrak {A})\) to \(\mathcal {M} ( C^* (\mathcal {S} ) )\) which, when restricted to \(\mathfrak {A} \), gives the embedding of \(\mathfrak {A} \) into \(C^*( \mathcal {S})\). So we can consider z as a unitary in \(\mathcal {M} ( C^* ( \mathcal {S} ) )\). Hence, for all \(x \in C^*(\mathcal {S})\), we have that zx and xz are elements of \(C^* ( \mathcal {S} )\). By construction of z, we have that

$$\begin{aligned} zq_{v_1}z^* = zp_1z^*&= r_{w_1}, \\ zq_{v_2}z^* = zp_2z^*&= r_{w_2}+r_{w_3}+r_{w_4}, \\ z\left( s_1 s_1^* + s_2 s_2^*\right) z^*&= y_{f_1} y_{f_1}^* + y_{f_2} y_{f_2}^* + y_{f_5} y_{f_5}^*, \\ z\left( p_1 - \left( s_1 s_1^* + s_2 s_2^* \right) \right) z^*&= r_{w_1} - \left( y_{f_1} y_{f_1}^* + y_{f_2} y_{f_2}^* + y_{f_5} y_{f_5}^*\right) , \end{aligned}$$

while \(zp_vz^* = p_v\) for every \(v\in E^0\).
We will now define a Cuntz–Krieger \(E_{u,--}\)-family in \(\mathcal {O}_2\). We let

$$\begin{aligned} P_v&=p_v \quad \text {for each } v\in E^0, \\ P_{w_i}&=r_{w_i} \quad \text {for } i=1,2,3,4. \end{aligned}$$

Moreover, we let

$$\begin{aligned} S_e&=s_e \quad \text {for each } e\in E^1, \\ S_{f_i}&=y_{f_i} \quad \text {for } i=1,2,\ldots ,10, \\ S_{d_1}&=x_1z^*, \qquad S_{d_2}=zx_2. \end{aligned}$$

Denote this family by \(\mathcal {T}\).

By construction \(\{ P_v \mid v\in E_{u,--}^0 \}\) is a set of mutually orthogonal projections, and \(\{ S_e \mid e\in E_{u,--}^1 \}\) is a set of partial isometries satisfying

$$\begin{aligned} S_e^*S_e&=s_e^*s_e=p_{r(e)},&S_eS_e^*&=s_es_e^*, \\ S_{f_i}^*S_{f_i}&=y_{f_i}^*y_{f_i}=r_{f_i},&S_{f_i}S_{f_i}^*&=y_{f_i}y_{f_i}^*, \\ S_{d_1}^*S_{d_1}&=r_{w_1},&S_{d_1}S_{d_1}^*&=s_{d_1}s_{d_1}^*, \\ S_{d_2}^*S_{d_2}&=p_u,&S_{d_2}S_{d_2}^*&=r_{w_1}-\left( y_{f_1}y_{f_1}^*+y_{f_2}y_{f_2}^*+y_{f_5}y_{f_5}^*\right) , \end{aligned}$$

for all \(e\in E^1\) and \(i=1,2,\ldots ,10\). From this, it is clear that \(\mathcal {T}\) will satisfy the Cuntz–Krieger relations at all vertices in \(E^0\). Similarly, we see that since \(\{ r_{w_i}, y_{f_j} \}\) is a Cuntz–Krieger \((\mathbf {E}_{**}, \{ w_2,w_3,w_4 \})\)-family, \(\mathcal {T}\) will satisfy the relations at the vertices \(w_2, w_3, w_4\). It only remains to check the summation relation at \(w_1\); for that we compute

$$\begin{aligned} \smash {\sum _{s_{E_{u,--}}(e) = w_1} S_e S_e^*}&= S_{f_1} S_{f_1}^* + S_{f_2} S_{f_2}^* + S_{f_5} S_{f_5}^* + S_{d_2} S_{d_2}^* \\&= y_{f_1} y_{f_1}^* + y_{f_2} y_{f_2}^* +y_{f_5} y_{f_5}^* + r_{w_1}-\left( y_{f_1}y_{f_1}^*+y_{f_2}y_{f_2}^*+y_{f_5}y_{f_5}^*\right) \\&= r_{w_1} = P_{w_1}. \end{aligned}$$

Hence \(\mathcal {T}\) is a Cuntz–Krieger \(E_{u,--}\)-family.

The universal property of \(C^*(E_{u,--})\) provides a surjective \(^*\)-homomorphism from \(C^*(E_{u,--})\) to \(C^*(\mathcal {T}) \subseteq \mathcal {O}_2\). Let \(\alpha _1 \alpha _2 \cdots \alpha _n\) be a vertex-simple cycle in \(E_{u,--}\) without any exit. We see that all the edges \(\alpha _i\) must be in \(E^1\), and hence we have

$$\begin{aligned} S_{\alpha _1} S_{\alpha _2} \cdots S_{\alpha _n} = t_{\alpha _1}t_{\alpha _2} \cdots t_{\alpha _n} = s_{\alpha _1} s_{\alpha _2} \cdots s_{\alpha _n} \end{aligned}$$

and so its spectrum contains the entire unit circle. It now follows from [28, Theorem 1.2] that \(C^*(E_{u,--})\) is isomorphic to \(C^*(\mathcal {T})\).

Recall that \(z \in \mathcal {M}( C^* ( \mathcal {S} ) )\). Therefore, \(\mathcal {T} \subseteq C^*(\mathcal {S})\) since \(\mathfrak {A} \subseteq C^*(\mathcal {S})\) and since \(r_{w_i}, y_{f_j}\in \mathcal {E} \subseteq C^*(\mathcal {S})\), for \(i=1,2,3,4\), \(j = 1,2,\ldots , 10\). So \(C^*(\mathcal {T}) \subseteq C^*(\mathcal {S})\).

Since the approximate identity of \(\mathfrak {A} \) given by

$$\begin{aligned} \left\{ \sum _{ k = 1 }^n p_{v_k} + 1_\mathcal {E} \right\} _{n \in \mathbb {N}}, \end{aligned}$$

where \(\{ v_k \mid k\in \mathbb {N} \}\) is an enumeration of \(E^0\), is an approximate identity of \(C^*( \mathcal {T} )\), we get that for all \(x \in C^*(\mathcal {T})\), \(z x z^*\) and \(z^* x z\) are elements of \(C^* ( \mathcal {T} )\). But since \(\mathfrak {A}\) is also contained in \(C^*(\mathcal {T})\) and \(\mathcal {E} \subseteq C^*(\mathcal {T})\), we have that \(\mathcal {S} \subseteq C^*(\mathcal {T})\), and hence \(C^*(\mathcal {S}) \subseteq C^*(\mathcal {T})\). Therefore

$$\begin{aligned} C^*(E_{u,-}) \cong C^*(\mathcal {S}) = C^*(\mathcal {T}) \cong C^*(E_{u,--}). \end{aligned}$$

\(\square \)

The next two results will show that \(E\sim _{ME} E_{u,--}\) for a row-finite graph E and a vertex \(u \in E^0\) which supports two distinct return paths. This will be enough to prove our main result since by [4, Lemma 5.1], there exists a row-finite graph F and a vertex v supporting two distinct return paths such that \(C^* ( E_{u, - } ) \otimes \mathbb {K} \cong C^* ( F_{v, - } ) \otimes \mathbb {K} \) and \(C^* ( E ) \otimes \mathbb {K} \cong C^* ( F ) \otimes \mathbb {K} \).

Proposition 4.6

Let E be a row-finite graph and let u be a vertex which supports two distinct return paths. Then there exists a row-finite graph F and a vertex \(v \in F^0\) which supports two distinct loops such that \(E\sim _{ME} F\) and \(E_{u,--}\sim _{ME} F_{v,--}\).

Proof

Throughout the proof, we will freely use the following fact: Let G be a graph, let w be a vertex, and let \(w' \ne w\) be a regular vertex that does not support a loop. Let \(G'\) be the resulting graph after collapsing \(w'\). Then \(G\sim _{ME} G'\) and \(G_{w,--}\sim _{ME} G'_{w,--}\).

Suppose \(u \in E^0\) supports two loops. Then set \(F = E\) and \(v = u\). Suppose u does not support two loops. Then there exists a return path \(\mu = e_1 e_2 \cdots e_n\) with \(n \ge 2\). Starting at \(r(e_1)\), if \(r( e_1 )\) does not support a loop, we collapse the vertex \(r( e_1)\). Doing this will result in reducing the length of \(\mu \). Note that we may have also added a loop at u. Continuing this procedure, we have obtained a graph \(E'\) with u in \((E')^0\) such that \(E\sim _{ME} E'\), \(E_{u,--}\sim _{ME} E'_{u,--}\), and either u supports two loops or u supports a return path \(\nu = f_1 f_2 \dots f_m\) with \(m \ge 2\), with \(r( f_1 )\) supporting a loop.

If u supports two loops, set \(F = E'\) and \(v = u\). Suppose u supports a return path \(\nu = f_1 f_2 \dots f_m\) with \(m \ge 2\), with \(r( f_1 )\) supporting a loop. Then by Proposition 3.2, we add the \(r( f_1 )\)'th column to the u'th column twice, to get a graph F with \(u \in F^0\) supporting two loops such that \(E'\sim _{ME} F\). Note that we may perform the same matrix operations to \(\mathsf {B}_{E'_{u, -- } }\) and get that \(E'_{u,--}\sim _{ME} F_{u,--}\). Set \(v = u\).

We have just obtained the desired graph F and the desired vertex \(v \in F^0\) since \(E\sim _{ME} E'\sim _{ME} F\) and \(E_{u,--}\sim _{ME} E'_{u,--}\sim _{ME} F_{v,--}\).\(\square \)

We now show that performing the Cuntz splice twice is a legal move for a row-finite graph.

Proposition 4.7

Let E be a row-finite graph and let v be a vertex that supports at least two distinct return paths. Then \(E\sim _{ME} E_{v,--}\).

Proof

According to Proposition 4.6, we may replace E and v by the graph F and the vertex provided there; so we assume that v is a regular vertex that supports at least two loops.

For a given matrix size \(N \in \mathbb {N} \cup \{ \infty \}\) and \(i,j\in \{1,2,\ldots ,N\}\), we let \(E_{(i,j)}\) denote the \(N\times N\) matrix that is equal to the identity matrix everywhere except for the (ij)’th entry, which is 1. If B is an \(N\times N\) matrix, then \(E_{(i,j)}B\) is the matrix obtained from B by adding the j’th row into the i’th row, and \(BE_{(i,j)}\) is the matrix obtained from B by adding the i’th column into the j’th column. Using \(E_{(i,j)}^{-1}\) instead will yield subtraction. In what follows we will make extensive use of the results from Sect. 3, but we do so implicitly since we feel it will only muddle the exposition if we add all the references in.
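To illustrate the notation (the snippet is ours and is not part of the argument):

```python
import numpy as np

def E_(n, i, j, sign=1):
    """The N x N matrix E_(i,j) (1-indexed); sign = -1 gives its inverse."""
    M = np.eye(n, dtype=int)
    M[i - 1, j - 1] += sign
    return M

B = np.array([[1, 2],
              [3, 4]])
print(E_(2, 1, 2) @ B)   # row 2 added into row 1:       [[4 6] [3 4]]
print(B @ E_(2, 1, 2))   # column 1 added into column 2: [[1 3] [3 7]]
```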

Note that \(\mathsf {B}_{E_{v,--}}\) can be written as

$$\begin{aligned} B_1= \begin{pmatrix} \begin{pmatrix} 0 &{} 1 &{} 0 &{} 0 \\ 1 &{} 0 &{} 1 &{} 0 \\ 0 &{} 1 &{} 0 &{} 1 \\ 0 &{} 0 &{} 1 &{} 0 \end{pmatrix} &{} \begin{pmatrix} 0 &{} 0 &{} \cdots \\ 0 &{} 0 &{} \cdots \\ 1 &{} 0 &{} \cdots \\ 0 &{} 0 &{} \cdots \end{pmatrix} \\ \begin{pmatrix} 0 &{} 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0\\ \vdots &{}\vdots &{}\vdots &{}\vdots \\ \end{pmatrix}&\mathsf {B}_E \end{pmatrix}. \end{aligned}$$

Now let \(B_2=E_{(3,2)}B_1\) and \(B_3=B_2E_{(2,1)}^{-1}\). Then \(\mathsf {E}_{B_1+I}\sim _{ME} \mathsf {E}_{B_3+I}\). We have that

$$\begin{aligned} B_3 = \begin{pmatrix} \begin{pmatrix} -1&{} 1 &{} 0 &{} 0 \\ 1 &{} 0 &{} 1 &{} 0 \\ 0 &{} 1 &{} 1 &{} 1 \\ 0 &{} 0 &{} 1 &{} 0 \end{pmatrix} &{} \begin{pmatrix} 0 &{} 0 &{} \cdots \\ 0 &{} 0 &{}\cdots \\ 1 &{} 0 &{} \cdots \\ 0 &{} 0 &{} \cdots \end{pmatrix} \\ \begin{pmatrix} 0 &{} 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 \\ \vdots &{}\vdots &{}\vdots &{}\vdots \\ \end{pmatrix}&\mathsf {B}_E \end{pmatrix}. \end{aligned}$$

The 1st vertex in \(\mathsf {E}_{B_3 + I}\) does not support a loop, so it can be collapsed yielding

$$\begin{aligned} B_4 = \begin{pmatrix} \begin{pmatrix} 1 &{} 1 &{} 0 \\ 1 &{} 1 &{} 1 \\ 0 &{} 1 &{} 0 \end{pmatrix} &{} \begin{pmatrix} 0 &{} 0 &{} \cdots \\ 1 &{} 0 &{} \cdots \\ 0 &{} 0 &{} \cdots \end{pmatrix} \\ \begin{pmatrix} 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 0 \\ \vdots &{}\vdots &{}\vdots \\ \end{pmatrix}&\mathsf {B}_E \end{pmatrix} \end{aligned}$$

with \(\mathsf {E}_{B_3+I}\sim _{ME} \mathsf {E}_{B_4+I}\). Now we let \(B_5=E_{(2,3)}^{-1}B_4\), \(B_6=E_{(4,1)}B_5\), \(B_7=E_{(4,3)}^{-1}E_{(4,3)}^{-1}B_6\), \(B_8=E_{(1,2)}B_7\) and \(B_9=B_8E_{(2,3)}^{-1}\). We then have \(\mathsf {E}_{B_4+I}\sim _{ME} \mathsf {E}_{B_9+I}\). We have that

$$\begin{aligned} B_9 = \begin{pmatrix} \begin{pmatrix} 2 &{} 1 &{} 0 \\ 1 &{} 0 &{} 1 \\ 0 &{} 1 &{} -1 \end{pmatrix} &{} \begin{pmatrix} 1 &{} 0 &{} \cdots \\ 1 &{} 0 &{} \cdots \\ 0 &{} 0 &{} \cdots \end{pmatrix} \\ \begin{pmatrix} 1 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 \\ \vdots &{}\vdots &{}\vdots \\ \end{pmatrix}&\mathsf {B}_E \end{pmatrix}. \end{aligned}$$

In \(\mathsf {E}_{B_9 + I}\) the 3rd vertex does not support a loop, so it can be collapsed to yield

$$\begin{aligned} B_{10} = \begin{pmatrix} \begin{pmatrix} 2 &{} 1 \\ 1 &{} 1 \end{pmatrix} &{} \begin{pmatrix} 1 &{} 0 &{} \cdots \\ 1 &{} 0 &{} \cdots \\ \end{pmatrix} \\ \begin{pmatrix} 1 &{} 0 \\ 0 &{} 0 \\ \vdots &{}\vdots \\ \end{pmatrix}&\mathsf {B}_E \end{pmatrix} \end{aligned}$$

with \(\mathsf {E}_{B_9+I}\sim _{ME} \mathsf {E}_{B_{10}+I}\).

Now we look at the graph E again, and let \(\mathsf {B}_E = (b_{ij})\). Since the vertex v (number 1) has at least two loops, we have \(b_{11}\ge 1\). Now we can insplit by partitioning \(r^{-1}(v)\) into two sets, one with a single edge consisting of a loop based at v, and the other the rest. In the resulting graph, v is split into two vertices \(v^1\) and \(v^2\), and we let \(E'\) denote the rest of the graph. The vertex \(v^1\) has the same edges in and out of \(E'\) as v had, but it has only \(b_{11}\) loops. There is one edge from \(v^1\) to \(v^2\), the vertex \(v^2\) has one loop, and there are \(b_{11}\) edges from \(v^2\) to \(v^1\) as well as all the same edges going from \(v^2\) into \(E'\) as originally from v. Use the inverse collapse move to add a new vertex u to the middle of the edge from \(v^1\) to \(v^2\) and call the resulting graph F. Label the vertices such that \(v^2\), u and \(v^1\) are the 1st, 2nd and 3rd vertices; then \(\mathsf {B}_F\) is:

$$\begin{aligned} \mathsf {B}_F = \begin{pmatrix} \begin{pmatrix} 0 &{} 0 \\ 1 &{} -1 \end{pmatrix} &{} \begin{pmatrix} b_{11} &{} b_{12} &{} \cdots \\ 0 &{} 0 &{} \cdots \\ \end{pmatrix} \\ \begin{pmatrix} 0 &{} 1 \\ 0 &{} 0 \\ \vdots &{}\vdots \\ \end{pmatrix}&\widetilde{B} \end{pmatrix} \end{aligned}$$

where \(\widetilde{B}\) is \(\mathsf {B}_E\) except for the (1, 1)'th entry, which is \(b_{11}-1\). Note that \(b_{11}-1\ge 0\), so that there is still a loop based at the 3rd vertex. Also, note that since E is a row-finite graph, the \(b_{1k}\)'s are eventually zero. This is important since it allows us to do the following matrix manipulations. Let \(C_2=\mathsf {B}_F E_{(1,2)}E_{(1,2)}\), \(C_3=E_{(1,2)}C_2\), \(C_4=E_{(1,3)}^{-1}C_3\), \(C_5=C_4E_{(2,3)}\) and \(C_6=C_5E_{(1,2)}\). We have that \(F\sim _{ME} \mathsf {E}_{C_6+I}\). The matrix \(C_6\) is

$$\begin{aligned} C_6 = \begin{pmatrix} \begin{pmatrix} 1 &{} 1 \\ 1 &{} 2 \end{pmatrix} &{} \begin{pmatrix} 1 &{} 0 &{} \cdots \\ 1 &{} 0 &{} \cdots \\ \end{pmatrix} \\ \begin{pmatrix} 0 &{} 1 \\ 0 &{} 0 \\ \vdots &{}\vdots \\ \end{pmatrix}&\mathsf {B}_E \end{pmatrix}. \end{aligned}$$

Therefore, \(C_6\) is in fact equal to \(B_{10}\) upon relabeling of the first two vertices, and thus it follows that \(E\sim _{ME} E_{v,--}\). \(\square \)
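As a sanity check of the matrix manipulations above (ours, and carried out only for the simplest possible input), the following NumPy script runs both chains of operations for the graph E consisting of a single vertex with two loops, so that \(\mathsf {B}_E=(1)\) and \(b_{11}=1\), and confirms that \(C_6\) agrees with \(B_{10}\) after interchanging the first two vertices. The collapse steps use the adjacency-matrix description of Move (Col), \(\mathsf {A}'(x,y)=\mathsf {A}(x,y)+\mathsf {A}(x,w)\mathsf {A}(w,y)\):

```python
import numpy as np

def E_(n, i, j, sign=1):
    """The elementary matrix E_(i,j) (1-indexed); sign = -1 gives its inverse."""
    M = np.eye(n, dtype=int)
    M[i - 1, j - 1] += sign
    return M

def collapse(B, k):
    """Collapse the k'th vertex (1-indexed) of E_{B+I}; it must not support a loop."""
    A = B + np.eye(len(B), dtype=int)
    assert A[k - 1, k - 1] == 0
    keep = [i for i in range(len(B)) if i != k - 1]
    Anew = A[np.ix_(keep, keep)] + np.outer(A[keep, k - 1], A[k - 1, keep])
    return Anew - np.eye(len(keep), dtype=int)

# E: one vertex with two loops, so B_E = (1) and b_11 = 1.
b11 = 1

# B_1 = B_{E_{v,--}} (the four Cuntz splice vertices listed first, v last).
B1 = np.array([[0, 1, 0, 0, 0],
               [1, 0, 1, 0, 0],
               [0, 1, 0, 1, 1],
               [0, 0, 1, 0, 0],
               [0, 0, 1, 0, 1]])

B3 = E_(5, 3, 2) @ B1 @ E_(5, 2, 1, -1)
B4 = collapse(B3, 1)
B9 = (E_(4, 1, 2) @ E_(4, 4, 3, -1) @ E_(4, 4, 3, -1) @ E_(4, 4, 1)
      @ E_(4, 2, 3, -1) @ B4 @ E_(4, 2, 3, -1))
B10 = collapse(B9, 3)

# B_F as displayed above, with vertex order (v^2, u, v^1); here E' is empty.
BF = np.array([[0, 0, b11],
               [1, -1, 0],
               [0, 1, b11 - 1]])

C6 = (E_(3, 1, 3, -1) @ E_(3, 1, 2)
      @ BF @ E_(3, 1, 2) @ E_(3, 1, 2) @ E_(3, 2, 3) @ E_(3, 1, 2))

# Relabel the first two vertices of C_6 and compare with B_10.
P = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]])
assert np.array_equal(P @ C6 @ P, B10)
print(B10)
```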

Thus we have the following fundamental result.

Theorem 4.8

Let E be a graph and let v be a vertex that supports at least two distinct return paths. Then \(C^*(E)\otimes \mathbb {K} \cong C^*(E_{v,-})\otimes \mathbb {K} \).

Proof

By [4, Lemma 5.1], we may assume that E is a graph with no singular vertices, in particular, E is a row-finite graph. By Theorem 4.5, \(C^{*} ( E_{v, - } ) \cong C^{*} ( E_{v, - - } )\) and hence, \(C^{*} ( E_{v, -} ) \otimes \mathbb {K} \cong C^{*} ( E_{v, - -}) \otimes \mathbb {K} \). By Proposition 4.7, \(C^{*} (E) \otimes \mathbb {K} \cong C^{*} ( E_{v, - - } ) \otimes \mathbb {K} \). Thus, \(C^*(E)\otimes \mathbb {K} \cong C^*(E_{v,-})\otimes \mathbb {K} \). \(\square \)