1 Introduction

Let G be a simple graph with vertex set V(G) and edge set E(G). We call G a graph of order n and size m if \(|V(G)|=n\) and \(|E(G)|=m\). Denote by \(\overline{G}\) the complement of G. In this paper, the path of order n is denoted by \(P_n\), the complete graph of order n by \(K_n\), and the complete bipartite graph with parts M and N of sizes m and n by \(K_{m,n}\). Let \(V(G)=\{v_1,\ldots , v_n\}\). For \(i=1,2,\ldots ,n,\) we denote by \(d_i\) the degree of \(v_i\) and set \(\Delta (G)=\max _{1 \le i \le n} d_i(G).\) In addition, G is called (a, b)-semi-regular if \(d_i=a\) or \(d_i=b\) for \(i=1,\ldots , n.\)

The adjacency matrix of G, denoted by \(A(G)=(a_{ij}),\) is defined by \(a_{ij}=1 \) if \(v_i\) and \(v_j\) are adjacent and \(a_{ij}=0\) otherwise. The characteristic polynomial and eigenvalues of G are the characteristic polynomial and eigenvalues of A(G), respectively. As A(G) is a real symmetric matrix, all eigenvalues of G are real numbers and thus can be ordered as \(\lambda _1(G) \ge \lambda _2(G)\ge \cdots \ge \lambda _n(G),\) with \(\lambda _1(G)\) and \(\lambda _n(G)\) being the largest and the smallest eigenvalue of G, respectively. See [2, 3, 19] for more details. The eigenvalues of G together with their algebraic multiplicities give the spectrum of G, denoted by \( {\textrm{Spec}}(G)= \begin{pmatrix} \lambda _1 & \lambda _2 & \cdots & \lambda _n \\ a_1 & a_2 & \cdots & a_n \end{pmatrix}, \) where \(a_i\) is the algebraic multiplicity of \(\lambda _i\). The energy of G is defined as

$$\begin{aligned} {\mathcal E}(G)&=\sum ^n_{i=1} |\lambda _i(G)|, \quad (\text {cf. [6]}). \end{aligned}$$

In the early 1970s, graph theory was found to have important applications in the calculation of the energies of electrons in molecules [16, 17]. This initiated the concept of the energy of simple graphs, introduced by Gutman in 1978 [6], which has greatly advanced the study of graphs and graph energy from 1995 onward [5, 6, 9,10,11,12, 15, 18, 20].

The graph obtained from G by attaching a self-loop at each vertex in \(S\subseteq V(G)\) is called the self-loop graph of G at S, denoted by \(G_S\). Generalizing the definition of A(G), we set \(A(G_S)=J_S+A(G),\) where \((J_S)_{i,j}=1\) if \(i=j\) and \(v_i\in S\), and \((J_S)_{i,j}=0\) otherwise. As for G, the eigenvalues of \(G_S\) are the eigenvalues of \(A(G_S).\) Since \(A(G_S)\) is again real symmetric, its eigenvalues are real and can be ordered as before. The energy of \(G_S\) of order n with \(|S|=\sigma \) is defined as

$$\begin{aligned} {\mathcal E}(G_S)&=\sum ^n_{i=1} \left| \lambda _i(G_S)-\frac{\sigma }{n}\right| , \quad (\text {cf. [13]}). \end{aligned}$$

On some occasions, we also denote a self-loop graph by G and write \(G_0\) for the ordinary graph obtained from G by removing all its self-loops. When \(\sigma =n,\) we write \(G_S\) as \(\widehat{G}.\)

Self-loop graphs have been shown to play a significant role in the mathematical study of heteroconjugated molecules [7, 8, 16]. Recently, in 2022, Gutman et al. introduced the concept of the energy of self-loop graphs in [13]. The study of the energy of self-loop graphs is still very new, with results appearing in only two papers [13, 14]. In [13], the following results were proved:

Theorem 1. Let G be a bipartite graph of order n, with vertex set V. Let S be a subset of V. Then, \({\mathcal E}(G_S)={\mathcal E}(G_{V\backslash S})\).

Theorem 2. Let \(G_S\) be a self-loop graph of order n, with m edges, and \(|S|=\sigma \). Let \(\lambda _1\ge \lambda _2 \ge \cdots \ge \lambda _n\) be its eigenvalues. Then \(\sum ^n_{i=1} \lambda _i^2 = 2m+\sigma .\)

Theorem 3. Let \(G_S\) be a self-loop graph of order n, with m edges, and \(|S|=\sigma \). Then

$$\begin{aligned} {\mathcal E}(G_S)\le \sqrt{n\left( 2m+\sigma -\frac{\sigma ^2}{n}\right) }. \end{aligned}$$

This paper consists of four sections of main results. Section 2 first completely determines \({\textrm{Spec}}((K_n)_S)\) and \({\textrm{Spec}}((K_{m,n})_S)\) for all \(n,m \ge 1\) using Theorem 2. The results on \({\textrm{Spec}}((K_n)_S)\) are then used to completely characterize those \(G_S\) with only positive or non-negative eigenvalues, as well as those with few distinct eigenvalues. In Sect. 3, for bipartite G, a necessary and sufficient condition relating the eigenvalues of \(G_{V\backslash S}\) to those of \(G_S\) via the map \(\lambda \mapsto 1-\lambda \) is obtained, and this gives a simplified proof of Theorem 1. We also show that \({\mathcal E}(G_S)\ge {\mathcal E}(G)\) when G is bipartite, and a conjecture is given. In Sect. 4, an alternative proof of Theorem 3 is given using the Cauchy-Schwarz inequality. This approach leads to a result on semi-regular graphs with self-loops. An upper bound for \(\lambda _1(G_S)\) analogous to a classical bound for \(\lambda _1(G)\) in terms of \(\Delta (G)\) is obtained, and the existence of certain semi-regular graphs is shown when this upper bound is attained.

2 Some Characterizations of Self-loop Graphs by Their Eigenvalues

In this section, we aim to provide some characterizations of self-loop graphs with positive and non-negative eigenvalues, as well as those with few distinct eigenvalues. Before that, an identification of \({\textrm{Spec}}((K_n)_S)\) and \({\textrm{Spec}}((K_{m,n})_S)\) is needed for arbitrary S. The results we obtain generalise the classical spectra \({\textrm{Spec}}(K_n)\) and \({\textrm{Spec}}(K_{m,n})\), which correspond to \(\sigma =0.\)

The self-loop spectrum characterizations for both \(K_n\) and \(K_{m,n}\) are technical and require careful examination of several cases. As a rule of thumb, with G being \(K_n\) or \(K_{m,n},\) the steps to determine \({\textrm{Spec}}(G_S)\) are as follows.

Step 1. Determination of the multiplicity of eigenvalue 0.

Determine the rank of \(A(G_S)\) through each of its submatrices. By the Rank-Nullity Theorem, this yields the nullity of \(A(G_S),\) which is the multiplicity \(m_0\) of the eigenvalue 0.

Step 2. Determination of multiplicity of eigenvalue 1 (for \((K_n)_S\)) or \(-1\) (for \((K_{m,n})_S\)).

Repeat Step 1 for the matrix \(A(G_S)-I_n\) or \(A(G_S)+I_n\) to obtain the multiplicity \(m_1\) of the eigenvalue 1 or \(-1,\) respectively.

Step 3. Determination of the remaining eigenvalues and their multiplicities.

Find the remaining \(n-m_0-m_1\) eigenvalues by appropriate methods. If only two eigenvalues \(\lambda _1(G_S),\lambda _2(G_S)\) are left, then they can be obtained by solving the simultaneous equations in Lemma 2.1 below.

Lemma 2.1

[13] Let \(G_S\) be a self-loop graph of order n with m edges and \(|S|=\sigma .\) Let \(\lambda _1(G_S),\ldots ,\lambda _n(G_S)\) be its eigenvalues. Then,

  1. (i)

    \(\displaystyle \sum ^n_{i=1} \lambda _i(G_S)=\sigma ,\)

  2. (ii)

    \(\displaystyle \sum ^n_{i=1} \lambda ^2_i (G_S)=2m+\sigma .\)
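The two trace identities above can be checked numerically. The following minimal sketch (the choice of graph, a path \(P_4\) with loops attached at its first and third vertices, is ours) verifies both with numpy.

```python
import numpy as np

# A(G_S) for P_4 with loops at v_1 and v_3: sigma = 2 loops, m = 3 edges.
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 0]], dtype=float)
eigs = np.linalg.eigvalsh(A)      # real eigenvalues of the symmetric A(G_S)

sigma, m = 2, 3
assert np.isclose(eigs.sum(), sigma)             # (i)  sum of eigenvalues = sigma
assert np.isclose((eigs**2).sum(), 2*m + sigma)  # (ii) sum of squares = 2m + sigma
```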

For \((K_{m,n})_S,\) there are occasions where we need to determine three eigenvalues \(\lambda _i((K_{m,n})_S),\) \(i=1,2,3.\) For this, an additional formula for \(\sum _i \lambda ^3_i((K_{m,n})_S)\) will be introduced; see Lemma 2.3.

Without loss of generality, let \(S=\{1,2,\ldots , \sigma \}.\) Let \(J_{k\times \ell }\) denote the \(k \times \ell \) matrix whose entries are all 1 and \(j_k\) the \(k \times 1\) vector with all entries 1.

2.1 Identification of \({\textrm{Spec}}((K_n)_S)\)

In this subsection, \({\textrm{Spec}}((K_n)_S)\) is identified for arbitrary S according to \(\sigma .\)

Theorem 2.2

Let \((K_n)_S\) be the self-loop graph of \(K_n\) with \(|S|=\sigma .\) Then, \({\textrm{Spec}}((K_n)_S)\) is determined by the following three cases:

Case 1. \(\sigma =0.\) Then, \(A((K_n)_S)=A(K_n).\) This is the classical case where

$$\begin{aligned} {\textrm{Spec}}((K_n)_S) ={\textrm{Spec}}(K_n) = \begin{pmatrix} n-1 & -1 \\ 1 & n-1 \end{pmatrix}. \end{aligned}$$

Case 2. \(0<\sigma <n.\) Then,

$$\begin{aligned} A((K_n)_S)= \left[ \begin{array}{c|c} J_{\sigma \times \sigma } & J_{\sigma \times (n-\sigma )} \\ \hline J_{(n-\sigma ) \times \sigma } & J_{(n-\sigma ) \times (n-\sigma )} - I_{n-\sigma } \end{array} \right] = \left[ \begin{array}{c} B \\ \hline C \end{array} \right] . \end{aligned}$$

Clearly, \({\textrm{rank}}(B)=1.\) On the other hand, \(J_{(n-\sigma )\times (n-\sigma )} - I_{n-\sigma }\) is the adjacency matrix of \(K_{n-\sigma },\) which by Case 1 has no zero eigenvalue and is therefore invertible. Thus, the rows of C are linearly independent. We claim that no row of B, each being \(j_n^T\), is a linear combination of the rows of C. Let \(\alpha _i\) be the i-th row of the adjacency matrix of \(K_{n-\sigma }.\) Then, \(\{\alpha _1,\ldots , \alpha _{n-\sigma }\}\) is a basis for \({\mathbb R}^{n-\sigma }\) and \(j^T_{n-\sigma }=\sum ^{n-\sigma }_{i=1}\frac{1}{n-\sigma -1}\alpha _i.\) Hence, a linear combination of the rows of C agreeing with \(j_n^T\) in the last \(n-\sigma \) coordinates must use the coefficients \(\frac{1}{n-\sigma -1},\) but then its first \(\sigma \) coordinates equal \(\frac{n-\sigma }{n-\sigma -1}\ne 1.\) Thus,

$$\begin{aligned} {\textrm{rank}}(A((K_n)_S)) = n-\sigma +1, \quad {\textrm{null}}(A((K_n)_S))= \sigma -1. \end{aligned}$$

This completes Step 1.

Now, we consider

$$\begin{aligned} A((K_n)_S) + I_n = \left[ \begin{array}{c|c} J_{\sigma \times \sigma }+ I_\sigma & J_{\sigma \times (n-\sigma )} \\ \hline J_{(n-\sigma ) \times \sigma } & J_{(n-\sigma ) \times (n-\sigma )} \end{array} \right] = \left[ \begin{array}{c} B \\ \hline C \end{array} \right] . \end{aligned}$$

We have \({\textrm{rank}}(C)=1.\) Let \(B'=J_{\sigma \times \sigma } + I_\sigma .\) Then,

$$\begin{aligned} B'=J_{\sigma \times \sigma }+ I_\sigma = 2I_\sigma + A(K_\sigma ). \end{aligned}$$

The eigenvalues of \(B'\) are \(\sigma +1\) with multiplicity 1 and 1 with multiplicity \(\sigma -1\). Hence, \(B'\) is invertible and the rows of B are linearly independent. By an argument similar to that in Step 1, no row of C, each being \(j_n^T\), is a linear combination of the rows of B. Thus, we have

$$\begin{aligned} {\textrm{rank}}(A((K_n)_S) + I_n)=\sigma +1, \quad {\textrm{null}}(A((K_n)_S) +I_n) = n-\sigma -1. \end{aligned}$$

This completes Step 2.

For Step 3, note that \(n-(\sigma -1)-(n-\sigma -1)=2.\) By Lemma 2.1, we have

$$\begin{aligned} \lambda _1 ((K_n)_S) + \lambda _2 ((K_n)_S)&= n-1, \\ \lambda ^2_1 ((K_n)_S) + \lambda ^2_2 ((K_n)_S)&= n^2-2n+2\sigma +1. \end{aligned}$$

Solving this, we obtain

$$\begin{aligned} \lambda _{1,2}((K_n)_S)= \frac{(n-1)\pm \sqrt{(n-1)^2+4\sigma }}{2}. \end{aligned}$$

As a conclusion, we have

$$\begin{aligned} {\textrm{Spec}}((K_n)_S)= \begin{pmatrix} \frac{(n-1)+ \sqrt{(n-1)^2+4\sigma }}{2} & 0 & -1 & \frac{(n-1)- \sqrt{(n-1)^2+4\sigma }}{2} \\ 1 & \sigma -1 & n-\sigma -1 & 1 \end{pmatrix}. \end{aligned}$$
(2.1)

Case 3. \(\sigma =n.\) Let \(\widehat{K_n}=(K_n)_S.\) Then, \(A(\widehat{K_n}) = A(K_n) + I_n.\) Thus, we have

$$\begin{aligned} {\textrm{Spec}}(\widehat{K_n})= \begin{pmatrix} n & 0 \\ 1 & n-1 \end{pmatrix}. \end{aligned}$$
(2.2)
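Theorem 2.2 can be sanity-checked numerically. The sketch below (the parameters \(n=5\), \(\sigma =2\) are an arbitrary choice of ours) compares the eigenvalues of \(A((K_n)_S)\) with the formula in (2.1).

```python
import numpy as np

# Spectrum of (K_n)_S for n = 5 with loops at the first sigma = 2 vertices.
n, sigma = 5, 2
A = np.ones((n, n)) - np.eye(n)          # A(K_n)
A[:sigma, :sigma] += np.eye(sigma)       # attach loops at the first sigma vertices
eigs = np.sort(np.linalg.eigvalsh(A))[::-1]

r = np.sqrt((n - 1)**2 + 4*sigma)
expected = sorted([(n - 1 + r)/2, (n - 1 - r)/2]
                  + [0.0]*(sigma - 1) + [-1.0]*(n - sigma - 1), reverse=True)
assert np.allclose(eigs, expected)
```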

2.2 Identification of \({\textrm{Spec}}((K_{m,n})_S)\)

In this subsection, \({\textrm{Spec}}((K_{m,n})_S)\) is identified for arbitrary S according to \(\sigma .\) Before that, we first prove a crucial lemma.

Lemma 2.3

Let \(G=K_{m,n}\) with \(S=S_M \cup S_N \subseteq V(G),\) where \(S_M \subseteq M\) and \(S_N \subseteq N.\) Let \(|S_M|=\sigma _M,\) \(|S_N|=\sigma _N,\) and \(|S|=\sigma =\sigma _M+\sigma _N\). If \(\lambda _1(G_S), \ldots ,\lambda _{m+n}(G_S)\) are the eigenvalues of \(G_S,\) then

$$\begin{aligned} \sum ^{m+n}_{i=1} \lambda ^3_i(G_S) = 3(m \sigma _N + n\sigma _M)+\sigma . \end{aligned}$$

Proof

By [2, Lemma 2.5 & Result 2 h], \( \sum ^{m+n}_{i=1} \lambda ^3_i(G_S)\) equals the number of closed walks of length 3 in \(G_S.\) Since \(K_{m,n}\) is triangle-free, a closed walk of length 3 in \(G_S\) either traverses a loop three times or traverses an edge twice and a loop once. We write a vertex that carries a loop as \(\mathring{v},\) a vertex without a loop as \(\bar{v}\), and a loop as \(\ell .\) A direct count of the total number of closed walks of length 3 in \(G_S\) is as follows.

Case 1. Starting from a \(v \in M\) that is adjacent to \(u \in N\) via an edge e.

  1. (1)

    For \(\mathring{v},\) there are exactly two closed walks of length 3 that use the loop at \(\mathring{v}\): \(\mathring{v} \ell \mathring{v} e u e \mathring{v}\) and \(\mathring{v} e u e \mathring{v} \ell \mathring{v}.\) This gives a total of \(2n\sigma _M\) closed walks of length 3.

  2. (2)

    A closed walk that instead uses a loop at a neighbour has the form \(v e \mathring{u} \ell \mathring{u} e v\) with \(\mathring{u} \in S_N,\) and there is exactly one such walk for each pair \((v, \mathring{u}).\) This gives a total of \(m\sigma _N\) closed walks of length 3.

Thus, Case 1 gives a total of \(2n\sigma _M +m\sigma _N\) closed walks of length 3.

Case 2. Starting from \(u \in N\) that is adjacent to a vertex of M via an edge e. Similar to Case 1, there is a total of \(2m\sigma _N +n\sigma _M\) closed walks of length 3.

Case 3. Triple self-looping at \(\mathring{v} \in M\) or \(\mathring{u} \in N.\) There are \(\sigma \) triple self-loopings.

Thus, the total number of closed walks of length 3 is \(3(m\sigma _N+n\sigma _M)+\sigma .\) \(\square \)
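The counting argument above can be confirmed by computing \(\textrm{Tr}(A(G_S)^3)\) directly. This is a small sanity check (the parameters \(m=3\), \(n=2\), \(\sigma _M=2\), \(\sigma _N=1\) are an arbitrary choice of ours).

```python
import numpy as np

# K_{3,2} with loops at two vertices of M and one vertex of N.
m, n, sM, sN = 3, 2, 2, 1
A = np.zeros((m + n, m + n))
A[:m, m:] = 1.0; A[m:, :m] = 1.0         # edges of K_{m,n}
loops = list(range(sM)) + list(range(m, m + sN))
A[loops, loops] = 1.0                    # attach the loops
cube_sum = np.trace(np.linalg.matrix_power(A, 3))  # counts closed 3-walks
assert np.isclose(cube_sum, 3*(m*sN + n*sM) + (sM + sN))
```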

We are now ready to characterize the spectrum of \((K_{m,n})_S.\)

Theorem 2.4

Let \((K_{m,n})_S\) be the self-loop graph of \(K_{m,n}\) for \(S \subseteq V(K_{m,n})\) with \(|S|=\sigma .\) Assume that if \(0<\sigma \le m,\) then all loops lie in M, and if \(m<\sigma \le m+n,\) then there are m loops in M and \(\sigma -m\) loops in N. Then, the spectrum \({\textrm{Spec}}((K_{m,n})_S)\) is determined by the following five cases:

For convenience, we let \(G=K_{m,n}\). Recall that the adjacency matrix of G is given by

$$\begin{aligned} A(G) = \left[ \begin{array}{c|c} {\textbf {0}}_{m} & J_{m \times n} \\ \hline J_{n \times m} & {\textbf {0}}_{n} \end{array} \right] . \end{aligned}$$
(2.3)

Case 1. \(\sigma =0\). It is clear that \(A(G_S)=A(G)\) and so

$$\begin{aligned} {\textrm{Spec}}(G_S)={\textrm{Spec}}(K_{m,n})= \begin{pmatrix} \sqrt{mn} & 0 & -\sqrt{mn} \\ 1 & m+n-2 & 1 \end{pmatrix}. \end{aligned}$$

Case 2. \(0<\sigma < m\). The adjacency matrix of \(G_S\) is

$$\begin{aligned} A(G_S)= \left[ \begin{array}{c|c} J_S & J_{m \times n} \\ \hline J_{n \times m} & {\textbf {0}}_{n} \end{array} \right] = \left[ \begin{array}{ccc} I_\sigma & {\textbf {0}}_{\sigma \times (m-\sigma )} & J_{\sigma \times n}\\ \hline {\textbf {0}}_{(m-\sigma ) \times \sigma }& {\textbf {0}}_{m-\sigma } & J_{(m-\sigma ) \times n} \\ \hline J_{n \times \sigma } & J_{n \times (m-\sigma )} & {\textbf {0}}_{n} \\ \end{array} \right] = \left[ \begin{array}{c} B\\ \hline C\\ \hline D\\ \end{array} \right] . \end{aligned}$$

We proceed with Step 1. It is straightforward to observe that both submatrices C and D have rank 1, due to repeated rows. One verifies that \({\textrm{rank}}(A(G_S))= \sigma +2,\) and we obtain the eigenvalue 0 with multiplicity \({\textrm{null}}(A(G_S))=m+n-\sigma -2.\)

For Step 2, consider the matrix \(A(G_S) - I_{m+n},\) which can be viewed in three parts as well:

$$\begin{aligned} A(G_S)-I_{m+n}= \left[ \begin{array}{ccc} {\textbf {0}}_\sigma & {\textbf {0}}_{\sigma \times (m-\sigma )} & J_{\sigma \times n}\\ \hline {\textbf {0}}_{(m-\sigma ) \times \sigma }& -I_{m-\sigma } & J_{(m-\sigma ) \times n} \\ \hline J_{n \times \sigma } & J_{n \times (m-\sigma )} & -I_{n} \\ \end{array} \right] = \left[ \begin{array}{c} B\\ \hline C\\ \hline D\\ \end{array} \right] . \end{aligned}$$

One checks that \({\textrm{rank}}(B)=1,\) \({\textrm{rank}}(C)=m-\sigma ,\) and \({\textrm{rank}}(D)=n.\) Thus, \({\textrm{rank}}(A(G_S) - I_{m+n})=m+n -\sigma +1.\) So, \(G_S\) has the eigenvalue 1 with multiplicity \({\textrm{null}}(A(G_S)-I_{m+n})=\sigma -1.\)

For Step 3, there are \(m+n-{\textrm{null}}(A(G_S))-{\textrm{null}}(A(G_S)-I_{m+n})=3\) eigenvalues \(\lambda _i=\lambda _i(G_S),\) \(i=1,2,3,\) yet to be determined. Now, by Lemma 2.1 and Lemma 2.3, we obtain

$$\begin{aligned} \lambda _1 + \lambda _2 + \lambda _3&= 1, \\ \lambda ^2_1 + \lambda ^2_2 + \lambda ^2_3&= 2mn+1, \\ \lambda ^3_1 + \lambda ^3_2 + \lambda ^3_3&= 3n\sigma +1. \end{aligned}$$

Since \(\lambda _1,\lambda _2,\lambda _3\) are the roots of a monic cubic polynomial, we write

$$\begin{aligned} p(\lambda ) =(\lambda - \lambda _1)(\lambda - \lambda _2)(\lambda - \lambda _3) =\lambda ^3+a\lambda ^2+b\lambda +c. \end{aligned}$$

Solving for the coefficients yields

$$\begin{aligned} a&=-1, \quad b = \frac{1}{2}\left( \left( \sum ^3_{i=1}\lambda _i\right) ^2 - \sum ^3_{i=1}\lambda _i^2\right) = -mn, \\ c&= \frac{(\sum ^3_{i=1}\lambda _i)^3 - (\sum ^3_{i=1}\lambda _i^3) - 3(\sum ^3_{i=1}\lambda _i)(\sum _{i<j}\lambda _i\lambda _j) }{3} = n(m-\sigma ). \end{aligned}$$

Thus, \(\lambda _1,\lambda _2,\lambda _3\) are exactly the roots of \(p(\lambda )=\lambda ^3-\lambda ^2-mn\lambda + n(m-\sigma ).\) Note that \(p(0)>0,\) \(p(1)=-n\sigma <0,\) \(\lim _{\lambda \rightarrow -\infty } p(\lambda )= -\infty ,\) and \(\lim _{\lambda \rightarrow +\infty }p(\lambda ) = +\infty .\) Thus, by the Intermediate Value Theorem, the three roots of \(p(\lambda )\) lie in the intervals \((-\infty ,0), (0,1),\) and \((1,+\infty ),\) respectively. Hence, \(\lambda _1,\lambda _2,\) and \(\lambda _3\) are distinct, each with multiplicity 1.

As a conclusion, when \(0<\sigma <m,\) we have

$$\begin{aligned} {\textrm{Spec}}(G_S) = \begin{pmatrix} 1 & 0 & \lambda _1 & \lambda _2 & \lambda _3\\ \sigma -1 & m+n-\sigma -2 & 1 & 1 & 1 \end{pmatrix} \end{aligned}$$

where \(\lambda _1,\lambda _2,\lambda _3\) are the roots of \(p(\lambda )=\lambda ^3-\lambda ^2-mn\lambda +n(m-\sigma ).\)
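The roots of this cubic can be compared with a numerically computed spectrum as a sanity check for Case 2; the parameters \(m=4\), \(n=3\), \(\sigma =2\) below are an arbitrary choice of ours.

```python
import numpy as np

# (K_{4,3})_S with loops at the first sigma = 2 vertices of M.
m, n, sigma = 4, 3, 2
A = np.zeros((m + n, m + n))
A[:m, m:] = 1.0; A[m:, :m] = 1.0
A[range(sigma), range(sigma)] = 1.0
eigs = np.sort(np.linalg.eigvalsh(A))

# Roots of p(x) = x^3 - x^2 - mn*x + n(m - sigma).
roots = np.sort(np.roots([1, -1, -m*n, n*(m - sigma)]).real)
# Eigenvalue 1 has multiplicity sigma-1; eigenvalue 0 has m+n-sigma-2.
expected = np.sort(np.concatenate([roots, [1.0]*(sigma - 1),
                                   [0.0]*(m + n - sigma - 2)]))
assert np.allclose(eigs, expected)
```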

Case 3. \(\sigma = m\). From (2.3), we have

$$\begin{aligned} A(G_S)= \left[ \begin{array}{c|c} I_{m} & J_{m \times n}\\ \hline J_{n \times m} & {\textbf {0}}_{n} \end{array} \right] . \end{aligned}$$

Clearly, we have \({\textrm{rank}}(A(G_S))=m+1\) and \({\textrm{null}}(A(G_S))=n-1.\) Hence, \(G_S\) has the eigenvalue 0 with multiplicity \(n-1.\) Step 1 is complete.

Next, we consider

$$\begin{aligned} A(G_S)-I_{m+n} = \left[ \begin{array}{c|c} {\textbf {0}}_{m} & J_{m \times n}\\ \hline J_{n \times m} & -I_{n} \end{array} \right] . \end{aligned}$$

Similarly, \({\textrm{rank}}(A(G_S)-I_{m+n})=n+1\) and \({\textrm{null}}(A(G_S)-I_{m+n})=m-1.\) Thus, \(G_S\) has the eigenvalue 1 with multiplicity \(m-1.\) Step 2 is now complete.

For Step 3, there are only \((m+n)-(n-1)-(m-1)=2\) eigenvalues \(\lambda _i=\lambda _i(G_S),\) \(i=1,2,\) left to be determined. By Lemma 2.1, we have

$$\begin{aligned} \lambda _1+\lambda _2&= 1 \\ \lambda ^2_1+\lambda ^2_2&= 2mn+1. \end{aligned}$$

So, we have

$$\begin{aligned} \lambda _{1,2} = \frac{1 \pm \sqrt{1+4mn}}{2}. \end{aligned}$$

As a conclusion, when \(\sigma =m,\) we have

$$\begin{aligned} {\textrm{Spec}}(G_S) = \begin{pmatrix} \frac{1+\sqrt{1+4mn}}{2} & 1 & 0 & \frac{1-\sqrt{1+4mn}}{2} \\ 1 & m-1 & n-1 & 1 \end{pmatrix}. \end{aligned}$$
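A quick numerical check of the \(\sigma =m\) case; the parameters \(m=3\), \(n=4\) are an arbitrary choice of ours.

```python
import numpy as np

# (K_{3,4})_S with loops at all m = 3 vertices of M (so sigma = m).
m, n = 3, 4
A = np.zeros((m + n, m + n))
A[:m, m:] = 1.0; A[m:, :m] = 1.0
A[range(m), range(m)] = 1.0
eigs = np.sort(np.linalg.eigvalsh(A))[::-1]

r = np.sqrt(1 + 4*m*n)
expected = sorted([(1 + r)/2] + [1.0]*(m - 1) + [0.0]*(n - 1) + [(1 - r)/2],
                  reverse=True)
assert np.allclose(eigs, expected)
```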

Case 4. \(m<\sigma <m+n\). In this case, the adjacency matrix is

$$\begin{aligned} A(G_S)= \left[ \begin{array}{ccc} I_m & J_{m \times (\sigma -m)} & J_{m \times (m+n-\sigma )}\\ \hline J_{(\sigma -m) \times m}& I_{(\sigma -m)} & {\textbf {0}}_{(\sigma -m) \times (m+n-\sigma )} \\ \hline J_{(m+n-\sigma )\times m} & {\textbf {0}}_{(m+n-\sigma ) \times (\sigma -m)} & {\textbf {0}}_{(m+n-\sigma )} \\ \end{array} \right] = \left[ \begin{array}{c} B\\ \hline C\\ \hline D\\ \end{array} \right] . \end{aligned}$$

For Step 1, it is clear that \({\textrm{rank}}(D)=1\) and both B and C have full rank. Thus, \({\textrm{rank}}(A(G_S))=\sigma +1\) and \(G_S\) has the eigenvalue 0 with multiplicity \(m+n-\sigma -1.\) For Step 2, consider the matrix

$$\begin{aligned} A(G_S)-I_{m+n}= \left[ \begin{array}{ccc} {\textbf {0}}_m & J_{m \times (\sigma -m)} & J_{m \times (m+n-\sigma )}\\ \hline J_{(\sigma -m) \times m}& {\textbf {0}}_{(\sigma -m)} & {\textbf {0}}_{(\sigma -m) \times (m+n-\sigma )} \\ \hline J_{(m+n-\sigma )\times m} & {\textbf {0}}_{(m+n-\sigma ) \times (\sigma -m)} & -I_{(m+n-\sigma )} \\ \end{array} \right] = \left[ \begin{array}{c} B\\ \hline C\\ \hline D\\ \end{array} \right] . \end{aligned}$$

We have \({\textrm{rank}}(B)={\textrm{rank}}(C)=1\) and D has full rank. Thus, \({\textrm{rank}}(A(G_S)-I_{m+n})=m+n-\sigma +2\) and \(G_S\) has the eigenvalue 1 with multiplicity \(\sigma -2.\)

For Step 3, there are \((m+n)-(m+n-\sigma -1)-(\sigma -2)=3\) eigenvalues \(\lambda _i=\lambda _i(G_S),\) \(i=1,2,3,\) left to be determined. Proceeding similarly as in Case 2, we apply Lemma 2.1 and Lemma 2.3 to get

$$\begin{aligned} \lambda _1 + \lambda _2 + \lambda _3&= 2, \\ \lambda ^2_1 + \lambda ^2_2 + \lambda ^2_3&= 2mn+2, \\ \lambda ^3_1 + \lambda ^3_2 + \lambda ^3_3&= 3m(\sigma + n-m) +2. \end{aligned}$$

The corresponding cubic polynomial is \(p(\lambda )=\lambda ^3-2\lambda ^2+(1-mn)\lambda +m(m+n-\sigma ).\) By the same method as in Case 2 (here \(p(0)=m(m+n-\sigma )>0\) and \(p(1)=m(m-\sigma )<0\)), these eigenvalues are distinct, each with multiplicity 1. As a conclusion, when \(m<\sigma <m+n,\) we have

$$\begin{aligned} {\textrm{Spec}}(G_S) = \begin{pmatrix} 1 & 0 & \lambda _1 & \lambda _2 & \lambda _3\\ \sigma -2 & m+n-\sigma -1 & 1 & 1 & 1 \end{pmatrix}, \end{aligned}$$

where \(\lambda _1,\lambda _2,\lambda _3\) are the roots of \(p(\lambda )=\lambda ^3-2\lambda ^2+(1-mn)\lambda +m(m+n-\sigma ).\)

Case 5. \(\sigma =m+n\). In this case, the adjacency matrix takes the form \(A(G_S)= A(G)+I_{m+n}.\) Thus, its spectrum can be obtained by shifting from Case 1:

$$\begin{aligned} {\textrm{Spec}}(G_S)= \begin{pmatrix} 1+\sqrt{mn} & 1 & 1- \sqrt{mn} \\ 1 & m+n-2 & 1 \end{pmatrix}. \end{aligned}$$

Now, we are ready to prove the main results of this section. Recall that if the eigenvalues are in non-increasing order, then we have the Courant-Weyl Inequalities:

Theorem 2.5

[3, Theorem 1.3.15] Let A and B be \(n\times n\) real symmetric matrices. Then

$$\begin{aligned}&\lambda _i(A+B)\le \lambda _j(A)+ \lambda _{i-j+1}(B) \quad \text { for } n\ge i\ge j\ge 1,\\&\lambda _i(A+B)\ge \lambda _j(A)+ \lambda _{i-j+n}(B) \quad \text { for } 1\le i\le j\le n. \end{aligned}$$

By choosing \(i=j=n\) in the second inequality above, we obtain the inequality \(\lambda _{\min }(A+B)\ge \lambda _{\min }(A)+ \lambda _{\min }(B)\) which is essential in the proof of our next theorem.
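This min-eigenvalue consequence is easy to illustrate numerically; the random symmetric matrices below are our own arbitrary test data.

```python
import numpy as np

# Illustrating lambda_min(A + B) >= lambda_min(A) + lambda_min(B)
# for real symmetric matrices.
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 6)); A = (X + X.T) / 2
Y = rng.standard_normal((6, 6)); B = (Y + Y.T) / 2

lam_min = lambda M: np.linalg.eigvalsh(M)[0]   # eigvalsh sorts ascending
assert lam_min(A + B) >= lam_min(A) + lam_min(B) - 1e-12
```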

Theorem 2.6

Let G be a self-loop graph of order n with eigenvalues \(\lambda _1(G) \ge \lambda _2(G)\ge \cdots \ge \lambda _n(G)\).

  1. (i)

    If \(\lambda _i(G)>0\), for \(i=1,2,\ldots ,n\), then G is the disjoint union of n copies of \(\widehat{K_1}\).

  2. (ii)

    If \(\lambda _i(G)\ge 0\), for \(i=1,2,\ldots ,n\), then every connected component of G is either \(K_1\) or \(\widehat{K_r}\) for some \(r\in \{1,2,\ldots ,n\}\).

Proof

  1. (i)

    Let H be a connected component of G. We shall show that \(H_0\) is a complete graph. Suppose on the contrary that \(H_0\) is not a complete graph. Then, \(H_0\) contains \(P_3\) as a vertex-induced subgraph. Note that \(\lambda _{\min }(P_3)=-\sqrt{2}.\) So, by the Interlacing Theorem (cf. [3, Cor 1.3.12]), we obtain \(\lambda _{\min }(H_0)\le -\sqrt{2}\). Note that \(J_S=A(H)-A(H_0)\). By the Courant-Weyl Inequalities, we have \(\lambda _{\min }(H_0)\ge \lambda _{\min }(H)+\lambda _{\min }(-J_S)\). This implies that

    $$\begin{aligned} \lambda _{\min }(G) \le \lambda _{\min }(H)\le \lambda _{\min }(H_0)-\lambda _{\min }(-J_S)\le -\sqrt{2}+1, \end{aligned}$$

    which contradicts the assumption that G has only positive eigenvalues. Therefore, \(H_0\) is a complete graph. By Theorem 2.2, \(H=\widehat{K_1}\) and we are done.

  2. (ii)

    With a slight modification of the proof in Part (i), we can deduce that if H is a connected component of G, then \(H_0\) is a complete graph. Now, by Theorem 2.2, \(H=K_1\) or \(H=\widehat{K_r}\) for some \(r\in \{1,2,\ldots ,n\}\). The proof is complete. \(\square \)

In the following, we characterize self-loop graphs of order n with a few distinct eigenvalues.

Theorem 2.7

Let G be a self-loop graph of order n. Then,

  1. (i)

    G has exactly one eigenvalue if and only if \(G=\overline{K}_n\) or every connected component of G is \(\widehat{K_1}.\)

  2. (ii)

    G has exactly two distinct eigenvalues if and only if all connected components of G are identical, each being \(\widehat{K_r}\) for some fixed r, or each component of G is either \(K_1\) or \(\widehat{K_r}\) for some fixed r.

Proof

  1. (i)

    Assume that G has \(\sigma \) loops and \(a=\lambda _1(G)=\cdots =\lambda _n(G).\) Then, \(\sigma =\textrm{Tr}(A(G))=na,\) which implies \(a=\displaystyle \frac{\sigma }{n}.\) Since \(a\) is rational and is a root of the characteristic polynomial of A(G), a monic polynomial with integer coefficients, a is an integer. Since \(0 \le \sigma \le n,\) we have \(\sigma =0\) or \(\sigma =n.\)

    1. (a)

      If \(\sigma =0,\) then \(a=0.\) This implies \(A(G)={\textbf {0}}_n.\) Thus, \(G=\overline{K}_n.\)

    2. (b)

      If \(\sigma =n,\) then \(a=1.\) So, \(A(G)-I_n\) is the adjacency matrix of an ordinary graph with only eigenvalue 0. It follows that \(A(G)-I_n={\textbf {0}}_n,\) that is, \(A(G)=I_n\) and each component of G is \(\widehat{K_1}.\)

  2. (ii)

    Let \(A=A(G).\) If G has exactly two distinct eigenvalues \(\lambda _1\) and \(\lambda _2,\) then the minimal polynomial of A has the form

    $$\begin{aligned} p(\lambda )=(\lambda -\lambda _1)(\lambda -\lambda _2). \end{aligned}$$

    So, we have

    $$\begin{aligned} A^2-(\lambda _1+\lambda _2)A+\lambda _1\lambda _2I_n={\textbf {0}}_n. \end{aligned}$$
    (2.4)

    Let H be a connected component of G. We show that \(H_0\) is a complete graph. If \(H_0\) is not a complete graph, then there is a vertex-induced path of order 3 in \(H_0,\) say \(P_3: v_rv_sv_k.\) Then \((A^2)_{rk} > 0\) while \(A_{rk}=0\) and \((I_n)_{rk}=0.\) This contradicts (2.4), since the (r, k)-entry of the left-hand side is positive while that of the right-hand side is 0. Thus, \(H_0\) is a complete graph. By Theorem 2.2, \(H=K_r\) or \(H=\widehat{K_r}\) for some r. If one of the components of G is \(K_r\) (resp. \(\widehat{K_r}\)), then every other component of order at least 2 must also be \(K_r\) (resp. \(\widehat{K_r}\)), for otherwise G has more than two distinct eigenvalues. If some component of G is \(K_1\), then the remaining nontrivial components must all be \(\widehat{K_r}\) for one fixed r, because \(K_1\) contributes the eigenvalue 0 while \(\widehat{K_r}\) contributes only r and 0. The opposite direction is obvious and the proof is complete. \(\square \)

3 Bipartite Graphs and Eigenvalues of Self-loop Graphs

In this section, we provide a necessary and sufficient condition for a graph G to be bipartite in terms of the eigenvalues of \(G_S\) and \(G_{V(G)\backslash S}\) for arbitrary \(S \subseteq V(G).\) First, we give an observation on similarity.

Lemma 3.1

Let G be a bipartite graph with parts of sizes m and n and let \(|S|=\sigma .\) Then, \(J_S + A(G)\) is similar to the matrix \(J_S-A(G).\)

Proof

One can easily see that if

$$\begin{aligned} P= \begin{bmatrix} I_m & {\textbf {0}} \\ {\textbf {0}} & -I_n \end{bmatrix}, \end{aligned}$$

then, by (2.3), \( P(J_S + A(G))P^{-1} = P(J_S + A(G))P = J_S - A(G). \) Thus, \(J_S + A(G)\) is similar to \(J_S - A(G).\) \(\square \)

Theorem 3.2

Let G be a bipartite graph of order n and \(|S|=\sigma .\) Then, \(1 - \lambda _n(G_S) \ge \cdots \ge 1 - \lambda _1(G_S)\) are the eigenvalues of \(G_{V(G)\backslash S},\) where \(\lambda _1(G_S)\ge \cdots \ge \lambda _n(G_S)\) are the eigenvalues of \(G_S.\)

Proof

Note that \(A(G_S) = J_S + A(G)\) and \(A(G_{V(G)\backslash S}) = J_{V(G) \backslash S} + A(G).\) Hence, we find

$$\begin{aligned} J_{V(G) \backslash S} - A(G) = I_n -(J_S + A(G)) = I_n - A(G_S). \end{aligned}$$

Thus, \(1 -\lambda _i(G_S),\) \(i=1, \ldots ,n,\) are the eigenvalues of \(J_{V(G)\backslash S}- A(G),\) which coincide with the eigenvalues of \(A(G_{V(G)\backslash S})\) by the similarity established in the previous lemma. \(\square \)
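Theorem 3.2 can be verified numerically on a small bipartite example; the choice \(G=K_{2,3}\) with S being the two vertices of one part is ours.

```python
import numpy as np

# Bipartite G = K_{2,3}; S = the two vertices of the part M.
m, n = 2, 3
A = np.zeros((m + n, m + n))
A[:m, m:] = 1.0; A[m:, :m] = 1.0
JS = np.diag([1, 1, 0, 0, 0]).astype(float)   # loops on S
JSc = np.eye(m + n) - JS                      # loops on V(G) \ S

eig_S  = np.sort(np.linalg.eigvalsh(A + JS))
eig_Sc = np.sort(np.linalg.eigvalsh(A + JSc))
# Spec(G_{V\S}) = { 1 - lambda : lambda in Spec(G_S) }
assert np.allclose(eig_Sc, np.sort(1 - eig_S))
```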

Theorem 3.3

Let G be a connected graph of order n. Let \(G_S\) be its self-loop graph with eigenvalues \(\lambda _1(G_S) \ge \cdots \ge \lambda _n(G_S)\) and \(S \subseteq V(G).\) Then, the eigenvalues of \(G_{V(G)\backslash S}\) are \(1-\lambda _n(G_S) \ge \cdots \ge 1-\lambda _1(G_S)\) if and only if G is bipartite.

Proof

By Theorem 3.2, it suffices to prove that if the spectrum of \(G_{V(G)\backslash S}\) is \(1-\lambda _n(G_S) \ge \cdots \ge 1-\lambda _1(G_S),\) then G is bipartite.

Since \(1-\lambda _1(G_S)\) is the smallest eigenvalue of \(G_{V(G) \backslash S},\) it follows that the largest eigenvalue of the matrix \(I_n-A(G_{V(G)\backslash S})\) is \(\lambda _1(G_S).\) Also note that \(I_n-A(G_{V(G)\backslash S})=I_n-(J_{V(G)\backslash S}+A(G))=J_S-A(G)\). Thus, we have

$$\begin{aligned} \lambda _1(G_S)&=\max _{||x||=1} x^T(J_S + A(G))x \\&=\max _{||x||=1} x^T(I_n -A(G_{V(G) \backslash S}))x\\&=\max _{||x||=1} x^T(J_S - A(G))x. \end{aligned}$$

Suppose that \(\lambda _1(G_S)=z^T(J_S-A(G))z\) for some z with \(\Vert z\Vert =1.\) Define

$$\begin{aligned} |z|^T = (|z_1|, |z_2|, \cdots , |z_n|). \end{aligned}$$

Then, we have

$$\begin{aligned} \lambda _1(G_S)=z^T(J_S - A(G))z \le |z|^T (J_S +A(G)) |z| \le \lambda _1(G_S). \end{aligned}$$

Thus, \(z^T(J_S-A(G))z=|z|^T(J_S+A(G))|z|.\) This implies that \(z^T(-A(G))z=|z|^T(A(G))|z|\), or equivalently

$$\begin{aligned} \sum _{1\le i,j\le n}(-a_{ij})z_iz_j=\sum _{1\le i,j\le n}a_{ij}|z_i||z_j|. \end{aligned}$$
(3.1)

Observe that for each \(1\le i,j\le n\), we have

$$\begin{aligned} (-a_{ij})z_iz_j\le a_{ij}|z_i||z_j|. \end{aligned}$$
(3.2)

Since \(\Vert |z|\Vert =1\) and \(|z|^T(J_S+A(G))|z|=\lambda _1(G_S),\) we conclude that |z| is an eigenvector corresponding to \(\lambda _1(G_S)\) for the matrix \(J_S +A(G).\) Since G is connected, by the Perron-Frobenius Theorem [4, §2], \(\lambda _1(G_S)\) is a simple eigenvalue and there exists an eigenvector \(\alpha \) for \(\lambda _1(G_S)\) all of whose entries are positive. Then |z| is a multiple of \(\alpha ,\) which implies that \(z_i \ne 0\) for \(i=1,\ldots , n.\) Note that if there exist some i and j with \(1\le i,j\le n\) such that \(a_{ij}=1\) and \(z_iz_j>0,\) then (3.2) becomes a strict inequality. Thus, by taking the summation on both sides over i, j, we obtain a strict inequality that contradicts (3.1). We shall show that this contradiction occurs if G is not bipartite.

Assume G is not bipartite; that is, G contains an odd cycle on vertices \(v_1,v_2,\ldots ,\) \(v_{2k+1}.\) Then, for \(i=1,\ldots , 2k+1\), we have \(a_{i,r_i}=1,\) where \(r_i=i+1 \pmod {2k+1}\). Since all \(z_i\ne 0,\) the product \(\prod ^{2k+1}_{\ell =1} z_\ell z_{r_\ell }=(z_1z_2\cdots z_{2k+1})^2\) is positive, so with an odd number of factors, the products \(z_\ell z_{r_\ell }\) cannot all be negative. Hence, there exists an \(\ell \) with \(1 \le \ell \le 2k+1\) such that \(z_\ell z_{r_\ell }>0,\) which yields the contradiction described above. Thus, G must be bipartite. The proof is complete. \(\square \)

Remark 3.4

Let \(S \subseteq V(G).\) Theorem 3.3 provides a way of determining the bipartiteness of a graph directly from the eigenvalues of its self-loop graphs \(G_S\) and the eigenvalues of \(G_{V(G)\backslash S}.\) Indeed, if we have \(\lambda _1(G_S)\) and \(\lambda _n(G_{V(G)\backslash S}),\) we can determine whether G is bipartite.

Another immediate consequence of Theorem 3.3 is the following corollary.

Corollary 3.5

[13, Theorem 3] Let G be a bipartite graph of order n with vertex set V(G). Let S be a nonempty subset of V(G). Then, \({\mathcal E}(G_S)={\mathcal E}(G_{V(G)\backslash S}).\)

Before closing this section, we discuss a case that answers [13, Conjecture 2]. As motivation, let us consider the case \(G=K_{3,3}.\) Using the spectra determined in Sect. 2.2, the energy of \((K_{3,3})_S\) is obtained as follows.

Table 1 The energy of \((K_{3,3})_S\)

From the table, we observe that whenever there are loops, the energy of \((K_{3,3})_S\) is at least that of the ordinary graph \(K_{3,3}.\) Thus, it is natural to ask whether the same is true for all bipartite graphs. Indeed, we provide an affirmative answer to this question. Before proving this, let us recall an inequality from [1].
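This observation about \(K_{3,3}\) can be reproduced computationally (an illustrative computation of ours, not a reproduction of the published table): for every number of loops \(\sigma \), placed following the convention of Theorem 2.4, the energy stays at or above \({\mathcal E}(K_{3,3})=2\sqrt{9}=6.\)

```python
import numpy as np

def energy_with_loops(sigma, m=3, n=3):
    """Energy of (K_{m,n})_S with loops on the first sigma vertices."""
    N = m + n
    A = np.zeros((N, N))
    A[:m, m:] = 1.0; A[m:, :m] = 1.0
    A[range(sigma), range(sigma)] = 1.0
    eigs = np.linalg.eigvalsh(A)
    return np.abs(eigs - sigma / N).sum()   # E(G_S) = sum |lambda_i - sigma/n|

E0 = energy_with_loops(0)                   # E(K_{3,3}) = 2*sqrt(3*3) = 6
assert np.isclose(E0, 6.0)
assert all(energy_with_loops(s) >= E0 - 1e-9 for s in range(7))
```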

Theorem 3.6

[1] Let A be a Hermitian matrix in the following block form:

$$\begin{aligned} A= \begin{bmatrix} B &{} D \\ D^* &{} C \end{bmatrix}. \end{aligned}$$

Then, \({\mathcal E}(A)\ge 2 {\mathcal E}(D).\)

Theorem 3.7

If G is a bipartite graph and \(S \subseteq V(G),\) then, \({\mathcal E}(G_S) \ge {\mathcal E}(G).\)

Proof

Since G is bipartite, its adjacency matrix can be written in the block form \(A(G)=\begin{bmatrix} {\textbf {0}} &{} D \\ D^T &{} {\textbf {0}} \end{bmatrix}.\) Note that

$$\begin{aligned} {\mathcal E}(G_S) = {\mathcal E}(A(G_S) - \frac{\sigma }{n} I_n), \end{aligned}$$

where \(\sigma =|S|\) and \(n=|V(G)|.\) By Theorem 3.6,

$$\begin{aligned} {\mathcal E}(G) =2{\mathcal E}(D) \le {\mathcal E}(A(G_S)-\frac{\sigma }{n}I_n) = {\mathcal E}(G_S). \end{aligned}$$

\(\square \)
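Theorem 3.7 can be illustrated numerically. The sketch below is ours: it checks that \({\mathcal E}(K_{3,3})=6\) and that every choice of loop set S keeps the energy at or above this value, strictly so already for \(|S|=1\).

```python
import numpy as np
from itertools import combinations

def self_loop_energy(A, S):
    """E(G_S) = sum_i |lambda_i(A(G) + J_S) - sigma/n|."""
    n = A.shape[0]
    B = A.astype(float).copy()
    for v in S:
        B[v, v] += 1.0
    return float(np.abs(np.linalg.eigvalsh(B) - len(S) / n).sum())

# K_{3,3}: parts {0,1,2} and {3,4,5}.
A = np.zeros((6, 6))
A[:3, 3:] = 1.0
A[3:, :3] = 1.0

e_plain = self_loop_energy(A, set())   # spectrum {3, 0^4, -3}, energy 6
# Theorem 3.7: E(G_S) >= E(G) for every S since K_{3,3} is bipartite.
energies = [self_loop_energy(A, set(S))
            for r in range(1, 7) for S in combinations(range(6), r)]
e_single = self_loop_energy(A, {0})    # a single loop; strictly larger
```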

Theorem 3.7 answers a special case (bipartite graphs) of [13, Conjecture 2]. Lastly, we propose the following conjecture.

Conjecture 1

For every simple graph G,  there exists \(S \subseteq V(G)\) such that \({\mathcal E}(G_S) > {\mathcal E}(G).\)
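For small graphs the conjecture can be probed by brute force. The following sketch is ours: an exhaustive search over all nonempty subsets \(S \subseteq V(C_5)\) for the non-bipartite 5-cycle, which does find subsets whose self-loop energy strictly exceeds \({\mathcal E}(C_5)\).

```python
import numpy as np
from itertools import combinations

def self_loop_energy(A, S):
    """E(G_S) = sum_i |lambda_i(A(G) + J_S) - sigma/n|."""
    n = A.shape[0]
    B = A.astype(float).copy()
    for v in S:
        B[v, v] += 1.0
    return float(np.abs(np.linalg.eigvalsh(B) - len(S) / n).sum())

# C_5, the smallest non-bipartite cycle.
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

e_plain = self_loop_energy(A, set())
# Exhaustive search over all nonempty subsets S of V(C_5).
best = max(self_loop_energy(A, set(S))
           for r in range(1, n + 1) for S in combinations(range(n), r))
```

Already \(S=\{v_1\}\) increases the energy of \(C_5\), so the conjecture holds for this graph.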

4 Upper Bound for the Energy of \(G_S\)

We first recall the following theorem.

Theorem 4.1

[5, Theorem 4] For any \((a_{ij})=A \in M_n({\mathbb C})\) with eigenvalues \(\lambda _1,\ldots , \lambda _n,\)

$$\begin{aligned} \sum ^n_{i=1} |\lambda _i| \le \sum ^n_{i=1} \Vert A_i\Vert , \end{aligned}$$

where \(A_i\) denotes the i-th row of A and \(\Vert A_i\Vert = \sqrt{a^2_{i1} + a^2_{i2} + \cdots + a^2_{in}}.\)

For \(A \in M_n({\mathbb C}),\) the energy of a complex matrix is defined to be

$$\begin{aligned} {\mathcal E}(A)= \sum ^n_{i=1}|\lambda _i(A)|, \end{aligned}$$

where \(\lambda _1(A),\ldots , \lambda _n(A)\) are eigenvalues of A. Thus, \({\mathcal E}(A)\le \sum ^n_{i=1} \Vert A_i\Vert .\)
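The row-norm bound is straightforward to test numerically. The sketch below is ours: for the non-symmetric real matrix \(\begin{bmatrix} 0 &{} 1 \\ 2 &{} 0 \end{bmatrix}\) we have \(\sum _i |\lambda _i| = 2\sqrt{2} \le 3 = \sum _i \Vert A_i\Vert .\)

```python
import numpy as np

# A non-symmetric real matrix: eigenvalues +/- sqrt(2), row norms 1 and 2.
A = np.array([[0.0, 1.0],
              [2.0, 0.0]])

energy = np.abs(np.linalg.eigvals(A)).sum()         # sum of |lambda_i|
row_norm_sum = np.sqrt((A ** 2).sum(axis=1)).sum()  # sum of ||A_i||
```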

As a consequence, we obtain an alternative proof of an upper bound for the energy of \(G_S\) given by Gutman et al. and discuss the equality case.

Corollary 4.2

[13, Theorem 6] Let \(G_S\) be a self-loop graph of order n and size m with \(|S|=\sigma \). Then,

$$\begin{aligned} {\mathcal E}(G_S) \le \sqrt{n\left( 2m+ \sigma - \frac{\sigma ^2}{n}\right) }. \end{aligned}$$
(4.1)

If the equality holds, then \(G_S\) is (a, b)-semi-regular, where

$$\begin{aligned} a = \frac{2m}{n} + \frac{3\sigma }{n} - \frac{2\sigma ^2}{n^2} -1, \quad b = \frac{2m}{n} + \frac{\sigma }{n} -\frac{2\sigma ^2}{n^2}. \end{aligned}$$

Proof

Let \(\displaystyle B=A(G_S) - \frac{\sigma }{n} I_n.\) By Theorem 4.1, we have

$$\begin{aligned} {\mathcal E}(G_S)= \sum ^n_{i=1} \left| \lambda _i(B) \right| =\sum ^n_{i=1} \left| \lambda _i(G_S) - \frac{\sigma }{n} \right| \le \sum ^n_{i=1} \Vert B_i\Vert . \end{aligned}$$

Computing the row norms of B gives

$$\begin{aligned} \sum ^n_{i=1} \Vert B_i\Vert&= \sqrt{d_1 + \left( 1-\frac{\sigma }{n}\right) ^2} + \cdots + \sqrt{d_\sigma + \left( 1-\frac{\sigma }{n}\right) ^2} \\&\qquad \qquad + \sqrt{d_{\sigma +1} + \left( \frac{\sigma }{n}\right) ^2} + \cdots + \sqrt{d_n + \left( \frac{\sigma }{n}\right) ^2}. \nonumber \end{aligned}$$

By Cauchy-Schwarz inequality, we obtain

$$\begin{aligned} {\mathcal E}(G_S) \le \sqrt{n\left( 2m+ \sigma - \frac{\sigma ^2}{n}\right) }. \end{aligned}$$

If the equality holds, then \(d_i+(1-\frac{\sigma }{n})^2=d_{i+1}+(1-\frac{\sigma }{n})^2\) for \(1\le i \le \sigma -1\) and \(d_j+(\frac{\sigma }{n})^2=d_{j+1} + (\frac{\sigma }{n})^2\) for \(\sigma +1 \le j \le n-1.\) One deduces that \(a=d_1=\cdots =d_\sigma \) and \(b=d_{\sigma +1} = \cdots =d_{n}.\) Thus, the (a, b)-semi-regularity can be obtained by solving the simultaneous equations \(2m=\sigma a + (n-\sigma )b\) and \(a+(1-\frac{\sigma }{n})^2=b+\frac{\sigma ^2}{n^2}.\) This completes the proof. \(\square \)
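A quick numerical check of the bound (4.1) can be done as follows (a sketch of ours, testing every subset S of \(V(C_5)\)):

```python
import numpy as np
from itertools import combinations

def self_loop_energy(A, S):
    """E(G_S) = sum_i |lambda_i(A(G) + J_S) - sigma/n|."""
    n = A.shape[0]
    B = A.astype(float).copy()
    for v in S:
        B[v, v] += 1.0
    return float(np.abs(np.linalg.eigvalsh(B) - len(S) / n).sum())

# C_5: order n = 5, size m = 5.
n, m = 5, 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

# Corollary 4.2: E(G_S) <= sqrt(n (2m + sigma - sigma^2/n)) for all S.
ok = all(
    self_loop_energy(A, set(S)) <= np.sqrt(n * (2*m + r - r**2 / n)) + 1e-9
    for r in range(0, n + 1) for S in combinations(range(n), r)
)
```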

5 Analogous Classical Upper Bound for \(\lambda _1\) in Terms of Maximum Degree

It is well-known that if G is a graph, then \(\lambda _1(G)\le \Delta (G)\). In the next theorem, we generalize this to self-loop graphs by showing that \(\lambda _1(G_S) \le \Delta (G)+1\).

Theorem 5.1

Let \(G_S\) be a self-loop graph of order n. Then \(\lambda _1(G_S)\le \Delta (G)+1\le n.\) Moreover, \(\lambda _1(G_S)=n\) if and only if \(G_S=\widehat{K_n}.\)

Proof

Since \(A(G_S)\) is a non-negative matrix, by Perron-Frobenius Theorem, there exists an eigenvector x corresponding to eigenvalue \(\lambda _1(G_S)\) such that \(x^T= [x_1, x_2, \ldots , x_n]\) and \(x_i \ge 0\) for \(i=1,2,\ldots ,n.\)

Assume that \(\displaystyle x_r=\max _{1\le i\le n}x_i,\) for some \(r\in \{1,2,\ldots ,n\}\). By considering the r-th entry of both sides of \(A(G_S)x=\lambda _1(G_S)x\), we have

$$\begin{aligned} x_{i_1}+x_{i_2}+\cdots +x_{i_{d(v_r)}}+\gamma x_r=\lambda _1(G_S) x_r,\quad \gamma \in \{0,1\}. \end{aligned}$$
(5.1)

where \(v_{i_1},\ldots ,v_{i_{d(v_r)}}\) are the neighbours of \(v_r\) and \(\gamma =1\) if and only if \(v_r\in S.\) This implies that

$$\begin{aligned} \lambda _1(G_S) x_r\le (\Delta (G)+1)x_r. \end{aligned}$$

Hence, \(\lambda _1(G_S)\le \Delta (G)+1\) as \(x_r>0\). Since \(\Delta (G)\le n-1,\) it follows that \(\lambda _1(G_S) \le n.\)

If \(\lambda _1(G_S)=n\), then Eq. (5.1) becomes

$$\begin{aligned} x_{i_1}+x_{i_2}+\cdots +x_{i_{d(v_r)}}+\gamma x_r=n x_r,\quad \gamma \in \{0,1\}, \end{aligned}$$

with \(d(v_r)=\Delta (G)=n-1\), \(\gamma =1\), and \(x_{i_k}=x_r\) for \(k=1,2,\ldots ,n-1\). Hence, \(A(G_S)=J_{n\times n}\) indicating \(G_S=\widehat{K_n}\). Conversely, if \(G_S=\widehat{K_n}\), then by (2.2), we have

$$\begin{aligned} \text {Spec}(G_S)= \begin{pmatrix} n &{} 0 \\ 1 &{} n-1 \end{pmatrix}, \end{aligned}$$

which gives \(\lambda _1(G_S)=n\). \(\square \)
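Both parts of Theorem 5.1 are easy to verify numerically. The sketch below is ours: \(\lambda _1(G_S)\) is the largest eigenvalue of \(A(G)+J_S\), which is bounded by \(\Delta (G)+1\), and \(\widehat{K_n}\), whose adjacency matrix is the all-ones matrix \(J_{n\times n}\), attains \(\lambda _1 = n\).

```python
import numpy as np

def lambda1(A, S):
    """Largest eigenvalue of A(G_S) = A(G) + J_S."""
    B = A.astype(float).copy()
    for v in S:
        B[v, v] += 1.0
    return float(np.linalg.eigvalsh(B)[-1])  # eigvalsh sorts ascending

# P_4 with loops on S = {0, 1}; here Delta(P_4) = 2, so lambda_1 <= 3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
l_p4 = lambda1(A, {0, 1})

# K_4 with a loop at every vertex: A(G_S) = J_{4x4}, so lambda_1 = 4 = n.
n = 4
l_kn = lambda1(np.ones((n, n)) - np.eye(n), set(range(n)))
```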

The next theorem provides a sharp lower bound for \(\lambda _1(G_S)\) for any connected graph with loops.

Theorem 5.2

Let G be a connected graph of order n and size m. If \(S\subseteq V(G)\) with \(|S|=\sigma \), then

$$\begin{aligned} \lambda _1(G_S)\ge \frac{2m}{n}+\frac{\sigma }{n}. \end{aligned}$$

Moreover, if G is a \(\left( k,k+1\right) \)-semi-regular graph for some natural number k,  with

$$\begin{aligned} d_G(v)={\left\{ \begin{array}{ll} k,\quad &{} \text {if }v\in S,\\ k+1,\quad &{}\text {if }v\in V(G)\backslash S, \end{array}\right. } \end{aligned}$$

where \(d_G(v)\) denotes the degree of the vertex v in G, then

$$\begin{aligned} \lambda _1(G_S)=\frac{2m}{n}+\frac{\sigma }{n}. \end{aligned}$$

Proof

Since \(J_S+A(G)\) is a real symmetric matrix, by Rayleigh quotient, we obtain

$$\begin{aligned} \lambda _1(G_S)&=\max _{0\ne x\in \mathbb {R}^n} \frac{x^T(J_S+A(G))x}{x^Tx}\\&\ge \frac{j^T(J_S+A(G))j}{j^Tj}\\&=\frac{j^T(J_S)j}{j^Tj}+\frac{j^T(A(G))j}{j^Tj}=\frac{2m}{n}+\frac{\sigma }{n}. \end{aligned}$$

To show the equality, we consider a \(\left( k,k+1\right) \)-semi-regular graph G, \(1\le k\le n\), such that

$$\begin{aligned} d_G(v)= {\left\{ \begin{array}{ll}k,\quad &{} \text {if}~ \, v\in S,\\ k+1,\quad &{}\text {if}~\, v\in V(G)\backslash S. \end{array}\right. } \end{aligned}$$

Then, \(A(G_S)=J_S+A(G)\) gives \((J_S+A(G))j=(k+1)j\). This shows that j is an eigenvector corresponding to the eigenvalue \(k+1\), and thus \(k+1\in \text {Spec}(G_S)\). Notice that for G

$$\begin{aligned} 2m=\sum _{i=1}^n d_{G}(v_i)=\sigma k +(n-\sigma )(k+1) = -\sigma + nk + n. \end{aligned}$$

Therefore, we have

$$\begin{aligned} \frac{2m}{n}+\frac{\sigma }{n}=k+1\in \text {Spec}(G_S), \end{aligned}$$

and j is an eigenvector corresponding to \(\frac{2m}{n}+\frac{\sigma }{n}\).

To show that \(\frac{2m}{n}+\frac{\sigma }{n}\) is the largest eigenvalue of the graph \(G_S\), we suppose on the contrary that \(\frac{2m}{n}+\frac{\sigma }{n}=\lambda _i(G_S)\) for some \(i\ge 2\). Since G is connected, \(A(G_S)\) is a non-negative irreducible matrix, so by the Perron-Frobenius Theorem, there exists an eigenvector x corresponding to the eigenvalue \(\lambda _1\) such that \(x^T=[x_1,\ldots , x_n]\) and \(x_i>0\) for \(i=1,2,\ldots ,n.\) Since \(\lambda _i\ne \lambda _1\), the eigenvectors x and j must be orthogonal, which is impossible because both have positive entries. Hence, \(\lambda _1(G_S)=\frac{2m}{n}+\frac{\sigma }{n}\). \(\square \)
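The equality case of Theorem 5.2 can be illustrated with \(P_3\) (a numerical sketch of ours): taking S to be the two end vertices (degree \(k=1\)) and leaving the middle vertex (degree \(k+1=2\)) loop-free gives \(\lambda _1((P_3)_S)=\frac{2m}{n}+\frac{\sigma }{n}=\frac{4}{3}+\frac{2}{3}=2.\)

```python
import numpy as np

def lambda1(A, S):
    """Largest eigenvalue of A(G_S) = A(G) + J_S."""
    B = A.astype(float).copy()
    for v in S:
        B[v, v] += 1.0
    return float(np.linalg.eigvalsh(B)[-1])

# P_3: 0 - 1 - 2, so n = 3 and m = 2.  Loops go on the degree-1 ends,
# making G (1,2)-semi-regular in the sense of Theorem 5.2 with k = 1.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
S = {0, 2}
n, m, sigma = 3, 2, len(S)

l1 = lambda1(A, S)
bound = 2 * m / n + sigma / n  # = 2; every row of A(G_S) sums to 2
```

Here the all-ones vector j is a positive eigenvector of \(A(G_S)\) for the eigenvalue 2, so by Perron-Frobenius the bound is attained.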