1 Introduction

In recent years, much research has been devoted to investigating various graphs associated with algebraic structures, such as groups, rings and semirings. Among these graphs, in this author’s opinion, two of the most important ones are the commuting graph, which describes the way the elements of the algebraic structure commute with each other, and the zero-divisor graph, which describes the way in which the elements mutually annihilate.

In this paper, we concern ourselves with the latter. The zero-divisor graph of a commutative ring was first introduced by Beck in [1]. Later, Anderson and Livingston in [2] made a slight adjustment to the definition of the zero-divisor graph in order to be able to investigate the zero-divisor structure of commutative rings. After that, the definition was extended to non-commutative rings [3], as well as semigroups [4] and nearrings [5]. The zero-divisor graphs of semirings have been studied in [6,7,8].

For a simple undirected graph G and its vertex v, we choose an ordered subset \(W=\{ w_1,w_2,\ldots ,w_k\}\) of the vertex set of graph G. Then, we call the vector \(r(v|W)=(d(v,w_1),d(v,w_2),\ldots ,d(v,w_k))\) the representation of v with respect to W. A set W is called a resolving set for G if distinct vertices of G have distinct representations with respect to W. A resolving set of minimal cardinality for G is called a basis of G, and the cardinality of the basis is defined as the metric dimension of G and denoted by \({\textrm{dim}_\textrm{M}}(G)\) [9].
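All the graphs appearing in this paper are finite, so these definitions can be checked mechanically. The following Python sketch is our own illustration (names such as `metric_dimension` are ours, not standard): it computes representations by breadth-first search and finds the metric dimension of a small graph by exhaustive search over candidate sets W.

```python
from itertools import combinations

def bfs_distances(adj, source):
    """Breadth-first search distances from `source` in an undirected graph
    given as a dict: vertex -> list of neighbours."""
    dist, frontier = {source: 0}, [source]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        frontier = nxt
    return dist

def is_resolving(adj, W):
    """W is a resolving set iff the representations r(v|W) are pairwise distinct."""
    dists = [bfs_distances(adj, w) for w in W]
    reps = {tuple(d.get(v) for d in dists) for v in adj}
    return len(reps) == len(adj)

def metric_dimension(adj):
    """Cardinality of a minimum resolving set, by exhaustive search."""
    for k in range(1, len(adj) + 1):
        if any(is_resolving(adj, W) for W in combinations(adj, k)):
            return k
    return 0

P4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}        # path on 4 vertices
C4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}  # 4-cycle
```

For instance, a single endpoint resolves the path (the distances 0, 1, 2, 3 are distinct), while the 4-cycle needs two landmarks.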

The metric dimension was first introduced by Slater [10]. He was tackling the problem of trying to (uniquely) determine the location of an intruder in a network. The concept was then also studied by Harary and Melter [11]. It turned out that the metric dimension has many applications, for example, in pharmaceutical chemistry [9, 12], robot navigation in space [13], combinatorial optimization [14], as well as sonar and coast guard long range navigation [10, 15].

However, determining the metric dimension of a graph is far from easy. It is an NP-complete problem, even in special cases such as bounded-degree planar graphs [16], split graphs, bipartite graphs and their complements, and line graphs of bipartite graphs [17].

Mathematicians have recently studied the metric dimensions of various graphs corresponding to many different algebraic structures. For example, the metric dimension of a zero-divisor graph of a commutative ring was studied in [18,19,20,21], a total graph of a finite commutative ring in [22], an annihilating-ideal graph of a finite ring in [23], an annihilator graph of a ring in [24] and a commuting graph of a dihedral group in [25]. A related concept of strong metric dimension has also been introduced and studied, in the case of zero-divisor graphs of rings in [26, 27] and in the case of annihilator graphs of commutative rings in [28].

In this paper, we study the metric dimension of the zero-divisor graph of a commutative entire antinegative semiring. A semiring is a set S equipped with binary operations \(+\) and \(\cdot \) such that \((S,+)\) is a commutative monoid with identity element 0 and \((S,\cdot )\) is a monoid with identity element 1. Furthermore, addition and multiplication in S are connected by distributivity and the element 0 annihilates S. Semiring S is commutative if we have \(ab=ba\) for all \(a,b \in S\). Semirings are often divided according to the properties of their elements: for example, semiring S is called zero-divisor free or entire if \(ab=0\) for some \(a, b \in S\) implies \(a=0\) or \(b=0\). Furthermore, S is called antinegative or zero-sum-free (or simply an antiring), if \(a+b=0\) for \(a, b \in S\) implies \(a=b=0\). The most basic example of an antiring is the binary Boolean semiring \({\mathcal {B}}\), which consists of the set \(\{0,1\}\) where 0 is the additive identity, 1 is the multiplicative identity and \(1+1=1\). Commutative semirings often arise naturally; the set of nonnegative integers (or reals) with the usual operations of addition and multiplication, for example, forms a commutative semiring. But there are many more examples of commutative semirings, such as distributive lattices, tropical semirings, dioïds, fuzzy algebras, inclines and bottleneck algebras.
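Concretely, in \({\mathcal {B}}\) addition is logical OR and multiplication is logical AND, so its defining properties can be verified by brute force. A minimal Python sketch (our own illustration):

```python
from itertools import product

B = (0, 1)
def badd(a, b): return a | b   # addition in B is logical OR, so 1 + 1 = 1
def bmul(a, b): return a & b   # multiplication in B is logical AND

# B is entire (zero-divisor free): ab = 0 forces a = 0 or b = 0
assert all(a == 0 or b == 0 for a, b in product(B, B) if bmul(a, b) == 0)
# B is antinegative (zero-sum-free): a + b = 0 forces a = b = 0
assert all((a, b) == (0, 0) for a, b in product(B, B) if badd(a, b) == 0)
# distributivity and annihilation by 0
assert all(bmul(a, badd(b, c)) == badd(bmul(a, b), bmul(a, c))
           for a, b, c in product(B, repeat=3))
assert all(bmul(0, a) == 0 for a in B)
```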

The theory of semirings has also proved to have a lot of applications. For example, there exist applications in optimization theory, automatic control, models of discrete event networks and graph theory [29,30,31,32]. The reader is encouraged to consult [33] for a comprehensive theory of semirings.

This paper is organized as follows. In the next section, we examine some preliminary graph-theoretical notions and results. In Sect. 3, we calculate the metric dimension of the zero-divisor graph of \(M_n({\mathcal {B}})\) (see Theorem 3.11). In the last section, we use this result to calculate the metric dimension of the zero-divisor graph of \(M_n(S)\), where S is an arbitrary commutative entire antinegative semiring (see Theorem 4.4).

2 Preliminaries

For a semiring S, we denote by Z(S) the set of zero-divisors in S, \(Z(S)=\{x \in S;\) there exists \(0 \ne y \in S\) such that \(xy=0\) or \(yx=0 \}\). We denote by \(\Gamma (S)\) the zero-divisor graph of S. The vertex set \(V(\Gamma (S))\) of \(\Gamma (S)\) is the set of elements in \(Z(S) {\setminus } \{0\}\) and an unordered pair of vertices \(x,y \in V(\Gamma (S))\), \(x \ne y\), is an edge \(x-y\) in \(\Gamma (S)\) if \(xy = 0\) or \(yx=0\). The sequence of edges \(x_0 - x_1\), \( x_1 - x_2\),..., \(x_{k-1} - x_{k}\) in a graph is called a path of length k. The distance \(d(x,y)\) between vertices x and y is the length of a shortest path between them. The diameter \(\textrm{diam}(\Gamma )\) of the graph \(\Gamma \) is the maximum distance between any two vertices of the graph.
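As a concrete illustration of these notions (our own code, not part of the paper), the zero-divisor graph of a small matrix semiring can be generated by brute force. For the semiring of 2 by 2 Boolean matrices, which reappears in Example 3.1 below, this yields 8 vertices and diameter exactly 3, consistent with the bound \(\textrm{diam} \le 3\) of [7]:

```python
from itertools import product

N = 2
ZERO = ((0, 0), (0, 0))

def bmat_mul(A, B):
    """Matrix product over the Boolean semiring: (AB)_ij = OR_k (A_ik AND B_kj)."""
    return tuple(tuple(int(any(A[i][k] and B[k][j] for k in range(N)))
                       for j in range(N)) for i in range(N))

nonzero = [M for M in product(product((0, 1), repeat=N), repeat=N) if M != ZERO]
# vertices: nonzero x such that xy = 0 or yx = 0 for some nonzero y
vertices = [A for A in nonzero
            if any(bmat_mul(A, B) == ZERO or bmat_mul(B, A) == ZERO
                   for B in nonzero)]
adj = {A: [B for B in vertices
           if B != A and (bmat_mul(A, B) == ZERO or bmat_mul(B, A) == ZERO)]
       for A in vertices}

def bfs(src):
    dist, frontier = {src: 0}, [src]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        frontier = nxt
    return dist

diameter = max(max(bfs(v).values()) for v in adj)
```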

Next, we shall also need the following definition.

Definition 2.1

Let v be a vertex of a graph G. Then the open neighbourhood of v is \(N(v)=\{u \in V(G); \text { there exists an edge } uv \text{ in } G\}\) and the closed neighbourhood of v is \(N[v]=N(v) \cup \{v\}\). Two distinct vertices u and v of G are twins if \(N(u)=N(v)\) or \(N[u]=N[v]\).

The following lemma can be found in [34].

Lemma 2.2

[34] Suppose u and v are twins in a connected graph G and the set W is a resolving set for G. Then u or v is in W.

In this paper, we mostly concern ourselves with the semirings of matrices. For a semiring S, we denote by \(M_n(S)\) the semiring of all \(n \times n\) matrices with entries in S. We shall denote by \(E_{ij} \in M_n(S)\) the matrix with 1 at entry (i, j) and zeros elsewhere. For a matrix \(A \in M_n(S)\), we shall denote by \(A_{ij} \in S\) the (i, j)-th entry of A. Furthermore, let \(N_n\) denote the set \(\{1,2,\ldots ,n\}\).

We shall see that the semiring of Boolean matrices \(M_n({\mathcal {B}})\) plays a special role in the search for the metric dimension of the zero-divisor graph of an entire antiring. Thus, the next section studies the metric dimension of \(\Gamma (M_n({\mathcal {B}}))\).

3 The Metric Dimension of the Zero-Divisor Graph of Boolean Matrices

In this section, we study the metric dimension of \(\Gamma (M_n({\mathcal {B}}))\). Let us start with the simplest case, that of two-by-two matrices.

Example 3.1

Let \(W= \left\{ \left[ \begin{matrix}1 & 0 \\ 1 & 0 \end{matrix}\right] , \left[ \begin{matrix}1 & 1 \\ 0 & 0 \end{matrix}\right] \right\} \subset M_2({\mathcal {B}})\). It can be easily verified that W is a resolving set for \(\Gamma (M_2({\mathcal {B}}))\), so \({\textrm{dim}_\textrm{M}}(\Gamma (M_2({\mathcal {B}}))) \le 2\). On the other hand, \(\Gamma (M_2({\mathcal {B}}))\) has 8 vertices, and Theorem 1 from [7] states that \(\textrm{diam}(\Gamma (M_2({\mathcal {B}}))) \le 3\), so the representation of any vertex with respect to a single vertex w is one of the at most four vectors (0), (1), (2), (3), which cannot distinguish 8 vertices. We conclude that no set of cardinality 1 can be a resolving set for \(\Gamma (M_2({\mathcal {B}}))\), therefore \({\textrm{dim}_\textrm{M}}(\Gamma (M_2({\mathcal {B}}))) = 2\).
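The claims of Example 3.1 are easy to confirm exhaustively. The following Python sketch (our own code) rebuilds \(\Gamma (M_2({\mathcal {B}}))\), checks that the set W above is resolving, and checks that no single vertex resolves the graph:

```python
from itertools import product

N = 2
ZERO = ((0, 0), (0, 0))

def bmat_mul(A, B):
    """2 x 2 matrix product over the Boolean semiring (OR of ANDs)."""
    return tuple(tuple(int(any(A[i][k] and B[k][j] for k in range(N)))
                       for j in range(N)) for i in range(N))

nonzero = [M for M in product(product((0, 1), repeat=N), repeat=N) if M != ZERO]
vertices = [A for A in nonzero
            if any(bmat_mul(A, B) == ZERO or bmat_mul(B, A) == ZERO
                   for B in nonzero)]
adj = {A: [B for B in vertices
           if B != A and (bmat_mul(A, B) == ZERO or bmat_mul(B, A) == ZERO)]
       for A in vertices}

def bfs(src):
    dist, frontier = {src: 0}, [src]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        frontier = nxt
    return dist

def is_resolving(W):
    dists = [bfs(w) for w in W]
    return len({tuple(d.get(v) for d in dists) for v in vertices}) == len(vertices)

W = (((1, 0), (1, 0)), ((1, 1), (0, 0)))  # the two matrices of Example 3.1
```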

Now, we turn to the case \(n \ge 3\). For every \(I, J \subseteq N_n\) we define the set of all matrices with their zero rows and columns prescribed by I and J, respectively, by \(T_{I,J}=\{A \in M_n({\mathcal {B}}); A_{ik}=A_{kj}=0 \text { for every } i \in I, j \in J \text{ and } \text{ every } k \in N_n, \text{ and } \text{ for } \text{ every } s \notin I, u \notin J \text{ there } \text{ exist } t_s, v_u \in N_n \text{ such } \text{ that } A_{st_s} \ne 0 \text{ and } A_{v_uu} \ne 0 \}\). Also, we denote by \(t_{I,J}= |T_{I,J}|\) the cardinality of set \(T_{I,J}\).

The next two lemmas investigate the values of \(t_{I,J}\) for various sets \(I, J \subseteq N_n\).

Lemma 3.2

If \(|I|=n-1\) or \(|J|=n-1\), then \(t_{I, J}=1\).

Proof

Suppose \(|I|=n-1\). Then any matrix \(A \in T_{I,J}\) has only one nonzero row. In this row, the zero elements are exactly determined by the set J, so \(T_{I,J}\) has exactly one element. We reason similarly in the case \(|J|=n-1\). \(\square \)

Lemma 3.3

For every \(I, J \subsetneq N_n\), we have \(t_{I,J}=\sum \limits _{k=0}^{n-|J|}{(-1)^k{n-|J| \atopwithdelims ()k}(2^{n-|J|-k}-1)^{n-|I|}}\).

Proof

Notice that \(t_{I,J}\) is exactly equal to the number of matrices of size \((n-|I|) \times (n - |J|)\) with entries in \({\mathcal {B}}\) that have no zero row or column. So, let us examine the set of \((n-|I|) \times (n - |J|)\) matrices with elements in \({\mathcal {B}}\). Obviously, there are exactly \((2^{n-|J|}-1)^{n-|I|}\) matrices that have all rows nonzero, but some of them may have zero columns. Suppose therefore that a matrix has at least k zero columns for some \(k \in N_{n-|J|}\). We have \(n-|J| \atopwithdelims ()k\) possible ways to choose these k columns. If we disregard these zero columns, there are \(2^{n-|J|-k}-1\) possible ways to choose the remaining elements in every (nonzero) row. Since there are \(n-|I|\) rows, this yields \((2^{n-|J|-k}-1)^{n-|I|}\) matrices. If we sum this over all possible k, we count the matrices with more than k zero columns multiple times, but the inclusion-exclusion principle then yields that there are exactly \(\sum _{k=0}^{n-|J|}{{n-|J| \atopwithdelims ()k}(-1)^k(2^{n-|J|-k}-1)^{n-|I|}}\) matrices of size \((n-|I|) \times (n - |J|)\) with entries in \({\mathcal {B}}\) that have no zero row or column. \(\square \)
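The inclusion-exclusion count of Lemma 3.3 can be cross-checked against direct enumeration for small parameters. A Python sketch (our own code; the function names are hypothetical):

```python
from itertools import product
from math import comb

def t_formula(n, i, j):
    """Right-hand side of Lemma 3.3 for |I| = i and |J| = j."""
    return sum((-1) ** k * comb(n - j, k) * (2 ** (n - j - k) - 1) ** (n - i)
               for k in range(n - j + 1))

def count_no_zero_line(rows, cols):
    """Number of rows x cols Boolean matrices with no zero row and no zero column."""
    return sum(1 for M in product(product((0, 1), repeat=cols), repeat=rows)
               if all(any(r) for r in M) and all(any(c) for c in zip(*M)))
```

As in the proof, \(t_{I,J}\) equals the number of \((n-|I|) \times (n-|J|)\) Boolean matrices without zero rows or columns, so the two functions must agree.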

Remark 3.4

(1) There is no known closed formula for the expression of Lemma 3.3 in the general case, not even in the special case of square matrices (see [35]).

(2) Obviously, the number \(t_{I,J}\) depends only on the cardinalities of the sets I and J. Therefore, for any \(1 \le i,j \le n-1\), we can define \(t_{i, j}=t_{\{1,\ldots ,i\},\{1,\ldots ,j\}}\).

The next lemma shows that all elements in \(T_{I,J}\) are twins.

Lemma 3.5

For every \(I, J \subsetneq N_n\), all elements of the set \(T_{I,J}\) are twins in \(\Gamma (M_n({\mathcal {B}}))\).

Proof

Choose \(A \in T_{I,J}\) and observe that \(AB=0\) for \(B \in M_n({\mathcal {B}})\) if and only if \(B \in T_{K, L}\) for some \(K, L \subseteq N_n\) with \(N_n {\setminus } J \subseteq K\), and \(CA=0\) for \(C \in M_n({\mathcal {B}})\) if and only if \(C \in T_{K,L}\) for some \(K, L \subseteq N_n\) with \(N_n {\setminus } I \subseteq L\). Thus, all matrices in \(T_{I,J}\) have the same neighbours in \(\Gamma (M_n({\mathcal {B}}))\). \(\square \)

Our next task is to construct a resolving set for \(\Gamma (M_n({\mathcal {B}}))\). Thus, for every \(\emptyset \subsetneq I,J \subsetneq N_n\), let \(X_{I,J}\) denote an arbitrarily chosen element of \(T_{I,J}\). Also, for every \(\emptyset \subsetneq I \subsetneq N_n\) with \(|I| \le n-2\), let \(Y_{I, \emptyset }\) denote an arbitrarily chosen element of \(T_{I, \emptyset }\) and let \(Z_{\emptyset , I}\) denote an arbitrarily chosen element of \(T_{\emptyset ,I}\). Now, define R with \( R= \left( \bigcup \limits _{\emptyset \subsetneq I,J \subsetneq N_n}{X_{I,J}}\right) \cup \left( \bigcup \limits _{\emptyset \subsetneq I \subsetneq N_n \& |I| \le n-2}{Y_{I, \emptyset } \cup Z_{\emptyset , I}}\right) \cup T_{N_{n-1},\emptyset } \cup T_{\emptyset , N_{n-1}}\). Finally, define \( W_R = \left( \bigcup \limits _{\emptyset \subseteq I,J \subsetneq N_n \& I \cup J \ne \emptyset }{T_{I,J}} \right) {\setminus } R\). We shall prove that \(W_R\) is a resolving set for \(\Gamma (M_n({\mathcal {B}}))\). But before we do that, let us examine the cardinality of \(W_R\).

Lemma 3.6

\( |W_R|=2(n-1)+\sum \limits _{i,j=0 \& i+j \ne 0}^{n-2}{n \atopwithdelims ()i}{n \atopwithdelims ()j} \Big [\sum _{k=0}^{n-j}{(-1)^k{n-j \atopwithdelims ()k}\left( 2^{n-j-k}-1 \right) ^{n-i}} -1\Big ]\).

Proof

Observe firstly that by the construction of R, for any set \(I \subseteq N_n\) with \(|I|=n-1\), we have \(T_{\emptyset ,I} \subset R\) if and only if \(T_{I, \emptyset } \subset R\) if and only if \(I=N_{n-1}\). This implies that there exist \(2(n-1)\) sets of the form \(T_{\emptyset ,I}\) and \(T_{I,\emptyset }\) with \(|I|=n-1\) that are subsets of \(W_R\). Note that in this case \(|T_{\emptyset ,I}|=|T_{I,\emptyset }|=1\) by Lemma 3.2; hence, we have the first \(2(n-1)\) elements in \(W_R\). Notice next that R contains exactly one element from \(T_{I,J}\) for every \(\emptyset \subseteq I,J \subsetneq N_n\) with \(I \cup J \ne \emptyset \) and \(|I|, |J| \le n-2\) and that there are exactly \({n \atopwithdelims ()i}{n \atopwithdelims ()j}\) different possible pairs of sets I and J in \(N_n\) with \(|I|=i\) and \(|J|=j\). Lemma 3.3 now concludes this proof. \(\square \)
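For small n, the count of Lemma 3.6 can be cross-checked directly: by the construction above, R consists of \((2^n-2)^2 + 2(2^n-2-n) + 2\) elements (our own closed form for the number of chosen elements, obtained by counting the classes involved), so \(|W_R|\) must equal the total number of vertices minus this quantity. A Python sketch (our own code):

```python
from math import comb

def t(n, i, j):
    """Lemma 3.3: the size of T_{I,J} with |I| = i, |J| = j."""
    return sum((-1) ** k * comb(n - j, k) * (2 ** (n - j - k) - 1) ** (n - i)
               for k in range(n - j + 1))

def wr_lemma36(n):
    """The formula of Lemma 3.6."""
    return 2 * (n - 1) + sum(comb(n, i) * comb(n, j) * (t(n, i, j) - 1)
                             for i in range(n - 1) for j in range(n - 1)
                             if i + j > 0)

def wr_direct(n):
    """|W_R| = |Z(M_n(B)) \\ {0}| - |R|, counted class by class."""
    vertex_count = sum(comb(n, i) * comb(n, j) * t(n, i, j)
                       for i in range(n) for j in range(n) if i + j > 0)
    r_count = (2 ** n - 2) ** 2 + 2 * (2 ** n - 2 - n) + 2
    return vertex_count - r_count
```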

Remark 3.7

Note that by Lemmas 2.2 and 3.5, every resolving set for \(\Gamma (M_n({\mathcal {B}}))\) has to contain all but perhaps one element from every set \(T_{I,J}\). By a calculation similar to the one in the proof of Lemma 3.6, this yields \( \sum \limits _{i,j=0 \& i+j \ne 0}^{n-2}{n \atopwithdelims ()i}{n \atopwithdelims ()j} \Big [\sum _{k=0}^{n-j}{(-1)^k{n-j \atopwithdelims ()k}\left( 2^{n-j-k}-1 \right) ^{n-i}} -1\Big ] = |W_R| - 2(n-1)\) distinct elements that belong to each and every resolving set of \(\Gamma (M_n({\mathcal {B}}))\).

In order to find the metric dimension of \(\Gamma (M_n({\mathcal {B}}))\), we shall also need the following two lemmas.

Lemma 3.8

Suppose that \(A, B \in M_n({\mathcal {B}})\) are zero-divisors such that \(A \in T_{I_A,J_A}\) for some sets \(I_A,J_A \subseteq N_n\) with \(|I_A| = |J_A|=1\). Then \(d(A,B) \le 2\).

Proof

Denote \(I_A=\{i\}\) and \(J_A=\{j\}\). Since B is a zero-divisor, \(B \in T_{I_B,J_B}\) for some \(I_B, J_B \subseteq N_n\) and there exists \(k \in N_n\) such that \(k \in I_B\) or \(k \in J_B\). If \(k \in I_B\) then \(E_{jk}B=AE_{jk}=0\), so \(d(A,B) \le 2\). Similarly, if \(k \in J_B\) then \(BE_{ki}=E_{ki}A=0\) and again \(d(A,B) \le 2\). \(\square \)

Lemma 3.9

Suppose that \(A, B \in M_n({\mathcal {B}})\) are zero-divisors such that either

(1) \(A \in T_{I_A,\emptyset }\) and \(B \in T_{I_B,\emptyset }\) for some sets \(I_A, I_B \subsetneq N_n\) with \(I_A \cap I_B = \emptyset \), or

(2) \(A \in T_{\emptyset , J_A}\) and \(B \in T_{\emptyset , J_B}\) for some sets \(J_A, J_B \subsetneq N_n\) with \(J_A \cap J_B = \emptyset \).

Then \(d(A,B) = 3\).

Proof

Obviously, it suffices to prove one of the two assertions; the other follows by a symmetric argument. Suppose therefore that \(A \in T_{I_A,\emptyset }\) and \(B \in T_{I_B,\emptyset }\) for some sets \(I_A, I_B \subsetneq N_n\) with \(I_A \cap I_B = \emptyset \). By [7, Theorem 1], \(\textrm{diam}(\Gamma (M_n({\mathcal {B}}))) \le 3\). Since \(AB \ne 0\) and \(BA \ne 0\), we only have to prove that \(d(A,B) \ne 2\). Note that \(AC \ne 0\) and \(BC \ne 0\) for every \(0 \ne C \in M_n({\mathcal {B}})\), since A and B have no zero columns. So, suppose there exists \(C \in Z(M_n({\mathcal {B}}))\) such that \(CA=CB=0\). But \(CA=0\) implies \(C \in T_{I_C, J_C}\) for some \(I_C, J_C \subseteq N_n\) with \(N_n {\setminus } I_A \subseteq J_C\). Similarly, \(CB=0\) implies \(N_n {\setminus } I_B \subseteq J_C\). But \(I_A \cap I_B = \emptyset \), resulting in \(J_C = N_n\), so \(C=0\), a contradiction. \(\square \)

We can now prove that \(W_{R}\) is a resolving set for \(\Gamma (M_n({\mathcal {B}}))\).

Theorem 3.10

For any \(n \ge 3\), the set \(W_R\) is a resolving set for \(\Gamma (M_n({\mathcal {B}}))\).

Proof

Observe firstly that \( Z(M_n({\mathcal {B}})) {\setminus } \{0\} = \bigcup \limits _{\emptyset \subseteq I,J \subsetneq N_n \& I \cup J \ne \emptyset }{T_{I,J}} \), so \(Z(M_n({\mathcal {B}})) {\setminus } \{0\} = W_R \cup R\). Since every element of \(W_R\) is at distance 0 only from itself, its representation with respect to \(W_R\) distinguishes it from every other vertex, so it suffices to consider the vertices in R. Now, choose arbitrary \(A, B \in R\). We have to prove that A and B have different representations with respect to \(W_R\). Denote by \(I_A, J_A\) and \(I_B, J_B\) the sets such that \(A \in T_{I_A,J_A}\) and \(B \in T_{I_B,J_B}\). By the definition of R, we have \(I_A \ne I_B\) or \(J_A \ne J_B\). Suppose without loss of generality that \(I_A \ne I_B\). Now, we have \(|I_A| \ge |I_B|\) or \(|I_B| \ge |I_A|\), and again without loss of generality we can assume that \(|I_A| \ge |I_B|\).

We examine firstly the case that \(|I_A| \ge 2\). We denote \(J_C=N_n {\setminus } I_A \ne \emptyset \) and observe that \(|J_C | \le n-2\), thus \(|T_{\emptyset ,J_C}| \ge 2\). Therefore, by the definition of \(W_R\), there exists a matrix \(C \in W_R\) such that \(C \in T_{\emptyset ,J_C}\). This implies that \(CA=0\) and since \(I_A \nsubseteq I_B\), we see that \(CB \ne 0\). By definition, C is not a right zero-divisor, so \(d(A,C)=1\) and \(d(B,C) \ge 2\) and therefore A and B have different representations with respect to \(W_R\).

Now, assume that \(|I_A| \le 1\). Since \(I_A \ne I_B\) and \(|I_A| \ge |I_B|\), we have \(I_A \ne \emptyset \), so it remains to examine the case \(|I_A| = 1\). We have two possibilities, either \(|I_B|=1\) or \(I_B = \emptyset \). Assume firstly that \(|I_B|=1\). Then \(I_A \ne \{n\}\) or \(I_B \ne \{n\}\). We can assume without loss of generality that \(I_A \ne \{n\}\). Let us denote \(J_C=N_n {\setminus } I_A\) and observe that \(|J_C|=n-1\) and \(J_C \ne N_{n-1}\), so by the definition of \(W_R\), there exists a matrix \(C \in W_R\) such that \(C \in T_{\emptyset ,J_C}\). This implies that \(CA=0\) and \(CB \ne 0\). Since C is not a right zero-divisor, we conclude that \(d(A,C)=1\) and \(d(B,C) \ge 2\), so A and B have different representations with respect to \(W_R\). Finally, assume that \(I_B = \emptyset \). Note that \(J_B \ne N_n\), so choose any \(k \notin J_B\) and observe that by the definition of \(W_R\) and the fact that \(n \ge 3\), there exists \(C \in W_R\) such that \(C \in T_{\emptyset , \{k\}}\). Now, Lemma 3.9 yields \(d(B,C)=3\). Denote by l the only element of \(I_A\) and observe that \(E_{kl}A=CE_{kl}=0\), so \(d(A,C) \le 2\), again concluding that A and B have different representations with respect to \(W_R\). \(\square \)

We can now prove the following theorem, which is the main result of this section.

Theorem 3.11

For any \(n \ge 2\), the metric dimension of \(\Gamma (M_n({\mathcal {B}}))\) equals

$$\begin{aligned} {\textrm{dim}_\textrm{M}}(\Gamma (M_n({\mathcal {B}})))&= 2(n-1)+\sum \limits _{i,j=0 \& i+j \ne 0}^{n-2}{n \atopwithdelims ()i}{n \atopwithdelims ()j}\\ &\quad \times \left[ \sum \limits _{k=0}^{n-j}{(-1)^k{n-j \atopwithdelims ()k}\left( 2^{n-j-k}-1 \right) ^{n-i}}-1\right] . \end{aligned}$$

Proof

If \(n=2\), Example 3.1 tells us that \({\textrm{dim}_\textrm{M}}(\Gamma (M_2({\mathcal {B}}))) =2\), so the result holds in this case. So, assume from now on that \(n \ge 3\). By Lemma 3.6, we have \( |W_R|=2(n-1)+\sum _{i,j=0 \& i+j \ne 0}^{n-2}{n \atopwithdelims ()i}{n \atopwithdelims ()j} \left[ \sum _{k=0}^{n-j}{(-1)^k{n-j \atopwithdelims ()k}\left( 2^{n-j-k}-1 \right) ^{n-i}}-1\right] \) and by Theorem 3.10, we have \({\textrm{dim}_\textrm{M}}(\Gamma (M_n({\mathcal {B}}))) \le |W_R|\).

Now, we have to prove that the inequality \({\textrm{dim}_\textrm{M}}(\Gamma (M_n({\mathcal {B}}))) \ge |W_R|\) also holds. So, let W be an arbitrary resolving set for \(\Gamma (M_n({\mathcal {B}}))\). Remark 3.7 assures us that there are at least \( \sum _{i,j=0 \& i+j \ne 0}^{n-2}{n \atopwithdelims ()i}{n \atopwithdelims ()j} \left[ \sum _{k=0}^{n-j}{(-1)^k{n-j \atopwithdelims ()k}\left( 2^{n-j-k}-1 \right) ^{n-i}}-1\right] \) “twin” elements in W (all but perhaps one element from \(T_{I,J}\) for every \(\emptyset \subseteq I,J \subsetneq N_n\) with \(I \cup J \ne \emptyset \) and \(|I|,|J| \le n-2\)). Now, choose \(i, j, k \in N_n\) with \(j \ne k\) and define \(I_A=\{i\}, J_A=\{j\}, I_B=\{i\}, J_B=\{k\}\). Let us examine two arbitrary matrices \(A \in T_{I_A,J_A}, B \in T_{I_B,J_B}\). If neither of A and B is in W, then Lemma 3.8 implies that the only way for A and B to have different representations with respect to W is that there exists \(C \in W\) such that C is a neighbour of A, but not B, or vice versa. But such C has to be either an element of \(T_{N_n{\setminus }\{j\},J_C}\) or \(T_{N_n{\setminus }\{k\},J_C}\) for some \(N_n{\setminus }\{i\} \ne J_C \subsetneq N_n\). So, we now have one of the following four possibilities: every element of \(T_{I_A,J_A}\) is in W, every element of \(T_{I_B,J_B}\) is in W, some element of \(T_{N_n{\setminus } \{j\},J_C}\) is in W, or some element of \(T_{N_n{\setminus } \{k\},J_C}\) is in W. In every case, we see that W has to contain one element in addition to the “twin” elements described above (in the first two cases, this is the one additional element from \(T_{I_A,J_A}\) or \(T_{I_B,J_B}\), since we counted all but one of the elements from these sets above).
We can reason similarly in the case \(I_A=\{i\}, J_A=\{k\}, I_B=\{j\}, J_B=\{k\}\) for any distinct \(i, j \in N_n\) and any \(k \in N_n\), yielding again one of four options: every element of \(T_{I_A,J_A}\) is in W, every element of \(T_{I_B,J_B}\) is in W, some element of \(T_{I_C,N_n{\setminus } \{i\}}\) is in W, or some element of \(T_{I_C,N_n{\setminus } \{j\}}\) is in W (for some \(N_n {\setminus } \{k\} \ne I_C \subsetneq N_n\)). We can easily see that in this way, we must have at least \(2(n-1)\) (distinct) additional elements in W (one for any two distinct rows and one for any two distinct columns), thus \( |W| \ge 2(n-1)+\sum _{i,j=0 \& i+j \ne 0}^{n-2}{n \atopwithdelims ()i}{n \atopwithdelims ()j} \left[ \sum _{k=0}^{n-j}{(-1)^k{n-j \atopwithdelims ()k}\left( 2^{n-j-k}-1 \right) ^{n-i}}-1\right] \), which proves the theorem. \(\square \)
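For n = 3, the construction of this section can be replayed by computer as an independent check (our own code; it is not needed for the proofs). It enumerates the 246 vertices of \(\Gamma (M_3({\mathcal {B}}))\), forms a set \(W_R\) exactly as described before Lemma 3.6, and verifies that \(W_R\) has 202 elements, the value given by Theorem 3.11, and that it is resolving:

```python
from itertools import product

n = 3
ALL = list(product(product((0, 1), repeat=n), repeat=n))
ZERO = ((0,) * n,) * n

def zero_rows(A):
    return frozenset(i for i in range(n) if not any(A[i]))

def zero_cols(A):
    return frozenset(j for j in range(n) if not any(A[i][j] for i in range(n)))

# over B, a nonzero matrix is a zero-divisor iff it has a zero row or column
vertices = [A for A in ALL if A != ZERO and (zero_rows(A) or zero_cols(A))]
nz_cols = {A: frozenset(range(n)) - zero_cols(A) for A in vertices}
zr = {A: zero_rows(A) for A in vertices}

def annihilates(A, B):
    """AB == 0 over B iff every nonzero column of A meets only zero rows of B."""
    return nz_cols[A] <= zr[B]

adj = {A: [B for B in vertices
           if B != A and (annihilates(A, B) or annihilates(B, A))]
       for A in vertices}

# the classes T_{I,J}, and the set R chosen before Lemma 3.6
classes = {}
for A in vertices:
    classes.setdefault((zero_rows(A), zero_cols(A)), []).append(A)

head = frozenset(range(n - 1))  # plays the role of N_{n-1} (0-indexed)
R = set()
for (I, J), members in classes.items():
    if I and J:                       # one X_{I,J} per class
        R.add(members[0])
    elif len(I | J) <= n - 2:         # one Y_{I,0}, resp. Z_{0,J}, per class
        R.add(members[0])
    elif I == head or J == head:      # all of T_{N_{n-1},0} and T_{0,N_{n-1}}
        R.update(members)
W_R = [A for A in vertices if A not in R]

def bfs(src):
    dist, frontier = {src: 0}, [src]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        frontier = nxt
    return dist

dists = [bfs(w) for w in W_R]
representations = {tuple(d.get(v) for d in dists) for v in vertices}
```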

4 The General Case

Now, we examine the metric dimension of the zero-divisor graph of \(M_n(S)\) for an arbitrary commutative entire antinegative semiring S. We shall need the following definition.

Definition 4.1

Let S be an entire antinegative semiring and \(A \in M_n(S)\). Then the pattern of A is the matrix \(\textrm{patt}(A) \in M_n(S)\) such that for every \(i, j \in N_n\) we have \(\textrm{patt}(A)_{ij}=1\) if and only if \(A_{ij} \ne 0\) and \(\textrm{patt}(A)_{ij}=0\) otherwise.

Lemma 4.2

Let S be an entire antiring. For every \(0 \ne A \in Z(M_n(S))\), the matrices A and \(\textrm{patt}(A)\) are twins in \(\Gamma (M_n(S))\).

Proof

This follows directly from the fact that S is an entire antinegative semiring: for any \(B \in M_n(S)\), the entry \((AB)_{ij}=\sum _k A_{ik}B_{kj}\) is zero if and only if every summand \(A_{ik}B_{kj}\) is zero, which happens if and only if \((\textrm{patt}(A)B)_{ij}=0\). Thus \(AB=0\) if and only if \(\textrm{patt}(A)B=0\), and similarly \(BA=0\) if and only if \(B\textrm{patt}(A)=0\). \(\square \)

Lemma 4.3

Let S be an entire antiring, \(\Lambda \subseteq N_n \times N_n\) and \(\alpha _{ij}, \beta _{ij} \in S {\setminus } \{0\}\) for all \((i,j) \in \Lambda \). If matrices \(\sum _{(i,j) \in \Lambda }{\alpha _{ij}E_{ij}}\) and \(\sum _{(i,j) \in \Lambda }{\beta _{ij}E_{ij}}\) are nonzero zero-divisors, they are twins in \(\Gamma (M_n(S))\).

Proof

Denote \(A=\sum _{(i,j) \in \Lambda }{\alpha _{ij}E_{ij}}\) and \(B=\sum _{(i,j) \in \Lambda }{\beta _{ij}E_{ij}}\). By Lemma 4.2, both A and \(\textrm{patt}(A)\), as well as B and \(\textrm{patt}(B)\) are twins in \(\Gamma (M_n(S))\). By definition though, \(\textrm{patt}(A)=\textrm{patt}(B)\), so A and B are also twins. \(\square \)

Lemmas 2.2 and 4.3 imply that for infinite entire antirings S, the metric dimension of \(\Gamma (M_n(S))\) is infinite. Therefore, we shall limit ourselves to studying finite semirings. Note that any entire antiring with two elements is isomorphic to \({\mathcal {B}}\), so we can also assume that \(|S| \ge 3\).
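A concrete finite example with \(|S| \ge 3\) is the quotient of \((\mathbb {N},+,\cdot )\) that identifies all values greater than or equal to 2: the "truncated" semiring \(S = \{0,1,2\}\) with both operations capped at 2 is commutative, entire and antinegative. The Python sketch below (our own illustration; this particular choice of S is an assumption, not taken from the paper) verifies Lemma 4.3 in \(\Gamma (M_2(S))\), namely that matrices with the same pattern are twins:

```python
from itertools import product

S = (0, 1, 2)
def sadd(a, b): return min(a + b, 2)   # truncated addition: 1 + 1 = 2, 2 + x = 2
def smul(a, b): return min(a * b, 2)   # truncated multiplication

# sanity checks: S is entire and antinegative, and distributivity holds
assert all(a == 0 or b == 0 for a in S for b in S if smul(a, b) == 0)
assert all((a, b) == (0, 0) for a in S for b in S if sadd(a, b) == 0)
assert all(smul(a, sadd(b, c)) == sadd(smul(a, b), smul(a, c))
           for a in S for b in S for c in S)

n = 2
ZERO = ((0,) * n,) * n

def mat_mul(A, B):
    out = []
    for i in range(n):
        row = []
        for j in range(n):
            s = 0
            for k in range(n):
                s = sadd(s, smul(A[i][k], B[k][j]))  # entry (i, j), computed in S
            row.append(s)
        out.append(tuple(row))
    return tuple(out)

def patt(A):
    return tuple(tuple(int(x != 0) for x in row) for row in A)

matrices = [M for M in product(product(S, repeat=n), repeat=n) if M != ZERO]
vertices = [A for A in matrices
            if any(mat_mul(A, B) == ZERO or mat_mul(B, A) == ZERO
                   for B in matrices)]
nbhd = {A: frozenset(B for B in vertices if B != A and
                     (mat_mul(A, B) == ZERO or mat_mul(B, A) == ZERO))
        for A in vertices}
```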

We can now prove our main result.

Theorem 4.4

Let S be an entire finite antiring with \(|S| \ge 3\) and \(n \ge 2\). Then the following formula holds:

$$\begin{aligned} {\textrm{dim}_\textrm{M}}(\Gamma (M_n(S)))= & {} |S|^{n^2}-2^{n^2}-\sum \limits _{k=0}^{n}{(-1)^k{n \atopwithdelims ()k}\left[ \left( |S|^{n-k}-1 \right) ^{n}-\left( 2^{n-k}-1 \right) ^{n}\right] }\\{} & {} +\ {\textrm{dim}_\textrm{M}}(\Gamma (M_n({\mathcal {B}})))- 2(n-1). \end{aligned}$$

Proof

Throughout this proof, we shall (by a slight abuse of notation) consider that \({\mathcal {B}}\) is a proper subsemiring in S. Suppose W is a resolving set for \(\Gamma (M_n(S))\). Lemmas 2.2 and 4.3 imply that for every \(\Lambda \subseteq N_n \times N_n\) at most one zero-divisor matrix with its pattern of zero and non-zero entries prescribed by \(\Lambda \) is not in W. This yields \(| Z(M_n(S))| - | Z(M_n({\mathcal {B}}))|\) elements that have to be included in W (and we can assume without loss of generality that the missing elements from W are exactly the matrices from \(Z(M_n({\mathcal {B}}))\)). Suppose firstly that \(n \ge 3\). Lemmas 2.2 and 3.5 tell us that W also has to additionally contain all but perhaps one element from \(T_{I,J}\) for every \(I,J \subsetneq N_n\). Since none of these matrices have the same pattern, this yields by Remark 3.7 and Theorem 3.11 additional \({\textrm{dim}_\textrm{M}}(\Gamma (M_n({\mathcal {B}}))) - 2(n-1)\) elements that have to be in W. If \(n=2\), \({\textrm{dim}_\textrm{M}}(\Gamma (M_2({\mathcal {B}}))) = 2(2-1)\), so in both cases \({\textrm{dim}_\textrm{M}}(\Gamma (M_n(S))) \ge | Z(M_n(S))| - | Z(M_n({\mathcal {B}}))|+{\textrm{dim}_\textrm{M}}(\Gamma (M_n({\mathcal {B}}))) - 2(n-1)\).

On the other hand, let \(W = Z(M_n(S)) {\setminus } Z(M_n({\mathcal {B}}))\) and if \(n \ge 3\), we also add to W the \({\textrm{dim}_\textrm{M}}(\Gamma (M_n({\mathcal {B}})))- 2(n-1)\) elements from Remark 3.7. By construction, \(|W| = | Z(M_n(S))| - | Z(M_n({\mathcal {B}}))| + {\textrm{dim}_\textrm{M}}(\Gamma (M_n({\mathcal {B}}))) - 2(n-1)\) for every \(n \ge 2\). Let us prove that W is a resolving set. Choose arbitrary non-zero matrices \(A \ne B \in Z(M_n(S))\). Now, if A and B have the same pattern, one of them is in W by definition. Suppose therefore that A and B have different patterns. Remark 3.7 now ensures that if matrices \(\textrm{patt}(A)\) and \(\textrm{patt}(B)\) belong to the same set \(T_{I,J}\) for some \(\emptyset \subseteq I,J \subsetneq N_n\), at least one of them is in W. So, let us assume that \(\textrm{patt}(A) \in T_{I_A,J_A}\) and \(\textrm{patt}(B) \in T_{I_B,J_B}\), where \(I_A \ne I_B\) or \(J_A \ne J_B\). Without loss of generality (if necessary, swapping the roles of rows and columns, or matrices A and B respectively), assume that \(I_A \ne I_B\) and \(|I_A| \le |I_B|\). Since \(|S| \ge 3\), the set W contains at least one zero-divisor matrix of every possible pattern of zero and nonzero entries, so we can choose \(C \in W\) such that \(\textrm{patt}(C) \in T_{\emptyset , N_n {\setminus } I_B}\). Therefore \(\textrm{patt}(C)\) is not a right zero-divisor, but \(\textrm{patt}(C)\textrm{patt}(B)=0\), and since \(I_B \nsubseteq I_A\) also \(\textrm{patt}(C)\textrm{patt}(A) \ne 0\). Lemma 4.2 now implies that A and B have different representations with respect to W. This proves that \({\textrm{dim}_\textrm{M}}(\Gamma (M_n(S))) \le | Z(M_n(S))| - | Z(M_n({\mathcal {B}}))|+{\textrm{dim}_\textrm{M}}(\Gamma (M_n({\mathcal {B}})))-2(n-1)\).

Finally, observe that the zero-divisors in \(M_n(S)\) are exactly those matrices with at least one zero row or column, so with an argument similar to the one in the proof of Lemma 3.3, we can prove that \(| Z(M_n(S))|=|S|^{n^2}-\sum \limits _{k=0}^{n}{(-1)^k{n \atopwithdelims ()k}\left( |S|^{n-k}-1 \right) ^{n}}\). This yields that

$$\begin{aligned}{} & {} |Z(M_n(S))|-| Z(M_n({\mathcal {B}}))|\\{} & {} \quad =|S|^{n^2}-2^{n^2}-\sum \limits _{k=0}^{n}{(-1)^k{n \atopwithdelims ()k}\left[ \left( |S|^{n-k}-1 \right) ^{n}-\left( 2^{n-k}-1 \right) ^{n}\right] }, \end{aligned}$$

which proves the assertion. \(\square \)
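Theorem 4.4 can be sanity-checked in the smallest case \(n = 2\), \(|S| = 3\) (our own computation, not part of the proof). Taking for S the truncated semiring \(\{0,1,2\}\) with both operations capped at 2, the formula evaluates to 16 (using \({\textrm{dim}_\textrm{M}}(\Gamma (M_2({\mathcal {B}}))) = 2\) from Example 3.1), and the set \(W = Z(M_2(S)) {\setminus } Z(M_2({\mathcal {B}}))\) from the proof is indeed a resolving set of that size:

```python
from itertools import product
from math import comb

S = (0, 1, 2)
def sadd(a, b): return min(a + b, 2)   # truncated three-element antiring
def smul(a, b): return min(a * b, 2)

n = 2
ZERO = ((0,) * n,) * n

def mat_mul(A, B):
    out = []
    for i in range(n):
        row = []
        for j in range(n):
            s = 0
            for k in range(n):
                s = sadd(s, smul(A[i][k], B[k][j]))
            row.append(s)
        out.append(tuple(row))
    return tuple(out)

matrices = [M for M in product(product(S, repeat=n), repeat=n) if M != ZERO]
vertices = [A for A in matrices
            if any(mat_mul(A, B) == ZERO or mat_mul(B, A) == ZERO
                   for B in matrices)]
adj = {A: [B for B in vertices
           if B != A and (mat_mul(A, B) == ZERO or mat_mul(B, A) == ZERO)]
       for A in vertices}

def bfs(src):
    dist, frontier = {src: 0}, [src]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        frontier = nxt
    return dist

def z_count(base):
    """|Z(M_n(.))| for an entire antiring with `base` elements, by the formula above."""
    return base ** (n * n) - sum((-1) ** k * comb(n, k) * (base ** (n - k) - 1) ** n
                                 for k in range(n + 1))

# Theorem 4.4 for n = 2, |S| = 3, with dim_M(Gamma(M_2(B))) = 2
formula_value = z_count(3) - z_count(2) + 2 - 2 * (n - 1)

# W from the proof: the zero-divisor matrices with some entry outside {0, 1}
W = [A for A in vertices if any(x > 1 for row in A for x in row)]
dists = [bfs(w) for w in W]
reps = {tuple(d.get(v) for d in dists) for v in vertices}
```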