1 Introduction

A toric variety can be constructed from a nice combinatorial object called a fan. Every cone \(\sigma \) in a fan corresponds to an affine toric variety \(U_{\sigma }\), and the intersection of two cones \(\sigma _1, \sigma _2\) corresponds to the affine toric variety \(U_{\sigma _1 \cap \sigma _2}\) lying in both \(U_{\sigma _1}\) and \(U_{\sigma _2}\). The affine toric varieties \(U_{\sigma _1}\) and \(U_{\sigma _2}\) are glued together along the affine toric variety \(U_{\sigma _1 \cap \sigma _2}\) to obtain the abstract normal toric variety of the fan. If all the cones in the fan are generated by linearly independent vectors, the fan is called simplicial.

Let \(\Sigma \subseteq {\mathbb {R}}^{n}\) be a complete simplicial fan with rays \(\rho _{1},\ldots ,\rho _{r}\) and \(X:=X_{\Sigma }\) be the corresponding n-dimensional smooth projective toric variety with Picard group isomorphic to \({\mathbb {Z}}^{d} \) over the algebraic closure \({\mathbb {K}}=\overline{{\mathbb {F}}}_q\) of a finite field \({\mathbb {F}}_q\), where the rank \(d=r-n>0\).

Let \(v_1,\ldots ,v_r\) be the generators of the rays of the cones in the fan \(\Sigma \), and let \(\phi \) be the matrix whose rows are these generators. We get the matrix \(\beta \) in the following exact sequence by applying the Smith Normal Form Algorithm (see [2] and references therein) to \(\phi \):

$$\begin{aligned} {\mathfrak {B}}: 0 {\longrightarrow } {\mathbb {Z}}^{n} \overset{\phi }{\longrightarrow }{\mathbb {Z}}^{r}\overset{\beta }{\longrightarrow }{\mathcal {A}} {\longrightarrow } 0. \end{aligned}$$

Here the group \({\mathcal {A}} \cong {\mathbb {Z}}^{r} / {\phi }({\mathbb {Z}}^{n})\) is isomorphic to the Picard group of X. Applying the \(Hom(-, {\mathbb {K}}^{*})\) functor to the sequence \( {\mathfrak {B}}\) gives the following short exact sequence:

$$\begin{aligned} \mathfrak {B^{*}}: 1{\longrightarrow }G\overset{i}{\longrightarrow }({\mathbb {K}}^{*})^{r}\overset{\pi }{\longrightarrow }T_{X} {\longrightarrow }1 \end{aligned}$$

with \( \pi :(\xi _1,...,\xi _r) \mapsto (\xi ^{u_{1}},...,\xi ^{u_{n}})\), where \( u_{1},...,u_{n}\) are the columns of \(\phi \).
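As a quick sanity check on the sequence \({\mathfrak {B}}\), the composition \(\beta \circ \phi \) must vanish since \({\text {im}}(\phi )={\text {ker}}(\beta )\). A minimal Python sketch of this check, using as sample data the Hirzebruch surface \(H_2\) matrices from Example 1 in Sect. 4 (treating them as plain integer arrays is our own illustration, not part of the construction):

```python
# Sanity check for the exact sequence B: the composition beta . phi
# must be the zero matrix, since im(phi) = ker(beta).  Sample data:
# the Hirzebruch surface H_2 from Example 1 (phi has the ray
# generators as rows; beta is the grading matrix).

phi = [[1, 0], [0, 1], [-1, 2], [0, -1]]   # rows: ray generators v_1, ..., v_4
beta = [[1, -2, 1, 0], [0, 1, 0, 1]]       # columns: degrees beta_1, ..., beta_4

def mat_mul(A, B):
    """Naive integer matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

print(mat_mul(beta, phi))  # [[0, 0], [0, 0]] -- the 2x2 zero matrix
```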

Let \( S={\mathbb {F}}_q[x_{1},...,x_{r}]=\bigoplus _{\alpha \in {\mathcal {A}} } S_\alpha \) be the multigraded polynomial ring, also known as the Cox ring of X, graded using the columns of the matrix \(\beta \), i.e. \( deg_{\mathcal {A}}(x_{j})=\beta _{j}\) is the j-th column of \( \beta \), for \( j=1,...,r\). The irrelevant ideal of S is the monomial ideal

$$\begin{aligned} B = \langle x^{\hat{\sigma }}: \quad \sigma \in \Sigma \rangle , \quad \text {where } x^{\hat{\sigma }} = \prod _{\rho _i \notin \sigma } x_i. \end{aligned}$$
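The generators \(x^{\hat{\sigma }}\) can be read off directly from the maximal cones. A small Python sketch (the index sets below are our own assumption about the standard fan of the Hirzebruch surface \(H_2\) used in Example 1, with maximal cones spanned by adjacent pairs of the rays \(\rho _1,\ldots ,\rho _4\)):

```python
# Generators of the irrelevant ideal B = <x^sigma-hat : sigma in Sigma>,
# where x^sigma-hat is the product of the x_i whose ray is NOT in sigma.
# Maximal cones are recorded by the indices of the rays spanning them
# (assumed data: the standard fan of the Hirzebruch surface H_2).

rays = [1, 2, 3, 4]
max_cones = [{1, 2}, {2, 3}, {3, 4}, {4, 1}]

def irrelevant_generators(rays, cones):
    """Return each x^sigma-hat as a sorted tuple of variable indices."""
    gens = []
    for sigma in cones:
        gens.append(tuple(sorted(i for i in rays if i not in sigma)))
    return gens

for g in irrelevant_generators(rays, max_cones):
    print("x" + "*x".join(str(i) for i in g))
# x3*x4, x1*x4, x1*x2, x2*x3
```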

The main feature of X we use here is that we can represent points on our toric variety X using homogeneous coordinates thanks to the Geometric Invariant Theory quotient representation

$$\begin{aligned} X\cong ({\mathbb {K}}^{r}\setminus V(B))/G, \text { where } G={\text {Ker}}(\pi ) \text { and } {\mathbb {K}}=\overline{{\mathbb {F}}}_q, \end{aligned}$$

due to Cox, see [6]. Therefore, every point of X is identified with an orbit of the following type:

$$\begin{aligned}{}[P]=G\cdot P=[p_{1}:\cdots :p_{r}] \text { for } P\in {\mathbb {K}}^{r}\setminus V(B). \end{aligned}$$

When the point is an \({\mathbb {F}}_q\)-rational point, one can choose a representative \(P\in {\mathbb {F}}_q^r\). Traditionally, the set of \({\mathbb {F}}_q\)-rational points of X is denoted by \(X({\mathbb {F}}_q)\). Since we focus only on a subset Y of the \({\mathbb {F}}_q\)-rational points, we abuse notation and simply write \(Y\subseteq X\) instead of \(Y \subseteq X({\mathbb {F}}_q)\) throughout the paper.

For a fixed degree \(\alpha \) in the semigroup \({\mathbb {N}}\beta \) which is generated by \(\beta _{1},...,\beta _{r}\), and a subset \(Y=\{ [P_{1}],...,[P_{N}]\}\subseteq X\) with cardinality N, the evaluation map is defined as follows:

$$\begin{aligned} ev_{Y}: S_\alpha \rightarrow {\mathbb {F}}_{q}^{N}, \quad F\mapsto (F(P_{1}),\dots ,F(P_{N})). \end{aligned}$$

The image of \(S_\alpha \) under \(ev_{Y}\) is called a (generalized) toric code which is denoted by \({\mathcal {C}}_{\alpha ,Y}\), see [19] for a survey. By definition, the size of Y is the length of the code and the dimension \(\dim _{{\mathbb {F}}_{q}}{\mathcal {C}}_{\alpha ,Y}\) of the code is the dimension of \({\mathcal {C}}_{\alpha ,Y}\) as a vector space over \({\mathbb {F}}_{q}\). The number of nonzero entries in a codeword is called its weight and the minimum distance \(\delta \) is obtained by calculating the smallest weight among all nonzero codewords.

It is clear that the kernel of the linear map \(ev_{Y}\) is the degree \(\alpha \) part \(I_\alpha (Y)\) of the \(\beta \)-graded or homogeneous vanishing ideal I(Y) of Y, which is defined to be the ideal generated by homogeneous polynomials vanishing at all the points of Y. Due to this relation, the dimension \(dim_{{\mathbb {F}}_{q}}{\mathcal {C}}_{\alpha ,Y}\) equals the value \(H_{I(Y)}(\alpha ):=\dim S_\alpha - \dim I_\alpha (Y)\) of the multigraded Hilbert function \(H_{I(Y)}\) of I(Y). It is known that for sufficiently large values of \(\alpha \), the function \(H_{I(Y)}\) agrees with a polynomial \(P_{I(Y)}\) known as the multigraded Hilbert polynomial of I(Y). If I is the B-saturated ideal corresponding to the set of \(\ell \) points on a smooth projective toric variety, then the Hilbert polynomial of I is just the constant \(P_{I}(t)=\ell \) (see Example 4.12 in [12]). Therefore, the length of the code \({\mathcal {C}}_{\alpha ,Y}\) is \(P_{I(Y)}(t)=N\).

Toric codes have attracted the attention of mathematicians for nearly two decades due to their rich combinatorial nature, showcasing some linear codes with best parameters; see [14] for a very recent paper and the references therein for more details. They also have applications in information theory: for example, Hansen [8] created a secret sharing scheme with strong multiplication by using toric codes on toric surfaces, and used toric codes obtained from a special toric variety called a Hirzebruch surface to construct new quantum codes [9]. There are many champion toric codes having the largest minimum distance among all known linear codes with equal block length and dimension, see [4, 5, 11]. Toric codes are indeed generalizations of (generalized) Reed-Solomon codes, which can be obtained by choosing \(X={\mathbb {P}}^1\), \(Y\subseteq X ({{\mathbb {F}}_{q}})\) and \(\alpha <q\), and which are MDS codes [9, 10].
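The Reed-Solomon case is small enough to compute directly. A brute-force Python sketch (our own illustration, evaluating all polynomials of degree less than k at the q affine \({\mathbb {F}}_q\)-points of \({\mathbb {P}}^1\)) recovers the MDS equality \(\delta = N - k + 1\):

```python
# Brute-force minimum distance of a Reed-Solomon code, viewed as a
# toric code on P^1: evaluate every polynomial of degree < k at the
# q points of F_q and take the smallest nonzero codeword weight.
from itertools import product

q, k = 7, 3                       # field size and dimension, with k - 1 < q
points = list(range(q))           # representatives of the evaluation points

def evaluate(coeffs, x):
    """Evaluate sum(c_i * x^i) over F_q by Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % q
    return acc

weights = []
for coeffs in product(range(q), repeat=k):
    if any(coeffs):               # skip the zero polynomial
        word = [evaluate(coeffs, x) for x in points]
        weights.append(sum(1 for w in word if w != 0))

delta = min(weights)
print(delta, q - k + 1)           # MDS: delta = N - k + 1 = 5
```

A nonzero polynomial of degree at most \(k-1=2\) has at most two roots, so every nonzero codeword has weight at least \(q-2=5\), and \((x-1)(x-2)\) attains it.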

An algebraic algorithm using the vanishing ideal I(Y) has been given by Martinez-Bernal, Pitones and Villarreal in [13] to compute the minimum distance of a toric code defined on a projective space \(X={\mathbb {P}}^n\). They pointed out that the algorithm is more interesting theoretically than computationally, as other existing algorithms relying on the generator matrix of the code perform better. The motivation for the present article is to discuss possible generalizations of their algorithm to a more general smooth projective toric variety. We offer three algorithms in Sect. 2 for this purpose and share Macaulay2 procedures to implement them in Sect. 3. In Sect. 4, we present experimental results obtained using these algorithms and discuss their performance.

2 Theoretical Results and Algorithms

In this section we give our main results leading to algebraic algorithms for computing the minimum distance of the code.

Let X be a toric variety over the finite field \({\mathbb {F}}_q\) and \(Y\subseteq X\) be a subset of the \({\mathbb {F}}_q\)-rational points. For a homogeneous polynomial \(f\in S_{\alpha }\), \(V_{X,Y} (f)\) denotes the subset of Y which consists of the roots of f, i.e.

$$\begin{aligned} V_{X,Y} (f)=\{ [P] \in Y: f(P)=0\}. \end{aligned}$$

In the first step, we make the following simple observation, whose proof was already given in [13] for the case where X is a projective space and is included here for the reader's convenience:

Lemma 1

Let \(Y \subseteq X\). Then, the minimum distance of the corresponding code is

$$\begin{aligned} \delta ({\mathcal {C}}_{\alpha ,Y})=N-\max \{\left| V_{X,Y} (f)\right| : f\in S_\alpha \setminus I_\alpha (Y) \}. \end{aligned}$$

Proof

By definition, the weight of the codeword \(ev_Y(f)\) is the number of non-zero components, which is nothing but \(N-\left| V_{X,Y} (f)\right| \). A codeword \(ev_Y(f)\) is zero if and only if \(f\in I_\alpha (Y)\). Thus, a non-zero codeword with the minimum weight corresponds to a polynomial in \(S_{\alpha }\setminus I_\alpha (Y)\) with the maximum number of zeroes. \(\square \)

We recall basic notions and definitions from commutative algebra, such as minimal primes, associated primes, and primary decompositions.

Definition 1

(Definition 5 in [3]) A primary decomposition of an ideal \({\mathfrak {a}}\subset S\) is a decomposition

$$\begin{aligned} {\mathfrak {a}}=\bigcap ^k_{i=1}\mathfrak {q_{i}} \end{aligned}$$

into primary ideals \(\mathfrak {q_{i}}\subset S\). The decomposition is called minimal if the corresponding prime ideals \(\mathfrak {p_{i}}=rad(\mathfrak {q_{i}})\) are pairwise different and the decomposition is unshortenable; the latter means that \(\cap _{i \ne j}\mathfrak {q_{i}} \not \subset \mathfrak {q_{j}}\) for all \(j=1, \ldots , k\).

By Theorem 8 in [3], the prime ideals \(\mathfrak {p_{i}}=rad(\mathfrak {q_{i}})\) are uniquely determined when the primary decomposition above is minimal. These are of the form \(rad(({\mathfrak {a}}: f))\) for some \(f \in S\) and deserve a special name as we see below:

Definition 2

(Definition and Proposition 9 in [3]) Let \({\mathfrak {a}} \subset S\) be an ideal. The set of all prime ideals in S that are of type \(rad(({\mathfrak {a}}: f))\) for some \(f \in S\) is denoted by \(Ass({\mathfrak {a}})\); its members are called the prime ideals associated to \({\mathfrak {a}}\).

Definition 3

(Definition 11 in [3]) Given any ideal \({\mathfrak {a}} \subset S\), the subset of all prime ideals that are minimal in \(Ass({\mathfrak {a}})\) is denoted by \(Ass'({\mathfrak {a}})\) and its members are called the isolated prime ideals associated to \({\mathfrak {a}}\). All other elements of \(Ass({\mathfrak {a}})\) are said to be embedded prime ideals.

Definition 4

(Page 63 in [3]) For an arbitrary ideal \({\mathfrak {a}} \subset S\),

$$\begin{aligned} {\mathcal {Z}}({\mathfrak {a}})=\bigcup _{f \in S\setminus {\mathfrak {a}}}({\mathfrak {a}}: f)= \{z \in S \,: \, zf \in {\mathfrak {a}} \text { for some } f \in S\setminus {\mathfrak {a}} \} \end{aligned}$$

is called the set of zero divisors modulo \({\mathfrak {a}}\) in S. Indeed, an element \(z \in S\) belongs to \({\mathcal {Z}}({\mathfrak {a}})\) if and only if its residue class \({\bar{z}} \in S/{\mathfrak {a}}\) is a zero divisor. Furthermore, if a power \(z^n\) of some element \(z \in S\) belongs to \({\mathcal {Z}}({\mathfrak {a}})\), then z itself must belong to \({\mathcal {Z}}({\mathfrak {a}})\) and therefore

$$\begin{aligned} {\mathcal {Z}}({\mathfrak {a}})=\bigcup _{f \in S\setminus {\mathfrak {a}}}rad(({\mathfrak {a}}: f)). \end{aligned}$$
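To make the defining condition of \({\mathcal {Z}}({\mathfrak {a}})\) concrete, take \({\mathfrak {a}}=\langle xy\rangle \) in \({\mathbb {F}}_7[x,y]\): then x is a zero divisor modulo \({\mathfrak {a}}\), witnessed by \(f=y\notin {\mathfrak {a}}\), while \(x+y\) is not. A small dependency-free Python sketch (the choice of the monomial ideal is our own toy example; membership in \(\langle xy\rangle \) just means every monomial is divisible by xy):

```python
# Zero divisors modulo a monomial ideal: z is a zero divisor mod a
# iff z*f lies in a for some f outside a.  Toy example: a = <x*y>
# in F_7[x, y].  Polynomials are dicts {(i, j): coeff} for x^i * y^j.
q = 7

def multiply(p1, p2):
    """Product of two polynomials over F_q."""
    result = {}
    for (i1, j1), c1 in p1.items():
        for (i2, j2), c2 in p2.items():
            key = (i1 + i2, j1 + j2)
            result[key] = (result.get(key, 0) + c1 * c2) % q
    return {m: c for m, c in result.items() if c}

def in_ideal_xy(p):
    """Membership in <x*y>: every monomial divisible by both x and y."""
    return all(i >= 1 and j >= 1 for (i, j) in p)

x = {(1, 0): 1}
y = {(0, 1): 1}
x_plus_y = {(1, 0): 1, (0, 1): 1}

# x is a zero divisor mod <x*y>: the witness f = y is not in the
# ideal, yet x*y is.  The candidate witness f = x fails for x + y.
print(in_ideal_xy(multiply(x, y)) and not in_ideal_xy(y))          # True
print(in_ideal_xy(multiply(x_plus_y, x)) and not in_ideal_xy(x))   # False
```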

With these in mind, it is now easy to prove the following useful fact as in [18, Theorem 5.1(iii)].

Lemma 2

Let \(Y \subseteq X\) be a finite subset. Then, the minimal primary decomposition of I(Y) is

$$\begin{aligned} \displaystyle I(Y)=\bigcap _{[P] \in Y} I([P]). \end{aligned}$$

Furthermore, we have

$$\begin{aligned} {\mathcal {Z}}(I(Y))=\bigcup _{f \in S\setminus I(Y)}rad((I(Y): f))=\bigcup _{[P] \in Y} I([P]). \end{aligned}$$
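As a minimal illustration of this decomposition (a toy example of our own, with q odd so that the two points are distinct), take \(X={\mathbb {P}}^1\) with Cox ring \(S={\mathbb {F}}_q[x_1,x_2]\) and \(Y=\{[1:1],[1:-1]\}\). Then

$$\begin{aligned} I(Y)=\langle x_2^2-x_1^2 \rangle = \langle x_2-x_1 \rangle \cap \langle x_2+x_1 \rangle = I([1:1]) \cap I([1:-1]), \end{aligned}$$

and \({\mathcal {Z}}(I(Y))=\langle x_2-x_1 \rangle \cup \langle x_2+x_1 \rangle \).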

By using Lemmas 1 and  2 above we obtain the following result.

Theorem 1

Let \(Y \subseteq X\). Then, the minimum distance of the corresponding code is

$$\begin{aligned} \delta ({\mathcal {C}}_{\alpha ,Y})&=N-\max \{\left| \{ {\mathfrak {p}}\in Ass(I(Y)): f \in {\mathfrak {p}} \}\right| : f\in {\mathcal {Z}}(I(Y))\}\\ &=P_{I(Y)}-\max \{\left| \{{\mathfrak {p}} \in Ass(I(Y)): f \in {\mathfrak {p}} \} \right| : f\in {\mathcal {Z}}(I(Y))\}, \end{aligned}$$

where \(P_{I(Y)}\) is the Hilbert Polynomial of the ideal I(Y).

Proof

By Lemma 1, we have the formula:

$$\begin{aligned} \delta ({\mathcal {C}}_{\alpha ,Y})=N-\max \{\left| V_{X,Y} (f)\right| : f\in S_\alpha \setminus I_\alpha (Y) \}. \end{aligned}$$

Note that \([P]\in V_{X,Y} (f)\) if and only if \(f(P)=0\) and \([P]\in Y\), which holds if and only if \(f\in I(P)\) and \([P]\in Y\). Thus, the number of zeroes [P] of f is exactly the number of associated primes I(P) in the minimal primary decomposition of I(Y) containing f. It follows from Lemma 2 that \(Ass(I(Y))=Ass'(I(Y))=\{I(P): [P]\in Y\}\). Hence, we have

$$\begin{aligned} \left| V_{X,Y} (f)\right| = \left| \{ {\mathfrak {p}} \in Ass(I(Y)): f \in {\mathfrak {p}} \} \right| . \end{aligned}$$

In particular, \(\left| V_{X,Y} (f)\right| > 0\) implies that \(f\in {\mathcal {Z}}(I(Y))\), completing the proof. \(\square \)

As a consequence, we obtain the following algorithm which calculates the minimum distance of a code obtained from a smooth projective toric variety:

Algorithm 1
figure a

Calculating the minimum distance of a toric code.
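The counting at the heart of Theorem 1 and Algorithm 1, namely enumerating representatives \(f\in S_\alpha \setminus I_\alpha (Y)\), counting how many point ideals contain each f, and subtracting the maximum from N, can be mimicked by brute force for a toy example. A Python sketch (the choice \(Y=({\mathbb {F}}_5^{*})^2\) with monomial basis \(\{1,s,t\}\) is our own sample setting, not one of the paper's codes):

```python
# Brute-force version of the counting in Theorem 1 / Algorithm 1:
# delta = N - max over f of #{points of Y at which f vanishes}.
# Toy setting: Y = (F_5^*)^2, and f ranges over the span of {1, s, t}.
from itertools import product

q = 5
Y = [(s, t) for s in range(1, q) for t in range(1, q)]
N = len(Y)                                   # N = (q-1)^2 = 16

def num_zeros(coeffs):
    """Number of points of Y killed by f = a + b*s + c*t."""
    a, b, c = coeffs
    return sum(1 for (s, t) in Y if (a + b * s + c * t) % q == 0)

max_zeros = 0
for coeffs in product(range(q), repeat=3):
    if any(coeffs):                          # f = 0 is excluded
        n = num_zeros(coeffs)
        if n < N:                            # guard against f vanishing on all of Y
            max_zeros = max(max_zeros, n)

delta = N - max_zeros
print(N, max_zeros, delta)                   # 16 4 12
```

Here a nonzero affine-linear f vanishes on at most \(q-1=4\) torus points (attained by \(f=s-t\)), so the sketch returns \(\delta =16-4=12\).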

Next, we give the second result leading to an alternative algorithm for computing the minimum distance.

Theorem 2

Let \(Y \subseteq X\). Then, the minimum distance of the corresponding code is

$$\begin{aligned} \delta ({\mathcal {C}}_{\alpha ,Y})=N-\max \{P_{I(V_{X,Y}(f))}: f\in S_\alpha \setminus I_\alpha (Y) \text { is a zero-divisor} \}, \end{aligned}$$

where \(P_{I(V_{X,Y}(f))}\) is the Hilbert Polynomial of the ideal \(I(V_{X,Y}(f))\), which can be obtained as follows:

$$\begin{aligned} I(V_{X,Y}(f))=\bigcap _{{\mathfrak {p}}\in Ass(I(Y)) \text { and } f\in {\mathfrak {p}}} {\mathfrak {p}}. \end{aligned}$$

Proof

By Lemma 1, we have the formula:

$$\begin{aligned} \delta ({\mathcal {C}}_{\alpha ,Y})=N-\max \{\left| V_{X,Y} (f)\right| : f\in S_\alpha \setminus I_\alpha (Y) \}. \end{aligned}$$

Since \(I(V_{X,Y}(f))\) is the B-saturated ideal corresponding to the set of \(\left| V_{X,Y} (f)\right| \) points on the smooth projective toric variety X, it follows that the Hilbert polynomial of \(I(V_{X,Y}(f))\) is just the constant \(P_{I(V_{X,Y}(f))}(t)=\left| V_{X,Y} (f)\right| \), as indicated in Example 4.12 of [12]. When f is not a zero-divisor, \(f \notin I(P)\) for any \([P]\in Y\) by Lemma 2, so f has no zeroes in Y and thus the weight of the corresponding codeword is N, the maximum possible.

Finally, by Lemma 2, we have

$$\begin{aligned} I(V_{X,Y}(f))=\bigcap _{[P]\in V_{X,Y}(f)} I(P) \text { and } I(Y)=\bigcap _{[P]\in Y} I(P)=\bigcap _{{\mathfrak {p}}\in Ass(I(Y)) } {\mathfrak {p}}. \end{aligned}$$

As \([P]\in V_{X,Y}(f)\) if and only if \({\mathfrak {p}}\in Ass(I(Y)) \text { and } f\in {\mathfrak {p}}\), the proof follows. \(\square \)

Algorithm 2
figure b

Calculating the minimum distance of a toric code.

There is a third algorithm suggested by the paper [13], which was the starting point of this research. The toric variety in that paper is the special case of the projective space \(X={\mathbb {P}}^n\), and our aim was to understand whether we can generalize it. The difference between the second algorithm and the third algorithm to be given below lies in their last steps. One of them uses the Hilbert polynomial \(P_{I(V_{X,Y}(f))}\) of the ideal \(I(V_{X,Y}(f))\), whereas the other uses the Hilbert polynomial \(P_{I(Y)+(f)}\) of the ideal \(I(Y)+(f)\). Notice that the notion of the degree of the ideal \(I(Y)+(f)\) in their paper coincides with the Hilbert polynomial \(P_{I(Y)+(f)}\) by [13, Corollary 4.3] and [12, Example 4.12]. So far, we have not been able to obtain a generalization of [13, Corollary 4.3] valid for any toric variety X. Therefore, the correctness of the third algorithm is guaranteed only for the case where X is a projective space.

Algorithm 3
figure c

Calculating the minimum distance of a toric code.

Remark 1

It is worth briefly describing an idea used in [14, 15], adapted to our notation, which is relevant to the ideals considered in Algorithms 2 and 3; see the proof of [14, Lemma 5.5]. Recall that the set reg(Y) consists of the elements \(\alpha \) for which \(H_{I(Y)}(\alpha )=N\). Thus, for an element \({\tilde{\alpha }} \in reg(Y)\), the linear map \(ev_{{{\tilde{\alpha }}},Y}: S_{{\tilde{\alpha }}} \rightarrow {\mathbb {F}}_{q}^{N}\) is surjective. We also have the natural projection \({\mathbb {F}}_{q}^{N} \rightarrow {\mathbb {F}}_{q}^{N_{f}}\) onto \(Y_f:=V_{X,Y}(f)\), for any \(f\in S_\alpha {\setminus } I_\alpha (Y)\); composing them, we obtain the following surjective map:

$$\begin{aligned} ev_{{{\tilde{\alpha }}}, Y_f}: S_{{\tilde{\alpha }}} \rightarrow {\mathbb {F}}_{q}^{N_{f}} \end{aligned}$$

where \(N_{f}=\left| Y_f\right| \). Therefore, \(N_{f}=H_{I(Y_f)}({{\tilde{\alpha }}})\) and \({\tilde{\alpha }} \in reg(Y_f)\). This means that the Hilbert polynomial \(N_{f}\) of \(I(Y_{f})\) is attained at \({\tilde{\alpha }}\).

Clearly, the ideal \(I(Y,f):=I(Y)+(f)\subseteq I(Y_f)\) and hence we have

$$\begin{aligned} I_{{{\tilde{\alpha }}}}(Y,f)=I_{{{\tilde{\alpha }}}}(Y)+S_{{{\tilde{\alpha }}}-\alpha }\cdot (f)\subseteq I_{{{\tilde{\alpha }}}}(Y_f). \end{aligned}$$

Thus, an upper bound for \(N_f\) is given by

$$\begin{aligned} \widetilde{N_{f}}({{\tilde{\alpha }}})=H_{I(Y)+(f)}({{\tilde{\alpha }}}) \ge N_{f}=H_{I(Y_f)}({{\tilde{\alpha }}}). \end{aligned}$$

Letting \(\widetilde{N_{f}}\) be the Hilbert polynomial of the ideal I(Y, f), we see that \(\widetilde{N_{f}}\ge N_{f}\), and hence it follows that we have

$$\begin{aligned} N_{f}=\widetilde{N_{f}}\iff I(Y_f)=(I(Y,f):B^{\infty }), \end{aligned}$$

since the Hilbert polynomial of an ideal and that of its saturation with respect to B are the same. This leads to the following claim, which is proved in [13, Corollary 4.3] when X is a projective space.

Conjecture 1

For a smooth projective toric variety, we have \(I(Y_f)=(I(Y,f):B^{\infty })\).

Remark 2

Available algorithms in computer algebra programs such as GAP, Magma or Sage require a generator matrix for the code, and in order to determine that matrix one needs to give the members of the set Y. But the elements are not always given explicitly. Sometimes they are given implicitly, as in

$$\begin{aligned} Y_Q=\{[\textbf{t}^{\textbf{q}_{1}}:\cdots :\textbf{t}^{\textbf{q}_{r}}]|\textbf{t}\in ({\mathbb {F}}_q^{*})^s\} \end{aligned}$$

which is parameterized by the columns of a matrix \(Q=[\textbf{q}_1 \textbf{q}_2\cdots \textbf{q}_r]\). In this case our algorithms can be used alternatively as the vanishing ideal needed for them can be found as described in [1].

In some cases it is even possible to expedite our first two algorithms by omitting the fourth step, by virtue of the minimal primary decomposition in Lemma 2. For instance, in the proof of [17, Theorem 4.1], where \(X={\mathbb {P}}(1,w_1,\dots ,w_n)\) is a weighted projective space over the field \({\overline{{\mathbb {F}}}}_q\), the elements of the subgroup

$$\begin{aligned} Y_Q = \{[t_0:t_1^{w_1}:\ldots :t_n^{w_n}] \, | \, t_i \in {\mathbb {F}}_q^*, \text { for all } i=0,\ldots ,n\} \end{aligned}$$

are given explicitly, so the minimal primary decomposition of \(I(Y_Q)\) is the intersection of I(P) for all \(P\in Y_Q\). More precisely, the set \(Y_Q\) is found to be the union of the points \([1:\eta _1^{i_1}:\dots :\eta _n^{i_n} ]\) for \(1\le i_1 \le d_1, \dots , 1 \le i_n \le d_n\), where \({\mathbb {F}}_q^*=\langle \eta \rangle \), so that the order of the generator \(\eta \) is \(q-1\) and the order of \(\eta _i:=\eta ^{w_i}\) is

$$\begin{aligned} d_i = \frac{q-1}{\gcd (q-1,w_i)} \quad i=1,\ldots ,n. \end{aligned}$$

Since the generators of the vanishing ideal are given by [17, Proposition 3.3], and the ideals \(I([1:\eta _1^{i_1}:\dots :\eta _n^{i_n}]) = \langle x_2 - \eta _1^{i_1}x_1^{w_1}, \dots ,x_{n+1} -\eta _n^{i_n}x_1^{w_n} \rangle \) are found using [16], Algorithms 1 and 2 can be applied to compute the minimum distance more quickly.
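The point count in this construction is easy to check by direct enumeration, since the normalized first coordinate makes all the listed representatives distinct. A Python sketch (the field size q = 7 and weights (2, 3), i.e. \({\mathbb {P}}(1,2,3)\) over \({\mathbb {F}}_7\), are our own sample choice):

```python
# Enumerate Y_Q = {[1 : eta_1^{i_1} : ... : eta_n^{i_n}]} for a weighted
# projective space P(1, w_1, ..., w_n) over F_q and verify that
# |Y_Q| = d_1 * ... * d_n, where d_i = (q - 1) / gcd(q - 1, w_i).
from itertools import product
from math import gcd

q, weights = 7, (2, 3)        # sample choice: P(1, 2, 3) over F_7

def primitive_root(q):
    """Smallest generator of the cyclic group F_q^* (q prime)."""
    for g in range(2, q):
        if len({pow(g, e, q) for e in range(1, q)}) == q - 1:
            return g

eta = primitive_root(q)
etas = [pow(eta, w, q) for w in weights]             # eta_i = eta^{w_i}
orders = [(q - 1) // gcd(q - 1, w) for w in weights]

# Normalized representatives (eta_1^{i_1}, ..., eta_n^{i_n}); since the
# first homogeneous coordinate is fixed to 1, distinct tuples give
# distinct points of the weighted projective space.
points = {tuple(pow(e, i, q) for e, i in zip(etas, exps))
          for exps in product(*[range(1, d + 1) for d in orders])}

expected = 1
for d in orders:
    expected *= d
print(len(points), expected)   # 6 6
```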

3 Macaulay2 Procedures

In this section, we share some procedures needed to implement the algorithms of the previous section in Macaulay2. All of these procedures are available at https://github.com/fdmozkan/Macaulay2-Codes/releases/tag/v1.0.0

Procedure 1

Calculating the minimum distance of a toric code with Macaulay2 [7].

figure d

Let us now explain each step of the code in i5-i6 in detail. In step N, we obtain a list of K-tuples by choosing entries from \({\mathbb {F}}_{q}\) and removing the origin, where K is the dimension of \(S_\alpha / I_\alpha (Y)\), i.e. the cardinality of the set Balpha. In step P, we transform each of these K-tuples into a vector in \({\mathbb {F}}_q^K\). In step D, using the basis of \(S_\alpha / I_\alpha (Y)\) given as vectors in Balpha, we make a list of polynomials representing elements of \(S_\alpha / I_\alpha (Y)\) by taking their dot products with the coefficient vectors stored in step P. In step A, we remove all parentheses from each member of the list D. In step Malpha, we substitute each polynomial obtained in the previous step into the ring S. In step Z, we determine all the zero-divisors of \(S_\alpha / I_\alpha (Y)\). One may possibly expedite this process by finding a direct way to form the elements of Malpha.

For Algorithm 1 we continue with the following:

figure e

The command primaryDecomposition used in step i7 is part of a package designed for providing components of ideals and modules, including associated primes and primary decompositions, see [21].

As pointed out in Remark 2, one may accelerate Algorithm 1 by replacing the use of the general-purpose primaryDecomposition package with the minimal primary decomposition of Lemma 2, once a set of representatives for the set Y in question has been found.

The hilbertPolynomial command used in step i8 is part of another package which computes with normal toric varieties, see [20]. However, we need to modify the content of that package as we work with a coordinate ring S over a finite field \({\mathbb {F}}_q\). Therefore, one has to run the command in i2 after loading the package as in the command in i1.

figure f

For Algorithm 2 we replace the previous commands with the following:

figure g

For Algorithm 3 we use the following command instead:

figure h

4 Experimental Results

In this section, we present an example illustrating the speed of our algorithms implemented and run in Macaulay2.

Example 1

Let X be the Hirzebruch surface \(H_{2}\) over \(K={\mathbb {F}}_7\). The first exact sequence above becomes

$$\begin{aligned} \displaystyle {\mathfrak {B}}: 0 \longrightarrow {\mathbb {Z}}^2 \overset{\phi }{\longrightarrow }{\mathbb {Z}}^4 \overset{\beta }{\longrightarrow }{\mathbb {Z}}^2 \longrightarrow 0 \end{aligned}$$

where

$$\begin{aligned} \displaystyle \phi = \begin{bmatrix} 1 &{} 0 &{} -1 &{} 0 \\ 0 &{} 1 &{} 2 &{} -1 \end{bmatrix}^T \quad \text{ and } \quad \beta =\begin{bmatrix} 1 &{} -2 &{} 1 &{} 0 \\ 0 &{} 1 &{} 0 &{} 1 \end{bmatrix}. \end{aligned}$$

Thus, the Cox ring \(S=K[x_1,x_2,x_3,x_4]\) of X is graded via

$$\begin{aligned} \deg _{{\mathcal {A}}}(x_1)=\deg _{{\mathcal {A}}}(x_3)=(1,0), \deg _{{\mathcal {A}}}(x_2)=(-2,1), \deg _{{\mathcal {A}}}(x_4)=(0,1). \end{aligned}$$

Consider the subset \(Y\subset X\) with \(I(Y)=(x_3^2-x_1^2, x_4^3-x_3^{6}x_2^3).\)

Our inputs for the example are as follows:

figure i

Using Procedure 1, we compute the minimum distance of \({\mathcal {C}}_{\alpha ,Y}\) to be 3 with all three algorithms.

Table 1 Comparison of the three algorithms.
Fig. 1
figure 1

Comparison of the algorithms

Before we conclude the paper, we will give a table and a figure to compare the performance of the three algorithms on this example for various values of \(\alpha \).

In Table 1, \(\delta \) is the minimum distance found using the first two algorithms and \({\widetilde{\delta }}\) is the minimum distance found using the third algorithm. Moreover, \(T_i\) stands for the time used by Algorithm i, for each \(i=1,2,3\). The smallest CPU time is given in boldface type.

See Fig. 1 for the graphical comparison of the three algorithms according to their CPU time. In the graph, \(x_i\) corresponds to the i-th value of \(\alpha \) in Table 1 and \(y_i\) corresponds to the CPU time used by the relevant algorithm. It seems that the first algorithm is almost always faster than the others. This is not surprising in the comparison with the second algorithm, because of the way the two algorithms differ in Procedure 1: the first one avoids the computation of the ideals \(I(V_{X,Y}(f))\) for all f in step i8 of Procedure 1 for Algorithm 2, and replaces the computation of the Hilbert polynomials of all these ideals in step i9 by a rather basic command in step i8 of Procedure 1 for Algorithm 1.

5 Conclusion

In the literature, there is an algorithm for computing the minimum distance of an evaluation code obtained from a projective space which makes heavy use of computational commutative algebra. Until now, there has been no direct generalization to a more general smooth toric variety. We offered two alternatives that work correctly for a general smooth toric variety and illustrated them via an example. We also shared a table showing the computation time for these three algorithms in order to give a clue as to their complexity.

All the algorithms suffer from listing all the zero divisors f and computing with them even when they have just one root, which is clearly unnecessary. This is overcome in certain cases by providing an upper bound on the number of roots and demonstrating a concrete polynomial meeting that upper bound, see e.g. [1, 15]. One may possibly improve these algorithms by proving a result which helps eliminate zero divisors with fewer zeroes.