1 Introduction

Algebraic geometry has a long tradition, and grew out of very concrete problems. After algebraic geometry was recast in categorical terms, it became possible to define noncommutative algebraic geometry. In this text we try to take noncommutative algebraic geometry back to its concrete origins. We will use deformation theory to define higher order derivatives between points, and then use this to construct a noncommutative variety. Our main commutative reference is Hartshorne’s classical book [2].

Throughout these notes, \(k\) is an algebraically closed field of characteristic \(0\).

2 Polynomial Matrix Algebras

Let \(r\in \mathbb N\) and let \((d_{ij})\) be an \(r\times r\)-matrix with entries \(d_{ij}\in \mathbb N\). We start by defining the free \(r\times r\) matrix polynomial algebra generated by the matrix variables \(t_{ij}(l)\), \(1\le l\le d_{ij}\), in entry \((i,j)\), \(1\le i,j\le r\). To fix the language, consider the following example (in which \(r=2\) and \(d_{ij}=1\), \(1\le i,j\le 2\)):

Example 2.1

Let the matrices

$$X=\left( \begin{matrix} x&0\\ 0&0\end{matrix}\right) ,\quad Y=\left( \begin{matrix} 0&y\\ 0&0\end{matrix}\right) ,\quad Z=\left( \begin{matrix} 0&0\\ z&0\end{matrix}\right) ,\text { and } W=\left( \begin{matrix} 0&0\\ 0&w\end{matrix}\right) $$

be given. Together with the idempotents \(e_1=\left( \begin{matrix}1&0\\ 0&0\end{matrix}\right) \) and \(e_2=\left( \begin{matrix}0&0\\ 0&1\end{matrix}\right) \), these matrix variables generate a \(k^2\)-algebra which is denoted

$$S=\left( \begin{matrix}k\langle x\rangle &k y\\ kz&k\langle w\rangle \end{matrix}\right) \;.$$
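To see concretely that products of these matrices stay within the displayed shape, here is a small sanity check with sympy; the noncommutative symbols standing in for the generators, and the code itself, are our own illustration and not part of the text:

```python
from sympy import Symbol, Matrix, zeros

# Noncommutative symbols standing in for the generators of Example 2.1.
x, y, z, w = (Symbol(s, commutative=False) for s in "xyzw")

X = Matrix([[x, 0], [0, 0]])
Y = Matrix([[0, y], [0, 0]])
Z = Matrix([[0, 0], [z, 0]])
W = Matrix([[0, 0], [0, w]])
e1 = Matrix([[1, 0], [0, 0]])
e2 = Matrix([[0, 0], [0, 1]])

# e1, e2 are orthogonal idempotents summing to the identity.
assert e1 * e1 == e1 and e2 * e2 == e2
assert e1 * e2 == zeros(2, 2)
assert e1 + e2 == Matrix.eye(2)

# Products respect the matrix shape: entry (i, j) only ever contains words
# that pass from index i to index j.
assert (X * Y)[0, 1] == x * y   # a (1,1)-loop followed by a (1,2)-arrow: entry (1,2)
assert (Y * Z)[0, 0] == y * z   # out to index 2 and back: entry (1,1)
assert (Z * Y)[1, 1] == z * y   # note the order: z*y, not y*z
assert X * Z == zeros(2, 2)     # incomposable product vanishes
```

The last assertion illustrates why the idempotents \(e_1,e_2\) are needed as extra generators: they record which entry a word occupies.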

By the notation \(\left( \begin{matrix}S_{ij}\end{matrix}\right) \) where \(S_{ii}\) is a \(k\)-algebra for each \(i\), \(1\le i\le r\), and \(S_{ij}\) is a \(k\)-vector space, we mean the \(k^r\)-algebra generated by the matrices \(M=(m_{ij})\) with \(m_{ij}\in S_{ij}\), \(1\le i,j\le r\).

Definition 2.1

For a positive integer \(r\), for each pair \((i,j)\), \(1\le i,j\le r\), let \(d_{ij}\in \mathbb N\). Then the free polynomial algebra in the matrix variables \(t_{ij}(l)\), \(1\le i,j\le r\), \(1\le l\le d_{ij}\), is the \(k^r\)-algebra generated by the matrix elements in

$$\left( \begin{matrix}k\langle t_{11}(1),\dots ,t_{11}(d_{11})\rangle &\cdots &\sum \nolimits _{v=1}^{d_{1r}}k t_{1r}(v)\\ \vdots &\ddots &\vdots \\ \sum \nolimits _{v=1}^{d_{r1}}k t_{r1}(v)&\cdots &k\langle t_{rr}(1),\dots ,t_{rr}(d_{rr})\rangle \end{matrix}\right) \;.$$

Alternatively, we consider the \(k^r\)-module \(V\) generated by \(t_{ij}(l)\), and let \(S\) be the tensor algebra

$$S=T_{k^r}(V)\;.$$

Definition 2.2

For a positive integer \(r\), a finitely generated \(r\times r\) matrix polynomial algebra is a quotient of a free \(r\times r\) matrix polynomial algebra.

Lemma 2.1

Let \(\pi :R\twoheadrightarrow S\) be a surjective \(k\)-algebra homomorphism sending non-units to non-units, and let \(\mathfrak {m}\subset R\) be a maximal ideal. Then \(\pi (\mathfrak {m})\) is maximal in \(S\).

Proof

First of all, since \(\pi \) is surjective, any \(s\in S\) can be written \(s=\pi (r)\) for some \(r\in R\), so \(s\pi (m)=\pi (r)\pi (m)=\pi (r m)\in \pi (\mathfrak {m})\); hence \(\pi (\mathfrak {m})\) is an ideal. Assume \(\pi (\mathfrak {m})\varsubsetneq \mathfrak {a}\) and let \(a=\pi (r)\in \mathfrak {a}\setminus \pi (\mathfrak {m})\). Then \(r\in \pi ^{-1}(\mathfrak {a})\setminus \mathfrak {m}\), so that \(\mathfrak {m}\varsubsetneq \pi ^{-1}(\mathfrak {a})\), and thus \(\pi ^{-1}(\mathfrak {a})=R\) by maximality of \(\mathfrak {m}\). But then \(1=\pi (1)\in \mathfrak {a}\), implying \(\mathfrak {a}=S\), and we conclude that \(\pi (\mathfrak {m})\) is maximal.

Remark 2.1

We could have simplified by working only with commutative polynomial algebras on the diagonal. However, for obvious reasons, we choose to be as general as reasonable.

Lemma 2.2

The maximal (right or left or both) ideals, corresponding to one-dimensional simple modules, of the free noncommutative \(k\)-algebra \(k\langle t_1,\dots ,t_d\rangle \) are the ideals generated by \((t_1-a_1,\dots ,t_d-a_d)\) with \(a_1,\dots ,a_d\in k.\)

Proof

Let \(\mathfrak {m}\) be a maximal ideal in \(S=k\langle t_1,\dots ,t_{d}\rangle \) corresponding to a one-dimensional simple module. We have a surjection \(\pi _0:S\twoheadrightarrow k[t_1,\dots ,t_d]\) onto the commutative polynomial ring. We let \(\mathfrak {m}_0=\pi _0(\mathfrak {m})\), which is a maximal ideal by Lemma 2.1, and because \(k\) is algebraically closed, \(k[t_1,\dots ,t_d]/\mathfrak {m}_0\simeq k.\) Letting \(\pi :S\rightarrow k\) be the resulting canonical homomorphism and putting \(a_i=\pi (t_i)\), we find \(\mathfrak {m}=\ker \pi =(t_1-a_1,\dots ,t_d-a_d)\).
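In the commutative quotient this can be checked mechanically: reducing a polynomial modulo the generators \(t_i-a_i\) leaves exactly its value at the point. A small sympy sketch (our own illustration, with a sample point chosen by us):

```python
from sympy import symbols, reduced

t1, t2 = symbols("t1 t2")
a1, a2 = 2, 3                         # a sample point (a1, a2) in k^2

f = t1**2 * t2 + 5 * t1 - 7
# Divide f by the generators of the maximal ideal (t1 - a1, t2 - a2);
# the remainder is a constant, namely the value f(a1, a2).
_, r = reduced(f, [t1 - a1, t2 - a2], t1, t2)
assert r == f.subs({t1: a1, t2: a2})
assert r == 15                        # 2**2 * 3 + 5*2 - 7
```

So the quotient by \((t_1-a_1,\dots ,t_d-a_d)\) is evaluation at \((a_1,\dots ,a_d)\), with residue field \(k\).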

Lemma 2.3

For each \(i\le r\), let \(S_{ii}=k\langle t_{ii}(1),\dots ,t_{ii}(d_{ii})\rangle \), and let \(\pi _{ii}:S\twoheadrightarrow S_{ii}\) be the natural morphism. Then the maximal ideals of the free matrix algebra \(S\) are the ideals \(\pi _{ii}^{-1}(\mathfrak {m}_{ii})\) where \(\mathfrak {m}_{ii}\subset S_{ii}\) is a maximal ideal. This means that the maximal ideals of \(S\) are given by the maximal ideals on the diagonal.

Proof

For a maximal ideal \(\mathfrak {m}_{ii}\subset S_{ii}\), we have an isomorphism

$$S/\pi _{ii}^{-1}(\mathfrak {m}_{ii})\overset{\simeq }{\rightarrow }S_{ii}/\mathfrak {m}_{ii}\;.$$

This proves that \(\pi _{ii}^{-1}(\mathfrak {m}_{ii})\) is a maximal ideal. For the converse, assume \(\mathfrak {m}\subset S\) is maximal. If \(\pi _{ii}(\mathfrak {m})=S_{ii}\) for all \(i\), it follows that \(1=\sum e_i\) is in \(\mathfrak {m}\) which is impossible. Thus there exists an \(i\) where \(\pi _{ii}(\mathfrak {m})\subseteq \mathfrak {m}_{ii}\) for a maximal ideal \(\mathfrak {m}_{ii}\subset S_{ii}\). Then \(\mathfrak {m}\subseteq \pi _{ii}^{-1}(\pi _{ii}(\mathfrak {m}))\subseteq \pi _{ii}^{-1}(\mathfrak {m}_{ii})\subsetneqq S.\) Then \(\mathfrak {m}=\pi _{ii}^{-1}(\mathfrak {m}_{ii})\) by maximality, and the lemma is proved.

3 Algebraic Spaces and Matrix Coordinate Algebras

For ordinary polynomial algebras, the evaluation at points of affine space is clear. We now give the corresponding definition for \(r\times r\) matrix polynomial algebras. Let \(S=(S_{ij})\) be an \(r\times r\) matrix polynomial algebra. As before, let \(S_{ii}=k\langle t_{ii}(1),\dots ,t_{ii}(d_{ii})\rangle \) and let \(\pi _{ii}:S\rightarrow S_{ii}\) be the morphism defined by sending \(t_{ii}(l)\in S\) to \(t_{ii}(l)\in S_{ii}\), and all other generators to \(0\). We have seen that the maximal ideals in \(S\) are in one-to-one correspondence with the maximal ideals in the \(k\)-algebras \(S_{ii}\), \(1\le i\le r\).

Definition 3.1

The affine \(r\times r\)-space \(\mathbb A^{r\times r}_S\) is the set of points (maximal ideals) of the free \(r\times r\) matrix polynomial algebra \(S\). (Together with the additional structure given by \(S\), to be defined later.)

We define the evaluation of \(f\in S\) at the point given by a maximal ideal \(p=\mathfrak {m}_{ii}\subset S_{ii}\) as \(f(p)=\overline{\pi _{ii}(f)}\), the class of \(\pi _{ii}(f)\) in \(S_{ii}/\mathfrak {m}_{ii}.\) So, in the situation of polynomial matrix algebras, we have the following naive definition.

Definition 3.2

Let \(S=(S_{ij})\) be a free \(r\times r\) matrix polynomial algebra, and let \(I=(I_{ij})\subseteq S\) be an ideal. Then an algebraic set is a set of the form \(Z(I)=\{p\in \mathbb A^{r\times r}_{S}:f(p)=0,\forall f\in I\}\). Conversely, let \(V\subseteq \mathbb A^{r\times r}_{S}\). Then the ideal of \(V\) is \(I(V)=\{f\in S:f(p)=0,\forall p\in V\}\), and the affine matrix coordinate ring is defined as \(S(V)=S/I(V)\).

4 Tangent Spaces for Finitely Generated Matrix Algebras

Speaking differential-geometrically, for an affine variety \(V=Z(I)\subseteq \mathbb A^n\), with \(I\subseteq k[x_1,\dots ,x_n]\) an ideal, the tangent directions are the directions along which we can differentiate, so that the total differential is the sum of the differentials along these directions. Even better, the \(k\)-vector space of derivations has a basis indexed by the tangent directions. Translated to algebraic geometry, for a point \(\mathfrak {m}\in V\), we consider the \(A(V)\)-module \(A(V)/\mathfrak {m}\simeq k\) and find a basis for the vector space of \(k\)-derivations \({{\mathrm{Der}}}_k(A(V),k)\) indexed by what we could call tangent directions, spanning the tangent space. So we simply call \({{\mathrm{Der}}}_k(A(V),k)\) the tangent space. To recognize this from other textbooks, e.g. Hartshorne [2], we notice the following:

Lemma 4.1

For a vector space \(W\), let \(W^*\) denote the dual vector space. Then

$${{\mathrm{Der}}}_k(A(V),k)\simeq (\mathfrak {m}/\mathfrak {m}^2)^*\;.$$

Proof

As \(A(V)\) is generated over \(k\) by \(\mathfrak {m}\), a derivation is determined by its values on the generators of \(\mathfrak {m}\). In addition, as the target module is \(k=A(V)/\mathfrak {m}\), any derivation \(\delta \) satisfies \(\delta (\mathfrak {m}^2)=0\), giving a linear transformation \(\delta :\mathfrak {m}/\mathfrak {m}^2\rightarrow k\). Conversely, any linear transformation \(\delta \) with \(\delta (\mathfrak {m}^2)=0\) defines a derivation.
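For a concrete commutative instance of this lemma, the dimension of \((\mathfrak {m}/\mathfrak {m}^2)^*\) at a point of a hypersurface is the ambient dimension minus the rank of the Jacobian of the defining equation there. A sympy sketch for the cuspidal cubic (our own example, not from the text):

```python
from sympy import symbols, Matrix

x, y = symbols("x y")
f = y**2 - x**3                      # the cuspidal cubic in A^2
J = Matrix([[f.diff(x), f.diff(y)]])

def tangent_dim(p):
    # dim (m/m^2)* at p = ambient dimension minus rank of the Jacobian at p
    return 2 - J.subs({x: p[0], y: p[1]}).rank()

assert tangent_dim((1, 1)) == 1      # smooth point of the curve
assert tangent_dim((0, 0)) == 2      # at the cusp the tangent space jumps
```

The jump at the cusp is exactly the failure of \(\mathfrak {m}/\mathfrak {m}^2\) to have the expected dimension at a singular point.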

Now, we generalize this to the noncommutative situation, that is, to the finitely generated matrix polynomial algebras. For any two points of a variety \(V\), that is, for any two maximal ideals \(\mathfrak {m}_1\) and \(\mathfrak {m}_2\), put \(V_1=S(V)/\mathfrak {m}_1\) and \(V_2=S(V)/\mathfrak {m}_2\). We have proved above that \(S(V)/\mathfrak {m}_i\simeq S_{jj}/\mathfrak {m}_i^\prime \) for \(i=1,2\) and suitable \(j\), so we can consider \({{\mathrm{Hom}}}_k(V_1,V_2)\) as an \(S\)-bimodule by defining \((s\phi )(v)=\phi (sv)\) and \((\phi s)(v)=s\phi (v).\) We then define the tangent space between two closed points as

$$\begin{aligned} T_{V_1,V_2}&={{\mathrm{Ext}}}^1_{S(V)}(V_1,V_2)={{\mathrm{HH}}}^1(S(V),{{\mathrm{Hom}}}_k(V_1,V_2))\\ &={{\mathrm{Der}}}_k(S(V),{{\mathrm{Hom}}}_k(V_1,V_2))/{{\mathrm{Inner}}}\;. \end{aligned}$$

In the commutative situation, for a commutative \(k\)-algebra \(A\) and two different simple \(A\)-modules \(V_1=A/\mathfrak {m}_1\), \(V_2=A/\mathfrak {m}_2\), it is well known that \({{\mathrm{Ext}}}^1_A(V_1,V_2)\cong {{\mathrm{Der}}}_k(A,{{\mathrm{Hom}}}_k(V_1,V_2))/{{\mathrm{Inner}}}=0\). In the noncommutative case, however, this is different. The noncommutative information is contained in the tangent spaces and higher order derivations between the different points. For simplicity, we give the following definition in all generality, even though it carries new information only for noncommutative \(k\)-algebras.
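The commutative vanishing can be made explicit for \(A=k[t]\), \(V_1=A/(t-a)\), \(V_2=A/(t-b)\): a derivation into \({{\mathrm{Hom}}}_k(V_1,V_2)\simeq k\) is a free choice of the scalar \(\delta (t)\), while the inner derivation \({{\mathrm{ad}}}_\beta \) sends \(t\) to \((b-a)\beta \), so for \(a\ne b\) every derivation is inner. A minimal sketch, in our own encoding:

```python
from sympy import Matrix, Rational

def ext1_dim(a, b):
    # Der_k(A, Hom_k(V1, V2)): delta is determined by the single scalar
    # delta(t), with no relations to respect, so the space is 1-dimensional.
    der = Matrix([[1]])
    # Inner derivations: ad_beta(t) = (b - a) * beta.
    inner = Matrix([[b - a]])
    return der.rank() - inner.rank()

assert ext1_dim(Rational(1), Rational(2)) == 0   # distinct points: Ext^1 = 0
assert ext1_dim(Rational(1), Rational(1)) == 1   # equal points: the tangent line
```

So in the commutative case only the diagonal tangent spaces survive, which is exactly the information lost without noncommutativity.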

Definition 4.1

Let \(S\) be any \(k\)-algebra. The tangent space between two \(S\)-modules \(M_1\) and \(M_2\) is

$$\begin{aligned} {{\mathrm{Ext}}}^1_S(M_1,M_2)\cong {{\mathrm{HH}}}^1(S,{{\mathrm{Hom}}}_k(M_1,M_2)) \end{aligned}$$

where \({{\mathrm{HH}}}^\cdot \) is the Hochschild cohomology.

Example 4.1

Let \(S=\left( \begin{matrix}k[t_{11}]&kt_{12}\\ kt_{21}&k[t_{22}]\end{matrix}\right) \) and consider two general points

$$V_1=k[t_{11}]/(t_{11}-a)\; ,\quad V_2=k[t_{22}]/(t_{22}-b)\; .$$

First, we compute

$${{\mathrm{Ext}}}^1_S(V_i,V_j)\cong {{\mathrm{Der}}}_k(S,{{\mathrm{Hom}}}_k(V_i,V_j))/{{\mathrm{Inner}}}$$

by derivations:

\({{\mathrm{Ext}}}^1_S(V_1,V_1)\): Let \(\delta \in {{\mathrm{Der}}}_k(S,{{\mathrm{End}}}_k(V_1))\). Then

$$\delta (e_1)=\delta (e_1^2)=2\delta (e_1)\Rightarrow \delta (e_1)=0\;,\quad \delta (e_2)=\delta (e_2^2)=e_2\delta (e_2)+\delta (e_2)e_2=0\;.$$
$$\delta (t_{12})=\delta (t_{12}e_2)=\delta (t_{12})e_2=0\;,$$
$$\delta (t_{21})=\delta (e_2t_{21})=e_2\delta (t_{21})=0\;,$$
$$\delta (t_{22})=\delta (t_{22})e_2=0\;,$$

and finally

$$\delta (t_{11})=\alpha \;.$$

As all inner derivations are zero (easily seen from the computation above), we find that \({{\mathrm{Ext}}}^1_S(V_1,V_1)\) is generated by the derivation sending \(t_{11}\) to \(\alpha \), and all other generators to \(0\).

\({{\mathrm{Ext}}}^1_S(V_1,V_2)\):

For \(\delta \in {{\mathrm{Der}}}_k(S,{{\mathrm{Hom}}}_k(V_1,V_2))\) things are slightly different: \(\delta (e_1)=\delta (e_1^2)=e_1\delta (e_1)+\delta (e_1)e_1=\delta (e_1)\), that is, the above trick no longer applies. However, as \(\delta (1)=\delta (e_1+e_2)=0\), for every derivation \(\delta :S\rightarrow {{\mathrm{Hom}}}_k(V_1,V_2)\) we find

$$\begin{aligned} \delta (e_1)&=\alpha ,\qquad \delta (e_2)=-\alpha ,\\ \delta (t_{11})&=\delta (t_{11}e_1)=\delta (t_{11})e_1+t_{11}\delta (e_1)=a\alpha ,\\ \delta (t_{21})&=\delta (t_{21}e_1)=\delta (t_{21})e_1=0,\\ \delta (t_{22})&=\delta (e_2t_{22})=\delta (e_2)t_{22}=-b\alpha ,\\ \delta (t_{12})&=\rho \; . \end{aligned}$$

So a general derivation can be written, the \(^*\) denoting the dual,

$$ \delta =\alpha e_1^*-\alpha e_2^*+a\alpha t_{11}^*-b\alpha t_{22}^*+\rho t_{12}^*\;. $$

For the inner derivations, we compute

$$\begin{aligned} {{\mathrm{ad}}}_\beta (e_1)&=\beta e_1-e_1\beta =-\beta \; ,\\ {{\mathrm{ad}}}_\beta (e_2)&=\beta e_2-e_2\beta =\beta \; ,\\ {{\mathrm{ad}}}_\beta (t_{11})&=-\beta a\; ,\\ {{\mathrm{ad}}}_\beta (t_{22})&=\beta b\; . \end{aligned}$$

saying that

$${{\mathrm{ad}}}_\beta =\gamma e_1^*-\gamma e_2^*+a\gamma t_{11}^*-b\gamma t_{22}^*,\text { where we have put }\gamma =-\beta \;.$$

So as \({{\mathrm{ad}}}_\beta (t_{12})=0\), and there are no conditions on \(\delta (t_{12})\), we get

$${{\mathrm{Ext}}}^1_S(V_1,V_2)=kt_{12}^*=kd_{t_{12}}\;.$$

The cases \({{\mathrm{Ext}}}^1_S(V_2,V_1)\) and \({{\mathrm{Ext}}}^1_S(V_2,V_2)\) are completely analogous.
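The whole computation of this example can be phrased as finite-dimensional linear algebra and checked by machine. The following sympy sketch is our own encoding (the names and the sample values of \(a,b\) are ours): a derivation assigns a scalar to each generator, constrained by the Leibniz rule \(\delta (uv)=u\delta (v)+\delta (u)v\) on the structural relations of \(S\), where with the bimodule convention of the text \(u\) acts on \(V_1\) and \(v\) on \(V_2\).

```python
from sympy import Matrix, Rational

a, b = Rational(2), Rational(5)      # sample points: t11 acts as a, t22 as b
gens = ["e1", "e2", "t11", "t12", "t21", "t22"]
A = {"e1": 1, "e2": 0, "t11": a, "t12": 0, "t21": 0, "t22": 0}  # action on V1
B = {"e1": 0, "e2": 1, "t11": 0, "t12": 0, "t21": 0, "t22": b}  # action on V2
idx = {g: i for i, g in enumerate(gens)}

def row(coeffs):
    # a linear constraint sum_g coeffs[g] * delta(g) = 0, as a matrix row
    r = [0] * len(gens)
    for g, c in coeffs.items():
        r[idx[g]] += c
    return r

# Leibniz (delta(uv) = A[u]*delta(v) + B[v]*delta(u)) on structural relations:
constraints = Matrix([
    row({"e1": 1, "e2": 1}),               # delta(e1) + delta(e2) = delta(1) = 0
    row({"t11": 1, "e1": -A["t11"]}),      # t11 = t11*e1 => delta(t11) = a*delta(e1)
    row({"t21": 1}),                       # t21 = t21*e1 => delta(t21) = 0
    row({"t22": 1, "e2": -B["t22"]}),      # t22 = e2*t22 => delta(t22) = b*delta(e2)
])
der_dim = len(gens) - constraints.rank()   # free parameters: alpha and rho

# Inner derivations: ad_beta(g) = (B[g] - A[g]) * beta, a 1-parameter family.
inner_dim = Matrix([[B[g] - A[g] for g in gens]]).rank()

assert der_dim == 2
assert der_dim - inner_dim == 1            # Ext^1_S(V1, V2) = k * d_{t12}
```

The two free parameters are \(\alpha \) and \(\rho \) from the computation above, and modding out the one-dimensional inner family leaves exactly the class of \(d_{t_{12}}\).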

Generalizing the computation in the above example, we have proved the following:

Lemma 4.2

Let \(S\) be a general free \(r\times r\) matrix polynomial algebra, and let \(V_i=V_{ii}(p_{ii})\) be the point \(p_{ii}\) in entry \(i,i\). Then the tangent space from \(V_i\) to \(V_j\) is \({{\mathrm{Ext}}}^1_S(V_i,V_j)=\oplus _{l=1}^{d_{ij}} k \mathrm{d}_{t_{ij}(l)}.\)

Now, we will explain what happens in the case with relations, that is, quotients of a matrix polynomial algebra.

Example 4.2

We let \(R=\left( \begin{matrix}k[t_{11}]&k t_{12}\\ kt_{21}&k[t_{22}]\end{matrix}\right) /(t_{11}t_{12}-t_{12}t_{22}).\) The polynomial in the ideal really lies in entry \((1,2)\), but there is no ambiguity in writing it like this. The points are still the simple modules along the diagonal, but a derivation \(\delta \in {{\mathrm{Der}}}_k(R,{{\mathrm{Hom}}}_k(V_{ii}(p_{ii}),V_{jj}(p_{jj})))\) must this time respect the quotient:

$$\delta (t_{11}t_{12}-t_{12}t_{22})=0\;.$$

This says

$$\delta (t_{11}t_{12}-t_{12}t_{22})=t_{11}\delta (t_{12})+\delta (t_{11})t_{12}-t_{12}\delta (t_{22})-\delta (t_{12})t_{22}=0\;,$$

and is fulfilled for any \(\delta \in {{\mathrm{Ext}}}^1_R(V_i,V_j)\), \((i,j)\ne (1,2)\). When \(\delta \in {{\mathrm{Ext}}}^1_R(V_1,V_2)\), we get that the above equation is equivalent to

$$t_{11}\delta (t_{12})-\delta (t_{12})t_{22}=(p_{11}-p_{22})\delta (t_{12})=0\;.$$

Thus, in the case \(p_{11}\ne p_{22}\), the tangent direction is annihilated: this quotient has no tangent direction from \(V_{11}(p_{11})\) to \(V_{22}(p_{22})\) unless \(p_{11}=p_{22}\).
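This last step can also be checked symbolically. The following sympy fragment (our own encoding of the module actions, with hypothetical names `A`, `B`, `d`) expands \(\delta \) on the relation and recovers the factor \(p_{11}-p_{22}\):

```python
from sympy import symbols

p11, p22, d11, d12, d22 = symbols("p11 p22 d11 d12 d22")
A = {"t11": p11, "t12": 0, "t22": 0}   # action of generators on V11(p11)
B = {"t11": 0, "t12": 0, "t22": p22}   # action of generators on V22(p22)
d = {"t11": d11, "t12": d12, "t22": d22}

def leibniz(u, v):
    # delta(u*v) = A[u]*delta(v) + B[v]*delta(u) for delta into Hom_k(V1, V2)
    return A[u] * d[v] + B[v] * d[u]

constraint = leibniz("t11", "t12") - leibniz("t12", "t22")
# The relation forces (p11 - p22) * delta(t12) = 0.
assert (constraint - (p11 - p22) * d12).expand() == 0
```

The values \(\delta (t_{11})\) and \(\delta (t_{22})\) drop out because \(t_{12}\) acts as zero on both modules, leaving only the constraint on \(\delta (t_{12})\).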

This example illustrates the geometry of matrix polynomial algebras, and is of course nothing else than the obvious generalization of the ordinary tangent space:

Lemma 4.3

Let \(S\) be a finitely generated \(r\times r\) matrix polynomial algebra with residue \(\rho :S\rightarrow k^r\) and radical \(\mathfrak {m}=\ker \rho .\) Let \(p_1\), \(p_2\) be two points on the diagonal of \(S\) with respective quotients \(V_1\cong V_2\cong k\). Then \(T_{p_1,p_2}={{\mathrm{Ext}}}^1_S(V_1,V_2)={{\mathrm{Hom}}}_k(\mathfrak {m}/\mathfrak {m}^2,k)\) where the action on \(k\cong {{\mathrm{Hom}}}_k(V_1,V_2)\) is the left-right action defined by \((s\phi )(v)=\phi (vs)\), \((\phi s) (v)=\phi (v) s\) (for right modules).

The tangent space is not enough to reconstruct the algebra, not even in the commutative situation. As always, to get the full geometric picture we also need the higher order derivatives. Even if we cannot reconstruct the algebra in all cases, we obtain an algebra that is geometrically equivalent (Morita equivalent), and that suffices for the construction of moduli.

5 Noncommutative Deformation Theory

For an ordinary commutative variety \(V\) we have, for each closed point \(\mathfrak {m}\), the ring of local regular functions. For noncommutative \(k\)-algebras there are serious challenges with localization. These challenges are already present for finitely generated matrix polynomial algebras, and as noncommutative deformation theory provides the solution, we need to go through its basics. The constructive proof of the existence of a local formal moduli is found in the classical works of Laudal [3], and is also formulated by Eriksen in [1].

Definition 5.1

The objects in the category \(a_r\) are the \(k\)-algebras \(S\) fitting in a diagram of \(k\)-algebra homomorphisms
$$k^r\overset{\iota }{\rightarrow }S\overset{\rho }{\rightarrow }k^r,\quad \rho \circ \iota ={{\mathrm{id}}}\;,$$
such that \(\ker (\rho )^n=0\) for some \(n\). We call \(\ker (\rho )={{\mathrm{rad}}}(S)\) the radical, and the morphisms of \(a_r\) are the \(k\)-algebra homomorphisms commuting with \(\iota \) and \(\rho \). The category \(a_r\) is called the category of \(r\)-pointed Artinian \(k\)-algebras. The notation \(\hat{a}_r\) denotes the procategory of \(a_r\), the category of objects that are projective limits of objects in \(a_r\).

Definition 5.2

Let \(A\) be a \(k\)-algebra and let \(V=\{V_1,\dots ,V_r\}\) be a family of \(A\)-modules. The noncommutative deformation functor \({{\mathrm{Def}}}_V:a_r\rightarrow {{\mathrm{Sets}}}\) is given by:

$${{\mathrm{Def}}}_V(S)=\{S\otimes _k A\text {-modules }V_S\text {, flat over }S:k^r\otimes _S V_S\simeq V\}/\cong $$

where the equivalence of \(V_S\) and \(V_S^\prime \) is given by an isomorphism \(V_S\rightarrow V_S^\prime \) of \(S\otimes _k A\)-modules reducing to the identity on \(V\).

Lemma 5.1

(Yoneda). Consider a covariant functor \(F:C\rightarrow {{\mathrm{Sets}}}\) and an object \(R\in C\). Then there is an isomorphism

$$\psi :F(R)\rightarrow {{\mathrm{Hom}}}({{\mathrm{Hom}}}(R,-),F)$$

given by \(\psi (\xi )(\eta )=F(\eta )(\xi )\) for \(\xi \in F(R)\) and any morphism \(\eta :R\rightarrow R^\prime \).

Definition 5.3

In the above situation, \((\hat{H},\hat{\xi })\) is said to prorepresent \({{\mathrm{Def}}}_V:\hat{a}_r\rightarrow {{\mathrm{Sets}}}\) if \(\psi (\hat{\xi })\) is an isomorphism. If \(\psi (\hat{\xi })\) is smooth, and is an isomorphism for the \(r\times r\) matrix polynomial algebra \(R\) in the variables \(\varepsilon _{ij}\), \(1\le i,j\le r\), with \((\varepsilon _{ij})^2=0\), we call \((\hat{H},\hat{\xi })\) a prorepresenting hull, or a local formal moduli.

Theorem 5.1

There exists a local formal moduli \((\hat{H}_V,\hat{\xi }_V)\) for the noncommutative deformation functor \({{\mathrm{Def}}}_V\). There is a homomorphism

$$\begin{aligned} \iota :A\rightarrow (H_{ij})\otimes _{k^r}{{\mathrm{Hom}}}_k(V_i,V_j)\;. \end{aligned}$$

Its kernel is given by \(\ker \iota =\bigcap _{i,n}\mathfrak {a}_i^n\), where \(\mathfrak {a}_i=\ker (\rho _i:A\rightarrow {{\mathrm{End}}}_k(V_i)).\)

Proof

The proof is given by Laudal in [3].

In our situation, what we need is the following:

Corollary 5.1

For \(V=\{V_1,\dots ,V_r\}\) a collection of simple \(S\)-modules where \(S\) is a finitely generated matrix polynomial algebra, there exists an injection

$$\iota :S\hookrightarrow \hat{H}_V$$

such that \(\iota (f)\) is a unit if \(f\in S\setminus \cup _{i=1}^r\mathfrak {m}_i\), where \(\mathfrak {m}_i\), \(1\le i\le r\), is the maximal ideal corresponding to \(V_i\).

We notice that this holds also in the ordinary commutative situation, allowing us to replace a localization with the image of \(S\).

Definition 5.4

For a finite family of simple modules \(V=\{V_1,\dots ,V_r\}\), the localization of \(S\) in \(V\) is the \(k\)-algebra \(S_V\) generated by the image of \(\iota \) in \(\hat{H}_V\), together with the inverses of the images of elements not contained in any of the maximal ideals.

6 Definition of Noncommutative Varieties

In this final section, we make the direct translation of the general theory in [4] to the affine varieties. As in the commutative situation, we let \(\mathbb A(S)\) denote the set of maximal ideals in \(S\). We define a topology on \(\mathbb A(S)\) by letting the closed sets be the algebraic sets \(Z(I)\) where \(I\subseteq S\) is an ideal. Alternatively, the sets \(D(f)\), \(f\in S\), given by \(D(f)=\{\mathfrak {m}:f\notin \mathfrak {m}\}\), form a basis for the topology.

For any set \(U\), let \(\mathrm{Pf }(U)\) denote the set of finite subsets of \(U\). We define a sheaf of rings on this topological space: for an open \(U\) we let \({\fancyscript{O}}_S(U)\) be the set of functions
$$f:\mathrm{Pf }(U)\rightarrow \coprod _{{{\mathrm{\underline{c}}}}\in \mathrm{Pf }(U)}S_{{{\mathrm{\underline{c}}}}}\; ,\quad f({{\mathrm{\underline{c}}}})\in S_{{{\mathrm{\underline{c}}}}}\; ,$$
with \(S_{{{\mathrm{\underline{c}}}}}\) the localization of Definition 5.4, such that \(f\) is locally regular: for each \({{\mathrm{\underline{c}}}}\in \mathrm{Pf }(U)\) there exists an open subset \(V\subseteq U\) containing \({{\mathrm{\underline{c}}}}\) and elements \(a,g\in S\), with \(g\) not contained in any of the maximal ideals corresponding to any subset \({{\mathrm{\underline{c}}}}^\prime \in \mathrm{Pf }(V)\), such that \(f({{\mathrm{\underline{c}}}}^\prime )=a g^{-1}\) in \(S_{{{\mathrm{\underline{c}}}}^\prime }\) for all \({{\mathrm{\underline{c}}}}^\prime \in \mathrm{Pf }(V)\).

Then all theorems from the commutative situation carry over, and we obtain the category of noncommutative varieties

$$\begin{aligned} ({\mathbb {A}}(S),{\fancyscript{O}}_S)\;. \end{aligned}$$