1 Introduction

A nonlocal game is a test performed between a verifier and k players, in which the verifier tests the players’ ability to produce correlations without communicating. In a round of the game, the verifier sends questions to the players and the players return responses to the verifier. The list of questions and responses is then scored according to a function known by both the verifier and the players before the game began. By convention, the score achieved lies in the interval [0, 1]. The players cooperate to try to achieve the highest possible score, with the challenge that they cannot communicate while the game is in progress and so do not know the questions sent to the other players.

The optimal score the players can achieve on a nonlocal game \(\mathcal {G}\) depends on the resources the players share. If the players share only classical randomness, the optimal score they can achieve in expectation is called the classical value of the game, denoted \(\omega (\mathcal {G})\). If the players share an arbitrary state in a (possibly infinite-dimensional) Hilbert space and can make commuting measurements on it, the optimal score they can achieve is called the commuting operator value of the game, denoted \(\omega ^*_{\textrm{co}}(\mathcal {G})\). The supremum score achievable by players who make commuting measurements on a state in a finite-dimensional Hilbert space is called the quantum value, denoted \(\omega ^*_{\text {q}}(\mathcal {G})\). These three values can all differ, though the inequalities \(\omega (\mathcal {G}) \le \omega ^*_{\text {q}}(\mathcal {G}) \le \omega ^*_{\textrm{co}}(\mathcal {G})\) always hold.
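As a concrete illustration of the classical value, consider the CHSH game (a standard example, used here as an assumption since it is not defined in this section): two players receive uniformly random bits x, y and win iff their answer bits satisfy \(a \oplus b = x \wedge y\). Since shared randomness only yields convex combinations of deterministic scores, \(\omega\) can be computed by brute force over deterministic strategies:

```python
from itertools import product

# CHSH: uniformly random questions x, y in {0,1}; answers a, b in {0,1};
# the players win iff a XOR b == x AND y.  Shared randomness cannot beat
# the best deterministic strategy, so enumerating the 2^4 deterministic
# strategies (a bit a_x for each question x, likewise for Bob) suffices.
def chsh_classical_value():
    best = 0.0
    for a0, a1, b0, b1 in product([0, 1], repeat=4):
        wins = sum(
            ((a0, a1)[x] ^ (b0, b1)[y]) == (x & y)
            for x in (0, 1) for y in (0, 1)
        )
        best = max(best, wins / 4)
    return best

print(chsh_classical_value())  # 0.75
```

The maximum 3/4 is attained, e.g., by both players always answering 0, which wins on three of the four question pairs.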

Starting roughly in this century the classical subject of real algebraic geometry has been extended to matrix and operator (noncommutative) variables. Here, inequalities and equalities are explained by being equivalent to algebraic formulas, often involving sums of squares (SOS). These go under the names of Positivstellensatz for inequalities and Nullstellensatz for equations. Of course, finding quantum strategies for games leads to many such noncommutative (NC) inequalities and equalities.

In this paper, we describe how the well developed NC real algebraic geometry theory applies and integrates with nonlocal games and commuting operator strategies for them. We show a connection between the NC Nullstellensatz and whether or not a nonlocal game has a perfect commuting operator strategy (i.e., \(\omega ^*_{\textrm{co}}(\mathcal {G}) = 1\)). This connection gives a new algebraic characterization which applies to all nonlocal games with commuting operator value exactly equal to one. This characterization provides a unified algebraic framework through which several previous results concerning the commuting operator value of nonlocal games can be understood. For a large class of games it also reduces the question of whether or not a game has perfect commuting operator value to an instance of the subgroup membership problem, providing a potential starting point for the investigation of several yet-to-be-studied families of games.

For context, Positivstellensätze have long played a major role in the study of nonlocal games in that they are behind the standard [11, 25] upper bound on the commuting operator value of a game. Underlying this bound is one of the earliest NC Positivstellensätze, [15]. This paper turns its attention to developing the analogous NC real algebraic geometry which bears on perfect games.

In the remainder of this introduction, we introduce some new terminology, review some previous results concerning the commuting operator value of nonlocal games, and then give formal statements of some of our main results.

Algebraic Description of the Commuting Operator Value A commuting operator strategy for a nonlocal game is a description of how players can use commuting operator measurements to map questions sent by the verifier to responses. Formally, a (commuting operator) strategy can be specified by a Hilbert space \({\mathcal {H}}\), a state \(\psi \in {\mathcal {H}}\) which is shared by the players and projectors \(\{ E(\alpha )^{i}_{a} \}\) acting on \({\mathcal {H}}\), where \(\alpha \) ranges over all players, i ranges over all questions, and a ranges over all responses. The projector \(E(\alpha )^{i}_{a}\) can be read as “the projector corresponding to player \(\alpha \) giving response a to question i.”

Because the Hilbert space \({\mathcal {H}}\) on which they act is arbitrary, it is difficult to reason about the \(E(\alpha )^{i}_{a}\) directly. Instead, we introduce the universal game algebra \(\mathscr {U}\), a \(*\)-algebra generated by variables \(e(\alpha )^{i}_{a}\) which satisfy the same relations as the projectors \(E(\alpha )^{i}_{a}\), for example, that \(e(\alpha )^{i}_{a}\) and \(e(\beta )^{j}_{b}\) commute for any \(\alpha \ne \beta \). Commuting operator strategies can then be specified by tuples \((\pi ,\psi )\), consisting of a \(*\)-representation \(\pi \) mapping \(\mathscr {U}\) to bounded operators on a Hilbert space \({\mathcal {H}}\), along with a state \(\psi \in {\mathcal {H}}\). When specified in this way, it is understood that projectors \(E(\alpha )^{i}_{a}\) are given by \(\pi (e(\alpha )^{i}_{a})\) and that \(\psi \) gives the state shared by the players.

Other Characterizations of Perfect Commuting Operator Strategies Several other papers have considered the problem of deciding whether or not a game has a perfect commuting operator strategy and given criteria which determine the existence of perfect commuting operator strategies for specific families of nonlocal games. We review some of those families of games and the associated characterizations below.

  • Linear systems games are two-player games based around systems of m linear equations on n variables. In [9] it was shown that deciding existence of a perfect commuting operator strategy for a binary linear systems game was equivalent to solving an instance of the word problem on a group called the solution group of the game.

  • XOR games are k player games which, similarly to linear system games, test satisfiability of a system of m binary equations on kn variables.

    In [2] it was shown that deciding the existence of a perfect commuting operator strategy for an XOR game was equivalent to solving an instance of the subgroup membership problem on a group called the game group.

  • Synchronous games are two-player nonlocal games which include “consistency checks,” where Alice and Bob are sent the same question and win iff they send the same response. Other than these consistency checks, the questions and winning responses involved in a synchronous game are arbitrary. In [27] it was shown that there is a perfect commuting operator strategy for a coloring game iff a \(*\)-algebra associated with a single player’s operators can be represented into a \(C^*\)-algebra with a faithful trace. In [16] and [20], this was generalized to synchronous games.

Our Results The main results of this paper are two theorems giving algebraic characterizations of games with perfect commuting operator strategies.

A key concept introduced on the way to proving these theorems is the notion of a game being determined by a set of elements \(\mathcal {F}\subseteq \mathscr {U}\). Formally, we say a game \(\mathcal {G}\) is determined by a set of elements \(\mathcal {F}\) if, for any commuting operator strategy \((\pi ,\psi )\), we have that \((\pi ,\psi )\) is a perfect commuting operator strategy for \(\mathcal {G}\) iff \( \pi (f) \psi = 0 \) for all \(f\in \mathcal {F}\). We also note that any game \(\mathcal {G}\) is naturally determined by two sets of elements. The first, \(\mathcal {N}\), consists of elements corresponding to projectors onto responses which obtain a score less than 1 on questions asked by the verifier, while the second, \(\mathcal {Y}\), consists of elements \(y-1\), with each element y corresponding to projectors onto responses which obtain a score of exactly 1. Details of this construction are given in Sect. 3.4.1.

Our first major theorem follows from combining the notion of sets of elements which determine a game with a result in noncommutative algebraic geometry known as a Nullstellensatz. To state the result formally, let \({{\mathfrak {L}}}\left( \mathcal {X}\right) \) denote the left ideal of \(\mathscr {U}\) generated by \(\mathcal {X}\) for any set of elements \(\mathcal {X}\subseteq \mathscr {U}\) and \({{\,\textrm{SOS}\,}}_\mathscr {U}\) denote sums of squares in the algebra \(\mathscr {U}\). Then, the following result holds:

Theorem 1.1

For a nonlocal game \(\mathcal {G}\) determined by a set \(\mathcal {F}\subseteq \mathscr {U}\), the following are equivalent:

  (i) \(\mathcal {G}\) has a perfect commuting operator strategy;

  (ii) \(-1 \notin {{\mathfrak {L}}}\left( \mathcal {F}\right) + {{\mathfrak {L}}}\left( \mathcal {F}\right) ^* + {{\,\textrm{SOS}\,}}_\mathscr {U}\).

Theorem 1.1, combined with the natural determining sets \(\mathcal {N}\) and \(\mathcal {Y}\), gives a fully algebraic characterization of nonlocal games with perfect commuting operator strategies. This characterization is analogous to the characterization of synchronous games given in [16], but works for all games. For the special case of synchronous games, we show in Sect. 8 that the characterizations of Theorem 1.1 and [16] are equivalent.

The second major theorem focuses on a general class of games for which Theorem 1.1 can be simplified further. A game \(\mathcal {G}\) is called a torically determined game if there exists a group G such that \(\mathscr {U}\cong \mathbb {C}[G]\) and \(\mathcal {G}\) is determined by a set of elements

$$\begin{aligned} \mathcal {F}= \{\beta _i g_i - 1\} \end{aligned}$$
(1.1)

with each \(\beta _i \in \mathbb {C}\) and \(g_i \in G\). We call the elements \(\beta _i g_i\) clauses of \(\mathcal {F}\), and let \(\mathscr {H}= \{\beta _i g_i\}\) be the set of all the clauses of \(\mathcal {F}\). We give the following characterization of torically determined games with perfect commuting operator strategies:

Theorem 1.2

Let \(\mathcal {G}\) be a game torically determined by a set of elements \(\mathcal {F}\) with clauses \(\mathscr {H}\). Then, \(\mathcal {G}\) has a perfect commuting operator strategy iff the following equivalent criteria are satisfied:

  (i) \(-1 \notin {{\mathfrak {L}}}\left( \mathcal {F}\right) +{{\mathfrak {L}}}\left( \mathcal {F}\right) ^*\);

  (ii) The subgroup H of \(\mathscr {U}\) generated by \(\mathscr {H}\cup \mathscr {H}^*\) meets \({\mathbb {C}}\) only at 1.

Condition (i) makes it clear Theorem 1.2 can be viewed as a version of Theorem 1.1 for torically determined games which holds without the SOS term. Additionally, condition (ii) reduces the characterization of perfect commuting operator strategies in terms of \(*\)-algebras given in Theorem 1.1 to one entirely in terms of groups. In this paper, we show that both linear systems games and XOR games are torically determined games, and that Theorem 1.2 recovers the algebraic characterizations of these games given in [2, 9], respectively. We also show that Theorem 1.2 lets us extend the algebraic characterization of both XOR and binary linear systems games to more general games based on linear equations Mod r for any integer r.

Both Theorems 1.1 and 1.2 allow new algorithms for identifying nonlocal games with perfect commuting operator strategies. In this paper, we discuss one such algorithm, based on Gröbner bases, and give some sample applications. We note that, unlike the upper bounds coming from the ncSoS hierarchy, these Gröbner basis algorithms can both prove a game has commuting operator value strictly less than 1 and identify some games with commuting operator value exactly equal to 1.

Recent Related Work In [22], the characterization of perfect commuting operator strategies for synchronous games in terms of representations of a \(*\)-algebra into a \(C^*\)-algebra with a faithful trace was generalized to a class of games known as imitation games, and a purely algebraic characterization of perfect commuting operator strategies was given for a subclass of imitation games known as mirror games. In this paper, we do not try to reconcile these characterizations with Theorems 1.1 and 1.2, but we do note it as an interesting problem worth future study. In [13] the solution group characterization of perfect commuting operator strategies for linear systems games given in [9] was related to the \(C^*\)-algebra characterization of perfect commuting operator strategies for a synchronous variant of linear systems games presented in [20]. The relationship between these algebraic objects is similar to the relationships presented in Sects. 7 and 8 of this paper.

Paper Outline In Sect. 2, we review some general mathematical definitions necessary to understand the main results of the paper. In Sect. 3, we introduce terminology related to nonlocal games and define key algebraic objects. In Sect. 4, we introduce Nullstellensätze and prove Theorem 1.1. In Sect. 5, we simplify the Nullstellensätze from the previous section and prove Theorem 1.2, then apply it to XOR and Mod r games. Section 7 shows that Theorem 1.2 is equivalent to the characterization of linear systems games given in [9]. Section 8 shows that, in the special case of synchronous games, Theorem 1.1 is equivalent to the characterization of synchronous games given in [16]. Section 6 introduces Gröbner basis algorithms for identifying games with perfect commuting operator value based on Theorems 1.1 and 1.2, and Sect. 8.3 gives an example application of our Nullstellensätze to quantum graph coloring.

There is a table of contents, an index and a list of notation at the end of the paper.

2 Math Background

Representations Given an algebra \({\mathcal {A}}\), a representation of \({\mathcal {A}}\) is a unital homomorphism \(\pi :{\mathcal {A}}\rightarrow \mathcal {B}({\mathcal {H}})\) for some Hilbert space \({\mathcal {H}}\). When \({\mathcal {A}}\) is endowed with an involution \(*\), then \(\pi \) is called a \(*\)-representation if \(\pi (a^*)=\pi (a)^*\) for all \(a\in {\mathcal {A}}\).

Free Algebras Let \(x=(x_1,\ldots ,x_d)\) denote a tuple of noncommuting letters. Words in x form the free monoid \(\langle x\rangle \), including the empty word denoted 1. A noncommutative (nc) polynomial is a linear combination of words; the algebra of nc polynomials is the free algebra \(\mathbb {C}\langle x\rangle \). We endow it with the involution \(*\) fixing \(\mathbb {R}\cup \{x\}\).

Example 2.1

Any representation \(\pi \) of \(\mathbb {C}\langle x\rangle \) is described by a tuple of operators \(X\in \mathcal {B}({\mathcal {H}})^d\) through \(\pi (x_j)=X_j\). Then, \(\pi \) is a \(*\)-representation if and only if the \(X_j\) are self-adjoint.

Group Algebras Given an abstract group G, the group algebra \(\mathbb {C}[G]\) has G as a vector space basis, and the multiplication is extended from the one in G by linearity/distributivity. Thus, elements \(p\in \mathbb {C}[G]\) are finite linear combinations

$$\begin{aligned} p= \sum _j s_j g_j, \end{aligned}$$
(2.1)

where \(s_j\in \mathbb {C}\) and \(g_j \in G\). The involution on \(\mathbb {C}[G]\) is complex conjugation on \(\mathbb {C}\) and \(g\mapsto g^{-1}\) for \(g\in G.\) Hence, a \(*\)-representation of a group algebra \(\pi :\mathbb {C}[G]\rightarrow \mathcal {B}({\mathcal {H}})\) is a ring homomorphism such that \(\pi (g)\) is a unitary operator for all \(g\in G\).

Example 2.2

Each group algebra admits a left regular \(*\)-representation. Namely, let \(\ell ^2(G)\) be the Hilbert space with Hilbert space basis G. Then, the left regular representation \(\lambda \) is defined by

$$\begin{aligned} \lambda (g)\left( \sum _j a_j g_j\right) = \sum _j a_j \, g g_j \qquad \text {for } \sum _j a_j g_j \in \ell ^2(G) . \end{aligned}$$

Thus, each \(g\in G\) induces a unitary operator \(\lambda (g)\in \mathcal {B}(\ell ^2(G))\), and by linearity \(\lambda :\mathbb {C}[G]\rightarrow \mathcal {B}(\ell ^2(G))\) is a \(*\)-representation.
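For a finite group, the left regular representation consists of permutation matrices and the claimed properties can be checked directly. A minimal sketch for the cyclic group \(\mathbb {Z}_m\) (an illustration of the general construction, not taken from the paper):

```python
import numpy as np

def left_regular(m, g):
    """Matrix of lambda(g) on l^2(Z_m): the basis vector delta_h is sent
    to delta_{g+h} (the group operation is addition mod m)."""
    L = np.zeros((m, m))
    for h in range(m):
        L[(g + h) % m, h] = 1.0
    return L

m = 5
for g in range(m):
    L = left_regular(m, g)
    # each lambda(g) is unitary (here: a real orthogonal permutation matrix)
    assert np.allclose(L @ L.T, np.eye(m))
for g in range(m):
    for h in range(m):
        # lambda is a homomorphism: lambda(g) lambda(h) = lambda(g + h)
        assert np.allclose(left_regular(m, g) @ left_regular(m, h),
                           left_regular(m, (g + h) % m))
```

Extending \(\lambda\) linearly to \(\mathbb {C}[\mathbb {Z}_m]\) then gives a \(*\)-representation, since \(\lambda(g)^* = \lambda(g)^{-1} = \lambda(-g)\).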

3 Nonlocal Game Definitions

This section gives an overview of all the terminology used to discuss nonlocal games in this paper. Section 3.1 gives a semi-formal introduction to the language used to describe nonlocal games. Section 3.2 gives technical definitions which are key to this paper. In Sect. 3.3, we introduce an algebraic framework which we will use to describe nonlocal games and their commuting operator strategies. In Sect. 3.4, we describe the condition that a nonlocal game has a perfect commuting operator strategy in terms of the notation introduced in previous sections. Finally, in Sect. 3.5 we describe some well-known families of games using the language introduced in previous sections.

A reader already familiar with nonlocal games can skip Sect. 3.1 but should not skip Sect. 3.2 (or later sections) as conventions are set in those sections which will remain important throughout the paper.

3.1 General Notation

Nonlocal games describe experiments which test the correlations that can be produced by measurements of quantum systems. A nonlocal game involves a referee (also called the verifier) and k players (also called provers). In a round of the game, the verifier selects a vector of questions \(\vec {i} = (i(1), i(2), \ldots , i(k))\) randomly from a set \(\mathcal {Q}\) of possible question vectors, then sends player \(\alpha \) question \(i(\alpha )\). Each player responds with an answer \(a(\alpha )\). The players cannot communicate with each other when choosing their answers. After receiving an answer from each player, the verifier computes a score \(V(\vec {a} \;|\vec {i}) = V(a(1), a(2), \ldots , a(k) | i(1), i(2), \ldots , i(k))\) which depends on the questions selected and the answers received. The players know the set \(\mathcal {Q}\) of possible question vectors and the scoring function V. Their goal is to choose a strategy for responding to each possible question which maximizes their expected score. The difficulty for the players lies in the fact that in a given round each player has only partial information about the questions sent to the other players. The set of all possible questions is finite; its cardinality is denoted n.

For a given game G, the maximum expected score achievable by players is called the value of the game. The value depends on the resources available to the players. If players are restricted to classical strategies, the value is called the classical value and denoted \(\omega (G)\). If players can make measurements on a shared quantum state (but still cannot communicate), the value can be larger and is called the entangled value. More specifically, if the players' shared state lives in a Hilbert space \({\mathcal {H}}= {\mathcal {H}}_1 \otimes {\mathcal {H}}_2 \otimes \cdots \otimes {\mathcal {H}}_k\) and the i-th player makes measurements on the i-th Hilbert space (i.e., all measurement operators used by the i-th player take the form \(I _1 \otimes \cdots \otimes I_{i-1} \otimes X_i \otimes I_{i+1} \otimes \cdots \otimes I_k\) with \(X_i\) acting on \({\mathcal {H}}_i\)), the supremum score the players can obtain is called the quantum value, denoted \(\omega ^*_{\text {q}}\). In [31], it was shown that this value equals the supremum score the players can achieve making measurements on finite-dimensional Hilbert spaces.

If the players' state and Hilbert space are arbitrary and the only restriction placed on their measurements is that measurement operators of different players commute (enforcing no-communication), the maximum achievable score is called the commuting operator value, denoted \(\omega ^*_{\textrm{co}}\). When the state shared by the players is finite-dimensional, the tensor product and commuting operator values of a game coincide. In the infinite-dimensional case \(\omega ^*_{\text {q}} \le \omega ^*_{\textrm{co}}\), and there exist games for which the inequality is strict [12, 17, 18].

A notation summary is:

$$\begin{aligned} k:= \# \text {players}, \ \ n:= \# \text {questions}, \ \ m:= \#\text {responses per question} . \end{aligned}$$
(3.1)

3.2 Basic Definitions

3.2.1 Commuting Operator Strategies

We start with a definition of a commuting operator strategy for nonlocal games. The setting is a separable Hilbert space \({\mathcal {H}}\), possibly infinite-dimensional; measurements, described by bounded operators in \(\mathcal {B}({\mathcal {H}})\); and states, positive semidefinite \(\rho \in \mathcal {B}({\mathcal {H}})\) with trace 1. Such operators \(\rho \) are often called density operators or (normalized) trace class operators. Of course, \(\ell _{\rho }(W)= \text {tr}(W \rho )\) defines a positive linear functional \(\ell _\rho : \mathcal {B}({\mathcal {H}}) \rightarrow {\mathbb {C}}\) with \(\ell _\rho (I)=1\). A pure state corresponds to a rank one \(\rho \), that is, \(\rho = \psi ^* \psi \) with \(\psi \) a unit vector in \({\mathcal {H}}\). In this case, \(\ell _\rho \) is of the form \(W\mapsto \psi ^* W\psi \).

Definition 3.1

A commuting operator strategy \(S\) for a k-player, n-question, m-response nonlocal game is defined by \(({\mathcal {H}},\rho , \mathcal {E}(1), \mathcal {E}(2), \ldots , \mathcal {E}(k))\) where \({\mathcal {H}}\) is a Hilbert space, \(\rho \) a state, and each \(\mathcal {E}(\alpha )\) for \(\alpha \in [k]\) is a set of projectors acting on \({\mathcal {H}}\),

$$\begin{aligned} \mathcal {E}(\alpha ) = \{ E(\alpha )^{i}_{a} \mid i \in [n], a \in [m] \} \end{aligned}$$
(3.2)

which satisfy

$$\begin{aligned}{}[E(\alpha )^{i}_{a}, E(\beta )^{j}_{b}]&= 0 \quad \forall \; i,j,a,b, \ \alpha \ne \beta , \end{aligned}$$
(3.3a)
$$\begin{aligned} \sum _{a \in [m]} E(\alpha )^{i}_{a}&= 1 \quad \forall \; \alpha \in [k], i \in [n]. \end{aligned}$$
(3.3b)

Note here we use the shorthand \([n] := \{1,2,\dots ,n\}\).

Note that as a consequence of Eq. (3.3b) we have \(E(\alpha )^{i}_{a}E(\alpha )^{i}_{b} = 0\) and hence \([E(\alpha )^{i}_{a},E(\alpha )^{i}_{b}] = 0\) for any \(\alpha , i\) and \(a \ne b\).

Conventions: \(\alpha , \beta \) label players, \(i, j\) label questions, and \(a, b\) label responses.
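The simplest instances of Definition 3.1 are tensor product strategies: with two players, taking \(E(1)^i_a = P^i_a \otimes I\) and \(E(2)^j_b = I \otimes Q^j_b\) for projective measurements P, Q makes relations (3.3a) and (3.3b) automatic. A numerical sketch (illustrative only, with randomly generated measurements):

```python
import numpy as np

def random_pvm(dim, m, rng):
    """m orthogonal projectors summing to the identity: spectral projectors
    of a random Hermitian matrix, with eigenvectors grouped into m buckets."""
    A = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
    _, U = np.linalg.eigh(A + A.conj().T)
    buckets = np.array_split(np.arange(dim), m)
    return [sum(np.outer(U[:, j], U[:, j].conj()) for j in idx) for idx in buckets]

rng = np.random.default_rng(0)
d, m = 4, 2
I = np.eye(d)
P = random_pvm(d, m, rng)   # player 1's measurement for one question
Q = random_pvm(d, m, rng)   # player 2's measurement for one question
E1 = [np.kron(p, I) for p in P]
E2 = [np.kron(I, q) for q in Q]

# (3.3a): operators of different players commute
for A in E1:
    for B in E2:
        assert np.allclose(A @ B, B @ A)
# (3.3b): each player's projectors sum to the identity
assert np.allclose(sum(E1), np.eye(d * d))
assert np.allclose(sum(E2), np.eye(d * d))
```

Commuting operator strategies that are not of this tensor product form only arise in infinite dimensions, as noted in Sect. 3.1.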

3.2.2 Games and their Commuting Operator Value

A k-player, n-question, m-response nonlocal game \(\mathcal {G}= (V, \mu )\) is specified by a scoring function

$$\begin{aligned} V : [n]^k \times [m]^k \rightarrow [0,1] \end{aligned}$$
(3.4)

and a probability distribution \(\mu \) on \([n]^k\). The score a strategy \(S\) obtains on a game \(\mathcal {G}= (V,\mu )\) is given by

$$\begin{aligned} {V(\mathcal {G},S)} = \sum _{\vec {i} \in [n]^k} \sum _{\vec {a} \in [m]^k} \mu \left( \vec {i}\right) V\left( \vec {i}, \vec {a}\right) \text {Tr}[\prod _{\alpha \in [k]} E(\alpha )^{i(\alpha )}_{a(\alpha )} \rho ] \end{aligned}$$
(3.5)

The commuting operator value \(\omega ^*_{\textrm{co}}(\mathcal {G})\) of a game is defined to be the supremum value achieved over commuting operator strategies, so

$$\begin{aligned} \omega ^*_{\textrm{co}}(\mathcal {G}) = \sup _{S\in \mathcal {S}_{\textrm{co}}} V(\mathcal {G}, S). \end{aligned}$$
(3.6)

where we have defined \(\mathcal {S}_{\textrm{co}}(k,n,m)\) to be the set of all k-player, n-question, m-response commuting operator strategies. Often k, n, m are implied from context, and we just write \(\mathcal {S}_{\textrm{co}}\).

3.3 The Algebraic Picture

In this paper, we think of strategies \(S\in \mathcal {S}_{\textrm{co}}\) as arising from representations of an algebra we call the universal game algebra. We define that algebra next.

3.3.1 Universal Game Algebra

Here, we define the Universal Game Algebra \(\mathscr {U}\) in terms of various generators and relations. These relations define a two-sided ideal \({\mathscr {I}}\) which lives inside the \(*\)-algebra of all noncommutative polynomials in the generators. We call \({\mathscr {I}}\) the Universal Game Ideal. These relations reflect the algebraic properties of projectors or related algebraic objects.

Projection Generators Define \(\mathscr {U}\) to be the \(*\)-algebra with generators \(e(\alpha )^i_{a}\) which satisfy relations

$$\begin{aligned}{}[e(\alpha )^i_a, e(\beta )^j_b]&= 0 \quad \forall \; i,j, a, b, \alpha \ne \beta , \end{aligned}$$
(3.7a)
$$\begin{aligned} \left( e(\alpha )^i_a\right) ^2&= \left( e(\alpha )^i_a\right) ^* = e(\alpha )^i_a, \end{aligned}$$
(3.7b)
$$\begin{aligned} e(\alpha )^i_a e(\alpha )^i_b&= 0 \quad \forall \; i, a \ne b, \end{aligned}$$
(3.7c)
$$\begin{aligned} \sum _a e(\alpha )^i_{a}&= 1 \quad \forall \; \alpha , i, \end{aligned}$$
(3.7d)

with \(i,j \in [n]; a,b \in [m],\) and \(\alpha \in [k]\). (Technically, we should define a different universal algebra for every different value of n, m and k, so \(\mathscr {U}= \mathscr {U}(n,m,k)\). We frequently omit this detail when n, m and k are clear from context.)

Remark

The third relation in Eq. (3.7), the orthogonality relation (3.7c), holds automatically for subalgebras of bounded operators on a Hilbert space \({\mathcal {H}}\), but is not redundant in general. To explain this, we shall employ noncommutative Gröbner bases; see Sect. 6.1 later for a brief introduction.

There exists a \(*\)-algebra with four self-adjoint idempotents that sum to 1 but are not pairwise orthogonal. Indeed, consider the free algebra \(\mathbb {C}\langle a,b,c,d\rangle \) generated by four self-adjoint elements \(a,b,c,d\), and the two-sided \(*\)-ideal \({\mathcal {I}}\) in \(\mathbb {C}\langle a,b,c,d\rangle \) generated by

$$\begin{aligned} a^2-a,\; b^2-b,\; c^2-c,\, d^2-d,\; a+b+c+d-1. \end{aligned}$$

Its reduced Gröbner basis with respect to the graded lex order is given by

$$\begin{aligned}&-1 + a + b + c + d, \quad -a + a^2, \quad -b + b^2,\quad -c + c^2,\\&ab + ac + ba + bc + ca + cb,\\&-ab - 2ac - 2ba - 2bc - ca - aba - abc - aca - bac - bca + cab. \end{aligned}$$

We can use this to check that the element \(ab + {\mathcal {I}}\) is not zero in the quotient algebra \({\mathcal {A}}:=\mathbb {C}\langle a,b,c,d\rangle /{\mathcal {I}}\). This example incidentally gives a negative answer to Problem 5.2 in [16].

For three self-adjoint idempotents \(a,b,c\) that add to 1, the corresponding ideal has the reduced Gröbner basis

$$\begin{aligned} -1 + a + b + c,\quad -a + a^2, \quad -b + b^2,\quad ab, \quad ba \end{aligned}$$

whence the product of any two of the idempotents \(a,b,c\) is indeed zero.
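The three-idempotent case can also be checked by a short hand computation, without Gröbner bases (this derivation is our addition): substituting \(c = 1-a-b\) into \(c^2 - c\) and using \(a^2=a\), \(b^2=b\) shows \(ab+ba\) lies in the ideal, from which orthogonality follows. Working modulo the ideal,

```latex
\begin{aligned}
(1-a-b)^2 - (1-a-b) &= (a^2-a) + (b^2-b) + ab + ba = 0
  \;\Longrightarrow\; ab + ba = 0,\\
a(ab+ba) &= ab + aba = 0, \qquad (ab+ba)a = aba + ba = 0
  \;\Longrightarrow\; ab = ba,\\
2ab &= (ab+ba) + (ab-ba) = 0
  \;\Longrightarrow\; ab = 0.
\end{aligned}
```

The same manipulation breaks down for four idempotents, consistent with the counterexample above.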

Alternately, we can describe \(\mathscr {U}\) as a quotient of the free \(*\)-algebra. Namely, let e denote the tuple

$$\begin{aligned} e= (e(\alpha )_a^i)_{i,a,\alpha }, \end{aligned}$$

and define the universal game ideal \({\mathscr {I}}\) to be the two-sided ideal in the free algebra \({\mathbb {C}\langle e\rangle }\), defined by the polynomials

$$\begin{aligned} \big \{ [e(\alpha )^i_a, e(\beta )^j_b], \; \left( e(\alpha )^i_a\right) ^2 - e(\alpha )^i_a, \ e(\alpha )^i_{a'} e(\alpha )^i_{b'},\ \sum _a e(\alpha )^i_{a} - 1 \big \} \end{aligned}$$
(3.8)

where the indices run through all \( i,j, a, b, a'\ne b', \alpha \ne \beta .\) Then, \(\mathscr {U}= {\mathbb {C}\langle e\rangle }/ {\mathscr {I}}\); note that \(\mathscr {U}\) comes equipped with an involution induced by \(\big (e(\alpha )^i_a\big )^*= e(\alpha )^i_a\).

There are two common changes of variables we will use when describing the algebra \(\mathscr {U}\).

Signature Matrix Generators The first change of variables is to generators satisfying the algebraic properties of signature matrices, defined by

$$\begin{aligned} x(\alpha )^{i}_{a} := 2e(\alpha )^{i}_{a} - 1. \end{aligned}$$
(3.9)

These variables satisfy relations

$$\begin{aligned} x(\alpha )^{i}_{a}x(\beta )^{j}_{b}&= x(\beta )^{j}_{b}x(\alpha )^{i}_{a} \quad \forall \; i,j,a,b, \alpha \ne \beta , \end{aligned}$$
(3.10a)
$$\begin{aligned} (x(\alpha )^{i}_{a}+1)(x(\alpha )^{i}_{b}+1)&= 0 \quad \forall \; i,a\ne b, \alpha \text { and } \end{aligned}$$
(3.10b)
$$\begin{aligned} \sum _{a} x(\alpha )^{i}_{a}&= - (m-2), \end{aligned}$$
(3.10c)
$$\begin{aligned} \left( x(\alpha )^{i}_{a}\right) ^*&= x(\alpha )^{i}_{a}, \end{aligned}$$
(3.10d)
$$\begin{aligned} \left( x(\alpha )^{i}_{a}\right) ^2&= 1 \quad \forall \; i,a, \alpha . \end{aligned}$$
(3.10e)

It is straightforward to check that the set of relations above gives a defining set of relations for the algebra \(\mathscr {U}\) written in terms of the \(x(\alpha )^{i}_{a}\).
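The relations (3.10a)–(3.10e) can be sanity-checked numerically: for any projective measurement (projectors summing to the identity), the operators \(2E_a - 1\) are signature matrices. A small sketch (illustrative only, using spectral projectors of a random Hermitian matrix):

```python
import numpy as np

# Spectral projectors of a random Hermitian matrix form a projective
# measurement with m = d rank-one members; x_a = 2 e_a - 1 then satisfies
# the signature matrix relations of Eq. (3.10).
rng = np.random.default_rng(1)
d = 3                       # dimension, and also the number m of responses here
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
_, U = np.linalg.eigh(A + A.conj().T)
E = [np.outer(U[:, a], U[:, a].conj()) for a in range(d)]
X = [2 * e - np.eye(d) for e in E]

for a in range(d):
    assert np.allclose(X[a] @ X[a], np.eye(d))          # (3.10e): x^2 = 1
    assert np.allclose(X[a], X[a].conj().T)             # (3.10d): x* = x
    for b in range(d):
        if a != b:                                      # (3.10b)
            assert np.allclose((X[a] + np.eye(d)) @ (X[b] + np.eye(d)), 0)
assert np.allclose(sum(X), -(d - 2) * np.eye(d))        # (3.10c)
```

Relation (3.10b) holds because \((x_a+1)(x_b+1) = 4 e_a e_b\), which vanishes by the orthogonality of the projectors.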

Cyclic Unitary Generators The second change of variables is to cyclic unitary generators, defined by

$$\begin{aligned} c({\alpha })_{j} := \sum _a \exp \left( \frac{2\pi a i}{ m}\right) e(\alpha )^{j}_{a}. \end{aligned}$$
(3.11)

(In this subsection, i refers to the imaginary unit and j is used to index questions.) These elements satisfy relations

$$\begin{aligned} c({\alpha })_{j}c({\beta })_{j'}&= c({\beta })_{j'}c({\alpha })_{j} \quad \forall \; j,j', \alpha \ne \beta , \end{aligned}$$
(3.12a)
$$\begin{aligned} \left( c({\alpha })_{j}\right) ^*&= \left( c({\alpha })_{j}\right) ^{-1} , \end{aligned}$$
(3.12b)
$$\begin{aligned} \left( c({\alpha })_{j}\right) ^m&= 1. \end{aligned}$$
(3.12c)

Routine calculation using the inverse transformation

$$\begin{aligned} e(\alpha )^{j}_{a} = \frac{1}{m} \sum _b \left( \exp \left( \frac{-2\pi a i}{m}\right) c({\alpha })_{j}\right) ^b \end{aligned}$$
(3.13)

shows that Equations (3.12a) to (3.12c) also form a defining set of relations for \(\mathscr {U}\).
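The change of variables (3.11) and its inverse (3.13) can likewise be verified numerically; in the sketch below (illustrative only) responses are indexed \(a = 0, \ldots, m-1\):

```python
import numpy as np

# Projective measurement from spectral projectors of a random Hermitian matrix.
rng = np.random.default_rng(2)
m = 4
A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
_, U = np.linalg.eigh(A + A.conj().T)
E = [np.outer(U[:, a], U[:, a].conj()) for a in range(m)]

omega = np.exp(2j * np.pi / m)
# (3.11): c = sum_a omega^a e_a is a unitary of order m
c = sum(omega**a * E[a] for a in range(m))
assert np.allclose(c @ c.conj().T, np.eye(m))                 # (3.12b)
assert np.allclose(np.linalg.matrix_power(c, m), np.eye(m))   # (3.12c)

# (3.13): e_a = (1/m) sum_b (omega^{-a} c)^b recovers the projectors
for a in range(m):
    e_rec = sum(np.linalg.matrix_power(omega**(-a) * c, b)
                for b in range(m)) / m
    assert np.allclose(e_rec, E[a])
```

The recovery in the last loop is just the discrete Fourier inversion \(\frac{1}{m}\sum _b \omega ^{(a'-a)b} = \delta _{a a'}\) applied to the eigenspaces of c.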

We can also define a group \(G\) generated by elements \(c({\alpha })_{j}\) which satisfy the relations defined in Equations (3.12a) to (3.12c). Then, we can write that \(\mathscr {U}= \mathbb {C}[G]\), making it clear that \(\mathscr {U}\) is a group algebra.

Two Answer Games A notable special case occurs when the answer set of the game contains only two responses. In this case, the cyclic unitary and signature matrix changes of variables are the same, since

$$\begin{aligned} c({\alpha })_{i}&= e(\alpha )^{i}_{0} - e(\alpha )^{i}_{1} = 2e(\alpha )^{i}_{0} - 1 = x(\alpha )^{i}_{0}, \end{aligned}$$
(3.14a)
$$\begin{aligned} x(\alpha )^{i}_{0}&= - x(\alpha )^{i}_{1}. \end{aligned}$$
(3.14b)

In this case, we use the simplified notation \(x^{(\alpha )}_{i} = c({\alpha })_{i} = x(\alpha )^{i}_{0}\).

3.3.2 Strategies as Representations of the Universal Game Algebra

Recall from Sect. 2 that a \(*\)-representation \(\pi : \mathscr {U}\rightarrow \mathcal {B}({\mathcal {H}})\) is an algebraic structure preserving map, so images of generators of \(\mathscr {U}\) consist of operators on \({\mathcal {H}}\) which respect the algebraic structure of \(\mathscr {U}\), for example

$$\begin{aligned} \pi \big ((e(\alpha )^i_a)^2\big )= \pi (e(\alpha )^i_a)^2. \end{aligned}$$

Then, we note that any \(*\)-representation \(\pi : \mathscr {U}\rightarrow \mathcal {B}({\mathcal {H}})\) and state \(\psi \in {\mathcal {H}}\) can be used to define a commuting operator strategy \(S\in \mathcal {S}_{\textrm{co}}\), simply by setting

$$\begin{aligned} E(\alpha )^{i}_{a} = \pi (e(\alpha )^{i}_{a}) \end{aligned}$$
(3.15)

for all \(i,a,\alpha \) and taking the Hilbert space \({\mathcal {H}}\) to be the target Hilbert space of the \(*\)-representation \(\pi \). Similarly, any commuting operator strategy defined by a Hilbert space \({\mathcal {H}}\), state \(\psi \), and projectors \(E(\alpha )^{i}_{a}\) can be used to define a \(*\)-representation \(\pi : \mathscr {U}\rightarrow \mathcal {B}({\mathcal {H}})\) by setting

$$\begin{aligned} \pi (e(\alpha )^{i}_{a}) = E(\alpha )^{i}_{a} \end{aligned}$$
(3.16)

for all \(i,a,\alpha \). For this reason, we view commuting operator strategies interchangeably with pairs \((\pi , \psi )\) consisting of \(*\)-representations \(\pi :\mathscr {U}\rightarrow \mathcal {B}({\mathcal {H}})\) and states \(\psi \in {\mathcal {H}}\).

3.3.3 An Algebraic Definition of the Commuting Operator Value of a Game

For any game \(\mathcal {G}\) define the game polynomial \(\Phi _\mathcal {G}\in \mathscr {U}\) by

$$\begin{aligned} \Phi _\mathcal {G}= \sum _{\vec {i} \in [n]^k} \sum _{\vec {a} \in [m]^k} \mu \left( \vec {i}\right) V\left( \vec {i}, \vec {a}\right) \prod _{\alpha \in [k]} e(\alpha )^{i(\alpha )}_{a(\alpha )} \end{aligned}$$
(3.17)

Then, recalling the view of strategies as representations introduced in Sect. 3.3.2,

$$\begin{aligned} \omega ^*_{\textrm{co}}(\mathcal {G}) = \sup _{\pi , \rho } \text {tr}[\pi (\Phi _\mathcal {G})\rho ] \end{aligned}$$
(3.18)

where the supremum is taken over all \(*\)-representations \(\pi \) of \(\mathscr {U}\) into bounded operators on a Hilbert space \({\mathcal {H}}\), and density operators \(\rho \in \mathcal {B}({\mathcal {H}})\).

A slightly different representation of \(\omega ^*_{\textrm{co}}\), which will be useful, is given in the next lemma.

Lemma 3.2

The supremum in Eq. (3.18) is attained by some pair \(({\hat{\pi }},{\hat{\rho }})\) consisting of a representation and a density operator. Moreover, one can take \({\hat{\rho }}\) to be a pure state \({\hat{\rho }}= {\hat{\psi }}^* {\hat{\psi }}\), so

$$\begin{aligned} \omega ^*_{\textrm{co}}(\mathcal {G}) = \max _{\pi ,\psi }\psi ^*\pi (\Phi _\mathcal {G}) \psi . \end{aligned}$$
(3.19)

Here, \(\psi \in {\mathcal {H}}\) is a unit vector, and \({\mathcal {H}}\) is the Hilbert space into which \(\pi \) represents.

Proof

Firstly, it is well known that the extreme points of the convex set of density operators \(\rho \in \mathcal {B}({\mathcal {H}})\) are exactly the rank-one projections. This shows one can replace \(\rho \) by \( \psi ^* \psi \) for unit vectors \(\psi \) in Eq. (3.18).

Given \(\pi ,\psi \) as above, the map

$$\begin{aligned} \ell :\mathscr {U}\rightarrow {\mathbb {C}}, \quad a\mapsto \psi ^*\pi (a)\psi \end{aligned}$$
(3.20)

is a positive linear functional with \(\ell (1)=1\). Conversely, a normalized positive linear functional \(\ell :\mathscr {U}\rightarrow {\mathbb {C}}\) yields through the Gelfand–Naimark–Segal (GNS) construction a representation \(\pi \) and unit vector \(\psi \) such that Eq. (3.20) holds. Thus, Eq. (3.18) can be rewritten as

$$\begin{aligned} \omega ^*_{\textrm{co}}(\mathcal {G}) = \sup _{\ell } \ell (\Phi _\mathcal {G}). \end{aligned}$$
(3.21)

The set of the normalized positive linear functionals on \(\mathscr {U}\) is weak-\(*\) compact by the Banach–Alaoglu theorem. Thus, by continuity, the supremum in Eq. (3.21) is attained. Hence, the GNS construction as explained above yields Eq. (3.19). Observe that since \(\mathscr {U}\) is countably dimensional, the constructed Hilbert space \({\mathcal {H}}\) will be separable. \(\square \)

3.3.4 Valid and Invalid Response Sets

Now, we introduce a few more objects which are useful for describing nonlocal games.

First, for any game \(\mathcal {G}\), let the set \(\mathcal {Q}\subseteq [n]^k\) contain all question vectors which can be sent to the players with nonzero probability. Formally,

$$\begin{aligned} \mathcal {Q}= \{\vec {i} \in [n]^k : \mu (\vec {i}) > 0\}. \end{aligned}$$
(3.22)

Then, for each question vector \(\vec {i} \in \mathcal {Q}\), let \(\mathcal {Y}(\vec {i})\) list the corresponding response vectors which achieve a score of exactly 1, so

$$\begin{aligned} \mathcal {Y}(\vec {i})=\{ \vec {a} : V(\vec {i},\vec {a}) = 1\}. \end{aligned}$$
(3.23)

We say the set \(\mathcal {Y}(\vec {i})\) contains all the valid or “winning” responses to the question vector \(\vec {i}\). We also define the set of invalid responses or “losing” responses \(\mathcal {N}(\vec {i})\) to be the complement of the set \(\mathcal {Y}(\vec {i})\).

In this paper, we assume that all games have a scoring function \(V\) whose image is either 0 or 1 and a distribution \(\mu \) which is uniform over a set of allowed questions. For these games, specifying a question set \(\mathcal {Q}\) and sets \(\mathcal {Y}(\vec {i})\) (or \(\mathcal {N}(\vec {i})\)) for all \(\vec {i} \in \mathcal {Q}\) completely specifies the game, since we can write

$$\begin{aligned} \Phi _\mathcal {G}= \frac{1}{\left|\mathcal {Q}\right|} \sum _{\vec {i} \in \mathcal {Q}} \sum _{\vec {a} \in \mathcal {Y}(\vec {i})} \prod _{\alpha \in [k]} e(\alpha )^{i(\alpha )}_{a(\alpha )}. \end{aligned}$$
(3.24)

The results in this paper can be easily generalized to games with non-uniform question distributions \(\tilde{\mu }\) or more complicated scoring functions \(\tilde{V}\), since a (classical or commuting operator) strategy for such a game is perfect if and only if it is perfect for the game with “flattened” scoring function \(V = \lfloor \tilde{V} \rfloor \) and question distribution \(\mu \) which is uniform over the support of \(\tilde{\mu }\).
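
The flattening just described is mechanical; the following sketch (with hypothetical names, and dictionaries standing in for \(\tilde{\mu }\) and \(\tilde{V}\)) illustrates it:

```python
import math

# Illustration (names hypothetical): "flatten" a game with a non-uniform
# question distribution mu_tilde and [0,1]-valued scoring V_tilde. Perfect
# strategies only see whether a pair scores exactly 1 and whether a question
# has positive probability.
def flatten(mu_tilde, V_tilde):
    """Return the uniform distribution mu on the support of mu_tilde
    and the 0/1 scoring function V = floor(V_tilde)."""
    support = [q for q, p in mu_tilde.items() if p > 0]
    mu = {q: 1.0 / len(support) for q in support}
    V = {qa: math.floor(v) for qa, v in V_tilde.items()}
    return mu, V

# Toy data: three questions (one with probability 0), one response pair each.
mu_tilde = {"q0": 0.9, "q1": 0.1, "q2": 0.0}
V_tilde = {("q0", "a"): 1.0, ("q1", "a"): 0.7}

mu, V = flatten(mu_tilde, V_tilde)
print(mu)  # → {'q0': 0.5, 'q1': 0.5}
print(V)   # → {('q0', 'a'): 1, ('q1', 'a'): 0}
```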

3.4 Determining Perfect Commuting Operator Strategies for Games

A commuting operator strategy \(S= (\pi ,\psi )\) is called a perfect strategy for a game \(\mathcal {G}\) if \(V(\mathcal {G},S) = 1\). In this section, we develop some terminology for describing perfect commuting operator strategies for nonlocal games.

We first introduce a general definition which will be used frequently in later sections.

Definition 3.3

A game \(\mathcal {G}\) is said to be determined by a set of elements \(\mathcal {F}\subseteq \mathscr {U}\) (equivalently, \(\mathcal {F}\) is said to be a determining set of \(\mathcal {G}\)) if every strategy \(S= (\pi ,\psi )\) satisfies \(V(\mathcal {G},S) = 1\) iff \(\pi (f)\psi = 0\) for all \(f\in \mathcal {F}\).

3.4.1 Determining Sets of a Game

We next show that any game \(\mathcal {G}\) has two natural determining sets, based on the valid and invalid response sets introduced in Sect. 3.3.4. We first define these sets, then show they are both determining sets for the game \(\mathcal {G}\).

Definition 3.4

For any game \(\mathcal {G}\) with question set \(\mathcal {Q}\) and valid responses \(\mathcal {Y}(\vec {i})\), we introduce a companion set of valid elements of \(\mathscr {U}\),

$$\begin{aligned} \mathcal {Y}:= \left\{ \left( \sum _{\vec {a} \in \mathcal {Y}(\vec {i})} \prod _ \alpha e(\alpha )^{i(\alpha )}_{a(\alpha )}\right) -1 \right\} _{\vec {i} \in \mathcal {Q}}. \end{aligned}$$
(3.25)

Similarly, define the invalid elements \(\mathcal {N}\) by

$$\begin{aligned} \mathcal {N}:= \left\{ \prod _ \alpha e(\alpha )^{i(\alpha )}_{a(\alpha )} \; : \; \vec {i} \in \mathcal {Q}, \ \vec {a} \in \mathcal {N}(\vec {i}) \right\} . \end{aligned}$$
(3.26)

Theorem 3.5

Let \(\mathcal {G}\) be a game and \(\mathcal {Y}\), \(\mathcal {N}\) be as in Definition 3.4. Then, \(\mathcal {G}\) is determined by both \(\mathcal {Y}\) and \(\mathcal {N}\).

Proof

By definition, a strategy \((\pi ,\psi )\) for a nonlocal game is perfect iff

$$\begin{aligned} \psi ^* \pi (\Phi _G)\psi&= 1, \end{aligned}$$
(3.27)

i.e.,

$$\begin{aligned} \psi ^* \frac{1}{\left|\mathcal {Q}\right|} \sum _{\vec {i} \in \mathcal {Q}} \sum _{\vec {a} \in \mathcal {Y}(\vec {i})} \prod _{\alpha \in [k]} \pi \left( e(\alpha )^{i(\alpha )}_{a(\alpha )}\right) \psi&= 1. \end{aligned}$$
(3.28)

This equation says that an average of quantities, each at most 1, equals 1, so each term must equal 1. We now make this precise: for all \(\vec {i} \in \mathcal {Q}\), we have

$$\begin{aligned} \pi \left( \sum _{\vec {a} \in \mathcal {Y}(\vec {i})} \prod _{\alpha \in [k]} e(\alpha )^{i(\alpha )}_{a(\alpha )}\right) \le \pi \left( \sum _{\vec {a} \in [m]^k} \prod _{\alpha \in [k]} e(\alpha )^{i(\alpha )}_{a(\alpha )}\right) = I \end{aligned}$$
(3.29)

and hence

$$\begin{aligned} \psi ^* \pi \left( \sum _{\vec {a} \in \mathcal {Y}(\vec {i})} \prod _{\alpha \in [k]} e(\alpha )^{i(\alpha )}_{a(\alpha )}\right) \psi \le \psi ^* \psi = 1 \end{aligned}$$
(3.30)

and a strategy is perfect iff we have for all \(\vec {i} \in \mathcal {Q}\):

$$\begin{aligned} \psi ^* \pi \left( \sum _{\vec {a} \in \mathcal {Y}(\vec {i})} \prod _{\alpha \in [k]} e(\alpha )^{i(\alpha )}_{a(\alpha )} \right) \psi&= 1, \end{aligned}$$
(3.31)

equivalently,

$$\begin{aligned} \psi ^* \pi \left( \left( \sum _{\vec {a} \in \mathcal {Y}(\vec {i})} \prod _{\alpha \in [k]} e(\alpha )^{i(\alpha )}_{a(\alpha )} \right) - 1 \right) \psi&= 0. \end{aligned}$$
(3.32)

Again using Eq. (3.29), we see

$$\begin{aligned} \pi \left( \sum _{\vec {a} \in \mathcal {Y}(\vec {i})} \prod _{\alpha \in [k]} e(\alpha )^{i(\alpha )}_{a(\alpha )} \right) - I \end{aligned}$$
(3.33)

is negative semidefinite; hence, Eq. (3.32) implies

$$\begin{aligned} \pi \left( \left( \sum _{\vec {a} \in \mathcal {Y}(\vec {i})} \prod _{\alpha \in [k]} e(\alpha )^{i(\alpha )}_{a(\alpha )} \right) - I \right) \psi = 0 \end{aligned}$$
(3.34)

which finishes the first part of the proof.

To convert this condition from terms of \(\mathcal {Y}\) to terms of \(\mathcal {N}\) fix \(\vec {i} \in \mathcal {Q}\) and use

$$\begin{aligned} \sum _{\vec {a} \in \mathcal {Y}(\vec {i}) \cup \mathcal {N}(\vec {i}) } \psi ^* \pi \left( \prod _{\alpha \in [k]} e(\alpha )^{i(\alpha )}_{a(\alpha )} \right) \psi = \psi ^* \psi = 1 . \end{aligned}$$
(3.35)

In words, we are summing over all responses, valid or invalid, to question \(\vec {i}\). Subtract Eq. (3.31) from this to get \(\sum _{\vec {a} \in \mathcal {N}(\vec {i})} \psi ^* \pi \big (\prod _{\alpha \in [k]} e(\alpha )^{i(\alpha )}_{a(\alpha )}\big )\psi = 0\). This is a sum of nonnegative terms, so each is 0:

$$\begin{aligned} \psi ^* \pi \left( \prod _{\alpha \in [k]} e(\alpha )^{i(\alpha )}_{a(\alpha )}\right) \psi = 0 \ \ \ \forall \; \vec {a} \in \mathcal {N}(\vec {i}) \end{aligned}$$
(3.36)

Since the operators \(\pi \left( e(\alpha )^{i}_{a}\right) \) are commuting projections, the product \(\pi \big (\prod _{\alpha \in [k]} e(\alpha )^{i(\alpha )}_{a(\alpha )}\big )\) is itself a projection, hence positive semidefinite, and we get

$$\begin{aligned} \pi \left( \prod _{\alpha \in [k]} e(\alpha )^{i(\alpha )}_{a(\alpha )}\right) \psi = 0 \ \ \ \forall \; \vec {a} \in \mathcal {N}(\vec {i}) , \end{aligned}$$

thus finishing the proof. \(\square \)

3.4.2 Torically Determined Games

While any game \(\mathcal {G}\) is determined by the sets \(\mathcal {Y}\) and \(\mathcal {N}\), there are, in general, many other sets of elements in \(\mathscr {U}\) that determine a game \(\mathcal {G}\). An important class of games in this paper are torically determined games, which we define as games determined by a particularly nice set of binomial elements.

Definition 3.6

A game \(\mathcal {G}\) is called a torically determined game if there exists a group G with \(\mathscr {U}\cong \mathbb {C}[G]\) and \(\mathcal {G}\) is determined by a set of elements

$$\begin{aligned} \mathcal {F}= \{\beta _i g_i - 1\} \end{aligned}$$
(3.37)

with each \(\beta _i \in \mathbb {C}\) and \(g_i \in G\). In this case, we say \(\mathcal {G}\) is torically determined by the set \(\mathcal {F}\) and call the elements \(\beta _i g_i\) clauses of \(\mathcal {F}\).

Traditionally, the term toric ideal refers to ideals generated by binomials, these being the difference of two monomials. The next lemma shows that our use of the term toric is consistent with this.

Lemma 3.7

A game \(\mathcal {G}\) which is determined by a set of elements of the form

$$\begin{aligned} \mathcal {F}' = \{\beta _i g_i - \beta _i' g_i'\}_i \end{aligned}$$
(3.38)

with all \(\beta _i, \beta _i' \in \mathbb {C}\), and \(g_i, g_i' \in G\) for some group G with \(\mathscr {U}\cong \mathbb {C}[G]\) is also torically determined.

Proof

For all \((\pi ,\psi )\) we have

$$\begin{aligned} \pi (\beta _i g_i - \beta _i' g_i') \psi = 0 \qquad \Leftrightarrow \qquad \pi \big (\beta _i (\beta _i')^{-1} (g_i')^{-1} g_i - 1\big ) \psi&= 0 \end{aligned}$$
(3.39)

so \(\mathcal {G}\) is also (torically) determined by the set

$$\begin{aligned} \mathcal {F}= \{\beta _i (\beta _i')^{-1} (g_i')^{-1} g_i - 1\}_i. \end{aligned}$$

\(\square \)

To provide some further intuition about the definition of torically determined games, we introduce the concept of a relaxed game polynomial. Given a game \(\mathcal {G}\) with game polynomial \(\Phi _\mathcal {G}\), we define a relaxed game polynomial for the game to be any element \({\tilde{\Phi }}_\mathcal {G}\in \mathscr {U}\) with the property that for any representation \(\pi \) and state \(\psi \) such that

$$\begin{aligned} \pi (\Phi _\mathcal {G}) \psi = \psi \end{aligned}$$
(3.40)

we have

$$\begin{aligned} \pi ({\tilde{\Phi }}_\mathcal {G}) \psi&= \psi , \end{aligned}$$
(3.41a)
$$\begin{aligned} \left|\phi ^*\pi ({\tilde{\Phi }}_\mathcal {G}) \phi \right|&\le 1 \qquad \text {for all }\ \Vert \phi \Vert \le 1 . \end{aligned}$$
(3.41b)

Informally, scoring a strategy using a relaxed game polynomial produces the correct score if a strategy is perfect for the associated game, but may give the wrong score otherwise. In particular, a relaxed game polynomial need not be self-adjoint, and the “score” coming from the relaxed game polynomial may not even be real when a strategy is not perfect.

Then, we note that a game \(\mathcal {G}\) which is torically determined by a set of elements \(\mathcal {F}\) with clauses \(\mathscr {H}\) has a relaxed game polynomial of the form

$$\begin{aligned} {\tilde{\Phi }}_\mathcal {G}= \frac{1}{\left|\mathscr {H}\right|} \sum _{h\in \mathscr {H}} h. \end{aligned}$$
(3.42)

Similarly, any game \(\mathcal {G}\) with a relaxed game polynomial

$$\begin{aligned} {\tilde{\Phi }}_\mathcal {G}= \frac{1}{\left|\mathscr {H}\right|} \sum _{h\in \mathscr {H}} h\end{aligned}$$
(3.43)

where each \(h\in \mathscr {H}\) is of the form \(\beta _i g_i\) with \(\beta _i \in \mathbb {C}\), \(\left|\beta _i\right| = 1\), and \(g_i \in G\) for some group G with \(\mathscr {U}\cong \mathbb {C}[G]\) is torically determined by the set of elements \(\mathcal {F}= \{h- 1 : h\in \mathscr {H}\}\). Since \({\tilde{\Phi }}_\mathcal {G}\) is an average, the proof of both these statements follows from an argument very similar to the proof of Theorem 3.5.

3.5 XOR and Mod r Games

Now, we practice using some of the machinery introduced in the previous section to describe XOR games, and a natural generalization which we call Mod r games. In this section, we often describe games in a variety of ways (for example using game polynomials, valid or invalid response sets, and relaxed game polynomials) without proving that these definitions are equivalent. In all cases the proof of equivalence amounts to a routine calculation using the definitions given in Sect. 3.3.

3.5.1 XOR Games

XOR games are games with \(m = 2\) responses, which we interpret as a 0 or a 1. We can think of an XOR game with T possible questions, labeled \(\vec {i}_t = (i_t(1), i_t(2), \ldots , i_t(k))\) for \(t \in [T]\), as testing the satisfiability of a system of T equations, where each equation takes the form

$$\begin{aligned} b(1)_{i_{t}(1)} \oplus b(2)_{i_{t}(2)} \oplus \cdots \oplus b(k)_{i_{t}(k)} = s_t, \end{aligned}$$
(3.44)

where the \(b(\alpha )_{i}\) are free variables taking values in \(\{0,1\}\) and each \(s_t \in \{0,1\}\) specifies the winning parity associated with the question vector \(\vec {i}_t\).

The game polynomial of an XOR game takes the form

$$\begin{aligned} \Phi _\mathcal {G}= \frac{1}{T} \sum _{t=1}^{T} \frac{1}{2}\left( 1 + (-1)^{s_t} \prod _{\alpha \in [k]} x^{(\alpha )}_{i_{t}(\alpha )}\right) \end{aligned}$$
(3.45)

with \(T > 0\) some integer, the vectors \(\vec {i}_t \in [n]^k\) and the integers \(s_t \in \{0,1\}\) arbitrary, and the notation \(i_{t}(\alpha )\) referring to the \(\alpha \)-th entry of the vector \(\vec {i}_t\). In anticipation of a connection to torically determined games, we refer to each monomial

$$\begin{aligned} (-1)^{s_t} \prod _{\alpha \in [k]} x^{(\alpha )}_{i_{t}(\alpha )} \end{aligned}$$
(3.46)

as a clause, so the game polynomial above corresponds to a T-clause XOR game.

When working with XOR games it is convenient to remove the constant factor in the game polynomial and rescale the remaining terms, producing a relaxed game polynomial

$$\begin{aligned} {\tilde{\Phi }}_\mathcal {G}= \frac{1}{T} \sum _{t=1}^{T} (-1)^{s_t} \prod _{\alpha \in [k]} x^{(\alpha )}_{i_{t}(\alpha )}. \end{aligned}$$
(3.47)

This relaxed game polynomial computes a quantity known as the bias (meaning \(\text {tr}[\pi ({\tilde{\Phi }}_\mathcal {G})\rho ]\) gives the bias achieved by the strategy defined by \(\pi , \rho \)). For this reason, we also call the relaxed game polynomial for XOR games defined in this way the bias polynomial of an XOR game. Note that an XOR game can be completely specified by describing its bias polynomial. We also note that the bias polynomial formulation of XOR games makes it immediately clear that XOR games are torically determined games, by the discussion in Sect. 3.4.2 and the observation that the elements \(x^{(\alpha )}_{i}\) are also cyclic unitary generators as defined in Sect. 3.3.

3.5.2 Mod r Games

Mod r games are a natural generalization of XOR games where the players have \(m = r\) responses and the valid responses to any question vector \(\vec {j}_t\) are responses which satisfy some linear equation mod r. Formally, we specify a T question Mod r game by a system of equations

$$\begin{aligned} \sum _{\alpha \in [k]} d_t(\alpha )\, b(\alpha )_{j_{t}(\alpha )} \equiv s_t \pmod {r}, \qquad t \in [T]. \end{aligned}$$
(3.48)

The \(d_t(\alpha )\) and \(s_t\) are integers in [r], while the \(b(\alpha )_{j}\) are free variables taking values in [r]. Players’ responses \(a(1), a(2), \ldots , a(k)\) to question vector \(\vec {j}_t\) are winning if setting \(b(\alpha )_{j_{t}(\alpha )} = a(\alpha )\) satisfies the t-th equation in the system of equations. We show in Theorem 3.8 that Mod r games are torically determined by a set of elements of the form

$$\begin{aligned} \mathcal {F}= \left\{ (\exp (- 2\pi i /r))^{s_t} \prod _{\alpha \in [k]} \left( c({\alpha })_{j_{t}(\alpha )}\right) ^{d_t(\alpha )} - 1\right\} _{t \in [T]} \end{aligned}$$
(3.49)

and hence admit a relaxed game polynomial of the form

$$\begin{aligned} {\tilde{\Phi }}_\mathcal {G}= \frac{1}{T} \sum _{t=1}^T (\exp (- 2\pi i /r))^{s_t} \prod _{\alpha \in [k]} \left( c({\alpha })_{j_{t}(\alpha )}\right) ^{d_t(\alpha )}, \end{aligned}$$
(3.50)

with the \(c({\alpha })_{j_{t}(\alpha )}\) cyclic unitaries of order m, as described in Sect. 3.3.1. We also note that a very similar result (Theorem 7.2) is proven for linear systems in Sect. 7.

Theorem 3.8

A Mod r game specified by a system of equations

$$\begin{aligned} \sum _{\alpha \in [k]} d_t(\alpha )\, b(\alpha )_{j_{t}(\alpha )} \equiv s_t \pmod {r}, \qquad t \in [T], \end{aligned}$$
(3.51)

is torically determined by a set of elements of the form

$$\begin{aligned} \mathcal {F}= \left\{ (\exp (-2\pi i /r))^{s_t} \prod _{\alpha \in [k]} \left( c({\alpha })_{j_{t}(\alpha )}\right) ^{d_t(\alpha )} - 1\right\} _{t \in [T]} \end{aligned}$$
(3.52)

Proof

First we write cyclic unitary generators in terms of projectors \(e(\alpha )^{i}_{a}\) and expand out the resulting product to note

$$\begin{aligned} \prod _{\alpha \in [k]} \left( c({\alpha })_{j_{t}(\alpha )}\right) ^{d_t(\alpha )} = \sum _{\vec {a} \in [r]^k} \left( \exp ( \frac{2 \pi i}{r} \sum _{\alpha \in [k]} d_t(\alpha ) a(\alpha ) ) \prod _{\alpha } e(\alpha )^{j_t(\alpha )}_{a(\alpha )} \right) . \end{aligned}$$
(3.53)

Then, we define

$$\begin{aligned} A(t) = \left\{ \vec {a} : \sum _{\alpha \in [k]} a(\alpha )d_t(\alpha ) = s_t \right\} \end{aligned}$$
(3.54)

to be the collection of response vectors which win on the question corresponding to clause \(t \in [T]\) of the mod r game. Then, Eq. (3.53) gives that, for any commuting operator strategy \((\pi ,\psi )\) the condition

$$\begin{aligned} \pi \left( \prod _{\alpha \in [k]} \left( c({\alpha })_{j_{t}(\alpha )}\right) ^{d_t(\alpha )}\right) \psi = \exp (\frac{2 \pi i s_t}{r}) \psi \end{aligned}$$
(3.55)

is equivalent to the condition

$$\begin{aligned} \pi \left( \sum _{\vec {a} \in [r]^k} \prod _{\alpha } e(\alpha )^{j_t(\alpha )}_{a(\alpha )} \right) \psi = \pi \left( \sum _{\vec {a} \in A(t)} \prod _{\alpha } e(\alpha )^{j_t(\alpha )}_{a(\alpha )} \right) \psi = \psi \end{aligned}$$
(3.56)

and thus the condition

$$\begin{aligned} \pi \left( (\exp (-2\pi i /r))^{s_t} \prod _{\alpha \in [k]} \left( c({\alpha })_{j_{t}(\alpha )}\right) ^{d_t(\alpha )}\right) \psi = \psi \end{aligned}$$
(3.57)

ensures the players’ response is always winning for question \(t \in [T]\). The result follows. \(\square \)
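
The expansion in Eq. (3.53) can be checked numerically in a small concrete case. The sketch below (our own; the paper works abstractly) takes \(r = 3\), \(k = 2\), realizes each cyclic unitary as a \(3 \times 3\) clock matrix acting on its own tensor factor so the players' operators commute, and compares both sides:

```python
import numpy as np

# Sketch: verify Eq. (3.53) for r = 3, k = 2, using the "clock" matrix as a
# concrete order-3 cyclic unitary and its spectral projections e_a = |a><a|.
r, k = 3, 2
w = np.exp(2j * np.pi / r)

# Spectral projections and the cyclic unitary c = sum_a w^a e_a.
e = [np.diag([1.0 + 0j if b == a else 0 for b in range(r)]) for a in range(r)]
c = sum(w**a * e[a] for a in range(r))

# Two players act on separate tensor factors, so their operators commute.
d = [2, 1]  # exponents d_t(alpha); any choice works
lhs = np.kron(np.linalg.matrix_power(c, d[0]),
              np.linalg.matrix_power(c, d[1]))

# Right-hand side of Eq. (3.53): phases times products of projections.
rhs = np.zeros((r**k, r**k), dtype=complex)
for a1 in range(r):
    for a2 in range(r):
        phase = np.exp(2j * np.pi / r * (d[0] * a1 + d[1] * a2))
        rhs += phase * np.kron(e[a1], e[a2])

assert np.allclose(lhs, rhs)
print("Eq. (3.53) verified for r=3, k=2")
```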

3.5.3 Example

To provide a concrete example of the various ways of characterizing perfect games we discuss the GHZ game. This is a three-player XOR game with bias polynomial

$$\begin{aligned} \Phi _{GHZ} = \frac{1}{4}\left( x^{(1)}_{0}x^{(2)}_{0}x^{(3)}_{0} - x^{(1)}_{1}x^{(2)}_{1}x^{(3)}_{0} - x^{(1)}_{1}x^{(2)}_{0}x^{(3)}_{1} - x^{(1)}_{0}x^{(2)}_{1}x^{(3)}_{1}\right) \end{aligned}$$
(3.58)

Then, equivalently, the GHZ game is determined by the set of elements

$$\begin{aligned} \{x^{(1)}_{0}x^{(2)}_{0}x^{(3)}_{0} - 1, -x^{(1)}_{1}x^{(2)}_{1}x^{(3)}_{0} - 1, -x^{(1)}_{1}x^{(2)}_{0}x^{(3)}_{1} - 1, -x^{(1)}_{0}x^{(2)}_{1}x^{(3)}_{1} - 1 \} \end{aligned}$$
(3.59)

and has a perfect commuting operator strategy iff there exists a Hilbert space \({\mathcal {H}}\), state \(\psi \in {\mathcal {H}}\), and a \(*\)-representation \(\pi : \mathscr {U}\rightarrow \mathcal {B}({\mathcal {H}})\) satisfying

$$\begin{aligned}&\pi \left( x^{(1)}_{0}x^{(2)}_{0}x^{(3)}_{0}\right) \psi = \psi , \end{aligned}$$
(3.60a)
$$\begin{aligned}&\pi \left( x^{(1)}_{0}x^{(2)}_{1}x^{(3)}_{1}\right) \psi = \pi \left( x^{(1)}_{1}x^{(2)}_{0}x^{(3)}_{1}\right) \psi = \pi \left( x^{(1)}_{1}x^{(2)}_{1}x^{(3)}_{0}\right) \psi = -\psi . \end{aligned}$$
(3.60b)

Such a representation can be found by taking \({\mathcal {H}}\) to be an eight-dimensional Hilbert space, \(\psi \) a 3 qubit GHZ state, and letting \(\pi \) map elements \(x^{(\alpha )}_{i}\) to the standard measurement operators for the GHZ game or, explicitly

$$\begin{aligned} \pi \left( x^{(1)}_{0}\right)&= \sigma _X \otimes I \otimes I&\pi \left( x^{(1)}_{1}\right)&= \sigma _Y \otimes I \otimes I \end{aligned}$$
(3.61a)
$$\begin{aligned} \pi \left( x^{(2)}_{0}\right)&= I \otimes \sigma _X \otimes I&\pi \left( x^{(2)}_{1}\right)&= I \otimes \sigma _Y \otimes I \end{aligned}$$
(3.61b)
$$\begin{aligned} \pi \left( x^{(3)}_{0}\right)&= I \otimes I \otimes \sigma _X&\pi \left( x^{(3)}_{1}\right)&= I \otimes I \otimes \sigma _Y \end{aligned}$$
(3.61c)

where \(\sigma _X\) and \(\sigma _Y\) are the Pauli X and Y operators, respectively, and I denotes the \(2 \times 2\) identity matrix.

While it is certainly easiest to describe the GHZ game using the game polynomial formulation, we can also describe perfect strategies for the game using the language of valid and invalid response sets. Using this language the question set of the GHZ game is given by \(\mathcal {Q}_{GHZ} = \{(0,0,0),(0,1,1),(1,0,1), (1,1,0)\}\) with valid response sets

$$\begin{aligned} \mathcal {Y}_{GHZ}(0,0,0)&= \{\textit{EVEN}\}, \end{aligned}$$
(3.62a)
$$\begin{aligned} \mathcal {Y}_{GHZ}(0,1,1)&= \mathcal {Y}_{GHZ}(1,0,1) = \mathcal {Y}_{GHZ}(1,1,0) = \{\textit{ODD}\} , \end{aligned}$$
(3.62b)

where \(\{\textit{EVEN}\}\) and \(\{\textit{ODD}\}\) denote the set of all 3 bit response strings containing an even and odd number of ones, respectively. The invalid response sets are described similarly, but with \(\{\textit{EVEN}\}\) and \(\{\textit{ODD}\}\) swapped.

We can obtain projectors corresponding to a perfect strategy for the GHZ game using the same representation as above, with

$$\begin{aligned} E(1)^{0}_{0}&= \pi \left( e(1)^{0}_{0}\right) = \pi \left( \frac{1}{2}\left( 1 + x^{(1)}_{0} \right) \right) , \end{aligned}$$
(3.63a)
$$\begin{aligned} E(1)^{0}_{1}&= \pi \left( e(1)^{0}_{1}\right) = \pi \left( \frac{1}{2}\left( 1 - x^{(1)}_{0} \right) \right) , \end{aligned}$$
(3.63b)

and so on. We leave it as an exercise for the reader to check that the projectors defined in this way satisfy the valid and invalid response perfect game characterization conditions laid out in Theorem 3.5.
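
The exercise can be carried out numerically. The following sketch (ours, using numpy) checks Eqs. (3.60a) and (3.60b) for the standard GHZ strategy and then verifies that every invalid response projector annihilates the GHZ state, as Theorem 3.5 requires:

```python
import numpy as np

# Sketch of the exercise (not from the paper): verify Eqs. (3.60a)-(3.60b)
# and the invalid-response conditions of Theorem 3.5 for the GHZ strategy.
I2 = np.eye(2, dtype=complex)
sX = np.array([[0, 1], [1, 0]], dtype=complex)
sY = np.array([[0, -1j], [1j, 0]], dtype=complex)

def op(P, player):
    """Embed the single-qubit operator P on tensor factor `player` (0, 1, 2)."""
    factors = [I2, I2, I2]
    factors[player] = P
    return np.kron(np.kron(factors[0], factors[1]), factors[2])

# pi(x^{(alpha)}_i): question 0 -> sigma_X, question 1 -> sigma_Y.
x = {(alpha, i): op([sX, sY][i], alpha) for alpha in range(3) for i in range(2)}

# GHZ state (|000> + |111>)/sqrt(2).
psi = np.zeros(8, dtype=complex)
psi[0] = psi[7] = 1 / np.sqrt(2)

# Eqs. (3.60a) and (3.60b): each clause fixes psi up to the winning sign.
assert np.allclose(x[0, 0] @ x[1, 0] @ x[2, 0] @ psi, psi)
for (i, j, l) in [(0, 1, 1), (1, 0, 1), (1, 1, 0)]:
    assert np.allclose(x[0, i] @ x[1, j] @ x[2, l] @ psi, -psi)

# Projectors e(alpha)^i_a = (1 + (-1)^a x^{(alpha)}_i)/2, as in Eq. (3.63).
def proj(alpha, i, a):
    return (np.eye(8) + (-1) ** a * x[alpha, i]) / 2

# Theorem 3.5: every invalid (wrong-parity) response annihilates psi.
questions = [((0, 0, 0), 0), ((0, 1, 1), 1), ((1, 0, 1), 1), ((1, 1, 0), 1)]
for (i, j, l), parity in questions:
    for a1 in range(2):
        for a2 in range(2):
            for a3 in range(2):
                if (a1 + a2 + a3) % 2 != parity:  # invalid response
                    v = proj(0, i, a1) @ proj(1, j, a2) @ proj(2, l, a3) @ psi
                    assert np.allclose(v, 0)
print("GHZ strategy is perfect")
```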

4 Nullstellensätze for Perfect Nonlocal Games

Nullstellensätze are algebraic certificates for polynomial equations to be solvable or, even stronger, for one set of polynomial equations to have solutions contained in the set of solutions to another. Solutions to a polynomial equation such as \(X^2 =I\) are often called zeros of the polynomial \(p(x):= x^2 -1\).

4.1 Three Types of Zeros

For nc polynomials, there are three natural types of zero: hard zeros, directional zeros and determinantal zeros. These have been studied in the mathematics community for several decades.

Let us illustrate directional zeros since that is what we work with most in this paper. Given \(f \in \mathbb {C}\langle x\rangle \) define the directional zero set

$$\begin{aligned} {{\mathcal {Z}}}_{\text {dir}}(f):=\, \{(X, \psi )\mid X_j \in \mathcal {B}({\mathcal {H}}), \ 0\ne \psi \in {\mathcal {H}}, \ f(X)\psi =0, \ {\mathcal {H}} \text { a Hilbert space}\}. \end{aligned}$$
(4.1)

In the language of representations we have

$$\begin{aligned} {{\mathcal {Z}}}_{\text {dir}}(f):= \{ (\pi , \psi ) \mid \pi (f)\psi =0, \ \pi :\mathbb {C}\langle x\rangle \rightarrow \mathcal {B}({\mathcal {H}}) \text { representation}, \ 0\ne \psi \in {\mathcal {H}}\}. \end{aligned}$$
(4.2)

Quantum strategies need refinements of this setup; for one, we have not yet captured crucial algebraic relationships, such as the fact that all of our operators live inside a given algebra. Thus, for an arbitrary algebra \({\mathcal {A}}\) and \(f\in {\mathcal {A}}\) we define

$$\begin{aligned} {{\mathcal {Z}}}_{\text {dir}}^{{\mathcal {A}}} (f)= \{(\pi , \psi ) \mid \pi (f)\psi =0, \ \pi :{\mathcal {A}}\rightarrow \mathcal {B}({\mathcal {H}}) \text { representation}, \ 0\ne \psi \in {\mathcal {H}}\}. \end{aligned}$$
(4.3)

Essential to strategies is the intersection of this with \(*\)-representations in the context of \(*\)-algebras \({\mathcal {A}}\):

$$\begin{aligned} {{\mathcal {Z}}}_{\text {dir}}^{\textrm{re}, {\mathcal {A}}} (f) = {{\mathcal {Z}}}_{\text {dir}}^{ {\mathcal {A}}} (f) \cap \{(\pi ,\psi )\mid \pi *\text {-representation}\}. \end{aligned}$$
(4.4)

So far, we have motivated and given the definition of the set of directional zeros. With motivation in the same spirit, we define hard and determinantal zeros by

$$\begin{aligned} \begin{aligned} {\mathcal {Z}}_{\text {hard}}^{ {\mathcal {A}}} (f)&= \{\pi \mid \pi (f)=0, \ \pi :{\mathcal {A}}\rightarrow \mathcal {B}({\mathcal {H}}) \text { representation} \} \\ {\mathcal {Z}}_{\det }^{{\mathcal {A}}} (f)&= \{\pi \mid \det [\pi (f)]=0, \ \pi :{\mathcal {A}}\rightarrow \mathcal {B}({\mathcal {H}}) \text { representation with } \dim {\mathcal {H}}<\infty \}. \end{aligned} \end{aligned}$$

Directional and hard zeros (cf. Sect. 8) are central to quantum games. The definition of determinantal zeros is included here for perspective only.
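
A toy example (ours) separates the three notions: for \(p(x) = x^2 - 1\) and \(X = {{\,\textrm{diag}\,}}(1, -1, 0)\), the pair \((X, e_1)\) gives a directional zero and X gives a determinantal zero, but X is not a hard zero:

```python
import numpy as np

# Toy illustration (not from the paper): the three types of zero for
# p(x) = x^2 - 1 evaluated at X = diag(1, -1, 0).
X = np.diag([1.0, -1.0, 0.0])
pX = X @ X - np.eye(3)          # p(X) = diag(0, 0, -1)

# Not a hard zero: p(X) != 0.
assert not np.allclose(pX, 0)

# A determinantal zero: det p(X) = 0.
assert np.isclose(np.linalg.det(pX), 0)

# A directional zero: p(X) psi = 0 for the direction psi = e_1.
psi = np.array([1.0, 0.0, 0.0])
assert np.allclose(pX @ psi, 0)
print("directional and determinantal, but not hard")
```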

Of course, one also has the \(*\)-representation version of these:

$$\begin{aligned} {\mathcal {Z}}_{\text {hard}}^{\text {re}, {\mathcal {A}}} \qquad \text {and} \qquad {\mathcal {Z}}_{\det }^{\text {re}, {\mathcal {A}}} \end{aligned}$$

Also for a subset \(F \subseteq \mathbb {C}\langle x\rangle \) we define

$$\begin{aligned} {\mathcal {Z}}(F)= \ \bigcap _{f \in F} {\mathcal {Z}}(f) . \end{aligned}$$

4.1.1 Example

In Sect. 3.4, we described perfect commuting operator strategies for nonlocal games as \(*\)-representations of an algebra \(\mathscr {U}\) satisfying certain conditions, which we now redescribe using the language of nc polynomial zeros introduced in the section above. For concreteness, we consider the GHZ game introduced in Sect. 3.5.3.

As discussed in Sect. 3.5.3, the GHZ game is determined by a set of polynomials

$$\begin{aligned} \mathcal {F}= \{x^{(1)}_{0}x^{(2)}_{0}x^{(3)}_{0} - 1,\ x^{(1)}_{0}x^{(2)}_{1}x^{(3)}_{1} + 1 ,\ x^{(1)}_{1}x^{(2)}_{0}x^{(3)}_{1} +1,\ x^{(1)}_{1}x^{(2)}_{1}x^{(3)}_{0} + 1 \}. \end{aligned}$$
(4.5)

Equivalently, in the language of directional zeros we see that the GHZ game has a perfect commuting operator strategy iff the directional zero set \({{\mathcal {Z}}}_{\text {dir}}^{\textrm{re}, \mathscr {U}} (\mathcal {F})\) is nonempty. Furthermore, entries in this set correspond to representations and states \((\pi ,\psi )\) defining perfect commuting operator strategies.

For an example using hard zeros see Sect. 8.3.

4.2 A General Noncommutative Nullstellensatz

Let \({\mathcal {A}}\) be a \(*\)-algebra. We equip \({\mathcal {A}}\) with a certain topology called the finest locally convex topology. For our purposes, one does not need to worry about it since we will soon move to a much less general “weak” Nullstellensatz. For a subset \(C \subseteq {\mathcal {A}}\) let \({{\,\textrm{cl}\,}}(C)\) denote the closure of C in the finest locally convex topology. We use \({{\,\textrm{SOS}\,}}_C\) to denote the set of all sums of elements \(u^* u\) with \(u \in C\).

The results of Sect. 5 of [6] (see also [5]), adapted to our use case, are as follows.

Theorem 4.1

Suppose that \({{\mathfrak {L}}}\) is the left ideal of \({\mathcal {A}}\) generated by \(F\subseteq {\mathcal {A}}\). Then, for \(a\in {\mathcal {A}}\) the following are equivalent:

  (i)

    \(\pi (a)\psi =0\) for every \(*\)-representation \(\pi : {\mathcal {A}}\rightarrow \mathcal {B}({\mathcal {H}})\) for some (possibly infinite-dimensional) Hilbert space \({\mathcal {H}}\) and vector \(\psi \in {\mathcal {H}}\) such that \(\pi (f)\psi =0\) for all \(f\in F\);

  (i’)

    \({{\mathcal {Z}}}_{\text {dir}}^{\textrm{re}, {\mathcal {A}}}(F)\subseteq {{\mathcal {Z}}}_{\text {dir}}^{\textrm{re}, {\mathcal {A}}}(a)\);

  (ii)

    \( -a^* a \in {{\,\textrm{cl}\,}}[{{\,\textrm{SOS}\,}}_{\mathcal {A}}- {{\,\textrm{SOS}\,}}_{{{\mathfrak {L}}}}]; \)

  (iii)

    \( -a^* a \in {{\,\textrm{cl}\,}}({{\,\textrm{SOS}\,}}_{\mathcal {A}}- {{\,\textrm{cone}\,}}(S) ), \) where \(S= \ \{f^*f\mid f\in F \}\);

  (iv)

    \( -a^* a \in {{\,\textrm{cl}\,}}[ {{\,\textrm{SOS}\,}}_{\mathcal {A}}+ {{\mathfrak {L}}}+ {{\mathfrak {L}}}^*]. \)

When \({\mathcal {A}}=\mathbb {C}\langle x\rangle \) is the free algebra and \({{\mathfrak {L}}}\) is finitely generated, \(*\)-representations \(\pi \) into finite-dimensional Hilbert spaces suffice in Item (i).

Corollary 4.2

Suppose \({\mathfrak {I}}\subseteq {\mathcal {A}}\) is a \(*\)-closed two sided ideal. Then, the following are equivalent for \(a\in {\mathcal {A}}\):

  (i)

    \(\pi (a)=0\) for every \(*\)-representation \(\pi \) such that \(\pi (f)=0\) for all \(f \in {\mathfrak {I}}\);

  (i’)

    \({{\mathcal {Z}}}_{\text {hard}}^{\textrm{re}, {\mathcal {A}}}({\mathfrak {I}})\subseteq {{\mathcal {Z}}}_{\text {hard}}^{\textrm{re}, {\mathcal {A}}}(a)\);

  (ii)

    \( -a^* a \in {{\,\textrm{cl}\,}}[ {{\,\textrm{SOS}\,}}_{\mathcal {A}}+ {\mathfrak {I}}].\)

Proof

Items (i) and (i)’ are equivalent by the definition of \({{\mathcal {Z}}}_{\text {hard}}^{\textrm{re}, {\mathcal {A}}}\).

In the context of Theorem 4.1, \({{\mathfrak {L}}}:={\mathfrak {I}}={{\mathfrak {L}}}^*\) is a left (and right) ideal, so the algebraic certificate in Item (ii) is

$$\begin{aligned} -a^* a \in {{\,\textrm{cl}\,}}[ {{\,\textrm{SOS}\,}}_{\mathcal {A}}+ {\mathfrak {I}}] = {{\,\textrm{cl}\,}}[ {{\,\textrm{SOS}\,}}_{\mathcal {A}}+ {{\mathfrak {L}}}+{{\mathfrak {L}}}^*]. \end{aligned}$$
(4.6)

Thus, if Item (ii) fails to hold, then by Theorem 4.1 there is a \(*\)-representation \(\pi :{\mathcal {A}}\rightarrow \mathcal {B}({\mathcal {H}})\) and \(0\ne \psi \in {\mathcal {H}}\) with \(\pi ({\mathfrak {I}})\psi =\{0\}\), and \(\pi (a)\psi \ne 0\).

Now consider the Hilbert space \({\check{{\mathcal {H}}}}:=[\pi ({\mathcal {A}})\psi ]\subseteq {\mathcal {H}}\). By definition, \(\pi ({\mathcal {A}}){\check{{\mathcal {H}}}}\subseteq {\check{{\mathcal {H}}}}\), so \(\pi \) induces a \(*\)-representation \({\check{\pi }}:{\mathcal {A}}\rightarrow \mathcal {B}({\check{{\mathcal {H}}}})\). By construction, \({\check{\pi }}({\mathfrak {I}})=\{0\}\) (this uses that \({\mathfrak {I}}\) is a right ideal, too), but \({\check{\pi }}(a)\psi \ne 0\), so \({\check{\pi }}(a)\ne 0\). \(\square \)

Proof of Theorem 4.1

Here, we give a proof that ties the parts of the theorem to the corresponding results in [6]. This offers little intuition, so in Sect. 4.2.1 we sketch the idea behind the proof.

Item (i) and Item (i)’ are equivalent by the definition of \({{\mathcal {Z}}}_{\text {dir}}^{\textrm{re}, {\mathcal {A}}}\). The equivalence between Item (i) and Item (ii) is [6, Theorem 5.1]. The equivalence (i) \(\Leftrightarrow \) (iv) is [6, Corollary 5.3]. Finally, (i) \(\Leftrightarrow \) (iii) now follows from [6, Proposition 5.2].

The finite-dimensional assertion is proved constructively and this is the major part of [6]; see [6, Proposition 6.8 and Theorem 2.1]. \(\square \)

Remark

In general, closures in Theorem 4.1 and Corollary 4.2 are needed as can be shown by employing the Weyl algebra.

4.2.1 Intuition Behind the Proof of Theorem 4.1

Here is a special case of Theorem 4.1, included because a sketch of its proof supplies the reader with intuition. Moreover, this lesser level of generality is all that is needed for quantum games, so this is the version referenced later.

Theorem 4.3

Suppose

  1. (1)

    \({\mathcal {A}}\) is a \(*\)-algebra, where \({{\,\textrm{SOS}\,}}_{{\mathcal {A}}}\) is Archimedean in the sense that for every \(a\in {\mathcal {A}}\) there is \(\eta \in \mathbb N\) with \(\eta -a^*a\in {{\,\textrm{SOS}\,}}_{{\mathcal {A}}}\);Footnote 7

  2. (2)

    \({{\mathfrak {L}}}\subseteq {\mathcal {A}}\) is a left ideal.

Then, the following are equivalent:

  1. (i)

    there exist a \(*\)-representation \(\pi :{\mathcal {A}}\rightarrow \mathcal {B}({\mathcal {H}})\) and \(0\ne \psi \in {\mathcal {H}}\) satisfying

    $$\begin{aligned} \pi (f) \psi = 0 \end{aligned}$$
    (4.7)

    for all \(f \in {{\mathfrak {L}}}\);

  2. (i)’

    \({{\mathcal {Z}}}_{\text {dir}}^{\textrm{re}, {\mathcal {A}}}({{\mathfrak {L}}})\ne \varnothing \);

  3. (ii)

    \(-1 \not \in {{\,\textrm{SOS}\,}}_{{\mathcal {A}}} + {{\mathfrak {L}}}+ {{\mathfrak {L}}}^*.\)

Proof

As before, Items (i) and (i)’ are equivalent by the definition on \({{\mathcal {Z}}}_{\text {dir}}^{\textrm{re}, {\mathcal {A}}}\). We thus establish (i) \(\Leftrightarrow \) (ii).

Easy side: suppose \(-1 = \sigma + f + g^*\) with \(\sigma \in {{\,\textrm{SOS}\,}}_{{\mathcal {A}}}\) and \(f,g\in {{\mathfrak {L}}}\). If \(\pi ,\psi \) as in Eq. (4.7) exist, then \(-\Vert \psi \Vert ^2 = \langle \pi (\sigma )\psi \mid \psi \rangle + \langle \pi (f)\psi \mid \psi \rangle + \overline{\langle \pi (g)\psi \mid \psi \rangle } = \langle \pi (\sigma )\psi \mid \psi \rangle \ge 0\), contradicting \(\psi \ne 0\).

Harder side: suppose \(-1 \not \in {{\,\textrm{SOS}\,}}_{{\mathcal {A}}} + {{\mathfrak {L}}}+ {{\mathfrak {L}}}^*\). By the Hahn–Banach theorem (version due to Eidelheit–Kakutani [1, Theorem III.1.7]), there is a linear functional \(L:{\mathcal {A}}\rightarrow \mathbb {C}\) satisfying

$$\begin{aligned} L(1)= 1, \qquad L({{\,\textrm{SOS}\,}}_{{\mathcal {A}}} + {{\mathfrak {L}}}+ {{\mathfrak {L}}}^*) \subseteq \mathbb {R}_{\ge 0}. \end{aligned}$$
(4.8)

We remark that the strict separation is automatic since by Archimedeanity \(L(1)\ne 0\).

Since \({{\mathfrak {L}}}\) is a subspace, the second property of Eq. (4.8) implies \(L({{\mathfrak {L}}})=\{0\}\). Likewise, \(L(f)\ge 0\) for any \(f\in {{\,\textrm{SOS}\,}}_{{\mathcal {A}}}\). We remark that \(L(f)^* = L(f^*)\) for all \(f\in {\mathcal {A}}\). Indeed, because \({\mathcal {A}}\) is Archimedean, every self-adjoint \(g=g^*\in {\mathcal {A}}\) is bounded above: there is \(\eta \in \mathbb N\) with \(\eta -g\in {{\,\textrm{SOS}\,}}_{{\mathcal {A}}}\). Thus, \(L(g)\in \mathbb {R}\). Then, we write \(h\in {\mathcal {A}}\) as a linear combination of self-adjoint elements,

$$\begin{aligned} h=\frac{h+h^*}{2}+i\frac{h-h^*}{2i}, \end{aligned}$$

to get \(L(h^*)=L(h)^*\).

Now perform the GNS construction. Define the sesquilinear form

$$\begin{aligned} \langle a \mid b\rangle := L(b^*a) \end{aligned}$$
(4.9)

on \({\mathcal {A}}\). Set \(N:=\{ a\in {\mathcal {A}}\mid L(a^*a)=0 \}\). By the Cauchy–Schwarz inequality for semi-scalar products,

$$\begin{aligned} 0 \le L(a^*r^* ra)^2 \le L(a^*a) L(a^* r^*r r^*r a) =0, \end{aligned}$$

so N is a left ideal. Since \(1\not \in N,\) \(N \not = {\mathcal {A}}\). Form the quotient space \({\check{{\mathcal {H}}}}={\mathcal {A}}/N\). Then, Eq. (4.9) induces a scalar product on \({\check{{\mathcal {H}}}}\). We complete it to the Hilbert space \({\mathcal {H}}\). Let \( \phi : {\mathcal {A}}\rightarrow {\mathcal {H}}\),

$$\begin{aligned} \phi (a):= a + N \end{aligned}$$

be the quotient map and let \( \psi := \phi (1).\)

Define a \(*\)-representation \(\pi \) of \({\mathcal {A}}\) on \({\mathcal {H}}\) by

$$\begin{aligned} \pi (a)(p+N):= ap+N. \end{aligned}$$

Since N is a left ideal, this is well defined. It is clear that \(\pi \) is a representation. It also intertwines the involution:

$$\begin{aligned} \begin{aligned} \langle \pi (a^*)(p+N) \mid q+N \rangle&= \langle a^*p+N \mid q+N \rangle = L(q^*a^*p), \\ \langle p+N \mid \pi (a)(q+N) \rangle&= \langle p+N \mid aq+N \rangle = L(q^*a^*p). \end{aligned} \end{aligned}$$

Finally, \(\pi \) maps into \(\mathcal {B}({\mathcal {H}})\). Assume \(p\in {\mathcal {A}}\) is such that \(\langle p+N \mid p+N\rangle =L(p^*p)=1\), and let \(a\in {\mathcal {A}}\). By Archimedeanity, there is \(\eta \in \mathbb N\) with \(\eta -a^*a\in {{\,\textrm{SOS}\,}}_{{\mathcal {A}}}\). Then, we have

$$\begin{aligned} \begin{aligned} 0&\le \langle \pi (a)(p+N)\mid \pi (a)(p+N)\rangle \\ {}&= \langle ap+N \mid ap+N\rangle = L(p^*a^*ap) \le \eta L(p^*p)=\eta , \end{aligned} \end{aligned}$$

whence \(\Vert \pi (a)\Vert \le \sqrt{\eta }\). Thus, \(\pi :{\mathcal {A}}\rightarrow \mathcal {B}({\mathcal {H}})\) is a \(*\)-representation.

It remains to verify Eq. (4.7). For \(f\in {{\mathfrak {L}}}\) we have

$$\begin{aligned} \pi (f)\psi = \pi (f)(1+N) = f+N. \end{aligned}$$

Since \(L({{\mathfrak {L}}})=\{0\}\) and \({{\mathfrak {L}}}^* {{\mathfrak {L}}}\subseteq {{\mathfrak {L}}},\) we have \({{\mathfrak {L}}}\subseteq N\). Thus, \(\pi (f)\psi =f+N=0\), as desired. \(\square \)
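For concreteness, here is a small numerical sketch of these GNS steps (our own illustration, with toy choices not from the text: \({\mathcal {A}}=\mathbb {C}[\mathbb {Z}_2]\) with generator \(a\), the functional \(L(\alpha +\beta a)=\alpha +\beta \), and \({{\mathfrak {L}}}\) generated by \(f=a-1\)):

```python
import numpy as np

# C[Z_2] with basis (1, a), a^2 = 1, a* = a.  An element is a coefficient
# vector v = (v0, v1) standing for v0*1 + v1*a.
def mult(u, v):
    # (u0 + u1 a)(v0 + v1 a) = (u0 v0 + u1 v1) + (u0 v1 + u1 v0) a
    return np.array([u[0] * v[0] + u[1] * v[1], u[0] * v[1] + u[1] * v[0]])

def star(v):
    return np.conj(v)  # a* = a, so * only conjugates coefficients

def L(v):
    # L(1) = L(a) = 1; then L(x*x) = |x0 + x1|^2 >= 0, so L is SOS-positive
    return v[0] + v[1]

basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]  # 1 and a

# Gram matrix of the semi-scalar product (4.9): G[i, j] = L(b_i* b_j)
G = np.array([[L(mult(star(bi), bj)) for bj in basis] for bi in basis])

# N = {x : L(x*x) = 0} is the null space of G, here spanned by 1 - a
eigvals, eigvecs = np.linalg.eigh(G)
quot_dim = int(np.sum(np.abs(eigvals) > 1e-9))

# f = a - 1 generates the left ideal and lies in N
f = np.array([-1.0, 1.0])
```

Here the quotient is one-dimensional, \(\pi (a)\) acts as the identity, \(\psi =\phi (1)\), and \(f\in N\), so \(\pi (f)\psi =0\) as Theorem 4.3 predicts.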

Example 4.4

An appealing class of algebras to which this theorem applies is that of group algebras \(\mathbb {C}[G]\). Indeed, for every group element \(g\in G\subseteq \mathbb {C}[G]\) we have \(1-g^*g=0\in {{\,\textrm{SOS}\,}}\). Since the set of bounded elements

$$\begin{aligned} H=\{f\in \mathbb {C}[G]\mid \exists \eta \in \mathbb N:\, \eta -f^*f\in {{\,\textrm{SOS}\,}}\} \end{aligned}$$

is a \(*\)-subalgebra [32] containing G, we must have \(H=\mathbb {C}[G]\) and thus \({{\,\textrm{SOS}\,}}\) is Archimedean in \(\mathbb {C}[G]\).

Corollary 4.5

Suppose \({\mathcal {A}}\) is a \(*\)-algebra, where \({{\,\textrm{SOS}\,}}_{{\mathcal {A}}}\) is Archimedean, and let \({\mathfrak {I}}\subseteq {\mathcal {A}}\) be a \(*\)-closed two-sided ideal. Then, the following are equivalent:

  1. (i)

there is a nonzero \(*\)-representation \(\pi \) of \({\mathcal {A}}\) such that \(\pi ({\mathfrak {I}})=\{0\}\);

  2. (i)’

    \({{\mathcal {Z}}}_{\text {hard}}^{\textrm{re},{\mathcal {A}}}({\mathfrak {I}})\ne \varnothing \);

  3. (ii)

    \( -1 \not \in {{\,\textrm{SOS}\,}}_{\mathcal {A}}+ {\mathfrak {I}}.\)

Proof

The proof is the same as that of Corollary 4.2, just that we use Theorem 4.3 instead of Theorem 4.1. \(\square \)

4.3 Nullstellensätze and Perfect Games

Combining the Nullstellensatz of Sect. 4.2 with determining sets defined in Sect. 3.4 immediately gives a characterization of games with perfect commuting operator strategies in terms of left ideals and sums of squares of the universal game algebra \(\mathscr {U}\).

Theorem 4.6

For a nonlocal game \(\mathcal {G}\) determined by a set \(\mathcal {F}\subseteq \mathscr {U}\) the following are equivalent:

  1. (i)

    \(\mathcal {G}\) has a perfect commuting operator strategy;

  2. (ii)

    \(-1 \notin {{\mathfrak {L}}}\left( \mathcal {F}\right) + {{\mathfrak {L}}}\left( \mathcal {F}\right) ^* + {{\,\textrm{SOS}\,}}_\mathscr {U}\).

Proof

Immediate from Theorem 3.5 and Definition 3.3. \(\square \)

An immediate corollary of Theorem 4.6 comes from recalling the notions of determining sets \(\mathcal {Y}, \mathcal {N}\) defined in Sect. 3.4.1.

Corollary 4.7

Let \(\mathcal {G}\) be a nonlocal game, and \(\mathcal {Y}, \mathcal {N}\) be the determining sets associated with the game. Then, the following are equivalent:

  1. (i)

    \(\omega ^*_{\textrm{co}}(\mathcal {G}) = 1\);

  2. (ii)

    \(-1 \notin {{\mathfrak {L}}}\left( \mathcal {Y}\right) + {{\mathfrak {L}}}\left( \mathcal {Y}\right) ^* + {{\,\textrm{SOS}\,}}_{\mathscr {U}}\);

  3. (iii)

    \(-1 \notin {{\mathfrak {L}}}\left( \mathcal {N}\right) + {{\mathfrak {L}}}\left( \mathcal {N}\right) ^* + {{\,\textrm{SOS}\,}}_{\mathscr {U}}\).

Proof

Immediate from Theorems 3.5 and 4.6. \(\square \)

This corollary applies to all games (according to the definitions here) and characterizes which games do vs. do not have a perfect commuting operator strategy. Unfortunately, the freedom given by the SOS terms in this algebraic certificate can make the theorem hard to use. Hence, we turn next to situations with no SOS term.

5 Nullstellensatz Without SOS and Subgroup Membership

It is helpful to divide the perfect game condition into two sub-questions. The first is checking whether \( -1 \in {{\mathfrak {L}}}+ {{\mathfrak {L}}}^* \), that is (since \({{\mathfrak {L}}}+{{\mathfrak {L}}}^*\) is a subspace), whether

$$\begin{aligned} 1 \in {{\mathfrak {L}}}+ {{\mathfrak {L}}}^*. \end{aligned}$$

Intuitively, this question feels “algebraic,” and we will show in Sect. 5.2 that in special cases it reduces to a subgroup membership problem.

The second problem is checking whether

$$\begin{aligned} -1 \in {{\mathfrak {L}}}+ {{\mathfrak {L}}}^* + {{\,\textrm{SOS}\,}}_\mathscr {U}\end{aligned}$$
(5.1)

given that

$$\begin{aligned} 1 \notin {{\mathfrak {L}}}+ {{\mathfrak {L}}}^*. \end{aligned}$$
(5.2)

This question is more analytic and adds substantial complexity to applications. In special cases, we have that

$$\begin{aligned} 1 \notin {{\mathfrak {L}}}+ {{\mathfrak {L}}}^* \implies -1 \notin {{\mathfrak {L}}}+ {{\mathfrak {L}}}^* + {{\,\textrm{SOS}\,}}_\mathscr {U}\end{aligned}$$
(5.3)

and hence the second problem is trivial. This seems closely related to the existence of projections which are conditional expectations and respect SOS. The next section investigates this link further.

5.1 Conditional Expectations and SOS Projections

We prepare to produce a simpler Nullstellensatz with no SOS terms. This uses the existence of SOS projections and conditional expectations, notions we now study.

5.1.1 Definitions

Let \({\mathcal {A}}\) be a \(*\)-algebra, and let \(\mathcal {C}\subseteq {\mathcal {A}}\). Recall that \({{\,\textrm{SOS}\,}}_\mathcal {C}\) denotes the set of all sums of squares of members of \(\mathcal {C}\), i.e.,

$$\begin{aligned} {{\,\textrm{SOS}\,}}_{\mathcal {C}}=\left\{ \sum _{i} c_i^*c_i \mid c_i\in \mathcal {C}\right\} . \end{aligned}$$

If \(\mathcal {C}\) is a \(*\)-subalgebra, then \({{\,\textrm{SOS}\,}}_{\mathcal {C}}\subseteq \mathcal {C}\). A subtlety which is very important to us: an \(f\in {{\,\textrm{SOS}\,}}_{\mathcal {A}}\) that lies in \(\mathcal {C}\) need not lie in \( {{\,\textrm{SOS}\,}}_\mathcal {C}\). Similarly, for \(F \subseteq \mathcal {C}\) we let \({{\mathfrak {L}}}_\mathcal {C}(F) \subseteq \mathcal {C}\) denote the left ideal generated by F in the algebra \(\mathcal {C}\). When there is no confusion we may omit \(\mathcal {C}\) or F.

Definition 5.1

Let \({\mathcal {A}}\) be a unital \(*\)-algebra, let \(\mathcal {C}\) be a (not necessarily unital) \(*\)-subalgebra, and let \({\mathbb E}:{\mathcal {A}}\rightarrow \mathcal {C}\) be a projection onto \(\mathcal {C}\) (i.e., \({\mathbb E}^2 = {\mathbb E}\)) satisfying \({\mathbb E}(a)^* = {\mathbb E}(a^*)\) for all \(a \in {\mathcal {A}}\). Then, \({\mathbb E}\) is called a

  1. (1)

    SOS Projection if \({\mathbb E}({{\,\textrm{SOS}\,}}_{\mathcal {A}}) \subseteq {{\,\textrm{SOS}\,}}_\mathcal {C}\).

  2. (2)

    Conditional Expectation provided \(\mathcal {C}\) is unital and \({\mathbb E}\) satisfies

    1. (a)

      \({\mathbb E}(b_1 a b_2) = b_1 {\mathbb E}(a) b_2\) for all \(a \in {\mathcal {A}}\), \(b_1, b_2 \in \mathcal {C}\);

    2. (b)

\({\mathbb E}(1_{\mathcal {A}}) = 1_{\mathcal {C}}\);

    3. (c)

      \({\mathbb E}({{\,\textrm{SOS}\,}}_{\mathcal {A}}) \subseteq {{\,\textrm{SOS}\,}}_{\mathcal {A}}\cap \ \mathcal {C}\).

  3. (3)

    SOS Conditional Expectation if \({\mathbb E}\) is a conditional expectation that also satisfies the SOS projection property \({\mathbb E}({{\,\textrm{SOS}\,}}_{\mathcal {A}}) \subseteq {{\,\textrm{SOS}\,}}_\mathcal {C}\).

Conditional expectations will typically be denoted \({\mathbb E}\).

Remark

The bimodule property (a) in the definition of a conditional expectation can be replaced by the seemingly weaker one-sided version

  1. (a’)

    \({\mathbb E}(b a) = b {\mathbb E}(a)\) for all \(a \in {\mathcal {A}}\), \(b \in \mathcal {C}\).

Indeed, given \(a \in {\mathcal {A}}\) and \(c\in \mathcal {C}\) we have

$$\begin{aligned} {\mathbb E}(ac) ={\mathbb E}\big ( (c^*a^*)^*\big ) = {\mathbb E}\big ( (c^*a^*)\big )^* = \big ( c^* {\mathbb E}(a^*) \big ) ^* = {\mathbb E}(a^*)^* c = {\mathbb E}(a) c, \end{aligned}$$

as desired.

We now show that the existence of these mappings can simplify the nonlocal games Nullstellensatz.

5.1.2 Nullstellensatz

The next simple lemma explains the main use of the SOS and conditional expectation properties.

Lemma 5.2

Let \({\mathcal {A}}\) be a unital \(*\)-algebra, \(\mathcal {C}\) a unital \(*\)-subalgebra, and \({{\mathfrak {L}}}\) a left ideal in \({\mathcal {A}}\). If an SOS conditional expectation \({\mathbb E}:{\mathcal {A}}\rightarrow \mathcal {C}\) exists, then

  1. (1)

    \(L:={\mathbb E}({{\mathfrak {L}}})\) is a left ideal in \(\mathcal {C}\);

  2. (2)

    \(-1 \notin {{\mathfrak {L}}}+ {{\mathfrak {L}}}^* + {{\,\textrm{SOS}\,}}_{{\mathcal {A}}}\) iff \(-1 \notin L + L^* + {{\,\textrm{SOS}\,}}_{\mathcal {C}}\).

Proof

Suppose \(b \in L\) and \(c \in \mathcal {C}\). Then, there is \({\hat{b}} \in {{\mathfrak {L}}}\) satisfying \({\mathbb E}({\hat{b}}) =b\). These satisfy

$$\begin{aligned} cb = c {\mathbb E}({\hat{b}}) = {\mathbb E}(c {\hat{b}}) \in {\mathbb E}({{\mathfrak {L}}}) = L \end{aligned}$$

establishing Item (1).

To prove the next item assume that \(-1 \in {{\mathfrak {L}}}+ {{\mathfrak {L}}}^* + {{\,\textrm{SOS}\,}}_{{\mathcal {A}}}\). Then,

$$\begin{aligned} -1= {\mathbb E}(-1)&\in {\mathbb E}({{\mathfrak {L}}}+ {{\mathfrak {L}}}^* + {{\,\textrm{SOS}\,}}_{{\mathcal {A}}}) \subseteq L + L^* + {{\,\textrm{SOS}\,}}_{\mathcal {C}}. \end{aligned}$$
(5.4)

The converse is obvious. \(\square \)

Corollary 5.3

Let \(\mathscr {A}\) be a \(*\)-algebra, \(\mathcal {C}\) a \(*\)-subalgebra of \(\mathscr {A}\) and \(F\subseteq \mathcal {C}\). If an SOS conditional expectation \({\mathbb E}: \mathscr {A}\rightarrow \mathcal {C}\) exists, then

$$\begin{aligned} -1 \notin {{\mathfrak {L}}}(F)_\mathscr {A}+ {{\mathfrak {L}}}(F)_\mathscr {A}^* + {{\,\textrm{SOS}\,}}_{\mathscr {A}} \ \ \text {iff} \ \ -1 \notin {{\mathfrak {L}}}(F)_\mathcal {C}+ {{\mathfrak {L}}}(F)_\mathcal {C}^* + {{\,\textrm{SOS}\,}}_{\mathcal {C}}. \end{aligned}$$
(5.5)

Proof

We show that \({\mathbb E}({{\mathfrak {L}}}(F)_\mathscr {A}) = {{\mathfrak {L}}}(F)_\mathcal {C}\). Then, the result is immediate from Lemma 5.2.

First, \({\mathbb E}\) is the identity map on \(\mathcal {C}\), whence \({{\mathfrak {L}}}(F)_\mathcal {C}\subseteq {{\mathfrak {L}}}(F)_\mathscr {A}\) implies \({{\mathfrak {L}}}(F)_\mathcal {C}\subseteq {\mathbb E}({{\mathfrak {L}}}(F)_\mathscr {A}) \). To show the other inclusion note any element \(p \in {{\mathfrak {L}}}(F)_\mathscr {A}\) can be written as \( p = \sum _fa_ff, \) with all \(f\in F\) and \(a_f\in \mathscr {A}\). Then, (using the bimodule property of \({\mathbb E}\) in the second equality) we see

$$\begin{aligned} {\mathbb E}(p) = \sum _f{\mathbb E}(a_ff) = \sum _f{\mathbb E}(a_f) f\in {{\mathfrak {L}}}(F)_\mathcal {C}, \end{aligned}$$
(5.6)

and hence \({\mathbb E}({{\mathfrak {L}}}(F)_\mathscr {A}) \subseteq {{\mathfrak {L}}}(F)_\mathcal {C}\). Then, \({\mathbb E}({{\mathfrak {L}}}(F)_\mathscr {A}) = {{\mathfrak {L}}}(F)_\mathcal {C}\) and the proof is complete. \(\square \)

Now, we prepare for finding a subalgebra \(\mathcal {C}\) that makes Lemma 5.2 valuable.

Lemma 5.4

Let \(\mathscr {A}\) be a unital \(*\)-algebra and \(F\) a set of elements in \(\mathscr {A}\). Let \(\mathcal {C}\) denote the \(*\)-subalgebra of \(\mathscr {A}\) generated by \(\{F, 1\}\) and \({{\mathfrak {L}}}_\mathcal {C}\) the left ideal in \(\mathcal {C}\) generated by \(F\). Finally, let \(\mathcal {C}'\) be the (non-unital) \(*\)-subalgebra of \(\mathscr {A}\) generated by \(F\). Then, the following hold.

  1. (1)

If \(F=F^*\), then \(\mathcal {C}'= {{\mathfrak {L}}}_\mathcal {C}\).

  2. (2)

    If 1 is not in the (non-unital) \(*\)-subalgebra \(\mathcal {C}'\) generated by F, then

    $$\begin{aligned} -1 \notin {{\mathfrak {L}}}_{\mathcal {C}} + {{\mathfrak {L}}}_{\mathcal {C}}^* + {{\,\textrm{SOS}\,}}_{\mathcal {C}}. \end{aligned}$$
    (5.7)

Proof

(1):

Since each word in elements of \(F\) ends in an element of \(F\), the algebra generated by \(F\) is contained in \({{\mathfrak {L}}}_\mathcal {C}\); as \(F=F^*\), this algebra is closed under the involution, so it equals \(\mathcal {C}'\), giving \(\mathcal {C}'\subseteq {{\mathfrak {L}}}_\mathcal {C}\). Conversely, every element of \({{\mathfrak {L}}}_\mathcal {C}\) is a sum of terms \(cf\) with \(c\in \mathcal {C}\) and \(f\in F\); writing \(c=p'+\alpha \) with \(p'\in \mathcal {C}'\) and \(\alpha \in \mathbb {C}\) gives \(cf=p'f+\alpha f\in \mathcal {C}'\), so \({{\mathfrak {L}}}_\mathcal {C}\subseteq \mathcal {C}'\).

(2):

First note that any polynomial \(p \in \mathcal {C}\) can be written as

$$\begin{aligned} p = p' + \alpha \end{aligned}$$
(5.8)

where \(\alpha \in \mathbb {C}\) and \(p'\in \mathcal {C}'\). It follows that any polynomial \(q \in {{\mathfrak {L}}}_\mathcal {C}\) can be written as a sum of terms of the form

$$\begin{aligned} \left( p' + \alpha \right) f \end{aligned}$$
(5.9)

with \(f \in F\). A similar description holds for any polynomial in \({{\mathfrak {L}}}_\mathcal {C}^*\). Additionally, any polynomial \(p'' \in {{\,\textrm{SOS}\,}}_\mathcal {C}\) can be written as

$$\begin{aligned} p''&= \sum _i (p_{ i} + \alpha _i)^*(p_{ i} + \alpha _i) \end{aligned}$$
(5.10a)
$$\begin{aligned}&= \sum _i \left( p_{i}^*p_i + \alpha _i^* p_{i} + \alpha _i p_{i}^* \right) + \sum _i |\alpha _i|^2 \end{aligned}$$
(5.10b)
$$\begin{aligned}&= {\tilde{p}} + \alpha '' \end{aligned}$$
(5.10c)

with each \(p_{i} \in \mathcal {C}'\); hence, \({\tilde{p}}\in \mathcal {C}'\) and \(\alpha ''\in \mathbb {R}_{\ge 0}\).

Now assume for contradiction that \(-1 \in {{\mathfrak {L}}}_\mathcal {C}+ {{\mathfrak {L}}}_\mathcal {C}^* + {{\,\textrm{SOS}\,}}_\mathcal {C}\). Then, combining the above observations we can write

$$\begin{aligned} - 1 = p' + {\tilde{p}} + \alpha '' \end{aligned}$$
(5.11)

with \(p', {\tilde{p}} \in \mathcal {C}'\) and \(\alpha ''\in \mathbb {R}_{\ge 0}\). Rearranging Eq. (5.11) gives

$$\begin{aligned} -(1 + \alpha '')&= p' + {\tilde{p}} \in \mathcal {C}', \end{aligned}$$
(5.12)

implying that \(1\in \mathcal {C}'\) since \(1+\alpha ''\in \mathbb {R}_{>0}\), contradicting the assumption that \(1\notin \mathcal {C}'\).

\(\square \)

Combining Lemmas 5.2 and 5.4 results in the following “SOS free” Nullstellensatz.

Theorem 5.5

Let \(\mathscr {A}\) be a unital \(*\)-algebra with Archimedean \({{\,\textrm{SOS}\,}}_\mathscr {A}\), and let \(F=F^*\subseteq \mathscr {A}\). Also let \(\mathcal {C}\) be the \(*\)-subalgebra of \(\mathscr {A}\) generated by \(\{F, 1\}\). If there exists an SOS conditional expectation \(\mathscr {A}\rightarrow \mathcal {C}\), then the following are equivalent:

  1. (i)

There exist a \(*\)-representation \(\pi : \mathscr {A}\rightarrow \mathcal {B}({\mathcal {H}}) \) and a nonzero vector \(\psi \in {\mathcal {H}}\) with \(\pi (f)\psi = 0\) for all \(f\in F\);

  2. (i)’

    \({{\mathcal {Z}}}_{\text {dir}}^\mathrm{re,\mathscr {A}}(F)\ne \varnothing \);

  3. (ii)

    \(-1 \notin {{\mathfrak {L}}}(F)_\mathscr {A}+ {{\mathfrak {L}}}(F)^*_\mathscr {A}+ {{\,\textrm{SOS}\,}}_\mathscr {A}\);

  4. (iii)

    \(-1 \notin {{\mathfrak {L}}}(F)_\mathcal {C}+ {{\mathfrak {L}}}(F)^*_\mathcal {C}+ {{\,\textrm{SOS}\,}}_\mathcal {C}\);

  5. (iv)

    \(1 \notin {{\mathfrak {L}}}(F)_\mathscr {A}+ {{\mathfrak {L}}}(F)^*_\mathscr {A}\);

  6. (v)

    \(1 \notin {{\mathfrak {L}}}(F)_\mathcal {C}+ {{\mathfrak {L}}}(F)^*_\mathcal {C}\);

  7. (vi)

    1 is not in the (non-unital) \(*\)-subalgebra \(\mathcal {C}'\) generated by F;

  8. (vii)

    \(1\notin {{\mathfrak {L}}}(F)_\mathscr {A}\);

  9. (viii)

    \(1 \notin {{\mathfrak {L}}}(F)_\mathcal {C}\).

Proof

Items (i), (i)’ are equivalent by definition. We have (i) \(\Leftrightarrow \) (ii) by the real Nullstellensatz Theorem 4.3, (ii) \(\Rightarrow \) (iii) by set inclusion and (iii) \(\Rightarrow \) (ii) by Corollary 5.3.

Next, (ii) \(\Rightarrow \) (iv) \(\Rightarrow \) (v) again by set inclusion. The equivalence (vi) \(\Leftrightarrow \) (viii) follows from \(\mathcal {C}'={{\mathfrak {L}}}(F)_\mathcal {C}\) (Lemma 5.4(1), using \(F=F^*\)), the implications (iv) \(\Rightarrow \) (vii) \(\Rightarrow \) (viii) and (v) \(\Rightarrow \) (viii) are obvious, and (vi) \(\Rightarrow \) (iii) by Lemma 5.4(2). \(\square \)

5.2 The Group Algebra Simplification

Obtaining SOS projections or conditional expectations is challenging. We now give a class of tractable situations and a Nullstellensatz appropriate for many games. The setting is a group algebra \(\mathbb {C}[G]\), with G a discrete group, and a subset \(F\subseteq \mathbb {C}[G]\), later chosen to be a set of binomials.

To an element \(p\in \mathbb {C}[G]\),

$$\begin{aligned} p= \sum _{j=1}^\gamma s_j g_j, \end{aligned}$$

one associates the set \({{\,\textrm{supp}\,}}(p)=\{ g_1, \dots , g_\gamma \}\) of all group elements appearing in p, called the support of p. Similarly, \({{\,\textrm{Mon}\,}}(p)=\{s_1 g_1, \dots , s_\gamma g_\gamma \}\) is the set of monomials appearing in p. These notions are naturally extended to subsets \(F\subseteq \mathbb {C}[G]\), e.g., \({{\,\textrm{supp}\,}}(F)=\bigcup _{p\in F}{{\,\textrm{supp}\,}}(p)\).

We need the next theorem and string of lemmas to obtain our Nullstellensatz aimed at a large class of games. Our first observation is that there is a natural SOS conditional expectation mapping from \(\mathbb {C}[G]\) onto a type of envelope of a given set F. In what follows, for any group G and subset of elements \(F \subseteq G\) we let \({{\,\textrm{Grp}\,}}(F)\) denote the subgroup of G generated by the set F.

Theorem 5.6

Consider a group algebra \(\mathbb {C}[G]\) and let \(F\subseteq \mathbb {C}[G]\). Then, there is an SOS conditional expectation \({\mathbb E}:\mathbb {C}[G]\rightarrow {\mathbb {C}}[{{\,\textrm{Grp}\,}}( {{\,\textrm{supp}\,}}(F))]\).

Proof

Since \({{\,\textrm{Grp}\,}}( {{\,\textrm{supp}\,}}(F))\) is by definition a subgroup of G, this follows from [30]. For context, the map \({\mathbb E}\) is defined by

$$\begin{aligned} \sum _{g\in G} a_g g \mapsto \sum _{g\in {{\,\textrm{Grp}\,}}( {{\,\textrm{supp}\,}}(F))} a_g g. \end{aligned}$$

It is an SOS conditional expectation by routine calculation; see [30, Example 5, Proposition 4] for details. \(\square \)
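The truncation map is easy to experiment with. The following sketch uses our own toy choices (\(G=\mathbb {Z}_4\) and the subgroup \(\{0,2\}\), written additively; none of this is from the text), representing elements of \(\mathbb {C}[G]\) as dictionaries and checking that truncation is a \(*\)-compatible projection with the one-sided module property (a'); the SOS property itself is the content of [30].

```python
# Elements of C[Z_4] as dicts {group element (int mod 4): coefficient}.
# Toy illustration of the truncation map of Theorem 5.6 for the subgroup
# H = {0, 2} of Z_4; the group and subgroup are our own choices.
N, H = 4, {0, 2}

def mult(p, q):
    out = {}
    for g, a in p.items():
        for h, b in q.items():
            k = (g + h) % N
            out[k] = out.get(k, 0) + a * b
    return out

def star(p):  # (sum_g a_g g)* = sum_g conj(a_g) g^{-1}
    return {(-g) % N: a.conjugate() for g, a in p.items()}

def E(p):  # keep only the coefficients supported on the subgroup H
    return {g: a for g, a in p.items() if g in H}

p = {0: 1 + 2j, 1: -1j, 2: 3 + 0j, 3: 0.5 + 0j}  # generic element of C[Z_4]
b = {0: 2 + 0j, 2: 1j}                           # element of C[H]

assert E(E(p)) == E(p)                  # E is a projection
assert E(star(p)) == star(E(p))         # E intertwines the involution
assert E(mult(b, p)) == mult(b, E(p))   # left module property (a')
```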

5.2.1 Relating the Subalgebra and Subgroup Membership Problems

In this section, we restrict to sets F of binomials in a group algebra \(\mathbb {C}[G]\) of the form \(f=r -1\) where each r is a monomial, \(r = \beta g\) for some \(g \in G\) and \(\beta \in {\mathbb {C}}\). Our starting point is the following lemma, which gives a standard form for elements in the subalgebra \({{\,\textrm{Alg}\,}}(F)\) of \(\mathbb {C}[G]\) generated by F.

Lemma 5.7

Consider a group algebra \(\mathbb {C}[G]\), let \(F\) be a set of elements in \(\mathbb {C}[G]\) with each \(f\in F\) of the form \(f= r- 1\) with \(r\) a monomial, and let H denote the subgroup of the group \(\mathbb {C}[G]^{-1}\) of invertible elements of \(\mathbb {C}[G]\) generated by \({{\,\textrm{Mon}\,}}(F)\). Then, any \(p \in {{\,\textrm{Alg}\,}}(F)\) can be written in the form

$$\begin{aligned} p = \sum _{u,v\in H} \beta _{u,v} (u - v) \end{aligned}$$
(5.13)

with \(\beta _{u,v} \in \mathbb {C}\).

Proof

We induct on the length of products needed to express \(p \in {{\,\textrm{Alg}\,}}(F)\). If p is a linear combination of elements of F, say \( p=\sum _j \beta _j(r_j-1), \) this is immediate since \(r_j,1\in H\).

As the set of expressions of the form given in Eq. (5.13) is closed under linear combinations, for the induction step it suffices to consider the product of \(u-v,u'-v'\) for \(u,u',v,v'\in H\). But the product

$$\begin{aligned} (u-v)(u'-v') = (uu'-uv') + (vv'-vu') \end{aligned}$$

is clearly of the form Eq. (5.13) since \(uu',uv',vv',vu'\in H\), thus finishing the proof. \(\square \)
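The induction-step identity can be spot-checked with random matrices standing in for the noncommuting elements \(u,v,u',v'\) (a numerical sanity check of the polynomial identity, not part of the proof):

```python
import numpy as np

# Spot check of (u - v)(u' - v') = (uu' - uv') + (vv' - vu') on random
# 3x3 matrices, which serve as generic noncommuting elements.
rng = np.random.default_rng(0)
u, v, up, vp = (rng.standard_normal((3, 3)) for _ in range(4))

lhs = (u - v) @ (up - vp)
rhs = (u @ up - u @ vp) + (v @ vp - v @ up)
assert np.allclose(lhs, rhs)
```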

Lemma 5.8

Consider a group algebra \(\mathbb {C}[G]\), let \(F\) be a set of elements in \(\mathbb {C}[G]\) with each \(f\in F\) of the form \(f= r- 1\) with \(r\) a monomial, and let \(\Delta \) denote the subgroup of \(\mathbb {C}[G]^{-1}\) generated by \({{\,\textrm{Mon}\,}}(F) \cup {{\,\textrm{Mon}\,}}(F^*)\). Then, any element p in the \(*\)-subalgebra generated by F, \(p \in {{\,\textrm{Alg}\,}}^*_{\mathbb {C}[G]}(F)\), can be written in the form

$$\begin{aligned} p = \sum _{u,v\in \Delta } \beta _{u,v} (u - v) \end{aligned}$$
(5.14)

with \(\beta _{u,v} \in \mathbb {C}\).

Proof

Immediate from Lemma 5.7. \(\square \)

Lemma 5.9

Assume the hypotheses of Lemma 5.8 are in force. Then,

$$\begin{aligned} 1 \in {{\,\textrm{Alg}\,}}^*_{\mathbb {C}[G]}(F) \quad \Leftrightarrow \quad \Delta \cap \mathbb {C} \varsupsetneq \{1\}. \end{aligned}$$
(5.15)

Proof

First we note that the result is trivial if there exists an element \(f= r- 1 \in F\) with \(r^* r\ne 1\). That is, \(r=\beta g\) with \(|\beta |\ne 1\). Namely,

$$\begin{aligned} (r-1)^*(r-1) + (r-1) + (r-1)^* = r^*r - 1 = \beta ^* \beta - 1 \in {{\,\textrm{Alg}\,}}^*_{\mathbb {C}[G]}(F) \end{aligned}$$
(5.16)

and since \(\beta ^* \beta - 1 \ne 0\) we can conclude \( 1 \in {{\,\textrm{Alg}\,}}^*_{\mathbb {C}[G]}(F)\). At the same time \(r r^* = \beta \beta ^* \in \Delta \cap \mathbb {C}\) and \(\beta \beta ^* \ne 1\). This proves the result in this special case. In what follows, we can assume \(rr^* = 1\) for all \(r- 1 \in F\).

\((\Leftarrow )\) First note that for all monomials \(r\) and \(r'\) we have

$$\begin{aligned} (r- 1)(r' - 1) + (r- 1) + (r' - 1) = rr' - 1. \end{aligned}$$
(5.17)

Similarly,

$$\begin{aligned} (r-1)^* = r^* - 1 = r^{-1} - 1. \end{aligned}$$

It follows that for any \(r \in \Delta \) we have \(r - 1 \in {{\,\textrm{Alg}\,}}^*_{\mathbb {C}[G]}(F)\). Then, if \(1\ne \beta \in \Delta \cap \mathbb {C}\), we have \(\beta - 1 \in {{\,\textrm{Alg}\,}}^*_{\mathbb {C}[G]}(F)\), whence \(1\in {{\,\textrm{Alg}\,}}^*_{\mathbb {C}[G]}(F)\).
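Both displayed identities can likewise be spot-checked numerically, with random unitaries standing in for the monomials \(r,r'\) (so that \(r^*=r^{-1}\), matching the standing assumption \(rr^*=1\)); this is our illustration, not part of the proof:

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_unitary(n=3):
    # QR of a random complex matrix yields a random unitary
    q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    return q

r, rp = rand_unitary(), rand_unitary()
I = np.eye(3)

# Eq. (5.17): (r - 1)(r' - 1) + (r - 1) + (r' - 1) = rr' - 1
assert np.allclose((r - I) @ (rp - I) + (r - I) + (rp - I), r @ rp - I)
# (r - 1)* = r* - 1 = r^{-1} - 1 for unitary r
assert np.allclose((r - I).conj().T, np.linalg.inv(r) - I)
```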

\((\Rightarrow )\) To prove the result in the other direction, we assume for contradiction that \(\beta \notin \Delta \) for all \(\beta \in \mathbb {C}\setminus \{1\}\) and that \(1 \in {{\,\textrm{Alg}\,}}^*_{\mathbb {C}[G]}(F)\). Then, using Lemma 5.8, we can write

$$\begin{aligned} 1&= \sum _{u,v\in \Delta } \beta _{u,v} (u - v) =\sum _{u,v} \beta _{u,v} u - \sum _{u,v} \beta _{u,v}v =\sum _{u,v} \beta _{u,v} u - \sum _{u,v} \beta _{v,u}u \end{aligned}$$
(5.18a)
$$\begin{aligned}&= \sum _{u,v} u \left( \beta _{u,v} - \beta _{v,u}\right) = \sum _{u} u \sum _{v} \left( \beta _{u,v} - \beta _{v,u}\right) \end{aligned}$$
(5.18b)

where we relabeled u and v in the last sum on the first line. By assumption, \(\Delta \cap \mathbb {C}=\{1\}\), so the underlying group elements of distinct \(u\in \Delta \) are distinct (if \(u=\beta g\ne u'=\beta ' g\), then \(u'u^{-1}=\beta '/\beta \in (\Delta \cap \mathbb {C})\setminus \{1\}\)). Thus, the terms \(u \sum _{v} \left( \beta _{u,v} - \beta _{v,u}\right) \) in the last equality of Eq. (5.18b) are linearly independent. We conclude

$$\begin{aligned} \sum _v (\beta _{1,v} - \beta _{v,1})&= 1, \end{aligned}$$
(5.19a)
$$\begin{aligned} \sum _v (\beta _{u,v} - \beta _{v,u})&= 0 \end{aligned}$$
(5.19b)

for all \(u \ne 1\). But this is a contradiction, since

$$\begin{aligned} \sum _v (\beta _{1,v} - \beta _{v,1}) + \sum _{u \ne 1} \sum _v (\beta _{u,v} - \beta _{v,u}) = \sum _{u,v} (\beta _{u,v} - \beta _{v,u}) = 0 . \end{aligned}$$

\(\square \)

5.2.2 NC Left Nullstellensatz Without SOS Terms

Now, we combine the results in Sects. 5.1 and 5.2.1 to obtain the following specialized Nullstellensatz.

Theorem 5.10

Consider a group algebra \(\mathbb {C}[G]\), let \(F\) be a set of elements in \(\mathbb {C}[G]\) with each \(f\in F\) of the form \(f= r- 1\) with \(r\) a monomial, and let \(\Delta \) denote the subgroup of \(\mathbb {C}[G]^{-1}\) generated by \({{\,\textrm{Mon}\,}}(F\cup F^*)\).

Then, the following are equivalent:

  1. (i)

There exist a \(*\)-representation \(\pi : \mathbb {C}[G] \rightarrow \mathcal {B}({\mathcal {H}}) \) and a nonzero vector \(\psi \in {\mathcal {H}}\) with \(\pi (f)\psi = 0\) for all \(f\in F\);

  2. (i)’

    \({{\mathcal {Z}}}_{\text {dir}}^\mathrm{re,\mathbb {C}[G]}(F)\ne \varnothing \);

  3. (ii)

    \(-1 \notin {{\mathfrak {L}}}(F)_{\mathbb {C}[G]} + {{\mathfrak {L}}}(F)^*_{\mathbb {C}[G]} + {{\,\textrm{SOS}\,}}_{\mathbb {C}[G]}\);

  4. (iii)

    \(1 \notin {{\mathfrak {L}}}(F)_{\mathbb {C}[G]} + {{\mathfrak {L}}}(F)^*_{\mathbb {C}[G]}\);

  5. (iv)

    \(1 \notin {{\mathfrak {L}}}(F)_{{{\,\textrm{Alg}\,}}^*_{\mathbb {C}[G]}(F)} + {{\mathfrak {L}}}(F)^*_{{{\,\textrm{Alg}\,}}^*_{\mathbb {C}[G]}(F)}\);

  6. (v)

    \(1 \notin {{\,\textrm{Alg}\,}}^*_{\mathbb {C}[G]}(F)\);

  7. (vi)

    \(\Delta \cap \mathbb {C} = \{1\}\).

Moreover, if each \(f\in F\) is of the form \(f= \beta w - 1\) with \(w\in G\) and \(\beta \in \mathbb {C}\) with \(|\beta |=1\), then these statements are also equivalent to

  1. (vii)

    \(1 \notin {{\mathfrak {L}}}(F)_{\mathbb {C}[G]}\);

  2. (viii)

    \(1\not \in {{\mathfrak {L}}}(F)_{{{\,\textrm{Alg}\,}}^*_{\mathbb {C}[G]}(F)} \).

Proof

We have (i) \(\Leftrightarrow \) (i)’ by definition, (i) \(\Leftrightarrow \) (ii) by the real Nullstellensatz Theorem 4.3, and (ii) \(\Rightarrow \) (iii) \(\Rightarrow \) (iv) by set inclusion.

To show (iv) \(\Rightarrow \) (v) we argue by contraposition: assume \(1 \in {{\,\textrm{Alg}\,}}^*_{\mathbb {C}[G]}(F)\). Grouping words in \(F\cup F^*\) by their last letter (and displaying a single representative term of each type to lighten notation), we write

$$\begin{aligned} 1= p_1 f_1 + p_2 f_2^* \end{aligned}$$
(5.20)

for some \(f_j\in F\) and \(p_i\in {{\,\textrm{Alg}\,}}^*_{\mathbb {C}[G]}(F\cup \{1\}) \). Write \(f_2=\beta _2 w_2-1\) for \(\beta _2\in \mathbb {C}\) and \(w_2\in G\). Then, \(f_2^*=\beta _2^* w_2^*-1\), and

$$\begin{aligned} f_2^*f_2+f_2+f_2^* = |\beta _2|^2-1. \end{aligned}$$
(5.21)

If \(|\beta _2|^2\ne 1\), then Eq. (5.21) immediately implies

$$\begin{aligned} 1 \in {{\mathfrak {L}}}(F)_{{{\,\textrm{Alg}\,}}^*_{\mathbb {C}[G]}(F)} + {{\mathfrak {L}}}(F)^*_{{{\,\textrm{Alg}\,}}^*_{\mathbb {C}[G]}(F)}. \end{aligned}$$

If \(|\beta _2|^2=1\), then from Eq. (5.21) we deduce

$$\begin{aligned} f_2^* = -f_2^*f_2-f_2 \in {{\mathfrak {L}}}(F)_{{{\,\textrm{Alg}\,}}^*_{\mathbb {C}[G]}(F)}, \end{aligned}$$
(5.22)

whence Eq. (5.20) yields

$$\begin{aligned} 1= p_1 f_1 - p_2 f_2^* f_2- p_2 f_2 \in {{\mathfrak {L}}}(F)_{{{\,\textrm{Alg}\,}}^*_{\mathbb {C}[G]}(F)} \subseteq {{\mathfrak {L}}}(F)_{{{\,\textrm{Alg}\,}}^*_{\mathbb {C}[G]}(F)} + {{\mathfrak {L}}}(F)^*_{{{\,\textrm{Alg}\,}}^*_{\mathbb {C}[G]}(F)}, \end{aligned}$$

as desired.
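As a quick numerical check of Eqs. (5.21) and (5.22) (our illustration only), take a random unitary for \(w_2\) and any \(\beta _2\) on the unit circle:

```python
import numpy as np

rng = np.random.default_rng(2)
q, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
w = q                    # random unitary standing in for the group element w_2
beta = np.exp(0.7j)      # |beta| = 1
f = beta * w - np.eye(3) # f_2 = beta_2 w_2 - 1
fs = f.conj().T

# Eq. (5.21): f*f + f + f* = |beta|^2 - 1 = 0 here ...
assert np.allclose(fs @ f + f + fs, 0)
# ... which rearranges to Eq. (5.22): f* = -f*f - f
assert np.allclose(fs, -fs @ f - f)
```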

Next, items (v) and (vi) are equivalent by Lemma 5.9, and (v) \(\Rightarrow \) (ii) by Lemma 5.4 and Corollary 5.3. Here, we use that an SOS conditional expectation mapping \(\mathbb {C}[G] \rightarrow {{\,\textrm{Alg}\,}}^*_{\mathbb {C}[G]}(F\cup \{1\})={\mathbb {C}}[{{\,\textrm{Grp}\,}}( {{\,\textrm{supp}\,}}(F))]\) exists by Theorem 5.6.

Finally, we tackle the moreover statement. Implications (iii) \(\Rightarrow \) (vii) \(\Rightarrow \) (viii) are obvious. To conclude we prove (viii) \(\Rightarrow \) (iv). Thus, assume \(1 \in {{\mathfrak {L}}}(F)_{{{\,\textrm{Alg}\,}}^*_{\mathbb {C}[G]}(F)} + {{\mathfrak {L}}}(F)^*_{{{\,\textrm{Alg}\,}}^*_{\mathbb {C}[G]}(F)}\), i.e.,

$$\begin{aligned} 1= \sum p_i f_i + \sum g_j^* q_j \end{aligned}$$
(5.23)

for some \(p_i,q_j\in {{\,\textrm{Alg}\,}}^*_{\mathbb {C}[G]}(F\cup \{1\})\) and \(f_i,g_j\in F\). Write \(q_j=q_{j1} f_{j1} + q_{j2} g_{j2}^*\) for some \(q_{ji}\in {{\,\textrm{Alg}\,}}^*_{\mathbb {C}[G]}(F\cup \{1\})\) and \(f_{j1},g_{j2}\in F\). As in Eq. (5.22) we write

$$\begin{aligned} g_{j2}^* = -g_{j2}^*g_{j2}-g_{j2} \in {{\mathfrak {L}}}(F)_{{{\,\textrm{Alg}\,}}^*_{\mathbb {C}[G]}(F)}. \end{aligned}$$
(5.24)

Using Eq. (5.24) in Eq. (5.23) leads to

$$\begin{aligned} \begin{aligned} 1&=\sum p_i f_i + \sum g_j^* q_j= \sum p_i f_i + \sum g_j^* (q_{j1} f_{j1} + q_{j2} g_{j2}^*) \\&= \sum p_i f_i + \sum g_j^* q_{j1} f_{j1} - \sum g_j^* q_{j2}g_{j2}^*g_{j2} - \sum g_j^* q_{j2}g_{j2} \\&\in {{\mathfrak {L}}}(F)_{{{\,\textrm{Alg}\,}}^*_{\mathbb {C}[G]}(F)}, \end{aligned} \end{aligned}$$

as desired. \(\square \)

5.3 Nullstellensätze for Perfect Torically Determined Games

We now apply the simplified Nullstellensatz of this section to nonlocal games. We first recall the definition of torically determined games introduced in Sect. 3.4.2.

Definition 5.11

(repeated) A game \(\mathcal {G}\) is called a torically determined game if there exists a group G with \(\mathscr {U}\cong \mathbb {C}[G]\) and \(\mathcal {G}\) is determined by a set of elements

$$\begin{aligned} \mathcal {F}= \{\beta _i g_i - 1\} \end{aligned}$$
(5.25)

with each \(\beta _i \in \mathbb {C}\) and \(g_i \in G\). In this case, we say \(\mathcal {G}\) is torically determined by the set \(\mathcal {F}\) and call the elements \(\beta _i g_i\) clauses of \(\mathcal {F}\).

We let the set \(\mathscr {H}\) denote all the clauses of \(\mathcal {F}\). Now, the following characterization of torically determined games with perfect commuting operator strategies is a quick consequence of our Nullstellensatz Theorem 5.10.

Theorem 5.12

Let \(\mathcal {G}\) be a game which is torically determined by a set \(\mathcal {F}= \mathscr {H}- 1\), and let \(H\) be the group of elements in \(\mathscr {U}\) generated by \(\mathscr {H}\cup \mathscr {H}^*\). Then, \(\mathcal {G}\) has a perfect commuting operator strategy iff the following equivalent criteria are satisfied:

  1. (i)

    \(1 \notin {{\mathfrak {L}}}\left( \mathcal {F}\right) +{{\mathfrak {L}}}\left( \mathcal {F}\right) ^*\);

  2. (ii)

    \(H\cap {\mathbb {C}}=\{1\}\).

Moreover, if \(|\beta |=1\) for each \(h=\beta g \in {\mathscr {H}}\) then these statements are also equivalent to

  1. (iii)

    \(1 \notin {{\mathfrak {L}}}\left( \mathcal {F}\right) \).

Proof

By definition, \(\mathcal {G}\) has a perfect commuting operator strategy iff there exists a \(*\)-representation \(\pi \) mapping \(\mathscr {U}\) into bounded operators on a Hilbert space \({\mathcal {H}}\) and a state \(\psi \) in \({\mathcal {H}}\) with \(\pi (h- 1) \psi = 0\) for all clauses \(h\in \mathscr {H}\). But this is equivalent to the statement \({{\mathcal {Z}}}_{\text {dir}}^{\textrm{re},\mathbb {C}[G]}(\mathcal {F})\ne \varnothing \). Then, the result follows directly from Theorem 5.10. \(\square \)

5.3.1 Torically Determined Games and the Subgroup Membership Problem

Condition \(\mathrm {(ii)}\) of Theorem 5.12 relates existence of perfect commuting operator strategies to a question of membership of certain elements in a group. We now translate this question to a standard instance of the subgroup membership problem.

First, observe that if any clause \(\beta _i g_i \in \mathscr {H}\) has \(\left|\beta _i\right| \ne 1\), then the game \(\mathcal {G}\) trivially cannot have a perfect commuting operator strategy, since

$$\begin{aligned} H\ni \beta g \left( \beta g \right) ^{*} = \left|\beta \right|^2 \ne 1 . \end{aligned}$$
(5.26)

Thus, we restrict our attention to the case where \(\left|\beta _i\right| = 1\) for all \(\beta _i g_i \in \mathscr {H}\). Let \(B\) be the group generated by all the \(\beta _i\) under multiplication, and note \(B\) is an abelian subgroup of the unit circle. Then, the statement \(H\cap {\mathbb {C}}=\{1\}\) is equivalent to \(H\cap B =\{1\}\), i.e., \(\beta \not \in H\) for any \(1\ne \beta \in B\). This is exactly equivalent to one (or several) instances of the subgroup membership problem.

For many games, the group \(B\) is very simple, containing just a few elements. In this case, alternate notation can be used for elements \(\beta \in B\) (for example, the J element used in [9] or the \(\sigma \) element in [2]). We see an example of this next.

5.3.2 Mod r Games

We now apply the machinery described previously in this section to Mod r Games. From Sect. 3.5.2, we have that Mod r games are torically determined by a set of elements

$$\begin{aligned} \left\{ (\exp (-2\pi i /r))^{s_t} \prod _{\alpha \in [k]} \left( c({\alpha })_{j_{t}(\alpha )}\right) ^{d_t(\alpha )} - 1\right\} _{t \in [T]} \end{aligned}$$
(5.27)

with the \(c({\alpha })_{j_{t}(\alpha )}\) cyclic unitaries.

Following the notation introduced in Sect. 5.3.1, we define \(B\) to be the group generated by the set of elements \(\{(\exp (-2\pi i /r))^{s_t}\}_{t \in [T]}\). We note that \(B\) is isomorphic to a subgroup of \(\mathbb {Z}_r\) and for notational convenience introduce the element \(\zeta \) as shorthand for \(\exp (-2\pi i /r)\). Then, we can write clauses \(h_t \in \mathscr {H}\) as

$$\begin{aligned} h_t = \zeta ^{s_t} \prod _{\alpha \in [k]} c({\alpha })_{j_{t}(\alpha )}. \end{aligned}$$
(5.28)

It follows that a Mod r game has a perfect commuting operator strategy iff \(\zeta ^s \notin H\) for all \(s \in \{1,2,\ldots ,r-1\}\).

We note that if r is prime there is an immediate simplification of this characterization: since s is then invertible modulo r, we have \(\zeta ^s \in H \Leftrightarrow \zeta = (\zeta ^s)^{s^{-1} \bmod r} \in H\), and hence a game has a perfect commuting operator strategy iff \(\zeta \notin H\). We also note that the Mod 2 game (i.e., XOR game) version of this condition is equivalent to Theorem 2.1 in [2].
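The arithmetic behind this prime-case simplification is just invertibility of exponents modulo a prime, so any nontrivial power of \(\zeta \) lying in H already forces \(\zeta \in H\). A quick sketch (the prime r = 7 is an arbitrary toy choice):

```python
# For prime r, each s in {1, ..., r-1} is invertible mod r, so zeta^s
# generates the same cyclic group as zeta; hence zeta^s in H iff zeta in H.
r = 7  # arbitrary prime, our own toy choice
for s in range(1, r):
    k = pow(s, -1, r)            # modular inverse (Python 3.8+)
    assert (s * k) % r == 1      # so (zeta^s)^k = zeta^{sk} = zeta
```
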

6 Gröbner Basis Algorithm Tailored to Games

What arises in this paper are sums of left ideals and two-sided ideals. We present here a kludge for using a conventional two-sided ideal nc Gröbner Basis (GB) algorithm to solve these mixed ideal problems. While a special-purpose algorithm could be more efficient, the procedure here can be very convenient in practice; it has certainly been valuable to the authors. We also alert the reader to Sect. 8.3, which applies Gröbner bases plus a Nullstellensatz to synchronous games.

At the core of our algorithm is the following observation:

Proposition 6.1

Consider an ideal \({\mathfrak {I}}\) and left ideal \({\mathfrak {L}}\) in a free algebra \(\mathbb {C}\langle x\rangle \). For \(f\in \mathbb {C}\langle x\rangle \), we have

$$\begin{aligned} f\in {\mathfrak {I}}+{\mathfrak {L}}\quad \iff \quad f\xi \in ({\mathfrak {I}}+{\mathfrak {L}}\xi )_{\mathbb {C}\langle x,\xi \rangle }, \end{aligned}$$

where \(({\mathfrak {I}}+{\mathfrak {L}}\xi )_{\mathbb {C}\langle x,\xi \rangle }\) denotes the two-sided ideal of \(\mathbb {C}\langle x,\xi \rangle \) generated by \({\mathfrak {I}}+{\mathfrak {L}}\xi \). In particular,

$$\begin{aligned} 1\in {\mathfrak {I}}+{\mathfrak {L}}\quad \iff \quad \xi \in ({\mathfrak {I}}+{\mathfrak {L}}\xi )_{\mathbb {C}\langle x,\xi \rangle }. \end{aligned}$$

Proof

The forward implication is obvious, so assume \(f\xi \in ({\mathfrak {I}}+{\mathfrak {L}}\xi )_{\mathbb {C}\langle x,\xi \rangle },\) i.e.,

$$\begin{aligned} f\xi =\sum _j f_j a_j g_j + \sum _i p_i s_i\xi q_i \end{aligned}$$
(6.1)

for some \(f_j,g_j,p_i,q_i\in \mathbb {C}\langle x,\xi \rangle \), \(a_j\in {\mathfrak {I}}\) and \(s_i\in {\mathfrak {L}}\).

Each element r in \(\mathbb {C}\langle x,\xi \rangle \) can be written uniquely as a sum \(r=r_0+r_1+r_2+\cdots \), where \(r_k\) is homogeneous of degree k in \(\xi \); in particular \(r_0\in \mathbb {C}\langle x\rangle \) and \(r_1\in (\xi )_{\mathbb {C}\langle x,\xi \rangle }\). Since the left-hand side of Eq. (6.1) is homogeneous of degree one in \(\xi \), comparing the degree-one components of Eq. (6.1) gives

$$\begin{aligned} f\xi =\sum _j f_{j0} a_j g_{j1} + \sum _j f_{j1} a_j g_{j0} + \sum _i p_{i0} s_i\xi q_{i0}. \end{aligned}$$
(6.2)

Now extract all terms that end in \(\xi \) to get

$$\begin{aligned} f\xi =\sum _j f_{j0} a_j {\tilde{g}}_{j1} \xi + \sum _i p_{i0} {\hat{q}}_{i0} s_i \xi \end{aligned}$$
(6.3)

for some \({\tilde{g}}_{j1}\in \mathbb {C}\langle x\rangle \) and \({\hat{q}}_{i0}\in \mathbb {C}\). Canceling \(\xi \) on the right in Eq. (6.3) leads to \(f\in {\mathfrak {I}}+{\mathfrak {L}}\). \(\square \)

6.1 Gröbner Basis Algorithm

Algorithm Suppose \(a_1,\ldots ,a_m,b_1,\ldots ,b_n\in \mathbb {C}\langle x\rangle \) are given, and let \({\mathfrak {I}}= (a_1,\ldots ,a_m)\), \({\mathfrak {L}}=\mathbb {C}\langle x\rangle b_1+\cdots +\mathbb {C}\langle x\rangle b_n\) be a two-sided and a left ideal in \(\mathbb {C}\langle x\rangle \), respectively. Add a new variable \(\xi \) to form the two-sided ideal \((a_1,\ldots ,a_m,b_1 \xi ,\ldots ,b_n\xi )_{\mathbb {C}\langle x,\xi \rangle }\) and compute its Gröbner basis B with respect to any admissible monomial order. Then,

$$\begin{aligned} 1\in {\mathfrak {I}}+{\mathfrak {L}}\qquad \text {iff} \qquad 1\in B \ \ \text {or} \ \ \xi \in B. \end{aligned}$$

Here \(1\in B\) iff \(1\in {\mathfrak {I}}\). \(\square \)
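Lacking an nc GB package at hand, the mechanics of this algorithm can at least be illustrated in the commutative setting, where SymPy's Gröbner bases apply; there the distinction between left and two-sided ideals collapses, but the \(\xi \)-bookkeeping is the same. A toy sketch, with all examples our own:

```python
from sympy import symbols, groebner, reduced

x, xi = symbols('x xi')

def one_in_ideal_sum(I_gens, L_gens):
    """Commutative toy version of the algorithm: decide 1 in I + L by
    testing whether xi lies in the ideal generated by I and L*xi."""
    aug = list(I_gens) + [b * xi for b in L_gens]
    G = groebner(aug, x, xi, order='lex')
    # xi is in the ideal iff its remainder on division by the GB is 0
    _, rem = reduced(xi, G.exprs, x, xi, order='lex')
    return rem == 0

# L = (x - 1, x + 1) contains (x + 1) - (x - 1) = 2, so 1 in I + L:
assert one_in_ideal_sum([x**2 - 1], [x - 1, x + 1])
# with L = (x - 1) alone, every element of I + L vanishes at x = 1:
assert not one_in_ideal_sum([x**2 - 1], [x - 1])
```
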

Basics of NC GB We assume the reader has a familiarity with Gröbner bases; a standard reference in the commutative setting is [8], while [14, 23] describe the appropriate nc analogs. NC GBs have properties very similar to those of traditional commutative GBs, with the dramatic exception that an NC GB might not be finite.

Fixing a monomial order, a subset B of an ideal \({\mathfrak {I}}\subseteq \mathbb {C}\langle x\rangle \) is a GB if the set of leading terms \({{\,\textrm{LT}\,}}(B)\) of elements of B generates the same ideal \({{\,\textrm{LT}\,}}({\mathfrak {I}})\) as all leading terms of \({\mathfrak {I}}\). Roughly speaking, the nc Buchberger criterion [23, Theorem 5.1] states that B is a GB iff each S-polynomial built off B can be expressed in terms of the elements of B with lower degrees. Thus, algorithms for building NC GBs work by producing S-polynomials for pairs \(a_i,a_j\) of (not necessarily distinct) polynomials which are generators of \({\mathfrak {I}}\). S-polynomials are ones of the form

$$\begin{aligned} S_{i,j}(w_i,w_i',w_j,w_j')= \frac{1}{{{\,\textrm{LC}\,}}(a_i)}w_ia_iw_i'- \frac{1}{{{\,\textrm{LC}\,}}(a_j)}w_ja_jw_j' \end{aligned}$$
(6.4)

for words \(w_i,w_i',w_j,w_j'\) in x satisfying

$$\begin{aligned} w_i{{\,\textrm{LT}\,}}(a_i)w_i'= w_j{{\,\textrm{LT}\,}}(a_j)w_j'. \end{aligned}$$
(6.5)

Here \({{\,\textrm{LC}\,}}(a)\) denotes the coefficient of the leading term of a.

At each step in construction of a GB, one considers a collection of polynomials \({\hat{B}}\). If the “remainder” of an S-polynomial after division by \({\hat{B}}\) is nonzero, one adds it to \({\hat{B}}\) and repeats the process. Ultimately (maybe in an infinite number of steps) \({\hat{B}}\) grows to a GB B.

Left ideals have what we call left GBs. Algorithms for producing left GBs are very similar to algorithms for producing GBs for two-sided ideals. Now common multiples must be made by multiplying on the left (not the right), and the key matches are

$$\begin{aligned} w_i{{\,\textrm{LT}\,}}(a_i)= w_j{{\,\textrm{LT}\,}}(a_j) \end{aligned}$$
(6.6)

for words \(w_i,w_j\) in x.

We refer the reader to [24, 33] for excellent references on this.

Justification Now behold a few properties of our GB.

  1. (1)

    The set of polynomials in B which do not contain \(\xi \) is itself a Gröbner basis for \({\mathfrak {I}}\).

  2. (2)

    The set of polynomials in B which do contain \(\xi \), after \(\xi \) is removed, is itself a left Gröbner basis for \({\mathfrak {L}}\). Moreover, the left S-polynomials which occur in the left GB algorithm for \({\mathfrak {L}}\) are the “same” as those which occur in the standard two-sided ideal GB algorithm for \({\mathfrak {L}}\xi \); thus, their run times are similar.

  3. (3)

    The “natural generators” \(x_i^2-1\), \(x_i y_j -y_j x_i \), etc., are themselves a GB for the universal game ideal, \({\mathscr {I}}\). This is easily checked, say with a computer, by focusing on the subset of all polynomials involving only two variables \(x_i,y_j\).

6.2 Examples of GBs

In the forthcoming examples, we use the natural graded lexicographic order defined by

$$\begin{aligned} x_i<y_j< z_k<\xi \qquad \text {and} \quad x_0<x_1 < \cdots , \quad \text {etc.} \end{aligned}$$

Example 6.2

Consider the CHSH game. It has a perfect commuting operator strategy iff there exist signature matrices \(X_0,X_1,Y_0,Y_1\) with Xs commuting with Ys so that for some state \(\psi \) we have

$$\begin{aligned} \begin{aligned} X_0Y_0\psi&=\psi \\ X_0Y_1\psi&=\psi \\ X_1Y_0\psi&=\psi \\ X_1Y_1\psi&=-\psi . \end{aligned} \end{aligned}$$
(6.7)

The universal game ideal \({\mathscr {I}}\) for a 2-player, 2-question game, such as CHSH, with answers denoted \(x_0, x_1, y_0,y_1\) lies in the free algebra \(\mathbb k\langle x_0,x_1,y_0,y_1\rangle \) and is generated by the 4 signature properties \(x_i^2- 1, y_j^2- 1 \) and 4 commuting relations \(x_iy_j- y_jx_i\). The GB for \({\mathscr {I}}\) is itself, by item (3).

So take \({\mathfrak {I}}={\mathscr {I}}\) and take \({\mathfrak {L}}\) generated by \(x_0y_0-1,\ x_0 y_1-1,\ x_1y_0-1,\ x_1y_1+1\). This encodes the CHSH game. The Gröbner basis B for \(({\mathscr {I}}+{\mathfrak {L}}\xi )_{\mathbb k\langle x_0,x_1,y_0,y_1,\xi \rangle }\) is given by the defining relations for \({\mathscr {I}}\) and \(\xi \). Since \(\xi \in B\), by Proposition 6.1 we have \(1\in {\mathscr {I}}+{\mathfrak {L}}\), whence Eq. (6.7) does not have a solution.
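Criterion (ii) of Theorem 5.12 gives the same conclusion at the group level: a word in the clauses equals the nontrivial scalar \(-1\). A minimal sketch, with our own data structure for group elements (a phase together with an x-word and a y-word, since x's commute with y's and each generator is an involution):

```python
def reduce_word(w):
    """Normal form in a free product of Z_2's: cancel adjacent equal letters."""
    out = []
    for a in w:
        if out and out[-1] == a:
            out.pop()  # a^2 = 1
        else:
            out.append(a)
    return tuple(out)

def mul(g, h):
    # an element is (phase, x-word, y-word); x's commute with y's
    return (g[0] * h[0], reduce_word(g[1] + h[1]), reduce_word(g[2] + h[2]))

def inv(g):
    return (g[0], g[1][::-1], g[2][::-1])  # involutive generators, phase +-1

# clauses of the CHSH game: x0 y0, x0 y1, x1 y0, -x1 y1
h1 = (1, ('x0',), ('y0',)); h2 = (1, ('x0',), ('y1',))
h3 = (1, ('x1',), ('y0',)); h4 = (-1, ('x1',), ('y1',))

# h1 h3^{-1} h4 h2^{-1} = -1, so H meets the scalars nontrivially
g = mul(mul(mul(h1, inv(h3)), h4), inv(h2))
assert g == (-1, (), ())
```
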

6.3 Construction of Solutions to the Game Equations

Next, we show how to attempt finding solutions to the game equations once we have a GB. We do this with an example.

Example 6.3

(GHZ three-player game).

We seek symmetries \(x_i,y_i,z_i\), \(0\le i\le 1\), with x’s commuting with y’s and z’s, and y’s commuting with z’s, so that the four operators

$$\begin{aligned} \begin{aligned} -1 - x_0 y_0 z_1,\ -1 - x_0 y_1 z_0,\ -1 - x_1 y_0 z_0, \ 1-x_1y_1z_1 \end{aligned} \end{aligned}$$
(6.8)

have a common nonzero kernel vector.

In the free algebra \(\mathbb C\langle x,y,z\rangle \) consider its ideal \({\mathfrak {I}}\) encoding the signature relations \(x_i^2=1, y_j^2=1, z_j^2=1\) and commuting relations \(x_iy_j=y_jx_i, x_iz_j=z_jx_i,z_iy_j=y_jz_i\), and its left ideal \({\mathfrak {L}}\) generated by the polynomials in Eq. (6.8). The Gröbner basis B for \(({\mathfrak {I}}+{\mathfrak {L}}\xi )_{\mathbb C\langle x,y,z,\xi \rangle }\) is given by:

the signature relations, commuting relations, and the following 18 polynomials (all of which are divisible by \(\xi \) on the right)

$$\begin{aligned} \begin{aligned}&y_0 \xi + x_0 z_1 \xi ,\ z_1 \xi + x_0 y_0 \xi ,\ x_0 \xi + y_0 z_1 \xi ,\ y_1 \xi + x_0 z_0 \xi ,\ z_0 \xi + x_0 y_1 \xi ,\\&\quad x_0 \xi + y_1 z_0 \xi ,\ y_0 \xi + x_1 z_0 \xi ,\ z_0 \xi + x_1 y_0 \xi ,\ x_1 \xi + y_0 z_0 \xi ,\ -y_1 \xi + x_1 z_1 \xi ,\\&\quad -z_1 \xi + x_1 y_1 \xi ,\ -x_1 \xi + y_1 z_1 \xi ,\ x_0 x_1 \xi + y_1 y_0 \xi ,\ -x_0 x_1 \xi + z_0 z_1 \xi ,\\&\quad -x_1 x_0 \xi + z_1 z_0 \xi ,\ -x_0 x_1 \xi + y_0 y_1 \xi ,\ x_0 x_1 \xi + x_1 x_0 \xi ,\ -y_1 z_0 \xi + x_1 x_0 x_1 \xi . \end{aligned}\nonumber \\ \end{aligned}$$
(6.9)

We shall use B to construct a solution to our system. Let

$$\begin{aligned} V_0=\mathbb C\langle x,y,z,\xi \rangle /({\mathfrak {I}}+{\mathfrak {L}}\xi ), \end{aligned}$$

and let \(f\mapsto {\overline{f}}\) denote the quotient map \(\mathbb C\langle x,y,z,\xi \rangle \rightarrow V_0\). The subspace

$$\begin{aligned} V:=\overline{\mathbb C\langle x,y,z\rangle \xi }\subseteq V_0 \end{aligned}$$

is eight-dimensional, spanned by \(\mathcal {B}=\{\xi ,\ x_0 \xi ,\ x_1 \xi ,\ y_0 \xi ,\ y_1 \xi ,\ z_0 \xi ,\ z_1 \xi ,\ x_0 x_1 \xi \}\).

Each of the nc variables \(x_i,y_i,z_i\) acts on V from the left, and with respect to the basis \(\mathcal {B}\) we obtain matrix representations as follows:

$$\begin{aligned} {\widehat{x}}_0&= \begin{pmatrix} 0 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} -1 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} -1 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} -1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} -1 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \end{pmatrix}, \quad {\widehat{x}}_1 = \begin{pmatrix} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} -1 \\ 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} -1 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 0 &{} -1 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} -1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \end{pmatrix}, \\ {\widehat{y}}_0&= \begin{pmatrix} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} -1 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} -1 &{} 0 &{} 0 \\ 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 \\ 0 &{} 0 &{} -1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} -1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 \end{pmatrix}, \quad {\widehat{y}}_1 = \begin{pmatrix} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} -1 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} -1 \\ 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} -1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} -1 &{} 0 &{} 0 &{} 0 &{} 0 \end{pmatrix}, \\ {\widehat{z}}_0&= \begin{pmatrix} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} -1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} -1 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} -1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} -1 &{} 0 &{} 0 &{} 0 &{} 0 
&{} 0 &{} 0 \\ 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 \end{pmatrix}, \quad {\widehat{z}}_1 = \begin{pmatrix} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 0 &{} -1 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} -1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} -1 \\ 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} -1 &{} 0 &{} 0 \end{pmatrix}. \end{aligned}$$

It is easy to verify that these solve the system Eq. (6.8) with common kernel vector \(e_1\).
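As a sanity check in a different representation, one can verify Eq. (6.8) numerically with the familiar tensor-product GHZ strategy. The assignment below (question 0 \(\mapsto \) Pauli Y, question 1 \(\mapsto \) Pauli X for every player, with the GHZ state) is our own illustration; it satisfies the same equations, though we do not claim it coincides with the 8-dimensional representation constructed above.

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# One qubit per player, so x's, y's and z's commute pairwise;
# question 0 -> Pauli Y, question 1 -> Pauli X (our own assignment).
x = [kron3(Y, I2, I2), kron3(X, I2, I2)]
y = [kron3(I2, Y, I2), kron3(I2, X, I2)]
z = [kron3(I2, I2, Y), kron3(I2, I2, X)]

# GHZ state (|000> + |111>)/sqrt(2)
psi = np.zeros(8, dtype=complex)
psi[0] = psi[7] = 1 / np.sqrt(2)

# the four conditions encoded by Eq. (6.8)
assert np.allclose(x[0] @ y[0] @ z[1] @ psi, -psi)
assert np.allclose(x[0] @ y[1] @ z[0] @ psi, -psi)
assert np.allclose(x[1] @ y[0] @ z[0] @ psi, -psi)
assert np.allclose(x[1] @ y[1] @ z[1] @ psi, psi)
```
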

Remark

For a 3-XOR game, if the restricted GNS construction (as in Example 6.3) gives a finite-dimensional space of dimension \(> 8\), then the solution to the game may not be unique. Indeed, MERP is one solution; it has dimension 8. Other solutions may show up in the GNS construction when the representation block diagonalizes, with the other solutions appearing as blocks. These blocks might all be unitarily equivalent to the MERP solution; otherwise, the game has multiple solutions.

The problem of establishing uniqueness of the solution to the game has been studied and there is a standard technique available; cf. [10]. It can be tried on a game for which the bias and optimal value satisfy

$$\begin{aligned} \omega ^*- \Phi _\mathcal {G}= SOS \end{aligned}$$

exactly (as opposed to approximately as in [11, 15, 25]). Optimality implies there exists a state \(\psi \) making

$$\begin{aligned} 0= (\omega ^* -\Phi _\mathcal {G})\psi =SOS\psi . \end{aligned}$$

Thus, we get algebraic equations \(s_j \psi =0\) corresponding to \(SOS = \sum _j s_j^* s_j\). For some games, it is possible to show that these equations have only one solution (that is, a unique \(*\)-representation) and from this show that the game has a unique optimal strategy.

7 Linear Systems Games

Linear systems games are some of the first games to have their perfect commuting operator strategies characterized algebraically. This characterization, given in [9], shows that perfect commuting operator strategies for linear systems games arise from representations of a group called the solution group of the game. As a result of this characterization, deciding existence of perfect commuting operator strategies for linear systems games was shown to be equivalent to solving an instance of the word problem.

In this section, we reconcile this characterization of linear systems games with the Nullstellensatz framework described in this paper. In particular, we derive the linear systems game characterization given in [9] using Theorem 5.12 as a starting point.

To begin, we recap the definition of linear systems games.

Definition 7.1

A linear systems game \(\mathcal {G}\) is a two-player game based on a system of m linear equations in n variables computed mod r. A question to Alice is an integer \(i \in [m]\) selecting an equation in the linear system. A question to Bob selects a variable \(j \in [n]\). Alice’s response consists of a vector \(\vec {a}_i = (a_{i_1}, a_{i_2}, \ldots , a_{i_k})~\in ~[r]^k\) giving values for all the variables contained in the i-th equation: here \(\vec {i} = (i_1, \ldots ,i_k)\) indexes these variables. Bob’s response is an integer \(b_j \in [r]\) assigning a value to the single variable indexed by j. The players win each round provided the variable assignments specified by \((a_{i_1}, a_{i_2}, \ldots , a_{i_k})\) satisfy the i-th equation and \(a_{i_t} = b_j\) whenever \(i_t=j\).
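To fix ideas, the winning condition can be written as a small predicate. The two-equation system mod 2 below and all names (`T`, `d`, `s`, `wins`) are our own toy illustration, not notation from the text:

```python
r = 2                                       # modulus
T = {0: [0, 1], 1: [1, 2]}                  # T(i): variables in equation i
d = {0: {0: 1, 1: 1}, 1: {1: 1, 2: 1}}      # coefficients d_t^{(i)}
s = {0: 0, 1: 1}                            # right-hand sides s_i

def wins(i, a, j, b):
    """Alice gets equation i and answers a (values for the variables T(i));
    Bob gets variable j and answers b."""
    eq_ok = sum(d[i][t] * a[t] for t in T[i]) % r == s[i]
    consistent = (j not in T[i]) or (a[j] == b)
    return eq_ok and consistent

assert wins(0, {0: 1, 1: 1}, 1, 1)          # 1 + 1 = 0 mod 2, answers agree
assert not wins(0, {0: 1, 1: 0}, 0, 1)      # equation fails
```
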

Following the notation laid out in Sect. 3.2 note the game algebra \(\mathscr {U}\) is generated by elements \(e(1)^{i}_{\vec {a}_i}\) and \(e(2)^{j}_{b_j}\) with \(\vec {a}_i\) ranging over all possible vectors of responses to question i and \(b_j\) ranging over all possible values Bob can give for variable j.

Next, we identify elements of \(\mathscr {U}\) which correspond to the value Alice gives to a single variable in the system of equations. For any \(i \in [m]\) let \(T(i) = \{t_1, t_2, \ldots , t_k\}\) list all the variables contained in the i-th equation in the system of equations associated with the game. Then, for any \(t \in T(i)\) and \(r' \in [r]\) define

$$\begin{aligned} e(1)^{i}_{a_{t} = r'} = \sum _{\vec {a}_i : a_{t} = r'} e(1)^{i}_{\vec {a}_i}. \end{aligned}$$
(7.1)

It is easy to check that the elements \(e(1)^{i}_{a_{t} = r'}\) also satisfy the relations of projectors,

$$\begin{aligned} \left( e(1)^{i}_{a_{t} = r'}\right) ^2&= \left( e(1)^{i}_{a_{t} = r'}\right) ^* = e(1)^{i}_{a_{t} = r'}, \end{aligned}$$
(7.2a)
$$\begin{aligned} \sum _{r' \in [r]} e(1)^{i}_{a_{t} = r'}&= 1. \end{aligned}$$
(7.2b)

Additionally, orthogonality of the \(e(1)^{i}_{\vec {a}_i}\) elements gives that for any \(i \in [m]\), \(t_1, t_2 \in T(i)\), and \(r_1, r_2 \in [r]\) we have

$$\begin{aligned} e(1)^{i}_{a_{t_1} = r_1}e(1)^{i}_{a_{t_2} = r_2} = e(1)^{i}_{a_{t_2} = r_2}e(1)^{i}_{a_{t_1} = r_1} = \sum _{\begin{array}{c} \vec {a}_i: a_{t_1} = r_1\\ \text { and } a_{t_2} = r_2 \end{array}}e(1)^{i}_{\vec {a}_i} \end{aligned}$$
(7.3)

Finally we note that, also by orthogonality of the \(e(1)^{i}_{\vec {a}_i}\) elements we have for any i with \(T(i) = \{t_1, \ldots , t_k\}\) and \(\vec {a}_i = (r_{t_1}, r_{t_2}, \ldots , r_{t_k})\) that

$$\begin{aligned} \prod _{t \in T(i)} e(1)^{i}_{a_{t} = r_{t}} = e(1)^{i}_{\vec {a}_i}. \end{aligned}$$
(7.4)

From here, we can define cyclic unitary generators analogously to those in Sect. 3.3, with

$$\begin{aligned} c({1})_{t}^{(j)}&:= \sum _{r'= 1}^r \exp \left( \frac{2\pi r' i}{r}\right) e(1)^{j}_{a_{t} = r'}, \end{aligned}$$
(7.5a)
$$\begin{aligned} c({2})_{j}&:= \sum _{a=1}^r \exp \left( \frac{2\pi a i}{r}\right) e(2)^{j}_{a}. \end{aligned}$$
(7.5b)

These unitary generators satisfy the same relations as the generators defined in Sect. 3.3, namely,

$$\begin{aligned} \left( c({1})_{t}^{(i)} \right) ^r&= \left( c({2})_{j}\right) ^r = 1, \end{aligned}$$
(7.6a)
$$\begin{aligned} \left( c({1})_{t}^{(i)} \right) ^*c({1})_{t}^{(i)}&= \left( c({2})_{j}\right) ^*c({2})_{j} = 1, \end{aligned}$$
(7.6b)
$$\begin{aligned} c({1})_{t}^{(i)}c({2})_{j}&= c({2})_{j}c({1})_{t}^{(i)} \;\;\;\; \forall \; i \in [m], \, j,t \in [n]. \end{aligned}$$
(7.6c)

In addition, Eq. (7.4) gives that

$$\begin{aligned} c({1})_{t_1}^{(i)} c({1})_{t_2}^{(i)} = c({1})_{t_2}^{(i)} c({1})_{t_1}^{(i)} \;\;\;\; \forall \; i \in [m], \, t_1, t_2 \in T(i). \end{aligned}$$
(7.6d)

We can then define \(G_{\text {ls}}\) to be the group generated by the \(c({1})_{t}^{(i)}\) and \(c({2})_{j}\).
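The passage in Eqs. (7.5)–(7.6) between projections and cyclic unitaries is a finite Fourier transform and can be sanity-checked numerically. The diagonal toy model below is our own; it verifies \(c^r = 1\), unitarity, and that the inverse transform recovers the projections:

```python
import numpy as np

r = 3  # toy modulus
# e[a]: orthogonal projections summing to the identity (diagonal toy model)
e = [np.diag([1.0 + 0j if k == a else 0 for k in range(r)]) for a in range(r)]
# c = sum_a exp(2 pi i a / r) e_{a}, with a running over 1, ..., r
c = sum(np.exp(2j * np.pi * a / r) * e[a - 1] for a in range(1, r + 1))

assert np.allclose(np.linalg.matrix_power(c, r), np.eye(r))  # c^r = 1
assert np.allclose(c.conj().T @ c, np.eye(r))                # c* c = 1
# inverse transform: e_a = (1/r) sum_k exp(-2 pi i a k / r) c^k
for a in range(1, r + 1):
    rec = sum(np.exp(-2j * np.pi * a * k / r) * np.linalg.matrix_power(c, k)
              for k in range(r)) / r
    assert np.allclose(rec, e[a - 1])
```
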

Now, we show that linear systems games are torically determined games.

Theorem 7.2

Let \(\mathcal {G}_{ls}\) be a linear systems game based on a system of m equations. For any \(j \in [m]\) write the j-th equation of the system as

$$\begin{aligned} \sum _{t \in T(j)} d^{(j)}_t y_t = s_j \pmod {r} \end{aligned}$$
(7.7)

where \(T(j)\) indexes all the variables \(y_t\) contained in the j-th equation of the system and \(d_t^{(j)}, s_j \in [r]\). Then, \(\mathcal {G}_{ls}\) is a torically determined game, determined by the elements

$$\begin{aligned} \mathcal {F}= \left\{ \exp ( - \frac{2 \pi s_j i}{r}) \prod _{t \in T(j)} \left( c({1})_{t}^{(j)}\right) ^{d_t^{(j)}} - 1\right\} _{j \in [m]} \bigcup \ \left\{ c({1})_{t}^{(j)} \left( c({2})_{t}\right) ^* - 1\right\} _{t \in [n], j \in [m]}. \end{aligned}$$
(7.8)

Proof

First we note that expanding out the product and applying Eq. (7.4) gives

$$\begin{aligned} \prod _{t \in T(j)} \left( c({1})_{t}^{(j)}\right) ^{d_t^{(j)}}&= \prod _{t \in T(j)} \left( \sum _{r'=1}^r \exp \left( \frac{2\pi d_t^{(j)} r' i}{r}\right) e(1)^{j}_{a_{t} = r'} \right) \end{aligned}$$
(7.9)
$$\begin{aligned}&=\sum _{\vec {a}_j} \exp ( \frac{2 \pi i}{r} \sum _{t \in T(j)} d_t^{(j)} a^{(j)}_t ) e(1)^{j}_{\vec {a}_j} \end{aligned}$$
(7.10)

where we wrote \(\vec {a_j} = (a^{(j)}_{t_1}, a^{(j)}_{t_2}, \ldots , a^{(j)}_{t_k})\) for \(T(j) = \{t_1, t_2, \ldots , t_k\}\). Then, define

$$\begin{aligned} A(j) = \left\{ \vec {a}_j : \sum _{t \in T(j)} d_t^{(j)} a^{(j)}_t = s_j \pmod r \right\} \end{aligned}$$
(7.11)

to be the collection of winning responses Alice can send to question j. Then, as a consequence of Eq. (7.10), we have for any commuting operator strategy \((\pi ,\psi )\) that the condition

$$\begin{aligned} \pi \left( \prod _{t \in T(j)} \left( c({1})_{t}^{(j)}\right) ^{d_t^{(j)}} \right) \psi&= \exp \left( \frac{2 \pi i s_j}{r}\right) \psi \end{aligned}$$
(7.12a)

is equivalent to

$$\begin{aligned} \pi \left( \sum _{\vec {a}_j \in A(j)} e(1)^{j}_{\vec {a}_j} \right) \psi&= \psi . \end{aligned}$$
(7.12b)

Hence, the condition

$$\begin{aligned} \pi \left( \exp (- \frac{2 \pi s_j i}{r}) \prod _{t \in T(j)} \left( c({1})_{t}^{(j)}\right) ^{d_t^{(j)}} - 1\right) \psi = 0 \end{aligned}$$
(7.13)

for all \(j \in [m]\) ensures that Alice’s responses in the game \(\mathcal {G}_{ls}\) are always winning. Similarly, the condition

$$\begin{aligned} \pi \left( c({1})_{t}^{(j)} \left( c({2})_{t}\right) ^* \right) \psi = \psi \end{aligned}$$
(7.14)

ensures that

$$\begin{aligned} \sum _{r' \in [r]}\pi \left( e(1)^{j}_{a_{t} = r'}e(2)^{t}_{r'}\right) \psi = \psi \end{aligned}$$
(7.15)

and thus Bob’s responses are also always winning. Thus, it is clear that \(\mathcal {G}_{ls}\) is determined by \(\mathcal {F}\).

It remains to show that all the elements of \(\mathcal {F}\) are of the correct form. But we have that the elements \(c({1})_{t}^{(j)}\) and \(c({2})_{t}\) generate the elements \(e(1)^{j}_{a_{t} = r'}\) and \(e(2)^{j}_{a}\) through the same inverse transformation as given for the cyclic unitary generators in Sect. 3.3.1. And we also know that the elements \(e(1)^{j}_{a_{t} = r'}\) generate the elements \(e(1)^{j}_{\vec {a}_j}\) by Eq. (7.4). Then, \(\mathscr {U}= \mathbb {C}[G_{\text {ls}}]\) and the result is clear. \(\square \)

Theorem 7.2 reduces the question of whether or not a game has a perfect commuting operator strategy to an instance of the subgroup membership problem. The next theorem lets us reduce further to the standard formulation in terms of the word problem. We prepare with a few definitions, following wherever possible the conventions laid out in Sect. 5.3.1.

Definition 7.3

Let \(\mathcal {G}_{ls}\) be a linear systems game based on a system of m equations defined as in Theorem 7.2 and introduce the shorthand \(\zeta = \exp (-\frac{2 \pi i}{r})\) to simplify notation. Then, define the following groups:

  1. (1)

    \(G_{\text {ls}}^{\text {aug}}\) to be the subgroup of \(\mathbb {C}[G_{ls}]\) generated by the elements \(G_{ls} \cup \{\zeta \}\);

  2. (2)

    \(H_{\text {ls}}< G_{\text {ls}}^{\text {aug}}\) to be the subgroup of \(G_{\text {ls}}^{\text {aug}}\) generated by the set of elements

    $$\begin{aligned} \left\{ \zeta ^{s_i} \prod _{t \in T(i)} \left( c({1})_{t}^{(i)}\right) ^{d_t^{(i)}}\right\} _{i \in [m]} \bigcup \ \left\{ c({2})_{t}\left( c({1})_{t}^{(i)}\right) ^{-1} \right\} _{i \in [m], t \in [n]}; \end{aligned}$$
    (7.16)
  3. (3)

    \(G_{\text {ls}}^{\text {aug}}(2)\) to be the subgroup of \(G_{\text {ls}}^{\text {aug}}\) generated by the elements

    $$\begin{aligned} \{c({2})_{t}\}_{t \in [n]} \cup \{\zeta \}; \end{aligned}$$
    (7.17)
  4. (4)

    \(N_{\text {ls}}(2)\) to be the normal subgroup of \(G_{\text {ls}}^{\text {aug}}(2)\) generated by the elements

    $$\begin{aligned} \left\{ \zeta ^{s_i} \prod _{t \in T(i)} \left( c({2})_{t}\right) ^{d_t^{(i)}}\right\} _{i \in [m]} \bigcup \ \left\{ c({2})_{t}c({2})_{t'}c({2})_{t}^{-1}c({2})_{t'}^{-1}\right\} _{t, t' \in T(i), i \in [m]}. \end{aligned}$$
    (7.18)

Theorem 7.4

A linear systems game \(\mathcal {G}_{ls}\), defined as in Theorem 7.2, has a perfect commuting operator strategy iff any of the following equivalent conditions is satisfied:

  1. (i)

    \(\zeta ^{s} \notin H_{\text {ls}}\) for all \(s \in [r-1]\);

  2. (ii)

    \(\zeta ^{s} \notin N_{\text {ls}}(2)\) for all \(s \in [r-1]\);

  3. (iii)

    Letting \([\zeta ^{s}]\) denote the image of \(\zeta ^{s} \in G_{\text {ls}}^{\text {aug}}(2)\) in the group \(G_{\text {ls}}^{\text {aug}}(2)/N_{\text {ls}}(2)\), then \([\zeta ^{s}] \ne 1\) for all \(s \in [r-1]\).

Proof

We first note that condition (i) is equivalent to the existence of a perfect commuting operator strategy by Theorems 7.2 and 5.12.

To show (i) \(\Rightarrow \) (ii) we show \(N_{\text {ls}}(2)\) is contained in \(H_{\text {ls}}\). First note that for any \(i \in [m]\) we have

$$\begin{aligned}&\left( \zeta ^{s_i} \prod _{t \in T(i)} \left( c({1})_{t}^{(i)}\right) ^{d_t^{(i)}}\right) \prod _{t \in T(i)} \left( \left( c({1})_{t}^{(i)}\right) ^{-1} c({2})_{t}\right) ^{d_t^{(i)}} = \zeta ^{s_i} \prod _{t \in T(i)} \left( c({2})_{t}\right) ^{d_t^{(i)}} \in H_{\text {ls}}\end{aligned}$$
(7.19)

and, for any \(i \in [m]\) and \(t_1, t_2 \in T(i)\) we can multiply together generators of the group \(H_{\text {ls}}\) to find

$$\begin{aligned}&\left( c({2})_{t_1}\left( c({1})_{t_1}^{(i)}\right) ^{-1} \right) \left( c({2})_{t_2}\left( c({1})_{t_2}^{(i)}\right) ^{-1} \right) \nonumber \\&\qquad \quad \left( c({2})_{t_1}\left( c({1})_{t_1}^{(i)}\right) ^{-1} \right) ^{-1} \left( c({2})_{t_2}\left( c({1})_{t_2}^{(i)}\right) ^{-1} \right) ^{-1} \nonumber \\&\quad = \left( c({2})_{t_1}c({2})_{t_2}c({2})_{t_1}^{-1}c({2})_{t_2}^{-1}\right) \left( \left( c({1})_{t_1}^{(i)}\right) ^{-1}\left( c({1})_{t_2}^{(i)}\right) ^{-1} c({1})_{t_1}^{(i)} c({1})_{t_2}^{(i)}\right) . \end{aligned}$$
(7.20a)

Now, we cancel the \(c({1})_{t_1}^{(i)}\) terms using Eq. (7.6d) to get

$$\begin{aligned} c({2})_{t_1}c({2})_{t_2}c({2})_{t_1}^{-1}c({2})_{t_2}^{-1} \in H_{\text {ls}}. \end{aligned}$$
(7.20b)

Finally, we note that for any elements \(w(2) \in H_{\text {ls}}\cap G_{\text {ls}}^{\text {aug}}(2)\) and \(c({2})_{t}\) generating \(G_{\text {ls}}^{\text {aug}}(2)\) we have

$$\begin{aligned} \left( c({2})_{t}\left( c({1})_{t}^{(i)}\right) ^{-1}\right) w(2) \left( c({2})_{t}\left( c({1})_{t}^{(i)}\right) ^{-1} \right) ^{-1} = c({2})_{t} w(2) c({2})_{t}^{-1} \in H_{\text {ls}}\end{aligned}$$
(7.21)

and thus, the normal closure in \(G_{\text {ls}}^{\text {aug}}(2)\) of any element in \(H_{\text {ls}}\cap G_{\text {ls}}^{\text {aug}}(2)\) is also contained in \(H_{\text {ls}}\) (recall that the element \(\zeta \) is central). We conclude that \(N_{\text {ls}}(2) \subseteq H_{\text {ls}}\), as desired.

To show that (ii) \(\Rightarrow \) (i), we argue by contraposition and assume \(\zeta ^s \in H_{\text {ls}}\) for some \(s \in [r-1]\). Then, there exist elements \(w(1)_1, \ldots , w(1)_L\) and \(v(1,2)_1, \ldots , v(1,2)_{L+1}\) with each \(w(1)_\ell \) equal to a product of elements or inverses of elements in the set

$$\begin{aligned} \left\{ \zeta ^{s_i} \prod _{t \in T(i)} \left( c({1})_{t}^{(i)}\right) ^{d_t^{(i)}}\right\} _{i \in [m]} \end{aligned}$$
(7.22)

each \(v(1,2)_{\ell '} \) equal to a product of elements or inverses of elements in the set

$$\begin{aligned} \left\{ c({2})_{t}\left( c({1})_{t}^{(i)}\right) ^{-1} \right\} _{i \in [m], t \in [n]} \end{aligned}$$
(7.23)

and

$$\begin{aligned} \zeta ^s = v(1,2)_1 w(1)_1 v(1,2)_2 w(1)_2 \cdots v(1,2)_L w(1)_L v(1,2)_{L+1}. \end{aligned}$$
(7.24)

Then, let the elements \(w(2)_\ell \) for \(\ell \in [L]\) be obtained by choosing some product of elements drawn from the set defined in Eq. (7.22) and equal to \(w(1)_\ell \), then making the replacement

$$\begin{aligned} \zeta ^{s_i} \prod _{t \in T(i)} \left( c({1})_{t}^{(i)}\right) ^{d_t^{(i)}} \rightarrow \zeta ^{s_i} \prod _{t \in T(i)} \left( c({2})_{t}\right) ^{d_t^{(i)}} \end{aligned}$$
(7.25)

to each element in the product. Similarly, let elements \(v(1)_1, \ldots , v(1)_{L+1}\) be obtained from \(v(1,2)_\ell \) by the replacement

$$\begin{aligned} c({2})_{t}\left( c({1})_{t}^{(i)}\right) ^{-1} \rightarrow \left( c({1})_{t}^{(i)}\right) ^{-1} \end{aligned}$$
(7.26)

and elements \(v(2)_1, \ldots , v(2)_{L+1}\) be obtained via the replacement

$$\begin{aligned} c({2})_{t}\left( c({1})_{t}^{(i)}\right) ^{-1} \rightarrow c({2})_{t} \end{aligned}$$
(7.27)

applied to some fixed products of elements from Eq. (7.23) equal to \(v(1,2)_\ell \).

Then, commuting \(c({1})_{t}\) and \(c({2})_{t}\) elements gives

$$\begin{aligned} \zeta ^{s}&= v(1,2)_1 w(1)_1 v(1,2)_2 w(1)_2 \cdots v(1,2)_L w(1)_L v(1,2)_{L+1} \nonumber \\&= v(1)_1 w(1)_1 v(1)_2 w(1)_2 \cdots v(1)_L w(1)_L v(1)_{L+1} v(2)_1 v(2)_2 \cdots v(2)_{L+1}. \end{aligned}$$
(7.28)

From this, we conclude

$$\begin{aligned} v(2)_1 v(2)_2 \cdots v(2)_{L+1} = 1 \end{aligned}$$
(7.29)

since the word \(v(2)_1 v(2)_2 \cdots v(2)_{L+1}\) contains only \(c({2})_{t}\) elements (and no product of \(c({2})_{t}\) elements and their inverses can equal \(\zeta \) or any product of \(c({1})_{t}\) elements) and hence

$$\begin{aligned} v(1)_1 w(1)_1 v(1)_2 w(1)_2 \cdots v(1)_L w(1)_L v(1)_{L+1} = \zeta ^s. \end{aligned}$$
(7.30)

But then we also have

$$\begin{aligned} v(2)_1 w(2)_1 v(2)_2 w(2)_2 \cdots v(2)_L w(2)_L v(2)_{L+1} = \zeta ^s \end{aligned}$$
(7.31)

since the relations between \(c({2})_{t}\) elements are the same as those between the \(c({1})_{t}^{(i)}\) elements. Then, the calculation

$$\begin{aligned} \zeta ^s&= \left( \prod _{\ell \in [L]} v(2)_{\ell } w(2)_{\ell }\right) v(2)_{L + 1} \end{aligned}$$
(7.32a)
$$\begin{aligned}&= \left( \prod _{\ell \in [L]} \left( \prod _{\ell '< \ell } v(2)_{\ell '} \right) w(2)_{\ell } \left( \prod _{\ell ' < \ell } v(2)_{\ell '} \right) ^{-1} \right) v(2)_1 v(2)_2 \cdots v(2)_{L + 1} \end{aligned}$$
(7.32b)
$$\begin{aligned}&= \left( \prod _{\ell \in [L]} \left( \prod _{\ell '< \ell } v(2)_{\ell '} \right) w(2)_{\ell } \left( \prod _{\ell ' < \ell } v(2)_{\ell '} \right) ^{-1} \right) \in N_{\text {ls}}(2) \end{aligned}$$
(7.32c)

shows \(\zeta ^s \in N_{\text {ls}}(2)\), as desired.

Finally we note (ii) \(\Leftrightarrow \) (iii) by the definition of a quotient group. \(\square \)

8 Nullstellensatz Applied to Synchronous Games

As discussed in the Introduction, a two-player game is called synchronous if it includes “consistency checks” where Alice and Bob are sent the same question and win iff they send the same response. Below we give a formal definition of synchronous games using the scoring function and probability distribution description of games introduced in Sect. 3.2.2.

Definition 8.1

A two-player, n-question, m-response game \(\mathcal {G}\) defined by scoring function V and question distribution \(\mu \) is called a synchronous game iff, for all \(i \in [n]\) we have

$$\begin{aligned} \mu (i,i) > 0 \end{aligned}$$
(8.1)

and

$$\begin{aligned} V(a,b | i, i) = \delta _{a,b} \end{aligned}$$
(8.2)

where \(\delta _{a,b}\) denotes the Kronecker delta function.
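The two defining conditions of Definition 8.1 are easy to check mechanically. A minimal sketch, assuming (as our own convention, not the paper's) that V and mu are stored as dictionaries with 0-based question and response labels:

```python
import itertools

def is_synchronous(V, mu, n, m):
    """Check Definition 8.1 for an n-question, m-response game.

    V maps (a, b, i, j) -> score; mu maps (i, j) -> probability.
    (Dictionary storage and 0-based labels are conventions of this sketch.)
    """
    for i in range(n):
        if mu.get((i, i), 0) <= 0:      # Eq. (8.1): same-question pairs must occur
            return False
        for a, b in itertools.product(range(m), repeat=2):
            # Eq. (8.2): on question pair (i, i) the score is the Kronecker delta
            if V[(a, b, i, i)] != (1 if a == b else 0):
                return False
    return True
```

For instance, the one-question, two-response consistency check with V(a, b | 0, 0) = delta_{a,b} and mu(0, 0) = 1 passes, while changing any off-diagonal score makes the check fail.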

An immediate consequence of Definition 8.1 in terms of the notion of invalid determining set introduced in Sect. 3.4.1 is the following claim.

Claim 8.2

Let \(\mathcal {G}\) be a synchronous game and \(\mathcal {N}\) be the invalid elements of \(\mathcal {G}\). Then, we have

$$\begin{aligned} e(1)^{i}_{a}e(2)^{i}_{b} \in {{\mathfrak {L}}}(\mathcal {N}) \end{aligned}$$
(8.3)

for all \(i \in [n]\) and \(a, b \in [m]\) with \(a \ne b\).

Proof

Immediate since for all \(i \in [n]\) and \(a, b \in [m]\) with \(a \ne b\) we have \( V(a,b | i, i) = 0\) and \(\mu (i,i) > 0\) by definition of a synchronous game, hence \(e(1)^{i}_{a}e(2)^{i}_{b} \in \mathcal {N}\) by the definition of \(\mathcal {N}\). \(\square \)

Synchronous games can be studied using the standard techniques of this paper, where we consider representations of the algebra \(\mathscr {U}\) which satisfy directional zero constraints. However, Paulsen and various collaborators [16, Theorem 3.2], [27] found an equivalent, simpler formulation using a “smaller” algebra which we denote \(\mathscr {U}(1)\), and using hard zeros arising from an ideal \({\mathfrak {I}}({{\,\textrm{synch}\,}}\mathcal {B}(1))\) of \(\mathscr {U}(1)\). The papers [20, 27] also show that the synchronous value of a game is given by the trace of a bilinear function on \(\mathscr {U}(1)\). The first goal in this section is to show how this reconciles with our Nullstellensätze. This is the substance of Theorem 8.3 in Sect. 8.1.

Next, in Sect. 8.2 we turn to the tracial Nullstellensatz and recall that the theory of Null- and Positivstellensätze appropriate to tracial situations goes back to [21] and is well developed in subsequent papers, which are summarized in [4, Chapter 5]; see also [19].

Finally, in Sect. 8.3 we demonstrate (on a graph coloring problem) a computer algorithm based on Gröbner bases plus a Nullstellensatz.

8.1 Synchronous Two-Player Games in Terms of an Algebra

In what follows let:

  1. (1)

    \(\mathscr {U}\) be the universal game algebra;

  2. (2)

    \({{\mathfrak {L}}}(\mathcal {N})\) be the left ideal of \(\mathscr {U}\) generated by the set \(\left\{ \!\prod _\alpha e(\alpha )^{i(\alpha )}_{a(\alpha )}{:}(\vec {i},\vec {a}) \in \mathcal {N}\!\right\} \);

  3. (3)

    \({{\mathfrak {L}}}({{\,\textrm{synch}\,}}\mathcal {B})\) be the left ideal of \(\mathscr {U}\) generated by the set

    $$\begin{aligned}{} & {} \left\{ \prod _\alpha e(1)^{i(\alpha )}_{a(\alpha )} : (\vec {i},\vec {a}) \in \mathcal {N}\right\} \cup \left\{ \prod _\alpha e(2)^{i(\alpha )}_{a(\alpha )} : (\vec {i},\vec {a}) \in \mathcal {N}\right\} \\{} & {} \quad \cup \left\{ e(1)^{i}_{a} - e(2)^{i}_{a} : i \in [n], a \in [m] \right\} ; \end{aligned}$$
  4. (4)

    \(\mathscr {U}(1)\) be the subalgebra of \(\mathscr {U}\) generated by \(e(1)^{i}_{a}\) only;

  5. (5)

    \({\mathfrak {I}}({{\,\textrm{synch}\,}}\mathcal {B}(1))\) be the two-sided ideal of \(\mathscr {U}(1)\) generated by \(\Big \{\prod _\alpha e(1)^{i(\alpha )}_{a(\alpha )} : (\vec {i},\vec {a}) \in \mathcal {N}\Big \}.\)

Now, we state the objective of this section.

Theorem 8.3

A synchronous game characterized by a set \(\mathcal {N}\) of invalid responses has a perfect commuting operator strategy iff any of the following equivalent conditions is satisfied:

  1. (i)

    There exists a \(*\)-representation \(\pi : \mathscr {U}\rightarrow \mathcal {B}({\mathcal {H}})\) and a state \(\psi \in {\mathcal {H}}\) satisfying

    $$\begin{aligned} \pi ({{\mathfrak {L}}}(\mathcal {N})) \psi = \{0\}; \end{aligned}$$
    (8.4)
  2. (i’)

    \({{\mathcal {Z}}}_{\text {dir}}^{\textrm{re},\mathscr {U}}(\mathcal {N})\ne \varnothing \);

  3. (ii)

    There exists a \(*\)-representation \(\pi : \mathscr {U}\rightarrow \mathcal {B}({\mathcal {H}})\) and a state \(\psi \in {\mathcal {H}}\) satisfying

    $$\begin{aligned} \pi ({{\mathfrak {L}}}({{\,\textrm{synch}\,}}\mathcal {B})) \psi = \{0\}; \end{aligned}$$
    (8.5)
  4. (ii’)

    \({{\mathcal {Z}}}_{\text {dir}}^{\textrm{re},\mathscr {U}}({{\,\textrm{synch}\,}}\mathcal {B})\ne \varnothing \);

  5. (iii)

    There exists a \(*\)-representation \(\pi ': \mathscr {U}(1) \rightarrow \mathcal {B}({\mathcal {H}})\) and a tracial state \(\psi \in {\mathcal {H}}\) satisfying

$$\begin{aligned} \pi '({\mathfrak {I}}({{\,\textrm{synch}\,}}\mathcal {B}(1))) \psi = \{0\}; \end{aligned}$$
    (8.6)
  6. (iv)

    There exists a \(*\)-representation \(\pi '\) of \(\mathscr {U}(1)\) mapping into a tracial von Neumann algebra \({\mathcal {W}}\subseteq \mathcal {B}({\mathcal {H}})\) satisfying

    $$\begin{aligned} \pi '({\mathfrak {I}}({{\,\textrm{synch}\,}}\mathcal {B}(1))) = \{0\}; \end{aligned}$$
    (8.7)

The proof of the equivalence of (i), (ii), and (iii) is based on several lemmas, which we now give. The core of the proofs of the lemmas comes from [27], but we include a self-contained account for clarity’s sake.

Lemma 8.4

If the set of invalid responses \(\mathcal {N}\) comes from a synchronous game, we have \({{\mathfrak {L}}}(\mathcal {N}) = {{\mathfrak {L}}}({{\,\textrm{synch}\,}}\mathcal {B})\).

Proof

We first show that \(e(1)^{i}_{a} - e(2)^{i}_{a} \in {{\mathfrak {L}}}(\mathcal {N})\). Because \(\mathcal {N}\) corresponds to a synchronous game, we have

$$\begin{aligned} e(1)^{i}_{a}e(2)^{i}_{b} \in \mathcal {N}\subseteq {{\mathfrak {L}}}(\mathcal {N}) \end{aligned}$$
(8.8)

for all \(b \ne a\). Then, we also have

$$\begin{aligned} {{\mathfrak {L}}}(\mathcal {N}) \ni \sum _{b\ne a} e(1)^{i}_{a}e(2)^{i}_{b}&= e(1)^{i}_{a} \sum _{b\ne a} e(2)^{i}_{b} = e(1)^{i}_{a} (1 - e(2)^{i}_{a}) \end{aligned}$$
(8.9a)
$$\begin{aligned}&= e(1)^{i}_{a} - e(1)^{i}_{a}e(2)^{i}_{a} . \end{aligned}$$
(8.9b)

Similarly, summing over the e(1) terms (and recalling that e(1) and e(2) commute) gives:

$$\begin{aligned} \sum _{b\ne a} e(1)^{i}_{b}e(2)^{i}_{a}&= e(2)^{i}_{a} (1 - e(1)^{i}_{a}) = e(2)^{i}_{a} - e(1)^{i}_{a}e(2)^{i}_{a} \in {{\mathfrak {L}}}(\mathcal {N}). \end{aligned}$$
(8.10)

Subtracting Eq. (8.10) from Eq. (8.9b) gives

$$\begin{aligned} e(1)^{i}_{a} - e(2)^{i}_{a} \in {{\mathfrak {L}}}(\mathcal {N}). \end{aligned}$$
(8.11)

Next note that for any \(e(1)^{i}_{a} e(2)^{j}_{b} \in \mathcal {N}\) we have

$$\begin{aligned} e(1)^{i}_{a} e(1)^{j}_{b} = e(1)^{i}_{a} \big (e(1)^{j}_{b} - e(2)^{j}_{b}\big ) + e(1)^{i}_{a} e(2)^j_b\in {{\mathfrak {L}}}(\mathcal {N}). \end{aligned}$$
(8.12)

A similar argument shows \(e(2)^{i}_{a} e(2)^{j}_{b} \in {{\mathfrak {L}}}(\mathcal {N})\). Then, all the generators of \({{\mathfrak {L}}}({{\,\textrm{synch}\,}}\mathcal {B})\) are contained in \({{\mathfrak {L}}}(\mathcal {N})\) and we conclude \({{\mathfrak {L}}}({{\,\textrm{synch}\,}}\mathcal {B}) \subseteq {{\mathfrak {L}}}(\mathcal {N})\).

To prove the converse, we observe that for any \(e(1)^{i}_{a} e(2)^{j}_{b} \in \mathcal {N}\) we have

$$\begin{aligned} e(1)^{i}_{a} e(2)^{j}_{b} = e(1)^{i}_{a} e(1)^{j}_{b} - e(1)^{i}_{a} \big (e(1)^{j}_{b} - e(2)^{j}_{b}\big ) \in {{\mathfrak {L}}}({{\,\textrm{synch}\,}}\mathcal {B}), \end{aligned}$$
(8.13)

which gives \({{\mathfrak {L}}}(\mathcal {N}) \subseteq {{\mathfrak {L}}}({{\,\textrm{synch}\,}}\mathcal {B})\) and completes the proof. \(\square \)

Lemma 8.5

\({\mathfrak {I}}({{\,\textrm{synch}\,}}\mathcal {B}(1)) \subseteq {{\mathfrak {L}}}({{\,\textrm{synch}\,}}\mathcal {B})\).

Proof

First consider a monomial \(m(1) = e(1)^{i_1}_{a_1} \cdots e(1)^{i_t}_{a_t} \in \mathscr {U}(1)\) and note

$$\begin{aligned} e(1)^{i_1}_{a_1} \cdots e(1)^{i_t}_{a_t}&= e(1)^{i_1}_{a_1} \cdots e(1)^{i_{t-1}}_{a_{t-1}} (e(1)^{i_t}_{a_t} - e(2)^{i_t}_{a_t}) + e(1)^{i_1}_{a_1} \cdots e(1)^{i_{t-1}}_{a_{t-1}} e(2)^{i_t}_{a_t} \end{aligned}$$
(8.14a)
$$\begin{aligned}&= e(1)^{i_1}_{a_1} \cdots e(1)^{i_{t-1}}_{a_{t-1}} (e(1)^{i_t}_{a_t} - e(2)^{i_t}_{a_t}) + e(2)^{i_t}_{a_t} e(1)^{i_1}_{a_1} \cdots e(1)^{i_{t-1}}_{a_{t-1}}, \end{aligned}$$
(8.14b)

whence

$$\begin{aligned} e(1)^{i_1}_{a_1} \cdots e(1)^{i_t}_{a_t} - e(2)^{i_t}_{a_t} e(1)^{i_1}_{a_1} \cdots e(1)^{i_{t-1}}_{a_{t-1}} \in {{\mathfrak {L}}}({{\,\textrm{synch}\,}}\mathcal {B}). \end{aligned}$$

Applying this inductively we see

$$\begin{aligned}&e(1)^{i_1}_{a_1} \cdots e(1)^{i_t}_{a_t} - e(2)^{i_t}_{a_t} e(1)^{i_1}_{a_1} \cdots e(1)^{i_{t-1}}_{a_{t-1}} \end{aligned}$$
(8.15a)
$$\begin{aligned}&\quad + e(2)^{i_t}_{a_t} \left( e(1)^{i_1}_{a_1} \cdots e(1)^{i_{t-1}}_{a_{t-1}} - e(2)^{i_{t-1}}_{a_{t-1}}e(1)^{i_1}_{a_1} \cdots e(1)^{i_{t-2}}_{a_{t-2}}\right) \end{aligned}$$
(8.15b)
$$\begin{aligned}&\quad \quad + e(2)^{i_t}_{a_t}e(2)^{i_{t-1}}_{a_{t-1}} \left( e(1)^{i_1}_{a_1} \cdots e(1)^{i_{t-2}}_{a_{t-2}} - e(2)^{i_{t-2}}_{a_{t-2}} e(1)^{i_1}_{a_1} \cdots e(1)^{i_{t-3}}_{a_{t-3}}\right) \end{aligned}$$
(8.15c)
$$\begin{aligned}&\quad \quad \quad + \cdots \end{aligned}$$
(8.15d)
$$\begin{aligned}&= e(1)^{i_1}_{a_1} \cdots e(1)^{i_t}_{a_t} - e(2)^{i_t}_{a_t}e(2)^{i_{t-1}}_{a_{t-1}} \cdots e(2)^{i_1}_{a_1} \in {{\mathfrak {L}}}({{\,\textrm{synch}\,}}\mathcal {B}). \end{aligned}$$
(8.15e)

We can write this last expression compactly as \(m(1) - m(2)^* \in {{\mathfrak {L}}}({{\,\textrm{synch}\,}}\mathcal {B})\), where we understand m(2) to be the polynomial m(1) with all e(1) replaced by e(2). By linearity, this observation immediately extends to general polynomials, so we have \(p(1) - p(2)^* \in {{\mathfrak {L}}}({{\,\textrm{synch}\,}}\mathcal {B})\) for any polynomial p(1) formed entirely from e(1).

For any polynomial \(p \in {\mathfrak {I}}({{\,\textrm{synch}\,}}\mathcal {B}(1))\), it suffices by linearity to consider \(p = s(1)t(1)\), where \(s(1) \in {{\mathfrak {L}}}({{\,\textrm{synch}\,}}\mathcal {B})\) and \(t(1) \in \mathscr {U}(1)\) is an arbitrary polynomial consisting only of e(1) generators. Then, we observe:

$$\begin{aligned} p = s(1)t(1) = s(1)(t(1) - t(2)^*) + t(2)^* s(1) \in {{\mathfrak {L}}}({{\,\textrm{synch}\,}}\mathcal {B}) \end{aligned}$$
(8.16)

and we are done. \(\square \)

Also important in this section are tracial linear mappings on an algebra, defined to be linear mappings \(\tau : \mathscr {A}\rightarrow \mathbb {C}\) satisfying

$$\begin{aligned} \tau (ab) = \tau (ba) \end{aligned}$$
(8.17)

for all \(a,b \in \mathscr {A}\). A state \(\psi \) is called a tracial state (for an operator algebra \(\mathscr {A}\)) if the linear mapping it induces is tracial, so

$$\begin{aligned} \psi ^* a b \psi = \psi ^* b a \psi \end{aligned}$$
(8.18)

for all operators \(a,b \in \mathscr {A}\). We show, using an argument very similar to the one given in the proof of Lemma 8.5, that any linear mapping vanishing on \({{\mathfrak {L}}}({{\,\textrm{synch}\,}}\mathcal {B})\) and \(\mathfrak {R}({{\,\textrm{synch}\,}}\mathcal {B})\) must be tracial on \(\mathscr {U}(1)\).
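For a concrete instance of Eq. (8.18), the vector state given by the maximally entangled vector is tracial for operators of the form \(a \otimes I\): it induces the normalized trace \(\textrm{tr}(a)/d\). A small numerical sanity check (our own illustration, not part of the paper's formalism):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
# Maximally entangled vector in C^d (x) C^d; for operators a (x) I the induced
# functional psi* (a (x) I) psi equals the normalized trace tr(a)/d.
psi = sum(np.kron(np.eye(d)[:, [k]], np.eye(d)[:, [k]]) for k in range(d)) / np.sqrt(d)

def tau(a):
    return (psi.conj().T @ np.kron(a, np.eye(d)) @ psi).item()

a = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
b = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
assert abs(tau(np.eye(d)) - 1.0) < 1e-12     # psi is a state: tau(1) = 1
assert abs(tau(a @ b) - tau(b @ a)) < 1e-12  # Eq. (8.18): tau(ab) = tau(ba)
```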

Lemma 8.6

Given a linear mapping \(\tau : \mathscr {U}\rightarrow \mathbb {C}\) on \(\mathscr {U}\) satisfying

$$\begin{aligned} \tau ({{\mathfrak {L}}}({{\,\textrm{synch}\,}}\mathcal {B})) =0= \tau (\mathfrak {R}({{\,\textrm{synch}\,}}\mathcal {B})) , \end{aligned}$$
(8.19)

then \(\tau \) is tracial on \(\mathscr {U}(1)\); i.e., for any \(a,b \in \mathscr {U}(1)\),

$$\begin{aligned} \tau (ab) = \tau (ba). \end{aligned}$$
(8.20)

Note: If \(\tau \) is symmetric, then since \({{\,\textrm{synch}\,}}\mathcal {B}\) is a \(*\)-closed set, we have

$$\begin{aligned} \tau ({{\mathfrak {L}}}({{\,\textrm{synch}\,}}\mathcal {B})) =0 \implies \tau (\mathfrak {R}({{\,\textrm{synch}\,}}\mathcal {B})) =0. \end{aligned}$$

Proof

Consider monomials \(w,\tilde{w} \in \mathscr {U}(1)\). Since w is generated by \(e(1)^{i}_{a}\), we can write \(w = w' e(1)^{i}_{a}\) and then observe

$$\begin{aligned} \tilde{w} w' \left( e(1)^{i}_{a} - e(2)^{i}_{a}\right)&\in {{\mathfrak {L}}}({{\,\textrm{synch}\,}}\mathcal {B}), \end{aligned}$$
(8.21a)
$$\begin{aligned} \left( e(1)^{i}_{a} - e(2)^{i}_{a}\right) \tilde{w} w'&\in \mathfrak {R}({{\,\textrm{synch}\,}}\mathcal {B}). \end{aligned}$$
(8.21b)

Then,

$$\begin{aligned} \tau ( \tilde{w} w' e(1)^{i}_{a} )&= \tau ( \tilde{w} w' e(2)^{i}_{a} ) =\tau (e(2)^{i}_{a} \tilde{w} w' ) =\tau (e(1)^{i}_{a} \tilde{w} w' ) \end{aligned}$$
(8.22)

where we use Eq. (8.21a) for the first equality, the fact that elements of \(\mathscr {U}(1)\) and \(\mathscr {U}(2)\) commute for the second equality, and Eq. (8.21b) for the third equality. Repeating this argument shows that for any monomials \(w, \tilde{w} \in \mathscr {U}(1)\) we have

$$\begin{aligned} \tau (w \tilde{w}) = \tau (\tilde{w} w) \end{aligned}$$
(8.23)

and the proof is complete. \(\square \)

Now comes the proof of the main theorem of this section.

Proof of Theorem 8.3

Item (i) is equivalent to the existence of a perfect commuting operator strategy by definition (see Sect. 3.4).

Item (ii) is equivalent to Item (i) by Lemma 8.4.

To prove Item (ii) \(\Rightarrow \) Item (iii), we let \(\pi \) be any representation satisfying Item (ii), and define \(\pi '\) to be the restriction of \(\pi \) to \(\mathscr {U}(1)\). Clearly \(\pi ' : \mathscr {U}(1) \rightarrow \mathcal {B}({\mathcal {H}})\) and

$$\begin{aligned} \pi '({\mathfrak {I}}({{\,\textrm{synch}\,}}\mathcal {B}(1))) \psi = \pi ({\mathfrak {I}}({{\,\textrm{synch}\,}}\mathcal {B}(1))) \psi \subseteq \pi ({{\mathfrak {L}}}({{\,\textrm{synch}\,}}\mathcal {B})) \psi = \{0\}, \end{aligned}$$
(8.24)

where the inclusion follows from Lemma 8.5. To show the state \(\psi \) is tracial on \(\pi '(\mathscr {U}(1))\) note the linear mapping defined by

$$\begin{aligned} \tau (w) = \psi ^* \pi (w) \psi \end{aligned}$$
(8.25)

vanishes on \({{\mathfrak {L}}}({{\,\textrm{synch}\,}}\mathcal {B})\) (and hence \(\mathfrak {R}({{\,\textrm{synch}\,}}\mathcal {B})\)) by Item (ii) and so \(\tau \) is tracial on \(\mathscr {U}(1)\) by Lemma 8.6.

We prove Item (iii) \(\Rightarrow \) Item (i). Using \(\pi ',\psi \) we define the positive linear functional

$$\begin{aligned} \ell ':\mathscr {U}(1)\rightarrow \mathbb {C}, \quad f \mapsto \psi ^* \pi '(f)\psi . \end{aligned}$$

Next extend \(\ell '\) to a linear functional \(\ell \) on \(\mathscr {U}\) by mapping a monomial

$$\begin{aligned} w(e(1)) u(e(2)) \mapsto \ell '\big (w(e(1)) u(e(1))^*\big ). \end{aligned}$$

It is obvious that \(\ell \) is symmetric in the sense that \(\ell (f^*)=\ell (f)^*\) for all \(f\in \mathscr {U}\). To check that \(\ell \) is positive, let \(f=\sum _{i,j} \beta _{ij} w_i(e(1)) u_j(e(2))\in \mathscr {U}\). Then,

$$\begin{aligned} f^*f= \sum _{i,j}\sum _{k,l} \beta _{ij}^*\beta _{kl} w_i(e(1))^* w_k(e(1)) u_j(e(2))^* u_l(e(2)), \end{aligned}$$

whence

$$\begin{aligned} \ell (f^*f) = \sum _{i,j}\sum _{k,l} \beta _{ij}^*\beta _{kl} \ell ' ( w_i(e(1))^* w_k(e(1)) u_l(e(1))^* u_j(e(1))). \end{aligned}$$
(8.26)

Set

$$\begin{aligned} {\check{f}}=\sum _{i,j} \beta _{ij} w_i(e(1))u_j(e(1))^* \in \mathscr {U}(1). \end{aligned}$$

Then,

$$\begin{aligned} {\check{f}}^*{\check{f}}= \sum _{i,j}\sum _{k,l} \beta _{ij}^*\beta _{kl} u_j(e(1)) w_i(e(1))^* w_k(e(1)) u_l(e(1))^*, \end{aligned}$$

and

$$\begin{aligned} \ell '({\check{f}}^*{\check{f}})= \sum _{i,j}\sum _{k,l} \beta _{ij}^*\beta _{kl} \ell '( u_j(e(1)) w_i(e(1))^* w_k(e(1)) u_l(e(1))^* ). \end{aligned}$$
(8.27)

Since \(\ell '\) is tracial,

$$\begin{aligned} \ell '( u_j(e(1)) w_i(e(1))^* w_k(e(1)) u_l(e(1))^* ) = \ell ' ( w_i(e(1))^* w_k(e(1)) u_l(e(1))^* u_j(e(1))). \end{aligned}$$

This implies the values in Eqs. (8.26) and (8.27) are the same, so

$$\begin{aligned} \ell (f^*f) = \ell '({\check{f}}^*{\check{f}})\ge 0. \end{aligned}$$

It remains to show that \(\ell ( {{\mathfrak {L}}}(\mathcal {N}))=\{0\}\). Elements in \({{\mathfrak {L}}}(\mathcal {N})\) are linear combinations of monomials of the form

$$\begin{aligned} w(e(1)) u(e(2)) e(1)^i_a e(2)^j_b = w(e(1)) e(1)^i_a u(e(2)) e(2)^j_b \end{aligned}$$
(8.28)

with \(((i,j),(a,b))\in \mathcal {N}\). Applying \(\ell \) to Eq. (8.28) gives

$$\begin{aligned} \ell (w(e(1)) e(1)^i_a u(e(2)) e(2)^j_b ) =\ell '(w(e(1)) e(1)^i_a e(1)^j_b u(e(1))^* ). \end{aligned}$$

But \(((i,j),(a,b))\in \mathcal {N}\), whence \(w(e(1)) e(1)^i_a e(1)^j_b u(e(1))^* \in {\mathfrak {I}}({{\,\textrm{synch}\,}}\mathcal {B}(1)) \), so

$$\begin{aligned} \ell '(w(e(1)) e(1)^i_a e(1)^j_b u(e(1))^* )=0, \end{aligned}$$

as desired. We have proved that \(\ell (-1)=-1\) and \(\ell ({{\,\textrm{SOS}\,}}_{\mathscr {U}}+{{\mathfrak {L}}}(\mathcal {N})+{{\mathfrak {L}}}(\mathcal {N})^*)\subseteq \mathbb {R}_{\ge 0}\), whence \(-1\notin {{\,\textrm{SOS}\,}}_{\mathscr {U}}+{{\mathfrak {L}}}(\mathcal {N})+{{\mathfrak {L}}}(\mathcal {N})^*\). Then, Theorem 4.3 implies Item (i).

We next show Item (iii) \(\Rightarrow \) Item (iv). Define the Hilbert space \({\check{{\mathcal {H}}}}:=[\pi '(\mathscr {U}(1))\psi ]\subseteq {\mathcal {H}}\). By definition, \(\pi '(\mathscr {U}(1)){\check{{\mathcal {H}}}}\subseteq {\check{{\mathcal {H}}}}\), so \(\pi '\) induces a \(*\)-representation \({\check{\pi }}':\mathscr {U}(1)\rightarrow \mathcal {B}({\check{{\mathcal {H}}}})\). By construction, \({\check{\pi }}'({\mathfrak {I}}({{\,\textrm{synch}\,}}\mathcal {B}(1)))=\{0\}\), as desired in Item (iv).

Finally, to go from Item (iv) to Item (iii), start with the tracial von Neumann algebra \({\mathcal {W}}\) with trace \(\tau \) as in Item (iv) and perform a GNS construction. There is a Hilbert space \({\mathcal {K}}\), unit vector \(\xi \in {\mathcal {K}}\), and a \(*\)-representation \(\pi '':{\mathcal {W}}\rightarrow \mathcal {B}({\mathcal {K}})\) so that

$$\begin{aligned} \tau (a)= \langle \pi ''(a) \xi , \xi \rangle , \quad a\in {\mathcal {W}}. \end{aligned}$$

Since \(\tau \) is a trace, \(\xi \) is a tracial state for \(\pi ''({\mathcal {W}})\). Then, the \(*\)-representation \(\pi ''\circ \pi ':\mathscr {U}(1)\rightarrow \mathcal {B}({\mathcal {K}})\) together with \(\xi \in {\mathcal {K}}\) satisfies Item (iii). \(\square \)

8.2 Tracial Nullstellensatz

As given in Theorem 8.3, the articles [27] and [20] associate perfect commuting operator strategies for any two-player synchronous game with representations into a tracial von Neumann algebra. Now, we discuss a Nullstellensatz which characterizes the algebraic structures admitting a representation into a tracial von Neumann algebra. Combining this Nullstellensatz with Theorem 8.3 gives an additional algebraic characterization of synchronous games with perfect commuting operator strategies.

The earliest tracial Positivstellensatz is in [21], proved in the context of the Connes embedding conjecture. An exposition of this and of the (dual) tracial moment problem is given in the book [4, Chapter 5], with more extensive results along the lines of the theorem below appearing, for example, in [19, Proposition 4.2 and later] and further moment theory in [3]. Moment methods in quantum games settings also appear in the recent paper [28].

Suppose \({\mathcal {A}}\) is a \(*\)-algebra. The commutator of \(a,b\in {\mathcal {A}}\) is

$$\begin{aligned}{}[a,b]:=ab-ba, \end{aligned}$$

and we call \(a,b\in {\mathcal {A}}\) cyclically equivalent, \(a{\mathop {\thicksim }\limits ^{\textrm{cyc}}}b\), if \(a-b\) is a sum of commutators. Define

$$\begin{aligned} {\widetilde{{{\,\textrm{SOS}\,}}}}:=\{a\in {\mathcal {A}}\mid \exists b\in {{\,\textrm{SOS}\,}}:\, a{\mathop {\thicksim }\limits ^{\textrm{cyc}}}b\}. \end{aligned}$$

Elements of \({\widetilde{{{\,\textrm{SOS}\,}}}}\) are trace-positive under all \(*\)-representations of \({\mathcal {A}}\) into tracial von Neumann algebras.
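Since the trace of a commutator vanishes, a trace cannot distinguish cyclically equivalent elements, which is why elements of \({\widetilde{{{\,\textrm{SOS}\,}}}}\) are trace-positive. A quick numerical sanity check in a matrix algebra (our own illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
a = rng.standard_normal((d, d))
b = rng.standard_normal((d, d))

# The trace vanishes on commutators [a, b] = ab - ba ...
comm = a @ b - b @ a
assert abs(np.trace(comm)) < 1e-10

# ... so a sum of squares plus a commutator (an element of SOS~) still has
# nonnegative trace:
s = a.T @ a + comm
assert np.trace(s) >= -1e-10
```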

Theorem 8.7

Suppose

  1. (1)

    \({\mathcal {A}}\) is a \(*\)-algebra, where \({\widetilde{{{\,\textrm{SOS}\,}}}}\) is Archimedean in the sense that for every \(a\in {\mathcal {A}}\) there is \(\eta \in \mathbb N\) with \(\eta -a^*a\in {\widetilde{{{\,\textrm{SOS}\,}}}}\);

  2. (2)

    \({{\mathfrak {L}}}\subseteq {\mathcal {A}}\) is a left ideal.

Then, the following are equivalent:

  1. (i)

    there exist a \(*\)-representation \(\pi :{\mathcal {A}}\rightarrow \mathcal {B}({\mathcal {H}})\) and tracial state \(0\ne \psi \in {\mathcal {H}}\) satisfying

    $$\begin{aligned} \pi (f) \psi = 0 \end{aligned}$$
    (8.29)

    for all \(f \in {{\mathfrak {L}}}\);

  2. (ii)

    there exists a \(*\)-representation \(\pi :{\mathcal {A}}\rightarrow {\mathcal {F}}\) into a tracial von Neumann algebra \(({\mathcal {F}},\tau )\) satisfying

    $$\begin{aligned} \tau (\pi (f)) = 0 \end{aligned}$$
    (8.30)

    for all \(f \in {{\mathfrak {L}}}\);

  3. (iii)

    \(-1 \not \in {\widetilde{{{\,\textrm{SOS}\,}}}}+ {{\mathfrak {L}}}+ {{\mathfrak {L}}}^*.\)

Proof

The equivalence of Items (i) and (ii) is established via the GNS representation as in the proof of Theorem 8.3. The implication (ii) \(\Rightarrow \) (iii) is easy. Namely, if \(-1 \in {\widetilde{{{\,\textrm{SOS}\,}}}}+ {{\mathfrak {L}}}+ {{\mathfrak {L}}}^*\), and \(({\mathcal {F}},\tau )\) as in (ii) exist, then

$$\begin{aligned}-1=\tau (-1) \in \tau ({\widetilde{{{\,\textrm{SOS}\,}}}}+ {{\mathfrak {L}}}+ {{\mathfrak {L}}}^*)= \tau ({\widetilde{{{\,\textrm{SOS}\,}}}}) + \tau ({{\mathfrak {L}}}) + \tau ({{\mathfrak {L}}})^*= \tau ({\widetilde{{{\,\textrm{SOS}\,}}}}) \subseteq \mathbb {R}_{\ge 0}, \end{aligned}$$

a contradiction.

For the harder side (iii) \(\Rightarrow \) (ii) we only give a sketch since it is very similar to that in the proof of Theorem 4.3. Suppose \(-1 \not \in {\widetilde{{{\,\textrm{SOS}\,}}}}+ {{\mathfrak {L}}}+ {{\mathfrak {L}}}^*\). By the Hahn–Banach theorem (version due to Eidelheit–Kakutani), there is a linear functional \(L:{\mathcal {A}}\rightarrow \mathbb {C}\) satisfying

$$\begin{aligned} L(1)= 1, \qquad L({\widetilde{{{\,\textrm{SOS}\,}}}}+ {{\mathfrak {L}}}+ {{\mathfrak {L}}}^*) \subseteq \mathbb {R}_{\ge 0}. \end{aligned}$$
(8.31)

Since \({{\mathfrak {L}}}\) is a subspace, the second property of Eq. (8.31) implies \(L({{\mathfrak {L}}})=\{0\}\). Likewise, \({\widetilde{{{\,\textrm{SOS}\,}}}}\) contains every commutator together with its negative (both are cyclically equivalent to 0), so \(L({\widetilde{{{\,\textrm{SOS}\,}}}})\subseteq \mathbb {R}_{\ge 0}\) forces L to vanish on commutators; hence L is tracial and \(L(f)\ge 0\) for any \(f\in {{\,\textrm{SOS}\,}}_{{\mathcal {A}}}\). Further, \(L(f)^* = L(f^*)\) for all \(f\in {\mathcal {A}}\). Then, the GNS construction yields the desired conclusion. \(\square \)

8.3 Quantum Graph Coloring

As an example of how one can use Theorem 8.3, we look at quantum coloring of graphs. It is often formulated as a synchronous game. We start by recalling the coloring setup and then we illustrate a general approach through an example.

Given a graph \(G=(V,E)\), where V is the set of vertices and \(E\subseteq V\times V\) its edges, we say G admits a quantum c-coloring [27], if to each vertex \(i\in V\) one can assign projections \(e^i_a\), \(a\in \{1,\ldots ,c\}\) so that

$$\begin{aligned} \sum _a e^i_a=1, \end{aligned}$$
(8.32)

and for each edge \((i,j)\in E\), we have

$$\begin{aligned} e^i_a e^j_a=0,\quad \forall \, a. \end{aligned}$$
(8.33)

In algebraic terms, G admits a quantum c-coloring if the universal game algebra \(\mathscr {U}(1)\) generated by the \(e(1)^i_a:=e^i_a\), where i indexes the vertices and \(a\in \{1,\ldots ,c\}\), admits a \(*\)-representation \(\pi \) that vanishes on the polynomials \(e^i_a e^j_a\) from Eq. (8.33).
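As a sanity check on Eqs. (8.32) and (8.33), any classical c-coloring yields a (one-dimensional, commuting) quantum c-coloring by taking the projections to be 0/1 scalars. A minimal sketch, with the triangle example being our own:

```python
import numpy as np

def coloring_projections(coloring, c):
    """Build 1x1 projections e[i][a] = delta_{coloring[i], a} from a classical
    c-coloring (a degenerate instance of a quantum c-coloring)."""
    return {i: np.array([1.0 if coloring[i] == a else 0.0 for a in range(c)])
            for i in coloring}

# A 3-coloring of a triangle on vertices 0, 1, 2 (toy example, not the paper's graph).
E = [(0, 1), (1, 2), (0, 2)]
e = coloring_projections({0: 0, 1: 1, 2: 2}, c=3)
assert all(abs(e[i].sum() - 1.0) < 1e-12 for i in e)                  # Eq. (8.32)
assert all(e[i][a] * e[j][a] == 0 for (i, j) in E for a in range(3))  # Eq. (8.33)
```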

8.3.1 Quantum Graph Coloring as a Synchronous Game

We shall be brief here and refer to [27, Sect. 2] for a thorough discussion of graph coloring and the connection to synchronous games.

A game is specified by a graph G. The verifier’s questions amount to giving vertex i to Alice and vertex j to Bob; a perfect strategy consists of a vector \(\psi \) together with projections \(e(1)^{i}_{a}\) for Alice and \(e(2)^{i}_{b}\) for Bob (one for each vertex and color) meeting the (universal game) constraints of Equation (3.7) and satisfying \(e(1)^{i}_{a}e(2)^{j}_{a}= 0\) whenever vertices i and j are joined by an edge. These give the probability of Alice and Bob answering question (i, j) with colors \(a,b \in \{1, \dots , c\}\) via the formula

$$\begin{aligned} p(a,b | i,j):= \psi ^* e(1)^{i}_{a}e(2)^{j}_{b} \psi . \end{aligned}$$

A synchronous strategy means that, for any vertex i, \(p(a,b | i,i)=0\) if \(a \ne b\). This is equivalent to \(e(1)^{i}_{a} \psi =e(2)^{i}_{a} \psi \) for \(a=1, \dots , c\). Theorem 8.3 now converts solving this system of equations to finding tracial representations of \(\mathscr {U}(1)\) vanishing on \({{\,\textrm{synch}\,}}\mathcal {B}(1)\). In our forthcoming example, we show there are no such representations, much less tracial ones, so no quantum 4-coloring exists.

8.3.2 Example

Consider the graph \(G=(V,E)\) given in Fig. 1. It is obtained from a five-cycle by adding two apexes (denoted 6 and 7 in the figure), i.e., two vertices connected to all the other vertices.

Fig. 1: Pentagonal bipyramid

Observe that the chromatic number of G is five, so G does admit a quantum 5-coloring. We claim that G does not admit a quantum 4-coloring. For this, we apply our Nullstellensatz Theorem 4.3 to the quantum 4-coloring Eqs. (8.32) and (8.33).

Generate the ideal

$$\begin{aligned} {\mathfrak {I}}_1=(e^i_a e^j_a \mid (i,j)\in E,\ a\in \{1,\ldots ,4\}) \subseteq \mathscr {U}(1). \end{aligned}$$
(8.34)

By Corollary 4.5, the graph G admits a quantum 4-coloring iff

$$\begin{aligned} -1\not \in {{\,\textrm{SOS}\,}}_{\mathscr {U}(1)}+{\mathfrak {I}}_1. \end{aligned}$$
(8.35)

We show there are quadratic elements \(s_j\) in \(\mathscr {U}(1)\) so that

$$\begin{aligned} 1+\sum s_j^*s_j \in {\mathfrak {I}}_1. \end{aligned}$$
(8.36)

To search for these elements \(s_j\), we employ NC Gröbner basis combined with semidefinite programming.

First, we lift our problem into the free algebra \({\mathbb {C}\langle e\rangle }\), where e denotes the tuple \(e=(e^i_a)\). For this, let \(\Pi :{\mathbb {C}\langle e\rangle }\rightarrow \mathscr {U}(1)\) denote the canonical epimorphism, and let \({\mathfrak {I}}=\Pi ^{-1}({\mathfrak {I}}_1)\). Next, one computes a Gröbner basis (GB) for \({\mathfrak {I}}\); with respect to a lex order it has 350 elements of degree \(\le 3\).

To search for nc polynomials \(s_j\) of degree d so that

$$\begin{aligned} 1+\sum s_j^*s_j \in {\mathfrak {I}}, \end{aligned}$$
(8.37)

one employs the Gram matrix method and semidefinite programming [4]. That is, letting \(W_d\) be the set of all monomials of degree \(\le d\) in \({\mathbb {C}\langle e\rangle }\) (listed w.r.t. some ordering), Eq. (8.37) is equivalent to the existence of a positive semidefinite matrix M so that

$$\begin{aligned} 1+ W_d^* M W_d \in {\mathfrak {I}}. \end{aligned}$$
(8.38)

Since ideal membership can be described using linear equations in terms of the entries of M (given a Gröbner basis), Eq. (8.38) immediately transforms into a semidefinite program (SDP).
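To illustrate the Gram matrix method behind Eq. (8.38) in the simplest possible setting, here is a commutative toy instance (our own example, far smaller than the SDP above): certifying that \(2 - 2x + x^2\) equals \(1\) plus a sum of squares via a positive semidefinite Gram matrix.

```python
import numpy as np

# Certify 2 - 2x + x^2 = 1 + W(x)^T M W(x) with M psd, where W(x) = (1, x)^T
# is the vector of monomials of degree <= 1.
M = np.array([[1.0, -1.0], [-1.0, 1.0]])
assert np.all(np.linalg.eigvalsh(M) >= -1e-12)   # M is psd (eigenvalues 0 and 2)

def p(x):
    W = np.array([1.0, x])
    return 1.0 + W @ M @ W

# A degree-2 polynomial identity holds iff it holds at >= 3 points; check many:
for x in np.linspace(-3.0, 3.0, 13):
    assert abs(p(x) - (2.0 - 2.0 * x + x * x)) < 1e-9
# Factoring M = v v^T with v = (1, -1) recovers the explicit certificate
# s_1(x) = 1 - x, i.e. 2 - 2x + x^2 = 1 + (1 - x)^2.
```

In the noncommutative setting of Eq. (8.38) the same idea applies, except that monomials do not commute and equality is only required modulo the ideal, which is where the Gröbner basis reduction enters.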

In our example, Eq. (8.38) is infeasible for \(d=1\) and does have a solution for \(d=2\). Reducing \(W_d\) modulo the GB (which one can do without loss of generality to help reduce the size of the SDP) yields an SDP of size \(272\times 272\). Observing that \(W_d^* M W_d=\text {tr}(M W_dW_d^*)\) and reducing entries of \(W_dW_d^*\) modulo the GB, Eq. (8.38) converts into a set of linear equations on the entries of M which are sparse and can thus be efficiently solved. We are then left with an SDP of manageable size. Solving the SDP using a standard solver with trivial objective function yielded a floating point positive definite solution M with minimal eigenvalue of \(\approx 10^{-2}\). By choosing a fine enough rationalization [7, 26], we thus obtain a symbolic (i.e., in exact arithmetic) positive definite solution M of Eq. (8.38), establishing that G does not admit a quantum 4-coloring.