1 Introduction

Modern SAT and CSP solvers are quite efficient on industrial instances, so much so that there is a current impetus in the community towards solvers that tackle computational problems that lie beyond NP [2]. Meanwhile, on the theoretical front, several proofs [3, 4] have recently been proposed for Feder and Vardi's celebrated dichotomy conjecture for the CSP [5]. There have been some advances for its quantified counterpart, the QCSP, which seems to follow a trichotomy between P, NP-complete and Pspace-complete [6,7,8,9]. Its optimisation counterpart, the VCSP, has been classified, first for finite-valued cost functions [10] and then for arbitrary cost functions assuming the dichotomy conjecture holds [11]. The reader may also consult the recent surveys on the QCSP [12] and the VCSP [13].

It is now well established that the presence or absence of certain well behaved polymorphisms of a constraint language characterises the complexity of the corresponding CSP. Schaefer proved a dichotomy in the Boolean case [14], which may be reformulated as follows.

Theorem 1

[15]. Let \(\varGamma \) be a constraint language over \(D=\{0,1\}\).

CSP(\(\varGamma \)) is in P if \(\varGamma \) admits one of the following six good polymorphisms and NP-complete otherwise.

Good polymorphisms: \(\{\mathrm {Mjrty},\mathrm {Mnrty},\mathrm {Max},\mathrm {Min},\mathrm {Const}_0,\mathrm {Const}_1\}\).

Here, \(\mathrm {Mjrty}\) denotes the ternary operation that returns its repeated argument, while \(\mathrm {Mnrty}\) is the ternary minority operation; \(\mathrm {Max}\) and \(\mathrm {Min}\) are the binary operations that return the maximum and minimum, respectively; \(\mathrm {Const}_0\) and \(\mathrm {Const}_1\) are the unary constant operations that send their argument to 0 and 1, respectively. We delay further formal definitions to the next section.
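As an illustration, the following sketch (a toy Python encoding of our own devising) spells out the Boolean majority and minority operations and what it means for an operation to preserve a relation:

```python
from itertools import product

def mjrty(a, b, c):
    # Ternary majority on {0, 1}: returns the repeated argument.
    return 1 if a + b + c >= 2 else 0

def mnrty(a, b, c):
    # Ternary minority on {0, 1}: coincides with a XOR b XOR c.
    return a ^ b ^ c

def preserves(g, m, relation):
    # g is an m-ary polymorphism of `relation` if applying g component-wise
    # to any m tuples of the relation yields a tuple of the relation again.
    arity = len(next(iter(relation)))
    return all(
        tuple(g(*(t[i] for t in ts)) for i in range(arity)) in relation
        for ts in product(relation, repeat=m)
    )

# Two example Boolean relations: implication (x <= y) and disequality.
leq = {(0, 0), (0, 1), (1, 1)}
neq = {(0, 1), (1, 0)}
```

For instance, `preserves(max, 2, leq)` holds, while `preserves(max, 2, neq)` fails; `neq` is nonetheless preserved by `mnrty`.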

For the QCSP and VCSP, surjective polymorphisms and fractional polymorphisms play the same role as polymorphisms for the CSP. As an illustration, let us state two classification results in the Boolean case. For the QCSP, the following dichotomy was announced by Schaefer [14].

Theorem 2

[15, 16]. Let \(\varGamma \) be a constraint language over \(D=\{0,1\}\).

QCSP(\(\varGamma \)) is in P if \(\varGamma \) admits one of the following four good surjective polymorphisms and Pspace-complete otherwise.

Good surjective polymorphisms: \(\{\mathrm {Mjrty},\mathrm {Mnrty},\mathrm {Max},\mathrm {Min}\}\).

For the Boolean VCSP, the good multimorphisms must combine good polymorphisms from Theorem 1, as otherwise the feasibility part of the VCSP would readily allow one to solve a hard CSP.

Theorem 3

[17]. Let \(\varGamma \) be a valued constraint language on \(D=\{0,1\}\).

VCSP(\(\varGamma \)) is in P if it admits at least one of the following eight good multimorphisms. Otherwise, the problem is NP-hard.

Good multimorphisms for VCSP:

\(\{3\mathrm {Mjrty},3\mathrm {Mnrty},2\mathrm {Mjrty}+\mathrm {Mnrty},2\mathrm {Max},2\mathrm {Min},\mathrm {Max}+\mathrm {Min},\mathrm {Const}_0,\mathrm {Const}_1\}\).

In this paper, we combine universal quantification (QCSP) and valued constraints (VCSP) and work in the framework of the Quantified Valued Constraint Satisfaction Problem (QVCSP). Unbeknownst to us until the reviewers pointed it out, this problem was in fact already introduced in [1] as weighted CSPs with min-max quantifiers and studied from an experimental perspective in the context of solver design: the authors showed that alpha-beta pruning can be adapted to this context in a relevant fashion. While their name for the problem is very natural in their context, we will stick to our terminology, which makes more apparent that we merge the QCSP and VCSP frameworks. A natural way of building a QCSP instance from a CSP instance consists in assuming that a malicious opponent, or uncertainty with respect to the environment, limits certain resources or strengthens some constraints after some decisions have been made (see for example [18] for examples of scheduling with opponents). In the same manner, we could build natural examples of the QVCSP from natural instances of the VCSP.

Note that some of the tractable languages for the VCSP are in fact not genuine valued languages, since the costs are the same for all feasible tuples. This is the case for the multimorphisms \(3\mathrm {Mjrty}\) and \(3\mathrm {Mnrty}\) in Theorem 3 (and actually for any finite domain size). For such so-called essentially crisp languages, one can therefore deduce immediately the complexity of their QVCSP from the QCSP classification: they are both collapsible (so in NP) and reduce in fact to a CSP in P.

For the QCSP, the fact that the complexity drops from Pspace to NP is explained by a property known as switchability [19,20,21,22]. Here we will only consider a restricted form of this property known as collapsibility, which asserts that a language is k-collapsible whenever, to satisfy an input sentence, it suffices to satisfy all sentences induced from this input sentence by fixing all but a bounded number of universal variables to take the same value. This is true in particular, with \(k=1\), for the languages preserved by \(\mathrm {Max}\) or \(\mathrm {Min}\). We will show that this approach can be applied in some cases to the QVCSP as well, which will in turn lead to tractability in some cases.

In particular, we obtain a complete classification of the complexity of the QVCSP in the Boolean case.

Theorem 4

(main result). Let \(\varGamma \) be a valued constraint language on \(D=\{0,1\}\). If \(\varGamma \) has one of the following good multimorphisms then QVCSP(\(\varGamma \)) is tractable; otherwise it is Pspace-hard.

Good multimorphisms for QVCSP: \(\{3\mathrm {Mjrty},3\mathrm {Mnrty},2\mathrm {Mjrty}+\mathrm {Mnrty},2\mathrm {Max},2\mathrm {Min}\}\).

The hardness part of our proof relies on a fairly non-trivial case analysis of the tractable languages from Theorem 3 that are not tractable according to Theorem 4. We show that in each such case we can express a cost function that is hard for the QVCSP (and was of course not hard for the VCSP). Among others, we borrow and adapt the technique of compression used in [17] for the proof of Theorem 3.

The paper is organised as follows. In the next section we recall definitions and notations. In Sect. 3, we introduce the QVCSP and provide some examples of Pspace-hard Boolean QVCSP. In Sect. 4, we show that some valued constraint languages are tractable for essentially trivial reasons: either because they are crisp or because they are non crisp but any instance with “too many” universal quantifiers must be rejected. In Sect. 5, we extend the notion of collapsibility from the QCSP to the valued setting of QVCSP and use it to obtain tractability results for the QVCSP. In Sect. 6, we finish the proof of Theorem 4. Finally we conclude with some remarks.

2 Preliminaries

A VCSP instance \(\phi \) is a finite collection of valued constraints over some finite variable set X ranging over a label set D, with values in the rationals augmented with the infeasible cost \(\infty \). Here a valued constraint is given by a so-called cost function \(\rho \) from \(D^n\) to \(\mathbb {Q}\cup \{\infty \}\) for some positive integer n (the arity) and an n-tuple of elements of X (the scope). Given an assignment \(\alpha :=\{\sigma _x \in D | x \in X \}\), we write \(\mathrm {obj}(\phi ,\alpha )\) as a shorthand for \(\sum _{\rho (\bar{x})\text { in } \phi } \rho (\alpha (\bar{x}))\), where by \(\alpha (\bar{x})\) we mean that the value of each variable x from the scope \(\bar{x}\) is replaced by its assigned label \(\sigma _x\). The VCSP is an optimisation problem, the aim of which is to compute an \(\alpha \) such that \(\mathrm {obj}(\phi ,\alpha )\) is minimum. There are two other problems that arise in the context of the VCSP.

  • decision: given an additional input k in \(\mathbb {Q}\), is there an \(\alpha \) such that \(\mathrm {obj}(\phi ,\alpha )\) is at most k?

  • feasibility: is there an \(\alpha \) such that \(\mathrm {obj}(\phi ,\alpha )\) is finite?
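The three problems can be made concrete by a brute-force sketch (the encoding of instances as lists of (cost function, scope) pairs is ours):

```python
from itertools import product

INF = float("inf")

def obj(instance, alpha):
    # obj(phi, alpha): sum of rho(alpha(scope)) over all valued constraints.
    return sum(rho(*(alpha[x] for x in scope)) for rho, scope in instance)

def solve_vcsp(instance, variables, domain=(0, 1)):
    # Optimisation version: minimise the objective over all assignments.
    return min(
        obj(instance, dict(zip(variables, labels)))
        for labels in product(domain, repeat=len(variables))
    )

# Three soft disequality constraints forming a Boolean triangle: any
# assignment violates at least one of them, so the optimum is 1.
rho_neq = lambda x, y: 0 if x != y else 1
triangle = [(rho_neq, ("x", "y")), (rho_neq, ("y", "z")), (rho_neq, ("x", "z"))]
```

Decision with threshold k and feasibility then amount to comparing the computed optimum with k and with `INF`, respectively.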

All these problems are NP-hard in general, and a standard way of better understanding the complexity is to study language restrictions, that is, to restrict cost functions to come from a certain set \(\varGamma \). A valued constraint language is NP-hard (for the VCSP) iff it contains a finite language for which the VCSP is NP-hard. A cost function is crisp (resp., essentially crisp) if it ranges over \(\{0,\infty \}\) (resp., over \(\{c,\infty \}\) for some c in \(\mathbb {Q}\)). A language is (essentially) crisp if it contains only (essentially) crisp cost functions. The crisp language associated with a valued constraint language \(\varGamma \), denoted by \(\mathrm {crisp}(\varGamma )\), consists of the set of corresponding relations, where \(\mathrm {crisp}(\rho )(t):= 0\) iff \(\rho (t)<\infty \).

An m-ary operation on D is a function \(g : D^m \rightarrow D\). Let \(\mathcal{O}^{(m)}_{D}\) denote the set of all m-ary operations on D. An m-ary fractional operation is a function \(\omega \) from \(\mathcal{O}^{(m)}_{D}\) to the non-negative rationals such that \(\sum _{g} \omega (g)=1\). The set \(\{ g \mid \omega (g) > 0 \}\) of operations is called the support of \(\omega \) and is denoted by \(\mathrm {supp}(\omega )\).

A fractional operation \(\omega \) is called an m-ary fractional polymorphism of an r-ary valued constraint \(\rho \) if for any tuples \(t_1,t_2,\dots ,t_m\) in \(D^r\), it holds that

$$\begin{aligned} \frac{1}{m}(\rho (t_1)+\rho (t_2)+ \ldots + \rho (t_m)) \ge \sum _{g \in \mathcal{O}^{(m)}_{D}} \omega (g) \rho (g(t_1,t_2,\ldots ,t_m)) \end{aligned}$$
(1)

where the operations g are applied component-wise. We will alternatively say that such a fractional operation improves the cost function. A multimorphism is a fractional polymorphism with integral weights. If \(\omega \) is a fractional polymorphism of every cost function in a constraint language \(\varGamma \), then \(\omega \) is called a fractional polymorphism of \(\varGamma \). A fractional polymorphism of a crisp language is a collection of polymorphisms (one identifies a crisp cost function \(\rho \) with the relation \(\{ t | \rho (t)<\infty \}\)). If \(\omega \) is a fractional polymorphism of \(\varGamma \), then any g in \(\mathrm {supp}(\omega )\) is a polymorphism of \(\mathrm {crisp}(\varGamma )\). Rewriting (1) as

$$\begin{aligned} \rho (t_1)+\rho (t_2)+ \ldots + \rho (t_m) \ge \sum _{g \in \mathcal{O}^{(m)}_{D}} \mathbf {m{.}\omega (g)} \rho (g(t_1,t_2,\ldots ,t_m)) \end{aligned}$$
(2)

we will mildly abuse the notation of m-ary fractional polymorphisms in this paper, and write them as a weighted sum of operations of arity m, such that the sum of weights equals m. This explains the notation used in the statements of Theorems 3 and 4, where we listed several examples including ternary fractional operations such as \(3\mathrm {Mjrty}\) and \(2\mathrm {Mjrty}+\mathrm {Mnrty}\), binary ones such as \(2\mathrm {Max}\) and \(\mathrm {Max}+\mathrm {Min}\), and unary ones \(\mathrm {Const}_0,\mathrm {Const}_1\).
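Inequality (2) can be checked exhaustively over a finite domain. The sketch below (an encoding of our own) writes an m-ary multimorphism as a list of m operations of arity m, so \(2\mathrm {Max}\) is `[max, max]` and \(\mathrm {Max}+\mathrm {Min}\) is `[max, min]`:

```python
from itertools import product

INF = float("inf")

def is_multimorphism(ops, rho, r, domain=(0, 1)):
    # Check inequality (2): for all tuples t1..tm of arity r,
    #   rho(t1)+...+rho(tm) >= sum over g in ops of rho(g(t1,...,tm)),
    # each g applied component-wise (integral weights: repeat g in ops).
    m = len(ops)
    for ts in product(product(domain, repeat=r), repeat=m):
        lhs = sum(rho(*t) for t in ts)
        rhs = sum(rho(*(g(*(t[i] for t in ts)) for i in range(r))) for g in ops)
        if lhs < rhs:
            return False
    return True

rho_eq = lambda x, y: 0 if x == y else 1   # soft equality
rho_neq = lambda x, y: 0 if x != y else 1  # soft disequality
```

For instance, `rho_eq` admits \(\mathrm {Max}+\mathrm {Min}\) as a multimorphism (it is submodular), while `rho_neq` admits neither \(2\mathrm {Max}\) nor \(\mathrm {Max}+\mathrm {Min}\).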

We recall some operations of importance. For a constant c in D, let \(\mathrm {Const}_c\) denote the unary operation that always returns c. Given a total order over D, let \(\mathrm {Max}\) (resp., \(\mathrm {Min}\)) denote the binary operation that returns the largest (resp., smallest) argument. Over the Boolean domain, we shall consider the usual order \(0<1\). More generally, any partial order over D such that the greatest lower bound of any pair of elements exists induces naturally a semi-lattice operation, which is a binary operation \(\wedge \) that is idempotent (\(x\wedge x = x\)), associative and commutative. The element \(\bot :=\bigwedge _{d\in D} d\) satisfies \(\bot \wedge x = \bot \) for any x in D. If there is a constant \(\top \) such that \(\top \wedge x = x\) for any x in D, then we say that \(\top \) is a unit and \(\wedge \) a semi-lattice with unit. For \(1\le i \le 3\), let \(\mathrm {Mjrty}_i\) denote the ternary operation that returns the most frequent argument if some arguments are equal, and its ith argument otherwise. When the domain is Boolean, we drop the unnecessary subscript. We define \(\mathrm {Mnrty}_i\) similarly: it returns the least frequent argument if some arguments are equal, and its ith argument otherwise. A k-ary Hubie operation f over D with respect to a constant c in D is a surjective operation that remains surjective even when any coordinate is fixed to c. That is, \(f(c,D,\ldots ,D)=f(D,c,D,\ldots ,D)=\ldots =f(D,\ldots ,D,c)=D\). We say that a cost function \(\phi (x_1,\ldots ,x_m)\) can be expressed by \(\varGamma \) if there is an instance \(\mathcal {I}\) of VCSP(\(\varGamma \)) with objective function \(\phi _{\mathcal {I}}(x_1,\ldots ,x_m,x_{m+1},\ldots ,x_n)\), such that

$$\phi (x_1,\ldots ,x_m)=\underset{x_{m+1},\ldots ,x_n}{\mathrm {Min}} \phi _{\mathcal {I}}(x_1,\ldots ,x_m,x_{m+1},\ldots ,x_n).$$

We can also easily implement cost functions by scaling and translating, that is, a cost function \(a.\phi +b\), for any \(\phi \in \varGamma \), any \(a\in \mathbb {Q}^+\) and any \(b\in \mathbb {Q}\). Let \(\varGamma ^*\) be the closure of \(\varGamma \) under expressibility, scaling and translation. It is known that this closure preserves (in)tractability and that \(\varGamma ^*\) is the same as the set of cost functions that are invariant under the fractional polymorphisms of \(\varGamma \) [13, Theorem 35].

3 Definition and Examples of QVCSP

An instance of the quantified valued constraint satisfaction problem (QVCSP) is defined as above with the addition of a prefix of quantification \(\mathcal {P}\) applying to all variables: that is, \(\mathcal {P}\) is a strict linear order over variables where variables are either existential or universal. For convenience, we will denote the set of existential variables by \(X^\mathcal {P}\) and universal variables by \(Y^\mathcal {P}\). Given an existential variable x of \(X^\mathcal {P}\), we denote by \(Y_x^\mathcal {P}\) the set of universal variables that precede x in the prefix order given by \(\mathcal {P}\). When the prefix of quantification is clear from context, we feel free to drop the superscript from our notation.

A Skolem function \(\sigma _x\) for the variable x is a D-valued function that takes as input the values of the universal variables that precede it in \(\mathcal {P}\), that is, a function from \(D^{|Y_x|}\) to D. If \(\beta = \{\sigma _x:D^{|Y_x|}\rightarrow D | x \in X\}\) is a family of Skolem functions for our instance (we will call such a family a strategy for our instance) and \(\pi :Y\rightarrow D\) an assignment of the universal variables, we write \(\beta \circ \pi \) for the assignment to the variables which assigns a universal variable y in Y to \(\pi (y)\) and an existential variable x in X to \(\sigma _x(\pi _{\upharpoonleft Y_x})\), where \(\pi _{\upharpoonleft Y_x}\) denotes the restriction of \(\pi \) to \(Y_x\).

We are now in a game setting pitting a universal player (male) against an existential player (female). Informally, she is trying to give a label to existential variables with the long-term view of optimising the objective function, while he is a malicious opponent trying to prevent her from doing so. She tries to minimise the objective no matter what her opponent plays; this is reasonable if she knows that he is maliciously trying to make sure that after play her objective is as large as possible. We therefore extend the objective function to quantified valued constraints and let \(\mathrm {obj}(\phi ,\beta ):= \max _{\pi :Y\rightarrow D}\mathrm {obj}(\phi ,\beta \circ \pi )\). We will consider the QVCSP to be the optimisation problem, the aim of which is to compute a \(\beta \) such that \(\mathrm {obj}(\phi ,\beta )\) is minimum. We will not as such request that \(\beta \) be given in full, as it would be of size at least \(D^m\) where m is the number of universal variables; instead, we will ask for a procedure that can play the underlying game according to the strategy \(\beta \). As for the VCSP, there are again two natural decision problems that arise:

  • decision: given an additional input k in \(\mathbb {Q}\), is there a \(\beta \) such that \(\mathrm {obj}(\phi ,\beta )\) is at most k?

  • feasibility: is there a \(\beta \) such that \(\mathrm {obj}(\phi ,\beta )\) is finite?

Note that this definition naturally extends the usual semantics of the QCSP, and the feasibility question for the QVCSP amounts to solving the QCSP for the underlying crisp language. This means that the above three problems are Pspace-hard in general, and we study in this paper their restrictions to a valued constraint language \(\varGamma \).
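The game semantics can be evaluated by brute force on small Boolean instances (the encoding of prefixes as quantifier/variable pairs is ours): the existential player minimises and the universal player maximises, in prefix order.

```python
def game_value(prefix, instance, assignment=None):
    # prefix: list of ("E", var) or ("A", var) pairs, in quantification order.
    # instance: list of (cost function, scope) pairs, as for the VCSP.
    assignment = assignment or {}
    if not prefix:
        return sum(rho(*(assignment[x] for x in scope))
                   for rho, scope in instance)
    (quant, var), rest = prefix[0], prefix[1:]
    pick = min if quant == "E" else max  # she minimises, he maximises
    return pick(game_value(rest, instance, {**assignment, var: d})
                for d in (0, 1))

rho_neq = lambda x, y: 0 if x != y else 1
# forall y exists x (x != y): value 0, she answers with the other label;
# exists x forall y (x != y): value 1, he can always match her label.
```

This also illustrates why the quantifier prefix matters: swapping the two quantifiers above changes the value of the game.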

Example 1

Let \(\varGamma _{\text {nae}}\) be the Boolean constraint language that consists of the cost function

$$\rho _{\text {nae}}(x,y,z)=\left\{ \begin{array}{l}\infty \quad \mathrm {if} \quad x=y=z\\ 0 \quad \mathrm {otherwise}\end{array}\right. $$

This language is crisp, and we know that the complexity of the VCSP is that of the corresponding CSP, namely NP-complete; for the QVCSP, we should look at the QCSP, well known to be Pspace-complete [12].

Example 2

Let \(\varGamma _{\text {neq}}\) be the Boolean constraint language that consists of the cost function

$$\rho _{\text {neq}}(x,y)=\left\{ \begin{array}{l}0 \quad \mathrm {if} \quad x\ne y\\ 1 \quad \mathrm {otherwise}\end{array}\right. $$

For this non-crisp language, we again get an NP-hard VCSP, not because of feasibility, which is tractable, but because of the optimisation, by reduction from MAX Sat for XOR [17]. Alternatively, one can simulate a variant of \(\rho _{\text {nae}}\):

$$\rho '_{\text {nae}}(x,y,z):=\rho _{\text {neq}}(x,y)+\rho _{\text {neq}}(x,z)+\rho _{\text {neq}}(y,z)-1=\left\{ \begin{array}{l}2 \text{ if } x=y=z\\ 0 \text{ otherwise }\end{array}\right. $$

We can reduce VCSP(\(\varGamma _{\text {nae}}\)) to VCSP(\(\varGamma _{\text {neq}}\)) by replacing every occurrence of the cost function \(\rho _{\text {nae}}\) by \(\rho '_{\text {nae}}\). The former instance is satisfiable iff the latter has a solution reaching an objective of 0. The same reduction applies for the QVCSP, whose decision version is therefore Pspace-complete.
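The identity defining \(\rho '_{\text {nae}}\) can be checked exhaustively over the Booleans:

```python
from itertools import product

rho_neq = lambda x, y: 0 if x != y else 1

def rho_nae_prime(x, y, z):
    # The gadget of Example 2: three soft disequalities minus a constant.
    return rho_neq(x, y) + rho_neq(x, z) + rho_neq(y, z) - 1

# Exhaustive check of the claimed table of costs: 2 if x=y=z, else 0.
for x, y, z in product((0, 1), repeat=3):
    assert rho_nae_prime(x, y, z) == (2 if x == y == z else 0)
```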

Example 3

The following Boolean cost function

$$\rho _{\text {eq}}(x,y)=\left\{ \begin{array}{l}0 \quad \mathrm {if} \quad x= y\\ 1 \quad \mathrm {otherwise}\end{array}\right. $$

together with two unary crisp cost functions that encode the constants 0 and 1 forms the Boolean language \(\varGamma _{\text {cut}}\), whose VCSP corresponds essentially to the problem MIN-CUT and is tractable [17]. In contrast, we will show below that the language \(\varGamma _{\text {eq}}=\{\rho _{\text {eq}}\}\) already has a QVCSP that is Pspace-hard.

Proposition 1

The QVCSP for the constraint language \(\varGamma _{\text {eq}}\) is Pspace-hard.

Proof

We reduce the decision version of QVCSP for \(\varGamma _{\text {neq}}\) (see example above) to that of \(\varGamma _{\text {eq}}\) as follows.

Given an instance \(\phi \) of the former with a quantifier prefix \(\mathcal {P}\), we reduce to the instance \(\widetilde{\phi }\) obtained by replacing every occurrence of the cost function \(\rho _{\text {neq}}\) by \(\rho _{\text {eq}}\) in \(\phi \) and picking the dual quantifier prefix \(\widetilde{\mathcal {P}}\) (that is turn existential variables to universal and vice versa).

Let N be the number of occurrences of the \(\rho _{\text {neq}}\) cost function in \(\phi \). We claim that the objective for \(\widetilde{\phi }\) must be more than \(N-k\) iff the objective for \(\phi \) is less than k.

Indeed, otherwise, pitting a strategy \(\widetilde{\beta }\) for \(\widetilde{\phi }\) that would attain an objective of less than \(N-k\) against any strategy \(\beta \) for \(\phi \) in the game for \(\phi \), we would obtain a final objective of more than k.

The dual argument applies for the other direction, which proves our claim.

The claim gives us the (Turing) reduction: we return the opposite of the answer obtained for \(\widetilde{\phi }\) with the threshold \(N-k\).    \(\square \)

4 Some Tractable Languages

4.1 Essentially Crisp Languages

We can deduce the complexity of such languages from the complexity of the associated QCSP.

Proposition 2

Let \(\varGamma \) be a valued constraint language over some finite set D. If \(\varGamma \) admits \(3\mathrm {Mjrty}\) or \(3\mathrm {Mnrty}\) as a multimorphism, where \(\mathrm {Mjrty}\) (respectively, \(\mathrm {Mnrty}\)) is any majority (respectively, minority) operation, then QVCSP(\(\varGamma \)) is tractable.

Proof

By Propositions 6.20 (majority) and 6.22 (minority) in [17], \(\varGamma \) is an essentially crisp language. Thus the problem QVCSP(\(\varGamma \)) is the same as QCSP(\(\varGamma '\)) where \(\varGamma '=\mathrm {crisp}(\varGamma )\). By construction, \(\varGamma '\) admits \(\mathrm {Mjrty}\) or \(\mathrm {Mnrty}\) as a polymorphism, which are known to be tractable cases by Theorems 4.2 (Mal'tsev) and 4.5 (near-unanimity) in [6].    \(\square \)

Remark 1

The above can be generalised to a language that admits a multimorphism 3f where f is Mal’tsev or a multimorphism k.f where f is a k-ary near-unanimity operation.

4.2 Permutations and Unary

The proof principle used to discard universal quantifiers for the language of the following result is reminiscent of the case of a language that consists of a single bipartite graph or a single disconnected graph for the QCSP [24]. In a nutshell, an instance boils down to a collection (conjunction) of instances with a prefix of quantification with at most one leading universal variable, or it must be rejected.

Theorem 5

Let \(\varGamma \) be a valued constraint language over some finite set D. If \(\varGamma \) admits \(\mathrm {Mjrty}_1+\mathrm {Mjrty}_2+\mathrm {Mnrty}_3\) as a multimorphism, then QVCSP(\(\varGamma \)) is tractable.

Proof

By Theorem 6.25 in [17], any cost function from \(\varGamma \) can be expressed as a sum of unary cost functions and binary permutation restrictions. The latter are crisp cost functions with costs ranging in \(\{0,\infty \}\) that amount to a restricted permutation, in the sense that for any x there is at most one \(y_2\) such that \(\phi (x,y_2)\) holds (has non-\(\infty \) cost) and at most one \(y_1\) such that \(\phi (y_1,x)\) holds.

Our algorithm will apply some simple preprocessing and detect that the instance is not feasible or it will deduce that each connected component of the constraint graph contains at most one universal variable and by some simple case analysis deduce the (worst) cost for these components. In effect this reduces the instance to a VCSP instance for which a simple algorithm is already known.

Let \(\phi \) be a permutation restriction occurring in the instance.

If \(\exists x \forall y \phi (x,y)\) occurs in the instance then it is not feasible, since for a given value of x, all but at most one value for y will yield an objective of \(\infty \). In this case, we may answer \(\infty \). Of course, the symmetric case of \(\exists x \forall y \phi (y,x)\) is dealt with in the same way. We ignore symmetric cases from now on.

Similarly, if \(\forall y_1 \forall y_2 \phi (y_1,y_2)\) occurs in the instance, then we may answer \(\infty \).

In the degenerate case of \(\forall y \phi (y,y)\), we may also answer \(\infty \) unless \(\phi \) is the crisp function for equality over D, in which case, we may simply discard \(\phi (y,y)\) from the sum.

So we may assume from now on that any occurrence of a permutation restriction is of the form \(\forall y \exists x \phi (x,y)\). If there is some \(y_0\) such that \(|\{x|\phi (x,y_0)=0\}|=0\), then we may answer \(\infty \). Otherwise, let \(\zeta (y)\) be the unique x such that \(\phi (x,y)=0\). The only Skolem function for x that yields feasibility is essentially unary and depends only on y: \(\sigma _x(y)=\zeta (y)\).

If there is a path \(y_1,x_1,x_2,\ldots ,x_n\) in the constraint graph, where \(y_1\) is universal and \(x_1,x_2,\ldots ,x_n\) are existential variables, then there are some permutations \(\zeta _i\) on D such that the only Skolem functions for these existential variables that could possibly yield feasibility are of the form \(\sigma _{x_1}(y)=\zeta _1(y_1)\), \(\sigma _{x_2}(y)=\zeta _2(\zeta _1(y_1)) \ldots \sigma _{x_n}(y)=\zeta _n(\ldots \zeta _2(\zeta _1(y_1)) \ldots )\). If there is an edge from \(x_n\) to some universal variable \(y_2\) then the instance is necessarily infeasible, since all but one value for \(y_2\) will yield an objective of \(\infty \) for a given value of \(x_n\).

We can solve in parallel the part of the instance induced by each connected component of the constraint graph, and we may assume that a connected component of this graph contains at most one universal variable that is quantified ahead of the existential variables of this connected component.

A connected component that does not contain any universal variable can be solved efficiently by some simple propagation (see [17]).

If a universal variable y occurs only within the scope of unary constraints then we simply assume that y takes the value yielding the worst cost.

More generally, for each connected component that contains one universal variable y, we can check in parallel, for each value d of y, the corresponding cost \(M_{y=d}\) for the component (all other variables are now fixed). The universal player will pick the worst value, so the component contributes the maximum cost among the \(M_{y=d}\), and the overall objective is the sum of these contributions over all components.    \(\square \)
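The final aggregation step of this algorithm can be sketched as follows (the data layout is ours and purely illustrative): each connected component contributes, for each value d of its at most one universal variable, a precomputed cost \(M_{y=d}\); the universal player picks the worst value per component, and independent components add up.

```python
def aggregate(components):
    # components: one dict per connected component, mapping each value d of
    # its universal variable to the cost M_{y=d}; a component with no
    # universal variable carries a single precomputed entry.
    return sum(max(costs.values()) for costs in components)

# Two hypothetical components: one whose universal variable can force
# cost 5, and one with no universal variable costing 2; total 7.
```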

Remark 2

The tractable languages for the QCSP from [24] mentioned above can be shown to exhibit collapsibility thanks to specific polymorphisms (see examples 2 and 3 in [23]). While we shall proceed similarly for the valued languages of the next section with a suitable multimorphism, we do not yet know of a multimorphism that witnesses directly “collapsibility” for the language of Theorem 5.

5 Collapsibility in the Valued Settings

Following Chen [20], for an input of the QVCSP with m universal variables, we restrict the universal opponent to play universal variables from a specific set of tuples, and investigate the interpolation of the unrestricted game from restricted (small-sized) ones in the presence of good multimorphisms.

As a concrete application, we will see the case of a language closed under the multimorphism 2.g (2 times g) where g is a semi-lattice with unit \(\top \). On an instance involving cost functions improved by this multimorphism, we may interpolate a winning strategy from winning strategies for all instances induced by replacing all but one universal variable by \(\top \).

The necessary definitions and notations to discuss such an interpolation in general can be a bit off-putting, and the keen reader may refer to the appendix. Here, we will only eventually state our tractability result and give first a detailed and concrete example to illustrate it.

Example 4

Consider the instance \(\forall y_1 \exists x_1 \forall y_2 \forall y_3 \exists x_2 \; \phi (y_1,x_1,y_2,y_3,x_2)\), where \(\phi \) is the 5-ary Boolean cost function such that the cost of (0, 0, 0, 0, 0) is 51, that of (0, 0, 0, 0, 1) and (0, 0, 0, 1, 0) is \(\infty \), that of (0, 0, 0, 1, 1) is 21, etc. This instance is depicted in Fig. 1. It can be checked that the cost function admits \(2\mathrm {Max}\) as a multimorphism: that is, the sum of the costs of any two tuples dominates twice the cost of their component-wise max. For example, take \(t_1=(0,0,1,0,1)\) and \(t_2=(1,0,0,1,1)\); their max is \(t_3=(1,0,1,1,1)\); the costs are \(\phi (t_1)=51\), \(\phi (t_2)=13\) and \(\phi (t_3)=5\); it is indeed the case that \(10=2\times 5 \le 51 + 13 = 64\).
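The numerical check for this pair of tuples can be replayed directly, using only the three costs quoted above for the (otherwise unspecified) cost function \(\phi \):

```python
t1, t2 = (0, 0, 1, 0, 1), (1, 0, 0, 1, 1)
t3 = tuple(max(a, b) for a, b in zip(t1, t2))  # component-wise max

# Only the three costs quoted in the text are needed for this pair.
cost = {t1: 51, t2: 13, t3: 5}

# The 2Max inequality for this pair: phi(t1) + phi(t2) >= 2 * phi(max(t1,t2)).
assert t3 == (1, 0, 1, 1, 1)
assert cost[t1] + cost[t2] >= 2 * cost[t3]  # 64 >= 10
```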

We replace all but one universal variable by 0 (the value of the unit \(\top \) for the specific case of \(\mathrm {Max}\)) and derive three restricted games, which amounts to solving the instances depicted in Fig. 2. Of course, each restricted game is a relaxation of the original instance. Thus, if one of them is not feasible then the original instance is also not feasible. The same argument applies to the objective reached by feasible instances.

The important point is that the converse holds since we can interpolate a strategy for the original instance from three strategies for the restricted games, in a way that can only improve them.

In what follows, we will assume that we have at our disposal the three strategies that are optimal for each restricted game.

Imagine that the first universal quantifier takes value 1, that is \(y_1=1\). We will also play \(y_1=1\) in the first restricted game and \(y_1=0\) in the other two restricted games (we may not do otherwise). Observe that the max of the triple (1, 0, 0) is 1. Next, we look up where the subsequent existential variable \(x_1\) is played in each restricted game. For example, we must have \(x_1=1\) in the first restricted game and \(x_1=0\) in the two other games (otherwise, we would necessarily end up infeasible). We apply max to this triple (1, 0, 0) and play \(x_1=1\) in the original game. We proceed in this fashion, going back and forth, taking antecedents and images under max. For example, \(y_2=0\) brings us back to \(y_2\) being played on (0, 0, 0) in the three games, and \(y_3=1\) brings us back to \(y_3\) being played on (0, 0, 1) in the three games. In these three games, \(x_2\) must be played on 1, 0 and 1, respectively. We play \(x_2\) on their maximum, which is 1. The “branches” of play in the four games alluded to above are highlighted in Figs. 1 and 2.

The fact that \(\mathrm {Max}\) is surjective means that we can always go back. The fact that 2\(\mathrm {Max}\) is a multimorphism means (with a little bit of work) that the strategy we have interpolated from those for the restricted games can only improve them.

Fig. 1. An instance of the QVCSP.

Fig. 2. Restricted games: from left to right, we keep the first, second and third universal quantifier. The other universal variables are assumed to take value 0 (we write a \(\bullet \) to denote that they are pinned to a constant).

In the previous example, we have explained how a general strategy can be interpolated from a set of strategies applying to restricted games, where we are left with a single universal quantifier. Each such strategy can be computed by an adaptation of Generalized Arc Consistency that also runs in polynomial time, and there are linearly many such strategies to compute. Thus, we have a tractable QVCSP in this case.

Theorem 6

Let \(\varGamma \) be a valued constraint language. Let g be a semi-lattice with unit \(\top \). If \(\varGamma \) admits 2 times g (2.g) as a multimorphism, then QVCSP(\(\varGamma \)) is tractable.

6 Proof of Theorem 4

We have proved in Sect. 4 that any valued Boolean constraint language that admits one of the good multimorphisms from the statement is tractable: for \(3\mathrm {Mjrty}\) and \(3\mathrm {Mnrty}\) by Proposition 2, for \(2\mathrm {Mjrty}+\mathrm {Mnrty}\) by Theorem 5, for \(2\mathrm {Max}\) and \(2\mathrm {Min}\) by Theorem 6. So, we are left with the hardness part of the statement.

If a constraint language does not admit any of these multimorphisms and is essentially crisp, it must be hard by Theorem 2. If a constraint language does not admit any of the multimorphisms of Theorem 3 and is not essentially crisp, then by Lemma 7.10 in [17] \(\varGamma ^*\) contains \(\rho _{\text {neq}}\), which has a Pspace-hard QVCSP as seen in Example 2.

The next lemma concludes the proof: we show that even a language that admits all the multimorphisms from Theorem 3 that do not appear in Theorem 4 can still simulate a cut function.

A language \(\varGamma '\) that admits fewer multimorphisms than \(\varGamma \) can express any finite subset of \(\varGamma ^*\). So any such \(\varGamma '\) would also simulate a cut function.

A cut function is a binary function from \(\{0,1\}^2\) to \(\mathbb {Q}\cup \{\infty \}\) of the following form $$\phi _{{cut}^\beta _\alpha }(x,y)=\left\{ \begin{array}{ll}\alpha & \mathrm {if}\ x= y,\\ \beta & \mathrm {otherwise,} \end{array}\right. $$ where \(\alpha ,\beta \in \mathbb {Q}\cup \{\infty \}\) with \(\alpha<\beta <\infty \). The QVCSP of a cut function is Pspace-hard by Proposition 1, since \(\rho _{\text {eq}}\) can be obtained by scaling and translating from any cut function.
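The scaling-and-translating step can be made explicit (function names are ours): subtracting \(\alpha \) and dividing by \(\beta -\alpha \) turns any cut function into \(\rho _{\text {eq}}\).

```python
def cut(alpha, beta):
    # A cut function phi: alpha on equal arguments, beta otherwise.
    return lambda x, y: alpha if x == y else beta

def to_rho_eq(phi, alpha, beta):
    # Translate by -alpha, then scale by 1/(beta - alpha).
    return lambda x, y: (phi(x, y) - alpha) / (beta - alpha)

# Example with the hypothetical values alpha = 3, beta = 7.
rho = to_rho_eq(cut(3, 7), 3, 7)
# rho now takes value 0 on equal arguments and 1 otherwise, i.e. rho_eq.
```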

Since we are in a quantified and valued setting, we will be able to use expressivity, scaling and translating, as for the VCSP, but also universal quantifiers, as for the QCSP. In what follows we will call this simulation with some universal quantifiers, to stress that we go beyond the \(*\) closure of the VCSP.

Lemma 1

Let \(\varGamma \) be a valued Boolean constraint language. If \(\varGamma \) admits \(\mathrm {Const}_1\), \(\mathrm {Const}_0\) and \(\mathrm {Max}+\mathrm {Min}\) as multimorphisms but no multimorphism from \(\{3\mathrm {Mjrty}\), \(3\mathrm {Mnrty}\), \( 2\mathrm {Mjrty}+\mathrm {Mnrty},2\mathrm {Max},2\mathrm {Min}\}\), then \(\varGamma \) can simulate a cut function with some universal quantifiers.

Consequently, the decision problem of the QVCSP of \(\varGamma \) is Pspace-complete.

The rest of this section is devoted to the proof of this lemma.

Fact 1

If \(\varGamma \) admits \(\mathrm {Max}+\mathrm {Min}\) as a multimorphism then \(\mathrm {crisp}(\varGamma )\) admits \(\mathrm {Mjrty}\) as a polymorphism.

Proof

\(\mathrm {crisp}(\varGamma )\) admits both \(\mathrm {Max}\) and \(\mathrm {Min}\) as polymorphisms and the majority can be defined as follows:

$$\mathrm {Mjrty}(a,b,c):=\mathrm {Max}[\mathrm {Max}(\mathrm {Min}(a,b),\mathrm {Min}(a,c)),\mathrm {Min}(b,c)].$$
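This identity is easy to verify exhaustively over the Boolean domain; the following check is our own sketch:

```python
from itertools import product

def mjrty(a, b, c):
    """Majority composed from Max and Min, as in the proof of Fact 1."""
    return max(max(min(a, b), min(a, c)), min(b, c))

# On {0,1}, Mjrty returns its repeated argument, i.e. 1 iff at least two
# of the three arguments are 1.
assert all(mjrty(a, b, c) == (1 if a + b + c >= 2 else 0)
           for a, b, c in product((0, 1), repeat=3))
print("identity holds on all of {0,1}^3")
```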

Fact 2

If \(\varGamma \) does not admit \(3\mathrm {Mjrty}\) as a multimorphism and \(\mathrm {crisp}(\varGamma )\) does admit \(\mathrm {Mjrty}\) as a polymorphism then \(\varGamma \) is not essentially crisp.

Proof

Let \(\rho \) be a cost function in \(\varGamma \) and let u, v, w be tuples such that \(\rho (u),\rho (v),\rho (w)<\infty \). Since \(\mathrm {Mjrty}\) is a polymorphism of \(\mathrm {crisp}(\varGamma )\), we have \(\rho (\mathrm {Mjrty}(u,v,w))<\infty \). If \(\rho \) is essentially crisp then \(3\rho (\mathrm {Mjrty}(u,v,w))=3\rho (u)= \rho (u)+\rho (v)+\rho (w)\). Thus, if \(\varGamma \) were essentially crisp, it would admit \(3\mathrm {Mjrty}\) as a multimorphism, contradicting our assumption.

Recall that \(\rho \) is finitely modular whenever for all tuples s, t such that \(\rho (s)\), \(\rho (t)\), \(\rho (\mathrm {Max}(s,t))\), and \(\rho (\mathrm {Min}(s,t))\) are finite, we have \(\rho (s)+\rho (t)=\rho (\mathrm {Max}(s,t))+\rho (\mathrm {Min}(s,t))\).
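Finite modularity is a finite condition that can be tested by enumeration. The sketch below is ours (the encoding of cost functions as dictionaries and the helper name are assumptions): a linear cost function is modular, while perturbing a single corner breaks the identity.

```python
from itertools import product
from math import inf

def is_finitely_modular(rho):
    """rho: dict from n-tuples over {0,1} to costs (inf allowed). Check
    rho(s) + rho(t) == rho(Max(s,t)) + rho(Min(s,t)) whenever all four
    costs are finite."""
    for s, t in product(rho, repeat=2):
        hi = tuple(max(a, b) for a, b in zip(s, t))  # coordinatewise Max
        lo = tuple(min(a, b) for a, b in zip(s, t))  # coordinatewise Min
        if all(rho[p] < inf for p in (s, t, hi, lo)) \
                and rho[s] + rho[t] != rho[hi] + rho[lo]:
            return False
    return True

# A linear cost function is modular; bumping one corner breaks modularity:
linear = {(x, y): 2 * x + 3 * y for x in (0, 1) for y in (0, 1)}
bumped = {**linear, (1, 1): 6}  # 2*1 + 3*1 = 5, bumped to 6
print(is_finitely_modular(linear))  # True
print(is_finitely_modular(bumped))  # False
```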

Fact 3

If \(\varGamma \) does not admit \(2\mathrm {Mjrty}+\mathrm {Mnrty}\) as a multimorphism and \(\mathrm {crisp}(\varGamma )\) does admit \(\mathrm {Mjrty}\) as a polymorphism, then there exists a cost function \(\rho \) in \(\varGamma \) that is not finitely modular or such that \(\mathrm {crisp}(\rho )\) does not admit \(\mathrm {Mnrty}\) as a polymorphism.

Proof

Corollary 6.26 in [17] establishes that a cost function \(\rho \) admits \(2\mathrm {Mjrty}+\mathrm {Mnrty}\) as a multimorphism iff it is finitely modular and \(\mathrm {crisp}(\rho )\) admits both \(\mathrm {Mjrty}\) and \(\mathrm {Mnrty}\) as polymorphisms.

Fact 4

If \(\varGamma \) is not essentially crisp, and it admits \(\mathrm {Const}_0\) and \(\mathrm {Const}_1\) as multimorphisms, and \(\mathrm {crisp}(\varGamma )\) admits \(\mathrm {Mjrty}\) but does not admit \(\mathrm {Mnrty}\) then \(\varGamma ^*\) contains a cut function.

Before proving this, let us point out that this means we are only left with the case where there is a \(\rho \) that is not finitely modular, a case that we will settle in the last fact.

Proof

We follow the same argument as in case 3 of the proof of Theorem 6.27 from [17] and establish that \(\varGamma \) contains a binary cost function \(\rho \) such that \(\rho (a,b)=\infty \) for exactly one \((a,b)\in D^2\) (all other values being finite). Since \(\varGamma \) admits \(\mathrm {Const}_0\) and \(\mathrm {Const}_1\) as multimorphisms, we know that \(\rho (0,0)=\rho (1,1)\leqslant \rho (b,a)< \rho (a,b)=\infty \). W.l.o.g., up to symmetry, we can suppose that \(a=0\) and \(b=1\), so that \(\rho (0,0)=\rho (1,1)\leqslant \rho (1,0)< \rho (0,1)=\infty \).

We must ensure \(\rho (1,0)>\rho (0,0)\) for our next construction to work. If it is not the case, then since \(\varGamma \) is not essentially crisp and has the multimorphisms \(\mathrm {Const}_0\) and \(\mathrm {Const}_1\), there is a cost function \(\rho _m\) (of arity m) and an m-tuple u such that \(\rho _m(0,\ldots ,0)=\rho _m(1,\ldots ,1)<\rho _m(u)<\infty \). Let \(\rho _2\) be the binary function defined by \(\rho _2(x_1,x_0)=\rho _m(x_{u[1]},\ldots ,x_{u[m]})\). We do not know the value of \(\rho _2(0,1)\) for sure, but we know that \(\rho _2(0,0)=\rho _2(1,1)<\rho _2(1,0)=\rho _m(u)<\infty \).

Let \(\rho _3(x,y):=\rho (x,y)+\rho _2(x,y)\). By construction, \(\rho _3(0,0)=\rho _3(1,1)<\rho _3(1,0)<\rho _3(0,1)=\infty \).

The last function, which is the desired cut function, is obtained by expressibility as follows:

\(\rho _4(x,y):=\underset{z,t}{\mathrm {Min}}[\rho _3(x,t)+\rho _3(z,t)+\rho _3(z,y)+\rho _3(y,z)-4\rho _3(0,0)]\).    \(\square \)

The next step will rely heavily on the technique of compression from [17], which we shall adapt to our purpose. Given an m-ary cost function \(\rho _m\) and two m-tuples u and v, the compression \(\rho _4\) of \(\rho _m\) w.r.t. u and v is defined as \(\rho _4(x_{00},x_{01},x_{10},x_{11})=\rho _m(x_{u[1]v[1]},x_{u[2]v[2]},\ldots ,x_{u[m]v[m]})\). One can verify that \(\rho _4(0,0,0,1)=\rho _m(\mathrm {Min}(u,v))\), \(\rho _4(0,0,1,1)=\rho _m(u)\), \(\rho _4(0,1,0,1)=\rho _m(v)\) and \(\rho _4(0,1,1,1)=\rho _m(\mathrm {Max}(u,v))\).
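The compression and its four identities can be sketched directly. The code below is our illustration (helper names and the sample cost function are assumptions): coordinate i of \(\rho _m\) receives the variable \(x_{ab}\) with \((a,b)=(u[i],v[i])\), so setting \(x_{00}=0\) and \(x_{11}=1\) recovers \(\mathrm {Min}(u,v)\), u, v and \(\mathrm {Max}(u,v)\) as the remaining two variables range over \(\{0,1\}\).

```python
def compress(rho_m, u, v):
    """Compression of an m-ary Boolean cost function rho_m w.r.t. the
    m-tuples u and v: a 4-ary function whose argument x_{ab} is plugged
    into every coordinate i with (u[i], v[i]) == (a, b)."""
    def rho4(x00, x01, x10, x11):
        x = {(0, 0): x00, (0, 1): x01, (1, 0): x10, (1, 1): x11}
        return rho_m(*(x[(a, b)] for a, b in zip(u, v)))
    return rho4

# Sanity check of the four identities on an arbitrary 3-ary cost function:
rho_m = lambda a, b, c: 5 * a + 2 * b * c + c
u, v = (1, 0, 1), (0, 1, 1)
rho4 = compress(rho_m, u, v)
mn = tuple(min(a, b) for a, b in zip(u, v))
mx = tuple(max(a, b) for a, b in zip(u, v))
assert rho4(0, 0, 0, 1) == rho_m(*mn)  # Min(u, v)
assert rho4(0, 0, 1, 1) == rho_m(*u)   # u
assert rho4(0, 1, 0, 1) == rho_m(*v)   # v
assert rho4(0, 1, 1, 1) == rho_m(*mx)  # Max(u, v)
print("compression identities verified")
```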

Next, we want to ensure that the first and last coordinates of \(\rho _4\) take values 0 and 1, in order to simulate the binary cost function \(\rho _4(0,x_1,x_2,1)\).

Fact 5

If \(\varGamma \) is not essentially crisp and admits \(\mathrm {Const}_1\) and \(\mathrm {Const}_0\) as multimorphisms, then \(\varGamma ^*\) contains a cut function, or there is a small quantified instance, with two free variables \(x_1\) and \(x_2\) and two universal variables, built from cost functions of \(\varGamma \), that together with \(\rho _4\) allows us to simulate the binary cost function \(\rho _4(0,x_1,x_2,1)\) for any m-ary cost function \(\rho _m\) from \(\varGamma \).

Proof

Let \(\rho _2\) be the binary function from the proof of the previous fact, with \(\rho _2(0,0)=\rho _2(1,1)<\rho _2(1,0)=\rho _m(u)<\infty \) (we only need the assumptions that \(\varGamma \) is not essentially crisp and admits \(\mathrm {Const}_1\) and \(\mathrm {Const}_0\) as multimorphisms). If \(\rho _2(0,1)\) is also finite, then \(\rho _2(x,y)+\rho _2(y,x)\) expresses a cut function and we are done.

Otherwise \(\rho _2(0,1)=\infty \), and \(\exists x_1 \exists x_2 \forall y_1 \forall y_2 \quad \rho _2(y_1,x_1)+\rho _2(x_2,y_2)-2 \rho _2(1,0)\) forces \(x_1=0\) and \(x_2=1\), because this is the only way to avoid an infinite cost.

So, for any cost function \(\rho _m\) from \(\varGamma \) and its compression \(\rho _4\), if we insert at the beginning of an instance the prefix \(\exists x_1 \exists x_2 \forall y_1 \forall y_2 \quad \rho _2(y_1,x_1)+\rho _2(x_2,y_2)-2 \rho _2(1,0)\), then every subsequent constraint of the form \(\rho _4(x_1,x,y,x_2)\) plays the same role as \(\rho _4(0,x,y,1)\).
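The forcing effect of the gadget can be checked by brute force. The sketch below is ours and uses a hypothetical numeric instance of \(\rho _2\) with \(\rho _2(0,0)=\rho _2(1,1)=1<\rho _2(1,0)=2\) and \(\rho _2(0,1)=\infty \); it also assumes one particular arrangement of the \(\rho _2\) atoms that achieves the forcing stated in the text.

```python
from math import inf
from itertools import product

# Hypothetical rho_2 with rho_2(0,0) = rho_2(1,1) = 1 < rho_2(1,0) = 2
# and rho_2(0,1) = inf:
rho_2 = {(0, 0): 1, (1, 1): 1, (1, 0): 2, (0, 1): inf}

def worst_case(x1, x2):
    """Adversarial (universal) value of rho_2(y1,x1) + rho_2(x2,y2)."""
    return max(rho_2[(y1, x1)] + rho_2[(x2, y2)]
               for y1, y2 in product((0, 1), repeat=2))

for x1, x2 in product((0, 1), repeat=2):
    print((x1, x2), worst_case(x1, x2))
# Only (x1, x2) = (0, 1) keeps the worst case finite (here 4 = 2*rho_2(1,0)),
# which matches the translation constant -2*rho_2(1,0) in the prefix.
```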

Fact 6

If there exists a cost function in \(\varGamma \) which is not finitely modular, and \(\varGamma \) admits \(\mathrm {Min}+\mathrm {Max}\) as a multimorphism but admits neither \(2\mathrm {Max}\) nor \(2\mathrm {Min}\) as a multimorphism, then \(\varGamma \) can simulate a cut function with some universal quantifiers.

Proof

Since \(2\mathrm {Max}\) is not a multimorphism, there is a cost function \(\rho _{\text {NMax}}\) and tuples u, v such that \(2\rho _{\text {NMax}}(\mathrm {Max}(u,v))>\rho _{\text {NMax}}(u)+\rho _{\text {NMax}}(v)\). Both \(\rho _{\text {NMax}}(\mathrm {Max}(u,v))<\infty \) and \(\rho _{\text {NMax}}(\mathrm {Min}(u,v))<\infty \) because \(\mathrm {Min}+\mathrm {Max}\) is a multimorphism, and so both \(\mathrm {Min}\) and \(\mathrm {Max}\) are polymorphisms of \(\mathrm {crisp}(\varGamma )\).

By the binarisation method of the compression from the previous fact, either we have a cut function and we are done, or we can simulate the binary function \(\rho _{2\text {NMax}}\); let \(\rho _M(x,y)=\rho _{2\text {NMax}}(x,y)+\rho _{2\text {NMax}}(y,x)-2\rho _{2\text {NMax}}(0,0)\).

We have \(\rho _M : \left\{ \begin{array}{l} 1,1 \mapsto A+\epsilon _1 \text{ with } 0<\epsilon _1\leqslant A\\ 1,0 \mapsto A \text{ with } A>0\\ 0,1 \mapsto A\\ 0,0 \mapsto 0 \end{array}\right. \)

\(0<\epsilon _1\) because \(\rho _{\text {NMax}}\) does not have the multimorphism \(2\mathrm {Max}\), and \(\epsilon _1\leqslant A\) because \(\rho _{\text {NMax}}\) has the multimorphism \(\mathrm {Min}+\mathrm {Max}\).

There is a function \(\rho _{\text {NMod}}\) which is not finitely modular and admits \(\mathrm {Min}+\mathrm {Max}\) as a multimorphism. So there are tuples u, v such that \(\rho _{\text {NMod}}(\mathrm {Max}(u,v))+\rho _{\text {NMod}}(\mathrm {Min}(u,v))<\rho _{\text {NMod}}(u)+\rho _{\text {NMod}}(v)<\infty \). By the binarisation method from the previous fact, either we have a cut function and we are done, or we can simulate \(\rho _{2\text {NMod}}\) and define \(\rho _s(x,y):=\rho _{2\text {NMod}}(x,y)+\rho _{2\text {NMod}}(y,x)-2\rho _{2\text {NMod}}(0,0)\). By construction, we have \(\rho _s : \left\{ \begin{array}{l} 1,1 \mapsto b_0 \text{ with } b_0<2b\\ 1,0 \mapsto b\\ 0,1 \mapsto b\\ 0,0 \mapsto 0 \end{array}\right. \) .

Let \(\alpha \) be a positive integer such that \(\alpha >\frac{b-b_0}{\epsilon _1}\), and let \(\varphi _M:=\alpha \rho _M+\rho _s\). Then:

  • \( \varphi _M(1,1)=\alpha A+\alpha \epsilon _1 +b_0>\alpha A + b-b_0+b_0=\varphi _M(1,0)\)

  • \( \varphi _M(1,1)= (A+\epsilon _1)\alpha +b_0<2 A \alpha +2b =2(\alpha A+b)=2\varphi _M(1,0)\)

  • \( \varphi _M(1,0)=\varphi _M(0,1)=b+\alpha A\geqslant b +\alpha \epsilon _1>b+b-b_0>0\)

  • \( \varphi _M(0,0)=0\)

We have \(\varphi _M : \left\{ \begin{array}{l} 1,1 \mapsto M+\epsilon _M \text{ with } 0<\epsilon _M< M\\ 1,0 \mapsto M \text{ with } M>0\\ 0,1 \mapsto M\\ 0,0 \mapsto 0 \end{array}\right. \)
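The four bullet computations above can be double-checked numerically. The sketch below is ours and uses hypothetical sample parameters satisfying the proof's constraints (\(0<\epsilon _1\leqslant A\), \(0<b_0<2b\), \(\alpha >\frac{b-b_0}{\epsilon _1}\)); exact rationals avoid floating-point noise.

```python
from fractions import Fraction as F

# Hypothetical sample parameters consistent with the proof's constraints:
A, eps1 = F(5), F(2)   # 0 < eps1 <= A
b, b0 = F(7), F(3)     # 0 < b0 < 2b
alpha = F(3)           # alpha > (b - b0) / eps1 = 2

rho_M = {(1, 1): A + eps1, (1, 0): A, (0, 1): A, (0, 0): F(0)}
rho_s = {(1, 1): b0, (1, 0): b, (0, 1): b, (0, 0): F(0)}
phi_M = {p: alpha * rho_M[p] + rho_s[p] for p in rho_M}

assert phi_M[(1, 1)] > phi_M[(1, 0)]       # first bullet
assert phi_M[(1, 1)] < 2 * phi_M[(1, 0)]   # second bullet
assert phi_M[(1, 0)] == phi_M[(0, 1)] > 0  # third bullet
assert phi_M[(0, 0)] == 0                  # fourth bullet
print("phi_M has the announced shape")
```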

A similar proof with \(2\mathrm {Min}\) instead of \(2\mathrm {Max}\) can be used to construct a binary function \(\varphi _m\) such that \(\varphi _m : \left\{ \begin{array}{l} 1,1 \mapsto 0\\ 1,0 \mapsto m \text{ with } m>0\\ 0,1 \mapsto m\\ 0,0 \mapsto m+\epsilon _m \text{ with } 0<\epsilon _m< m \end{array}\right. \)

We define the function \(\rho (x,y)=(m+\epsilon _m)\varphi _M(x,y)+(M+\epsilon _M)\varphi _m(x,y)\) and we have:

  • \(\rho (1,1)=\rho (0,0)=mM+m\epsilon _M+M\epsilon _m+\epsilon _M \epsilon _m\)

  • \(\rho (1,0)=\rho (0,1)=mM+m\epsilon _M+M\epsilon _m+mM\)

  • \(\epsilon _M \epsilon _m<mM\)

So \(\rho \) is a cut function as required.    \(\square \)
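The final combination can also be verified numerically. The sketch below is ours and plugs in hypothetical sample values with \(0<\epsilon _M<M\) and \(0<\epsilon _m<m\), the shape guaranteed for \(\varphi _M\) and \(\varphi _m\) by the constructions above.

```python
# Hypothetical sample values with 0 < eps_M < M and 0 < eps_m < m:
M, eps_M = 10, 3
m, eps_m = 4, 1

phi_M = {(1, 1): M + eps_M, (1, 0): M, (0, 1): M, (0, 0): 0}
phi_m = {(1, 1): 0, (1, 0): m, (0, 1): m, (0, 0): m + eps_m}

# rho = (m + eps_m) * phi_M + (M + eps_M) * phi_m, as in the proof:
rho = {p: (m + eps_m) * phi_M[p] + (M + eps_M) * phi_m[p] for p in phi_M}

# Cut-function shape: equal arguments strictly cheaper than unequal ones,
# using eps_M * eps_m < m * M.
assert rho[(1, 1)] == rho[(0, 0)] < rho[(1, 0)] == rho[(0, 1)]
print(rho)  # {(1, 1): 65, (1, 0): 102, (0, 1): 102, (0, 0): 65}
```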

7 Conclusion

We have studied the quantified valued constraint satisfaction problem, also known as the weighted CSP with min-max quantifiers, and established preliminary results regarding its complexity when restricted to a valued constraint language.

Without introducing any new Galois connection, using only the tools for the VCSP and merely adapting collapsibility from the QCSP, we obtain several tractability and intractability results, which allow us in particular to derive a dichotomy for the Boolean case. The proof is somewhat complex, and we plan to introduce the correct Galois connection for the QVCSP in the hope that it will allow us to streamline this proof and extend the result to larger domains.

Another line of enquiry would be to better understand collapsibility in the context of valued constraints. Our current attempt does not seem to provide us with transitivity as it does in the non valued case.

Finally, let us note that any attempt at classifying the QVCSP for 3 or more elements might hit the same hurdle as in the case of the QCSP. We can easily build problems that fall in NPO and are NP-hard. For example, consider

$$\rho : \left\{ \begin{array}{l} \{0,1,2\}^2\rightarrow \mathbb {Q}\cup \{\infty \}\\ (x,y) \mapsto \left\{ \begin{array}{ll} \rho _{\text {neq}}(x,y) &{} \text{ if } (x,y)\in \{0,1\}^2\\ \infty &{} \text{ otherwise } \end{array} \right. \end{array}\right. $$

Every instance with a universally quantified variable y can be trivially answered, as the objective is \(\infty \) as soon as y takes the value 2. We are left with purely existential instances, which likewise must play on \(\{0,1\}\). Consequently, the QVCSP of \(\rho \) has the same complexity as the VCSP of \(\rho _{\text {neq}}\), namely it is NP-hard and in NPO.
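The collapse to the Boolean sub-domain can be illustrated directly. The sketch below is ours; it assumes \(\rho _{\text {neq}}(x,y)\) costs 1 when \(x=y\) and 0 otherwise on the Booleans (a Max-Cut-style cost function, our assumption about Example 2).

```python
from math import inf
from itertools import product

# Assumed Boolean rho_neq: cost 1 on equal arguments, 0 otherwise.
def rho(x, y):
    """The 3-element cost function above: rho_neq on {0,1}^2, inf elsewhere."""
    if x in (0, 1) and y in (0, 1):
        return 1 if x == y else 0
    return inf

# A universal variable set to 2 drives the objective to infinity...
print(max(rho(x, 2) for x in (0, 1, 2)))  # inf
# ...so optimal play stays on {0, 1}, where rho behaves exactly like rho_neq:
print(min(rho(x, y) for x, y in product((0, 1), repeat=2)))  # 0
```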