1 Introduction

Strategic reasoning has been a major research theme in game theory. However, “much of game theory is about the question whether strategic equilibria exist”, as Johan van Benthem points out, “but there are hardly any explicit languages for defining, comparing, or combining strategies” [22]. The intrinsic difficulty of modelling strategic reasoning is that it is not purely deductive but combines temporal reasoning, counterfactual reasoning, reasoning about actions and preferences, and multi-agent interaction. Any logic used for this purpose must be able to support all of these reasoning mechanisms.

In recent years a number of specific logical formalisms have been proposed for specifying strategic behaviour of agents in multi-agent systems [3, 11, 13, 14, 24]. These frameworks offer modelling facilities and inference mechanisms for specifying strategy reasoning in multi-agent systems. At the same time, the conceptually simple, general Game Description Language (GDL) has been developed as a practical language for encoding the rules of arbitrary games so that they can be understood by general game-playing systems, whose task is to learn to play unknown games without human intervention [6, 20].

In this paper, we explore the middle ground between pure game specification languages like GDL on the one hand, and existing expressive formalisms for strategic reasoning on the other hand. Our main contributions can be summarised as follows:

  1. We show how a simple extension of GDL using a standard modality for linear time suffices to describe strategies in addition to the mere rules of a game, and we present a specific semantics by which formulas in this language can be understood as move recommendations for a player.

  2. We enrich our language with two preference operators, called prioritised disjunction and prioritised conjunction, and show how to use them to describe strategies that are complete (i.e., provide a move recommendation in every state) and deterministic (i.e., move recommendations are always unique).

  3. We demonstrate two example methods of implementation: a formalisation of the semantic interpretation of our language in the Situation Calculus [16] and a translation of a subset of our language into Answer Set Programming [5].

These results are accompanied by a thorough mathematical analysis of the language and its semantics, in particular with regard to our novel preference operators. The advantage of our framework is that it allows for concise but complete representations of games and of the strategic behaviour of agents, so the approach promises to be practically useful for the design of game-playing agents. We will use a simple game scenario as a running example to explain the basic concepts and to demonstrate how our language can be used to write strategies for game-playing agents. Our results lay the foundations for a variety of applications of automated reasoning about strategies in AI systems, including the following:

  • A general game player can be fed not only with the mere rules of a new game but also with declarative descriptions of tailor-made strategies.

  • General game players can use a high-level, declarative representation of their strategies as the basis for learning, maintaining, and reasoning about them.

  • Players can also use the language and its inference mechanism to represent, revise and reason with their beliefs about their opponents’ strategies.

The remainder of the paper proceeds as follows. In Section 2 we begin our technical exposition with the definition of a simple state transition semantics for games. In Section 3, we define the semantic concept of a strategy within this framework, and in Section 4 we develop a formal game specification language based on GDL for describing both game rules and strategies in a concise manner. In Section 5, we introduce the two preference connectives and show how to use them to combine strategies into more complex ones. In Section 6, we demonstrate how to verify whether a strategy can bring about an expected outcome using a model-based approach. In Section 7, we encode the semantic interpretation of our language in conjunction with game rules and strategy rules in the Situation Calculus and also illustrate how the reasoning problem can be solved with Answer Set Programming. We conclude with a discussion of related work.

2 State Transition Games

To set the stage for our work, we first define formally a general state transition model for games.

Definition 1

A state transition game G is a tuple \((N, W, \mathcal {A}, \bar {w}, t, l, u,g)\), where

  1. N is a non-empty finite set of players;

  2. W is a non-empty set of states (possible worlds);

  3. \(\mathcal {A}=\bigcup _{i\in N} A^{i}\), where \(A^{i}\) is a non-empty finite set of actions for player \(i\in N\);

  4. \(\bar {w}\in W\), representing the initial state;

  5. \(t \subseteq W\), representing the terminal states;

  6. \(l\subseteq W \times \mathcal {A}\) is a binary legality relation, describing what actions are allowed in which states;

  7. \(u:\mathcal {A}\times W\mapsto W\) is an update function, specifying the state transitions;

  8. \(g:N\mapsto 2^{W}\) is a goal function, specifying the winning states of each agent.

To keep our formalism as simple as possible, we assume that all actions are performed asynchronously, each by a single player (although games need not be turn-taking). We also assume that different agents have different actions, that is, \(A^{i}\cap A^{j}=\emptyset \,\) for any ij, but of course two actions may have the same effects.
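To make Definition 1 concrete, the game tuple can be rendered as a plain data structure. The following Python sketch is purely illustrative; the class and field names (`StateTransitionGame`, `players`, `legal`, etc.) are our own and not part of the formalism:

```python
from dataclasses import dataclass

# Illustrative rendering of Definition 1; the fields mirror the tuple
# (N, W, A, w_bar, t, l, u, g) and all names are our own choice.
@dataclass
class StateTransitionGame:
    players: set    # N: non-empty finite set of players
    states: set     # W: non-empty set of states
    actions: dict   # maps each player i to its action set A^i (pairwise disjoint)
    initial: object # w_bar: the initial state
    terminal: set   # t: subset of W, the terminal states
    legal: set      # l: binary legality relation, a set of (state, action) pairs
    update: dict    # u: maps (action, state) to the successor state
    goal: dict      # g: maps each player to its set of winning states

# A tiny one-player game: action "a" takes w0 to the terminal, winning state w1.
G = StateTransitionGame(
    players={1},
    states={"w0", "w1"},
    actions={1: {"a"}},
    initial="w0",
    terminal={"w1"},
    legal={("w0", "a")},
    update={("a", "w0"): "w1"},
    goal={1: {"w1"}},
)
assert G.update[("a", G.initial)] in G.terminal
```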

A sequence \(w_{0}\overset {a_{0}}\rightarrow w_{1} \overset {a_{1}}\rightarrow {\cdots } \overset {a_{m-1}}\rightarrow w_{m} \) is called a complete path if

  1. \(w_{0}=\bar {w}\), \(w_{m}\in t\), and \(w_{j}\in W\) for all \(0< j< m\);

  2. \(a_{j}\in \mathcal {A}\) for all \(0\leq j< m\);

  3. \((w_{j},a_{j})\in l\) for all \(0\leq j< m\); and

  4. \(u(a_{j},w_{j})=w_{j+1}\) for all \(0\leq j< m\).

Any segment \(w_{k} \overset {a_{k}}\rightarrow w_{k+1} \overset {a_{k+1}}\rightarrow {\cdots } \overset {a_{l-1}}\rightarrow w_{l}\), where \(k\leq l\), of a complete path is called a reachable path. We let \(\mathcal {P}(G)\) denote the set of all reachable paths in a state transition game G. Note that a single state without any action can be a reachable path (i.e., the case where k=l). We call such a singleton path \(w\in W\cap \mathcal {P}(G)\) a reachable state.
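Under the assumption that the game has finitely many complete paths, these paths can be enumerated directly from the legality relation and update function. The following sketch (our own helper, not from the text) represents a path as an alternating tuple (w0, a0, w1, ..., wm):

```python
# A sketch (our own code) of enumerating complete paths from the initial
# state to a terminal state, using plain dictionaries/sets for the
# legality relation and update function. Assumes finitely many complete
# paths (no cycle detection is performed).
def complete_paths(initial, terminal, legal, update):
    """Yield complete paths as tuples (w0, a0, w1, a1, ..., wm)."""
    stack = [(initial,)]
    while stack:
        path = stack.pop()
        w = path[-1]
        if w in terminal:
            yield path
            continue
        for (state, a) in legal:
            if state == w:
                stack.append(path + (a, update[(a, w)]))

# Toy game: from w0, action "a" leads to w1, then "b" to the terminal w2.
legal = {("w0", "a"), ("w1", "b")}
update = {("a", "w0"): "w1", ("b", "w1"): "w2"}
paths = list(complete_paths("w0", {"w2"}, legal, update))
assert paths == [("w0", "a", "w1", "b", "w2")]
```

Every reachable path is then a contiguous segment of one of the enumerated complete paths.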

A state-action pair (w,a) is called a reachable legal move (or simply a move) in G if there is a \(w^{\prime }\) such that \(w\stackrel {a}{\rightarrow }w^{\prime }\) is a reachable path. The set of all such moves in G is denoted by Ω(G). More formally,

$$ {\Omega}(G)=\{(w,a): \mathrm{there\,is}\, w^{\prime}\in W\,\mathrm{such~that}\, w\overset{a}\rightarrow w^{\prime}\in\mathcal{P}(G) \} $$
(1)

Furthermore, the moves for player i are denoted by Ωi(G):

$$ {\Omega}^{i}(G)=\{(w,a)\in {\Omega}(G): a\in A^{i}\} $$
(2)

For convenience, we let

$$ l^{i}(w)=\{a: (w,a)\in {\Omega}^{i}(G)\} $$
(3)
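Equations (1)–(3) can be read off such alternating state-action paths directly. A sketch with hypothetical helper names:

```python
# A sketch (our own helper names) of extracting the reachable legal
# moves Omega(G) and the per-state legal actions l^i(w) from a given
# set of complete paths (w0, a0, w1, ..., wm), following Eqs. (1)-(3).
def moves(paths):
    """Omega(G): all (state, action) pairs occurring on some path."""
    return {(p[j], p[j + 1]) for p in paths for j in range(0, len(p) - 1, 2)}

def player_moves(paths, player_actions):
    """Omega^i(G): moves whose action belongs to player i."""
    return {(w, a) for (w, a) in moves(paths) if a in player_actions}

def legal_actions(paths, player_actions, w):
    """l^i(w): actions available to player i at state w."""
    return {a for (v, a) in player_moves(paths, player_actions) if v == w}

# Toy example: one complete path w0 -a-> w1 -b-> w2, where player 1 owns "a".
paths = [("w0", "a", "w1", "b", "w2")]
assert moves(paths) == {("w0", "a"), ("w1", "b")}
assert legal_actions(paths, {"a"}, "w0") == {"a"}
```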

To facilitate the presentation of our framework, we will use as our running example a simple two-player game, which we call the CrossDot game.

Example 1 (CrossDot Game)

Two players take turns in placing either a cross “ ×” (player 1) or a dot “ ⋅” (player 2) into an empty box in a line of m boxes, where m≥2:

$$\underbrace{\Box\Box\Box\ldots\Box}_{m} $$

Each box can contain at most one object. The first player to fill \(k\) (\(1< k\leq m\)) consecutive boxes ends and wins the game, where k is arbitrary but fixed. If all boxes have been filled without a winner, the game ends in a tie. We assume that the “ ×”-player goes first.

To describe the scenario as a state transition game, let N={1,2} be the set of players and

$$W=\{ (t_{1},t_{2}, x_{1},x_{2},\ldots,x_{m}): t_{1},t_{2}\in \{0,1\}\ \&\ x_{1},\ldots,x_{m}\in\{\Box,\boxtimes,\boxdot\}\} $$

the set of possible states, where \(t_{1},t_{2}\) specify whose turn it is (\(t_{i}=1\) if it is player i’s turn; otherwise \(t_{i}=0\)) and \(x_{1},\ldots,x_{m}\) indicate the status of the boxes. The initial state is \(\bar {w}=(1,0, \Box , \Box , \cdots ,\Box )\).

We write \({a^{i}_{j}}\) to denote the action of player i marking the jth box, and let \(A^{i}=\{{a^{i}_{j}}: 1\leq j\leq m\}\) for each \(i\in N\). We refrain from explicitly listing the legality relation, the update function, and the terminal and goal states for the players, as this is possible but considerably lengthy even for a very simple game like this; the syntactic axiomatisation of this game given in the following section will be much more concise and practical. Let us just pick a few examples: For m=4 and k=2, the state \((1,0,\boxdot , \boxtimes , \boxtimes , \boxdot )\) is a terminal state and also a goal state for player 1. Under our assumption that player 1 takes the first move, the state-action pair \(\left (\left (1,0,\boxdot , \Box , \boxtimes , \Box \right ), {a^{1}_{2}}\right )\), say, satisfies the legality relation, whereas \(\left (\left (1,0,\boxdot , \Box , \boxtimes , \Box \right ), {a^{2}_{2}}\right )\) does not because it is not player 2’s turn at this stage. As an example of the update function we have \(u\left ({a^{1}_{2}}, \left (1,0,\boxdot , \Box , \boxtimes , \Box \right )\right ) = (0,1,\boxdot , \boxtimes , \boxtimes , \Box )\).

For future reference, by \(G_{CrossDot}^{m,k}\) we denote the instance of our game that consists of m boxes and has winning length k.
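The legality relation and update function of the CrossDot game can be sketched in a few lines. This is our own encoding (states as tuples (t1, t2, x1, ..., xm) with markers "E"/"X"/"D" for empty/cross/dot), and for brevity the non-terminal condition of the legality rule is omitted:

```python
# A sketch (our encoding) of the CrossDot legality relation and update
# function. A state is (t1, t2, x1, ..., xm) with ti in {0, 1} and each
# box marked "E" (empty), "X" (cross), or "D" (dot). The check that the
# game has not yet terminated is omitted for brevity.
def legal(state, player, box):
    """Is it legal for `player` to mark `box` (1-based) in `state`?"""
    t1, t2, boxes = state[0], state[1], state[2:]
    my_turn = (player == 1 and t1 == 1) or (player == 2 and t2 == 1)
    return my_turn and boxes[box - 1] == "E"

def update(state, player, box):
    """Apply action a^player_box: mark the box and pass the turn."""
    boxes = list(state[2:])
    boxes[box - 1] = "X" if player == 1 else "D"
    t1, t2 = state[0], state[1]
    return (t2, t1) + tuple(boxes)   # swap whose turn it is

w = (1, 0, "D", "E", "X", "E")       # the example state from the text
assert legal(w, 1, 2) and not legal(w, 2, 2)
assert update(w, 1, 2) == (0, 1, "D", "X", "X", "E")
```

The two assertions reproduce the legality and update examples given in the text for m=4 and k=2.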

3 Strategies in State Transition Games

In a multi-agent game environment, agents strive to achieve their goals. How individual agents act is determined by their strategies: At a state of a game, the strategies that the agent applies determine which actions the agent will take. For example, a strategy of a player in the game of chess reflects which moves the player would play in certain positions that can be reached according to the standard chess rules. In terms of the state transition game, a strategy of a player can be defined as a relation between game states and (legal) actions the player will perform.

Definition 2

A strategy S of player i is a subset of W×A i such that \(S\subseteq {\Omega }^{i}(G)\).

Intuitively, a strategy of a player specifies which actions the player should take in which states. Note that our concept of strategy is significantly different from the one used in the context of alternating-time temporal logic (ATL) [1, 24, 25]. In ATL, a strategy is a function that maps each state (or each sequence of states) to an action; in other words, a strategy specifies exactly which action to take in each state or each history of states. Our concept of strategy, by contrast, can express a “rough idea” of what to do: a strategy may suggest no action, one action or several actions in each state, a player might apply several strategies in one game, and a single strategy does not necessarily determine the moves for all possible legal positions of a game. In later sections, we will develop techniques to combine and refine strategies in order to generate a strategy with desirable properties.

3.1 Properties of Strategies

We say that a strategy S is valid if \(S\neq \emptyset \). A strategy S of player i is complete if for each reachable state \(w\in W\cap \mathcal {P}(G)\), there is a move \((w,a)\in S\) unless \(l^{i}(w)=\emptyset \). In other words, a complete strategy provides the player with a “complete” guideline that always gives one or more suggestions how to act when it is his move. A strategy S is deterministic if for any (w,a)∈S and \((w,a^{\prime })\in S\), we have \(a=a^{\prime }\). A strategy is functional if it is complete and deterministic. An agent with a functional strategy knows precisely what to do in any reachable game state. For instance, if an agent iN has a default action \({a^{i}_{0}}\in A^{i}\) that is always legal in any reachable state, then the following simple strategy for this player is complete and deterministic, hence functional:

$$S^{i}=\left\{\left(w,{a^{i}_{0}}\right): w\in W\cap \mathcal{P}(G)\right\} $$
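The three properties just defined are simple set-theoretic checks. A sketch (the function names are our own), where a strategy is given as a set of (state, action) pairs:

```python
# A sketch (our own code) of the strategy properties of Section 3.1.
def is_valid(S):
    """Valid: the strategy recommends at least one move."""
    return len(S) > 0

def is_deterministic(S):
    """Deterministic: at most one recommended action per state."""
    states = [w for (w, a) in S]
    return len(states) == len(set(states))

def is_complete(S, reachable_states, legal_actions):
    """Complete: a move is recommended at every reachable state where
    the player has a legal action (legal_actions plays the role of l^i)."""
    covered = {w for (w, a) in S}
    return all(w in covered for w in reachable_states if legal_actions(w))

S = {("w0", "a")}
assert is_valid(S) and is_deterministic(S)
# w1 offers the player no legal action, so it is exempt from completeness.
assert is_complete(S, {"w0", "w1"}, lambda w: {"a"} if w == "w0" else set())
```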

Example 2

Consider an instance, \(G_{\textit {CrossDot}}^{4,2}\), of the CrossDot game. The following is an example of a strategy for player 1 that intuitively says “fill a box next to one you have marked before”:

$$\begin{array}{@{}rcl@{}} S^{1}&=&\{ ((1,0,\boxtimes, \Box, \boxdot, \Box), {a^{1}_{2}}), \quad ((1,0,\boxtimes, \Box, \Box,\boxdot), {a^{1}_{2}}), \\ & &{\kern4pt}((1,0,\boxdot, \boxtimes, \Box, \Box), {a^{1}_{3}}), \quad ((1,0,\Box, \boxtimes,\boxdot,\Box), {a^{1}_{1}}), \\ & &{\kern4pt}((1,0, \Box, \boxtimes, \Box, \boxdot), {a^{1}_{1}}), \quad((1,0,\Box, \boxtimes,\Box,\boxdot), {a^{1}_{3}}), \\ &&{\kern4pt} ((1,0,\boxdot, \Box, \boxtimes, \Box), {a^{1}_{2}}), \quad ((1,0,\boxdot, \Box, \boxtimes, \Box), {a^{1}_{4}}), \\ &&{\kern4pt} ((1,0,\Box, \boxdot, \boxtimes, \Box), {a^{1}_{4}}), \quad ((1,0,\Box, \Box, \boxtimes, \boxdot), {a^{1}_{2}}),\\ &&{\kern4pt} ((1,0,\boxdot, \Box, \Box, \boxtimes), {a^{1}_{3}}), \quad ((1,0,\Box, \boxdot, \Box, \boxtimes), {a^{1}_{3}})\,\} \end{array} $$

It is easy to see that the strategy is valid but neither complete nor deterministic.

4 Strategy Representation and Semantics

The above example shows that directly representing a strategy in a state transition game requires listing every move that complies with the strategy. In the following, we develop a syntactic representation that allows a strategy to be described much more concisely.

4.1 Formal Game Specification Language

We first present a logic-based, general game description language with linear time.

Definition 3

Consider a propositional modal language \(\mathcal {L}\) with these components:

  • a non-empty finite set Φ of propositional variables;

  • a non-empty finite set N of agent symbols;

  • a non-empty finite set A i of action symbols for each iN;

  • propositional connectives ¬, ∧, ∨, → and ≡;

  • pseudo-function symbols d o e s(.), l e g a l(.), and w i n s(.);

  • modal operator ○;

  • special propositional symbols init and terminal.

Formulas in \(\mathcal {L}\) are defined as follows:

$$\varphi\ :=\ p \ |\ \neg \varphi \ |\ \varphi \wedge \varphi\ |\ \bigcirc \varphi \ |\ does(a)\ |\ legal(a) \ |\ wins(i) \ |\ init\ |\ terminal $$

where p∈Φ, iN and \(a\in \mathcal {A}=\bigcup _{i\in N} A^{i}\).

Note that we overload N and A i as they occur in both syntax and semantics; which is meant will be clear from the context. As in Definition 1, we assume that \(A^{i}\cap A^{j}=\emptyset \) if ij.

We call does, legal and wins pseudo-functions because formally each instance does(a), legal(a) or wins(i) is taken to be an individual propositional symbol. This language is a direct adaptation of the general Game Description Language [6] and allows games to be described in a compact way.

Example 1

(continued) We use the propositional symbols \({p^{i}_{j}}\) to represent the fact that box j is filled with player i’s marker, where i∈{1,2} and j∈{1,…,m}. In addition, we use two specific propositional symbols turn(1) and turn(2) to represent the players’ turns. Putting all the propositional symbols together, we have \({\Phi }_{CrossDot}=\left \{{p^{i}_{j}}: 1\leq i\leq 2\ \&\ 1\leq j\leq m\right \}\cup \left \{turn(i): i=1\, \mathrm{or}\, 2\right \}\). With this, we are able to describe the game rules in our logical language.

To begin with, the following rules specify the initial game state:

$$\begin{array}{@{}rcl@{}} init & \rightarrow & \neg {p^{i}_{j}} \qquad \qquad \qquad \qquad \qquad \qquad \mathrm{for~all}~i\in~N~\text{and}~ j~\leq~m \end{array} $$
(4)
$$\begin{array}{@{}rcl@{}} init & \rightarrow & turn(1)\wedge \neg turn(2) \end{array} $$
(5)

The following statement defines the winning conditions: Player i wins if there is j such that j+k−1≤m and \({p^{i}_{j}}\wedge \cdots \wedge p^{i}_{j+k-1}\), i.e.

$$ wins(i)\equiv\bigvee\limits_{j=1}^{m-k+1} \bigwedge\limits_{l=j}^{j+k-1} {p^{i}_{l}} $$
(6)

With this, the condition for termination is:

$$ terminal\,\equiv\, wins(1)\vee wins(2)\vee \textstyle\bigwedge\limits_{j=1}^{m}\left({p^{1}_{j}}\vee {p^{2}_{j}}\right) $$
(7)

As before, let \({a^{i}_{j}}\) denote the action of player i filling box j. Legality of the actions of each player i can be described thus:

$$\begin{array}{@{}rcl@{}} & \neg \left({p^{1}_{j}}\vee {p^{2}_{j}}\right)\wedge turn(i)\wedge \neg terminal\,\equiv\, legal\left({a^{i}_{j}}\right) \end{array} $$
(8)

The effects of the actions are given by

$$\begin{array}{@{}rcl@{}} & {p^{i}_{j}}\vee does\left({a^{i}_{j}}\right)\equiv \bigcirc {p^{i}_{j}} \end{array} $$
(9)
$$\begin{array}{@{}rcl@{}} & turn(1) \rightarrow \bigcirc\neg turn(1)\wedge \bigcirc turn(2) \end{array} $$
(10)
$$\begin{array}{@{}rcl@{}} & turn(2) \rightarrow\bigcirc\neg turn(2)\wedge \bigcirc turn(1) \end{array} $$
(11)

Let \({\Sigma }^{m,k}_{CrossDot}\) be the set of axioms (4)–(11).

In the following, we interpret the language based on the state transition model. Given a state transition game \(G=(N,W,\mathcal{A},\bar{w},t,l,u,g)\) (see Definition 1), a valuation function \(v:W\mapsto 2^{\Phi }\) specifies which atomic propositions are true at each state. The truth values of propositional formulas are then defined accordingly.

Definition 4

Let \(G=(N,W,\mathcal{A},\bar{w},t,l,u,g)\) be a state transition game and v a valuation function. We call the pair M=(G,v) a state transition model. Let \(\delta =w_{0}\overset {a_{0}}\rightarrow w_{1}\overset {a_{1}}\rightarrow {\cdots } \overset {a_{m-1}}\rightarrow w_{m} \in \mathcal {P}(G)\) be a reachable path in G and φ a formula. We say that δ satisfies φ under M (written \(M,\delta \models \varphi \)) according to the following definition:

$$\begin{array}{@{}lcl@{}} M,\delta \models p & \text{iff} & p\in v(w_{0}) \quad (p \in {\Phi})\\ M,\delta \models does(a) & \text{iff} & a= a_{0}\\ M,\delta \models init & \text{iff} & w_{0}= \bar{w}\\ M,\delta \models terminal & \text{iff} & w_{0}\in t\\ M,\delta \models legal(a) & \text{iff} & (w_{0},a)\in l\\ M,\delta \models wins(i) & \text{iff} & w_{0}\in g(i)\\ M,\delta \models \neg \varphi & \text{iff} & M,\delta \not\models \varphi\\ M,\delta \models \varphi_{1} \wedge \varphi_{2} & \text{iff} & M,\delta \models \varphi_{1}\ \text{and}\ M,\delta \models \varphi_{2}\\ M,\delta \models \bigcirc \varphi & \text{iff} & M,w_{1} \overset{a_{1}}\rightarrow\cdots\overset{a_{m-1}}\rightarrow w_{m}\models \varphi \end{array} $$

It is worth clarifying that in the limit case m=0 (i.e., \(\delta =w_{0}\)), we have that \(M,\delta \not \models does(a)\) for any a, while \(M,\delta \models \bigcirc \varphi \) holds vacuously for any φ.

A formula φ is valid in a model M, denoted \(M\models \varphi \), if it is satisfied by every reachable path in the game, that is, \(M,\delta \models \varphi \) for all \(\delta \in \mathcal {P}(G)\). Let Σ be a set of sentences in \(\mathcal {L}\); then M is a model of Σ if \(M\models \varphi \) for all φ∈Σ.
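The satisfaction relation of Definition 4 is directly executable. Below is a minimal evaluator (our own AST encoding; the model is passed as a tuple of a valuation function, the legality relation, the terminal states, the initial state, and the goal function), following the conventions for singleton paths noted above:

```python
# A minimal sketch (our own encoding) of the satisfaction relation of
# Definition 4. A path is a list [w0, a0, w1, a1, ..., wm]; formulas are
# nested tuples such as ("and", f, g), ("next", f), ("does", "a"), or a
# bare string for a propositional atom evaluated via the valuation v.
def sat(M, path, phi):
    v, legal, terminal, initial, goal = M
    w0 = path[0]
    if isinstance(phi, str):            # atom p
        return phi in v(w0)
    op = phi[0]
    if op == "does":                    # false on a singleton path
        return len(path) > 1 and path[1] == phi[1]
    if op == "not":
        return not sat(M, path, phi[1])
    if op == "and":
        return sat(M, path, phi[1]) and sat(M, path, phi[2])
    if op == "next":                    # vacuously true on a singleton path
        return len(path) == 1 or sat(M, path[2:], phi[1])
    if op == "legal":
        return (w0, phi[1]) in legal
    if op == "init":
        return w0 == initial
    if op == "terminal":
        return w0 in terminal
    if op == "wins":
        return w0 in goal[phi[1]]
    raise ValueError(op)

# Toy model: atom "p" holds only at w1; the single legal move is (w0, a).
M = (lambda w: {"p"} if w == "w1" else set(),
     {("w0", "a")}, {"w1"}, "w0", {1: {"w1"}})
path = ["w0", "a", "w1"]
assert sat(M, path, ("and", ("does", "a"), ("next", "p")))
assert sat(M, path, ("init",))
```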

Observation 1

Consider the CrossDot game \(G=G_{CrossDot}^{m,k}\) introduced in Example 1. Let v be a valuation function such that for each state \(w=(t_{1},t_{2},x_{1},\cdots ,x_{m})\in W\), \(v(w)=\{turn(i): t_{i}=1\}\cup \left \{{p^{1}_{j}}: x_{j}=\boxtimes \ \&\ 1\leq j\leq m\right \}\cup \left \{{p^{2}_{j}}: x_{j}=\boxdot \ \&\ 1\leq j\leq m\right \}\). Let M=(G,v). Then M is a model of \({\Sigma }^{m,k}_{CrossDot}\) (see Eqs. (4)–(11)).

Proof

Given any reachable path \(\delta ={w_{0} \overset {a_{0}}{\rightarrow }w_{1}\overset {a_{1}}{\rightarrow } \cdots \overset {a_{e-1}}{\rightarrow }w_{e}}\), we only have to verify that each axiom in \({\Sigma }^{m,k}_{CrossDot}\) is satisfied by δ in M. This is straightforward. Consider for instance the proof that \(M,\delta \models init \rightarrow \neg {p^{i}_{j}}\) for any \(i\in N\) and \(j\leq m\): If \(w_{0}\not =\bar {w}\), this holds trivially because \(M,\delta \models \neg init\). If \(w_{0} = \bar {w}\), then \(M,\delta \models init\) and, by the definition of v, \({p^{i}_{j}}\not \in v(w_{0})\), hence \(M,\delta \models \neg {p^{i}_{j}}\) for any \(i\in N\) and \(j\leq m\). The other axioms can be verified similarly. □

In order to develop a syntactic representation for strategies, we introduce the following concepts. Let \(\delta =w_{0} \overset {a_{0}}\rightarrow w_{1}\overset {a_{1}}\rightarrow \cdots \overset {a_{m-1}}\rightarrow w_{m}\). We call δ a reachable path starting with the move (w 0,a 0). The set of all reachable paths in game G that start with (w,a) is denoted by \(\mathcal {P}(w,a)\). Given a state transition model M=(G,v), for any move (w,a)∈Ω(G), a formula φ is valid under move (w,a), denoted by \(M\models _{(w,a)} \varphi \), if \(M,\delta \models \varphi \) for every reachable path \(\delta \in \mathcal {P}(w,a)\).

Example 3

Consider the same instance of the CrossDot game \(G_{CrossDot}^{4,2}\) as in Example 2. Let M be the state transition model defined in Observation 1 for the case m=4 and k=2. Assume that \((w,a)=((1,0,\boxtimes , \boxdot ,\Box ,\Box ),{a^{1}_{3}})\). It is easy to verify the following:

$$\begin{array}{@{}rcl@{}} &&M\models_{(w,a)}legal({a^{1}_{4}})\\&&M\not\models_{(w,a)}does({a^{2}_{3}})\\ &&M\models_{(w,a)}\neg ({p^{1}_{3}}\vee {p^{2}_{3}})\wedge \bigcirc {p^{1}_{3}}\wedge\bigcirc (legal({a^{2}_{4}})\wedge does({a^{2}_{4}}))\\&&M\models_{(w,a)}\bigcirc\bigcirc terminal \end{array} $$

4.2 Describing Strategies

We now turn to the syntactical representation of strategies using the language introduced above. For any state transition model M=(G,v) and formula \(\varphi \in \mathcal {L}\), let

$$ S^{i}(\varphi)= \{(w,a)\in{\Omega}^{i}(G): M\models_{(w,a)} \varphi\} $$
(12)

In other words, S i(φ) comprises all moves for player i under which φ is valid.
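Equation (12) then amounts to a filter over the player's moves. A sketch with hypothetical helpers, where `paths_from(w, a)` stands for the set of reachable paths starting with the move and `holds` decides the truth of φ on a path:

```python
# A sketch (hypothetical helper names) of Eq. (12): S^i(phi) collects
# the moves of player i under which phi holds on every continuation.
def strategy(moves_i, paths_from, holds):
    """moves_i: player i's reachable moves (state, action);
    paths_from(w, a): all reachable paths starting with the move;
    holds(path): truth of phi on a path."""
    return {(w, a) for (w, a) in moves_i
            if all(holds(d) for d in paths_from(w, a))}

# Toy example for phi = "the next state is a winning state": the move
# ("w0", "a") always leads to "win", while ("w0", "b") leads to "lose".
paths = {("w0", "a"): [["w0", "a", "win"]],
         ("w0", "b"): [["w0", "b", "lose"]]}
S = strategy({("w0", "a"), ("w0", "b")},
             lambda w, a: paths[(w, a)],
             lambda d: d[-1] == "win")
assert S == {("w0", "a")}
```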

Definition 5

Given a state transition model M, let S be a strategy of player i according to Definition 2. A formula φ in \(\mathcal {L}\) is a representation of S iff \(S=S^{i}(\varphi )\).

In the following, we will use the term “strategy” both for a set of moves (i.e., a strategy in the sense of Definition 2) and for its representation (a formula in \(\mathcal {L}\)); which is meant will be clear from the context. Note that a formula \(\varphi \in \mathcal {L}\) can represent different strategies for different players. For instance, the tautology ⊤ can be a strategy of player i, namely S i(⊤), that allows the player to take any reachable move at any reachable state. Another example is \(does(a^{i})\), where \(a^{i}\in A^{i}\). As a strategy of player i, it means to take a i, and only a i, at any reachable state. As a strategy of a player other than i, however, it means to do nothing.

While the language we consider in this paper is propositional and finite, a state transition game can have infinitely many worlds which cannot be distinguished by the propositions that hold in them. A strategy that assigns different actions to two indistinguishable worlds cannot be described in our language. However, every strategy that is Markovian in the following sense does have a representation in \(\mathcal {L}\).

Definition 6

Given a state transition model M=(G,v), a strategy S is Markovian if for all \((w_{1},a)\in S\) and all reachable states \(w_{2}\in W\cap \mathcal {P}(G)\) with \(v(w_{1})=v(w_{2})\), we have \((w_{2},a)\in S\).

Proposition 1

Given a state transition model M=(G,v) of \(\mathcal {L}\) , any Markovian strategy S of a player has a representation in \(\mathcal {L}\).

Proof

For each (w,a)∈S, since the set Φ of propositional variables is finite, v(w) and Φ∖v(w) are both finite. Thus the following is a well-formed propositional formula:

$$ \varphi(w,a)=\left(\bigwedge_{p\in v(w)}p\right)\wedge\left(\bigwedge_{p\in {\Phi}\setminus v(w)}\neg p\right)\wedge does(a) $$
(13)

Since Φ and \(\mathcal {A}\) are finite, the set {φ(w,a):(w,a)∈S} is finite even though S may be infinite. Therefore the following is also a well-formed propositional formula:

$$\varphi = \bigvee\limits_{(w,a)\in S} \varphi(w,a) $$

Now we show that φ is a representation of S. Obviously, \(M\models _{(w,a)} \varphi \) for any (w,a)∈S. Conversely, for any (w,a)∈Ωi(G), where i is the player under consideration, assume that \(M\models _{(w,a)} \varphi \). By the construction of φ, there is a move \((w^{\prime },a^{\prime })\in S\) such that

$$M\models_{(w,a)} \left(\bigwedge_{p\in v(w^{\prime})}p\right)\wedge\left(\bigwedge_{p\in {\Phi}\setminus v(w^{\prime})}\neg p\right)\wedge does(a^{\prime}) $$

It turns out that \(v(w)=v(w^{\prime })\). Since S is Markovian, we have \((w,a^{\prime })\in S\). Note that \(M\models _{(w,a)} does(a^{\prime })\) implies \(a=a^{\prime }\) by Definition 4. We then have (w,a)∈S, as desired. □
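The construction in the proof can be mirrored in code. The following sketch (our own string rendering of formulas) builds φ(w,a) of Eq. (13) and the disjunction over a Markovian strategy; note that indistinguishable states contribute the same disjunct:

```python
# A sketch (our own string rendering) of the canonical representation of
# a Markovian strategy: the complete state description of Eq. (13),
# conjoined with does(a) and disjoined over all moves in S.
def phi_move(Phi, v_w, a):
    """phi(w, a): full description of the state valuation plus does(a)."""
    lits = [p if p in v_w else "~" + p for p in sorted(Phi)]
    return "(" + " & ".join(lits + [f"does({a})"]) + ")"

def represent(Phi, v, S):
    """The disjunction over all moves; duplicates collapse via the set."""
    return " | ".join(sorted({phi_move(Phi, v[w], a) for (w, a) in S}))

Phi = {"p", "q"}
v = {"w0": {"p"}, "w1": {"p"}}      # w0 and w1 are indistinguishable
S = {("w0", "a"), ("w1", "a")}      # Markovian: same action on both
assert represent(Phi, v, S) == "(p & ~q & does(a))"
```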

Note that Markovian strategies are history-independent. If we want to specify strategies in which move choices depend on the history of a game, then we need to extend our language with syntactic means to talk about past states. This can be done by adding the inverse of the operator ○, written ○−1, to denote that some property holds in “the previous state”. We leave this extension for future work.

The reader is reminded that formulas in our language have been endowed with two different semantics. If a formula is used to represent a property, it has a truth value as a normal (temporal) propositional formula. If it is used to represent a strategy, then it no longer has a truth value but represents a set of moves for a player.

The following observation shows how our language can be used to describe a useful strategy for our running example. Compared to its semantic representation (cf. Example 2), the syntactic expression of the strategy is much more compact and meaningful.

Observation 2

Strategy S 1 from Example 2 is represented by

$$\begin{array}{@{}rcl@{}} \varphi&=&\bigvee_{1< j\leq 4} \left(p^{1}_{j-1} \wedge\neg {p^{1}_{j}}\wedge \neg {p^{2}_{j}} \wedge does\left({a^{1}_{j}}\right)\right)\vee\\&& \bigvee_{1\leq j< 4}\left(\neg {p^{1}_{j}}\wedge \neg {p^{2}_{j}} \wedge p^{1}_{j+1} \wedge does\left({a^{1}_{j}}\right)\right) \end{array} $$

Proof

First we show \(S^{1}\subseteq S^{1}(\varphi )\). This can be done by verifying \(M\models _{(w,a)} \varphi \) one by one for each (w,a)∈S 1. For instance, consider \(w=(1,0,\boxtimes , \Box , \boxdot ,\Box )\) and \(a={a^{1}_{2}}\). Then \(M\models _{(w,a)} {p^{1}_{1}}\wedge \neg {p^{1}_{2}} \wedge \neg {p^{2}_{2}}\) and \(M\models _{(w,a)} does\left ({a^{1}_{2}}\right )\). Thus \(M\models _{(w,a)} \varphi \).

In order to show that \(S^{1}(\varphi )\subseteq S^{1}\), we can prove that for each reachable move (w,a) of player 1, (w,a)∉S 1 implies \(M\not \models _{(w,a)} \varphi \). This can be done by enumerating all reachable moves in \({\Omega}^{1}\left(G^{4,2}_{CrossDot}\right)\setminus S^{1}\). For instance, let \(w_{0}=(1,0,\boxtimes ,\boxdot , \Box , \Box )\) and \(a_{0}={a^{1}_{3}}\). Obviously, (w 0,a 0) is a reachable move and (w 0,a 0)∉S 1. To show \(M\not \models _{(w_{0},a_{0})} \varphi \), consider the reachable path \(\delta = w_{0} \overset {a_{0}}\rightarrow w_{1}\), where \(w_{1}=(0,1,\boxtimes ,\boxdot , \boxtimes , \Box )\). It is easy to verify that \(M,\delta \models \neg {p^{1}_{2}} \wedge \neg {p^{1}_{4}}\wedge does({a^{1}_{3}})\) while \(M, \delta \models \neg does(a^{\prime })\) for any \(a^{\prime }\not = {a^{1}_{3}}\). Thus we have \(M,\delta \not \models \varphi \), and hence \(M\not \models _{(w_{0},a_{0})} \varphi \). The other cases can be verified similarly. □

The above observation shows that we can still rely on our logical intuitions to design a strategy despite the significant difference in semantics between a propositional formula and a strategy representation. When doing so, it is important to keep in mind that a propositional formula bears a very different meaning when understood as a strategy rather than a state description. In the following we demonstrate the special nature of strategy representations with a few examples based on the CrossDot game \(G=G_{CrossDot}^{4,2}\).

  1. One formula can represent different strategies for different players:

    $$\begin{array}{@{}rcl@{}} S^{1}(\bigcirc {p^{1}_{1}}) = & \{((1,0,\Box, x_{2}, x_{3}, x_{4}), {a^{1}_{1}})\!\in\! \mathcal{P}(G): x_{2},x_{3},x_{4} \in\{\Box,\boxtimes,\boxdot\}\}\\ & \cup\{((1,0,\boxtimes, \Box, x_{3}, x_{4}), {a^{1}_{2}})\in \mathcal{P}(G): x_{3},x_{4} \in\{\Box,\boxtimes,\boxdot\}\}\\ & \cup\{((1,0,\boxtimes, x_{2}, \Box, x_{4}), {a^{1}_{3}})\in \mathcal{P}(G): x_{2},x_{4} \in\{\Box,\boxtimes,\boxdot\}\}\\ & \cup\{((1,0,\boxtimes, x_{2}, x_{3}, \Box), {a^{1}_{4}})\in \mathcal{P}(G): x_{2},x_{3} \in\{\Box,\boxtimes,\boxdot\}\} \end{array} $$
    $$\begin{array}{@{}rcl@{}} S^{2}(\bigcirc {p^{1}_{1}})= & \{((0,1,\boxtimes, \Box, x_{3}, x_{4}), {a^{2}_{2}})\in \mathcal{P}(G): x_{3},x_{4} \in\{\Box,\boxtimes,\boxdot\}\}\\ & \cup \{((0,1,\boxtimes, x_{2}, \Box, x_{4}), {a^{2}_{3}})\in \mathcal{P}(G): x_{2},x_{4} \in\{\Box,\boxtimes,\boxdot\}\}\\ & \cup\{((0,1,\boxtimes, x_{2}, x_{3}, \Box), {a^{2}_{4}})\in \mathcal{P}(G): x_{2},x_{3} \in\{\Box,\boxtimes,\boxdot\}\} \end{array} $$

    In other words, if \(\bigcirc {p^{1}_{1}}\) represents a strategy of player 1, it means that, whenever it is his turn, player 1 is to fill the first box with his marker if that box is currently empty, or to fill any other empty box if box 1 has already been filled with a cross. However, if the formula \(\bigcirc {p^{1}_{1}}\) represents a strategy of player 2, it means that player 2 wishes the first box to be filled by player 1; in this case, he waits for such states to come and then does whatever is feasible for him.

  2. Forcing another player into a particular action:

    $$\begin{array}{@{}rcl@{}} S^{1}(\bigcirc does({a^{2}_{1}}))&=&\{((1,0,\Box, \Box, x_{3}, x_{4}), {a^{1}_{2}})\in \mathcal{P}(G): x_{3},x_{4} \in\{\boxtimes,\boxdot\}\}\\ &&\cup \{((1,0,\Box, x_{2}, \Box, x_{4}), {a^{1}_{3}})\in \mathcal{P}(G): x_{2},x_{4} \in\{\boxtimes,\boxdot\}\}\\ &&\cup \{((1,0,\Box, x_{2}, x_{3}, \Box), {a^{1}_{4}})\in \mathcal{P}(G): x_{2},x_{3} \in\{\boxtimes,\boxdot\}\} \end{array} $$

    In general, a player has no control over the other player’s actions. But a player may be able to create a situation in which the opponent has no choice other than the desired action. In the above example, player 1 enforces a situation in which player 2 has no option but to perform \({a^{2}_{1}}\) when it becomes his turn.

  3. Thinking forward:

    $$\begin{array}{@{}rcl@{}} &&S^{2}(\bigcirc \neg does({a^{1}_{1}}) \rightarrow \bigcirc \bigcirc does({a^{2}_{1}}))\\ &&{\kern3pt}= \{((0,1,\Box, \Box, \Box, \boxtimes), {a^{2}_{2}}), ((0,1,\Box, \Box, \Box, \boxtimes), {a^{2}_{3}}),\\ &&{\kern19pt}((0,1,\Box, \Box, \boxtimes, \Box ), {a^{2}_{2}}),((0,1,\Box, \Box, \boxtimes, \Box ), {a^{2}_{4}}), \\ &&{\kern19pt}((0,1,\Box,\boxtimes, \Box, \Box), {a^{2}_{3}}), ((0,1,\Box,\boxtimes, \Box, \Box), {a^{2}_{4}})\} \end{array} $$

    This is a strategy for player 2 to try to find an action such that “he can fill box 1 in his next turn as long as player 1 does not fill it beforehand.” Note that \(S^{2}(\bigcirc \bigcirc does({a^{2}_{1}})) = \emptyset \) because player 2 can do nothing to guarantee that \({a^{2}_{1}}\) is executable in his next turn.

  4. Try anything to win the game within one step:

    $$\begin{array}{@{}rcl@{}} &&S^{1}(\bigcirc wins(1))\\ &&{\kern3pt}= \{((1,0,\boxtimes, \Box, \boxdot, \Box), {a^{1}_{2}}), ((1,0,\boxtimes, \Box, \Box, \boxdot), {a^{1}_{2}}), \\ &&{\kern19pt} ((1,0,\Box,\boxtimes, \boxdot, \Box), {a^{1}_{1}}), ((1,0,\Box, \boxtimes, \Box, \boxdot), {a^{1}_{1}}), \\ &&{\kern19pt} ((1,0,\Box,\boxtimes, \Box, \boxdot), {a^{1}_{3}}),((1,0,\boxdot, \Box,\boxtimes, \Box), {a^{1}_{2}}), \\ &&{\kern19pt} ((1,0,\Box, \Box, \boxtimes, \boxdot), {a^{1}_{2}}), ((1,0,\boxdot, \Box, \Box, \boxtimes), {a^{1}_{3}}), \\ &&{\kern19pt} ((1,0, \Box,\boxdot, \Box,\boxtimes), {a^{1}_{3}})\} \end{array} $$

    This is a simple strategy for player 1 that aims to try any available action to win the game in one step.

We have seen from the above examples that even though a strategy rule is written in the syntax of (temporal) propositional formulas, its semantics differs significantly from that of propositional logic; therefore we must be cautious when using the language to design game strategies.

To conclude this section, we present a property of our strategy representation for later use.

Lemma 1

Given a state transition model M, for each player i,

  1. 1.

    \(S^{i}(\varphi _{1}\wedge \varphi _{2})=S^{i}(\varphi _{1})\cap S^{i}(\varphi _{2})\)

  2. 2.

    \(S^{i}(\varphi _{1})\cup S^{i}(\varphi _{2})\subseteq S^{i}(\varphi _{1}\vee \varphi _{2})\)

Proof

To prove (1), assume that \((w,a)\in S^{i}(\varphi_{1}\wedge \varphi_{2})\). It follows that \(a\in A^{i}\) and \(M\models_{(w,a)} \varphi_{1}\wedge\varphi_{2}\). Then for each reachable path \(\delta\in (w,a)^{\leadsto}\), we have \(M,\delta\models \varphi_{1}\wedge\varphi_{2}\), which implies \(M,\delta\models\varphi_{1}\) and \(M,\delta\models\varphi_{2}\). It follows that \(M\models_{(w,a)}\varphi_{1}\) and \(M\models_{(w,a)}\varphi_{2}\), so \((w,a)\in S^{i}(\varphi_{1})\) and \((w,a)\in S^{i}(\varphi_{2})\). The other direction is similar.

To prove (2), assume that \((w,a)\in S^{i}(\varphi_{1})\cup S^{i}(\varphi_{2})\). Without loss of generality, suppose that \((w,a)\in S^{i}(\varphi_{1})\), which implies \(a\in A^{i}\) and \(M\models_{(w,a)}\varphi_{1}\). Hence \(M,\delta\models\varphi_{1}\) for each reachable path \(\delta\in(w,a)^{\leadsto}\). It follows that \(M,\delta\models\varphi_{1}\vee\varphi_{2}\). Thus \(M\models_{(w,a)}\varphi_{1}\vee\varphi_{2}\). Given that \(a\in A^{i}\), we conclude \((w,a)\in S^{i}(\varphi_{1}\vee\varphi_{2})\). □

5 Strategy Composition

As mentioned in the introduction, the main motivation of this work is to introduce a formal logical language for defining, comparing and combining strategies. We have provided our formal definition of strategies at both the syntactic and the semantic level in the previous sections. To facilitate the composition of strategies, in this section we extend our language by two specific connectives, called prioritised disjunction and prioritised conjunction, respectively.

5.1 Prioritised Disjunction and Conjunction

The idea behind these two new connectives is the following. The prioritised disjunction “\(\triangledown \)” extends the choice of actions: if a first strategy fails to apply then a second one offers more options, and if that fails too then a third strategy may offer more options still, and so on. Conversely, the prioritised conjunction “\(\vartriangle \)” narrows down the choice of actions: if a first strategy allows too many options, then a second strategy may be used to constrain these options, a third strategy may narrow down the options even further, and so on, up to the point where the next strategy in line would lead to an empty set of options.

Definition 7

The set of strategy rules is the smallest set such that

  1. 1.

    a formula in \(\mathcal {L}\) is a strategy rule;

  2. 2.

    if r 1,…,r m are strategy rules, then so is \(r_{1} \triangledown r_{2} \triangledown {\cdots } \triangledown r_{m}\);

  3. 3.

    if r 1,…,r m are strategy rules, then so is \(r_{1}\!\vartriangle \!r_{2}\!\vartriangle\!\cdots\!\vartriangle \!r_{m}\).

Note that the new strategy connectives are introduced as macros rather than as additional connectives in the language \(\mathcal {L}\). This is so because we do not want to allow the nesting of the strategy connectives with logical connectives (while nested strategy rules are allowed). For instance, \((\varphi _{1}\vartriangle \psi _{1})\triangledown (\varphi _{2}\vartriangle \psi _{2})\) is a syntactically correct strategy rule while \(\varphi \rightarrow (\psi _{1}\triangledown \!\ \psi _{2})\) is not.

5.2 Semantics of Strategy Rules

Given a strategy S, we let \(S\!\!\upharpoonright _{w}=\{(w,a): (w,a)\in S\}\), i.e., the set of all the moves at state w specified by S.

Definition 8

Let r be a strategy rule. We define S i(r) recursively on the structure of r as follows:

  1. 1.

    If \(r=\varphi \in \mathcal {L}\), then S i(r)=S i(φ).

  2. 2.

    If \(r=r_{1} \triangledown r_{2} \triangledown {\cdots } \triangledown r_{m}\), then (w,a) ∈ S i(r) iff there exists k (1≤k≤m) such that \( (w,a)\in S^{i}(r_{k})\ \text {and}\ \bigcup _{j<k}S^{i}(r_{j})\!\!\upharpoonright _{w}=\emptyset . \)

  3. 3.

    If \(r=r_{1}\vartriangle r_{2}\vartriangle {\cdots } \vartriangle r_{m}\), then (w,a)∈S i(r) iff there exists k (1≤k≤m) such that

    1. (a)

      \( (w,a)\in \bigcap _{j\leq k}S^{i}(r_{j})\) and

    2. (b)

      k=m or \(\bigcap _{j\leq k+1}S^{i}(r_{j})\upharpoonright _{w}=\emptyset \).

Intuitively, \(r_{1}\triangledown r_{2}\triangledown {\cdots } \triangledown r_{m}\) represents a strategy that combines strategies r 1,r 2,…,r m in such a way that a strategy rule r k becomes applicable only if none of the rules r j with higher priority (j<k) is applicable. \(r_{1}\vartriangle r_{2}\vartriangle {\cdots }\vartriangle r_{m}\) tries to apply as many strategy rules together as possible, but gives higher priority to rules on the left than to those on the right if conflicts occur.
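For illustration, the two cases of Definition 8 can be computed directly when each strategy \(S^{i}(r_{j})\) is given extensionally as a finite set of (state, action) pairs. The following Python sketch is not part of the formal framework; it simply mirrors the definition:

```python
def restrict(S, w):
    """All moves of strategy S at state w (the set S restricted to w)."""
    return {(v, a) for (v, a) in S if v == w}

def pdisj(*rules):
    """Prioritised disjunction (Definition 8, case 2): at each state,
    keep the moves of the first rule that offers any move there."""
    states = {w for S in rules for (w, _) in S}
    result = set()
    for w in states:
        for S in rules:
            moves = restrict(S, w)
            if moves:
                result |= moves
                break
    return result

def pconj(*rules):
    """Prioritised conjunction (Definition 8, case 3): at each state,
    keep the longest nonempty prefix intersection of the rules."""
    states = {w for S in rules for (w, _) in S}
    result = set()
    for w in states:
        best = restrict(rules[0], w)
        for S in rules[1:]:
            nxt = best & restrict(S, w)
            if not nxt:
                break
            best = nxt
        result |= best
    return result
```

For instance, with S(r 1)={(w,a)} and S(r 2)={(w,a),(w,b)}, `pdisj` of the two returns {(w,a)} and `pconj(S(r2), S(r1))` also returns {(w,a)}, in line with items (1) and (2) of Lemma 2 below.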

The following observations show that the connective \(\triangledown \) is indeed a kind of prioritised disjunction and the connective \(\vartriangle \) is indeed a kind of prioritised conjunction.

Lemma 2

Given a state transition model M, for each player i,

  1. 1.

    \(S^{i}(r_{1})\subseteq S^{i}(r_{1}\triangledown r_{2})\)

  2. 2.

    \(S^{i}(r_{1}\vartriangle r_{2})\subseteq S^{i}(r_{1})\)

  3. 3.

    If S i (r 1 )=∅, then \(S^{i}(r_{1}\ \triangledown \ r_{2}) = S^{i}(r_{2})\).

  4. 4.

    If S i (r 2 )=∅, then \(S^{i}(r_{1}\vartriangle r_{2}) = S^{i}(r_{1})\).

  5. 5.

    \(S^{i}(\varphi _{1}\triangledown (\varphi _{1}\wedge \varphi _{2}))=S^{i}(\varphi _{1})\)

  6. 6.

    \(S^{i}((\varphi _{1}\wedge \varphi _{2}) \triangledown \varphi _{2}) \subseteq S^{i}(\varphi _{2})\)

  7. 7.

    \(S^{i}(\varphi _{1}\vartriangle (\varphi _{1}\vee \varphi _{2}))=S^{i}(\varphi _{1})\)

  8. 8.

    \(S^{i}(\varphi _{2}) \subseteq S^{i}((\varphi _{1}\vee \varphi _{2})\vartriangle \varphi _{2})\)

Proof

The proof of (1)–(4) is straightforward from Definition 8.

To show (5), let \((w,a)\in S^{i}(\varphi _{1}\triangledown (\varphi _{1}\wedge \varphi _{2}))\). If (w,a)∉S i(φ 1), then by Definition 8, (w,a)∈S i(φ 1∧φ 2). By Lemma 1 (1), this implies (w,a) ∈ S i(φ 1), a contradiction. Hence, (w,a)∈S i(φ 1). We have proved \(S^{i}(\varphi _{1}\triangledown (\varphi _{1}\wedge \varphi _{2}))\subseteq S^{i}(\varphi _{1})\). The other direction follows from (1).

To show (6), let \((w,a)\in S^{i}((\varphi _{1}\wedge \varphi _{2}) \triangledown \varphi _{2})\). Suppose that (w,a)∉S i(φ 1∧φ 2). Then by Definition 8, we have (w,a)∈S i(φ 2), as desired. If, on the other hand, (w,a)∈S i(φ 1∧φ 2), then by Lemma 1 (1) we also have (w,a)∈S i(φ 2).

To show (7), assume that (w,a) ∈ S i(φ 1). By Lemma 1 (2), (w,a) ∈ S i(φ 1∨φ 2). Hence, \((w,a)\in S^{i}(\varphi _{1})\upharpoonright _{w} \cap \, S^{i}(\varphi _{1}\vee \varphi _{2})\upharpoonright _{w} \), which implies \((w,a)\in S^{i}(\varphi _{1}\vartriangle (\varphi _{1}\vee \varphi _{2}))\). The other direction follows from (2).

To show (8), assume that (w,a)∈S i(φ 2). By Lemma 1 (2), (w,a)∈S i(φ 1φ 2). It follows that \((w,a)\in S^{i}(\varphi _{1}\vee \varphi _{2})\hspace *{-4pt}\upharpoonright _{w} \cap \, S^{i}(\varphi _{2})\hspace *{-3pt}\upharpoonright _{w}\). By Definition 8, we have \((w,a)\in S^{i}((\varphi _{1}\vee \varphi _{2})\vartriangle \varphi _{2})\). □

Note that in the above lemma, only one direction of inclusion holds in items (6) and (8). For instance, assume that S i(φ 1)={(w,a)} and S i(φ 2)={(w,a),(w,b)}. Then S i(φ 1∧φ 2)={(w,a)}. It turns out that \(S^{i}((\varphi _{1}\wedge \varphi _{2})\triangledown \varphi _{2})=\{(w,a)\}\). This shows that \(S^{i}(\varphi _{2}) \not \subseteq S^{i}((\varphi _{1}\wedge \varphi _{2})\triangledown \varphi _{2})\). Furthermore, suppose that S i(φ 1)={(w,a)} and S i(φ 2)=∅. By Lemma 1, we have (w,a)∈S i(φ 1∨φ 2). Thus \(S^{i}((\varphi _{1}\vee \varphi _{2})\vartriangle \varphi _{2})\not =\emptyset \), which means that the other direction of item (8) does not hold either.

The following lemma shows that the prioritised disjunction can be reduced to binary connectives. Interestingly, the prioritised conjunction does not have such a nice property.

Lemma 3

Given a state transition model M, for each player i,

$$S^{i}(r_{1}\triangledown r_{2}\triangledown\cdots\triangledown r_{m})=S^{i}(r_{1}\triangledown (r_{2}\triangledown\cdots\triangledown r_{m})) =S^{i}((r_{1}\triangledown\cdots\triangledown r_{m-1})\triangledown r_{m})$$

Proof

We prove the first equation; the proof of the second is similar. We consider two cases. If \(S^{i}(r_{1})\upharpoonright _{w}=\emptyset \) then

$$\begin{array}{ll} & (w,a)\in S^{i}(r_{1}\triangledown r_{2}\triangledown\cdots\triangledown r_{m})\\ \text{iff} & (w,a)\in S^{i}(r_{2}\triangledown\cdots\triangledown r_{m}) \\ \text{iff} & (w,a)\in S^{i}(r_{1}\triangledown(r_{2}\triangledown\cdots\triangledown r_{m})) \end{array} $$

On the other hand, if \(S^{i}(r_{1})\upharpoonright_{w}\neq \emptyset \) then

$$\begin{array}{ll} & (w,a)\in S^{i}(r_{1}\triangledown\cdots\triangledown r_{m}) \\ \text{iff} & (w,a)\in S^{i}(r_{1}) \\ \text{iff} & (w,a)\in S^{i}(r_{1}\triangledown(r_{2}\triangledown\cdots\triangledown r_{m})) \end{array} $$

We remark that the prioritised conjunction cannot likewise be reduced to binary connectives. In general neither \(S^{i}(r_{1}\vartriangle r_{2}\vartriangle \cdots \vartriangle r_{m}) = S^{i}((r_{1}\vartriangle {\cdots } \vartriangle r_{m-1})\vartriangle r_{m})\) nor \(S^{i}(r_{1}\vartriangle r_{2}\vartriangle \cdots \vartriangle r_{m}) = S^{i}(r_{1}\vartriangle (r_{2}\vartriangle {\cdots } \vartriangle r_{m}))\) is true. For example, let S i(r 1)={(w,a 1),(w,a 2)}; S i(r 2)=∅; and S i(r 3)={(w,a 1),(w,a 3)}. Then \(S^{i}(r_{1}\vartriangle r_{2} \vartriangle r_{3})=S^{i}(r_{1})=\{(w,a_{1}),(w,a_{2})\}\). But \(S^{i}((r_{1}\vartriangle r_{2})\vartriangle r_{3})=S^{i}(r_{1}\vartriangle r_{3})=\{(w,a_{1})\}\). On the other hand, if S i(r 2)={(w,a 2),(w,a 3)} and S i(r 1) and S i(r 3) are as before, then we get \(S^{i}(r_{1}\vartriangle r_{2}\vartriangle r_{3})=\{(w,a_{2})\}\) while \(S^{i}(r_{1}\vartriangle (r_{2}\vartriangle r_{3}))=\{(w,a_{1}),(w,a_{2})\}\).
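The failure of associativity can be checked mechanically. The following Python snippet (an illustration, not part of the paper’s formalism) replays the two counterexamples above, treating each strategy as a set of (state, action) pairs at a single state:

```python
def pconj(*rules):
    """Prioritised conjunction at a single state (Definition 8, case 3):
    keep the longest nonempty prefix intersection of the rules."""
    best = set(rules[0])
    for S in rules[1:]:
        nxt = best & set(S)
        if not nxt:
            break
        best = nxt
    return best

r1 = {('w', 'a1'), ('w', 'a2')}
r2 = set()
r3 = {('w', 'a1'), ('w', 'a3')}

# Flat version keeps r1, because the prefix intersection with r2 is empty:
assert pconj(r1, r2, r3) == r1
# Left-nested grouping differs: r1 combined with r2 gives r1, which then meets r3:
assert pconj(pconj(r1, r2), r3) == {('w', 'a1')}

# Second counterexample, for right-nested grouping:
r2b = {('w', 'a2'), ('w', 'a3')}
assert pconj(r1, r2b, r3) == {('w', 'a2')}
assert pconj(r1, pconj(r2b, r3)) == {('w', 'a1'), ('w', 'a2')}
```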

As a remedy, we may define a prioritised conjunction by the prioritised disjunction as follows:

$$\varphi\dot{\vartriangle}\psi =_{def} (\varphi\wedge\psi)\triangledown\varphi $$

Then we extend it to the multi-argument version in the following way:

$$\varphi_{1}\dot{\vartriangle}\varphi_{2} \dot{\vartriangle}{\cdots} \dot{\vartriangle}\varphi_{m} =_{def} ((\varphi_{1} \dot{\vartriangle}\varphi_{2})\dot{\vartriangle}\cdots) \dot{\vartriangle}\varphi_{m} $$

Unfortunately the semantics of \(\dot {\vartriangle }\) coincides with \(\vartriangle \) in the binary version but not in the multi-argument version. We will investigate this alternative in our future work.
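In the binary case, the coincidence of \(\dot {\vartriangle }\) with \(\vartriangle \) can be verified exhaustively over a small universe of moves. The sketch below is illustrative Python only; by Lemma 1 (1) we may compute S(φ∧ψ) as the intersection S(φ)∩S(ψ), and we restrict attention to moves at a single state:

```python
from itertools import combinations

def pconj2(A, B):
    """Binary prioritised conjunction at a single state."""
    return (A & B) if (A & B) else A

def pdisj2(A, B):
    """Binary prioritised disjunction at a single state."""
    return A if A else B

def dot_pconj2(A, B):
    """The macro (phi and psi) pdisj phi, using S(phi and psi) = S(phi) & S(psi)
    by Lemma 1 (1)."""
    return pdisj2(A & B, A)

# Exhaustive check over all subsets of a two-move universe:
moves = [('w', 'a'), ('w', 'b')]
subsets = [set(c) for r in range(3) for c in combinations(moves, r)]
assert all(dot_pconj2(A, B) == pconj2(A, B) for A in subsets for B in subsets)
```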

5.3 Complete Strategies and Deterministic Strategies

When we build a software game player, we need to instruct it what to do in each possible situation. In other words, each player should be equipped with a functional strategy. In the following, we demonstrate how to generate a complete and/or deterministic strategy by using our prioritised connectives.

We say that a strategy rule is consistent for player i if it represents a valid strategy; a rule is complete for i if it represents a complete strategy for i; and deterministic for i if it represents a deterministic strategy for i. Finally, a strategy rule is functional for i if it is both complete and deterministic for i. Note that all these concepts are player-specific.

The following theorems show a number of nice properties of the prioritised connectives, which give us a guideline for how to design a strategy with desired properties. The first result deals with consistency of strategy rules.

Theorem 1

Given a state transition model M,

  1. 1.

    \(r_{1} \triangledown {\cdots } \triangledown r_{m}\) is consistent if and only if there is a k (1 ≤k≤m) such that r k is consistent.

  2. 2.

    \(r_{1}\!\vartriangle \! {\cdots }\! \vartriangle \! r_{m}\) is consistent if and only if r 1 is consistent.

Proof

To prove (1), let \(r = r_{1} \triangledown {\cdots } \triangledown r_{m}\). Assume that S i(r) is non-empty for player i. Then there is (w,a)∈S i(r) such that (w,a)∈S i(r k ) for some k, which means that r k is consistent for player i. Conversely, assume that some r k is consistent, and let k be the smallest number such that S i(r k ) is non-empty. This implies that \(S^{i}(r_{1} \triangledown {\cdots } \triangledown r_{m}) = S^{i}(r_{k} \triangledown {\cdots } \triangledown r_{m})\). By the above two lemmas we have \(S^{i}(r_{k})\subseteq S^{i}(r_{1} \triangledown {\cdots } \triangledown r_{m})\), which means that r is consistent.

To prove (2), we also let \(r= r_{1}\vartriangle \cdots \vartriangle r_{m}\). Obviously if r 1 is inconsistent, so is r. Assume that r 1 is consistent. Then there is (w,a)∈S i(r 1). Let k be the biggest number such that \(\bigcap _{j\leq k}(S^{i}(r_{j})\upharpoonright _{w})\not =\emptyset \). Obviously k≥1. By Definition 8 we have \(\bigcap _{j\leq k}(S^{i}(r_{j})\upharpoonright _{w}) \subseteq S^{i}(r)\upharpoonright _{w}\). Thus r is consistent. □
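Theorem 1 (2) can be illustrated with the same extensional reading of strategies: the consistency of a prioritised conjunction stands or falls with its first rule. A minimal Python check (illustrative only, at a single state):

```python
def pconj(*rules):
    """Prioritised conjunction at a single state: longest nonempty
    prefix intersection of the rules (Definition 8, case 3)."""
    best = set(rules[0])
    for S in rules[1:]:
        nxt = best & set(S)
        if not nxt:
            break
        best = nxt
    return best

# An inconsistent first rule makes the whole conjunction inconsistent:
assert pconj(set(), {('w', 'a')}) == set()
# A consistent first rule keeps the conjunction consistent,
# even if a later rule conflicts with it:
assert pconj({('w', 'a')}, {('w', 'b')}) == {('w', 'a')}
```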

The second result shows us how to generate a complete or deterministic strategy.

Theorem 2

Given a state transition model M, for each player i

  1. 1.

    If r 1 or r 2 is complete, so is \(r_{1} \triangledown r_{2}\).

  2. 2.

    If r 1 is complete, so is \(r_{1}\vartriangle \cdots \vartriangle r_{m}\).

  3. 3.

    If r 1 and r 2 are deterministic, so is \(r_{1} \triangledown r_{2}\).

  4. 4.

    If r 1 is deterministic, so is \(r_{1}\vartriangle \cdots \vartriangle r_{m}\).

Proof

(1) and (4) are straightforward from Definition 8.

To show (2), assume that \(r_{1}\vartriangle \cdots \vartriangle r_{m}\) is incomplete. Then there exists a reachable state \(w\in W\cap \mathcal {P}(G)\) such that \(S^{i}(r_{1}\vartriangle \cdots \vartriangle r_{m})\!\!\upharpoonright _{w} = \emptyset \). By Definition 8, this can happen only if \(S^{i}(r_{1})\!\!\upharpoonright _{w} = \emptyset \), which contradicts the assumption that r 1 is complete.

To prove (3), assume that \((w,a),(w,a^{\prime })\in S^{i}(r_{1}\triangledown r_{2})\). If \(S^{i}(r_{1})\!\!\upharpoonright _{w}\not =\emptyset \), we have \((w,a),(w,a^{\prime })\in S^{i}(r_{1})\) according to Definition 8. Since r 1 is deterministic, this implies \(a=a^{\prime }\). On the other hand, if \(S^{i}(r_{1})\!\!\upharpoonright _{w} = \emptyset \), then we have \((w,a),(w,a^{\prime })\in S^{i}(r_{2})\upharpoonright_{w}\), which also implies \(a=a^{\prime }\) since r 2 is deterministic. □

Statement (1) in the above theorem provides us with an easy way of generating a complete strategy: create a trivial complete strategy first and then combine it with other strategies using the prioritised disjunction. Note that creating a trivial complete strategy is rather easy: let the agent do anything available.Footnote 5 Statement (2) tells us that once we have a complete strategy, we can further refine it towards more specific properties (say, determinism and thus functionality) using the prioritised conjunction, without losing completeness. Statement (3) shows us another feasible way of generating a functional strategy: instead of creating a complete strategy and then refining it into a deterministic one, we can devise a set of specific deterministic strategies first and then combine them with the prioritised disjunction to obtain a complete strategy. Example 4 demonstrates how these approaches can be applied to the CrossDot game. Before doing that, let us show another nice property of the prioritised connectives.

Theorem 3

Given a state transition model M, for each player i

  1. 1.

    If r 1 is complete, then \(S^{i}(r_{1} \triangledown {\cdots } \triangledown r_{m}) = S^{i}(r_{1})\).

  2. 2.

    If r 1 is deterministic, then \(S^{i}(r_{1}\!\vartriangle\!\cdots\!\vartriangle\! r_{m}) = S^{i}(r_{1})\).

Proof

To prove (1), we only have to show \( S^{i}(r_{1} \triangledown {\cdots } \triangledown r_{m})\subseteq S^{i}(r_{1})\). Assume that \((w,a)\in S^{i}(r_{1} \triangledown {\cdots } \triangledown r_{m})\). Because r 1 is complete, \(S^{i}(r_{1})\!\!\upharpoonright _{w}\not =\emptyset \). By Definition 8 we have (w,a)∈S i(r 1).

To show (2), we need to show that \(S^{i}(r_{1})\subseteq S^{i}(r_{1}\vartriangle\cdots\vartriangle r_{m})\). If (w,a)∈S i(r 1), then \(S^{i}(r_{1})\!\!\!\!\upharpoonright _{w}=\{(w,a)\}\) since r 1 is deterministic. By Definition 8 it follows that \((w,a)\in S^{i}(r_{1}\vartriangle \cdots \vartriangle r_{m})\). □

Combining the two statements of this theorem, we can say that if r 1 is functional, then \(S^{i}(r_{1} \triangledown {\cdots } \triangledown r_{m})=S^{i}(r_{1}\vartriangle \cdots \vartriangle r_{m})\) =S i(r 1). This means that once a strategy is functional, neither extending nor refining it with strategies of lower priority has any effect. In fact, our goal in introducing the prioritised connectives is to facilitate the design of functional strategies; once a functional strategy has been obtained, these connectives automatically stop having any effect.

Example 4

Consider the CrossDot game scenario in Example 1. We define a few strategy rules for player i as follows:

  • Fill a box next to a box that contains player i’s mark:

    $$ \begin{array}{ll} fill\_next^{i} = & \bigvee_{1< j\leq m}\left(\neg {p^{1}_{j}} \wedge\neg {p^{2}_{j}}\wedge p^{i}_{j-1} \wedge does\left({a^{i}_{j}}\right)\right) \ \vee\\& \bigvee_{1\leq j<m} \left(\neg {p^{1}_{j}}\wedge \neg {p^{2}_{j}} \wedge p^{i}_{j+1} \wedge does\left({a^{i}_{j}}\right)\right) \end{array} $$
    (14)
  • Fill an isolated box (i.e., whose immediate neighbours are empty):

    $$\begin{array}{@{}rcl@{}} &&fill\_isolated^{i}\, = \bigvee_{1< j< m}\left( \left(\neg p^{1}_{j-1}\wedge \neg p^{2}_{j-1}\right)\wedge \left(\neg p^{1}_{j+1}\wedge \neg p^{2}_{j+1}\right)\right.\\ &&{\kern9pc}\left.\wedge\,\left(\neg {p^{1}_{j}}\wedge \neg {p^{2}_{j}}\right) \wedge does\left({a^{i}_{j}}\right)\right) \end{array} $$
    (15)
  • Fill any empty box:

    $$ \begin{array}{l} fill\_any^{i}\,=\,\bigvee_{1\leq j\leq m}\left(\neg {p^{1}_{j}}\wedge \neg {p^{2}_{j}}\wedge does\left({a^{i}_{j}}\right)\right) \end{array} $$
    (16)
  • Try fill_next i first. If this fails, try fill_isolated i, then try fill_any i:

    $$ combined^{i} \,=\, fill\_next^{i}\ \triangledown\ fill\_isolated^{i}\ \triangledown\ fill\_any^{i} $$
    (17)
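As an illustration (not part of the paper’s formalism), the candidate-move content of rules (14)–(17) can be sketched in Python. A board is represented here as a tuple whose entries are None (empty box), 1 or 2 (a player’s mark); note that boxes are 0-indexed in this sketch, unlike the 1-indexed formulas above:

```python
def empty(board, j):
    """True if box j (0-indexed) is unmarked."""
    return board[j] is None

def fill_next(board, i):
    """Empty boxes adjacent to one of player i's own marks (cf. rule (14))."""
    m = len(board)
    return {j for j in range(m) if empty(board, j)
            and ((j > 0 and board[j - 1] == i)
                 or (j < m - 1 and board[j + 1] == i))}

def fill_isolated(board, i):
    """Empty interior boxes whose both neighbours are empty (cf. rule (15))."""
    return {j for j in range(1, len(board) - 1)
            if empty(board, j) and empty(board, j - 1) and empty(board, j + 1)}

def fill_any(board, i):
    """Any empty box (cf. rule (16))."""
    return {j for j in range(len(board)) if empty(board, j)}

def combined(board, i):
    """Prioritised disjunction (cf. rule (17)): the first rule that
    offers at least one move determines the options."""
    for rule in (fill_next, fill_isolated, fill_any):
        moves = rule(board, i)
        if moves:
            return moves
    return set()
```

On the empty four-box board, fill_next offers nothing, so combined falls through to fill_isolated, which offers the two interior boxes.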

Observation 3

Let M=(G,v) be the state transition model defined in Observation 1. The strategy rule combined i is complete for player i.

Proof

We first prove that the strategy fill_any i is complete. Assume an arbitrary reachable state \(w\in W\cap \mathcal {P}(G)\) such that l i(w)≠∅ (recall formula (3) for the definition of l i and the definition of completeness of a strategy in Section 3.1). Let \({a^{i}_{j}}\in l^{i}(w)\). It follows that \(M\models _{(w,{a^{i}_{j}})}legal({a^{i}_{j}})\). By (8), we know that \(M\models _{(w,{a^{i}_{j}})}\neg {p^{1}_{j}}\wedge \neg {p^{2}_{j}}\), hence \(M\models _{(w,{a^{i}_{j}})}\neg {p^{1}_{j}}\wedge \neg {p^{2}_{j}}\wedge does({a^{i}_{j}})\), which implies \((w,{a^{i}_{j}})\in S^{i}(fill\_any^{i})\). Therefore fill_any i is a complete strategy rule for player i. According to Theorem 2 (1), we know that the strategy rule combined i is also complete. □

A complete strategy for a player gives the player a feasible option (if any) for any given situation. However, a software game player could still be unsure about what to do under a complete strategy if there is more than one option in the same situation. The prioritised conjunction provides us with a way to narrow down multiple options.

Example 5

For our running example game, let \({c^{i}_{t}}=\bigvee \limits _{j= 1}^{t} does({a^{i}_{j}})\), where 1≤t≤m (with m being the overall number of boxes as usual); then \({c^{i}_{t}}\) represents the strategy of player i to place an object in any box between 1 and a given number t. Let

$$ thoughtful^{i} = combined^{i} \vartriangle {c^{i}_{m}} \vartriangle{\cdots} \vartriangle {c^{i}_{1}} $$
(18)

where c o m b i n e d i is the strategy rule defined in Example 4 (cf. (17)).

It is not hard to see that if combined i offers more than one box to fill, thoughtful i will choose the leftmost one.
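The leftmost tie-break effect of the chain c m △ ⋯ △ c 1 can be sketched as follows (illustrative Python, with the boxes offered by combined given as a set of 1-indexed box numbers):

```python
def leftmost(options, m):
    """Refine a set of offered box indices by the chain c_m, ..., c_1:
    the longest nonempty prefix intersection keeps only the smallest
    (leftmost) index (cf. Definition 8, case 3)."""
    best = set(options)
    for t in range(m, 0, -1):
        nxt = {j for j in best if j <= t}
        if not nxt:
            break
        best = nxt
    return best
```

Starting from all offered boxes, each further c t with smaller t shrinks the prefix intersection until only the leftmost offered box survives.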

Observation 4

Let M=(G,v) be the state transition model defined in Observation 1. The strategy rule thoughtful i is functional for player i.

Proof

By Theorem 2 (2), thoughtful i is complete because combined i is complete. We now prove that thoughtful i is deterministic. Assume that

$$\left(w,a^{i}_{j_{1}}\right), \left(w,a^{i}_{j_{2}}\right)\in S^{i}(thoughtful^{i}) $$

such that, without loss of generality, j 1≤j 2. Let t be the smallest number (between 1 and m) such that \(H=S^{i}(combined^{i})\upharpoonright _{w}\cap \bigcap \limits _{j=t}^{m}S^{i}\left ({c^{i}_{j}}\right )\upharpoonright _{w}\) is not empty. Such a t exists because \(S^{i}(combined^{i})\upharpoonright _{w}\cap S^{i}\left ({c^{i}_{m}}\right )\upharpoonright _{w}\not = \emptyset \). By Definition 8, we have \(\left (w,a^{i}_{j_{1}}\right )\in H\) and \(\left (w,a^{i}_{j_{2}}\right )\in H\), hence \(\left (w,a^{i}_{j_{1}}\right ), \left (w,a^{i}_{j_{2}}\right )\in S^{i}\left ({c_{t}^{i}}\right )\upharpoonright _{w}\). Hence j 1≤j 2≤t. If t=1, then \(a^{i}_{j_{1}}=a^{i}_{j_{2}}={a^{i}_{1}}\). Otherwise, i.e. if t>1, then \(S^{i}(combined^{i})\upharpoonright _{w}\cap \bigcap \limits _{j=t-1}^{m}S^{i}\left ({c^{i}_{j}}\right )\upharpoonright _{w} = \emptyset \), which implies that neither \(\left (w,a^{i}_{j_{1}}\right )\) nor \(\left (w,a^{i}_{j_{2}}\right )\) belongs to \(S^{i}\left (c_{t-1}^{i}\right )\upharpoonright _{w}\). Note that \(S^{i}\left (c_{t-1}^{i}\right )\upharpoonright _{w} = \left \{\left (w,{a^{i}_{j}}\right )\in \mathcal {P}(G):j\leq t-1\right \}\). Thus t≤j 1≤j 2. Combining the two inequalities, we have j 1=j 2=t. We have proved that thoughtful i is deterministic, thus it is functional. □

To summarise, the prioritised connectives provide a natural way of refining a strategy. If a strategy is too restricted in that it can only be applied to few states, it can be extended using prioritised disjunction. If, on the other hand, a strategy is too generic in that it leaves too many options, it can be strengthened using prioritised conjunction. Once a strategy is functional, further extension or refinement using the prioritised disjunction or conjunction has no effect, as long as the existing strategy is given the highest priority.

6 Reasoning About Strategies

Knowing how to write strategies for a game-playing agent, we now consider the question of whether a strategy meets its goal: for instance, whether it is guaranteed to lead to a winning state or to a desirable state, given the strategies that the other players use. In this section, we will demonstrate, by means of our running example, how to reason about strategies within our framework. We also demonstrate how to design a strategy using the prioritised connectives to meet desired properties.

6.1 Compliance with a Strategy

In order to verify whether a strategy can bring about an expected result for a player in a game, we assume that the player complies with the strategy all the way through the game, and we observe the outcome of the game.

Let M=(G,v) be a state transition model of a game G and S a strategy of the game for player i. We say that M (or G) complies with S by player i if for each reachable move (w,a) of player i in G, (w,a)∈S. In other words, player i follows the strategy S whenever he makes a move.Footnote 6

The following observation shows that for any CrossDot game with k=2 and m>2, the player who takes the first turn wins as long as he plays the strategy thoughtful i all the way through the game.

Observation 5

Let M be a state transition model for the CrossDot game with k=2 and m>2. Assume that player i takes the first turn. If M complies with the strategy S i (thoughtful i ) by player i, then

$$M\models terminal\rightarrow wins(i) $$

In other words, player i wins as long as he takes the first turn and follows the strategy rule thoughtful i.

Proof

Without loss of generality, we assume that i=1. We have to show that \(M,\delta \models terminal\rightarrow wins(1)\) for any reachable path δ. Since the definitions of terminal and wins do not contain does, legal and ○ (cf. formulas (6) & (7)), we only need to show that \(M,w_{e}\models wins(1)\) for any reachable terminal state w e (which is a special case of a reachable path). To this end, we assume a complete path \(\delta = w_{0}\overset {a_{0}}\rightarrow w_{1} \overset {a_{1}}\rightarrow {\cdots } \overset {a_{e-1}}\rightarrow w_{e}\), where \(w_{0}=\bar {w}\) and \(M,w_{e}\models terminal\). Since player 1 has the first turn and plays with strategy thoughtful 1, the action he takes in the initial state must be \({a^{1}_{2}}\), i.e., \(a_{0}={a^{1}_{2}}\). If player 2 responds with action \({a^{2}_{1}}\), i.e., \(a_{1}={a^{2}_{1}}\), player 1 will then take action \({a^{1}_{3}}\) and win the game at state w 3. If player 2 responds with any other action, player 1 will take \({a^{1}_{1}}\) and also win the game. In either case, δ ends up with w e =w 3, in which player 1 wins. Therefore \(M,w_{e}\models wins(1)\). We conclude \(M\models terminal\rightarrow wins(1)\). □

The strategy thoughtful seems like a “smart” strategy that can guarantee a winning state for the player who takes the first turn when k=2. However, it is less mighty when taken by the second player, in which case the strategy cannot even compete with the following “trivial” strategy

$$ fill\_leftmost^{i}={c^{i}_{m}} \vartriangle{\cdots} \vartriangle {c^{i}_{1}} $$
(19)

where \({c^{i}_{j}}\) was defined in Example 5.

Observation 6

Let M be a state transition model for a CrossDot game with k=2 and m>2. If M complies with S 1 (fill_leftmost 1 ) by Player 1 and with S 2 (thoughtful 2 ) by Player 2, then

$$M\models terminal \rightarrow wins(1) $$

Proof

It is not hard to prove that fill_leftmost 1 is functional. Since thoughtful 2 is also functional, there is only one complete path in the game:

$$\begin{array}{@{}rcl@{}} (1,0, \Box, \Box, \Box, \Box, \cdots, \Box)\overset{{a^{1}_{1}}}{\rightarrow}(0,1, \boxtimes, \Box, \Box, \Box,\cdots, \Box)\\ \overset{{a^{2}_{3}}}{\rightarrow}(1,0, \boxtimes, \Box, \boxdot, \Box,\cdots, \Box)\stackrel{{a^{1}_{2}}}{\rightarrow}(0,1, \boxtimes, \boxtimes, \boxdot, \Box,\cdots, \Box) \end{array} $$

which implies that player 1 takes the leftmost box, player 2 then fills an isolated box, and finally player 1 fills the second box and wins. We can easily verify that \(terminal \rightarrow wins(1)\) is valid on every segment of the path in M. □

The failure of thoughtful is not because it is not “smart” enough but because the thoughtful 2 strategy requires filling an isolated box at the first move, which provides the first player with a chance to win.

6.2 Reasoning About Other Players’ Strategies

The examples of strategies we have shown up to now are all from a single player’s viewpoint, which is obviously not sufficient. We should also reason about other players’ strategies.

From Observation 6 we learnt that a player should check for existing threats before applying any “aggressive” strategy, like thoughtful. The following formula defines a defence strategy, which says that if my opponent can win by filling box j at the next step, then I should mark that box now to prevent an immediate loss:

$$ \begin{array}{l} defence^{i} = \bigwedge\limits^{m}_{j=1}\left(\bigcirc \left(does\left(a^{-i}_{j}\right)\wedge \bigcirc wins(-i)\right)\rightarrow does\left({a^{i}_{j}}\right)\right) \end{array} $$
(20)

where −i stands for the opponent of i. Note that defence i is neither deterministic nor complete. To create a functional strategy with defence, we let

$$ cautious^{i}=\left(defence^{i}\!\vartriangle\! {c^{i}_{m}} \vartriangle{\cdots} \vartriangle {c^{i}_{1}}\right)\ \triangledown\ thoughtful^{i} $$
(21)

Obviously, \(defence^{i}\!\vartriangle \! {c^{i}_{m}} \vartriangle {\cdots } \vartriangle {c^{i}_{1}}\) is deterministic. Since thoughtful i is functional, cautious i is functional as well.

The following observation shows that if the second player plays cautious 2, which means protecting himself before attacking his opponent, then player 1 cannot win with the fill_leftmost strategy (cf. equation (19)).

Observation 7

Let M be a state transition model for the CrossDot game with k=2 and m>2. If M complies with S 1 (fill_leftmost 1 ) by Player 1 and with S 2 (cautious 2 ) by Player 2, then

$$M\models terminal \rightarrow \neg wins(1) $$

Proof

Since both players’ strategies are functional, there is only one complete path in the game. Assume that the complete path is \(\delta =w_{0}\overset {a_{0}}\rightarrow w_{1} {\cdots } \overset {a_{e-1}}\rightarrow w_{e}\). The first action taken by player 1 must be \(a_{0}={a^{1}_{1}}\). The first two states in the path are then

$$w_{0}=(1,0, \Box, \Box, \Box, \Box, \cdots, \Box)~\text{and}~w_{1}=(0,1, \boxtimes, \Box, \Box, \Box,\cdots, \Box) $$

To know which action player 2 chooses at state w 1, we calculate \(S^{2}(defence^{2})\!\!\upharpoonright _{w_{1}}\). In fact, we will show that \(S^{2}(defence^{2})\hspace *{-4.5pt}\upharpoonright _{w_{1}}=\left \{\left (w_{1},{a^{2}_{2}}\right )\right \}\). Assume any reachable path \(\delta ^{\prime }\in (w_{1},{a^{2}_{2}})^{\leadsto }\). We verify

$$M, \delta^{\prime}\models\bigwedge^{m}_{j=1}\ \left(\bigcirc \left(does\left({a^{1}_{j}}\right)\wedge \bigcirc wins(1)\right)\rightarrow does\left({a^{2}_{j}}\right)\right) $$

This is obviously true: for j=2, \(does({a^{2}_{2}})\) is satisfied, and for any j≠2, \(\bigcirc \left (does\left ({a^{1}_{j}}\right )\wedge \bigcirc wins(1)\right )\) is false.

Next we show that for any \(a_{1}\not ={a^{2}_{2}}\), \((w_{1},a_{1})\not \in S^{2}(defence^{2})\upharpoonright _{w_{1}}\). Let \(\delta ^{\prime } = w_{1} \overset {a_{1}}\rightarrow w_{2}^{\prime }\overset {{a^{1}_{2}}}\rightarrow w_{3}^{\prime }\). It is easy to verify that

$$M, \delta^{\prime}\models \bigcirc \left(does\left({a^{1}_{2}}\right)\wedge \bigcirc wins(1)\right)\wedge \neg does\left({a^{2}_{2}}\right) $$

It follows that

$$M, \delta^{\prime}\not\models\bigwedge^{m}_{j=1}\ \left(\bigcirc \left(does\left({a^{1}_{j}}\right)\wedge \bigcirc wins(1)\right)\rightarrow does\left({a^{2}_{j}}\right)\right) $$

Note that \(\delta ^{\prime }\in (w_{1},a_{1})^{\leadsto }\). Therefore we have

$$M\not\models_{(w_{1},a_{1})}\bigwedge^{m}_{j=1}\ \left(\bigcirc \left(does\left({a^{1}_{j}}\right)\wedge \bigcirc wins(1)\right)\rightarrow does\left({a^{2}_{j}}\right)\right) $$

which implies \((w_{1},a_{1})\not \in S^{2}(defence^{2})\!\!\!\!\!\!\upharpoonright _{w_{1}}\). This completes the proof that \(S^{2}(defence^{2})\upharpoonright _{w_{1}}=\left \{\left (w_{1},{a^{2}_{2}}\right )\right \}\). Hence, for the complete path δ we find that \(a_{1}={a^{2}_{2}}\) and \(w_{2}=(1,0, \boxtimes ,\boxdot , \Box , \Box ,\cdots , \Box )\). The game continues in the same way until all boxes are filled without a winner. □

The instances of the CrossDot game we considered above are limited to the simple case where k=2. In the following, we propose a solution to the game in the general setting where k can be any number larger than 2 (and at most m). Consider the following strategies:

  • Fill a box next to an opponent’s box:

    $$\begin{array}{ll} fill\_o\_next^{i}\,= &\left(\bigvee_{1\leq j<m} \left(\neg {p^{1}_{j}}\wedge \neg {p^{2}_{j}} \wedge p^{-i}_{j+1} \wedge does\left({a^{i}_{j}}\right)\right)\right)\\ & \ \triangledown \left(\bigvee_{1< j\leq m}\left(\neg {p^{1}_{j}} \wedge\neg {p^{2}_{j}}\wedge p^{-i}_{j-1} \wedge does\left({a^{i}_{j}}\right)\right)\right) \end{array} $$

    Note that priority is given to the left empty box if one exists.

  • Passive defence:

    $$ passive\_defence^{i} = ((defence^{i}\vartriangle fill\_o\_next^{i})\triangledown fill\_any^{i}) \vartriangle {c^{i}_{m}}\vartriangle\cdots\vartriangle {c^{i}_{1}} $$
    (22)

    Note that we use prioritised conjunction, instead of disjunction, to combine \(defence\) and \(fill\_o\_next\). This is because \(defence\) gives arbitrary actions whenever there is no immediate loss for player i, in which case \(fill\_o\_next\) takes over.
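The effect of the two connectives on move recommendations can be sketched operationally. In the following illustration (our own; the function names and set representation are not part of the formalism), each rule is assumed to have been evaluated already to its set of recommended moves in the current state:

```python
def pri_disj(recommendations):
    """Prioritised disjunction r1 ▽ r2 ▽ …: follow the first rule
    that recommends any move at all."""
    for moves in recommendations:
        if moves:
            return set(moves)
    return set()

def pri_conj(recommendations):
    """Prioritised conjunction r1 △ r2 △ …: refine the first rule by
    each further rule as long as some move survives the intersection."""
    result = set(recommendations[0])
    for moves in recommendations[1:]:
        refined = result & set(moves)
        if not refined:
            break
        result = refined
    return result
```

Applied to the move sets of \(defence^{i}\) and \(fill\_o\_next^{i}\), pri_conj returns the defensive moves that also fill a box next to an opponent's box whenever such moves exist, and all defensive moves otherwise; this is exactly why the conjunction, not the disjunction, is the right combination here.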

The following observation shows that such a passive defence strategy guarantees no loss for a player, no matter whether it is taken by the first or the second player and no matter what strategy the other player uses, for any instance of the CrossDot game with k>2 and m>2.

Observation 8

Let M be a state transition model for a CrossDot game with k>2 and m≥k. If M complies with \(S^{i}(passive\_defence^{i})\) by player i, then

$$M\models terminal\rightarrow \neg win(-i) $$

where −i represents the opponent player of i. In other words, a player never loses as long as he plays the passive defence strategy.

Proof

Obviously we only have to consider the case k=3: if a strategy effectively blocks the opponent from owning three consecutive boxes, it also blocks the opponent from forming any longer line of consecutive boxes. Also, if we can prove the strategy effective for player 2, this suffices to show that it is also effective for player 1, because he can make his first move at random and then copy player 2’s no-loss strategy (strategy stealing). Altogether this allows us to restrict the proof to k=3 and i=2.

It is easy to show that \(passive\_defence^{2}\) is functional. To show \(M\models terminal\rightarrow \neg win(1)\), we only have to verify that for any complete path \(\delta =w_{0}\overset {a_{0}}\rightarrow w_{1} \overset {a_{1}}\rightarrow \cdots \overset {a_{e-1}}\rightarrow w_{e}\), we have \(M,w_{e}\models \neg win(1)\). Recall that \(w_{e}\) is a special reachable path that contains only the terminal state \(w_{e}\). If \(w_{e}\) is a winning state for player 2 or a tie state, we have \(M,w_{e}\models \neg win(1)\).

Suppose by contradiction that \(w_{e}\) is a winning state of player 1. Then e is an odd number and there are at least three consecutive crosses in some state along the complete path δ. In other words, there must be a cross next to two other crosses. However, we will show by induction on the length of the path that this can never happen. More precisely, we claim that in each state \(w_{2j}\) (0≤j≤(e−1)/2), each box filled with a cross must be either adjacent to a box with a dot or at an end (left or right) of the line, while the box on its left, if any, must not be empty. Note that if the claim is true in \(w_{e-1}\), then player 1 cannot win in state \(w_{e}\) no matter which action he chooses in state \(w_{e-1}\).

Obviously the claim holds when j=0 because all boxes are empty in the initial state \(w_{0}\). Now assume that the claim holds in state \(w_{2(j-1)}\), where j≤(e−1)/2. We show that the claim also holds at \(w_{2j}\).

Suppose that the action \(a_{2(j-1)}\) that player 1 took in state \(w_{2(j-1)}\) was \({a^{1}_{v}}\), i.e., \(a_{2(j-1)}={a^{1}_{v}}\). Obviously this action can affect satisfaction of the claim only for the box itself and its two immediate neighbours, i.e., boxes v−1, v and v+1 (if they exist). Therefore we only consider the effect of player 1’s action \({a^{1}_{v}}\) on these three possible boxes.

If box v is not at the right end, then by the induction assumption box v+1 cannot contain a cross in state \(w_{2(j-1)}\); otherwise box v would not be empty and \({a^{1}_{v}}\) would not be doable. Therefore box v+1 (if it exists) satisfies the claimed conditions in state \(w_{2j}\).

We then consider boxes v−1 and v, distinguishing three cases:

  i) If box v−1 (if it exists) has been filled with a dot, then both boxes v−1 and v satisfy the conditions of the claim, which remain true in state \(w_{2j}\).

  ii) If box v−1 was already filled with a cross before state \(w_{2(j-1)}\), then by the induction assumption box v−1 is either at the left end or next to a box with a dot on its left. In both situations box v−1 satisfies the claimed conditions, which remain true in state \(w_{2j}\). After player 1 fills box v with a cross in state \(w_{2(j-1)}\), player 2 must respond with action \(a_{2j-1}=a^{2}_{v+1}\), by applying \(defence^{2}\) in state \(w_{2j-1}\), to prevent an immediate loss, unless box v is at the right end or box v+1 has already been occupied by a dot. In all cases, box v satisfies the claimed conditions in state \(w_{2j}\). Note that the claim guarantees that \(defence^{2}\) is not applicable to any other boxes.

  iii) If box v−1 was empty in state \(w_{2(j-1)}\), then by the construction of \(passive\_defence^{i}\), player 2 must respond with action \(a^{2}_{v-1}\), either because v−1 is an immediate-loss position or by applying strategy \(fill\_o\_next^{2}\) after player 1 fills box v. After v−1 is filled with a dot, both boxes v−1 and v satisfy the claim in state \(w_{2j}\). Note that \(defence^{2}\vartriangle fill\_o\_next^{2}\) is not applicable to any box other than v−1 (recall that \(fill\_o\_next^{2}\) gives priority to the left empty box).

We have verified that the claim holds in each state \(w_{2j}\) (0≤j≤(e−1)/2), including state \(w_{e-1}\), which implies that player 2 does not lose in state \(w_{e}\). □

Note that the defence strategy (cf. equation (20)) plays a crucial role in the solution. Assume that a game gets into the following situation after a few moves:

$$\boxdot,\boxtimes,\Box, \Box, \boxdot,\boxtimes,\boxtimes, \Box, \cdots $$

and it is player 2’s turn. If player 2 did not have the defence strategy but simply used \(fill\_o\_next^{2}\) to block the opponent, he could take action \({a^{2}_{3}}\) instead of \({a^{2}_{8}}\), which would give player 1 a chance to win.
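The threat detection performed by \(defence^{2}\) in this situation can be sketched as follows (a toy illustration under our own board encoding, with k=3; the function name is ours):

```python
K = 3                      # length of a winning line
BOARD = list("dx..dxx.")   # the situation above: d = dot, x = cross, . = empty

def immediate_threats(board, piece):
    """Return the 1-based indices of the empty boxes where `piece`
    would complete K in a row on the very next move."""
    threats = []
    for j, cell in enumerate(board):
        if cell != ".":
            continue
        trial = board[:]
        trial[j] = piece           # tentatively place the piece
        if piece * K in "".join(trial):
            threats.append(j + 1)
    return threats

print(immediate_threats(BOARD, "x"))   # [8]: only box 8 must be blocked
```

Box 3 is also empty and adjacent to a cross, so \(fill\_o\_next^{2}\) alone might choose it; the higher-priority defence rule forces the block at box 8.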

The strategy \(passive\_defence\) effectively prevents a player from losing, but it can hardly win because it does not encode any winning strategy. The reader is invited to extend the strategy with more aggressive rules so that, if the other player is “not that smart”, it has a chance to win.

7 Computing With Strategies

We now turn to the question of how to actually compute with strategy rules. Generally speaking, the conceptual simplicity of our language and the fact that it is not tied to a specific action formalism should make it easy to incorporate knowledge of strategies into various methods for the design and analysis of intelligent agents. To illustrate this, we will adopt here a very general calculus for reasoning about actions, the Situation Calculus (see, e.g., [16]), and show how our strategy representation can be easily integrated. Reasoning problems about strategies can take different forms, and we will specifically consider two of them. First, we will show how the calculus can be used to infer the possible outcomes of a game given information about the strategies of all players. Second, we will illustrate how players can reason about the strategies of opponents to infer their best course of action. We will also show how our variant of the Situation Calculus forms the basis for an encoding of game rules and a restricted class of strategies as Answer Set Programs.

7.1 Example: Situation Calculus

The Situation Calculus is a formalism for reasoning about actions and change that is based on classical predicate logic with a few pre-defined language elements:

  • \(s_{0}\), a constant denoting the initial situation; and Do(α,σ), a constructor denoting the situation resulting from doing action α in situation σ;

  • Holds(φ,σ), a predicate denoting that fluent φ (i.e., an atomic state feature) is true in situation σ;

  • Poss(α,σ), a predicate denoting that action α is possible in situation σ.

For our purpose, we extend the base language of the Situation Calculus by the two game-specific predicates Wins(ι,σ) and Terminal(σ), meaning, respectively, that the game is won by player ι in situation σ and that σ is a terminal game position. With this, any axiom in our game specification language can easily be rewritten for the Situation Calculus, similar to an existing mapping of the Game Description Language (GDL) into this calculus [18]. To this end, let A and S be two distinct variables (standing for any action and situation, respectively); then a formula φ in our language \(\mathcal {L}\) can be translated into a Situation Calculus axiom \(\varphi ^{SC}[A,S]\) by the following inductive definition:

$$\begin{array}{@{}rcl@{}}p^{SC}[A,S] & := & \text{H\textsc{olds}}(p,S) \\ (\neg\varphi)^{SC}[A,S] & := & \neg\varphi^{SC}[A,S] \\ (\varphi\wedge\psi)^{SC}[A,S] & := & \varphi^{SC}[A,S]\wedge\psi^{SC}[A,S] \\ does(a)^{SC}[A,S] & := & A=a \\ legal(a)^{SC}[A,S] & := & \text{P\textsc{oss}}(a,S) \\ wins(i)^{SC}[A,S] & := & \text{W\textsc{ins}}(i,S) \\ (\bigcirc\varphi)^{SC}[A,S] & := & (\forall A^{\prime})\,\varphi^{SC}[{A^{\prime}},{\text{Do}(A,S)}] \\ init^{SC}[A,S] & := & S=s_{0} \\ terminal^{SC}[A,S] & := & \text{T\textsc{erminal}}(S) \end{array} $$
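To make the rewriting concrete, the following sketch mechanises the inductive definition for formulas represented as nested tuples (the representation and the function name are our own; fresh variables for nested ○-operators are generated with a depth counter):

```python
def to_sc(phi, A="A", S="S", depth=0):
    """Translate a formula of L, given as nested tuples, into a
    Situation Calculus formula string per the definition above."""
    op = phi[0]
    if op == "atom":     return f"Holds({phi[1]},{S})"
    if op == "not":      return f"~{to_sc(phi[1], A, S, depth)}"
    if op == "and":      return (f"({to_sc(phi[1], A, S, depth)} & "
                                 f"{to_sc(phi[2], A, S, depth)})")
    if op == "does":     return f"{A}={phi[1]}"
    if op == "legal":    return f"Poss({phi[1]},{S})"
    if op == "wins":     return f"Wins({phi[1]},{S})"
    if op == "next":     # (∀A') φ^SC[A', Do(A,S)], A' fresh per nesting level
        fresh = f"A{depth + 1}"
        return (f"(all {fresh})"
                f"({to_sc(phi[1], fresh, f'Do({A},{S})', depth + 1)})")
    if op == "init":     return f"{S}=s0"
    if op == "terminal": return f"Terminal({S})"
    raise ValueError(f"unknown operator: {op}")
```

For instance, to_sc(("next", ("wins", "1"))) produces the universally quantified lookahead formula prescribed by the ○-case of the definition.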

Example 1 (continued)

Recall the specification of the game in Example 1. Applying the construction from above to formulas (4)–(11) yields the following, after some slight syntactic simplifications.

$$\neg \text{H\textsc{olds}}\left({p^{i}_{j}},s_{0}\right) $$
$$ \text{H\textsc{olds}}(turn(1),s_{0})\wedge\neg \text{H\textsc{olds}}(turn(2),s_{0}) $$
$$\text{W\textsc{ins}}(i,S)\equiv \bigvee_{j=1}^{m-k+1}\ \bigwedge_{l=j}^{j+k-1} \text{H\textsc{olds}}({p^{i}_{l}},S) $$
$$\begin{array}{@{}rcl@{}}\textsc{Terminal}(S)&\equiv&\!\! \text{W\textsc{ins}}(1,S)\vee \text{W\textsc{ins}}(2,S)\,\vee\\ &&\left({\bigwedge}_{1\leq j\leq m}\left( \text{H\textsc{olds}}\left({p^{1}_{j}},S\right)\vee \text{H\textsc{olds}}\left({p^{2}_{j}},S\right)\right)\right) \end{array} $$
$$\begin{array}{@{}rcl@{}} &&\left(\neg \left( \text{H\textsc{olds}}\left({p^{1}_{j}},S\right)\vee \text{H\textsc{olds}}\left({p^{2}_{j}},S\right)\right)\right.\,\wedge\\ &&\left. \text{H\textsc{olds}}(turn(i),S)\wedge\neg \text{T\textsc{erminal}}(S)\right)\ \equiv\ \text{P\textsc{oss}}\left({a^{i}_{j}},S\right) \end{array} $$
$$ \text{H\textsc{olds}}\left({p^{i}_{j}},S\right)\vee A={a^{i}_{j}}\equiv \text{H\textsc{olds}}({p^{i}_{j}}, \text{D\textsc{o}}(A,S)) $$
$$\begin{array}{@{}rcl@{}} \text{H\textsc{olds}}(turn(1),S) \rightarrow &\neg& \text{H\textsc{olds}}(turn(1),\text{Do}(A,S))\, \wedge\\ && \text{H\textsc{olds}}(turn(2), \text{D\textsc{o}}(A,S)) \end{array} $$
$$\begin{array}{@{}rcl@{}} \text{H\textsc{olds}}(turn(2),S) \rightarrow & \neg \text{H\textsc{olds}}(turn(2), \text{D\textsc{o}}(A,S))\,\wedge\\ & \text{H\textsc{olds}}(turn(1), \text{D\textsc{o}}(A,S)) \end{array} $$

7.2 Adding Strategy Rules

As a general logic-based formalism, the Situation Calculus allows for a straightforward encoding of strategy rules with the help of a direct encoding of their interpretation according to (12) and Definition 8, respectively. Based on the rewriting rules from above, the Situation Calculus encoding r[A,S] for a strategy rule r over language \(\mathcal {L}\) is inductively obtained as follows, where A and S are variables.

$$\begin{array}{@{}l@{}}
\varphi[A,S]\ :=\ \text{P\textsc{oss}}(A,S)\wedge\varphi^{SC}[A,S]\\[4pt]
(r_{1}\triangledown r_{2}\triangledown\ldots\triangledown r_{n})[A,S]\ :=\\
\quad r_{1}[A,S]\\
\quad\vee\ r_{2}[A,S]\wedge\neg(\exists A^{\prime})\,r_{1}[A^{\prime},S]\\
\quad\vee\ \ldots\ \vee\\
\quad r_{n}[A,S]\wedge\neg(\exists A^{\prime})\,(r_{1}[A^{\prime},S]\vee\ldots\vee r_{n-1}[A^{\prime},S])\\[4pt]
(r_{1}\vartriangle r_{2}\vartriangle\ldots\vartriangle r_{n})[A,S]\ :=\\
\quad r_{1}[A,S]\wedge\neg(\exists A^{\prime})\,(r_{1}[A^{\prime},S]\wedge r_{2}[A^{\prime},S])\\
\quad\vee\ r_{1}[A,S]\wedge r_{2}[A,S]\wedge\neg(\exists A^{\prime})\,(r_{1}[A^{\prime},S]\wedge r_{2}[A^{\prime},S]\wedge r_{3}[A^{\prime},S])\\
\quad\vee\ \ldots\ \vee\\
\quad r_{1}[A,S]\wedge r_{2}[A,S]\wedge\ldots\wedge r_{n}[A,S]
\end{array} $$

7.3 Computing with Strategies: Examples

The representation of strategy rules in the Situation Calculus can be used to define a special predicate, which we denote by Strat(α,σ), whose intended meaning is that it is possible according to some player’s strategy to take action α in situation σ. Consider, for example, a given set of complete strategy rules \(\{r_{1},\ldots ,r_{n}\}\) for each player; this can be embedded into a Situation Calculus encoding of a game as follows:

$$\mathrm{S}\textsc{trat}(A,S)\,\equiv\,r_{1}[A, S]\vee\ldots\vee r_{n}[A, S] $$

Such information about strategies can be used for a variety of purposes. Specifically, as we will briefly illustrate next, it can be used to infer possible outcomes of a game under a given set of strategy rules or help a player to decide on a course of action by reasoning about opponents’ strategies.

Inferring Possible Outcomes.

Based on the predicate Strat, the set of all possible playouts of a game according to players’ strategies can be recursively defined as follows.

$$\text{S\textsc{trategic}}(s_{0}) $$
$$\text{S\textsc{trategic}}(S)\wedge \text{S\textsc{trat}}(A, S)\rightarrow\text{S\textsc{trategic}}(\text{D\textsc{o}}(A, S)) $$

This predicate determines all situations that are reachable if all players follow their given strategy rules. Hence, all possible playouts under these strategies can be determined as all situations S that satisfy Strategic(S)∧Terminal(S).
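Under a bounded time horizon, the recursive definition of Strategic can be read operationally as a breadth-first expansion from \(s_{0}\). The following sketch assumes the game axioms are supplied as the functions strat, do and terminal (all names are ours):

```python
def strategic_playouts(init, strat, do, terminal, horizon=10):
    """Enumerate all situations satisfying Strategic(S) ∧ Terminal(S),
    i.e. terminal situations reachable when every player follows the
    moves permitted by the Strat relation, up to a time horizon."""
    playouts = []
    frontier = [init]                      # Strategic(s0)
    for _ in range(horizon):
        next_frontier = []
        for s in frontier:
            if terminal(s):
                playouts.append(s)         # a possible playout
            else:                          # Strategic(Do(A,S)) for Strat moves
                next_frontier += [do(a, s) for a in strat(s)]
        frontier = next_frontier
        if not frontier:
            break
    return playouts
```

For a toy game in which every turn allows two strategic moves and play ends after two moves, this yields exactly the four possible playouts.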

Reasoning About Opponents’ Strategies.

Another way of using reasoning about strategies is for players to use knowledge or belief about their opponents’ strategies in order to compute their own best course of action in response. As an example, we consider the encoding of a generalised form of Minimax evaluation in the Situation Calculus. To this end, let us take the perspective of a particular player i and assume that this player’s belief about the opponents’ strategy rules is encoded using the predicate Strat as above. Let us assume also that Turn(i,S) expresses the fact that it is player i’s turn in situation S. We can then recursively define the notion of a winning situation for i, represented by the predicate Win(S), as follows:

$$ \text{W\textsc{ins}}(i,S)\,\rightarrow\,\text{W\textsc{in}}(S) $$
(23)
$$ \text{T\textsc{urn}}(i,S)\,\wedge\,(\exists A)(\text{P\textsc{oss}}(A,S)\wedge \text{W\textsc{in}}(\text{D\textsc{o}}(A,S))) \\ \rightarrow\, \text{W\textsc{in}}(S) $$
(24)
$$ \neg \text{T\textsc{urn}}(i,S)\wedge(\forall A)(\text{S\textsc{trat}}(A,S)\rightarrow \text{W\textsc{in}}(\text{D\textsc{o}}(A,S)))\\ \rightarrow\, \text{W\textsc{in}}(S) $$
(25)

According to this definition, a situation is winnable for our player i if he can choose a suitable course of action whenever it is his turn, (24), such that, provided all other players choose their actions according to their strategies, (25), a terminal situation will be reached in which player i has won, (23).
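For finite games, axioms (23)–(25) correspond to the following recursive evaluation, sketched here with the closed-world completion that a non-winning terminal situation is lost (the helper functions are assumptions standing in for the game axioms and the Strat encoding):

```python
def win(s, wins_i, my_turn, legal, strat, do, terminal):
    """Win(S) per axioms (23)-(25): player i can force a win provided
    the opponents move only as their assumed strategy Strat allows."""
    if wins_i(s):                                      # (23)
        return True
    if terminal(s):                                    # closed world: lost/tied
        return False
    if my_turn(s):                                     # (24): some legal move wins
        return any(win(do(a, s), wins_i, my_turn, legal, strat, do, terminal)
                   for a in legal(s))
    return all(win(do(a, s), wins_i, my_turn, legal, strat, do, terminal)
               for a in strat(s))                      # (25): all their moves win
```

Note that the universal quantifier of (25) is rendered by all(), which is vacuously true if the opponents’ strategy permits no move.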

7.4 Computing with Strategies in ASP

Under specific conditions, problems that require reasoning about games and strategies can be solved by Answer Set Programming (ASP). This general technique provides a way of computing models of logic programs, for which particularly efficient implementations have been developed in the recent past, such as [4] to mention just one. ASP has been used successfully for reasoning about actions and plan generation (see, e.g., [9]) as well as for endgame search in general game playing [19]. In this section, we build on these existing methods and show how ASP can be used to compute all possible outcomes of a game under given strategies for the players. We assume the reader to be familiar with the basic notions and notation of ASP, as can be found in [5].

The standard use of ASP for computing with actions is to replace the branching time structure of the Situation Calculus by linear time. For deterministic games with a complete specification of the initial state and a given time horizon, the game rules can be encoded as an ASP in such a way that each answer set corresponds to a reachable path, and vice versa (see, e.g., [19]). For our running example game, lines 1–17 of Fig. 1 constitute an ASP encoding that follows this principle. Specifically, the so-called weight atom in clause 16 requires each answer set to include one, and only one, action at each point in time. The so-called constraint in clause 17 rules out any answer set in which the chosen action is not legal.

Fig. 1

An ASP encoding of the game from Example 1, including a strategy rule. For the sake of brevity, we have omitted the domain definitions for variables I, J, and T (the latter ranging from 0 to a given time horizon)

A given ASP that provides a linear-time encoding of a game can be extended by encodings of strategy rules for the players so that the answer sets comply with the strategies. This provides a computational method for inferring the possible outcomes of a game under a given set of strategies.

Provided it does not include the ○-operator, any strategy φ in our language can be encoded in this linear way according to the following inductive coding scheme. Let n be a unique predicate name that stands for (the satisfaction of) φ; then:

Where necessary, this is accompanied by clauses with head \(n_{i}\) to encode the sub-formulas \(\psi _{i}\) (for i=1,2,…), which are obtained inductively.

Clauses 19–29 in Fig. 1 are an example of applying this encoding in order to constrain the answer sets for our CrossDot game to those where both players i follow the simple strategy \((fill\_next^{i})\triangledown (fill\_any^{i})\) (cf. Example 4): constraint 19 in conjunction with clause 20 rejects all answer sets that do not comply with the strategy definition. Clauses 24–27 encode the first part of the strategy rule for both players (in a slightly more compact form than obtained by strictly applying the coding scheme from above), and clause 23, in conjunction with clauses 28–29, encodes the second part.

7.5 Further Extensions and Computational Complexity

The encoding scheme from above does not extend to strategies that include the ○-operator since their evaluation requires a counterfactual lookahead. Hence, they cannot be directly represented in an ASP based on a linear time structure. There are two conceivable ways to overcome this limitation.

  1. If the axiomatisation of a game supports the definition of a regression operator similar to the one in the standard Situation Calculus [15], then any strategy of the form ○φ can be regressed to a formula \(\varphi ^{\prime }\) that is logically equivalent under the game axioms and contains one less occurrence of the ○-operator. The repeated application of regression yields a ○-free formula, which can then be encoded in the same way as above.

  2. Alternatively, we can extend the linear time structure to allow for more than one sequence of actions in a single answer set. A similar approach has been shown to be practically viable for proving epistemic properties in general games using ASPs [8].
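As a rough illustration of the first option, the successor state axiom for the CrossDot fluents given in Section 7.1 (a box holds \({p^{i}_{j}}\) after action A iff it held before or \(A={a^{i}_{j}}\)) lets us regress \(\bigcirc {p^{i}_{j}}\) to \({p^{i}_{j}}\vee does({a^{i}_{j}})\). The following sketch applies this over our own tuple representation of formulas and handles fluent-only formulas under ○:

```python
def regress(phi):
    """One regression step for CrossDot: rewrite ○-formulas over the
    fluents p[i][j] into ○-free ones using the successor state axiom
    Holds(p[i][j], Do(A,S)) ≡ Holds(p[i][j], S) ∨ A = a[i][j]."""
    if phi[0] != "next":
        return phi                 # ○-free formulas are left untouched
    inner = phi[1]
    if inner[0] == "atom":         # ○ p^i_j  →  p^i_j ∨ does(a^i_j)
        _, i, j = inner
        return ("or", ("atom", i, j), ("does", i, j))
    if inner[0] == "not":          # ○¬φ ≡ ¬○φ for fluent-only φ
        return ("not", regress(("next", inner[1])))
    if inner[0] == "and":          # ○(φ∧ψ) ≡ ○φ ∧ ○ψ
        return ("and", regress(("next", inner[1])),
                       regress(("next", inner[2])))
    raise ValueError("unsupported formula under ○")
```

The result contains no ○-operator and can then be fed to the linear-time ASP encoding above.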

A detailed formalisation and analysis of either solution goes beyond the scope of this paper and is left for future work.

Up to now we have demonstrated with two example implementations how our formalism supports automated reasoning. With regard to efficiency, different aspects can be considered concerning both the description of strategies and the problem of reasoning about them.

The complexity of translating a strategy in our language to the Situation Calculus is linear, and so is the translation to ASP for ○-free strategy rules. However, generally speaking, it is unrealistic to expect high efficiency from a generic strategy reasoning mechanism because, in theory, as an extension of GDL our language can describe any finite game with perfect information. Verifying an arbitrary strategy such as “try any possible action to win” is equivalent to solving a game and therefore equally complex. It is well known that the complexity of finding a winning strategy for complex games like Japanese Go, which can be specified in GDL, is EXPTIME-complete [17].

Moreover, the computational complexity of solving a game is sometimes independent of the length of the game description and of the strategies that are used to solve the game. Therefore, a complexity analysis for domain-independent strategy reasoning mechanisms can be meaningless. However, it is possible that certain restrictions on both game descriptions and strategy rules lead to specific upper bounds on the computational complexity of solving these games. Restrictions on the number of lookahead steps can also affect the complexity of reasoning. In addition, since our strategies represent ideas of how to play a game well, it is possible to express these ideas in our language and to design specific algorithms that automatically generate the moves corresponding to them. The efficiency of such algorithms is crucial to the design of a game player. We leave these issues for future investigation.

8 Related Work

Modelling and specifying strategies is a fundamental research theme in game theory. Researchers in artificial intelligence have recently joined this line of research, mostly focusing on modelling strategic reasoning with the help of logical approaches. A number of logical frameworks have been proposed in the literature for strategy representation and reasoning [1, 3, 7, 11, 12, 23]. Most of these frameworks were built on either Coalition Logic (CL), Alternating-time Temporal Logic (ATL), or Propositional Dynamic Logic (PDL).

Both Coalition Logic [12] and Alternating-time Temporal Logic [1] were developed to model strategic abilities of coalitions in multi-agent systems. The modality 〈C〉φ (or 〈C〉○φ) expresses that “a group of agents, C, has a joint strategy to bring about φ no matter what strategies the other agents choose”. In CL, a strategy of a player is simply an action available to the player; in ATL, a strategy is a function that maps a sequence of states to an action. In both logics, strategies stay on the semantic level without syntactical representation.

A number of extensions of either CL or ATL aim to bring strategies to the syntactical level. van der Hoek et al. [24] proposed an extension of ATL, named CATL for Counterfactual ATL, with a variation of the coalition modality, C i (σ,φ), representing the counterfactual statement “if agent i had committed to a strategy σ, then φ would hold”. Strategies in that framework can be explicitly represented on the syntactical level using dynamic logic-like modalities, even though the program connectives of dynamic logic cannot be used for combining strategies. Walther et al. [25] refine CATL into an axiomatic logical system with a different semantics. However, strategies are still restricted to primitive forms, which means that the combination of strategies is not supported. Similar restrictions apply in several other ATL- or CL-like logical frameworks for strategic reasoning, such as [3, 10, 11].

Another approach to strategy representation and reasoning is to treat a strategy as a program so that PDL-style program connectives can be used to combine strategies [14, 21, 23]. van Benthem proposed a logical framework, named Temporal Forcing Logic (TFL), with a modality [σ,i]φ, meaning that “player i applies strategy σ, against any play of the others, to force the game into a state in which φ holds”, where a strategy can be any PDL program. A similar proposal can be found in [14]. Such an “intuitive analogue to strategies” provides a close approximation of strategy representation; nevertheless, a strategy differs essentially from a program, which requires specific ways of composition and reasoning, as we have shown in the previous sections.

We would like to stress that our treatment of strategies differs from all of the above-mentioned approaches in the following respects. Firstly, we can use the same propositional formulas for different purposes. A propositional formula with the standard semantics of propositional modal logic can represent properties of the game state and be used in domain-dependent axioms; but we can also represent a strategy with a propositional formula, by endowing the formula with a specific semantics. This makes strategy design much easier and more efficient. Secondly, we view a strategy as a set of possible moves, i.e., a set of state-action pairs, rather than a function from a state (or a sequence of states) to an action. In this sense, our strategies represent “rough ideas”, which can then be combined and refined. Thirdly, instead of using PDL-style program connectives, we introduced two prioritised connectives for combining strategies, which, as we have seen, provide for a very natural and convenient design of strategies.

We also want to mention that the idea of the prioritised disjunction was inspired by Brewka et al.’s Qualitative Choice Logic (QCL) [2]. QCL contains a non-standard propositional connective \(A\overrightarrow {\times }B\) with the meaning “A if possible; but if A is impossible then at least B”. We found that the semantics of this connective fits strategies very well. It is an interesting question for future work whether our prioritised conjunction can also be integrated into QCL.

9 Conclusion

In this paper we have introduced a logical language to describe, compose and combine strategies for game-playing agents. The language derives from the general Game Description Language (GDL) and extends it by a single temporal operator ○ and two new prioritised connectives, \(\triangledown \) and \(\vartriangle \). The basic components of GDL facilitate the representation of initial and terminal conditions, winning criteria and the legality of actions (i.e., preconditions). The temporal operator allows us to describe the effects of actions. Together these form the basic language for describing game- and player-specific strategies. The newly introduced connectives allow us to combine simple strategies into more complicated and refined ones. When we use the language to describe a strategy, we endow it with a specific semantics, so that we can compose a strategy in a logical way and the actual moves the strategy represents can be generated automatically.

We have thoroughly analysed the properties of the language. In fact, the nice properties, in particular of the new connectives, give us great freedom in practical strategy design: we can start with a strategy that formalises one specific idea; if it is too restrictive, we can extend it with more generic ones using the prioritised disjunction, and if it is too generic, we can refine it with more specific strategies using the prioritised conjunction. We have also shown how strategies can be embedded into existing methods for the design and analysis of intelligent agents in order to solve problems that involve reasoning about strategies, including the computation of the possible outcomes of games under given strategies.

Our current implementation of strategy reasoning combines reasoning about actions and change with an encoding of game rules and strategies using Answer Set Programming. It would be more efficient to develop a specific model checking method based on the structure of our state transition model and the syntax of strategy composition. We leave this for future work.