Abstract
Monte-Carlo Tree Search (MCTS) is a popular technique for playing multi-player games. In this paper, we propose a new method to bias the playout policy of MCTS. The idea is to prune the decisions which seem “bad” (according to the previous iterations of the algorithm) before computing each playout. Thus, the method evaluates the estimated “good” moves more precisely. We have tested our improvement on the game of Havannah and compared it to several classic improvements. Our method outperforms the classic version of MCTS (with the RAVE improvement) and the other MCTS playout policies with which we have experimented.
1 Introduction
Monte-Carlo Tree Search (MCTS) algorithms are recent algorithms for decision making problems [6, 7]. They are used competitively in discrete, observable and uncertain environments with a finite horizon, when the number of possible states is large. MCTS algorithms evaluate a state of the problem using a Monte-Carlo simulation (roughly, by performing numerous playouts starting from this state). Therefore, they require no evaluation function, which makes them quite generic and usable on a large number of applications. Many games are naturally suited to these algorithms, so games are classically used to compare them.
In this paper, we propose a method to improve the Monte-Carlo simulation (the playouts) by pruning some of the possible moves. The idea is to ignore the decisions which seem “bad” when computing a playout, and thus to consider the “good” moves more precisely. The moves to be pruned are chosen using statistics gathered during previous playouts.
We evaluate our improvement, called “Playout Pruning with Rave” (PPR), on the game of Havannah. Classic MCTS algorithms already provide good results for this game, but our experiments show that PPR performs better. We also compare PPR to four well-known MCTS improvements (PoolRave, LGRF1, MAST and NAST2).
The remainder of this paper presents the game of Havannah in Sect. 2 and Monte-Carlo Tree Search algorithms in Sect. 3. Our new improvement is described in Sect. 4. We present our results in Sect. 5. Finally, we conclude in Sect. 6.
2 Game of Havannah
The game of Havannah is a 2-player board game created by Christian Freeling in 1979 and updated in 1992 [26]. It belongs to the family of connection games with hexagonal cells. It is played on a hexagonal board, hence with 6 corners and 6 edges (corner cells do not belong to the edges). At each turn, a player places a stone in an empty cell. The goal is to realize one of three shapes: (i) a ring, a loop around one or more cells (empty or occupied by stones of either player); (ii) a bridge, a continuous string of stones connecting two corners; (iii) a fork, a continuous string of stones connecting three edges. If no empty cell is left and no player has won, the game is a draw (see Fig. 1). Previous studies of the Monte-Carlo Tree Search algorithm applied to the game of Havannah can be found in [10, 20, 30].
3 Monte-Carlo Tree Search Algorithms
The Monte-Carlo Tree Search (MCTS) algorithm is currently a state-of-the-art algorithm for many decision making problems [3, 9, 16, 31], and is particularly relevant in games [1, 5, 12, 14, 15, 19, 21, 22, 29, 30]. The general principle of MCTS is to iteratively build a tree and perform playouts to bias the decision making process toward the best decisions [6, 7, 18]. Starting with the current state \(s_0\) of a problem, the MCTS algorithm incrementally builds a subtree of the future states. Here, the goal is to get an unbalanced subtree, where the branches with (estimated) good states are more developed. The subtree is built in four steps: selection, expansion, simulation and backpropagation (see Fig. 2).
The selection step chooses an existing node among the available nodes in the subtree. The most common implementation of MCTS is the Upper Confidence Tree (UCT) [18], which uses a bandit formula for choosing a node. A possible bandit formula selects the node

\( \mathop {\mathrm {argmax}}\limits _{j \in \mathcal {C}_{s_1}} \left( \frac{w_j}{n_j} + K \sqrt{\frac{\ln n_{s_1}}{n_j}} \right) \)
where \( \mathcal {C}_{s_1}\) is the set of child nodes of the node \(s_1\), \(w_j\) is the number of wins for the node j (more precisely, the sum of the final rewards for j), \(n_j\) is the number of playouts for the node j and \(n_{s_1}\) is the number of playouts for the node \(s_1\) (\(n_{s_1} =\sum _j n_j\)). K is called the exploration parameter and is used to tune the trade-off between exploitation and exploration.
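For concreteness, here is a minimal Python sketch of this selection rule (not the authors' implementation; the `wins` and `playouts` fields mirror \(w_j\) and \(n_j\), and we assume every child node already has at least one playout):

```python
import math

def uct_select(children, n_parent, K):
    """Return the child maximizing w_j/n_j + K * sqrt(ln(n_parent) / n_j)."""
    return max(children,
               key=lambda j: j.wins / j.playouts
                             + K * math.sqrt(math.log(n_parent) / j.playouts))
```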
Once a leaf node \(s_1\) is selected, the expansion step creates a new child node \(s_2\). This new node corresponds to a decision of \(s_1\) which has not been considered yet. Then, the simulation step is to perform a playout (a random game) until a final state \(s_3\) is reached. This final state gives a reward (for example, in games, the reward corresponds to a win, a loss or a draw). The last step (backpropagation) is to use the reward to update the statistics (number of wins and number of playouts) in all the nodes encountered during the selection step.
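The four steps can be summarized by the following skeleton, a sketch under an assumed `Node` and game-state API (`untried_moves`, `add_child`, `possible_moves`, etc. are illustrative names, not from the paper); it reuses `uct_select` from the previous sketch:

```python
import random

def mcts(root, iterations, K):
    for _ in range(iterations):
        # 1. Selection: descend while the node is fully expanded.
        node = root
        while not node.untried_moves and node.children:
            node = uct_select(node.children, node.playouts, K)
        # 2. Expansion: create one child for a move not considered yet.
        if node.untried_moves:
            node = node.add_child(node.untried_moves.pop())
        # 3. Simulation: play random moves until a final state is reached.
        state = node.state.copy()
        while not state.is_terminal():
            state.play(random.choice(state.possible_moves()))
        reward = state.reward()  # e.g. 1 for a win, 0 for a loss
        # 4. Backpropagation: update statistics along the selected path.
        # (For two players, the reward must be counted from the point of
        # view of the player who moved into each node; omitted here.)
        while node is not None:
            node.playouts += 1
            node.wins += reward
            node = node.parent
    return max(root.children, key=lambda c: c.playouts)
```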
3.1 Rapid Action Value Estimate
One of the most common improvements of the MCTS algorithm is the Rapid Action Value Estimate (RAVE) [12]. The idea is to share some statistics about moves between nodes: if a move is good in a certain state, then it may be good in other ones.
More precisely, let s be a node and \(m_i\) the possible moves from s, leading to the child nodes \(s'_i\). For the classic MCTS algorithm, we already store, in s, the number of winning playouts \(w_s\) and the total number of playouts \(n_s\) (after s was selected). For the RAVE improvement, we also store, in s and for each move \(m_i\), the number of winning playouts \(w'_{s,s'_i}\) and the total number of playouts \(n'_{s,s'_i}\) obtained by choosing the move \(m_i\). These “RAVE statistics” are updated during the backpropagation step and indicate the estimated quality of the moves already considered in the subtree (see Fig. 3).
Thus, the selection step can be biased by adding a RAVE score in the bandit formula defined previously, selecting the node

\( \mathop {\mathrm {argmax}}\limits _{j \in \mathcal {C}_{s_1}} \left( (1-\beta )\,\frac{w_j}{n_j} + \beta \,\frac{w'_{s_1,j}}{n'_{s_1,j}} + K \sqrt{\frac{\ln n_{s_1}}{n_j}} \right) \)
where \(\beta \) is a parameter approaching 0 as \(n_j\) tends to infinity (for instance, \(\beta = \sqrt{\frac{R}{R + 3n_j}}\) where R is a parameter [13]).
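A sketch of the resulting score, assuming the RAVE statistics of a child are stored in `rave_wins` and `rave_playouts` fields (illustrative names):

```python
import math

def rave_score(parent, child, K, R):
    """UCT term blended with the RAVE term; beta decreases toward 0 as
    the number of playouts of the child grows."""
    beta = math.sqrt(R / (R + 3 * child.playouts))
    mc = child.wins / child.playouts                # w_j / n_j
    amaf = child.rave_wins / child.rave_playouts    # w'_j / n'_j
    return ((1 - beta) * mc + beta * amaf
            + K * math.sqrt(math.log(parent.playouts) / child.playouts))
```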
3.2 Playout Improvements
PoolRave is an extension of RAVE [17, 25]. The idea is to use the RAVE statistics to bias the simulation step (unlike the RAVE improvement, which biases the selection step). More precisely, when a playout is performed, the PoolRave improvement first builds a pool of possible moves by selecting the N best moves according to the RAVE statistics. Then, in the simulation step, the moves are chosen randomly from the pool with probability p; otherwise (with probability \(1-p\)) a random possible move is played, as in the classic MCTS algorithm.
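A minimal sketch of this playout policy, assuming `rave_stats` maps each move to a (wins, playouts) pair (the names are illustrative, not the authors' implementation):

```python
import random

def poolrave_move(moves, rave_stats, N, p):
    """PoolRave playout policy: with probability p, play a random move
    from the pool of the N moves with the best RAVE win rates;
    otherwise play a uniformly random possible move."""
    if random.random() < p:
        def win_rate(m):
            w, n = rave_stats.get(m, (0, 0))
            return w / n if n else 0.0
        pool = sorted(moves, key=win_rate, reverse=True)[:N]
        return random.choice(pool)
    return random.choice(moves)
```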
The Last-Good-Reply improvement [2, 8] is based on the principle of learning how to respond to a move. In each node, LGR stores move replies that led to a win in previous playouts. More precisely, during a playout, if the node has a reply for the last move of the opponent, this reply is played; otherwise a new reply is created using a random possible move. At the end of the playout, if the playout leads to a win, the corresponding replies are stored in the node. If the playout leads to a loss, the corresponding replies are removed from the node (forgetting step). This algorithm is called LGRF1. Other algorithms based on the same idea have been proposed, but LGRF1 is the most efficient one for connection games [27].
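A sketch of the LGRF1 mechanism, with the reply table shown as a plain dictionary (the paper stores the replies per node; this flat table is a simplification for illustration):

```python
import random

def lgrf1_move(moves, replies, last_opponent_move):
    """Play the stored winning reply to the opponent's last move if it
    is still legal, otherwise a random possible move."""
    reply = replies.get(last_opponent_move)
    if reply in moves:
        return reply
    return random.choice(moves)

def lgrf1_update(replies, playout_pairs, won):
    """After the playout: store replies on a win, forget them on a loss.
    `playout_pairs` lists the (opponent_move, our_reply) pairs seen."""
    for opp_move, our_reply in playout_pairs:
        if won:
            replies[opp_move] = our_reply
        elif replies.get(opp_move) == our_reply:
            del replies[opp_move]
```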
The principle of the Move-Average Sampling Technique (MAST) [11] is to store move statistics globally and to use these statistics to bias the playouts. This is similar to the PoolRave improvement, except that here, the statistics are independent of the position of the move in the tree.
The N-gram Average Sampling Technique (NAST) is a generalization of MAST [23, 28]. The idea is to look at sequences of N moves instead of one move only. This improvement can be costly depending on N, but it is already efficient with \(N=2\) (NAST2) for the game of Havannah [27].
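Both techniques can share one sketch, the only difference being the key under which the playout statistics are stored; note that MAST is often described with Gibbs sampling over the statistics, so this greedy variant is only a simplified illustration (all names are assumptions):

```python
import random

def mast_move(moves, stats, p, last_move=None):
    """MAST keys `stats` by the move itself; NAST2 keys it by the 2-gram
    (last_move, move). With probability p play the move with the best
    global win rate, otherwise a uniformly random move."""
    if random.random() < p:
        def rate(m):
            key = m if last_move is None else (last_move, m)
            wins, n = stats.get(key, (0, 0))
            return wins / n if n else 0.5  # unseen moves get a neutral rate
        return max(moves, key=rate)
    return random.choice(moves)
```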
4 Pruning in the Simulation Step
We propose a new improvement of the MCTS algorithm, called “Playout Pruning with Rave” (PPR). The idea is to prune bad moves in the simulation step in order to focus the simulation on good playouts (see Fig. 4, left). More precisely, before the playout, we compute a list of good moves by pruning the moves which have a winning rate lower than a given threshold \(T_{w'}\). The winning rate of a node j is computed using the RAVE statistics of a node \( s_\text {PPR} \), with \(\frac{w'_{s_\text {PPR}, j}}{n'_{s_\text {PPR}, j}}\).
The node \( s_\text {PPR} \), giving the RAVE statistics, has to be chosen carefully. Indeed, the node \(s_2\), selected during the selection step of the MCTS algorithm, may still have very few playouts, hence inaccurate RAVE statistics. To solve this problem, we traverse the MCTS tree bottom-up, starting from \(s_2\), until we reach a node whose number of playouts, over the total number of playouts performed, is at least the minimum ratio \(T_n\); this node is used as \( s_\text {PPR} \).
After the PPR list is computed, the simulation step is performed. The idea is to use the moves in the PPR list, which are believed to be good, but we also have to choose other moves in order to explore other possible playouts. To this end, during the simulation step, each move is chosen from the PPR list with probability p, or among the possible moves with probability \(1-p\). In the latter case, we have observed that considering only a part of all the possible moves gives better results; this can be seen as a default pruning with, in return, an additional bias (see Algorithm 1).
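Since Algorithm 1 is not reproduced here, the following sketch summarizes our reading of the PPR playout. The accessors (`rave_wins(m)`, `rave_playouts(m)`) and the game-state API are assumptions, and the "default pruning" branch is simplified to a uniform choice:

```python
import random

def select_ppr_node(s2, total_playouts, T_n):
    """Walk the tree bottom-up from s2 until a node holds at least a
    fraction T_n of all playouts, so that its RAVE statistics are
    reliable enough."""
    node = s2
    while node.parent is not None and node.playouts < T_n * total_playouts:
        node = node.parent
    return node

def ppr_playout(state, s2, total_playouts, T_n, T_w, p):
    """Prune the moves whose RAVE win rate at s_PPR is below T_w, then
    play from the remaining list with probability p."""
    s_ppr = select_ppr_node(s2, total_playouts, T_n)
    ppr_list = {m for m in state.possible_moves()
                if s_ppr.rave_playouts(m) > 0
                and s_ppr.rave_wins(m) / s_ppr.rave_playouts(m) >= T_w}
    while not state.is_terminal():
        moves = state.possible_moves()
        good = [m for m in moves if m in ppr_list]
        if good and random.random() < p:
            state.play(random.choice(good))
        else:
            # The paper samples among only a part of the possible moves
            # here ("default pruning"); we simplify to a uniform choice.
            state.play(random.choice(moves))
    return state.reward()
```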
The PPR improvement can be seen as a dynamic version of the PoolRave improvement presented in the previous section: instead of selecting the N best moves in a pool, we discard the moves which have a winning rate lower than \(T_{w'}\). PoolRave uses a static pool size, which implies that good moves may be discarded (if the pool size is small compared to the number of good moves) or that bad moves may be chosen (if the pool size is large compared to the number of good moves). PPR automatically deals with this problem since the size of the PPR list is naturally dynamic: the list is small if there are only a few good moves, and large if there are many good moves.
5 Experiments
We have experimented with the proposed MCTS improvement (PPR) for the game of Havannah. Since RAVE is now considered a classic MCTS baseline, we have compared PPR against RAVE (using the parameters \(R=130\) and \(K=0\)). To have adequate statistical properties, we have played 600 games for each experiment. Since the first player has an advantage in the game of Havannah, we played, for each experiment, half the games with the first algorithm as the first player and the other half with the second algorithm as the first player.
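As a rough back-of-the-envelope check (ours, not from the paper), 600 games give a 95% confidence half-width of about 4 points on a win rate near 50%:

```python
import math

def ci95_halfwidth(win_rate, games):
    """95% confidence half-width of a Bernoulli win-rate estimate."""
    return 1.96 * math.sqrt(win_rate * (1 - win_rate) / games)

print(ci95_halfwidth(0.5, 600))  # ~0.04, i.e. about +/- 4 points
```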
Below we report on the influence of the PPR parameters (Sect. 5.1), the scalability of the playout pruning (Sect. 5.2), and the comparison between PPR and other playout improvements (Sect. 5.3).
5.1 Influence of the PPR Parameters
To study the influence of the three parameters of the PPR improvement (\(T_n\), \(T_{w'}\), \(p\)), we have compared PPR against RAVE using 1 k MCTS iterations and a board size of 6. For each parameter, we have experimented with various values while the other parameters were set to default values (see Fig. 5).
PPR has better win rates against RAVE when \(T_n\) (the minimum ratio of playouts for the node \(s_\text {PPR}\) over the total number of playouts) is lower than \(10\%\). A low value for \(T_n\) means that we take a node \(s_\text {PPR}\) close to the node \(s_2\) from which the playout is launched; thus the PPR list is built using RAVE statistics that are meaningful for the playout but quite unreliable. When \(T_n\) is too large, no node has sufficient playouts, so the PPR list is empty and PPR is equivalent to RAVE (win rate of \(50\%\)).
The best values for the pruning threshold \(T_{w'}\) (win rate in the RAVE statistics of \(s_\text {PPR}\)) lie between \(20\%\) and \(40\%\). The moves with a winning rate lower than this threshold are pruned when building the PPR list. Therefore, if \(T_{w'}\) is too high, all moves are pruned (i.e., the PPR list is empty) and the algorithm is equivalent to RAVE (win rate of \(50\%\)). Conversely, if \(T_{w'}\) is too low, the PPR list also contains bad moves (with low winning rates), which lowers the efficiency of PPR.
Finally, the best values for the parameter \(p\) (the probability of choosing a move from the PPR list instead of sampling at random) lie between \(60\%\) and \(80\%\) in our experiments. A low value implies that the PPR list is rarely used, making PPR almost equivalent to RAVE. A very high value implies that the PPR list is used almost exclusively, so PPR barely explores other moves, which makes the playout computation highly biased.
5.2 Scalability of the Playout Pruning
Like classic improvements of the simulation step (for instance, PoolRave and LGRF1), PPR is useful for small numbers of playouts and large board sizes (see Fig. 6).
In our experiments, PPR wins almost 80% of the games against RAVE with 1 k MCTS iterations, and almost 70% with 10 k iterations. PPR wins 60% or less of the games against RAVE with a board size lower than 5 and 80% or more of the games with a board size larger than 7. This is not very surprising because RAVE is already very efficient when the board size is small, so adding pruning is useless in this case. However, large boards have many more “dead areas” (i.e., irrelevant cells) that PPR can detect and prune (see Fig. 4, right).
5.3 PPR vs. Other Playout Improvements
We have compared PPR against several MCTS improvements (RAVE, PoolRave, LGRF1, MAST, NAST2) for several board sizes and numbers of MCTS iterations (see Table 1). Since RAVE is now considered the classic MCTS baseline, we have implemented all the playout improvements (PPR, PoolRave, LGRF1, MAST, NAST2) on top of the RAVE algorithm.
Our results indicate that PPR outperforms the previous algorithms for the game of Havannah. For a board size of 6, PPR wins more than 70% of the games with 1 k MCTS iterations and more than 60% of the games with 10 k or 30 k iterations. For a board size of 10, PPR is even better (more than 70%).
6 Conclusion
In this paper, we have proposed a new improvement (called PPR) of the MCTS algorithm, based on the RAVE improvement. The idea is to prune, during the simulation step, the moves which seem “bad” according to previous playouts. We have compared PPR to previous MCTS improvements (RAVE, PoolRave, LGRF1, MAST, NAST2) for the game of Havannah. In our experiments, PPR is the most efficient algorithm, reaching win rates of at least 60%.
In future work, it would be interesting to compare PPR with other MCTS improvements, such as Contextual Monte-Carlo [24], or against stronger bots [10]. We would also like to try PPR on other games and decision making problems, to determine whether the benefit of PPR is limited to the game of Havannah or is more general.
References
Arneson, B., Hayward, R., Henderson, P.: Monte-Carlo tree search in hex. IEEE Trans. Comput. Intell. AI Games 2(4), 251–258 (2010)
Baier, H., Drake, P.: The power of forgetting: improving the last-good-reply policy in Monte-Carlo go. IEEE Trans. Comput. Intell. AI Games 2(4), 303–309 (2010)
Bertsimas, D., Griffith, J., Gupta, V., Kochenderfer, M.J., Mišić, V., Moss, R.: A comparison of Monte-Carlo tree search and mathematical optimization for large scale dynamic resource allocation (2014). arXiv:1405.5498
Browne, C., Powley, E., Whitehouse, D., Lucas, S., Cowling, P., Rohlfshagen, P., Tavener, S., Perez, D., Samothrakis, S., Colton, S.: A survey of Monte-Carlo tree search methods. IEEE Trans. Comput. Intell. AI Games 4(1), 1–43 (2012)
Cazenave, T.: Monte-Carlo kakuro. In: Herik, H.J., Spronck, P. (eds.) ACG 2009. LNCS, vol. 6048, pp. 45–54. Springer, Heidelberg (2010). doi:10.1007/978-3-642-12993-3_5
Chaslot, G., Saito, J., Bouzy, B., Uiterwijk, J., Herik, H.: Monte-Carlo strategies for computer go. In: Proceedings of the 18th BeNeLux Conference on Artificial Intelligence, pp. 83–91, Namur, Belgium (2006)
Coulom, R.: Efficient selectivity and backup operators in Monte-Carlo tree search. In: Herik, H.J., Ciancarini, P., Donkers, H.H.L.M.J. (eds.) CG 2006. LNCS, vol. 4630, pp. 72–83. Springer, Heidelberg (2007). doi:10.1007/978-3-540-75538-8_7
Drake, P.: The last-good-reply policy for Monte-Carlo go. Int. Comput. Games Assoc. J. 32(4), 221–227 (2009)
Edelkamp, S., Tang, Z.: Monte-Carlo tree search for the multiple sequence alignment problem. In: Eighth Annual Symposium on Combinatorial Search (2015)
Ewalds, T.: Playing and Solving Havannah. Master’s thesis, University of Alberta (2012)
Finnsson, H., Björnsson, Y.: Simulation-based approach to general game playing. In: Proceedings of the 23rd National Conference on Artificial Intelligence, AAAI 2008, vol. 1, pp. 259–264. AAAI Press (2008)
Gelly, S., Silver, D.: Combining online and offline knowledge in UCT. In: Proceedings of the 24th International Conference on Machine Learning, pp. 273–280. ACM (2007)
Gelly, S., Silver, D.: Monte-Carlo tree search and rapid action value estimation in computer go. Artif. Intell. 175(11), 1856–1875 (2011)
Guo, X., Singh, S., Lee, H., Lewis, R.L., Wang, X.: Deep learning for real-time atari game play using offline Monte-Carlo tree search planning. In: Advances in Neural Information Processing Systems, pp. 3338–3346 (2014)
Heinrich, J., Silver, D.: Self-play Monte-Carlo tree search in computer poker. In: Workshops at the Twenty-Eighth AAAI Conference on Artificial Intelligence (2014)
Herik, H.J., Kuipers, J., Vermaseren, J.A.M., Plaat, A.: Investigations with Monte Carlo tree search for finding better multivariate horner schemes. In: Filipe, J., Fred, A. (eds.) ICAART 2013. CCIS, vol. 449, pp. 3–20. Springer, Heidelberg (2014). doi:10.1007/978-3-662-44440-5_1
Hoock, J., Lee, C., Rimmel, A., Teytaud, F., Wang, M., Teytaud, O.: Intelligent agents for the game of go. IEEE Comput. Intell. Mag. 5(4), 28–42 (2010)
Kocsis, L., Szepesvári, C.: Bandit based Monte-Carlo planning. In: Fürnkranz, J., Scheffer, T., Spiliopoulou, M. (eds.) ECML 2006. LNCS (LNAI), vol. 4212, pp. 282–293. Springer, Heidelberg (2006). doi:10.1007/11871842_29
Lanctot, M., Saffidine, A., Veness, J., Archibald, C., Winands, M.: Monte Carlo*-minimax search. In: Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, pp. 580–586. AAAI Press (2013)
Lorentz, R.: Improving Monte-Carlo tree search in Havannah. In: Computers and Games 2010, pp. 105–115 (2010)
Lorentz, R.J.: Amazons discover Monte-Carlo. In: Herik, H.J., Xu, X., Ma, Z., Winands, M.H.M. (eds.) CG 2008. LNCS, vol. 5131, pp. 13–24. Springer, Heidelberg (2008). doi:10.1007/978-3-540-87608-3_2
Mazyad, A., Teytaud, F., Fonlupt, C.: Monte-Carlo Tree Search for the “mr jack” board game. J. Soft Comput. Artif. Intell. Appl. (IJSCAI) 4(1) (2015)
Powley, E.J., Whitehouse, D., Cowling, P.I.: Bandits all the way down: UCB1 as a simulation policy in Monte-Carlo tree search. In: CIG, pp. 81–88. IEEE (2013)
Rimmel, A., Teytaud, F.: Multiple overlapping tiles for contextual Monte Carlo tree search. In: Chio, C., Cagnoni, S., Cotta, C., Ebner, M., Ekárt, A., Esparcia-Alcazar, A.I., Goh, C.-K., Merelo, J.J., Neri, F., Preuß, M., Togelius, J., Yannakakis, G.N. (eds.) EvoApplications 2010. LNCS, vol. 6024, pp. 201–210. Springer, Heidelberg (2010). doi:10.1007/978-3-642-12239-2_21
Rimmel, A., Teytaud, F., Teytaud, O.: Biasing Monte-Carlo simulations through RAVE values. In: Herik, H.J., Iida, H., Plaat, A. (eds.) CG 2010. LNCS, vol. 6515, pp. 59–68. Springer, Heidelberg (2011). doi:10.1007/978-3-642-17928-0_6
Schmittberger, R.: New Rules for Classic Games. Wiley, New York (1992)
Stankiewicz, J.A., Winands, M.H.M., Uiterwijk, J.W.H.M.: Monte-Carlo tree search enhancements for havannah. In: Herik, H.J., Plaat, A. (eds.) ACG 2011. LNCS, vol. 7168, pp. 60–71. Springer, Heidelberg (2012). doi:10.1007/978-3-642-31866-5_6
Tak, M.J., Winands, M.H., Björnsson, Y.: N-grams and the last-good-reply policy applied in general game playing. IEEE Trans. Comput. Intell. AI Games 4(2), 73–83 (2012)
Taralla, D.: Learning Artificial Intelligence in Large-Scale Video Games. Ph.D. thesis, University of Liège (2015)
Teytaud, F., Teytaud, O.: Creating an upper-confidence-tree program for havannah. In: Herik, H.J., Spronck, P. (eds.) ACG 2009. LNCS, vol. 6048, pp. 65–74. Springer, Heidelberg (2010). doi:10.1007/978-3-642-12993-3_7
Wilisowski, Ł., Dreżewski, R.: The application of co-evolutionary genetic programming and TD(1) reinforcement learning in large-scale strategy game VCMI. In: Jezic, G., Howlett, R.J., Jain, L.C. (eds.) Agent and Multi-Agent Systems: Technologies and Applications. SIST, vol. 38, pp. 81–93. Springer, Heidelberg (2015). doi:10.1007/978-3-319-19728-9_7
Acknowledgements
Experiments presented in this paper were carried out using the CALCULCO computing platform, supported by SCOSI/ULCO (Service Commun du Système d’Information de l’Université du Littoral Côte d’Opale).