
1 Introduction

What are the origins of game theory? Answering this question in a way that can enlighten the increasingly broad field of contemporary applications of game theory in economics, biology, psychology, linguistics, philosophy, etc., requires a close examination of both the past and the present state of the discipline. Indeed, asking about the origins of game theory with such purposes in mind means tracing the historical emergence and refinement of game-theoretical methods of analysis and concepts of solution, as well as the improvement of their philosophical, mathematical, and scientific foundations, and comparing these various stages of development with today’s.

This paper focuses on the historical emergence of a concept of solution that plays a central role in game-theoretical analyses of strategic games that have no pure strategy equilibria. The concept in question is, of course, that of mixed-strategy equilibrium. What, then, are the origins of the concept of mixed-strategy equilibrium? Perhaps because there is relatively little literature on the history of game theory, almost all academics who are asked this question answer it incorrectly. Typically, it is thought that the origins of the concept of mixed-strategy equilibrium and the origins of modern game theory are one and the same, and that the origins of the latter can be portrayed along these lines:

[…] the conventional view of the history of game theory in economics is relatively simple to narrate. It was that von Neumann wrote a paper in the late 1920s on two-person games and minimax. Borel claimed priority but this claim was rejected as mistaken. Then von Neumann and Morgenstern got together in Princeton, wrote their book in 1944, and the word went forth. (Weintraub 1992, p. 7)

Weintraub, however, correctly emphasizes that “this potted history is misleading in all its details,” and the collection of essays he edited is meant to rectify the situation. Indeed, insofar as the concept of mixed-strategy equilibrium is concerned, far from finding its origins in the work of von Neumann and Morgenstern (1944), we must instead look back to the beginning of the eighteenth century.

To be clear, the claim is not that a general theory of mixed-strategy equilibria, complete with existence proofs, had been formulated in the eighteenth century. That theory is uncontroversially a twentieth-century creation. Borel provided the minimax solution of games with a small number of pure strategies (Borel 1921), together with some conjectures about the existence of such solutions in general [see Dimand and Dimand (1992) for a detailed account], and von Neumann (1928) proved the general statement of the minimax theorem for any finite number of pure strategies in any two-person zero-sum game. Two decades later, von Neumann’s minimax theorem was generalized by Nash’s theorem on mixed-strategy equilibria in arbitrary non-cooperative finite games (Nash 1950).

This being said, concepts and methods are typically older than their foundations, and in this case the first known articulation of the concept of mixed-strategy equilibrium and the first calculation of a mixed-strategy equilibrium are due to an amateur mathematician going by the name Waldegrave. This fact has been noted by many authors [e.g., Todhunter (1865), Fisher (1934), Kuhn (1968), Rives (1975), Hald (1990), Dimand and Dimand (1992), Bellhouse (2007), and Bellhouse and Fillion (2015), to name a few], but the story is to this day not entirely clear.

A first element bringing confusion is that, following Kuhn (1968), the Waldegrave in question is normally identified as James, 1st Earl Waldegrave (see Fig. 1). However, as Bellhouse (2007) pointed out, this is incorrect, since Montmort (1713, p. 388) reveals that the Waldegrave in question is a brother of Henry Waldegrave; this leaves Charles, Edward, and Francis as candidates. Bellhouse (2007) argued in favor of Charles, but in a later paper (Bellhouse and Fillion 2015) we examined calligraphic evidence showing that Francis Waldegrave deserves the credit for this innovation.

Fig. 1: Some members of the Waldegrave family. Reproduced from Bellhouse (2007). Whereas James is often identified as the one who contributed the mixed-strategy solution, calligraphic evidence suggests that it is Francis (Bellhouse and Fillion 2015)

In addition to confusion about who Waldegrave was, many who have written on the topic have harshly judged Waldegrave, Montmort, and Nicolaus Bernoulli, concluding that none of them really had a good grasp of the nature of the problem. For instance, based on a misinterpreted remark, Henny suggests that even though Waldegrave somehow stumbled upon the right solution, he did not have the mathematical skills to demonstrate his result (Henny 1975). Another case in point is the argument by Fisher (1934) that “Montmort’s conclusion [that no absolute rule could be given], though obviously correct for the limited aspect in which he viewed the problem, is unsatisfactory to common sense, which suggests that in all circumstances there must be, according to the degree of our knowledge, at least one rule of conduct which shall be not less satisfactory than any other; and this his discussion fails to provide.” Once again, Fisher’s argument is based on a misinterpretation of Montmort’s position; as I will argue in Sect. 3, far from leaving common sense unsatisfied, Montmort was considering a perspective on game theory that only came to the forefront in the second half of twentieth century. In a previous paper (Bellhouse and Fillion 2015), we have translated correspondence involving Montmort, Bernoulli, and Waldegrave that was not published in the second edition of Essay d’Analyse; this additional correspondence decisively refutes such claims. Instead, it shows that over the years all three came to a very clear understanding of the situation.

2 Le Her and Its Solution

At the end of Essay d’Analyse des Jeux de Hazard (Montmort 1708), Montmort proposed four unsolved problems to his readers. The second one concerns the game Le Her, and its statement is as follows:

Here is the problem of which we request the solution:

Three players, Pierre, Paul & Jacques are the only remaining players, and they have only one chip left. Pierre is the dealer, Paul is to his right, & Jacques follows. We request what their odds are with respect to the position they occupy, & in which proportion they should split the pot, if it is, say, 10 coins, if they wanted to share it among themselves without finishing the game. (Montmort 1713, p. 279)

The situation can be represented as in Fig. 2.

Fig. 2: Setup of players at a card table for Montmort’s problem concerning Le Her

The game Le Her that Montmort described in Essay d’Analyse is a game of strategy and chance played with a standard deck of 52 playing cards. The dealer Pierre distributes a card face down to Paul, Jacques, and then to himself, and places the remainder of the deck aside. The objective of the game is to end the round with the highest card, where aces are below twos and kings are highest. If there is a tie, the dealer or the player closest to the dealer wins; for example, if all three players end the round with a 10, Pierre wins, whereas if Pierre receives a 9 and Paul and Jacques receive a 10, Jacques wins. The strategic element of the game comes from the fact that the players do not have to stick with the card they are initially dealt. At the beginning, all players look at the card they have received. Then, one after the other, starting with Paul (or whoever is the first player to the right of the dealer), the players have the opportunity to switch their card with that of the player to their right; since the dealer is last, he can switch his card for the card at the top of the deck. However, if a player has a king, he can (and therefore will) refuse to switch his card and simply block the move. In this eventuality the player who attempted the move is stuck with his card.

Consider again Fig. 2. Pierre deals a card to each player, face down, and they look at their cards. Paul goes first and, seeing a 7, decides to switch. He must switch with Jacques, but Jacques has a king and blocks the switch. It is now Pierre’s turn and, being rightly convinced that his 4 will not fare well, he decides to switch his card for the one at the top of the deck. He turns the card over and sees a king. His switch is thus automatically blocked, and Jacques wins the hand.

In the second edition of Essay d’Analyse (Montmort 1713), Montmort added a fifth part that includes correspondence between Nicolaus Bernoulli, Waldegrave, l’Abbé d’Orbais, and himself concerning this game. Although Montmort posed the problem for three players, their discussion focused on the two-person case for the sake of simplicity. Their correspondence eventually establishes that there is no pure strategy equilibrium and provides the calculated value of a mixed-strategy equilibrium. Note, however, that none of them actually provides the details of his calculations; they merely state their results. For the sake of clarity, before examining their correspondence, we will consider a modern approach to analyzing the game and finding its solution.

Let us look at things from Paul’s point of view. To get started, observe that if Paul receives a king, it is obviously in his best interest to hold on to it, since switching would guarantee a loss. However, if Paul were to hold on only to a king and switch any card of lower value, he would be letting go of strong cards, such as jacks and queens, that would very likely win the round. This conservative policy would lead to a likely loss (as we will see, around 66 % of the time), so he needs to be more inclusive. At the same time, if Paul were to hold on to an ace, he would lose the round no matter what card Pierre had; holding on to an ace and other very low cards is therefore a losing strategy. This outlines the strategic landscape: Paul should hold on to high cards and switch low cards. But what exactly should be the threshold value below which it is advisable for Paul and Pierre to switch? And how should this threshold change as a function of their respective positions? Answering these questions requires a more extensive analysis.

In general, let (i, j) be the values of the cards dealt to Paul and Pierre, respectively. By “the value” of a card, we mean its rank 1, 2, …, 13 regardless of suit, where 1 is an ace, 11 is a jack, 12 is a queen, and 13 is a king. Moreover, we let m be the lowest card value that Paul holds on to and n be the lowest card value that Pierre holds on to, with the understanding that each switches any card below his threshold. Each of the 13 possible holding thresholds constitutes a pure (i.e., non-randomized) strategy, and a choice of holding thresholds for Paul and Pierre is a (pure) strategy profile, which we denote \(\langle m,n\rangle\). Our first objective is to find the probability \(P(\langle m,n\rangle )\) that Paul will win when a given strategy profile \(\langle m,n\rangle\) is employed. We will then compare those probabilities to find which strategy is favorable to Paul, and the extent to which it is favorable.

The probability of Paul winning when the pair of cards (i, j) is dealt and the strategy profile \(\langle m,n\rangle\) is employed, denoted \(P_{i,j}(\langle m,n\rangle )\), is simply the product of the probability of dealing (i, j), denoted \(P_{i,j}\), and of the probability that, should Pierre draw a card k from the deck, it is one that makes Paul win. Let \(C_{i,j}(\langle m,n\rangle )\) be the number of cards left in the deck that would make Paul win if Pierre were to draw one of them. Then, we have

$$ P_{i,j}(\langle m,n\rangle ) = P_{i,j}\,\frac{C_{i,j}(\langle m,n\rangle )}{50}. \tag{1} $$

If the game is decided without necessitating the drawing of a third card k, then for convenience we will let \(C_{i,j}(\langle m,n\rangle )\) be 0 if Paul loses and 50 if he wins. Also, the probability of being dealt (i, j) is simply

$$ P_{i,j} = \begin{cases} \dfrac{4}{52}\cdot\dfrac{4}{51} & i\neq j\\[2mm] \dfrac{4}{52}\cdot\dfrac{3}{51} & i = j \end{cases} \tag{2} $$
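As a quick sanity check, these dealing probabilities sum to one over the 13 × 13 ordered pairs, since there are \(13\cdot 12 = 156\) pairs with \(i\neq j\) and 13 pairs with \(i=j\):

$$ \sum_{i\neq j}\frac{4}{52}\cdot\frac{4}{51} + \sum_{i=j}\frac{4}{52}\cdot\frac{3}{51} = \frac{156\cdot 16 + 13\cdot 12}{52\cdot 51} = \frac{2652}{2652} = 1. $$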

Moreover, the probability \(P_{i}(\langle m,n\rangle )\) that Paul is dealt a given card i and goes on to win is obtained by summing over Pierre’s possible cards:

$$ P_{i}(\langle m,n\rangle ) = \sum_{j=1}^{13} P_{i,j}(\langle m,n\rangle ) = \sum_{j=1}^{13} P_{i,j}\,\frac{C_{i,j}(\langle m,n\rangle )}{50}. \tag{3} $$

Finally, the probability that Paul will win under strategy profile \(\langle m,n\rangle\) is given by

$$ P(\langle m,n\rangle ) = \sum_{i=1}^{13} P_{i}(\langle m,n\rangle ) = \sum_{i=1}^{13}\sum_{j=1}^{13} P_{i,j}\,\frac{C_{i,j}(\langle m,n\rangle )}{50}. \tag{4} $$

Paul and Pierre will each decide by comparing those values across all combinations of m and n, so as to determine how to maximize their respective chances of winning.

As we see, most of the work consists in finding the values of \(C_{i,j}(\langle m,n\rangle )\) for the various (i, j) and \(\langle m,n\rangle\), based on the rules of the game. Instead of writing the function \(C_{i,j}(\langle m,n\rangle )\) explicitly as a complicated piecewise function, it might be more instructive to consider a few examples. To begin, suppose the strategy profile employed is \(\langle 8,9\rangle\) and that the dealt cards are (7, 10). Paul keeps any card equal to or higher than an 8; since he has received a 7, he switches with Pierre and gets a 10. Now, Pierre knows that the 7 with which the trade left him is a losing card, and he thus switches it with a card from the deck. By examining the cases, we find that drawing any ten, jack, or queen would make him win (remember, kings will not work, as they would block his switch), for a total of 11 cards. As any other card would make Paul win, we find that \(C_{7,10}(\langle 8,9\rangle ) = 50 - 11 = 39\). Now, suppose that the strategy profile is still \(\langle 8,9\rangle\), but that the players are dealt the cards (9, 10). In this scenario, Paul does not switch since 9 ≥ 8, and neither does Pierre since 10 ≥ 9. As a result, Pierre wins and, following our convention, \(C_{9,10}(\langle 8,9\rangle ) = 0\), since Pierre won the hand without drawing.
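Plugging the first of these counts into Eq. (1) gives, for instance, the contribution of the deal (7, 10) to Paul’s overall chances under \(\langle 8,9\rangle\):

$$ P_{7,10}(\langle 8,9\rangle ) = \frac{4}{52}\cdot\frac{4}{51}\cdot\frac{39}{50} \approx 0.0047. $$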

For more generality, Appendix 1 provides Matlab code to compute the function \(C_{i,j}(\langle m,n\rangle )\) for any admissible values of i, j, m, and n. Using this code, we can easily compute the winning card counts for Paul under any strategy profile. For instance, if we once again consider the strategy profile \(\langle 8,9\rangle\), the resulting winning card counts for Paul for any pair of dealt cards (i, j) are given in Fig. 3.

Fig. 3: Table of winning card counts for Paul for any pair of dealt cards (i, j) when the strategy profile \(\langle 8,9\rangle\) is employed. The rightmost column gives the probabilities (rounded to four digits) that Paul is dealt card i and wins when the strategy profile \(\langle 8,9\rangle\) is employed
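To make the counting concrete, the following minimal Matlab sketch computes such counts. The function name winningCards is merely illustrative and its organization need not match the Appendix 1 code; in particular, it encodes the assumption that, after a successful switch, Pierre (who then knows both cards) draws exactly when the card left to him would otherwise lose.

function C = winningCards(i, j, m, n)
% Number of the 50 remaining cards whose draw by Pierre makes Paul win,
% for dealt cards (i,j) and thresholds <m,n>. By convention, C = 50 if
% Paul wins without a draw and C = 0 if he loses without one.
paul = i; pierre = j;
switched = false;
% Paul's move: he switches below his threshold, unless Pierre holds a king.
if paul < m
    if pierre == 13
        C = 0;                        % blocked by a king: Pierre wins outright
        return
    end
    tmp = paul; paul = pierre; pierre = tmp;
    switched = true;
end
% Pierre's move: after a switch he knows both cards and draws only if he
% would otherwise lose; without a switch he applies his threshold n.
if switched
    pierreDraws = pierre < paul;      % ties go to the dealer
else
    pierreDraws = pierre < n;
end
if ~pierreDraws
    if pierre >= paul
        C = 0;                        % Pierre holds and wins (ties included)
    else
        C = 50;                       % Paul wins without a draw
    end
    return
end
% Pierre draws one of the 50 remaining cards; count the draws favoring Paul.
C = 0;
for k = 1:13
    left = 4 - (k == i) - (k == j);   % copies of rank k still in the deck
    if k == 13
        if pierre < paul              % a drawn king blocks: Pierre keeps his card
            C = C + left;
        end
    elseif k < paul                   % Pierre's new card loses to Paul's
        C = C + left;
    end
end
end

As a check, winningCards(7, 10, 8, 9) should return 39 and winningCards(9, 10, 8, 9) should return 0, matching the two examples above.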

With an efficient way of calculating the values of \(C_{i,j}(\langle m,n\rangle )\), we can then proceed to finding Paul’s advantage under a certain strategy profile using Eqs. (1), (3), and (4). This can also be easily achieved with the Matlab code given in Appendix 2. As an illustration, the values of \(P_{i}(\langle 8,9\rangle )\) (i.e., the probabilities of winning with card i under the strategy profile \(\langle 8,9\rangle\)) and the probability \(P(\langle 8,9\rangle )\) that Paul will win with this strategy profile are given in Fig. 3.
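For readers who wish to experiment without the Appendices, Eqs. (1)–(4) assemble into a few lines of Matlab. This is again only a sketch, built on the illustrative winningCards function above rather than on the Appendix 2 code:

function P = paulWinProbability(m, n)
% Probability that Paul wins under the strategy profile <m,n>, per Eq. (4).
P = 0;
for i = 1:13
    for j = 1:13
        if i ~= j
            Pij = (4/52) * (4/51);    % Eq. (2), distinct ranks
        else
            Pij = (4/52) * (3/51);    % Eq. (2), equal ranks
        end
        P = P + Pij * winningCards(i, j, m, n) / 50;   % Eq. (1), summed as in Eq. (4)
    end
end
end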

Since this is a general way of computing the probability that Paul will win with a given strategy profile, we can obtain the probabilities of winning associated with each of the 13 × 13 strategy profiles. The results are displayed in Fig. 4; this information is the basis for determining the most advantageous course of action for Paul.
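The full table behind Fig. 4 is then obtained by looping over all thresholds (still relying on the illustrative paulWinProbability function above):

probMatrix = zeros(13, 13);
for m = 1:13
    for n = 1:13
        % Row m: Paul holds on to m or higher; column n: Pierre holds on to n or higher.
        probMatrix(m, n) = paulWinProbability(m, n);
    end
end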

Fig. 4: The matrix on the right contains the probabilities \(P(\langle m,n\rangle )\) (rounded to the fifth digit) that Paul wins under each of the 13 × 13 strategy profiles. The matrix on the left results from a first round of removal of dominated strategies

Let us begin our analysis of this information. In what follows, “strategy m” refers to holding on to any card of value m or higher. Moreover, we say that strategy m dominates strategy n if strategy m does at least as well as strategy n against all the strategies that the opponent might employ. First, observe that for Paul strategy 1 is dominated by strategy 2, strategy 2 by strategy 3, strategy 3 by strategy 4, strategy 4 by strategy 5, and strategy 5 by strategy 6. Thus, Paul would be making a clear mistake by holding on to a card lower than a 6. Similarly, strategy 13 is dominated by strategy 12, strategy 12 by strategy 11, strategy 11 by strategy 10, strategy 10 by strategy 9, and strategy 9 by strategy 8; holding on only to cards higher than an 8 would therefore also be a clear mistake. This captures the idea mentioned earlier that holding on only to very high cards would be too conservative, while holding on to low cards would be too inclusive, so that the question revolves around which middle value Paul and Pierre should hold on to. A first round of removal of dominated strategies thus leaves us with the possibilities displayed on the left in Fig. 4, namely holding on to sixes, sevens, or eights.

Now, let us remember that the game is zero-sum, so that Pierre’s probabilities of winning are 1 minus Paul’s. Thus, observing the reduced game on the left in Fig. 4, we see that for Pierre strategies 1–7 are all dominated by strategy 8. Moreover, the overly conservative strategies are also dominated: strategies 10–13 are dominated by strategy 9. Thus, assuming that Paul will not play a dominated strategy, the only strategies that are not dominated for Pierre are 8 and 9. However, assuming that Pierre will restrict himself to those two non-dominated strategies, we observe that for Paul strategy 6 is in turn dominated by strategy 7. At this point the game cannot be reduced any further, so the process of iterated removal of dominated strategies is complete. Whereas we started with 169 possible strategy profiles, we are left with only four that are truly viable for Paul and Pierre. The resulting reduced game is displayed in Fig. 5.
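The elimination carried out in the last two paragraphs can also be automated. The sketch below removes, for each player in turn, any strategy against which some other surviving strategy does at least as well in every case and strictly better in at least one. The function name iteratedRemoval is merely illustrative; applied to the 13 × 13 matrix behind Fig. 4 (probMatrix above), it should leave rows {7, 8} for Paul and columns {8, 9} for Pierre.

function [rows, cols] = iteratedRemoval(P)
% Iterated removal of dominated strategies in the zero-sum game where
% P(m,n) is Paul's probability of winning: Paul picks the row (and prefers
% large entries), Pierre picks the column (and prefers small entries).
rows = 1:size(P, 1);
cols = 1:size(P, 2);
changed = true;
while changed
    changed = false;
    for r = rows                      % Paul's strategies
        for s = rows
            if s ~= r && all(P(s, cols) >= P(r, cols)) && any(P(s, cols) > P(r, cols))
                rows = rows(rows ~= r);
                changed = true;
                break
            end
        end
    end
    for c = cols                      % Pierre's strategies
        for s = cols
            if s ~= c && all(P(rows, s) <= P(rows, c)) && any(P(rows, s) < P(rows, c))
                cols = cols(cols ~= c);
                changed = true;
                break
            end
        end
    end
end
end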

Fig. 5: Reduction of the game by removing the dominated strategies of both players. On the left, the values calculated by the Matlab code provided here; on the right, the rational values provided by Montmort (1713, p. 413)

Could there be a self-enforcing agreement between Paul and Pierre on a strategy profile, i.e., is there a strategy profile among those four such that, if it were played, neither player would gain from changing his strategy? To begin, were Paul to play 7, Pierre would play 8. But if Pierre were to play 8, Paul would play 8 himself. However, if Paul played 8, then Pierre would play 9. Finally, if Pierre played 9, Paul would play 7. Thus, as we see, none of the four remaining strategy profiles is a Nash equilibrium. If we were limited to pure strategy equilibria, we would have to conclude that there is no self-enforcing agreement on a pair of strategies, and consequently that it is impossible to uniquely determine the advantage of Paul over Pierre.

However, a standard procedure in such circumstances is to use an extended reasoning that involves chance. Instead of insisting that the game is solved only by identifying a pure strategy profile in which each player is best-responding to the opponent’s strategy, we allow the players to determine which strategy they will play by using a randomizing device. A solution is then a probability distribution over the pure strategies that guarantees each player the maximal value of his minimum expected payoff. This type of solution is now known as a minimax solution in the case of two-person zero-sum games, and more generally as a mixed-strategy equilibrium.
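In modern notation (the notation is ours, not Montmort’s or Waldegrave’s): writing A for the matrix of Paul’s winning probabilities over the undominated strategies, and letting x and y range over probability vectors on Paul’s and Pierre’s remaining pure strategies, such a solution is a pair \((x^{*},y^{*})\) attaining

$$ \max_{x}\,\min_{y}\; x^{\top}A\,y = \min_{y}\,\max_{x}\; x^{\top}A\,y, $$

an equality guaranteed for finite two-person zero-sum games by von Neumann’s (1928) minimax theorem.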

In the game Le Her, each player has the choice between 13 pure strategies. However, the 11 dominated strategies should not be employed, and so they are assigned probability zero. Paul will thus play strategy 7 (hold on to a seven) with a non-zero probability p and strategy 8 with probability 1 − p, while Pierre will play strategy 8 (hold on to an eight) with probability q and strategy 9 with probability 1 − q. If we suppose that there is $1 in the pot, Paul’s payoff will vary with respect to p and q as follows:

$$ \mathrm{Payoff}(p,q) = p\bigl(a_{11}q + a_{12}(1 - q)\bigr) + (1 - p)\bigl(a_{21}q + a_{22}(1 - q)\bigr), \tag{5} $$

where \(a_{ij}\) denotes Paul’s probability of winning when Paul plays his i-th remaining strategy (7, then 8) and Pierre plays his j-th (8, then 9), as given in the probability matrix of Fig. 5. This function has the characteristic shape of a saddle (see Fig. 6). The so-called saddle point, indicated by the dot in the figure, is the probability allocation that constitutes the mixed-strategy equilibrium. To calculate the (p, q) coordinates of the saddle point, we reason that, since the game is zero-sum, Paul should choose p so as to make Pierre indifferent between holding on to an eight or a nine, and Pierre should choose q so as to make Paul indifferent between holding on to a seven or an eight. This gives us two equations:

$$ a_{11}p + a_{21}(1 - p) = a_{12}p + a_{22}(1 - p) \tag{6} $$

$$ a_{11}q + a_{12}(1 - q) = a_{21}q + a_{22}(1 - q). \tag{7} $$

Solving the two equations, namely \(p = (a_{22}-a_{21})/(a_{11}-a_{12}-a_{21}+a_{22})\) and \(q = (a_{22}-a_{12})/(a_{11}-a_{12}-a_{21}+a_{22})\), and substituting the values from Fig. 5 gives us

$$ p = \frac{5}{8} \qquad 1 - p = \frac{3}{8} \qquad q = \frac{3}{8} \qquad 1 - q = \frac{5}{8}. \tag{8} $$

Thus, Paul should hold on to a seven five times out of eight and switch it the other three, whereas Pierre should hold on to an eight only three times out of eight and switch it the other five, just as Waldegrave had found.
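As a numerical check, the saddle point can be recomputed directly from the reduced 2 × 2 matrix, using the closed-form solution of Eqs. (6) and (7) together with the illustrative paulWinProbability function from above:

% Reduced game: rows are Paul's strategies {7, 8}, columns Pierre's {8, 9}.
A = zeros(2, 2);
paulStrats = [7 8];
pierreStrats = [8 9];
for r = 1:2
    for c = 1:2
        A(r, c) = paulWinProbability(paulStrats(r), pierreStrats(c));
    end
end
D = A(1,1) - A(1,2) - A(2,1) + A(2,2);
p = (A(2,2) - A(2,1)) / D;            % probability that Paul holds on to a seven
q = (A(2,2) - A(1,2)) / D;            % probability that Pierre holds on to an eight
fprintf('p = %.4f, q = %.4f\n', p, q) % should print p = 0.6250, q = 0.3750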

Fig. 6: Contour plot of Paul’s payoff as a function of p and q. The saddle point is indicated by the dot

3 Jouer au Plus Fin

After the publication of his Essay d’Analyse (1708), Montmort sent copies of his book to Johann Bernoulli, and this sparked a correspondence on Le Her involving Montmort, Nicolaus Bernoulli, Waldegrave, and l’Abbé d’Orbais. The correspondence began in 1710 and went on until 1715. The earlier part of this correspondence—slightly corrected and edited by Montmort—was published as part V of the second edition of Essay d’Analyse (1713). Despite initial confusion, they reached a surprisingly refined understanding of the various ways in which Montmort’s problem could be claimed to be solved.

The main elements of the published correspondence are the ones explained earlier. First, they discuss their respective calculations of the probability that Paul and Pierre will win under different strategy profiles and establish that they agree in this respect. Furthermore, after some hesitation, they come to agree that there is no fixed pair of strategies on which both players can settle, so that, in modern terminology, there is no pure strategy equilibrium. Moreover, the idea of using a randomizing device to determine which strategy is to be employed is introduced, most likely by Waldegrave. The randomizing device in question is a bag containing a number of black and white counters. In addition, Waldegrave advances the idea that the ratio of counters that is optimal for Paul is 5 to 3, whereas it is 3 to 5 for Pierre. We have examined this discussion in detail elsewhere (Bellhouse and Fillion 2015). Here, I will examine their later debate on whether this mixed-strategy profile solves Montmort’s problem in a completely satisfactory way.

Montmort thought that the calculation of this optimal mixed strategy did not fully answer the question, and that a full answer was in fact impossible to obtain on purely mathematical grounds. This claim was vehemently opposed by Bernoulli and, two centuries later, by Fisher. Despite having a clear understanding of the lack of a pure strategy equilibrium and of the existence of the optimal mixed strategy \(\langle [5/8,3/8],[3/8,5/8]\rangle\), Montmort makes the following claim:

But how much more often must he switch rather than hold, and in particular what he must do (hic & nunc) is the principal question: the calculation does not teach us anything about that, and I take this decision to be impossible. (p. 405)

The problem, according to him, is that the calculation of the ratios 5:3 and 3:5 does not tell us how probable, in fact, it is that the players will play each strategy, and as a result he claims that a solution is impossible.

The way in which he explains his reasoning is interesting. He believes that it is impossible to prescribe anything that guarantees the best payoff, because the players might always try, and indeed good players will try, to deceive the other players into thinking that they will play something they are not playing, thus trying to outsmart each other (Montmort uses the French “jouer au plus fin”). Waldegrave also emphasizes this point by considering the probability that a player will not play optimally:

What means are there to discover the ratio of the probability that Pierre will play correctly to the probability that he will not? This appears to me to be absolutely impossible […]. (p. 411)

Thus, both Montmort and Waldegrave claim that the solution of the game is impossible, but Bernoulli does not.

Their disagreement concerns what it means to “solve” the game Le Her. Bernoulli claims that the solution is the strategy that guarantees the best minimal gain—what we would call a minimax solution—and that as such there is a solution. However, despite understanding this “solution concept,” Montmort and Waldegrave refuse to affirm that it “solves” the game, since there are situations in which it might not be the best rule to follow, namely, if a player is weak and can be taken advantage of.

Bernoulli disagreed with their views on the relation between “establishing a maxim” and solving the problem of Le Her. As he explains in two letters from 1714,

[o]ne can establish a maxim and propose a rule to conduct one’s game, without following it all the time. We sometimes play badly on purpose, to deceive the opponent, and that is what cannot be decided in such questions, when one should make a mistake on purpose. (p. 144)

The reason for which he considers the mixed strategy profile \(\langle [5/8,3/8],[3/8,5/8]\rangle\) the solution is that playing \([5/8,3/8]\) constitutes the best advice that could be given to Paul:

If, admitting the way of counters, the option of 3 to 5 for Paul to switch with a seven is the best you know, why do you want to give Paul another advice in article 6? It suffices for Paul to follow the best maxim that he could know. (p. 191b)

And he continues: “It is not impossible at this game to determine the lot of Paul.”

Montmort finally reformulated his position and replied to Bernoulli’s insistence that the game has been solved in a letter dated 22 March 1715. In this letter, he also stresses the relation between solving a game and advising the players. However, he distinguishes between the advice that he would put in print, or give to Paul publicly, and the advice he would give to Paul privately. In his view, the public advice would unquestionably be the mixed strategy with a = 3 and b = 5 (the optimal allocation of counters found above), since it is the one that demonstrably brings about the lesser prejudice. However, in the course of an actual game in which Paul faced an ordinary player who is “not a geometer,” he would whisper different advice that could allow Paul to take advantage of his opponent’s weakness. In his view, the objective of an analysis of a game such as Le Her is not only to provide a rule of conduct to otherwise ignorant players, but also to warn them about the potential advantages of using finesse.

It is clear that the disagreement is not based on confusion, but rather on the fact that they are using different concepts of solution. Bernoulli’s concept is in essence the concept of minimax. The concept of solution Montmort and Waldegrave have in mind, however, further depends on the probability of imperfect play (i.e., on the skill level of the players). Thus, in addition to the probability of gain under a pure strategy and the probability allocation required to form mixed strategies, their perspective on the analysis of strategic games also requires that we know the probability that a player will play an inferior strategy. This probability distribution, however, cannot be established on purely mathematical grounds, and it is in this sense that there is no possible solution to the problem. Ultimately, the view they defended is that one should not decide what to do in a strategic game based on minimax payoff, but instead based on expected payoff.

The probability distribution over the set of mixed strategies that Montmort considers has an epistemic and subjective character that echoes the Bayesian tradition. Moreover, both Montmort and Waldegrave suggest that this probability distribution can be interpreted as capturing one’s expectations about the type of player against whom one is playing, which is another key methodological component of Bayesian game theory. Finally, since it uses this probability distribution to capture the possibility that players are weak, Montmort’s perspective on solutions is sensitive to the sort of concerns that were addressed by early work on bounded rationality and on games of incomplete information. Thus, far from leaving common sense unsatisfied, Montmort’s perspective on the solution of games is in many respects similar to the perspectives that led to key developments in game theory in the second half of the twentieth century.