1 Introduction

Financial models based on interacting agents have a long tradition in the economic literature (Hommes 2006)—one of the first references in which the evolution of a market is related to the activity of individual investors dates back to 1974 (Zeeman 1974)—but they have gained relevance in the interdisciplinary literature in relatively recent years (De Martino and Marsili 2006; Samanidou et al. 2007). The complete list of such models is so extensive and their properties so diverse that we can merely sketch here the recurrent traits shared by most of them, and refer the reader to the references cited in Hommes (2006), De Martino and Marsili (2006) and Samanidou et al. (2007).

The pioneering work of Zeeman (1974) already contains one of the more ubiquitous ingredients of the subsequent agent-based models: heterogeneity. Agents are assumed to be heterogeneous to some extent and can therefore be aggregated into one out of a finite set of categories. Since the minimum number of different categories is two, and simplicity is often a plus, investors are usually arranged into two (competing) groups. The terms used to name them and their defining properties are uneven across the literature—chartists and fundamentalists in Zeeman (1974), trend followers and contrarians in De Martino et al. (2004), speculators and producers in Zhang (1999), imitators and optimizers in Conlisk (1980)—but the underlying ideas are similar, and can be well represented by chartists and fundamentalists. Chartists are (sometimes adaptive) agents whose investment strategy is based on the belief that past information may contain clues about the future evolution of the security and, therefore, that they can infer future prices. Fundamentalists are in essence agents who think they can deduce the present value of a firm on the basis of the information currently available, such as dividend payments or earning rates. Fundamentalists operate in a rather predictable way since they expect the market to correct any observed deviation between fundamental and market prices: They sell overpriced securities and buy underpriced ones. The picture is not so simple for chartist-like investors since, in the end, they deploy rule-of-thumb strategies, sometimes based on market indicators like the moving average convergence–divergence (MACD) indicator or the relative strength index (RSI), two tools of technical analysis broadly used by actual financial practitioners. Therefore, the list of available strategies in agent-based models may be so large that, in the most extreme situation, strategies may differ between any pair of investors in the market, as in some instances of the minority game market model (Challet and Zhang 1997; De Martino and Marsili 2006). In fact, any single agent may combine technical trading rules with fundamental ones, or switch between them, which makes the investors' profiles evolve over time. There is no doubt that this diversity adds more heterogeneity to the model.

Another general trait of current models is that agents have bounded rationality (Simon 1979): They decide their actions for the next time step on the basis of a limited and possibly incomplete amount of information. They ignore the beliefs of the rest of the investors and usually cannot evaluate the consequences of their own decisions. Under these circumstances, selfish agents try to maximize a payoff or utility function, a measure of their individual success.

The final ingredient is the pricing mechanism. The usual paradigm when the activity of the agents does not explicitly set the price of the asset is to define a differential equation or a finite difference equation that relates the price evolution to the relevant (global) variables of the model. Since these variables are affected by the mutual interaction of the investors in a complex way, two complementary approaches are generally considered: The behavior of the system is simulated on a computer and/or the complexity is reduced by letting the number of agents approach infinity, the thermodynamic limit.

As we will shortly show, some of the previous ingredients either are not present in this agent-based model or have been introduced with a different philosophy. The model was inspired by a previous article on population dynamics by McKane and Newman (2005), where the authors reported the presence of large oscillations in the species densities due to a finite-size stochastic effect. We translate this idea into financial language by developing a model that describes general aspects of investor dynamics. The behavior of the asset price is first obtained by assuming a simple model of excess demand, and subsequently, we follow the same approach to model the interplay between limit and market orders in a stock market. The overall result exhibits similarities with prevailing agent-based financial models (see, e.g., Beja and Goldman 1980; Kirman 1993; Lux and Marchesi 1999; Cont and Bouchaud 2000; Challet et al. 2001).

The paper is structured in three main sections. Section 2 deals with the agent model strictly speaking: First, we define who the agents are, the three different states in which they can be found at every moment, the mechanisms that govern the changes from one state to another and the corresponding transition rates. Then we derive a master equation that characterizes the time evolution of the system, and analyze the stationary solutions of this equation in the thermodynamic limit. Finally, we find the second-order corrections and show their relevance in finite-size models. In Sect. 3, we establish a first connection between the agent model and market price changes: We simulate the time evolution of the system under representative market conditions, analyze the most relevant traits and compare them with well-known empirical properties of actual financial time series. In Sect. 4, we propose a second identification for the species categories (liquidity providers and liquidity takers), and a different price formation procedure is considered. The outcome presents new properties that are still consistent with what one may find in practice. This reinforces the potential of the model. Conclusions are drawn in Sect. 5, and some technical aspects are left to the appendices.

2 Agent dynamics

As we have just stated, this section deals exclusively with the intrinsic features that the agent interplay generates. To this end, as we will see, we do not need a detailed description of the internal properties of the agents. The most important point to be made here concerns the motivation and plausibility of the agent-based approach introduced.

Throughout this article, we will assume that any trader that may ever operate in our financial market can be accommodated into one of two large, well-defined and mutually exclusive groups. The first and most populated group of investors will constitute what is usually termed noise traders (Challet et al. 2000, 2001). We will assume that each one of these traders acts in a purely random fashion, independently of the rest of the agents in the market. We do not consider this kind of trader individually; they merely act as some sort of thermal bath or noise source, which underpins the stochastic character of the dynamics to be introduced.

The second group of traders is the set of those which we will call qualified investors, although the term informed traders (Hachmeister 2007; Brody et al. 2009) would suit them as well. As we show below, we will connect the price evolution with the collective state of these players, so we will consider that this group comprises the main actors with the greatest financial influence: mutual funds, investment banks or corporations in general. The total number of such participants in a real market is much more moderate (Lillo et al. 2008), which makes it sensible to describe them with a finite-size agent model. We will assume that the instantaneous state of these investors fluctuates within three categories (termed A, B and E) as a consequence of their interaction with noise traders but also with the rest of the qualified investors, the latter being another source of noise.

2.1 The interactions of the species

Let us consider then a finite set of N fully connected interacting agents who, at every instant of time t, may be found in one out of three possible states that we will label by the letters A, B and E. In a very general sense, which must be further refined from case to case, we will assume that an abundance of agents in state A, \(N_A(t)\), is related to a bear market scenario, that an increase in the population on the B side, \(N_B(t)\), leads to a bull market scenario, whereas the market is not sensitive to changes in the number of agents of type E, \(N_E(t)\), beyond the fact that \(N=N_A(t)+N_B(t)+N_E(t)\) is fixed. We delay the precise economic interpretation of states A and B until particular market models are introduced; see Sects. 3 and 4. Note that, in any case, E will always represent a neutral position within our formulation.

The mechanism that allows these agents to change their minds and move from one state to another is based on self- and mutual interactions. Decisions are not affected by the previous history (which renders the mechanism Markovian); they are only constrained by the relative abundances of agents in the states involved and depend on some rate intrinsic to the interaction. This will allow us to describe this model using the language of population dynamics. We have two species living in a finite world: the A's, which will play the role of prey, and the B's, which will be the predators. The E's, those agents without a definite or explicit intention, will act as empty space.

The basic unitary interaction in population problems is the death process, \(A{\mathop {\rightarrow }\limits ^{p}}E\) and \(B{\mathop {\rightarrow }\limits ^{q}}E\). Each one of these two processes (and the same applies to the rest of the interactions) may encompass the aggregate effect of disparate contributions.Footnote 1 In this scheme, p and q are the intensities of Poisson processes, measures of the probability per unit of time that a given active agent, when observed separately, passes into inactivity. The same kind of notation is used in the description of the remaining transitions.

Yet another typical unitary interaction in population models is the spontaneous birth of prey, \(E \rightarrow A\), but this is not considered here. All birth processes are due to those binary interactions that also occur in the system. At this point, it can be useful from a practical point of view to establish the probability \(\nu \) of having a two-component transition rather than a single-component one.

The first two-component interaction that we are going to introduce is \(AB {\mathop {\rightarrow }\limits ^{a}} EE\). This interaction, in a broad sense, conveys a form of agreement between two active investors in such a way that neither of them convinces the other. This annihilation process is not usually considered in population models: It represents a situation in which both individuals, predator and prey, die after fighting. The ordinary result in predator–prey models after AB interactions is predation: \(AB {\mathop {\rightarrow }\limits ^{b}} BB\). In our case, this accounts for the possibility that an active investor may change their evaluation of the market scenario (from bear to bull) due to the predominance of B’s. This may eventually lead to a market bubble. Once again, it may become useful to consider that a fraction \(\lambda \) of AB interactions leads to annihilation, whereas a fraction \(1-\lambda \) ends in predation.

Our third binary interaction is \(AE {\mathop {\rightarrow }\limits ^{c}} AA\), in which an agent that was not interested in operating in the market comes into activity on the A side. From a financial standpoint, this imitative behavior can lead to market panic and ultimately to a crash. Here lies our birth mechanism for prey, which also incorporates into the model a population pressure against unbounded prey growth.

Therefore, as summarized in Table 1, we are assuming that:

  (i) states A and B can spontaneously decay into inactivity;

  (ii) there is a basic non-trivial interaction that is not sensitive to the interchange of the roles of A’s and B’s;

  (iii) B’s can convince A’s only; and

  (iv) A’s can convince E’s only,

where the asymmetry in the last two items expresses the fact that bubbles and crashes in actual stock markets are different in shape (Bouchaud and Cont 1998; Lillo and Mantegna 2000). Since all agents are identical, the heterogeneity of our model resides in these asymmetric interactions.

Table 1 The table summarizes the allowed interactions, the corresponding intensities and the associated transition rates

2.2 The master equation

The complete state of the agent system at a given instant t is fully determined by the number of investors belonging to species A and B, \(N_A(t)\) and \(N_B(t)\), respectively. Since these numbers will be stochastic magnitudes, we are interested in obtaining an expression for \(P(n,m,t)\), the probability of having \(n\,A\)’s and \(m\,B\)’s at time t:

$$\begin{aligned} P(n,m,t)=\Pr \{N_A(t)=n, N_B(t)=m\}. \end{aligned}$$
(1)

To this end, we will consider the transition rates \(T(n',m'|n,m)\), the transition probabilities (per unit of time) between macroscopic states \((n,m)\) and \((n',m')\), in terms of which one can express the master equation (ME), the equation that defines the time evolution of \(P(n,m,t)\):

$$\begin{aligned} \frac{\mathrm{d}P(n,m,t)}{\mathrm{d}t}= & {} \sum _{n'}\sum _{m'} T(n,m|n',m') P(n',m',t)\nonumber \\&-\sum _{n'}\sum _{m'}T(n',m'|n,m)P(n,m,t). \end{aligned}$$
(2)

Based on the above interactions, one has five transition rates which change n and/or m by one unit, see again Table 1, whereas those not listed there are forbidden: i.e., \(T(n,m+1|n,m)=0\), \(T(n+1,m+1|n,m)=0\) and \(T(n+1,m-1|n,m)=0\). Note that the terms in (2) containing \(T(n,m|n,m)\) mutually cancel out and that the Markov character of the model makes it superfluous to consider more sophisticated transition rates in the elaboration of the ME.

With this proviso, Eq. (2) can be rewritten as

$$\begin{aligned} \frac{\mathrm{d}P(n,m,t)}{\mathrm{d}t}= & {} (\alpha _{AA}-\gamma _A) ({\mathscr {E}}_x^{+1}-1)[n P(n,m,t)] \\&+\gamma _B({\mathscr {E}}_y^{+1}-1)[m P(n,m,t)]\\&+ \frac{\alpha _{AB}-\beta _{AB}-\alpha _{AA}}{2} ({\mathscr {E}}_x^{+1}{\mathscr {E}}_y^{+1}-1)\left[ n \frac{m}{N-1} P(n,m,t)\right] \\&+\frac{\alpha _{AB}+\beta _{AB}-\alpha _{AA}}{2} ({\mathscr {E}}_x^{+1}{\mathscr {E}}_y^{-1}-1)\left[ n \frac{m}{N-1} P(n,m,t)\right] \\&+ \alpha _{AA} ({\mathscr {E}}_x^{-1}-1)\left[ n \frac{N-n-m}{N-1} P(n,m,t)\right] , \end{aligned}$$

where we have introduced the following increment/decrement operators

$$\begin{aligned} {\mathscr {E}}_x^{ \pm 1} f(n,m,t)\equiv & {} f(n \pm 1,m,t), \\ {\mathscr {E}}_y^{ \pm 1} f(n,m,t)\equiv & {} f(n,m \pm 1,t), \\ \end{aligned}$$

and five new parameters

$$\begin{aligned} \gamma _A\equiv & {} \frac{2\nu c-(1-\nu ) p}{N}, \end{aligned}$$
(3)
$$\begin{aligned} \gamma _B\equiv & {} \frac{(1-\nu ) q}{N}, \end{aligned}$$
(4)
$$\begin{aligned} \alpha _{AA}\equiv & {} \frac{2 \nu c}{N}, \end{aligned}$$
(5)
$$\begin{aligned} \alpha _{AB}\equiv & {} 2 \nu \frac{\lambda a+(1-\lambda )b +c}{N}, \end{aligned}$$
(6)
$$\begin{aligned} \beta _{AB}\equiv & {} 2 \nu \frac{(1-\lambda )b-\lambda a}{N}, \end{aligned}$$
(7)

which encode all the relevant information of the model parameterization. Let us stress that \(\lambda \) and \(\nu \) were defined in order to clarify how the update mechanism can be approximately implemented, see Fig. 1, but they do not introduce further degrees of freedom in the problem since they would disappear after a redefinition of the constants. This is the case if one uses the exact algorithm by Gillespie (1976) in the simulation of the system, as we have done.

Fig. 1

Flux diagram for a discrete-time update procedure of the state of the agents
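For readers who want to reproduce the dynamics, the following sketch shows one possible implementation of the exact simulation scheme just mentioned. The five propensities are read off the master equation written above; the routine, its name and its arguments are ours and purely illustrative, and it assumes that the parameters of Eqs. (3)–(7) are supplied directly (all satisfying the positivity constraints discussed in Sect. 2.3).

```python
import numpy as np

def gillespie_run(N, gamma_A, gamma_B, alpha_AA, alpha_AB, beta_AB,
                  n0, m0, t_max, rng=None):
    """Exact (Gillespie 1976) realization of the agent dynamics.

    The five propensities below are read off the master equation: A death,
    B death, annihilation AB -> EE, predation AB -> BB and birth AE -> AA.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, m, t = n0, m0, 0.0
    times, nA, nB = [t], [n], [m]
    # population changes (dn, dm) associated with each reaction channel
    jumps = ((-1, 0), (0, -1), (-1, -1), (-1, +1), (+1, 0))
    while t < t_max:
        pairs = n * m / (N - 1)                # AB encounters
        births = n * (N - n - m) / (N - 1)     # AE encounters
        rates = np.array([
            (alpha_AA - gamma_A) * n,                       # A -> E
            gamma_B * m,                                    # B -> E
            0.5 * (alpha_AB - beta_AB - alpha_AA) * pairs,  # AB -> EE
            0.5 * (alpha_AB + beta_AB - alpha_AA) * pairs,  # AB -> BB
            alpha_AA * births,                              # AE -> AA
        ])
        total = rates.sum()
        if total <= 0.0:                       # absorbing state: market death
            break
        t += rng.exponential(1.0 / total)      # waiting time to the next event
        dn, dm = jumps[rng.choice(5, p=rates / total)]
        n, m = n + dn, m + dm
        times.append(t); nA.append(n); nB.append(m)
    return np.array(times), np.array(nA), np.array(nB)
```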

The relevance of the new parameters will become noticeable shortly. Suffice it to say for the moment that we will proceed as if they were independent of the size of the system in what follows, because we will consider the expansion of the ME in terms of powers of \(N^{-1/2}\). To this end, let us define \(R_{A,B}(t)\),

$$\begin{aligned} R_{A,B}(t)\equiv \lim _{N\rightarrow \infty }{\mathbb {E}}[N_{A,B}(t)]/N, \end{aligned}$$

and introduce two new stochastic processes, X(t) and Y(t), in such a way that

$$\begin{aligned} N_A(t)= & {} N\ R_A(t) + \sqrt{N} X(t), \end{aligned}$$
(8)
$$\begin{aligned} N_B(t)= & {} N\ R_B(t) + \sqrt{N} Y(t) \end{aligned}$$
(9)

hold. X(t) and Y(t) are thus responsible for the fluctuations of \(N_A(t)\) and \(N_B(t)\) around their mean values. It is expected that the strength of those fluctuations will diminish as the system reaches the thermodynamic limit, that is, when \(N \gg 1\). Note that this approach implies that, for any two given values of the population of the species, n and m, we will have that

$$\begin{aligned} n= & {} N\ R_A(t) + \sqrt{N} x,\\ m= & {} N\ R_B(t) + \sqrt{N} y, \end{aligned}$$

where x and y—as well as \(R_A(t)\) and \(R_B(t)\)—are real magnitudes even though n and m are integers. In such a situation, increment/decrement operators become partial differential operators (see Van Kampen 1992, chap. X),

$$\begin{aligned} {\mathscr {E}}_x^{ \pm 1}= & {} 1 \pm \frac{1}{{\sqrt{N} }}\partial _x + \frac{1}{{2N}}\partial _{xx}^2 + {\mathscr {O}}(N^{ - 3/2} ),\\ {\mathscr {E}}_y^{ \pm 1}= & {} 1 \pm \frac{1}{{\sqrt{N} }}\partial _y + \frac{1}{{2N}}\partial _{yy}^2 + {\mathscr {O}}(N^{ - 3/2} ). \end{aligned}$$

Finally note that \(P(n,m,t)\) must be replaced by \(\varPi (x,y,t)\),

$$\begin{aligned} \varPi (x,y,t)\mathrm{d}x \mathrm{d}y \equiv \Pr \{x<X(t)\leqslant x+\mathrm{d}x,y<Y(t)\leqslant y+\mathrm{d}y\}, \end{aligned}$$

through

$$\begin{aligned} P(n,m,t) =\frac{1}{N}\varPi \left( \frac{n-N R_A}{\sqrt{N}},\frac{m-N R_B}{\sqrt{N}},t\right) \mathrm{d}x\mathrm{d}y, \end{aligned}$$

which affects the time derivative term in the ME in the following way:

$$\begin{aligned} \frac{\mathrm{d}P}{\mathrm{d}t} =-\left[ \frac{1}{\sqrt{N}}\frac{\mathrm{d}R_A}{\mathrm{d}t}\partial _x \varPi +\frac{1}{\sqrt{N}}\frac{\mathrm{d}R_B}{\mathrm{d}t}\partial _y \varPi -\frac{1}{N}\partial _t \varPi \right] \mathrm{d}x \mathrm{d}y. \end{aligned}$$

2.3 First-order stationary solutions

The first-order approximation of the ME collects terms of order \(N^{ - 1/2}\), ignores those of \({\mathscr {O}}(N^{ -1})\) and leads to a set of coupled Volterra equations for \(R_A(t)\) and \(R_B(t)\),

$$\begin{aligned} \frac{\mathrm{d}R_A}{\mathrm{d}t}= & {} \left[ \gamma _A - \alpha _{AA} R_A -\alpha _{AB} R_B \right] R_A, \end{aligned}$$
(10)
$$\begin{aligned} \frac{\mathrm{d}R_B}{\mathrm{d}t}= & {} \left[ \beta _{AB} R_A-\gamma _B\right] R_B. \end{aligned}$$
(11)

Let us analyze the factors appearing in these equations. \(\gamma _A\) as defined in Eq. (3) represents a trade-off between a positive term that comes from the imitation influence and a negative term that measures the death rate of prey. If positive, it corresponds to an effective birth rate of prey in Eq. (10). Recall, however, that in this system prey suffer from a population pressure, instigated by the imitation interaction, that constrains their growth; see the definition of \(\alpha _{AA}\) in (5). The term with the \(\alpha _{AB}\) factor accounts for the reduction in the number of prey due to all binary interactions, not only predation, Eq. (6). The \(\beta _{AB}\) term appearing in Eq. (11) is a consequence of the imbalance between the predation and annihilation alternatives, as can be observed in (7), whereas \(\gamma _B\) measures exclusively the death rate of predators, expression (4). Summing up, there are two parameters, \(\gamma _A\) and \(\beta _{AB}\), with no definite sign, whereas \(\gamma _B\), \(\alpha _{AA}\) and \(\alpha _{AB}\) are positive constants ab initio.

Equations (10) and (11) present three stationary solutions for which

$$\begin{aligned} \frac{\mathrm{d}R_A}{\mathrm{d}t}=\frac{\mathrm{d}R_B}{\mathrm{d}t}=0. \end{aligned}$$

The first solution is the trivial one, \(R_A=R_B=0\). It represents the death of the market due to a complete lack of activity. This is a feasible scenario that threatens any real market. For instance, investors may lose interest in any given commodity that becomes useless or exhausted. The stability analysis of this fixed point determines that it will be a saddle point if \(\gamma _A>0\); otherwise, it becomes stable. The analysis of the second stationary solution, \(R_A=\gamma _A/\alpha _{AA}\equiv M/N<1\)—note that \(\gamma _A<\alpha _{AA}\) by construction, cf. expressions (3) and (5)—and \(R_B=0\), leads to the constraint

$$\begin{aligned} 0<\frac{\gamma _B}{\beta _{AB}}<\frac{M}{N}, \end{aligned}$$
(12)

if one wants to avoid conferring stability to this fixed point as well.Footnote 2 In conclusion, all the parameters defined in (3)–(7) must be positive definite.
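A brief sketch of the linearization behind these stability statements (our own algebra, obtained directly from Eqs. (10) and (11)): the Jacobian of the flow reads

$$\begin{aligned} J(R_A,R_B)=\begin{pmatrix} \gamma _A-2\alpha _{AA}R_A-\alpha _{AB}R_B & -\alpha _{AB}R_A\\ \beta _{AB}R_B & \beta _{AB}R_A-\gamma _B \end{pmatrix}. \end{aligned}$$

At the trivial fixed point its eigenvalues are \(\gamma _A\) and \(-\gamma _B\), hence a saddle point whenever \(\gamma _A>0\); at \((M/N,0)\) they are \(-\gamma _A\) and \(\beta _{AB}M/N-\gamma _B\), so requiring this second point to be unstable is precisely the condition stated in Eq. (12).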

We must point out that the presence of those unstable equilibrium solutions is not a flaw, but a merit of the model, as is the fact that the remaining stationary solution

$$\begin{aligned} R_A=R_A^\circ\equiv & {} \frac{\gamma _B}{\beta _{AB}}, \end{aligned}$$
(13)
$$\begin{aligned} R_B=R_B^\circ\equiv & {} \frac{\gamma _A \beta _{AB}-\gamma _B \alpha _{AA}}{\alpha _{AB} \beta _{AB}}, \end{aligned}$$
(14)

is always present, accessible and corresponds to a stable fixed point.

Regarding the occurrence of the fixed point, it is evident that \(R_A^\circ >0\), and Eq. (12) leads to \(R_A^\circ<M/N<1\). The same equation determines that \(R_B^\circ >0\). Also \(R_B^\circ <1\), as can be proved as follows:

$$\begin{aligned} R_B^\circ = 1- \frac{(\alpha _{AB}-\gamma _A) \beta _{AB}+\gamma _B \alpha _{AA}}{\alpha _{AB} \beta _{AB}}<1, \end{aligned}$$

because trivially \(\alpha _{AB}>\gamma _A\), cf. Eqs. (3) and (6). We can also show that \(R_A^\circ +R_B^\circ <1\),

$$\begin{aligned} R_A^\circ +R_B^\circ= & {} 1- \frac{(\alpha _{AB}-\gamma _A) \beta _{AB}+ (\alpha _{AA}-\alpha _{AB})\gamma _B}{\alpha _{AB} \beta _{AB}}\\< & {} 1- \frac{\gamma _B}{\gamma _A}\frac{(\alpha _{AB}-\gamma _A) \alpha _{AA}+ (\alpha _{AA}-\alpha _{AB})\gamma _A}{\alpha _{AB} \beta _{AB}}\\= & {} 1- \frac{\gamma _B(\alpha _{AA}-\gamma _A)}{\gamma _A \beta _{AB}}<1, \end{aligned}$$

because \(\alpha _{AA}>\gamma _A\) as we have just pointed out above.

The analysis of the stability of this fixed point leads to the conclusion that the point is stable and that the transient term will exhibit oscillations when \(\omega _0\in {\mathbb {R}}^+\),

$$\begin{aligned} \omega _0\equiv \sqrt{\alpha _{AB} \beta _{AB} R_A^\circ R_B^\circ -\frac{1}{4}\left( \alpha _{AA} R_A^\circ \right) ^2}, \end{aligned}$$
(15)

which is true whenever

$$\begin{aligned} \frac{\alpha _{AA}}{\beta _{AB}}<2 \left( \sqrt{1+\frac{\gamma _A}{\gamma _B}}-1\right) . \end{aligned}$$

When the system shows transient oscillations, there is a single characteristic timescale for the decay rate,

$$\begin{aligned} \tau _0=\frac{2}{\alpha _{AA} R_A^\circ }, \end{aligned}$$
(16)

and for \(t\gg \tau _0\), the system would reach the stable solution. This assertion is no longer true when a second decay rate appears. Let us define

$$\begin{aligned} T_0^{-2}\equiv \tau _0^{-2}-\alpha _{AB} \beta _{AB} R_A^\circ R_B^\circ <\tau _0^{-2}. \end{aligned}$$
(17)

If \(T_0^{-2}>0\), the steady state is reached when \(t^{-1} \ll \tau _0^{-1}-T_0^{-1}\). Therefore, we may define \(t_{0}\), \(t_{0}^{-1} \equiv \tau _0^{-1}-\mathfrak {R}[T_0^{-1}]\), and the steady state is always achieved for \(t\gg t_{0}\).
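To make the origin of \(\tau _0\), \(\omega _0\) and \(T_0\) explicit, note (our own algebra) that at the third fixed point the Jacobian reduces to

$$\begin{aligned} J(R_A^\circ ,R_B^\circ )=\begin{pmatrix} -\alpha _{AA}R_A^\circ & -\alpha _{AB}R_A^\circ \\ \beta _{AB}R_B^\circ & 0 \end{pmatrix}, \quad \lambda _{\pm }=-\frac{1}{\tau _0}\pm \sqrt{\frac{1}{\tau _0^{2}}-\alpha _{AB}\beta _{AB}R_A^\circ R_B^\circ }, \end{aligned}$$

so both eigenvalues have negative real part (stability), and the square root equals \(\pm T_0^{-1}\) when its argument is positive and \(\pm i\omega _0\) otherwise, in agreement with Eqs. (15)–(17).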

After the transient regime, and whenever N is finite, we expect the prey and predator densities, \(N_A(t)/N\) and \(N_B(t)/N\), to attain their limit values \(R_A^\circ \) and \(R_B^\circ \) and to exhibit some fluctuating activity afterward. Since the characteristic size of the fluctuations is of order \(N^{-1/2}\), a naive analysis could lead to the conclusion that if we have, let us say, 1000 interacting agents, the error made in neglecting the remaining terms in the ME should be around \(3.2\%\). In Fig. 2, we can find the outcome of a realization of the model with \(N=1000\)—the complete set of parameter specifications is listed in Sect. 3. The example shows that in such a system fluctuations may be larger than expected, and further corrections to the first-order equations must be taken into account (Challet and Marsili 2003; McKane and Newman 2005).

Fig. 2

Time evolution of prey and predator densities (solid red lines) for an exact realization of our interacting agent model with \(N=1000\). The dashed black line depicts the first-order approach to the problem, whereas the stationary solution is shown in green. We can see how fluctuations in both populations are larger than expected (color figure online)
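The dashed mean-field curves of Fig. 2 can be reproduced by integrating Eqs. (10) and (11) directly; a minimal sketch follows. The numerical rate values are our own back-calculation from the configuration quoted in Sect. 3 (\(\chi =0.2\), \(\varepsilon =0.643\), \(\eta =0.4\), \(\xi =0.2\), \(\tau _0=10\) min, through the re-parameterization introduced in Sect. 2.5), so treat them as illustrative; they reproduce \(R_A^\circ =0.2\), \(R_B^\circ \approx 0.206\) and \(\tau _0=10\) min.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rates per minute, back-calculated from the Sect. 3 configuration (illustrative)
gamma_A, gamma_B = 0.7144, 0.06
alpha_AA, alpha_AB, beta_AB = 1.0, 2.5, 0.3

def first_order(t, R):
    """Right-hand side of the deterministic equations (10)-(11)."""
    RA, RB = R
    return [(gamma_A - alpha_AA * RA - alpha_AB * RB) * RA,
            (beta_AB * RA - gamma_B) * RB]

# Integrate the densities over, e.g., one trading day (480 min)
sol = solve_ivp(first_order, (0.0, 480.0), [0.05, 0.05], max_step=0.5)

# Stationary values predicted by Eqs. (13)-(14), for comparison
RA_star = gamma_B / beta_AB                                                # 0.2
RB_star = (gamma_A * beta_AB - gamma_B * alpha_AA) / (alpha_AB * beta_AB)  # ~0.206
```

A stochastic realization obtained with the Gillespie routine sketched in Sect. 2.2, run with the same rates and \(N=1000\), can then be overlaid to visualize the magnified fluctuations.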

A final word on the N dependency of the above expressions before exploring the next-to-leading-order terms in the ME. The analysis in progress relies on the fact that the parameters defined in Eqs. (3)–(7) are independent of N. Note, however, that the expressions for \(R_{A}^\circ \), \(R_{B}^\circ \) and M/N are not sensitive to this requirement. It only affects those constants in which time is involved, like \(\tau _0\) or \(\omega _0\): See in Fig. 1 how the time needed to update all the agents is not of order \(\delta t\), but of order \(N\delta t\).

2.4 Beyond the first-order equations

When one gathers the terms of order \(N^{-1}\) in the ME expansion, a Fokker–Planck equation for \(\varPi (x,y,t)\) emerges:

$$\begin{aligned} \partial _t \varPi= & {} \left[ -\gamma _A + 2 \alpha _{AA} R_A+\alpha _{AB} R_B\right] \partial _x (x \varPi ) \\&+ \alpha _{AB} R_A \partial _x(y \varPi ) -\beta _{AB} R_B \partial _y(x \varPi )+\left[ \gamma _B-\beta _{AB} R_A\right] \partial _y(y \varPi )\\&+\frac{R_A}{2}\left[ -\gamma _A + \alpha _{AA} (2- R_A - 2R_B) + \alpha _{AB} R_B \right] \partial ^2_{xx} \varPi \\&+ \frac{R_B}{2}\left[ \gamma _B + (\alpha _{AB}-\alpha _{AA}) R_A \right] \partial ^2_{yy} \varPi - \beta _{AB} R_A R_B \partial ^2_{xy} \varPi . \end{aligned}$$

If we restrict our analysis of the previous equation to times large enough to let \(R_A(t)\) and \(R_B(t)\) reach their steady-state values, \(R_A^\circ \) and \(R_B^\circ \), the expression simplifies considerably:

$$\begin{aligned} \partial _t \varPi= & {} \mu _{xx} \partial _x (x \varPi ) + \mu _{xy} \partial _x(y \varPi ) -\mu _{yx} \partial _y(x \varPi )\\&+\frac{1}{2} \sigma ^2_x \partial ^2_{xx} \varPi + \frac{1}{2}\sigma ^2_y \partial ^2_{yy} \varPi - \rho \sigma _x \sigma _y \partial ^2_{xy} \varPi , \end{aligned}$$

with

$$\begin{aligned} \mu _{xx}\equiv & {} \alpha _{AA} R_A^\circ =\frac{2}{\tau _0}, \end{aligned}$$
(18)
$$\begin{aligned} \mu _{xy}\equiv & {} \alpha _{AB} R_A^\circ , \end{aligned}$$
(19)
$$\begin{aligned} \mu _{yx}\equiv & {} \beta _{AB} R_B^\circ , \end{aligned}$$
(20)
$$\begin{aligned} \sigma ^2_x\equiv & {} 2\alpha _{AA}R_A^\circ \left( 1- R_A^\circ - R_B^\circ \right) , \end{aligned}$$
(21)
$$\begin{aligned} \sigma ^2_y\equiv & {} R_A^\circ R_B^\circ \left( \beta _{AB} + \alpha _{AB}-\alpha _{AA} \right) , \end{aligned}$$
(22)
$$\begin{aligned} \rho\equiv & {} \beta _{AB}\frac{R_A^\circ R_B^\circ }{\sigma _x \sigma _y}, \end{aligned}$$
(23)

positive definite quantities.Footnote 3 Therefore, we have a linear multivariate Fokker–Planck equation for the joint probability density of X(t) and Y(t), whose solution can be systematically obtained after some algebra (see again Van Kampen 1992, chap. X). An alternative approach is based on the following set of coupled (Itô) stochastic differential equations:

$$\begin{aligned} \mathrm{d}X= & {} -\mu _{xx} X\mathrm{d}t - \mu _{xy} Y \mathrm{d}t+ \sigma _x \mathrm{d}W_1, \end{aligned}$$
(24)
$$\begin{aligned} \mathrm{d}Y= & {} \mu _{yx} X \mathrm{d}t-\rho \sigma _y \mathrm{d}W_1 + \sigma _y \sqrt{1-\rho ^2}\mathrm{d}W_2, \end{aligned}$$
(25)

where \(W_1\) and \(W_2\) are two independent Wiener processes. Note that the same set of equations can be recovered from the Kramers–Moyal expansion of the ME, see “Appendix A.”
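As an alternative to solving this Fokker–Planck equation explicitly, the fluctuation processes can be sampled directly from Eqs. (24) and (25). The sketch below is a plain Euler–Maruyama discretization; the function name, arguments and step size are ours.

```python
import numpy as np

def sample_fluctuations(mu_xx, mu_xy, mu_yx, sigma_x, sigma_y, rho,
                        dt, n_steps, rng=None):
    """Euler-Maruyama integration of the linear SDEs (24)-(25) for X(t), Y(t)."""
    rng = np.random.default_rng() if rng is None else rng
    X = np.zeros(n_steps + 1)
    Y = np.zeros(n_steps + 1)
    sq_dt = np.sqrt(dt)
    for k in range(n_steps):
        dW1, dW2 = rng.normal(0.0, sq_dt, size=2)   # independent Wiener increments
        X[k + 1] = X[k] - (mu_xx * X[k] + mu_xy * Y[k]) * dt + sigma_x * dW1
        Y[k + 1] = (Y[k] + mu_yx * X[k] * dt
                    - rho * sigma_y * dW1
                    + sigma_y * np.sqrt(1.0 - rho**2) * dW2)
    return X, Y
```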

2.5 The magnifying effect

To explore the reason for the abnormal magnitude of the fluctuations, we should compare X(t) and Y(t) with \(R_A^\circ \) and \(R_B^\circ \), respectively. A quick analysis reveals that mean values are not useful in this task because \(\lim _{t\rightarrow \infty } {\mathbb {E}}[X(t)]=\lim _{t\rightarrow \infty } {\mathbb {E}}[Y(t)]=0\)—remember that (24) and (25) are valid for \(t\gg t_{0}\). We concentrate on variances and covariances instead. In “Appendix B,” we show how the stationary values of \({\mathbb {E}}[X^2(t)]\), \({\mathbb {E}}[Y^2(t)]\) and \({\mathbb {E}}[X(t)Y(t)]\) follow:

$$\begin{aligned}&\displaystyle C_{xx}(0)=\lim _{t\rightarrow \infty } {\mathbb {E}}[X^2(t)]=\frac{\mu _{yx} \sigma ^2_x+ \mu _{xy} \sigma ^2_y}{2 \mu _{xx}\mu _{yx}}, \\&\displaystyle C_{yy}(0)=\lim _{t\rightarrow \infty } {\mathbb {E}}[Y^2(t)]=\frac{\mu _{yx}^2 \sigma _x^2 +\left( \mu _{xx}^2 +\mu _{xy} \mu _{yx}\right) \sigma ^2_y-2\rho \mu _{xx}\mu _{yx}\sigma _x \sigma _y}{2 \mu _{xx}\mu _{xy}\mu _{yx}}, \\&\displaystyle C_{xy}(0)=\lim _{t\rightarrow \infty } {\mathbb {E}}[X(t)Y(t)]=-\frac{\sigma ^2_y}{2 \mu _{yx}}, \end{aligned}$$

and fluctuations will be appreciable if these quantities are larger than \((R_A^\circ )^2\), \((R_B^\circ )^2\) and \(R_A^\circ R_B^\circ \), respectively. If we define the magnifying factors \(\varOmega _{xx}\), \(\varOmega _{yy}\) and \(\varOmega _{xy}\) as the corresponding quotients of these magnitudes, e.g.,

$$\begin{aligned} \varOmega _{xy}\equiv \lim _{t\rightarrow \infty } \frac{{\mathbb {E}}[X(t)Y(t)]}{R_A^\circ R_B^\circ }= \frac{C_{xy}(0)}{R_A^\circ R_B^\circ }, \end{aligned}$$

fluctuations will be prominent when \(\varOmega \sim N\), because then one can overcome the \(N^{-1/2}\) damping factor of the second-order corrections. The analysis of the possible values that \(\varOmega \) can take is difficult because of the inner relationships among \(\mu _{xx}\), \(\mu _{xy}\), \(\mu _{yx}\), \(\sigma _x\), \(\sigma _y\) and \(\rho \). In fact, the difficulty is inherited from \(\gamma _A\), \(\gamma _B\), \(\alpha _{AA}\), \(\alpha _{AB}\) and \(\beta _{AB}\), which are neither bounded nor independent. It is then useful to introduce the following (final) re-parameterization:

$$\begin{aligned} \alpha _{AA}= & {} \frac{1}{ \chi }\frac{2}{\tau _0},\\ \alpha _{AB}= & {} \frac{1}{\eta \chi }\frac{2}{\tau _0},\\ \beta _{AB}= & {} \frac{\xi }{ \chi }\frac{1-\eta }{\eta } \frac{2}{\tau _0},\\ \gamma _A= & {} \left[ 1+\frac{1-\chi }{\chi }\varepsilon \right] \frac{2}{\tau _0},\\ \gamma _B= & {} \xi \frac{1-\eta }{\eta } \frac{2}{\tau _0}, \end{aligned}$$

where the four new variables \(\chi \), \(\varepsilon \), \(\eta \) and \(\xi \) are in the (0, 1) range and can be arbitrarily set. With the proposed parameterization, all the constraints that affect the old parameters (the pure algebraic ones, as well as those coming from stability considerations) are identically satisfied,Footnote 4 and \(\tau _0\) carries the characteristic timescale of the interactions at the microscopic level. The magnifying factors in the new parameters read

$$\begin{aligned} \varOmega _{xx}= & {} (1-\eta \varepsilon )\frac{1-\chi }{\chi ^2} + \frac{1}{2}\frac{1+\xi }{\xi } \frac{1}{\chi \eta }, \\ \varOmega _{yy}= & {} \left\{ \left[ \frac{1-\chi }{\chi }(1-\eta \varepsilon )-1\right] \eta \xi +\frac{1+\xi }{2}\right\} \frac{1-\eta }{(1-\chi )\eta ^2\varepsilon }\\&+\frac{1}{2}\frac{1+\xi }{\xi } \frac{\chi }{(1-\chi )^2\eta \varepsilon ^2}, \\ \varOmega _{xy}= & {} -\frac{1}{2}\frac{1+\xi }{\xi }\frac{1}{(1-\chi )\eta \varepsilon }, \end{aligned}$$

and the stationary first-order solutions are \(R_A^\circ = \chi \) and \(R_B^\circ = (1-\chi )\eta \varepsilon \).

The first point to be noted is that \(\tau _0\) does not appear in any of these expressions. So, the magnification effect does not depend on the characteristic timescale of the correlations. The second aspect of importance is that, for fixed values of \(\chi \), \(\varepsilon \) and \(\eta \), the magnifying factors become unboundedly large as \(\xi \rightarrow 0\), and this parameter does not contribute to the values of \(R_A^\circ \) and \(R_B^\circ \). Magnification can thus be achieved for any (regular) value of the species stationary densities. Another favorable scenario is \(R_A^\circ \rightarrow 0\) and \(R_B^\circ \rightarrow 0\): Check, for instance, how for \(\chi \rightarrow 0\), \(\varOmega _{xx}\rightarrow \infty \). This implies that the phenomenon is relevant in sparse systems as well, in spite of the fact that in such cases N may be very large. Note finally that magnification is not connected with the presence of oscillations of any particular frequency. On the one hand, the condition that determines that \(T_0^{-1}\) replaces \(\omega _0\) is

$$\begin{aligned} \xi <\frac{\chi \eta }{4(1-\chi )(1-\eta )\varepsilon }, \end{aligned}$$

and, as we have shown above, \(\xi \rightarrow 0\) always leads to magnification. This is reasonable since, for a fixed \(\tau _0\), \(T_0\) increases the microscopic correlation range—see “Appendix B.” On the other hand, for fixed values of \(\varepsilon \), \(\eta \) and \(\xi \), \(\omega _0\) spans the whole positive real axis as \(\chi \) varies. Therefore, in principle, one can consider models with either a large value of \(\omega _0\), and reproduce the typical bid–ask bounce in a liquid market, as in Montero et al. (2005), or a smaller one, and capture some seasonal character present in the market evolution, like in the electricity market analyzed by Perelló et al. (2006). As shown in detail in “Appendix B” and illustrated in Fig. 2, the oscillatory behavior is also present in the second-order terms.

Let us see magnification at work in a practical example. For clarity, we will condense the three magnifying factors defined above into a single plot. To this end, we define \(\varOmega _{zz}\),

$$\begin{aligned} \varOmega _{zz}\equiv \varOmega _{xx}+\varOmega _{yy}-2\varOmega _{xy}=\lim _{t\rightarrow \infty } {\mathbb {E}}\left[ \left( \frac{Y(t)}{R_B^\circ }-\frac{X(t)}{R_A^\circ }\right) ^2\right] , \end{aligned}$$

a relevant quantity in the pricing models to be introduced below. Further, we assume that \(R_A^\circ =\chi \) is kept fixed, and that \(\varepsilon \) changes in a way that \(R_B^\circ =R_A^\circ \) is guaranteed—we are interested in models in which no side is prioritized. This leaves \(\eta \) and \(\xi \) as the only free parameters.Footnote 5 In particular, we have set \(R_A^\circ =R_B^\circ =0.2\), since we are looking for mean states that are macroscopically populated. In Fig. 3, we observe some contour lines that represent configurations with the same amplification level, and how these lines cross (or do not cross) the threshold that delimits those configurations with and without oscillating properties. Thus, for instance, we have marked with a small circle the location of the following parameter set: \(\chi =0.2\), \(\varepsilon =0.625\), \(\eta =0.4\), \(\xi =0.2\). With this configuration, \(M=0.7 N\), and the amplification factor is about one hundred.
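The quoted amplification can be checked directly from the closed-form expressions given above; the short sketch below (an illustrative helper of our own) evaluates them at \(\chi =0.2\), \(\varepsilon =0.625\), \(\eta =0.4\), \(\xi =0.2\) and gives \(\varOmega _{zz}\approx 94\), indeed of the order of one hundred.

```python
def magnifying_factors(chi, eps, eta, xi):
    """Magnifying factors in the (chi, epsilon, eta, xi) parameterization."""
    O_xx = (1 - eta * eps) * (1 - chi) / chi**2 \
           + 0.5 * (1 + xi) / xi / (chi * eta)
    O_yy = ((((1 - chi) / chi) * (1 - eta * eps) - 1) * eta * xi + 0.5 * (1 + xi)) \
           * (1 - eta) / ((1 - chi) * eta**2 * eps) \
           + 0.5 * (1 + xi) / xi * chi / ((1 - chi)**2 * eta * eps**2)
    O_xy = -0.5 * (1 + xi) / xi / ((1 - chi) * eta * eps)
    return O_xx, O_yy, O_xy

O_xx, O_yy, O_xy = magnifying_factors(chi=0.2, eps=0.625, eta=0.4, xi=0.2)
O_zz = O_xx + O_yy - 2 * O_xy   # about 94, to be compared with N = 1000
```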

Fig. 3

Contour plot of the magnifying factor \(\varOmega _{zz}\). The magnifying factor values, to be compared with N, are 100 (dotted line), 250 (dashed line) and 1000 (dot dashed line). The solid line is the borderline between the zones with oscillating behavior (\(\omega _0\)) and without it (\(T_0\)). In this case, since \(R_A^\circ =R_B^\circ =0.2\), \(\eta >0.25\)

3 Price dynamics in an excess demand model

As we have stated above, the formula that will determine the price dynamics must depend on the nature of the species. Therefore, it is time to identify the A and B states and define how the evolution of the population of agents in each category translates into price changes. Within this first model, we will consider that \(N_A(t)\) represents the supply and \(N_B(t)\) the demand in a certain financial market.

Specifically, let us consider the case of a market that operates through a limit order book: Limit orders are orders with a limit price that represents the minimum (respectively, maximum) price the investor is willing to accept for selling (respectively, buying) a given number of shares, the volume of the order. The limit order is placed in the so-called limit order book, which is visible to the rest of the qualified investors, and it remains there until one of the two following major events takes place: Someone accepts the ask (respectively, bid) price and the transaction is completed, or the investor removes the order from the book. Market orders, on the other hand, are orders that automatically match the best opposite limit order in the limit order book.

The five interactions listed in Table 1 can be interpreted here as follows: If A represents an ask order, a sell order, and B represents a bid order, a buy order, then \(A\rightarrow E\) and \(B \rightarrow E\) can be either a canceled order or the result of a trade between the limit order and an incoming market order. A trade between two agents leads to \(AB\rightarrow EE\), whereas \(AB \rightarrow BB\) is the replacement of a sell order by a buy order because the agent anticipates a change in the market evolution, from bear to bull. The opposite situation, a change from bull to bear, makes inactive investors enter the market on the ask side, \(AE\rightarrow AA\).

These two imitative reactions are supported by the assumption that in the market under consideration excess return reacts linearly to excess demand: a classical and ubiquitous point of view in the economic literature (see, e.g., Zeeman 1974; Beja and Goldman 1980; Lux and Marchesi 1999; Cont and Bouchaud 2000; Challet et al. 2001; Dibeh 2007). Excess return measures the logarithmic earnings of the stock beyond the risk-free interest rate r, \(R(t)\equiv \ln \left[ S(t)\mathrm{e}^{-r t}\right] \), and excess demand is the difference between \(N_B(t)\) and \(N_A(t)\). Therefore, we will have

$$\begin{aligned} \mathrm{d}R(t)= & {} \frac{\varXi }{N} (N_B-N_A) \mathrm{d}t\\= & {} \varXi \left( R_B-R_A+\frac{Y-X}{\sqrt{N}}\right) \mathrm{d}t {\mathop {\longrightarrow }\limits ^{t \gg t_{0}}}\varXi \left( R_B^\circ -R_A^\circ +\frac{Y-X}{\sqrt{N}}\right) \mathrm{d}t, \end{aligned}$$

where \(\varXi \) measures the sensitivity of prices to excess demand. This first pricing model is a good testing ground since the agent model will be responsible for any observed market property: We are simply integrating the differences in population.
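A minimal sketch of how this price path can be built from a simulated population trajectory (for instance, the output of the Gillespie routine sketched in Sect. 2.2) follows. The populations are piecewise constant between events, so the integral accumulates exactly over each inter-event interval; names are ours, and the commented usage assumes the values adopted later in this section.

```python
import numpy as np

def excess_demand_price(times, nA, nB, N, Xi, R0=0.0):
    """Integrate dR = (Xi/N) (N_B - N_A) dt along a piecewise-constant trajectory."""
    dt = np.diff(times)
    dR = (Xi / N) * (nB[:-1] - nA[:-1]) * dt   # exact increment per inter-event interval
    return R0 + np.concatenate(([0.0], np.cumsum(dR)))

# Illustrative usage (Xi in 1/min, N = 1000 as in this section):
# R = excess_demand_price(times, nA, nB, N=1000, Xi=1e-3)
# S_discounted = np.exp(R)   # since R(t) = ln[S(t) e^{-rt}]
```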

Fig. 4

Time evolution of the daily closing value of (discounted) stock prices. We can see how the market undergoes a market bubble (an upward trend followed by a downward trend) lasting 5 years and ending at the beginning of year 25. After that, we find what is called a sideways trend, i.e., no trend at all, from the middle of year 25 to the middle of year 27, followed by a new upward trend with corrective movements in the middle. The inset shows the exponential growth in the long run

Let us consider the following paradigmatic example with \(N=1000\) investors. We have set \(\tau _0=10\) min, so that it is of the same order of magnitude as a typical correlation time found in actual financial data by Masoliver et al. (2000). Beyond this, the rest of the values were not based on actual market observations. In fact, we have set \(\chi =0.2\), \(\eta =0.4\) and \(\xi =0.2\), as in the example we emphasized in the previous section, but slightly increased the value of \(\varepsilon \), \(\varepsilon =0.643\). This was intended to get \(R_B^\circ \gtrsim R_A^\circ \), \(R_B^\circ \approx 0.206\), whereas \(R_A^\circ =0.2\). Note that \(R_B^\circ -R_A^\circ >0\) characterizes a growing economy in which wealth is injected into the market. This term is also responsible for any long-run exponential growth.

A possible realization of the dynamics of the species population was previously introduced in Fig. 2, and in Fig. 4, we find the corresponding evolution of the stock price when \(\varXi =10^{-3}\,\hbox {min}^{-1}\). Here, we sampled the complete data series to keep closing prices only, a usual practice in technical analysis. Moreover, in the construction of Fig. 4 and hereafter we assume that a trading day lasts 480 min, and that there are 250 trading days in a year. We observe in Fig. 4 the appearance of typical market charts: upward trends (an increasing succession of minima), downward trends (a decreasing succession of maxima) and sideways trends (a bouncing movement between two price levels).

Fig. 5

Fixed-horizon return behavior. We can see how probability density functions at small timescales depart slightly, but clearly, from Gaussian behavior by exhibiting negative skew: The negative tail is fatter than the positive tail. Returns were divided by their sampling standard deviations to make them commensurable

In Fig. 5, we present the outcome of a statistical analysis performed with the stationary data set of fixed-time returns \(R(\tau ;t)=R(t+\tau )-R(t)\), \(t>100\) min. We check that for \(\tau \sim \tau _0\) correlations are important, the Gaussian limit is not attained and skewness is observed, like in actual markets (Mantegna and Stanley 1995; Plerou et al. 1999; Masoliver et al. 2000; Cont 2001). This phenomenon is even more noticeable when the standard deviation of fixed-time returns, a measure of the volatility of the market, is analyzed (Fig. 6). Since X(t) and Y(t) are anti-correlated, and the return change is sensitive to the difference of those magnitudes, we expect volatility to grow faster for small timescales, and to reach the diffusive regime for \(\tau >\tau _0\). Abnormal (both sub- and super-) diffusion has been reported to be present in real markets as well (Masoliver et al. 2000, 2003, 2006).

Fig. 6

Volatility growth. In this figure, we can see how the volatility growth presents two well-differentiated regimes. For \(\tau <\tau _0\), the standard deviation of fixed-time returns shows (super-diffusive) linear growth, whereas for \(\tau >\tau _0\) it scales as \(\sqrt{\tau }\), like in a diffusive process

In the construction of the previous plot, we have used the complete set of returns available for each timescale \(\tau \), by assuming the statistical equivalence of every sample \(R(\tau ;t)\) as a function of t. Moreover, the above results seem to indicate that, for \(\tau \gg \tau _0\), the samples \(R(\tau ;t)\) and \(R(\tau ;t+\tau )\) ought also to be (almost) independent of each other. So, if we compute the realized n-\(\tau \) volatility, \(V_n(\tau ;k)\):

$$\begin{aligned} V_n(\tau ;k)=\sqrt{\frac{1}{n}\sum _{m=1}^{n}\left[ R\Big (\tau ;(k-m) \tau \Big )-\frac{1}{n} R\Big (n \tau ;(k-n) \tau \Big )\right] ^2}, \end{aligned}$$

we should expect the outcome to be uniform in k, as well as an absence of correlation between \(V_n(\tau ;k)\) and \(V_n(\tau ;k+n)\). In order to check whether this assumption is true, we have chosen \(\tau =1\) day, and \(n=20\) trading sessions, as a proxy for the one-month realized volatility, a typical choice among practitioners.Footnote 6 The results were also annualized, which means here that they were increased by a factor \(\sqrt{12.5}\), and only \(k\geqslant 21\) are considered—we ignore the whole first day of simulation. The outcome, as shown in the inset of Fig. 7, is that the market alternates long periods in which the volatility is large with periods of relative calm, a phenomenon known as volatility clustering (Cont 2001). The presence of clustering in the volatility is a well-documented feature of real markets that is usually explained in terms of the existence of volatility self-correlation. This correlation, as opposed to the return-to-return correlation, is long ranged (Lo 1991)—compare the timescales in Figs. 7 and 12.
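For completeness, the estimator above can be computed from the simulated series of daily closing log prices as in the following sketch (our own helper; the annualization factor \(\sqrt{12.5}\) corresponds to the 12.5 non-overlapping 20-day windows contained in a 250-day trading year).

```python
import numpy as np

def realized_volatility(R_close, n=20, annualize=np.sqrt(12.5)):
    """Realized n-day volatility V_n(tau; k) from daily closing log prices R(k*tau).

    The daily returns R(tau; k*tau) are the first differences of R_close, and the
    k-th value uses the n returns ending at day k, as in the definition above.
    """
    r = np.diff(R_close)                     # daily fixed-horizon returns
    V = np.empty(len(r) - n + 1)
    for i in range(len(V)):
        window = r[i:i + n]
        V[i] = np.sqrt(np.mean((window - window.mean())**2))
    return annualize * V
```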

Fig. 7

Volatility clustering. In this figure, we can see how the one-month annualized volatility presents clustering (inset) and long memory. The increase in the correlation for lags below 20 days is due to data overlapping

Fig. 8

Phase diagram of the state of the system. We show the mean recurrence time, the mean time elapsed between two consecutive visits to the same state

In order to offer a plausible origin for this larger timescale, we have composed Fig. 8. There we present, in a phase diagram, the mean recurrence time: For each possible state of the system, we have recorded all the visiting times and performed a sample mean with the inter-event times. Therefore, at least two visits to a given state are needed in order to attach a nonzero value to that point. Once again, we have disregarded the data within the first day. As can be observed in this figure, the mean time grows in an exponential fashion when we depart from the stable fixed-point values: \(N_A=200\) and \(N_B=206\). Since the scale is logarithmic, some lower bound must be chosen, and we have decided to remove those data points with a mean recurrence time smaller than 1 min. This retains in the plot almost all nonzero valuesFootnote 7: The lowest recurrence time near the core is attained at \(N_A=207\) and \(N_B=203\), yielding a value of 20.17 min. As we can see in Fig. 12 again, this magnitude coincides with the timescale for which 1-min returns exhibit the strongest anti-persistence. The slow decay in the volatility self-correlation may thus have its origin in the long periods the system needs to return to the outermost zone, from where the largest absolute returns come: The green ring marks a recurrence time of about 60 days, the timescale for which the volatility self-correlation is most intense—see Fig. 7.

Therefore, the agent model is capable of reconciling short-range correlations at the microscopic level with long-range correlations at the macroscopic level, which may be linked to the presence of large business cycles in financial data (Burns and Mitchell 1946).

Finally, another stylized fact that is commonly associated with clustering and long-range memory in the volatility is the so-called leverage or Fischer Black effect (Christie 1982; Cont 2001; Bouchaud et al. 2001). This phenomenon is generically characterized by a negative relationship between returns and volatilities. In our case, this effect can be barely observed when the cross-correlation between 20-day returns and volatilities is depicted—see Fig. 9. We must point out, moreover, that there are features shown in empirical studies related to this effect that are not detected in our example. For instance, from Fig. 9 one cannot sustain the presence of a noticeable temporal asymmetry in the correlation, as would be expected (Bouchaud et al. 2001). However, we can explain this departure from what is observed in actual markets on the basis of the usual interpretation of the leverage effect: The market digests losses with nervousness and rises with confidence. And we must remember at this point that the price information is not fed back into the species, so such a reaction is not possible here. Therefore, the slight anti-correlation present in Fig. 9 could be a side effect of the volatility self-correlation, or just a spurious result.

Fig. 9

Leverage effect. In this figure, we can see how 20-session returns and volatilities are negatively correlated, a phenomenon known as the leverage effect in the literature

4 Price dynamics in a liquidity model

Another possible identification of the two species comes from considering that an agent in the A state is a liquidity provider, whereas an agent in the B state represents a liquidity taker (Hachmeister 2007; Bouchaud et al. 2009). This scenario has some contact points with the model proposed in Sect. 3, but shows differences as well. Liquidity providers will introduce limit orders into the market as before, but in such a way that they simultaneously hold buy and sell orders with the goal of ensuring that these securities are always available on demand. Therefore, in the present case, we are going to make no distinction between bid and ask orders, with \(N_A\) proportional to the number of entries in the order book. Liquidity takers, on the contrary, operate through market orders. This is just the way in which noise traders are assumed to participate in the market but, unlike them, liquidity takers survey the limit order book and look out for a convenient trading opportunity, as predators do (Vaglica et al. 2008). Note, however, that this results in a transaction (\(AB \rightarrow EE\)), not in predation (\(AB \rightarrow BB\)). These and the rest of the interactions determine how qualified agents migrate from one set to the other or remain inactive (Handa and Schwartz 1996; Hall and Hautsch 2007).

In the current situation, A, B and E qualify the status of the agent in that market: \(A \rightarrow E\) and \(B \rightarrow E\) measure the probability that an agent becomes temporarily inactive. \(AB \rightarrow BB\) is the response to the perception that an excess of liquidity favors finding a bargain. Finally, acting as a liquidity provider brings public recognition and prestige to the agent, a financial institution or an investment bank, which stimulates the transition \(AE \rightarrow AA\) by imitation.

The present interpretation of the species in terms of liquidity providers and liquidity takers is more suitable for very liquid markets, where changes in supply or demand have a small impact on prices. In such a situation, a relevant magnitude in the price formation mechanism is the spread, the difference between the lowest ask price and the highest bid price. Then, the pricing formula must connect the spread with the relative populations of liquidity providers and liquidity takers, but the issue is not as straightforward as in the case considered in Sect. 3. Here, we have decided to use a pricing expression inspired by the works of Farmer (2002), and Farmer and Joshi (2002). Consider these general guidelines:

  1. The bigger the number of limit orders in the order book, the lower the spread will be, and therefore, the smaller the price change will be.

  2. If the number of liquidity takers is small with respect to the number of liquidity providers, the price should tend to exhibit the typical bid–ask bounce pattern.

  3. If the number of liquidity takers is large with respect to the number of liquidity providers, the price will most likely show a trend.

A feasible candidate that incorporates the above properties is the following discrete-time update formula:

$$\begin{aligned} R(t+\Delta t)=R(t)+\varXi \left[ 2 \varTheta \left( R(t)-R(t-\Delta t)\right) -1\right] \left[ \frac{N_B(t)}{N_A(t)}-\zeta \right] \Delta t, \end{aligned}$$

where \(\Delta t\) is the time between two consecutive changes in the state of the agents, \(\varTheta (\cdot )\) is the Heaviside step function and, for the sake of model simplicity, we will assume that \(\zeta =R_B^\circ /R_A^\circ \). This liquidity pricing model will share some features with the previous excess demand model, since for large values of N and t we have

$$\begin{aligned} R(t+\Delta t)\sim R(t)+\frac{\varXi }{\sqrt{N}}\left[ 2\varTheta \left( R(t)-R(t-\Delta t)\right) -1\right] \left[ \frac{Y(t)}{R_B^\circ }-\frac{X(t)}{R_A^\circ }\right] \Delta t, \end{aligned}$$

and the two formulas become very similar when \(R_B^\circ \approx R_A^\circ \). The main distinguishing trait is the presence of the factor with the Heaviside step function, which may distort the evolution that the agent model dictates and introduce new properties.
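A direct transcription of this update rule into code might look as follows (a sketch with our own naming conventions; it assumes \(N_A(t)>0\) along the trajectory, which holds near the stable fixed point, and adopts \(\varTheta (0)=1\) as the convention for the very first step).

```python
import numpy as np

def liquidity_price(times, nA, nB, zeta, Xi, R0=0.0):
    """Discrete-time liquidity-model price update.

    The sign of the previous price move, 2*Theta(R(t) - R(t - dt)) - 1, multiplies
    the de-trended taker/provider ratio N_B/N_A - zeta, with zeta = R_B^o / R_A^o.
    """
    R = np.empty(len(times))
    R[0] = R0
    last_move = 0.0                               # Theta(0) = 1 by convention
    for k in range(len(times) - 1):
        dt = times[k + 1] - times[k]
        sign = 1.0 if last_move >= 0.0 else -1.0
        dR = Xi * sign * (nB[k] / nA[k] - zeta) * dt
        R[k + 1] = R[k] + dR
        last_move = dR
    return R
```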

In Fig. 10, we see how a sample time series mimics again a typical stock market evolution. In the construction of this plot, we have kept the same parameters as in the previous market model with just one exception: The sensitivity was set to \(\varXi =0.05\) \(\hbox {min}^{-1}\), with the aim of recovering a similar growth in the long run. However, we see how the market becomes much more volatile than it was in the previous case.

Fig. 10

Time evolution of the daily closing value of (discounted) stock prices for the alternative dynamics based on the liquidity model. Among the several trends that are present, the sudden large drops which resemble market crashes are remarkable

The impact of the new pricing dynamics on the probability density function of returns is also noticeable. In Fig. 11, one observes that one may find in practice changes amounting to tens of standard deviations, like in actual markets (Mantegna and Stanley 1995; Masoliver et al. 2000). Finally, the correlation length is affected, but the timescale \(\tau _0\) is preserved: In Fig. 12, we show a comparison between the 1-min return correlation curves for both cases. On the contrary, the pricing mechanism in the liquidity model completely wipes out the negative correlation that one can relate to \(\omega _0\): The characteristic timescale of the oscillations is about 44.4 min with our parameter selection.

Fig. 11

Fixed-horizon return behavior. We can see how probability density functions present fat tails at intra-day timescales and how the Gaussian behavior is not fully recovered even in the case of daily returns

Fig. 12

Linear correlation of 1-min returns for the two pricing models. The basic timescale \(\tau _0=\) 10 min can be observed in both cases

5 Conclusions

Throughout this article, we have introduced a dynamical model that ultimately describes the evolution of financial prices. The main ingredient of the model is a finite set of identical interacting agents that, at every moment, can be accommodated into one of three mutually exclusive categories. The agents represent those traders whose activity may have a noticeable impact on the market, and the three available states characterize in a broad sense the possible attitudes of investors.

Active agents can spontaneously adopt a neutral attitude, but any other change is the outcome of an agent-to-agent interaction: Agents may agree, or one agent can convince the other following hierarchical relationships. These simple rules encode a system in which the number of active agents may strongly fluctuate, thus overcoming the second-order nature of the effect. We have looked for the conditions that promote such amplification and concluded that it does not depend on the timescale of the interactions and can be obtained for any choice of the first-order stationary densities—even though it is more relevant in sparse systems, and it is not the result of a resonance.

Once we have analyzed the dynamics of the agents' instantaneous properties, we have moved on to the pricing problem. We have considered two different ways of identifying the categories, and in each case, a suitable pricing expression was set. We have simulated the time evolution of the asset price for representative values of the parameters involved. We have shown how sample realizations reproduce several stylized facts reported in actual financial data sets: The price evolution displays upward, downward and sideways trends; probability density functions of small timescale returns present fat tails and skewness; volatility behaves accordingly in a non-diffusive way within the same time horizon and presents clustering on a larger timescale; and traces of some leverage effect can be found.

In a future work, we are planning to explore how the properties shown by the agent model depend on the assumptions made, to refine its connections with actual financial systems and to consider further alternative interpretations that may be relevant in market dynamics not considered here.