1 Introduction

Since the 2007 crisis, systemic risk and macro-prudential policies have been at the center of the economic debate (Basel Committee 2010; Yellen 2011; Ghosh and Canuto 2013; MARS 2014). Several studies on interbank markets and credit networks show that network configurations play a crucial role in determining the vulnerability of the economic system (Allen and Gale 2000; Iori et al. 2006; Battiston et al. 2012; Caccioli et al. 2012; Allen et al. 2012; Palestrini 2013; Battiston et al. 2016).

Simulating an endogenously evolving credit network, we aim to gain insights into the relation between network configurations and systemic risk, in order to select early warning indicators for crises and to design precautionary policy measures.

Following the network-based financial accelerator approach (Delli Gatti et al. 2010; Riccetti et al. 2013; Catullo et al. 2015), we constructed a macroeconomic agent-based model (Delli Gatti et al. 2005; Delli Gatti et al. 2010; Dosi et al. 2010) reproducing an artificial credit network that evolves endogenously through the individual demand and supply of loans of heterogeneous firms and banks. We modified both the learning and the credit matching mechanisms of the Catullo et al. (2015) model in order to increase the stability of individual leverage choices and to reduce the inertia of the network configuration dynamics. Moreover, adopting the methodology of Schularick and Taylor (2012), we isolated early warning indicators for crises (Alessi and Detken 2011; Babecky et al. 2011; Betz et al. 2014; Drehmann and Juselius 2014; Alessi et al. 2015) and simulated different policy scenarios using capital-related macroprudential policies (IMF 2011; Claessens 2014; MARS 2014; Angeloni 2014).

The model defines a simple interaction structure between firms and banks. Firms fund production through internal resources and by borrowing money from banks. By increasing their leverage, firms are able to raise their production level and thus boost expected revenues. However, firm revenues are influenced by idiosyncratic shocks. Consequently, if a firm increases its leverage, its expected profits rise; at the same time, however, the firm's exposure to negative shocks rises along with its failure probability (Greenwald and Stiglitz 1993; Delli Gatti et al. 2005; Riccetti et al. 2013). Moreover, high levels of target leverage are associated with high interest rates on loans and a high probability of suffering credit rationing. Similarly, higher firm leverage raises the expected profits of banks, but also their exposure to the firm's failure.

Therefore, firms and banks face a trade-off between increasing their leverage to raise expected profits and reducing their exposure to contain failure probability (Riccetti et al. 2013; Catullo et al. 2015). In the attempt to gain satisfactory profits without being exposed to excessive risk, firms and banks choose their target leverage through a simple reinforcement learning procedure (Tesfatsion 2005; Catullo et al. 2015). Consequently, by modifying individual credit demand and supply, the agents' target leverage choices determine the evolution of the credit network.

We calibrated the model on a sample of firms and banks quoted on the Japanese stock-exchange markets from 1980 to 2012 (Marotta et al. 2015), reproducing the levels of leverage, connectivity and output volatility observed in the empirical dataset.

Model simulations generate endogenous pro-cyclical fluctuations of credit and connectivity. During expansions banks lend to relatively robust firms and therefore suffer few losses from firm failures, so they are able to increase their net worth. Consequently, the banks' supply of loans grows and borrowing becomes easier for firms. Thus, both firm leverage and the integration of the credit network tend to increase. However, default risk increases with leverage, and high connectivity may foster the diffusion of the negative effects of firm and bank failures (Delli Gatti et al. 2010; Riccetti et al. 2013; Catullo et al. 2015). In effect, aggregate credit, leverage and connectivity are positively correlated with the number of firm failures. Therefore, during expansionary phases the growth of credit, leverage and connectivity may create the conditions for future recessions and crises (Minsky 1986).

Indeed, following the methodology developed by Schularick and Taylor (2012), we found that both credit and connectivity growth rates are positively correlated with crisis probability and that they are effective early warning measures in both empirical and simulated data.

Moreover, the model is suitable for designing macro-prudential policies that exploit agent heterogeneity and the network's interaction structure. Indeed, capital-related measures that force banks to avoid lending to more indebted firms may decrease output volatility without causing substantial credit reductions and, thus, output contractions.

We also tested permanent capital-related measures applied only to larger and more connected banks. When interventions target banks that are relatively central in the credit network in terms of size and connections, the vulnerability of the economic system may be substantially reduced without affecting aggregate credit supply and output. Thus, the analysis of credit network connectivity may be useful for assessing the emergence of systemic risk. Moreover, agent-based models that endogenize credit network dynamics can be used to test the effectiveness of early warning indicators and the effects of different macro-prudential policies.

The paper is structured as follows. The next section describes the agent-based model: the agents' behavioral assumptions, the matching mechanism between banks and firms and the leverage decisions. The third section illustrates the simulation results: first, we focus on the patterns of the calibrated simulations; second, we test the effectiveness of connectivity measures as early warning indicators; finally, we implement simple macro-prudential capital-related measures. The last section concludes.

2 The model

Our artificial economy is populated by M banks and N heterogeneous firms. Firms produce a homogeneous good using capital as the only input, funding production through their net worth and bank loans. We assume that, in each period, both banks and firms try to reach a target leverage level (Adrian and Shin 2010; Riccetti et al. 2013; Aymanns et al. 2016), chosen according to a reinforcement learning mechanism based on Tesfatsion (2005).

2.1 Firms

Firms use capital (\(K_{it}\)) to produce output through a non-linear production function:

$$\begin{aligned} Y_{it}=\rho K_{it}^\beta \end{aligned}$$
(1)

The firm’s balance sheet is:

$$\begin{aligned} K_{it}=L_{it}+\phi L_{it-1}+ E_{it} \end{aligned}$$
(2)

Capital is given by net worth (\(E_{it}\)), loans contracted at time t (\(L_{it}\)) and the share of the loans borrowed at time \(t-1\) that have not been repaid yet (\(\phi L_{it-1}\)). Firms can receive loans from more than one bank; thus, the total amount borrowed by a firm is the sum of the loans received from its lending banks (z):

$$\begin{aligned} L_{it}=\sum _{z} L_{izt} \end{aligned}$$
(3)

In each period each firm fixes a target leverage level (\(\lambda _{it}\)), from which its loan demand (\({L_{it}}^d\ge 0\)) is derived:

$$\begin{aligned} {L_{it}}^d=\left( \lambda _{it}-1 \right) E_{it}-\phi L_{it-1} \end{aligned}$$

with the target leverage equal to:

$$\begin{aligned} \lambda _{it}=(E_{it}+{L_{it}}^d+\phi L_{it-1})/E_{it} \end{aligned}$$
(4)

The higher the target leverage, the higher the level of indebtedness: if \(\lambda _{it}=1\), capital is financed entirely by internal sources. In each period, the target leverage (\(\lambda _{it}\)) is chosen following the reinforcement learning algorithm described in Sect. 2.4. Firms choose their leverage strategy (\(\lambda _{it}\)) from a given discrete set of strategies \(\varLambda _f\), with \(\lambda _{it}\ge 1\). We assume that \(\lambda _{it}\) cannot exceed a given maximum value, which represents the maximum level of risk a firm is allowed to take.
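As a minimal sketch of Eq. 4 solved for loan demand, the following hypothetical function (the names are ours, not the paper's) computes a firm's demand from its chosen target leverage, net worth and inherited debt, flooring at zero as required by \({L_{it}}^d \ge 0\):

```python
def loan_demand(target_leverage, net_worth, past_loans, phi):
    """Loan demand: L_d = (lambda - 1) * E - phi * L_prev, floored at zero.

    phi is the share of last period's loans not yet repaid.
    """
    demand = (target_leverage - 1.0) * net_worth - phi * past_loans
    return max(demand, 0.0)
```

For instance, with a target leverage of 2, net worth 100, past loans 50 and \(\phi = 0.5\), the firm demands 75 in new credit; with a target leverage of 1 it demands nothing, since capital is fully self-financed.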

The interest rate (\(r_{it}\)) associated with each loan is a function of the firm's target leverage and of the interest rate (r) paid by banks on deposits. The parameter \(\eta \) measures the sensitivity of banks to borrower risk (\(\eta >0\)):

$$\begin{aligned} r_{it}=\eta \lambda _{it}+r \end{aligned}$$
(5)

Profits derive from the difference between revenues (\(u_{it}Y_{it}\)) and costs, which are in turn given by the sum of the interest on loans (\(r_{it}L_{it}+\phi r_{it-1}L_{it-1}\)) and a fixed cost (F).

$$\begin{aligned} \pi _{it}=u_{it}Y_{it}-r_{it}L_{it}-\phi r_{it-1}L_{it-1}-F \end{aligned}$$
(6)

Net revenues (\(u_{it}Y_{it}\)) depend on the stochastic value \(u_{it}\), which represents uncertain events that are not explicitly modeled (Greenwald and Stiglitz 1993; Riccetti et al. 2013):

$$\begin{aligned} u_{it}= & {} m+\epsilon _{it} \end{aligned}$$
(7)
$$\begin{aligned} \epsilon _{it}\sim & {} N(0,\sigma ) \end{aligned}$$
(8)

Thus, net revenues are given by both a fixed component (m) and a stochastic one representing idiosyncratic shocks on firm revenues (\(\epsilon _{it}\)). Since the expected value of \(\epsilon _{it}\) is zero, the expected marginal net revenue is equal to m.

A share of profits (\(\tau \pi _{it}\), with \(0<\tau <1\)) is not accumulated; thus the net worth evolves as:

$$\begin{aligned} \bigg \{ \begin{array}{l r} E_{it}=E_{it-1}+(1-\tau ) \pi _{it} &{} \; \; \; \pi _{it}>0\\ E_{it}=E_{it-1}+\pi _{it} &{} \; \; \; \pi _{it} \le 0\\ \end{array} \end{aligned}$$
(9)
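Eqs. 1, 2 and 6–9 can be collected into one firm-period step. The sketch below is ours: parameter names follow the text (\(\rho, \beta, m, \sigma, \tau, F, \phi\)), but the function signature and the option to fix the shock \(u\) for reproducibility are illustrative assumptions, not the paper's code.

```python
import random

def firm_period(E, L, L_prev, r, r_prev, params, u=None):
    """One simulated period for a firm: production, profit, net worth update.

    params = (rho, beta, m, sigma, tau, F, phi); u can be fixed for testing.
    """
    rho, beta, m, sigma, tau, F, phi = params
    K = E + L + phi * L_prev                 # balance sheet, Eq. (2)
    Y = rho * K ** beta                      # production, Eq. (1)
    if u is None:
        u = m + random.gauss(0.0, sigma)     # stochastic revenue shock, Eqs. (7)-(8)
    profit = u * Y - r * L - phi * r_prev * L_prev - F   # Eq. (6)
    # Eq. (9): a share tau of positive profits is not accumulated
    E_new = E + (1.0 - tau) * profit if profit > 0 else E + profit
    return profit, E_new
```

With, say, \(E=50\), \(L=30\), \(L_{t-1}=40\), \(\phi =0.5\), \(\rho =1\), \(\beta =0.5\) and \(u=1\), capital is 100, output is 10 and the net worth update retains 80% of the resulting profit when \(\tau =0.2\).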

2.2 Banks

Banks supply loans (\(L_{zt}\)) out of their net worth (\(E_{zt}\)) and deposits (\(D_{zt}\)): the bank's balance sheet is \(L_{zt}=D_{zt}+E_{zt}\). Banks establish their level of credit supply following the same reinforcement learning algorithm used by firms, choosing a target leverage \(\lambda _{zt}\) from a discrete set of values (\(\varLambda _b\)). Deposits (\(D_{zt}\)) are computed as the residual between loans (\(L_{zt}\)) and net worth (\(E_{zt}\)). The potential supply of new credit is reduced by the sum of the loans to firms i (\(i \in I_{zt-1}\)) that have not yet matured:

$$\begin{aligned} {L_{zt}}^s=\left( \lambda _{zt}-1 \right) E_{zt}-\sum _{I_{zt-1}} \phi L_{izt-1} \end{aligned}$$
(10)

Thus, as for firms, riskier leverage strategies correspond to higher levels of \(\lambda _{zt}\): the higher \(\lambda _{zt}\), the larger the share of the loan supply that is not covered by net worth (\(E_{zt}\)) but relies on deposits (\(D_{zt}\)). We assume that banks have a maximum target leverage, for prudential reasons and in conformity with international credit agreements (the Basel agreements). Moreover, again for prudential reasons, a bank can provide to a single firm only a fraction \(\zeta \) of its supplied loans; hence \(\zeta {L_{zt}}^s\) is the maximum loan that can be offered to a single firm.
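Eq. 10 and the single-borrower cap can be sketched as follows; the function names and the list representation of unmatured loans are our illustrative assumptions:

```python
def loan_supply(target_leverage, net_worth, unmatured_loans, phi):
    """New credit supply, Eq. (10): (lambda - 1) * E minus unmatured exposure."""
    supply = (target_leverage - 1.0) * net_worth - phi * sum(unmatured_loans)
    return max(supply, 0.0)

def max_loan_per_firm(supply, zeta):
    """Prudential cap: at most a fraction zeta of total supply per single firm."""
    return zeta * supply
```

For example, a bank with net worth 10, target leverage 5, two unmatured loans of 10 each and \(\phi = 0.5\) supplies 30 in new credit, of which at most 6 can go to any one firm when \(\zeta = 0.2\).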

Bank revenues are given by the interest paid on loans by borrowers at time \(t-1\) (\(i \in I_{zt-1}\)) and by borrowers at time t (\(i \in I_{zt}\)). Bad debts (\(BD_{zt}\) and \(BD_{zt-1}\)), i.e. loans that are not paid back because of the failure of the borrowing firms, are costs for banks. Moreover, banks pay a given interest rate r on deposits along with a fixed cost (F).

$$\begin{aligned} \pi _{zt}=\sum _{I_{zt}} r_{izt} L_{izt}+\sum _{I_{zt-1}} r_{izt-1} L_{izt-1} -BD_{zt}-BD_{zt-1}-D_{zt}r-F \end{aligned}$$
(11)

A share of profits (\(\tau \pi _{zt}\), with \(0<\tau <1\)) is not accumulated; thus the net worth (\(E_{zt}\)) evolves as:

$$\begin{aligned} \bigg \{ \begin{array}{l r} E_{zt}=E_{zt-1}+(1-\tau )\pi _{zt} &{} \; \; \; \pi _{zt}>0\\ E_{zt}=E_{zt-1}+\pi _{zt} &{} \; \; \; \pi _{zt} \le 0\\ \end{array} \end{aligned}$$
(12)

2.3 Matching between banks and firms

In each period, banks and firms establish their supply of and demand for loans by choosing their target leverage. Each bank offers loans to demanding firms until its supply is exhausted; firms, in turn, may borrow from different banks until their loan demand is satisfied. Thus, a firm can be linked to several banks at the same time.

Firms first ask for loans from the banks they are already linked to. From each linked bank (z), firm i asks for a fraction (\(L_{izt}^d\)) of its total loan demand (\(L_{it}^d\)); this fraction is proportional to the credit provided by bank z to firm i in the previous period (\(L_{izt-1}\)) divided by the total credit received by firm i at time \(t-1\) (\(L_{it-1}\)):

$$\begin{aligned} L_{izt}^d=L_{it}^d \frac{L_{izt-1}}{L_{it-1}} \end{aligned}$$

If the loans received by firm i fall short of those requested from its linked banks, firm i asks for loans from other, randomly chosen banks.

A bank can deny loans to riskier firms: the probability (\(p_R\)) that the loan demand of firm i is rejected increases with the firm's target leverage (\(\lambda _{it}\)):

$$\begin{aligned} p_R=\iota (\lambda _{it}) \end{aligned}$$

If the bank's loan supply is lower than the sum of the accepted demands of its linked firms, the bank grants each firm a loan proportional to its demand. Thus, the loan granted to firm i, within the set of the j linked firms (\(I_a\)), is:

$$\begin{aligned} L_{izt}=L_{zt}^s \frac{L_{izt}^d}{\sum _{I_a}L_{jzt}^d} \end{aligned}$$

Conversely, when credit supply exceeds the accepted demand for loans, a bank may provide credit to other firms.
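The pro-rata rationing rule above can be sketched in a few lines; the dictionary representation of accepted demands is our illustrative assumption:

```python
def allocate_loans(supply, demands):
    """Pro-rata rationing: demands maps firm_id -> accepted loan demand.

    If total demand fits within supply, every firm is fully served;
    otherwise each firm receives a loan proportional to its demand.
    """
    total = sum(demands.values())
    if total <= supply:
        return dict(demands)
    return {firm: supply * d / total for firm, d in demands.items()}
```

With a supply of 60 facing three accepted demands of 40 each, every firm receives 20; with a supply of 200 facing the same demands, all are fully served and the bank may lend the residual to other firms.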

Therefore, the credit network evolves according to the individual demand and supply of loans. A new credit link is established when the loan demand of a firm is accepted by a bank with which the firm was not previously linked, while the credit link between a bank and a firm is cut when:

  1. the firm or the bank fails;
  2. the firm or the bank does not ask for/offer loans at time t;
  3. the bank refuses to provide loans because the firm is considered too risky.

2.4 Leverage choice

In each period, the agents (both banks and firms) choose a target leverage level (\(\lambda _{xt}\)) from a limited set of values (\(\varLambda \); specifically, \(\varLambda _f\) for firms and \(\varLambda _b\) for banks). The set of possible leverage levels is limited for banks because they have to respect capital requirements (in line with the Basel agreements). It is limited for firms because we assume that banks do not provide credit to highly indebted firms, in order to reduce bad debts.

The choice mechanism is based on the Tesfatsion (2005) reinforcement learning algorithm. In each period, firms and banks choose one of the possible leverage levels. At the beginning of the following period, the agents observe the result of their choice, i.e., the profit (\(\pi _{xt-1}\)) received. In this paragraph, we denote the past profit \(\pi _{xt-1}\) as \(\pi _{xst-1}\) to underline that it derives from the choice of a particular leverage strategy, i.e. a particular value \(\lambda _{s}\) chosen at time \(t-1\) by agent x. The profit received in the previous period, when the target leverage \(\lambda _s\) was chosen, is used to update \(q(\lambda _s)_{xt}\):

$$\begin{aligned}&q(\lambda _s)_{xt}=(1-\chi )q(\lambda _s)^F_{xt-1}+\chi \pi _{xst-1} \end{aligned}$$
(13)

The strength of the agent's memory is given by the parameter \(\chi \). At the beginning of each period, the effectiveness of every leverage level \(q(\lambda _s)_{xt-1}\) is reduced by a small percentage (\(\xi \)): \(q(\lambda _s)^F_{xt-1}=(1-\xi )q(\lambda _s)_{xt-1}\), where \(\xi \) represents the extent of the 'forgetting process'.
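The update in Eq. 13, together with the forgetting step, reduces to two lines; this hypothetical helper (names ours) makes the order of the two operations explicit:

```python
def update_q(q_prev, profit, chi, xi):
    """Strategy-value update, Eq. (13), preceded by the forgetting step.

    chi weights new profit against memory; xi is the small forgetting rate.
    """
    q_forgotten = (1.0 - xi) * q_prev            # q^F_{t-1} = (1 - xi) * q_{t-1}
    return (1.0 - chi) * q_forgotten + chi * profit
```

For example, a strategy valued at 10 that just earned a profit of 20 is revalued at 14.5 with \(\chi =0.5\) and \(\xi =0.1\): forgetting first shrinks 10 to 9, then the new profit is averaged in.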

Agents may choose among a restricted set of possible leverage levels (\(\varLambda _a \subseteq \varLambda \)). In fact, since loans have a two-period maturity, agents also have to consider their past debts, which imply a certain level of leverage inherited from past loans. Moreover, firms would not choose leverage levels that generate costs higher than the expected profits. Since the production function is concave, the higher the level of capital used, the lower the marginal production and thus the marginal expected profit, while financial costs increase linearly with leverage. According to Eqs. 1, 5 and 6, it is convenient to take a certain level of leverage only if the associated loan cost (\(r_{it}L_{it}^d\)) is lower than the expected revenue gain, \(m \rho (K+L)^\beta - m \rho K^\beta \). Therefore, firms choose from a reduced set of target leverage possibilities, which may exclude lower levels of \(\lambda _{xt}\) because of the debt inherited from the past, and higher levels of \(\lambda _{xt}\) because at high leverage financial costs may exceed expected profits. Banks' leverage choices are restricted only when constrained by previous-period loans (Fig. 1: panel a shows the complete leverage set, panel b the reduced set).

Agents adjust their leverage only gradually: in each period an agent may choose among three strategies, namely the leverage chosen in the past period and the two immediately adjacent levels (for instance, in Fig. 1 panel c, if the past leverage was 5, the agent may choose among 4, 5 and 6). Thus, the agent's choices are restricted to the set \(\varLambda _c\), with \(\varLambda _c \subseteq \varLambda _a \subseteq \varLambda \). If the past leverage level is not in the available set, the agent chooses the nearest allowed level (in Fig. 1 panel d, leverage 6 is not allowed, so the agent chooses 4). Moreover, if no level of \(\lambda \) is allowed, the agent chooses the lowest one. The lowest value of \(\lambda \) for firms is \(\lambda =1\), meaning that in this case a firm does not borrow money but uses only its internal resources and previous loans, if any. The lowest level of \(\lambda \) for banks is higher than one, since with \(\lambda =1\) a bank would not offer any credit, thus ceasing its activity.

Fig. 1: Target leverage choice

Once the effectiveness of each strategy is evaluated, the agents associate with each strategy a certain probability of being chosen in the following period. The probability of choosing a particular leverage level (strategy \(\lambda _s\)) among the allowed levels (\(\varLambda _c\)) is \(p(\lambda _s)_{xt}\). This probability differs across agents according to their past profits:

$$\begin{aligned}&X_{xst}=\left( \frac{q(\lambda _s)_{xt}}{c}\right) ^{\nu } \end{aligned}$$
(14)
$$\begin{aligned}&p(\lambda _s)_{xt}=\frac{e^{X_{xst}}}{\displaystyle \sum \limits _{c \in \varLambda _c} e^{X_{xct}}} \end{aligned}$$
(15)

where \(X_{xst}\) is the strength associated by agent x with strategy s at time t, which depends on its effectiveness. The exponentials of the strengths (\(X_{xst}\)) are used to compute the probability \(p(\lambda _s)_{xt}\) of choosing each strategy: by taking exponentials, more effective strategies have a more than proportional probability of being chosen. The probability of choosing strategy s is computed as the exponential of its strength divided by the sum of the exponentials of all the strategies among which the agent may choose.
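Eqs. 14–15 amount to a softmax over powered strategy values. The sketch below assumes, for simplicity, nonnegative \(q\) values; the function name and parameter ordering are ours:

```python
import math

def choice_probabilities(q_values, c, nu):
    """Softmax choice over strategy values, Eqs. (14)-(15).

    q_values: effectiveness of each allowed strategy (assumed nonnegative here);
    c and nu scale the strengths X = (q / c) ** nu before the softmax.
    """
    strengths = [(q / c) ** nu for q in q_values]        # Eq. (14)
    exps = [math.exp(x) for x in strengths]
    total = sum(exps)
    return [e / total for e in exps]                     # Eq. (15)
```

The exponential ensures that a strategy twice as strong as another gets more than twice its probability, which is what makes the learning reinforce successful leverage choices.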

In general, choosing higher levels of leverage may lead to higher profits. However, higher leverage implies higher risks for both firms and banks. Moreover, firms with higher target leverage levels pay higher interest rates and they have a higher probability of not being accepted as borrowers. Besides, banks with high target leverage have to pay high volumes of interest on deposits while, in case of scarcity of credit demand, they may not be able to lend all the credit they offer.

To allow a continuous exploration of the action space, there is a relatively small probability (\(\mu \)) that in each period an agent chooses its leverage strategy randomly, without considering effectiveness, always within the set of allowed leverage levels (\(\varLambda _c\)). The forgetting mechanism and the random choice probability allow agents to explore their strategy space, avoiding being trapped in sub-optimal solutions or in strategies that are no longer effective in a continuously evolving economic environment.

3 Simulation results

3.1 Empirical data and simulation results

Simulated data are calibrated on a sample of firms and banks quoted on the Japanese stock exchange markets from 1980 to 2012. This dataset collects yearly balance sheet data and the value of loans between banks and firms. On average, the dataset includes 226.181 banks and 2218.152 firms, located in 35 prefectures.

Fig. 2: Size and degree distribution of banks and firms in empirical data in 2012

We ran three groups of 10 different Monte Carlo simulations and calibrated the model so as to replicate, in each of these simulated groups, the levels of leverage, connectivity and output volatility observed empirically. Indeed, high output volatility may imply a high crisis probability; moreover, high leverage may increase the vulnerability of the economic system, and high connectivity may foster the diffusion of shocks throughout the economy.

In our simulations, we fix a ratio of 10 firms to each bank, similar to the empirical one. We assume that each simulated period corresponds to a quarter of a year and we run simulations for 500 periods. We analyze simulated data from the 368th to the 500th period in order to have 132 quarters, corresponding to 33 years, the same time span as the empirical dataset.

An extensive stream of empirical literature has focused on the size distributions of economic agents. Among others, Axtell (2001) analyzed the 1997 census data for U.S. firms and found that the firm size distribution follows a Zipf's law. Janicki and Prescott (2006) replicated this analysis for U.S. banks, employing a dataset of the U.S. federal bank regulators; they found that bank size follows a nested distribution which is lognormal in the lower-middle part with a Pareto-distributed tail. In credit markets, Masi and Gallegati (2011) and Lux (2016) observe a more heterogeneous degree distribution for banks than for firms, that is, a thicker tail of the banks' degree distribution compared to the firms' one: a pool of few banks finances a large set of firms.

Fig. 3: Size and degree distribution of banks and firms in ten Monte Carlo simulated data

Following the methodology proposed by Clementi et al. (2006) and Clauset et al. (2009), we test for the presence of “fat tails” in the distributions of size and degree in both empirical and simulated data (Figs. 2, 3), considering the last year of our empirical dataset and ten different Monte Carlo simulations (for a more detailed description of the methodology see the “Appendix”). The estimation procedure consists of three steps: estimating the scale (\(\hat{x}_{min}\)) and shape (\(\hat{\alpha }_{\text {Hill}}\)) parameters of the power-law distribution; testing the goodness-of-fit of the power-law hypothesis through bootstrap (Kolmogorov–Smirnov test, with a power-law tail as the null hypothesis); and testing the power-law hypothesis against a “light-tailed” alternative (we used the Vuong (1989) test of the power law against the exponential hypothesis).
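One ingredient of this procedure, the Hill estimator of the tail exponent, fits in a few lines. In the sketch below \(x_{min}\) is taken as given, whereas the full Clauset et al. (2009) method selects it by minimizing the Kolmogorov–Smirnov distance; the function name is ours:

```python
import math

def hill_alpha(data, x_min):
    """Hill (maximum-likelihood) estimate of the Pareto tail exponent alpha.

    Uses only observations at or above the tail threshold x_min.
    """
    tail = [x for x in data if x >= x_min]
    n = len(tail)
    return 1.0 + n / sum(math.log(x / x_min) for x in tail)
```

On real size or degree data this estimate would then be bootstrapped to assess the goodness-of-fit of the power-law hypothesis, as in the tables below.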

Table 1 Empirical data power law model estimates, year 2012
Table 2 Simulated data power law model estimates, 10 Monte Carlo runs

Tables 1 and 2 report the results of the three tests performed on empirical and simulated data respectively. Considering the Pareto \({\hat{\alpha }_{Hill}}\) parameters and the Vuong tests, our estimates suggest that the distributions of firm and bank size (assets and equity for banks) display “fat tails”. This evidence seems to be stronger for the distribution of bank assets. Moreover, according to the Pareto \({\hat{\alpha }_{Hill}}\) parameters, the degree distribution of banks seems to be more heterogeneous than that of firms. Therefore, in both simulated and empirical data a few banks seem to occupy a central position in the credit network in terms of both size and connectivity.

The empirical dataset includes only yearly data over just 33 years, while simulated data are collected from multiple simulations at a quarterly frequency. Thus, simulated data can be used to explore the relation between macro-variables by considering the cross-correlations of their cyclical components, detrended through the HP filter. As shown in Fig. 4, output is positively correlated with leverage and credit: high levels of output are associated with high levels of credit and firm leverage. Connectivity, defined as the average normalized degree, is also positively correlated with output.

Fig. 4: Cross correlation among output and other macro-variables: leverage, credit, connectivity

In the simulated model, crises are triggered by the failure of firms: if indebted firms suffer a negative idiosyncratic shock, they may fail. Failing firms do not repay their debts to lending banks; in turn, some banks may fail or may contract their loan supply. If the banks hit by failed firms are relatively central in the credit network (in terms of both credit supplied and number of loans provided to firms), the credit supply contraction may affect the whole economy and, consequently, aggregate output may fall. Figure 5 shows that failures are significantly correlated with the four macro-variables considered above: output, credit, leverage and connectivity. Indeed, when leverage is high, the failure probability of firms increases; moreover, as mentioned above, leverage is positively correlated with output, credit and connectivity. These four macro-variables also seem to anticipate firm failures: when their values increase, the number of firm failures tends to rise in the following periods.

Fig. 5: Cross correlation among number of firm failures and selected macro-variables: output, leverage, credit and connectivity

Therefore, expansionary phases, characterized by high output, lead to increasing leverage and connectivity, which in turn may augment the vulnerability of the system, amplifying and diffusing the effects of local shocks such as firm or bank failures. In fact, expansions may create the conditions for subsequent output contractions. In the next section, we test the effectiveness of these macro-variables as early warning indicators for crises, conceived as large output contractions.

3.2 Empirical and simulated relation between credit network dynamics and crises

In order to detect the early warning properties of credit, leverage and connectivity, we follow the methodology applied by Schularick and Taylor (2012). They showed that, after WWII, credit growth rates have a significant impact on crisis probability and, thus, credit variations are effective early warning measures for crises. The relation between credit dynamics and crisis probability is tested by implementing a panel logit model on a dataset composed of annual data for 12 countries over 140 years:

$$\begin{aligned} logit(p_{it})=\beta _{0i}+\beta _{1}(L)\varDelta log CREDIT_{it}+\beta _2 (L) \mathbf X _{it}+ \epsilon _{it} \end{aligned}$$
(16)

where \(p_{it}\) is the crisis probability, \((L)\varDelta log CREDIT_{it}\) are lagged logarithmic credit variations and \(\mathbf X _{it}\) are control variables. The higher the predicted values of the regressions, the higher the probability of a crisis occurring.

Fig. 6: ROC curves in simulated data made using credit variations and connectivity variations (deg)

ROC curve analysis is used to test the effectiveness of credit variations as early warning measures, evaluating the extent of the trade-off between false alarms and the hit ratio (Alessi and Detken 2011; Babecky et al. 2011; Betz et al. 2014; Drehmann and Juselius 2014; Alessi et al. 2015). Indeed, an early warning indicator may be conceived as a source of signals of varying intensity. Over a certain threshold, the signal alerts policy makers, since the stronger the signal, the higher the probability of a crisis (Fig. 6). Thus, if the policy maker intervenes only when the intensity of the signal is strong, the false alarm probability is reduced. Conversely, an alert threshold that is too high may discourage policy intervention even when a crisis is approaching, reducing the hit ratio of the indicator. Therefore, early warning indicators may be evaluated by their capacity to reduce the trade-off between false alarms and the hit ratio. A basic measure of this trade-off is the AUROC, the area below the ROC curve: the larger this area, the more effective the indicator as an early warning measure.
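The AUROC has a convenient probabilistic reading: it equals the probability that a randomly drawn crisis period carries a higher signal than a randomly drawn calm period (ties counting one half). A minimal sketch of this computation, with illustrative names of our own:

```python
def auroc(signal, crisis):
    """Area under the ROC curve for an early-warning signal.

    signal: indicator values; crisis: parallel list of 0/1 crisis labels.
    Computed as the rank-based (Mann-Whitney) probability that a crisis
    period outranks a calm period, with ties counted as one half.
    """
    pos = [s for s, c in zip(signal, crisis) if c == 1]
    neg = [s for s, c in zip(signal, crisis) if c == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfectly discriminating signal yields 1.0 and an uninformative one 0.5, which is why the tables below benchmark every indicator's AUROC against 0.5.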

We implement these logit regressions on our empirical dataset. In order to apply this methodology, we build a panel, dividing firms according to the prefecture they belong to. In the dataset we have yearly data for 35 prefectures over a 33-year time span. However, to balance our panel we consider only prefectures that are present in all periods of the dataset. Moreover, we drop prefectures with a small number of firms (we fixed the minimum number of firms at twenty). Hence, our dataset shrinks from 35 to 8 prefectures over 33 years. For each prefecture we compute aggregate output as the sum of firm revenues, and we define a crisis as an annual reduction of aggregate output below \(-5\)%.

Table 3 Relation between crisis and macro-variables variations in empirical data

Consequently, the effective dimension of our dataset is quite limited. Nevertheless, as suggested by Schularick and Taylor (2012), lagged credit variations seem to be significant predictors of crisis probability (Table 3). Indeed, we test the relation between five annual lags of credit variations (credit) and crisis probability (Table 3, column credit), and we take this specification as the baseline econometric model.

The third lag of credit variation is significant and positively correlated with crisis probability. The sum of the credit variation coefficients is statistically significant at the 10% level. Moreover, the Wald test on the five lags is significant at the 15% level (\(\chi ^2\) lags) and the whole-regression \(\chi ^2\) is significant at the 1% level. Besides, the AUROC is greater than 0.5. Thus, credit growth seems to contribute to generating the conditions for crisis occurrence, and it may be an effective early warning indicator for crises.

We control for lagged output variations (y), for a leverage level measured as credit over output (credit / y) and for a connectivity measure (deg).

Output variations seem to be slightly positively correlated with crisis probability, even if they do not contribute to increasing the AUROC level. The leverage level does not seem to affect crisis probability significantly. Conversely, connectivity variations seem to be positively correlated with crisis probability. Three lags are positive (the first, the fifth and the fourth), but only the first two are significant, while the second and the third are negative but not statistically different from zero. Therefore, the sum of the lags is not significant, while jointly the connectivity variation lags are significant at the 10% level. Adding connectivity among the regressors increases the AUROC: according to the De Long test, this increment is statistically significant at the 20% level. Indeed, connectivity variations are positively correlated with crisis probability, since a strictly interconnected network may foster the diffusion of shocks.

In the last column, we regress crisis probability on all the control variables jointly (Table 3, column all). This regression seems to confirm the results of the previous ones: credit variations are positively correlated with crisis probability; moreover, output and connectivity variations anticipate crisis probability.

Nevertheless, even taking into account the limits of our dataset (in particular the small number of observations), our analysis suggests that credit variations may be correlated with crisis probability and that connectivity may improve the effectiveness of early warning measures of crisis.

We apply the same econometric analysis to simulated data and compare it with the empirical one. Simulations give the advantage of generating data from a stylized and controlled process. However, in simulations we do not have different prefectures. Thus, in order to build a panel dataset of a dimension comparable to the empirical one, we use ten different Monte Carlo runs of the simulation model in place of prefectures. Moreover, as for the calibration of the model, we repeat the analysis three times, using ten different simulation runs each time (see “Appendix”).
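The construction of the simulated panel, with Monte Carlo runs playing the role of prefectures, can be sketched in pandas as follows; run length, variable names and values are illustrative placeholders, and the key point is that lags must be computed within each run rather than across run boundaries.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Stack ten Monte Carlo runs as panel units, mimicking the prefecture panel.
runs = []
for run_id in range(10):
    T = 33  # years per run, matching the simulation horizon in the text
    runs.append(pd.DataFrame({
        "run": run_id,
        "year": np.arange(T),
        "d_credit": rng.normal(0.02, 0.05, T),
        "crisis": rng.integers(0, 2, T),
    }))
panel = pd.concat(runs, ignore_index=True)

# Compute lags within each run only, so no lag spills across runs.
for lag in range(1, 6):
    panel[f"d_credit_l{lag}"] = panel.groupby("run")["d_credit"].shift(lag)
```

The grouped `shift` leaves the first `lag` observations of each run as missing, which is exactly what a per-prefecture lag would do in the empirical dataset.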

Table 4 Relation between crisis and macro-variables variations in simulated data, first set

Credit variations are positively correlated with crisis probability (see Tables 4, 9, 10, column credit) in all three simulation sets taken into consideration. In particular, credit variations are strongly significant in both the second and third sets (Tables 9 and 10), where both the sum of the lagged coefficients and the five lags jointly are statistically significant at the 5% level. Moreover, in all the simulated sets the associated AUROC level is significantly higher than 0.5. Thus, in simulated data, similarly to the empirical results, credit variation lags are positively correlated with crisis probability.

When we include output or connectivity in the regression (\(credit+y\) and \(credit+deg\)), we observe that output and connectivity variations are correlated with crisis probability and, according to the DeLong test, they produce an increase of the AUROC that is significant at the 5% level. The regression model with both credit and output variations (credit + y) displays a stronger positive correlation between credit variations and crisis occurrence with respect to the baseline specification (credit). The output variation coefficients are also significant, even though they show a negative sign, which is at odds with the empirical results. This is not due to a simple negative correlation between output variations and crises, since if we consider only the lagged \(\varDelta (y)\) as regressors we obtain positive coefficients. Rather, the (credit + y) model seems to capture two different effects of opposite sign. On the one hand, there is an “exposure” effect, i.e. the increase of credit augments the likelihood of a crisis occurrence. On the other, there is a “patrimonial” effect, namely during expansionary phases there is also a net worth increase that augments agents’ resilience to negative shocks and lowers the crisis probability accordingly.

Similarly to the empirical dataset, in all three simulation sets the crisis probability is positively correlated with an increase in the average normalized degree. At the same time, the correlation between crises and credit variations becomes not significant or negative. Indeed, credit plays a twofold role with respect to crisis probability. On the one side, credit growth increases systemic risk by fostering bilateral exposures; on the other, it sustains output, thus reducing crisis probability. When the connectivity variation is added among the regressors, the bilateral exposure effect is captured by the normalized degree variable. Therefore, the variation of connectivity is able to capture the prevalence of systemic risk over risk sharing.

Furthermore, the (\(credit+deg\)) model leads to a higher AUROC level with respect to the baseline model (Fig. 6) and, in two out of three sets, with respect to the (\(credit+y\)) one. Moreover, including connectivity increases the R-squared and shrinks the AIC. Besides, connectivity variations remain significant in the regression with all the independent variables (All), thus controlling also for output variations (y) and leverage (credit / y). Therefore, simulation data suggest that connectivity variations may be effective early warning measures for crises.

Besides, in simulated data, following the same econometric approach, we control for other network measures that may be correlated with crisis probability: assortativity with respect to leverage (assortativity) and the maximum value of agents’ PageRank (pageRank).Footnote 4

Table 5 Relation between crisis and network measure variations in simulated data, first set

Assortativity measures the degree of homogeneity in terms of leverage among connected firms and banks, and varies between minus one and one. If assortativity is close to one, high-leverage banks are connected with high-leverage firms and vice versa, while if assortativity is close to minus one, high-leverage banks are connected with low-leverage firms and vice versa. Thus, assortativity may provide a synthetic measure of bank credit diversification. The ‘PageRank’ algorithm, used by Google for ranking web pages, measures node centrality. Indeed, for instance, Battiston et al. (2012) and Thurner and Poledna (2013) applied ‘DebtRank’, a measure inspired by ‘PageRank’, to evaluate agent centrality in credit networks.
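Both measures are straightforward to compute on a credit network. A minimal sketch with networkx on a toy bank-firm graph follows; all node names and leverage values are invented for illustration, and this is not the paper's measurement code.

```python
import networkx as nx

# Toy bipartite credit network: banks lend to firms; nodes carry a leverage value.
G = nx.Graph()
banks = {"B1": 4.0, "B2": 1.5}                        # bank leverage (illustrative)
firms = {"F1": 5.0, "F2": 4.5, "F3": 1.0, "F4": 0.8}  # firm target leverage
for node, lev in {**banks, **firms}.items():
    G.add_node(node, leverage=lev)
# High-leverage bank lends to high-leverage firms and vice versa: assortative mixing.
G.add_edges_from([("B1", "F1"), ("B1", "F2"), ("B2", "F3"), ("B2", "F4")])

# Leverage assortativity in [-1, 1]: near 1 means similar-leverage agents are linked.
assort = nx.numeric_assortativity_coefficient(G, "leverage")

# PageRank centrality; the paper's DebtRank is a related but distinct measure.
pr = nx.pagerank(G)
most_central = max(pr, key=pr.get)
```

On this toy graph the assortativity is strongly positive by construction, and the most central nodes are the banks, which concentrate the links.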

Normalized degree variations (deg) show higher AUROC levels, associated with higher R-squared and lower AIC, in two out of three sets. Thus, the normalized degree seems to be more efficient as an early warning measure than assortativity and ‘PageRank’ (Table 5 and the tables in the “Appendix”).

4 Precautionary macro policy experiments

The previous section suggested that credit and connectivity may be used as early warning measures of crisis: in this section we describe some simulation experiments on precautionary policies aimed at reducing crisis probability (Alessi and Detken 2011; Babecky et al. 2011; Betz et al. 2014; Drehmann and Juselius 2014; Alessi et al. 2015). The agent-based methodology allows us to implement macro-prudential policies that focus on agent heterogeneity and interactions (Popoyan et al. 2017); thus we tested a simple capital-related measure: when a bank is targeted by the policy, it cannot provide credit to riskier firms, i.e. firms with high leverage.Footnote 5
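The lending rule just described reduces to a single condition. The sketch below states it as a function; the function name and the default cap are illustrative (the text later uses a target-leverage cap of 5.0 in one scenario), not the model's actual code.

```python
def grants_loan(bank_targeted, firm_target_leverage, leverage_cap=5.0):
    """Capital-related rule sketch: a targeted bank refuses new credit
    to firms whose target leverage exceeds the cap."""
    if bank_targeted and firm_target_leverage > leverage_cap:
        return False  # policy bans lending to high-leverage firms
    return True       # untargeted banks, or safe firms, are unaffected

# Usage: an untargeted bank lends freely; a targeted one screens by leverage.
assert grants_loan(False, 10.0) is True
assert grants_loan(True, 10.0) is False
assert grants_loan(True, 3.0) is True
```

The policy scenarios below differ only in *which* banks are targeted (all, the biggest, or the most connected), not in this lending rule itself.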

Firstly, we apply this policy permanently to all the banks. After that, we focus the policy only on bigger and on more connected banks. We test these three scenarios by running 30 simulations lasting 33 years, as above. We apply the policy measures to the last 17 years of the simulation, using the same simulation seeds across the different policy scenarios in order to reproduce the same conditions when each policy begins.

In order to show the effects of this capital-related measure on the output level and crisis probability, we apply it permanently to all the banks and consider different maximum levels of firm riskiness allowed.

Figure 7 shows the effects of gradually tightening the policy, starting from no intervention (‘no’) and progressively reducing the maximum level of firm leverage allowed (the lower the firm leverage permitted, the stricter the policy). Initially, output slightly increases and crisis probability decreases. However, when the policy becomes stricter, the level of output declines because the credit supply suffers an excessive compression.

Fig. 7
figure 7

Strength of the prudential intervention. On the left, the aggregate output level in logarithms. On the right, crisis probability

The effective implementation of permanent credit restrictive measures applied to all the agents may be problematic, while the previous paragraphs show that network configuration dynamics may be correlated with crisis probability. Thus, we apply capital-related measures to banks that are central in the credit network, i.e. the biggest and most connected ones. Indeed, the distributions of net worth and normalized degree of banks in simulated data are characterized by the presence of a few large banks with high levels of connectivity (Fig. 8; Tables 1, 2). In order to evaluate the effect of the policy we consider: the output level, the crisis probability and the probability of intervention (i.e. the probability p(int) that in each period a bank is targeted by the policy).

Fig. 8
figure 8

Net-worth and normalized degree distribution of banks in simulated data

Bank size is computed according to the relative net worth level (\(E_{it}\)): bigger banks are those banks that cover a larger share of the total bank net worth in a given period of the simulation. The credit restrictive policy is temporary: it is applied only when a bank exceeds a certain relative size threshold, and it consists in banning credit to firms with relatively high target leverage (target leverage \(\lambda _{it}>5.0\)). We tested different relative size thresholds, starting from a high level of concentration (0.2: when a bank controls more than 20% of the total bank net worth).
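The size trigger can be sketched as follows: the restriction binds only while a bank's share of total bank net worth exceeds the threshold. Function and variable names are illustrative, not taken from the model's code.

```python
def targeted_by_size(net_worths, bank_index, share_threshold=0.2):
    """Return True if the bank's share of total bank net worth
    exceeds the relative size threshold (e.g. 0.2 = 20%)."""
    total = sum(net_worths)
    return net_worths[bank_index] / total > share_threshold

# Usage: with net worths [60, 20, 20], only the first bank is targeted at 20%.
assert targeted_by_size([60.0, 20.0, 20.0], 0) is True
assert targeted_by_size([60.0, 20.0, 20.0], 1) is False
```

Because the condition is re-evaluated each period, a bank drops out of the policy as soon as its net worth share falls back below the threshold, which is what makes the measure temporary.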

Indeed, when the credit restriction is applied to bigger banks only, the economy becomes more stable without a reduction of output levels (Fig. 9). For instance, if the policy is applied to banks that represent more than 10% of the total bank net worth, crisis probability decreases, while the output level remains stable. Moreover, the number of banks affected by this measure is quite limited.

Fig. 9
figure 9

Targeting larger banks. On the left, the aggregate output level in logarithms. In the center, crisis probability. On the right, intervention probability

Finally, we apply the previous capital-related measure only to the more connected banks. Figure 10 shows the effects of this policy calibrated on different normalized degree thresholds, starting from a threshold of 0.5 (i.e. targeting only the banks with a normalized degree higher than 0.5, that is, those banks connected with at least half of the firms). When the policy is focused on the more connected banks, crisis probability decreases without affecting output. For instance, targeting banks with a normalized degree higher than 0.3 consistently reduces crisis probability, while the output level remains stable and the number of banks affected by this policy is extremely low.
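The connectivity trigger mirrors the size trigger, with the normalized degree (links to firms over the number of firms) in place of the net worth share. Again, the names and defaults are illustrative only.

```python
def targeted_by_connectivity(n_links, n_firms, degree_threshold=0.3):
    """Return True if the bank's normalized degree (credit links to firms
    divided by the number of firms) exceeds the threshold."""
    return n_links / n_firms > degree_threshold

# Usage: a bank lending to 40 of 100 firms is targeted at a 0.3 threshold.
assert targeted_by_connectivity(40, 100) is True
assert targeted_by_connectivity(30, 100) is False
```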

Fig. 10
figure 10

Targeting more connected banks. On the left, the aggregate output level in logarithms. In the center, crisis probability. On the right, intervention probability

Therefore, policy interventions that target only bigger or more connected banks seem to be effective macro-prudential instruments. In fact, these selective policies reduce the number of agents that should be monitored. Moreover, they could be calibrated according to early warning measures. Thus, the coordination between early warning indicators and agent-specific interventions may help control systemic vulnerability.

5 Conclusions

In this paper we described an agent-based model that reproduces the evolution of a credit network as the result of the individual leverage choices of banks and firms. We calibrated the model on an empirical credit network dataset and, following the methodology used by Schularick and Taylor (2012), we analyzed the effectiveness of connectivity as an early warning indicator for crises.

The model underlines the importance of credit network configurations for the analysis of systemic risk. The empirical data we used and, particularly, the simulated data suggest that credit and connectivity variations are effective early warning measures for crises. Indeed, in our agent-based model, expansionary phases are associated with increasing credit and connectivity which, in turn, may create the conditions for crises: high levels of credit may be associated with high leverage, which increases firm failure probability. Firm failures negatively affect bank balance sheets, leading to credit contractions which, in a strongly connected network, may affect the whole economy. Therefore, credit network configurations may be used to define the timing of macro-prudential policies.

Moreover, according to our simulation experiments, capital-related policies based on quantitative restrictions may be effective in reducing crisis probability. Selective quantitative restriction measures focusing on larger or more connected banks may reduce systemic risk without negatively affecting credit supply and the output level. In fact, capital-related policies that target agents that are central in the credit network, in terms of both credit and connections, may reduce systemic vulnerability.

The model can be extended in different ways. For instance, were other datasets accessible, the analysis developed in this paper could be tested by calibrating simulations on other, possibly larger, samples of firms and banks. Moreover, the model could be extended to include different macro-prudential policies, trying to coordinate interventions with early warning measures.