1 Introduction

Over the past two decades, a multitude of studies have been conducted to understand how climate change might affect a range of natural and social systems, and to identify and evaluate options for responding to these effects. These studies have highlighted differences between systems in what is termed “vulnerability” to climate change, although without necessarily defining this term. As shown by Füssel and Klein [8], the meaning of vulnerability within the context of climate change has evolved over time. In the Third Assessment Report of the Intergovernmental Panel on Climate Change (IPCC), vulnerability to climate change was described as “a function of the character, magnitude and rate of climate variation to which a system is exposed, its sensitivity and its adaptive capacity” ([20], p. 995). Straightforward as it may seem, this conceptualisation of vulnerability has proven difficult to make operational in vulnerability assessment studies. Moreover, it appears to be at odds with those conceptualisations developed and used outside the climate change community (e.g., natural hazards, poverty).

It is increasingly argued that many climate change vulnerability studies, while effective in alerting policymakers to the potential consequences of climate change, have had limited usefulness in providing local guidance on adaptation (e.g., [26]) and that the climate change community could benefit from experiences gained in food security and natural hazards studies [11]. As a result, the climate change community is currently engaged in a process of analysing the meaning of vulnerability and redefining it such that assessment results would be more meaningful to those wishing to reduce vulnerability. The clearest preliminary conclusion reached to date is that there is much confusion.

We wish to take up the challenge put forward by O’Brien et al. [23], who suggested that “one way of resolving the prevailing confusion is to make the differing interpretations of vulnerability more explicit in future assessments, including the IPCC Fourth Assessment Report” (p. 12). We feel that there is an opportunity and, indeed, a need to cast the discussion in a formal, mathematical framework. By a “formal framework” we mean a framework that defines vulnerability using mathematical concepts that are independent of any knowledge domain and applicable to any system under consideration. The use of mathematics also avoids some of the limitations of natural language. For example, natural language can obscure ambiguities or circularities in definitions (e.g., [4]).

Inspired by conceptual work done by, among others, Kates [15], Jones [13], Brooks [1], Luers et al. [19], Turner et al. [29], Jaeger [12], Downing et al. [6] and Luers [18] and using vulnerability to climate change as our point of departure, we see this paper as taking a first step towards instilling some rigour and consistency into vulnerability assessment. Building on Suppes’ arguments in favour of formalisation in science [28], our motivation for developing a formal framework of vulnerability is fourfold:

  • A formal framework can help to ensure that the process of examining, interpreting and representing vulnerability is carried out in a systematic fashion, thus limiting the potential for analytical inconsistencies.

  • A formal framework can improve the clarity of communication of methods and results of vulnerability assessments, thus avoiding misunderstandings among researchers and between researchers and stakeholders (especially if the language used for communication is not the native one of all involved).

  • By encouraging assessments to be systematic and communication to be clear, a formal framework will help users of assessment results to detect and resolve any inaccuracies and omissions.

  • A formal framework is a precondition for any computational approaches to assessing vulnerability and will allow modellers to take advantage of relevant methods in applied mathematics, such as system theory and game theory.

Note that, consistent with Suppes [28], a formal framework of vulnerability is not the same as a formal framework for vulnerability assessment. We see the latter as being aimed at providing guidance to those conducting vulnerability assessments or measuring the costs and benefits of adaptation (e.g., [2, 3, 7]). A framework of vulnerability as proposed in this paper would be aimed at understanding the structure of vulnerability and, thereby, at clarifying statements and resolving disagreements on vulnerability. Also important to note is that the use of mathematics does not require quantitative input, nor does it have to lead to quantitative results. We use mathematical notation as a language in which we can formulate both qualitative and quantitative statements in a concise and precise manner.

We are well aware of two important caveats involved in developing a formal framework of vulnerability. First, the framework could be perceived as being overly prescriptive, limiting the freedom and creativity of researchers to generate and pursue their own ideas on vulnerability. Second, it could be seen as being developed for illegitimate rhetorical purposes, namely to throw sand in the eyes of those unfamiliar with mathematical notation. In spite of these caveats, we hope that the development of a formal framework of vulnerability will turn out to be a worthy undertaking, offering an opportunity for rigorous interdisciplinary research that can have important academic and social benefits. If this opportunity is seized by many, the risks represented by the two caveats will be minimised.

This paper is organised as follows: Section 2 investigates the “grammar” of vulnerability for three cases of increasing complexity; it extracts the building blocks for the actual formalisation of vulnerability and related concepts, which is presented in Section 3. Section 4 relates the framework thus developed to the approach taken to vulnerability assessment by the IPCC and in two recent studies. Section 5 presents conclusions and recommendations for future work.

2 Grammatical Investigation

Before analysing the current technical usage of the concept of vulnerability within the climate change community, this section starts with an analysis of the everyday meaning of the word. The reason for this is that we consider it likely that the technical usage represents a refinement of the everyday one. In Section 3, we then first present definitions that capture the more general meaning of vulnerability and refine them in order to represent the technical meaning.

2.1 Oxford Dictionary of English

The latest edition of the Oxford Dictionary of English gives the following definition of “vulnerable” ([27], p. 1977):

  1. Exposed to the possibility of being attacked or harmed, either physically or emotionally

  2. Bridge (of a partnership): liable to higher penalties, either by convention or through having won one game towards a rubber

The Oxford Dictionary of English provides the following example sentence with the first definition: “Small fish are vulnerable to predators”.

It follows from the definitions and the example sentence that vulnerability is a relative property: it is vulnerability of something to something. In addition, both the definitions and the example sentence make it clear that vulnerability has a negative connotation and therefore presupposes a notion of “bad” and “good”, or at least “worse” and “better”. It also follows that vulnerability refers to a potential event (e.g., of being harmed); not to the realisation of this event. In particular, the “good” or “bad” is a judgement referring to a possible future, while the vulnerability statement refers to the present entity. For instance, the small fish are vulnerable now because the predators might harm them in the future (considered “bad”).

This quality judgement implies, in both examples, a comparison of possible futures. In the case of the small fish, we compare a future evolution in which the fish are harmed by predators with one in which the fish are unharmed. If, say, the lake in which the fish live were so polluted that in a short time the fish would die irrespective of the presence of predators, we would probably not make the same vulnerability statement. Similarly, the bridge partnership is vulnerable (now) because they might lose (“bad”) more points than they would under “normal” circumstances (had they not, for example, won a game towards a rubber).

2.2 Vulnerability in Context: a Non-Climate Example

The small fish in the Oxford Dictionary of English example sentence are assumed to have no means of protecting themselves against predators. They would not be able to respond in any effective way once they realise that a predator has chosen them for lunch, with fatal consequences. Many natural and human systems, however, will be able to react to imminent threats or experience non-fatal consequences. Consider a motorcyclist riding his motorcycle on a winding mountain road, with the mountain to his left and a deep cliff to his right. There is no other traffic, but unbeknownst to the motorcyclist an oil spill covers part of the road ahead of him, just behind a left-hand curve. In natural language we would say that the oil spill represents a hazard and that the motorcyclist is at risk of falling down the cliff and being killed. Expanding on the example sentence of the Oxford Dictionary of English, we would consider that the motorcyclist is vulnerable to the oil spill with respect to falling down the cliff and being killed.

We would normally say that a second motorcyclist who drives more slowly or more carefully is less vulnerable to the oil spill, or that the hazard represents less of a threat to him. One challenge of formalising vulnerability is to account for such comparative statements.

The situation may be considerably more complex if we expand the time horizon. What about a third motorcyclist, who has heard about the oil spill on the road and has been able to prepare for it by buying new tires and improving his driving skills? Can his condition be meaningfully compared to that of the first two, who are confronted with an immediate hazard? What about a fourth motorcyclist, who has been informed but has no money to buy new tires?

Vulnerability to climate change has to account for all these different time scales and introduces new aspects, such as the ability of the vulnerable entity to act proactively to avoid future hazards (by mitigating climate change or by enhancing adaptive capacity).

2.3 Vulnerability to Climate Change

Climate change will affect many groups and sectors in society, but different groups and sectors will be affected differently for three important reasons. First, the direct effects of climate change will be different in different locations. Climate models project greater warming at high latitudes than in the tropics, sea-level rise will not be uniform around the globe and precipitation patterns will shift such that some regions will experience more intense rainfall, other regions more prolonged dry periods and still other regions both.

Second, there are differences between regions and between groups and sectors in society, which determine the relative importance of such direct effects of climate change. More intense rainfall in some regions may harm nobody; in other regions it could lead to devastating floods. Increased heat stress can be a minor inconvenience to young people; to the elderly it can be fatal. Tropical storms can lead to thousands of millions of dollars worth of damage in Florida; in Bangladesh they can kill tens of thousands of people.

Third, there are differences in the extent to which regions, groups and sectors are able to prepare for, respond to or otherwise address the effects of climate change. When faced with the prospect of more frequent droughts, some farmers will be able to invest in irrigation technology; others may not be able to afford such technology, lack the skills to operate it or have insufficient knowledge to make an informed decision. The countries around the North Sea have in place advanced technological and institutional systems that enable them to respond proactively to sea-level rise; small island states in the South Pacific may lack the resources to avoid impacts on their land, their people and their livelihoods.

As stated before, the IPCC Third Assessment Report described vulnerability to climate change as “a function of the character, magnitude and rate of climate variation to which a system is exposed, its sensitivity and its adaptive capacity” ([20], p. 995). This is consistent with the explanation above as to why different groups and sectors will be affected differently by climate change. Differences in exposure to the various direct effects of climate change (e.g., changes in temperature, sea level and precipitation) and different sensitivities to these direct effects lead to different potential impacts on the system of interest. The adaptive capacity of this system (i.e., the ability of a system to adjust to climate change to moderate potential damages, to take advantage of opportunities or to cope with the consequences; [20], p. 982) then determines the system’s vulnerability to these potential impacts. These relationships are made visible in Fig. 1.

Fig. 1 Graphical representation of the conceptualisation of vulnerability to climate change in the IPCC Third Assessment Report

3 Formalisation of Vulnerability and Related Concepts

An important result of the grammatical investigation is that the concept of vulnerability is a relative one: it is the vulnerability of an entity to a specific stimulus with respect to certain preference criteria that refer to possible future evolutions of the entity. We would expect any formalisation of vulnerability to represent these primitives, which leads us to look at how the concepts of “entity”, “stimulus” and “preference criteria” can themselves be formalised. More complex notions, such as “adaptive capacity”, will require the additional formalisation of the entity’s ability to act.

This section presents a stepwise formalisation of vulnerability. It uses mathematical notation, with which not every reader may be familiar. For those readers, Table 1 explains the mathematical symbols used in this section in the order in which they appear.

Table 1 Mathematical symbols and their meanings

The mathematical model developed in this section is meant as a clean-room environment, within which statements on vulnerability can be interpreted in isolation from the many real-world “details” that can obscure the issues (while, of course, providing the motivation for making such statements in the first place). The definitions are to be read as tentative formulations of vulnerability and related concepts. The reader is encouraged to criticise them and to submit alternatives.

3.1 Systems with Input

The mainstream mathematical interpretation of an entity is that of a dynamical system in a given state. This is the interpretation we will adopt here. The stimuli to which such a system can be subjected are then naturally represented by the inputs to the system. The simplest kind of dynamical system with input is a discrete, deterministic one, given by a transition function (see [14]):

$$ f : X \times E \to X, \label{eq:TransitionFunction} $$
(1)

where X is the set of states of the system and E is the set of inputs. In systems theory, the state of a system at a given time describes all relevant properties the system has at that time. In particle physics, the state of a particle contains not just the position of the particle but also its speed. Similarly, in economics, the state of a company would not be described just by the company’s capital but also by its manufacturing potential, relevant contracts, growth, etc.

Given the current state of the system x (an element of X; x ∈ X) and an input e (e ∈ E), the transition function tells us which element of X will be the next state of the system: f(x, e).

Example 1

We can interpret the example sentence from the Oxford Dictionary of English in the context of a simple Lotka–Volterra model [17, 30], where the state of the system of small fish is given by their number. The input could be represented by the number of predators in the same environment. Given the present number of small fish and that of predators, the transition function computes the number of small fish in the next time step according to

$$ f(x,e)=ax-bx^{2}-cxe, $$
(2)

where x is the number of small fish, e is the number of predators and a, b and c summarise environmental factors (e.g., density of fish population, reproduction rates).
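As an illustration only, this transition function can be written as a short, executable sketch in Python; the parameter values below are arbitrary assumptions made for the sake of the example, not values used in the paper.

```python
def fish_transition(x: float, e: float,
                    a: float = 1.5, b: float = 0.001, c: float = 0.01) -> float:
    """One-step transition of the small-fish system of Example 1: f(x, e) = a*x - b*x**2 - c*x*e.

    x is the current number of small fish, e the number of predators;
    a, b and c are illustrative environmental parameters (assumed values).
    """
    return a * x - b * x ** 2 - c * x * e

# One transition step: 100 small fish facing 5 predators.
next_state = fish_transition(100.0, 5.0)
```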

We have chosen this deterministic, discrete dynamical system because of its structural simplicity. The value of this simplicity will become apparent once we start analysing vulnerability. The trained mathematician can extend the formalism so as to cover continuous-time, non-deterministic or stochastic systems and so on, but the presentation of the results would be more difficult to follow.

In this simplified setting, let us first consider a possible evolution from a state x to be the state reached after one step under the given input, that is f(x, e).

Preference criteria are used to ascertain whether or not a possible evolution of the entity is “bad” or “good”. In the examples we have considered, we have seen that this judgement is usually made by comparison with a “normal” evolution, or an evolution under a “zero input”. In the case of the example sentence in the Oxford Dictionary of English, the evolution of the fish in the presence of predators was probably compared to one in which there were no predators. In the case of climate change, the evolution of the state under inputs representing the magnitude of climate change is usually compared with the evolution under smaller or no climate change.

Since we are comparing possible evolutions, and since we have made the simplifying assumption that an evolution is represented by a future state, we represent the preference criteria by a relation on the set of states X. Let us denote this relation by ≺, which can be read as “is worse than”. We expect ≺ to be

  1. Transitive, that is, if x ≺ y and y ≺ z, then x ≺ z

  2. Anti-reflexive, that is, no state is worse than itself.

We do not expect ≺ to be total, that is, sometimes we might not be able to say whether x ≺ y or y ≺ x (given anti-reflexivity this question is valid only if x ≠ y). A relation with these properties is called a partial strict order. In spite of ≺ representing preference criteria, ≺ will usually not be a preference relation as used in economics (e.g., [16]).

The following examples serve to develop some intuition about strict partial orders, by presenting simple typical cases.

Example 2

Assume \(B \subseteq X\) is a non-empty subset of states that can be interpreted as “bad states”. Consider the following relation:

$$ x {\prec} x' \equiv x \in B \wedge x' \not\in B, \label{eq:ex1} $$
(3)

that is, x is worse than x′ if and only if (iff) x is a state in B and x′ is a state outside B. This relation is a partial strict order (we cannot compare two states that are both in B, or both outside B).

Example 3

Suppose we have a function g : X → ℝ+ that associates to every state a positive real number. This function can be interpreted as an “impact function”, where impacts may be represented as costs. If we assume that the bigger the impact, the worse the state, we can define the relation ≺ by

$$ x {\prec} x' \equiv g(x) > g(x'). \label{eq:ex2b} $$
(4)

Again, ≺ is a partial strict order: we cannot compare states to which the same value has been associated by g.

Example 4

As before, assume we have a function g : X → ℝ+. We now consider a threshold value T ∈ ℝ+ and take ≺ to be defined by

$$ x {\prec} x' \equiv g(x) > T > g(x'), \label{eq:ex3} $$
(5)

that is, the impacts observed in state x are considered too high, while the impacts observed in x′ are acceptable. This example is a special case of Example 2: the “bad” states are those of which the impacts are above the threshold.
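The three orders of Examples 2–4 can also be written as simple predicates. The following sketch is only an illustration; it assumes that states can be arbitrary (hashable) Python values and that an impact function g returning a positive number is available.

```python
from typing import Callable, Set, TypeVar

X = TypeVar("X")  # the type of states

def worse_bad_set(bad: Set[X]) -> Callable[[X, X], bool]:
    """Example 2: x is worse than x' iff x lies in the set of bad states and x' does not."""
    return lambda x, x2: (x in bad) and (x2 not in bad)

def worse_impact(g: Callable[[X], float]) -> Callable[[X, X], bool]:
    """Example 3: x is worse than x' iff its impact g(x) is strictly larger."""
    return lambda x, x2: g(x) > g(x2)

def worse_threshold(g: Callable[[X], float], T: float) -> Callable[[X, X], bool]:
    """Example 4: x is worse than x' iff g(x) exceeds the threshold T while g(x') stays below it."""
    return lambda x, x2: g(x) > T > g(x2)
```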

3.2 Vulnerability with a Reference Input

In Section 2.1, we interpreted the Oxford Dictionary definition to mean that an entity is vulnerable to a stimulus if its evolution is going to be “bad”. We can now formalise this interpretation by considering the entity to be a system f in state x, the stimulus to be an input e and a “bad” evolution to be one that is “worse” than a “normal” or “reference” evolution. The reference evolution is considered to be obtained by applying to the state x a reference input, which we denote e*.

Accordingly, we have:

Definition

(Vulnerability with a reference input)

A system f : X × E → X in state x is vulnerable to e with respect to the strict partial order ≺ and the reference input e* if

$$ f(x, e) {\prec} f(x, e^*) $$
(6)

Example 5

For the system of small fish described in Example 1, the states are represented by positive real numbers. We take as the strict partial order the familiar strict order < on real numbers. It seems natural to take as a reference input the case of no predators, e* = 0. Thus, we have, applying the definition, that the system in a state x is vulnerable to e > 0 predators if

$$ f(x, e) < f (x, 0) $$

which, applying the definition of f, reduces to

$$ -cxe < 0 $$

which is always true. Thus, if the reference case is that of no predators, the small fish are indeed vulnerable to predators.
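The definition and Example 5 translate directly into code. The sketch below reuses the illustrative fish model (again with assumed parameter values) and checks the condition f(x, e) ≺ f(x, e*).

```python
def fish_transition(x, e, a=1.5, b=0.001, c=0.01):
    """Transition function of Example 1 (illustrative, assumed parameters)."""
    return a * x - b * x ** 2 - c * x * e

def vulnerable(f, x, e, e_ref, worse):
    """Vulnerability with a reference input: the next state under e is worse than under e_ref."""
    return worse(f(x, e), f(x, e_ref))

# "Worse" means fewer fish; the reference input e* = 0 represents the absence of predators.
print(vulnerable(fish_transition, 100.0, 5.0, 0.0, worse=lambda y, y2: y < y2))  # True
```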

Example 6

We can represent the state of the partnership in bridge by the score accumulated, the function f giving us the next score on the basis of the current score, x, and the input pair consisting of points lost or won, p, and the rubber status r. Thus,

$$ f : \mathbb{Z} \times (\mathbb{Z} \times \{ 1, 2\}) \to \mathbb{Z} $$
$$ f (x, (p, r)) = x + pr $$

The next score adds to the current score the penalties or gains, multiplied by 2 if the partnership has won one game towards a rubber and left unchanged otherwise.

When assessing the effects of a given penalty, p < 0, when r = 2, it is natural to take as a strict partial order the usual relation < on integers, and as reference input the same penalty p when r = 1. Thus, the partnership is vulnerable if

$$ f (x, (p, 2)) < f (x, (p, 1)) $$

which reduces to

$$\begin{array}{rcl} 2p &<& p \\ p &<& 0 \end{array}$$

which is true by assumption. Therefore, the definition does capture the fact that the partnership is vulnerable to penalties after having won a game toward a rubber.

Example 7

The motorcyclist examples discussed in Section 2.2 can be formalised as follows. The elements of state space X summarise the state of the motorcyclist with regard to his health, skills, motorcycle maintenance status and so on. The inputs are going to represent the road conditions: oil spill or no oil spill, 1 or 0. The transition function f gives us the state of a motorcyclist at the end of a journey along a fixed road, when starting from a state x with road conditions e. Thus, the different motorcyclists are represented by different starting states. When assessing the vulnerability of a motorcyclist along a road with an oil spill, e = 1, we compare the outcome with the one registered along the same road with no oil spill, e* = 0. We assume ≺ to be defined in a natural way, giving preference to cases without injuries or damages, though perhaps not as a total order (for example, we do not assume we can compare minor injuries and no motorcycle damage with no injuries but major motorcycle damage). Then, the motorcyclist will be vulnerable to the oil spill if

$$ f (x, 1) {\prec} f (x, 0) $$

We would expect that the “better” the starting state x (for example, the more prepared the motorcyclist is), other things being equal, the better the state at the end of the journey will be and, thus, the less likely it will be that the motorcyclist is vulnerable to the oil spill.

Example 8

When attempting to assess the influence of climate change on a region, one can use a computer model that projects the evolution of variables of interest, such as rainfall, sea level and so on, given a certain rise in mean temperature. In this case, the state of the region of interest, including the values of all variables of interest, is an element of the state space X. The transition function f gives us the state of the region after a certain period of time, say one decade, under the influence of a given rise in mean temperature, e. As a reference input, one can consider no rise, or an agreed-upon acceptable rise in mean temperature, e*. Our intuition accords with the formal definition in considering that the region in state x is vulnerable to the mean temperature rise expressed by e if f(x, e) ≺ f (x, e*), where ≺ is an appropriately chosen strict order. Different regions are represented by different states. As in the previous example, regions that are in a “better” initial state are less likely to be vulnerable to a given mean temperature rise.

The last two examples illustrate a common pattern of comparing the vulnerability of different entities. This is the case in which the two different entities are represented by different elements of the state space, and are subject to the same input, while the reference input is also the same. The identity of these inputs and of the transition function allows one to avoid the risk of “comparing apples and oranges”.

3.3 Dynamical Extensions

When defining vulnerability with a reference input, we have associated an evolution of the system from a current state with the next state given by the transition function. Correspondingly, the input and the reference input were elements of an input space E, on the basis of which the transition function computed the next state. The examples provided also have this “punctual” or “one-step” character. However, in many applications, it is more natural to consider an evolution of the system to be a sequence of states, and to consider scenarios and reference scenarios instead of punctual inputs for the vulnerability assessment.

A scenario is just a sequence of inputs: \(\mathit{es} = [e_1, e_2, \ldots, e_n]\). Corresponding to such a sequence, the system will undergo n transitions, \(\mathit{xs} = [x_0, x_1, \ldots, x_n]\), where:

$$ \label{moretransitions}\begin{array}{ll} &x_0 = x \\ &x_1 = f(x_0, e_1) \\ &x_2 = f(x_1, e_2) \\ &\ldots \\ &x_n = f(x_{n-1}, e_n). \end{array} $$
(7)

A sequence of states such as \(\mathit{xs}\) is usually called a trajectory of length n. Similarly, we consider a reference scenario to be a sequence of reference inputs, \(\mathit{es^*} = [{e^*}_1, {e^*}_2, \dots, {e^*}_n]\). Using the reference scenario, we can compute a reference trajectory, denoted \(\mathit{xs^*} = [{x^*}_0, {x^*}_1, \dots, {x^*}_n]\), where \({x^*}_0 = x\) (the initial state) and \({x^*}_{k+1} = f({x^*}_k, e^{*}_{k+1})\).

As before, we compare evolutions with a strict partial order ≺, but now we assume ≺ to be defined on trajectories of length n, instead of just on elements of X.

With these preliminaries, we have:

Definition

(Vulnerability with a reference scenario) A system f : X × E → X in state x is vulnerable to input scenario \(\mathit{es} \in E^n\) with respect to the strict partial order ≺ and the reference scenario \(\mathit{es^*} \in E^n\) if

$$ \mathit{xs} {\prec} \mathit{xs^*} $$
(8)

where \(\mathit{xs}\) and \(\mathit{xs^*}\) are the trajectories induced by the input scenario and reference scenario, respectively.

Example 9

In Example 8, we have considered the evolution of the state of a region given a change in mean temperature. In most applications, however, we consider the evolution when given a sequence of changes in mean temperature, say one per year for a period of 50 years. In vulnerability assessment, it is of interest how the trajectory of the region compares to the reference trajectory at every point, not just in the final state. Even if the final state is acceptable, for instance, the trajectory might lead to undesirable or unacceptable states in intermediate time periods.
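A sketch of the scenario-based definition: a trajectory is generated by folding the transition function over the input scenario and is then compared with the reference trajectory. The point-wise order used here ("worse in at least one state and never better") is only one of many possible choices and is assumed purely for illustration.

```python
def trajectory(f, x0, scenario):
    """Compute [x0, x1, ..., xn] by applying the transition function along the input scenario."""
    xs = [x0]
    for e in scenario:
        xs.append(f(xs[-1], e))
    return xs

def worse_trajectory(xs, xs_ref, worse_state):
    """An assumed order on trajectories: worse than the reference in some state, never better."""
    pairs = list(zip(xs, xs_ref))
    somewhere_worse = any(worse_state(x, x_ref) for x, x_ref in pairs)
    never_better = not any(worse_state(x_ref, x) for x, x_ref in pairs)
    return somewhere_worse and never_better

def fish_transition(x, e, a=1.5, b=0.001, c=0.01):
    return a * x - b * x ** 2 - c * x * e

scenario = [5.0, 5.0, 5.0]    # three time steps with predators present
reference = [0.0, 0.0, 0.0]   # reference scenario: no predators
xs = trajectory(fish_transition, 100.0, scenario)
xs_ref = trajectory(fish_transition, 100.0, reference)
print(worse_trajectory(xs, xs_ref, lambda y, y2: y < y2))  # True
```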

3.4 Systems with Control

The definitions presented so far capture some important aspects of the concept of vulnerability as used in everyday language and technical applications. However, they are insufficient to represent terms that apply to more complex systems capable of learning, incorporating feedbacks and developing possibilities of adaptation. To overcome this limitation, we need to extend our system by distinguishing inputs from controls, the latter representing the possibility for the system to influence the transition. Denoting the set of controls by U, we have

$$ f : X \times E \times U \to X, \label{eq:endosys} $$
(9)

and therefore, the next state f(x, e, u) depends on the value of u ∈ U. We refer to u alternatively as actions, commands or controls of the system. As in the case of inputs, where we considered e* to be a reference input or a “no input”, the set U will be assumed to contain a “reference” action or a “do nothing” option, denoted u*. The transition function f will, in general, be partial: not all actions are possible in every state.

Example 10

When trying to assess the vulnerability of farmers in a given region to climate change, we have to consider not only the input change in mean temperature, but also their possibilities of reacting to or foreseeing these changes, and taking action, for example by switching crops or investing in irrigation technology. Let X be the set of possible states of the farmers: an element x ∈ X will give the relevant information about the geographical location of the farmers, the capital and technology at their disposal, the access to information and so on. The set of inputs will be ℝ, representing the changes in temperature. The various possibilities of action of the farmers are represented by the set U. The transition function f : X × ℝ × U → X will be partial: an action u, say, investing in irrigation, will be available to some farmers but not to others, for example, depending on the capital available in the state x.

We can now tackle the definitions of hazards and potential impacts. A hazard is, intuitively, an input that has the potential to lead to a “bad” evolution of the system. In order to simplify the treatment, we choose to identify an evolution with a next state, as in Section 3.2, instead of with a trajectory of length n, as in Section 3.3. This seems desirable, especially in view of the added complication that the system can react by choosing an action u. Reformulating the definitions for the more complex interpretation could be a useful exercise for the more mathematically inclined reader.

In the following, we assume that we have fixed a reference input e* and a reference control u*. We start by defining a relative hazard, that is, one that depends on an action of the system. A hazard then is an input for which there exists at least one control that would lead to a worsening of the situation.

Definition

(Relative hazard) An input e ∈ E is a relative hazard for a system f in state x relative to an action u ∈ U if f(x, e, u) ≺ f(x, e*, u*).

Definition

(Hazard, potential impact) An input e ∈ E is a hazard for a system f in state x if \(\exists\ u \in U: f(x, e, u) {\prec} f(x, e^*, u^*)\). In this case, f(x, e, u) is called a potential impact.

Example 11

A given change in mean temperature is a hazard for a given farmer if there exists an action the farmer might take that will lead to a worse state than the reference action taken under the reference temperature change (for example, the yield will be less valuable in the first case than in the second).

Definition

(Unavoidable hazard) An input e is an unavoidable hazard for a system f in state x if ∀ u ∈ U: f(x, e, u) ≺ f (x, e*, u*).

Example 12

A given change in mean temperature is an unavoidable hazard for a farmer if, no matter what action is taken, the resulting yield is worse than in the reference case.
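The two hazard definitions differ only in the quantifier over the available actions. The following sketch makes this explicit for a finite set of controls; the toy "farm" model and its numbers are assumptions introduced here for illustration, not part of the framework.

```python
def is_hazard(f, x, e, e_ref, u_ref, controls, worse):
    """e is a hazard if SOME available action leads to a state worse than the reference outcome."""
    return any(worse(f(x, e, u), f(x, e_ref, u_ref)) for u in controls)

def is_unavoidable_hazard(f, x, e, e_ref, u_ref, controls, worse):
    """e is an unavoidable hazard if EVERY available action leads to a worse state."""
    return all(worse(f(x, e, u), f(x, e_ref, u_ref)) for u in controls)

# Toy farmer model (assumed): the state is the yield value, e a temperature change,
# u the fraction of the land that is irrigated.
def farm(x, e, u):
    return x - 10.0 * e * (1.0 - u)

controls = [0.0, 0.5, 1.0]   # u* = 0.0 plays the role of the "do nothing" action
worse = lambda y, y2: y < y2
print(is_hazard(farm, 100.0, 2.0, 0.0, 0.0, controls, worse))              # True
print(is_unavoidable_hazard(farm, 100.0, 2.0, 0.0, 0.0, controls, worse))  # False
```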

As a final remark, “risk” is usually defined as a measure of the set of potential impacts. This measure could be, for example, the sum of damages associated with the potential impacts weighted by their respective probabilities. In the case of the motorcyclists, the risk could be taken as the probability of them falling down the cliff. However, if we consider the possible outcomes to be injuries and damage instead of alive or dead, we could take the risk as being the expected value of the injuries and damage (the sum of damages weighted by the respective probabilities).

3.5 Adaptive Capacity

Given a system in state x, subjected to an input e, we can define a number of problems:

  (a) Optimisation

    Choose an action u ∈ U such that f(x, e, u) is optimal, that is, \(\forall\ u'\in U: u' \not = u\), we have not (f(x, e, u) ≺ f(x, e, u′)).

    The optimisation problem as stated here does not necessarily have a unique solution (it may have several or none). In addition, in realistic situations we will not have complete knowledge of f, and therefore, we will at most be able to solve approximate versions of the problem. A more useful question is, therefore:

  (b) Adaptation

    Choose an action u ∈ U such that we have not (f(x, e, u) ≺ f(x, e*, u*)).

    Such an action avoids the potential impacts. We call it effective. For many practical purposes, “effectiveness” is not a clear-cut notion. For example, an action might avoid part of the impact. Future refinements of the framework will consider this aspect.

Definition

(Effective action) An action u is effective for a system f in state x subjected to an input e if not (f(x, e, u) ≺ f(x, e*, u*)).

If there are no effective actions against e, then e is an unavoidable hazard. If e is not unavoidable, then problem b has at least one solution. The set of effective actions available to the system can be used to interpret the notion of adaptive capacity. For example, we could consider the set itself:

Definition

(Adaptive capacity as a set) The adaptive capacity of a system f in state x subjected to an input e is represented by the set of its effective actions.

Example 13

The adaptive capacity of the motorcyclist is the set of all actions that do not result in the motorcyclist falling down the cliff due to the oil spill. It can be thought of as a measure of, among other things, his skill set and the technical specifications of the motorcycle.

Example 14

Similarly, the adaptive capacity of the farmer is the set of all actions that do not result in a poorer yield than in the reference case. It is a measure of the farmer’s planting choices, access to information and other resources and so on.
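Using the same toy farmer model as in the sketch above (again an assumption made only for illustration), the "adaptive capacity as a set" definition amounts to filtering the available actions for effective ones:

```python
def effective_actions(f, x, e, e_ref, u_ref, controls, worse):
    """Adaptive capacity as a set: the actions whose outcome is NOT worse than the reference outcome."""
    reference = f(x, e_ref, u_ref)
    return [u for u in controls if not worse(f(x, e, u), reference)]

def farm(x, e, u):
    # Toy model (assumed): the yield drops with warming e unless a fraction u of the land is irrigated.
    return x - 10.0 * e * (1.0 - u)

controls = [0.0, 0.5, 1.0]
print(effective_actions(farm, 100.0, 2.0, 0.0, 0.0, controls, lambda y, y2: y < y2))
# [1.0]: in this toy setting, only full irrigation is an effective action.
```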

If we consider the quality of the actions available to the system, not just their number, we may also define adaptive capacity as a measure of this quality. The complication here is that such a definition would require the additional assumption that actions have “qualities” that can be measured and compared. In this paper, we have chosen to make a minimal set of such assumptions, as we aim for generality in our definitions.

3.6 Co-evolution of System and Environment

One aspect not yet captured by our framework is that vulnerability to climate change is the result of a long-term interaction between the system and its environment. To take this interaction into account, we introduce a model of the environment as a dynamical system, h : X × E × U → E, so that the next input from the environment h(x, e, u) depends on the state of the system and on the control.

As in the previous, static case, given \(x_0\) and \(e_0\), we can define a number of problems. In the static case, the problems involved finding an action u with some property (e.g., optimality). In the dynamic case we need to find a policy ϕ : X × E → U, that is, a function that specifies which actions are to be taken, depending on the state of the system and the input with which it is faced.

Let us consider an initial state of the system, \(x_0\), and an initial input from the environment, \(e_0\). Given a policy ϕ, we can consider trajectories of length n, as in Eq. 7:

$$\begin{array}{rll} u_0 &=& \phi(x_0, e_0), \;\; x_1 = f(x_0, e_0, u_0), \;\; e_1 = h(x_0, e_0, u_0) \\ u_1 &=& \phi(x_1, e_1), \;\; x_2 = f(x_1, e_1, u_1), \;\; e_2 = h(x_1, e_1, u_1) \\ & \ldots & \\ u_{n-1} &=& \phi(x_{n-1}, e_{n-1}), \;\; x_n = f(x_{n-1}, e_{n-1}, u_{n-1}), \;\; e_n = h(x_{n-1}, e_{n-1}, u_{n-1}). \end{array} $$
(10)

In order to make statements about vulnerability, we also need a strict order on trajectories and a reference trajectory \(\mathit{xs}^*\).
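The closed loop of Eq. 10 can be sketched as follows; the component functions f, h and ϕ used at the end are placeholders chosen only to make the sketch runnable, not models proposed in this paper.

```python
def closed_loop(f, h, phi, x0, e0, n):
    """Simulate n steps of Eq. 10: the policy picks an action, f gives the next state, h the next input."""
    xs, es, us = [x0], [e0], []
    for _ in range(n):
        x, e = xs[-1], es[-1]
        u = phi(x, e)
        us.append(u)
        xs.append(f(x, e, u))
        es.append(h(x, e, u))
    return xs, es, us

# Placeholder components (assumptions for illustration only).
f = lambda x, e, u: x - e + u        # system: the state is eroded by the input, restored by the action
h = lambda x, e, u: e + 0.1          # environment: the input drifts slowly upwards
phi = lambda x, e: min(e, 1.0)       # policy: compensate the input, up to a fixed capacity

xs, es, us = closed_loop(f, h, phi, x0=10.0, e0=0.5, n=5)
```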

A natural condition on a policy ϕ is that the actions it returns should be effective where possible. Under this assumption we define the following problems:

  (c) Optimisation

    Choose a policy ϕ such that the actions taken drive the system along an optimal trajectory.

    As in the static case, the problem will in most cases have several or no solutions, and for realistic examples only approximate versions of the problem will be solvable.

  (d) Mitigation

    Choose a policy ϕ such that, for all k ∈ {0, 1, ..., n}, no \(e_{k+1} = h(x_k, e_k, u_k)\) is an unavoidable hazard. This means that, for each such input, an action can be chosen that leads to a state that is not worse than the corresponding state on the reference trajectory.

  (e) Maintaining adaptive capacity

    Choose a policy ϕ such that, for all k ∈ {0, 1, ..., n}, there exists at least one effective \(u_k\). Again, the effectiveness of an action is assessed in terms of the reference trajectory: \(f(x_k, e_k, u_k)\) is not worse than \(x^*_{k+1}\).

Both problems d and e can have more than one solution, even when problem c has no solution. For stochastic systems, problem d might translate to the reduction of the probability of all unavoidable hazards below a certain threshold. Similarly, for a less abstract notion of effectiveness, problem e might require, for example, the improvement of the effectiveness of actions available at step k and therefore of adaptive capacity. These and other refinements to the framework are in progress and will be presented separately.

3.7 Multiple Agents

Owing to the insistence on ascribing vulnerability to an entity, it might seem that our framework cannot represent multiple agents. This is not the case: in this section, we show two possible ways of dealing with interacting systems.

For simplicity, we consider two systems:

$$\begin{array}{ll} &f_1 : X_1 \times E \times X_2 \times U_1 \to X_1, \\ &f_2 : X_2 \times E \times X_1 \times U_2 \to X_2. \end{array} $$
(11)

The systems interact with the environment and with each other:

$$\begin{array}{lll} x_{1,k+1} & = & f_1(x_{1,k}, e_k, x_{2,k}, u_{1,k}),\\ x_{2,k+1} & = & f_2(x_{2,k}, e_k, x_{1,k}, u_{2,k}). \end{array} $$
(12)

Let us assume we have (partial) strict orders \({\prec}^1\) and \({\prec}^2\) on \(X_1\) and \(X_2\), respectively.

A first problem would be an assessment of the vulnerability of the combined system:

$$ f_{1,2} : X_{1,2} \times E \times U_{1,2} \to X_{1,2}, $$
(13)

where

$$\begin{array}{rll} X_{1,2} &=& X_1 \times X_2, \\ U_{1,2} &=& U_1 \times U_2, \\ f_{1,2}((x_1, x_2), e, (u_1, u_2)) &=& (f_1(x_1, e, x_2, u_1), f_2(x_2, e, x_1, u_2)). \end{array} $$
(14)

This assessment requires choosing a (partial) strict order on the set \(X_{1,2}\), which would combine the two (partial) strict orders, \({\prec}^1\) and \({\prec}^2\). For example, we can choose

$$ (x_1, x_2) {{\prec}^{1,2}} \big(x_1', x_2'\big) \quad \text{iff} \quad x_1 {{\prec}^1} x_1' \quad \text{and} \quad x_2 {{\prec}^2} x_2'. $$
(15)

In this case, the roles of the two systems are symmetrical.

We can give more weight to one of the systems by combining the (partial) strict orders in a lexicographical way:

$$ (x_1, x_2) {{\prec}^{1,2}} \big(x'_1, x'_2\big) \quad \text{iff} \quad x_1 {{\prec}^1} x'_1 \quad \text{or} \quad \big(x_1 = x'_1 \quad \text{and} \quad x_2 {{\prec}^2} x'_2\big). $$
(16)

Here, the first system is given more importance because, if its state grows worse, the combined system is considered to be worse off, whereas the state of the second system is only relevant if the first one remains unchanged.
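The two ways of combining the component orders (Eqs. 15 and 16) can be expressed as predicates on pairs of states; the numeric example at the end is an assumption used only to show the difference between the two combinations.

```python
def symmetric(worse1, worse2):
    """Eq. 15: the combined state gets worse only if BOTH components get worse."""
    return lambda p, q: worse1(p[0], q[0]) and worse2(p[1], q[1])

def lexicographic(worse1, worse2):
    """Eq. 16: the first component dominates; the second matters only when the first is unchanged."""
    return lambda p, q: worse1(p[0], q[0]) or (p[0] == q[0] and worse2(p[1], q[1]))

lt = lambda a, b: a < b   # "worse" on numeric component states
print(symmetric(lt, lt)((1, 5), (2, 3)))      # False: the second component did not get worse
print(lexicographic(lt, lt)((1, 5), (2, 3)))  # True: the first component alone decides
```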

A second possible problem is to assess the vulnerability of each system independently. Taking the case of the first system, we would simply consider the environment as including the second system:

$$ f'_1 : X_1 \times E' \times U_1 \to X_1, $$
(17)

where

$$\begin{array}{rll} E' & = & E \times X_2, \\ x_{1,k+1} & = & f'_1 (x_{1,k}, (e_k, x_{2,k}), u_{1,k}). \end{array} $$
(18)

The problems of optimisation, mitigation and maintaining adaptive capacity can now be addressed with respect to the extended environment. Multi-scale analysis becomes important in this case because the environment will contain a part, \(f_2\), which operates at the same scale as the system \(f_1\), and another part, given by the evolution of e, which typically takes place at a much slower pace.

4 Preliminary Applications

The objective of this section is to relate the framework developed in Section 3 to the IPCC conceptualisation of vulnerability (see Fig. 1) and to two recent vulnerability assessments: Advanced Terrestrial Ecosystem Analysis and Modelling (ATEAM) and Dynamic and Interactive Assessment of National, Regional and Global Vulnerability of Coastal Zones to Climate Change and Sea-Level Rise (DINAS-COAST). It is not our objective to evaluate the IPCC conceptualisation and the two assessments but, rather, to test the practical applicability of the framework using real examples. The choice of these examples is motivated chiefly by the fact that we have first-hand knowledge of both the IPCC conceptualisation and the two assessments. Future work will apply the framework to vulnerability assessments that are more qualitative in nature, do not follow the IPCC conceptualisation and do not focus primarily on climate change.

As mentioned earlier and discussed in detail by Füssel and Klein [8], the meaning of vulnerability within the context of climate change has evolved over time. This is reflected in the respective assessment reports of the IPCC. In our application, we use the definition of vulnerability as provided in the glossary of the Working Group II contribution to the Third Assessment Report [20]. The approaches of both ATEAM and DINAS-COAST were developed to be consistent with this definition. An important difference between the two projects is that DINAS-COAST explicitly considered feedback from human action on the natural system.

4.1 Intergovernmental Panel on Climate Change

In the glossary of the Working Group II volume of the IPCC Third Assessment Report, vulnerability is defined as “the degree to which a system is susceptible to, or unable to cope with, adverse effects of climate change, including climate variability and extremes. It is a function of the character, magnitude and rate of climate variation to which a system is exposed, its sensitivity, and its adaptive capacity” ([20], p. 995). The extent to which this definition can be made operational for assessing vulnerability is limited because the defining elements themselves are not well defined or understood. In addition, vulnerability is said to be a function of exposure, adaptive capacity and sensitivity, but no information is given about the form of this function. As a result, we can only verify whether or not all elements of the IPCC definition are contained in our framework and whether there are any inconsistencies between the two.

There are four defining elements in the IPCC definition, two of which can be mapped directly to primitives used in our definition. The first element, the “degree to which a system is susceptible to, or unable to cope with”, is represented in our definition by the (partial) strict order ≺. These preference criteria on the set of states X make it possible to assert that the system may end up in an undesirable state, in which it is “unable to cope with” some stimulus. The second element, the “character, magnitude and rate of climate variation to which a system is exposed” describes the (climate) stimulus to which the system is exposed. In our definition, this element is the input e. Since we want to be able to consider non-climatic input as well, we do not limit e to climate stimuli.

The other two defining elements in the IPCC definition, sensitivity and adaptive capacity, have no direct correspondent in the primitives of our framework. We consider both sensitivity and adaptive capacity to be more complex properties of a system, unsuitable as starting points for a formal definition. However, both concepts can be defined using our primitives (see the discussion in Section 3.5). “Sensitivity” is a well-established concept in system theory, characterising how much a system’s state is affected by a change in its input. It requires the differentiability of the transition function f. If this requirement is met, it can be shown that a system cannot be vulnerable to an input if it is not sensitive to that input, which agrees with the IPCC definition. However, in our framework, this requirement and the notion of sensitivity are not necessary to define vulnerability.

The fourth element, adaptive capacity, is defined by us as the set of effective actions available to the system. In our framework, it is a more complex notion than vulnerability in that its definition relies on four primitives, not three. In addition to a dynamical system, an input and a (partial) strict order, controls are required to define adaptive capacity (see Section 3.5). In contrast to the IPCC conceptualisation, knowledge of adaptive capacity is not required for assessing vulnerability, as is illustrated by the case of simple systems (as in Example 1). However, adaptive capacity will influence the vulnerability of the more complex systems typically considered by the IPCC. As shown in Section 3.6, assessments of vulnerability and adaptive capacity are interrelated: their influence on one another depends on the preference criteria chosen.

4.2 Advanced Terrestrial Ecosystem Analysis and Modelling

The project ATEAM (http://www.pik-potsdam.de/ateam/) was funded by the Research Directorate-General of the European Commission from 2001 to 2004. It was concerned with the risks that global change poses to the interests of people in Europe relying on the following services provided by ecosystems: agriculture, forestry, carbon storage and energy, water, biodiversity and mountain tourism. It involved 13 partners and six subcontractors, whose joint activities resulted in the development of a vulnerability mapping tool [21]. The project adopted the IPCC conceptualisation of vulnerability, which required combining information on potential impacts with information on adaptive capacity (see Fig. 1). Socio-economic data were used to assess adaptive capacity on a sub-national scale, in a way that allowed it to be projected into the future using the same set of scenarios as for the assessment of potential impacts. The information on potential impacts and adaptive capacity was then combined in a series of vulnerability maps [25].

When taking a closer look at ATEAM using the formal framework of Section 3, we first need to identify the framework’s three primitives. ATEAM aimed “to assess where in Europe people may be vulnerable to the loss of particular ecosystem services, associated with the combined effects of climate change, land use change and atmospheric pollution” ([22], p. 3). Thus, the entity is a coupled human–ecological system: the people in Europe who rely on ecosystem services. The system receives both input (the stimuli) and controls (the human actions). The evolution of such a system can be given by

$$ x_{k+1} = f(x_{k}, e_{k}, u_{k}), \label{eq:EvolutionEquationWithControl} $$
(19)

where k denotes the time step and \(u_k\) is an element of the set of available controls \(U_k\), which are the management actions people can apply to adapt to potential impacts and, thus, maintain the ecosystem services on which they rely. These actions are usually specific to the ecosystem service considered. For example, a management action for ensuring the ecosystem service “agriculture” could be to irrigate the land.

The second primitive is the stimulus or input e ∈ E, to which the system’s vulnerability was assessed. This input was given by the scenarios of climate, land use and nitrogen deposition, which represent the possible evolutions of the environment. The scenarios were based on the IPCC SRES storylines (for details, see [22]).

The third primitive notion concerns the preference criteria represented by a (partial) strict order ≺, which relate to the loss of ecosystem services. We will discuss the preference criteria in more detail below. Given these three primitive notions, it is now possible to interpret ATEAM as assessing the vulnerability of a region (more accurately: people in a region) in state \(x_k\) to an input \(e_k\) with respect to ≺. One way of doing so would be to compute the set of possible next states \(X_{k+1}\) by evaluating Eq. 19 for all actions in the set of controls \(U_k\) and to compare this set to the previous state \(x_k\) or to possible next states obtained by a different scenario. Note that, for clarity of presentation, we consider only one transition of the system, that is, we associate an evolution to a next state, and not to a trajectory.

In the case of ATEAM, the transition function of the coupled human–ecological system f in Eq. 19 was not known. The available knowledge, in the form of ecological and hydrological models, did not consider the feedback from human action to ecosystems. The models can be thought of as simplifying the “real” non-deterministic system into a deterministic one by assuming some average action \(\tilde{u}\) that is independent of the input. This average action represents “management as usual”. The transition function of the deterministic system can then be given by

$$ x_{k+1} = f_{\tilde{u}}(x_{k}, e_{k})\, . \label{eq:ADAMDeterministicEvolution} $$
(20)

This equation now allows for the computation of possible future states (i.e., \(x_{k+1}\)) for the given scenarios. However, to assert that an entity is vulnerable, the third primitive, a (partial) strict order, is needed to compare different states (e.g., future states with present states, states determined by different scenarios or states of different regional sub-systems). In the case of ATEAM, the elements of the set of states X are vectors, so it is not trivial to provide an appropriate order relation. The (partial) strict order was therefore developed in consultation with stakeholders in the form of an impact function on the set of states (also referred to as output or indicator function), in a similar way as shown in Example 3. The impact function reduces the thematic components of the state vector to a single real number between 0 and 1 for each ecosystem service. The spatial dimension of the state could be seen as the combined state of several regions (here, “combined” is taken as in Section 3.7). A benefit of the indicator-based approach is that comparisons could be made between these regions. To allow for such comparisons was one of the main objectives of ATEAM. Depending on the purposes of the assessment, the reference input could be chosen to be “no input”, that is, the next state was compared to the current one, or one of the other inputs prepared in accordance with the SRES scenarios.
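The two steps described above, fixing a "management as usual" action and ordering states through an impact function, can be sketched as follows. All functions and numbers here are placeholders assumed for illustration; they are not ATEAM's actual models or indicators.

```python
def fix_action(f, u_usual):
    """Reduce the controlled system f(x, e, u) to a deterministic one by fixing the average action."""
    return lambda x, e: f(x, e, u_usual)

def worse_by_impact(impact):
    """Order states via an impact function that maps a state vector to a number in [0, 1]."""
    return lambda x, x2: impact(x) > impact(x2)

# Placeholder coupled system and impact function (assumed).
f = lambda x, e, u: [x[0] - e + u, x[1] + 0.5 * e]
impact = lambda x: min(1.0, max(0.0, 0.1 * x[1]))

f_usual = fix_action(f, u_usual=0.2)     # "management as usual"
worse = worse_by_impact(impact)
x_now = [5.0, 1.0]
x_next = f_usual(x_now, 0.5)
# Comparing the next state with the current one corresponds to the "no input" reference.
print(worse(x_next, x_now))              # True: the impact indicator has increased
```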

Up to this point, the approach was that of a traditional assessment of potential impacts. However, ATEAM also assessed the third element of the IPCC definition: adaptive capacity. Adaptive capacity was modelled as an index that was chosen to be a real number between 0 and 1. It was developed by building a statistical model from observed socio-economic data, which was then applied to the IPCC SRES scenarios to produce future projections of adaptive capacity.

The adaptive capacity index can be seen within our framework as an estimate of the size of the set of available actions \(U_k\). The socio-economic data used to derive the index (e.g., GDP per capita, literacy rate and labour participation rate of women) indicate the capacity of society to prepare for and respond to impacts of global change by choosing an appropriate action (i.e., ecosystem management strategy). The size of this set of actions can be assumed to be an indication of the size of the set of effective actions, since the latter is a subset of the former.

4.3 Dynamic and Interactive Assessment of National, Regional and Global Vulnerability of Coastal Zones to Climate Change and Sea-Level Rise

The project DINAS-COAST (http://www.dinas-coast.net) was also funded by the Research Directorate-General of the European Commission from 2001 to 2004. Five partners and two subcontractors worked together to develop the dynamic, interactive and flexible tool Dynamic and Interactive Vulnerability Assessment (DIVA, [5]). DIVA enables its users to assess coastal vulnerability to sea-level rise and to explore possible adaptation policies. While also following the IPCC conceptualisation, DINAS-COAST took a somewhat different approach to assessing vulnerability compared to ATEAM in that it included feedback from human action to the environment in the representation of the vulnerable system.

At the core of DIVA is an integrated model of the coupled human–environment coastal system, which itself is composed of modules representing different natural and social coastal subsystems [9, 10]. The model is driven by sea-level and socio-economic scenarios and computes the geodynamic effects of sea-level rise on coastal systems, including direct coastal erosion, erosion within tidal basins, changes in wetlands and the increase of the backwater effect in rivers. Furthermore, it computes socio-economic impacts that are either due directly to sea-level rise or are caused indirectly via the geodynamic effects.

Let us now analyse this model in terms of the three primitives of our framework. The first primitive, the vulnerable entity, is the coastal system. The second primitive, the stimulus or input to which the entity’s vulnerability was assessed, was given in the form of climate, land-use and socio-economic scenarios. Similar to ATEAM, these were developed on the basis of the IPCC SRES storylines.

In contrast to ATEAM, the transition function of the coupled human–environment system was known and has the form of Eq. 19. In addition to the input, controls (i.e., adaptation actions) were included in the model. The actions contained in the set of controls U were (1) do nothing, (2) build dikes, (3) move away and (4) nourish the beach or tidal basins.

Given f, U and a set of scenarios E, the vulnerability of the system could have been assessed by computing the transition of the system for every adaptation action u ∈ U and comparing the resulting set of possible states \(X_{k+1}\) with the previous state \(x_k\). However, doing so would be computationally expensive. Instead, DIVA introduced adaptation policies. An adaptation policy is a function that returns an adaptation action u for every state of the system and input it receives from the environment:

$$ \phi : X \times E \to U \, ,\qquad \phi(x_{k}, e_{k}) = u_{k}\, . \label{policy} $$
(21)

The following adaptation policies were considered:

  • No adaptation: the model computes only potential impacts.

  • Full protection: raise dikes or nourish beaches as much as is necessary to preserve the status quo (i.e., \(x_0\)).

  • Optimal protection: optimisation based on the comparison of the monetary costs and benefits of adaptation actions and potential impacts.

  • User-defined protection: the user defines a flood return period against which to protect.

The composition of the adaptation policy ϕ with the state transition function f transforms the non-deterministic system into a deterministic one:

$$ x_{k+1} = f(x_k, e_k, u_k) = f(x_k, e_k, \phi(x_k, e_k)) = f^{\prime}(x_k, e_k). $$
(22)
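A sketch of this composition: the adaptation policy supplies the control argument, turning the controlled transition into an ordinary transition function of state and input. The toy coastal model and the "full protection" policy below are placeholders assumed for illustration, not DIVA's actual modules.

```python
def compose(f, policy):
    """Eq. 22: f'(x, e) = f(x, e, policy(x, e)); the policy closes over the control argument."""
    return lambda x, e: f(x, e, policy(x, e))

# Toy coastal model (assumed): the state is (dike height, accumulated damage).
def coastal(x, e, u):
    dike, damage = x
    dike += u                              # action: raise the dike by u metres
    damage += max(0.0, e - dike)           # damage accrues when the sea level e exceeds the dike
    return (dike, damage)

full_protection = lambda x, e: max(0.0, e - x[0])   # raise the dike just enough to block the rise
no_adaptation = lambda x, e: 0.0

step = compose(coastal, full_protection)
print(step((1.0, 0.0), 1.5))   # (1.5, 0.0): the dike is raised and no new damage occurs
```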

The third primitive, the partial strict order, was given in the form of an impact function on the set of states. The function computes additional diagnostic properties such as people at risk of flooding, land loss, economic damages and the cost of protecting the coast. In contrast to ATEAM, the impact function does not reduce and normalise the dimensions of the state vector. One could say that DINAS-COAST provides a sparser partial strict order than ATEAM. Only the vector’s monetary components can be directly compared, which is also the basis for the optimal protection policy. The comparison of the vector’s non-monetary components is left to the individual user, as is the choice of a reference scenario and reference control policy. For this purpose, the model is provided with a graphical user interface that allows for the visual comparison of the outputs for different regions, time steps, scenarios and adaptation policies in the form of graphs, tables and maps.

5 Conclusions and Future Work

In this paper we presented the contours of a formal framework of vulnerability to climate change. This framework is based on a grammatical investigation that led from the everyday meaning of vulnerability to the technical usage in the context of climate change. The most important result of this investigation is that the definition of vulnerability requires the specification of three primitives: the entity that is vulnerable, the stimulus to which it is vulnerable and a notion of “worse” and “better” with respect to the outcome of the interaction between the entity and the stimulus, compared with the outcome resulting from a reference stimulus. Section 3 presented a mathematical translation of this result, grounded in system theory. In addition, it introduced refinements that capture the informal concepts of adaptive capacity and mitigation. Section 4 served as a first test of the framework by assessing whether or not it can represent concepts used in recent work of which the authors have first-hand knowledge.

Preliminary findings of this test include that the three determinants of vulnerability as identified by the IPCC correspond only in part with the three primitives of our formal framework and that ATEAM and DINAS-COAST have chosen not to specify a single partial strict order on their models’ respective outputs, or a single reference scenario. Instead, they specified several orders on components of their outputs, leaving room for interpretation by the user. However, it has not been the purpose of this paper to evaluate these projects, in particular because the current version of the framework is too rudimentary for such a task. A more important finding is that the framework has served as a heuristic device to help scientists from very different disciplines (the authors, workshop participants and formal and informal reviewers of this paper) to communicate clearly about an issue of common interest, thereby enriching each other’s understanding of the issue. At the same time, the paper has shown that there is scope for many refinements, specialisations and applications of the framework, which means that much work remains to be done to develop it into a useful tool.

The definitions in this paper aimed at showing that a certain type of mathematical theory can account for a simplified grammar of vulnerability rather than at being of immediate use to researchers in the field. A major part of the work to be done will concern structural refinements: formulating stronger, more precise definitions for more complex systems, in a way that makes it easy to deal with continuous time, stochasticity, fuzziness, multiple scales, etc. The problems of optimisation, mitigation and maintaining adaptive capacity must be formulated for these systems in ways that relate them to questions asked in vulnerability assessments. To do so will enable us to incorporate results from the fields of control theory, game theory and decision theory, which was, after all, one of the motivations for developing our framework. These theoretical developments should be accompanied by practical applications that elaborate on those in Section 4. The analytical framework must be informed by the large body of results available from past case studies and by the needs of ongoing vulnerability assessments and the users of their results.

As mentioned in Section 1, it is increasingly argued that the climate change community could benefit from experiences gained in food security and natural hazards studies. These communities have their own well-developed fields of research on vulnerability, although there are important differences with vulnerability assessment carried out in the context of climate change [23, 24]. The framework proposed in this paper could be used to analyse approaches to vulnerability assessment in these communities, as well as in the climate change community. This could make the perceived and real differences between the respective models of vulnerability in use more explicit and thus lead to a better understanding of them. Moreover, it will serve to test the framework proposed here. It will be a challenge to see whether or not the framework can capture in mathematical terms the complexity and richness of individual communities, sectors and regions, as well as of the factors leading to their vulnerability. In addition, the value of the framework for qualitative approaches to vulnerability assessment needs to be demonstrated, especially in those places where data are scarce.

On a final note, we realise that some may perceive a formal framework as limiting the flexibility required to capture the breadth and diversity of issues relevant to vulnerability assessment. Others may consider the mathematical approach to developing the framework as an impediment to discussion and application. As stated before, the framework is not intended to be prescriptive, nor is it meant to exclude non-mathematical viewpoints on vulnerability. The least we hope to achieve is that our framework makes vulnerability researchers aware of the potential confusion that can arise from not being precise about fundamental concepts underpinning their work. At the most, we hope they will recognise the potential benefits of testing, applying and further developing the framework proposed here. Every attempt has been made to make the formal description of the framework as accessible as possible to the mathematically challenged, a group that includes the second author of this paper.