Introduction

Defining the problem

Sea level rise and the flooding of coastal areas are among the most critical challenges of this century. The literature reports a variety of studies concerning the sea level and its past and present acceleration, as well as future changes. It was expected that radiative forcing and the related Global Warming would produce the same acceleration in all oceans and seas, except for local departures. However, the situation is complex: different acceleration values have been determined from individual records over different geographical locations and different periods, most of them of different duration (Church and White 2006, 2011; Jevrejeva et al. 2008, 2014; Woodworth et al. 2009; Kemp et al. 2011; Olivieri and Spada 2013; Long et al. 2014; Camuffo et al. 2017). All records have a limited duration and are affected by multidecadal oscillations and regional effects that mask the true behavior: the longer the record, the less the noise affects the signal. However, even with the longest records there is no shared consensus about a common acceleration. Spada et al. (2015) considered that the existence of non-linear sea-level rising trends is still debated, and that current estimates of the secular acceleration are subject to ample uncertainties. In particular, in a polynomial sea level representation, local effects with a linear trend, e.g. land subsidence, affect the first-order term, but not the second-order one, which represents the acceleration. In general, quadratic equations are preferred because they are simple and easy to interpret (Baart et al. 2012) and in addition make the acceleration explicit (Jevrejeva et al. 2014).

A key problem is how acceleration is defined. The acceleration derived from a quadratic polynomial is necessarily constant, as discussed later. However, it has been recognized that the acceleration could vary over time. Therefore, alternative methods have been devised, e.g. using variable windows and sliding the window year-by-year along the observation period (Jevrejeva et al. 2014). An exponential equation was proposed to represent the sea level trend for coastal management in Australia, but this caused a dispute between Parker et al. (2013) and Hunter (2014).

Basically, two different views are involved: those who prefer to rely on basic equations that follow the physical process, and those who prefer to rely on statistical analysis, taking a long-term record and determining from it the equations derived from a best-fit interpolation. The former assume they know, and control, the whole system; therefore they make a subjective choice, imposing an equation, although one well motivated on scientific grounds. The latter assume that the system is very complex and our knowledge insufficient; therefore they leave the record free to choose its own equation, i.e. the best fit of the dataset, relying on statistical interpolation tools. The results and other considerations will show which approach has been more successful.

In Venice, an exceptionally long-term series of the sea level has been obtained for the 1350–2016 period, based on the tide gauge record (1871–2016) and some independent proxies, i.e.: the levels of the green algae belt on the palaces reported in the paintings by Canaletto and Bellotto (eighteenth century), compared with the present-day level (Camuffo and Sturaro 2003, 2004); a sea stair free from algae in a painting by Veronese dated 1571 (Camuffo 2010); the submersion depth of the sea stairs of the palaces facing the Grand Canal, going back to 1350 (Camuffo et al. 2017); and written documentary sources since the sixteenth century (Camuffo 2021). This long multiproxy series can be interpolated at the same level of confidence (i.e. R2 = 96%) with a quadratic and an exponential equation. However, even if both equations represent the sea level equally well, their first and second derivatives, which represent the rising rate and the acceleration, give different responses. This gives the opportunity for a general discussion of the consequences that may derive from choosing one equation instead of the other, although the two are apparently equivalent in describing the sea level over the whole observation period.

Basic mathematical definitions and difficulties met in oceanography

In this section, a few basic formulae are listed. They are well known, but are helpful in the following discussions, and reference will be made to them. In kinematics, for any given function S(t) that describes the position of a point moving through space over time t, the velocity V(t) is represented by the first-order derivative of S(t) with respect to time, and the acceleration A(t) by the second-order derivative, i.e.

$$V\left(t\right)=\frac{\mathrm{d}S(t)}{\mathrm{d}t}$$
(1)
$$A\left(t\right)=\frac{{\mathrm{d}}^{2}S(t)}{\mathrm{d}{t}^{2}}=\frac{\mathrm{d}V(t)}{\mathrm{d}t}$$
(2)

The above applies to all physical systems and variables, including climate changes and sea level rise. In the following, the sea level (SL) will stand for S(t), the rate at which the sea level is rising (SLR) for V(t), and the acceleration of sea level (SLA) for A(t). If the equation of SL(t) is a polynomial of degree N, then SLR and SLA are represented by the following equations of degree N−1 and N−2, respectively

$$\mathrm{SL}=\sum\limits_{n=0}^{N}{a}_{n} {t}^{n}$$
(3)
$$\mathrm{SLR}=\frac{\mathrm{dSL}}{\mathrm{d}t}=\sum\limits_{n=1}^{N}{na}_{n} {t}^{n-1}$$
(4)
$$\mathrm{SLA}=\frac{{\mathrm{d}}^{2}\mathrm{SL}}{\mathrm{d}{t}^{2}}=\frac{\mathrm{dSLR}}{\mathrm{d}t}=\sum\limits_{n=2}^{N}{n\left(n-1\right)a}_{n} {t}^{n-2}$$
(5)

where n = 0, 1, 2, … N and the an are the polynomial coefficients. In the particular case where SL(t) is a quadratic equation, i.e. N = 2, the acceleration becomes very simple, i.e. SLA = 2 a2. The easy rule of thumb by which the acceleration is twice the quadratic coefficient a2 constitutes the conventional method used by oceanographers to define the acceleration (Jevrejeva et al. 2014). However, restricting the analysis to quadratic equations only imposes a strong limitation on the representation of sea level records, and of the forcing factors as well, as explained later.
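As a concrete illustration of Eqs. (3)–(5) with N = 2, the short sketch below fits a quadratic to a yearly record and reads the constant acceleration off the leading coefficient. It is a minimal sketch, not the code used for this paper: the series is an arbitrary quadratic stand-in, and the array names are ours.

```python
# Minimal sketch of the rule of thumb SLA = 2*a2 (Eqs. 3-5 with N = 2).
# The yearly series is a synthetic quadratic stand-in, not the Venice record.
import numpy as np

years = np.arange(1871, 2017)                      # hypothetical record span
level_mm = 0.00153 * years**2 - 3.3 * years + 480  # arbitrary stand-in data

# Centring time on 2000 improves numerical conditioning; the quadratic
# coefficient, and hence the acceleration, is unaffected by a shift of origin.
a2, a1, a0 = np.polyfit(years - 2000, level_mm, 2)

print(f"SLA = {2 * a2:.4f} mm/yr^2 (constant, Eq. 5)")
print(f"SLR at 2000 = {a1:.2f} mm/yr (Eq. 4)")
```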

The last basic formula listed here is the fundamental principle of dynamics, i.e. the second law of motion, which establishes the cause-effect relationship linking force to acceleration, i.e.

$$F\left(t\right)=m(t)\frac{\mathrm{d}V\left(t\right)}{\mathrm{d}t}=m(t) A(t)$$
(6)

where m(t) is a proportionality coefficient that characterizes the inertia of the physical system, e.g. the resistance that a body offers to a change in its speed upon the application of a force F(t). In classical physics, an example of a system that changes mass over time is an aircraft burning fuel during flight, i.e. the aircraft becomes lighter and its performance improves. In this context, F(t) represents the atmospheric forcing that causes the sea level to rise and m(t) the inertia of the oceans in responding, i.e. the ratio between the atmospheric forcing and the sea level acceleration. The inertia m(t) may be considered constant over relatively short time periods, but not over the long term. A simple explanation may be the melting of the continental glaciers, which will increase the total mass of the oceanic waters. A more complex explanation considers that m(t) is an effective value determined by the oceanic waters that exchange heat with the atmosphere or redistribute it on the surface and in depth, e.g. thermocline, surface wave mixing, oceanic currents, sinking of cold or salty waters. Therefore, the ocean inertia is not simply related to the total water mass expressed in kilograms, but depends on the physical mechanisms and the efficiency in exchanging and redistributing heat through the complex interactions between oceanic and atmospheric drivers. Trivially, if both F(t) and m(t) have constant values, A(t) too is constant and V(t) changes at a constant rate; if F(t) = 0, A(t) = 0 as well and V(t) is constant; if F(t) is constant and different from zero, and m(t) changes, then A(t) too will change, in inverse proportion to m(t).

In classical mechanics, all the physical quantities in Eq. (6) are derived from Newton's laws: they are well-defined, with simultaneous interactions, and can be followed over time instant by instant. By contrast, in the complex Earth system, the atmospheric forcing drives mechanisms with different inertia and time scales. Therefore, the system may be split into a number of components, i.e. m(t) A(t) may be expressed as the sum ∑mi(t) Ai(t) of individual components mi(t) Ai(t), each with a particular inertia and time scale, e.g. thermal expansion of oceanic waters, melting of land-based glaciers. The risk with direct sea level observations is that the short-term components are easily detected, while the long-term ones may pass unobserved. Because of this complication, it has been suggested to use the term ‘apparent acceleration’ (Douglas 1992) to emphasize that it is a partial result.

Equation (6) can be read from left to right, i.e.: knowing or hypothesizing a forcing factor, it is possible to derive the acceleration and predict the sea level. The acceleration is a physical variable that indicates whether the system determining the sea level is stationary or is forced to change, and it represents the bulk effect of the primary and secondary forcing factors, e.g. radiative forcing, thermal expansion of waters, continental ice melting, etc. If the same equation is read in the opposite direction, long records of sea level may be used to assess the forcing over the recording period. The former approach is used in models to predict future scenarios; the latter for global climate analyses.

The concept of dose explains that a certain physical quantity may be delivered at high intensity for a short time, or at low intensity over a long time. High intensities produce high signals and are easily detected; low intensities produce low signals that may be difficult to observe, or may be masked by noise. This constitutes a serious difficulty for detecting long-term effects.

All the above considerations explain why the acceleration in sea level rise is difficult to detect and even to define; a further difficulty is that very long and accurate records are needed. Even long tide gauge records are too short to monitor effects on long-term time scales. Long proxy records may help to extend our knowledge back in time, but this does not mean that they can always ensure the required time coverage.

From the above discussions, an important consequence may be derived for the prediction of A(t). Every physical model based on the physical dynamics (i.e. Eq. (6)) assumes that both F(t) and m(t) are known, or can be calculated and predicted, i.e. the dynamic method is based on two different (unknown) variables. On the other hand, if the kinematic method of Eq. (2) is considered, the acceleration is simply the second derivative of the sea level record, i.e. the method is based on only one variable, and the approach is greatly simplified. In the next sections, the kinematic method will be applied taking advantage of a long sea level record.

Analytical prediction models versus trend-forecast extrapolations

Physical models are based on the best-knowledge hypotheses about mechanisms, boundary conditions and the estimation of the forcing factors. Several authors have calculated SL scenarios for 2100 with sophisticated models, obtaining different results. Garner et al. (2018) compiled a comprehensive database of global SL projections showing a wide scatter of values. The wide ranges may be explained by considering that projections beyond 2050 remain highly uncertain (Jevrejeva et al. 2019).

The Earth system is complex, and its representation is based on equations, forcing factors, rates, accelerations and time responses that are not so clearly determined. Some believe that predictive models are based on physical equations to the best of our (limited) knowledge, so that the method is scientific, but that the application is biased because it involves uncertainties and subjective or poorly known assumptions, e.g. the forcing factors assumed in the model.

Another approach is to abandon the analytical treatment and introduce a different one, similar to artificial intelligence and machine learning. Machine learning was developed with the advent of computers for pattern classification and recognition, and to make predictions. The underlying idea is that a sufficiently long dataset contains the necessary information, and that statistical tools are able to extract the trend and extrapolate it. The trend extrapolation makes a simple projection of how a physical system is evolving, and will continue to evolve in the absence of specific interventions to stop or change it. When the historical data are smooth, and follow some nonlinear patterns and curves, extrapolation may be better than time-series analysis (Glanz and Moon 2011).

Extrapolation is a trend-forecast because it assumes that the forecast will not differ too much from the trend, especially when the latter has been established on a sound dataset. This method is convenient when forcing factors are expected to persist, or when they are not clearly understood or their evolution is unclear, and it avoids personal interpretations. It assumes that recent and historical trends will continue, i.e. that the past behaviour is a good predictor of the future. The trend-forecast is realistic when: (i) forecasts are short-term, i.e. the projected period is short in proportion to the record duration; (ii) R2 is high, because R2 represents the percentage of variation of the observed variable (e.g. RSL) that is explained by the reference variable (e.g. time).

Aims of this paper

The first aim is to discuss the mathematics and the physics behind the equations, and in particular to interpret the difference between the quadratic and the exponential equations representing the best-fit of the multiproxy series in Venice (Camuffo et al. 2017).

The second aim is to calculate and discuss the rate and the acceleration of sea level rise. These are obtained as the first and second time-derivatives of the best-fit equations, and are discussed both in general terms and for the specific case of Venice.

The third aim concerns the trend-forecast extrapolation. It has been recognized that, if not assumed to be the best approach, it could at least be used as a reference (Baart et al. 2012), i.e. it may be used to verify how much a model departs from the evolution maintained over the known period, as well as the influence of the forcing factors considered in a model. It is clear that both computational models and trend-forecast extrapolations are subject to criticisms and limitations. However, they constitute two complementary approaches, and the long multiproxy series offers the possibility of making a comparison between them.

The multiproxy series representing the sea level in Venice

This study has considered the multiproxy series of sea level rise (Camuffo et al. 2017; Camuffo 2021), taking the year 2000 as the zero level of the vertical scale, i.e. earlier sea levels are negative and later ones positive.

The multiproxy series of the observed sea level (SL) in Venice for the 1350–2016 period (Camuffo et al. 2017; Camuffo 2021) has been interpolated with two best-fit equations, one quadratic and one exponential, i.e.:

$$\mathrm{SL}=0.00153\, {t}_{\mathrm{yr}}^{2}-3.3\, {t}_{\mathrm{yr}}+480$$
(7a)
$$\mathrm{SL}=-1866+67.5\,\mathrm{exp}\left(0.00166\, {t}_{\mathrm{yr}}\right)$$
(7b)

where tyr is the time in years of the common era, SL is expressed in mm, and the zero level is referred to the year 2000 (Fig. 1). This referencing has produced only a very small difference in some decimal places of the coefficients, but an easier presentation. Both interpolating equations have the same fraction of explained variance, i.e. the same coefficient of determination R2 = 0.96. From the physical point of view, any equation (including that of the sea level over time) is uniquely determined by the set of observed data and the regression analysis. If more than one best-fit equation is obtained, the best approach can be recognized from the unexplained variance. In statistical learning theory, another factor to consider in selecting a regression model is the trade-off between bias and variance. Ultimately, this means taking into consideration the complexity of the models used. Here, the quadratic and the exponential models probably have the same complexity, so R2 is effectively an index that can be used for selection (James et al. 2021). As Eqs. (7a) and (7b) are characterized by the same R2, any choice between them is subjective, but implies different consequences, as discussed later.
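As an illustration of this dual best-fit procedure, the sketch below fits both a quadratic and an exponential model to the same series and compares their R2. It is only a sketch under stated assumptions: the data are a synthetic stand-in with the shape of Eq. (7b) plus noise of the order of the tide-gauge scatter (± 4 cm), not the actual multiproxy record, and the helper names are ours.

```python
# Illustrative dual fit: quadratic vs exponential model on the same series,
# compared through the coefficient of determination R^2.
import numpy as np
from scipy.optimize import curve_fit

def quadratic(t, a2, a1, a0):
    return a2 * t**2 + a1 * t + a0   # shape of Eq. (7a)

def exponential(t, c, k, b):
    return c + k * np.exp(b * t)     # shape of Eq. (7b)

def r_squared(y, y_fit):
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Synthetic stand-in data: Eq. (7b) shape plus +/- 4 cm noise.
years = np.linspace(1350, 2016, 300)
level_mm = (-1866 + 67.5 * np.exp(0.00166 * years)
            + np.random.default_rng(0).normal(0.0, 40.0, 300))

pq, _ = curve_fit(quadratic, years, level_mm, p0=(0.001, -3.0, 500.0))
pe, _ = curve_fit(exponential, years, level_mm, p0=(-2000.0, 70.0, 0.0016))

print("R^2 quadratic  :", round(r_squared(level_mm, quadratic(years, *pq)), 3))
print("R^2 exponential:", round(r_squared(level_mm, exponential(years, *pe)), 3))
```

With comparable R2 values, as in the Venice series, the selection cannot rest on the fit alone, which is the point developed in the following sections.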

Fig. 1

Plots of the multiproxy series of sea level in Venice. Dashed line: quadratic interpolation (Eq. 7a); continuous line: exponential interpolation (Eq. 7b); thick dotted line: difference between the quadratic and the exponential interpolation. The thin dotted lines (enclosing light grey rectangular areas) parallel to the exponential interpolation indicate the uncertainty bands of the series, i.e.: tide gauge (± 4 cm), Canaletto and Bellotto paintings (± 10 cm), and sea stair proxies (± 17 cm). The dark grey curved triangle represents the set of all exponential lines that pass across the ± 17 cm vertical base in 1350, and converge to the reference point (i.e. Zero level at the year 2000). The future includes extrapolated lines

As both interpolations are equally reliable, their difference in absolute value represents the uncertainty of this system. The difference between the two equations (dotted line in Fig. 1) is almost constant from 1500 to 1860 (in this interval it is close to the average, i.e. 22 mm) and increases at the lower end. Trivially, the difference vanishes at the year 2000, because this has been imposed as the reference point (i.e. the zero level). Over the forecast period, i.e. 2020–2100, the difference increases and will reach 41 mm in 2100.
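These figures can be checked by evaluating Eqs. (7a) and (7b) directly, as in the short sketch below (illustrative; the function names are ours):

```python
# Gap between the two best-fit equations at selected years.
import numpy as np

def sl_quadratic(t):
    return 0.00153 * t**2 - 3.3 * t + 480      # Eq. (7a), mm

def sl_exponential(t):
    return -1866 + 67.5 * np.exp(0.00166 * t)  # Eq. (7b), mm

for year in (1350, 1700, 2000, 2100):
    gap = sl_exponential(year) - sl_quadratic(year)
    print(f"{year}: {gap:+.0f} mm")
# ~-45 mm in 1350, ~-23 mm in 1700, ~0 at the 2000 reference point,
# and ~+41 mm in 2100, as quoted above.
```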

The rate of sea level rise in Venice

The rate of SL, SLR (mm yr−1) is obtained by calculating the time derivatives of Eqs. (7a) and (7b), i.e.

$$\mathrm{SLR}=\frac{\mathrm{dSL}}{\mathrm{d}{t}_{\mathrm{yr}}}=\left(0.0031 {t}_{\mathrm{yr}}-3.3\right)$$
(8a)
$$\mathrm{SLR}=\frac{\mathrm{dSL}}{\mathrm{d}{t}_{\mathrm{yr}}}=0.11205 \,\mathrm{exp}(0.00166 {t}_{\mathrm{yr}})$$
(8b)

where the former is linear, the latter remains exponential (Fig. 2).

Fig. 2

Rate of relative sea level rise and rate of LLS in Venice. Dashed line: linear interpolation (Eq. 8a); continuous line: exponential interpolation (Eq. 8b). Horizontal lines: LLS (continuous central value: 1.15 mm yr−1; lower and upper dashed limits: 0.9 and 1.4 mm yr−1). Grey area: extrapolated values

Over the whole 1350–2016 common period, Eqs. (8a) and (8b) provide the same average, i.e. SLR = 1.9 ± 0.1 mm yr−1, but in general they differ, especially before 1500 and after 1900.
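Since the mean of a derivative over an interval equals the net change divided by the interval length, this common average can be checked directly from Eqs. (7a) and (7b), e.g. with the illustrative sketch below:

```python
# Mean rate over 1350-2016 from both best-fit equations:
# mean SLR = (SL(t2) - SL(t1)) / (t2 - t1).
import numpy as np

def sl_quadratic(t):
    return 0.00153 * t**2 - 3.3 * t + 480      # Eq. (7a), mm

def sl_exponential(t):
    return -1866 + 67.5 * np.exp(0.00166 * t)  # Eq. (7b), mm

t1, t2 = 1350, 2016
for name, sl in (("quadratic", sl_quadratic), ("exponential", sl_exponential)):
    print(f"{name}: {(sl(t2) - sl(t1)) / (t2 - t1):.2f} mm/yr")
# both evaluate to ~1.9 mm/yr, within the quoted 1.9 +/- 0.1 mm/yr
```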

Both equations are time dependent, showing that the rising rate has been increasing over time, but the two formulae, and their graphs, differ. The linear graph of Eq. (8a) corresponds to a scenario with constant forcing factors, or with a unique average value, or to an inertial system with a very long response time, i.e. able to filter out short-term fluctuations or small forcing changes. Therefore, the linear equation (derived from the quadratic one) may be used with the average value of the forcing factors, but misses any time evolution. By contrast, the exponential graph of Eq. (8b) corresponds to a scenario with increasingly stronger forcing factors (e.g. Global Warming) and is more convenient for following the time evolution of the system.

The relative sea level rise can be expressed as the sum of two factors: (i) eustatism, i.e. the rise of the global sea level due to thermal expansion of waters, melting of ice sheets, movements of the ocean floor, sedimentation, etc.; (ii) local land subsidence (LLS), i.e. the lowering of the basin bottom due to tectonic motions, soil compaction etc. The inhabitants and the tide gauge perceive the sum of the two, but each component can be determined separately. In Fig. 2, the relative SLR has been reported together with LLS and its uncertainty band. In the literature, LLS ranges between 0.9 and 1.4 mm yr−1, with median 1.15 ± 0.25 mm yr−1 (see citations in Camuffo 2021). In the early period of the Venice series, at the beginning of the Little Ice Age (LIA), the rate of relative sea level rise was close to the LLS value, which implies that the eustatic component was very small or even null. This suggests a stationary period around the fourteenth century in which the global sea level remained unchanged, or even had a turning point around a minimum in level. To ascertain which of the two hypotheses is correct, or whether this is a misleading effect due to the larger uncertainty near the lower border, it is necessary to find new proxies to extend the dataset back in time. The eustatic component continued to grow over time: it equalled LLS at the end of the eighteenth century and will likely be twice as large near the end of this century (see Sect. 5).

The graph of SLR shows an impressive acceleration, and the palaces with their sea stairs on the canals witness this dramatic situation (Fig. 3). In the middle of the seventeenth century, the sea water rose at a rate that submerged one step per century (in Venice, the rise of a step is ½ Venice foot, i.e. 17.4 cm). Over this century, more than two steps will be submerged. This is the tangible impact derived from Fig. 2.
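The step arithmetic can be reproduced from Eq. (8b) and the extrapolation of Eq. (7b), as in the sketch below (illustrative only; the constant and function names are ours):

```python
# Steps submerged per century, with a step rise of 1/2 Venice foot = 17.4 cm.
import numpy as np

STEP_MM = 174.0

def slr(t):
    return 0.11205 * np.exp(0.00166 * t)   # Eq. (8b), mm/yr

print(f"1650: ~{slr(1650) * 100 / STEP_MM:.1f} step/century")
# cumulative rise over 2000-2100 from Eq. (7b): ~338 mm
rise_21st = 67.5 * (np.exp(0.00166 * 2100) - np.exp(0.00166 * 2000))
print(f"2000-2100: ~{rise_21st / STEP_MM:.1f} steps")
# ~1.9-2.2 steps once the +/- 4.2 cm extrapolation uncertainty
# (Sect. 5) is allowed for
```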

Fig. 3

At high tides, this sea stair is fully submerged. All steps are colonized by algae and shells. Nowadays this stair is totally unusable. The upper front of green algae marks the level currently reached by sea water (normal tides and small waves). Picture taken at low tide

The acceleration of sea level rise in Venice

The acceleration is represented by the second-order time derivatives of Eqs. (7a) and (7b), i.e.

$$\mathrm{SLA}=\frac{{\mathrm{d}}^{2}\mathrm{SL}}{\mathrm{d}{t}_{\mathrm{yr}}^{2}}=\frac{\mathrm{dSLR}}{\mathrm{d}{t}_{\mathrm{yr}}}=0.0031$$
(9a)
$$\mathrm{SLA}=\frac{{\mathrm{d}}^{2}\mathrm{SL}}{\mathrm{d}{t}_{\mathrm{yr}}^{2}}=\frac{\mathrm{dSLR}}{\mathrm{d}{t}_{\mathrm{yr}}}=0.000186 \, \mathrm{exp}(0.00166 {t}_{\mathrm{yr}})$$
(9b)

The plots are reported in Fig. 4.

Fig. 4

Acceleration of sea level rise in Venice. Dashed line: constant acceleration (Eq. 9a); continuous line: exponential acceleration (Eq. 9b). Left scale: International System of Units (mm yr−2); right scale: myc unit (mm yr−1 century−1) popularly used in oceanography. Grey area: extrapolated values

The quadratic equation of SL leads to a constant acceleration, SLA = 2a2 = 0.0031 mm yr−2, as established in Eq. (5). As previously discussed for Eq. (6), this implies one of the three following assumptions. (i) The system is governed by constant forcing. However, it is hard to believe that the radiative forcing has been constant over the last seven centuries, which span the Mediaeval Climatic Optimum, the Little Ice Age and the present-day Global Warming. (ii) The system is highly inertial. However, this is not fully realistic, because the melting of ice in Antarctica has a long-term response, while the thermal expansion of waters has a short-term one. (iii) The equation is based on average values, i.e. average acceleration and average forcing over the record period: this gives a crude, but consistent, representation of the physical system, as explained below.

The exponential equation of SL leads to an acceleration exponentially growing over time and shows, year by year, the evolution of the system.

These two results, i.e. constant and exponential acceleration, are only apparently contradictory, because the former, i.e. SLA = 0.0031 ± 0.0004 mm yr−2, equals the average of the values of Eq. (9b) over the 1350–2016 common period. As commented for the rate, both equations give the same bulk result, but in different ways. Equation (9a) gives a stationary representation of the average acceleration that corresponds to the average forcing. On the other hand, the exponential equation is analytical and provides the time evolution of a system subject to increasing forcing factors, which reacts with an increasing acceleration. If some part of the time series had shown a deceleration, or a different behaviour, the proportion of explained variance would have decreased, i.e. the high R2 indicates that the series is characterized by a homogeneous trend. The choice between the two equations depends on the type of solution required, i.e. stationary (i.e. average, constant over time) or dynamic (i.e. instantaneous, varying over time). A stationary representation may be useful to interpret changes over a long period, once the average forcing is known. A dynamic representation may be convenient to represent an evolution over time, or to make future projections.

The result in Fig. 4 can be compared to the acceleration on the global scale by Spada et al. (2015) over the common period 1898–1975. With the help of statistical methods and meta-analysis, they found a global sea-level acceleration of 0.0054 ± 0.0027 mm yr−2. In the same period, the average SLA in Venice calculated with Eq. (9b) has been 0.0047 ± 0.0004 mm yr−2. The two values are in reasonable agreement.
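The Venice value can be reproduced from Eq. (8b), since the mean acceleration over an interval equals the net change of the rate divided by the interval length. The sketch below is illustrative; with the rounded coefficients of Eq. (8b) it yields ≈ 0.0046 mm yr−2, consistent with the quoted 0.0047 ± 0.0004 mm yr−2.

```python
# Mean acceleration over 1898-1975 from Eq. (8b):
# mean SLA = (SLR(t2) - SLR(t1)) / (t2 - t1).
import numpy as np

def slr(t):
    return 0.11205 * np.exp(0.00166 * t)   # Eq. (8b), mm/yr

t1, t2 = 1898, 1975
print(f"mean SLA = {(slr(t2) - slr(t1)) / (t2 - t1):.4f} mm/yr^2")
# ~0.0046 mm/yr^2 with these rounded coefficients, against the global
# 0.0054 +/- 0.0027 mm/yr^2 of Spada et al. (2015)
```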

Trend extrapolation to 2100 and uncertainties

Trend extrapolation method

In this section, the quality and the possible consequences of the multiproxy series of Venice are tested using the trend extrapolation method. This allows useful discussion and comparison with other scholarly results obtained with the same method, or with physical models based on the conceptual framework of different hypotheses.

The extrapolation method responds to the following analysis: «It’s important to underline that as no physical model is used for supporting extrapolated trends, results are to some extent speculations, though they have been sometimes used in scientific literature (e.g. Carbognin et al 2010; Troccoli et al 2012). Future evolution of global factors such as ice melting, which is a possible cause, of the remote signal, is virtually unknown. Our aim is not to attribute to any of these interpolations a predictive skill, but to show that likely the lack of understanding of the remote global signal is a larger source of uncertainty than local processes» (Scarascia and Lionello 2013).

The reasons for performing a further projection, in addition to those existing in the literature, are: (i) to compare them with the novel equation and discuss some subtle differences in methods; (ii) to test the relevance of the dataset length, which in scholarly projections starts from 1872, while our series starts from 1350.

The trend extrapolation method is very simple: it avoids any physical model, and only assumes that, in the absence of external changes, for a reasonably short time, a system will continue with the same trend. Therefore, the system behaves like a black box that gives predictions without the need to know the processes that govern the relationships between the internal variables. In theory, the trend-forecast extrapolation should have no error bar, because the method is a tautology: if one assumes that a system has no changes in forcing, it is stationary and its trend remains unchanged. Under this assumption, by expanding the time domain, the extrapolation overlaps the trend. The real question is another: how much will the extrapolation depart from the truth if the forcing changes? This is a specious question, i.e. estimating the error that will derive from a method when it is used improperly, in contrast with its assumption of steady conditions. It should be made clear that the aim of the trend-forecast extrapolation is not to forecast the future, but only to identify a threshold, i.e. the value that the physical system would attain in the absence of changes, i.e. business-as-usual. On this ground, if the sea level exceeds the projected threshold, it will be possible to quantify the excess in rise and to evaluate the related excess in forcing.

A strong point of this method is that the learning period on which the business-as-usual is based extends over the whole dataset, which includes a number of variables. In the specific case of the long series of Venice, it includes the response of the marine-climate system from the onset of the Little Ice Age (LIA) to the present-day Global Warming; changes in radiative forcing, including periods of intense volcanic eruptions; and the short- and medium-term inertial responses of the various factors to which the sea level responds. The trend-forecast extrapolation is sometimes used for projections on the short or medium term, as long as changes remain negligible, i.e. when the forecast period (i.e. the future) is short compared to the duration of the learning period (i.e. the record of historical data). It could be argued that in every prediction the uncertainty will increase with the departure from the initial conditions, and ultimately with the length of the extrapolation period. This condition may be expressed in terms of the relative future (RF),

$$\mathrm{RF}= \frac{\mathrm{PT}}{\mathrm{OT}}$$
(10)

i.e. the ratio between the prediction time (PT), i.e. the forecast period, and the observation time (OT) that constitutes the learning period of the black box that generates the trend.
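For instance, for the Venice multiproxy series used here (learning period 1350–2016, forecast to 2100; the endpoint years are our reading of the text), Eq. (10) gives:

```python
# Relative future (Eq. 10) of the Venice multiproxy series.
PT = 2100 - 2016   # prediction time, years
OT = 2016 - 1350   # observation time, years
print(f"RF = {PT / OT:.1%}")   # ~12.6%, i.e. the ~12% quoted below
```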

Scholarly projections to 2100

The literature includes a series of projections to 2100 for Venice or the global mean sea level (GMSL), as reported in Table 1. These projections are based on the tide-gauge record, which in Venice has been operating since 1871, although some papers start from 1872. In the case of predictions concerning GMSL, or the Northern Adriatic, the prediction can be applied to Venice by adding 10 cm for LLS.

Table 1 Scholarly projections to 2100 concerning the sea level in Venice, or the global mean sea level (GMSL)

The Venice Agency for the Protection of the Environment and Technical Services (APAT) made a simple projection to 2100 by extrapolating the best fit of the 1872–2005 tide-gauge record using three equations: exponential, parabolic and linear (Ferla et al. 2006). The paper did not specify the assumptions, but these Authors used the trend extrapolation method, which requires the stability of the system, including unchanged forcing conditions. The respective projections were 31.3, 25.3 and 26.6 cm, without indication of the uncertainty. These three different interpolations (i.e. exponential, parabolic and linear) have been possible because the instrumental period is relatively short and the curvatures of the exponential and parabolic interpolations are indistinguishable from a linear trend, especially because they are masked by noise. The extrapolated period, from 2006 to 2100, corresponds to a very long RF = 70% of the record length.

Scarascia and Lionello (2013) also used the tide-gauge record, under the A1B emission scenario. They applied three different climate models based on an exponential, a parabolic and a linear equation, and predicted 38–49 cm (i.e. 43.5 ± 5 cm), 14–24 cm (i.e. 18 ± 6 cm) and 22–32 cm (i.e. 27 ± 5 cm), respectively; RF was 62% of the record length. A comparison with Ferla et al. (2006) shows that the result of the linear interpolation was almost the same; the parabolic was 29% lower; and the exponential 39% higher. It is surprising that the parabolic and the exponential gave markedly different results, in opposite directions.

Recently, Lionello et al. (2021) returned to the same subject and proposed a broader projection range for the Northern Adriatic, from about 30 to 110 cm at the end of the twenty-first century. In a review article (Zanchettin et al. 2021, which includes the Authors of Lionello et al.), the range between 32 and 110 cm was further specified for different emission scenarios, with ± 10 cm uncertainty due to the subregional impact of atmospheric forcing, thus yielding the range reported in Lionello et al. (2021). For Venice, 10 cm should be added for the LLS.

The above results may be compared with the predictions for 2100 made by the Intergovernmental Panel on Climate Change (IPCC). In the 1990 report, Warrick et al. (1990) proposed 31–110 cm of global mean sea level (GMSL) rise under the business-as-usual scenario. The recent IPCC Special Report on the Ocean and Cryosphere in a Changing Climate (Oppenheimer et al. 2019) updated the GMSL projection to 43 ± 15 cm for RCP2.6 and 84 ± 25 cm for RCP8.5. A comparison with this paper should be made using the RCP2.6 scenario, which is characterized by lower anthropogenic radiative forcing. However, it must be specified that IPCC considers the GMSL value, to which the projected local land subsidence (LLS) should be added for Venice. By adding LLS = 10.5 cm (Zanchettin et al. 2021), the lower levels projected by Warrick et al. (1990) and Oppenheimer et al. (2019) become 41.5 cm and 53.5 ± 15 cm, respectively.

Local land subsidence

A comment is due on the LLS in Venice, as this must be added to projections concerning GMSL or the Northern Adriatic. An extensive review has been made by Zanchettin et al. (2021), to which Vacchi et al. (2016, 2021), Baldin and Crosato (2017), and Kaniewski et al. (2021) should be added. Subsidence in Venice has been known since the Pleistocene. It was characterized by a long-term component (with a time-scale of about 10⁶–10⁴ years) controlled by tectonics, geodynamics and sedimentation, and a short-term component (with a time-scale of 10³–10⁴ years) controlled by glaciation cycles and glacial isostatic adjustment. In the past century, subsidence had an anomalous period during the intensive groundwater pumping from 1930 to 1970. After pumping was stopped, LLS tended to return to the natural rate, with (partial) rebound of the aquifer (Gatto and Carbognin 1981; Carbognin and Taroni 1996; Pirazzoli and Tomasin 1999). Since 1992, satellite altimetry has provided remote detection of LLS, showing that LLS varies over space (Tosi et al. 2009, 2013, 2016). In the past, LLS was measured with different methodologies (e.g. local sampling of sediments, geological and biological indicators, marine isotope stratigraphy; dating with radiocarbon, archaeological remains) not strictly comparable with each other. In addition, most values are reported without an uncertainty band. This makes it difficult to compare the different findings and to obtain a calibrated time record. In their models, Doglioni (1993), Carminati et al. (2003, 2005) and Lionello et al. (2021) assumed that the natural subsidence (i.e. due to enduring long-term geological trends) has been, and will continue to be, around 1 mm yr−1.

It should be noted that all Authors agree that LLS had an extremely slow variability over time, except for the anomalous period of anthropogenic disturbance in the past century. Therefore, LLS is very important in determining the relative sea level at a certain date (i.e. cumulative effect over time) and in assessing the relative rising rate (i.e. the sum of the instantaneous values of the eustatic and subsidence rates), but it is irrelevant for the acceleration (i.e. if the yearly rate is constant, its contribution to the acceleration is null).

Unlike predictive models, this paper does not require knowledge of LLS, because this work is based on the best-fit interpolation of the actual relative sea level data. The relative sea level constitutes the undifferentiated sum of the eustatic and the isostatic components, thus automatically including both of them, as well as their evolution.

Uncertainties in the various periods constituting the multiproxy series of Venice

A strong point of the multiproxy series of Venice is its exceptional length, with short RF = 12% of the record duration, i.e. an excellent condition for prediction robustness.

The accuracy of the dataset is always a crucial point. It is obvious that uncertain datasets are not a good prerequisite for accurate predictions. However, it must be specified that the predictors are constituted by the set of observations, whose value and uncertainty depend on how they have been generated and measured, either directly with instruments or indirectly with proxies. The prediction, instead, is based on calculations, and the uncertainty of predictions is different from the uncertainty of predictors. The latter can be tested, while the uncertainty of long-term predictions cannot be verified. Therefore, it is not fully justified to apply to predictions the uncertainty of the predictors (James et al. 2021), although this is a common practice that we will follow as a reference. To understand the quality of the dataset, it is useful to make the uncertainties explicit, as follows.

Tide gauge record (1871-today). The instrumental error of the tide gauges with strip chart recorder and potentiometric transducer used in Venice since the 1970s is ± 3 mm with a large float (20 cm diameter, mostly used) and ± 9 mm with a small float (10 cm diameter), while the bulk error of the stilling well (e.g. marine currents or wave dynamic effects, thermal and salinity layering, orifice fouling) may reach ± 6 cm under extreme conditions (Camuffo et al. 1972). These errors are relevant for instantaneous values, but are mostly filtered out in yearly averages. In addition to the instrumental error, the sea level is affected by fluctuations on different time-scales due to meteorological forcing (e.g. inverse barometric effect, wind drag) or water exchanges between the Atlantic and the Mediterranean (e.g. NAO, multi-decadal oscillations). This introduces some randomness into the record. The standard deviation (ST DEV) of the de-trended yearly sea level averages over the instrumental period (i.e. the values used in this study) is ST DEV = 4 cm, mostly due to atmospheric and marine interactions. In Fig. 1, the error band for the tide gauge period has been indicated as ± 4 cm.

Canaletto and Bellotto paintings (eighteenth century). The uncertainty in the determination of the sea level rise obtained from the displacement of the green algae belt reported in the paintings by Canaletto and Bellotto is 10 cm (Camuffo and Sturaro 2003, 2004), i.e. error band ± 10 cm.

Sea stairs. Sea stairs have been measured over the whole pre-instrumental period for reconstruction, calibration and validation of the method. They constitute the main reference source for the fourteenth–seventeenth centuries. Their uncertainty is 17 cm (Camuffo et al. 2017), i.e. error band ± 17 cm.

Discussions

The above uncertainties have been reported in Fig. 1 for the periods based on the tide gauge, the paintings and the sea stairs. These uncertainty bands constitute three slightly curved rectangles symmetrical about the exponential interpolation line. The uncertainty width is largest in the early period of the stairs, i.e. 34 cm, becomes 20 cm with the paintings, and 8 cm with the tide gauge. This particular sequence has suggested drawing the exponential lines that connect the upper (as well as the lower) extreme border of the uncertainty in 1350, i.e. −123.1 ± 17 cm, to the reference point (i.e. zero level at the year 2000). By definition, the uncertainty is null at the year 2000, because it has been taken as the reference. On average, these two lines may be assumed to represent how the uncertainty progressively decreases along the time series.

To find the exponential line passing through two points with coordinates (x1, y1) and (x2, y2), the following system of equations should be solved

$${y}_{1}=a \,\mathrm{exp}(b{x}_{1})$$
(11a)
$${y}_{2}=a \,\mathrm{exp}(b{x}_{2})$$
(11b)

where a and b are the two unknowns to be determined.

Dividing Eq. (11a) by Eq. (11b), and putting A = y1/y2, one obtains

$$\frac{{y}_{1}}{{y}_{2}}=A= \frac{a \,\mathrm{exp}(b{x}_{1})}{a\, \mathrm{exp}(b{x}_{2}) }= \frac{\mathrm{exp}(b{x}_{1})}{\mathrm{exp}(b{x}_{2})}$$
(12)
$$\mathrm{exp}\left(b{x}_{1}\right)= A\,\mathrm{exp}(b{x}_{2})$$
(13)
$$b\left({x}_{1}-{x}_{2}\right)=\mathrm{ln}(A)$$
(14)
$$b=\frac{\mathrm{ln}(A)}{{x}_{1}-{x}_{2}}$$
(15)

By substituting into Eq. (11a) or (11b) the numerical value of b found with Eq. (15), one finds the second unknown a, i.e.:

$$a=\frac{{y}_{1}}{\mathrm{exp}(b{x}_{1})}= \frac{{y}_{2}}{\mathrm{exp}(b{x}_{2})}$$
(16)

The two exponential lines that represent the uncertainty borders form a curved triangle (similar to an elephant tusk): all the exponential lines that pass across the vertical base −123.1 ± 17 cm in 1350 and converge to the reference point determine the geometric domain of the dataset, bounded by the lower and upper uncertainty borders. The extrapolation of the two border lines to 2100 defines the extremes of the projected values. The extrapolated uncertainty border lines depart ± 42 mm from the central value given by the trend-line equation.
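The construction of Eqs. (11a)–(16) is summarized in the sketch below, a generic two-point exponential solver. The sample coordinates are hypothetical, not the actual border coordinates of Fig. 1.

```python
# Exponential y = a*exp(b*x) through two points (x1, y1), (x2, y2),
# with y1 and y2 of the same sign, following Eqs. (11a)-(16).
import math

def exponential_through(x1, y1, x2, y2):
    A = y1 / y2                      # Eq. (12)
    b = math.log(A) / (x1 - x2)      # Eq. (15)
    a = y1 / math.exp(b * x1)        # Eq. (16)
    return a, b

a, b = exponential_through(0.0, 100.0, 50.0, 200.0)   # hypothetical points
print(f"a = {a:.1f}, b = {b:.5f}")
print("check y(x2):", round(a * math.exp(b * 50.0), 1))  # -> 200.0
```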

Calculating the extrapolation for the year 2100 with Eq. (7b), one obtains 33.8 ± 4.2 cm above the level in 2000; similarly, 29.7 ± 4.2 cm with Eq. (7a). As it has been demonstrated that only the exponential equation is physically acceptable, because it responds to an increasing radiative forcing while the quadratic representation with constant acceleration does not, the quadratic equation should be rejected. Therefore, the trend extrapolation method suggests for 2100 a lower threshold of around 34 ± 4 cm.

It might be useful to recall that this is not a forecast, but the minimum threshold that might be expected if the natural system could continue undisturbed until the end of the century, or if the long-term inertial mechanisms involved remain largely dominant. Every perturbation may change it, and an increased anthropic warming will likely raise the sea level above this minimum threshold.

Conclusions

Although quadratic equations are popularly preferred to represent the sea level for their easy interpretation and explicit acceleration value (Baart et al. 2012; Jevrejeva et al. 2014), this does not mean that they constitute the best choice. This paper has demonstrated that, when a quadratic polynomial is used to represent the sea level, the acceleration (i.e. the second-order derivative) is necessarily constant, which implies a constant forcing. Constant forcing is typical of the physical situation in the middle of a homogeneous climatic age, but can hardly represent the transition from one climatic age to another, e.g. from the LIA to Global Warming, and especially nowadays in view of the IPCC emission scenarios. An increasing radiative forcing requires an increasing acceleration, e.g. a sea level represented with an exponential or a polynomial of order higher than two.

Climate forcing may drive different mechanisms that affect the sea level, each of them characterized by a different response time and related acceleration. Effects with a short response time are more likely to be observed, and will be better monitored, than mechanisms with a long response. This constitutes a serious difficulty for analytical equations.

When the sea level is represented with quadratic equations, their derivatives generate constant acceleration values. These can be interpreted in terms of average values over relatively short periods, in which the selected physical variable does not change too much. As already noted (Jevrejeva et al. 2014; Camuffo et al. 2017), in every record every selected period is characterized by a different acceleration. The case-study of Venice gives an excellent example of an acceleration continually increasing over time, which fits a continuous warming from the LIA to nowadays. Therefore, to make scholarly results concerning different geographical locations comparable, it is necessary to consider accelerations over a common reference period, the same for all records. By comparing the acceleration of the multiproxy series in Venice to the acceleration on the global scale by Spada et al. (2015) over the common period 1898–1975, the two values are respectively 0.0047 ± 0.0004 mm yr−2 and 0.0054 ± 0.0027 mm yr−2, in reasonable agreement.

The case-study of Venice has shown that a (long) dataset may be represented with more than one best-fit equation. Although the choice of the equation may be irrelevant to determine the average sea level of the current year, either in the past, or in future projections, the choice may become highly relevant when dealing with the formulae of rising rate and acceleration.

By calculating the first and the second time-derivatives of the best-fit equations of the sea level in Venice, one obtains two equations for both the rising rate and the acceleration. These equations give the same average results, but the quadratic one presents a scenario in which the forcing factors are constant over time, while the exponential one grows proportionally to the actual value, at constant doubling time. The quadratic gives a stationary response (i.e. constant over time), the exponential a dynamic one (i.e. variable over time). The choice depends on the aims (e.g. a bulk average representation and characterization of a climate period; the need for instantaneous data or future projections) and on the physical assumptions about the involved scenario. In general, the exponential is preferable because it is subject to fewer limitations.

It has been explained that the trend-forecast extrapolation cannot be used to forecast the sea level, but may be used to determine the lower threshold that may be reached by the end of this century. This constitutes the most favourable scenario, valid if the radiative forcing continues to follow the trend included in the dataset, i.e. business-as-usual, where the term “usual” means the trend and the increasing forcing over the multiproxy dataset. It must be specified that the Venice dataset has the exceptional duration of over 6.5 centuries, from the LIA to nowadays. Therefore, it includes a radiative forcing with an increasing trend (not a constant one), in analogy with the present-day warming. Under these assumptions, the projected scenario includes a threshold of 33.8 ± 4 cm rise over this century.

This paper has extracted the information naturally included in the long multiproxy series of Venice following the kinematic approach. This approach is simple, being based on only one physical variable, but needs very long series. Finally, the results have been compared to other scholarly papers based on the multi-variable dynamic approach.

It should be considered that this paper is based on a very long record, i.e. around seven centuries, and supposes a homogeneous, coherent evolution of the natural and anthropic system, thus excluding turning points or anomalous future changes. It constitutes a black-box analysis of the system; it is not a forecast, which should consider any future change to the system or to the external forcing. Therefore, this extrapolation is mainly conditioned by the natural evolution of long-term mechanisms, while predictive models are based on projections of future emissions. Any difference should not be interpreted in terms of “better” or “worse” prediction, but as different information. This paper predicts the baseline corresponding to the unperturbed evolution of the system, and any difference from physical models will point out the potential impact of the selected trends of radiative forcing that would cause the sea level to exceed this baseline. The exponential extrapolation of this paper is intermediate between the extrapolation by Ferla et al. (2006) and the lower limit by Lionello et al. (2021) and Zanchettin et al. (2021). Similarly, it is close to the 1990 IPCC projection (Warrick et al. 1990), but lower than the 2019 IPCC projection (Oppenheimer et al. 2019).

In Venice, the relative sea level has followed a continually rising trend since 1350, when the multiproxy documentation starts. The observed data do not support a minimum level around the early eighteenth or nineteenth century, as suggested by Vermeer and Rahmstorf (2009), Grinsted et al. (2010) and Kemp et al. (2011). This suggestion might respond to the theoretical consideration that, owing to the cooling that culminated in the middle of the LIA, the oceanic waters should have increased their density and contracted their volume, reaching a minimum level; but this trend has not been observed. A stasis, or even a turning point around a minimum of the eustatic component, could have been possible in the early period of the multiproxy series (i.e. the fourteenth century). Afterwards, the eustatic component continued to grow over time. Starting from the nineteenth century, it became dominant over LLS, and near the end of this century it will likely exceed twice the LLS rate.