Introduction

The impact of the weather on the productivity of arable farming is beyond dispute. The year 2018 is a clear case in point, as countries ranging from Sweden and the Netherlands to Switzerland and Italy experienced extremely low levels of rainfall from spring to autumn, resulting in bad harvests and expected price rises for potatoes, onions and other crops, but also in record-breaking harvests for north-European wine growers. It seems obvious then that, if the weather influences agricultural productivity, changes in the long-term pattern of the weather affect agriculture in the long run.

Many recent studies claim that climate change and its impact on agriculture determined the fate of past societies. The prosperity of the Roman Empire at its height is attributed to the Roman Climatic Optimum, which “fueled the agricultural engine of the economy” (Harper 2017, 52; cf. McCormick et al. 2012). Adverse climate change is thought to have disrupted the food supply, causing not only misery and increased mortality, but also social and political unrest, and even the fall of empires. Population itself is directly linked to climate change in the form of shifts in the carrying capacity of the land (Galloway 1986). However, a thorough analysis of the causal links between climate change and agricultural productivity is generally missing. The argument is largely based on instances of extreme weather (exceptionally dry summers, wet winters etc.) that caused harvest failure and sometimes even famine, and on the experiences of peoples who lived in geographical circumstances that were on the margins of the biological requirements of arable crops, such as the extreme north or semi-arid regions. Yet we should not confuse bad weather events with long-term climate change, although the latter can change the frequency of the former. Moreover, if we want to test the hypotheses regarding the demographic, economic and political consequences of climate change in less clear-cut cases, we should go beyond intuitive conjectures and analyse the consequences of climate change for agriculture more thoroughly. In this paper, we shall attempt to do that for the Roman period, and in particular for the Mediterranean region.

We will start with some remarks on the nature of climate change in relation to agriculture and continue with an analysis of the impact of weather conditions (primarily temperature and precipitation) on crop growth. Answering the question of whether the impact on agriculture of climate shifts during the Roman period was sufficiently strong to have a major impact on society is hampered by the lack of quantitative and long-term series on weather and agriculture—with the necessary high spatial and temporal resolution—needed for measuring the link between them. Hence, we will step outside the chronological boundaries of the Roman world and look at data pertaining to the more recent past—and even to the future—in order to offer a clearer picture of the potential impact of climate change on agriculture and food supply in the Roman period. The main question to be answered in this part of the paper is: to what extent did long-term changes in temperature and precipitation have an impact on general levels of agricultural production? The focus will be on cereals,Footnote 1 as these constituted the main staple food in this part of the world from prehistory to the present. Other foodstuffs and crops, like pulses, vegetables, olives and grapes/wine, may have played an important supplementary role in providing the necessary calories and nutrients, but the potential consequences of the annual fluctuations in their production were less far-reaching. We will see that there is no clear and predictable linear link between temperature and agricultural production. Shifts in precipitation potentially have a greater impact on cereal production than temperature shifts, but they are also geographically more variable. In order to assess the scale and nature of climate shifts in the Roman period, we will also look at climate shifts in the second millennium CE, for which historical sources and proxies are better than for the earlier period. Subsequently, we will assess the short-term and long-term implications for the Roman world, including the debate on carrying capacity and population. Several recent and prominent studies of climate and society in ancient worlds are based on Malthusian models, in which adverse climate change is seen as lowering available resources, with inescapably negative results for economy and population.

Climate, Trends and Weather

One of the difficulties that present themselves in investigating the consequences of climate change in early societies is the absence of high-resolution data—both in terms of space and time. With the exception of tree-ring evidence, most proxies offer a picture of climate change with a relatively low chronological resolution, but the contribution that dendrochronology can make to the reconstruction of the Mediterranean climate in the first millennium CE is limited. The state of past climate research is different for the less distant past. Even before instrumental measurements emerged, documentary evidence and a wide range of individual observations from the late Middle Ages onwards enable modern scholars to reconstruct weather patterns at a local level. Similar sources for the ancient world only allow very rare insights, which are too fragmented, too isolated and too focused on exceptional circumstances to be helpful in climate reconstruction. Hence, despite much progress in recent years, our view of climate change in the Roman period is limited to trends in temperature, precipitation and variability at a multi-decadal level, with only isolated observations of extreme phenomena, caused for instance by short-term factors such as volcanic eruptions. Until a few years ago, the chronological resolution of paleoclimate data was even lower than it is now, and the danger of confusing climate and weather was correspondingly greater. As Contreras et al. (2018b, 56) recently pointed out, in the absence of data on a seasonal or annual scale, it is easy to posit a homogeneity in the weather that clearly does not reflect the actual situation.Footnote 2 In the absence of seasonal or at least annual data, it was tempting to describe the weather based on the trend, leading to such notions as the beneficial weather during the Roman Climatic Optimum (Brooke 2014, 2018) or the “century-long drought” that caused economic decline in the early Byzantine period (Fuks et al. 2017, 214). Conversely, singular extreme weather events may not be picked up in the relatively low-resolution proxies. For example, the textual sources describe a major famine in the Byzantine Empire in 927, caused by an extremely severe winter, but the available proxies show no cooling trend or significant variability in the 920s or 930s (Izdebski et al. 2018, 295). Chronological resolution is, however, crucial for the assessment of the impact of climate change, as agriculture and crop growth ultimately functioned at a seasonal, even daily, scale.

That climatic periods are not homogeneous and uniform is not so easy to show for the Roman world,Footnote 3 but the much better sources on the Little Ice Age, from about 1400 to 1850, make this very clear. Recent studies have pointed out the high degree of variability throughout this climatic era. The climate, let alone the annual weather, was not invariably cool; periods of exceptionally high temperatures occurred. While around 1420 the Spörer solar minimum ushered in a period of generally lower temperatures and unstable weather conditions, years of very hot and dry weather nevertheless occurred (Camenisch and Rohr 2018, 100; Rodrigo 2018, 248–249; Rohr et al. 2018, 255).Footnote 4 In fact, even the chronological boundaries of the LIA have come into question. A recent dendrochronological study of multiple long-term tree-ring series in the northern hemisphere showed that the fourteenth century was among the warmest in the second millennium and that the transition from the Medieval Warm Period to the LIA may have occurred significantly later than assumed, effectively limiting the LIA to the seventeenth to nineteenth centuries (Esper et al. 2018, 87). Earlier climatic eras will have been just as heterogeneous as the better documented Little Ice Age. In short, a phrase like “a century-long drought” evokes images of environmental catastrophe—a clear example of the devastating impact of adverse climate change on agriculture and society—but it is misleading in its assumed uniformity.

The limitations are similar concerning the spatial resolution. Proxies are either very general (such as Greenland ice) or relatively sparse and local (such as speleothems), while the Mediterranean landscape creates many diverse local microclimates. Temperature in general tends to be spatially less variable than precipitation (PAGES hydro 2k 2017, 1863–1865), but the variability of the climate was even greater in the Mediterranean than in central or northern Europe. One study combining all available proxy data for the Mediterranean in the first millennium CE concludes that “there is no Mediterranean-wide agreement on overall temperature trends for the 1st millennium AD”. The proxies even give conflicting signals on whether the earlier or the later part of the millennium was warmer (Labuhn et al. 2018, 73). The spatial variability of precipitation tends to be greater still (Reidsma et al. 2009, 32), most of all in northern Africa, which lies between the climatic regions of the Mediterranean and the Sahara (Mougou et al. 2011). Labuhn et al. (2018, 73–80) tentatively conclude that, as far as hydro-climate (i.e. precipitation) is concerned, the proxies point to four subregions in the Mediterranean in the first millennium CE. Or, more pointedly: “The Roman experience of climatic change in the Mediterranean was never the same” (Labuhn et al. 2018, 81).

Not only temperature and precipitation were important, but also evapotranspiration (the loss of moisture from the soil, plants and water surfaces), which depends on air humidity and wind speed, both of which differ locally. Elevation and distance from the sea are additional important variables. In other words, the geographically low resolution of climatic reconstructions of the ancient world makes it difficult to assess the climatic influences on agriculture against the varied geographical and topographic background of the Mediterranean (Saadi et al. 2015, 107; Izdebski et al. 2016, 5; Contreras et al. 2018a, 4; 2018b, 58; Haldon et al. 2018, 4).
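
How strongly temperature and local conditions enter into evapotranspiration can be illustrated with the Hargreaves-Samani equation, a standard modern approximation of reference evapotranspiration. The sketch below is purely illustrative: the temperature and radiation inputs are invented, and the equation describes a modern reference crop, not ancient Mediterranean fields.

```python
# Illustrative sketch: Hargreaves-Samani approximation of reference
# evapotranspiration (ET0, mm/day). All input values are hypothetical,
# chosen only to show how temperature and location (via extraterrestrial
# radiation, which depends on latitude and season) drive moisture loss.
import math

def et0_hargreaves(t_mean, t_max, t_min, ra):
    """t_mean, t_max, t_min in deg C; ra = extraterrestrial radiation,
    expressed as mm/day of evaporation equivalent."""
    return 0.0023 * ra * (t_mean + 17.8) * math.sqrt(t_max - t_min)

# Hypothetical July values for a lowland Mediterranean site (ra ~ 17 mm/day):
print(f"{et0_hargreaves(24.0, 30.0, 18.0, ra=17.0):.1f} mm/day")  # ~5.7
print(f"{et0_hargreaves(26.0, 32.0, 20.0, ra=17.0):.1f} mm/day")  # ~5.9: a 2 deg C rise raises water demand
```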

Climate, Agricultural Systems and Cropping Strategies

Agricultural productivity is obviously not only determined by the influence of the weather on biological processes. In fact, agricultural systems, differing in sowing times and sowing rates, fertilization, treatment of the soil and the selection of crop varieties, determine yield at least as much. Porter and Semenov (2005, 221, referring to the research of Monteith in the 1980s) estimated that less than 20% of the yield variation of winter cereals (12% on heavy soils; 17% on light soils) was determined by weather (rainfall, radiation and temperature). Human decisions have a significant impact on agricultural productivity, and therefore choices regarding production goals in an ever-changing socio-economic context were just as important as climate. Agricultural systems are fairly stable in the short term and for the largest part do not change annually. Farmers in the Mediterranean knew the variability of the weather and made their decisions accordingly.Footnote 5 In that sense, agricultural systems were partly adapted to the climate.

However, climatic adaptability does not imply local uniformity in agricultural systems or cropping strategy (Heinrich 2017). Market-oriented farmers generally responded to profit opportunities, while subsistence farmers may have aimed at fulfilling their household’s needs. The optimal crop to cultivate under the local climatic conditions depended on one’s goal: it may have been wheat for the rich landowner and barley or rye for his poorer neighbour. This also means that, when the climate changed, so did cropping strategies and agricultural systems. When prices of foodstuffs changed, so did the rewards for producing these crops, which altered the configuration of production factors and the considerations of farmers. For example, a long-term decline in wheat yields and an increase in the variability of wheat yields may have caused a general rise in the price of wheat in relation to other crops, with peak prices occurring with some regularity. This may have induced some farmers to invest more in wheat than in other crops or to expand their arable land, while at the same time others may have opted for the climatically more optimal and less risky cereals. This also ties in with the notion of carrying capacity, which is often based on the false assumption that human factors in agriculture are always optimized for the highest production. Societal factors meant that agricultural systems were neither optimized nor fixed and unchanging. The questions to what extent there was scope for adapting the agricultural system, and what this meant for the impact of climate change on food supply in the Roman world, will be discussed below.

Annual harvests varied between poor and abundant, but in Mediterranean rain-fed agriculture, conditions were rarely so extreme as to offer no harvest at all. Harvest failure is not a biological concept, but a societal one. What was considered a failed harvest depended on expectations and goals. Paul Halstead pointed out that precisely the vagaries of agriculture urged ancient farmers to aim for harvests that exceeded their needs. Hence, a “normal surplus” was their goal (Halstead 1989). The ratio between land, labour and the consumption needs of their households determined the smallholders’ ability to pursue this strategy, which means that harvest failure is not a biologically fixed concept, but depends on the socio-economic position of farmers, and also on the extent to which authorities claim a share of the harvest. This also means that it is impossible to give a climatic-biological definition of harvest failure, although some modern historians do precisely that and offer precipitation figures as a proxy for harvest failure. To give one telling example: a study of agriculture in Tunisia points out that in the Kairouan region between 1950 and 2004, durum wheat was almost constantly cultivated under drought conditions, as the water requirements for wheat in rain-fed arable farming were almost never met and water deficits generally occurred from December to April.Footnote 6 In other words, even the meaning of “drought” depends on its context, as precipitation levels that would be regarded as “drought” in one place are normal in others. Of course, agricultural productivity and, most of all, the variability of harvests will differ accordingly, but the point is that what would be a failed harvest to some farmers would be normal to others.

Measuring the Impact of Climate Change

Despite these caveats, harvests fluctuated between the proverbial “times of feast, times of famine”. As observed above, agricultural systems did not change annually, so that, all other things being equal, the vagaries of the weather were a cause of great concern to farmers. As the long-term patterns of the weather changed, so did the circumstances of agriculture. The extreme cases on the cold and arid fringes of the agricultural world seem quite clear, but how to determine the impact of climate change in more temperate conditions?

Analysing the link between weather patterns and climate change on the one hand and cereal yields on the other requires spatially and temporally high-resolution data on both harvests and weather. Sowing times, crop varieties and soil conditions play an important role too. Such data are not available in sufficient detail for the pre-modern world, so it is to the modern world that we may turn. In view of the debates on global warming, many studies on the impact of climate change are available. These studies are based either on (1) data on yields and climate of the recent past (generally limited to the second half of the twentieth century and the first decade of the twenty-first) or (2) the combination of crop models (that is, generalized models of crop requirements during the various stages of the growth cycle) and simulations of local climate change in the twenty-first century. Moreover, these studies are usually focused on limited areas. The results offer detailed analyses of the various weather-related determinants of crop development under a specific range of climatic, geographic and agricultural conditions.

However, the results cannot simply be projected onto the distant past. First, rising CO2 levels directly change the biological conditions of plant growth. Plants convert atmospheric CO2 through photosynthesis into glucose, and higher levels of atmospheric CO2 stimulate plant growth. Currently, the atmospheric CO2 level is 390 ppm, but it is predicted to reach 700 ppm later this century (Rosenzweig and Hillel 1995; figures from Jalota et al. 2013, 29). Since atmospheric CO2 levels were lower up to the recent past, and certainly much lower than in the near future, the yield figures of these studies (and in particular those based on future simulations) cannot be directly projected onto the past. Second, the simulations based on crop models and the empirical data concerning yields under various weather conditions in the recent past pertain to modern, post-Green Revolution cereal cultivars that have been extensively engineered with characteristics desirable for modern industrial agriculture and the food industry in mind. While the ancients practised selective breeding and humans have always sought to improve their crops, these modern cultivars did not exist in antiquity, and crop populations in the field would have been more genetically diverse (Heinrich and Hansen 2019). Crop models based on these modern cultivars therefore only imperfectly reproduce the response of pre-modern crops to various weather conditions (Contreras et al. 2018a, 4). Third, pre-modern farmers had much less recourse to fertilizers than modern agriculturalists, which creates a bias in the comparison between ancient and modern farming. Finally, modern insurance companies and, in some cases, the agricultural policies of national governments, offering compensation for bad harvests, limit the consequences of bad harvests for farmers, whose households’ wellbeing or even survival does not depend as much on an adequate crop as it did in pre-modern times. Pre-modern farmers had no recourse to institutionalized help and had more reason to avoid risks than their modern successors. Despite these differences between modern and pre-modern crops and agricultural systems, studies on the impact of climate change in the recent past and near future provide insights that are relevant to the ancient world.

Let us start with an analysis based on crop models and climate change simulations in Apulia during the twenty-first century. Two recent studies, by Ventrella et al. (2012) and Lionello et al. (2014), each offer two scenarios of climate change, one more moderate and one more extreme. The extreme scenario seems less relevant to the past than the more moderate one, so we will limit ourselves to the latter, which assumes an increase by mid-century of 2.3 °C in the mean maximum (day) temperature and of 1.7 °C in the mean minimum (night) temperature. Precipitation is assumed to fall by 10.4% (Ventrella et al. 2012, 409), which may be put in perspective by comparison to the drop in precipitation of 5% in northern Italy and 15% in southern Italy over the course of the twentieth century (Appiotti et al. 2014, 2008). Hence, the climate in Apulia, which is hot and dry in summer, with precipitation mostly concentrated in the winter months, will become warmer and drier still. In Apulia—as in most of the Mediterranean lowland areas—wheat is sown in late autumn and harvested before the hottest and driest months. Under the conditions of the moderate scenario, yields of durum wheat are predicted to increase by about 10% in comparison to yields in the recent past. We should take into account, though, that this predicted rise in yield occurs under conditions of increasing CO2. In the more extreme scenario, yields are predicted to fall. The study of Lionello et al. (2014) confirms that durum wheat, sown in the winter season, is not particularly sensitive to moderate climate change. No link was found between wheat yields and temperature variability, and only a weak correlation with spring rain, but not with precipitation in other seasons. On the other hand, the predicted temperature rise and decline in precipitation did have a relevant impact on the productivity of wine and olives. Grapes and olives grow in summer and are thus more affected by peak summer temperatures and low summer precipitation. The drop in wine production is explained by the crossing of a high-temperature threshold for grapes in an already hot region. Durum wheat, on the other hand, was not significantly affected by the temperature rise of about 2 °C.

The importance of temperature thresholds is also noted in other studies. In particular, when critical levels of temperature are crossed, steep drops (or, conversely, increases) in yields can occur. These temperature thresholds, which mainly affect certain processes in plant development, have a narrow range (Rosenzweig and Hillel 1995; Porter and Semenov 2005, 2022). This vital role of thresholds also means that variability is more important than the mean, as the annual harvest will be affected by the conditions of each step in the growing process, not by the mean. For example, a simulation of the effect of global warming on wheat in Punjab (India) concluded that by mid-century the yield would be significantly lower in 7 out of 30 years. In other words, in more than 75% of years, the wheat yield would not significantly deviate from the past mean (Jalota et al. 2013, 30). Another study used simulation to analyse the effect of increased variability with an equal mean. The conclusion was that increased variability of temperature and precipitation caused a drop in mean wheat yields in Spain, but not in England, which again should be related to the importance of thresholds. While the increased variability meant that conditions in Spain exceeded beneficial levels in more years, in England the increased variability remained within the acceptable range (Porter and Semenov 2005, 2031). A study of trends in Europe between 1990 and 2003 showed that high variability in the yields of wheat and barley was strongly linked to high variability in precipitation, but not to temperature variability. “However, the effect of temperature variability may increase when temperature shifts away from the crop-specific optimum” (Reidsma et al. 2009, 32, 38). In sum, there is no linear link between changes in weather conditions and changes in yield; the response to the same trend in temperature and precipitation depends on how the starting point relates to the biological tolerance range of the particular crop. However, as the case study of Apulia has shown, the impact of temperature and precipitation also differs in relation to season and stage of crop development.
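
The role of thresholds in making variability matter more than the mean can be shown with a toy simulation: assume a yield response that is flat within the tolerance range and falls steeply beyond a critical temperature, then compare two climates with the same mean but different variability. The response function and every parameter below are invented for illustration; they merely mimic the threshold behaviour reported by Porter and Semenov.

```python
# Toy simulation: same mean temperature, different variability.
# The piecewise yield response and all parameters are hypothetical.
import random

T_CRIT = 32.0  # invented critical temperature around anthesis (deg C)

def relative_yield(t):
    """Flat within the tolerance range, steep loss beyond the threshold."""
    return 1.0 if t <= T_CRIT else max(0.0, 1.0 - 0.15 * (t - T_CRIT))

def mean_yield(t_mean, t_sd, years=100_000, seed=1):
    rng = random.Random(seed)
    return sum(relative_yield(rng.gauss(t_mean, t_sd))
               for _ in range(years)) / years

print(mean_yield(29.0, t_sd=1.5))  # ~1.0: the threshold is rarely crossed
print(mean_yield(29.0, t_sd=3.5))  # lower: hot years now cross the threshold
```

The same logic accounts for the Spain/England contrast: whether increased variability hurts depends on how close the starting climate already is to the crop-specific threshold.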

Impact of Temperature on the Crop Cycle

We may distinguish four stages in the development of wheat: (1) germination and seedling growth; (2) development and tillering; (3) stem elongation; (4) anthesis and grain filling (Chourghai et al. 2016, 1627). The duration of the entire growth cycle depends primarily on temperature and sunlight. The mean length of the crop season of winter wheat in the Mediterranean in 2000 was 216 ± 33 days. The average shortening of the crop cycle due to global warming, which mainly takes place during the pre-flowering phase, is predicted to be 15 days (Saadi et al. 2015, 107). The shortening of the crop cycle in the Mediterranean has both beneficial and harmful consequences. On the one hand, the shortening may lead to a reduction in biomass growth, which results in a lower yield (Rosenzweig and Hillel 1995; Saadi et al. 2015, 107). On the other hand, the shortening of the crop season improves the chances of avoiding hot and dry weather in spring, which is also related to the sowing dates of winter wheat. Sowing at low altitudes in the Mediterranean should coincide with the drop of temperature below the threshold for the first development stages of the crop and with the start of the rainy season (Saadi et al. 2015, 105). Changing sowing dates is an important strategy of adaptation to climate change (Rosenzweig and Hillel 1995). The crop cycle of pre-modern varieties of wheat differed from that of modern cultivars in that their early development stages were shorter (Contreras et al. 2018a, 4–5), which gave farmers in the past more scope to adapt sowing times to the climatic conditions of their location.
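
The temperature dependence of cycle length is conventionally captured with growing degree days: a variety is assumed to need a fixed heat sum above a base temperature to complete its cycle, so a warmer season accumulates that sum faster and maturity comes earlier. The sketch below uses an invented synthetic season and an invented heat-sum requirement; it is not calibrated to any ancient or modern wheat variety.

```python
# Minimal growing-degree-day (GDD) sketch: the crop matures once accumulated
# heat above a base temperature reaches a fixed requirement, so a uniformly
# warmer season shortens the crop cycle. All parameters are illustrative.
import math

T_BASE = 0.0              # assumed base temperature for development (deg C)
GDD_TO_MATURITY = 2000.0  # hypothetical heat-sum requirement (deg C days)

def days_to_maturity(daily_mean_temps):
    gdd = 0.0
    for day, t in enumerate(daily_mean_temps, start=1):
        gdd += max(0.0, t - T_BASE)
        if gdd >= GDD_TO_MATURITY:
            return day
    return None  # season too cold or too short to reach maturity

def synthetic_season(shift=0.0, days=300):
    """Sinusoidal daily mean temperature from an autumn sowing, plus warming."""
    return [10.0 + 8.0 * math.sin(2 * math.pi * (d - 60) / 365) + shift
            for d in range(days)]

print(days_to_maturity(synthetic_season(shift=0.0)))  # baseline cycle length
print(days_to_maturity(synthetic_season(shift=2.0)))  # warmer: matures earlier
```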

The growth of the plant slows down during the cold months and increases again with the rise of warmth and sunlight in spring (Klages 1947, 92–93). It then becomes crucial for the crop to ripen before the drought of summer endangers the harvest (Spurr 1986, 44).Footnote 7 In the last phase of the ripening process, however, precipitation is not required; the seeds will ripen without using much moisture. Except for this final phase of the crop cycle, insufficient moisture is harmful to the crop’s development and hence to yield, which means that precipitation levels or soil moisture should be sufficient from sowing in autumn until the final ripening of the crop in spring. Total annual precipitation is therefore less important than its spread over the seasons, as may be illustrated by a study of Algeria (Chourghai et al. 2016).

Agriculture in Algeria is almost completely rain-fed (98.6% of total arable land). Durum wheat, the main crop of modern Algeria, is currently sown between mid-October and late November. Annual precipitation in the coastal region of Algiers is 643 mm, with a standard deviation of 172 mm. In the mountainous inland region of Bordj Bou Arreridj, it is significantly lower: 369 mm, with a standard deviation of 87 mm. Precipitation is predicted to fall in Algeria, but this trend is unequally spread over the seasons. Moreover, the trend is geographically varied. Rising temperature will also lead to increasing evapotranspiration in durum wheat, potentially causing increased moisture stress. The maximum of temperature and evapotranspiration occurs in spring and summer. The mean annual temperature is predicted to rise by 2.8 °C in the coastal region of Algiers and by 3.5 °C in the mountainous inland region of Bordj Bou Arreridj. In the region of Algiers, total annual rainfall is estimated to fall by 18%, mostly in spring and summer, but precipitation is predicted to rise in October. In Bordj Bou Arreridj, however, rainfall will fall in spring in particular, but increase in the months of June to October, i.e. much of the summer and autumn. In both regions, earlier sowing of durum wheat will become possible, but this will be insufficient to avoid a decrease in yields in the region of Algiers, while it will keep yields at current levels in the inland region of Bordj Bou Arreridj. In sum, the trend in annual precipitation is less important than what happens at the monthly level, and regional variation can have significant consequences.
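
The point that the seasonal spread of rainfall matters more than the annual total can be made concrete with a minimal soil-water “bucket” model: the same annual precipitation, distributed differently over the growing season, produces very different water stress. All figures below (monthly rainfall, soil capacity, crop demand) are invented; this is not a calibrated model of Algerian conditions.

```python
# Minimal soil-water "bucket" model: identical annual rainfall, different
# seasonal distribution, different water stress for an autumn-sown cereal.
# All numbers are invented for illustration.
CAPACITY = 100.0  # hypothetical plant-available soil water capacity (mm)

def stress_months(monthly_rain, monthly_demand):
    """Count growing-season months in which crop water demand is not met."""
    soil, stressed = CAPACITY / 2, 0  # assume a half-full bucket at sowing
    for rain, demand in zip(monthly_rain, monthly_demand):
        soil = min(CAPACITY, soil + rain)
        if soil >= demand:
            soil -= demand
        else:
            soil, stressed = 0.0, stressed + 1
    return stressed

# November..May; demand rises towards grain filling in spring (mm/month):
demand      = [20, 20, 25, 30, 50, 70, 60]
even_rain   = [53, 53, 53, 53, 53, 53, 53]  # 371 mm spread evenly
winter_rain = [90, 95, 90, 60, 20, 10, 6]   # the same 371 mm, mostly in winter

print(stress_months(even_rain, demand))    # 0: demand met in every month
print(stress_months(winter_rain, demand))  # 2: spring shortfalls despite equal total
```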

Weather and Risks in Roman Agriculture

The most important sources for the crop cycle and weather hazards in ancient Greece and Italy are the works of Hesiod, Xenophon, Varro, Columella, Pliny and Palladius. Hesiod and Xenophon wrote in archaic and classical Greece; Varro, Columella and Pliny in the first centuries BCE and CE. The Roman authors dealt primarily with farming in Italy. Palladius is not securely dated; he wrote in the fourth or fifth century CE. Hesiod’s Works and Days is a poetic celebration of the farmer’s year; the others are well-informed and more or less factual accounts of arable cultivation and crops (Spurr 1986, ixff). The details of their work reflect geographical and climatic differences, but their accounts are not detailed and specific enough to reveal possible shifts within the crop cycle due to climate change. They all wrote from the viewpoint of the well-to-do gentleman-farmer. While the wealthy farmer may have owned more fertile soil and better-situated land, and probably cultivated a different array of crops than the smallholder, primarily opting for crops that fetched a good price in urban markets, all other things being equal there was no great difference in the crop cycles between the various types of farms.Footnote 8 The natural factors influencing the growth cycle were the same, whatever the social context. The fact that the agricultural writers regularly mention “emergency crops” in their works reflects their wide use and efficacy. Since the readers of Columella or Pliny did not have to fear marauding armies ravaging their farms, we may safely assume that the crop failures the authors had in mind were caused by bad weather. Climatic factors would lead to reduced yields, but seldom caused a crop to fail completely.

Columella, who is the most detailed of the agricultural writers in this respect, pays more attention to sowing than to harvesting. Detailed advice is given regarding the time of year when seeds have to be sown under warm, moderate and cold conditions. For instance, sowing of wheat in cold areas is advised as early as September or, for the three-month variety, as late as March in the hills. Columella does realize that circumstances are not always ideal, and that some farmers have to accept less than optimal results. While sowing of emmer normally occurred in autumn or early winter, Pliny and Columella mention spring-sown emmer. According to Columella, this would yield better results when sown at the usual time of year; he therefore sees it not as a special variety of emmer, but as an adaptation of the sowing date to particular circumstances. Pliny, on the other hand, treats it as a special variety of emmer. Less detailed than the advice on sowing is that on harvesting, which according to Columella occurs in June or July. The reason for the more careful treatment of sowing practices is obvious: while the farmer can best judge the time to harvest from the state of the crop, sowing times have to be determined by considerations concerning soil, seed and the weather.Footnote 9

Just as in modern times, the growth cycle as described by the agricultural writers centres on the avoidance of two unfavourable circumstances: drought and heat in summer, and cold in winter. They also point out that the susceptibilities of the different types of cereals varied, which was probably one of the reasons why various types of cereals were cultivated throughout Italy.Footnote 10 An important type of wheat in Antiquity was emmer wheat (Triticum turgidum ssp. dicoccon), mainly because of its resistance to extremes in weather conditions during its crop cycle and its hardiness in storage. Its various cultivars resist cold quite well, but also survive under circumstances of heat and drought. Because of these characteristics, it has remained an important crop among peasant cultivators throughout history. Besides the hulled emmer wheat, naked wheats like bread wheat (Triticum aestivum ssp. aestivum) and durum wheat (Triticum turgidum ssp. durum) were also grown. Bread wheat is the more cold resistant, durum wheat the more drought resistant of the two (Heinrich 2017, 160). Rye was more cold resistant too and therefore occurred mostly in northern Italy (Heinrich 2017, 159). The growth cycle of barley was shorter than that of wheat. Worth mentioning are the various types of panicum and milium, spring-sown cereals that were quite drought resistant but sensitive to cold.

In the plains of the southern Italian peninsula and in eastern Greece, the drought and heat of summer may be seen as the main factors determining the crop season. In the mountains of the Mediterranean regions, however, it is not the summer but the winter that constitutes the main obstacle, since the wind and heavy rainfall of autumn could harm the crop. In general, the seed had to be sown in time for the young seedling to become resistant to the cold and heavy rains of autumn or winter. In the warmer regions of southern Italy and eastern Greece, where winters are fairly mild and short, a late sowing is possible because growth is not severely slowed down, allowing a harvest before the extreme summer heat and drought. The main advantage of plants with a short growth period, such as barley, is that in moderate regions they can be cultivated during a larger part of the year without winter or summer posing a serious threat, while they can also be cultivated in regions with long winters or summer droughts. The long growth cycle of some types of naked wheat, for instance, cannot be adjusted to the seasons of the mountain regions.

The same considerations result in a similar pattern under the different circumstances of Greece. The precise information given by Greek authors, in particular the agricultural calendar of Hesiod, can only be valid for the regions they refer to. Hesiod, writing about farming in Boeotia, places sowing and harvesting at the setting and rising of the Pleiades, in November and May. Xenophon advises sowing at the beginning of the autumn rains. Spring-sown grain is also mentioned, regarded by some as an emergency crop, by others as a different kind of cereal in its own right. The variation in the agricultural calendar will have been as wide in Greece as in Italy (Osborne 1987, 13; Isager and Skydsgaard 1992, 22ff, 161). The differences within agricultural cycles should warn us not to project the crop cycle or the climatic and biological conditions of plant growth of one area onto another.

In sum, the literary sources on agriculture in the ancient world point out the diversity of the impact of weather on arable farming—diversity in two senses of the word: first, the geographical diversity of the Mediterranean landscape; second, the diversity of crops and their tolerance ranges to weather impacts. The agricultural writers, who were landowners themselves, were very much aware of the importance of crop diversity and the adjustment of crops to the particular conditions of each location. The writings of Columella and Pliny, both dating to the first century CE, show an awareness of the dangers posed to the coming harvest by extreme weather conditions. The question is, however, to what extent, if at all, these dangers were greater after the end of the so-called Roman Climatic Optimum.

Roman Climate Change in a Long-Term Perspective

The proxy data for the distant past allow us to identify relative trends in temperature and precipitation, but it is not always possible to translate them into absolute figures with any degree of precision. Hence, it is difficult to compare the climate change from the Roman Climatic Optimum to the so-called Late Antique Little Ice Age in terms of °C or mm of precipitation, or to compare it with the climate transitions of other periods in absolute terms.Footnote 11 Let us start with a comparison of the transition periods in more relative terms. A starting point may be the general overview of climatic eras in the northern hemisphere recently presented by John Brooke in the Palgrave Handbook of Climate History (2018). Over the past 6,000 years, Brooke distinguishes three cold periods, which are characterized by the concurrence of deep solar minima and pulses in the Siberian High: 4000–3000 BCE, 1200–700 BCE and 1400–1700 CE (Little Ice Age = LIA). In addition, he sees an irregular half-cycle of lesser solar minima, which did not coincide with pulses in the Siberian High and which caused less intense episodes of global cooling: 2500–2100 BCE and 550–700 CE (Late Antique Little Ice Age = LALIA). The Roman Climatic Optimum (in Brooke’s terminology: the Roman-Han Imperial Optimum) of 200 BCE–500 CE is described as a warm and humid period, possibly with temperature levels as high as those of the twentieth century (Brooke 2018, esp. 179). The LIA stands out as a particularly clear long-term climatic phenomenon. In fact, on the basis of 73 worldwide proxy records of different kinds, Sam White (2014, 328) proclaims the LIA “as undoubtedly the most pronounced global climate anomaly of the past 8000 years (until contemporary global warming)”. So, the least we can say is that the LALIA appears to have been a less severe cold period than the LIA.

It is unfortunately less feasible to generalize about precipitation levels in terms more precise than “dry” or “humid”. Moreover, we have seen that precipitation levels are much more spatially variable than temperature. Little can be said, therefore, regarding the relative extent of shifts in precipitation levels between the climatic periods. Hence, we must limit ourselves to temperatures. Our question is: how does the temperature change between the RCO and the LALIA compare to temperature changes in the second millennium CE? One study observes that for most of the 10,000 years since the start of the Neolithic, northern hemisphere temperature variation in relation to its long-term mean has been about 1 °C. The latest reconstructions of past climate, based on multiple long-term dendrochronological series in the northern hemisphere, indicate that the range of temperature between the warmest and the coldest century of the past 1100 years was 0.95 °C in one study and 0.33 °C in another (Esper et al. 2018, 87).

Recent estimates of temperature changes between the much better-documented LIA and the twentieth century are informative. In the above-mentioned Palgrave Handbook of Climate History, Pfister (2018, 276–281) gives various estimates of temperature differences for Central and Western Europe during the second millennium CE.

[Table: Pfister 2018, temperature differences with the modern reference period (1951–1990)]

A recent study of past climate change in Andalusia offers similar figures. The mean winter temperature in the Guadalquivir river basin, estimated on the basis of documentary evidence, was up to 0.5 °C lower during the years 1780–1830 than in 1960–1990. Climate model simulations for Andalusia distinguish three main periods: 1501–1735, the main phase of the LIA, with a mean temperature of 13.8 °C; 1736–1890, with 14 °C; and 1891–1990, with 14.4 °C. In other words, during the main phase of the LIA, the mean temperature was 0.6 °C lower than in the twentieth century. Empirical proxy data indicate lower temperatures during the cold period: in the high mountains of the Sierra Nevada, temperatures were 0.9 °C lower than the mean value for the twentieth century; in the Guadalquivir river basin, 0.5 °C lower (Rodrigo 2018, 251–252).

The LIA, however chronologically defined,Footnote 12 is visible as a separate climatic era in the proxy records, for instance in the growth of glaciers in this period. It is also visible in a study of 92 speleothems in western Europe covering the past two millennia (Lechleitner et al. 2018). Taken all together (“stacked”), they show a long-term trend towards more positive δ18O values from about 2000 to 550 BP. At that point, around 1400 CE, the aggregate value begins to fall. But the most interesting observation is this: “there is no clear indication of systematic changes in δ18Ospel corresponding to the Roman Warm Period (RWP), the Late Antique Little Ice Age (LALIA), or the Medieval Climate Anomaly (MCA), periods of significant temperature change in Europe”. The authors of this study point to the large amount of “noise” in the proxy record and to the fact that the method of “stacking” all speleothem values in western Europe obscures the fact that individual speleothems reflect different climatic changes. In other words, the authors emphasize that the other fundamental climatic changes might simply not be visible (Lechleitner et al. 2018, 18). However, the fact that the LIA is clearly visible in the aggregate speleothem values, while the earlier caesuras are not, might also support the conclusion that there were no clearly definable, prolonged climatic eras in western Europe from the start of the common era until the end of the Middle Ages.

A few more observations can be made on the basis of the above estimates and observations.

(1) Climate change is a matter of shifts in wildly fluctuating variables. Long-term climate change manifested itself in subtle and regionally highly variable shifts in the averages of temperature and precipitation and in changing patterns of weather extremes, not in distinct and homogeneous climatic optimums and “dark ages”. Extreme weather phenomena occurred in all periods. The band-width of such phenomena and their frequency shifted over time, but this is a matter of grey-scales, not black and white. In other words, at least during the late Holocene, there have never been eras of uniformly good or bad weather.

(2) At a decadal scale, the largest temperature difference between the LIA and the second half of the twentieth century is 1.2 °C, pertaining to summer temperatures in western and central Europe at the end of the seventeenth century. Note that this is the difference with the later twentieth century, i.e. a period in which global warming was already occurring. Since 1900, both the mean and the variance of temperature have been increasing (Porter and Semenov 2005, 2027). Surely, the temperature shift between the LIA, one of the severe cold periods of the past 6,000 years, and the period of global warming in the later twentieth century must constitute the largest (or at least one of the largest) of the past millennia. Nevertheless, in absolute terms, the estimated differences in long-term means are in the range of 1 °C or lower. In other words, climate change in the Roman era remained within a relatively limited band of climatic conditions.

(3) It has been stated that summer temperatures during the Roman Climatic Optimum may have been as high as those of the twentieth century, but Pfister observes that summer temperatures in western and central Europe during most of the seventeenth century (i.e. during the LIA) were also “almost” as high as those in the reference period. So, we should not attach too much meaning to the high summer temperatures of the RCO.

(4) The estimated extent of climate change in the past millennia does not nearly reach the levels of climate change expected during the twenty-first century. Even the moderate scenarios of global warming in the twenty-first century reckon with a difference of 2 °C and more (Saadi et al. 2015, 105). In many regions, even that predicted increase in temperature is expected to have limited or no impact on the yields of cereal crops, as conditions remain within the tolerance range of these crops.

So, we should ask whether it is likely that the shifts from the RCO to the LALIA, which were in all likelihood not of the same magnitude as those from the LIA to the twentieth century, had serious consequences for society. We must take into account that modern cultivars and modern agricultural technologies decrease the susceptibility to weather extremes to some extent, so we cannot project modern studies on crop resistance onto the distant past. Nevertheless, while the comparatively low extent of shifts in temperature and (probably) precipitation may have resulted in limited shifts in weather patterns, with possibly a higher frequency of extreme phenomena in some periods, it seems very unlikely that the impact on agriculture in the Roman world was as dramatic as often assumed.Footnote 13

Altitudinal Margins

It is often stated as a general rule that warm periods increased the carrying capacity of the land and thus allowed populations to grow and prosper, while cold periods did exactly the opposite (Galloway 1986).Footnote 14 In 2013, the economic historian Paolo Malanima (2013, 72–73) wrote that “climatic phases marked the past history of mankind. […] While warm periods were favorable to the spread of cultivation and the multiplication of humankind, cold epochs coincided with periods of demographic decline. Roman civilization flourished in a period of warm climate and was accompanied by population increase, while the early Middle Ages was an age of demographic decline and cold climate”. Harper (2017, 52) also claims that the prosperity of the Roman Empire at its height was due to the climatic warm period, which he calls—in one of his suggestive metaphors—“a potent incubator of growth”. This is based on the assumption that “yields in Mediterranean agriculture respond positively to increasing temperature”.Footnote 15 Formulated in such general terms, this is a questionable statement indeed. It is valid only for those parts of the Mediterranean in which cold acts as a constraint on crop yields, i.e. at higher altitudes.

In more concrete terms, Harper (2017, 52) argues that the warming of the climate increased the total area of arable land and thereby the carrying capacity of the land, and that the reverse happened when the climate cooled. Since this is a major argument for linking climate change to hunger and pressure on resources, it is worthwhile to look at the line of reasoning in detail. “In hilly Italy, an extended rise of 1 °C would have rendered, on conservative assumptions, an additional 5 million hectares of land suitable for arable cultivation; that is enough land to feed 3–4 million hungry bodies” [my italics]. The adjective “hungry” implies a situation of persistent population pressure on the land. As we shall see, this image of an inevitably starving population suits his argument well.

Harper took these figures from a study by Lo Cascio and Malanima (2005, 27), whose argument is as follows: “A long lasting one-degree decrease reduces the maximum altitude of cereal cultivation by 100–200 metres. Such a change probably would have no significant effect on the agriculture of a quite level country; but in a country like Italy, where hills cover 13 million hectares, things are different. In this case, even in the most restrictive hypothesis, the loss of arable can exceed 5 million hectares. Even supposing a lower yield in the hills than in the plains, this decline may entail a loss of product capable of feeding 3–4 million inhabitants”. In his contribution to the Brussels conference on climate change and ancient societies in May 2019, Malanima specified some of the assumptions on which it is based: “we know that temperature diminishes with altitude by 0.5 degree every 100 metres. If, for example, wheat is cultivated until 500 m of altitude, 1 degree more of temperature makes it possible to cultivate until the altitude of 700 metres. A lot of land becomes cultivable and able to support increasing populations. Any rough calculation suggests that millions of people may be supported by this mere displacement of the cultivation”.

In my view, the argument is incorrect both in its basic figures and in its general line of reasoning. An area of 5 million hectares of arable land would certainly be able to feed millions of people. However, this figure is based on the assumption that the difference in temperature applies to the altitudinal zone between 500 and 700 metres. In reality, various kinds of wheat are grown in peninsular Italy up to a height of 1,000–1,200 metres, while the shorter growth cycles of barley and rye allow cultivation higher still (Spurr 1986, 21). In the southern Alps, wheat is advised for “warm medium altitudes” of about 700 metres, but rye and oats up to 1,300 metres, i.e. “very cold high altitudes” (Andreae 1981, 56). So the area affected would be limited to a zone of 200 metres at a height of more than 1,000 metres, which is certainly significantly less than 5 million hectares.Footnote 16
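
The disputed arithmetic is easy to reproduce. With Malanima’s lapse rate of 0.5 °C per 100 m, a sustained 1 °C change moves the altitudinal crop limit by about 200 m; how much land that affects depends entirely on where the limit sits and on how the surface area is distributed over altitude. The area figures in the sketch below are placeholders, not actual Italian hypsometric data.

```python
# Reproducing the disputed arithmetic: a sustained temperature change of
# delta_t moves the altitudinal crop limit by delta_t / lapse_rate metres.
# The hectares-per-altitude-band figures are PLACEHOLDERS for illustration.
LAPSE_RATE = 0.5 / 100.0  # deg C per metre (Malanima's assumption)

def altitude_shift(delta_t):
    return delta_t / LAPSE_RATE  # metres

# Hypothetical hectares per 100 m altitude band (invented, declining with height):
area_by_band = {500: 900_000, 600: 800_000, 700: 700_000, 800: 600_000,
                900: 500_000, 1000: 400_000, 1100: 300_000, 1200: 200_000}

def affected_area(crop_limit_m, delta_t):
    """Hypothetical area between the old and the new crop limit."""
    lo, hi = sorted((crop_limit_m, crop_limit_m + altitude_shift(delta_t)))
    return sum(a for band, a in area_by_band.items() if lo <= band < hi)

print(altitude_shift(1.0))       # 200.0 m for a 1 deg C change
print(affected_area(500, 1.0))   # limit at 500 m: 1,700,000 invented hectares
print(affected_area(1000, 1.0))  # limit at 1,000 m: 700,000 invented hectares
```

Whether the affected band contains millions of hectares or a fraction of that thus hinges on the assumed altitudinal limit of cultivation, which is exactly the point at issue.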

Moreover, while a long-term change in temperature undeniably has an impact on the altitudinal boundaries of crops, the argument of Lo Cascio, Malanima and Harper is based on the false supposition that land suitable for arable farming is always optimally used. If that were the case, there would be a direct causal link between climate change, arable land and agricultural production. However, land that was potentially suitable for arable farming was not all used for this purpose, but was partly exploited in a wide spectrum of less intensive forms of agriculture or for fuel. In Lucania in the recent past, for example, wheat was grown up to a height of about 1,200 metres and rye even higher, but most of the upland area was used not to grow crops but as grazing land (McNeill 1992, 34, 126). It therefore seems a reasonable hypothesis that only a small share of the higher-altitude land that was lost to a cooling of the climate after the end of the RCO had actually been used for arable farming. At the same time, there was sufficient land available to compensate for this loss.

This must certainly have been the case in the Central Apennines, where not much land was used for intensive arable cultivation, as logistical obstacles prohibited production for export and there were no large local markets for agricultural goods. Varro (2.2.9) famously refers to the long-distance transhumance of sheep between winter pastures in Apulia and summer pastures in the mountains of Reate in the central Apennines to the northeast of Rome. Strabo (5.3.1) mentions acorns as one of the products of the land of the Sabines and emphasizes domestic livestock, including the mules of Reate. While population density may have increased after the days of Strabo and Varro, there is no reason to assume great changes in the landscape. A recent pollen study in the area of Reate (Rieti basin) concluded that there was no sign of extensive clearance or deforestation. Based on the share of pollen, forests declined somewhat in the imperial period compared to the republic, but only to a limited extent. In general, the conclusion on the basis of this and other pollen analyses is that deforestation in the Italian peninsula was localized and degradation limited (Mensing et al. 2015, 88; 2018, 4). A pollen analysis in the Upper Sangro Valley in Abruzzo (central Italy) shows evidence of forest clearance at higher altitudes in two periods in particular: roughly around 800 BCE and 500/600 CE. Both phases may have coincided with colder periods, and it may be speculated that clearance compensated for the loss of arable and grazing land. However, the authors conclude—rightly so, in my view—that societal processes far outweighed the effects of climatic fluctuations: around 800 BCE, this was the expansion of livestock holding and grazing; in the sixth century CE, the withdrawal of the population to safer heights in times of catastrophic warfare in Italy (cf. Brown et al. 2013). In short, while a cooling period predominantly affected the less intensive forms of the exploitation of upland regions, thereby lowering the availability of grazing or firewood, this is a far cry from millions of people losing their livelihood. Hence, in Italy, there is only a relatively weak link between changes in the altitudinal crop boundaries and total agricultural production.Footnote 17

Tipping Points, Carrying Capacity and the Scope for Adaptation

It might be argued, though, that when high population levels put much strain on available resources, even a little nudge can tip the balance, and that relatively minor climatic shifts could thus have major historical consequences. Such a “tipping point” metaphor also neatly sidesteps the accusation of environmental determinism and mono-causality. While Kyle Harper regularly emphasizes that the Roman Empire at its demographic high point was not subject to Malthusian pressure and that there is no sign of overpopulation leading to general poverty and malnutrition, he does note that the demographic and economic growth of the Roman Empire was stretching its limits: “The Romans had edged outward the very limits of what was possible in the organic conditions of a premodern society” (Harper 2017, 12). So, no Malthusian positive check in the Roman world yet, but the reserves were wearing thin. The “tipping point” metaphor is thus closely related to the concept of “carrying capacity”, as it is the sustainability of the population that determines whether a society is near the “tipping point”.

The idea that climatic and population cycles are in general indissolubly linked, with population rising during climatic optimums and falling during climatic lows, is based on the concept of carrying capacity. The carrying capacity of a region may be defined as the potential level at which it can produce the necessities of life for its inhabitants, given the technological and agricultural level of the population. While the Malthusian model emphasizes “land” as the determining factor in the land-population equation, Boserup turned this around and emphasized “population” as a determining factor: society will adapt to population pressure in the form of agricultural systems that increase the productivity of its natural resources. Since this adaptation comes at a cost—primarily consisting of labour input—Boserupian growth requires pressure to occur. Climate change adds an additional element to the equation, as climate amelioration means that the ceiling rises, while it falls as a result of adverse climate change.
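
The carrying-capacity logic shared by these models can be reduced to a simple equation: population grows logistically toward a ceiling K, and climate change moves K up or down. A minimal sketch of that logic, with invented parameters, makes explicit what the Malthusian reading assumes, namely that population passively tracks the ceiling; a Boserupian response would instead raise K itself through changed agricultural systems.

```python
# Minimal sketch of the Malthusian carrying-capacity logic discussed here:
# logistic growth toward a ceiling K that an adverse climate shift lowers.
# Growth rate, K values and timing are invented.
def simulate(years=400, r=0.01, k_warm=10.0, k_cold=8.5, shift_year=200):
    pop, trajectory = 5.0, []  # population in arbitrary units
    for year in range(years):
        k = k_warm if year < shift_year else k_cold  # climate shift lowers K
        pop += r * pop * (1 - pop / k)               # discrete logistic step
        trajectory.append(pop)
    return trajectory

traj = simulate()
print(f"on the eve of the climate shift: {traj[199]:.2f}")  # approaching k_warm
print(f"two centuries later: {traj[-1]:.2f}")               # settles near k_cold
```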

However, while an ecologically defined “carrying capacity” may be a helpful concept in biology, it is of limited use in the study of complex societies, as societal factors act as constraints on the utilization of natural resources. Hence, societies do not make optimal use of natural resources. Agricultural production is not only determined by such ecological factors as soil, yields and weather, but also by the structure of landholding and the extent of (under)employment, of specialization and of market integration.Footnote 18 One of the main factors in the rise of total economic production in pre-industrial Europe was the productive employment of labour, which was a function of market integration and the division of labour. Peasant agriculture is characterized by high labour availability within households and limited access to land. In an underdeveloped economy with weak non-agricultural sectors (such as, for instance, nineteenth-century Russia), this resulted in a high input of labour per unit of land, low levels of labour productivity, low investment of capital and high levels of underemployment. In other words, underutilization of land and labour constrained agricultural production. Furthermore, market integration allowed specialization in certain crops, and therefore the optimal use of natural conditions, stimulating production (e.g. Grantham 1999; Allen 2003; Erdkamp 2016). It is precisely the scope for adaptation that societal factors offer that allows the Boserupian response to population pressure, as changes in agricultural systems and technologies go hand in hand with changes in the input of labour and/or capital. In sum, complex societies were not passively subject to changes in the climate-land-population nexus. The response to ecological changes in the equation, including long-term climate shifts, was partly determined by societal factors, which governed the scope for changes in land use, agricultural systems and technologies.

Let me add a remark on Harper’s views on the decline of the Roman Empire. Harper’s combination of pandemic and climatic catastrophe ushering in that decline rests on an internal contradiction. He argues that the demographic downturn, which occurred in the form of the Antonine Plague of the later second century (an outbreak of smallpox), followed by the Cyprian Plague of the mid-third century (not yet identified), was the result not of Malthusian population pressure, but of the arrival of deadly diseases, brought about by increased connectivity and urbanization. Harper estimates very high mortality levels (10–20%), though not as high as those of the Black Death of the fourteenth century, when about one-third of the European population fell victim to the real plague (Yersinia pestis). If Harper is right and the inhabitants of the Roman Empire succumbed by the millions to the second- and third-century pandemics, population levels would have fallen well below the supposed ceiling, leaving more than sufficient scope for the postulated decline in agricultural production as a result of the climatic deterioration occurring at the same time. Moreover, in a landscape as diverse as that of the Mediterranean, people would only have had to move a short distance to find habitats less affected or unaffected by the trend in temperature and precipitation.Footnote 19

Conclusions

The limitations of the currently available data do not allow us to give clear and quantitative answers to all questions regarding the impact of climate change on agricultural production in the Roman Mediterranean world. However, a few conclusions can nevertheless be drawn.

First, climate change in the Roman era consisted of changes in climatic patterns within a relatively limited band of climatic conditions. Variability characterized all climatic periods; there were no uniformly dry, wet, cold or warm eras. Extreme weather phenomena occurred in all periods. The band-width of such phenomena and their frequency shifted over time, but this is a matter of grey-scales, not black and white.

Second, the long-term shifts in temperature between climatic periods—whether at the onset of the RCO or the advent of the LALIA—were not of a nature to significantly change agricultural production. In general, temperature shifts remained well within the tolerance range of grains and other staple crops. McCormick, Harper and others postulate disastrous effects of temperature shifts on the food supply, but the only specific argument concerning the impact of climate on agricultural production that Harper gives consists of the hypothesized loss of arable land at higher altitudes. The altitudinal limits of arable farming may have risen or fallen a little, but not to such an extent as to dramatically increase or decrease potential agricultural production, the more so as most land at higher altitudes was not exploited in intensive forms of agriculture.

Third, precipitation was potentially a far greater factor in determining local agricultural production, but changes in precipitation are not only more difficult to deduce from natural proxies; shifts in precipitation patterns also varied much more widely, even at a regional level. Much depends on the seasonal distribution of precipitation, which also means that the impact on agricultural production may have differed regionally, with some regions benefitting and others not. Moreover, in view of the fragmentation of the landscape in terms of climate and the physical conditions of agriculture, it is impossible to draw general conclusions from the same climate trend. Wet periods may have increased the frequency of floods in regions susceptible to flooding, while more unstable weather may have damaged standing crops more often than before. Conversely, years of inadequate rainfall may have reduced yields more frequently in regions that were dry to begin with. Flooding and drought will have occurred largely in regions that were vulnerable to such phenomena in any case. While we cannot draw a clear picture of the impact of general shifts in precipitation, we can at least conclude that, in the current state of research, there is no clear evidence of a catastrophic impact of precipitation shifts on Mediterranean rain-fed agriculture.

Fourth, ancient farmers were well aware of the vagaries of the weather and the dangers they posed to their survival or profits, and they had therefore devised strategies to limit the consequences. In other words, farmers in places that were most susceptible to specific climatic conditions also had the most experience with such weather extremes. Farmers responded to short-term variation in the weather with such strategies as the fragmentation of farmland and diversity in cropping strategies, which reduced but did not eliminate risks. Responses to long-term changes in the climate were probably less deliberate, being the result of accumulated year-by-year experience. As circumstances changed, so did the strategies employed by farmers. Again, this did not eliminate harvest failure, but it reduced its impact. Expectations—and thus the meaning of harvest failure—changed as a result of climate change.

In sum, climate change undoubtedly had an impact on agricultural production and practices, but the impact was far from uniform across the diverse landscapes. It is quite possible that changes in the averages and variability of temperature and precipitation and in the pattern of extreme weather raised the frequency of bad harvests of cereals and other crops in those regions that were susceptible to these changes. However, in most regions, the severity of change remained well within the biological tolerance range of the crops. Whether it all remained within the tolerance range of Roman society depended mostly on society itself. Agricultural systems were never fixed, but always subject to environmental and societal circumstances. To the extent that more frequent weather extremes affected agricultural production in the long term, climate change may have triggered a kind of “Boserupian” response that compensated for the fall in crop yields and land productivity. Whether this response occurred was a question of societal factors, not of climate itself.