9.1 Synopsis – The Story So Far

This book set out to offer practical advice and guidelines on the design and construction of subsurface reservoir models. The overall objective has been to promote the skills and procedures for the design of fit-for-purpose models that allow the reservoir modeller to make useful estimates of resources and forecasts of fluid behaviour, both within reasonable bounds of uncertainty.

We organised the discussion under eight thematic headings:

  1. Model Purpose

  2. The Rock Model

  3. The Property Model

  4. Upscaling Flow Properties

  5. Handling Uncertainty

  6. Generic Reservoir Types

  7. Models for Storage

  8. Workflows

In order to achieve good model design within each of these themes, we need access to a selection of data manipulation and mathematical modelling tools, including those for seismic analysis, petrophysical analysis, geological modelling, statistical estimation, fluid flow simulation and analysis of outcomes. This is a rather long list of tools and functions, typically handled by several different computer software packages, often linked by spreadsheets. The quest for a fully integrated subsurface data package will no doubt continue, and we welcome those efforts, but it is likely to be compromised by the development of new niche tools, and we live with this. The primacy of the geological concept in deciding what information to capture in a reservoir model does, however, give us a framework for addressing the subsurface data integration challenge. The first step in reservoir modelling is always to think rather than click.

We have tried to hold two important themes in balance:

  1. The rock: the conceptual geological model. The first concept, “it’s a river-dominated delta system”, could be wrong, but that is better than having no concept formulated at all. Better still, we should have several reservoir concepts that can be tested and refined during the modelling process, e.g. “we think it’s a fluvially dominated system, but there are indications of tidal influence, so we need to test tidal versus fluvial deltaic models.”

  2. The fluid: the physics of the system. Fluid flows have their own natural averaging processes. Not all geological detail matters, and the geological heterogeneities that do matter depend on the fluid flow system. Low-viscosity fluids are more indifferent to rock heterogeneity than high-viscosity fluids, and all multiphase fluid systems are controlled by the balance of capillary, viscous and gravity forces acting on the fluid displacement process (a simple screening of these force ratios is sketched below).
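As a minimal sketch of this force balance, the following snippet estimates two commonly used dimensionless ratios for a waterflood-type displacement. Definitions of these groups vary between authors, and all input values are illustrative assumptions rather than data from any particular field.

```python
# Sketch: order-of-magnitude force-balance screening for a displacement process.
# Definitions of these dimensionless groups vary between authors; the ones used
# here are illustrative, and all input values are assumed for the example.

def capillary_number(mu_displacing, velocity, ift):
    """Viscous/capillary force ratio, Nc = mu*v / sigma (dimensionless)."""
    return mu_displacing * velocity / ift

def gravity_number(delta_rho, g, perm, mu_displacing, velocity):
    """Gravity/viscous force ratio, Ng = delta_rho*g*k / (mu*v) (dimensionless)."""
    return delta_rho * g * perm / (mu_displacing * velocity)

# Example values (SI units) for a waterflood at typical reservoir rates
mu_w = 0.5e-3        # water viscosity, Pa.s
v = 1e-6             # Darcy velocity, m/s (~0.1 m/day)
sigma = 25e-3        # oil-water interfacial tension, N/m
d_rho = 300.0        # oil-water density difference, kg/m3
g = 9.81             # gravitational acceleration, m/s2
k = 500e-15          # permeability, m2 (~500 mD)

print(f"Nc = {capillary_number(mu_w, v, sigma):.2e}")   # << 1: capillary forces dominate at small scale
print(f"Ng = {gravity_number(d_rho, g, k, mu_w, v):.2e}")
```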

Because rock-fluid interactions are multi-scale – from the microscopic pore-scale (μm) to the macroscopic rock architecture scale (km) – we need a framework for handling data as a function of scale. The concept of the Representative Elementary Volume (REV) has been identified as absolutely fundamental to understanding and using reservoir property data. If your measurements are not representative and your flow properties are estimated at the wrong length scale, the modelling effort is undermined and the outcomes are generally poor. The multi-scale REV concept gives us a framework for determining which measurements and averages are useful and which model scales and grid sizes allow us to make reasonable forecasts given that data. This is not a trivial task, but it does give us a basis for deciding how confident we are in our analysis of flow properties.
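The REV idea can be explored numerically by averaging a measured property over progressively larger sample volumes and looking for the scale at which the average stabilises. The sketch below does this for a synthetic 1D porosity profile; the data, length scales and window sizes are invented for illustration only.

```python
# Sketch: exploring the Representative Elementary Volume (REV) idea on a
# synthetic 1D porosity profile. The averaging window is grown until the
# windowed mean stabilises; that plateau indicates a candidate REV scale.
import numpy as np

rng = np.random.default_rng(7)
dx = 0.01                                  # sample spacing, m
n = 5000
x = np.arange(n) * dx
# lamination-scale noise superimposed on a bed-scale (5 m) trend
porosity = 0.22 + 0.04 * np.sin(2 * np.pi * x / 5.0) + rng.normal(0, 0.03, n)

window_lengths_m = [0.05, 0.1, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0]
for L in window_lengths_m:
    w = max(1, int(L / dx))
    # spread of window means: large spread = below REV, small spread = at/above REV
    means = np.convolve(porosity, np.ones(w) / w, mode="valid")
    print(f"window {L:5.2f} m: mean {means.mean():.3f}, spread (std) {means.std():.4f}")
```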

Subsurface data analysis leads us quickly into the domain of ‘lies and statistics’. Geostatistical tools are immensely useful, but also very prone to misuse. A central challenge in reservoir modelling is that the data we have is usually imperfect and statistically insufficient. When making estimates based on incomplete data we cannot rely on statistics alone – we must employ intuition and hypothesis. To put this simply in the context of reservoir data, if we wish to know the porosity and permeability of a given reservoir unit, the answer is seldom found in a simple average. The average can be wrong for several reasons:

  • Some data points could be missing – incomplete sampling.

  • The model elements could be wrongly identified – the porosity data from two distinct rock types do not necessarily give us the average of reservoir element 1 or 2.

  • We may be using the wrong averaging method – effective permeability is especially sensitive to the choice of averaging (the usefulness of the arithmetic, harmonic and geometric averages is controlled by the rock architecture).

  • We may be estimating the average at an inappropriate scale – estimates close to the scale of the REV are always more robust.

  • The average may be the wrong choice – many reservoir issues are about the inherent variability, not the average.

Because of these issues, we need to know which average to use and when. Averaging is essentially a form of upscaling; we want to know which large-scale value represents the effects of small-scale variations evident within the reservoir. It is useful to recall the definition of the upscaled block permeability, kb (Sect. 3.2):

kb is the permeability of a homogeneous block, which under the same pressure boundary conditions will give the same average flows as the heterogeneous region the block is representing.

If the upscaled permeability is closely approximated by the arithmetic average of the measured (core plug) permeability values, then that average is useful. If not, then other techniques need to be applied, such as numerical estimation methods or a power average.
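As a simple illustration of how strongly the choice of average matters, the sketch below computes the arithmetic, geometric, harmonic and power averages for a small set of invented core-plug permeabilities. Which of these, if any, approximates the block permeability kb depends on the rock architecture and the flow direction, and in many cases full numerical upscaling is still required.

```python
# Sketch: candidate averages for upscaling core-plug permeabilities (values in mD).
# The plug values below are invented for illustration.
import numpy as np

k = np.array([850.0, 620.0, 410.0, 95.0, 12.0, 3.0])   # core-plug permeabilities, mD

arithmetic = k.mean()                       # layer-parallel flow in perfect layers
harmonic = len(k) / np.sum(1.0 / k)         # layer-normal (series) flow
geometric = np.exp(np.mean(np.log(k)))      # random (uncorrelated) heterogeneity

def power_average(k, omega):
    """Power average; omega = 1 arithmetic, omega -> 0 geometric, omega = -1 harmonic."""
    return (np.mean(k ** omega)) ** (1.0 / omega)

print(f"arithmetic        {arithmetic:7.1f} mD")
print(f"geometric         {geometric:7.1f} mD")
print(f"harmonic          {harmonic:7.1f} mD")
print(f"power (omega=0.5) {power_average(k, 0.5):7.1f} mD")
```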

Assuming, then, that we have the first four elements of reservoir design in place – a defined model purpose, a rock model based on explicit geological concepts, a property model estimated at an REV, and then upscaled appropriately – we have the issue of uncertainty, as no amount of careful reservoir model design will deliver the ‘right’ answer. The model purpose might be redefined, the geological concept could be false, the property model may be controlled by an undetected flow unit, and upscaling may yield multiple outcomes.

In order to handle reservoir uncertainty we have advocated the use of scenario-based thinking, whether this is expressed in multi-deterministic concepts or a statistical ensemble. It may at first appear dissatisfying to argue that there may be several possible outcomes after a concerted period of reservoir data analysis, modelling and simulation. The asset manager or financial investor usually wants only one answer, and becomes highly irritated by the ‘two-handed’ geologist (“on the one hand the answer may be this, but on the other hand…”). Some education about reservoir forecasting is needed at all levels. It is never useful to say that the sky tomorrow will be a shade of grey on average. It is, however, accurate to say that the skies tomorrow may be blue, white or grey – depending on the weather patterns and the time of day – and it is useful to present more explicit scenarios with probabilities, such as that there is a 60% chance of blue sky tomorrow and a 10% chance of cloud (assuming this is based on a sound analysis of weather patterns).

In the same way, scenarios describing alternative plausible reservoir realities do provide useful forecasts. For example, who wouldn’t invest in a reservoir development or energy storage plan where nine out of ten fully integrated and upscaled model scenarios gave a positive outcome, but where one negative scenario helped identify potential downsides that would need to be mitigated in the proposed plan?

The road to happiness is therefore good reservoir model design, conceptually-based and appropriately scaled. The outcome, or forecast, should encompass multiple scenarios, using geostatistical tools guided by deterministic concepts. Reservoir and simulation models integrate knowledge, allow us to forecast futures, quantify value and hence help make significant commercial and environmental decisions.

This is where we are now. Based on current needs and trends, what are the foreseeable developments that may lie just around the corner? We capture these under four themes:

  • Use of analogues and data

  • Restoring lost heterogeneity

  • New workflows

  • Modelling for understanding

9.2 Use of Analogues and Data

Reservoir systems are complex, so the ambition of reservoir modellers to understand the effects of ancient subsurface rock strata on fluid flow processes several km beneath the surface is a bold venture. We may recall the underlying principles of geology to guide us in that process. One of the founders of geology, Sir Archibald Geikie (1905), established the principle:

The present is the key to the past

This concept is now so embedded in geology that we can easily forget it. We use our understanding of modern depositional processes to interpret ancient systems. Modern aeolian processes in the Sahara desert (Fig. 9.1) can tell us a lot about how to correctly describe a North Sea reservoir built from Permian aeolian sands. The many efforts to understand outcrop analogues for subsurface reservoir systems (Fielding and Crane 1987; Miall 1988; Brandsæter et al. 2005; Howell et al. 2008; Cabello et al. 2011; Keogh et al. 2014; Puig et al. 2019) are devoted to this goal and will continue to bring important new insights into the reservoir description of specific types of reservoir.

Fig. 9.1 Modern dune systems in the Sahara, central Algeria. (Photo B. Paasch, reproduced with permission)

A wide range of imaging techniques are now being used in outcrop studies (Pringle et al. 2006; Rittersbacher et al. 2014; Nyberg et al. 2015; Cawood et al. 2017) in order to obtain more quantitative and multi-scale information from outcrop analogues of reservoir systems. These include digital aerial photogrammetry, digital terrain models, ground-penetrating radar, satellite imaging, differential GPS location data and ground-based laser scanning (LIDAR), all of which have expanded hugely in use since the arrival of drones. While these new high-resolution outcrop datasets provide valuable information at faster rates of acquisition, they still require sound geological interpretation to make sense of the data and to apply them to reservoir studies.

Despite the growing body of knowledge, reservoirs will always present us with surprises. For this reason, and because of the inherent challenge of estimating inter-well reservoir properties, reservoir forecasting will always carry large uncertainties. In the process of making forecasts about the subsurface we therefore employ a variation of Geikie’s dictum, because we use our knowledge of the geological record to make these forecasts, such that:

The past is the key to the future

This principle has grown in use over recent decades, and was formally elaborated as a branch of geological research by Doe (1983). Geological forecasting has received most attention in the study of climate change (Sellwood and Valdes 2006), but is also applied in the fields of earthquake hazard forecasting and subsurface fluid flow modelling.

In reservoir modelling studies we use the past is the key to the future principle in several ways:

  1. We use our knowledge of the rock system to make credible 3D models of petrophysical properties, giving us some confidence in our flow predictions. This principle is axiomatic to the proposed basis for reservoir model design – that there must be some level of belief in the reservoir concepts embodied in the model for there to be any value in the forecasts made using that model.

  2. We use our experience from other similar reservoirs to gain confidence about new reservoirs. This includes the ‘petroleum play’ concept and the use of subsurface reservoir analogues; we have much more confidence in reservoir forecasting in a mature petroleum basin than we do in a frontier province. This is now extending to the field of CO2 disposal, where we build on experiences of early schemes such as those at Sleipner and Snøhvit (Chap. 7) to plan CCS schemes globally (Ringrose 2020).

  3. We use our growing body of knowledge on rock-fluid interactions to make better forecasts of fluid flow. One important example of this is the role of wetting behaviour in multiphase flow. There was a time (1950s–1980s) when most petroleum engineers assumed water-wet behaviour for oil mobility functions, i.e. the oil had negligible chemical interaction with the rock. The growing appreciation that most rock systems are mixed wet (that is, they contain both water-wet and oil-wet pores controlled by the surface chemistry of silicate, carbonate and clay minerals) led to improved two- and three-phase relative permeability functions and to the use of different chemicals and altered water salinity to improve oil mobility (a simple parametric sketch of this follows Fig. 9.2 below). The tools available for understanding rock-fluid interactions are constantly improving. New technology is being applied at the macroscopic scale, such as the use of advanced inversion of seismic data and electromagnetic data (Constable and Srnka 2007), and at the nanoscopic to microscopic scale, such as the use of scanning electron microscopes (SEM) to study pore-surface mineralogy (Fig. 9.2).

  4. We match dynamic model forecasts to historical production data. The amount of effort we invest in this process and the value we gain can be questioned, as discussed in Sect. 8.4 and reflected upon in Sect. 9.5 below.

Fig. 9.2 SEM petrography and spectroscopic analysis used to identify pore minerals and their controls on porosity and permeability. A fracture filled with carbonate cements (pink) and a sandstone pore space with grain coatings of chlorite (green) can be identified using the Energy-Dispersive X-ray Spectroscopy (EDS) image, shown on the inset, which is 500 μm across. (Photo T. Boassen/Equinor © Equinor ASA, reproduced with permission)
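As a minimal sketch of how wettability assumptions feed into flow modelling, the snippet below generates Corey-type relative permeability curves with endpoints and exponents shifted to mimic water-wet versus mixed-wet behaviour. The functional form is a standard simplification and every parameter value here is an illustrative assumption, not measured data.

```python
# Sketch: Corey-type relative permeability curves with parameters shifted to
# mimic water-wet versus mixed-wet behaviour. All endpoints and exponents are
# illustrative assumptions.
import numpy as np

def corey_relperm(sw, swc, sorw, krw_max, kro_max, nw, no):
    """Two-phase Corey functions of water saturation sw."""
    swn = np.clip((sw - swc) / (1.0 - swc - sorw), 0.0, 1.0)   # normalised saturation
    krw = krw_max * swn ** nw
    kro = kro_max * (1.0 - swn) ** no
    return krw, kro

sw = np.linspace(0.1, 0.9, 9)

# Water-wet: low krw endpoint, high residual oil
krw_ww, kro_ww = corey_relperm(sw, swc=0.15, sorw=0.30, krw_max=0.3, kro_max=0.9, nw=4.0, no=2.0)
# Mixed-wet: higher krw endpoint, lower residual oil, crossover shifted
krw_mw, kro_mw = corey_relperm(sw, swc=0.15, sorw=0.15, krw_max=0.6, kro_max=0.9, nw=2.5, no=3.0)

for s, a, b, c, d in zip(sw, krw_ww, kro_ww, krw_mw, kro_mw):
    print(f"Sw={s:.2f}  water-wet krw/kro={a:.3f}/{b:.3f}   mixed-wet krw/kro={c:.3f}/{d:.3f}")
```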

9.3 Restoring Lost Heterogeneity

We suggest the near future will see renewed interest in the importance of modelling fine-scale detail in the more heterogeneous components of reservoirs. Much effort was expended in researching this field in the early years of modelling and simulation, and this was essential to make the relatively simple models of that period predict reasonably well. With increasing computing power and software sophistication this focus has become less common among practitioners, even though the literature contains significant knowledge of the issues and researchers continue to improve understanding of fluid flow in the more challenging reservoir types. The reason appears to be a perception among practitioners that current model size and complexity should be enough to capture all the necessary detail in a single big model. As pointed out in Chap. 8 this is often not the case, and multi-scale questions may become increasingly important because:

  • Most oil and gas fields are mature and remaining production is coming from the poorer quality, more heterolithic zones;

  • More mature fields are being produced by EOR techniques, which are more sensitive to reservoir heterogeneities than primary production techniques;

  • Many of the new fields coming on stream are in more heterogeneous, tight or unconventional reservoirs;

  • CO2 storage is gaining importance, and storage capacity is strongly influenced by reservoir heterogeneity and capillary interactions; indeed, it may rely on them.

This is manifestly the case in highly heterolithic reservoirs, but even in simpler reservoirs a 10% heterolithic content is likely to have more than a 10% impact on a production forecast. We need to recover this ‘lost heterogeneity’.
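A quick way to see why a small heterolithic fraction can have a disproportionate effect is to consider vertical (series) flow, where the effective permeability follows a thickness-weighted harmonic average. The sketch below uses assumed permeability values purely for illustration.

```python
# Sketch: why a 10% heterolithic fraction can have far more than a 10% effect.
# Vertical (series) effective permeability via thickness-weighted harmonic
# averaging; the permeability values are assumed for illustration.
def kv_effective(frac_het, k_sand, k_het):
    """Harmonic (series) average for flow normal to layering."""
    return 1.0 / ((1.0 - frac_het) / k_sand + frac_het / k_het)

k_sand, k_het = 1000.0, 1.0      # mD
for f in (0.0, 0.05, 0.10, 0.20):
    kv = kv_effective(f, k_sand, k_het)
    print(f"heterolithic fraction {f:4.0%}: kv = {kv:6.1f} mD "
          f"({kv / k_sand:5.1%} of clean-sand value)")
```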

The extent of this can be assessed by examining analogue outcrops with an appropriate focus – here we may return to the well-studied outcrop near Annot Town referenced in Sect. 6.5.4.

A current, high-resolution simulation model would represent the Annot outcrop (Fig. 9.3a) with a cell architecture similar to that in Fig. 9.3b. This would be a ‘big model’ on a field scale, and although cell resolution in the thick sand units is perhaps too high for most purposes, the resolution is insufficient to capture necessary detail in the heterolithic interval. This can be tackled by varying the cell size (Fig. 9.3c), but even if the significant cell-size variation can be managed in a simulator, this still fails to capture the detail in the heterolithic interval, which would significantly baffle fluid flow in the whole system. A better solution is to adopt the multi-scale techniques described in Chap. 4 and the workflows of Chap. 8 and pursue a design such as that in Fig. 9.3d.

Fig. 9.3 Capturing scales of heterogeneity at the analogue outcrop near Annot Town shown in Figs. 6.29–6.31: (a) the exposed outcrop at the Scafferels, approximately 100 m high; (b) a simulation grid overlay with typical grid cell sizes – the grid is unnecessarily fine for the thick layers and too coarse for the heterolithic interval; (c) an improved cell size distribution, although still insufficient to capture the cm-scale detail in the heterolithic interval; (d) a multi-scale model alternative with small-scale effective property models for the heterolithic interval, either grid- or surface-based (drone images courtesy of John Howell, University of Aberdeen)

The knowledge and techniques required to capture lost heterogeneity are available, but require more widespread adoption by practitioners; many current and future model purposes will require them.

9.4 New Workflows

In addition to the better use of analogues and fuller application of techniques for capturing heterogeneity, new modelling techniques are emerging which may find general application and support both these initiatives.

A general theme is a move away from grid-centric modelling (Bentley and Ringrose 2017) (Fig. 9.4). Conventional reservoir simulators use a finite-difference gridding scheme where only small deviations from an orthogonal grid can be accepted. The modelling approach is governed by the demands of the flow simulator such as computational limits on the number of grid cells, and convergence of the numerical flow solutions. This generally results in distortion and over-simplification of the reservoir architecture that the flow model is attempting to represent. Handling of structural features such as fracture systems is particularly challenging (Sect. 6.7). Moreover, the construction and update of a fixed corner-point grid is typically time-consuming and tends to be the ‘efficiency bottleneck’ in most modelling projects (Fig. 9.4a).

Fig. 9.4 Alternative workflows: (a) grid-centric; (b) workflows based around a disposable grid

Current research seeks more efficient alternatives and one vision incorporates surface-based modelling, with gridding only called upon for the forecasting moment (Fig. 9.4b). This opens a path for new tools which offer the opportunity for more nimble model workflows, described below.

9.4.1 Surface-Based Models

Commonly, methods for geological reservoir modelling are either object-based (beloved of sedimentologists), pixel-based (such as indicator simulation, focused on geostatistical estimation exploiting two-point spatial statistics) or, more recently, texture-based, using multi-point geostatistics, the latter requiring the use of training images (Sect. 2.7; see Mariethoz and Caers 2014 for further discussion). All these techniques require the allocation of rock properties to a pre-defined 3D grid which remains fixed for the duration of the modelling project (Fig. 9.4a).
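To make the two-point idea concrete, the sketch below computes an experimental indicator semivariogram from a synthetic 1D facies log, the kind of statistic that underpins pixel-based indicator simulation. The facies log and lag range are invented for illustration.

```python
# Sketch: the two-point statistic behind pixel-based (indicator) methods - an
# experimental indicator semivariogram for a synthetic 1D facies log.
import numpy as np

facies = np.array([1,1,1,0,0,1,1,1,1,0,0,0,1,1,0,1,1,1,0,0,
                   1,1,1,1,0,0,1,1,0,0,0,1,1,1,1,0,1,1,0,0])  # 1 = sand, 0 = shale

def indicator_variogram(ind, max_lag):
    """Experimental semivariogram gamma(h) = 0.5 * mean[(i(x+h) - i(x))^2]."""
    gammas = []
    for h in range(1, max_lag + 1):
        diffs = ind[h:] - ind[:-h]
        gammas.append(0.5 * np.mean(diffs.astype(float) ** 2))
    return gammas

for h, g in enumerate(indicator_variogram(facies, 8), start=1):
    print(f"lag {h}: gamma = {g:.3f}")
```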

A different approach is to consider depositional processes in an attempt to re-create geological history and build a rock architecture using the sequential build-up of 2D surfaces and 2D structural frameworks. Some tools are already available, such as SBED for small-scale clastic depositional architectures (Wen et al. 1998; Nordahl et al. 2005; Fig. 9.5) and 3DMove for creating or recreating structural architectures (Zanchi et al. 2009).

Fig. 9.5 Surface-based representation of reservoir architecture using SBED: tidal bedding model (left) converted to a permeability cube (right). (Modified from Ringrose et al. 2005 © Geological Society of London [2005])

The chief merit of these process-based methods is geological realism, but the level of detail required makes significant demands on flexible, hierarchical gridding algorithms. An integrated numerical description of the subsurface is captured by the GeoChron model (Mallet 2014) and represents an important step towards building these algorithms.

A step towards making this process nimble is captured by the Rapid Reservoir Modelling initiative, with sketch-based techniques drawn from advances in 3D digital graphics (Jackson et al. 2015; Zhang et al. 2020). If you can sketch it, you can indeed model it. An example of this application is shown in Fig. 9.6, in which a stacked channel architecture is sketched directly into a drawing package and automatically rendered into 3D using a hierarchy of 2D surfaces. The flow performance of the sketch is then quantified by applying an orthogonal grid on the surface-based architecture.

Fig. 9.6 Surface-based representation of reservoir architecture using a sketch-based interface, and calculation of flow properties for a given well pattern and production mechanism: ‘Rapid Reservoir Modelling’. (Image courtesy the Rapid Reservoir Modelling consortium, www.rapidreservoir.org, Heriot-Watt University, Imperial College London, University of Calgary)
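The following sketch illustrates the surface-based principle in its simplest form: channel geometry is carried by surfaces, and a grid is introduced only at the end to sample facies for a flow calculation. The geometry, dimensions and 2D cross-sectional simplification are all invented for illustration and are not drawn from the Rapid Reservoir Modelling tools themselves.

```python
# Sketch: a surface-based representation in which geometry is carried by
# surfaces and a grid appears only at the end, to sample properties for a
# flow calculation. The channel geometry below is invented for illustration.
import numpy as np

nx, nz = 200, 60
dx, dz = 5.0, 0.5                       # cell sizes, m
x = (np.arange(nx) + 0.5) * dx
z = (np.arange(nz) + 0.5) * dz          # depth below a datum, m

# Two stacked channel bodies, each defined by a basal erosion surface (2D in
# full 3D, reduced to 1D curves in this cross-section) hung from a fixed top.
def channel_base(x, centre, half_width, max_depth, top):
    rel = np.clip(1.0 - ((x - centre) / half_width) ** 2, 0.0, None)
    return top + max_depth * rel        # deeper base where the channel is thickest

base1 = channel_base(x, centre=400.0, half_width=250.0, max_depth=8.0, top=10.0)
base2 = channel_base(x, centre=600.0, half_width=200.0, max_depth=6.0, top=16.0)

# Sample facies onto the grid only now: 1 = channel sand, 0 = background
facies = np.zeros((nx, nz), dtype=int)
for i in range(nx):
    for kz, depth in enumerate(z):
        in_ch1 = 10.0 <= depth <= base1[i]
        in_ch2 = 16.0 <= depth <= base2[i]
        facies[i, kz] = 1 if (in_ch1 or in_ch2) else 0

print(f"net-to-gross sampled on the grid: {facies.mean():.2f}")
```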

9.4.2 Disposable Grids

The example in Fig. 9.6 is an illustration of the schematic in Fig. 9.4b in which the 3D grid appears only for the purpose of a final calculation and may then be effectively disposed of (or ‘archived’); the ‘evergreen’ or living part of the workflow is the surface-based representation which underlies it.

The concept of the ‘disposable grid’ can be taken one step further by the application of a flexible mesh (Fig. 9.7). This is a complete move away from the relatively regular grids required by finite-difference simulators to finite-volume flow simulation methods which allow more grid flexibility. These have been developed and applied to multi-scale reservoir systems including complex structural architectures (e.g. Jenny et al. 2006; Geiger et al. 2004; Coumou et al. 2008; Jacquemyn et al. 2019). The methodology allows detailed geological features such as fracture zones to be explicitly included in the flow simulation (Matthäi et al. 2007). This can be taken one step further by making the mesh adaptive (Jackson et al. 2013, 2014), in which case the grid moves with every time-step to focus computational effort in the parts of the volume where greatest changes are occurring, such as around a floodfront (Fig. 9.7b). By these means the step away from 3D grid-centric modelling is finally made.

Fig. 9.7 Meshes: (a) unstructured grids which allow highly irregular geometries to be modelled and simulated; (b) an adaptive mesh, in which the mesh moves with each time-step to focus computational effort in portions of the volume where change is occurring, such as a flood front – a truly ‘disposable’ grid. The approaches require an underlying subsurface description of the reservoir, which may be 2D surface- or 3D grid-based. (Images courtesy of the Novel Reservoir Modelling and Simulation (NORMS) group, Imperial College London)
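The adaptive-mesh idea can be reduced to a very simple criterion: refine wherever the solution is changing rapidly, and re-flag at every time-step as the front moves. The sketch below applies such a criterion to a synthetic 1D saturation profile; the profile, front width and threshold are assumptions for illustration.

```python
# Sketch: the adaptive-mesh idea reduced to its core - flag cells for
# refinement wherever the saturation gradient is steep (the flood front),
# and re-flag at every time-step as the front moves. Synthetic 1D example.
import numpy as np

x = np.linspace(0.0, 1000.0, 101)                 # m
front_position = 420.0                            # would move with time in a real run
sw = 0.2 + 0.6 / (1.0 + np.exp((x - front_position) / 15.0))   # smoothed shock

grad = np.abs(np.gradient(sw, x))                 # |dSw/dx|, 1/m
refine = grad > 0.5 * grad.max()                  # refinement criterion (assumed)

print(f"cells flagged for refinement: {refine.sum()} of {len(x)}, "
      f"around x = {x[refine].min():.0f}-{x[refine].max():.0f} m")
```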

The meshes can be combined with the surface-based techniques described above, as the mesh requires some underlying description of the subsurface. In this approach the ‘fixed’ aspect is the underlying surface-based representation and the conceptual understanding of the reservoir, both of which evolve steadily through a field life cycle – the ‘resource model’ of Chap. 8. The grid itself becomes a variable, to be built and discarded quickly once a decision has passed – the ‘decision models’.

9.5 Stepping Beyond the Solution – ‘Modelling for Understanding’

Sometimes modelling does not yield a simple solution, and this final section reflects on the thought that the best value activity may simply be to model for improved understanding of a subsurface process.

Figure 9.8 shows production forecasts based on a mature field case in which a decision on infill drilling was being made using sector models. The work was of a high technical quality, with uncertainties carefully explored and quantified using an ensemble-based technique.

Fig. 9.8 Multiple production forecasts for a static/dynamic model ensemble supporting a mature field infill decision; when the work illustrates that the outcome is simply not known

Uncertainties were significant, and the work illustrated that plausible outcomes ranged from a very positive cumulative production to effectively zero production and commercial failure, with all possibilities in between. Other than capturing a slight bias to the downside, the technical work essentially illustrated that the uncertainty was high and future outcomes were unclear. As this was apparent from a cursory inspection of the initial uncertainties, it begged the question “what was the point of all that modelling work?”
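The kind of ensemble summary behind Fig. 9.8 can be sketched very simply: propagate a few uncertain inputs through a forecast model and report the percentile spread. In the toy example below the forecast model, the input distributions and the units are all invented; the point is only that the spread itself can be the main result.

```python
# Sketch: summarising an ensemble of cumulative-production forecasts, where the
# message may simply be that the spread is large. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(42)
n_realisations = 200

# Toy forecast model: cumulative production depends on uncertain connected
# volume and recovery factor (distributions assumed for illustration).
connected_volume = rng.lognormal(mean=np.log(8.0), sigma=0.6, size=n_realisations)  # MMbbl in place
recovery_factor = rng.uniform(0.05, 0.35, size=n_realisations)
cum_production = connected_volume * recovery_factor                                  # MMbbl

p10, p50, p90 = np.percentile(cum_production, [90, 50, 10])   # P10 = high-case convention
print(f"P90/P50/P10 cumulative production: {p90:.1f} / {p50:.1f} / {p10:.1f} MMbbl")
print(f"spread (P10/P90 ratio): {p10 / p90:.1f}")
```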

This is an extreme case, but raises the question of why we model at all, if the honest outcome of our efforts is only to illustrate that we don’t know the answer. This is an uncomfortable conclusion that we are generally loath to acknowledge out of technical-professional pride; it is a conclusion we fear will not go down well with superiors and is one of the causative factors behind the paradigm of ‘modelling for comfort’ introduced back in Chap. 1. It may, however, be a true reflection of our situation. So what should we do?

An improvement on modelling for comfort is to abandon the notion that we are trying to finesse a solution every time we study the subsurface. Sometimes the uncertainties are too great and modelling only serves to illustrate this rather than to resolve those uncertainties. In this case we can benefit by simply stepping away from the idea that there is a subsurface problem, and that we have engaged in a modelling and simulation study to find ‘the solution’ (Fig. 9.9a).

Fig. 9.9 Alternative workflows: (a) the pursuit of an optimal solution; (b) the pursuit of understanding to support a decision

The alternative is to replace the formulation of a ‘problem’ with that of a ‘question’, and the notion of a ‘solution’ with a ‘decision’ (Fig. 9.9b). We don’t actually need to conceptualise subsurface issues as problems and, as Fig. 9.8 illustrates, we cannot always come up with clear solutions. We will, however, always have questions, and in a commercial world we will always have to make decisions; this applies as much to the storage projects of the energy transition as it has done to historical resource extraction projects.

Framed in this way, we can re-evaluate many of our modelling and simulation efforts and overcome some of the frustrations of modelling for comfort and the inefficiencies of the detailed full-field model default. Workflows such as the resource/decision models of Sect. 8.2 and the truth models of Sect. 8.4 fit into this framework neatly:

  • The ‘resource model’ is simply a database and is not a decision-making tool in itself.

  • Truth models are there to generate understanding, as are the recent advances in machine learning which offer tremendous insights into our data and particularly into the management of production data – but they do not make decisions for us.

  • Artificial intelligence (AI) is a candidate for decision-making but needs a place within a framework of statistically insufficient, highly subjective, concept-based modelling; a potential role for AI lies in the construction of truth models based on a wealth of knowledge from outcrop analogue data (Fig. 8.12b).

All of the above requires thinking and modelling with the purpose of generating improved understanding, based on which (on the next cantilever of the decision-making process; Sect. 8.3) we can go forward and quantify uncertainties associated with the decision at hand. As discussed in Chap. 5, this can be explored using statistical ensembles (with or without machine learning), deterministic scenarios, or some combination of the two.

If the outcome of this work is likely to give an answer similar to Fig. 9.8, then we may realise that our modelling work so far has been enough: we have all the understanding we need or can generate, and we are in a position to stop modelling and make the next decision (which might be to collect new data). If, however, the modelling is leading us towards an improved or more precise understanding, then we are perhaps evolving towards a solution.

Either way, the future of reservoir modelling must be more about improved understanding and less about finding the ‘right answer.’ This evidence-based learning approach to modelling the subsurface is likely to be the dominant paradigm for the near future – especially since careful use of resources is increasingly vital to our society, whether this be for the continued use of remaining hydrocarbon resources or for the growing efforts in CO2 management, emissions reductions and subsurface energy storage.