
1 Quantifying the Metocean Environment

Metocean is a contraction of meteorology and oceanography and is commonly used in the offshore oil industry to encompass almost all topics involving the quantitative description of the ocean and atmosphere needed to design and operate man-made structures, facilities, and vessels in the ocean or on the coast. When engineers design a major facility or vessel to operate and survive in the sea, they must consider the loads and other constraints that may affect the structure. If those loads and constraints are underestimated, then damage can result and lives may be lost. Conversely, if loads and constraints are overestimated, then the costs will be overestimated, perhaps to the point that the project becomes uneconomic and is never built.

The metocean environment controls so many aspects of facility design and operation that errors in quantifying metocean conditions can cascade through the design and operational decisions. For instance, overestimating a design wave height for a deepwater floating production platform could result in adding too many mooring lines. Since these additional lines would add tons of static load, a larger facility would be needed to provide the necessary buoyancy, thus generating additional capital cost well beyond the cost of the excess mooring lines. In short, the accurate quantification of metocean criteria can have far-reaching effects on the safety and profitability of offshore facilities. For this reason, metocean criteria are usually specified and described in a separate chapter or stand-alone document in a project’s design documents. In 2005, the American Petroleum Institute (API) recognized the influence of metocean criteria and began publishing a stand-alone set of recommended practices for the offshore industry [3.1].

Metocean criteria are typically broken into two categories: operating and extreme. The former involves quantification of metocean conditions in which the facility or vessel should be capable of achieving the routine functions of its primary purpose. Examples of routine functions include pumping oil, drilling, receiving or pumping out natural gas, and generating wind energy. Typical products used to quantify operational conditions include a cumulative probability distribution of wave height and a table of wind speed persistence. These products are used in estimating the fatigue lives for components. In contrast, extreme conditions occur rarely and are often generated by episodic events (e.g., storms). During extreme conditions, normal operations are usually suspended – the vessel is slowed, oil or gas production is stopped, wind turbines are feathered, etc. A common example of an often-used extreme condition parameter is the 100-year maximum wave height – the largest wave expected over a three-hour period once in 100 years.

With this background in mind, the goal of this chapter can now be stated: it is to outline the methods commonly used in industry to quantify the most important metocean variables that impact offshore facilities. These methods are drawn largely from the offshore oil and gas industry but they are also readily applicable to other engineering applications involving the design and operation of vessels, coastal structures, offshore wind farms, navigational aids, coastal geomorphology, and to some extent, pollution studies. While we attempt to provide some physical insights into the underlying metocean processes, this chapter focuses on the methodology for deriving the key variables, and the nuances of their correct application.

Of course there are a multitude of metocean variables that could be covered in this chapter. Potential topics include water temperature, tides, and salinity. While these variables can be important for some engineering applications such as acoustics, this chapter will focus on winds, waves, and currents (WWC), since these are the variables that most often control extreme loads or operating conditions on man-made facilities. However, even this narrowing leaves countless aspects of WWC that could be covered with far too little space to do them justice. Thus we again have chosen to narrow the frame further by specifically focusing on aspects of WWC that tend to drive capital or operating decisions in large offshore facilities. For those interested in coastal features where shallow-water effects are important, the Coastal Engineering Manual [3.2] serves as an excellent reference.

Much as we the authors have had to narrow the topics, a metocean design basis for a major project must narrow the variables that are covered. This is because the sea and atmosphere are filled with complicated processes, many of which are site specific and poorly understood. If aggressive filtering is not undertaken, then too much time can be spent quantifying variables that make little difference to the design or operation of the facility. The first and best way to eliminate variables from investigation is to understand the basic responses of the particular facility. In other words, to answer the question: which metocean variables impact this facility most and which have little or no impact? For example, squalls and their dynamic effects can be especially important in designing the moorings for floating production vessels near the equator, such as off Indonesia. Carefully quantifying squall intensity and its change over time scales of a few minutes and length scales of the order of 50 m is of highest importance. In contrast, quantifying water temperature, storm surge, and air temperature is not critical.

Section 3.2 is an overview of key processes, including a discussion of WWC spectra, wind and current profiles, and important though arguably tangential discussions of wave growth and wave breaking.

Sections 3.3 and 3.4 briefly address some of the more important issues that can arise regarding measurements and models. Since all metocean criteria are founded on one or both of these inputs, it is important to understand the various sources and databases, and their advantages and limitations. Otherwise one may well suffer the consequences of garbage in, garbage out.

Section 3.5 examines ways to calculate the joint probability of WWC processes. One of the more interesting cases is when two variables are statistically independent (or nearly so) in time or space and yet there is often a non-negligible probability that the two can occur simultaneously and generate loads that exceed the load from an individual process at the same probability level.

Section 3.6 describes some of the analysis products that are typically used to quantify operating conditions. The discussion begins with the simplest approaches such as univariate probability density functions and then moves on to address more sophisticated products to characterize storm and calm persistence, directional dependence, and vertical space variations.

Finally, the last section covers the topic of extreme criteria. It is important because the economic and safety consequences of getting it right are so high. It is also important because there is no general and all-encompassing methodology to estimate extremes, so the topic is rich in subtleties, complexity, and potential traps.

2 Overview of WWC Processes

2.1 Winds

Most winds that are important for offshore design and operations come from extra-tropical or tropical storms. Extra-tropical storms are large-scale systems that are well represented on standard meteorological charts. The measurements used to produce these charts are discussed in Sect. 3.3. Tropical storms are relatively small features on common weather charts. Observations in them are scarce. Detailed wind fields in tropical storms are produced using dynamic or kinematic numerical models, which are discussed in Sect. 3.4.

Wind specification requires especially careful attention to definitions. Richardson [3.3] made an eloquent statement of the problem many years ago:

Does the wind possess a velocity? This question, at first sight foolish, improves on acquaintance …let us not think of velocity, but only of various hyphenated velocities.

Richardson was concerned that Δx/Δt might not have a limit in a turbulent fluid. Examples of hyphenated velocities which do have a clearly defined meaning are the one-hour or three-second wind. The one-hour wind is the wind velocity averaged over an hour. The three-second wind is the maximum three-second average velocity in an hour interval, unless another interval is stated. The three-second wind gust is about 30 % higher than the one-hour average.

Wind speeds also vary with altitude. Friction at the water surface reduces the wind speed near the boundary. Wind speeds increase with height through the atmospheric boundary layer. The speed at 30 m height is about 15 % higher than that at the common anemometer height of 10 m. Unless the averaging time and height of a wind measurement are given, that measurement is not very useful.

The standard offshore engineering method for converting wind speeds from one averaging period and height to another is given by Standards Norway (NORSOK) [3.4] and serves as the basis for the ANSI (American National Standards Institute)/API [3.1] recommended practices. These guidelines give the wind speed u(z, t_o) at height z above mean sea level for averaging period t_o as

$u(z, t_o) = U(z)\left[1 - 0.41\, I_u(z)\, \ln\!\left(\frac{t_o}{t}\right)\right], \qquad t = 3600\ \mathrm{s},$
(3.1)

where the one-hour mean wind U(z) is given by a modified logarithmic profile that depends on the one-hour mean wind speed at 10 m elevation, U(10), as given in (3.2) and (3.3)

$U(z) = U(10)\left[1 + C \ln\!\left(\frac{z}{10}\right)\right],$
(3.2)
$C = 0.0573\sqrt{1 + 0.15\, U(10)}\,,$
(3.3)

and the wind speed at other averaging periods given in (3.1) depends on the turbulence intensity I_u(z), defined as the standard deviation of the wind speed at height z, σ(z), divided by the one-hour mean wind speed at height z, U(z). According to the API standard

$I_u(z) = 0.06\left[1 + 0.043\, U(10)\right]\left(\frac{z}{10}\right)^{-0.22}.$
(3.4)

Note that the equations use units of meters for height and m s^{-1} for velocity. Equations (3.1)–(3.4) are based on an extensive set of wind measurements made from a tower on a small islet off the coast of Norway. While all these measurements were made in extra-tropical storms, the equations are commonly used for tropical storms as well, e.g., [3.1]. However, recent work by Vickery et al [3.5] shows that the equations from ESDU (Engineering Sciences Data Unit) [3.6, 3.7] fit the observations from tropical cyclones noticeably better than the NORSOK Standards [3.4] equations.
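To make the use of (3.1)–(3.4) concrete, the short Python sketch below converts a one-hour mean wind speed at 10 m to the mean speed for a shorter averaging period at another height. It is only an illustration of the equations as written above (heights in meters, speeds in m s^{-1}, averaging periods in seconds, with the one-hour reference period taken as 3600 s); the function names are our own, not part of any standard or library.

```python
import math

def norsok_mean_wind(u10_1h, z):
    """One-hour mean wind speed at height z (m), from (3.2)-(3.3)."""
    c = 0.0573 * math.sqrt(1.0 + 0.15 * u10_1h)
    return u10_1h * (1.0 + c * math.log(z / 10.0))

def turbulence_intensity(u10_1h, z):
    """Turbulence intensity I_u(z) from (3.4)."""
    return 0.06 * (1.0 + 0.043 * u10_1h) * (z / 10.0) ** -0.22

def norsok_wind(u10_1h, z, t_avg, t_ref=3600.0):
    """Wind speed at height z (m) for averaging period t_avg (s), from (3.1)."""
    u_mean = norsok_mean_wind(u10_1h, z)
    i_u = turbulence_intensity(u10_1h, z)
    return u_mean * (1.0 - 0.41 * i_u * math.log(t_avg / t_ref))

if __name__ == "__main__":
    u10 = 30.0  # one-hour mean at 10 m, m/s (hypothetical design value)
    for t in (3600.0, 600.0, 60.0, 3.0):
        print(f"{t:7.0f} s average at 30 m: {norsok_wind(u10, 30.0, t):5.1f} m/s")
```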

The original ESDU equations are more complicated than the NORSOK Standards [3.4] equations, but Vickery et al [3.5] found a number of simplifications which apply in cases of engineering interest and yield the following equations

$U(z) = \frac{u_*}{k} \ln\!\left(\frac{z}{z_o}\right),$
(3.5)

where U(z) is the one-hour averaged velocity at a height z above MSL (mean sea level), k is von Karman's constant (0.4), u_* is the friction velocity, and z_o is the roughness height. The latter two are defined as

$u_* = U(10)\sqrt{C_d}\,,$
(3.6)
$z_o = 10\, e^{-k/\sqrt{C_d}}\,,$
(3.7)

where C_d is the drag coefficient at 10 m above sea level. There are many expressions cited in the literature for the drag coefficient, but Vickery et al [3.5] chose Large and Pond [3.8]

$C_d = 1.2\times10^{-3}\,, \qquad 4 \le U(10) < 11\ \mathrm{m\,s^{-1}},$
(3.8)
$C_d = \left[0.49 + 0.065\, U(10)\right]\times10^{-3}\,, \qquad 11 \le U(10) < 25\ \mathrm{m\,s^{-1}},$
(3.9)

where U must be in units of m s^{-1}. For hurricanes, Vickery et al [3.5] suggest restricting the maximum value of C_d based on Vickery et al [3.9] to

$C_{d\mathrm{Max}} = (0.0881\, r + 17.66)\times10^{-4}\,,$
(3.10)

where r is the horizontal distance (in km) from the storm center to the site. The value in (3.9) exceeds the value in (3.10) at about 22 m s^{-1} for r = 20 km. More will be said shortly about the cap on C_d.
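As an illustration of (3.6)–(3.10), the sketch below evaluates the Large and Pond drag coefficient, applies the cap of (3.10), and derives the friction velocity and roughness length. Extrapolating (3.9) above 25 m s^{-1} follows the common practice noted later in the text and is done here only so the cap has an effect; treat it as an assumption rather than a recommendation, and note that the function names are ours.

```python
import math

K_VON_KARMAN = 0.4

def drag_coefficient(u10, r_km=None):
    """Large and Pond drag coefficient (3.8)-(3.9), optionally capped by (3.10)."""
    if u10 < 11.0:
        cd = 1.2e-3                       # (3.8)
    else:
        cd = (0.49 + 0.065 * u10) * 1e-3  # (3.9), extrapolated above 25 m/s
    if r_km is not None:
        cd_max = (0.0881 * r_km + 17.66) * 1e-4  # (3.10)
        cd = min(cd, cd_max)
    return cd

def friction_velocity(u10, cd):
    """u_* from (3.6), m/s."""
    return u10 * math.sqrt(cd)

def roughness_length(cd):
    """z_o from (3.7), in meters."""
    return 10.0 * math.exp(-K_VON_KARMAN / math.sqrt(cd))

if __name__ == "__main__":
    for u10 in (10.0, 20.0, 30.0, 40.0):
        cd = drag_coefficient(u10, r_km=20.0)
        print(u10, round(cd, 5), round(friction_velocity(u10, cd), 2),
              f"{roughness_length(cd):.2e}")
```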

The peak wind gust for averaging time t_o can be calculated with

$u(z, t_o) = U(z)\left[1 + g(\vartheta, t_o, z)\, I_u(z)\right],$
(3.11)

where I_u(z) is the turbulence intensity σ(z)/U(z). Using the simplifications described by Vickery et al [3.5]

$\sigma(z) = \frac{7.5\, u_*\left[0.538 + 0.09 \ln(z/z_o)\right]}{1 + 0.156 \ln\!\left(\frac{u_*}{f z_o}\right)}\,,$
(3.12)

where f is the Coriolis parameter.

The peak factor g(ϑ, t_o, z) is a function of the length of the record (typically 1 h), T_0, and the zero-crossing period, ϑ, or

$g(\vartheta, t_o, z) = \left[\sqrt{2\ln\!\left(\frac{T_0}{\vartheta}\right)} + \frac{0.577}{\sqrt{2\ln\!\left(\frac{T_0}{\vartheta}\right)}}\right]\frac{\sigma(z, t_o)}{\sigma(z)}\,,$
(3.13)

where the variables are defined as

$\sigma(z, t_o) = \sigma(z)\left[1 - 0.193\left(\frac{T_u}{t_o} + 0.1\right)^{-0.68}\right],$
(3.14)
$\vartheta = \left[0.007 + 0.213\left(\frac{T_u}{t_o}\right)^{0.654}\right]^{-1} T_u\,,$
(3.15)
$T_u = 3.12\, z^{0.2}\,.$
(3.16)
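A sketch of the gust calculation in (3.11)–(3.16) is given below. It takes the one-hour mean wind at 10 m, builds u_* and z_o from the drag-coefficient relations above, and returns the peak t_o-second gust at height z. The variable names, the latitude used for the Coriolis parameter, and the record length T_0 = 3600 s are our choices for illustration; the code assumes U(10) is in the range where (3.9) applies.

```python
import math

def esdu_gust(u10, z, t_o, lat_deg=25.0, T0=3600.0):
    """Peak t_o-second gust (m/s) at height z (m) per (3.5)-(3.16); assumes U(10) >= 11 m/s."""
    cd = (0.49 + 0.065 * u10) * 1e-3             # (3.9), no cap applied here
    u_star = u10 * math.sqrt(cd)                 # (3.6)
    z_o = 10.0 * math.exp(-0.4 / math.sqrt(cd))  # (3.7)
    f_cor = 2.0 * 7.292e-5 * math.sin(math.radians(lat_deg))  # Coriolis parameter

    U_z = (u_star / 0.4) * math.log(z / z_o)     # (3.5)
    sigma = (7.5 * u_star * (0.538 + 0.09 * math.log(z / z_o))
             / (1.0 + 0.156 * math.log(u_star / (f_cor * z_o))))      # (3.12)
    T_u = 3.12 * z ** 0.2                                             # (3.16)
    sigma_t = sigma * (1.0 - 0.193 * (T_u / t_o + 0.1) ** -0.68)      # (3.14)
    theta = T_u / (0.007 + 0.213 * (T_u / t_o) ** 0.654)              # (3.15)
    root = math.sqrt(2.0 * math.log(T0 / theta))
    g = (root + 0.577 / root) * sigma_t / sigma                       # (3.13)
    i_u = sigma / U_z                    # turbulence intensity sigma(z)/U(z)
    return U_z * (1.0 + g * i_u)         # (3.11)

if __name__ == "__main__":
    print(f"3-s gust at 10 m for U(10) = 30 m/s: {esdu_gust(30.0, 10.0, 3.0):.1f} m/s")
```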

Neither the NORSOK nor the ESDU equations used to calculate wind at various time averages apply to short-lived squalls, because the wind speed in squalls is not statistically stationary. Nor is it clear how well the wind profiles apply.

Squalls are important for engineering design and operations in low latitudes or where the wave fetch is limited by land. Squall lines often originate onshore where convection is strongest and then propagate with the mean winds. When a squall line passes a site, the wind speed rapidly increases and then decays over a few hours, perhaps with some oscillations. Squalls are generally modeled in design analyses as time series scaled up from actual measured squall records.

Compliant structures in deep water can have natural periods much longer than the vibration periods of fixed structures. Resonant oscillations of these structures can be excited by long period variations in wind speeds. Knowledge of the wind spectrum is required in order to calculate the response. Again, the standard engineering wind spectrum is that given by NORSOK [3.4]. It is

$S(f, z) = \frac{(320\ \mathrm{m^2\,s^{-1}})\,\left(U(10)/U_{\mathrm{ref}}\right)^2\left(z/z_{\mathrm{ref}}\right)^{0.45}}{\left(1 + \tilde f^{\,0.468}\right)^{3.561}}\,,$
(3.17)

where

$\tilde f = (172\ \mathrm{s})\, f\left(\frac{z}{z_{\mathrm{ref}}}\right)^{2/3}\left(\frac{U(10)}{U_{\mathrm{ref}}}\right)^{-0.75}.$
(3.18)

The reference elevation above the mean sea surface z_ref is 10 m, and the reference wind speed U_ref is 10 m s^{-1}.
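The sketch below evaluates (3.17) and (3.18) on a frequency grid. It is a direct transcription of the two formulas in SI units (frequency in Hz); the function name is our own.

```python
import numpy as np

def norsok_wind_spectrum(f, u10, z, u_ref=10.0, z_ref=10.0):
    """NORSOK wind-speed spectrum S(f, z) in m^2 s^-1, per (3.17)-(3.18)."""
    f_tilde = 172.0 * f * (z / z_ref) ** (2.0 / 3.0) * (u10 / u_ref) ** -0.75  # (3.18)
    return (320.0 * (u10 / u_ref) ** 2 * (z / z_ref) ** 0.45
            / (1.0 + f_tilde ** 0.468) ** 3.561)                               # (3.17)

if __name__ == "__main__":
    freqs = np.logspace(-4, 0, 5)  # Hz
    print(norsok_wind_spectrum(freqs, u10=30.0, z=10.0))
```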

The drag of wind on the sea surface produces waves and currents, so accurate knowledge of the drag as a function of wind speed is important for modeling waves and currents. The drag coefficient depends on atmospheric stability, but in the high winds that interest us, the equations for neutral conditions usually apply. The wind stress τ is equal to $u_*^2 \rho_o$, where u_* is the so-called friction velocity given by (3.6) and ρ_o is the air density. The friction velocity depends on the drag coefficient, C_d. There are many formulations for C_d, but one of the more popular is from Large and Pond [3.8], as shown in (3.8) and (3.9). For years, many metocean experts used (3.9) well above the maximum 25 m s^{-1} suggested by Large and Pond [3.8], but more recent hurricane measurements by Powell et al [3.10] showed that the drag coefficient starts to level off around 30 m s^{-1}. They conjecture that high wind speeds create a layer of sea foam and bubbles at the sea surface, thus dropping the effective roughness of the sea. This reasoning was supported by the laboratory experiments of Donelan et al [3.11]. Powell [3.12] provided additional support from field measurements. Frolov [3.13] showed that capping the drag coefficient at 0.0022 was essential to model currents measured in Hurricane Katrina in the Gulf of Mexico.

2.2 Waves

Waves grow because of the input of momentum from the wind, but knowledge of the exact mechanism by which this momentum is transferred has remained elusive. The fundamental mechanism, first proposed by Miles [3.14], seems to be a resonance interaction between wave-induced pressure fluctuations and the waves. As the waves propagate, they are modified by nonlinear interactions between different frequencies, frictional dissipation and wave breaking. A fuller discussion of wave generation and modeling is given in Sect. 3.4.

Ocean waves are a complex and irregular function of space and time. This complexity is best understood by considering the sea surface to be the superposition of many cosine waves, as shown in Fig. 3.1. Each of the cosine waves is characterized by a period T and an amplitude a. The height of a cosine wave is H = 2a. Later we will see that this relation is not true for real waves. The wave frequency f = 1/T is the inverse of the wave period. The wavelength L between two crests is given by $L = gT^2/(2\pi)$ in deep water. The phase speed or celerity is given by c = L/T. A more detailed discussion of wave kinematics and dynamics is given in Chap. 2.

Fig. 3.1 Superposition of cosine waves to make regular waves

A wave record measured at a point can be analyzed into its component cosine waves using the Fourier transform . This transform gives the amplitude and phase of each component. It includes all of the information and irregularity of the original record. This is too much detail for most purposes because an individual wave record is a single realization of a random process. We would usually prefer to know the distribution of wave energy with frequency in the underlying process. If F(f) is the Fourier transform of the wave record, its power spectral density is given by

$S(f) = \frac{2\,|F(f)|^2}{n}\,,$
(3.19)

where n is the number of points in the time series. Taking the square of the amplitude of the Fourier transform removes the phase information from the record, but the result is still a very irregular function of frequency. A smooth version of the spectrum is found by filtering S(f) over frequency or averaging spectra from several ensembles. Glover et al [3.15] give a good, practical guide to the details of calculating power spectra.
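For readers who want to see the smoothing step in code, the sketch below uses Welch's method (segment averaging) from SciPy on a synthetic surface-elevation record. The sampling rate, record length, and segment length are arbitrary choices for illustration only.

```python
import numpy as np
from scipy.signal import welch

fs = 4.0                            # sampling rate, Hz (illustrative)
t = np.arange(0, 1800, 1.0 / fs)    # 30 min record
rng = np.random.default_rng(0)
# synthetic record: a 10 s swell component plus broadband noise
eta = 1.5 * np.cos(2 * np.pi * 0.1 * t + rng.uniform(0, 2 * np.pi)) \
      + 0.5 * rng.standard_normal(t.size)

# averaging over 512-point segments smooths the raw periodogram
f, S = welch(eta, fs=fs, nperseg=512)
print(f[np.argmax(S)])              # peak frequency, close to 0.1 Hz
```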

Once the spectrum is known, the significant wave height HS is defined as

$H_S = 4\sqrt{\int S(f)\,\mathrm{d}f} = 4\sigma\,,$
(3.20)

where σ is the standard deviation of the wave record. The peak wave frequency is the frequency at the highest point in the power spectrum. The mean frequency f_m and zero-crossing frequency f_z are given by

$f_m = \frac{m_1}{m_0}\,, \qquad f_z = \sqrt{\frac{m_2}{m_0}}\,,$
(3.21)

where the spectral moments are calculated as

$m_n = \int S(f)\, f^n\, \mathrm{d}f\,.$
(3.22)
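The significant wave height and characteristic frequencies follow from (3.20)–(3.22) by numerical integration of any one-sided wave spectrum, as sketched below; the spectrum used here is a crude single-peaked example rather than real data.

```python
import numpy as np

# any one-sided wave spectrum will do; here a crude single-peaked example
f = np.linspace(0.01, 0.5, 500)                    # Hz
S = 12.0 * np.exp(-0.5 * ((f - 0.1) / 0.02) ** 2)  # m^2/Hz, illustrative only

# spectral moments m_n = integral of S(f) f^n df, per (3.22)
m0, m1, m2 = (np.trapz(S * f ** n, f) for n in (0, 1, 2))

Hs = 4.0 * np.sqrt(m0)   # significant wave height, (3.20)
fm = m1 / m0             # mean frequency, (3.21)
fz = np.sqrt(m2 / m0)    # zero-crossing frequency, (3.21)
print(f"Hs = {Hs:.2f} m, fm = {fm:.3f} Hz, fz = {fz:.3f} Hz")
```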

Wave spectra for design are generally specified in an analytic form. The most popular of these is the Joint North Sea Wave Observation Project (JONSWAP) spectral form. It is given by

$S(f) = \beta f^{-5} \exp\!\left[-\frac{5}{4}\left(\frac{f}{f_p}\right)^{-4}\right]\gamma^{\exp\left[-\frac{(f - f_p)^2}{2\sigma^2 f_p^2}\right]},$
(3.23)

where

$\sigma = \sigma_a = 0.07\ \text{if}\ f \le f_p\,, \qquad \sigma = \sigma_b = 0.09\ \text{if}\ f > f_p\,.$
(3.24)

The JONSWAP spectrum was originally proposed to describe fetch-limited waves, but by adjusting its parameters, it can give a reasonable fit to most single-peaked spectra. Given the significant wave height, peak period, and peak enhancement factor γ, Goda [3.16] showed that the scale factor is approximated by

$\beta = \frac{5}{16}\,\frac{H_s^2\, f_p^4}{1.15 + 0.168\gamma - 0.925\,(1.909 + \gamma)^{-1}}\,.$
(3.25)
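A direct transcription of (3.23)–(3.25) is sketched below. The value γ = 3.3 is the commonly quoted default peak enhancement factor, and the function name is ours; the final line is a rough self-check only, since (3.25) is an approximation good to a few percent.

```python
import numpy as np

def jonswap(f, hs, tp, gamma=3.3):
    """JONSWAP spectrum S(f) per (3.23)-(3.25), with Goda's scale factor."""
    fp = 1.0 / tp
    sigma = np.where(f <= fp, 0.07, 0.09)                        # (3.24)
    beta = (5.0 / 16.0) * hs**2 * fp**4 / (
        1.15 + 0.168 * gamma - 0.925 / (1.909 + gamma))          # (3.25)
    peak = gamma ** np.exp(-((f - fp) ** 2) / (2 * sigma**2 * fp**2))
    return beta * f**-5 * np.exp(-1.25 * (f / fp) ** -4) * peak  # (3.23)

if __name__ == "__main__":
    f = np.linspace(0.02, 0.5, 1000)
    S = jonswap(f, hs=10.0, tp=14.0)
    # 4*sqrt(m0) should come back within a few percent of the input Hs
    print(4.0 * np.sqrt(np.trapz(S, f)))
```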

The JONSWAP spectrum can be used to describe most spectra with single peaks. However, combinations of sea and swell in storms can result in spectra with two or more peaks. The Ochi–Hubble [3.17] spectrum is often used to describe double-peaked spectra in areas subject to tropical storms. It is the sum of two Gamma distributions

$S(f) = \sum_{j=1}^{2} \frac{H_{Sj}^2\, T_{Pj}\, (\lambda_j + 0.25)^{\lambda_j}}{4\,\Gamma(\lambda_j)\,(T_{Pj} f)^{4\lambda_j + 1}} \exp\!\left[-\frac{\lambda_j + 0.25}{(T_{Pj} f)^4}\right].$
(3.26)

This spectrum has three parameters for each of the two wave systems, a significant wave height, a peak period, and a shape factor λ.

The Torsethaugen and Haver [3.18] double-peaked spectrum is also the sum of two Gamma functions. Their paper gives parameters which were fit to measurements made in the North and Norwegian seas.

The spectral representation of waves makes it natural to think of them as a Gaussian random process. The envelope of a Gaussian process has a Rayleigh distribution, and to first order, so do wave and crest heights. However, the trough preceding a large crest is likely to be on a lower part of the envelope. Trough-to-crest wave heights are, therefore, slightly smaller than given by the Rayleigh distribution

$P(h) = \exp\!\left[-2\left(\frac{h}{H_S}\right)^2\right],$
(3.27)

where HS is four times the standard deviation of the wave trace.

The empirical distribution suggested by Forristall [3.19] accounts for the observed reduction in wave height and has been shown to agree with many observations, including measurements in water depths less than 30 m. It is given by

$P(h) = \exp\!\left[-2.263\left(\frac{h}{H_S}\right)^{2.126}\right].$
(3.28)
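As a worked comparison of (3.27) and (3.28), the snippet below evaluates the probability that an individual wave height exceeds a given multiple of H_S under each distribution; the chosen multiples are arbitrary.

```python
import math

def p_rayleigh(h_over_hs):
    return math.exp(-2.0 * h_over_hs ** 2)        # (3.27)

def p_forristall(h_over_hs):
    return math.exp(-2.263 * h_over_hs ** 2.126)  # (3.28)

for x in (1.0, 1.5, 1.86, 2.0):
    print(f"h = {x:4.2f} Hs: Rayleigh {p_rayleigh(x):.2e}, Forristall {p_forristall(x):.2e}")
```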

Crest heights in steep waves are higher than those predicted by Gaussian theory because the waves are nonlinear. The distribution produced from simulations of second-order waves by Forristall [3.20] accounts for the most important nonlinearity. It is a Weibull distribution of the form

$P(\eta) = \exp\!\left[-\left(\frac{\eta}{\alpha H_s}\right)^{\beta}\right],$
(3.29)

where

$\alpha = \frac{1}{\sqrt{8}} + 0.2568\, S_1 + 0.0800\, U_r\,, \qquad \beta = 2 - 1.7912\, S_1 - 0.5302\, U_r + 0.2824\, U_r^2\,.$
(3.30)

The mean steepness and Ursell number are given by

$S_1 = \frac{2\pi}{g}\frac{H_s}{T_1^2}\,, \qquad U_r = \frac{H_s}{k_1^2 d^3}\,,$
(3.31)

where T_1 is the mean wave period, k_1 is the wavenumber of a wave with period T_1, and d is the water depth.
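The crest distribution of (3.29)–(3.31) can be evaluated as sketched below, using the directionally spread coefficients quoted in (3.30). The iterative linear-dispersion solver for the wavenumber and the example sea state are our own illustrative choices, not part of the published model.

```python
import math

def wavenumber(T, d, g=9.81):
    """Linear-dispersion wavenumber for period T (s) in depth d (m), by fixed-point iteration."""
    omega = 2.0 * math.pi / T
    k = omega**2 / g  # deep-water first guess
    for _ in range(50):
        k = omega**2 / (g * math.tanh(k * d))
    return k

def crest_exceedance(eta, hs, t1, d):
    """P(crest > eta) from the second-order model (3.29)-(3.31)."""
    k1 = wavenumber(t1, d)
    s1 = 2.0 * math.pi * hs / (9.81 * t1**2)                    # mean steepness, (3.31)
    ur = hs / (k1**2 * d**3)                                    # Ursell number, (3.31)
    alpha = 1.0 / math.sqrt(8.0) + 0.2568 * s1 + 0.0800 * ur    # (3.30)
    beta = 2.0 - 1.7912 * s1 - 0.5302 * ur + 0.2824 * ur**2     # (3.30)
    return math.exp(-((eta / (alpha * hs)) ** beta))            # (3.29)

if __name__ == "__main__":
    hs, t1, d = 12.0, 11.0, 1000.0  # illustrative storm sea state in deep water
    for frac in (0.6, 0.8, 1.0):
        print(f"P(crest > {frac:.1f} Hs) = {crest_exceedance(frac * hs, hs, t1, d):.2e}")
```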

The wave and crest height distributions in (3.27)–(3.29) do not take into account higher-order nonlinearities that may lead to rogue waves. The evidence for rogue waves and possible theoretical reasons for their existence are discussed in Sect. 3.7.9.

Representing waves as a Fourier series makes the tacit assumption that the waves do not break. A Fourier series, and most wave theories, cannot handle double-valued time series. Yet during a storm, the sea is covered with breaking waves [3.21]. Fortunately, almost all of these breaking events are spilling events that only affect a small portion of the wave crest. Because of this, design calculations in deep water typically ignore breaking. Measured forces and the survival of structures in severe storms indicate that neglecting deep water breaking waves does not change wave forces significantly [3.22].

The situation is completely different near the shore. The transformation of wave spectra near the shore is modeled by specialized hindcasting tools such as SWAN (Simulating WAves Nearshore) [3.23]. Shoaling waves can steepen rapidly and form a plunging breaker. Longuet-Higgins and Cokelet [3.24] succeeded many years ago in integrating the equations of motion of a free-surface flow past the point of overturning. Such computations show that particle velocities in the crest of plunging breakers can exceed the phase velocity of the wave and are much higher than particle velocities in non-breaking waves. Christou et al [3.25] used a boundary element method to calculate the particle kinematics in the shoaling wave shown in Fig. 3.2. The velocities in the crest are about twice the velocities calculated before the wave breaks.

Fig. 3.2a–d Particle velocities in a shoaling breaking wave calculated using a boundary element method (after [3.25])

2.3 Currents

Knowledge of ocean currents is important when designing, building, or operating an offshore structure. Wind-driven currents are the most important consideration for structural design because their velocities add to wave particle velocities. Wind stress imparts momentum to the sea surface. Turbulent processes mix the momentum downward. The Coriolis force rotates the resulting currents (to the right in the northern hemisphere and to the left in the southern hemisphere). A fuller discussion of wind-driven current generation and modeling is given in Sect. 3.4.

In the northern hemisphere, wind-driven currents are particularly strong on the right-hand side of a hurricane track. There, the current rotation due to Coriolis is often close to resonance with the turning of the wind stress as the hurricane passes, so near-surface currents can exceed 2 m s^{-1}. Figure 3.3 shows an example from Hurricane Gloria in 1985 [3.26]. At the time of the measurements, the hurricane center was at 28.75°N, 74.98°W and moving toward the north-northwest. The closed arrow heads show measurements made with air-dropped expendable current profilers and the open arrow heads show results from a one-dimensional (1-D) numerical model that used the turbulence closure model of Kantha and Clayson [3.27].

Fig. 3.3 Currents measured near the surface in Hurricane Gloria (1985). The solid arrows are measurements from air-dropped expendable current profilers and the open arrows are from a one-dimensional current model (after [3.26])

The downward propagation of wind-driven momentum is constrained to the upper water column by vertical stratification if it exists. Strong stratification is usually found at most sites in water depths greater than about 30–60 m during the summer months. Kantha and Clayson [3.28] give a detailed discussion of mixing in vertically stratified flows. Both measured and modeled currents are much stronger on the right-hand side of the storm because the Coriolis rotation is in the same direction as the wind stress rotation . The agreement between measured and modeled currents is good except for a direction difference to the east of the storm center.

Friction in deep water is extremely low, so wind-driven currents can persist as inertial currents for several days after the wind dies out. The rotation of the earth, through the Coriolis force, causes these inertial currents to rotate clockwise (in the northern hemisphere). The rotation period is $\pi/(\Omega \sin\Phi)$, where Φ is the latitude and Ω is the earth's rotation rate ($2\pi$/day). Figure 3.4 shows currents measured during and after Hurricane Katrina in the Gulf of Mexico [3.13]. The peak wind speed occurred at about the same time as the peak current early on August 29, 2005, but the inertial currents persisted for 7 days after that, when the wind was essentially calm.
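As a quick check of the rotation period quoted above, the snippet below evaluates it at 29°N, which we take as a representative Gulf of Mexico latitude for this example; the result is roughly one day.

```python
import math

OMEGA = 2.0 * math.pi / 86400.0  # earth's rotation rate, rad/s (2*pi per day, as in the text)
lat = math.radians(29.0)         # assumed representative latitude
period_hours = math.pi / (OMEGA * math.sin(lat)) / 3600.0
print(f"inertial period at 29 N: {period_hours:.1f} h")  # about 24.7 h
```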

Fig. 3.4 Currents generated by Hurricane Katrina at the Telemark platform in the Gulf of Mexico. The blue lines are from the measurements and the red lines are from a three-dimensional (3-D) numerical model. The three panels show the current speeds at three depths

Deep water structures must often contend with the permanent strong current systems that exist near the margins of the ocean. Examples include the Gulf Stream, the Loop Current, the Brazil Current, the Kuroshio, and the Somali Current. Figure 3.5 shows these and many others. Tomczak and Godfrey [3.29] give a good descriptive introduction to these current systems.

Fig. 3.5 Major current systems of the world

Most of these currents are permanent features of the oceanic circulation. However, their position and strength can vary greatly. When they depart from the shelf break, they can often have large meanders and shed eddies that can persist for months [3.30]. Current speeds in these systems can exceed 2 m s^{-1}, with speeds over 1 m s^{-1} down to 200 m.

The best design information for these current systems comes from combining remote sensing of the current positions with in situ measurements of current profiles. These techniques have been applied extensively in the Gulf of Mexico to study the Loop Current. The studies have led to the development of a kinematic hindcast model for Loop Current eddies that uses historical eddy positions and shapes as input [3.31].

The external astronomical tide generates weak currents in the deep ocean. Tidal currents are typically less than 10 cm s^{-1} in deep water. In shallower water, tidal currents can exceed 2 m s^{-1} and must be considered in the design of facilities such as floating LNG (liquefied natural gas) terminals. Tides and tidal currents are predictable compared to other oceanographic phenomena, so it only takes a few weeks of measurements followed by harmonic analysis to enable multiyear prediction. The numerical models discussed in Sect. 3.4 also do well predicting tidal currents.
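The harmonic analysis mentioned above amounts to a least-squares fit of sinusoids at known constituent frequencies. The toy sketch below fits the two largest semidiurnal constituents (M2 and S2) to a synthetic current record; a real analysis would use a dedicated package (e.g., UTide) and many more constituents, so this is only meant to show the principle, and all numbers are made up.

```python
import numpy as np

# tidal constituent periods in hours (M2 and S2 only, for illustration)
periods = {"M2": 12.4206, "S2": 12.0000}

t = np.arange(0, 30 * 24, 0.5)  # 30 days of half-hourly samples (time in hours)
rng = np.random.default_rng(1)
# synthetic along-channel current: known amplitudes plus noise (cm/s)
u = (35.0 * np.cos(2 * np.pi * t / periods["M2"] - 1.0)
     + 12.0 * np.cos(2 * np.pi * t / periods["S2"] + 0.4)
     + 5.0 * rng.standard_normal(t.size))

# design matrix of cos/sin pairs for each constituent, solved by least squares
cols = []
for T in periods.values():
    cols += [np.cos(2 * np.pi * t / T), np.sin(2 * np.pi * t / T)]
A = np.column_stack(cols + [np.ones_like(t)])
coef, *_ = np.linalg.lstsq(A, u, rcond=None)

for i, name in enumerate(periods):
    amp = np.hypot(coef[2 * i], coef[2 * i + 1])
    print(f"{name}: fitted amplitude {amp:.1f} cm/s")
```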

In waters that are strongly stratified in the vertical, the external tide in conjunction with sharp bathymetric features can generate a strong internal tide that is characterized by internal waves and possibly solitons with amplitudes of up to 40 m, phase speeds of about 50 cm s^{-1}, and wavelengths of several kilometers [3.32]. As these waves approach shallower water, they can break and cause high velocities and scouring of the seabed [3.33].

A power spectrum of current measurements typically shows a broad peak at periods of a few days corresponding to the motion of weather systems. There are then sharp peaks at the inertial period and any tidal periods that are important at that site. Measurements in laboratory flumes often show high frequency turbulence with amplitudes of as much as 5 % of the mean flow speed. That turbulence strongly affects vortex-induced vibrations of cylinders, so it is important to understand whether such turbulence exists in open ocean currents. Mitchell et al [3.34] made turbulence measurements in a Loop Current eddy using a towed body with a specially configured acoustic Doppler current profiler (ADCP). The most energetic events had speed scales of only 1 cm s^{-1}. The typical and average values are more than ten times smaller. Dhanak and Holappa [3.35] made similar measurements using an autonomous underwater vehicle (AUV). These measurements of low turbulent intensity were made in deep water far from land. Turbulence is expected to be higher in shallow water and near the surface or bottom.

3 Measurements

Metocean criteria ultimately trace their roots to measurements or models. While models have become the predominant source data, measurements are still needed to provide boundary conditions and/or initial conditions and to validate or calibrate the model. In the case of small-scale ocean currents or atmospheric storm systems (e. g., squalls), modeling accuracy is problematic in large part because of a lack of understanding of the fundamental physics of geophysical fluids at small length and time scales. For these processes, measurements remain the dominant source of input data for metocean criteria.

Measurements can come from many different sources, including air or satellite-based sensors, vessel-mounted instruments, dedicated moorings, bottom-mounted instruments, Lagrangian drifters, and most recently, automated mobile platforms such as gliders or AUVs. Some of the more common and useful sources are described in further detail below.

3.1 Historical Storm Databases

Observations from ships formed the basis for the first database of winds and waves providing nearly global coverage. Wind speeds taken from ships are often measured with an anemometer, but waves are usually based on a sailor’s visual observations. As one might expect, the primary challenge in using human observations is to remove bias and reduce scatter. These wind measurements also have significant error because a ship’s superstructure distorts wind flow patterns. Thomas et al [3.36] discuss this issue in detail. Of particular concern with all ship observations is the so-called fair weather bias – the tendency for ships to avoid storms and thus underestimate the true probability of larger waves and winds. A number of attempts were made to remove bias and scatter [3.37] and these efforts eventually culminated in the work of Hogben et al [3.38] who used observations from ships passing close to instrumented buoys to develop corrections that largely removed bias, at least in the North Atlantic. However, as Hogben et al [3.38] point out, they were not nearly as successful in the southern hemisphere where there are far fewer offshore instruments. Nor could they do much about the scatter inherent in subjective human observations.

Another important historical dataset is the so-called HURDAT (the National Hurricane Center's HURricane DATabase) best-track data maintained and distributed by NOAA (National Oceanic and Atmospheric Administration) [3.39] and documented by Jarvinen et al [3.40]. It contains the time histories of tracks, peak winds, central pressure, and radius for historical North Atlantic tropical storms from 1851 to the present. NOAA provides similar information for the eastern North Pacific, but only from 1949 to the present. The NOAA GTECCA (global tropical and extratropical cyclone climatic atlas) database contains historical track data for global storms up to 1995. For cyclones, the coverage starts as early as 1870 in the North Atlantic but not until 1945 for the other major basins of the world [3.41]. Coverage of Northern Hemisphere extratropical cyclones (winter storms) starts in 1965. Note that these datasets do not include detailed wind fields, just the basic intensity information that can be used to reconstruct the detailed wind fields using various methods such as parametric models [3.42]. Others have assimilated the historical tracks into much more sophisticated numerical models providing high-resolution gridded wind velocity and pressure [3.43].

The accuracy of early storms in HURDAT has been questioned, especially as climate scientists have tried to detect trends in historical storm severity. Karl et al [3.44] provide a fairly recent summary of these findings. In part because of these questions, NOAA recently undertook a re-analysis of the data underlying HURDAT. These efforts have been documented in a series of publications, which are referenced in Hagen and Landsea [3.45]. Even after their reexamination, the researchers in this effort readily admit that large uncertainties remain in the storms prior to routine airborne observations, which started in the early 1950s. Emanuel [3.46] and others have noted that there is even more uncertainty in basins outside the North Atlantic, and this uncertainty extends into the post-1950 era because of the lack of reconnaissance flights in most basins.

3.2 Satellite Databases

The study of the oceans and winds from space started in the 1970s with the launch of Skylab and Geos-3, which were equipped with a radar-altimeter, wind-scatterometer, radiometer, and infrared scanner. Le Traon [3.47] gives an overview of the state of operational satellites used in oceanography and to a lesser extent, meteorology.

One of the most useful sensors for ocean engineering has proven to be the altimeter. The first of many operational altimeters flew on TOPEX (Ocean Topography Experiment)/Poseidon and ERS-1 (Earth Resources Satellite), starting in 1991. Since 1998, there have been as many as four altimeters flying simultaneously, because several altimeters are needed to properly resolve the length and time scales of energetic oceanographic phenomena such as mesoscale eddies, storm-driven waves, etc. The most used channels from the altimeter are wind velocity (speed and direction), wave height and period, and sea surface height.

Sea surface height from the operational altimeters is available in near real time and in historical archives from various sites [3.48, 3.49, 3.50]. Though accuracy varies by satellite, typical RMS (root mean square) errors are less than 3 cm [3.51]. These heights are useful in tracking geostrophic ocean currents and developing comprehensive maps of astronomical tides in deeper water, the latter being prohibitively expensive before the advent of satellite altimeters. Shum et al [3.52] assessed 20 of these tidal models and found many to be accurate to better than 2 cm.

Wind speed and wave height and period measurements from the operational satellite altimeters are available over the web [3.53, 3.54, 3.55] but these are typically organized by individual tracks for each satellite or statistics from several satellites averaged over large areal blocks. The track data must typically be filtered to eliminate periods with heavy rainfall or close passage to land. Several companies offer commercial products with fully analyzed databases that can be accessed with their proprietary software [3.56, 3.57].

Numerous researchers have investigated the accuracy of altimeter-derived winds and waves, e.g., [3.58, 3.59]. These efforts show that there is a systematic bias unique to each satellite, but it is easily corrected, leaving an RMS difference with buoy measurements of about 30 cm for significant wave height and 1.5 m s^{-1} for wind speed. Some of this difference is due to buoy measurement error.

Satellite wind scatterometers have been deployed sporadically starting with Seasat. By far the most successful of these was the QuikSCAT (quick scatterometer) satellite, which launched in 1999 and continued to operate until late 2009. Unlike altimeters, which only measure along their track, scatterometers measure over a wide swath. In the case of QuikSCAT the swath was 1800 km, resulting in the coverage of 90 % of the Earth's surface in a single day. Results were widely used to improve forecast models, so archived model results are one of the best ways to use QuikSCAT, since native measurements have many hours between samples. Archived QuikSCAT measurements are downloadable from the web [3.60].

SAR from various satellites has also been used to measure ice coverage, oil slicks, waves and currents as described in [3.47]. Unlike the altimeter, SAR measures wave direction in addition to wave height and period. Furthermore, it makes those measurements over a wide swath. Several commercial satellite wave databases include SAR measurements. SAR also has significant theoretical advantages over other sensors when it comes to identifying near-surface currents. Unlike the altimeter, SAR can identify non-geostrophic current fronts (i. e., currents that cause no detectable change in sea surface height). However, analysis of SAR images is complex in part because artifacts can be caused by natural surfactants, and also because a minimum wind threshold is needed. These disadvantages have limited the use of SAR for the measurement of waves and currents.

The wind velocity, wave height/period, sea surface temperature, and sea surface height measurements from satellites are routinely assimilated into ocean models to provide nowcast and forecast products [3.61, 3.62, 3.63, 3.64, 3.65]. It is probably in this form that the satellite results are the most valuable, since the models are able to interpolate between the large gaps in time and space that invariably appear in all sources of satellite measurements. Some of the more popular modeling products that are publicly available are described in Sect. 3.4.

3.3 In Situ Measurements

Instruments are commonly deployed at a fixed offshore location using quasi-permanent facilities like oil production jackets, moorings with subsurface or surface buoyancy, or quasi-permanent coastal facilities.

No matter how instruments are deployed, the ocean currents at a particular site are commonly measured by an acoustic Doppler current profiler (ADCP). The accuracy and range of the instrument mostly depend on the transmission frequency, which ranges from 38 to 1200 kHz for commercially available instruments. ADCPs offer many advantages over earlier technologies. They are solid state instruments which are not easily fouled by marine growth. Perhaps most attractive of all is their ability to accurately measure at up to 1000 m from the instrument. That said, ADCPs can yield problematic results which may not be obvious to the untrained eye, especially in cases where the primary sources of acoustic scatter are mobile (e.g., plankton or fish) or fixed (e.g., risers on an offshore platform). Hogg and Frye [3.66] and Magnell and Ivanov [3.67] give some good examples of artifacts that can contaminate ADCP measurements.

High-frequency (HF) radar is a somewhat newer and more expensive technology than ADCPs, but its use has grown rapidly in the past 5 years, largely because HF radar can map surface currents over areas of the order of 1000 km² using only two coastal installations. Dozens of HF radars have been installed along most of the eastern and western coastlines of the US [3.68], and the data are available in real time from NODC [3.69]. Paduan and Graber [3.70] discuss the basic technology along with some of its limitations and provide numerous references.

Surface gravity waves can also be accurately measured with ADCPs [3.71] and HF radar [3.70]. However, most historical measurements have been taken with surface-following buoys equipped with accelerometers and perhaps augmented by roll sensors to measure directionality. Many developed countries with coastlines have deployed such instruments for several decades and the US results are available from the National Data Buoy Center (NDBC). Pandian et al [3.72] summarize the limitations of accelerometer-based systems. The most noteworthy is the tendency of the smaller buoys to be pulled under water or be tossed about in larger waves.

Wind velocity has typically been measured with mechanical anemometers using some type of impeller. Most offshore buoys are still equipped with this type of sensor. While these measurements are useful most of the time, questions have been raised about their accuracy during large wave events, especially for the smaller buoys. Better sensors are needed to get detailed profiles. The least expensive of these better sensors are based on LIDAR (light detection and ranging; optical) or SODAR (sonic detection and ranging; sound). Both systems use a Doppler principle to determine velocity and are capable of measuring multiple bins over ranges of roughly 200 m above the sensor. Freeman et al [3.73] compare a LIDAR profiler to more traditional anemometers and show excellent accuracy. In contrast, de Noord et al [3.74] raise serious concerns about the accuracy and consistency of SODAR measurements, especially in storm conditions or near obstructions.

Regardless of the variable being measured, interference from nearby man-made or natural features must always be considered. Cooper et al [3.75] discuss some of the challenges of taking metocean measurements off of oil-industry facilities like jackets. Similar issues crop up with wind measurements from stations on the coast, and these have been well studied and codified into numerous recommended practices, e. g., ASCE (American Society of Civil Engineers) 7-05 [3.76] and EUROCODE [3.77].

Another issue that frequently arises is the averaging interval. As explained in Sect. 3.2, it is especially important for wind velocity because there is considerable wind energy at higher frequencies. Another important factor affecting winds is the elevation of the measurement above the sea or land, which is also explained further in Sect. 3.2. In short, specification of wind velocity should always include a minimum of four variables: speed, direction, averaging interval, and elevation. Similar issues arise with ocean currents, although they tend to be less pronounced because of the inherent difference in the turbulence spectra of winds and currents. In the case of waves, the issue of elevation is irrelevant. However, the temporal scale of the sampling is important when calculating statistical values like significant wave height. This issue will be discussed in more detail in Sect. 3.7.8. A good general rule is that the minimum sampling period for wave spectra should never be less than 20 min.

As indicated above, a great deal of in situ measurements, including wind velocity, are reported in real time and archived at the NDBC. Another good source for measurements from limited duration deployment is the National Oceanographic Data Center (NODC).

3.4 Mobile Measurements

Instrumented wind measurements have been taken from vessels for decades. In the 1980s, oceanographers began mounting ADCPs on ships [3.78]. In both cases, correcting for ship motion presents challenges, but GPS (global positioning system) has largely resolved these. Vessel-based wind measurements remain susceptible to flow interference from the ship superstructure.

Starting in the 1980s there was a rapid increase in the use of semi-automated or fully automated mobile platforms, starting with Lagrangian drifting buoys whose paths are tracked by satellite, e. g., [3.79]. With the advent of GPS, the drifter position could be precisely tracked and accurate velocities estimated. Coholan et al [3.80] describe the use of drifters to measure the strong currents associated with the Loop Current in the Gulf of Mexico.

Autonomous underwater vehicles (AUV) have not been used much for current measurements because of their cost and limited range. However, as mentioned in Sect. 3.2.3, Dhanak and Holappa [3.35] made good use of an AUV to measure turbulence.

In the late 1990s gliders became increasingly common thanks to their light weight (50–100 kg), small size (2 m), relatively low cost ($100 k), and lengthy deployment capability (several months). Rudnick et al [3.81] describe the technology in some detail. Though gliders can only progress horizontally at about 1 knot, their long endurance and the ability to remotely pilot them make gliders highly cost effective and adaptable. The present crop of sophisticated gliders can reach 1000 m depth, though this will likely be extended in the near future. Gliders have limited payload and power capacities. They are typically equipped with CTD (conductivity-temperature-depth) sensors, although other sensors have been deployed, including fluorometers and dissolved-oxygen and pH sensors. A time-mean, depth-averaged water velocity can be derived from the surfacing coordinates of a glider. Efforts are underway to incorporate ADCPs into a glider, though power consumption and obtaining an absolute velocity measurement in deeper water remain challenges.

The Global Drifter Program [3.82] began deploying large numbers of drifting buoys in 1999 as a fundamental component of the global ocean observing system (GOOS); as of August 2011, nearly 11000 buoys had been deployed worldwide, and their data are available from the GDP (Global Drifter Program) website. Unfortunately, there is not yet an equivalent to NDBC for obtaining real-time or archived measurements from other mobile instruments.

4 Modeling

Since the advent of relatively cheap computing power in the past 30 years, numerical modeling has started to replace measurements as the primary feedstock for metocean criteria. There are many reasons for this change.

Models are typically much less expensive than measurements and can provide results at a specific site and for durations of many years. In contrast, one rarely has the luxury of having more than a year or two of measurements at the site of interest. Calculating extreme criteria, say the 100-y event, from short duration measurements will give values with extremely large uncertainty and a high likelihood of major bias. Even 1–2 years of measurements are often inadequate to capture interannual variability.

4.1 Winds

Extratropical wind field calculations generally use pressure contours on archived meteorological analysis charts as input information. A balance between the pressure gradient and the Coriolis force gives the wind speed. That calculation must be modified using a boundary layer model to find the desired wind speed and direction at 10 m elevation.

An important modeling hindcast dataset is the NCEP (National Centers for Environmental Prediction) reanalysis product [3.83]. The first phase is documented in Kalnay et al [3.84] and consists of a numerical model hindcast of wind and pressure fields from 1948 to the present. Observations from ships, satellites, and fixed sites have been assimilated into the model. A follow-on effort, NCEP/DOE (Department of Energy) Reanalysis II, covered 1979–2010 [3.85] and included far more satellite observations, as well as bias correction and a more refined model. Saha et al [3.86] describe the most recent model and processing.

The NCEP data is on a rather coarse grid, so for storm hindcasts it probably needs to be augmented with an analysis by an experienced meteorologist using all available data. Wind speeds derived from satellite scatterometers can be very helpful in this process.

Hurricanes offer a special challenge since they are small features relative to the scale of regular weather charts. To compensate for this, kinematic or dynamic hurricane models are often used to hindcast hurricane winds. The models typically begin with specification of the atmospheric pressure field. Winds due to that pressure field are found from the gradient wind balance equations. Then the wind is adjusted to 10 m elevation using a boundary layer model. Holland [3.42] introduced the radial pressure model

$p(r) = p_c + \Delta p \exp\!\left[-\left(\frac{R_{\max}}{r}\right)^{B}\right],$
(3.32)

where r is the distance from the center of the storm, Rmax is the radius to maximum winds, Δ p is the central pressure deficit, and the Holland B parameter modifies the exponential shape of the pressure curve. If enough data is available, different pressure curves may be used in different storm quadrants. A second exponential function is now often added to account for secondary wind speed maxima. Cardone et al [3.87] give a good description of how the pressure gradient is transformed to boundary layer winds. The Hurricane Research Division HWIND model [3.88] uses these methods to produce wind fields for Atlantic Basin hurricanes.
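To illustrate (3.32), the sketch below evaluates the Holland pressure profile and the corresponding gradient-balance wind speed. The parameter values, air density, and latitude are hypothetical, and the gradient-wind step is our own addition (the commonly used balance for this profile), shown here only to indicate how a pressure field is turned into winds before the boundary layer adjustment described above.

```python
import math

def holland_pressure(r_km, pc_mb, dp_mb, rmax_km, B):
    """Surface pressure (mb) at radius r from the storm center, per (3.32)."""
    return pc_mb + dp_mb * math.exp(-((rmax_km / r_km) ** B))

def gradient_wind(r_km, dp_mb, rmax_km, B, lat_deg=25.0, rho_air=1.15):
    """Gradient-balance wind speed (m/s) for the Holland pressure profile (illustrative)."""
    r = r_km * 1000.0
    dp = dp_mb * 100.0                                         # mb -> Pa
    f = 2.0 * 7.292e-5 * math.sin(math.radians(lat_deg))       # Coriolis parameter
    x = (rmax_km / r_km) ** B
    term = (B * dp / rho_air) * x * math.exp(-x)
    return math.sqrt(term + (r * f / 2.0) ** 2) - r * f / 2.0

if __name__ == "__main__":
    pc, dp, rmax, B = 940.0, 70.0, 35.0, 1.3                   # hypothetical storm parameters
    for r in (20.0, 35.0, 70.0, 150.0):
        print(f"r = {r:5.0f} km: p = {holland_pressure(r, pc, dp, rmax, B):6.1f} mb, "
              f"Vg = {gradient_wind(r, dp, rmax, B):4.1f} m/s")
```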

4.2 Waves

Komen et al [3.89] describe how wave hindcasts solve the transport equation for the directional wave spectrum S(f, θ)

$\frac{\partial S(f, \theta)}{\partial t} + \mathbf{v} \cdot \nabla S(f, \theta) = S_{\mathrm{in}} + S_{\mathrm{nl}} + S_{\mathrm{ds}}\,,$
(3.33)

where v is the group velocity of the waves, so the left-hand side of the equation represents the advection of wave energy. The right-hand side of the equation schematically lists the source terms for the spectrum: Sin represents the input of energy from the wind, Snl represents the nonlinear interactions between wave frequencies, and Sds represents dissipation terms such as bottom friction and wave breaking. Only the nonlinear term is known theoretically, but because its computation is formidable it is greatly simplified in operational models. The other two terms must be parameterized based on experimental data and tuned to observed wave growth. The directional spectrum calculated from the model is summarized as significant wave height, peak and average wave periods, mean wave direction, and wave directional spreading.

The standard wave model, WAM (WAve Model), was created by an international consortium of wave modelers called the WAMDI group. The development of WAM is thoroughly described by Komen et al [3.89]. WAM has been continually modified, and versions have been installed at many national forecast offices. The NOAA version, WAVEWATCH III, is available for download at ftp://polar.ncep.noaa.gov/pub/wwatch3/v2.22. That site also maintains an archive of forecast and hindcast wave data for US waters.

The accuracy of wave modeling crucially depends on accurate specification of the wind fields. For severe storms, this often requires hand analysis by an experienced meteorologist. Given good wind fields, RMS wave height accuracies of less than 10 % can be achieved for extratropical storms [3.90] and for hurricanes [3.91].

The various NCEP reanalysis products have been used to force wave models and generate 50+ year hindcast databases, e.g., [3.92]. The primary limitation of these products (other than NARR (North American Regional Reanalysis)) is the relatively coarse spatial grid (2.5°) and temporal resolution (6 h). Cardone et al [3.93] discuss some of the implications of this coarse resolution on modeling extreme waves and winds. In short, the reanalysis products suffer from poor resolution of areas of high winds in extratropical storms and in virtually all parts of tropical storms.

4.3 Currents, Surge, and Tides

With the advent of relatively fast and inexpensive computers in the 1970s, numerical models of ocean currents in shallow water began to proliferate. While the numerical discretization differs, all these models solve a similar set of differential equations conserving mass and momentum, which are often referred to as the shallow water equations. Model output includes the time series of depth-averaged velocity and surface elevation at discrete grid points in the horizontal domain. This class of model is now routinely used to accurately simulate astronomical tides and wind-induced currents and surge in coastal waters where stratification in the water column is not important, typically in 30 m of water or less and beyond the influence of substantial river inflow. Johnsen and Lynch [3.94] provide numerous examples of these so-called two-dimensional (2-D) models. Several models such as MIKE 21 HD [3.95, 3.96] and ADCIRC [3.97] have good user interfaces and can be successfully applied by users with modest familiarity with numerical ocean modeling. The accuracy of these models of course depends on the specifics, but assuming that the bathymetry and the initial and boundary conditions are specified accurately, modeled currents and surface elevations can achieve a 15 % RMS error when compared to measurements.

A similar story can be told for stratified deeper waters but only for some types of forcing such as winds and external tides. Some examples are given in [3.94]. An example for hurricane-generated currents is given by Frolov [3.13], who used a 3-D model to accurately simulate the ocean response from Katrina and Georges, both during the initial phase of direct wind forcing and the subsequent phase of inertial oscillations. It is worth noting that during the Katrina simulations, a storm marked by Category 4 winds, Frolov had to cap the surface drag coefficient, as discussed in Sect. 3.2.

Three-dimensional current models can also accurately simulate the longer time and length scales of quasi-geostrophic currents like the Gulf Stream, provided that they have accurate boundary and initial conditions, which are typically provided by satellite altimeters. While these models can achieve less than 20 % RMS error on large length scale processes, they have difficulty simulating processes of approximately 100 km or less [3.98, 3.99]. That is because these processes often exhibit baroclinic instabilities with small length and timescales that are undersampled by the present altimeter array. These limitations can be circumvented to some degree by assimilating fine-scale measurements from ships, drifters, gliders, etc. [3.99]. A number of models are publicly or commercially available, but the skill needed to effectively apply these models is much higher than for the 2-D models, in large part because the underlying physics of a stratified ocean (3-D) are far more complex than an unstratified one (2-D). Examples of generally usable 3-D models include HYCOM [3.100], ROMS [3.101], and MIKE 3 HD. As in the case of 2-D models, the numerical methods employed by 3-D models differ greatly, but the better ones are capable of modeling similar real-world situations with equivalent accuracy.

That is the good news. The bad news is that there are a host of other processes, many of them energetic, where numerical models yield RMS errors of more than 100 %. Published examples are hard to find because poor matches tend to go unpublished. However, the authors' experience suggests models have great difficulty simulating internal (baroclinic) tides and solitons, turbidity currents, and river outflows. In these cases, data assimilation is usually impractical because of the short length and time scales of the characteristic processes. In addition, since these cases are dominated by small length scales where turbulence and mixing play an important role, the physics are not well understood. CFD (computational fluid dynamics) may someday be a viable tool but not until computer capacity increases substantially.

A number of extensive data sets of ocean currents have been generated in the last decade and are readily available over the web. These models assimilate data from satellite measurements and sometimes buoy and drifter measurements. Some noteworthy and useful data sets include:

NOAA’s RTOFS global model [3.102] provides forecasts up to 7 days. The historical forecasts are not downloadable at this time, though that may change in the future (personal communication, Hendrik Tolman, NCEP, Environmental Modeling Center, 23 Aug 2012). The model is based on HYCOM and is composed of curvilinear grid points with variable horizontal sizes spanning 5–17 km. It uses 26 hybrid layers/levels in the vertical.

The HYCOM global model provides forecasts up to 7 days and archives back to 2003, though until 2013 the archive only saved the modeled fields at midnight. The model uses a 1/12° grid.

NCOM (Navy Coastal Ocean Model) regional models [3.103] provide forecasts up to 4 days. The historical forecasts are not downloadable at this time, though that may change in the future. The models use a 1/36° version of the global NCOM model, a version of the Princeton Ocean Model (POM). Nowcasts have been archived back to 2010. Other regional models are also available: RTOFS for the North Atlantic, MERCATOR for the Mediterranean [3.104], and BLUElink for Australia [3.105]. These models assimilate satellite observations within their domain and take their boundary conditions from larger-scale global models.

When utilizing the archive data sets from 3-D models, one must keep in mind the weaknesses described earlier in this section. More specifically, these archived products will not adequately resolve the peak current during tropical cyclones. Nor will they be able to reliably replicate historical mesoscale features, though they may be able to reproduce the statistics of those features (e. g., reproduce the histogram of speed). In summary, if there are energetic ocean current processes with length scales of less than 100 km affecting the site of interest, model archives should only be used with caution. At the very least, several months (preferably much more) of local measurements should be obtained and used to validate and calibrate the model before relying on model results.

5 Joint Events

Most ships and offshore facilities are designed to withstand a load with a specific return interval of n years, e. g., the 100-year event. For many decades, offshore designers assumed that the n-y event was created by the simultaneous occurrence of the n-y wind, n-y wave, and n-y current (i. e., the so-called n-y independent events), all aligned in the same direction. However, about 30 years ago, metocean researchers started collecting detailed measurements during major storm events and realized that the peaks of winds, waves, and currents did not, in fact, coincide in time or align in direction, and they began developing various techniques to account for this fact. The more popular ones are described next.

5.1 Response-Based Analysis

The simplest and perhaps most accurate way of estimating the n-y response is to feed a time series of wind, wave, and current into a response model of the facility and then perform an extreme analysis of a key response variable. Consider, for example, a structural engineer designing the legs of an offshore jacket for the 100-y overturning moment (OTM). In the response-based approach, the structural engineer would first develop a fairly simple response function whose input variables include wind, wave, and current and whose output is the OTM. Second, the metocean time series is fed into the response function, resulting in a time series of OTM. Third, a peak-over-threshold (POT) analysis is performed, as described in Sect. 3.7.2, and the n-y OTM calculated. Finally, a set of winds, waves, and currents that produces the n-y OTM is found and used for detailed analysis.
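As a concrete illustration of these steps, the following minimal Python sketch feeds a synthetic hourly wind/wave/current record through a hypothetical OTM response function and extracts independent response peaks for a POT analysis. The response coefficients, the 30-year record length, and the 48 h storm-separation window are illustrative assumptions, not values from this chapter.

```python
import numpy as np

def otm_response(wind, hs, current):
    # Hypothetical overturning-moment proxy (arbitrary units): quadratic wind
    # and current terms plus a wave term. A real project would use a calibrated
    # structural response function instead.
    return 0.6 * wind**2 + 2.5 * hs**2 + 1.8 * current**2

# Synthetic hourly hindcast standing in for a multi-decade wind/wave/current dataset
years = 30
rng = np.random.default_rng(1)
n = 24 * 365 * years
wind = rng.weibull(2.0, n) * 10.0                 # wind speed, m/s
hs = 0.3 * wind + rng.gamma(2.0, 0.5, n)          # significant wave height, m
current = rng.rayleigh(0.3, n)                    # surface current, m/s

otm = otm_response(wind, hs, current)

# Peak-over-threshold analysis of the response time series (Sect. 3.7.2):
# keep storm peaks separated by at least 48 h so the events are independent
threshold = np.quantile(otm, 0.999)
exceed = np.flatnonzero(otm > threshold)
clusters = np.split(exceed, np.flatnonzero(np.diff(exceed) > 48) + 1)
peaks = np.array([otm[c].max() for c in clusters if c.size])

print(f"{peaks.size} independent response peaks "
      f"({peaks.size / years:.1f} per year) above the threshold")
# These peaks would be fitted to an extreme value distribution to obtain the
# n-year OTM, and a wind/wave/current combination reproducing that OTM would
# then be selected for detailed structural analysis.
```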

Ewans [3.106] describes the application of the response-based approach to pipeline stability. Heideman et al [3.107] provide one of the earliest examples of the approach and show that for a jacket-type structure in the North Sea, one can combine the 100-y wind and wave with an equivalent current that is 0.25 times the 100-y current to reach the 100-y OTM. Their case is perhaps on the extreme end of potential savings as it is situated in the North Sea where storm winds and waves are weakly correlated to the extreme current. Nevertheless, even in regions dominated by hurricanes where the metocean variables are highly correlated, ANSI/API [3.1] recommends that the 100-y wave can be combined with 0.95 of the 100-y wind speed and 0.75 of the 100-y surface current speed. A further 3 % reduction of the current and wind is allowed if directionality is considered.

The response-based approach is versatile and can apply to the calculation of extreme loads like base shear or OTM in a jacket, extreme responses like the n-y heave in a ship, or operating conditions like the marginal probability distribution of pitch and roll. ANSI/API [3.1] recommends the response-based analysis as the preferred alternative. Part of the reason for the rise in popularity of response-based analysis is the increase in computer power, which has made the repetitive solution of fairly complex response functions feasible. Another enabling technology has been the advent of long-duration hindcast datasets of simultaneous wind, wave, and current time series derived from numerical models.

That said, the downside of response-based analysis is the need for a response model with sufficient complexity to accurately reflect the critical response of the facility yet with sufficient computational efficiency to run many thousands of times. Developing such response models can be daunting for complex floating systems like TLPs (tension-leg platforms) or spars. Of course there are shortcuts in the analysis that can reduce the computational requirement yet still preserve accurate results. For instance, in the case of calculating extreme loads, the metocean time series can be truncated to a much smaller set of events that considers only the stronger storms. Obviously this approach does not work as well for developing operational criteria.

5.2 Load Cases

As discussed in the previous section, a response-based analysis is generally the preferred approach in developing extreme criteria, but in practice the response function is often not easily simplified, so metocean specialists are requested to develop so-called load cases. These consist of likely combinations of wind, wave, and current that could cause the n-y load or response. For instance, one common load case would be the n-y wave and the associated wind (the most likely wind velocity to occur simultaneously with the n-y wave condition) and associated current. Another analogous case would be the n-y current and associated wind and wave. ANSI/API [3.1] makes routine use of load cases in their recommended practice.

There are two main questions to be answered when using load cases: What combinations of wind, wave, and current can cause the n-y load/response? How is the associated value found? Answering the first question is straightforward for well-studied facilities like offshore jackets, because previous work has shown that the n-y load for key global responses like base shear and overturning moment occurs during the n-y wave and associated wind/current. Other combinations, such as the n-y wind and associated wave and current, come close but do not exceed the n-y wave case. However, for other facility types this may not be true, and so there is a risk of missing load combinations with n-y recurrence that exceed traditional cases like the n-y wave and associated wind/current. One way to mitigate this risk is to provide a broad range of possible load cases, though a firm justification for those cases may be difficult to establish unless a response-based analysis is performed.

Several methods have been developed to answer the second question and these are discussed in the following sections.

5.2.1 Regression Analysis

Using a regression analysis to find the associated values can be straightforward, especially when the primary and secondary variables are well correlated. The analyst starts by estimating the n-y value of the primary variable using a peak-over-threshold (POT) method, as described in Sect. 3.7.2. Next, a scatter plot is made of the coincident (in time) primary and secondary variables. If there is some correlation evident in the plot, the data is fit with a curve to derive an equation expressing the secondary variable in terms of the primary one. The associated value can then be found by substituting the n-y primary variable into the equation.

Figure 3.6 illustrates this approach for the case where the primary variable is Hs, the significant wave height, and the secondary variable is W, the wind speed. The figure suggests that Hs is well correlated with W (correlation coefficient of 0.91) in a linear way. The red line shows the least squares fit, with the resulting algebraic expression shown in the upper left-hand corner of the figure. A threshold of Hs > 3 m has been applied to remove the weaker winds and waves and make the best-fit curve linear. For this particular dataset, the 100-y Hs is about 9 m, so the red line suggests an associated wind speed of 24.5 m s⁻¹, well below the 32 m s⁻¹ suggested by an independent POT analysis of the 100-y W in this dataset. The 24.5 m s⁻¹ value represents a mean estimate with a 50 % probability of being exceeded. Therefore, one might want to increase that value to reflect the scatter in the data and uncertainty in the fit.

Fig. 3.6 Scatter plot of Hs versus W for all hurricane-generated waves with Hs > 3 m. The red line shows the least squares fit
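The sketch below illustrates the regression recipe on synthetic, hurricane-like Hs and W data; the fit coefficients, the assumed 100-y Hs of 9 m, and the one-standard-deviation margin are illustrative and will not reproduce the numbers in Fig. 3.6.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic hurricane sea states standing in for the data behind Fig. 3.6
hs = rng.uniform(3.0, 10.0, 500)                      # significant wave height, m
wind = 2.5 * hs + 2.0 + rng.normal(0.0, 1.5, 500)     # correlated wind speed, m/s

# Least-squares linear fit of W on Hs, using only Hs > 3 m
mask = hs > 3.0
slope, intercept = np.polyfit(hs[mask], wind[mask], 1)
r = np.corrcoef(hs[mask], wind[mask])[0, 1]
print(f"fit: W = {slope:.2f}*Hs + {intercept:.2f}  (r = {r:.2f})")

# Associated (conditional mean) wind for an externally derived 100-y Hs
hs_100 = 9.0                       # assumed 100-y Hs from a separate POT analysis
w_assoc = slope * hs_100 + intercept
print(f"associated wind for Hs = {hs_100} m: {w_assoc:.1f} m/s")

# The fit gives a mean estimate with a 50 % chance of being exceeded; adding,
# say, one standard deviation of the residuals gives a more conservative value
resid_std = np.std(wind[mask] - (slope * hs[mask] + intercept))
print(f"more conservative associated wind: {w_assoc + resid_std:.1f} m/s")
```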

The data shown in the figure was taken from a hurricane dataset, so the highly correlated relationship between the stronger waves and wind is not surprising. However, there are other situations in which the correlation may be weak or nonexistent, such as with currents and wind in deep water. In such cases, it is sometimes reasonable to set the associated value to the mean of the secondary variable. That said, there are subtleties that crop up in certain parts of the world. Consider the derivation of the 100-y wind speed and associated wave off Nigeria where the extreme winds are controlled by squalls that pass quickly and only generate small waves. It would be unconservative to use those squall-generated waves with the squall-generated winds, since much stronger waves are frequently found in the region originating from persistent southeasterlies and/or swell from the Roaring 40s. In this case, a reasonable estimate of the associated wave for the n-y squall case would be the mean for the entire population of wave-producing events during the squall season.

5.2.2 Simulations

Numerical simulations are often one of the best ways of determining associated values. Take, for example, the challenge of estimating the astronomical tide and storm surge to associate with the peak wave crest height. Such a combined event is needed for setting the deck height on jackets. Fox [3.108] describes a Monte Carlo approach of numerical simulations to estimate the expected value of the three processes.

Simulations are often the only way to determine the associated value when there are strong nonlinear interactions between the two variables. Cooper and Stear [3.109] describe an example where the Loop Current and a hurricane simultaneously affected a site in the Gulf of Mexico. While both processes are statistically independent, a tropical cyclone crosses over the Loop or one of its eddies every 3 years, on average, in the deep water Gulf. Most crossings are glancing and of no consequence, but every few decades a hurricane will cross the western half of an eddy or the Loop, resulting in a strong nonlinear interaction that can magnify the subsurface ocean currents by four times the linear superposition of the hurricane-only and Loop-only currents [3.13]. Cooper and Stear [3.109] estimate the frequency of occurrence of the Loop and hurricane current by shuffling the years from a hindcast historical dataset with a hindcast Loop/eddy database. They then use a lookup table of hindcasted joint hurricane/Loop events to estimate the n-y combined current.

5.3 Environmental Contours

The largest structural responses may not come from the combination of the largest primary variable and the associated secondary variables. For example, the largest roll response of a floating structure may come from a lower wave height and a wave period that matches the roll period. Those cases can be systematically investigated using environmental contours.

Haver and Winterstein [3.110] give a good description of the method and its use. In their example, they fit an extreme value distribution to the significant wave height. They then fit distributions for the peak wave period conditional on ranges of wave height. Finally, they fit smooth functions to the parameters of those conditional distributions so they can be extrapolated to low wave height probability levels. This process produces a functional form for the environmental contours of wave height and period.

It is also possible to produce non-parametric environmental contours using a kernel density estimator. In this method, each point in a scatter diagram is replaced by a probability density function. All of those density functions are added together to give a smooth probability density function for the entire data set. According to Scott [3.111], if we use a bivariate normal kernel, the optimum standard deviation of the kernel is given by

$h_i = \sigma_i\, n^{-1/6}$ ,
(3.34)

where $\sigma_i$ is the standard deviation of the data in dimension i, and n is the number of data points.

The resulting probability density can be contoured using standard library functions like MATLAB’s contourc.m. The probability levels of the contours are chosen so that the maximum wave heights on the contours equal the independent return period wave height. At low probability levels it may be necessary to limit the steepness of the waves to eliminate waves that are steeper than physically realistic. Figure 3.7 shows significant wave height and peak period contours based on NDBC buoy measurements during hurricanes from 1978–2010 in the Gulf of Mexico.

Fig. 3.7 Contours of significant wave height and peak period based on NDBC buoy measurements made in the Gulf of Mexico
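A minimal Python sketch of the non-parametric approach, using a bivariate normal kernel with the bandwidths of (3.34) on a synthetic Hs–Tp scatter; the data, grid, and contour levels are illustrative, and a real application would tune the levels so the maximum Hs on each contour equals the independent return-period value.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)

# Synthetic Hs/Tp scatter standing in for the buoy data behind Fig. 3.7
hs = rng.weibull(1.5, 2000) * 2.5 + 0.5                 # significant wave height, m
tp = 4.0 * np.sqrt(hs) + rng.normal(0.0, 0.8, hs.size)  # peak period, s

# Kernel bandwidths from (3.34): h_i = sigma_i * n**(-1/6)
n = hs.size
h1 = hs.std() * n ** (-1.0 / 6.0)
h2 = tp.std() * n ** (-1.0 / 6.0)

# Sum bivariate normal kernels on a regular grid to get a smooth density
hg, tg = np.meshgrid(np.linspace(0.0, hs.max() + 2.0, 120),
                     np.linspace(0.0, tp.max() + 2.0, 120))
dens = np.zeros_like(hg)
for x, y in zip(hs, tp):
    dens += np.exp(-0.5 * (((hg - x) / h1) ** 2 + ((tg - y) / h2) ** 2))
dens /= n * 2.0 * np.pi * h1 * h2

# Contour the density; in practice the levels are tuned so the maximum Hs on
# each contour equals the independent return-period Hs, and unphysically steep
# Hs/Tp combinations are excluded
plt.scatter(hs, tp, s=3, alpha=0.3)
plt.contour(hg, tg, dens, levels=8)
plt.xlabel("Hs (m)")
plt.ylabel("Tp (s)")
plt.show()
```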

Similar contours can be calculated for other pairs of parameters such as wave height and wind speed.

5.4 Inverse FORM

Once equal probability contours of environmental parameters are calculated, inverse first-order reliability methods (IFORM) provide a general procedure for finding design conditions. The contours are searched for the point which maximizes some response function such as the roll of a floating system. The environmental parameters at that point then become the design point. Winterstein et al [3.112] give a good explanation of the method.

Inverse FORM maps the environmental contours to standard normal distributions. If the probabilities are expressed as annual extremes and the return period of interest is 100 years, then the probability of exceeding the 100-year value is p = 1/100, and the reliability index is

$\beta = \Phi^{-1}(1-p)$ ,
(3.35)

where Φ is the standard normal cumulative distribution function. The design contour expressed in standard normal variables is then defined by

$\beta^{2} = \sum_i x_i^{2}$ .
(3.36)

In two dimensions, (3.36) defines a circle. In three dimensions it is a sphere. Each point on the surface has the same probability. The theory extends to hyperspheres in higher dimensions, although the search for the maximum response then becomes much more time consuming.

Suppose the actual environmental parameters are wave height H and period T. Writing the joint distribution in terms of the marginal distribution of H and the conditional distribution of T given H, each point on the contour given by (3.36) corresponds to the physical parameters

$H = F_H^{-1}\left(\Phi(x_1)\right) , \quad T = F_{T|H}^{-1}\left(\Phi(x_2)\right) ,$
(3.37)

where $F_H^{-1}$ is the inverse wave height distribution and $F_{T|H}^{-1}$ is the inverse distribution of T given H.
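The following sketch traces a 100-y contour using (3.35)–(3.37); the Weibull marginal for H and the lognormal conditional for T given H are assumptions chosen only to make the example self-contained, not models from this chapter.

```python
import numpy as np
from scipy import stats

# Reliability index for the 100-y condition, (3.35), using annual extremes
p = 1.0 / 100.0
beta = stats.norm.ppf(1.0 - p)

# Points on the circle beta^2 = x1^2 + x2^2 of (3.36)
theta = np.linspace(0.0, 2.0 * np.pi, 361)
x1 = beta * np.cos(theta)
x2 = beta * np.sin(theta)

# Assumed long-term models (illustrative only):
#   H   ~ Weibull(shape 1.5, scale 2.5 m)
#   T|H ~ lognormal whose median grows with sqrt(H)
H = stats.weibull_min.ppf(stats.norm.cdf(x1), c=1.5, scale=2.5)
mu_t = np.log(4.0 * np.sqrt(np.maximum(H, 0.1)))   # guard against H near zero
T = stats.lognorm.ppf(stats.norm.cdf(x2), s=0.15, scale=np.exp(mu_t))

# Each (H, T) pair lies on the 100-y environmental contour, (3.37); the design
# point is the pair that maximizes the response (e.g., roll) of the structure.
i_max = np.argmax(H)
print(f"beta = {beta:.2f}")
print(f"largest Hs on the contour: {H[i_max]:.1f} m at Tp = {T[i_max]:.1f} s")
```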

6 Operational Criteria

The operating conditions are the metocean conditions in which a facility or vessel should be capable of achieving its routine functions. Typical products used to quantify operational conditions include a cumulative probability distribution of wave height or a table of wind speed persistence. These products are used in estimating the fatigue lives of components for which fatigue is a concern. In contrast, extreme conditions occur rarely and are often generated by storms of some kind. During extreme conditions, normal operations are usually suspended – the vessel is slowed, oil production may be stopped, wind turbines feathered, etc. The first two sections below describe several common methods for quantifying operational criteria for variables that have at least one highly correlated associated variable, e. g., wind speed and direction. The methods are often also used when there is more than one correlated associated variable, as with waves (wave height, wave period, and wave direction).

6.1 Probability Distributions

The simplest method for quantifying a variable with a single dimension like wave height is to provide a table or plot of the probability distribution (histogram), as shown in Fig. 3.8a. However, since virtually all metocean variables are vectors, such tables or graphs are typically expressed as joint probability distribution tables, as shown in Fig. 3.8b for the case of wind speed and direction. In this table, each cell shows the probability of occurrence of wind speed for a given wind direction. A wind rose is another way of graphically displaying a vector like wind velocity (Fig. 3.8c). In this case, each bar shows the percent occurrence of the speed in discrete bins along the indicated heading. All three images are based on the same dataset, so Fig. 3.8a is essentially a plot of the first column of the table on the x-axis versus the tenth column on the y-axis, while Fig. 3.8c shows the percent occurrence of the speed (binned in 5 m s⁻¹ increments) by direction.

Fig. 3.8a–c Samples of typical methods of displaying the probability distribution: (a) probability distribution of wind speed, (b) tabular marginal distribution of wind speed and direction, and (c) wind rose of wind velocity. All three panels use the same dataset
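A short sketch of how a joint speed–direction occurrence table of the kind shown in Fig. 3.8b can be tabulated from a wind record; the synthetic hourly record and the 30° by 5 m s⁻¹ bins are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic hourly wind record: speed (m/s) and direction (degrees)
speed = rng.weibull(2.0, 8760) * 8.0
direction = rng.uniform(0.0, 360.0, 8760)

# Joint occurrence table: rows = 30-degree direction sectors,
# columns = 5 m/s speed bins, values = percent occurrence
dir_bins = np.arange(0.0, 361.0, 30.0)
spd_bins = np.arange(0.0, 40.1, 5.0)
counts, _, _ = np.histogram2d(direction, speed, bins=[dir_bins, spd_bins])
percent = 100.0 * counts / counts.sum()

for i, row in enumerate(percent):
    label = f"{dir_bins[i]:3.0f}-{dir_bins[i + 1]:3.0f} deg"
    print(label, " ".join(f"{p:5.1f}" for p in row))

# Summing across the direction rows gives the marginal speed distribution of
# Fig. 3.8a; plotting each row as a bar along its heading gives the wind rose
# of Fig. 3.8c
print("marginal speed distribution (%):", np.round(percent.sum(axis=0), 1))
```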

One of the challenges in clearly quantifying the operational environment is dealing with variables that have multiple, highly correlated associated variables. The section on currents below describes several ways of dealing with this issue.

Waves are typically described by pairing the associated variables. For instance, one can generate joint probability distribution tables of wave height versus wave period by direction sector. Alternatively, one could generate tables of wave height versus heading by period bin, e. g., a table for all wave periods between 10 and 12 s.

Table 3.1 Calm persistence for 1-y time series of wind gusts

Many facilities are sensitive to wave fatigue, so designers need the probability distribution of the wave spectra. For regions dominated by single-mode spectra this is straightforward – one can use the tables described in the previous paragraph in conjunction with parametric spectra like JONSWAP. In other words, knowing the probability of a discrete bin of wave height, period, and direction, one can calculate the corresponding spectra at that probability level. Further refinements may be needed if the spectral width and/or directional spreading vary in the region.

Many regions of the world such as Brazil experience sea states characterized by spectra that have multiple peaks, several of which contain substantial energy. For floating facilities, the weaker secondary or tertiary peaks may be close to resonance of the facility and thus cause far larger forces than the primary peak. In these situations, it can be very unconservative to utilize single-peak spectra like JONSWAP. Perhaps the most widely used dual-peaked spectrum is that of Ochi–Hubble [3.17].

6.2 Persistence

Certain types of offshore operations require that the metocean environment not exceed a threshold for a specific period of time. If it does, the operation is suspended and there is downtime. While estimates of downtime can be made using the probability distributions described in the previous section, such an approach is an oversimplification that can distort the perceived risk. A more accurate method is to scan a time series of the variable of interest and characterize the periods when the variable lies above or below a specified threshold. For example, consider the case where a wind-sensitive operation can be completed in 12 h, provided that the wind never exceeds 7.5 m s⁻¹. A simple frequency analysis shows that winds at this site exceed 7.5 m s⁻¹ nearly 60 % of the time, which at first glance might be discouraging. However, a persistence analysis of the events below 7.5 m s⁻¹ (Table 3.1) indicates that there were 68 calm events in which the wind was less than 7.5 m s⁻¹ and all of them lasted more than 12 h (minimum 0.54 days). A closer look at the events exceeding 7.5 m s⁻¹ (Table 3.2) shows that when the winds did exceed 7.5 m s⁻¹, the events lasted an average of 3.08 days and roughly 85 % (mean + standard deviation; 3.08 + 6.01) of these events lasted less than 9.09 days.

Table 3.2 Storm persistence for a 1-y time series of wind gusts

Thus by looking at persistence one could conclude that there is an expected downtime of about 3 days for the operation, which is a lot less onerous than might be concluded from looking at the 60 % occurrence rate based on the frequency analysis.
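A minimal sketch of such a persistence scan, applied to a synthetic hourly wind record with a 7.5 m s⁻¹ threshold; the record is artificial, so the counts will not match Tables 3.1 and 3.2.

```python
import numpy as np

def persistence(series, threshold, dt_hours=1.0):
    """Return durations (days) of runs below (calm) and above (storm) a threshold."""
    below = series < threshold
    change = np.flatnonzero(np.diff(below.astype(int))) + 1   # state-change indices
    edges = np.concatenate(([0], change, [below.size]))
    calm, storm = [], []
    for start, stop in zip(edges[:-1], edges[1:]):
        duration = (stop - start) * dt_hours / 24.0
        (calm if below[start] else storm).append(duration)
    return np.array(calm), np.array(storm)

# Synthetic hourly wind record (m/s); a real analysis would use a measured or
# hindcast time series
rng = np.random.default_rng(5)
hours = np.arange(8760)
wind = 8.0 + 3.0 * np.sin(2.0 * np.pi * hours / 72.0) + rng.normal(0.0, 1.5, hours.size)

calm, storm = persistence(wind, threshold=7.5)
print(f"fraction of time above 7.5 m/s: {np.mean(wind >= 7.5):.0%}")
print(f"{calm.size} calm events, mean {calm.mean():.2f} d, shortest {calm.min():.2f} d")
print(f"{storm.size} storm events, mean {storm.mean():.2f} d")
print(f"calm events long enough for a 12 h operation: {np.sum(calm >= 0.5)}")
```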

While persistence analysis can provide valuable insights, it cannot easily incorporate multiple variables. This is especially limiting for floating systems, which are often dependent on wave height, period, direction, etc. For these cases, numerical simulations using a vessel response function are often preferred, e. g., Beamsley et al [3.113].

6.3 Currents

Fatigue damage caused by currents is an important design consideration for oil drilling and production risers in deep water. Deep water current profiles have complicated shapes, and thousands of profiles are often now available from models or measurements. All of that information must be condensed into a manageable number of cases for analysis. Prevosto et al [3.114] discuss three methods for doing this and compare the results of using them with a full analysis of all profiles.

Empirical orthogonal functions (EOF) provide a method for capturing the important characteristics of current profiles in a few variables. Forristall and Cooper [3.115] outline the method and give examples. Singular value decomposition permits any matrix A to be decomposed as

$A_{ij} = \sum_{k=1}^{N} w_k U_{ik} V_{jk}$ .
(3.38)

Each current profile is written as a row in matrix A, and each column represents the time series at one depth. The columns of V are called the EOFs. Each EOF is a vector with a value at each depth in the original data; there are as many functions as there are depths. They play the same role as cosine waves in a Fourier analysis. The weights w_k (the diagonal elements of the matrix W in the usual SVD notation) are the magnitudes of the EOF modes and give the relative importance of the modes. The matrix U gives the amplitudes of the modes in each current profile: there is one row in U for each profile, giving the amplitudes of each mode at that time.

As it stands, (3.38) is not a more efficient representation of the data. The gain in efficiency comes from the fact that the magnitudes of the first few modes are often much larger than the rest. A good representation of the data can then come by summing over many fewer than N modes. The amplitudes of those modes can then fill a manageable scatter diagram.
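The following sketch applies numpy's singular value decomposition to a matrix of synthetic current profiles, checks how much variance the leading modes capture, and reconstructs the profiles from a truncated set of modes; the profile shapes and noise levels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic current profiles: 2000 times x 40 depth levels, built from two
# vertical shapes plus noise so that a few EOF modes dominate
z = np.linspace(0.0, 1.0, 40)
shape1 = np.exp(-3.0 * z)                    # surface-intensified shape
shape2 = np.cos(np.pi * z)                   # first baroclinic-like shape
A = (rng.normal(0.5, 0.2, (2000, 1)) * shape1
     + rng.normal(0.0, 0.1, (2000, 1)) * shape2
     + rng.normal(0.0, 0.02, (2000, 40)))

# Singular value decomposition A = U diag(w) V^T, as in (3.38)
U, w, Vt = np.linalg.svd(A, full_matrices=False)

variance = w**2 / np.sum(w**2)
print("fraction of variance in the first 3 modes:", np.round(variance[:3], 3))

# Truncated reconstruction using only the leading k modes
k = 2
A_k = (U[:, :k] * w[:k]) @ Vt[:k, :]
rel_rms = np.sqrt(np.mean((A - A_k) ** 2) / np.mean(A**2))
print(f"relative RMS error of the {k}-mode reconstruction: {rel_rms:.3f}")
# The rows of Vt are the EOFs; the first k columns of U scaled by w give the
# mode amplitudes that can be binned into a manageable scatter diagram.
```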

There are, however, locations where a few EOF modes fail to describe all the dominant characteristics of the current profiles. The characteristic current profile (CPC) method was developed by Jeans et al [3.116] to work with those cases. For each current velocity time series, a number of possible states are defined at each selected depth level, and possible characteristic profiles are constructed from every permutation of these states. The number of measured profiles corresponding to each of these possible characteristic profiles is then counted and percentage occurrence values derived. The reduction in the number of profile shapes is accomplished by selecting a relatively small number of depth levels.

Self-organizing maps (SOM) are useful for categorizing current profiles. The SOM process begins with a two-component EOF analysis. Then, a nonlinear cluster analysis groups the thousands of current profiles into a smaller number of clusters [3.117]. The EOF amplitudes are varied to produce a two-dimensional array of current profiles. Each original profile is assigned to the EOF profile that it best matches. The EOF profiles are modified by taking weighted averages of the neighboring profiles in the grid, and the original profiles are then reassigned to the modified profiles that they best match. This process is iterated until the sum of differences between the SOM profiles and the original profiles is minimized. If the array of profiles is small, there can be a lot of variability around some of the weaker SOM profiles; the variability decreases when more profiles are used.

Prevosto et al [3.114] found that using a few hundred profiles calculated by one of these methods gave good accuracy in fatigue damage calculations.

7 Extreme Criteria

7.1 Risk and Reliability

Metocean design specifications should be set considering the risk and cost of failure. The risk tolerance is different for structures that are not normally manned, or that are evacuated before severe storm conditions, than it is for structures that are manned and not evacuated before severe storms. Gulf of Mexico structures are evacuated upon the approach of a hurricane; North Sea structures remain manned during frequent severe winter storms. For structures that are unmanned or evacuated, the risk calculation is involved but conceptually straightforward: the cost of strengthening the structure is balanced against the monetary cost of structural damage or failure. The cost includes not only repairing or replacing the structure, but sometimes also lost production, pollution-related costs, and damage to corporate image. These costs can be an order of magnitude greater than the cost of replacement.

The failure rate is found by calculating the ultimate strength of the structure and comparing it to the metocean loading at different probability levels. The cost of strengthening the structure is then added to the cost of failure after strengthening. If the total cost is lower, designing to a lower probability of failure is economically justified. For standard steel jacket structures, an annual failure rate near 10⁻³ is generally appropriate. This is consistent with the normal practice of designing for a 100-y storm because steel jackets have considerable reserve strength beyond the first yielding of a member.

Establishing an appropriate failure rate for a manned structure is conceptually more difficult because no one wants to put a price tag on human life. Rational risk levels can still be set by considering risk levels in other industries and risks on offshore structures due to causes other than structural failure. Those risks include travel, explosions, collisions, and falls. These considerations indicate that the individual risk per annum (IRPA) should be reduced to a level below 10⁻³. An IRPA lower than 10⁻⁶ is considered negligible. However, the risk should be reduced below 10⁻³ to a level that is as low as reasonably practicable (ALARP). Measures to reduce IRPA should be examined and implemented until the cost of the upgrade becomes grossly disproportionate to the benefit obtained. Efthymiou et al [3.118] discuss the ALARP principle in detail. The cost of structural strengthening should be weighed against the cost of lowering other risks. Manned structures in the North Sea are now usually designed for a 10⁻⁴ annual risk of failure. Providing criteria with such a low probability is a special challenge to the metocean specialist, who must extrapolate conditions far beyond experience levels.

7.2 The Historical Method

The traditional way of estimating metocean extreme values is extrapolation from historical data. The data generally come from hindcasts rather than measurements so that more years of data are available. However, even hindcast records are short compared to 1000 or 10000 years, so extrapolation using extreme value distributions is required.

Extreme value theory assumes a time series of independent events, so the first step is to choose those events. Generally, this is done by finding the peak values over a threshold (POT). The peaks are sorted in ascending order, and their probability is plotted against their magnitude. There are then many choices of extreme value functional form and of methods for fitting it to the data [3.119]. In the limit as the number of points tends to infinity, it can be shown that the extremes follow the generalized Pareto distribution

$F(y) = 1 - \left(1 + \xi y / \sigma\right)^{-1/\xi}$ .
(3.39)

The parameter ξ controls the shape of the distribution, giving a heavy tail if ξ > 0 and a finite upper limit if ξ < 0. In practice, we do not know whether the data extends far enough into the tail of the distribution for the limit to hold, and small changes in the data can influence whether an upper limit is predicted. For these reasons, engineers often choose to fit peaks to the Weibull distribution

$F(y) = 1 - \exp\left[-\left(\dfrac{y - \gamma}{\alpha}\right)^{\beta}\right]$ .
(3.40)

The commonly used Weibull plotting position is

$P_i = \dfrac{i}{N+1}$ ,
(3.41)

but Goda [3.120] showed that the unbiased plotting position is actually

$P_i = \dfrac{i + 0.5/\sqrt{\beta} - 0.6}{N + 0.2 + 0.23/\sqrt{\beta}}$ .
(3.42)

Gibson et al [3.121] tested various methods of fitting (3.40) to data simulated from a known Weibull distribution. They found that both least squares and maximum likelihood fits gave good results when

1. The unbiased plotting position in (3.42) was used.
2. The position parameter γ was set to the threshold value.
3. All of the data were used instead of binning the data into ranges of y.
4. The fit was made of y to $\alpha\left[-\ln(1-P)\right]^{1/\beta} + \gamma$ (a minimal numerical sketch of this recipe follows the list).
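The sketch below follows the above recipe on synthetic peaks drawn from a known Weibull distribution, using the plotting position of (3.42), a position parameter fixed at the threshold, and a simple grid search over the shape parameter; the threshold, record length, and event rate are illustrative, and the grid search is only one convenient way to carry out the fit.

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic storm peaks (e.g., Hs maxima, m) above a 6 m threshold, drawn from
# a known Weibull so the recovered parameters can be sanity-checked
gamma_true, alpha_true, beta_true = 6.0, 1.2, 1.4
N = 200
peaks = np.sort(gamma_true + alpha_true * rng.weibull(beta_true, N))

threshold = 6.0                     # position parameter fixed at the threshold
best = None
for beta in np.arange(0.8, 2.6, 0.01):         # simple grid search on the shape
    i = np.arange(1, N + 1)
    # Unbiased plotting position of (3.42)
    P = (i + 0.5 / np.sqrt(beta) - 0.6) / (N + 0.2 + 0.23 / np.sqrt(beta))
    x = (-np.log(1.0 - P)) ** (1.0 / beta)
    # Least-squares slope of (peaks - threshold) on x, i.e., the scale alpha
    alpha = np.sum(x * (peaks - threshold)) / np.sum(x * x)
    rss = np.sum((peaks - threshold - alpha * x) ** 2)
    if best is None or rss < best[0]:
        best = (rss, alpha, beta)

_, alpha_fit, beta_fit = best
print(f"fitted alpha = {alpha_fit:.2f}, beta = {beta_fit:.2f}")

# 100-y value assuming the 200 peaks represent a 40-y hindcast (5 peaks/year)
events_per_year = N / 40.0
P_100 = 1.0 - 1.0 / (100.0 * events_per_year)
h_100 = threshold + alpha_fit * (-np.log(1.0 - P_100)) ** (1.0 / beta_fit)
print(f"100-y estimate: {h_100:.1f} m")
```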

There are numerous weaknesses with the historical approach. First and foremost, historical datasets are often short relative to the probability level needed for design criteria. This is especially a problem for tropical storms, which are spatially small and infrequent. Toro et al [3.122] show that the 100-y criteria for a particular site in regions like the Gulf of Mexico are most heavily influenced by how closely a few strong storms passed to the west of the site. As a consequence, large differences in the 100-y design condition are often observed at sites separated by only 50 km in deep water far from the coast, where there is no physical basis to believe the n-y condition would be any worse at one site than the other. These unrealistic spatial gradients are even more apparent at shorter return periods (e. g., 10 and 25-y) in basins like northern Australia and the Gulf of Thailand, where the reliable historical database is shorter or the storm frequency is lower than in the Gulf of Mexico. The spatial gradients in n-y criteria are largely a result of undersampling: there are simply not enough large storms in the database. Furthermore, it is reasonable to expect that if the database could be extended over a longer period of time, a strong storm would eventually pass close to every site.

To counter this undersampling, metocean experts often pool nearby sites, as described in Cooper et al [3.123]. Pooling combines all storm peaks from several nearby sites into a single probability distribution; in essence, it adds synthesized storms by shifting the tracks of the historical storms. When pooling, the probability distributions from each site are assumed to be statistically independent, even though they are not. However, Toro et al [3.122] show that this assumed independence does not lead to substantial errors for return intervals of up to roughly 100 years for the Gulf of Mexico hurricane population, because the 100-y condition is strongly dependent on the track crossing distance and much less dependent on the other particulars of the storm. Unfortunately, for rarer return periods beyond a few hundred years, pooling starts to yield increasingly biased results because the longer return intervals become more sensitive to the intensity of the strongest storms, and pooling does nothing to increase that population of storms. When pooling, it is also important not to extend the averaging grid too widely, or else one can suppress real spatial gradients like those suspected to exist in the Gulf of Mexico hurricane patterns, e. g., Cooper [3.124].

Another limitation with the historical method is that it cannot easily be used to develop criteria involving the rare combination of two relatively independent events. A case in point is the superposition of the Loop Current (or one of its detached eddies) and a hurricane. Recent simulations by Cooper and Stear [3.125] suggest these events happen roughly every 4 years. When they do, a number of potential nonlinear interactions can occur, such as wave focusing [3.126], amplification of mid-water currents [3.13], and intensification of the hurricane [3.124]. The first two phenomena depend strongly on the distance between the hurricane and the Loop, and there are virtually no comprehensive measurements of the wave and current fields during joint events. Hence, the historical record of such events is essentially missing.

7.3 Synthetic Storm Modeling

The previous section pointed out two major weaknesses of the historical approach. To address these weaknesses, researchers have looked at various means of generating so-called synthetic storms; that is, storms that did not actually occur but could have occurred. Georgiou et al [3.127] describe one of the first efforts. They first fit standard storm parameters like intensity and radius to standard distribution functions (e. g., lognormal). They then drew randomly from these distributions to construct synthetic storms whose probability was calculated from the underlying distributions of the storm parameters. Once the combination of storm parameters was selected, these were input into a standard parametric wind model that could calculate the detailed wind field along the historical tracks. Because the probability distribution of each hurricane parameter is at most weakly dependent on the others (e. g., the radius to maximum wind is only weakly correlated with intensity), the overall probability of a given synthetic storm scales roughly as the product of the probabilities of the individual parameters. Hence the method can generate rare (low probability) synthetic storms using a combination of storm parameters that are well away from the tails of their respective probability distributions and hence have relatively low uncertainty.

While the early models went a long way in reducing statistical uncertainty of the longer return period estimates, they continued to utilize historical tracks to estimate the frequency of storm passage and they assumed that the change in storm parameters was independent of that track. This latter assumption is clearly problematic in places like the Gulf of Mexico, where the warm waters of the Loop Current likely affect storm intensity as do nearby land masses. To partially address these limitations, Vickery et al [3.128] used statistical properties of track heading, track speed, and intensity, combined with a regression model to generate synthetic storms. This approach allows for the generation of thousands of years of storms with low statistical uncertainty. Emanuel et al [3.129] investigated stochastic techniques to generate many synthetic storm tracks and a deterministic model to calculate storm intensity along each of those tracks. They investigated two track models. Their first model was conceptually similar to that of Vickery et al [3.128], while their second track generation method accounted for large-scale weather, including vertical shear and steering flow. Once Emanuel et al [3.129] had constructed the tracks, they used a deterministic model to calculate the parameters, including intensity and radius. Vickery et al [3.130] used a track model that accounts for large-scale weather but in a more deterministic fashion than Emanuel et al [3.129], by using NCEP reanalysis. To calculate the storm parameters they used a statistical intensity model that incorporated atmospheric inputs, much as Emanuel et al [3.129] did, but Vickery et al [3.130] also included ocean temperature feedback.

Perhaps the biggest challenge in using these models is determining whether some of the more extreme synthetic storms are realistic. The next section addresses this point.

7.4 Modeling Versus Measurements

In an ideal world, the ocean would be covered with measurement sites that have operated for centuries. In the real world, the metocean specialist is often faced with developing criteria where there are no measurements at the site, or, if there are, they may only be a year or less in duration. Extrapolating such a short record to return intervals of a few decades or more will usually result in large statistical uncertainty at best and large biases at worst. On the other hand, numerical model hindcasts spanning many decades now cover most of the world, as discussed in Sect. 3.4. Depending on the horizontal resolution of the model grid, the nearest model element to the site of interest is often only a few kilometers away. Thus the best strategy for developing extreme design criteria at a site is often to use available measurements to calibrate a hindcast model that has been run for several decades, rather than to do an extreme analysis on the measurements themselves.

7.5 Accounting for Physical Limits

Whether one uses the historical method or synthetic modeling to estimate extremes, some form of extrapolation is almost always being used to estimate criteria well beyond any observed storms. This raises the concern that the method may generate values that cannot be physically attained in the real world. Perhaps the clearest example of this danger is the case where a metocean specialist tries to fit a Weibull distribution (historical method) to waves measured during a 2-y measurement program in a water depth where wave breaking can occur in the stronger storms. Fitting this kind of data with a classic historical method can yield 100-y estimates that are unrealistically high because those waves would have broken in the real world. The problem, of course, is that the extreme distributions are purely statistical functions with no physical basis.

One solution to this potential problem is to use numerical models that include the necessary physics to account for the limits. This can be a practical and straightforward solution for the example of breaking waves cited above. In that case, hindcasting storms over many decades using a wave model that accounts for breaking is usually a quick and effective solution.

Regrettably, incorporating physical limits into numerical models is not straightforward when the physics are not well understood. A case in point is calculating extremely rare hurricane conditions, say the 10000-year significant wave height. Cardone and Cox [3.131] applied a third-generation wave model to strong storms and found that the wave heights trended toward an asymptotic limit. However, it is debatable whether the asymptotic limit is generated by real physical limits or artificial ones imposed by the model equations, and there is no way to be sure, since no wave and wind measurements were recorded during the events considered by Cardone and Cox. Another approach, used by Vickery et al [3.130], applies the concept of the maximum probable storm intensity. Emanuel [3.132] and others provide evidence that such limits appear to exist.

7.6 Seasonality

Fixed offshore facilities are designed for year-round conditions, but there are some instances where metocean conditions are needed for seasonal construction or drilling. If the operation is, say, planned for only the three months of summer, then only the metocean conditions for those months need to be considered. More specifically, if a 1 %/y risk of failure is desired (an expected failure once every 100 years), then the extreme values of metocean conditions in 100 years of summers should be calculated.

However, caution must be exercised when considering seasonality for a drilling rig or operation that will continue year-round. To illustrate this point, consider the question of how one might combine seasonal criteria to calculate the annual survival rate. Assume that the target reliability is an average 99 % survival rate (1 % failure rate) each year. One might be tempted to use the 99 % probability value for each season, but more careful consideration reveals that this will badly overestimate the survivability. That is because the annual survival rate is given by the probability that the rig will survive the summer and the fall and the winter and the spring. It follows that if the extremes in each season are statistically independent, then the annual survival probability is given by the product of the seasonal probabilities, or 0.99⁴ ≈ 0.96, i. e., an annual failure rate of about 4 % rather than the 1 % target. An obvious solution to this shortfall is to use the 0.9975 probability for each of the four seasons, which yields an annual survival rate of 0.9975⁴ ≈ 0.99, or 99 %.
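A quick check of this arithmetic, assuming four statistically independent seasons:

```python
# Seasonal survival probabilities needed to meet a 99 % annual target,
# assuming the four seasonal extremes are statistically independent
target_annual = 0.99
per_season = target_annual ** 0.25
print(f"required per-season survival probability: {per_season:.4f}")    # ~0.9975
print(f"annual survival if 0.99 is used each season: {0.99 ** 4:.4f}")  # ~0.96
```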

7.7 Directionality

Directional metocean specifications are sometimes desired when a structure is considerably stronger or less prone to motion in some directions than others. The considerations in this case are similar to those for seasonal specifications, especially the concept that the total survival probability from all individual directions should not be significantly different from the omnidirectional survival probability. Using the same arguments given in the previous discussion on seasonality, it is clear that using the n-y metocean criteria in each direction bin will give a much lower survival probability than using the n-y omni-directional criteria [3.133].

The simplest way to ensure a reasonable result is to make the probabilities in all of the direction bins equal. So, for example, if the target annual survivability is 99 % and four direction bins are used, then the target survival probability in each of these four bins should be 99.75 %.

7.8 Combining Long and Short-Term Distributions

Estimating extreme values of individual wave and crest heights requires combining a long-term extreme value distribution with the short-term distributions discussed in Sect. 3.2.2. Borgman [3.134] showed how the maximum wave and crest height during a storm can be estimated by integrating the short-term wave and crest height distributions over the storm’s sea state history. Tucker and Pitt [3.135] give a thorough description of how the Borgman integral has been applied to the calculation of extreme wave heights. Forristall [3.136] validated these methods using long-time series of individual waves.

If the probability that the wave or crest height exceeds η is given by P(η), then the probability that the height will not exceed η in N waves is given by

$P(\eta_{\max} < \eta) = \left[1 - P(\eta)\right]^{N}$ .
(3.43)

For a sequence of records i = 1, …, k during a storm, the probability of non-exceedance becomes

$P(\eta_{\max} < \eta) = \prod_{i=1}^{k}\left[1 - P_i(\eta)\right]^{N_i}$ .
(3.44)

The calculations can be performed more accurately by taking the logarithm of (3.44) to give

$\log P(\eta_{\max} < \eta) = \sum_{i=1}^{k} N_i \log\left[1 - P_i(\eta)\right]$ .
(3.45)

Equation (3.45) is applied to calculate the expected maximum wave and crest height in each storm. The sets of maximum heights are then fit to extreme value distributions to determine the return period individual wave and crest heights.
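The sketch below evaluates (3.45) for a synthetic 36 h storm history, using the Rayleigh distribution for individual wave heights as the short-term model; the storm profile, the wave-period relation, and the choice of the Rayleigh distribution (rather than, say, a Forristall crest model) are simplifying assumptions for illustration only.

```python
import numpy as np

# Synthetic 36 h storm history: hourly significant wave height (m) and a
# simple wave-period relation giving the number of waves per hour
hs = np.concatenate([np.linspace(4.0, 12.0, 18), np.linspace(12.0, 5.0, 18)])
tz = 4.0 * np.sqrt(hs)                     # mean zero-crossing period, s
n_waves = 3600.0 / tz                      # waves in each one-hour sea state

def p_exceed(h, hs_i):
    # Rayleigh exceedance of individual wave height h in a sea state; a crest
    # model such as Forristall's would be used instead for crest heights
    return np.exp(-2.0 * (h / hs_i) ** 2)

h = np.linspace(5.0, 30.0, 500)
log_p_max = np.zeros_like(h)
for hs_i, n_i in zip(hs, n_waves):
    log_p_max += n_i * np.log1p(-p_exceed(h, hs_i))    # the summation of (3.45)

p_max = np.exp(log_p_max)                  # P(Hmax < h) for the whole storm
h_mp = h[np.argmax(np.diff(p_max))]        # crude most probable storm maximum
print(f"most probable maximum individual wave height: {h_mp:.1f} m")
```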

Fitting an extreme value distribution to the most probable maxima does not account for the fact that values higher than the most probable value can occur in any storm. Tromans and Vanderschuren [3.137] proposed a method for taking account of this short-term variability. If the probability distribution of the most probable maxima in a storm is P(Hmp) and the distribution of the maximum given Hmp is P(H|Hmp), then the distribution of the maximum in a single random storm is

$P(H|\mathrm{srs}) = \int P(H|H_{\mathrm{mp}})\, p(H_{\mathrm{mp}})\, \mathrm{d}H_{\mathrm{mp}}$ .
(3.46)

Tromans and Vanderschuren found that P(H|Hmp) was very similar from one storm to another, and that its mean could be described by the function

$P(H|H_{\mathrm{mp}}) = \exp\left\{-\exp\left[-\ln N\left(\left(\dfrac{H}{H_{\mathrm{mp}}}\right)^{2} - 1\right)\right]\right\}$ ,
(3.47)

where N is an equivalent number of waves in a storm. The value of ln N can be estimated from the short-term distributions in the historical storms. Values between ln N = 8 and ln N = 10 are typical, and the results are not very sensitive to the exact value. The result is to increase the estimates of extreme maximum wave and crest heights to about 5 % more than the most probable values.
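The following sketch evaluates the convolution (3.46) numerically, combining the short-term distribution (3.47) with an assumed Gumbel density for Hmp; the Gumbel parameters and the choice ln N = 9 are illustrative, not values from this chapter.

```python
import numpy as np

ln_n = 9.0                                   # equivalent number of waves, ln N

def p_short(h, h_mp):
    """Short-term distribution of the storm maximum given Hmp, (3.47)."""
    return np.exp(-np.exp(-ln_n * ((h / h_mp) ** 2 - 1.0)))

# Assumed Gumbel density for the most probable storm maxima Hmp (illustrative)
loc, scale = 10.0, 1.5
h_mp = np.linspace(5.0, 25.0, 400)
z = (h_mp - loc) / scale
pdf_mp = np.exp(-(z + np.exp(-z))) / scale
dh = h_mp[1] - h_mp[0]

# Distribution of the maximum in a single random storm, (3.46), by simple
# numerical integration over Hmp
h = np.linspace(5.0, 30.0, 300)
p_srs = np.array([np.sum(p_short(x, h_mp) * pdf_mp) * dh for x in h])
p_srs /= np.sum(pdf_mp) * dh                 # normalize the truncated density

# Value exceeded with 1 % probability, with and without short-term variability
h_with = np.interp(0.99, p_srs, h)
h_without = loc - scale * np.log(-np.log(0.99))
print(f"1 % exceedance: {h_with:.1f} m including short-term variability, "
      f"{h_without:.1f} m from the Hmp distribution alone")
```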

7.9 Rogue Waves

It is generally agreed that a rogue wave is one with a height greater than 2.2 times the significant wave height or a crest greater than 1.25 times the significant wave height. There have been many reports of such waves in the literature in the last few years. The best known is the Draupner wave, recorded in the North Sea on January 1, 1995 [3.138]. The crest height of this wave was 1.55 times the significant wave height. Unfortunately, very little is known about the instrumentation used for this measurement. The Andrea wave [3.139] is a much better documented case. It was also recorded in the North Sea, on November 9, 2007. Essentially the same wave was recorded by four laser altimeters. Analysis of the intensity of the return signals indicated that there was no sea spray at the wave crest. The height of the Andrea wave was 2.49 times the significant wave height, and the crest was 1.63 times the significant wave height.

The central question in the study of rogue waves is whether they can be explained as a statistical anomaly or whether they require a physical explanation different from second-order theory [3.140]. The statistical explanation for something like the Andrea wave is certainly a stretch: according to second-order statistics, its crest had a probability of 6 × 10⁻⁸. However, Christou and Ewans [3.141] did a careful study of over 10⁸ measured waves and found that the sample crest distribution was only slightly higher than predicted by second-order statistics.

Some processes that produce very high waves are understood reasonably well. Waves traveling into an opposing current can steepen and become much higher. Many ships have been damaged when they encountered such waves in the Agulhas Current south of Africa. Bottom features can refract waves, making them much larger in localized areas. Surfers are well aware of this phenomenon. The more difficult cases to explain are unusually high waves in deep water far from shore.

Theoretical attempts to explain rogue waves involve the integration of nonlinear equations that approximate the development of steep random waves [3.142]. All of these show a modulation of the wave envelope similar to the Benjamin–Feir instability observed in regular waves. The modulated waves alternate between series of higher and lower waves than predicted by Gaussian statistics. The kurtosis of the wave trace becomes greater than 3.0, and the extreme waves are higher than the Rayleigh distribution. Steep random unidirectional waves in laboratory basins often show this behavior. However, several numerical and laboratory studies, such as that by Toffoli et al [3.143], have shown that modulational instabilities are much less effective in producing large wave groups when the waves are spread.

Rogue waves remain an active area of research and it is too early to draw definite conclusions. Fortunately, rogue waves may not have a big influence on extreme wave heights for design. Almost by definition, they have a low probability. The probability that a rogue wave occurs during one of the sea states far out in the distribution of significant wave height is even lower. Haver [3.138] estimated the effect of adding rare rogue waves to the short term distribution and found that it had little effect on the risk of failure.

7.10 Extremely Rare Events

Designers now frequently request metocean criteria with return periods of 1000–10000 years. Deriving these values from a few years of measurements is difficult, if not impossible, to justify. Even deriving such rare events from historical hindcasts is problematic, since reliable historical databases rarely extend beyond 50 years. Synthetic modeling certainly holds the most promise for deriving rare events, but even with this technology it is difficult to overcome our ignorance of the physical limits which probably occur for many metocean phenomena. This issue is discussed in more detail in Sect. 3.7.5.

7.11 Quantifying Uncertainty

Uncertainty affecting the calculation of metocean extremes comes primarily from the noise and/or bias in the numerical models or measurements used to generate the peaks, and from the inability of the chosen extreme distribution to fit the peaks – what is often referred to as statistical uncertainty. The impacts of these two types of errors on extreme value uncertainty are discussed in more detail in Sect. 3.7.4.

Statisticians have extensively studied statistical uncertainty and developed numerous ways of quantifying it, as discussed in Tucker and Pitt [3.135]. If the input peaks come from a short time series and the extrapolation is lengthy (e. g., 2-y of measurements extrapolated to a 100-y return period), then the statistical error can be large.

One often sees fits to extreme distributions that show confidence limits that are based on the statistical uncertainty only. If the peaks are based on site-specific measurements, the statistical uncertainty is fairly representative, but if the peaks come from measurements some distance from the site or from models, then the statistical uncertainty is probably much smaller than the uncertainty from the input data source. Sections 3.4.1 to 3.4.3 can help quantify that error.

7.12 Stationarity

Nonstationarities can be thought of as low-frequency processes that have been sampled at far less than their Nyquist frequency. For example, if one has only a few months of data to analyze, then nonstationarities will arise from seasonal, annual, decadal, etc., time scales. Innumerable papers and books have been written on the topic, including a relatively recent one by Rao et al [3.144].

Issues regarding nonstationarities have always plagued metocean analysis. The challenge is perhaps greatest when dealing with the calculation of extreme values (e. g., the 100-y wind speed), where stationarity of the underlying time series is assumed in almost any analysis method and nonstationarities in the underlying dataset will tend to be amplified. For storm extremes, important sources of nonstationarity can come from natural oscillations in the atmosphere like the North Atlantic Oscillation or El Nino, which can cause substantial variations in storm severity over periods of several decades [3.145]. In theory, the obvious solution is to include many decades of historical storms in the extreme value analysis, but such long time series are available in only a few regions of the world, and even there, data quality from the older decades may be problematic and introduce other forms of bias [3.146].

Global warming is introducing strong nonstationarities in many variables, the most obvious being atmospheric temperature. Projections from the IPCC (Intergovernmental Panel on Climate Change) [3.147] show that these nonstationarities or trends will increase rapidly over the coming decades and for long-lived facilities, the changes will need to be considered. A starting point for estimating nonstationarities is to use projections from numerical models such as those provided by the IPCC [3.147]. However, these projections do not consider all variables of interest to engineers (e. g., waves) and use models with fairly large grid sizes which can fail to capture important regional variability. Fortunately, computer power is continuing to increase, so the limits on grid size are starting to recede, allowing for the development of regional nested models with smaller grid sizes [3.148].

8 Conclusions

The metocean environment controls many aspects of facility design and operation, so errors in quantifying metocean conditions can cascade through the design and operational decisions. Errors can result in damage and lost lives. Conversely, if the variables are overestimated, costs will be overestimated, perhaps to the point that the project becomes uneconomic and is never built.

Metocean criteria are typically broken into two categories: operating and extreme. The former involves quantification of metocean conditions in which the facility or vessel should be capable of achieving the routine functions of its primary purpose. In contrast, extreme conditions occur rarely and are often generated by episodic events (e. g., storms). Both categories may start with the same databases but the analysis techniques and final design specifications will differ substantially.

There are a host of sophisticated methods and tools that can be used to quantify the most important metocean variables that impact offshore facilities. We have suggested methods drawn largely from the offshore oil and gas industry but they are also generally applicable to other engineering applications involving the design and operation of vessels, coastal structures, offshore wind farms, navigational aids, coastal geomorphology, and pollution studies.

When developing a metocean design basis for a major project, the metocean engineer should first identify the variables of primary importance. This is because the sea and atmosphere are filled with complicated processes, many of which are site specific and poorly understood. If aggressive filtering is not undertaken, then too much time can be spent quantifying variables that make little difference to the design or operation of the facility. The first and best way to eliminate variables from investigation is to understand the basic responses of the particular facility – in other words, to answer the question: which metocean variables impact this facility most, and which have little or no impact?

Finally, it should be noted that the methods, tools, and databases cited in this chapter reflect a snapshot in time; they are continually being updated and improved. The reader should always consider these citations as a starting point and check the web and journals for updates before proceeding with the analysis.