
7.1 Recoverable Versus In-Situ Resources

The objective of the resource model is to predict the tonnage and grade that the beneficiation plant will receive at specified time intervals. This is true at all times in a mining operation: at the initial evaluation of the project, as part of pre-feasibility and feasibility studies, and in the context of long-term and short-term resource models in operating mines. The procedures for estimating and managing dilution need to be updated regularly to capture all the new information and experience collected as the deposit is being mined. A model that attempts to satisfy this requirement is called a “recoverable model” (David 1977; Journel and Huijbregts 1978; Rossi and Parker 1993).

A recoverable resource model is an estimate of the tonnage and grade of economic material above certain cutoffs, but can also include other geo-metallurgical and geo-mechanical characteristics that affect mill performance. Revenue is a function of grades, product prices, metallurgical recoveries, and operating costs such as mining, metallurgical, and general and administration (G&A) costs:

$$ \text{Revenue} = \text{Price} \times \text{Recovery} \times \text{Grade(s)} - \left( \text{Mining Cost} + \text{Metallurgical Costs} + \text{G\&A Costs} \right) $$
(7.1)

The grade for which revenue is nil is called the breakeven (or economic) cutoff grade. Depending on which costs are considered, different types of cutoffs are used. At the breakeven point, Revenue in Eq. 7.1 is zero, and the corresponding economic cutoff grade is

$$ \text{Economic Cutoff Grade} = \frac{\text{Mining} + \text{Metallurgical} + \text{G\&A Costs}}{\text{Price} \times \text{Recovery}} $$
(7.2)

Costs are usually expressed on a per unit basis, such as dollars per ton. The units used in the calculation have to be consistent, which often requires conversion factors.

Another important cutoff in an open pit mining operation is the marginal cutoff, similar to the economic cutoff except that the mining cost is not considered. This reflects the fact that, once mining reaches the material, it must be moved regardless; the mining cost must be spent and is therefore a sunk cost. The only decision is where to send the material: the mill, a stockpile, or the waste dump. The marginal cutoff is used, for example, in grade control, as discussed in Chap. 13.
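
As a simple illustration of Eqs. 7.1 and 7.2, the sketch below computes an economic and a marginal cutoff grade for a single metal. All prices, costs, recoveries, and the unit-conversion convention are hypothetical illustration values, not figures from any particular operation.

```python
# Sketch of Eqs. 7.1 and 7.2 for a single metal. All prices, costs, recoveries,
# and the unit conversion convention below are hypothetical illustration values.

def economic_cutoff(price, recovery, mining_cost, processing_cost, ga_cost):
    """Breakeven (economic) cutoff grade, Eq. 7.2: all costs divided by revenue per unit of grade."""
    return (mining_cost + processing_cost + ga_cost) / (price * recovery)

def marginal_cutoff(price, recovery, processing_cost, ga_cost):
    """Marginal cutoff: the mining cost is treated as sunk and is excluded."""
    return (processing_cost + ga_cost) / (price * recovery)

# Example: copper at 3.00 $/lb, converted to $ per tonne of contained metal so that
# grade can be expressed as a mass fraction; costs are in $ per tonne of rock.
price_per_tonne_cu = 3.00 * 2204.62   # ~6,614 $/t of contained Cu
recovery = 0.88                       # mill recovery, fraction
mining, processing, ga = 2.0, 9.0, 1.5

econ = economic_cutoff(price_per_tonne_cu, recovery, mining, processing, ga)
marg = marginal_cutoff(price_per_tonne_cu, recovery, processing, ga)
print(f"economic cutoff: {100 * econ:.3f} % Cu, marginal cutoff: {100 * marg:.3f} % Cu")
```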

Cutoff calculations become complex if there are several metals to consider, each with different metallurgical recoveries and costs. Also, there may be different mining costs associated with sending material to the mill, as opposed to the waste dumps or stockpiles. In the case of stockpiles, re-handling costs should also be considered. Finally, G&A costs are a mixture of costs, not all of them directly related to the operation. Mining companies have different policies for which costs to include in these calculations, decided on a project-by-project basis. For example, the company's headquarters corporate overhead may or may not be included. Each block must be valued separately considering all of the revenues and costs, and then blocks with positive total revenue are considered ore.

In what follows “cutoff” implies the economic cutoff described by Eq. 7.1 above, unless otherwise defined.

At a very early stage of the project the main concern is to determine whether the deposit contains enough mineralization to warrant further study and investment; very little may be known about the potential of the deposit to become an operating mine. Technical details and specifications for mine planning and metallurgy are required to estimate the tonnages and grades delivered to the mill, but these are not yet available. In this case, since the proportion of the mineralization that would be recovered is unknown, it is preferable to estimate "in-situ" resources.

Accounting for mine and mill considerations at the time of estimating resources is not yet universally accepted. The sources of dilution and ore loss are well known, but not easily quantifiable. Some practitioners prefer to calculate a model of mineralization without engineering constraints. Dilution then has to be added to the block model by the mine planning engineer, usually using global factors. In general, all resource models should be recoverable.

The differentiation between a recoverable resource model and a reserve model is based on the wording of the different Resource Classification Systems currently in use (see Sect. 12.3). The term “reserves” is used for material that has been reasonably proven to be minable with an economic benefit. This implies that a well-defined mine plan is in place, that metallurgical studies have proven that the ore is amenable to beneficiation, that there is a viable market for the product, and that there are no legal or environmental impediments for mine development. In addition, a reserve model may include some additional operational dilution not explicitly included in the recoverable resource model.

The available drill hole information represents a much smaller volume and scale than the volumes involved in mine planning and ore/waste selection. Drill holes are a few centimeters in diameter, and each sample typically represents between 10 and 50 kg of material. In contrast, a very selective open pit mine would consider mining units of 5 × 5 × 5 m (approximately 330 metric tons assuming a 2.65 t/m³ density), while the larger, massive deposits plan on units as large as 25 × 25 × 15 m (approximately 25,000 metric tons). Some underground mines can be more selective, but the volume of the planning unit is still orders of magnitude larger than that of the drill hole sample.

The volume of extraction is represented with a “Selective Mining Unit”, or SMU. The SMU is defined as the smallest volume that the operation can recover, and depends on the mining method, the equipment size, the data available at the time of selection and the selectivity characteristics of the operation. For convenience, it is generally represented as a rectangular block, even though mines never extract ore and waste as perfect parallelepipeds.

For open pit mines, the vertical dimension of the SMU is the bench height, although occasionally some mines operate on double or half benches. The lateral dimensions represent the minimum width of the extraction equipment, with consideration given to digging depth, the material's angle of repose, the equipment's maneuverability, and the information available to support grade estimates at short distances. For a massive electric shovel (Fig. 7.1) with a nominal loading capacity of 90,000 tons of material per day, the minimum width will be about 18–20 m. For such a large operation, the bench height is usually 15 m, and thus the SMU would be 20 × 20 × 15 m.

Fig. 7.1 Bucyrus SME 60 Shovel used at the large tonnage Escondida Cu Mine, Northern Chile (photo courtesy of BHP Billiton). Benches are 15 m high

If the equipment considered is a front-end loader (such as the one shown in Fig. 7.2), the width of the bucket varies between 5.6 and 6.2 m (depending on the model), so it is generally accepted that the minimum width for selectivity will be about 8–10 m. Typical bench height is 10 m, so that a common SMU size for this type of operation could be 10 × 10 × 10 m.

Fig. 7.2 Caterpillar 992 front-end loader used at Cerro Vanguardia's Osvaldo Diez vein. Cerro Vanguardia is a Gold-Silver deposit located in the Patagonia Region of Southern Argentina (photo courtesy of Cerro Vanguardia S.A.). Benches are 5 m high

These two examples assume that there would be sufficient grade control sampling and adequate grade control practices to estimate reliable values at the SMU scales mentioned. The SMU size could be larger for difficult deposits with poor grade control sampling. Conversely, the ore and waste may be separated by a sharp visual contact; in such cases, the equipment may be able to mine to the contacts with only 2 or 3 m of dilution/lost ore.

Underground mining methods vary widely in selectivity. They are often more selective than open pits, but there are significant exceptions, such as mines that use block or sublevel caving methods. In a traditional cut-and-fill operation, with 5 m lifts, the SMU depends on the geometry of the orebody, but usually is 5 × 5 × 5 m, assuming that the mine can separate ore and waste from the stope.

The definition of an SMU is convenient for block modeling, but does not realistically represent the extraction process: shovels and loaders do not load cubes! Moreover, individual SMUs cannot be selected independently although the concept of an SMU assumes free selection. The actual practice of ore and waste selection shows that the SMU concept is a convenient approximation. Mining along boundaries is generally more selective than the nominal SMU size for the mine, and typically an isolated SMU-size pod of waste or mineral will not be mined.

7.2 Types of Dilution and Ore Loss

There are several sources of dilution and ore loss. Dilution and ore loss are always closely linked, and references to dilution include both cases. The main sources of dilution may be classified into three different categories (Rossi 2002):

Internal Dilution or Change of Support is a consequence of predicting resources at a different volume than the original data (Parker 1980). The resource estimate requires a degree of averaging within blocks and is generally modeled using the volume-variance or change of support correction, as discussed in detail in the next section. This mixture of material necessarily includes high and low grade mineralization, which will be more significant if the mineralization is less continuous. Also, the larger the block size considered, the larger the amount of mixing of mineralization or internal dilution.

The photo in Fig. 7.3 is a hand specimen of typical Porphyry Cu mineralization, where, within the solid rock mass, high-grade veinlets of Chrysocolla (Cu mineralization) are seen. If this mineralization were sampled on a very fine scale, the dispersion of the Cu grades resulting from the laboratory assays could be represented by a distribution like the one shown in Fig. 7.4, top. If the sample volume taken were larger, there would be more mixing of material in any given sample, with the higher-grade veinlets being mixed with the lower grade material surrounding them. In this case, a distribution like the one shown in Fig. 7.4 (bottom) may be obtained.

Fig. 7.3 A hand-specimen approximately 3 inches in size showing typical Porphyry Cu mineralization (Chrysocolla in Type-D veinlet, courtesy of BHP Billiton)

Fig. 7.4 The point distribution above is corrected to a block (SMU) distribution below

Note that the means of the distributions are the same (grades are mass fractions and scale up linearly, so the overall average is maintained), but the standard deviation and coefficient of variation are smaller for the larger volume distribution. Also, the minimum and the maximum of the distribution are closer to the overall mean. There is also a general tendency for the larger-volume distribution to be more symmetric than the original distribution.

Since mineralization is not homogeneous, mixing of different grade material always occurs. This is true for all types of mineralization, and depends on the nature of the geologic events that produced the mineralization. The presence of mineralized veinlets, highly fractured zones or units, and more or less permeable lithologies impact the amount of internal dilution to be expected.

Geologic Contact Dilution is defined as the dilution and ore loss resulting from the extraction of material of different geologic characteristics. This type of dilution can often be accounted for when using sub-cells or partial blocks in the definition of the resource block model (Chap. 3): the grades and other characteristics of each geologic unit that comes into contact within each block can be averaged according to the proportions of each within the mining blocks of the model.

The impact and relative importance of this type of dilution depends on the geometry of the boundaries between geological units and the differences in grade between units. In high tonnage, massive base metal deposits the impact of geologic contact dilution will be small compared to deposits with complicated geometries, such as vein-type or skarn deposits, or a stratigraphically controlled deposit with significant folding and faulting. For a fixed block size, say an SMU, contact dilution can be characterized for individual geologic zones or estimation domains by the ratio of surface contact volume (SCV) to the overall extraction volume (V), SCV/V, measured as the volume of blocks containing geologic contacts relative to the overall volume of the unit. This unit-less factor provides an indication of how important contact dilution may be. A ratio of 0.05 or higher generally indicates high contact dilution, and is characteristic of vein-type, skarn, or thin, tabular deposits, while values less than 0.01 correspond to bulk tonnage, massive, or porphyry type deposits.
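
The SCV/V ratio can be tabulated directly from a block model that stores the proportion of each geologic unit in each block. The sketch below is one possible implementation; the column names and the toy model are hypothetical.

```python
# Sketch of the SCV/V contact-dilution ratio for one estimation domain: the volume of
# blocks that contain a geologic contact divided by the total volume of the domain.
# The column names and the toy block model below are hypothetical.
import pandas as pd

def contact_dilution_ratio(blocks: pd.DataFrame, unit_cols, block_volume: float) -> float:
    """blocks: one row per block; unit_cols: columns holding the proportion of each geologic unit."""
    props = blocks[unit_cols]
    # a block contains a contact if more than one geologic unit has a non-zero proportion
    has_contact = (props > 0).sum(axis=1) > 1
    scv = has_contact.sum() * block_volume    # surface contact volume
    total = len(blocks) * block_volume        # overall volume of the domain
    return scv / total

# Tiny hypothetical model: proportions of units A and B in five blocks
model = pd.DataFrame({"pA": [1.0, 0.7, 1.0, 0.2, 1.0],
                      "pB": [0.0, 0.3, 0.0, 0.8, 0.0]})
print(contact_dilution_ratio(model, ["pA", "pB"], block_volume=10 * 10 * 10))  # -> 0.4
```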

For massive deposits, contact dilution is generally a local issue, since the bulk of the tonnage will be mined away from contacts, and therefore its importance from a global resource model may be limited. Still, it can impact the positioning of a final pit wall or stope, as well as the corresponding volume of waste that needs to be removed to access the ore (mining strip ratios). It is a very different case for skarn-type and small, narrow tabular or vein-type deposits, where contact dilution may be the most consequential type of dilution.

Figure 7.5 shows a cross section of a lithology model for the Lince-Estefanía Cu deposit, with the corresponding block model with sub-cells overlaid on the view. Notice how the general stratigraphy is crosscut by intrusive dykes. Also notice that, by virtue of the relative high contact surface area to volume ratio, the impact of geologic contact dilution is likely to be significant. The contact dilution can be incorporated into the block model using two alternative but conceptually similar techniques:

Fig. 7.5 Sectional view of a deposit with a pseudo-stratigraphic control. The lithology units are represented by red (volcanic breccias) and blue (andesites), with cross cutting dykes (in purple). Blocks are 5 × 5 m and can be used for scale; the vertical extension shown is about 800 m. The block model (with sub-cells) is overlaid on the geologic model; supporting drill holes are not shown. Courtesy of Minera Michilla S.A., Chile

  1. The sub-cell method, as shown in Fig. 7.5, provides a better definition of the geologic contacts. As discussed in Chap. 3, these sub-cells are then re-blocked to the parent block size of the model to provide the diluted grades while maintaining the proportions of each geologic unit within each block.

  2. A direct calculation of the proportion (percentage) of each geologic unit within each block, storing the percentage of each unit within the block.

The average grade of the block is expressed as the proportion-weighted average of the grades of each individual geologic unit within the block:

$$ Z_{V}^{*}=\sum_{i=1}^{n} p_{i}\, z_{i}^{*} $$
(7.3)

where \(Z_{V}^{*}\) represents the block grade average, \(p_{i}\), \(i=1,\ldots,n\), represent the proportion of the total mass of each of the n geologic units that may be present in the block, and \(z_{i}^{*}\) represent the grade of each individual unit within the block.
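
A minimal sketch of Eq. 7.3 is given below; the unit proportions and grades used in the example are hypothetical.

```python
# Sketch of Eq. 7.3: block grade as the proportion-weighted average of the grades
# estimated for each geologic unit present in the block.
def diluted_block_grade(proportions, unit_grades):
    """proportions: mass fractions p_i of each unit in the block (should sum to 1);
    unit_grades: grade estimate z_i* of each unit within the block."""
    assert abs(sum(proportions) - 1.0) < 1e-6, "unit proportions must sum to 1"
    return sum(p * z for p, z in zip(proportions, unit_grades))

# Example: 70 % volcanic breccia at 1.2 % Cu, 30 % barren dyke at 0.05 % Cu (hypothetical values)
print(diluted_block_grade([0.7, 0.3], [1.2, 0.05]))   # -> 0.855 % Cu
```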

Another, less desirable option is to empirically introduce into the block model factors that penalize the grades of blocks at or near contacts, according to pre-specified criteria. This was done, for example, for one of the Escondida Mine's resource models. In this method, if a contact between a high grade zone and a waste zone passes through any given block, the grade of that block is downgraded arbitrarily. The limitations of this procedure are significant, since the factors applied are empirical and global, as opposed to diluting according to the locally estimated grades.

Another method that can be used to estimate dilution and ore loss due to geologic contacts is to draw ore envelopes around the mineralized zones, and then estimate an over-break, or additional volume for mining. This can be done on sections or benches, and provides an estimate of the total grade and tonnage of material that will be recovered. A similar method is also used by mining engineers to estimate operational dilution. The method is best suited for deposits with well-defined ore zones with hard boundaries, such as vein type or epithermal Au deposits.

Geologic contact dilution is quantified from the geologic model. Thus, the local accuracy of the contact dilution estimate depends on the quality of the geologic model.

Operational Mining Dilution includes dilution and ore loss that occurs at the time of mining. Mining equipment unavoidably mixes material, because the precision with which the equipment can follow a dig line is limited, even with Global Positioning Systems (GPS). If the ore/waste contacts correspond with the geologic contacts, operational and contact dilution is the same. More commonly, however, the contacts of ore and waste that occur at the time of mining are defined in economic terms, and they do not necessarily follow geologic contact zones.

One possible estimate of this type of dilution can be obtained by simple geometric calculations. Figure 7.6 illustrates the case of an open pit mine, where the dilution and ore loss is incorporated into the resources considering a specific bench height and assuming an angle of repose for the material. The total metal lost depends on the characteristics of the contact, including the grade of ore lost and the grade of the diluting material. A good reference for quantification of dilution for underground deposits from a mine planning perspective can be found in Pakalnis et al. (1995).

Fig. 7.6 Schematic of operational mining dilution and ore loss. Dilution and ore loss are represented for a bench height of 10 m and an angle of repose of broken ore of 45°. The overall volume of each is 125 m³ if a 10 × 10 × 10 m SMU is assumed

Another source of dilution and ore loss is blast heave and movement, which shifts the position of the material to be mined and complicates the modeled dig-lines. Significant research has been done in this area (Yang and Kavetsky 1990; Harris 1997; Zhang 1994), but to date few operations attempt to accurately quantify and account for blast heave.

Ore loss and dilution also occurs when the extracted material is transported to the wrong destination: waste sent to the mill, or ore sent to the dumps. Control equipment such as GPS and Truck Dispatch systems has reduced the frequency of this error, but the destination control problem persists and can be significant.

Sometimes it is important to distinguish between planned and unplanned dilution; there may be unexpected operational practices in the mine that are increasing dilution. In some operations, ore losses and dilution are accounted for using factors obtained from some degree of production reconciliation, applied globally to the resource model.

A well-planned geostatistical conditional simulation study, as discussed in detail in Chap. 10, can be used to help understand dilution and ore loss (Guardiano et al. 1995; GeoSystems International 1999). Such a conditional simulation study can address all three types of dilution.

7.3 Volume-Variance Correction

Internal dilution is sometimes modeled using geostatistical tools for volume-variance correction. The most common distribution shape change methods for volume-variance correction are the Affine Correction, the Indirect Lognormal, and the Discrete Gaussian methods. These methods correct a distribution of a grade attribute sampled at an initial support (often called the point scale distribution) into an SMU block distribution. These analytical methods are fast and generally applicable to small scale changes. Classical references on these methods include Journel and Huijbregts (1978) and Isaaks and Srivastava (1989).

The relationship between volume and variance is shown in Fig. 7.7. The variance decreases as the volume increases due to the averaging out of high and low values. The averaging is affected by the size and shape of the volume, the continuity of the variable, and the averaging process. For most variables in mining, since they average arithmetically, the mean does not change as the volume increases and the variance of the distribution decreases. There are exceptions, however, mostly when considering some geotechnical and metallurgical performance variables.

Fig. 7.7 Schematic showing volume-variance relations for original data, SMU-sized distribution, and a larger panel distribution

The point distribution of an attribute will have a larger variance than the block distribution of the same attribute. The corrections described in this section apply to the distribution of samples within a chosen estimation domain. The goal is to take the representative distribution of point scale data and infer a global block or SMU distribution.

The traditional variance defined in Chap. 2, the expected squared difference of the samples with respect to the overall mean, implicitly refers to a single support size (the samples). A more general Dispersion Variance is defined as

$$ D^{2}(v,V)=\sigma^{2}(v,V)=\frac{1}{n}\sum_{i=1}^{n}\left( z_{i,v}-m_{V} \right)^{2} $$
(7.4)

where v represents a smaller support, such as the sample size, V represents a larger support, such as the SMU-sized block or the stationary population (the deposit), \(z_{i,v}\) are the values at support v within V, and \(m_{V}\) is the mean at the larger support V.

The dispersion variance quantifies the reduction in variance for specific increases in volume. The dispersion variance is the same expected squared difference as the variance defined before, except that it is related to specific support sizes for the data and the mean.

The dispersion variance can be expressed as a function of average covariances or variograms, see Isaaks and Srivastava (1989) or Journel and Huijbregts (1978):

$$ {{D}^{2}}(v,V)=\overline{C}(v,v)-\overline{C}(V,V) $$
(7.5)

where \(\overline{C}(v,v)\) and \(\overline{C}(V,V)\) are the average covariance values for the samples at smaller sample support v and the SMU support respectively, as defined in Chap. 2. Note that these are spatial averages, and therefore are location-independent.

The additive property of variances leads to the following expression:

$$ {{D}^{2}}(v,G)={{D}^{2}}(v,V)+{{D}^{2}}(V,G),\text{ }\forall v\subset V \subset G $$
(7.6)

where v, V, and G represent increasingly larger volumes.

Equation 7.6 states that the variance of samples within a deposit can be found as the sum of the variance of samples within blocks of a certain size plus the variance of those blocks within the deposit. This relationship was found experimentally by D. Krige in the 1950s, and is thus often called Krige's relation.

In Eq. 7.6 two terms are usually known: (1) the variance of the data (\({{D}^{2}}(v,G)={{\sigma }^{2}}\)) and (2) the variance within blocks, \({{D}^{2}}(v,V)\), which can be estimated from the covariance or variogram model (Eq. 7.5). The variance between blocks (for example, the variance of SMUs within the deposit, \({{D}^{2}}(V,G)\)) can then be obtained by difference.

The variance within blocks \(( {{D}^{2}}(v,V) )\) is obtained by discretizing the SMU block V with \(n_{v}\) points, and calculating the average covariance \(( \overline{C}(V,V) )\) or variogram value for all possible pairs of points within the block. The number of discretization points used to estimate \({{D}^{2}}(v,V)\) somewhat affects its final value. As a rule of thumb, it is generally accepted that a 5 × 5 × 5 grid of points within the SMU block is sufficient to obtain a robust estimate of \({{D}^{2}}(v,V)\). Considering too many discretization points could lead to numerical precision problems. One option is to obtain the dispersion variance for several discretization grids. Figure 7.8 shows the resulting dispersion variance for a given variogram model and SMU size for several discretizing grids. Note how the dispersion variance stabilizes after a reasonable number of discretization points has been used.
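
The sketch below illustrates this calculation for an assumed isotropic spherical variogram: the block is discretized, the average variogram (gammabar) is computed over all point pairs, and the variance correction factor of Eq. 7.7 follows. The variogram parameters, block size, and discretization are illustrative only.

```python
# Sketch: average variogram (gammabar) over an SMU by discretization, the within-block
# dispersion variance D2(v,V), and the variance correction factor f of Eq. 7.7.
# A single isotropic spherical structure plus nugget is assumed; all parameters are illustrative.
import numpy as np

def spherical_gamma(h, nugget, sill, a):
    """Isotropic spherical variogram evaluated at lag distance(s) h; gamma(0) = 0 by definition."""
    h = np.asarray(h, dtype=float)
    g = np.where(h < a, nugget + sill * (1.5 * h / a - 0.5 * (h / a) ** 3), nugget + sill)
    return np.where(h == 0.0, 0.0, g)

def gammabar(block_dims, n_disc, vario):
    """Average variogram value over all pairs of discretization points within a block.
    Coincident pairs contribute zero (the 'zero effect'); finer grids reduce their weight."""
    axes = [(np.arange(n) + 0.5) * d / n for d, n in zip(block_dims, n_disc)]
    pts = np.array(np.meshgrid(*axes, indexing="ij")).reshape(3, -1).T
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    return vario(dist).mean()

vario = lambda h: spherical_gamma(h, nugget=0.1, sill=0.9, a=60.0)   # total sill = 1.0
d2_within = gammabar((10.0, 10.0, 10.0), (5, 5, 5), vario)   # D2(v,V) for point samples
sigma2 = 1.0                                                 # data variance D2(v,G)
f = 1.0 - d2_within / sigma2                                 # Eq. 7.7
print(f"gammabar(V,V) = {d2_within:.3f}, f = {f:.3f}")
```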

Fig. 7.8 An example of block dispersion variances resulting from different discretization grids. The variogram model and the block size are fixed. The discretization in Z is always 1 because bench height and composite length are the same in this example. Note that a 3 × 3 × 1 grid in this case is sufficient to approximate the block dispersion variance

The dispersion variance is a key parameter needed to predict recoverable resources (recall Sect. 7.1). The volume-variance correction is often characterized by a single parameter, known as the Variance Correction Factor (VCF). The VCF (or more simply, f) is defined as the ratio of the SMU block variance to the original sample variance:

$$ VCF=f=\frac{{{D}^{2}}(V,G)}{{{D}^{2}}(v,G)}=\frac{{{D}^{2}}(v,G)-{{D}^{2}}(v,V)}{{{D}^{2}}(v,G)}=1-\frac{{{D}^{2}}(v,V)}{{{D}^{2}}(v,G)} $$
(7.7)

The factor f is a measure of how much the variance of a sample distribution will change, therefore giving an idea of the importance of the volume-variance correction in the estimation of recoverable resources. An f value close to one implies that the variances of samples within the deposit \(( {{D}^{2}}(v,G) )\) and of SMU blocks \(( {{D}^{2}}(V,G) )\) within the deposit are fairly similar. This is either because the SMUs are small (small volume, highly selective mine), or the spatial distribution is fairly continuous, that is, there is relatively little mixing of high and low grades within an SMU. The opposite is true for low f values.

As volume increases from the data support to an SMU support, the mean stays the same and the variance changes by a predictable amount (summarized in the factor f). The shape of the distribution also changes. The influence of the central limit theorem is felt to some extent, since the average of independent, identically distributed values tends to a normal distribution. The grades inside an SMU, however, are not independent; therefore, the distribution of SMU grades does not always approach a normal distribution.

7.3.1 Affine Correction

The affine correction is the simplest method for volume-variance correction. It is based on the concept that the distribution does not change its shape while the variance is reduced, therefore assuming that there is no increase in symmetry of the resulting distribution. Although there is no additional explicit assumption about the point and SMU distributions, the permanence of shape assumption is limiting, since it is known that the distribution shape will change as the variable is averaged within larger volumes. Therefore, in practice, the range of application of this method is limited to small changes in variances, for which changes in distribution shape are small.

The affine correction works by transforming each value of the sample distribution into a different value of the SMU distribution, according to the following relationship:

$$ z'=\sqrt{f}\cdot (z-m)+m $$
(7.8)

where z is any value of the original distribution, z′ is the corresponding value of the SMU distribution, f is the variance correction factor, and m is the mean of both sample and SMU distributions.

According to Journel and Huijbregts (1978, p. 471), the affine correction can be applied up to about a correction factor of 30 % (f > 0.7), although in the experience of these authors this is optimistic. Even for volume-variance corrections much smaller than 30 % the affine correction seems to provide the wrong prediction, see Rossi and Parker (1993) and the example below.
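
A minimal sketch of the affine correction of Eq. 7.8 is shown below, using hypothetical composite grades and a deliberately severe correction factor to make the limitation evident.

```python
# Sketch of the affine correction, Eq. 7.8: each point-support value is shrunk toward
# the mean by the square root of the variance correction factor f. Composite grades
# and the (deliberately severe) f value are hypothetical.
import numpy as np

def affine_correction(z, f):
    """z: array of point-support values; f: ratio of block variance to point variance."""
    z = np.asarray(z, dtype=float)
    m = z.mean()
    return np.sqrt(f) * (z - m) + m

z = np.array([0.1, 0.2, 0.4, 0.9, 2.5])
z_smu = affine_correction(z, f=0.28)
print(z.mean(), z_smu.mean())   # the mean is preserved
print(z_smu.var() / z.var())    # the variance is reduced exactly by the factor f
```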

7.3.2 Indirect Log-normal Correction

The indirect log-normal correction (ILC) is based on the idea that the change of support is described by two Log-normal distributions that have the same mean, but different variances. This is assumed to hold regardless of the characteristics of the two original distributions (point and SMU support), except that they need to be positively skewed.

The quantiles of the original distribution are transformed into the SMU distribution following an exponential equation:

$$ {{q}^{'}}=a{{q}^{b}} $$
(7.9)

with the coefficient a and exponent b given by:

$$ a=\frac{m}{\sqrt{f\cdot C{{V}^{2}}+1}}{{\left[ \frac{\sqrt{C{{V}^{2}}+1}}{m} \right]}^{b}} $$

and

$$ b=\sqrt{\frac{\ln (f\cdot C{{V}^{2}}+1)}{\ln (C{{V}^{2}}+1)}} $$

where m is the mean, CV is the coefficient of variation of the point distribution, and f is the variance correction factor (VCF) previously defined.

However, since the distributions will not in general be exactly lognormal, the transformation of Eq. 7.9 will not preserve the mean exactly. A final step is therefore required to restore the original mean m, where m′ is the mean of the distribution obtained from Eq. 7.9:

$$ {{q}^{''}}=\frac{m}{{{m}^{'}}}\cdot {{q}^{'}} $$
(7.10)

After applying Eq. 7.10, the quantiles of the SMU distribution have been rescaled to the correct mean. Interestingly, the differences between the first transformed mean and the rescaled mean can be used as a measure of the dissimilarity between the original distribution and a Log-normal distribution. The final correction may cause the variance to be slightly different than the target variance.
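
The ILC of Eqs. 7.9 and 7.10 can be sketched in a few lines; the composites and the correction factor below are the same hypothetical values used in the affine example above.

```python
# Sketch of the indirect lognormal correction, Eqs. 7.9 and 7.10: a power-law transform
# of the point-support values followed by a rescaling that restores the original mean.
# The composites and the correction factor are hypothetical illustration values.
import numpy as np

def indirect_lognormal_correction(z, f):
    z = np.asarray(z, dtype=float)
    m, cv2 = z.mean(), (z.std() / z.mean()) ** 2
    b = np.sqrt(np.log(f * cv2 + 1.0) / np.log(cv2 + 1.0))
    a = (m / np.sqrt(f * cv2 + 1.0)) * (np.sqrt(cv2 + 1.0) / m) ** b
    q = a * z ** b                  # Eq. 7.9: transform every quantile
    return q * (m / q.mean())       # Eq. 7.10: rescale to honor the original mean

z = np.array([0.1, 0.2, 0.4, 0.9, 2.5])
z_smu = indirect_lognormal_correction(z, f=0.28)
print(z_smu.mean())                 # equals the original mean by construction
print(z_smu.var() / z.var())        # close to, but not exactly, f after the rescaling
```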

7.3.3 Other Permanence of Distribution Models

As a generalization of the previous methods, the same principle can be applied to other distributions, most practically to those that are characterized by two parameters, such as the Gaussian, Lognormal, and even Gamma distributions.

Under the assumption that a sample distribution can be approximated by a multivariate Gaussian distribution, then the resulting block distribution will also be multi-Gaussian, with the same mean and corrected variance, as described before.

Similarly, the sample distribution can be assumed to be multi-Lognormal, in which case the resulting SMU distribution is also assumed to be multi-Lognormal (although, as in the case of the affine correction, this is an assumption known to be incorrect), with the same mean and corrected variance.

As these methods have had little use in practice, the reader is referred to Journel and Huijbregts (1978, pp. 468–469) for the specific formulae and further details on the limitations of these methods.

7.3.4 Discrete Gaussian Method

The permanence of distribution assumption is a limitation because most real-life mining distributions cannot be easily fitted with a two-parameter distribution (Gaussian or Log-normal). They have multiple modes and mixtures of populations that can only be overcome by using a method that makes no such assumption. The discrete Gaussian model (DGM) has been proposed as a more robust method to obtain the volume-variance correction.

The key idea of the DGM is that the distributions for different supports will be Gaussian after transformation to Gaussian units. The transformation to Gaussian units is achieved in two steps: (1) a normal scores transformation like that described in Chap. 2, then (2) fitting the relationship between the original grades and the normal scores transform with a series of Hermite polynomials. These polynomials are orthogonal, which is important because the variance of the original grades is then a simple summation of the squares of the coefficients. A change to the variance is achieved by scaling the coefficients of the Hermite polynomials by a change of support coefficient related to the factor f. As expected, the corrected distribution gradually becomes more Gaussian in shape as the scale increases.

The fitting of Hermite polynomials and the details of the mathematics are embedded in widely used computer programs and documented in references such as Armstrong and Matheron (1986), Rivoirard (1994) or Machuca-Mory et al. (2007). An overview will be presented here. An anamorphosis function needs to be fit to the sample data. The anamorphosis function is defined by a Hermite polynomial expansion fit to the data. Hermite polynomials are related to the Gaussian distribution and are defined by Rodrigues' formula (Abramowitz and Stegun 1964, p. 773). The anamorphosis function is equivalent to the normal score transformation in that it provides a mapping of the point variable Z to the Gaussian variable Y and vice-versa:

$$ z(\mathbf{u})=\Phi (y(\mathbf{u}))\approx \sum\limits_{p=0}^{\infty }{{{\Phi }_{p}}{{H}_{p}}(y(\mathbf{u}))} $$

where \({{\Phi }_{p}}\) is the coefficient of each polynomial term, and \({{H}_{p}}(y(\mathbf{u}))\) is the Hermite polynomial value. This fitting can be thought of as a polynomial fit to the Q-Q plot between the original grades and the normal scores.

The anamorphosis function is fit by calculating the value of the Φ coefficients of the Hermite polynomials. The first coefficient is simply the mean of the Z samples:

$$ {{\Phi }_{0}}=E\{\Phi (Y(u))\}=E\{Z(u)\} $$

Higher order coefficients are found with the following approximation:

$$ \begin{aligned} \Phi_{p} &= E\{Z(\mathbf{u})\cdot H_{p}(Y(\mathbf{u}))\} = \int \Phi (y(\mathbf{u}))\, H_{p}(y(\mathbf{u}))\, g(y(\mathbf{u}))\, dy(\mathbf{u}) \\ &\approx \sum_{\alpha =2}^{n}\left( z(\mathbf{u}_{\alpha -1})-z(\mathbf{u}_{\alpha }) \right)\frac{1}{\sqrt{p}}\, H_{p-1}(y(\mathbf{u}_{\alpha }))\, g(y(\mathbf{u}_{\alpha })) \end{aligned} $$

where \(g(y(\mathbf{u}_{\alpha }))\) is the standard Gaussian probability density evaluated at \(y(\mathbf{u}_{\alpha })\). Since the polynomials are orthogonal, and thus there is no correlation between them, the variance of the Z samples can be identified as

$$ \begin{aligned} Var\{\Phi (Y(\mathbf{u}))\} &= Var\{Z(\mathbf{u})\} \\ &\approx \sum_{p=1}^{n}\sum_{q=1}^{n} \Phi_{p}\Phi_{q}\operatorname{cov}\{H_{p}(Y(\mathbf{u})),H_{q}(Y(\mathbf{u}))\} = \sum_{p=1}^{n}\Phi_{p}^{2} \end{aligned} $$

The modeled anamorphosis function can be checked against the original data by comparing the distributions resulting from the samples to the distribution from the anamorphosis. The distributions should be identical, although in practice extreme values can be difficult to model.

The histogram at the SMU block support is then obtained using the bi-Gaussian assumption. To correct the sample distribution to a predicted SMU distribution, the anamorphosis function is modified by introducing a change of support coefficient r:

$$ Z(v)=\Phi_{v}(Y(v))\approx \sum_{p=0}^{\infty } r^{p}\, \Phi_{p}\, H_{p}(Y(v)) $$

The calculation of r requires the dispersion variance of the SMU-sized blocks, obtained from the variogram model derived from the sample values (Chap. 6). The anamorphosis function corresponding to the SMU support v assumes that the distribution of \( [Y(\mathbf{u}),Y(v)] \) is bi-Gaussian, and r is found with:

$$ \sigma _{v}^{2}=\sigma _{u}^{2}-{{\overline{\gamma }}_{v,v}}\approx \sum_{p=1}^{n}\sum_{q=1}^{n} r^{p}\Phi_{p}\, r^{q}\Phi_{q}\operatorname{cov}\{H_{p}(Y(\mathbf{u})),H_{q}(Y(\mathbf{u}))\}=\sum_{p=1}^{n} r^{2p}\Phi_{p}^{2} $$

from which the r coefficient can be obtained. The distribution of grades representing SMU volumes is easily determined with the obtained r coefficient, the fitted coefficients and the Hermite polynomials. Although apparently complex, the procedure is automated and widely available in different programs.

The DGM is deemed to be more robust than the affine or indirect lognormal correction because the normal scores transform is general, and no additional assumptions are necessary for the original or the SMU distributions.
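
A simplified sketch of the DGM workflow is given below. It uses a plain discrete approximation of the Hermite coefficients (the average of z·H_p(y) over the data) rather than the exact summation formula above, and a bisection search for the support coefficient r; the data and the correction factor f are synthetic illustration values.

```python
# Simplified sketch of the discrete Gaussian change of support. The Hermite coefficients
# are fitted with a plain discrete approximation (the average of z * H_p(y) over the data)
# rather than the exact summation formula given in the text; the support coefficient r is
# found by bisection. The data and the correction factor f are synthetic illustration values.
import numpy as np
from numpy.polynomial import hermite_e
from scipy.stats import norm
from math import factorial, sqrt

def normalized_hermite(p, y):
    """Orthonormal (probabilists') Hermite polynomial H_p(y) = He_p(y) / sqrt(p!)."""
    c = np.zeros(p + 1)
    c[p] = 1.0
    return hermite_e.hermeval(y, c) / sqrt(factorial(p))

def fit_anamorphosis(z, n_coef=30):
    """Normal scores transform plus Hermite coefficients of the anamorphosis z = phi(y)."""
    z = np.asarray(z, dtype=float)
    ranks = (np.argsort(np.argsort(z)) + 0.5) / len(z)   # standardized ranks in (0, 1)
    y = norm.ppf(ranks)                                  # normal scores of the data
    return np.array([np.mean(z * normalized_hermite(p, y)) for p in range(n_coef)])

def change_of_support_r(phis, f):
    """Solve sum_p r^(2p) * phi_p^2 = f * (fitted point variance) for r by bisection."""
    target = f * np.sum(phis[1:] ** 2)
    lo, hi = 0.0, 1.0
    for _ in range(60):
        r = 0.5 * (lo + hi)
        val = np.sum(r ** (2 * np.arange(1, len(phis))) * phis[1:] ** 2)
        lo, hi = (r, hi) if val < target else (lo, r)    # left side increases with r
    return 0.5 * (lo + hi)

def smu_distribution(phis, r, n=1000):
    """Back-transform a dense grid of Gaussian quantiles with the block anamorphosis."""
    y = norm.ppf((np.arange(n) + 0.5) / n)
    scaled = phis * r ** np.arange(len(phis))
    return sum(scaled[p] * normalized_hermite(p, y) for p in range(len(phis)))

z = np.random.default_rng(0).lognormal(mean=0.0, sigma=1.0, size=2000)   # synthetic composites
phis = fit_anamorphosis(z)
r = change_of_support_r(phis, f=0.6)
z_v = smu_distribution(phis, r)
print(z.mean(), z_v.mean())          # the means should be close
print(z_v.var() / z.var())           # the variance ratio should be close to f
```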

7.3.5 Non-Traditional Volume-Variance Correction Methods

There are other methods used for volume-variance correction, some of them empirical. These range from adjusting the kriging plan used to estimate the blocks to get the predicted dispersion variance, to the use of probabilistic estimation techniques (Chap. 9), to the application of conditional simulations (Chap. 10).

7.3.6 Restricting the Kriging Plan

The concept is based on tuning the kriging plans to control smoothing to match the resulting block distribution to the expected SMU distribution as closely as possible.

This method was proposed originally by Parker and is discussed in Rossi and Parker (1993) and Rossi et al. (1993). It utilizes the notion that the smoothing property of kriging (see Chap. 8, and Journel and Huijbregts 1978, pp. 450–452) can be controlled to obtain an estimated block distribution that closely matches the predicted SMU distribution. Certain parameters of the kriging plan, such as search neighborhoods, minimum and maximum number of samples and drill holes, the use or not of octant searches, etc. can impact the degree of smoothing of the resulting block distribution.

Restricting the kriging plan has the advantage of being simple, although the kriged block distribution will rarely match the desired SMU distribution exactly. More commonly, the match is achieved for certain cutoffs of interest along the grade-tonnage curve. The method is local in the sense that it estimates individual block grades, which combine to form a distribution similar to the desired SMU distribution.

One of the disadvantages of the method, as pointed out by Journel and Kyriakidis (2004), is that it is specific to each mineral deposit, and cannot be formulated in general terms. Also, the increased restrictions on the Kriging plans result in higher variance of the resulting block distribution, typically at the expense of higher conditional bias. The spatial distribution of estimates is still smooth, that is, the variogram of the estimates will show a significantly lower nugget effect and continuous behavior at the nugget effect.

It is important to note that the requirement of conditional unbiasedness of the kriged block model is incompatible with the requirement of predicting tons and grade received at a future date by the processing plant, see for example Isaaks and Davis (1999) and Isaaks (2004). This has been empirically verified in practice. Still, too much conditional bias in the output kriged model can lead to significant prediction biases that should be avoided.

The SMU estimates at this time are interim estimates awaiting much more data from blast hole sampling or infill drilling. At the time of final estimation for grade control, care should be taken to avoid conditional bias. It is often more important at the prefeasibility and feasibility stage of resource estimation to get predictions that reasonably reflect the recoverable resource that will ultimately be obtained.

7.3.7 Probabilistic Estimation Methods

Several probabilistic estimation methods, described in detail in Chap. 9, can be used to incorporate the volume-variance effect into the resource estimation process.

One option is to modify the point probability distributions resulting from the multiple indicator kriging (MIK) technique into block probability distributions using an affine, ILC, or DGM correction. A variant of this procedure has been used by Newmont Gold at its Gold Quarry mine in Nevada (Hoerger 1992), and appears to work reasonably well when there is sufficient production data for a correct calibration.

A different option within the application of MIK is to apply the volume-variance correction to a cumulative probability distribution, at the composite scale, resulting from MIK. The compositing refers here to simply averaging the MIK probability distribution values to larger panels. A discussion of this method can be found in Chap. 9 and in Journel and Kyriakidis (2004).

Methods used to estimate distributions that are based on the Gaussian or Lognormal assumptions are also applied to incorporate the volume-variance effect into the resource estimation model. The available options include Multi-Gaussian Kriging (Verly 1984), Disjunctive Kriging (Matheron 1976) and its derivative, Uniform Conditioning (Roth and Deraisme 2000), and the Lognormal Shortcut methods (David 1977). The change of support models afforded by these methods is generally robust, as long as the corresponding underlying Gaussian or Log-normal assumptions are reasonable.

The volume-variance correction methods described share the same limitations: they do not account for other types of dilution or the information effect. They assume that every block can be selected individually and independently of any other (free selection), and that the selection itself is made based on a known true grade (perfect selection).

7.3.8 Common Applications of Volume-Variance Correction Methods

The methods for volume-variance correction described are applied to ore resource modeling in several manners. The traditional application has been the correction of the global resource model to match the predicted grade-tonnage curve according to the volume-variance effect predicted (David 1977; Journel and Huijbregts 1978). This application is now less common for multiple reasons:

  a. The volume-variance correction performed in such a way is a global correction, and therefore of little practical use, except for the overall assessment of resources from a deposit; the mineralization's internal dilution should be somehow incorporated into the resource block model based on more local corrections, so that downstream work, such as mine planning, takes its effect into account.

  b. Forcing the overall resources to match the volume-variance corrected distribution implies ignoring all other dilution sources described above. Therefore, the reported overall resources are known to be wrong, since they are based on the incorporation of a single source of dilution. The resource model should incorporate more dilution than predicted by volume-variance correction to include geologic contact dilution, the information effect, and planned operational dilution.

Another application is correcting the drill hole data such that an estimate of the expected SMU distribution is obtained prior to estimating the resources. This provides a target distribution against which the resource model can be compared.

The example shown in Fig. 7.9 corresponds to the Cerro Vanguardia operation, which mines gold and silver vein deposits in the Patagonia Region of Southern Argentina. Figure 7.9 shows the distributions of the 2 m composites used for estimation, as well as the DG-predicted and the affine-predicted SMU distributions. Note that in this case the SMU is a 5 × 10 × 5 m block, which accounts for the open pit mining method currently used. The example shown is from the Osvaldo Diez vein, one of more than 40 Au-Ag bearing veins identified in the district, and the source of most of the mine's production through the late 1990s and early 2000s. It is instructive to note several points:

Fig. 7.9 Grade-tonnage curves for the Osvaldo Diez Vein, Cerro Vanguardia Mine, Argentina. There is a high volume-variance effect. The 2 m composites distribution is shown along with the DG-predicted and affine-predicted SMU distributions

  • The graph in Fig. 7.9 shows the Au cutoff grades applied to the distribution on the X axis, the left Y-axis shows the predicted proportion of tonnage above the corresponding cutoff, while the right Y-axis shows the corresponding grade above cutoff.

  • The grade-tonnage curves allow an immediate analysis for the cutoffs of interest, and how the distributions change for different grade ranges.

  • The volume-variance correction factor is estimated at 28 %, implying that there is a very significant change in variance from the original 2 m composite to the 5 × 10 × 5 m SMU distributions.

  • The Affine correction is not the appropriate method to use in this case. It is presented here to highlight the differences in the resulting distributions. Among other reasons, the artificial minimum generated by the Affine correction is quite high, and, although not shown here, the DG model was proven by production data to be more robust.

  • The difference between the tonnage and grades for any given cutoff between the SMU distributions and the composites distribution is an indication of how severe the predicted volume-variance correction is.

In the literature there are several other detailed examples and comparisons of the different volume-variance corrections, see for example Verly (2000) and Rossi and Parker (1993).
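
Grade-tonnage curves such as those in Fig. 7.9 can be tabulated from any grade distribution with a few lines of code. The sketch below compares a synthetic composite distribution with its affine-corrected equivalent; all values are illustrative and are not data from Cerro Vanguardia.

```python
# Sketch of a grade-tonnage tabulation: proportion of tonnage and mean grade above a set
# of cutoffs, for any grade distribution (composites, or an SMU distribution produced by
# one of the corrections above). All data here are synthetic illustration values.
import numpy as np

def grade_tonnage(grades, cutoffs, tonnes=None):
    grades = np.asarray(grades, dtype=float)
    tonnes = np.ones_like(grades) if tonnes is None else np.asarray(tonnes, dtype=float)
    rows = []
    for zc in cutoffs:
        above = grades >= zc
        t = tonnes[above].sum()
        g = (grades[above] * tonnes[above]).sum() / t if t > 0 else 0.0
        rows.append((zc, t / tonnes.sum(), g))   # cutoff, proportion of tonnage, grade above cutoff
    return rows

rng = np.random.default_rng(1)
composites = rng.lognormal(mean=0.0, sigma=0.8, size=5000)
smu = np.sqrt(0.6) * (composites - composites.mean()) + composites.mean()   # affine with f = 0.6
for zc, prop, grade in grade_tonnage(smu, cutoffs=[0.5, 1.0, 2.0]):
    print(f"cutoff {zc:.1f}: {100 * prop:.1f} % of tonnes at {grade:.2f} mean grade above cutoff")
```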

The volume-variance correction of drill hole information for each estimation domain can also provide a target global distribution of blocks (SMUs), grade-tonnage curves that can be used to calibrate and/or check the grade-tonnage curves resulting from the resource block model, and in particular for specific cutoffs. The comparison between the actual versus target distributions can also be done through distribution parameters, such as the Coefficient of Variation (CV), a robust measure of variability.

Figure 7.10 shows a comparison of the grade-tonnage curves of the DGM-predicted SMU and the estimated block model grades for the high enrichment units of the Escondida Norte Porphyry Copper deposit. Note that for most cutoff grades the estimated grades of the block model are slightly smoother than the corresponding DG-predicted SMU distribution. The conclusion from Fig. 7.10 is that the estimated resource model is incorporating additional dilution, besides the internal dilution represented by the DG model. In this case, the SMU size is 20 × 20 × 15 m, 15 m composites were used to estimate the block model, and the cutoffs of interest are in the range of 0.3 to 0.7 % Cu.

Fig. 7.10 Grade-tonnage curves of the high secondary enrichment units of the Escondida Norte Porphyry Cu deposit

Another application of the volume-variance correction is to help define the selectivity of the mine. This can be approximated by quantifying the impact that different mining equipment used in the operation has on dilution, and based on changes in the volume of the SMU. Most commonly, operations study the impact of changes in bench heights. However, there are limitations to the use of volume-variance methods to predict optimal bench heights, because of the free and perfect selection assumptions.

7.4 Information Effect

The Information Effect describes the fact that, at the time of mining, the decision of which portion of the deposit is ore and which is waste is based on more information than was available when the resource model was built.

Ore/waste selection is described in more detail in Chap. 13. Although more data are available, the ore/waste selection is always made with an estimate, not the true grades. The selection is imperfect in the sense that an estimation error is always present. Additionally, the selection process is not free, meaning that each SMU is not selected as ore or waste independently of the other SMUs in its vicinity. There may be other geometrical and mining constraints that restrict the accessibility of each SMU. All these approximations and sources of error are implicit in the Information Effect.

The problem of selection can be mathematically described by the following recovery equations:

$$ i_{v}(\mathbf{u};z_{c})=\begin{cases} 1 & \text{if } z_{v}(\mathbf{u})\ge z_{c}\\ 0 & \text{if } z_{v}(\mathbf{u})<z_{c} \end{cases} $$

where \(i_{v}(\mathbf{u};z_{c})\) represents an indicator of perfect selection for the SMU v and \(z_{c}\) is the cutoff grade. If the value of the SMU \(z_{v}(\mathbf{u})\) is higher than the cutoff, then the SMU is recovered \(( i_{v}(\mathbf{u};z_{c})=1 )\). The total tonnage, quantity of metal, and grade thus recovered for any panel or region V are

$$ t_{v}(z_{c})=\sum_{j=1}^{N} i_{v}(\mathbf{u}_{j};z_{c}),\quad j\in [ 1,N ];\ \mathbf{u}_{j}\in V $$
(7.11)
$$ q_{v}(z_{c})=\sum_{j=1}^{N} i_{v}(\mathbf{u}_{j};z_{c})\cdot z_{v}(\mathbf{u}_{j}),\quad j\in [ 1,N ];\ \mathbf{u}_{j}\in V $$
(7.12)
$$ {{m}_{v}}({{z}_{c}})=\frac{{{q}_{v}}({{z}_{c}})}{{{t}_{v}}({{z}_{c}})} $$
(7.13)

For simplicity, the density (tonnage factor) in the above equations is assumed to be 1.0. Equations 7.11–7.13 assume perfect selection, that is, knowledge of the true SMU value. However, in reality, only an estimate of that true value is available.

Graphically, the ore/waste selection problem can be represented by a scatter plot of the unknown true SMU values vs. the estimated SMU values, as shown in Fig. 7.11. Consider, for example, a \(z_{c}=2.0\) cutoff; there are four possible outcomes:

Fig. 7.11 Scatter plot of hypothetical true vs. estimated SMU values. The Zc = 0.3 cutoff value defines four quadrants in the graph, two of which correspond to misclassification. (SMUs represented by dots)

  a. The SMU is estimated to be ore and is recovered as such; in this case, no error (or misclassification) is made (Quadrant I).

  b. The SMU is estimated to be waste, and is recovered as such; as before, no error (or misclassification) is made (Quadrant IV).

  c. The SMU is estimated to be ore, and is in fact waste; in this case, dilution is sent to the processing plant (Quadrant II).

  d. The SMU is estimated to be waste, and is in fact ore (Quadrant III); in this case, ore loss occurs as economic material is being discarded.

The imperfect selection described is a major component of the information effect. The economic performance of any operating mine is impacted by this unavoidable selection error. Commonly, little attention is paid to optimizing that selection, relative to its economic impact.
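
The four outcomes above can be tallied directly when pairs of true and estimated SMU grades are available, for example from a conditional simulation exercise. The sketch below uses synthetic grades and a hypothetical cutoff.

```python
# Sketch of the four selection outcomes of Fig. 7.11: given pairs of true and estimated
# SMU grades and a cutoff, tally correct ore, correct waste, dilution (waste sent to the
# mill) and ore loss (ore sent to the dump). The grades and cutoff below are synthetic.
import numpy as np

def classify_selection(true_grades, est_grades, cutoff):
    true_grades, est_grades = np.asarray(true_grades), np.asarray(est_grades)
    sent_to_mill = est_grades >= cutoff   # the decision is made on the estimate...
    truly_ore = true_grades >= cutoff     # ...but the consequence depends on the true grade
    return {
        "ore correctly mined":      int(np.sum(sent_to_mill & truly_ore)),
        "waste correctly dumped":   int(np.sum(~sent_to_mill & ~truly_ore)),
        "dilution (waste to mill)": int(np.sum(sent_to_mill & ~truly_ore)),
        "ore loss (ore to dump)":   int(np.sum(~sent_to_mill & truly_ore)),
    }

rng = np.random.default_rng(2)
true = rng.lognormal(mean=-1.2, sigma=0.7, size=10000)        # "true" SMU grades
est = true * rng.lognormal(mean=0.0, sigma=0.3, size=10000)   # imperfect estimates
print(classify_selection(true, est, cutoff=0.3))
```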

The simple scenario shown in Fig. 7.11 becomes more complicated if there are several destinations for the ore, such as crushed ore to the mill, crushed ore to the leach pad, and Run-of-Mine ore to a different leach pad. In this case, there are four possible destinations including waste. Optimal procedures for ore/waste selection are discussed in Chap. 13.

Imperfect selection and other components of the information effect are difficult to understand and predict with the often-used empirical models. The better alternative is to use geostatistical conditional simulation (Chap. 10), which allows the reproduction, based on simulated data, of the entire process of blast hole sampling and ore/waste selection, as discussed in Chaps. 10 and 13, and exemplified in Chap. 14. This approach has been used successfully in practice in recent years (Guardiano et al. 1995; Journel and Kyriakidis 2004; Badenhorst and Rossi 2012).

Variants of the probabilistic estimation methods discussed above in the context of volume-variance correction (based on Gaussian or Log-normal assumptions) can be modified to incorporate the information effect. One such method is advocated by Roth and Deraisme (2000), and is based on a Bi-Gaussian assumption between the true, unknown SMU value, and its estimate. The Uniform Conditioning method (as well as others) can be applied to incorporate a correction to the predicted SMU grades and tonnages above cutoff.

Besides the more complete and complex conditional simulation approach, there are several ad-hoc methods that deal with the information effect. One such commonly used method is to conservatively bias the ore resource model (similar to what is shown in Fig. 7.10) to compensate for the information effect and future losses. This entails purposefully introducing a certain degree of dilution into the resource model. As with all empirical methods, it can only be successfully applied if there is sufficient knowledge about the deposit and valid production data to adequately calibrate the amount of additional dilution incorporated into the model.

A conceptually similar method consists of defining an SMU larger than the SMU that the operation can realistically mine, and assuming perfect selection on it. This procedure compensates for the information effect and the fact that the theoretical SMU can never be selected (extracted) perfectly, without any further ore loss and dilution. The impact of assuming a larger-than-expected SMU can be quantified in terms of the additional dilution incorporated into the model.

These empirical methods are subjective, and rely heavily on assumptions that cannot be easily verified or quantified. As such, they should be considered only approximations to the incorporation of the information effect into the resource model.

The amount of data available at the time of ore/waste selection is significantly greater than that available at the time of developing a resource model for a feasibility study. Therefore, predictions of mineable tonnage and grade for economic cutoffs can be much improved at the time of selection, if only because of the much larger amount of information available.

7.5 Summary of Minimum, Good and Best Practices

The minimum practice in modeling resources requires the following:

  a. All models should have an assessment of the global internal dilution by estimation domains. This assessment should be used to quantify the impact of internal dilution, and compare it with the dilution introduced into the block model due to the smoothing property of kriging.

  b. The geologic contact dilution should also be included through geometric considerations if deemed important enough, or discussed in the documentation of the model if considered negligible. The methods used could include the use of factors to penalize block values along contacts. A more direct approach is preferred, estimating the grade of each geologic unit within the block and then obtaining the average block grade using Eq. 7.3.

  c. The information effect is usually handled with factors, sometimes calibrated to production figures, and often applied by mining engineers to the ore resource model at the time of developing the mine plan. In any case, the block model documentation should clearly state its limitations in terms of dilution, and to what extent it can be considered "recoverable".

  d. If an indirect or empirical method has been used to incorporate additional dilution into the model to compensate for planned and unplanned operational dilution, such as using a larger SMU size, this should be clearly stated in the documentation.

In addition to the above, good practice requires:

  a. A more specific method to include internal dilution into the resource model. This can be done through any of the methods mentioned in Sect. 7.3, and in all cases should include a fair assessment of the uncertainties and tradeoffs involved.

  b. Geologic contact dilution should be explicitly incorporated into the block model, and a statement about the uncertainty of the position of the contacts should be included. The information effect should be dealt with using at least a reasonable empirical approximation, or a modification of the estimation method.

  c. All the work should be well documented and clearly presented, detailing the checks performed and the quality control procedures in place.

Best practice consists of using uncertainty models to deal with all three types of dilution described: block averaging, geologic model uncertainty, and operational dilution. The full conditional simulation study would:

  1. Incorporate the uncertainty of the geologic model, thus implicitly considering geologic contact dilution.

  2. Incorporate internal dilution more accurately, by direct block simulation or simply by averaging the simulated point values to the SMU size.

  3. Incorporate operational dilution and the information effect by simulating the complete mining process.

Thus, most of the possible sources of dilution and ore loss are modeled simultaneously. In such a case, it is not necessary to apply any of the volume-variance correction methods, except perhaps as a check on the simulation models. The work is only complete when, as always, a very thorough validation and checking of the models has been performed and documented. Preferably, the simulation models should be validated against production data, or at least against alternative models, and through thorough statistical and graphical checking, see Chap. 11.

7.6 Exercises

The objective of this exercise is to review change of support calculations. Some specific (geo)statistical software may be required. The functionality may be available in different public domain or commercial software. Please acquire the required software before beginning the exercise. The data files are available for download from the author’s website—a search engine will reveal the location.

7.6.1 Part One: Assemble Variograms and Review Theory

You will use the Cu variable from the largedata.dat dataset. The key parameter in all scaling is the variogram; however, the normal scores transforms of grades do not average linearly and we cannot use the normal scores variograms for scaling. The variograms of the Cu grades directly are required. Of course, the direct grade variogram should be similar to the normal scores variogram.

Question 1::

Compute and fit a 3-D Cu variogram (like that modeled in Chap. 6). Comment on the “stationarity” of the variogram model, that is, does it flatten off at the variance of Cu grades?

Question 2::

Write a short review of the key theoretical results needed for variogram scaling: (1) the definition of the average variogram/average covariance, (2) the definition of the dispersion variance and the link to the average variogram, (3) Krige's relation or the additivity of variance, and (4) the scaling of variogram sill parameters.

Question 3::

Derive the volume scaling law of the nugget effect, that is, demonstrate that the following relation is exact: \(C_{V}=\frac{|v|}{|V|}\,C_{v}\), where \(C_{V}\) and \(C_{v}\) are the nugget effects at scales V and v, respectively.

7.6.2 Part Two: Average Variogram Calculation

Average variogram or “gammabar” values tell us the variance at any scale. The discretization required for stable numerical integration is a consideration. Average variogram values can be calculated between two disjoint volumes V and v′; however, classic histogram and variogram scaling requires the average variogram to be calculated for V with itself, that is, for the same volume paired with itself. This brings up the zero effect as another complicating factor.

Question 1::

Consider your reference Cu variogram model and a 10 m cubed block size for a number of sensitivity studies. Create a plot of the average variogram versus discretization level (starting with 1 × 1 × 1 and going to 20 × 20 × 20). Plot two lines: one including the zero values for coincident discretization points and another with this corrected.

Question 2::

Calculate the average variogram for regular cubic block sizes from 1 through 20 m with the zero effect correctly handled. Comment on your choice of discretization level. Plot and tabulate (1) the average variogram versus block size, and (2) the block variance versus block size.

7.6.3 Part Three: Change of Shape Models

The global mean does not change with scale. The variance changes in a predictable manner; however, the shape change is not precisely known.

Question 1::

Consider cubic block sizes of 5, 10, and 20 m. Calculate the scaled distributions using the (1) affine, (2) indirect lognormal, and (3) discrete Gaussian models. Plot the original Cu histogram and all of the scaled histograms. Comment on the results.

Question 2::

Attempt to quantify the importance of the shape change by plotting grade tonnage curves at the 10 m scale. Discuss the different models and explain where you require such a model.