FormalPara Summary

Mineral resource evaluation should provide a basis on which economic decisions can be taken. At least four aspects can be identified when a mining project is evaluated: technical, economic/financial, social, and political; this chapter introduces the first two. The technical aspects include all matters related to the geological setting of the deposit, the characteristics of the mineralization (grade, tonnage), and the technology that determines the production system. A general introduction to geostatistics from a conceptual viewpoint is provided, as well as the main classical methods used in mineral deposit evaluation. The economic/financial aspects cover the economic inputs and outputs of the project and the amount, type, and cost of capital available for it. Under this heading, net present value, internal rate of return, payback period, and risk analysis are covered. Some case studies are presented to illustrate the main methods used in mineral resource evaluation.

4.1 Introduction

Mineral resource evaluation should provide a basis on which economic decisions can be taken. Several steps are needed to ensure a logical progression of a mining project from the initial scattered prospection data to a final resource/reserve valuation that meets the needs of potential investors and bankers. Ultimately, a mine will come into existence only if it generates and sells something valuable (Scott and Whateley 2006). At least four aspects can be identified when a mining project is evaluated: technical, economic/financial, social, and political.

The technical aspects include all matters related to the geological setting of the deposit, the characteristics of the mineralization (grade, tonnage), and the technology that defines the production system. The economic and financial aspects cover the economic inputs and outputs of the project and the amount, type, and cost of capital available for it. The latter is determined partly by the financial environment at the time the investment is made. The social aspects include the social costs and benefits arising from a mining project. Infrastructure development and the use of local labor and resources can make positive contributions to society, but, conversely, mines generate tailings and effluents that can have a negative impact on the environment. Finally, the political aspects refer to the mineral, fiscal, foreign exchange, and employment policies of the government of the country where the deposit is situated. They are especially relevant where governments participate in mineral projects. In this chapter, the technical and financial aspects are described, whereas the other aspects are left to more specialized texts.

The relative importance of each type of evaluation of a mineral project at any point in time depends on the stage of development. Thus, target selection and the planning of drilling rely mainly on the geological sciences, while the later stages (e.g., the feasibility study) depend more on the engineering sciences and economics. The socioeconomic evaluation is carried out mainly once development of a mineral deposit is being considered. Moreover, rather than being independent of one another, these types of evaluation are interrelated and are often carried out in parallel. The results of the technical evaluation serve as important input to the economic evaluation, and together the technical and economic evaluations serve as a starting point for the socioeconomic evaluation. In addition, these evaluations are constantly revised in the light of new information.

4.2 Sampling

In any deposit delimitation program, sampling is an essential step to establish the limits, volume, mass, and grade of the mineral deposit. The main goal of sampling is to generate values describing the mineralization (e.g., assays of metal grades), which are the fundamental information used in carrying out resource and/or reserve estimations. Sampling of an ore deposit is therefore a process of approximation, and the objective is to arrive at an average sample value that closely depicts the true average value for the ore body (Readdy et al. 1982). Sampling is also important for studying several geotechnical properties of the overburden and the host rock of the mineralization during the prospecting stage of the mining project. These include properties (strength or degree of weathering, among others) that are essential in designing a mine (e.g., the size of underground chambers or the different pit slopes).

Sampling underpins the day-to-day operation of any mine. Since an inappropriate sampling procedure can lead to incorrect estimation of present production and future potential, the mine department responsible for resource/reserve estimation and mine sampling should be supervised by qualified and experienced professionals with the technical background needed to obtain precise data (Tapp 1998). In sampling an ore body to estimate grade, the geologist is mainly concerned with the reliability of the estimate as measured by its accuracy and precision. Accuracy, the close correspondence of an estimate to the «true» value, is achieved by obtaining unbiased results through appropriate sampling, sample preparation, assaying, and data analysis (◘ Fig. 4.1). To avoid bias, the geologist must control issues such as salting (e.g., the Bre-X affair; ◘ Box 1.4) or nonrepresentative samples. Precision, on the other hand, is the closeness of a single estimate, obtained by sampling an ore body or other geologic entity, to the estimates that would be obtained by repeated sampling of that ore body.

Fig. 4.1
figure 1

Fully automated sample plant taking samples; the process is completely hands-off and uses robotics to perform the analysis (Image courtesy of Anglo American plc.)

4.2.1 Significance of the Sampling Process

The sampling of metalliferous and industrial mineral deposits is undertaken for a variety of reasons and at various stages in their evaluation and exploitation. During the exploration phase, the sampling is largely confined to the analysis of drill cuttings or cores and is aimed at the evaluation of individual, often well-spaced, intersections of the deposit. During the exploitation phase, sampling is also used to define assay hanging walls and footwalls together with the grade over mineable thicknesses. Sampling is much more intense in this situation and is undertaken to allow the assignment of overall weighted grades to individual ore blocks or stopes. Also at this stage, sampling will be used to extend existing reserves and attempt to prove new ore zones accessible from existing developments (Annels 1991). Perhaps one of the most important applications of sampling during the exploitation phase is in grade control (◘ Fig. 4.2) (e.g., bench grades in an open-pit mine) since it determines the boundaries of mineralization and waste (see ► Chap. 5).

Fig. 4.2
figure 2

Taking samples for grade control (Image courtesy of Alicia Bermejo)

It is important to remember that a sample is taken because the information contained in its analytical result will ultimately be used by someone to make a decision. These decisions can involve immense capital commitments to open or close a mine, or marginal processing costs such as deciding whether a batch of mineralized rock should be sent to the beneficiation plant or to the tailings dump (Minnitt 2007). For these reasons, the process of sampling is among the most essential activities in mining operations, because the possibility always exists for large hidden costs to accumulate in mineral development due to sampling errors. These hidden costs arise from misunderstanding the principal factors that affect the size of sampling errors (e.g., the mass of the sample, the consequences of splitting a sample to reduce its mass, or the marked influence of the particle size of the mineralization). Items such as the sampling procedure, sample reduction, assaying methods, and, obviously, geological data collection and modeling are critical for a high-quality estimation of resources and/or reserves. All too often, data collection techniques are not of adequate quality to define a mineral deposit correctly.

All the processes involved in sampling must be checked continuously and appropriately. Obviously, there will always be a difference between the content of the lot, the sample obtained, and the sample assayed, since the comparatively large mass of a sample is reduced to a small subsample of a few grams for the final chemical analysis. This discrepancy is termed the sampling error. Attention to the matters cited above reduces the errors and improves the quality, which is essential for the interpretation of geological data and modeling and, consequently, for the quality of resource/reserve estimation. The so-called sampling due diligence underlying a sound geological resource evaluation requires validation of many components, including, among others, (a) adequacy of samples, (b) sample representativeness, (c) accuracy of laboratory assays, (d) insertion of blanks and standards, and (e) quality assurance and quality control protocols, the now ubiquitous QA/QC (◘ Box 4.1: QA/QC in Coringa Gold Project).

QA/QC includes duplicate analysis and standard analysis. The precision of sampling and analytical data is estimated by analyzing the same sample twice using the same methodology (duplicates), the variance between the two results being an estimate of precision. Precision is affected, as mentioned above, by mineralogical factors such as grain size and distribution, but also by errors in the sample preparation and analysis processes. Standard samples (or reference materials) are samples with a known grade and variability. They are commonly used to assess analytical accuracy and bias by comparing the assay results against the expected grade of the standard. In this sense, managers and consultants always insist that standard and duplicate samples are invaluable tools for measuring the accuracy and precision of commercial analytical laboratories. Moreover, realistic confidence in the data can be ensured by correctly using these measurements of data quality to quantify the risk of the mining project.
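
As a minimal illustration of how duplicate pairs might be summarized, the sketch below computes the relative difference of each original/duplicate pair and reports the mean and worst case; the assay values and the chosen statistic are illustrative assumptions, not data or procedures from any particular project.

```python
import statistics

# Minimal sketch of summarizing duplicate-pair precision; assay values are invented.
originals  = [1.20, 0.85, 3.40, 0.15, 2.10]   # g/t Au, original assays
duplicates = [1.10, 0.90, 3.10, 0.18, 2.30]   # g/t Au, duplicate assays of the same samples

# Absolute relative pair difference: |a - b| / mean(a, b) for each pair
rel_diffs = [abs(a - b) / ((a + b) / 2) for a, b in zip(originals, duplicates)]
print(f"Mean relative pair difference: {statistics.mean(rel_diffs):.1%}")
print(f"Worst pair: {max(rel_diffs):.1%}")
```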

Box 4.1

QA/QC in Coringa Gold Project (Courtesy of Anfield Gold Corp.)

The QA/QC program included the insertion of two standards, two duplicates, and one blank every 42 samples. ◘ Table 4.1 shows a summary list of control samples. The company created sample «duplicates» on site using a method that introduces a disparity between the duplicates. The procedure employed was to place a one-half core split in a plastic bag and then to crush the core with a hammer. The resulting crushed material was hand-mixed and then divided into equal portions for shipment. This procedure introduces a bias in the sample because the splits are divided at a very coarse particle size. The bias is compounded by the fact that most of the mineralization occurs in discrete veins, which are typically represented by only a fraction of the particles at this large particle size. Because of this procedure, the comparison of the duplicates created by Magellan is poor. In contrast, the laboratory preparation duplicates compare well with each other, showing that the sampling program was unbiased. ◘ Figure 4.3a shows a comparison graph of the laboratory duplicates, and ◘ Fig. 4.3b shows a comparison of the laboratory repeat assays, which also compare well. Regarding the assay results of the blanks used in the QA/QC program, a total of 26 blanks returned detectable gold values. In all cases, the assay result was below 100 ppb Au. These discrepancies do not greatly affect the resource calculation, since the average grade of the resource is 3.92 g/t Au, or 47 times the highest gold value returned for a blank sample. Thus, analysis of the results from the blank insertion indicates that no contamination was apparent.

Table 4.1 Summary QA/QC program
Fig. 4.3
figure 3

Laboratory duplicate comparison a and laboratory repeat comparison b (Illustration courtesy of Anfield Gold Corp.)

In addition to blanks and duplicates, standards were used to check the accuracy of the assay results. A total of 12 different standards were used. The gold values of the assay standards cover the variation of the average gold grade for the resource, from 0.081 ppm Au to 14.89 ppm Au. All 12 samples of the 14.89 ppm Au standard report above the certified standard value, which may indicate that the laboratory was overreporting the gold grade. The certificate of analysis shows that the gold grade for the 14.89 ppm Au standard was determined by laboratory consensus, representing the average of eight subsample sets analyzed by 11 different laboratories. This indicates acceptable accuracy performance of the standard despite the fact that all samples return assays higher than the certified value. Overall, the QA/QC program for sample assays indicates acceptable performance of all standards and blanks, with only a few minor discrepancies.

4.2.2 Definition of Sample

From a practical viewpoint, it is impossible to gather all the components of a population for study unless the population itself is very small. For this reason, it is essential to resort to what is commonly known as a «sample.» There are many definitions of sampling, but the concept is quite elementary. For example, a sample is «a representative part or a single item from a larger whole, being drawn for the purpose of inspection or shown as evidence of quality,» and it is «part of a statistical population whose properties (e.g. physical and chemical) are studied to gain information about the whole» (Barnes 1980). Another definition of sampling is «the operation of removing a part convenient in size for testing, from a whole which is of much greater bulk, in such a way that the proportion and distribution of the quality to be tested (e.g. specific gravity, metal content, recoverability) are the same in both the whole and the part removed (sample)» (Taggart 1945).

Both definitions are very similar, the essential point being that the sample must be representative (◘ Fig. 4.4). This is the key to a successful sampling process. If the samples are not representative of the deposit, the rest of the evaluation is useless. There is no point in carrying out geological interpretation and modeling correctly if the initial data are wrong. Thus, the accuracy of a mineral resource or reserve calculation depends on the quality of the data gathering and handling processes used (Erickson and Padgett 2011). A large amount of sampling is carried out in the mineral industry, but little attention is often given to ensuring that the sampling is representative. Responsibility for sampling is often assigned to people who do not appreciate its significance, with cost rather than the representativeness of the sample being the main consideration. The quality of the subsequent analysis is thereby undermined, and mineral companies are exposed to enormous potential financial losses.

Fig. 4.4
figure 4

The sample must be representative

The successive steps of sampling must therefore be tested continuously, although it is important to bear in mind that the condition of representativeness of a sample obtained from a whole is never fulfilled where heterogeneous materials are sampled, unless the sample includes all the mineralization. Thus, «an orebody is a mixture of minerals in proportions that vary in different parts of the mass. As a consequence the proportion of contained metals also varies from place to place. Therefore, a single sample taken in any particular place would not contain the same proportion of metals as does the orebody as a whole except by a highly improbable coincidence. The probable error, which would be very large if only one sample were taken, decreases with the number of samples, but it never disappears completely unless the samples are so numerous and so large that their aggregate is equal to the orebody itself, in which case the orebody would be completely used up in the process of sampling» (McKinstry 1948).
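
The decrease of error with the number of samples noted in the quotation above can be made quantitative under the simplifying assumption of independent samples of equal support: the standard error of the mean grade is

$$ SE\left(\overline{z}\right)=\frac{s}{\sqrt{n}} $$

where s is the standard deviation of the sample grades and n is the number of samples; the error shrinks in proportion to the square root of n but, as McKinstry notes, never vanishes for a finite number of samples.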

Random and systematic errors involved in the collection, preparation, analysis, and evaluation of samples must be recognized and accounted for. In fact, this should be seen not as a problem but as an incentive. In this sense, Sarma (2009) affirmed that a good sample design must:

  1. result in a truly representative unit;

  2. lead to only a small error;

  3. be cost-efficient;

  4. be one that monitors systematic bias; and

  5. yield results that can be applied to the population with a fair degree of confidence.

The samples must also be representative from a spatial viewpoint, which means that the spatial coverage of the deposit must be adequate. Thus, the samples can be taken roughly on a regular or quasi-regular sampling grid (◘ Fig. 4.5), with each sample representing a similar volume or mass of the valuable mineralization. Furthermore, the most important rule for accurate sampling is that all components of the mineralization or other raw material must have the same probability of being sampled and forming part of the final assay sample. The logic of sampling is to collect a minimal mass (grams, kilograms, or tons) that matches a certain parameter (e.g., gold content) of a much larger mass (hundreds or thousands of tons) (Pohl 2011). It must be taken into account that, in the end, only a tiny portion of the mineral deposit is collected and that often less than one-millionth of the total mass of a deposit is actually drilled; this figure is easily obtained by estimating the total volume of the drillholes and the volume of the entire deposit and dividing one by the other.
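
As a minimal sketch of this back-of-the-envelope calculation, the figures below (hole diameter, metres drilled, and deposit dimensions) are purely hypothetical and serve only to show that the sampled fraction is typically of the order of millionths.

```python
import math

# Hypothetical figures, for illustration only
hole_diameter_m = 0.075               # 75 mm hole diameter
total_drilled_m = 50_000              # total metres drilled on the project
deposit_volume_m3 = 500 * 300 * 200   # simple box model of the deposit (m3)

drilled_volume_m3 = math.pi * (hole_diameter_m / 2) ** 2 * total_drilled_m
fraction = drilled_volume_m3 / deposit_volume_m3
print(f"Drilled volume: {drilled_volume_m3:.0f} m3")
print(f"Fraction of deposit sampled: {fraction:.1e}")   # about 7e-06, i.e. a few millionths
```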

Fig. 4.5
figure 5

Sampling grid in blasting

The type and number of samples collected depend on a range of factors which include (1) the type of mineral deposit and the distribution and grain size of the valuable phase; (2) the stage of the evaluation procedure; (3) whether direct access to the mineralization exists; (4) the ease of collection, which is related to the nature and condition of the host rock; and (5) the cost of collection, the funds available, and the value of the ore (Annels 1991). It is clearly incorrect to assume that taking many samples removes any errors in the sampling procedure. To obtain unbiased samples, the location of the sample in relation to the mineralization and waste is just as important. In fact, the accuracy of a sampling procedure is only truly known once all the mineralization has been mined, milled, and processed.

Obviously, the cost of intense sampling of a low-grade or low-value deposit (e.g., aggregates for construction) can be prohibitive. Moreover, the mode of occurrence and morphology of a mineral deposit have a considerable impact on the type and density of sampling and on the amount of material required. Indeed, sampling of vein deposits, where many veins are narrow, is quite different from sampling of stratiform deposits, where the mineralization tends to be thick (e.g., up to 30 m). Thus, a mineral deposit classification with sampling as one of its main goals has been proposed, taking into account the geometry, the grade distribution, and the coefficient of variation (◘ Table 4.2; Carras 1987).

Table 4.2 Mineral deposit classification based on geometry, grade distribution, and coefficient of variation (Carras 1987)

4.2.3 Steps in Sampling

To acquire accurate analytical data for resource estimation, it is indispensable to carry out a correct process of collecting samples (methodology, sampling pattern, and sample size), including a study of the ore with particular attention to the particle size distribution and the composition of the particles in each size class. Samples of several kilograms or even some tons are later cut to several grams, the so-called assay portion, which is then assayed for valuable elements; theoretically, this final aliquot must still replicate the targeted properties of the original large mass. The reduction in weight is around 1,000 times for a kilogram sample and 1,000,000 times for a sample mass of one ton. This process obviously involves errors, and Gy (1992) established a relationship between sample particle size, mass, and sampling error. Analytical errors are ascribed to laboratories and commonly arise from the selection of the portion for analysis. As mentioned above, these errors must be checked by an external control, submitting to the laboratory duplicate samples and reference materials of composition similar to that of the unknown samples.

To reduce errors in sampling, one solution is to divide the mineral deposit and the mineralization into distinct parts (a preliminary step in the sampling process). Taking samples of the previously defined types as separate units, instead of as only one large sample, can minimize natural variation and keep the sample weight to a minimum. This method is so-called stratified sampling, and it is very important if the separate types of mineralization require different mineral beneficiation techniques. Regarding the different steps in the sampling process, sampling, sample preparation, analysis, and interpretation in the final stages of exploration and mining are planned and carried out by a staff of geologists, chemists, statisticians, and engineers, who contribute their expertise to the interpretation of the sampling data. The importance of thorough, joint planning and interpretation is obvious, because they form the basis for an economic and technical evaluation of the mineral prospect and because of the large financial commitment that the development of a potential ore deposit requires (Gocht et al. 1988).

4.2.4 Sampling Methods

Sampling methods are as varied as the mines in which they are used. The most suitable type of sampling and the combination of methods used depend to some extent on the type of deposit being evaluated. For instance, conducting unbiased sampling in vein gold deposits presents particular challenges because the features of the mineralization and host rocks are extremely complex. Variance, however, can be reduced by carrying out a well-planned sampling procedure and by collecting samples as carefully as possible. The mine geologist or engineer responsible for the sampling process must select a sampling method, test it in a specific area, and then critically evaluate the results obtained. If these outcomes are sufficiently accurate within the economic limits determined by the mining company, the methodology can be adopted as a general rule in the project and/or mine.

In general, there are three hand sampling methods: channel, chip, and grab sampling. Other sampling techniques include pitting and trenching or drill-based sampling (diamond drilling and, in some cases, rotary percussive drilling are the main sampling techniques available to the geologist in the exploration of a mineral deposit). In fact, the most satisfactory method should ensure that the sample properly represents the deposit at the smallest cost. It is very important to bear in mind that whether the samples are collected at surface or underground is not in itself a significant factor; that is, the same procedure applies to sampling drill core from surface drilling and from underground drilling.

4.2.4.1 Channel Sampling

Channel samples (◘ Fig. 4.6) are suited particularly to outcrops, trenches, and underground workings. The method consists of cutting a relatively precise narrow channel of constant depth and width across the exposed width of the mineralization, typically a vein ore. The cut can be horizontal, vertical, or perpendicular to the dip of the ore. In the case of strongly preferred orientations (e.g., bedding), channels must be guided across the layering. The samples are collected across the full width of the vein, or over some uniform fixed length; in complex veins, any identifiable subdivisions should be sampled separately. In theory, if the channels were continuous and uniform, the channel sample would be similar to a drill core.

Fig. 4.6
figure 6

Channel sampling (Image courtesy of Martin Pittuck)

As far as possible, the channel is kept at a uniform width (e.g., 3–10 cm) and depth (e.g., 5 cm), although the spacing and length depend on the inhomogeneity in the distribution of the ore or the amount of material needed for analysis. The channel is best cut at a right angle to the ore zone, but if this is too difficult, the channel can be taken horizontally or vertically. As an example of the procedure, in the Cornish tin mines, a standard practice was to collect channel samples at 8–10 m intervals at the face on every other bench up the dip of the stope. Approximately 2 kg of material was collected to represent a length of channel not exceeding 50 cm (Annels 1991).

Samples are usually collected by hand and can be cut with a hammer and chisel (◘ Fig. 4.7) or an air hammer. The chips are collected on a plastic sheet laid on the floor of the working area, from which they are gathered and bagged. Accessibility and rock hardness determine the applicable sampling tools. If the quantity is large, it can be quartered before being placed in the sample bag. In hard rock, it is quite difficult to achieve the ideal channel unless a special mechanical diamond-impregnated disk cutter is used, so a reasonable approximation is generally accepted as satisfactory. The working area to be sampled must be cleaned thoroughly, using a wire brush or water, among other means. This is done to reduce the potential for contamination of the sample by loose fragments on the face being sampled.

Fig. 4.7
figure 7

Channel sample obtained with a hammer and chisel (Image courtesy of Martin Pittuck)

The main problem of channel sampling is related to the presence of soft minerals, since these are commonly broken preferentially. Thus, soft mineralization can be overrepresented in a sample, which imposes a high bias on the grade results. Conversely, soft gangue minerals can be overrepresented and produce an undervaluation of grades. This problem may be partially resolved by taking large samples or by taking separate samples from soft and hard zones, if possible. A channel commonly has a maximum length of 1.5 m, and longer samples must be divided into smaller parts. This subdivision is carried out based on the structures in the mineralization, changes in rock types, or differences in rock hardness. Although channel sampling is possibly the best method of delimiting and extracting a sample, the process is expensive, laborious, and time-consuming.

4.2.4.2 Chip Sampling

Chip sampling (◘ Fig. 4.8) is a modification of channel sampling used where the rock is too hard to channel sample economically or where little variation in the mineral content indicates that this type of sampling will provide results comparable to those obtained by channel sampling. Chip sampling is sometimes applied as an inexpensive method to check whether the ore is valuable enough to justify the more expensive channel sampling technique. It is the most common method used for underground grade control sampling. The advantage of chip sampling is its high productivity: the method is a rapid and easy way to obtain information about the mineralization, but the samples are less representative than in channel sampling. For this reason, this method should not be used for quantitative ore reserve calculations.

Fig. 4.8
figure 8

Chip sampling (Image courtesy of Gold One Group Limited)

Chip samples are taken by chipping over the whole area or a portion of the face, for example using a grid laid out on the face of an exposed outcrop. Where a line is sampled, rock chips are taken over a continuous band across the exposure approximately 15 cm wide using a sharp-pointed hammer or an air pick. This band is usually horizontal and samples are collected over set lengths into a cloth bag, usually 15 cm by 35 cm, and equipped with a tie to seal it. At Sigma Mine, Val d’Or (Canada), rock chips are taken at intervals of 0.25–0.5 m along horizontal lines marked on the face. Each line is spaced at 0.75 m from its neighbor and provides between 3.5 and 5.0 kg of material which is sent for assay (Annels 1991).

A general requirement is to collect small chips of equal size, or in some cases coarser lumps, at uniform intervals over the sampling band or area. The distance between any two points, horizontally or vertically, must be the same on any one face and can vary with the character of the ore. The recommended number of points depends on the variability of the ore: 12–15 for uniform to highly uniform deposits, 20–25 for nonuniform deposits, and 50–100 if the mineralization is extremely uneven (Peters 1978). The possibilities for unintentional or intentional bias due to variable chip sizes and the oversampling of higher-grade patches or zones are high. Effort should be made to keep the sample volume relatively constant and proportional to the width of the ore, and care must be taken to collect chips of approximately the same size across the zone being sampled; chip points should also be as regularly spaced as possible. A composite sample is commonly obtained to establish the average grade of the ore present.

4.2.4.3 Grab Sampling

Grab sampling is usually performed as the inexpensive and easy option, but it is the least preferred sampling method; it consists of sampling already broken material (◘ Fig. 4.9). The method involves collecting large samples from the stockpile at a face or at a drawpoint, or from the trucks or conveyor belts transferring the mineralization from these points. The accuracy of this sampling method is frequently in doubt, and the sampling bias is known to be large. Care must be taken that the sampler is not selective and does not tend to select only large or rich-looking fragments; some correlation usually exists whereby the larger fragments are enriched or depleted in the critical component of value. Impartiality is rather difficult to achieve unless rigorous precautions are taken, and this is one of the disadvantages of the method (Storrar 1987).

Fig. 4.9
figure 9

Grab sampling

However, if the grab sample is composed of enough fragments and is taken over a large enough area, it can sometimes represent the grade of the mineralization in that area. Thus, in disseminated or massive deposits where the ore limits lie beyond the accessible site, a composite of several pieces from a freshly blasted face can be the most successful sample. In general, grab sampling is not considered reliable, since many independent variables can affect this type of sampling process. For example, if the ore occurs in the softer fraction and a proportional amount of the resulting fines is not sampled, the results are clearly erroneous. Because of the lack of a significant dimension and the commonly biased collecting procedure, grab sampling can be used neither for volume estimation nor in mineral deposit evaluation.

It is commonly accepted that the value of a grab sample is only applicable to the aliquot that was assayed. Thus, a grab sample from a stockpile gives information just on the sample itself and is unsuitable for any accounting purposes. The main problem «is that the material in stockpiles or the material loaded into trucks is rarely sufficiently mixed to be representative of the block of ground from which it was drawn; also, material collected will be from the surface of the pile and rarely from its interior» (Annels 1991). Grab sampling «works better in more homogeneous low-nugget effect mineralization types such as some disseminated base metal deposits, while in heterogeneous high-nugget effect types (e.g., gold, especially if coarse gold is present), strong bias is expected» (Dominy 2010). In brief, nugget effect means error.

One of the greatest problems with grab sampling is related to the size of sample needed, individual samples typically ranging between 1 and 5 kg. The few kilograms of sample obtained from a pile are therefore commonly inadequate, which leads to a large error. In most cases, it is likely that tons of material would be required for each sample. One approach to stockpile sampling is that employed at the gold mines in Val d'Or, Quebec, where the «string and knot» method is used. According to Annels (1991), «the broken ground from each blast at the face is transported to surface and spread over a concrete pad; three of four strings, with knots at 0.5 m intervals, are then placed over the pile at 3 m intervals and, at each knot, a sample is taken and its weight recorded, along with the position of the knot; each sample is assayed and the result weighted by the relevant weight to obtain the overall grade.»

4.2.4.4 Bulk Sampling

Bulk sampling is the term usually used to describe the removal of large quantities of ore for the purpose of testing mineral content. Before taking a decision to develop a mine, an explorer can extract a bulk sample of the material to be mined for further metallurgical or chemical testing and refinement of the proposed mining procedures. Thus, bulk sampling is carried out only at an advanced stage of exploration, when a decision to mine is required. Bulk samples are also used for developing the beneficiation flow sheet and maximizing the recovery efficiency in mineral processing. Moreover, in parallel with the bulk sampling and geological appraisal work, the geomechanical and mining features of the mineral deposit can commonly be studied in more detail.

Extraction of a bulk sample (e.g., 100 tons) commonly involves excavation of a small pit or underground operation. Samples are dispatched for analysis in strong bags or in steel drums. The primary purpose is to collect a representative sample and to reliably determine the grade for comparison with the resource estimate; this aspect is essential for advanced mineral projects with a nugget problem (e.g., gold mineralization). Therefore, an integral part of a bulk sampling program is the verification of the geological interpretation used for a resource estimate, for example, where the grades of diamond drill core or reverse circulation drilling chips are suspect due to poor drilling conditions. A typical bulk sampling and sample preparation protocol relies on several stages of comminution, each followed by mass reduction through splitting (◘ Fig. 4.10). While expensive, bulk sampling provides relatively cheap insurance against a failed mine investment as part of a pre-feasibility or feasibility study. Many minerals and metals, especially industrial minerals, also require testing for the quality of the concentrate or mineral produced. In these cases, large-scale samples of the concentrates or products may be needed by the customer.

Fig. 4.10
figure 10

Bulk sample plant where kimberlite samples are being treated (Image courtesy of De Beers)

Bulk sampling is also typically used in the exploration of diamond-bearing kimberlites. Bulk samples are the first stage in establishing the economic parameters of the kimberlites, with the objective of obtaining information that supports the decision on a more detailed program of drilling to determine kimberlite size, morphology, geology, and grade distribution. In these deposits, the economic evaluation is usually carried out in four stages, and at the third stage a limited bulk sampling program (of the order of 200 tons) must be carried out to provide the diamond grade expressed as carats per ton (1 carat = 0.2 g). A bulk sampling procedure in diamond kimberlites (bulk samples typically 50–200 tons) usually costs several hundred thousand USD, if not several million (Rombouts 2003). If macro-diamonds are present, only a mini-bulk sample is necessary, obtained either from drill core or from localized pit sampling. Typical sample sizes of these mini-bulk samples range from 500 kg to several tons.

4.2.4.5 Pitting and Trenching

If the soil is thin in a mineralized area, the definition of bedrock mineralization is commonly carried out by the examination and sampling of outcrops. However, where the cover is thick, a sampling program using pitting or trenching (or drilling) is imperative. In these methods, heavy equipment is used to clear surface soil and expose the bedrock. Thereafter, trenches or pits are excavated into the rock to expose ore zones for sampling (◘ Fig. 4.11). Despite their relatively shallow depth, pitting and trenching have several benefits in comparison with drilling, such as the comprehensive geological logging that can be carried out and the large, undisturbed samples that can be obtained. Pits and trenches can be dug by bulldozer, excavator, or even by hand, excavators commonly being much quicker, cheaper, and environmentally less harmful than bulldozers.

Fig. 4.11
figure 11

Trenching in progress (Image courtesy of Petropavlovsk)

In general, pitting and trenching can often be regarded as special cases of bulk sampling. The advantages of pits and trenches are that they permit the accurate sampling of mineralized horizons and they facilitate the collection of very large samples, which is particularly important in the evaluation of some types of mineral deposits such as diamondiferous or gold deposits. If the terrain is unfavorable for trenching or if greater depth of penetration is required, drilling techniques must be employed. In some cases, the pit can be sunk without wall support, but correct safety procedures are crucial if there is any possibility of the sides caving in or of rocks being dislodged from the sides (MacDonald 2007).

Pitting is usually employed to test shallow, extensive, flat-lying bodies of mineralization, buried heavy mineral placers being an ideal example. In tropical regions, thick lateritic soil constitutes optimal conditions for pitting, and if the soil is dry, pits to 30 m in depth can be safely excavated. The sinking of 1 m diameter pits through the overburden into weathered bedrock has been a standard practice in Central Africa, where exposure is poor due to the depth of weathering. Circular pits, 5–10 m apart, are sunk to depths of 10–15 m along lines crossing the strike of geochemical anomalies to allow the geologist to cut sampling channels in the pit wall and to identify the bedrock type, structure, and mineralization, if present (Annels 1991). Pitting is a slow, labor-intensive exercise, and the depth of penetration can be limited by a high water table, the presence of gas (CO2, H2S), or collapse due to loose friable rubble zones in the soil profile, as well as by hard bedrock.

Trenches are commonly used to expose steeply dipping bedrock buried below shallow overburden, and they are useful for subsequent channel sampling where bulk sample treatment facilities are not available (◘ Fig. 4.12). Excavated depths of up to 4 m are common in trenches, and they can be cut to expose mineralized bedrock where the overburden thickness is not great (<5 m). Most trenches are less than 3 m deep because of their narrow width (<1 m) and their tendency to collapse.

Fig. 4.12
figure 12

Results of channel sampling in a trench (Illustration courtesy of Nouveau Monde Mining Enterprises Inc.)

4.2.4.6 Sampling Drillholes

Although expensive, diamond drilling has many advantages over other sampling techniques in that:

  1. a continuous sample is obtained through the mineralized zone;

  2. a constant volume per unit length is maintained; this is very difficult to achieve in both chip and channel sampling;

  3. good geological, mineralogical, and geotechnical information can be obtained as well as assay information;

  4. problems of contamination are minimal because the core has good clean surfaces; where contamination does exist, the core can be easily cleaned using water, dilute HCl, or industrial solvents; and

  5. drilling allows samples to be taken in areas remote from physical access (Annels 1991).

These methods are now used routinely, especially for the evaluation of large ore bodies, where abundant data are needed from what would otherwise be inaccessible parts of a deposit. The mining geologist tends to play only a supervisory role in chip, channel, and grab sampling in a mine, but direct involvement in the logging and assaying of drill cores is essential.

Either solid rock core or fragmented or finely ground cuttings are brought to the surface by drilling and sampled for assay (◘ Fig. 4.13). Cuttings are either sampled by machine as they reach the surface or piled up for later subsampling. Samples are collected at depth intervals of 1 m or more, depending on the variability of the mineralization. In this sense, the quantity of cuttings from a single drillhole can be huge, and the sampling problem is not unimportant (Sinclair and Blackwell 2002). Drill cuttings can generally be reduced in mass by riffling to generate samples of a handy size for further subsampling and analysis. In this sampling method, it is essential that as much as possible of the mineralization in a specific drilled interval is recovered. The RC drill recovers broken rock ranging from silt size up to angular chips a few centimeters across. The total mass of cuttings produced in each drilled interval is then collected from the cyclone, and the material should be routinely weighed, the common weight of a 1 m interval being about 25–30 kg.

Fig. 4.13
figure 13

Samples with cuttings

In diamond drilling, core recovery should be 80% or more for an accurate evaluation, although even at this level of recovery it is necessary to establish whether losses are random or whether specific types of mineralization or gangue are lost preferentially, yielding a systematically biased result. Once the core has been brought to the surface, it should be washed and then examined to ensure that all the sections of core fit together and that none have been misplaced or accidentally inverted in the box. After the core is in the correct order, the core recovery is measured throughout the mineralized interval, and where losses have occurred, an attempt is made to assign these to specific depth ranges in the core boxes. Core is commonly split along the main axis, one-half being retained for geologic information and the other half providing material for analysis. The decision to use half core or quarter core (◘ Fig. 4.14) as the sample for assay is based on the requirement for a sample size adequate to overcome any nugget effects. In general, half-core split lengthwise is the most common amount taken for assay. Core splitting can be done with a mechanical splitter or with a diamond saw (◘ Fig. 4.15), sawing being the standard and preferred way to sample solid core. Thus, the core is sawn lengthways into two halves using a diamond-impregnated saw. The diamond saw also gives a flat surface on which the mineralization can be examined with a hand lens and on which intersection angles of bedding or vein contacts can be measured with ease.

Fig. 4.14
figure 14

Half and quarter core as samples (Image courtesy of Pedro Rodríguez)

Fig. 4.15
figure 15

Core cutting with a diamond saw (Image courtesy of Euromax Resources)

Half-core must be stored safely because it is a crucial background material with which to develop new ideas on both geologic and grade continuity as understanding of a deposit evolves (Vallée 1992). For this reason, photographing the split core in the core boxes is one of the most widely used procedures to preserve evidence of the character of the core, and it is especially needed if all of the core is consumed in assaying or in testing milling procedures. The next stage is the subdivision of the split core into sample intervals. There are many criteria that could be taken into account, and a decision has to be made as to what information is most important and what may be lost without too great an impact. To some extent, the method that will be used to compute the ore reserves will also play a role in the final decision (e.g., classical methods or geostatistics – see ► Sect. 4.4.6) (Annels 1991).

4.2.5 Sampling Pattern and Spacing

The sampling pattern is a consequence of the sampling method, the accessibility of the site, the objectives of the project, and the further requirements for statistical analysis of the data. For this reason, uniform grid sampling is preferred for deposits of any appreciable size so that optimal statistical coverage can be obtained. In practice, the final pattern is generally a compromise between what is preferable and what is convenient or economical. Since sampling is costly, the main goal in optimizing a sampling pattern is to produce just the number of samples required to represent the grade and dimensions of an ore body. It is essential to take enough ore samples to obtain an estimate sufficiently precise to guide the evaluation of mining, but also to avoid the expense of taking unnecessary samples.

A relatively widely spaced sampling pattern can be useful for the delineation of the mineral deposit and for calculating resource estimates, or where a geologic model can be established with confidence. More closely spaced control data are needed for local estimation, especially where the block size for estimation procedures is clearly smaller than the drillhole spacing of the first prospecting phase (Sinclair and Blackwell 2002) (◘ Table 4.3). A systematic grid of samples taken normal to the ore zone is commonly the preferred pattern because it provides good statistical coverage. Sampling patterns evolve as the mineral deposit evaluation process advances through successive steps, and large and relatively uniform ore deposits may be effectively sampled at intervals as great as 100 m or even 200 m. In less regular mineral deposits, for instance gold deposits, the following general guideline can be used: a drillhole spacing of 25–30 m is required for measured resources and about 50 m for indicated resources, and inferred resources are rarely reported where the spacing is more than 100–120 m.

Table 4.3 Drilling grid spacing used for exploration and development in nickel laterites

Perhaps the most worrying question to answer is whether a deposit is being under- or over-drilled. The best sampling interval is commonly based on an understanding of the nature of the deposit and on empirical studies of predicted and realized grades in blocks of ground. Different statistical methods have been used in attempts to resolve this problem, such as those based on the coefficient of variation (Koch and Link 1970), the correlation coefficient (Annels 1991), Student's t-distribution (Barnes 1980), or successive differences (De Wijs 1972), among others. In fact, the coefficient of variation serves not only to guide the number of ore samples to be taken in order to obtain a specified precision for an unsampled ore deposit; it also serves as a guide to the form of statistical distribution that is likely to be appropriate for data analysis and as a measure to control the quality of sampling (Koch and Link 1970).
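
A common rule of thumb based on the coefficient of variation, sketched below under the simplifying assumptions of independent samples and an approximately normally distributed mean, estimates the number of samples needed to achieve a target relative precision; the coefficient of variation, confidence level, and precision used here are illustrative values only.

```python
# Minimal sketch: number of samples for a target relative precision of the mean grade,
# assuming independent samples and an approximately normal distribution of the mean.
z = 1.96          # two-sided 95% confidence factor
cv = 0.80         # coefficient of variation of the sample grades (std/mean)
rel_error = 0.15  # desired relative precision of the mean grade (+/- 15%)

n = (z * cv / rel_error) ** 2
print(f"Approximate number of samples required: {n:.0f}")   # about 109
```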

There is no doubt that the semivariogram is the best estimator of the sampling interval where sampling is done by drilling (see ► Sect. 4.4.6.2). According to the range of the semivariogram, which is a measure of correlation among samples, a critical distance can be defined; that is, the optimum spacing between drillholes or sample locations in a particular direction is indicated by the range of the semivariogram, and samples taken at a greater distance would miss significant correlation. It is important to bear in mind that more drillholes do not always imply more precision in reserve estimates. ◘ Figure 4.16 (Annels 1991) is a good example of this assertion, since the relationship between drilling grid size, number of holes drilled, and the precision of reserve estimates is not linear, and the maximum precision is not strictly related to the maximum number of drillholes. In other words, beyond a certain point further drilling improves the confidence only marginally.
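
The sketch below illustrates, on synthetic one-dimensional data, how an experimental semivariogram might be computed; the sample positions, grades, and lag tolerance are invented, and in practice dedicated geostatistical software would be used. The lag at which the semivariogram levels off (the range) suggests the maximum spacing at which samples remain correlated.

```python
import numpy as np

# Minimal sketch of a 1-D experimental semivariogram on synthetic grades.
x = np.arange(0, 200, 10.0)                             # sample positions along a line (m)
rng = np.random.default_rng(0)
grades = 2.0 + np.cumsum(rng.normal(0, 0.3, x.size))    # spatially correlated toy grades

def semivariogram(x, z, lags, tol):
    """Classical estimator: gamma(h) = mean of 0.5*(z_i - z_j)^2 over pairs ~h apart."""
    gammas = []
    for h in lags:
        sq = [0.5 * (z[i] - z[j]) ** 2
              for i in range(len(x)) for j in range(i + 1, len(x))
              if abs(abs(x[i] - x[j]) - h) <= tol]
        gammas.append(np.mean(sq) if sq else np.nan)
    return np.array(gammas)

lags = np.arange(10, 110, 10.0)
for h, g in zip(lags, semivariogram(x, grades, lags, tol=5.0)):
    print(f"lag {h:5.0f} m   gamma = {g:.3f}")
```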

Fig. 4.16
figure 16

The relationship between drilling grid size, number of holes drilled, and the precision of reserve estimates for the Offin River placer, Ghana (Annels 1991)

4.2.6 Sample Weight

A long-recognized pitfall of ore reserve estimation is the dependence between sample size and assay distribution, often referred to as the volume/variance relationship. Mathematically, samples are treated as point values without dimensions, but in reality samples are taken at many different support sizes. It is clearly observed that as the support size increases, the variance of the assays decreases. Thus, it is crucial in sampling to estimate the smallest sample mass that guarantees that a sample is representative of the whole. The initial weight of a sample must be representative, but not too large, since reducing the bulk of a sample for chemical analysis is time-consuming and expensive. The appropriate weight is influenced by the following factors:

  1. The distribution of the ore: the initial weight can be smaller in deposits with a regular distribution of useful minerals, such as massive and banded structures.

  2. The size of the ore fragments: the coarser the useful minerals, the higher the initial weight of the sample should be, and vice versa.

  3. The specific gravity of the mineralization: the higher the specific gravity of a useful mineral, the larger the initial weight of the sample must be.

  4. The mean grade of the ore: the lower the average content of useful mineral, the larger the initial weight of the sample must be.

From an empirical point of view, many tables for calculating the minimum sample weight are available in the literature. For instance, ◘ Table 4.4 shows the data from EN 932-2 (1999) used to select the minimum permissible sample weight in aggregates for a given particle size. On the other hand, there are several formulas for estimating the initial weight of the sample, such as the Richards-Czeczott formula (Kuzvart and Bohmer 1978), the Royle formula (Royle 1992), or the Page formula (Page 2005).

Table 4.4 Minimum permissible sample weight for a given particle size in aggregates (EN 932-2)

Thus, the necessary weight of sample (Q) can often be determined using the Richards-Czeczott formula:

$$ Q=k\times {d}^2 $$

where d is the size of the largest grain of useful mineral and k is a constant expressing the qualitative variation of the deposit. This constant ranges from 0.02 for deposits with uniform distribution of the economically valuable component (e.g., large stratabound sedimentary deposits) to 1.0 for deposits with extremely irregular distribution of the useful mineral (e.g., diamond or gold deposits).
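
A minimal sketch of this rule is given below; the text does not state the units, so Q in kilograms and d in millimetres, commonly used with this formula, are assumed here, and the two k values simply span the quoted range.

```python
# Minimal sketch of the Richards-Czeczott rule Q = k * d**2.
# Units (Q in kg, d in mm) are assumed, since the text does not state them.
def richards_czeczott(d_mm: float, k: float) -> float:
    """Necessary sample weight for a largest useful-mineral grain of size d_mm."""
    return k * d_mm ** 2

# k = 0.02 for very uniform deposits, k = 1.0 for extremely irregular ones
print(richards_czeczott(d_mm=5.0, k=0.02))   # uniform stratabound deposit -> 0.5
print(richards_czeczott(d_mm=5.0, k=1.0))    # highly irregular gold deposit -> 25.0
```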

Another way to establish the initial weight of the sample is to apply the Royle formula (Royle 1992). A simple expression to give a minimum safe weight (MSW) of sample can be derived from the expression: weight of metal in the largest mineral particle divided by MSW equals maximum contribution made by this particle to the analysis. If the largest mineral particle in a deposit contains A grams of metal and the grade is expressed in percent metal and if this particle is not to contribute more than G% to the analysis, then:

$$ \frac{A}{MSW}=\frac{G}{100}\kern0.875em \mathrm{or}\kern0.875em MSW=\frac{100A}{G} $$

For example, if the largest galena grains in a mineralization are spherical and are 2 cm in diameter, then the weight A of contained lead in such a grain is 27.2 g. If G is set to 0.2% for example, then:

$$ MSW=\frac{100\times 27.2}{0.2}=13{,}600\ \mathrm{g}=13.6\ \mathrm{kg} $$
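
The short sketch below reproduces this galena example; the galena density and the lead mass fraction of PbS used to obtain A are standard values assumed here rather than figures quoted in the text.

```python
import math

# Sketch of the Royle minimum-safe-weight rule, reproducing the galena example.
diameter_cm = 2.0             # largest galena grain, spherical
galena_density = 7.5          # g/cm3, approximate standard value
pb_fraction = 0.866           # mass fraction of Pb in PbS, standard value

grain_volume = (4.0 / 3.0) * math.pi * (diameter_cm / 2) ** 3      # cm3
A = grain_volume * galena_density * pb_fraction                    # g of Pb (~27.2 g)

G = 0.2                       # maximum contribution of the grain to the assay, in % Pb
msw_g = 100.0 * A / G
print(f"A = {A:.1f} g Pb, MSW = {msw_g / 1000:.1f} kg")            # ~13.6 kg
```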

In the Page method, the size of the sample is chosen so that the largest particle is diluted by the bulk of the sample to the same extent as the valuable mineral is diluted in the deposit. Therefore:

$$ {V}_{\mathrm{s}}={V}_{lp}\times {V}_{\mathrm{d}} $$

where Vs is the volume of the sample, Vlp is the volume of the largest particle, and Vd is the volumetric dilution. Dilution is the inverse of concentration, so the reciprocal of the grade measures the dilution of the mineral by the country rock. The mineral-volumetric grade is a suitable way of expressing this proportion. An example is 2.5 cm3/m3 native gold, meaning that 2.5 cm3 of native gold is likely to be found in 1 m3 of country rock. Continuing the example, for the mineral-volumetric grade 2.5 cm3/m3, that is, 2.5 cm3 per 1,000,000 cm3, the corresponding dilution is 1,000,000/2.5 = 400,000. Thus, considering spherical particles 2 mm in size with a density of 16.5 g/cm3 in a country rock of density 2.75 g/cm3, where the mineral volumetric dilution is 400,000, the sample mass will be 4,608 g.
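
The sketch below reproduces this native-gold example step by step; the figures are those given in the text, and the particle is assumed spherical as stated.

```python
import math

# Sketch of the Page approach, reproducing the native-gold example in the text.
grade_cm3_per_m3 = 2.5                      # mineral-volumetric grade of gold
dilution = 1_000_000 / grade_cm3_per_m3     # volumetric dilution (= 400,000)

particle_diameter_cm = 0.2                  # 2 mm largest gold particle, assumed spherical
v_lp = (4.0 / 3.0) * math.pi * (particle_diameter_cm / 2) ** 3   # cm3
v_sample = v_lp * dilution                                       # cm3 of rock required

rock_density = 2.75                         # g/cm3
sample_mass_g = v_sample * rock_density
print(f"Sample volume ~{v_sample:.0f} cm3, mass ~{sample_mass_g:.0f} g")   # ~4,608 g
```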

4.2.7 Sample Reduction and Errors

Most field and mine samples need to be reduced in size for laboratory assay. In general, a few grams of homogeneous, very fine material at 100–150 μm size are needed by the laboratory for chemical analysis. This process of reduction is achieved by progressive comminution to ensure that the reduced volume of the largest valuable particle, whether included in or excluded from the reduced sample, does not cause an unacceptable difference in the assay result; the process is also called subsampling (e.g., Pohl 2011). The term designates procedures that reduce the total mass sampled to the few grams of powder in a small bottle that is all a modern laboratory requires for analysis. Thus, the reduction factor is around 1,000 for a kilogram sample and 1,000,000 for a one-ton sample, if 1 g of sample is required for analysis. However, the final weight of a sample is usually set at 0.5–1 kg because a certain number of samples are retained as duplicates by the chemical laboratory and by the mining company. Regarding the size of the particles, in practice grinding would be continued to pass a 200 μm sieve for fire assay and even finer where chemical dissolution is involved. The normal result of an inadequate sample reduction system is a large random error in assays, comprising sampling plus analytical error. Obviously, these large errors contribute to a high nugget effect.

There are two main sources of error in the sampling process: (a) errors related to the inherent properties of the material being sampled and (b) errors arising from inappropriate sampling procedures and preparation. Errors can be introduced at many stages during sampling of an ore deposit and also during crushing and splitting of the sample in preparation for analysis. In the first case, the sample taken can be too small to be truly representative of the large block of ground to which its value will be assigned, or, in the case of diamond drill sampling, the two halves of the core can contain different concentrations of mineralization. In general, sampling errors can be classified into four main groups: (a) fundamental error, which is due to the irregular distribution of ore values in the particles of crushed ore to be sampled; (b) segregation and grouping error, which results from a lack of thorough mixing and from the way samples are taken; (c) integration error, which results from the sampling of flowing ore; and (d) operating error, which is due to faulty design or operation of the sampling equipment, or to the negligence or incompetence of personnel (Assibey-Bonsu 1996). Sampling protocols must be designed so that they minimize the errors introduced through improper procedures (the second to fourth groups). The fundamental error is the only error that cannot be eliminated by proper sampling procedures, because it will be present even if the sampling operation is perfect.

The preparation of samples depends on their size, their physical properties, and the analytical method to be used. Samples are reduced by crushing and grinding, and the resulting finer-grained material is separated by halving or quartering into discrete mass components for further reduction. For this reduction, a relationship between sample particle size, mass, and sampling error was established (Gy 1979, 1992). It has been widely accepted and sometimes criticized. The Gy relationship gives an expression for the relative variance (error) at each stage in the sample reduction process (the fundamental error). Therefore, it is possible either to calculate the variance for a given sample size split from the original or to calculate what subsample size should be used to obtain a specified variance at a 95% confidence level.
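
A commonly quoted simplified form of the Gy relationship is sketched below; the shape, granulometric, composition, and liberation factors used are illustrative assumptions, since the appropriate values depend on the ore being sampled.

```python
# Minimal sketch of the simplified Gy fundamental-error relationship:
#   sigma_FE^2 = f * g * c * l * d**3 * (1/Ms - 1/ML)
# All factor values below are illustrative assumptions, not project data.
f = 0.5           # particle shape factor (dimensionless)
g = 0.25          # granulometric (size-distribution) factor
c = 500.0         # mineralogical composition factor (g/cm3), grade-dependent
l = 0.2           # liberation factor (0-1)
d = 1.0           # nominal top particle size (cm)
Ms = 2_000.0      # sample mass (g)
ML = 1_000_000.0  # lot mass (g)

rel_variance = f * g * c * l * d ** 3 * (1.0 / Ms - 1.0 / ML)
rel_std = rel_variance ** 0.5
print(f"Relative standard deviation of the fundamental error: {rel_std:.1%}")   # ~7.9%
```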

In any reduction system, the most sensitive pieces of equipment are the crushers and grinders. Each one works efficiently within a limited range of feed weight and size reduction. Depending on the primary size of the fragments, the sample must be crushed in jaw crushers and then ground and pulverized to the final analytical size in rotary mills or disk mills. The reduction of sample weight is carried out by the riffle division method or by the coning and quartering method. In the riffle division method, the sample should be mixed well, placed with a uniform thickness in the riffle tray, and divided into two almost equal parts (◘ Fig. 4.17). Either of the two divided samples should be selected at random each time the sample is reduced.

Fig. 4.17
figure 17

Riffle division method (Image courtesy of Alicia Bermejo)

The sample splitters are commonly called riffle or chute splitters and consist of a series of chutes that run in alternating directions, producing two randomly divided, equal-sized fractions. One of the fractions can then be split again, and the process can be repeated until a sample of the desired size is generated. If a material is repeatedly split into smaller fractions using a riffle, the errors from each splitting step add together, resulting in increasing variance between samples. The rotary, or spinning, riffle is the best method to use for dividing material into representative samples. In these riffles, the material to be sampled is fed to a feeder, which drops the material at a uniform rate into a series of bins on a rotating table (◘ Fig. 4.18).

Fig. 4.18
figure 18

Rotary riffle (Image courtesy of Anglo American plc.)

Where a mechanical splitter is not available for separating finely crushed material or where the fragments in a bulk sample are too large to be handled, the sample can be reduced by the method of coning and quartering (◘ Fig. 4.19). In this method, the crushed ore shall be mixed well and then scooped into a cone-shaped pile. After the cone is formed, it shall be flattened by pressing the top of the cone with the smooth surface of the scoop. It is then cut into quarters by two lines that intersect at right angles at the center of the cone. The bulk of the sample is reduced by rejecting any two diagonally opposite quarters.

Fig. 4.19
figure 19

Coning and quartering method

A simple rule in sample reduction is that all fragments must be crushed to such a size that the loss of any single particle would not affect the analysis. How this rule translates into numbers depends on the accuracy required, the contrast in value between ore and rock particles, and the size of the sample. Empirical guidelines for the maximum allowable particle size with respect to approximate sample weights are shown in ◘ Table 4.5 (Peters 1978). For a very homogeneous ore, a somewhat larger particle size would be acceptable. A sequence of crushing and splitting can be outlined in which each step is selected according to values determined by a variant of the Richards-Czeczott formula (see previous section) (Kuzvart and Bohmer 1978).

Table 4.5 Empirical guidelines for the maximum allowable particle size with respect to approximate sample weights (Peters 1978)

Regarding analytical errors, assaying can be done by a commercial or company laboratory. In any case, a certain percentage of the samples, usually a minimum of 10%, should be assigned a new sample number and resubmitted for a repeat analysis to provide a check on the analytical precision of the laboratory. It is also recommended practice to send a percentage of the samples to a different laboratory for accuracy comparison (◘ Fig. 4.20). Should there be any doubt as to the accuracy of the particular laboratory used, a few standard samples, including a blank, should be submitted for analysis. Control samples can be included in the sampling stream, before shipment to the assay laboratory.

Fig. 4.20
figure 20

Samples analyzed in a different laboratory for accuracy comparison

Three types of errors can occur when making measurements in a laboratory: «(a) random errors, which are usually due to an inherent dispersion of samples collected from a population; as the number of replicate measurements increases, this type of error is reduced; (b) instrument calibration errors, which are associated with the range of detection of each instrument; uncertainty about the calibration range varies; and (c) systematic errors or constant errors, which are due to a variety of reasons such as biased calibration-expired standards, contaminated blank, interference (complex sample matrix), inadequate method, analyte instability, among others» (Artiola and Warrick 2004).

4.3 Determination of Grades

Evaluation of grade distribution and estimation of overall grades are the first quantitative analyses of the grade data and are basic tools to provide inputs to the resource/reserve estimation. The grade of ore in a portion of a mine or in an entire deposit is estimated by averaging together the assay returns of the samples that have been taken. The process involves basically two groups of estimation methods: weighting techniques and statistical techniques (mean, median, geometric mean, and Sichel’s t estimator). The former are commonly applied to the estimation of grades in drillholes, whereas statistical estimators of grade require that the samples be randomly, but uniformly, distributed throughout the area being evaluated and that the values be far enough apart to be independent variables.

4.3.1 Weighting Techniques

Grade estimations involving assay intervals in drillholes are sufficient for a general estimate of a potential mineral deposit in the early steps of prospection. One of the most frequent calculations is to compute a grade value for a composite sample (e.g., the average grade of a channel sample from data intervals of several lengths) by developing a weighted average for unequal sample lengths and/or widths. Thus, each sample grade in an intersection of a deposit can be weighted in a variety of ways (Annels 1991). The first is simply length-weighting, in which the sum of the products of intersected length and grade is divided by the sum of the intersected lengths. This method can be expressed mathematically as follows:

$$ G=\frac{{{\displaystyle \sum}}_{i=1}^n\left({G}_i\times {L}_i\right)}{{{\displaystyle \sum}}_{i=1}^n\left({L}_i\right)} $$

where G indicates the weighted grade, n is the number of samples combined, and G_i and L_i are the grades and lengths of each sample, respectively. Sometimes a thickness × grade (metal accumulation) value is computed and utilized to estimate the minimum mining width.

All these calculations assume that there is no significant difference in the specific gravities of the different types of material and thus that equal volumes represent equal weights. The assumption is usually not far from the truth, but if certain portions of the ore body consist of material that is considerably heavier or lighter than the average, it can be necessary to weight the samples not only for volume but also for specific gravity. This often occurs in vein deposits where massive sulfide and disseminated mineralization are present together. The previous equation should then be modified as follows:

$$ G=\frac{{{\displaystyle \sum}}_{i=1}^n\left({G}_i\times {L}_i\times S{G}_i\right)}{{{\displaystyle \sum}}_{i=1}^n\left({L}_i\times S{G}_i\right)} $$

where SG_i is the specific gravity of each sample. Precise application of the principle of weighting for specific gravity would require specific gravity determination for each sample, a practice which is not common and, ordinarily, is hardly warranted. In some ores, the specific gravity is closely related to the assay value so that it is feasible to construct a curve based on a limited number of determinations and then read off the specific gravity corresponding to any given metal content.
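The length-weighted and length × specific gravity-weighted averages defined above are straightforward to compute; the following Python sketch implements both (the sample grades, lengths, and specific gravities are hypothetical).

def weighted_grade(grades, lengths, sg=None):
    """Weighted average grade; weights are L_i, or L_i * SG_i when specific gravities are given."""
    if sg is None:
        sg = [1.0] * len(grades)
    weights = [l * s for l, s in zip(lengths, sg)]
    return sum(g * w for g, w in zip(grades, weights)) / sum(weights)

grades  = [1.2, 0.8, 3.5, 0.4]   # % Cu (hypothetical)
lengths = [1.0, 1.5, 0.7, 2.0]   # m
sg      = [2.9, 2.8, 4.1, 2.7]   # specific gravity

print(weighted_grade(grades, lengths))        # length-weighted grade
print(weighted_grade(grades, lengths, sg))    # weighted by length x specific gravity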

Another weighting method is frequency weighting. It was originally developed for the evaluation of the reserves of Witwatersrand gold ores (Watermeyer 1919). It requires the production of a frequency histogram or curve from a large assay database which is assumed to be representative of the deposit from which the intersection has been made. For each assay value (G_i) obtained during the sampling, the corresponding frequency of occurrence (F_i) is read off and used to weight the assay as follows:

$$ G=\frac{{{\displaystyle \sum}}_{i=1}^n\left({G}_i\times {L}_i\times {F}_i\right)}{{{\displaystyle \sum}}_{i=1}^n\left({L}_i\times {F}_i\right)} $$

Very high assay values, which only occur infrequently, are thus assigned a very low frequency weighting factor and their tendency to bias the overall grade is reduced. For this reason, this technique is applicable where abnormal assays (outliers) are present (see ► Sect. 4.2.3).

Finally, where a face that has been sampled by vertical channels at irregular intervals, and with samples of variable length, must be evaluated (◘ Fig. 4.21; Annels 1991), weighting by zone of influence (ZOI) is applied. In this situation, the weighted grade assigned to the panel is calculated by multiplying the grade of each sample by its area of influence, which is its sample length times the sum of half the distances to the adjacent channels (the ZOI). According to ◘ Fig. 4.21, the estimation of grade would be

$$ \begin{aligned} {G}_p &= \frac{\sum \left({L}_i\times {ZOI}_i\times {G}_i\right)}{\sum \left({L}_i\times {ZOI}_i\right)}\\ &= \Big[\left({L}_1{G}_1+{L}_2{G}_2+{L}_3{G}_3+{L}_4{G}_4\right)\left(a+b\right)\\ &\qquad +\left({L}_5{G}_5+\dots +{L}_8{G}_8\right)\left(b+c\right)\\ &\qquad +\left({L}_9{G}_9+\dots +{L}_{11}{G}_{11}\right)\left(c+d\right)\\ &\qquad +\left({L}_{12}{G}_{12}+\dots +{L}_{15}{G}_{15}\right)\left(d+e\right)\Big]\\ &\quad \Big/\Big[\left({L}_1+{L}_2+\dots +{L}_4\right)\left(a+b\right)+\left({L}_5+\dots +{L}_8\right)\left(b+c\right)\\ &\qquad +\left({L}_9+\dots +{L}_{11}\right)\left(c+d\right)+\left({L}_{12}+\dots +{L}_{15}\right)\left(d+e\right)\Big] \end{aligned} $$
Fig. 4.21
figure 21

Face sampling and zones of influence (Annels 1991)

4.3.1.1 Compositing

Raw data in a mineral deposit are usually combined in such a way as to generate composites of roughly similar support, a composite being a combination of samples. The term compositing, where used in mineral resource evaluation, is applied to the process by which the values of adjacent samples are combined so that the values of longer intervals can be evaluated. Thus, compositing is a numerical process that includes the estimation of weighted average grades over larger volumes than the original samples (Sinclair and Blackwell 2002; Hustrulid et al. 2013). Data are composited to standard lengths for many reasons, such as:

  1. Reduce the number of samples.

  2. Provide representative data for analysis where irregular length assay samples are present.

  3. Bring data to a common support; for example, to combine drill core samples of different lengths to a general length of 1 m.

  4. Reduce the effect of isolated high-grade data.

  5. Produce bench composites, that is, composites extending from the top of a bench to the base in an open-pit; such composites are especially helpful if two-dimensional evaluation procedures are utilized in benches.

  6. Incorporate dilution (e.g., in mining continuous height benches in an open-pit exploitation).

  7. Provide equal-sized data for geostatistical analysis.
After compositing, the composited drillhole dataset is commonly validated (◘ Table 4.6).

Table 4.6 Results of a statistical validation of composited intervals

Since compositing is linear in nature, a substantial smoothing effect (reduction in dispersion of grades) results because compositing is equivalent to an increase in support. It should be considered that compositing can also be performed for values of variables other than grade. Downhole composites are computed using constant length intervals that generally start from the collar of the drillhole or the top of the first assayed interval. These composites are used where the holes are drilled at oblique angles (45° or less) to the mining benches and bench composites would be excessively long (Noble 2011). Bench compositing has the advantage of providing constant elevation data that are simple to plot and interpret on plan maps. For large and regular mineral deposits where the transition from ore to waste is gradual, the compositing interval is often the bench height and fixed elevations are selected. This bench compositing is nowadays the procedure most generally utilized for resource modeling in open-pit mining (Hustrulid et al. 2013).

In the process of compositing, the starting and ending points of each composite are identified, and the composite grade is estimated as a weighted average of the samples included within these limits (◘ Fig. 4.22). In the case of a sample that crosses these limits, only the part of the sample that falls within the mineralization is included in the calculation. If density is extremely variable, for example, in massive sulfides, compositing must be weighted by length times density.
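As an illustration of the process, the following Python sketch composites irregular downhole assay intervals to a fixed length, weighting each contribution by the length of sample falling inside the composite; the interval data and the 1 m composite length are hypothetical.

def composite(intervals, comp_length=1.0):
    """Length-weighted composites of (from_m, to_m, grade) intervals down a hole."""
    end = max(to for _, to, _ in intervals)
    composites = []
    start = 0.0
    while start < end:
        stop = min(start + comp_length, end)
        weighted, total = 0.0, 0.0
        for frm, to, grade in intervals:
            overlap = min(to, stop) - max(frm, start)   # length of this sample inside the composite
            if overlap > 0:
                weighted += grade * overlap
                total += overlap
        if total > 0:
            composites.append((start, stop, weighted / total))
        start = stop
    return composites

intervals = [(0.0, 0.7, 1.4), (0.7, 1.9, 0.6), (1.9, 2.5, 2.8), (2.5, 4.0, 0.3)]  # hypothetical g/t
for frm, to, grade in composite(intervals, comp_length=1.0):
    print(f"{frm:.1f}-{to:.1f} m : {grade:.2f} g/t")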

Fig. 4.22
figure 22

Compositing

4.3.2 Statistical Estimation of Grades

Statistical estimators of the grade of a deposit require that the distribution of grades be Gaussian or normal. This probability density function is the common bell-shaped curve, which is symmetric about the mean value of the distribution. Normal curves can be fitted to an unbiased histogram to assess whether the variable is normally distributed (◘ Fig. 3.42). The first stage in the process is therefore the production of histograms or frequency curves so that an overall impression of the nature of the assay distribution can be obtained. The approach to normality of this population can also be assessed by producing a cumulative frequency diagram and a probability plot. Once the arithmetic mean and associated variance or standard deviation are calculated, the shape of the assay distribution can also be described in terms of skewness. This value measures the departure from symmetry of a population. A positive value indicates a positive skew (i.e., an excess of high values compared to a normal population), while the skewness of a symmetrical distribution approaches zero.

The coefficient of variation C, expressed as the standard deviation divided by the mean, is also used to describe the variability of assays in a deposit. For a data population to be considered normal, the coefficient of variation should be less than 0.5; larger values indicate either lognormality or an erratically distributed data set (Koch and Link 1970). Other values cited are less than 1.0 (Carras 1984) and less than 1.2 (Knudsen 1988). Where there is any doubt about the normal distribution of the grades, a chi-square test can also be carried out, since this test is used to determine mathematically how closely the observed distribution approximates a normal distribution. Thus, the «closeness» of the approximation is tested (◘ Fig. 4.23). The chi-square test compares observed data (e.g., grade values) with the data expected under a specific hypothesis (a normal distribution) and «decides» whether the observed data can be fitted by a normal distribution at a predefined level of confidence.
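A quick way to examine these statistics is sketched below in Python; the grade values and the binning choice are hypothetical, and the chi-square calculation is only a simplified, binned goodness-of-fit check against a fitted normal model.

import numpy as np
from scipy import stats

grades = np.array([0.4, 0.7, 0.9, 1.1, 1.2, 1.3, 1.5, 1.8, 2.1, 2.6,
                   3.0, 3.4, 4.2, 5.1, 7.3, 9.8])   # hypothetical g/t Au

mean, std = grades.mean(), grades.std(ddof=1)
print("coefficient of variation:", std / mean)
print("skewness:", stats.skew(grades))

# Binned chi-square comparison of observed counts with counts expected from a normal model
bins = np.histogram_bin_edges(grades, bins=4)
observed, _ = np.histogram(grades, bins=bins)
expected = len(grades) * np.diff(stats.norm.cdf(bins, loc=mean, scale=std))
expected *= observed.sum() / expected.sum()          # rescale so the totals match
chi2 = ((observed - expected) ** 2 / expected).sum()
dof = len(observed) - 1 - 2                          # two parameters were fitted
print("chi-square p-value:", stats.chi2.sf(chi2, dof))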

Fig. 4.23
figure 23

Chi-square test

4.3.2.1 Normal Population

If the data conform to a normal population (the simple assumption of a normal distribution occurs only rarely for geological data), the sample mean (X̄), also termed the arithmetic mean or average value, coincides with the 50th percentile (median) and represents the central tendency around which the distribution is spread. This value is calculated as the sum of the values of all observations within the population divided by the number of samples, and it is used as the average grade estimator for a group of samples, a bench, or an entire deposit.

4.3.2.2 Lognormal Population

Most natural distributions encountered in geology are not symmetric; they are usually more or less skewed to the right, that is, positively skewed (◘ Fig. 4.24). Thus, grades much higher than the average occur and extend beyond the range expected of a normal distribution. The lognormal distribution, in which the logarithms of the individual values can be described by a normal distribution, has become very important for the treatment of skewed distributions in exploration geology. In this sense, experience shows that in the majority of cases, geological assay data do not display a normal distribution but rather that their logarithms tend to be normally distributed (David 1977). The type of logarithm is not important, and either the natural logarithm, which is based on the natural number e = 2.7183 (thus x is transformed to ln x), or the decimal logarithm to the base 10 (thus x is transformed to log x) can be used.

Fig. 4.24
figure 24

Lognormal population (right skewed)

Where a population is positively skewed, it is generally advisable to undertake a log transformation of the data and then replot the histogram to see if the population is normalized by this process. If it is, then it is possible to describe the population as a two-parameter lognormal population (the parameters being the log mean and log variance). Again, a chi-square test or a log-probability plot can be used to test how closely the log-transformed data approach normality. Logarithmic values are therefore used for the derivation of the mean and the calculation of the variance and standard deviation, in the same way as has already been described for untransformed values. All values included in a logarithmic distribution have to be >0; otherwise the logarithms, and hence statistical parameters such as the mean and the variance, cannot be calculated.

The parameters normally used to describe a lognormal distribution are the median of the distribution, ɣ = e^α, where α is the average of the logarithms, and β, the standard deviation of the logarithms. This characterization is the one most used in ore reserve calculations (Sichel 1952; Krige 1951), and the best way to estimate the mean of a lognormal population is to use the following relationship:

$$ \overline{X}={e}^{\alpha}\cdot {e}^{\frac{\beta^2}{2}}={e}^{\left(\alpha +\frac{\beta^2}{2}\right)} $$

The back-transformed mean of the logarithms, e^α, is the geometric mean of the data and is commonly less than the arithmetic mean. Sichel (1966) developed a factor, the Sichel’s t estimator, to solve the problem of obtaining the best estimation of the arithmetic mean for skewed sample sets that have an approximately lognormal distribution (◘ Box 4.2: Sichel’s t estimator).

Box 4.2

Sichel’s t Estimator

Sichel (1966) developed a factor, Sichel’s t estimator, to solve the problem of obtaining the best estimation of the arithmetic mean for skewed sample sets that have an approximately lognormal distribution. Thus, where an assay population is small (n < 30), for example, at the early feasibility stage of deposit evaluation, and where the raw data population has a high coefficient of variation and is lognormal, Sichel’s t estimator can be used to estimate its mean. The t estimator is a useful conservative estimator of the arithmetic mean for small data sets where a lognormal distribution can be assumed with confidence. However, it should be realized that if the log-transformed assay population deviates from normality, then Sichel’s t estimator would also be biased. Thus, the best estimator of a deposit is the one that gives the lowest variance where the variance of the data about the estimator is calculated.

Sichel’s t estimator can be calculated from

$$ t=m\times f\left(V;\kern0.5em n\right) $$

where m = ɣ = e^α and f is a value obtained from tables as a function of V and n, with V = β² and n the number of samples (α is the average of the natural logarithms of the data and β their standard deviation). Tables for rapid determination of the t estimator are provided in the literature. Moreover, 95% confidence limits can also be determined using tables provided by Sichel for sample sizes up to 1,000 and variances up to 6.0. These tables give the values ɸ95 (V; n) and ɸ5 (V; n) which, when multiplied by t, give the upper and lower confidence limits, respectively.

For instance, the results of five gold grade analyses (g/t) are the following: 3.6, 7.4, 9.5, 8.1, and 14.3. Consequently, the natural logarithms are as follows: 1.28, 2.00, 2.25, 2.09, and 2.66, respectively. Thus, the average of these logarithm data (α) is 2.06 and their standard deviation (β) is 0.45 (V = β 2 = 0.20). Calculation of m = ɣ = eα gives a result of 7.85. Therefore, the formula to estimate the arithmetic mean using t estimator is

$$ t=7.85\times f\left(0.2;\kern0.5em 5\right) $$

In ◘ Table 4.7a the value for n = 5 and V = 0.2 is 1.103. Thus:

$$ t=7.85\times 1.103=8.66\kern0.28em \mathrm{g}/\mathrm{t} $$
Table 4.7 Sichel’s t estimator tables: a Sichel’s function f (V; n), b upper confidence limit factor ɸ95 (V; n), and c lower confidence limit factor ɸ5 (V; n)

If the upper and lower confidence limits must be calculated, then ɸ95 (V; n) = 2.087 (◘ Table 4.7b) and ɸ5 (V; n) = 0.713 (◘ Table 4.7c). Therefore:

$$ \begin{array}{l}\mathrm{Upper}\ \mathrm{limit}=8.66\times 2.087=18.07\ \mathrm{g}/\mathrm{t}\hfill \\ {}\mathrm{Lower}\ \mathrm{limit}=8.66\times 0.713=6.17\ \mathrm{g}/\mathrm{t}\hfill \end{array} $$

Thus, the estimate of the arithmetic mean grade of this data is 8.66 g/t with a 95% probability that this estimate lies between 6.17 g/t and 18.07 g/t.
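The worked example in this box can be reproduced in a few lines of Python; note that the factors f(V; n), ɸ95, and ɸ5 are simply read from ◘ Table 4.7 rather than computed, and small differences from the figures above arise only from rounding of the intermediate values.

import math

grades = [3.6, 7.4, 9.5, 8.1, 14.3]                   # g/t Au (from the example above)
logs = [math.log(g) for g in grades]
n = len(logs)
alpha = sum(logs) / n                                  # mean of the natural logarithms
V = sum((x - alpha) ** 2 for x in logs) / n            # log variance V = beta^2
m = math.exp(alpha)                                    # m = gamma = e^alpha

f_table = 1.103                                        # f(V = 0.2; n = 5) from Table 4.7a
phi95, phi5 = 2.087, 0.713                             # from Tables 4.7b and 4.7c

t = m * f_table
print(f"t estimator  : {t:.2f} g/t")                   # about 8.6 g/t (8.66 g/t in the text)
print(f"95% interval : {t * phi5:.2f} - {t * phi95:.2f} g/t")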

4.3.3 Outliers

Outliers are anomalously high values outside the main population which result in grade bias (Annels 1991), or observations that appear to be inconsistent with the vast majority of data values (Sinclair and Blackwell 2002) (◘ Fig. 4.25). How to treat these errant high values is one of the essential problems in ore evaluation. The reason why no rule of thumb can apply to all cases is that, no two orebodies being alike, erratic highs can reflect any one of a number of conditions depending on the manner in which valuable minerals are distributed throughout the ore body. Thus, the problem is fundamentally geological rather than purely mathematical (McKinstry 1948). Outlier populations are usually geologically distinct and display limited physical continuity relative to lower-grade values. Therefore, assuming that high grades can be extended into neighboring rock can lead to a significant overstatement of the resources or reserves.

Fig. 4.25
figure 25

Outliers

These abnormal assays can appear in a sequence of assays that, if not due to contamination, reflect very localized random phenomena such as gash veins, concretions/accretions, or coarsely crystalline aggregates of the valuable mineral (Annels 1991). In other words, sometimes the outliers represent a different geological population in the data that corresponds to an identifiable physical domain, and this domain can be accounted for separately from the main domain. It is necessary to decide whether to accept them, even though they are very localized and will probably heavily weight or bias the results, or whether to reduce them in some way. In any case, «all outlier values must receive special handling, which can involve a number of options: (a) reanalyzing if possible, (b) cutting (also capping) to some predetermined upper limit based on experience, or (c) using an empirical cutting method» (Parrish 1997). The most common method to resolve the problem of outliers is to cut the grade to the average of the adjacent samples, or to the mine average grade, or to an arbitrary percentile value (e.g., the 95th percentile of the data) based on a cumulative frequency or log-probability plot of mine assays. Alternatively, the mean plus two or three standard deviations of the mine assay population can be calculated and applied as the cutting level. ◘ Table 4.8 shows an example of the result of a capping process for different rock types in a gold mineralization.

Table 4.8 Grade capping for different rock types in a gold mineralization
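A simple illustration of percentile-based or standard-deviation-based capping is the following Python sketch; the assay values are hypothetical, and in practice the capping level is chosen only after the geological review described below.

import numpy as np

grades = np.array([0.8, 1.1, 1.4, 0.9, 2.2, 1.7, 35.0, 1.3, 0.6, 2.9, 1.0, 58.0])  # hypothetical g/t

cap_p95 = np.percentile(grades, 95)                 # cap at the 95th percentile of the data
cap_2sd = grades.mean() + 2 * grades.std(ddof=1)    # or at the mean plus two standard deviations

capped = np.minimum(grades, cap_p95)
print(f"cap (95th percentile) = {cap_p95:.1f} g/t, cap (mean + 2 s.d.) = {cap_2sd:.1f} g/t")
print(f"raw mean = {grades.mean():.2f} g/t, capped mean = {capped.mean():.2f} g/t")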

Nowak (2015) recommended the following steps in a procedure for treating outliers during resource estimation:

  1. Determine data validity considering errors in sampling and handling.

  2. Review geology logs for samples with high grade assays; capping may not be necessary for assays where the logs clearly explain the presence of high grade.

  3. Capping should not be considered for deleterious substances that have negative impacts on project economics.

  4. Decide if capping should be considered before or after compositing.

  5. Keep capping to a necessary minimum.

  6. Restrict the influence of very high grade assays; commercial software is well designed for this approach.

  7. Visually and/or numerically assess the effect of high grade assays to be sure they don’t affect estimated block grades.

  8. Check the effect of capping on final resource estimates and document the differences.

4.3.4 Coproduct and By-Product

By-product components are both economically and technologically valuable minor elements that are obtained from the ores of the main metals. These components are generally present in ppm ranges, whereas the main metals occur within percent ranges in the mineralization. For instance, germanium occurs in zinc ores, gallium in bauxites, indium in zinc, copper, or tin ores, tellurium in copper ores, hafnium in zirconium ores, and tantalum in tin ores. Moreover, many high-technology commodities are currently provided mostly as by-products.

Three types of commodities can be defined and classified according to the relative value of each commodity (Jen 1992). Thus, «the principal (metal) product of a mine is the metal with the highest value of output, in refined form, from a particular mine, in a specified period; a co-product is a metal with a value at least half ... that of the principal product; and a by-product is a metal with a value of less than half ... that of the principal product. By-products are subdivided into significant by-products, which are metals with a value of between 25% and 50% ... that of the principal product and normal by-products, which are metals with a value of less than 25%… that of the principal product». Evaluation of coproducts and by-products is usually carried out by methods similar to those used for the main component (e.g., inverse distance weighting or kriging; see the next headings). In these cases, each product is estimated independently of the others, with the tacit assumption that no important correlation is present among the different products; such an estimation procedure is time-consuming and costly. In other cases, the estimation can be carried out indirectly if a strong correlation exists between the coproducts or by-products and the principal component. Consequently, many multi-mineral deposits are generally valued, planned, and operated on the basis of equivalent grades (◘ Box 4.3: Equivalent Grades).

Box 4.3

Equivalent Grades

Multi-mineral deposits are generally valued, planned, and operated on the basis of equivalent grades. The use of equivalent grades for these types of deposits has been a standard practice in the mining industry for many years, especially for base metal deposits. Equivalent grades are used commonly to simplify the problem of mineral inventory calculation by estimating a single variable, rather than the two or more variables from which the single variable (equivalent grade) is derived. In general, the use of equivalent grades should be discouraged (Sinclair and Blackwell 2002). In this approach, each mineral is converted to its equivalent economic value in terms of one of the minerals, which is taken as a standard. For example, in a silver-lead-zinc deposit, a weighted sum of the three metal grades can be used to provide a single zinc-equivalent grade. This is generally done to avoid the complexities of a three-dimensional, or in general n-dimensional, grade analysis (Cetin and Dowd 2013). With this method, the amounts of each mineral extracted in the mining stage and sent to the processing plant and subsequent stages are estimated on the basis of equivalents and not on the basis of the component minerals.

Since equivalent grade values are values in which the grade of one metal is expressed in terms of another, after allowance has been made for the difference in metal prices, an example of the determination of the gold equivalent grade (Au eq) in an Au deposit containing some Ag is as follows:

$$ Aueq\left(\mathrm{g}/\mathrm{t}\right)=Au\left(\mathrm{g}/\mathrm{t}\right)+k\cdotp Ag\left(\mathrm{g}/\mathrm{t}\right) $$

where k is a parameter that generally is taken as the ratio of Ag price to Au price (e.g., k = 1/66 if Au and Ag values are USD 990/oz and USD 15/oz, respectively). It can be seen that equivalent grades depend on both prices and grade, and thus they are time-dependent based on how prices behave and how grades vary during operation. Metal recoveries can also be included in the calculation.
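A minimal sketch of this calculation in Python, using the prices quoted above and hypothetical Au and Ag grades, is the following; as noted, metal recoveries could be included by multiplying each term by its recovery.

def au_equivalent(au_gpt, ag_gpt, au_price=990.0, ag_price=15.0):
    """Gold-equivalent grade: Au eq = Au + k * Ag, with k the Ag/Au price ratio (about 1/66 here)."""
    k = ag_price / au_price
    return au_gpt + k * ag_gpt

print(au_equivalent(au_gpt=2.4, ag_gpt=45.0))   # hypothetical grades, about 3.1 g/t Au eq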

Considering the cutoff grades, operating cutoff grades for the equivalent grades do not necessarily correspond to achievable, or even meaningful, cutoff grades for the grade-tonnage distributions of the individual minerals. While there is a direct relationship between the individual grades and the equivalent grade, there is no unique inverse relationship from the equivalent grade back to the individual grades. The actual amount of each individual mineral above the equivalent cutoff grade will therefore differ from the values calculated from the equivalents. This difference will increase as the correlation among the components decreases. Thus, using equivalent grades can undervalue or overvalue mining projects.

In other cases, for the purpose of assigning a dollar value to mineral blocks, so that a cutoff can be applied to show reasonable prospects of economic extraction, a dollar equivalent can be calculated in a similar way.

4.4 Cutoff Grade and Grade-Tonnage Curves

The so-called cutoff grade is commonly the standard value that discriminates between ore and waste within a given mineral deposit (◘ Fig. 4.26). As economic conditions change continuously, the cutoff grade can obviously increase or decrease. Thus, it is the most important economic feature for the estimation of resource and reserve data from prospecting information. It is common to calculate the resources/reserves of a mine for different cutoff grades and plot the results as a series of curves, usually termed grade-tonnage curves, which are widely used in the mining industry. From geology and mining planning to management and investment areas, grade-tonnage curves are used for economic and financial analysis, and they are probably one of the most important tools for representing variations in the characteristics of a deposit as a function of cutoff grade.

Fig. 4.26
figure 26

Discriminating between ore and waste in underground mining (Image courtesy of North American Palladium Ltd.)

4.4.1 Cutoff Grade

Cutoff grade (COG) is generally defined «as the minimum amount of valuable product of metal that one metric ton of material must contain before this material is sent to the processing plant» (Rendu 2014) or as «an artificial boundary demarcating between low-grade mineralization and techno-economically viable ore that can be exploited at a profit» (Haldar 2013). A similar definition of cutoff grade is «any grade that, for any specific reason, is used to separate two courses of action, for example to mine or to leave, to mill or to dump» (Taylor 1972). These definitions are utilized to discriminate raw materials that cannot be mined economically from those that should be processed. Therefore, cutoff grades are used to separate blocks of ore from waste blocks at various stages in the evolution of mineral resource/reserve estimation in a mineral deposit (e.g., during prospection and mining stages). Consequently, if the material concentration in the mineralization is above the cutoff grade, it is defined as ore; conversely, if the material concentration is below the cutoff grade, it is considered waste. However, blending of low-grade and high-grade mineralization is commonly carried out in the mine for an effective usage of the mineral resources.

Cutoff grade is a geological/technical measure that embodies the important economic aspects of mineral production from a deposit. It is defined not only by the geological characteristics of the deposit and the technological limits of extraction and processing but also by costs and mineral prices. Annels (1991) classified the many factors that influence the cutoff grade into three categories: geological (e.g., mineralogy, grain size, presence of deleterious-penalty elements, shape and size of the deposit, structural complexity, or water problems), economic (e.g., accessibility to markets, labor availability, current metal prices, political and fiscal factors, cost of waste disposal and reclamation, or capital costs and interest rates), and mining methods (open-pit versus underground mining) (◘ Table 4.9). A change in any one criterion, or in a combination of them, gives rise to a different cutoff grade and average grade for the deposit. For instance, if mineral prices rise and all costs stay the same, then the COG will fall because extraction of mineralization with lower grades will now be profitable. COG can vary significantly from deposit to deposit, even among those that are geologically very similar, because of differences in a wide variety of factors such as those cited above.

Table 4.9 Cutoff grades based on underground mining method

The concept of cutoff grade works well for deposits with disseminated grades changing gradually from the outer limits to the core of the mineralization; on the contrary, in heterogeneous vein-type deposits with rich mineral at the contacts, the COG indicator is of no use in establishing the ore boundaries (Haldar 2013). It is also necessary to differentiate between COG and minimum mining grade (MMG), since the two terms are often confused. Thus, one definition of COG is «the lowest grade material that can be included in a potentially economic intersection without dropping the overall grade below a specified level, referred to as the minimum mining grade» (Annels 1991).

The technical literature includes many publications on the estimation and optimization of cutoff grades, the most comprehensive reference being the book entitled The Economic Definition of Ore: Cut-Off Grades in Theory and Practice (Lane 1988). This book is considered the standard for mathematical formulation of solutions to COG estimation where the objective is to maximize net present value (see ► Sect. 4.5.1.4), because the cutoff grades define the profitability of a mining operation as well as the mine life. There are many approaches to the determination of cutoff grades, but most of the research done in the last four decades shows that determination of cutoff grades with the objective of maximizing NPV is the most accepted method. A high cutoff grade can be utilized to increase short-term profitability and the net present value of a mineral project, but increasing the cutoff grade is also likely to decrease the life of a mine. This shorter mine life can also produce a higher socioeconomic effect, with fewer long-term jobs and decreased profits to employees and local communities (2013). It is generally accepted that «the COG policy that generates higher NPVs is a policy that use declining cut-off grades throughout the life of the project» (Ganguli et al. 2011).

Estimation of cutoff grade, although a complex economic problem, is tied to the concept of operating costs per ton and can be viewed simplistically for open-pit mines (John 1985). Although long-range production planning of an open-pit mining operation is dependent upon several factors, cutoff grade is probably the most significant aspect, as it provides a basis for the determination of the quantity of ore and waste in a given period (Asad and Topal 2011). Thus, operating cost per ton milled, OC, is given by (John 1985)

$$ \mathrm{O}\mathrm{C}=\mathrm{F}\mathrm{C}+\left(\mathrm{S}\mathrm{R}+1\right)\times \mathrm{M}\mathrm{C} $$

where FC are the fixed costs per ton milled, SR is the strip ratio, and MC are the mining costs per ton mined. Cutoff grade, useful at the operational level in distinguishing ore from waste, is expressed in terms of metal grade; for a single metal, cutoff grade can be determined from operating cost as follows:

$$ {g}_{\mathrm{c}}=\frac{\mathrm{OC}}{p} $$

where g c is the operational cutoff grade (e.g., percent metal) and p is the realized metal price per unit of grade (e.g., the realized value from the smelter of 10 kg of metal in dollars where metal grade is in percent).
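A minimal Python sketch of these two relationships is given below; the cost, strip ratio, recovery, and price figures are hypothetical and only illustrate the arithmetic.

def operating_cost_per_ton_milled(fixed_cost, strip_ratio, mining_cost):
    """OC = FC + (SR + 1) x MC, all per ton."""
    return fixed_cost + (strip_ratio + 1.0) * mining_cost

def cutoff_grade(operating_cost, price_per_unit_grade):
    """g_c = OC / p, where p is the realized value of one grade unit (here 1% = 10 kg of metal per ton)."""
    return operating_cost / price_per_unit_grade

oc = operating_cost_per_ton_milled(fixed_cost=12.0, strip_ratio=2.5, mining_cost=2.0)  # US$/t milled
p = 10 * 6.5 * 0.85      # 10 kg of Cu per 1% grade x US$6.5/kg x 85% payable recovery (assumed)
print(f"OC = {oc:.2f} US$/t milled, cutoff grade = {cutoff_grade(oc, p):.2f} % Cu")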

Another equation to derive the cutoff grade (e.g., in gold) is the following:

$$ \text{Cutoff grade (gold)}=\frac{\text{mining cost}+\text{process cost}+\text{general and administrative (G\&A) costs}}{\text{payable recovery}\times \dfrac{\text{gold price}-\text{refining and sales cost}}{\text{conversion factor}}\times \left(1+\text{royalty}\right)} $$

An example of estimation of cutoff grade using this equation is as follows:

$$ \begin{aligned} &\text{Cutoff grade}\ (4.94\ \text{g/t gold for mineral reserves})\\ &\quad =\frac{\text{mining cost US\$50/t}+\text{process cost US\$38/t}+\text{G\&A costs US\$62/t}}{\text{payable recovery }95\%\times \dfrac{\text{gold price US\$1,100/oz}-\text{refining and sales cost US\$7/oz}}{\text{conversion factor }31.1035\ \text{g/oz}}\times \left(1+\text{royalty }10\%\text{ of sales}\right)} \end{aligned} $$

It is important to note that «sustainable development basis are being increasingly applied by mining companies and there is a balance between the cut-off grade determination and sustainable mining practice» (Franks et al. 2011). In fact, to obtain the optimal cutoff grades and maximum NPV, environmental issues and social impacts must be included in the mine design (Mansouri et al. 2014). Thus, optimum cutoff grade determination is regarded as one of the main challenges in applying sustainable development principles to mining, including environmental, cultural, and social parameters. Therefore, an optimum cutoff grade model must rely not only on economic and technical considerations but also on reclamation, environmental, and social parameters (Rahimi and Ghasemzadeh 2015).

4.4.2 Grade-Tonnage Curves

At the early stages of the planning of a mine, an important decision tool is the grade-tonnage curve. For a given cutoff, a certain tonnage of ore is expected and consequently a certain profit. If the tonnage later proves to be less than expected, the consequences are obvious (David 1972). Thus, it is common practice to calculate the resource tonnage at a series of cutoff grades, since the resource potential of a mineral deposit is determined by the cutoff grades (◘ Fig. 4.27). Changing these values usually produces a clear impact on resource/reserve data. The information is plotted on a grade-tonnage graph, and the resulting curves, called grade-tonnage curves, are essential in mine planning. It is clear that compilation of this information requires knowing the deposit fully. The information can also be shown in table format (◘ Table 4.10). Grade-tonnage curves are used extensively and updated regularly to calculate the impact that different cutoff grade strategies have on the economics of a mining operation. The type of information, for example, sample data or block estimates, used in the construction of a grade-tonnage curve should be documented clearly.

Fig. 4.27
figure 27

Grade-tonnage curves (Illustration courtesy of AngloGold Ashanti)

Table 4.10 Grade-tonnage table including multi-element information

How closely the grade-tonnage curve approximates reality is highly dependent on natural parameters such as the geology and grade distribution of the deposit. In general, the more variable the grades and the more complex the geometry, the less reliable the curve becomes. All grade-tonnage curves contain errors, even those based on an abundance of closely spaced information. However, the better the quality of the data, the better the calculations and the grade-tonnage curves obtained. One error that needs to be mentioned in grade-tonnage curves is analytical and sampling error, since the selection process is not based on true grades but on grades estimated from samples. With relatively little data at the prospection stage, large sampling and analytical errors can have an important effect on the grade-tonnage patterns, usually producing an overvaluation of the high-grade tonnage (Sinclair and Blackwell 2002).
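The construction of such a curve from estimated block grades reduces to summing tonnage and averaging grade above each cutoff, as in the following Python sketch (the block grades and tonnages are hypothetical).

import numpy as np

block_grades  = np.array([0.2, 0.4, 0.5, 0.7, 0.9, 1.1, 1.4, 1.8, 2.3, 3.1])  # g/t (hypothetical)
block_tonnage = np.full_like(block_grades, 100_000.0)                          # t per block

for cutoff in [0.0, 0.3, 0.5, 0.7, 1.0, 1.5]:
    above = block_grades >= cutoff
    tonnage = block_tonnage[above].sum()
    grade = (block_grades[above] * block_tonnage[above]).sum() / tonnage if tonnage else 0.0
    print(f"cutoff {cutoff:.1f} g/t : {tonnage:>12,.0f} t at {grade:.2f} g/t")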

4.5 Estimation Methods

The prediction of grade and tonnage in a mineral deposit is an essential problem in mineral resource estimation. The classical approach to this issue is to calculate the mineral grade for quantities significant to mine planning and base the recoverable resource estimation on those calculations (Rossi and Deutsch 2014). The process of calculating a mineral resource can only be carried out after the estimator is convinced of the robustness of the factors that justify the evaluation process, from the choice of sampling method to sales contract specifications. In this sense, ore estimation is the bridge between exploration, where successful, and mine planning (King et al. 1982). Thus, the geological data must be sufficiently complete to establish a geological model, and this itself «must have internal consistency, should explain the observed arrangement of lithological and mineralogical domains, and should represent the estimator’s best knowledge of the genesis of the mineral deposit» (Glacken and Snowden 2001). In summary, regardless of the method used, all estimates start with a comprehensive geological database, primarily derived from drilling; without detailed, high-quality geological and geochemical data, a resource estimate cannot be considered valid.

The estimation procedure is not a mere calculation but a process that incorporates geological, operational, and exploration information and the assumptions attached to them. All estimates should have the best possible geological input combined with a well thought out statistical or geostatistical treatment; no purely mathematical estimate should be accepted. The calculations therefore form only part, and not necessarily the most important part, of the overall procedure. It is common practice in exploration to begin with economic evaluations as early as possible and to update these evaluations in parallel with the physical exploration work. At an early stage, the geologist has only a tentative idea about expected grades and tonnages based on the initial geological concept and early concrete indications through observations from trenches or a limited number of drillholes. This early idea about grades and tonnages can be called the grade potential and tonnage potential (Wellmer et al. 2008). In this sense, the four Cs (character of mineralization, continuity, calculation, and classification) are the basis for the correct estimation of ore resources or reserves (Owens and Armstrong 1994).

4.5.1 Drillhole Information and Geological Data

The essential data needed for resource estimation are derived from drillhole information. It includes detailed logs of the rock types and mineralization and geochemical and assay data for all samples that were collected. It also includes survey data for each drillhole. It is critical that the locations in 3-D space of the mineralized zones are known. Moreover, the shape, form, orientation, and distribution of mineralization in a deposit must be known with sufficient confidence to estimate the grade and tonnage of mineralization between drillholes.

Regarding the geological model, it obviously should be consistent with the distribution of mineralization revealed by sampling. Building a geological model involves examining cross sections, long sections, plan maps, and 3-D computer models of the deposit. The resource estimation process includes definition of ore constraints or geological domains, analysis of the sample data, and application of a suitable interpolation technique. In general, less than one-millionth of the volume of a deposit is sampled, and grades and other attributes must be estimated in the unsampled region, which is a high-risk process. In summary, knowledge of the geology of the mineral deposit is a prerequisite to any reliable computation: an incorrect model for the deposit will lead to an incorrect resource estimate (Stevens 2010). This understanding involves space location, size, shape, environment, country rock, overburden, and hydrology; mineral, chemical, and physical characteristics of the raw material; as well as average grade and distribution of valuable and gangue minerals (Popoff 1966).

4.5.2 General Procedure

It is important to note that an ore reserve calculation must express the data in terms of a volume, a tonnage, and an average grade. The tonnage is derived from the volume by multiplying by the specific gravity of the ore. The volume is commonly determined by calculating an area in two of the dimensions and then multiplying by the third dimension. To determine the total area, it is usually possible to divide the area under consideration into a number of regular geometric figures such as squares, triangles, etc. (Reedman 1979). Thus, the resource or reserve calculation in a mineral deposit includes one formula, or a variation of it, which is always used:

$$ T=A\times \mathrm{T}\mathrm{h}\times \mathrm{B}\mathrm{D} $$

where T is the tonnage of ore, in tons; A is the area of influence on a plan or section, in m2 or km2; Th is the thickness of the deposit within the area of influence, in meters; and BD is the bulk density. Then, the tons of valuable component (e.g., copper) are obtained by multiplying the tonnage of ore by its grade. In summary, the general procedure is a three-step process: limit and volume determination, grade estimation, and mass determination using the specific gravity of the rocks and ores.
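A trivial numerical illustration of this formula, with hypothetical figures, is the following Python sketch.

area = 25_000.0        # m2, area of influence on plan (hypothetical)
thickness = 8.0        # m, average thickness within that area
bulk_density = 2.8     # t/m3
grade = 1.3            # % Cu

tonnage = area * thickness * bulk_density          # t of ore
metal = tonnage * grade / 100.0                    # t of contained copper
print(f"{tonnage:,.0f} t of ore containing {metal:,.0f} t Cu")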

The method used for ore reserve estimation will change according to the type of commodity, type of mineral deposit, geometry, distribution and homogeneity of the ore, and mode of data collection, among other factors, but conceptually the steps to be taken will always be the same as expressed in the previous formula. It should also be borne in mind that an ore reserve statement is an estimate, not a precise calculation. All formulas for computing volumes, tonnage, and average factors are approximate because of the irregular size and shape of the ore body, errors in substituting natural bodies by simpler geometric ones, geologic interpretation, assumptions, and inconsistency in the variables. Accuracy of the results usually depends more on geologic interpretation and assumptions than on the method used (◘ Fig. 4.28). Resources or reserves of the same category computed by different methods and based on the same data usually differ slightly. In fact, if the sampling spacing could be sufficiently close, estimation would be a matter of simple arithmetic; this is almost the situation, for example, in the grade control process (see ► Chap. 5), where samples are separated from each other by 3 or 5 m. In other words, the closer the sample spacing, the less important the procedure of ore estimation; the sparser the data, the more critical the procedure, not only quantitatively but also qualitatively, because of the greater dependence on subjective assumptions (King et al. 1982).

Fig. 4.28
figure 28

Accuracy of the results depends mainly on geological interpretation

4.5.3 Bulk Density

Bulk density or specific gravity, a term that is widely used interchangeably with density, is required to convert volumes of ore to tons of ore (tonnage = volume × bulk density). A density that takes voids into account is specifically termed bulk density. Obviously, where porosity is negligible, density and bulk density are equivalent terms. In situ bulk density must be modeled at the time of resource estimation. Although bulk density determination can seem a trivial matter, if the values are incorrect, the true amount of mineralization in a deposit cannot be determined: accurate rock bulk density values are required for accurate resource estimates (Stevens 2010). Any error in bulk density determination is directly incorporated into the tonnage estimation. Bulk density determination is controlled by many factors such as the homogeneity or heterogeneity of the materials to be sampled, the practice of computing dry or wet densities, relationships between ore grade and densities, and many others. If the volume is expressed in cubic feet, it is divided by the tonnage-volume factor, which is the number of cubic feet in a ton of ore. This is the origin of the term «tonnage factor».

The bulk density of a mineralization is obtained by laboratory measurement of field samples (◘ Fig. 4.29) or from the mineralogical composition of the ore. The most common way to determine the bulk density of an ore in the laboratory is to weigh a sample in air, then weigh the same sample suspended in water, and apply the formula:

$$ \mathrm{Bulk}\ \mathrm{density}=\frac{\mathrm{weight}\ \mathrm{in}\ \mathrm{the}\ \mathrm{air}}{\mathrm{weight}\ \mathrm{in}\ \mathrm{air}-\mathrm{weight}\ \mathrm{in}\ \mathrm{water}} $$
Fig. 4.29
figure 29

Station for measuring dry bulk density (Image courtesy of Lydian International Ltd.)
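For example, with hypothetical weighings of 521 g in air and 350 g suspended in water, the formula gives a bulk density of 521/(521 − 350) ≈ 3.05, as in this short Python check.

weight_in_air = 521.0      # g (hypothetical)
weight_in_water = 350.0    # g (hypothetical)
print(weight_in_air / (weight_in_air - weight_in_water))   # about 3.05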

In ore bodies that contain more than one metal, the method of determining specific gravity based on the mineralogical composition of the mineralization is to compute an average specific gravity using the specific gravities of the individual minerals, provided the percentages of the minerals in the ore are correctly known. At an early stage of defining the deposit, the bulk density of a suite of representative samples is determined, and these values are applied to the rest of the deposit. Sometimes, a constant value obtained from the average of representative samples is applied to the entire deposit, but this method can lead to considerable errors in the determination of the tonnage of ore and contained metal, especially if metal grades are highly variable, if the host-rock lithology changes, if the degree of alteration or depth of weathering is variable, and if the mineralogy of the valuable components changes (Annels 1991). Failure to utilize specific gravity in mineral deposits with a high-density contrast between valuable minerals and gangue will result in incorrect determinations of the average grade.

In some cases, different bulk densities are determined and applied to different areas of a mineral deposit and/or different lithologies. Since mineralogical variation is the principal control on bulk density in many deposits, mineralogical zonation is commonly a practical guide to systematic variations in bulk density. For example, in a massive sulfide deposit, samples with between 70 and 90% sulfides will be assigned one bulk density, those with 40–60% sulfides another (lower) bulk density, and so on. In deposits with simple mineralogy, it is often possible to prepare a nomograph relating bulk density to assay data. Thus, the factor used for ore is controlled by changes in the ore content and grade. However, graphs and linear equations of specific gravity against the grade of one metal are not considered accurate enough in a multi-mineral deposit, although they appear very satisfactory for a theoretical single sulfide/gangue mix (Bevan 1993). Alternatively, a relationship between density and combined grades can be established (◘ Fig. 4.30). It is important to bear in mind that a typical massive sulfide deposit contains pyrite and/or pyrrhotite and varying amounts of chalcopyrite, sphalerite, and galena. A more fundamental approach to developing a mathematical model for bulk density is the use of multivariate methods, such as multiple regression. This approach arises because bulk density is commonly a function of both mineralogy and porosity.

Fig. 4.30
figure 30

Relationship between density and combined grades

4.5.4 Estimation Procedures

A variety of procedures have been developed to estimate the tonnage and grade of mineralization in a deposit. The methods can be grouped into two categories: classical and geostatistical methods. Classical methods commonly involve the use of section and plan maps, whereas geostatistical methods involve complex, computer-driven 2-D and 3-D statistical techniques to estimate tonnage and grade. The utilization of geostatistical methods involves a further complexity in calculation, all based upon the theory of regionalized variables described by the French mathematician Georges Matheron in the early 1960s. These methods use the spatial relationship between samples, as quantified by the semivariogram, to generate weights for the calculation of unknown point or block values. The standard technique of geostatistics was called «kriging» by Matheron in honor of the South African mining engineer Danie Krige, and the types most frequently utilized are the variants of ordinary kriging, namely, linear kriging techniques.

Classical (also called traditional, geometric, or conventional) estimation methods can be used to assign values to blocks (e.g., polygonal or inverse distance methods), and they are commonly utilized at the early stages of a mining project. These techniques are not particularly reliable but can offer an order-of-magnitude resource calculation. They are also utilized to check the results obtained using more complex geostatistical estimation methods. The classical methods have stood the test of time but, because of the uncertainties and subjectivities involved in assigning areas of influence, they have been largely superseded over the past three decades by the geostatistical techniques described in the following section. However, these classical methods are still applicable in many situations and can well produce an end result superior to that possible with a geostatistical method. A critical assessment of the use of geostatistical kriging should always be undertaken before dismissing the classical methods. Too often, attempts to apply kriging are based on the use of mathematical parameters that have not been adequately tested or proven, perhaps due to time or information constraints. Geostatistical methods will only work satisfactorily if sufficient sampling is available to allow the production of a mathematical model adequate to describe the nature of the mineralization in the deposit under evaluation. Otherwise, it is much better to apply one of the classical methods.

Classical and geostatistical methods for reserve estimation in a single deposit are complex to apply where mineralization variables such as grade, ore body thickness, and grade-thickness have skewed distributions, and they need sophisticated data processing (Wang et al. 2010). The problem lies in the presence of local outliers or anomalies, which strongly affect the estimation process and need to be replaced.

4.5.5 Classical Methods

Classical or traditional methods utilize analytical and geometric procedures and constitute a deterministic approach. These methods aim to establish discrete geological boundaries to the mineralization, both in mineral exploration and exploitation, that are directly related to a sampling grid.

For resource/reserve computations, a mineral deposit is converted to an analogous geometric body composed of one, several, or an aggregate of close-order solids that best express the size, shape, and distribution of the variables. Construction of these blocks depends on the method selected. Some methods offer two or more manners of block construction, thus introducing subjectivity. In such a case, a certain manner of construction is accepted as appropriate, preferably based on geology, mining, and economics (Popoff 1966). Numerous methods of reserve computation are described in the literature; some are only slight modifications of the most common ones. Depending on the criteria used in substituting the explored ore bodies by auxiliary blocks and on the manner of computing averages for the variables, classical methods can be classified into six main types: (1) method of sections, (2) polygonal method, (3) triangular method, (4) block matrices, (5) contour methods, and (6) inverse distance weighting methods (◘ Fig. 4.31). These methods do not consider any correlation of mineralization between sample points nor quantify any error of estimation. All of them are empirical, and their use depends mainly on the experience of the user.

Fig. 4.31
figure 31

Classical methods for resource/reserve calculations: a block matrices, b inverse distance weighting, c polygons, d contour, e triangles, and f sections

Selection of a method depends on the geology of the mineral deposit, the kind of operation, the appraisal of geologic and exploration data, and the accuracy required. Time and cost of computation are often important considerations, and the purpose of the reserve computation is one of the most important factors in selecting a method. For preliminary exploration, the method should best illustrate the deposit and the operations and permit sequential computations and appraisal. On the other hand, time-consuming procedures must be avoided if reserves are being computed for prospective planning. The system of mining, or the problem of selecting one, can also influence the preference. A certain method of computation can facilitate more than others the design of development and extraction operations owing to technical and economic factors such as mining by levels, average grade, or different cutoff grades.

A careful analysis of geology and exploration should be made to select the best method of estimation. In general, the method (or combination of methods) selected should suit the purpose of computations and the required accuracy; it should also best reflect the character of the mineral deposit and the performed exploration. In a complex or irregular deposit, it is advisable to use two or more methods for better accuracy and confidence. The average of these methods can be accepted as a final result, or the values of one method can be considered as a control of the others. Thus, the use of two or more methods to compute reserves for the same deposit is common practice. Various methods can also be applied to different parts of a body depending on the geology, mine design, type and intensity of exploration workings, and category of reserve computations. A second method can often be used as a control of the computations made by the principal method, so that gross errors can be detected. A common example of combined methods is where one method is applied to outline and divide the mineral body into blocks and another to determine the parameters of each block.

4.5.5.1 Cross-Sectional Methods

If a deposit has been systematically drilled on sections according to a regular grid, reserve calculation will be based on cross sections along these lines. The cross-sectional methods are based on a careful consideration of the geology of the mineral deposit and the development of a correct geological model, which is essential for good resource estimates (Stevens 2010). It is possible to distinguish two variants of the standard method: vertical sections or fences, used mainly in exploration, and horizontal sections or levels, used in mining. Although there are many geometric possibilities, in the traditional cross-sectional method, the area of ore in a given cross section is calculated (e.g., with a planimeter, counting squares, or through Simpson’s rule), and the volume of the ore body is commonly computed using, as a solid figure, two consecutive cross sections and the distance between them (◘ Fig. 4.32a):

$$ V=\frac{A_1+{A}_2}{2}\times L $$

where V is the volume, in m3; A 1 is the area of section A 1, in m2; A 2 is the area of section A 2, in m2; and L is the distance between A 1 and A 2, in meters. The interval between sections can be constant, for example, 50 m, or can vary to suit the geology and mining requirements. Another possibility is to compute the volume corresponding to half the distance to the two adjoining sections (◘ Fig. 4.32b). Thus, the limits of the blocks defined lie exactly halfway between the drillholes. Obviously, an end correction is necessary for the volumes at the extremities of the ore body. In both cases, the volume is calculated using half the distance between drillholes, which is seldom more than 50 m. To increase the accuracy of computations, the number of blocks should be as large as possible. Care should be exercised to avoid arbitrary locations and construction of sections. In exploration, the distance between sections is usually governed by the character of the mineral body and the distribution of mineral values. Selection of sections unjustified by exploration data can influence the size of the areas and, in turn, the computation. Most of the disadvantages in the use of this method can be avoided by properly planned exploration.

Fig. 4.32
figure 32

Cross-sectional methods: a solid figure formed by two consecutive cross sections; b solid figure obtained corresponding to half the distance to the two adjoining sections

The volume of each block multiplied by the bulk density of the mineralization, determined, for example, in the laboratory with samples including valuable mineral, waste, pores, etc., gives the tonnage of ore in tons. The reserves in tons of the valuable component in each block (e.g., copper in sulfide mineralization) are subsequently estimated by multiplying the ore tonnage by the average grade. As explained before, a range of methods is available to determine the average grade: statistical, metal accumulation, area of influence, etc. The sum of the tonnages of ore or valuable component in each block gives the total ore resources/reserves for the entire mineral deposit.
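As an illustration only, the following short Python sketch reproduces this cross-sectional tally with entirely hypothetical figures; the section areas, spacing, bulk density, and block grades are assumed values, not data from any real deposit:

# Cross-sectional method, illustrative sketch (hypothetical values).
section_areas = [1200.0, 1500.0, 1350.0]   # m2, areas of three consecutive sections
spacing = 50.0                              # m, distance between consecutive sections
bulk_density = 2.8                          # t/m3
block_grades = [1.2, 0.9]                   # % Cu, one average grade per block

total_ore_t = 0.0
total_cu_t = 0.0
for (a1, a2), grade in zip(zip(section_areas, section_areas[1:]), block_grades):
    volume = (a1 + a2) / 2.0 * spacing      # m3, mean-area rule between two sections
    ore_t = volume * bulk_density           # tonnage of ore in the block
    total_ore_t += ore_t
    total_cu_t += ore_t * grade / 100.0     # tonnage of contained copper

print(f"Ore: {total_ore_t:,.0f} t, contained Cu: {total_cu_t:,.0f} t")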

The cross-sectional methods are simple and rapid, but they are not very accurate because the distance between cross sections normally varies between 50 and 100 m. These methods, however, are the most convenient for computing reserves of uniform mineral deposits. Thus, well-defined and large bodies that are uniform in thickness and grade or show gradually changing values can generally be computed accurately by cross-sectional methods. The method should be used with caution where the bodies are irregular or where values tend to concentrate in some ore zones. Where computations of several valuable components are required and the mineral body shows grade variations for each component, it is difficult and often impossible to apply cross-sectional methods.

In underground mining, horizontal cross sections constructed along the proposed mining levels are often preferred in mine design. Two sets of vertical sections at right angles to each other would illustrate ore bodies better than any other method. The method is applied most successfully in the case of a deposit that has sharp, relatively smooth contacts, as with many tabular (vein and bedded) deposits. Assay information, for instance, from drillholes, is commonly concentrated along equispaced cross sections to produce a systematic data array; in some underground situations, more irregular data arrays can result, for example, from fans of drillholes. The great strength of the procedure based on sections is the hard geologic control that can be imposed (Sinclair and Blackwell 2002). Moreover, cross-sectional methods are easily adaptable for use simultaneously with other classical methods. In fact, these methods have an advantage over the polygonal methods (see next section) in that it is easy to observe variations in the shape and grade of mineralization.

4.5.5.2 Method of Polygons

Where drillholes are randomly distributed (e.g., in an irregular grid), the grade and thickness of each hole can be assigned to an irregular polygon, and it is assumed that both variables remain constant throughout the area of the polygon. The polygonal estimate is based on assigning areas of influence around drillhole intercepts. Thus, this method embodies the intuitive idea that the influence of each sample is proportional to its area or volume of influence. The most common way of drawing polygons around the drillholes is using a series of perpendicular bisectors of the lines joining sample locations (◘ Fig. 4.33a). The perpendicular bisector of a line segment is the line whose points are equidistant from the two endpoints of the segment. This procedure is equivalent to a process known as Voronoi tessellation. Therefore, in this method each polygon incorporates a unique sample location, and all the points included in the polygon are nearer to the contained datum than to any external datum. The Russian scientist B.T. Boldyrev gave the first description of the method applied to geology as early as 1909. Another possibility to define the polygons is to use angular bisectors (◘ Fig. 4.33b). Here, each polygon is established by linking drillholes with tie lines and then constructing angular bisectors between these lines to define a central polygon (Annels 1991).
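A minimal sketch of this tessellation, assuming hypothetical collar coordinates and using the Voronoi routine available in the SciPy library, could be written as follows; note that the cells at the fringe of the data are unbounded and must be closed arbitrarily, as discussed in the text:

# Perpendicular-bisector (Voronoi) polygons around drillholes, illustrative sketch.
import numpy as np
from scipy.spatial import Voronoi

collars = np.array([[0, 0], [60, 10], [30, 55], [80, 70], [15, 90]])  # m, hypothetical E/N
vor = Voronoi(collars)

for i, region_idx in enumerate(vor.point_region):
    region = vor.regions[region_idx]
    if -1 in region:                     # open cell at the fringe of the data
        print(f"Hole {i}: fringe polygon, needs arbitrary closure")
    else:
        vertices = vor.vertices[region]  # coordinates of the closed polygon
        print(f"Hole {i}: polygon with {len(vertices)} vertices")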

Fig. 4.33
figure 33

Method of polygons: a perpendicular bisectors; b angular bisectors

There are arbitrary decisions that must be made as to how marginal prisms are bounded at their outer edge. There are different possibilities to resolve this problem, including the utilization of geologic information at the boundary, if available, or, more usually, fixing a maximum distance from the sample. A combination of indicated, probable, and possible resources, constructing outer fringes and assigning resource categories to each fringe based on distances to drillholes, can also be used to solve the problem (Annels 1991) (◘ Fig. 4.34). In any case, the final drawing to close the polygons is almost always arbitrary, which has an important impact on the results.

Fig. 4.34
figure 34

Polygons based on resources categories

The third dimension, that is, the height of the polygonal prism, is defined by the thickness of the deposit or bench and is perpendicular to the projection plane. This process produces a general pattern of polygonal prisms that are assigned the grade of the contained datum. Regarding the grade procedure, the average grade of ore found at the sample point (e.g., drillhole) within the polygon is considered to accurately represent the grade of the entire volume of material within the polygon. In this sense, the use of raw sample grades as mean grades of large volumes overestimates the grade of high-grade blocks and, correspondingly, underestimates the grade of low-grade blocks (i.e., a conditional bias, in which the bias depends on the grade estimated).

The polygonal method is deficient in exposing the morphology of the mineral body and the fluctuations of variables within the individual blocks; although average thickness and grade are computed, the pattern of their spatial distribution is not revealed. An alternative to single grade weighting by polygon has been proposed (Camisani-Calzolari 1983). The method involves allocating 50% of the weight to the central drillhole and the remaining 50% to the surrounding drillholes, in equal proportions. These weighting coefficients are entirely arbitrary, and no allowance is made for thickness. However, it is an attempt to overcome one of the main criticisms of the method: that polygons, sometimes very large in areas of sparse drilling, are evaluated by only one drillhole, totally ignoring adjacent drillholes (Annels 1991). Another possibility is to weight the grades of the adjacent drillholes according to their distance away from the center of the polygon, the inverse square of the distance being the most common weighting factor.

With regard to the mathematical procedure of estimation, it is somewhat similar to that used in the cross-sectional method. After the polygons have been drawn, the area of each polygon is computed by using a planimeter or counting squares. Then, a polygonal prism is constructed using the thickness of the mineralization as the height of the prism. The volume computed for the prism is then multiplied by the bulk density of the mineralization to obtain the tonnage of ore in tons, and the average grade of the drillhole is then used to calculate the reserves in tons of the valuable component. The sum of ore or valuable component in each polygonal prism produces the resources and/or reserves for the studied mineral deposit.
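Numerically, the tally can be sketched as follows in Python; the polygon areas, thicknesses, grades, and density are hypothetical values used only to illustrate the arithmetic:

# Polygonal method, illustrative tally (hypothetical values).
polygons = [
    # (area m2, thickness m, grade % Zn) of each polygonal prism
    (2500.0, 8.0, 6.5),
    (1800.0, 6.5, 4.2),
    (3100.0, 9.2, 5.8),
]
bulk_density = 3.1  # t/m3

ore_t = sum(area * thick * bulk_density for area, thick, _ in polygons)
metal_t = sum(area * thick * bulk_density * grade / 100.0 for area, thick, grade in polygons)
avg_grade = 100.0 * metal_t / ore_t
print(f"Ore: {ore_t:,.0f} t  Zn: {metal_t:,.0f} t  mean grade: {avg_grade:.2f}%")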

Favorable criteria for the use of the method of polygons are the proven continuity of a mineral body between drillholes and the gradual changes of all variables. The polygonal method is successfully used in computing reserves of tabular deposits such as sedimentary beds of coal, phosphate rock, or oil shales as well as large lenses and thick vein bodies. The greater the number of polygonal prisms and the more regular the grid, the more accurate are the computations. Polygons must be used with caution in the case of nonuniform and irregularly shaped mineral bodies. They are incorrect where the bodies cannot be correlated satisfactorily between drillholes, where they are small and distributed erratically, or where intercalations of waste are present. In mineral deposits composed of several bodies overlying each other, separate groups of polygons can be delineated for each one (Popoff 1966).

4.5.5.3 Method of Triangles

The method of triangles represents a modification of the polygon method. In this method, a series of triangles is constructed with the drillholes at the apices (◘ Fig. 4.35). This method has the advantage that three drillholes are considered in the calculation of the thickness and grade parameters for each triangular reserve block. Obviously, the triangle method is more conservative than the assignment of single values to large blocks, as in the polygonal method. The construction of the triangles can use Delaunay triangulation, the precursor to Voronoi tessellation. The triangles must have angles as close to 60° as possible, certainly avoiding acute-angled triangles (Annels 1991). In this way, triangular prisms are defined on a two-dimensional projection (e.g., a bench plan) by linking three sample sites so that the resulting triangle contains no internal sample sites. Each triangle on the plan represents the horizontal projection, or the base area, of an imaginary prism with edges equal to the vertical thicknesses of the mineral body in the drillholes. Thus, the average of the three values of the variables, grade and thickness, at the apices of a triangle is assigned to the triangular prism.

Fig. 4.35
figure 35

Method of triangles

Calculating ore reserves by this method involves the determination of the area of each triangle using the procedures described above for the polygonal or cross-sectional methods, the calculation of the volume of each triangular prism by multiplying the area by its weighted thickness, and obtaining the tons of mineralization and valuable component using bulk density and grade, respectively. Where the support of the grades is constant, as in the bench of an open pit, there are two main methods of estimating the grade: arithmetic mean and included angle weighting. The discrepancy between the two values obtained increases as the corner angles deviate from 60° (Annels 1991). Where the thickness at each intersection is variable, again two methods can be used to determine grade: thickness weighting and thickness and included angle weighting. The side lengths of each triangle, the distances of each hole from the center of gravity, and/or the areas of influence of each hole, constructed by the rule of nearest point, can even be used for weighting (Popoff 1966).
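A short Python sketch of this procedure, using the Delaunay triangulation available in SciPy and entirely hypothetical drillhole data, is given below; the grade of each prism is thickness-weighted, one of the variants mentioned above:

# Triangular method, illustrative sketch (hypothetical data).
import numpy as np
from scipy.spatial import Delaunay

xy = np.array([[0.0, 0.0], [50.0, 5.0], [25.0, 45.0], [70.0, 40.0]])  # m, drillhole collars
thickness = np.array([6.0, 8.5, 7.0, 5.5])                            # m
grade = np.array([1.1, 0.8, 1.4, 0.9])                                # % Cu
density = 2.7                                                         # t/m3

ore_t = metal_t = 0.0
for simplex in Delaunay(xy).simplices:
    p = xy[simplex]
    # triangle area by the shoelace formula
    area = 0.5 * abs((p[1, 0] - p[0, 0]) * (p[2, 1] - p[0, 1])
                     - (p[2, 0] - p[0, 0]) * (p[1, 1] - p[0, 1]))
    t_mean = thickness[simplex].mean()                                # mean thickness of the prism
    g_tw = np.average(grade[simplex], weights=thickness[simplex])     # thickness-weighted grade
    tonnes = area * t_mean * density
    ore_t += tonnes
    metal_t += tonnes * g_tw / 100.0

print(f"Ore: {ore_t:,.0f} t, contained Cu: {metal_t:,.0f} t")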

The principal advantage of this method is that it produces some smoothing in the calculations of individual prisms. As a result, estimation of the tail of the grade density distribution is more conservative than is the case with the traditional polygonal approach. The fact that samples can be used an unequal number of times is part of the fringe problem (how far should ore be assumed to extend beyond an outside hole in ore?), although this problem is common to most procedures. Regarding the disadvantages of this method, Sinclair and Blackwell (2002) suggest the following: (1) the smoothing is entirely empirical; (2) the weighting (equal weighting of three samples) is arbitrary and thus is not optimal other than coincidentally; (3) anisotropies are not considered; and (4) the units estimated do not form a regular block array.

For many decades, the triangular method was considered standard, although errors in results due to the manner of dividing the area into triangles were recognized. The procedure for reserve computations by this method is relatively simple, although modifications of the method, such as included angle weighting or distances of each hole from the center of gravity to calculate grade, require more elaborate computations. The relative error depends on the manner in which the area is divided into triangles, their form, and the total number of triangles. Thus, errors in computing reserves can be substantial, particularly where the fluctuation of variables is large and the number of triangles is small. In comparison with other methods, the triangle method requires construction of a greater number of blocks, ultimately resulting in labor- and time-consuming computations. Where an ore body contains several valuable components, computations can also be cumbersome. Moreover, the method is not exact where variables decrease from the center to the outside boundaries, such as the thickness of lens-like bodies. In these cases, the volume reserves computed will be underestimated. In general, the uniform and gradual changes of variables that favor the use of the triangular method are characteristic features of only a few mineral deposits, predominantly sedimentary ones.

4.5.5.4 Block Matrices

Where the data are on lines or on rectangular or regular offset grids, regular blocks of square or rectangular shape can be fitted to the drillholes (◘ Fig. 4.36). The method is basically similar in use to the polygon method and is particularly suited to the exploration phase of drilling of a prospect, where rapid updating of the reserve can be undertaken as each new hole is drilled and where precision of the estimates is not as crucial as at a later feasibility or mining stage. According to the way the blocks are constructed, some variants allow extrapolation of mineralization beyond drilling but use only one hole to evaluate each block; other variants give conservative reserves using four holes to evaluate both grade and thickness and are thus somewhat more reliable. Generally, the thickness applied in the latter case is the arithmetic mean, while the grade is thickness weighted, plus bulk density if required, among the four holes (Annels 1991).

Fig. 4.36
figure 36

a–e Block matrices (Annels 1991)

4.5.5.5 Contour Methods

Contour methods are very simple to use and produce good results, especially for mineral bodies where there are certain natural regularities in the variations of thickness and grade. The methods are based on the assumption that unit values, from one point to another, undergo continuous and uninterrupted changes according to the rule of gradual changes. To construct isolines, intermediate values are determined by interpolation between points of known values. As a result, certain properties of mineral bodies can be presented graphically on a plan or section by a system of isolines. Common cases are the computation of average thickness (◘ Fig. 4.37), average grade, and average value of a mineral deposit from appropriate isoline maps. The methods require a sufficient number, appropriate density, and distribution of observations for accurate plotting of isolines. A major advantage of the methods is their descriptiveness; the isopach map gives an idealized likeness of the mineral body, whereas the isograde map shows the distribution of rich and poor ore. Thus, the boundaries of cutoff ore are easily constructed and changed; likewise, volume can be computed by measuring areas of respective isolines without additional drawing. Moreover, if the requirements for minimum grade, thickness, or value of ore are changed, the isomaps remain the same. The methods of isolines are applicable to deposits of gradual physical and chemical changes such as sedimentary deposits, for instance, large placer gold deposits explored with hundreds of drillholes.

Fig. 4.37
figure 37

Contour of magnesite average thickness in a magnesite deposit (Illustration courtesy of Pedro Rodriguez)

Contouring is normally invoked to avoid the irregular and commonly artificial ore/waste boundary that arises in estimating blocks. In cases in which data are abundant, they commonly are contoured directly without the intermediate stage of grid interpolation. As an estimation procedure, contouring of grades is typically applied to grade control in open-pit mines where the controlling data are blasthole assays.

Up to four methods of contouring can be distinguished (Annels 1991), the main three being described below: (1) the grid superimposition method, (2) the moving window method, and (3) the graticule method. In the grid superimposition method, drillhole intersection points are plotted on plan along with the relevant component of thickness and grade. Contour plans are then produced, and a matrix of ore blocks is superimposed, whose dimensions allow them to fit exactly within mining blocks. For all blocks within the ore limits, values are assigned to the midpoint of each block by interpolation between contours, first for thickness and second for grade. Where blocks overlap the boundary, an estimate of the proportion of ore in the block is made together with an estimate of grade and thickness at the center of gravity of this section of ore. To calculate the reserves of the deposit, the area of each block, obviously the same for all blocks, is multiplied by the interpolated thickness, and the volume obtained is multiplied by the bulk density, in a similar way to the preceding methods, to obtain the tonnage of mineralization. The tonnage is then multiplied by the interpolated grade to compute the valuable component reserves of the block. The sum of the reserves of each block gives the reserves for the entire deposit.

The moving window method is a smoothing technique particularly suited to the calculation of reserves of an open-pit bench that has been intersected by a series of irregularly spaced drillholes, or blastholes, which have revealed a highly erratic fluctuation in bench composite grades. For this reason, contouring of the data is not possible, and as a result, the grid superimposition and grade interpolation method cannot be applied. The moving window method involves fitting a grid of ore blocks to the outline of the deposit in the bench under evaluation. A search window is then drawn whose dimensions are twice those of each ore block. Ideally, at least 15 drillholes should fall in the search area, so the search window dimensions can be modified to achieve this number if required. As the dimensions of the search window increase, a greater degree of smoothing of the data is achieved. The window is positioned so that its center falls over the first block to be evaluated, and the arithmetic mean of all the raw data values falling in the window, or their log-transformed equivalents, is calculated and the result assigned to this block. The window is then moved laterally to the next block and the above calculation repeated. The rest of the procedure is the same as in the grid superimposition method.
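A minimal sketch of the moving-window averaging, with a randomly generated (hypothetical) set of blasthole composites and arbitrary block and window sizes, could be:

# Moving window method, illustrative sketch: each block receives the arithmetic
# mean of all composites falling inside a square window twice the block size.
import numpy as np

rng = np.random.default_rng(42)
holes_xy = rng.uniform(0.0, 200.0, size=(60, 2))        # composite locations, m (hypothetical)
grades = rng.lognormal(mean=0.0, sigma=0.6, size=60)    # erratic grades, % (hypothetical)

block = 20.0                      # block side, m
window = 2.0 * block              # search window side, m
estimates = {}

for cx in np.arange(block / 2, 200.0, block):
    for cy in np.arange(block / 2, 200.0, block):
        inside = (np.abs(holes_xy[:, 0] - cx) <= window / 2) & \
                 (np.abs(holes_xy[:, 1] - cy) <= window / 2)
        if inside.any():                                 # skip blocks with no data in the window
            estimates[(cx, cy)] = grades[inside].mean()

print(len(estimates), "blocks estimated")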

Where no correlation exists between thickness and grade, the graticule method can be used (Annels 1991) (◘ Fig. 4.38). Contour maps of the variables are superimposed, and the area of each graticule is determined using the methods described above. The thickness and grade assigned to each graticule within the ore body limits are the mean of the bounding contours. The global procedure is similar to that shown in the previous methods: determination of volumes, tonnage of mineralization, and tonnage of the valuable component.

Fig. 4.38
figure 38

Graticule method (Annels 1991)

4.5.5.6 Inverse Distance Weighting Methods

Inverse distance methods are a family of weighted average methods, one of their most characteristic features being the clear smoothing generated in the estimates. Thus, these methods provide for a gradual change in values between multiple sample points rather than an abrupt and unnatural change at the boundary between adjacent polygonal blocks. The technique applies a weighting factor to each sample surrounding the central point of an ore block. This weighting factor is the inverse of the distance between each sample and the block center, raised to the power «n», where «n» usually varies between 1 and 3. Only samples falling within a specified search area (2-D) or volume (3-D) are weighted in this way. Because the method is laborious and repetitive, it is necessary to use a geological modeling software package.

The inverse distance weighting is based on the assumption that the influence of a borehole over a point varies inversely with the distance. The method begins to take the spatial distribution of data points into account in the calculations, a characteristic that will be repeated with geostatistical methods. Although subjective, inverse distance weighting estimation procedures remain popular. They have commonly been found to generate results that are somewhat similar to geostatistical estimates produced using ordinary kriging methods. However, the application of inverse distance methods has been steadily decreasing through the years in favor of geostatistical methods.

The procedure comprises the division of the deposit into a group of regular blocks within the geologically defined boundary. The available data are then used to calculate the variable value, thickness or grade of the mineralization, for the center of each block. As the name of the method indicates, near points are given greater weighting than points far away. The weighted average value for each block is calculated using the following general formula:

$$ {Z}_{\mathrm{B}}=\frac{{{\displaystyle \sum}}_{i=1}^n\left({Z}_i/{d}_i^n\right)}{{{\displaystyle \sum}}_{i=1}^n\left(1/{d}_i^n\right)} $$

where Z B is the estimate of block grade or thickness based on the values of each of these (Z i ) at each sample location in the search area; (1/d) is the weighting function, d being the distance of each sample from the block center; and «n» is the power to which the distance is raised. It is necessary to define the data utilized in the process, this selection being based on the distance factor for the search area, the power factor used, and how many points should be utilized to estimate the center point of each block. The inverse distance weights must be normalized so that they sum to one; otherwise, the method is biased and therefore unacceptable.
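The estimator above can be written directly as a short function; the following Python sketch is purely illustrative, with the sample values, distances, and powers chosen arbitrarily. Raising the power shifts weight toward the nearest sample:

# Inverse distance weighting estimate of a block centre from surrounding samples.
import numpy as np

def idw(values, distances, power=2.0):
    """Weighted average with weights 1/d**power, normalised so they sum to one."""
    z = np.asarray(values, dtype=float)
    d = np.asarray(distances, dtype=float)
    if np.any(d == 0.0):            # a sample sits exactly on the block centre
        return float(z[d == 0.0][0])
    w = 1.0 / d ** power
    return float(np.sum(w * z) / np.sum(w))

# Hypothetical example: three samples in the search area
print(idw([1.6, 0.9, 1.2], [12.0, 25.0, 40.0], power=2))   # inverse distance squared
print(idw([1.6, 0.9, 1.2], [12.0, 25.0, 40.0], power=3))   # more weight to the nearest sample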

The most common exponents used are n = 2 (inverse distance squared, IDS) and n = 3 (inverse distance cubed, IDC). ◘ Figure 4.39 shows an example of the application of the inverse distance method and the influence of the power «n» on the final result of the estimation (Annels 1991). Three drillholes fall in the search area (circular), and their grades (%) are given in the diagram, together with the distance values. As can be seen in ◘ Table 4.11, the weighting given to the nearest sample (1.6%) increases with «n», while that given to the others decreases.

Fig. 4.39
figure 39

Inverse distance weighting with circular search area (Annels 1991)

Table 4.11 Results of the example considered in ◘ Fig. 4.39 for different values of the power «n» (Annels 1991)

Larger exponents (IDC) are applied where large weights are desired for the closest samples. The extreme case is to increase the value of the exponent so that only the closest sample receives any weight at all, but this choice makes little sense because the procedure then becomes equivalent to the polygonal method. The opposite extreme occurs where the exponent is zero, which amounts to an equally weighted moving average as described in the previous methods (moving window method).

Techniques such as quadrant or octant searches can also optimize the spatial distribution of data utilized to produce a block or point estimate. Where the deposit is considered to be isotropic, in that grade or thickness variations are constant in all directions or the drilling grid is square, a circle (2-D) or a sphere (3-D) is used as the search area. But if the deposit is anisotropic, an ellipse (2-D) or an ellipsoid (3-D) is preferred as the search area. Another possibility is to divide the search area around the block center to be evaluated into four or, more commonly, eight sectors and then proceed to search for the nearest specified number of samples in each sector in turn; usually, an eight-point sector search is used, but this can be varied by the user. This means that a maximum of 64 samples would be utilized in an eight-point sector search, although some sectors can reach the set distance limit before eight points are located, and thus a smaller data set would be used. This method reduces the bias incurred where denser sampling exists to one side of the block under evaluation. Problems still exist, however, for blocks at the ore body fringes where some sectors will be totally empty.

As aforementioned, inverse distance weighting is a smoothing technique and as such is unsuited to deposits that have sharply defined boundaries and a very sudden drop in grade. In these situations, the method tends to produce larger tonnages at lower grade than actually exist, which can thus seriously affect the results of any economic feasibility study. Therefore, it is evident that inverse distance weighting works best for mineralization that displays a gradual decline in grade across its economic fringes. It is ideal for porphyry deposits, some alluvial or eluvial deposits, and for limestones (Annels 1991).

4.5.6 Geostatistical Methods

4.5.6.1 Introduction

The classical methods described so far are based on the assumption that the individual samples, such as sample values from drillholes, are statistically independent of each other. In the context of an ore body, this implies that the position from which any sample was taken is not relevant. Theoretically, using classical statistics, taking samples on opposite sides of an ore body would be as good as taking them a short distance apart. This kind of independence is rarely found in mineral deposit data; instead, there is frequently a certain spatial interdependence among the samples, which is studied by geostatistics. Geostatistics is therefore statistics in which the spatial association is taken into consideration and where the variables are known as regionalized variables. Matheron stated in 1963 that «Geostatistics, in their most general acceptation, are concerned with the study of the distribution in space of useful values for mining engineers and geologists, such as grade or thickness, including a most important practical application to the problems arising in ore-deposit evaluation… Any ore deposit evaluation as well as proper decision of starting mining operations should be preceded by a geostatistical investigation which may avoid economic failures.» Moreover, classical methods do not include any estimation of the errors involved in the evaluation, even though this concept is fundamental in any method of estimating mineral resources and reserves. In this sense, geostatistics estimates the error involved in the estimation. In ◘ Fig. 4.40, two blocks are to be estimated, one from relatively few data (left) and the other from more abundant data (right). In addition to generating block estimates, in the same way as classical methods, geostatistics computes the error of estimation.

Fig. 4.40
figure 40

Geostatistics estimates the error of the estimation

Numerous books are available on the subject, including those by Matheron (1971), David (1977), Journel and Huijbregts (1978), Clark (1979), Isaaks and Srivastava (1989), and Goovaerts (1997). Geostatistics is also applied to other topics in mineral resource exploration/evaluation such as the classification of ore reserves based on geostatistical and economic parameters (Wober and Morgan 1993).

There are two areas where geostatistical calculations can be important, even in the early phases of evaluating a mineral deposit: (1) the calculation of errors or uncertainties in reserve estimates («knowledge of ore grades and ore reserves as well as error estimation of these values is fundamental for mining engineers and mining geologists») (Matheron 1963) and (2) the determination of grades, for instance, in single mining blocks. As a consequence, a geostatistical reserve study with careful attention to geologic controls on mineralization will provide not only a good total reserve estimate but also a more reliable block-by-block reserve inventory with an indication of relative confidence in the estimated block grades. Obviously, geostatistical methods, like any others, cannot increase the quantity of basic sample information available, nor can they improve the quality or accuracy of the basic assays. Geostatistical techniques should be regarded as a comprehensive suite of ore reserve estimation tools which, if correctly understood and utilized, should lead to few surprises when the mine comes into production (Readdy et al. 1998). Other advantages of geostatistical methods include determination of the best possible unbiased estimate of grade and tonnage, which is important where an operation is working close to its economic breakeven point, and the assignment of confidence limits and precision to estimates of tonnage and grade.

In general, the geological context defines the grade and thickness in a deposit. Thus, changing geological and structural conditions produce variations in grade or quality and thickness between deposits, and even within one deposit. However, it can logically be considered that samples taken close together tend to reflect the same geological conditions. As the sample distance increases, the similarity decreases until at some distance there is no correlation. Geostatistical methods quantify this concept of spatial variability within a deposit and display it in the form of a semivariogram. Once the correlation between samples is established, it can be utilized to estimate values between existing data points. The estimation of the correlation is referred to as variogram modeling. Thus, geostatistical methods use the spatial relationship between samples, as quantified by the semivariogram, to generate weights for the estimation of the unknown point or block.

Matheron developed the basis for geostatistics in the mineral industry during the 1950s and 1960s. As aforementioned, geostatistics is defined as the application of the theory of regionalized variables. Regionalized variables are associated with both a volume and shape, called a «support» in geostatistics (◘ Fig. 4.41), and a position in space. The term regionalized variable also emphasizes the two aspects of these variables: a random aspect, which accounts for local variations, and a structured aspect, which reflects large-scale tendencies of a phenomenon. Geostatistics also assumes stationarity within the mineral deposit. It means, simply, that the statistical distribution of the difference in, for example, grade between pairs of point samples is similar throughout the entire deposit or within separate subareas of the deposit. The concept of stationarity can be difficult to understand but can be associated with the term homogeneity utilized by geologists to characterize domains of similar geologic features such as types of mineralization.

Fig. 4.41
figure 41

The volume, size, and position in space of a sample is the «support»

Classical statistics considers only the magnitude of the data, whereas geostatistics takes into account not only the value at a point but also the position of that point within the ore body and in relation to other samples. Of course, geostatistical estimation does not necessarily mean better estimates than those obtained by other methods. In fact, any estimation procedure can produce incorrect results because the procedure has not been applied correctly, because it is inappropriate, or because the geologic model changes as a consequence of new information obtained later (Wellmer 1998). Geostatistics has a clear potential if it is reconciled with the geology of the mineral deposit (King et al. 1982). Thus, it is important to note that geostatistical methods cannot replace meticulous geological data acquisition and interpretation. They are computational tools that rely on good geology and extend its reach. For instance, an erroneous application of geostatistics is to calculate a semivariogram with data that comprise distinct domains. For this reason, geostatistical results (kriging) should always be checked with other methods such as the classical ones. Geostatistical calculations also require suitable computer programs and a considerable mathematical background. However, geologic cross sections, bench plans, and, most importantly, the acquired understanding of the ore body in terms of the lithologic, structural, or other controls on the mineralization are of paramount importance in any geostatistical study.

A geostatistical ore reserve study will usually include the following main steps: (1) study of the geologic controls on the grade, thickness, or other variables of the mineralization, (2) computation of experimental semivariograms, (3) selection of suitable semivariogram models to fit to the experimental semivariograms, and (4) estimation of the variable value and the estimation error from the surrounding sample values using kriging. Geostatistical methods are optimal where data are normally distributed. Therefore, the first step in geostatistical studies is to check the normality of the data distribution. It can be carried out using the methods described in ► Sect. 4.2.2. The four numbered steps will be presented here in such a way as to minimize the use of mathematical expressions and notation (e.g., triple integrals).

4.5.6.2 Spatial Correlation: Semivariogram

The amount of spatial correlation or continuity is determined by the primary geostatistical tool: the variogram or semivariogram; strictly there is a difference between the two terms, but here they will be used interchangeably. The semivariograms, which represent the characteristics of the mineralization, are a prerequisite to any geostatistical ore reserve estimation, and they are used in all subsequent phases. As explained in a previous section, the semivariogram defines the concept of «area of influence» and can be used in determining the optimum drillhole spacing and optimum sample size. The semivariogram serves to measure and express the correlation of the variable under consideration in a specific space and at a given orientation. In this method, it is always assumed that the variability between two samples depends upon the distance between them and their relative orientation. By definition, this variability (semivariance or γ(h)) is obtained by calculating the variance between pairs of samples separated by a distance «h» (lag distance), following the formula:

$$ \gamma (h)=\frac{1}{2n(h)}\cdot {\displaystyle \sum}_{i=1}^{n(h)}{\left({x}_i-{x}_{i+h}\right)}^2 $$

where γ(h) is the semivariance, x i are the data values of the regionalized variable (e.g., ore grades), x i + h is the data value at a distance «h» from x i , and n(h) is the total number of value pairs included in the comparison; the lag (h) is simply the spacing at which the squared differences of sample values are obtained (lag 1 is thus the minimum sampling interval). For n samples regularly distributed along a line, at intervals of «h» meters, we will have (n − 1) pairs to compute γ(h), (n − 2) pairs to compute γ(2h), and so on.

The sample pairs are each oriented in the same direction, are each separated by the same distance (h) in meters, and have the same support, a concept commented on previously. On the semivariogram, γ(h) is plotted as a function of the spacing or lag h, and the result is the so-called experimental or empirical semivariogram, because it is based only on samples. Alternatively, semivariograms can be computed on the logarithms of grade if this variable is logarithmically distributed. A semivariogram is therefore ideally suited for clarifying the problem of whether the sample values are statistically independent of each other or whether they are spatially interdependent. Commonly, values of γ(h) increase steadily with increasing distance and reach a limiting or plateau level. ◘ Figure 4.42 shows how the semivariance or γ(h) is calculated.
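As a purely illustrative sketch, the experimental semivariogram of samples regularly spaced along a line can be computed as follows (the grades are hypothetical):

# Experimental semivariogram for regularly spaced samples along a line.
import numpy as np

def semivariogram_1d(values, max_lag):
    """gamma(h) for lags 1..max_lag, in multiples of the sampling interval."""
    values = np.asarray(values, dtype=float)
    gam = []
    for h in range(1, max_lag + 1):
        diffs = values[:-h] - values[h:]          # n - h pairs at lag h
        gam.append(0.5 * np.mean(diffs ** 2))     # semivariance at this lag
    return np.array(gam)

grades = np.array([1.2, 1.0, 1.4, 1.8, 1.5, 1.1, 0.9, 1.3, 1.6, 1.7])  # % (hypothetical)
print(semivariogram_1d(grades, max_lag=4))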

Fig. 4.42
figure 42

Calculation of γ(h) at different lags

If sampling density is too low for any underlying correlation to be detected, if the ore body is extremely homogeneous, or if poor sample collecting, preparation, and assaying procedures were used, then no structure or continuity will be visible in the semivariogram. From an operational viewpoint, geostatistical calculation requires a large sample size. With a small number of exploration workings, for example 20 drillholes, the calculation of variograms becomes increasingly uncertain, even impossible. At least 30 pairs are necessary for each lag of the experimental semivariogram, and no lag greater than L/2 should be accepted, where L is the average width of the data array in the direction for which the semivariogram is being estimated (Journel and Huijbregts 1978).

The examination of the variogram can also be used to determine the nature of the mineralization. The uniformity of the ore and the degree to which it has been concentrated by various processes during precipitation of the ore minerals, or remobilized in later metamorphism or secondary enrichment, can be deduced from the study of the semivariogram. An insight is gained into the relative importance of spatial controls (e.g., distance from an igneous contact, presence of faults, or a palaeo-shoreline) and random influences (e.g., fracture infillings or metamorphic lateral secretion veins) operating during the mineralization process (Annels 1991).

4.5.6.3 Semivariogram Models and Fitting

Once an experimental semivariogram has been calculated, it must be interpreted by fitting a model to it (◘ Fig. 4.43). Not every function that depends on distance and direction is a valid semivariogram. The experimental semivariogram cannot be utilized directly to generate kriging estimates since it is established only for a finite number of lag distances, those used in its construction. After joining its values at such lag distances, the resulting function may not fulfill the conditions that every semivariogram must meet. In the kriging estimation process, a continuous function must be included in the calculations, and since the experimental semivariogram is not a function of this type, it is necessary to fit a theoretical model to the experimental semivariogram obtained. In other words, kriging estimation will need access to semivariogram values for lag distances other than those used in the empirical semivariogram. Another reason is to ensure that the kriging equations are solvable and that the kriging estimates have positive kriging variances.

Fig. 4.43
figure 43

Experimental semivariogram and fitting a model

There are several possibilities to select a model to fit, but not an infinite number, because strong mathematical constraints exist (the mathematical property of positive definiteness). Fitting a semivariogram model can be done manually or by automatic statistical fitting, a combination of both usually being the best option. Cross validation is then performed to compare alternative variogram models. Fitting models is not easy for different reasons: (a) the accuracy of the observed semivariances is not constant; (b) the spatial correlation structure is not the same in all directions, that is, anisotropy is commonly present; and (c) the experimental semivariogram can contain much point-to-point fluctuation, among others.

The spherical or Matheron model is the most common type of model used for mining variables, for instance, grade or thickness of the mineralization, although other types exist such as the circular, exponential, linear, Gaussian, or de Wijsian models, among others (e.g., Journel and Huijbregts 1978; Annels 1991) (◘ Box 4.4: Spherical Model). From a mathematical point of view, it is possible to combine two or more simple models in the fit. In many cases, it is not possible to make an adequate approximation of an experimental semivariogram by a single model. In other words, regionalization can be present at several scales. The use of nested structures or combined models provides enough flexibility to model most combinations of geologic controls. It is important to note that all semivariograms fitted in various directions of a mineral deposit are part of the same model, and all should have the same components, except in the case of zonal anisotropy.

Box 4.4

Spherical Model

The spherical or Matheron model, and many others, can be described quantitatively by three parameters: (1) range, (2) sill, and (3) nugget effect (◘ Fig. 4.44). The range (a) is the distance at which the semivariogram levels off at its plateau value. This reflects the classical geological concept of an area of influence. Beyond this distance of separation, sample pairs no longer correlate with one another and become independent. Regarding the sill, it is the value at which the variogram function plateaus. For all practical purposes, the sill is equal to the variance of all samples used to compute the semivariogram. As a general rule, the semivariogram model starts at zero on both axes; at zero separation (h = 0) there should be no variance. Even at relatively close spacings, there are small differences, and variability increases with separation distance. This is seen on the semivariogram model where a rapid rate of change in variability is marked by a steep gradient up to a point where the rate of change decreases and the gradient becomes zero. Beyond this point, sample values are independent and show variability equal to the theoretical variance of sample values. This variability is termed the sill (C) of the semivariogram. The sum of the nugget effect plus the sill is known as the total sill value (C + C 0).

Fig. 4.44
figure 44

Spherical model

The third characteristic considered is the nugget effect. The semivariogram value at zero separation must be zero, but there is often a discontinuity near the origin, which is called the nugget effect (C 0). It expresses the local homogeneity, or lack thereof, of the deposit. This is generally attributable to differences in sample values over very small distances and can include inaccuracies in sampling and assaying (this component is sometimes called the «human» nugget effect) as well as associated random errors. If the semivariogram shows random fluctuation about a horizontal line, a so-called pure nugget effect is present in the ore body. In that case, the best option is to evaluate the deposit using classical methods, since the errors in estimating the reserves of an ore deposit with a pure nugget effect in the semivariogram can be huge.

The three parameters mentioned (range, nugget effect, and sill) characterize each type of mineral deposit. Very irregular deposits, such as gold or pegmatite deposits, will show high nugget effect and/or small range; relatively uniform deposits such as stratiform, sedimentary Pb–Zn occurrences show low, even zero, nugget effect and large range. Information from other deposits of the same type, preferably neighboring deposits or deposits in the same geological region, can help as a priori information, for example, for the estimation of the range or the relative nugget effect, if in the early stage of the exploration only limited data were available to calculate the variogram (Wellmer 1998). Semivariograms in different orientations can also identify the presence of anisotropic features in mineral deposits. Anisotropic features are reflected by the range and sill, which are dependent on the orientation; the nugget effect is generally an isotropic quantity.

The spherical model has the mathematical form shown in the two equations below:

$$ \begin{array}{l}\gamma (h)={C}_0+C\left(\frac{3h}{2a}-\frac{h^3}{2{a}^3}\right)\kern1em \left(\mathrm{for}\ h<a\right)\\ {}\gamma (h)={C}_0+C\kern1em \left(\mathrm{for}\ h\ge a\right)\end{array} $$
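These two equations translate directly into a small function; the sketch below is illustrative only, with the parameter values (nugget C 0, sill contribution C, and range a) chosen arbitrarily:

# Spherical (Matheron) semivariogram model.
import numpy as np

def spherical(h, c0, c, a):
    """gamma(h) for nugget c0, sill contribution c and range a; gamma(0) = 0."""
    h = np.asarray(h, dtype=float)
    gamma = np.where(h < a,
                     c0 + c * (1.5 * h / a - 0.5 * (h / a) ** 3),
                     c0 + c)
    return np.where(h == 0.0, 0.0, gamma)

print(spherical([0.0, 25.0, 50.0, 100.0, 200.0], c0=0.1, c=0.9, a=100.0))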

Several phenomena can appear in semivariograms fitted with the spherical model, such as the proportionality effect, drift, directional anisotropy, or the hole effect. If a deposit is very large, then it is perhaps unrealistic to assume constant spatial variation, so it is necessary to divide the deposit into subareas or levels provided that there are still enough samples in each. Each subarea or level will show a semivariogram with a different sill: this is the proportionality effect. Regarding the drift, an assumption made in geostatistics is that no significant statistical trends occur within the deposit, which would cause a breakdown in stationarity, but sometimes such a statistical trend can be present and the sill value increases over a specific distance (drift); since the drift usually occurs at distances beyond the range, it will not interfere with local estimations of the deposit. With respect to anisotropy, this occurs where different semivariograms are obtained for different directions in an ore body, which means that an elliptical zone of influence exists. Anisotropy is especially marked, for example, in alluvial deposits where the range across the deposit is short compared to that parallel to its length. Finally, the hole effect can be recognized when areas of high-grade mineralization alternate with areas containing low values. The result is a pseudo-periodicity that is reflected by an oscillation of the semivariogram about the apparent sill level. This effect can be easily confused with the usual erratic oscillation of the semivariogram about the sill value for lag values greater than the range.

4.5.6.4 Kriging

Georges Matheron selected this name for the estimation process because he wanted to recognize the work of D. G. Krige, who proposed the use of regression after concluding that the polygonal method led to overestimation or underestimation of grades in the estimation results. The kriging method is a geostatistical technique, or a group of techniques, for determining the best linear unbiased estimator (BLUE) with minimal estimation variance. It is best because it keeps the errors as low as possible; if Z and Z* are the true and estimated values, respectively, the variance of the differences (Z − Z*) for all estimates must be minimized. It is linear because kriging calculates the variable as a linear combination of the values of the nearest samples. And it is unbiased because the estimation process is unbiased on average, that is, over the entire data range. Therefore, kriging is the operation of weighting samples in such a way as to minimize errors in the estimation of grades of the deposit.

Kriging generates the estimate at each point or block employing the semivariogram model fitted to the experimental semivariogram. The main problem to be resolved by kriging is to generate the best possible estimate of an unknown point or block from a group of samples. The general term kriging covers several specific methods such as simple kriging (SK), ordinary kriging (OK), indicator kriging (IK), universal kriging (UK), and probability kriging (PK), among others. In kriging, the coefficients of such a linear combination are obtained indirectly from the semivariogram, hence the importance of fitting the semivariogram model correctly. Unlike other estimation methods (e.g., inverse distance weighting or nearest point), kriging also gives a confidence level for each estimate.

The kriging estimator has the following general form, for instance, to estimate a grade value at a point:

$$ {Z}^{\ast }={\displaystyle \sum}_{i=1}^n{\lambda}_i{x}_i={\lambda}_1{x}_1+{\lambda}_2{x}_2+{\lambda}_3{x}_3+\dots +{\lambda}_n{x}_n $$

where Z* is the estimated grade, x i is the sample grade, λ i is the weighting coefficient assigned to each respective x i , and «n» is the selected number of nearest neighbor samples that will be used to estimate the grade. The suitable weights λ i assigned to each sample are determined by two conditions. The first one expresses that Z* and Z must have the same average value within the whole large field and is written as Σλ i = 1. The second condition expresses that the λ i take such values that the estimation variance of Z by Z*, in other words, the kriging variance, is the smallest possible (Matheron 1963). In minimizing this estimation error or variance, kriging results in a series of simultaneous equations, which can be solved for each weighting factor, given the position of the sample and a model of the semivariogram representative of the mineralization being studied. The estimation errors in the process will be higher in regions of low drilling density and obviously lower where the deposit has been extensively drilled with closer spaced holes.

The system of linear equations (system of ordinary kriging equations) is set up as follows:

$$ {\displaystyle \sum}_{j=1}^m{\lambda}_j{\gamma}_{ij}+\mu ={\gamma}_{i0}\kern1.75em i=1,2,\dots m $$

where «i» and «j» are data locations and «m» is the number of data used in the estimation. The solution of the m + 1 linear equations, including Σλ i = 1, minimizes the variance of the estimation error. Thus, the essence of ordinary kriging is that the estimation variance is minimized under the condition that the sum of the weights is 1. In the kriging system, «μ» is a Lagrange multiplier needed for the final solution, γ ij are known semivariogram values from the semivariogram function estimated between data points «i» and «j», and γ i0 are known semivariogram values between data points «i» and the estimated location (x 0, y 0 if 2-D).

For instance, in a four-sample kriging estimation, the full set of kriging equations is the following, K being equivalent to λ in the previous formula:

$$ \begin{array}{c}{K}_1{\gamma}_{1,1}+{K}_2{\gamma}_{1,2}+{K}_3{\gamma}_{1,3}+{K}_4{\gamma}_{1,4}+\mu ={\gamma}_{0,1}\\ {}{K}_1{\gamma}_{2,1}+{K}_2{\gamma}_{2,2}+{K}_3{\gamma}_{2,3}+{K}_4{\gamma}_{2,4}+\mu ={\gamma}_{0,2}\\ {}{K}_1{\gamma}_{3,1}+{K}_2{\gamma}_{3,2}+{K}_3{\gamma}_{3,3}+{K}_4{\gamma}_{3,4}+\mu ={\gamma}_{0,3}\\ {}{K}_1{\gamma}_{4,1}+{K}_2{\gamma}_{4,2}+{K}_3{\gamma}_{4,3}+{K}_4{\gamma}_{4,4}+\mu ={\gamma}_{0,4}\\ {}{K}_1+{K}_2+{K}_3+{K}_4=1\end{array} $$

In addition to the estimate, the kriging variance σ 2 E or σ 2 K is found from

$$ {\sigma}_{\mathrm{E}}^2={\displaystyle \sum}_{i=1}^m{\lambda}_i{\gamma}_{i0}+\mu $$
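Putting the system of equations and the variance expression together, a small, purely illustrative ordinary kriging example for one point and four samples might look as follows; the inter-sample distances, grades, and spherical model parameters are hypothetical:

# Ordinary kriging of one point from four samples (illustrative sketch only).
import numpy as np

def spherical(h, c0=0.1, c=0.9, a=100.0):
    h = np.asarray(h, dtype=float)
    g = np.where(h < a, c0 + c * (1.5 * h / a - 0.5 * (h / a) ** 3), c0 + c)
    return np.where(h == 0.0, 0.0, g)           # gamma(0) = 0 on the matrix diagonal

d_ij = np.array([[ 0., 60., 80., 50.],          # sample-to-sample distances, m
                 [60.,  0., 70., 90.],
                 [80., 70.,  0., 40.],
                 [50., 90., 40.,  0.]])
d_i0 = np.array([30., 45., 55., 35.])           # sample-to-point distances, m
z = np.array([1.6, 0.9, 1.2, 1.4])              # sample grades, %

m = len(z)
A = np.ones((m + 1, m + 1))
A[:m, :m] = spherical(d_ij)                     # gamma_ij block
A[m, m] = 0.0                                   # Lagrange row/column corner
b = np.append(spherical(d_i0), 1.0)             # gamma_i0 and the unbiasedness condition

solution = np.linalg.solve(A, b)
weights, mu = solution[:m], solution[m]

z_star = float(weights @ z)                             # kriged grade estimate
sigma2_k = float(weights @ spherical(d_i0) + mu)        # kriging variance
print(z_star, sigma2_k, weights.sum())                  # the weights sum to 1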

The kriging variance depends on the distance of the samples used to estimate the point or block value. Thus, a lower kriging variance means a point or block that is estimated from a nearby set of samples, and a higher kriging variance represents a point or block that is calculated using samples some distance away. Having computed a reliable group of regular data values using kriging, these values can be contoured and shown graphically, as can the corresponding estimation variances (◘ Fig. 4.45). Thus, areas with comparatively high estimation variances can be analyzed to see whether there are data errors or whether further drilling is needed to diminish the value of the estimation variance. This is one of the most important applications of point kriging.

Fig. 4.45
figure 45

Contoured kriging and kriging standard deviation estimates

Once the kriging variance is determined, it is possible to calculate the precision with which the various properties of the deposit are known by obtaining confidence limits (σ E) for critical parameters. According to the features of geostatistics commented on previously, the errors show a normal distribution, which allows the 95% confidence limit (±2σ E) to be calculated. Another application of the kriging variance can be the classification of reserves according to their levels of uncertainty and precision, the latter based on the relative kriging standard deviation (Diehl and David 1982).

The general procedure of kriging contains a number of important implications that are not particularly obvious to those with limited mathematical background. Some of them are:

1. Kriging is correct on average, although any single comparison of a kriging estimate with a true value might show a large difference; however, on average such differences are generally smaller for kriging than for other interpolation techniques.

2. Kriging of a location (point) for which information is included in the kriging equations results in a kriging estimate equivalent to the known data value; in other words, kriging reproduces existing data exactly.

3. Kriging takes into account data redundancy; in the extreme, a very tight cluster of several analyses carries almost the same weight as a single datum at the centroid of the cluster.

4. Kriging can be carried out as described but on transformed data; if the transform function is not linear, the back transform will not produce an optimum estimator (Sinclair and Blackwell 2002).

In lognormal distributions, kriging is carried out using log-transformed data. These lognormal distributions are very common for geochemical variables, for instance, gold values. Thus, the value estimated is the mean log-transformed value, the back transform of which is the geometric mean. But in lognormal distributions, the geometric mean is commonly lower than the arithmetic mean. It should therefore be borne in mind that the arithmetic mean and the associated error dispersion must be calculated from the estimates of the log parameters.
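The following short numerical check, on purely synthetic lognormal values, illustrates the point: the naive back-transform of the mean log value is the geometric mean, which understates the arithmetic mean, while the usual lognormal correction exp(mean + variance/2) recovers it.

```python
# Synthetic illustration of the geometric-mean bias of a naive log back-transform.
import numpy as np

gen = np.random.default_rng(42)
gold_ppm = gen.lognormal(mean=0.0, sigma=1.0, size=10_000)   # synthetic grades

logs = np.log(gold_ppm)
geometric_mean = np.exp(logs.mean())                      # naive back-transform
corrected_mean = np.exp(logs.mean() + logs.var() / 2.0)   # lognormal correction

print("arithmetic mean:", round(float(gold_ppm.mean()), 3))
print("geometric mean :", round(float(geometric_mean), 3))   # systematically lower
print("corrected mean :", round(float(corrected_mean), 3))   # close to the arithmetic mean
```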

The semivariogram modeling process and the need for a high-quality model fitted to the experimental semivariogram are of paramount importance. Thus, an almost perfect semivariogram model must be integrated with the geologic model of the mineral deposit. Once the semivariogram model is determined, the subsequent processes are (1) cross validation of the semivariogram model, (2) criteria to select data for individual point or block estimates, and (3) definition of minimum and maximum numbers of data for kriging each point or block. Finally, a systematic kriging of each point or block is carried out.

Regarding the selection of data, generally all data within a specified search radius of the point or block being evaluated are selected. The search volume can be spherical or ellipsoidal, if anisotropy is present. A maximum number of data is imposed on each point or block estimate so that the set of kriging equations is relatively small and its solution is efficient. In addition, a minimum number of data is usually established with the objective of preventing large errors where only local stationarity is ensured and of guaranteeing interpolation as opposed to extrapolation. It is also appropriate to require that the data be fairly well spread spatially, not all clustered.

For a particular data density, a search radius that is too small results in too few data being selected. On the contrary, a search radius that is too large results in a huge amount of data being selected, with the consequence that computation time is clearly increased. A search radius just less than the range of the semivariogram is often a good choice, since beyond this distance sample pairs no longer correlate with one another and become independent.

4.5.6.4.1 Point Kriging and Block Kriging

Point kriging takes into account only relationships between individual sample points, which were drillhole sites in the previous example, but does not take the size of the blocks into consideration. This technique is thus best suited to contouring isolines of equal grades or thicknesses of the ore body. With regard to block kriging, it estimates the value of a block from surrounding data. Block kriging can therefore replace such techniques as inverse distance weighting or cross sections to evaluate the reserves of a mineral deposit. The estimation block selected initially should have dimensions consistent with the anisotropy of the deposit, the geological model, the grid size, and the area of influence.

To determine the covariance between a sample and a block, the block is considered to be represented by a grid of «n» points. Thus, the covariance between each of these points and the sample is determined and the average computed. The grid size could be 10 × 10, so the estimation would be the mean of 100 values. Block kriging amounts to estimating the individual discretization points (e.g., 10 × 10) and then averaging them to obtain the block value. This formulation was originally the most widely used form of kriging in mining applications.
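The discretization can be sketched in a few lines: a hypothetical 20 m × 20 m block is represented by a 10 × 10 grid and the sample-to-block semivariogram is taken as the average of the 100 point-to-point values. Block size, sample location, and the spherical model parameters are invented for illustration.

```python
# Point-to-block average semivariogram by discretizing the block into a 10 x 10 grid.
import numpy as np

def spherical_gamma(h, nugget=0.1, sill=1.0, rng=100.0):
    h = np.asarray(h, dtype=float)
    g = np.where(h < rng,
                 nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3),
                 sill)
    return np.where(h == 0.0, 0.0, g)

nx = ny = 10
block_size = 20.0                                    # 20 m x 20 m block at the origin
xs = (np.arange(nx) + 0.5) * block_size / nx         # centers of the discretization cells
ys = (np.arange(ny) + 0.5) * block_size / ny
grid = np.array([(x, y) for x in xs for y in ys])    # 100 discretization points

sample = np.array([55.0, 30.0])                      # hypothetical sample location
gamma_sample_block = spherical_gamma(np.linalg.norm(grid - sample, axis=1)).mean()
print("average sample-to-block semivariogram:", round(float(gamma_sample_block), 3))
```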

As a general rule, it is not prudent to compute blocks whose dimensions are less than half the sample spacing. As they diminish in size, such blocks become too numerous and the estimation grades quickly become meaningless as they become less and less related to the sample information. Moreover, the variance of such blocks will be excessive, in inverse ratio to size. Regarding the shape of the block, an appropriate shape may be cubic blocks for an isotropic mass and parallelepipeds with proportions related to the dimensions of the zone of influence.

4.5.6.4.2 Indicator Kriging

Indicator kriging (IK) was introduced in the early 1980s as a technique in mineral resource estimation (Journel 1983). It is the prime nonlinear geostatistical technique used today in the mineral industry. The original appeal of IK was that it is nonparametric, in the sense that it does not make any prior assumption about the distribution being estimated. IK involves transformation of data to zeros or ones based on the position of a value relative to an assigned threshold. The binomial coding of data into either zero or one, depending upon its relationship to a cutoff value, Zk, is given, for a value Z(x), by:

$$ i\left(x;{z}_k\right)=\Big\{\begin{array}{lll}1\hfill & \mathrm{if}\hfill & z(x)\ge {z}_k\hfill \\ {}0\hfill & \mathrm{if}\hfill & z(x)<{z}_k\hfill \end{array} $$

IK has the potential to generate recoverable resources where it is carried out over a larger area for a series of blocks: the kriged indicators estimate the proportion of a block that is theoretically available above a given cutoff grade (an arbitrary threshold called the indicator threshold or indicator cutoff). Thus, following the formula above, if the observed grade is greater than or equal to the cutoff grade, the indicator will be 1; otherwise, it will be 0. Therefore, indicator kriging is simply the use of kriging to estimate a variable that has been transformed into an indicator variable. Obviously, the indicator variable changes as the threshold (e.g., the cutoff grade) changes. IK is really a procedure that avoids spreading the influence of very high-grade samples over the whole of the deposit, restricting it to the estimation blocks close to these very high grades. The technique is particularly applicable where strict ore/waste boundaries exist within given blocks, for example, large copper porphyries where grade zoning is the major control, as well as in low-grade deposits where the cutoff value is of major concern.
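A minimal sketch of the indicator transform is given below, following the convention of the formula above (indicator equal to 1 at or above the cutoff); the grades and the 0.5 g/t threshold are invented for illustration, and kriging these 0/1 values is what produces the local estimate of the proportion above cutoff.

```python
# Indicator transform of grades at a single cutoff (convention: 1 if grade >= cutoff).
import numpy as np

grades = np.array([0.10, 0.85, 0.40, 1.60, 0.55, 0.05, 2.30, 0.48])   # g/t Au (invented)
cutoff = 0.5                                                           # indicator threshold

indicators = (grades >= cutoff).astype(int)
print("indicators:", indicators)                        # [0 1 0 1 1 0 1 0]
print("proportion at or above cutoff:", indicators.mean())
```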

Such applications of indicator kriging have found extensive use in mineral deposit estimation due to their simplicity. The indicator kriging method has been utilized to estimate relative proportions of mineralized versus unmineralized ground and the proportion of barren dykes within a mineralized zone (Sinclair et al. 1993), to delineate different lithological units of an ore deposit (Rao and Narayana 2015), and so on. Repeated indicator kriging for different thresholds, a process known as multiple indicator kriging (MIK), allows the local cumulative distribution curve to be estimated. Thus, the local mean can be determined, a block distribution can be estimated, and the proportion of blocks above cutoff grade, and their average grade, can be calculated. MIK is broadly applied to apparently erratic values, such as those usual in most gold and uranium deposits (Sinclair and Blackwell 2002). A variant of MIK infers the variogram for the median of the input data and uses it for all cutoffs. This so-called median IK approach is very quick because the kriging weights do not depend on the cutoff being considered (Ali Akbar 2015). Median indicator kriging is achieving growing acceptance as a practical and cost-effective method for resource estimation and grade control.

4.5.6.4.3 Cokriging

Cokriging is a method of estimation that obtains the value of a variable at a point in space based on the neighboring values of one or several other variables. For example, gold grades can be estimated from a combination of gold and copper sample values. The equations used in cokriging are basically the same as for simple kriging, but considering the direct and cross covariances. The use of a secondary variable that is commonly more regular is clearly an interesting advantage over ordinary kriging. It allows estimation of unknown points using both variables, globally for the mean of all estimates but also conditionally for the estimates within individually specified grade categories. This can help minimize the error variance of the estimation. To perform cokriging, it is necessary to model not only the variograms of the primary and secondary data but also the cross-variogram between the primary and secondary data. If secondary variables are present or available, their use via the cokriging technique can be advantageous in estimating values of the primary variable, although sometimes the improvement over ordinary kriging is very small or nil (Genton and Kleiber 2015).

4.5.6.5 Cross Validation

Different models can fit the same experimental data, so it is natural to check which model is best. The best method to select an adequate semivariogram model for kriging is the so-called cross-validation process. It estimates the value at each drillhole or sample location, after removing the observed value, by kriging from all the adjacent values that fall in the search area around this point. Therefore, both the known value (Z) and the estimated value (Z*) are available, and the experimental error (Z − Z*) can be computed as well as the theoretical estimation variance. The best variogram model would be the one that yields the lowest average error. In summary, cross validation consists of kriging known values to select the best semivariogram model with which to krige unknown points or blocks. If the number of samples is large, cross-validation techniques can be used to see whether the method applied or the model fitted to the experimental variogram is acceptable or can be improved. However, in the early exploration stages there are rarely enough data to carry out a meaningful cross-validation computation.

The cross-validation process can be performed in two distinct ways: (1) a spatial leave-one-out reestimation, whereby one sample at a time is removed from the data set and reestimated from the remaining data, and (2) a subset of the data (e.g., 20 or 30% of the total) being separated completely from the data set and reestimated utilizing the rest of the data. The first method is the most commonly used, but there have been a number of objections to this option: (a) the method is generally not sensitive enough to detect minor differences from one variogram model to the next; (b) the analysis is performed on samples or composites but not on a different volume support; (c) the sill of the semivariogram cannot be cross-validated from the reestimation; (d) semivariogram values at lags smaller than the minimum spacing between samples cannot be cross-validated (Isaaks and Srivastava 1989). Therefore, it is difficult to define a useful goodness-of-fit test for a semivariogram model. Often the most important factors in selecting the best semivariogram model are the user's experience and the goals of the project.

As noted previously, in cross validation each drillhole or sample has both an observed value and a kriged estimate of the regionalized variable at that point. Thus, the final outputs of the cross-validation process are the following (Annels 1991):

  1. 1.

    Mean algebraic error:

$$ \frac{{{\displaystyle \sum}}_{i=1}^N\left({Z}_i-{Z}_i^{\ast}\right)}{N} $$

where Zi is the actual value at each point, Zi* is the kriged estimate, and N is the number of points. This calculation takes into account the sign of (Z − Z*).

  2. 2.

    Mean absolute error:

$$ \frac{{{\displaystyle \sum}}_{i=1}^N\kern0.5em \left|{Z}_i-{Z}_i^{\ast}\right|}{N} $$

This is the mean of the differences, but this time the sign is ignored.

  3. 3.

    Mean kriging variance:

$$ \frac{{{\displaystyle \sum}}_{i=1}^N{\sigma}_K^2}{N} $$
  4. 4.

    Mean square error of estimation:

$$ \frac{{{\displaystyle \sum}}_{i=1}^N{\left({Z}_i-{Z}_i^{\ast}\right)}^2}{N} $$
  5. 5.

    Number of points valued by point kriging.

If the model allows accurate estimation of the data population, the value of (1) approaches zero and is not more than 1% of the mean of all the Zi values, (4) should be almost equal to (3), and (5) should be as large as possible. A significant difference between (3) and (4) can be due to outliers, for example, abnormally high or low values in the data set, which greatly increase the (difference)2 values between these and adjacent points. Removal of these outliers can allow the mean squared difference to approach the mean point kriging variance. Another way to test the semivariogram model is to plot Z against Z*: if the values are uniformly distributed about a best-fit regression line whose slope is 45° (the values show a high correlation coefficient, near 1), then conditional unbiasedness has been achieved (◘ Fig. 4.46). The following Box is an example of using geostatistical methods in mineral deposit evaluation (◘ Box 4.5: Amulsar Deposit Evaluation).
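Once a cross-validation run has produced, for every location, the true value, the leave-one-out kriged estimate, and the kriging variance, the outputs listed above reduce to a few array operations, as in the short sketch below; the numbers are invented purely to show the calculations.

```python
# Cross-validation summary statistics from invented Z, Z* and kriging variance arrays.
import numpy as np

Z      = np.array([2.1, 1.4, 3.0, 1.9, 2.6, 0.8])        # observed values
Z_star = np.array([2.0, 1.6, 2.7, 2.0, 2.4, 1.0])        # leave-one-out kriged estimates
kvar   = np.array([0.09, 0.12, 0.15, 0.08, 0.11, 0.14])  # kriging variances

diff = Z - Z_star
mean_algebraic_error = diff.mean()            # (1) should approach zero
mean_absolute_error  = np.abs(diff).mean()    # (2)
mean_kriging_var     = kvar.mean()            # (3)
mean_square_error    = (diff ** 2).mean()     # (4) should be close to (3)
n_points             = len(Z)                 # (5) as large as possible

print(mean_algebraic_error, mean_absolute_error, mean_kriging_var,
      mean_square_error, n_points)
print("correlation of Z and Z*:", round(float(np.corrcoef(Z, Z_star)[0, 1]), 3))
```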

Fig. 4.46

Testing the semivariogram model selected using cross validation (plotting Z – true value against Z* – estimated value)

Box 4.5

Amulsar Deposit Evaluation: Courtesy of Lydian International Ltd.

The Amulsar Gold Project is located in south-central Armenia approximately 170 km southeast of the capital Yerevan and covers an area of approximately 56 km2. The Amulsar gold deposit is situated on a ridge in south-central Armenia and is hosted in an Upper Eocene to Lower Oligocene calc-alkaline magmatic-arc system that extends northwest through southern Georgia, into Turkey, and southeast into the Alborz-Arc of Iran. Volcanic and volcano-sedimentary rocks of this system comprise a mixed marine and terrigenous sequence that developed as a nearshore continental arc between the southern margin of the Eurasian Plate and the northern limit of the Neo-Tethyan Ocean.

The geology of the Amulsar deposit area consists of mainly porphyritic andesites with strong argillic alteration forming strata-parallel panels with typical thicknesses of 20–100 m. Interleaved with these rocks are silicified volcano-sedimentary rocks that host gold and silver mineralization. The strong stratiform control on the location of the base of the silicified volcano-sedimentary rocks has given rise to the mapping definition of upper volcanics and lower volcanics representing silicified volcano-sedimentary and altered andesites rock units, respectively. The division into upper volcanics and lower volcanics is also based on alteration and structural position. The Amulsar project is a high-sulfidation epithermal deposit, but its close association with syndepositional deformation adds a signature characteristic of orogenic gold systems. The deposit also has some characteristics of low-temperature variants of IOCG deposits.

The resource database used to evaluate the mineral resources for the Amulsar project comprises an Excel spreadsheet database updated with drilling completed after the previous resource estimate. These spreadsheets contained all information for diamond core and reverse circulation drillholes and channel samples for the project. The database consists of 1,298 drillholes and channel samples collected in exploration work undertaken between 2007 and 2013. The data comprise 315 diamond drillholes (41,819 m), 512 reverse circulation drillholes (73,543 m), and 358 channel samples (1,337 m). The Amulsar deposit has a complex history of structural events, including east- and west-directed thrusting and related complex deformation, and two episodes of extensional faulting within large northeast-trending grabens. This has resulted in a complex of structurally positioned blocks of upper volcanic (UV) and lower volcanic (LV) rocks. Mineralization is predominantly confined to rocks of the UV zone. The LV zone is generally not mineralized, except near contacts with mineralized UV rocks or related mineralized structures. Based on a major structural break, UV rocks have been subdivided into a northern Erato zone and a southern Tigranes-Artavasdes-Arshak (TAA) zone (◘ Fig. 4.47).

Fig. 4.47

Wireframe models for Amulsar deposit and interpreted faults (Illustration courtesy of Lydian International Ltd.)

The drillhole and chip sample database used for estimation of resources consists of 106,038 gold assays and 101,038 silver assays, and 1,198 dry bulk density measurements. The drillhole database excludes 92 geotechnical, metallurgical, and condemnation drillholes which were not assayed for gold and silver or were not assayed using the same techniques used for all other samples (i.e., metallurgical boreholes). In addition, eight drillholes within the mineralization areas were excluded or partially excluded as all or part of the drillholes were not sampled or drillholes were abandoned due to drilling problems. Drillholes for each of the two UV zones comprising Erato and TAA, and a single LV zone covering the rest of deposit volume, were composited at 2 m intervals to provide common support for statistical analysis and estimation for gold and silver grades. Approximately 99 percent of assay samples were sampled at 2 m intervals or less. Capping of high gold and silver grades for the Erato, TAA, and LV zones is not required.

Conditional statistics were generated for the Erato and TAA zones using gold composites and were used to determine intraclass mean grades to be used for post-processing of model panel grade estimates. Seventeen and sixteen indicator thresholds were selected for Erato and TAA zones, respectively. These indicators were considered sufficient to discretize both the composite and metal values. The selected thresholds represent the entire grade range and therefore represent the spatial variability of the mineralization. A suite of gold variograms were generated and modeled for the Erato-LV and TAA-LV declustered composites; variograms were generated for gold and indicator thresholds. Traditional semivariograms were used as the spatial model for Erato and TAA zones. Gold indicator variograms were used to estimate gold grades. Gaussian-transformed gold variograms were developed for variogram analysis and were back transformed to gold values to derive change-of-support correction factors and for the selective mining unit (SMU) localization of the MIK estimates. Gaussian-transformed omnidirectional variogram models were generated for LV zone gold composites and silver composites for Erato, TAA, and LV zones (◘ Fig. 4.48). Some examples of the variogram models for the project are provided in ◘ Table 4.12.

Fig. 4.48

Silver variogram model for LV zone (Illustration courtesy of Lydian International Ltd.)

Table 4.12 Some variogram models used in the project

Erato and TAA zone composites were used to estimate gold into each of the Erato and TAA models using hard boundaries. A panel model with the dimensions of 20 mE × 20 mN × 10 m elevation was used for the MIK estimates. In preparation for ranking of localized estimates, gold grades were estimated by ordinary kriging (OK) into a target SMU model with the dimensions 10 mE × 10 mN × 5 m elevation. Hard boundaries were used for each respective zone to estimate gold grades into the Erato and TAA block models (◘ Table 4.13). A change-of-support adjustment was applied in order to produce resource estimates that reflect the anticipated level of mining selectivity. When estimating local recoverable resources, the objective is to obtain the proportion of mineralization above a particular cutoff grade (pseudo tonnage), within panels that are large enough to achieve a robust estimation.

Table 4.13 Block model definition

A localized MIK (LMIK) SMU model was generated using the MIK SMU-corrected histogram and partitioning of the estimated tonnage and metal from the MIK panel model evenly into SMU blocks within the panel. In this manner, grades are mapped into each of the SMU-sized blocks, thereby replicating the targeted mining selectivity. Gold grades were estimated by ordinary kriging for the LV unit using hard boundaries. No distinction was made between Erato and TAA areas for these estimates. Silver grades were estimated using OK for the Erato, TAA, and LV zones using silver composites with hard boundaries for each zone. Uncapped composites are used for estimation of silver grades in the Erato and TAA models. Silver grades were estimated using an OK estimator. Dry bulk density values were assigned to each estimated model on the basis of the average dry bulk density measurements in each of the estimated zones.

Indicated resources were classified on the basis of a volume that enclosed relatively closely spaced drilling (approximately 45 m intervals) and included holes drilled vertically and at inclined angles, demonstrating vertical and horizontal continuity. The outline was drawn to enclose a continuous zone of mineralization and areas where a high number of composites are used to make each block estimate. These outlines were designed around areas that showed lateral continuity exceeding 150 meters. Indicated classification was extended to include overlying or underlying blocks of the lower volcanic unit. Resources classified as measured were contained within the indicated wireframe, but block grades are estimated by 40 composites and 60 composites for the Erato and TAA zones, respectively. The measured classification encompassed only blocks in the Erato or TAA zones. The likelihood of the resource being potentially economic was determined by generating a conceptual optimized pit shell using the following assumptions: (a) metal prices of USD 1,500 per ounce gold and USD 25 per ounce silver, (b) average pit slope of 32 degrees, (c) average mining cost of USD 2.00 per ton and processing and administration costs USD 4.60 per ton, and (d) gold cutoff grade of 0.20 g/t. Mineral resources are reported on the basis of all estimated blocks that are contained within this pit shell.

At a cutoff grade of 0.20 g/t gold, the mineral resources are estimated at 77.2 Mt at 0.78 g/t Au and 3.6 g/t Ag (1.9 million ounces gold and 8.8 million ounces silver) of measured category, 45.1 Mt at 0.76 g/t Au and 3.5 g/t Ag (1.1 million ounces gold and 5.1 million ounces of silver) of indicated category, and 106.2 Mt at 0.59 g/t Au and 2.6 g/t Ag (2.0 million ounces of gold and 8.9 million ounces of silver) of inferred category resources (◘ Table 4.14).

Table 4.14 Mineral resource statement

Regarding the mineral reserves of the project, the pit designs and the estimate of mineral reserves were based on a number of pit optimization runs carried out utilizing the Lerchs and Grossmann algorithm. These optimization runs examined the effect of:

  1. 1.

    Cutoff grade ranging from 0.1 Au g/t to 0.3 Au g/t, in increments of 0.05

  2. 2.

    A 6.5 percent ramp gradient

  3. 3.

    The inclusion of Inferred material

  4. 4.

    Waste haulage options, exploring the effect of a reduction in mining cost due to utilizing a combination of waste dump and in-pit waste backfill

  5. 5.

    Optimizing each deposit separately

  6. 6.

    The effect of sterilization due to a zone containing an endangered flora

  7. 7.

    The effect of applying dilution by regularizing the resource model

  8. 8.

    The sensitivity of the resource block model considering only gold compared to including the contribution of silver

◘ Figure 4.49 shows the optimization results by pit shell for all deposits and ◘ Table 4.15 tabulates the mineral reserves for the project.

Fig. 4.49

Optimization results by pit shell for all deposits (Data courtesy of Lydian International Ltd.)

Table 4.15 Mineral reserves for the project

4.6 Mining Project Evaluation

Project evaluation is the process of identifying the economic feasibility of a project that requires a capital investment and making the investment decision (Torries 1998). Much care and perhaps multiple evaluation methods are required to obtain results on which to base mineral investment decisions. Mineral investments show certain characteristics that differentiate them from other types of investment opportunities, such as the depletable nature of the ore reserves, the unique location of the deposit, the existence of many geologic uncertainties, the significant time needed to place a mineral deposit into production, the commonly long-lived nature of the operation itself, and the strong cyclical nature of mineral prices. These characteristics reduce flexibility and obviously increase the risk of mining projects compared with other types of investment opportunities. The term risk has many meanings in the mining world, but a broad definition of risk is «the effect of uncertainty on objectives» (ISO 31000: 2009. Risk management – Principles and guidelines). It can be used by any organization regardless of its size, activity, or sector.

Going deeper into the subject, Rudenno (2012) identifies up to seven differences between resource and industrial companies:

  1. 1.

    Volatility of share prices: share price volatility for resource stocks has historically been greater than for industrials.

  2. 2.

    Exploration: a unique feature of the mining industry is the need to explore in order to find and define an economic resource on which a mining project can be built.

  3. 3.

    Finite reserves: any mineral resource has a finite volume and therefore will have a finite life; industrial companies are in theory able to operate for an indefinite length of time, once they have a raw material supply and a market for their product.

  4. 4.

    Commodity price volatility: resource stocks are exposed to greater external commodity price volatility than most industrial stocks, since most of the world’s major exporters of raw mineral commodities are price takers rather than price makers.

  5. 5.

    Capital intensity: the mining industry, by its very nature, is capital intensive, with the high level of expenditure driven by exploration, economies of scale, isolation, and power and water factors.

  6. 6.

    Environmental: protection of the environment is important for both industrial and resource companies, but mining cycle environmental impacts (see ► Chap. 7) are clearly more intensive and harmful in mining projects.

  7. 7.

    Land rights: although industrial-based companies can be faced with problems related to land rights, they are not as exposed as mining companies, which are often involved in exploration on land not covered under freehold title.

Moreover, the effects of time greatly influence the value of a mineral project, as they do for any other long-lived investment, because many mineral prices are cyclical and the difficulty of forecasting prices and expenditures poses special problems in evaluating and planning mineral projects (Labys 1992). Time also affects mineral projects in ways that are not always present in other investment opportunities. For example, mining the higher-grade ore first increases early profits but diminishes the average grade of the remaining ore, thus reducing the overall life of the mine. Moreover, it is impossible to establish the exact amount or grade of material to be mined until the deposit is depleted. This is related to geologic uncertainty (only statistical estimates of the reserves are available) and economic uncertainty (it is almost impossible to fix reserves definitively since future prices cannot be forecast accurately) (Torries 1998).

In summary, the use of adequate project evaluation techniques is more important in the mining industry than in many other industries. This is because mining projects are extremely capital intensive, require many years of production before a positive cash flow commences, and have much longer lives than projects in most other industries. It is important to keep in mind the dynamic nature of project evaluation. Numerous projects compete for the same scarce resources at any given time. Changes in the budget, evaluation criteria, or costs or benefits of any of the competing projects can change the evaluation results and ranking for any single project under consideration.

4.6.1 Types of Studies

Three levels of geological/engineering/economic studies are commonly applied by the mining industry: the scoping study, the preliminary feasibility (pre-feasibility) study, and the feasibility study. Depending on the context, each of these types of study is sometimes generally referenced as a «feasibility study.» The two important requirements for these types of studies, especially feasibility reports, are as follows: (1) reports must be easy to read and their information must be easily accessible; and (2) parts of the reports need to be read and understood by nontechnical people (Hustrulid et al. 2013). Once a resource estimate has been completed, a decision will be made either to shelve the project, to continue drilling on the project with the hope of increasing the resource, or to proceed with a preliminary economic assessment or pre-feasibility study. These studies build upon the resource estimate by designing a mine around the deposit and undertaking economic analysis of the viability of a mining operation. Each study builds upon the earlier study by increasing the detail and level of rigor.

The primary goal in determining the feasibility of a mineral property is to prove that the mining project is economically feasible if it is designed and operated properly. The terminology for each stage of feasibility study varies widely, and there is no agreed standard for quality or accuracy. Thus, it is very common to refer to them as scoping studies, pre-feasibility studies, and feasibility studies. It is convenient to use this terminology although the study process is iterative, and several increasingly detailed pre-feasibility studies can be undertaken before committing to the final feasibility study. Some of these steps usually overlap, but this is unlikely to reduce the time involved. In this sense, it is not rare to spend about 15 years between the beginning of the prospection program and the start of mine production (Moon and Evans 2006).

The studies range from the lowest level of certainty (scoping) to the highest level of certainty (feasibility) and show increasing levels of detail and expense associated with their completion. Only the final feasibility study is considered to have sufficient detail to allow a definitive positive or negative decision for corporate and financial purposes. However, it is important to note that production of a final feasibility study report does not in itself mean that a project is viable or that the project will be one that will attract project finance. Often these project stages are required to be undertaken in line with international codes such as JORC or NI 43-101 (see ► Chap. 1), which determine what is required and the associated confidence levels. Regarding the cost of these studies, it varies substantially depending on the size and nature of the project, the type of study being undertaken, the number of alternatives to be investigated, and numerous other factors. For this reason, indicative cost figures are given below for each type of study.

Pre-feasibility and feasibility studies involve establishing several key components of a mining operation, including mine design, processing methods, reclamation and closure plans, and cash flow analysis. These are referred to as the «modifying factors» under the International Reporting Standards. Mine design involves determining the mining methods, annual and life-of-mine production, equipment needs, and personnel requirements. Processing methods are the methods and equipment needed to concentrate mineral or recover metal from ore, commonly presented in a flow sheet diagram that outlines the steps the ore will go through from the time it leaves the mine until the final product is produced. Reclamation and closure plans are part of the overall mining operation and must be factored into the mine and mill design as well as the cash flow analysis. Cash flow analysis represents the detailed economic assessment of the proposed mine and will be treated in detail in the next section; it can be very complex and generally includes the capital costs (◘ Table 4.16), the operating costs (◘ Table 4.17), taxes and royalties, and the revenues generated by the sale of products.

Table 4.16 Example of capital costs in a feasibility study (sustaining cost covers the entire mine site operation from year 1 to the end of production)
Table 4.17 Example of operating costs

4.6.1.1 Scoping Studies

NI 43-101 Canadian code defines a preliminary assessment, or scoping study or order-of-magnitude study, as «a study that includes an economic analysis of the potential viability of mineral resources taken at an early stage of the project prior to the completion of a preliminary feasibility study.» Thus, this study is the first level of geological/engineering and economic analysis that can be performed, usually at an early stage in the project. At this phase, it is obviously undesirable to expend further funds on something that has no chance of being economic. The bases for these studies are the geology plans from the exploration phase (◘ Fig. 4.50), limited drilling, and other sample collections. This allows rational estimates to be made using known costs and likely outcomes. The results define the presence of sufficient inferred resources to warrant further work. Where a resource is classified as indicated, a scoping study will provide a financial assessment of the resource.

Fig. 4.50

Property geology of Klaza project used in exploration (Illustration courtesy of Rockhaven Resources Ltd.)

This type of study provides a first-pass examination of the potential economics of developing a mine on a mineral deposit. Though a scoping study is useful as a tool, it is neither valid for economic decision-making nor sufficient for reserve reporting. The evaluation is conducted by using mine layouts and factoring known costs and capacities of similar projects completed elsewhere. The study is directed at the potential of the property rather than a conservative view based on limited information, and it is commonly performed to determine whether the expense of a pre-feasibility study and later feasibility study is warranted. At this stage, mineralogical studies will identify undesirable elements and other possible metallurgical issues. It is also common to explore different options for mining and processing the deposit in order to choose the most promising methods for further study.

A scoping study usually takes a few weeks to a few months to complete and costs USD 20,000 to USD 200,000 (Stevens 2010), or 0.1–0.3% expressed as a percentage of the capital cost of the project (Rupprecht 2004). The major risk at this stage is that a viable mining project is abandoned due to an inadequate assessment. For this reason, it is paramount that experienced people are involved in the study. The intended estimation accuracy is usually ±30–35%, though some companies accept ±50%.

4.6.1.2 Pre-feasibility Studies

NI 43-101 defines a pre-feasibility study as «a comprehensive study of the viability of a mineral project that has advanced to a stage where the mining method… has been established and an effective method of mineral processing has been determined, and includes a financial analysis based on reasonable assumptions of technical, engineering, legal, operation, economic, social, and environmental factors and the evaluation of other relevant factors which are sufficient for a qualified person, acting reasonably, to determine if all or part of the mineral resource can be classified as a mineral reserve.» One of the most important aspects of a pre-feasibility study is that a mineral resource cannot be converted to a mineral reserve unless it is supported by at least a pre-feasibility study. Commonly, the results of the pre-feasibility study are the first hard project information seen by corporate decision-makers and investors. The aim of the pre-feasibility study is «to evaluate the various options and possible combinations of technical and business issues to assess the sensitivity of the project to changes in the individual parameters, and to rank various scenarios prior to selecting the most likely for further, more detailed study» (Moon and Evans 2006).

There are many reasons for carrying out a pre-feasibility study, the most important being the following: (a) as a basis for further development of a major exploration program following a successful preliminary program, (b) to attract a buyer to the project or to attract a joint venture partner, (c) to provide a justification for proceeding to a final feasibility study, and (d) as a means to determine issues requiring further attention (Rupprecht 2004). For these reasons, especially the second one, the pre-feasibility study must be carefully prepared by a small multidisciplinary group of experienced technical people, and its conclusions should be heavily qualified wherever necessary, with assumptions that are realistic rather than optimistic. Thus, the pre-feasibility study represents an intermediate step between the scoping study and the final feasibility study, requiring a high level of test work and engineering design. At the end of a pre-feasibility study, geological confidence is such that it is suitable to publicly disclose ore reserves from measured and indicated resources and any other mineral resources that may become mineable in the future with further study. These studies tend to achieve an accuracy within ±20–30%.

In a pre-feasibility study, economic evaluation (see the following headings) is utilized to assess various development options and overall project viability. The results of the study are used to justify expenditure on gathering this additional information and the considerable expenditure needed to carry out the final feasibility study on a substantial project. In a pre-feasibility study, the details of the processing methods will be based on initial metallurgical studies of the mineralization of the deposit (◘ Table 4.18) rather than solely on standard industry methods. Accordingly, pre-feasibility studies can include washing, milling, and numerous other techniques designed to prepare the material for sale and distribution to customers.

Table 4.18 Test results of a selective flotation process carried out in a pre-feasibility study

Environmental protection, permits including legal and social, and the eventual closure of the mine must all be considered during this phase. The option that demonstrates the highest value with acceptable (lowest) risk will be selected as demonstrably viable. The cost of a pre-feasibility study ranges from as little as USD 50,000 for a simple project to more than USD 1,000,000 for larger, more complicated projects, or 0.2–0.8% of the capital cost of the project (Rupprecht 2004). It commonly takes from 6 months to 1 year to complete (Stevens 2010).

Social and environmental baseline studies must be carried out showing conformance to the Equator Principles. The Equator Principles (EPs) are a voluntary set of standards adopted by financial institutions for determining, assessing, and managing environmental and social risk in project finance activities; most importantly, the performance standards, along with the World Bank Group's Environmental, Health, and Safety Guidelines, form the basis of the Equator Principles. Accordingly, Equator Principles financial institutions (EPFIs) commit to implementing the EPs in their internal environmental and social policies, procedures, and standards for financing projects and will not provide project finance or project-related corporate loans to projects where the client will not, or is unable to, comply with the EPs. Obviously, the Equator Principles have greatly increased the attention and focus on social/community standards and responsibility since 2010. They include robust standards for indigenous peoples, labor standards, and consultation with locally affected communities within the project finance mining market. The most important lending institutions worldwide, many of whom provide financing for mining activities, have adopted the Equator Principles.

A similar initiative is the Kimberley Process (KP). It was founded in 2000 in Kimberley, South Africa, by the governments of South Africa, Botswana, and Namibia. There are currently 54 participants in the KP, including the European Union with its 28 member states, representing 81 countries. The KP seeks to bring the diamond-producing countries and diamond importers together to eliminate trade in conflict diamonds and stop them from being used to finance rebel movements. The main KP document applying to rough diamonds is the KP Certification Scheme (KPCS), adopted in November 2002. Today, the KP covers no less than 99.8% of the world diamond trade. ◘ Figure 4.51 shows the Kimberley Process certificate in the European Union.

Fig. 4.51

Kimberley Process certificate in the European Union

4.6.1.3 Feasibility Studies

NI 43-101 defines a feasibility study as «a comprehensive study of a mineral deposit in which all geological, engineering, legal, operation, economic, social, environmental and other relevant factors are considered in sufficient detail that it could reasonably serve as the basis for a final decision by a financial institution to finance the development of the deposit for mineral production.» The term «bankable» is often utilized in connection with feasibility studies. It only means that the study attains a quality that is acceptable for submission to bankers or other institutions that can finance the project. In fact, it does not really reflect a different type of economic analysis. A better term for a bankable feasibility study would be a «bank-approved» or «bank-vetted» study (Stevens 2010). The reality is that banks or major investment firms will undertake their own internal analysis of a feasibility study to determine if the project meets their investment objectives. If it does meet those objectives, it could be considered bankable.

The feasibility study is the last stage needed to establish whether a mine is economically viable. For this reason, it is much more detailed and costly than the previous two study types. The objective is to remove all significant doubt and to present relevant information about the referenced material, as well as to verify and maximize the value of the preferred technical and business options identified in the previous pre-feasibility study. For this reason, a full feasibility study must prove within a reasonable confidence that the mining project can be operated in a technically sound and economically viable manner. Capital and operating costs are evaluated to an accuracy of ±10–15%, covering realistic eventualities based on the level of engineering completed. In these studies, the product price is the most important single variable and yet the most difficult to predict. The feasibility study should determine ore reserves as per standard definitions (e.g., NI 43-101, SAMREC, or JORC), scale of the project, construction budget and schedule for the project, cost estimates for operating and capital, contingencies (◘ Table 4.19), market estimates (◘ Table 4.20), cash flow studies, and risk analysis (Rupprecht 2004).

Table 4.19 Contingency costs for indirect costs (capital costs) in a feasibility study
Table 4.20 Example of a commodity prices market study included in a feasibility study

Sensitivity analyses are carried out to establish the major factors that can impact upon the reserve estimate (◘ Table 4.21). This will help quantify the risk associated with the reserves, which at this stage will fall within the company’s acceptable risk category. Often, financial institutions utilize independent consultants to audit the resource/reserve calculations (Moon and Evans 2006). The mine plan defined in a feasibility study is based on measured and indicated geologic resources, which would become proven and probable reserves. At this stage, consultation and negotiation with local community groups, landowners, and other interested parties will proceed to the point of basic agreement. Full feasibility studies cost in the neighborhood of one to a few million dollars (Stevens 2010), or 0.5–1.5% of the capital cost of the project (Rupprecht 2004), and can take 1–2 years to complete. This type of study is usually undertaken by engineering consulting firms with expertise in various aspects of mine design and development.

Table 4.21 Cutoff sensitivity in the indicated category for the mineral resource estimates

4.6.2 Economic Analysis

4.6.2.1 Cash Flow Analysis

Older methods of mineral project economic evaluation, used prior to the early 1960s, include the Hoskold and Morkill methods, with the Hoskold method probably the most popular. The Hoskold method was based on the financial policy of British coal mining companies of nearly a century ago. These procedures fell out of use with the advent of methods based on cash flow analysis such as net present value, internal rate of return, and payback period.

The value of a mineral project can be determined using a variety of valuation techniques and associated methodologies. Although valuation of the mineral project could be required at any stage in its life, and not all of the valuation techniques are applicable to all stages of such a development, some methods are often used to analyze the economic viability of the mining project as a whole. The predominant economic evaluation technique, from pre-feasibility study to operating mine, is the discounted cash flow (DCF) method. The cash flow model must recognize the time value of money discounting at an appropriate discount rate to obtain their present value. DCF criteria values are gross profit; earnings before interest, depreciation, and amortization (EBITDA); net present value (NPV); internal rate of return (IRR); and payback period (PP). NPV, IRR, and PP methodologies are the most accepted by the industry, the financial community, and regulatory bodies. In summary, the general procedure for evaluating investment opportunities is to carry out a comparison between the benefits of any particular opportunity and the associated costs, investing later in those projects that are worth more than they cost.

The change in the amount of money over a given time period is called the time value of money. This concept is based on the principle that, disregarding inflation, money is worth more today than it will be at some future date because it can be put to work over that period. In other words, since investors would rather receive benefits sooner than later, the value of each yearly cash flow generated over the life of a project can be adjusted for the time value of money. Thus, the value of money today is not the same as money received at some future date. The effect of inflation on project value, however, is important and must be considered.

From the concept of time value of money, two important characteristics in valuing mineralization and mineral projects can be outlined. Firstly, as discount factors are highest in the early years, the discounted value of any project is enhanced by generating high cash flows at the beginning of the project. Secondly, discount factors decrease with time and, by convention and convenience, cash flows are not estimated beyond usually a 10-year interval as their contribution to the value becomes minimal. Moreover, it is quite difficult to predict with some degree of accuracy what is to happen after 10 years.

The time value of money is computed using the compound interest formula. For example, if an investment of I = USD 1,000 is made today at an interest rate of 10%, the future value is:

  • After 1 year: I × (1 + i) = 1,000 × (1 + 0.1) = USD 1,100.

  • After 2 years: I × (1 + i) × (1 + i) = 1,000 × (1 + 0.1)2 = USD 1,210.

  • After 10 years: I × (1 + i)10 = 1,000 × (1 + 0.1)10 = 1,000 × 2.594 = USD 2,594.

  • Generally, after n years: I × (1 + i)n.

Thus, the general formula will be

$$ {\mathrm{FV}}_n=\mathrm{P}\mathrm{V}\times {\left(1+i\right)}^n $$

where FVn is the future value at year «n», PV is the present value, and «i» is the interest rate. This expression can be rewritten to show the relationship between the future yearly cash flows (CFt) and the discounted values of the yearly cash flows (DCFt) at time period «t»:

$$ {\mathrm{DCF}}_t=\frac{{\mathrm{CF}}_t}{{\left(1+i\right)}^t} $$
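The two relations above translate directly into code; the minimal sketch below simply reproduces the 10% example (USD 1,000 growing to about USD 2,594 after 10 years, and the reverse discounting back to today).

```python
# Future value and discounted cash flow for a single amount.
def future_value(pv, i, n):
    """FV_n = PV * (1 + i)^n"""
    return pv * (1 + i) ** n

def discounted_cash_flow(cf_t, i, t):
    """DCF_t = CF_t / (1 + i)^t"""
    return cf_t / (1 + i) ** t

print(round(future_value(1000, 0.10, 10), 2))             # about 2593.74
print(round(discounted_cash_flow(2593.74, 0.10, 10), 2))  # back to about 1000 today
```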

Cash flow analysis can be very complex and generally includes three main components: (a) capital costs associated with building the mine; (b) operating costs, taxes, and royalties generated to produce the products at a mine; and (c) revenues obtained from the products. Cash flow can be defined as cash into the project (revenue) minus cash leaving the project (cost) or, in more detail, as revenue minus mining, ore beneficiation, transport, sales, capital, interest payment, and tax costs. From a geological viewpoint, cash flow analysis requires translating the geologic characteristics of the project into costs of development and extraction and converting preliminary estimates of reserves into potential revenues from mining, making assumptions about future mineral prices. Regarding taxation, it is not unexpected in a viable project that taxes and royalties will account for a significant portion of the cash flow.

All texts about mineral project evaluation conclude that the preferred methods of evaluation, where sufficient data are available, are those that include annual cash flow projections and that recognize the time value of money. These are the so-called dynamic methods. They include particularly the net present value and the internal rate of return, as opposed to those employing simple cost and revenue ratios or payback periods that do not consider the time value of money (named static methods). At an international level, economic assessment of mining projects is done basically using NPV and IRR, and sometimes PP. NPV is a measure of the value of a stock of wealth, whereas IRR is a measure of the efficiency of capital use or the rate of accumulation of wealth.

4.6.2.1.1 Net Present Value (NPV)

The net present value of a mining project is merely the difference between cash inflows and cash outflows on a present value basis, and it is the backbone of the project evaluation process. The formula to calculate NPV is as follows:

$$ \begin{array}{c}\mathrm{N}\mathrm{P}\mathrm{V}=\left({R}_0-{C}_0\right)+\frac{R_1-{C}_1}{\left(1+i\right)}+\frac{R_2-{C}_2}{{\left(1+i\right)}^2}+\dots \\ {}+\frac{R_n-{C}_n}{{\left(1+i\right)}^n}\end{array} $$

where «R» is the expected revenues each year, «C» is the expected costs each year, and «i» is the discount rate for the project. Only cash revenues and costs are incorporated in the net present value calculation, that is, only those revenues actually received or costs actually incurred are included in the cash flow for a certain time period. Examples of noncash costs are depreciation and depletion.

In this context, the discount rate equals the minimum rate of return for the project and reflects the opportunity costs of capital, sometimes adjusted for the risk of the project. The opportunity cost of capital is the benefit that would be received by the next investment opportunity. The NPV for different investment projects should be compared using the same discount rate. A positive NPV indicates that expected income is higher than projected expenses and a negative NPV indicates a nonprofit or loss situation so that the project should be abandoned. Obviously, NPV must be positive and usually must be above a certain minimum value determined by the company based on internal standards. The larger the NPV, the richer the investors become by undertaking the project. On the other hand, the higher the discount rate, the lower the NPV of the project.
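As a minimal sketch of the NPV formula, the function below discounts a series of yearly net cash flows (revenues minus costs, with the year-0 investment first) at a chosen rate; the cash flows and rates are invented and simply illustrate that a higher discount rate lowers the NPV.

```python
# Net present value of a series of yearly net cash flows (year 0 first).
def npv(net_cash_flows, i):
    """NPV = sum of (R_t - C_t) / (1 + i)^t for t = 0..n."""
    return sum(cf / (1 + i) ** t for t, cf in enumerate(net_cash_flows))

cash_flows = [-1000, 300, 350, 400, 450]      # initial investment, then net inflows
print(round(npv(cash_flows, 0.10), 1))        # positive at 10%: the project adds value
print(round(npv(cash_flows, 0.25), 1))        # same project at 25%: NPV falls below zero
```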

Selection of a suitable interest rate is essential in the application of NPV because the interest rate progressively discounts the cash flow values and ultimately establishes the net present value of the project. The interest rate for discounting commonly ranges between 5% and 15% over the interest rate of the needed initial capital investment, and in times of high interest rates, this discount rate is particularly onerous. ◘ Table 4.22 can serve as a guideline for discount rate factors at each study level. Often, the different parties involved in a mining project have agreed on all aspects of the evaluation and, by combining these components, even on the final cash flow values; the only factor open to discussion tends to be the discount rate to be used in the calculation of the net present value. Such differences can cause a variation of more than 50% in the value placed on a project. The discount rate is used not only as the discount rate in the NPV method but also as the minimum rate for the IRR.

Table 4.22 Guideline for discount rate factors at each study level

It is not easy to deal specifically with the selection of discount rates for mineral project evaluations, although economic and finance theory proposes the use of the corporate cost of capital as a discount rate. In general, mining companies, for cash flow evaluations at the feasibility study level of projects in low-risk countries, commonly select a discount rate of 10%. In practice, companies commonly determine the discount rate used in their financial evaluations by applying the weighted average cost of capital (WACC) method. It is the weighted average of the costs that a company has to pay for the capital it uses to make investments. In general, the higher the risk in the project, the higher the discount rate applied to it. For this reason, sometimes a company will apply a modifying factor to the WACC to account for increased risk in certain projects (e.g., projects with high risk can use a discount rate equal to the WACC plus 2–3%).
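A minimal sketch of the WACC calculation is given below, using the standard weighting of the cost of equity and the after-tax cost of debt; the capital structure, costs, tax rate, and the 2% project risk premium are hypothetical figures, not values from the text.

```python
# Weighted average cost of capital and a risk-adjusted project discount rate (hypothetical).
def wacc(equity, debt, cost_equity, cost_debt, tax_rate):
    v = equity + debt
    return (equity / v) * cost_equity + (debt / v) * cost_debt * (1 - tax_rate)

base_rate = wacc(equity=600e6, debt=400e6, cost_equity=0.12, cost_debt=0.07, tax_rate=0.30)
print(round(base_rate, 4))            # corporate discount rate
print(round(base_rate + 0.02, 4))     # higher-risk project: WACC plus a 2% premium
```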

4.6.2.1.2 Internal Rate of Return (IRR)

Internal rate of return (IRR) method is one of the most widely used investment analysis methods and, besides NPV, is probably the most common evaluation technique in the minerals industry. In the IRR method, the objective is to find the interest rate at which the present sum and future sum are equivalent. In other words, the present or future sum of all the cash flows is equal to zero if the IRR value is used as the interest rate. It is clear that as the discount rate increases for a specific cash flow, the NPV of the cash flow necessarily decreases. The relationship between IRR and NPV can be written as

$$ \mathrm{N}\mathrm{P}\mathrm{V}=0=\left[{\displaystyle \sum}_{t=1}^n\frac{{\mathrm{CF}}_t}{{\left(1+\mathrm{I}\mathrm{R}\mathrm{R}\right)}^t}\right]-{I}_0 $$

where CFt is the cash flow in year «t», I0 is the initial investment (CF0), IRR is the discount rate that makes NPV = 0, and «n» is the total number of years for the project. In general, calculations of IRR and NPV commonly give the same accept-or-reject recommendation, but the IRR method is more complicated than relying on NPV estimations.
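Because the IRR is simply the rate that drives the NPV of the cash flow series to zero, it can be found numerically; the sketch below uses a plain bisection search on an invented cash flow series (any root-finding routine would serve equally well).

```python
# IRR by bisection: the discount rate at which the NPV of the series is zero.
def npv(cash_flows, rate):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, low=0.0, high=1.0, tol=1e-6):
    """Assumes the NPV changes sign between the low and high rates."""
    while high - low > tol:
        mid = (low + high) / 2
        if npv(cash_flows, mid) > 0:
            low = mid            # NPV still positive, so the IRR is higher
        else:
            high = mid
    return (low + high) / 2

cash_flows = [-1000, 300, 350, 400, 450]     # invented series, year 0 first
print(round(irr(cash_flows), 4))             # rate at which the project just breaks even
```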

It can be understood that IRR value is the interest rate at which the investor recovers the investment. The higher a project’s internal rate of return, the more desirable it is to undertake the project because the better the return on capital. If all projects require the same amount of initial investment, the project with the highest IRR would be considered the best and undertaken first. Investment banks and other groups that fund the capital costs for mining operation like to see IRR values exceeding 10% and with values of 20% or better being ideal (Stevens 2010).

There are several reasons that explain the widespread popularity of IRR as an evaluation criterion, probably the most important being that IRR is expressed as a percentage value and many managers and engineers prefer to think in terms of percentages. Thus, the acceptance or rejection of a project based on the IRR criterion is carried out by comparing the estimated rate with the required rate of return: if the IRR exceeds the required rate, the project should be accepted, but if not, it should be rejected. The difference between the discount rate and IRR is that the investor chooses the discount rate, whereas the characteristics of the cash flow determine the IRR. Consequently, IRR is determined internally (hence its designation as the «internal» rate of return), as compared to the discount rate for NPV, which is determined externally (Torries 1998).

4.6.2.1.3 Payback Period (PP)

The payback (or payout) period falls under the heading of static methods, but it is one of the simplest and most common evaluation criteria used by engineering and resource companies. It is the number of years required for a project to generate cash flow or profits equal to the initial capital investment. It is important to note that cash flow in the first year, or even in several early years, of a mining operation will be negative since these years are used to pay back the previous investments (e.g., exploration). The PP method is a helpful evaluation index since it gives an indication of how long the company has to wait to get its return on investment, although it is an inappropriate evaluation technique if used alone because it does not take into account the total cash flow or the distribution of cash flows over the life of the project. The rationale of this method is that a shorter time required to get back the investment is better (Torries 1998).

The method does not provide guidance for the selection of an acceptable payback period, that is, one company may select 3 years, while another can choose 6 years under the exact same set of circumstances. However, for most normal mining projects, payback periods lie between 3 and 5 years, and as a rule, shorter payback periods are required in high-risk countries than in stable countries. The method serves as a preliminary screening process, but it is inadequate on its own as it does not take into account the time value of money. The payback period is very helpful in countries with political instability, where the retrieval of the initial investment within a short period is clearly essential. For example, consider the use of payback in assessing the feasibility of developing a rich deposit in a remote and politically unstable area. The project can have a very attractive rate of return, but management will probably not give approval until it is shown that payback can be achieved in less than 2 years (Torries 1998). ◘ Table 4.23 incorporates a simple calculation of NPV, IRR, and PP values, whereas ◘ Table 4.24 is a real case of NPV, IRR, and PP estimations (NPV is estimated for different discount rates).
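
A minimal Python sketch of a (non-discounted) payback calculation is shown below; the annual cash flows are hypothetical, and the function simply finds the point at which the cumulative cash flow first turns positive.

# Hypothetical after-tax cash flows (year 0 = initial investment)
cash_flows = [-1000.0, 250.0, 300.0, 350.0, 400.0, 400.0]

def payback_period(flows):
    cumulative = 0.0
    for year, cf in enumerate(flows):
        previous = cumulative
        cumulative += cf
        if cumulative >= 0.0 and year > 0:
            # Linear interpolation within the year in which the investment is recovered
            return year - 1 + (-previous) / cf
    return None  # investment not recovered within the project life

print(f"Payback period = {payback_period(cash_flows):.2f} years")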

Table 4.23 Simple calculation of NPV, IRR, and PP values; money is expressed in monetary units (MU)
Table 4.24 NPV, IRR, and PP estimations in a real economic analysis
4.6.2.1.4 Inflation

In a project evaluation, anything that changes or impacts costs and revenues is worthy of review. Inflation is such a factor and, because it compounds over time, it must be considered carefully before it is applied to a project. Thus, inflation, the sustained increase in the general price level of goods and services in an economy over a period, cannot be overlooked in an evaluation process. If management chooses to exclude inflation from the estimation, it should be aware of the consequences of this decision (Smith 1987). In general, a mining project should be evaluated using several rates of inflation. In the absence of a strong personal or corporate policy on inflation, the consumer price index is often used.

A common error in auditing cash flows is the mixing of real (constant) and nominal (current) monetary units (e.g., dollars). Often, mining companies will analyze projects based on real dollars, whereas financial institutions commonly use nominal dollars. The process of converting from nominal to real terms is known as inflation adjustment. Cash flows can be calculated either on a constant or a current (inflated) dollar basis, but regardless of which basis is used, all prices, costs, and rates must be expressed in the same terms. Thus, it is incorrect to mix current dollar values with constant dollar values in a single cash flow. Most company financial statements and reports are in nominal dollars and can serve as a basis for risk evaluation. If inflation can be forecast, current dollar analysis gives results that are more reliable.
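
As a hedged illustration of the inflation adjustment just mentioned, the Python sketch below deflates a hypothetical nominal cash flow series into real (constant) terms using an assumed inflation rate; both the cash flows and the rate are illustrative assumptions.

# Hypothetical nominal (current) cash flows and an assumed inflation rate
nominal_flows = [-1000.0, 320.0, 340.0, 360.0, 380.0]
inflation = 0.03   # assumed annual inflation rate (e.g., a consumer price index forecast)

# Real (constant) cash flow in year t = nominal cash flow / (1 + inflation)^t
real_flows = [cf / (1.0 + inflation) ** t for t, cf in enumerate(nominal_flows)]

for t, (nominal, real) in enumerate(zip(nominal_flows, real_flows)):
    print(f"Year {t}: nominal = {nominal:8.1f}  real = {real:8.1f}")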

4.6.2.2 Risk Analysis

The previous methods to evaluate investment alternatives assume that future benefits and costs are known with certainty at the time of investment, which is clearly a questionable assumption, especially in many types of mining project investments. Thus, risk can be thought of as a measure of the degree of variability of possible future revenues and costs. Mining involves large risks, and the magnitude of uncertainties in mine development projects is generally larger than in most other industries used for comparison. A project in which future prices and costs are known with certainty can and should be evaluated in a different manner from one in which these factors are not known. DCF analysis implicitly assumes that all input values are known with certainty, that is, that there is no uncertainty or risk. A numerical value of NPV can be correctly determined using any set of numbers, but the true value of an investment can be determined only if all input values are known with certainty. This certainty is seldom possible since future prices or costs are not exactly known. Moreover, the determination of risk is actually complex in the minerals industries because of the need to include environmental risks and costs in project evaluation. Some companies or institutions issue an annual detailed report on the main risks in the mining sector, such as the top 10 business risks facing mining and metals published by EY’s Global Mining & Metals Center. As an example, the top 10 risks for 2016–2017 are shown in ◘ Fig. 4.52. The report also includes the top three risks for each commodity (aluminum, coal, copper, gold, iron ore, lead/zinc, nickel, PGM, potash, silver, steel, and uranium). In this sense, «price and currency volatility» risk ranks first in six commodities and second in three more.

Fig. 4.52
figure 52

The top business risks for mining and metals

Where uncertainty and risk are largely absent, project evaluation is an easy exercise. However, mineral projects commonly involve raw materials for which prices or operating processes are difficult to forecast. In general, the higher the risk experienced by an investor, the higher the expected returns. Without the promise of higher returns, an investor would have no reason to consider projects with higher risks. Consequently, the inclusion or exclusion of risk in an economic evaluation is of huge importance. Although the use of the term risk as a synonym for uncertainty is not strictly correct because their definitions are not exactly the same, it is worth noting that they are used interchangeably in this section. Risk can be denoted by a single probability estimate, whereas uncertainty can be denoted by a range of estimates.

There are three categories of mineral-development risk according to the cause of the risk: technical, economic, and political risks (Park and Matunhire 2011). The technical risks, which are at least partly under the control of the organizations active in mineral development, are divided into reserve risk, completion risk, and production risk. Reserve risk, determined both by the nature and by the quality of ore-reserve estimates, reflects the possibility that actual reserves will differ from initial estimates. Thus, any resource and reserve estimation is guaranteed to be wrong; some, however, are less wrong than others (Morley et al. 1999). Completion risk reflects the possibility that a mineral-development project will not make it into production as anticipated. Production risk reflects the possibility that production will not proceed as expected because of production fluctuations.

The economic risks are divided into price risk, demand/supply risk, and foreign exchange risk. Price risk is the possible variability of future mineral prices. The most important risk factor is the lack of knowledge of the future price of the mined product (Rendu 2002). Demand/supply risk accounts for the difficulty in achieving reliable demand/supply forecasts, and foreign exchange risk is the variability of possible foreign exchange rates in the future. Finally, political risks are defined by political instabilities. In this sense, the general reasoning for a diversification strategy is to reduce fluctuations in earnings produced by mineral price instability and/or unforeseen government actions or other events in a particular country.

To account for risk and uncertainty (the uncertainties begin with exploration and continue up to the end of mine life) in economic evaluations, many modifications to NPV analysis are used, mainly including one or more of the following: sensitivity analysis, risk-adjusted discount rates, scenario analysis, and Monte Carlo simulation. Other less common techniques include, for example, certainty equivalence or Bayesian analysis.

4.6.2.2.1 Sensitivity Analysis

Sensitivity analysis is a form of risk assessment that is applied to the financial analysis of any mining project. It is a procedure that analyzes what will happen to the value of the mining project if any of the key inputs were to change. The basic process for conducting sensitivity analysis involves changing each input variable one at a time, leaving all the other variables constant, and assessing the effect that this has on the total project value. This method of risk analysis is probably the most widely used in mineral project evaluations. The range of possible outcomes commonly includes best-case and worst-case scenarios, showing the best and worst combinations of the possible values of each variable that influences the NPV estimate. Sensitivity analysis can also include testing the extent to which individual variables influence the economic attractiveness of a mining investment.

In any mining project evaluation, certain components have a greater effect upon the size of the cash flow, and hence the value, than others. It is common to look at the effect on the net present value of the project, but it is equally possible, and often necessary, to look at the effect on the IRR or the payback period. There are three main objectives in the sensitivity analysis process: (a) to determine which variables have the biggest impact on the project value; (b) to reveal the significant variables which, if varied or misestimated, would significantly change the acceptability of the project; and (c) to determine which variables need to be estimated more accurately. The results of a sensitivity analysis are usually presented in two forms, either graphically (e.g., spider and tornado graphs) or in a table. Thus, ◘ Fig. 4.53 shows a spider graph of IRR sensitivity and ◘ Fig. 4.54 a tornado graph of NPV sensitivity at a 5% discount rate. Regarding the presentation of sensitivity analysis data in tables, ◘ Table 4.25 shows the NPV and IRR sensitivity to metal prices.
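
The one-at-a-time procedure described above can be sketched in Python as follows; the simplified cash flow model, the base case values, and the ±30% range are hypothetical assumptions, and the resulting NPVs are the kind of data that would be plotted in a spider or tornado graph.

# Hypothetical base case: 10-year mine, NPV evaluated at an 8% discount rate
# price and opex in monetary units per tonne, capex in M, tonnes in Mt per year (all assumed)
base = {"price": 50.0, "opex": 20.0, "capex": 1500.0, "tonnes": 10.0}

def project_npv(price, opex, capex, tonnes, rate=0.08, years=10):
    annual_cf = (price - opex) * tonnes          # simplified annual cash flow, M per year
    pv = sum(annual_cf / (1.0 + rate) ** t for t in range(1, years + 1))
    return pv - capex

print(f"Base case NPV = {project_npv(**base):.1f} M")

# Vary each input one at a time by -30% to +30%, holding the others constant
for variable in ("price", "opex", "capex"):
    for change in (-0.3, -0.1, 0.1, 0.3):
        inputs = dict(base)
        inputs[variable] *= (1.0 + change)
        print(f"{variable} {change:+.0%}: NPV = {project_npv(**inputs):.1f} M")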

Fig. 4.53
figure 53

Spider graph of IRR sensitivity (Illustration courtesy of Alabama Graphite Corp.)

Fig. 4.54
figure 54

Tornado graph of NPV sensitivity at 5% discount rate (Illustration courtesy of Euromax Resources)

Table 4.25 NPV and IRR Sensitivity to metal prices
4.6.2.2.2 Scenario Analysis

Multiple combinations of factor values give rise to uncertainty. As a result, it is necessary to investigate the results of scenarios in which combinations of variables are changed. This type of approach is known as scenario analysis. The problem the decision-maker faces is caused by insufficient information to make an informed decision. One way to identify these unknowns is to construct scenarios (e.g., optimistic, base, and pessimistic) involving the expected ranges of the input variables. The base case is constructed from the best estimates of the project parameters, and the resulting NPV is often, although incorrectly, called the «expected value» of the project (Torries 1998). The pessimistic case shows the results of what happens when everything goes poorly, while the optimistic case shows what happens when everything goes well.
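
A minimal Python sketch of this three-scenario approach is shown below; the pessimistic, base, and optimistic input sets and the simplified cash flow model are hypothetical assumptions chosen only to illustrate the procedure.

# Hypothetical pessimistic/base/optimistic input sets for a simplified cash flow model
scenarios = {
    "pessimistic": {"price": 40.0, "opex": 24.0, "capex": 1700.0},
    "base":        {"price": 50.0, "opex": 20.0, "capex": 1500.0},
    "optimistic":  {"price": 60.0, "opex": 17.0, "capex": 1400.0},
}

def scenario_npv(price, opex, capex, tonnes=10.0, rate=0.08, years=10):
    annual_cf = (price - opex) * tonnes   # simplified annual cash flow, M per year
    return sum(annual_cf / (1.0 + rate) ** t for t in range(1, years + 1)) - capex

for name, inputs in scenarios.items():
    print(f"{name:12s} NPV = {scenario_npv(**inputs):8.1f} M")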

4.6.2.2.3 Monte Carlo Simulation

A more quantitative approach to risk assessment must also incorporate mathematical and statistical methods to assess the risk associated with a project. In the Monte Carlo method, a simulation modeling technique, a random number generator is used to sample values for each combination of events. The randomized calculation is repeated for many iterations so that the overall probability of each outcome can be estimated. Thus, the method accounts for risk in a continuous manner instead of a discrete way because it takes into account all possible values of the underlying determinants of profitability rather than just different specific values.

Whereas sensitivity analysis changes one variable at a time, Monte Carlo simulation changes two or more input variables at the same time. Obviously, the overall impact on the project value will be much greater. There is an enormous number of combinations of different variables and different amounts of variation to deal with. For this reason, and because of the huge amount of calculation involved, Monte Carlo analyses are nearly always carried out using specific software packages that can model different combinations very quickly. The number of iterations is determined according to the project size and the importance of the risks (1,000, 2,000, 5,000, and so on); a higher number of runs gives more accurate results. In most cases, to make the calculation easier, the variables are assumed to be independent of one another, although most of the variables are commonly correlated. For example, ore grades are positively correlated with ore recovery. Regarding the presentation of Monte Carlo simulation results, a project value is calculated for every combination of input variables. After repeating the calculation for every combination (number of iterations), all the project values are plotted on a histogram and statistical parameters, such as the median, mean, mode, percentiles, etc., are taken into account. The decision rule is to accept those investments with positive means or expected profits. ◘ Figure 4.55 shows the distribution of NPV values in a Monte Carlo simulation (Park 2012).

Fig. 4.55
figure 55

Distribution of NPV values in a Monte Carlo Simulation (Park 2012)
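
A minimal Monte Carlo sketch in Python along the lines described above is shown below; the input distributions and their parameters are hypothetical assumptions (a real study would calibrate them to the project and, where appropriate, model the correlations between variables).

import numpy as np

rng = np.random.default_rng(42)
n_iter = 5000          # number of iterations (1,000, 2,000, 5,000, ... as discussed above)
years, rate = 10, 0.08

# Hypothetical input distributions; price, operating cost, and capital cost vary independently here
price = rng.normal(50.0, 8.0, n_iter)        # monetary units per tonne (assumed)
opex = rng.normal(20.0, 3.0, n_iter)         # monetary units per tonne (assumed)
capex = rng.normal(1500.0, 150.0, n_iter)    # M (assumed)
tonnes = 10.0                                # Mt per year, held constant (assumed)

annual_cf = (price - opex) * tonnes
discount = sum(1.0 / (1.0 + rate) ** t for t in range(1, years + 1))
npv = annual_cf * discount - capex           # one NPV per iteration

print(f"Mean NPV   = {npv.mean():8.1f} M")
print(f"Median NPV = {np.median(npv):8.1f} M")
print(f"P(NPV < 0) = {(npv < 0).mean():.1%}")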

As a summary of the different steps included in a mining project economic evaluation, the following box shows an example of this type of studies (◘ Box 4.6: Matawinie Project Economic Analysis).

Box 4.6

Matawinie Project Economic Analysis: Courtesy of Nouveau Monde Mining Enterprises Inc.

The economic/financial assessment of the Matawinie Project of Nouveau Monde Mining Enterprises Inc. is based on Q2-2016 price projections in U.S. currency and cost estimates in Canadian currency. An exchange rate of 0.780 USD per CAD was assumed to convert USD market price projections and particular components of the cost estimates into CAD. No provision was made for the effects of inflation. The financial indicators under base case conditions are shown in ◘ Table 4.26. A sensitivity analysis reveals that the project’s viability will not be significantly vulnerable to variations in capital and operating costs within the margins of error associated with preliminary economic assessment (PEA) estimates. However, the project’s viability remains more vulnerable to the USD/CAD exchange rate and the larger uncertainty in future market prices.

Table 4.26 Financial indicators under base case conditions

The main macroeconomic assumptions used in the base case are given in ◘ Table 4.27. The price forecast for graphite concentrate is based on 60-month size-purity-dependent averages calculated from the Benchmark Mineral Intelligence Flake Graphite Price Index. The sensitivity analysis examines a range of prices 30% above and below this base case forecast. The sensitivity of the base case financial results to variations in the exchange rate was also examined; those cost components with US content that were originally converted to Canadian currency using the base case exchange rate were adjusted accordingly.

Table 4.27 Main macroeconomic assumptions used in the base case

The federal and provincial corporate tax rates currently applicable over the project’s operating life are 15.0% and 11.9% of taxable income, respectively. The marginal tax rates applicable under the recently adopted mining tax regulations are 16%, 22%, and 28% of taxable income and depend on the profit margin. As the mine is to produce a concentrate, a processing allowance rate of 10% is assumed.

The assessment was carried out on a 100% equity basis. Apart from the base case discount rate of 8.0%, two variants of 10.0 and 12.0% were used to determine the net present value of the project. These discount rates represent possible costs of equity capital. The main technical assumptions used in the base case are given in ◘ Table 4.28. A reduced production of 909.4 kt milled in the first production year provides for a ramp-up to full capacity.

Table 4.28 Main technical assumptions used in the base case

◘ Figure 4.56 illustrates the after-tax cash flow and cumulative cash flow profiles of the project for base case conditions. The intersection of the after-tax cumulative cash flow curve with the horizontal dashed line represents the payback period. A summary of the evaluation results is given in ◘ Table 4.29. The summary and cash flow statement indicate that the total preproduction (initial) capital costs were evaluated at USD 144.5 M. The sustaining capital requirement was evaluated at USD 14.4 M. Mine closure costs in the form of trust fund payments at the start of mine production were estimated at an additional USD 11.8 M. The cash flow statement shows a capital cost breakdown by area and provides an estimated capital spending schedule over the 2-year preproduction period of the project. Working capital requirements were estimated at 3 months of total annual operating costs. Since operating costs vary annually over the mine life, additional amounts of working capital are injected or withdrawn as required.

Fig. 4.56
figure 56

After-tax cash flow and cumulative cash flow profiles (Illustration courtesy of Nouveau Monde Mining Enterprises Inc.)

Table 4.29 Project evaluation summary – base case

The total revenue derived from the sale of the concentrate was estimated at USD 2,430.9 M, or on average, USD 78.79/ton milled. The total operating costs were estimated at USD 844.2 M, or on average, USD 27.36/ton milled. The financial results indicate a pretax net present value («NPV») of USD 403.7 M at a discount rate of 8.0%. The pretax internal rate of return («IRR») is 31.2% and the payback period is 2.9 years. The after-tax NPV is USD 237.0 M at a discount rate of 8.0%. The after-tax IRR is 24.7% and the payback period is 3.5 years.

Regarding the sensitivity analysis, it has been carried out, with the base case described above as a starting point, to assess the impact of changes in total preproduction capital costs («Capex»), operating costs («Opex»), product price («PRICE»), and the USD/CAD exchange rate («EX RATE») on the project’s NPV at 8.0% and IRR. Each variable was examined one at a time. An interval of ±30% with increments of 10.0% was used for the first three variables. USD/CAD exchange rates of 0.70, 0.75, 0.80, 0.85, 0.90, 0.95, and 1.00 (relative variations of −10.3, −3.9, 2.6, 9.0, 15.4, 21.8, and 28.2%, respectively) were used. The US content associated with the capital cost estimate was adjusted accordingly for each exchange rate assumption.

The before-tax results of the sensitivity analysis, as shown in ◘ Fig. 4.57, indicate that, within the limits of accuracy of the cost estimates in this study, the project’s before-tax viability does not seem significantly vulnerable to the underestimation of capital and operating costs, taken one at a time. The NPV is more sensitive to variations in Opex than Capex, as shown by the steeper slope of the Opex curve. As expected, the NPV is most sensitive to variations in price and the USD/CAD exchange rate. The NPV remains positive at the lower limit of the price interval and at the upper limit of the exchange rate interval examined.

Fig. 4.57
figure 57

Pretax NPV8%: sensitivity to Capex, Opex, price, and USD/CAD exchange rate (Illustration courtesy of Nouveau Monde Mining Enterprises Inc.)

The same conclusions can be drawn from the after-tax results of the sensitivity analysis. They indicate that the project’s after-tax viability is most vulnerable to a reduction in the price forecast and to changes in the USD/CAD exchange rate, while being less affected by the underestimation of capital and operating costs. Nevertheless, the NPV remains positive at the lower limit of the price interval and at the upper limit of the exchange rate interval examined.

4.7 Questions

Short Questions

  • Definition of sampling.

  • What does QA/QC mean?

  • Differences between channel sampling and chip sampling.

  • What is bulk sampling?

  • Explain briefly the relationship between number of holes drilled and the precision of reserve estimates.

  • List the factors that influence the appropriate sample weight.

  • Describe the two methods used for the reduction of sample weight.

  • What is compositing?

  • What does the term «outlier» mean?

  • Explain the concept of cutoff grade.

  • What is the most common way to determine bulk density?

  • Define the concept of geostatistics. What is the most important difference between classical and geostatistical methods regarding error estimation?

  • Explain the significance of kriging.

  • What is the net present value of a mining project?

  • Explain the Monte Carlo simulation in risk analysis.

Long Questions

  • Explain the main sampling drillhole procedures.

  • Describe the spherical or Matheron model used in geostatistical studies.