1.1 Background

Early definitions of precision agriculture (PA) emphasized the management of spatial variation within agricultural fields (Blackmore 1994) to maximize profits and reduce environmental pollution (Fenton 1998). More recently, the current (2019) official definition of PA recognized by the International Society for Precision Agriculture (ISPA 2019) was developed in consultation with 46 PA experts coordinated by Dr. Nicolas Tremblay, Dr. Àlex Escolà and Dr. Viacheslav Adamchuk: “Precision agriculture is a management strategy that gathers, processes and analyzes temporal, spatial and individual data and combines it with other information to support management decisions according to estimated variability for improved resource use efficiency, productivity, quality, profitability and sustainability of agricultural production.”

Inherent in the current and early definitions of PA is the need to characterize the spatial variation of crop status, soils, diseases, weeds and pests within fields and the need for data about the variation of these variables. Observing these elements and their status by traditional methods usually requires sampling, which is often inconsistent, biased, destructive, time-consuming and expensive to complete. According to the ISPA definition, PA could be practiced without any technological help, as temporal, spatial or individual plant or animal data could be gathered, processed and analyzed by simply using human senses, a paper and a pencil. Most farmers are very aware of the variation within their fields and orchards, and some may be managing them with a simple site-specific approach. However, to make PA economically viable, sensing approaches have been used from the outset and are now increasingly used to obtain dense spatial data sets at a greatly reduced cost compared to traditional sampling and laboratory analysis. Sensors are key in obtaining data about crops, soil and so on in a quantifiable, objective, repeatable, cost-effective and simple way, although the last two criteria might not always be satisfied.

Some of the first sensors to be used in PA were yield monitors for cereal crops to characterize the spatial variation in yield, which was seen as a first step to determining the causes of yield variation and managing them. Yield monitors enabled the collection of high-resolution spatial data on the variation in yield, which could then be interpolated into map form without having to do time-consuming spatial sampling at harvest. Use of yield sensors has not been without its problems and much research has been devoted to investigating sources of error and developing methods to pre-process such data to reduce error or to refine the sensors (see Chap. 7). Key to the use of these first sensors in PA, and the use of many sensors today, has been global navigation satellite systems (GNSS), beginning with the global positioning system (GPS), which were able to reduce considerably the time to obtain accurate positioning data. Although it is currently feasible to practice PA without the need for any GNSS receiver, GPS is considered to be one of the main triggers in the development and adoption of PA as many sensing approaches could not be implemented without accurate positioning data. Some basic concepts on GNSS will be covered at the end of this introductory chapter.

Over time, the number of sensors available for observing various phenomena in PA has increased almost exponentially and there is now a wide variety of sensing approaches to characterize spatial variation in various agronomically important phenomena. The 2019 definition of PA emphasizes the importance of changes in spatial variation over time, between years or even within individual growing seasons. This increasing interest in temporal analysis for PA means that repeated sampling of many phenomena is needed, sometimes numerous times within a season. Such temporal analysis would be economically untenable with traditional sampling and laboratory approaches. Current requirements in agriculture are, therefore, driving a rapid increase in interest in accurate yet inexpensive sensing approaches.

1.2 Purpose, Aims, Structure and Audience of this Book

A large proportion of the research in the PA literature uses sensors, but the output is scattered in various journals and reports. The idea for this book was proposed at the European Conference on Precision Agriculture held in 2017 in Edinburgh, UK. We noticed there, and at other PA conferences, that a large proportion of presentations involved some sort of sensing application. However, sessions are usually organized by field of application, such as weed studies, precision viticulture, precision irrigation or soil studies, and only rarely are there coherent sessions on particular sensing approaches. We saw the need for a text that brings together the variety of sensing approaches currently used in PA in one document to discuss properly the pros and cons of the different approaches. This book aims to bring together research based on the types of sensor and sensing systems commercially available to monitor crop characteristics, status and their environment. It also considers sensing systems in development or testing phases and their applications in PA. The book aims to present the ‘state of the art’ of the most popular sensing techniques and the current state of research on where sensors are applied in PA. The book will provide a broad overview of sensing in PA and a coherent introduction for new professionals and research scientists. However, the book does not cover proprioceptive or interoceptive sensors mounted on PA machinery or robots, as they are designed to measure the interaction of such equipment with the environment or to obtain internal data for their operation, respectively. Those sensors are already covered in another book in the series, titled “Innovation in Agricultural Robotics for Precision Agriculture - A Roadmap for Integrating Robots into Precision Agriculture”. Chapters on specific topics and case studies will provide depth and enable implementation of the methods by users. Readers will be introduced to the potential applications of a range of different sensors, how they should be used properly, their limitations for use in PA and accuracy assessments, as well as current relative prices of some sensors compared to rates charged for standard sampling and laboratory analyses. Sensing is of great value in PA because it usually provides cost-effective and often near real-time data for making more informed management decisions.

In addition to introducing the book, Chap. 1 provides some basic concepts related to sensors and sensing techniques that do not fall within the scope of the other chapters. These basic concepts will help readers to understand the book content better and make more appropriate use of sensors and their data. Chapters 2–10 of this book identify the most important specific sensing techniques used in PA. These form review chapters covering the following topics: (2) Satellite Remote Sensing, (3) Sensing Crop Geometry and Structure, (4) Soil Sensing, (5) Sensing with Wireless Sensor Networks, (6) Sensing for Health, Vigour and Disease Detection in Row and Grain Crops, (7) On-Combine Sensing Techniques in Arable Crops, (8) Sensing in Precision Horticulture, (9) Sensing from Unmanned Aerial Vehicles and (10) Sensing for Weed Detection. Some of these review chapters include case studies to illustrate the application of the various sensors. However, the focus of Chapters 11–13 is solely on small case studies, each showing cutting-edge applications of different sensing methods: (11) Applications of Sensing to Precision Irrigation, (12) Applications of Optical Sensing of Crop Health and Vigour and (13) Applications of Sensing for Disease Detection. In conclusion to the book, there is a section on how we expect sensors and analysis to develop. The reference sections of each chapter, particularly the review chapters, alone provide a wealth of information for readers who wish to explore the application of sensors in PA further.

This text provides sufficient detail to act as a handbook for practitioners. It will also be relevant to the wider field of digital agriculture, the adoption of which, and of some of its principles, is increasing at an exponential pace. The theme of the ASA-CSSA-SSSA 2019 annual meeting, with about 600 sessions, 3000 research papers and 4600 presenters, was “Embracing the Digital Environment”, which reflects the digital revolution within agriculture for which sensors have served as a keystone.

The target audiences of this book are upper-level undergraduate and graduate students, new professionals, scientists and practitioners of PA, and agricultural engineers. Readers are provided with a rapid overview of the sensing solutions currently adopted and the trends in research towards developing new applications. The book could be used in general agriculture and PA courses and also in courses on environmental monitoring and policy making.

1.3 Sensing Approaches

A wide range of sensing approaches is covered in this book. Two broad groups of approaches are remote and proximal sensing. Although remote sensing implies any measurement made without direct contact with the medium or object being measured, in this book the traditional PA approach is adopted. Thus, remote sensing involves the observation of the earth’s surface from satellite or airborne systems, whereas proximal sensing systems collect information near the earth’s surface, from ground-based platforms.

1.3.1 Remote Sensing Systems

Remote sensing approaches typically observe reflectance of different parts of the electromagnetic spectrum from spaceborne or airborne sensors. Satellite remote sensing approaches have been used in PA from the outset, initially to investigate basic vegetation indices such as the normalized difference vegetation index (NDVI), an index related to chlorophyll content and crop health. However, more recently, the increase in the spatial, spectral and temporal resolution of imagery from some satellites has enabled greater use of satellite remote sensing platforms in PA. The spatial resolution in remote sensing imaging refers to the size of image pixels in terms of the area captured on the ground or footprint, that is, the smallest area of observation. This is sometimes called ground sample distance (GSD), although it may differ slightly if interpolation is performed in the final image product. The reflectance characteristics of each pixel indicate the average reflectance of the surface over the area of the pixel. Traditional remote sensing platforms like Landsat have pixel sizes of 30 m for most bands, whereas the relatively new Sentinel-2 imagery has a pixel size of 10 m for RGB and near-infrared (NIR) bands. Satellites such as GeoEye-1, Worldview-3 and Pleiades 1A, B now produce imagery with pixel sizes of <2 m (see Chap. 2 for more detail). Some newer satellites also have shorter revisit times, some revisiting as often as daily, which makes temporal studies using satellite remote sensing increasingly viable. The increased spectral resolution of some satellites in recent years has revolutionized the study of phenomena such as drought (see Chap. 11) and disease detection within individual growing seasons. Increased spectral resolution has been of particular help in identifying certain diseases before their effects are visible to the naked eye (see Chap. 13 for specific examples).
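The NDVI mentioned above is computed from red and near-infrared reflectance as (NIR − Red)/(NIR + Red). The short Python sketch below illustrates the calculation; the reflectance values are hypothetical and would, in practice, be taken from image bands such as Sentinel-2 band 4 (red) and band 8 (NIR).

    import numpy as np

    # Hypothetical red and near-infrared (NIR) reflectance values for a small
    # image patch (rows x columns of pixels).
    red = np.array([[0.08, 0.10], [0.12, 0.09]])
    nir = np.array([[0.45, 0.50], [0.30, 0.48]])

    # NDVI = (NIR - Red) / (NIR + Red), computed pixel by pixel. Values close
    # to 1 indicate dense, healthy vegetation; values near 0 indicate bare soil.
    ndvi = (nir - red) / (nir + red)
    print(ndvi)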

Chapter 2 provides an exhaustive review of satellite remote sensing platforms and their characteristics in terms of spatial, temporal and spectral resolution as well as the swath width and cost per unit area. A wide range of vegetation and soil indices that have been calculated for PA research is reviewed. Its summary tables provide an excellent quick reference guide. Also reviewed are hyperspectral and sun-induced fluorescence (SIF) satellite specifications and the narrow-band vegetation indices that the former have been used to calculate. The discussion of how synthetic aperture radar satellites and satellite-based digital surface model products have been used is very useful. Indeed, the former have proved particularly useful for estimating soil surface moisture and have the advantage of being able to collect data at night and when there is cloud cover. The chapter concludes by identifying future satellite remote sensing needs for PA. The need for more hyperspectral imagery in the future is particularly important and the economic advantage of freely available data is addressed. Sentinel-2 is identified as a particularly viable option for future studies given its relatively smaller pixel size, good spectral resolution and more frequent revisit times than Landsat 8, another source of freely available data.

Satellite remote sensing techniques are applied and mentioned in other chapters of the book. For example, Chapter 6 discusses the use of satellite and other remote sensing products and derived vegetation indices to identify plant nutrient deficiencies and diseases for row and grain crops, while Chap. 8 considers the use of remote sensing products by type of band and the diseases that have been identified with those bands. The use of remote sensing approaches to estimate yield and fruit maturity is important for a wide range of crops in precision horticulture (Chap. 8). Chapter 8 also considers the complex numerical analysis techniques that are often required to deal with the hyperspectral imagery needed for disease detection. Chapter 10, on weed detection, notes the low spatial resolution of most freely available satellite imagery as an issue for weed detection in PA; therefore, other remote sensing platforms have been favoured for this activity. If some of the new, high spatial resolution satellite imagery becomes available without charge, satellite remote sensing could become a cost-effective option for weed detection. Chapter 11, on precision irrigation, includes a case study that used NDVI from Sentinel-2 imagery to characterize the spatial variation in different varieties of fruit trees and different irrigation sectors. Finally, in Chap. 13, case study 1 used 2 m resolution visible and near-infrared (VIS-NIR) GeoEye imagery to identify areas with Cotton Root Rot disease and determine a precision fungicide application protocol.

Early in PA, aerial photographs, in particular from standard surveys, were a relatively frequently used remote sensing product (Robert 2002; Kerry and Oliver 2003) because of the potentially small GSD represented by pixels (usually ~1–3 m) compared to early freely available satellite imagery (~20–30 m). However, with the limited spectral resolution of photographs and the high cost of custom flights where there is no nearby airstrip or farmer routinely using light aircraft to spray crops, and so on, the use of aerial imagery captured by manned aircraft has decreased. It is still economically viable for those with easy access to a plane, however, and Chap. 10 uses manned aircraft to acquire imagery for weed detection. The aircraft need to be flown at low altitudes to ensure sufficient spatial accuracy; nevertheless, such data are mainly useful for identifying large patches of weeds. In case study 2 of Chap. 12, aircraft-captured aerial imagery in the VIS-NIR wavelengths was used to classify NDVI for vineyard blocks to indicate vine water status. In case study 1 of Chap. 13, aerial imagery captured by manned aircraft in the VIS-NIR wavelengths was used to identify areas with Cotton Root Rot disease. The case study compared the ability of this imagery to identify the disease with high resolution (2 m) GeoEye data and unmanned aerial vehicle (UAV) captured imagery. The use of UAVs has revolutionized remote sensing for PA. Chapter 9 reviews the use of UAVs in PA together with a case study that investigates the use of UAV imagery to determine side-dress fertilization of winter cereals. The chapter discusses the various pros and cons of imagery from UAVs compared to satellite and manned flight-based aerial photograph imagery. UAVs and some associated cameras are relatively inexpensive and provide images with very high spatial resolution (pixels as small as 1 cm). Case study 1 of Chap. 13 illustrates how relatively high resolution (1–2 m pixels) satellite and manned aircraft imagery can both guide precision fungicide applications successfully. The study also shows how the higher spatial resolution of UAV imagery makes plant-by-plant fungicide applications a possibility for the future. Having said this, data storage and processing issues can be important drawbacks of using UAV images unless standard automated imagery processing services are used. The case study in Chap. 9 illustrates how Sentinel-2 and UAV imagery used to direct fertilizer side-dressing can both increase profits, but the latter increased profits slightly more.

The cheapest cameras or sensors that can be mounted on UAVs have limited spectral resolution; therefore, several vegetation indices using only wavelengths in the visible range have been developed. The finer temporal resolution of UAV data compared with satellite remote sensing products makes such data much more amenable to within-season temporal analysis of crops. Flights can be made regularly except when winds are high. The low altitudes of UAV flights mean that cloud cover is less of a problem than for satellite imagery. This temporal flexibility of drone flights has resulted in a great increase in temporal studies that observe the crop throughout the growing season. In addition to the review and case study of UAVs in Chap. 9, Chap. 6 briefly discusses the use of UAV imagery to determine N deficiency. Case study 1 in Chap. 10 discusses the use of UAV imagery for mission planning to spray weed patches with commercial sprayers and case study 4 in Chap. 12 uses UAV imagery to improve the functioning of a potato crop model and to spatialize it within a field. Although the use of UAVs is of great interest in research, it needs further development to scale up for regular use on large farms.

1.3.2 Proximal Sensing Systems

There are many passive and active optical or spectroscopic sensors currently in use in PA, both in remote and proximal sensing. The main limitation of passive sensors is the variation in lighting in the open environment; therefore, they are sometimes applied to plant and soil samples in the laboratory where lighting conditions can be controlled. The sample collection and preparation add considerably to the cost, but improve the accuracy of any values derived from the approach. Optical or spectroscopic proximal sensors are often distinguished from one another by the platform on which they are mounted. When the crop is being monitored, such sensors are frequently mounted on all-terrain vehicles (ATVs) or on farm machinery, sensing the crop while routine operations are being carried out. At other times, they are attached to masts within a field. This can be a good way to get almost continual temporal coverage of crop status. It is a potentially economical approach when a limited area of crop needs to be sensed frequently because, unlike with sensors mounted on mobile equipment, no fuel or labour is needed. Case study 1 of Chap. 12 investigates the use of an optical Crop Circle sensor mounted 1.5 m above the ground to sense the side curtain canopy areas of grape vines. The data from this sensor were used to calculate several vegetation indices to monitor berry characteristics such as size and pH throughout the growing season. Case study 2 of Chap. 11 captured oblique thermal images of cotton fields in Israel using infrared thermal cameras mounted on a 20 m vertical mast. The images were used to determine leaf water potential and derive rates of drip irrigation.

Examples of optical sensors mounted on farm machinery or ATVs are quite common in PA. Chapter 7 focuses solely on sensors that are mounted on combines. There is a detailed section on combine-mounted near-infrared spectroscopy instruments to sense different aspects of grain quality. In case study 2 of Chap. 10, a machine vision camera was mounted on an ATV to obtain dense imagery of the vegetation in the field, which was analyzed by machine learning software to distinguish monocot and dicot weeds from the crop. In addition to spectroscopy sensors mounted on farm machinery (Chap. 7) or masts, there are handheld devices that can perform spectroscopy in the field or laboratory. Among the earliest and most commonly used of these handheld devices are soil plant analysis development (SPAD) and NDVI meters, although such instruments are also frequently mounted on farm machinery (Chaps. 6 and 7).

Proximal sensors have been particularly favoured over remote sensing approaches for sensing soil characteristics because they allow closer proximity to the medium being studied than remote sensors. Case studies 1, 3 and 4 in Chap. 4 all use NIR spectroscopy either alone or together with other sensing approaches. Case study 1 used on-the-go NIR spectroscopy, while case studies 3 and 4 used the method in the laboratory on air-dried soils. These case studies note that the accuracy and reliability of results from proximal soil sensors often differ considerably depending on whether spectroscopic approaches are applied in the laboratory under constant moisture and lighting conditions, in situ in the field or on-the-go. Case studies 2 and 3 in Chap. 13 use hyperspectral cameras under laboratory conditions with constant lighting to detect Laurel wilt and Esca in avocado leaves and vine leaves, respectively, before the effects of these diseases on the leaves are visible to the human eye. Hyperspectral imaging has also been used to determine the degree of fruit ripeness (Chap. 8).

At the far end of the electromagnetic spectrum are gamma rays. Gamma-ray radiometry has proved useful for mapping soil-derived plant nutrients and for soil mapping at various scales, as illustrated by case study 2 of Chap. 4. There are also many active optical sensors that have been used for various aspects of sensing. Stereo cameras and light detection and ranging (LiDAR) have been used for three-dimensional (3D) modelling of plants, for autonomous robot guidance and to determine the size of fruit (Chaps. 3 and 8). Obtaining 3D data from crops may help farmers and advisors to understand the development and variability of the crop better throughout the growing season to make more informed decisions on canopy management, applications of plant protection products and other operations. Chapter 7 discusses the use of laser and LiDAR-based sensors on combines to detect crop height, density and variability in biomass. Chapter 6 mentions the use of LiDAR data, where a UAV-derived digital surface model and a digital elevation model are differenced to model crop height and yield.

Probably the most commonly used types of proximal sensor for soil sensing are geophysical sensors and soil moisture sensors. Soil moisture is probably the most temporally variable soil property; therefore, such sensors are often left in place to determine irrigation timing and rates. Chapters 5 and 11 examine the use of wireless sensor networks (WSN) with soil moisture sensors in detail. Geophysical sensors that measure the apparent electrical conductivity (ECa) or resistivity of the soil have been used or calibrated to infer various soil properties or to determine suitable management zones within fields. However, some of the best uses of such geophysical data are when they are combined with other sensor data. Case study 1 of Chap. 4 investigates the use of geophysical instruments on a multi-sensor platform and case study 2 of Chap. 4 uses geophysical data combined with gamma-ray radiometry. Chapter 6 also mentions situations where plant health is assessed through soil sensing or where geophysical instruments are used to define management zones or areas with markedly different soils. Most proximal soil sensors respond to a range of soil properties and do not measure one individual soil property. Consequently, most proximal soil sensing approaches need to be calibrated to reflect values of a given soil property. The case studies in Chap. 4 discuss this issue of calibration of sensed data in detail and investigate the relative expense and sampling effort required for good calibration. To date, more success has been achieved with proximal sensors for mapping the more permanent properties of the soil, such as texture (and therefore water content) and organic matter content, than for nutrients. However, geophysical sensors usually reflect problems with soil salinity well.

Data from sensors are usually dense and increasingly complex numerical methods are needed to analyze these large data sets. Practitioners of PA cannot be experts in all of these methods, so there is increasing demand for automated machine learning approaches so that the labour costs of sampling and analysis are not simply replaced by labour costs for data analysis. Although some techniques are described in the chapters of this book, they do not cover the full range of analysis and numerical techniques in detail because there are limits to the scope within a single book and modelling will be the focus of another forthcoming book in the Springer Precision Agriculture Series.

1.4 Basic Sensing Concepts for Precision Agriculture

Although the readers of this book are not expected to design sensors from scratch, their use in PA requires some basic understanding of sensor principles, how to express the recorded data and what the possible applications in PA are. The aim of this section is to clarify some basic concepts used throughout the book to help readers understand its content better and make better use of their sensors and data.

1.4.1 When Sensors Are Used in Precision Agriculture

Precision agriculture can be practiced by following a 4-stage cycle. The first stage is data gathering about the crop and its environment. The second is data processing and information extraction. The third stage is decision making and the last is operation in the field. Sensors are used in the first and fourth stages with two main purposes: (1) capturing data about the crop and its environment, and (2) monitoring the equipment that carries the sensors themselves (agricultural machinery or robots). As mentioned, this book covers only those sensors that record data about the crop and its environment.

The PA cycle described above may be followed using two different approaches, referred to as map-based precision agriculture and real-time precision agriculture. In the former, sensors are used in the first stage of the PA cycle to gather georeferenced data, which need to be processed (filtered, normalized, etc.) and interpolated to create maps representing the spatial distribution of the variables measured throughout a field or orchard (stage 2). Those maps, together with other georeferenced or mapped ancillary data, are used to make a decision (stage 3) about what specific management operations should be carried out and with what specifications. The output of stage 3 is usually another map, the so-called prescription map, representing the field or orchard and what to do in it following a site-specific management approach (rates to apply, intensity of a specific operation, etc.). Finally, in stage 4 it is time to go to the field and execute the prescription map with either manually operated conventional equipment or with variable rate technology (VRT) equipment. From the moment data are captured until the management operation is carried out, several hours, days or even weeks can pass. This time delay can have a negative effect if a problem needs to be addressed rapidly due to changing conditions (e.g. a treatment against a moving pest), but it can give the farmer or advisors time to integrate several information sources and supervise and validate the decision made, whether the decision is made by a human or is automated.

The second approach for PA practitioners is the so-called real-time approach. In this approach, crop or soil sensors are mounted on VRT equipment and the PA cycle is executed on-the-go, on a nearly real-time basis, so that the time between sensing and acting is a matter of some milliseconds only, depending on the distance between the sensor and the VRT equipment and on the forward speed of the agricultural machinery. Although this approach also follows the four stages in the cycle, there is no time for the controller to integrate many information sources or for the farmer or operator to supervise and validate the decision made. In this case, even though a GNSS receiver is not required, it is very convenient to have one for three different purposes: (1) to avoid overlap between already treated areas and those currently being treated, (2) to record the sensor readings with their locations to create a map subsequently to show the spatial distribution of the measured quantity, and (3) to record and geo-reference what the VRT equipment did in the field to create what is called an as-applied map. This will not be a map of crop, soil or environmental readings, but one derived from the internal sensors of the VRT equipment, called proprioceptive and interoceptive sensors, to monitor the rates actually applied. Thus, the farmer will be able, at least, to supervise what was actually done in the field a posteriori.
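The time available in this real-time loop is set by simple kinematics: the reading must be processed and the actuator commanded before the implement reaches the spot that was sensed. The sketch below is purely illustrative; the sensor-to-implement offset and forward speed are assumed values.

    # Time available between sensing and acting in an on-the-go, real-time setup.
    sensor_to_implement_offset_m = 0.5   # assumed distance from sensor to VRT implement
    forward_speed_kmh = 10.0             # assumed forward speed of the machinery

    forward_speed_ms = forward_speed_kmh / 3.6              # ~2.8 m/s
    time_budget_s = sensor_to_implement_offset_m / forward_speed_ms
    print(f"Time to process and act: {time_budget_s:.3f} s")  # ~0.18 s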

1.4.2 How Sensors Are Used in Precision Agriculture

Both remote and proximal sensors may be active or passive and that will determine the way in which they are used and under what conditions. An active sensor emits its own energy towards the object being measured (usually some form of radiation, but also other forms of energy such as electricity) and captures the returned energy to infer a property or status. A passive sensor does not emit any energy and simply captures the reflected energy from an external source. A simple example is a red, green, blue (RGB) camera. When lighting conditions are sufficient, the camera simply records the amount of red, green and blue light captured by the optics, operating as a passive sensor. However, when ambient light is not sufficient, some cameras use a flash. That is, they emit their own light and capture the returning scattered light, turning the camera into an active sensing system.

Passive sensors rely on external energy sources, which in agriculture usually means the sun. That is why they can only operate during daytime (or in some cases at night with artificial light) and their readings are affected by sunlight intensity. In addition, the further the sensor is from the target, the larger the negative effects on the energy captured will be. This is of particular relevance for satellite-borne sensors, which means that their readings need to be corrected according to atmospheric status to obtain absolute values that can be compared to data captured on different dates. Passive sensors that record visible light from satellites are useful only during daytime and under cloudless conditions.

Alternatively, proximal and remote active sensors can be used during night-time and under cloudy conditions, extending the sensing time window or even the working time when the sensor is used in a real-time PA approach. That is the case for active proximal nitrogen (N) sensors and for synthetic aperture radar sensing devices mounted on satellites such as Sentinel-1.

Regardless of whether they are active or passive, proximal or remote, sensors can be operated manually or fully automatically. In addition, the time required to complete a measurement also needs to be considered. This characteristic affects the time invested in sensing and may influence the sensing approach, therefore affecting the final spatial resolution of the measurements. With manually operated sensors, the usual approach is to design a discrete sampling strategy, either systematic or stratified. With automated sensors, the readings may require a trigger signal or may be continuous at a specific rate of update or output frequency. The trigger signal or the output frequency of the sensor, together with its forward speed, will condition the spatial resolution of the data. If the sensor does not need to stop for a specific time to make a measurement, the usual approach is the on-the-go sensing strategy. In that case, the sensor will be mounted on a ground, aerial or space platform and will capture data at a specific temporal rate or at pre-determined distance intervals, resulting in much higher spatial resolution data than those obtained by discrete sampling.
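For on-the-go sensing, the along-track spacing of readings follows directly from the output frequency and the forward speed. A minimal sketch, with assumed values for both, is given below.

    # Along-track spacing of on-the-go readings: forward speed divided by the
    # sensor output frequency.
    output_frequency_hz = 10.0     # assumed sensor update rate
    forward_speed_kmh = 12.0       # assumed platform forward speed

    forward_speed_ms = forward_speed_kmh / 3.6              # ~3.3 m/s
    spacing_m = forward_speed_ms / output_frequency_hz      # distance between readings
    print(f"One reading every {spacing_m:.2f} m along the track")  # ~0.33 m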

1.4.3 Sensing Resolutions

The most common concerns when using sensors are associated with resolution of one type or another. According to the International Vocabulary of Metrology (BIPM, 2012), resolution is the smallest change in a quantity being measured that causes a perceptible change in the corresponding indication. Such an indication is the quantity value provided by a measuring instrument or a measuring system. For analogue devices, the resolution of the display device must also be considered, that is, the smallest difference between displayed indications that can be meaningfully distinguished. These resolutions are expressed in the same units as the quantity being measured.

The resolution of imaging sensors is related to the detail perceivable in the acquired images. The higher the resolution, the more detailed the images and the smaller the objects that can be distinguished. In digital imaging sensors, such resolution is frequently expressed by the number of pixels of the sensor, that is, the number of elementary units or samples of the whole image. The resolution can be expressed as a single number, obtained by multiplying the number of pixel columns by the number of rows of the sensor, or simply by stating the counts of columns and rows. Thus, a 10 megapixel resolution sensor is a device producing digital images of 3648 columns and 2736 rows, or 9 980 928 pixels. That resolution, together with the distance of the sensor from the target, will determine the spatial resolution of the acquired images.
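The arithmetic behind these figures is straightforward, as the sketch below shows; the swath width used to derive a ground sample distance is an assumed example.

    # Pixel count of a 3648 x 2736 sensor and the ground sample distance (GSD)
    # obtained when that sensor images a swath of known width on the ground.
    columns, rows = 3648, 2736
    total_pixels = columns * rows
    print(f"{total_pixels} pixels (~{total_pixels / 1e6:.0f} megapixels)")   # 9980928, ~10 MP

    swath_width_m = 36.5                 # assumed width of ground covered by the image
    gsd_m = swath_width_m / columns      # ground distance represented by one pixel
    print(f"GSD of about {gsd_m * 100:.1f} cm per pixel")                    # ~1.0 cm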

The main types of resolution considered in PA are spatial, temporal, spectral and radiometric resolutions. Spatial resolution of imaging sensors may be considered as the size of the captured target represented in each pixel or, alternatively, its inverse. The resolution of digital images is usually expressed as the equivalent size of a side of a pixel (e.g. pixel size of 1 m) or as the count of pixels per unit target area (e.g. 5 pixels cm−2 of leaf or 9 pixels m−2 of ground surface). Similarly, for point clouds, resolution is usually expressed as the number of points per unit ground area or per unit target area for both airborne and terrestrial laser scanners. Depending on many factors, such resolutions may range from less than one to some tens of points per square metre of ground in the former and from hundreds to thousands of points per square metre of ground in the latter. In other contexts, it is also common to express spatial resolution as the number of samples per unit ground area or per grid cell. Thus, taking one soil sample per hectare is also a way to express spatial resolution. A technical term to express such spatial resolutions is the ground sample distance or GSD, which expresses the equivalent distance between samples on the ground. In this case, the pixels in digital images would be considered samples as well. The advantage of this way of expressing resolution is that the quantity is a length and can be expressed using the SI base unit, the metre.

Another thing to consider in relation to sampling and spatial resolution is the sample spatial support. For a satellite image with a GSD of 30 m, the spatial support is areal, giving the average reflectance characteristics of the 30 m square on the ground. Other data have point support as they are taken at specific points on the ground separated by a given GSD. Still other data have a pseudo-point support. Using a pseudo-point support is standard practice in soil sampling and involves collecting five or so samples within an area of one to a few square metres at the nodes of a 100 m square grid and mixing them prior to analysis. This mixing or averaging tries to reduce local noise in the data. This procedure is often referred to as the bulking strategy. Similar approaches are also used in plant analyses, such as taking SPAD meter measurements of several leaves from a small area around the nodes of a sampling grid.
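Two small illustrations of these ideas are sketched below: the conversion between pixel density and pixel size, and the averaging that underlies a pseudo-point (bulked) support. The numbers are hypothetical.

    import math

    # A pixel density (pixels per unit area) corresponds to a pixel side length
    # of 1 / sqrt(density), assuming square pixels.
    def pixel_side_from_density(pixels_per_unit_area):
        return 1.0 / math.sqrt(pixels_per_unit_area)

    print(pixel_side_from_density(9))   # 9 pixels per m^2  -> pixel side of ~0.33 m
    print(pixel_side_from_density(5))   # 5 pixels per cm^2 -> pixel side of ~0.45 cm

    # Pseudo-point (bulked) support: several subsamples taken close to a grid
    # node are combined (here, averaged) to reduce local noise.
    subsamples = [2.1, 2.4, 1.9, 2.3, 2.2]            # hypothetical values at one node
    bulked_value = sum(subsamples) / len(subsamples)  # value assigned to the node
    print(round(bulked_value, 2))                     # 2.18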

Temporal resolution is related to the time between two consecutive measurements of the same area or target. It is usually expressed in time units when the time is long and in frequency units when it is short. In remote sensing from satellites, the temporal resolution is set by the platform (satellite) and it is usual to refer to it as return time, revisit time or revisit frequency, commonly expressed in days. However, for sampling techniques with higher temporal resolution, it tends to be expressed as one sample per time unit (e.g. 1 sample h−1) or, when the resolution is higher still, as a number of samples per time unit (e.g. 10 Hz or 10 samples s−1). That also applies to video cameras, where the number of images or frames per time unit is an important specification. Thus, the usual temporal resolution for video is 24 Hz (24 images or frames per second), but this may increase to several hundred or even thousands of hertz in high-speed cameras for slow-motion recording.

It is difficult to obtain data at both high spatial and high temporal resolution, and there is usually a trade-off between the two when deciding which sensing approach to use. In proximal sensing, when sensors are mounted on mobile ground platforms, the spatial resolution depends on the forward speed of the platform, which tends to be as fast as possible when hourly costs apply. A way to increase spatial resolution without increasing hourly costs is with ground robotic platforms for scouting. Robots may work 24/7 and could take their time to reach the desired spatial resolution. The temporal resolution when using sensors mounted on machinery depends on the fuel costs, the ability to enter the field regularly and the damage that could be done to the crop. A way to increase the temporal resolution is by installing sensors permanently in the field. This way, very high temporal resolutions can be achieved but, on the other hand, spatial resolution tends to be low due to the cost of deploying several sensors across the farm. Satellite remote sensing runs on a pre-set schedule and provides users with data at a pre-set spatial resolution, whereas manned and unmanned airborne platforms are more flexible but may be more expensive and/or the data more complex to process. As mentioned previously, the key is finding a balance between requirements, costs and complexity in processing the data.

Finally, spectral and radiometric resolutions refer to radiometric sensors, which capture energy from different bands of the electromagnetic spectrum; the bands are the fractions of the spectrum to which the sensor is sensitive. Spectral resolution is defined as the ability to resolve features in the electromagnetic spectrum and is usually expressed as bandwidth. Bands can be broader or narrower depending on the sensor, and their size or width is expressed in length units, usually nanometres or micrometres. However, spectral resolution is sometimes expressed as the number of spectral bands measured by the sensor. For example, the Sentinel-2 multispectral instrument measures up to 13 bands with bandwidths ranging from 15 nm to 175 nm. Radiometric resolution has to do with how finely an imaging sensor records the different levels of brightness in each band. The available levels depend on the radiometric resolution and are expressed in bits, in a similar way to A/D converters. The radiometric resolution of Sentinel-2 is 12 bit, allowing up to 4096 levels to be recorded.
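The relation between bit depth and the number of recordable brightness levels is simply a power of two, as the short sketch below illustrates.

    # Number of brightness levels available for a given radiometric resolution
    # (bit depth): 2 raised to the number of bits.
    for bits in (8, 10, 12, 16):
        print(f"{bits}-bit -> {2 ** bits} levels")
    # A 12-bit sensor, such as Sentinel-2, can record up to 4096 levels per band.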

1.4.4 Global Navigation Satellite Systems

Global navigation satellite system (GNSS) receivers are not sensors in the strict sense, but in PA they are strongly connected to sensors because of the importance of obtaining both the data on the measured property and the location of the measurement. They crop up many times throughout this book; therefore, we consider them here.

First, notice that the general term should be GNSS receiver rather than GPS receiver, as the latter refers to only one of the four navigation satellite systems currently available globally. Some years ago, most receivers in the western hemisphere would certainly have been GPS only, but that is no longer the case as most professional receivers are currently compatible with GPS (USA), GLONASS (Russia), Galileo (Europe) and BeiDou (China). GNSS receivers provide location, navigation and, last but not least, timing. The time obtained from GNSS receivers is one of the most accurate times available to common users and can be used to synchronize the readings of several sensors scattered across wide areas.

What is important when choosing and using a GNSS receiver is the accuracy required to register sensor data with their location. Stand-alone receivers may obtain coordinates with errors of up to several metres. Ground-based and satellite-based augmentation systems are available that provide sub-metre to centimetre accuracy and precision, with service fees ranging from free to several hundred euros per year.

Another important aspect is how to include coordinates properly in technical documents and presentations. It is important to clarify that any coordinates should be accompanied by their units and by a reference that identifies them uniquely on the Earth. That is done by adding to the coordinate tuple the coordinate reference system (CRS) to which they refer. A CRS is a coordinate system that is referenced through a datum to the Earth (OGP, 2012). There are several CRS sub-types that depend on how the Earth’s curvature is dealt with. The most common in PA are geographic and projected CRS sub-types. The former uses latitude and longitude expressed in angular units to locate geographic features on Earth. Height is usually given relative to a reference ellipsoid (ellipsoid height) or, if available, to a geoid (orthometric height, equivalent to expressing heights above mean sea level). Projected CRSs are based on geographic ones but use a projection to transfer curved coordinates to a plane. The most widely used projection in PA is the universal transverse Mercator (UTM), in which locations are expressed as Cartesian coordinates (x, y, z) in metres within zones of a specific UTM grid, where columns are denoted by numbers and rows by letters. To make the coordinates unambiguous, the datum specifies the mathematical model of an ellipsoid to represent the earth and the prime (zero) meridian to which longitudes are referred. Thus, when using the GPS system, coordinates provided by receivers are expressed in a geographic CRS sub-type with the WGS84 datum; the correct way to cite them would be as follows, always including the datum:

  • 0.596158° E, 41.629470° N, WGS84 (using dd.dddddd° for longitude and latitude)

  • 0° 35.76948′ E, 41° 37.7682′ N, WGS84 (using dd° mm.mmmmm′)

  • 0° 35′ 46.1688″ E, 41° 37′ 46.092″ N, WGS84 (using dd° mm′ ss.ssss″)
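The conversion between these three ways of writing the same geographic coordinates is a matter of simple arithmetic. A minimal sketch (valid for the non-negative values used here) is given below.

    # Convert decimal degrees to degrees-minutes and degrees-minutes-seconds,
    # reproducing the formats used in the examples above.
    def dd_to_dms(dd):
        degrees = int(dd)
        minutes_float = (dd - degrees) * 60
        minutes = int(minutes_float)
        seconds = (minutes_float - minutes) * 60
        return degrees, minutes, seconds

    for value in (0.596158, 41.629470):   # longitude and latitude of the example
        d, m, s = dd_to_dms(value)
        print(f"{value}° = {d}° {m + s / 60:.5f}′ = {d}° {m}′ {s:.4f}″")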

Care is required with augmentation systems because they sometimes convert the coordinates to a local CRS. For example, the official datum in Europe is ETRS89 and that in the USA is NAD83, with either geographic or projected coordinates. In geographic information system software, it is usual to refer to the CRS using EPSG codes (OGP, 2021). Each code specifies the CRS, including its sub-type and the datum. For example, the EPSG code 4326 represents geographic coordinates for the WGS84 datum, and the EPSG code 25831 represents projected UTM coordinates for the ETRS89 datum in zone 31.

When projected CRS sub-types are used, it is important to specify what the projection is and the zone that the coordinates refer to. In the UTM projection, the earth’s surface is divided into a grid with 60 columns (numbered from 1 to 60) and 24 rows (lettered from south to north, A to Z, omitting I and O). There are singularities around the north and south poles and in other areas regarding rows and columns. Thus, with projected coordinates in an official project in Europe, the previous coordinates should be cited as follows:

  • x or easting: 299761 m, y or northing: 4611429 m, UTM 31 T, ETRS89

where 31 T is the UTM grid zone. It is important to include the UTM zone because the same x or easting coordinate could occur in any of the 60 × 24 cells of the UTM grid. The coordinates of the examples define the location of the School of Agrifood and Forestry Science and Engineering of the Universitat de Lleida, in Lleida, Catalonia.
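If a coordinate transformation library is available, the conversion between the geographic and projected CRSs cited above can be scripted. The sketch below uses the pyproj library and the EPSG codes mentioned earlier; the exact easting and northing obtained depend on the transformation applied between the WGS84 and ETRS89 datums, so the result should only be expected to match the values above to within about a metre.

    from pyproj import Transformer

    # Geographic WGS84 coordinates (EPSG:4326) of the example location,
    # given as longitude and latitude in decimal degrees.
    lon, lat = 0.596158, 41.629470

    # Transform to projected ETRS89 / UTM zone 31N coordinates (EPSG:25831).
    # always_xy=True makes the input order (longitude, latitude) and the
    # output order (easting, northing).
    transformer = Transformer.from_crs("EPSG:4326", "EPSG:25831", always_xy=True)
    easting, northing = transformer.transform(lon, lat)

    print(f"{easting:.0f} m E, {northing:.0f} m N (UTM 31, ETRS89)")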

1.4.5 Concepts Related to Metrology

In any book related to sensors and their applications there should be some reference to metrology and how to record and express properly the data sensors produce. In addition to the above definitions of resolution, there are some other basic concepts on sensing to be considered. In this section, we have extracted concepts from the International Vocabulary of Metrology, hereafter VIM (BIPM, 2012) and from the International System of Units brochure, hereafter SI (BIPM, 2019). Both documents are published and updated regularly by the Bureau International des Poids et Mesures (BIPM), which is an international organization with 63 member states and 40 associate states and economies (in January 2021) working cooperatively on measurement science and standards.

Sensing systems provide users with magnitudes of specific properties of a phenomenon, object or substance. The properties are represented by quantities and their magnitudes can be expressed with a number and a reference, usually a measurement unit. At this point, the authors would like to encourage the use of the International System of Units to ease the scientific use and communication of data in technical or scientific publications (BIPM, 2019).

According to the VIM (BIPM, 2012), the term sensor refers to one of the elements in a measuring instrument or measuring system. In this book, terms such as measuring instrument, measuring system, sensor, transducer and sensing system are used as synonyms for the sake of simplicity and to reach a broader audience.

Once a measuring instrument or system makes a measurement, it provides an indication or reading. Sensor users may be interested in the errors involved and how to minimize them. The error or measurement error is the measurement obtained minus a reference value, resulting in a numerical quantity. The accuracy is the closeness of the measurement to a true value of the quantity being measured. The smaller the measurement error, the more accurate the measurement. However, accuracy is not a quantity and should not be given a numerical value. Another important term is precision. According to the VIM, precision is the closeness of agreement between measurements obtained by replicate measurements on the same or similar objects under specified conditions. Sometimes precision is misused when accuracy is meant; precision represents the dispersion of repeated measurements. Precision is expressed numerically in terms of the standard deviation, variance or coefficient of variation of the data.
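As a small numerical illustration of these terms, the sketch below computes the measurement errors, standard deviation and coefficient of variation for a set of hypothetical repeated readings against an assumed reference value.

    import statistics

    readings = [20.4, 20.6, 20.3, 20.5, 20.7]   # hypothetical repeated readings
    reference_value = 20.0                      # assumed reference (standard) value

    # Measurement error of each reading: reading minus the reference value.
    errors = [r - reference_value for r in readings]
    mean_error = statistics.mean(errors)        # systematic offset, here ~0.5

    # Precision: dispersion of the repeated readings.
    std_dev = statistics.stdev(readings)                      # standard deviation
    cv_percent = 100 * std_dev / statistics.mean(readings)    # coefficient of variation

    print(f"mean error = {mean_error:.2f}, s = {std_dev:.3f}, CV = {cv_percent:.1f}%")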

To improve the accuracy of measurements, a good calibration is required. A calibration establishes a relation between the measurements obtained by measuring systems and those provided by measurement standards. Such a relation can be expressed by a calibration diagram or a calibration curve and then used to obtain quantity values from instrument measurements. Once a calibration is obtained, the measuring systems require adjustment to provide prescribed indications corresponding to given values of the quantity to be measured (BIPM, 2012). Calibration and adjustment should not be confused.
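A simple calibration of the kind described above is sketched below; the raw indications and standard values are hypothetical, and a linear model is assumed, although more complex calibration curves may be needed in practice.

    import numpy as np

    # Hypothetical raw sensor indications recorded while measuring a set of
    # reference standards of known value.
    raw_indications = np.array([0.12, 0.35, 0.58, 0.81, 1.02])
    standard_values = np.array([5.0, 15.0, 25.0, 35.0, 45.0])

    # Fit a linear calibration curve (standard value as a function of indication).
    slope, intercept = np.polyfit(raw_indications, standard_values, 1)

    def calibrated(indication):
        """Convert a raw sensor indication to a calibrated quantity value."""
        return slope * indication + intercept

    print(calibrated(0.70))   # estimated quantity value for a new raw reading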

Finally, as many of the measurement principles, techniques and systems used in PA are not yet standardized, it is always important to include an accurate and detailed description of the techniques and measurement systems used so that readers can understand the published results better.