8.1 Introduction

As more sensing, perception, and actuation applications emerge in the fields discussed in previous chapters, it is becoming more difficult to manage every aspect manually. The increasing workload is intensified by the labor shortage within several sectors (Taylor et al., 2012; Rye & Scott, 2017), as well as by the increasing demand to feed a growing population. To implement new technologies, farmers therefore need to rely on autonomous platforms to carry out the tasks. We define an autonomous platform in an agricultural setting as a robot that carries out operations without manual intervention, often used to automate repetitive, hazardous, and/or simple operations to make the agricultural task more convenient for the human being. Carrying out operations without manual intervention requires the system to meet two basic autonomy principles: autonomous navigation and autonomous manipulation. This chapter discusses the first in more detail, as the latter builds on an autonomously navigating platform when striving for full autonomy.

For autonomous navigation to be possible, the system needs to be aware of its surroundings in several ways. Firstly, knowledge of the platform's location is crucial for the overall task, so that the platform can make decisions based on its position. Secondly, being able to perceive the local environment is of great importance to avoid obstacles such as trees, vines, other objects, and people. Additionally, the system needs to be able to make decisions based on the perceived environment, thus reducing or eliminating the need for manual intervention.

In short, although it is possible to know and document the exact planting locations of trees and vines with high precision and low uncertainty, plants grow naturally. An autonomous system therefore needs to base its actions on the actual state of its surroundings to avoid obstacles such as branches and to reduce potential damage to plants, crops, and the robot itself.

This chapter starts with a section on sensing, which explains the systems needed for positioning purposes and other sensing capabilities found in agricultural robots. Section 8.3 discusses decision-making algorithms and how data processing is carried out. Section 8.4 covers the control systems that regulate platform behavior, and Sect. 8.5 is dedicated to the path planning and optimization architectures that guide robotic platforms on a higher level. This is followed by an overview of the necessities for fleet operation in Sect. 8.6. Section 8.7 presents examples of existing commercial and emerging technologies. Finally, concluding remarks are given in Sect. 8.8.

8.2 Sensing

Agricultural autonomous platforms are designed to move themselves and their attached equipment to certain positions to carry out tasks. This means these systems need to know their exact position and understand their environment before being able to make decisions. This section discusses the different sensing techniques used within autonomous platforms and is structured to discuss coarse sensing first and precision sensing last.

8.2.1 Absolute Positioning

To position themselves, autonomous platforms generally comprise a geospatial positioning system, often consisting of a global navigation satellite system (GNSS) receiver that interprets GPS, Galileo, or other satellite positioning data. GNSS receivers work by trilateration, combining the distances measured to multiple satellites to solve for position. Unfortunately, regular GNSS data only allows for a positioning accuracy of about 2–4 m, which can be sufficient for (autonomous) cars on a fixed road network but, depending on the application, is often too coarse for precision actuation on crops. Distances between vineyard rows can be as small as 50 cm, which requires higher navigation accuracy than orchards with larger spaces between the trees. To overcome this shortcoming, some studies increase accuracy using object detection and local sensing methods (García-Pérez et al., 2008). These methods are discussed in Sects. 8.2.2 and 8.2.3.
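
To make the trilateration principle concrete, the sketch below solves for a receiver position from a handful of satellite positions and measured distances using Gauss-Newton least squares. It is a minimal illustration under simplifying assumptions, not a GNSS implementation: real receivers work with pseudoranges and additionally solve for the receiver clock bias, and the function names here are hypothetical.

```python
import numpy as np

def trilaterate(sat_positions, ranges, guess=None, iters=10):
    """Estimate a receiver position from satellite positions (m) and
    measured distances (m) via Gauss-Newton least squares.
    Real receivers use pseudoranges and also solve for clock bias."""
    x = np.zeros(3) if guess is None else np.asarray(guess, float)
    for _ in range(iters):
        diffs = x - sat_positions              # vectors satellite -> receiver
        dists = np.linalg.norm(diffs, axis=1)  # predicted ranges
        residuals = ranges - dists             # measurement mismatch
        J = -diffs / dists[:, None]            # Jacobian of residuals w.r.t. x
        x = x + np.linalg.lstsq(J, residuals, rcond=None)[0]
    return x

# Toy example: four satellites at made-up positions and noise-free ranges
sats = np.array([[15e6, 0, 20e6], [0, 15e6, 20e6],
                 [-15e6, 0, 20e6], [0, -15e6, 20e6]], float)
truth = np.array([1000.0, 2000.0, 0.0])
meas = np.linalg.norm(sats - truth, axis=1)
print(trilaterate(sats, meas))                 # ~ [1000, 2000, 0]
```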

A more generalized approach to improving accuracy is GNSS augmentation. Satellite-Based Augmentation Systems (SBAS), like Europe's EGNOS, or Ground-Based Augmentation Systems (GBAS), like Differential GPS (DGPS), can typically reduce positioning errors to below 1 m and, in favorable conditions, to 2–5 cm. An example of such a technology is Real-Time Kinematic (RTK) positioning, widely applied in commercial applications. This method falls under Observation Space Representation (OSR) technologies and relies on the user sending its approximate location to a processing station, which compares the measurement with those from base stations at known positions and sends a corrected position back to the user. Studies like Garrido et al. (2015, 2019) and Bengochea-Guevara et al. (2018) rely on this technology for accurate positioning. Nevertheless, this approach requires proximity to a base station (typically within 30–40 km) to assure high accuracy, as well as two-way communication.

Specific approaches aim to lower the necessity for two-way communication and proximity to base stations by using State Space Representation (SSR) methods (Wabbena et al., 2005; Wang et al., 2018). SSR also uses base stations but uses their measurements to model the disturbances over an entire area and sends this correction model to the user.

Another way to improve accuracy is dead reckoning. This approach computes the current location by taking a previously known location (and orientation) and incrementing it with known or estimated speeds over the elapsed time. The term odometry is also often used, which describes using motion sensors to estimate a change of position over time. A widely applied sensor is the inertial measurement unit (IMU), a composite sensor comprising accelerometers, gyroscopes, and sometimes magnetometers (or compasses). Typically, an IMU has one of each sensor per vehicle axis to measure changes in any direction. Other solutions use encoder data obtained from the wheels or separate accelerometers, gyroscopes, and compasses. Studies such as Lan et al. (2019) use the data from these sensors to improve accuracy or to reduce the amount of GNSS data required. Note that when using dead reckoning, errors accumulate over time; hence, regular inputs of reliable positioning data are necessary to maintain an accurate position. Here too, other local sensing methods can be introduced to keep the errors low and the estimate reliable (Yang et al., 2020).
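
A minimal dead-reckoning step might look like the sketch below, which propagates a 2D pose using a wheel-encoder speed and a gyroscope yaw rate. The function and parameter names are illustrative assumptions; in practice, the accumulated drift is reset whenever a reliable GNSS fix arrives.

```python
import math

def dead_reckon(x, y, heading, v, omega, dt):
    """Propagate a 2D pose one time step from wheel-encoder speed v (m/s)
    and gyroscope yaw rate omega (rad/s). Errors accumulate over time,
    so the pose should be reset whenever a reliable GNSS fix arrives."""
    heading += omega * dt
    x += v * dt * math.cos(heading)
    y += v * dt * math.sin(heading)
    return x, y, heading

# Drive a gentle arc at 1 m/s for 10 s, updating at 10 Hz
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = dead_reckon(*pose, v=1.0, omega=0.05, dt=0.1)
print(pose)
```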

8.2.2 Relative Positioning

Another common issue with GNSS signals is that the canopy of the orchard or vineyard and other surfaces (e.g., agricultural vehicles themselves) reflect them and thereby induce extra uncertainty to the measurements (Valbuena et al. 2010). Even though odometry/dead reckoning is one available solution to overcome this by augmenting the available signals, another way is to position oneself relatively to the plants. Relative positioning is defined as the placement of the vehicle with respect to other objects. In the case of agriculture, objects may refer to crops, plants, or the produce but also the ground and human beacons placed for local positioning purposes such as (colored) poles or tags. Studies like Aqel et al. (2016) and Zaman et al. (2019) discuss visual odometry, which mainly focuses on tracking the robot’s motion by using camera images. Other studies focus on object detection to deduce location directly (Azevedo et al., 2019).
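
As an illustration of the visual-odometry idea, the sketch below estimates the camera motion between two consecutive grayscale frames with OpenCV by matching ORB features and decomposing the essential matrix. This is a bare-bones sketch, not the method of the cited studies; note that the recovered translation is only known up to scale, so another sensor (e.g., wheel odometry) must supply metric distance.

```python
import cv2
import numpy as np

def relative_motion(img1, img2, K):
    """Estimate rotation R and unit-scale translation t between two
    grayscale frames by matching ORB features and decomposing the
    essential matrix. Metric scale must come from another sensor."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```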

Furthermore, relative positioning is also used for two broader applications: object detection and avoidance (García-Pérez et al., 2005; Vasconez et al., 2019) and object recognition for precision application purposes (Burgos-Artizzu et al., 2011; Gonzalez-de-Santos et al., 2017). The first aims to assess risks and take actions to minimize them, not only for the autonomous platform itself but also for human operators and the crops. The latter aims to perform the necessary action in a precise location: for example, fruit picking requires the robot to see where the fruit is with respect to its equipment, and a weeding robot should apply herbicide only on the weeds. The sensors used for these applications are discussed in Sect. 8.2.3, whereas their processing and the resulting decision-making are discussed in Sect. 8.3.

8.2.3 Onboard Sensors

As explained in the previous section, autonomous platforms need different types of information to make good decisions. There are many types of sensors available and built into commercial equipment. We will mainly discuss noninvasive sensing techniques, as many invasive ones (like soil and crop sampling) require relatively long processing times and are therefore not suitable for making real-time decisions. The first sensor we will discuss is perhaps the easiest to imagine; however, it is not as easy to implement.

8.2.3.1 Cameras

Briefly summarized, a camera is a device that captures (in our case, visible) light through a lens set and projects it on a photosensitive sensor that captures the intensity values of certain wavelengths. The most common camera is the RGB (red, green, blue) camera, which can be found in most smartphones, but also the larger reflex cameras belong to this type. They are a good way to feed a system with the information that we humans are used to obtaining with our eyes. However, until recently, it was computationally very expensive to process this data into useful information. Current machine and deep learning techniques give us a digital way of mimicking brain-learning functions, thus making it possible to make images understandable to robots.

Studies such as the ones by Gottschalk et al. (2008) and Burgos-Artizzu et al. (2011) propose real-time image processing techniques, and others (e.g., Howarth et al., 2010; Morellos et al., 2016) propose machine learning techniques to identify mature crops and soil composition, respectively.

An interesting possibility is 3D reconstruction using photogrammetry. When multiple pictures are taken from different perspectives, depth information can be extracted and used to the advantage of the system; the changing perspective of a platform in motion can provide exactly this. Studies such as Westoby et al. (2012) and Comba et al. (2018) propose this type of technology (see Fig. 8.1).

Fig. 8.1
3D reconstruction of a vineyard (left) and adapted view of the data (right). (From Comba et al., 2018)

8.2.3.2 LiDAR and Other 3D Imaging Techniques

LiDAR, or Light Detection and Ranging, sensors function similarly to radar and measure the distance to any object within the range of their light source. Instead of radio waves, LiDAR emits light of a certain wavelength in a specific direction and measures the time for the signal to come back. By doing so in many directions sequentially, it maps its environment as a so-called point cloud, which can then be converted into a 3D reconstruction of the environment of the autonomous platform. This type of sensing is robust for outdoor use because it carries its own light source, but it can be more costly to operate.
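
The conversion from raw time-of-flight measurements to a point cloud is essentially a polar-to-Cartesian transform, as the minimal sketch below illustrates; the names and frame conventions are assumptions, not a particular vendor's API.

```python
import numpy as np

def scan_to_points(ranges, azimuths, elevations):
    """Convert time-of-flight ranges (m) and beam directions (rad) into
    an (n, 3) Cartesian point cloud in the sensor frame. Each range is
    itself derived from the echo time: r = c * t_flight / 2."""
    r = np.asarray(ranges, float)
    az = np.asarray(azimuths, float)
    el = np.asarray(elevations, float)
    x = r * np.cos(el) * np.cos(az)
    y = r * np.cos(el) * np.sin(az)
    z = r * np.sin(el)
    return np.column_stack((x, y, z))
```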

LiDAR data (as depicted in Fig. 8.2) can be useful for a variety of applications, from phenotyping (French et al., 2016) to regular 3D reconstruction of the plants (Garrido et al., 2015) or combinations thereof (Sankey et al., 2017).

Fig. 8.2
Example of LiDAR point clouds. (From Sankey et al., 2017)

While LiDAR remains one of the most widely used sensing technologies for 3D imaging, there are other options, as explained in Vázquez-Arellano et al. (2016). One interesting sensor is the Microsoft Kinect v2 sensor, used extensively in scientific research such as Bengochea-Guevara et al. (2018) to reconstruct vineyard rows or Rosell-Polo et al. (2017) for a more generalized approach. The Kinect v2 sensor is the second-generation sensor initially designed for the Microsoft Xbox gaming system, which uses an infrared laser projector to project a pseudo-random pattern of dots. An infrared camera is placed near the projector. The sensor uses triangulation for each dot between the expected position and the perceived position to infer the distances of the objects in the projected field of view. This typically results in renderings like the one depicted in Fig. 8.3.
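
The underlying geometry is the same similar-triangles relation used in stereo vision: a projected dot that appears displaced by some disparity maps to a depth proportional to the focal length and the projector-camera baseline. The sketch below is a minimal illustration with made-up parameter values, not the Kinect's actual calibration.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Similar-triangles triangulation: a dot displaced by disparity_px
    from its expected position lies at depth f * b / d."""
    return focal_px * baseline_m / disparity_px

# Illustrative numbers only (not the sensor's real calibration):
# 365 px focal length, 7.5 cm projector-camera baseline, 6 px shift
print(depth_from_disparity(365.0, 0.075, 6.0))   # ~4.6 m
```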

Fig. 8.3
Example of 3D reconstruction of vineyard row using data from Kinect v2 sensor. Left, RGB image; middle, depth information; right, 3D reconstruction. (From Bengochea-Guevara et al., 2018)

8.2.3.3 Hyperspectral and Infrared Imaging

Hyperspectral sensing may refer to collecting information within the electromagnetic spectrum but outside the visible light range. In general, hyperspectral sensors can be seen as specialized cameras whose sensors are sensitive to wavelengths outside the visible spectrum. As discussed in Hartel et al. (2015), current applications range from quality and safety inspections of foods and produce to plant quality evaluations, such as phenotyping (Sankey et al., 2017) or nitrogen mapping within the plants (Yu et al., 2014). The latter application might greatly influence the choices a system makes as to where in a field it needs to go next.

Infrared sensing, effectively a subcategory of hyperspectral sensing, has important uses within agriculture on its own, as it can detect live vegetation via the Normalized Difference Vegetation Index (NDVI). This makes it possible to distinguish plants from soil faster and more easily, which can be used to avoid obstacles, as explained by Hamuda et al. (2016). Future applications might use the infrared spectrum to detect humans and improve safety measures, as shown in Aspiras et al. (2018).
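
NDVI itself is a simple per-pixel ratio of near-infrared and red reflectance, as sketched below; the vegetation threshold shown is an illustrative assumption that would need calibration per sensor and crop.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Per-pixel Normalized Difference Vegetation Index: live vegetation
    reflects strongly in near-infrared, pushing values toward +1."""
    nir = np.asarray(nir, float)
    red = np.asarray(red, float)
    return (nir - red) / (nir + red + eps)

def vegetation_mask(nir, red, threshold=0.4):
    """Boolean plant/soil mask; the 0.4 threshold is illustrative and
    should be calibrated per sensor, crop, and lighting conditions."""
    return ndvi(nir, red) > threshold
```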

8.2.3.4 Other Sensing Techniques

Other sensing techniques exist, but many of them are not as popular or have less potential than those discussed before. This section discusses these technologies and applications, which are less common but interesting.

IMU

Although an inertial measurement unit (IMU) is a sensor most generally used for odometry and dead reckoning purposes (as explained in Sect. 8.2.1), this section briefly discusses other potential uses for IMUs. An IMU consists of accelerometers, gyroscopes, and, optionally, magnetometers to measure orientation changes. It can be used to reduce the uncertainty of the current position by using linear acceleration and rotational rate measurements to estimate the change in position since the last known location.

Besides its primary use, an IMU may also be used to detect obstacles, as it will detect a crash or slipping of the wheels if the vehicle is stuck somewhere (Cismas et al., 2017; Xiong et al., 2019). It could also indicate rough terrain and, therefore, can be used to inspect certain areas that might have changed due to animal activity.

Ultrasound

Ultrasound is sound with a frequency above the upper audible limit of human hearing. Although ultrasound is a powerful tool within agriculture in the battle against bacteria and other microorganisms (Gordon, 1963), ultrasonic proximity sensors have also been employed in many robotic applications and are finding their way into agricultural platforms (Tang et al., 2011). These sensors rely on echolocation, which uses the same concept as radar and LiDAR: inferring the position of objects from the time difference between the emitted signal and the perception of its echo.
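
The range calculation is straightforward: the echo's round-trip time is halved and multiplied by the speed of sound, which itself drifts with air temperature. A minimal sketch (function names are illustrative):

```python
def echo_distance(round_trip_s, temp_c=20.0):
    """Range from an ultrasonic echo: halve the round-trip time because
    the pulse travels out and back; the speed of sound varies with air
    temperature (roughly 331.3 + 0.606 * T m/s)."""
    speed_of_sound = 331.3 + 0.606 * temp_c
    return speed_of_sound * round_trip_s / 2.0

print(echo_distance(0.006))   # ~1.03 m for a 6 ms echo at 20 degrees C
```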

Physical Sensors

Although most studies aim for noninvasive sensing techniques, physical switches and buttons are often implemented as failsafes. Such sensors are often used as proximity sensors to make sure that obstacles missed by other sensors are still detected, albeit later than regular operation would require, or as safety switches intended to guarantee the safety of operators.

8.3 Decision-Making and Data Processing

After collecting a multitude of different sensor measurements, an autonomous platform needs to make decisions based on this information. This section highlights the two main categories of decisions an autonomous platform must make, namely, decisions relating to safety and to task planning. This is followed by a section on how data may be processed to support these decisions.

8.3.1 Decision-Making

8.3.1.1 Safety

Safety-related decisions are those made to mitigate risks of damage. Possible danger to humans, the robot itself, and/or the crops falls into this category. An example would be halting dangerous movements, or slowing down or interrupting other movements, when a robot crosses paths with a human.

As Vasconez et al. (2019) stated, most human-robot accidents are caused by human error. Therefore, a major factor in reducing the number and severity of accidents is eliminating and mitigating the risks involved in human-robot interactions (HRIs). It is important that safety signals, and the decisions derived from them, can overrule task planning decisions. Studies such as García-Pérez et al. (2005), Cherubini et al. (2016), and Pereira and Althoff (2018) propose predicting and adapting to potential risks to mitigate possibly dangerous situations.

8.3.1.2 Task Planning

Task planning decisions are made when considering the best approach to carry out a specific task. Decisions on how to avoid fixed obstacles and path planning algorithms fall into this category. Also, approaches combining multiple sensor inputs to reduce errors, as done in García-Pérez et al. (2008), belong here.

Task planning decision-making is important so that the use of energy and resources can be optimized. For example, a weed-detecting algorithm with many false positives will carry out weeding in places that do not require treatment, and a planned route that moves around a small stone might use more energy than driving over it. These parameters need rigorous tuning for robotics to be feasible within agriculture.

Task planning can be divided into multiple categories, where overall planning is discussed in more detail in Sect. 8.4, whereas fleet coordination and planning are explained in Sect. 8.6. The remaining planning tasks can be carried out locally and consist of the movement of the autonomous platform to place the application device in the right spot for treatment. Those can range from end-effector or gripper placement, an important task for applications that require flexibility, such as trimming (Kaljaca et al., 2019) or harvesting (Bac et al., 2014), to vehicle motion for applications such as spraying (Conesa-Muñoz et al., 2016c) or monitoring (GRAPE, 2020).

8.3.2 Data Processing

To make good decisions, the data needs to be interpreted. This means that irrelevant data is discarded and the relevant information is extracted. A good example is the data from depth sensors such as LiDARs.

Depending on the application, it is not necessary to know the exact shape of the objects in the direct vicinity of the platform; an approximate shape and location are often enough. This type of data refinement typically results in lower data density but higher information value.
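
A common refinement of this kind is voxel downsampling, sketched below: all points that fall into the same small cube are replaced by their centroid, shrinking the cloud while keeping approximate shape and location. The 5 cm voxel size is an illustrative assumption.

```python
import numpy as np

def voxel_downsample(points, voxel=0.05):
    """Replace every point that falls into the same cubic cell by the
    cell centroid: lower data density, similar information value."""
    points = np.asarray(points, float)
    keys = np.floor(points / voxel).astype(np.int64)   # cell index per point
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    sums = np.zeros((inverse.max() + 1, 3))
    np.add.at(sums, inverse, points)                   # sum points per cell
    counts = np.bincount(inverse).astype(float)
    return sums / counts[:, None]                      # centroids per cell
```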

An example of data refinement is carried out in Digumarti et al. (2018), in which a model is proposed to segment the data into branches and leaves. This can then be used for decision-making, plant monitoring, and/or obstacle avoidance.

In many cases, the information derived from the sensors is stored in databases for future reference. Saving this information with respect to the location in the field and subsequently superposing it on a map of the field is an intuitive way of visualizing it. Studies such as Comba et al. (2018) and Jiang et al. (2019) produce maps similar to those shown in Fig. 8.1. Besides being intuitive for the user to understand and see the field's current status, having this information available per location makes it possible to make local decisions. An autonomous vehicle can potentially base its decision not only on what is perceived currently but also on the history of sensor and actuation information. An example would be sensing that a plant needs fertilizer but refraining from applying it because a dose was given on the previous pass.

8.4 Control Systems

Control systems are the techniques used to manage and regulate the behavior of a device; in essence, robotics is applied control systems. Widely used control setups are closed-loop systems. These take inputs from sensors, compare the measured values against some reference or planned signal to obtain the current error, and aim to reduce that error through the design of the controller.
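
The classic closed-loop example is the PID controller sketched below, which turns the current error, its accumulation, and its rate of change into a control output. The gains and the row-following use case in the usage line are illustrative assumptions, not values from the cited work.

```python
class PID:
    """Minimal closed-loop controller: the output is driven by the
    current error, its accumulation, and its rate of change."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, reference, measurement, dt):
        error = reference - measurement          # compare against the plan
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

# E.g., steering toward a planned row line (gains are illustrative)
steering = PID(kp=1.2, ki=0.05, kd=0.3)
command = steering.update(reference=0.0, measurement=0.15, dt=0.1)
```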

Many platforms already include low-level control interfaces for some electrical components, such as engine, powertrain, or brake control modules. Therefore, most autonomous vehicles have a central computing unit that makes high-level decisions and sends more abstract commands to the interfaces of the specific components. Instead of measuring deceleration and using feedback control to adapt the force on each of the vehicle's brakes, the system can simply decide to brake, and the brake control module will take care of the rest. This does not mean that no feedback control is needed; on the contrary, most central processing units will be full of it.

Another commonly used approach for the control of autonomous systems is fuzzy control. This field of study is widely used in systems that mimic human behavior, which often cannot be described in a purely binary form. For example, a vehicle's steering, braking, and accelerating are typically not performed in a binary or discrete way (either not braking or fully braking) but in a more analog way (braking a little or braking more). The concepts of fuzzy logic make it possible to control vehicles in such a way and make the programming logic more understandable for humans. Applications vary from generic autonomous navigation (Mohammadzadeh & Taghavifar, 2020) to specific agricultural tasks (Bengochea-Guevara et al., 2016). Other studies aim to reduce the error of navigation control systems by using extra information, ranging from low-cost IMUs (Si et al., 2019) to visual odometry (Zaman et al., 2019).
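
To make the "braking a little or braking more" idea concrete, the toy controller below blends two fuzzy sets ("near" and "far") into a continuous brake command instead of an on/off decision. The membership shapes and the 5 m threshold are illustrative assumptions, far simpler than the cited controllers.

```python
def fuzzy_brake(distance_m):
    """Toy fuzzy controller with two sets, 'near' and 'far'. Instead of
    a binary brake/no-brake choice, the rule outputs are blended by the
    degree of membership (a crude centroid defuzzification)."""
    near = max(0.0, min(1.0, (5.0 - distance_m) / 5.0))  # 1 at 0 m, 0 beyond 5 m
    far = 1.0 - near
    brake_hard, coast = 1.0, 0.0                         # rule consequents
    return near * brake_hard + far * coast               # command in [0, 1]

for d in (0.5, 2.5, 6.0):
    print(d, fuzzy_brake(d))   # 0.9, 0.5, 0.0
```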

8.5 Path Planning and Optimization Systems

Although many aspects can and should be computed in real time to allow for the proper functioning of robotic systems, others cannot. These encompass planning and optimization systems, as they typically involve (NP-hard) problems that cannot be solved in relatively short times.

Although it might look easy at first, route planning becomes more difficult once more variables are considered. Examples of extra variables are the number of vehicles, the size of each vehicle's fuel tank or battery, the location of the refueling or charging point, and the turning radius of each vehicle. All of these affect the optimal path. Studies such as Conesa-Muñoz et al. (2016a, b) propose ways to improve current algorithms and take these variables into account (Fig. 8.4); a minimal illustration of how such constraints reshape a route follows the figure.

Fig. 8.4
Example of results of two path planning optimization algorithms, with total distances of 7902 m (a) and 7661.6 m (b). (From Conesa-Muñoz et al., 2016a)
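
As a toy illustration of constraint-aware route planning, the greedy sketch below orders field rows by nearest-neighbor distance while returning to the depot whenever the remaining battery range would not cover the next leg plus the trip home. It is deliberately suboptimal and much simpler than the cited algorithms; all names and the coordinate model are assumptions.

```python
import math

def plan_rows(depot, rows, battery_range_m):
    """Greedy coverage sketch: head to the nearest unvisited row
    midpoint, but return to the depot to recharge whenever the remaining
    range would not cover the next leg plus the trip home."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    route, pos, remaining = [depot], depot, battery_range_m
    todo = list(rows)
    while todo:
        nxt = min(todo, key=lambda p: dist(pos, p))
        if dist(pos, nxt) + dist(nxt, depot) > remaining:
            if pos == depot:
                raise ValueError("row unreachable on a full charge")
            route.append(depot)                  # recharge stop
            pos, remaining = depot, battery_range_m
            continue
        remaining -= dist(pos, nxt)
        route.append(nxt)
        pos = nxt
        todo.remove(nxt)
    return route + [depot]

# One recharge stop appears mid-route with these illustrative numbers
print(plan_rows((0, 0), [(0, 60), (20, 60), (40, 60), (60, 60)], 200))
```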

In some orchards, when there is enough space and no irrigation infrastructure between the trees in a row, optimization can be taken a step further, because vehicles can change the paths they take within the field by maneuvering between the trees. In contrast, in typical vineyards this is impossible, as the vines are arranged in fixed rows. This is especially interesting if treatment is not necessary in all regions, as can be the case when treating weeds.

Another emerging field is water use optimization, as carried out by Zhang and Guo (2016), which aims to reduce total water consumption.

8.6 Fleets

As briefly mentioned in the path optimization section, systems comprising multiple platforms exist and are becoming more prominent in several studies (e.g., Conesa-Muñoz et al., 2016a, b, c; Gonzalez-de-Santos et al., 2017). Fleets of robotic systems are beneficial because they allow for a reduction in vehicle size as well as an increase in efficiency and redundancy; as such, they can reduce soil compaction and downtime. Fleet management strategies can be divided into two main categories, namely, centralized and decentralized decision-making, both of which have pros and cons, as discussed in De Ryck et al. (2020). Both are explained in more detail below.

8.6.1 Centralized Fleet Management

Centralized fleet management refers to a fleet of multiple robots managed from one (external) location, which we will call “the manager.” The platforms will need (semi-)continuous communication with the manager to share the collected knowledge and obtain new tasks. The manager, in this case, has an overview of the entire operation and can make decisions accordingly. For example, when one vehicle encounters an area needing a certain treatment, the correct vehicle can be sent there using an optimal route and making sure none of the vehicles collide in the act.

The advantages of these systems are that one entity has all the information, which makes it easy to document and log the carried-out tasks, and that the overview is kept in one place, which simplifies testing and verification. Another advantage is that the manager can consider every vehicle when optimizing tasks throughout the entire fleet. As a result, it is easy for the farmer to track the overall progress and forecast the remaining time.

This strategy, however, also has some disadvantages, which mainly lie in the scalability of the system. Increasing the number of vehicles in the fleet greatly impacts the optimization software, which generally needs exponentially more time to find an optimal solution as the number of vehicles grows. Such strategies will therefore often favor optimization algorithms that generate known good solutions instead of optimal ones, as a trade-off against the time needed to compute optimal solutions. Another slight disadvantage is that the entire plan must be recomputed after any unexpected change.
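
A minimal centralized manager might look like the sketch below: with global knowledge of every vehicle and task, it repeatedly commits the cheapest remaining vehicle-task pair. This is exactly the kind of "known good rather than optimal" greedy trade-off mentioned above; the data model and names are assumptions.

```python
import math

def assign_tasks(vehicle_pos, task_pos):
    """Centralized manager sketch: with global knowledge of all vehicles
    and tasks, repeatedly commit the cheapest remaining vehicle-task
    pair. Greedy, so a known good plan rather than an optimal one."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    free = dict(vehicle_pos)          # vehicle id -> position
    todo = list(task_pos)             # task positions still unassigned
    plan = {}
    while free and todo:
        vid, task = min(((v, t) for v in free for t in todo),
                        key=lambda vt: dist(free[vt[0]], vt[1]))
        plan[vid] = task
        del free[vid]
        todo.remove(task)
    return plan

print(assign_tasks({"A": (0, 0), "B": (100, 0)}, [(90, 10), (5, 5)]))
# {'A': (5, 5), 'B': (90, 10)}
```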

Examples of studies using centralized control are Doering et al. (2014) and Barrientos et al. (2011), in which fleets of aerial vehicles are controlled from a single location and where global optimization is carried out.

8.6.2 Decentralized Fleet Management

Decentralized fleet management refers to the robots within the fleet making decisions on their own, based on their perceived environment and on communication with nearby vehicles. These vehicles will, in general, compute and follow suboptimal routes; any loss in efficiency can be compensated by adding more vehicles. In general, it cannot be guaranteed that the chosen paths will not cause longer nonproductive paths. The computed solutions will also be more myopic than those computed by centralized management, because the future states of the entire system are not known.

A major advantage of decentralized systems is that they are easily scalable, as none of the nodes of the system requires a high computational load. Also, due to the myopic choices, errors and unexpected changes are mitigated easily and do not affect the system as much. Disadvantages include the lack of central knowledge and, therefore, of easy forecasting and tracking methods. However, this can be improved by communicating with a central dispatcher, which enables documenting and logging the carried-out tasks and generating the desired overview. Studies in decentralized control mainly focus on the scalability of the controller (Ju & Son, 2018) and the flexibility of the vehicle behavior (Franchi et al., 2011).
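
As a contrast to the centralized sketch above, the sketch below runs on each robot individually: the robot bids its own travel cost for announced tasks and claims a task only when no nearby peer has broadcast a cheaper bid. The message model (`peer_bids`) and all names are assumptions illustrating the principle, not a published protocol.

```python
import math

def local_bid(my_pos, announced_tasks, peer_bids):
    """Decentralized sketch, run on each robot separately: bid your own
    travel cost for every announced task and claim the nearest one for
    which no nearby peer has broadcast a cheaper bid. Only local state
    and neighbor messages are used; there is no central solver."""
    dist = lambda t: math.hypot(my_pos[0] - t[0], my_pos[1] - t[1])
    for task in sorted(announced_tasks, key=dist):
        if all(dist(task) <= bid for bid in peer_bids.get(task, [])):
            return task                 # claim it and broadcast the claim
    return None                         # stay idle until new announcements

tasks = [(90, 10), (5, 5)]
print(local_bid((0, 0), tasks, {(90, 10): [14.1]}))   # -> (5, 5)
```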

8.7 Examples of Existing Technologies

As part of the SPARKLE Project, co-funded by the Erasmus+ program of the European Union, an analysis has been carried out of state-of-the-art robotics within the field of precision agriculture. Part of this analysis showcases existing commercial and emerging technologies; the most relevant ones for orchard and vineyard treatment are outlined in this section, expanded with other research projects and prototypes.

VITIROVER

As part of weed management, Vitirover Solutions (2020) proposes using fleets of robotic lawnmowers to prevent weeds from growing in the first place. Their small, lightweight robot is meant to mow the grass between the rows of trees or plants, thus reducing the use of herbicides, glyphosate in particular. As shown in Fig. 8.5, it is equipped with a solar panel to extend its working range. It is also equipped with GPS to navigate predefined areas and is monitored remotely by a technician. The robot is highly independent in that it does not require on-site human interaction; the important decisions are made remotely by a human operator, which indicates a centralized control strategy.

Fig. 8.5
VITIROVER robotic mower fleets can be used to manage grass and thereby reduce weeds in both orchards (left) and vineyards (right)

Autonomous Orchard Sprayer

The automatic orchard sprayer GUSS (GUSSAG, 2019), shown in Fig. 8.6, is specifically designed to reduce health threats to operators who would otherwise carry out the driving. Furthermore, it allows for fleet operation from a single control location. This type of robot uses a wide variety of sensors to guide it along a precise route while being safe for its environment.

Fig. 8.6
GUSS autonomous orchard mist sprayer

Naïo TED

An interesting example of mechanical weeding is TED (Naïo Technologies, 2020). This robot, shown in Fig. 8.7 and clearly designed for vineyards, can carry various tools for different applications. The main task this robot was designed for is weeding, but prototype tools exist for various other tasks such as blossom thinning, trimming, and spraying.

Fig. 8.7
Naïo TED, a mechanical vineyard weeder

This tool is still experimental to some extent but has a lot of potential due to the possibility of testing new applications while already being of use to farmers. It navigates using RTK GPS and follows a map created using drones beforehand. Although this does not directly count as a fleet, it has the potential to augment and share data from multiple sources, and future heterogeneous fleet implementation is foreseeable.

Vision Robotics Grapevine Pruner

This pruning solution from Botterill et al. (2017) and Vision Robotics Corporation (2019) is currently only a prototype and is awaiting financing to be fully developed. Although the technology mainly focuses on actuation instead of navigation, the finished platform aims to be fully autonomous.

The interesting part of this system is the implementation of artificial perception, as shown in Fig. 8.8, to understand the system’s environment as a regular human would. A finished system could incorporate many other visual cues to understand other aspects, possibly contributing to the vehicle’s autonomy.

Fig. 8.8
Vision Robotics Grapevine Pruning system towed behind an autonomous tractor (left) and the artificial perception of branches (right)

VINBOT

The following system is not a commercial product either; it has been developed by a consortium within the European Union and is especially interesting for its autonomy. VINBOT (2019) is designed as a monitoring vehicle to map and measure critical aspects of the vines.

As shown in Fig. 8.9, the mapping capabilities seem promising, and the specified capabilities include monitoring of water and heat stress, canopy density and color, diseases and nutrient deficiencies, and yield estimations (Lopes et al., 2016).

Fig. 8.9
VINBOT vineyard monitoring platform (left) and its 3D interpretation of its environment (right)

VineScout

Similar to the previous system, VineScout (Saiz-Rubio et al., 2018; VineScout, 2020) was developed within a European Union (H2020) project to monitor and improve yields within vineyards. Figure 8.10 shows the autonomous ground robot, which has been designed, built, and demonstrated in commercial vineyards. VineScout's goal is to provide massive amounts of data such that artificial intelligence techniques based on big data may be applied to build solid models. These models are expected to assist farmers in decision-making about irrigation and harvesting logistics.

Fig. 8.10
VineScout vineyard monitoring platform

Other interesting solutions funded by the European Union are:

  1. Swarm Robotics SAGA (SAGA, 2020), part of the European ECHORD++ program, aims to develop fleets of aerial vehicles to monitor and map the environment using a decentralized control strategy.

  2. TrimBot, supported by the Horizon 2020 program (Hemming et al., 2018; TrimBot, 2020), focuses on producing a flexible plant trimming and cutting robot. It consists of a small autonomous platform and a robotic arm that holds a cutting tool at its end. Because of the robotic arm configuration, the system is not tied to fixed cutting and trimming patterns but can instead base its decisions on each plant.

  3. GRAPE (GRAPE, 2020), another European ECHORD++ project, aims to make a small autonomous robot for vineyard monitoring and protection, as well as a small robotic platform with a robotic arm to perform specific tasks in certain locations.

  4. RHEA (Gonzalez-de-Santos et al., 2017), supported by the 7th Framework program, is a fleet of small, heterogeneous robots – ground and aerial – equipped with advanced sensors, enhanced end-effectors, and improved decision control algorithms, which aims at diminishing the use of agricultural chemical inputs, improving crop quality and health and safety for humans, and reducing production costs. RHEA can be considered a cooperative robotic system.

8.8 Concluding Remarks

While current autonomous platforms are in constant development, many agricultural tasks are starting to reap the benefits of implementing them in practice. Even though most of these platforms are not yet fully industrialized, prototypes and rudimentary versions are being tested and show promising results. Autonomous platforms are especially useful for tackling the problems arising from the decreasing number of both skilled and unskilled workers, while at the same time allowing vehicles to stay small to combat soil compaction issues.

Expectations are high when considering the possibilities to combat current ecological challenges such as global warming and the biodiversity issues in agricultural regions. Autonomous platforms will become increasingly important as trust and knowledge increase, and a couple of specific areas are expected to reap the benefits autonomy brings.

Firstly, even though autonomous tractors are being developed, autonomy can have a larger impact on other areas of agriculture. One important area is the use of fleets, where autonomy serves as a catalyst: without it, fleets of smaller vehicles would be neither sustainable nor economically sensible. It is expected that the market for fleets will make its debut in the coming decade and grow further in the next.

Another area in which autonomy can be of great importance is implements. While navigational autonomy is not yet fully functional, implements can already reap its benefits. Smart implements would rely only on a driver and would be able to carry out their tasks without further human intervention. This intermediate step can greatly increase acceptance as well as the adoption rate.

Lastly, drones, or unmanned aerial vehicles, are expected to gain autonomy and to open an important new market opportunity, namely, data analytics. This field is expected to be of huge importance for developing new technologies, as the choices farmers typically make based on experience can be understood and supported from a data-driven perspective.