Abstract
This chapter reviews approaches for the automation of weed detection. Site-specific plant protection needs to address the varying weed infestation, but the automation is only partially solved and research is still ongoing. Properties for distinguishing plant species, as well as approaches that use them, are presented. The focus is on image-based methods, of which an example is given.
Keywords
- Weed Species
- Weed Infestation
- Grass Weed
- Curvature Scale Space
- Normalised Difference Vegetation Index
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.
1 Introduction
The detection of weeds is the prerequisite for successful site-specific weed management. For a uniform treatment, the average weed infestation level, the weed species composition and the growth stages of weeds and crop have to be known. If the economic weed threshold is exceeded, herbicides or mechanical weed control methods are applied uniformly across the whole field. If the treatment is to vary within a field, the spatial and temporal variation of the weed population needs to be assessed; this is also needed to select and adapt the herbicide mixture. Commonly, the number of weeds per square metre and/or the weed coverage is measured for each species. These data can be used to estimate the expected yield loss and to decide for each part of the field which weed control method is warranted.
Different methods have been proposed to assess the weed infestation within a field. The most common approach is weed scouting by human experts, either the experienced farmer or a consultant. An expert can take the history of the weed infestation over the years into account and focus on the most prominent weed species, which are relevant for the yield loss. Different sampling schemes for the within-field estimation have been used: weed infestation can be measured by regular or irregular sampling. Positions of the sampling points can be determined using a local coordinate system and regular distances between the sampling points, or their coordinates can be measured directly with GPS (Global Positioning System) technology. Most studies used a sampling scheme constrained by the time and manpower available. The effects of different grid sizes and interpolation techniques have been discussed by Backes et al. (2005), Hamouz et al. (2006), and Heijting et al. (2007): many weed patches remained undetected if the grid size exceeded a distance of 15–30 m between the sampling points. An economic evaluation of manual versus automatic sampling was done by Oebel and Gerhards (2005); the estimated costs are about 60€/ha for manual sampling at regularly spaced grid points (8×8 m). The use of a mobile GIS (geographic information system) to map the infestation reduced the costs to 26€/ha.
Since manual weed sampling is too expensive for practice-oriented management, automatic methods to assess the infestation have been developed (Brown and Noble 2005). Automatic weed sampling provides a way to increase the amount of data gathered in the field (smaller sampling intervals) at lower overall costs of 6–11€/ha (Oebel and Gerhards 2005). Sensor technology has already been used to apply herbicides site-specifically, resulting in a 30–70% reduction of herbicide use. The sensor design has to be adapted to the application technology; if small robots are used to manage weeds, the driving speed may be lower than with a boom sprayer.
2 Properties to Distinguish Plant Species
To distinguish plant species from each other, characteristic properties have to be identified that can be measured automatically. Experts identify species by their shape and plant morphology. The location of a plant is also a useful property for distinguishing species: on the large scale there are several habitats, and on the small scale there are locations within a field with a higher probability of occurrence, e.g. at the borders of a field, on certain soil types or between the rows in row cropping systems. In the following sections, useful properties for distinguishing plant species are evaluated.
2.1 Spectral Properties
Intact green plants transform the incoming light by their chlorophyll pigments, which absorb mostly red as well as violet and blue light. Only a fraction of the green and most of the near-infrared light is reflected. The spectral reflectance of plants has a minimum in the visible wavelengths around 650 nm and increases towards the invisible near infrared above 700 nm. The steep part of the curve is called the ‘red edge’ (Fig. 8.1; Guyot et al. 1992). Plant characteristics – chlorophyll content, leaf area index (LAI), biomass and water status, age, plant health (Shafri et al. 2006) – can be derived from the red edge position (REP), usually determined by the position of the turning point (point of maximum slope). The spectral curves of different plants have a similar nonlinear shape, whereas the soil curve in Fig. 8.1 is linear. The local extremes of the plant curves are in the green band (550 nm, maximum), the red band (660 nm, minimum) and the near infrared (750 nm, maximum).
Several spectral indices have been proposed that make use of the different reflectance in the green (G), infrared (IR) and red (R) parts of the spectrum. Ratios or differences of the values at the extremes lead to the highest differences between plants and soil and are therefore useful for discriminating plants against their background. From Fig. 8.1 we can conclude that the highest difference exists between the near infrared and the red spectrum (see also the image example in Fig. 8.4). One important index is the normalised difference vegetation index NDVI (Eq. 1); the values are normalised to the interval [−1, 1], with values near one indicating a high amount of chlorophyll. This index correlates well with biomass and LAI and has been used in remote sensing applications (Godwin and Miller 2003, López-Granados et al. 2006, Reyniers et al. 2006) and with near-range sensors to measure plant biomass production and crop vitality and to forecast crop yield. A few commercial products for weed control with optoelectronic equipment exist that use this spectral information: DetectSpray® (evaluated by Biller 1998) and WeedSeeker® (used by Sui et al. 2008).
Depending on the availability of the measured wavelengths, several indices have been used and compared to identify living plant material against the background (Woebbecke et al. 1995, Meyer and Neto 2008). The soil adjusted vegetation index (SAVI, Eq. 1) introduces a variable L into the formula of the NDVI. L can be used to adjust for the soil component; values near 0 are used for high vegetation cover. Variations of these indices exist; Haboudane et al. (2004) compared several indices for an estimation of the leaf area index. Langner et al. (2006) developed an index called DIRT (difference index with red threshold) to improve the contrast between plants and background in mulched areas (DIRT = sign(β − R) · NDVI, with β = 0.12).
Transforming RGB colour space images into the HSI (hue, saturation, intensity) colour space leaves the brightness in the intensity channel and the colour information in the hue and saturation channels, which can then be used to identify green parts. For standard RGB images the excess green index EGI has proven useful for the enhancement of green plant material in many studies (Rasmussen et al. 2007, Burgos-Artizzu et al. 2008). An example of the EGI is shown in Fig. 8.2. Equation (1) contains the formulae for the most important indices.
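The indices discussed above can be computed per pixel in a few lines. The sketch below, assuming NumPy arrays of reflectance values, follows the standard formulations of NDVI, SAVI and EGI; the function names and the small epsilon guard are implementation choices added here, not part of the original chapter.

```python
import numpy as np

def ndvi(ir, r):
    """Normalised difference vegetation index; values lie in [-1, 1]."""
    ir, r = ir.astype(float), r.astype(float)
    return (ir - r) / (ir + r + 1e-9)   # epsilon guards against division by zero

def savi(ir, r, l=0.5):
    """Soil adjusted vegetation index; choose L near 0 for high vegetation cover."""
    ir, r = ir.astype(float), r.astype(float)
    return (1.0 + l) * (ir - r) / (ir + r + l)

def egi(r, g, b):
    """Excess green index for standard RGB images: EGI = 2G - R - B."""
    return 2.0 * g.astype(float) - r.astype(float) - b.astype(float)
```

Applied to whole image arrays, these functions directly produce the enhanced grey-value images used for the segmentation step described later.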
The spectral reflectance is influenced not only by the plant characteristics but also by the illumination conditions. Atmospheric changes lead on the one hand to different spectral characteristics of the illumination; on the other hand, the amplitudes can vary greatly: direct sun and cloudy conditions differ by a factor of 1,000 or more in the amount of light. Therefore, some approaches use controlled conditions with artificial lighting and exclude the natural illumination, which makes the measurement independent of the external illumination conditions.
Piron et al. (2008) evaluated 22 wavelength bands for weed and crop (carrot) discrimination and found an optimum with three wavelengths at 450, 550 and 750 nm, reaching a classification accuracy of about 65% for carrots and 80% for weeds. They used artificial lighting to reduce the variability of the natural light conditions in the field. Paap et al. (2008) used a line sensor and LED illumination (635, 670 and 785 nm) to distinguish plants from background. Several approaches explored spectrometric properties to distinguish different species. Zwiggelaar (1998) found that the spectral properties alone are not able to discriminate all weed species. In more specific cases the spectral information was successfully used to discriminate weed and crop. Borregaard et al. (2000) used a line-scanning spectrometer with artificial light and succeeded in discriminating plants from soil, as well as crops (sugar beet and potatoes) from three weed species. They used stepwise linear discriminant analysis to select six wavelengths (694, 970, 856, 686, 726 and 897 nm), of which they found the first three able to discriminate the five species with an accuracy of 60% and crop and weeds with an accuracy of 90%. Girma et al. (2005) selected five bands between 515 and 865 nm and ratios of them (515/675, 555/675, 805/815, and 755) to distinguish two weed species and winter wheat under controlled conditions (greenhouse); two trials led to classification accuracies of 64 and 90%. Wang et al. (2001) also selected five wavelengths (496, 546, 614, 676, and 752 nm) and reached 62–86% classification accuracy for the discrimination of nine grouped weed species, soil and wheat. Okamoto et al. (2007) used a spectrometric line sensor with 420 channels of 10 nm width to distinguish sugar beet and four weed species with a success rate of about 75–89% when the data were transformed by a wavelet decomposition and classified using selected wavelet coefficients.
2.1.1 Remote Sensing
Lamb and Brown (2001) reviewed the use of remote sensing (RS) imaging for weed detection. They concluded that the use of remote sensing is limited in general by its low spatial resolution, which does not permit the analysis of weeds on a sub-field scale.
A high infestation level of weeds within patches is accompanied by locally increased biomass production. Early in the season this effect can be used to locate the patches if the weeds germinate earlier than the crop. Backes and Jacobi (2006) explored remote sensing techniques to detect patches of dicotyledonous weeds in sugar beet using the NDVI.
Thorp and Tian (2004) identified the problem that the spectral measurements are mixed signals of soil and plant material. The proposed analysis methods for weed detection have to be improved and developed further to reliably detect different weed species, not only local changes in biomass density. Another problem remains the availability of up-to-date imaging material, since RS sensors need clear-sky conditions (without clouds) and their update intervals might be too long. Later in the season, patches can be identified using RS: López-Granados et al. (2006) used hyperspectral RS to map grass weed infestations in wheat late in the season; their accuracies for grass weed patch detection were about 90%.
2.1.2 Fluorescence
Chlorophyll fluorescence of the plant photosystem is an indicator of the effectiveness of photosynthesis. The fluorescence intensity shows a typical temporal change after saturation of the photosynthetic system with light, called the Kautsky effect. Kautsky functions indicate the health of the plants but can also be used to distinguish species, owing to the different leaf structure and leaf angle of grasses and dicotyledons. The fluorescence effect can be used to distinguish living plants from other objects and may lead to methods for species discrimination in the future. A problem for online weed identification is the measurement time, since the effect is best explored when the measurements are taken over a certain period (seconds to minutes). Current research tries to explore shorter measurements, which may lead to suitable sensing equipment for online species discrimination. Keränen et al. (2003) reduced the measurement time by shortening the pre-measurement dark adaptation period to times practicable under field conditions. They were able to distinguish six species using a neural network classifier.
2.2 Location and Temporal Properties
The location of plant species can be used to identify them. Most weeds occur in patches within a field (Heijting et al. 2007) and their location was found to be stable over years. This effect is due to persistent seed banks in the soil and variable germination conditions. The germination rate is higher in areas with a high seed density. Perennial weeds have additional vegetative reproduction organs such as rhizomes, tubers and roots, from which the plants regenerate (e.g. Convolvulus arvensis, Cyperus esculentus, Cirsium arvense, Agropyron repens). Therefore, patches of perennial weeds were found to be most aggregated and stable. Historical maps can be used to predict the occurrence of weeds (Dille et al. 2002, Mortensen 2002). This information is especially useful for preemergence herbicide applications.
The position of weeds can also be helpful on a smaller scale, the plant level. In row crops, weeds can be detected between the rows, since no crop plant is expected to grow there. Sensors detecting green plants between the rows have been used successfully for this purpose (Åstrand and Baerveldt 2004). Slaughter et al. (2008) described robust weed detection as a primary obstacle for robotic weed control technology and reviewed the approaches for weed detection as well as actuator technology.
Several image processing approaches for row detection have been proposed, most of them using standard RGB images. Bossu et al. (2009) determined crop rows for intra-row weed detection, and Jones et al. (2007) developed a system to create artificial images for testing weed detection algorithms in crop rows. Bakker et al. (2008) used a Hough transformation to detect linear structures in images in order to find the rows. Åstrand and Baerveldt (2004) modelled Gaussian location probability functions for the crop plants in the row and located the weed plants at positions with low probability values, either between the rows or within the row between crop plants. Burgos-Artizzu et al. (2008) used large row spacing (barley) and the column sums of the intensities to determine crop rows; they used additional (expert) knowledge about the scenes to determine optimal parameters for the image processing and feature extraction process.
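The Hough transformation mentioned above can be illustrated with a minimal accumulator: every foreground pixel of a binary vegetation mask votes for all lines (in the normal form ρ = x·cos θ + y·sin θ) passing through it, and strong rows appear as vote maxima. This is a generic sketch of the technique, not the implementation of Bakker et al. (2008); the function name and discretisation are assumptions.

```python
import numpy as np

def strongest_line(mask, n_theta=90):
    """Vote for lines rho = x*cos(theta) + y*sin(theta) over all foreground
    pixels of a binary mask; return (votes, theta, rho) of the strongest line."""
    ys, xs = np.nonzero(mask)
    diag = int(np.ceil(np.hypot(*mask.shape)))      # maximum possible |rho|
    best_votes, best_theta, best_rho = 0, 0.0, 0.0
    for theta in np.linspace(0.0, np.pi, n_theta, endpoint=False):
        rho = xs * np.cos(theta) + ys * np.sin(theta)
        # 1-pixel-wide rho bins act as the accumulator row for this angle
        hist, edges = np.histogram(rho, bins=2 * diag, range=(-diag, diag))
        k = int(hist.argmax())
        if hist[k] > best_votes:
            best_votes = int(hist[k])
            best_theta = float(theta)
            best_rho = float((edges[k] + edges[k + 1]) / 2.0)
    return best_votes, best_theta, best_rho
```

For multiple crop rows one would keep the top peaks of the accumulator instead of only the global maximum.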
2.2.1 Morphological Properties
The morphology of the plants is important for the determination of the species by a human expert. Dicotyledons and monocotyledons have a different morphology, e.g. the number of cotyledons and the structure, compactness and diameter of the leaves, which contribute to the overall appearance.
The third dimension can provide information about the orientation of the leaves, the height above ground and the leaf structure. The three-dimensional (3D) structure of the plants is a feature that has rarely been investigated so far. Reasons are that the acquisition of suitable 3D data is computationally intensive or requires special 3D measuring equipment, which has become available only in recent years. Chapron et al. (1999) and Andersen et al. (2005) proposed stereo vision methods, extracting height information from two aligned images. The height information can be used to detect overlapping leaves and can help to separate leaves lying above others from the ones below.
2.2.2 Overlapping
Occlusion and overlapping are among the main problems for all image processing approaches. The plants in the images, especially long-leaved ones like cereals and grass weeds, tend to overlap. Overlapping leaves are segmented as one object, since they lead to connected regions of which parts belong to different plants. It is difficult to detect and separate these leaves from each other, since context information is necessary to reconstruct the occluded leaf shapes and assign them to plants. The 3D approaches mentioned above provide segment information directly, and a few 2D image processing techniques have been used to overcome this situation (Søgaard and Heisel 2002, Manh et al. 2001, Neto et al. 2006a); these approaches are based on heuristics about the occluded parts. Piron et al. (2009) combined stereoscopic multispectral images with height information from a coded structured light technique, which uses a projected known pattern to derive the distance to the camera.
2.2.3 Texture
More general approaches distinguish plant species based on texture, which differs for overlapping broad-leaved and narrow-leaved plants in cluttered conditions. Ishak et al. (2009) presented a texture analysis for images of two weed species (a broad-leaved and a grass weed) at a late growth stage. Weeds in grassland require different approaches, because the plants cannot be separated from the background (soil) as single plants: the overall coverage is very high and the plants overlap. But the most important weeds in grassland have leaves with a different morphology (bigger, broader and with a more homogeneous surface). These properties can be quantified by textural analysis of 2D images. Gebhardt and Kühbauch (2007b) segmented the image according to a homogeneity criterion and used textural and colour features to find Rumex obtusifolius, Taraxacum officinale and Plantago major in a grassland plant community with an accuracy of over 70%. Van Evert et al. (2009) used a partial 2D Fourier transformation to determine homogeneous regions, which were identified as the broad leaves of R. obtusifolius. From 3D sensor data, Šeatović (2008) segmented broad leaves and classified them as weeds in grassland. Klose et al. (2008) developed a robot with weed detection capabilities in maize using a sensor fusion approach: a vertical laser triangulation sensor measuring the thickness of the maize stem is combined with a horizontally mounted camera viewing the maize row from above to find weeds within the row.
Morphological properties can also be explored with 2D shape features, which are the focus of the following image processing part.
3 Image Processing for Automatic Weed Species Identification
In the following, the general image processing steps are outlined. Figure 8.3 shows the workflow of the basic steps: image creation, segmentation, feature extraction and classification.
Imaging sensors like cameras or line sensors deliver 2D images of agricultural fields. These images are the input for the subsequent image processing procedures. Depending on the type of imaging sensor, the resulting images may have to be pre-processed to normalise the values or to reduce noise. Noise can be reduced in the original images before the segmentation into foreground and background objects takes place. Typical pre-processing steps include filtering with a low-pass filter to minimise the effect of Gaussian noise or the use of median filters to suppress pixels with outlier values (zero or maximum values).
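The median filtering mentioned above can be sketched with NumPy alone: each output pixel is the median of its 3×3 neighbourhood, which removes single-pixel outliers while preserving edges better than linear smoothing. The function name and the edge-replication padding are implementation choices assumed here.

```python
import numpy as np

def median_filter_3x3(img):
    """3x3 median filter: suppresses single-pixel outliers (e.g. zero or
    maximum values) while keeping edges sharper than a linear low-pass."""
    h, w = img.shape
    padded = np.pad(img, 1, mode='edge')
    # stack the nine shifted views that form each pixel's 3x3 neighbourhood
    neigh = np.stack([padded[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)])
    return np.median(neigh, axis=0)
```

A Gaussian or box filter would instead smear the outlier value into its neighbours, which is why the median is preferred for this kind of sensor noise.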
3.1 Segmentation
A segmentation of the image into regions with homogeneous properties is the next step, which results in a separation of the image according to the measured properties. One or more intermediate images can be created that enhance the contrast between object and background; in this step, homogeneous regions with different grey or colour values are created. Such an image can be computed using one of the colour indices mentioned before, if colour images are the input, or using texture features, if the image should be segmented according to texture (e.g. grassland images). Figure 8.4 gives an example of an IR and R difference image (IR−R): the resulting image enhances the plants (bright), while the background objects are suppressed (dark). The enhanced image is then separated into foreground and background objects, resulting in a binary (black/white) image.
A threshold can be used to label the enhanced regions (e.g. white), which are above the threshold, and the background (e.g. black). More advanced methods use spatial homogeneity criteria to improve the segmentation (Gorretta et al. 2005). Once the foreground regions have been identified, connected foreground regions can be assembled into objects. Noise may have led to small regions in the thresholding step and can now be filtered using either a size criterion or morphological image processing (Soille 2003). Figure 8.5 shows the result of a segmentation using a threshold and pre-processing steps to reduce noise. Mathematical morphology provides the erosion and dilation operators as basic filters for regions. Erosion of a region leads to shrinking: the borders of the region are cut. If an object has a hole (inner borders), this hole grows bigger. The dilation operation does the opposite: the region grows around the border, and small holes can be closed this way. Both operators can be combined into the so-called opening (erosion, then dilation) and closing (dilation, then erosion) operators. Since both operators are nonlinear, the results of opening and closing differ: opening tends to separate an object at small connections and prune small elongated spikes, while closing can combine regions with little distance between them into one, e.g. leaves which have been separated by the thresholding. It may also happen that small regions disappear in the erosion step of an opening and are then gone in the subsequent dilation step. Figure 8.5 (right) shows the result of a morphological closing, leading to connected regions for the dicotyledonous leaves near the centre of the image and the elongated leaves in the top left.
Morphological operators were used by Hemming and Rath (2001) to extract broad leaves from scenes with overlaps. Pérez et al. (2000) used morphological operators to separate the germination leaves of dicotyledonous weeds and analyse the shape of each leaf.
The resulting blobs are the objects of interest for the following feature extraction. Shape, texture or colour features (the latter derived from the input image) describe the properties of each foreground object in the image. These features are used for a classification of each object in the image.
3.2 Shape-Based Weed Discrimination
Several researchers used shape features to discriminate weed and crop (Gerhards and Christensen 2003, Åstrand and Baerveldt 2004, Berge et al. 2008). The shape features are derived for each connected foreground region. Image processing techniques provide a set of commonly used shape features. One of the simplest features to describe a region is its size, expressed either in number of pixels or scaled by the ground resolution. There may be objects of different size but with similar overall shape characteristics (geometrically similar). Therefore, shape descriptors have been developed that are invariant to the size of the region. Two other properties are often not relevant for the shape description: the position and the orientation of a region within the image. Certain shape descriptors are normalised and invariant to translation, rotation and size. Some well-known invariant features are derived from statistical moments of the pixel distribution (Hu features; Hu 1962). This type of feature is called region-based, since it is derived from the spatial distribution of the region pixels.
Other features are computed from the outline of a region, given by the border pixels that have neighbouring background pixels. Since the border of an object is a closed contour, a periodic representation can be derived (either using a chain code or polar coordinates; see Jähne 2001 for details). Fourier analysis can be used to analyse this periodic representation (Neto et al. 2006b). The resulting parameters are the phases and amplitudes of periodic functions, which can easily be normalised to translation-, rotation- and size-invariant parameters, since this information is located only in the first two of them. The lower-order parameters describe the overall shape of the object, and the higher-order parameters contain information about the small-scale curvature changes of the contour (notches and small convexities). A curvature description can be derived from the contour; if it is computed at different scales (by smoothing), this is called a CSS (curvature scale space) representation (Mokhtarian et al. 1996). Zhang and Lu (2004) review shape description techniques and distinguish between region-based and contour-based ones.
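The Fourier-descriptor normalisation described above can be sketched compactly: the contour is read as a complex signal, the DC term carries the translation, the coefficient phases carry rotation and starting point, and the first harmonic carries the scale. This is a generic sketch (function name assumed), not the exact descriptor set of Neto et al. (2006b).

```python
import numpy as np

def fourier_descriptors(contour, n=8):
    """Translation-, rotation- and scale-invariant Fourier descriptors of a
    closed contour, given as an (N, 2) array of ordered boundary points."""
    z = contour[:, 0] + 1j * contour[:, 1]   # complex-valued contour signal
    c = np.fft.fft(z)
    c[0] = 0.0                # discard the DC term -> translation invariance
    mags = np.abs(c)          # discard phases -> rotation/start-point invariance
    mags = mags / mags[1]     # normalise by the first harmonic -> scale invariance
    return mags[2:2 + n]      # low-order coefficients describe the overall shape
```

Scaling, translating or rotating the contour leaves these descriptors unchanged, which is exactly the invariance property the text motivates.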
We also found skeleton features helpful for the discrimination of plant species (Weis and Gerhards 2007). The skeleton is the central line (also called the core) of a region and can be derived from a distance transform of the region or by morphological operators (Soille 2003). A distance transform assigns to each region pixel the shortest distance to the contour. The local maxima form a line located in the middle of the object, at maximum distance to the borders. Statistical measures (mean, maximum, variance, number of pixels) of these maxima yield a thickness description of the shape, which is especially useful to discriminate broad-leaved and narrow-leaved species, since the core of a broad leaf has a greater distance to the border than elongated, thin leaves. Figure 8.6 shows the distribution of four different classes in the feature space of two skeleton features. These features are well suited to discriminate these classes, since the classes form clusters in the feature space.
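The distance transform underlying these skeleton features can be illustrated with a deliberately simple brute-force version; mean and maximum of the distances then serve as the thickness features described above. This is an illustrative sketch with assumed names, not the feature set of Weis and Gerhards (2007), and a real system would use a fast Euclidean distance transform.

```python
import numpy as np

def distance_transform(mask):
    """Brute-force Euclidean distance transform: for every foreground pixel,
    the distance to the nearest background pixel. O(foreground * background),
    fine for illustration only."""
    by, bx = np.nonzero(~mask)
    d = np.zeros(mask.shape)
    for y, x in zip(*np.nonzero(mask)):
        d[y, x] = np.sqrt(((by - y) ** 2 + (bx - x) ** 2).min())
    return d

def thickness_features(mask):
    """Mean and maximum of the distance transform over the region: broad
    leaves yield clearly larger values than thin, elongated leaves."""
    vals = distance_transform(mask)[mask]
    return float(vals.mean()), float(vals.max())
```

On a compact 5×5 region the maximum distance reaches 3 pixels, while a 1-pixel-wide line never exceeds 1, which is why these features separate broad-leaved from grass-like shapes.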
There are also ‘high-level’ shape descriptions that involve models and try to fit the model to the shape. Søgaard and Heisel (2002) and Manh et al. (2001) used active shape models and deformable templates, respectively, for species discrimination. Templates of various shapes are generated and parametrised (these parameters are the features), and the deformations necessary to match a template to the shape lead to a similarity measure: the more a model has to be deformed to fit the shape, the higher the dissimilarity. One problem with these models is the comparably high complexity of the description, leading to a high-dimensional search space for the parameters and therefore a high computational load. On the other hand, these models can deal with partial occlusion.
3.3 Classification
All numeric features can be combined into feature vectors. The corresponding feature space has as many dimensions as there are features and is usually high-dimensional. A high dimensionality of the feature space, opposed to a relatively low number of training samples, exposes the problem that the samples ‘vanish’ in the space, which can decrease the performance of a classifier; this is known as the ‘curse of dimensionality’. Features without discriminative ability for the problem introduce noise into the classification process. Therefore, a feature selection process should be performed before classification, aiming at reducing the number of features to the most relevant ones. Combinations of features can lead to new features with higher discriminative ability. An example of the combination of features are the spectral indices (see Eq. 1), which combine the amplitude values of different wavelengths into a new value. A popular feature selection algorithm is discriminant analysis (Cho et al. 2002, Borregaard et al. 2000, Gebhardt and Kühbauch 2007a, Neto et al. 2006b).
The classification is the last step of the analysis. Classification algorithms can be grouped into unsupervised classifiers, also known as clustering, and supervised classifiers. Unsupervised classifiers use the feature vectors without additional information and create groups of similar objects according to a distance measure between the vectors in the feature space. These groups are called clusters and may correspond to the classes of the problem. A supervised classifier has to be trained with prototype information, i.e. selected feature vectors of known class. Classifiers compare the features of unknown objects to the trained ones and assign a class. The number of classification algorithms is large, ranging from simple algorithms like kNN (k-nearest-neighbour), which uses the training data directly, to complex functions and function systems like neural networks, tree classifiers or support vector machines, which generate a classifier model from the training set and use it for the classification. Cho et al. (2002) successfully trained neural networks, Pérez et al. (2000) used Bayes rules and a nearest-neighbour classifier with shape features, and Burks et al. (2005) used neural networks to classify texture features.
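The kNN classifier mentioned above, the simplest of the supervised methods, fits in a few lines; this sketch assumes NumPy feature vectors and integer class labels (names hypothetical):

```python
import numpy as np

def knn_classify(train_x, train_y, x, k=3):
    """k-nearest-neighbour classification: assign the majority class among
    the k training feature vectors closest (Euclidean distance) to x."""
    dists = np.linalg.norm(train_x - x, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[counts.argmax()]
```

Unlike neural networks or support vector machines, no model is trained: the labelled prototypes themselves are the classifier, which makes kNN a common baseline for shape-feature experiments.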
A shape-based approach was tested by Oebel (2006) under field conditions; the classification accuracies were suitable for the creation of application maps. Table 8.1 shows the detailed results for Zea mays and Hordeum vulgare crops using discriminant analysis.
An example of a classification with shape features (region-based, Fourier and skeleton features) is shown in Fig. 8.7. The image was composed of samples from several IR−R difference images. A small training set containing prototypes of the species was created, and nine different species were classified using a radial basis function network classifier. The objects in the image were labelled according to the classification result.
The shape-based approach has its limitations due to the number of plant species and the shape variability across the growth stages of each species. A class scheme for these variations was developed (Weis and Gerhards 2007) and used to create training data for various weed and crop species.
4 Conclusions
The automation of weed detection in the field is a challenging task that several working groups are currently researching. The complexity of this task originates in the variability of the plant species in the field. Several plant properties that can be used to distinguish species have been presented, and approaches and results achieved with available sensor technology were reviewed. Some sensors have already been used successfully for weed detection and discrimination under controlled conditions and also in field experiments, but there is as yet no general best practice, especially under changing conditions within the field. The combination of different techniques might lead to robust solutions in the future: sensor fusion and integrative analysis of multiple sensor data could improve the weed detection rate and also benefit other precision-farming technologies. Commercial products such as special sensors and analysis equipment for this task remain to be developed. Once such systems are available, the weed infestation can be assessed for site-specific management and population dynamics research, adding valuable data for precision farming applications and decision support systems.
References
Andersen HJ, Reng L, Kirk K (2005) Geometric plant properties by relaxed stereo vision using simulated annealing. Comput Electron Agric 49:219–232
Åstrand B, Baerveldt AJ (2004) Plant recognition and localization using context information. In: Proceedings of the Mechatronics and Robotics 2004 (MechRob2004), Sascha Eysoldt Verlag, Aachen, pp 1191–1196
Backes M, Jacobi J (2006) Classification of weed patches in Quickbird images: verification by ground truth data. EARSeL eProceedings 5:173–179
Backes M, Schumacher D, Plümer L (2005) The sampling problem in weed control. Are currently applied sampling strategies adequate for site-specific weed control. In: Stafford J (ed) Precision agriculture 2005. Wageningen Academic Publishers, Wageningen, pp 155–161
Bakker T, Wouters H, van Asselt K et al (2008) A vision based row detection system for sugar beet. Comput Electron Agric 60:87–95
Berge T, Aastveit A, Fykse H (2008) Evaluation of an algorithm for automatic detection of broad-leaved weeds in spring cereals. Prec Agric 9:391–405
Biller RH (1998) Reduced input of herbicides by use of optoelectronic sensors. J Agric Eng Res 71:357–362
Borregaard T, Nielsen H, Nørgaard L, Have H (2000) Crop-weed discrimination by line imaging spectroscopy. J Agric Eng Res 75:389–400
Bossu J, Gée C, Jones G, Truchetet F (2009) Wavelet transform to discriminate between crop and weed in perspective agronomic images. Comput Electron Agric 65:133–143
Brown R, Noble S (2005) Site-specific weed management: sensing requirements – what do we need to see? Weed Sci 53:252–258
Burgos-Artizzu XP, Ribeiro A, Tellaeche A et al (2009) Improving weed pressure assessment using digital images from an experience-based reasoning approach. Comput Electron Agric 65:176–185, doi:10.1016/j.compag.2008.09.001
Burks T, Shearer S, Heath J, Donohue K (2005) Evaluation of neural-network classifiers for weed species discrimination. Biosyst Eng 91:293–304
Chapron M, Requena-Esteso M, Boissard P, Assemat L (1999) A method for recognizing vegetal species from multispectral images. In: Stafford J (ed) Precision agriculture 1999. Sheffield Academic Press, Sheffield, pp 239–248
Cho SI, Lee DS, Jeong JY (2002) Weed-plant discrimination by machine vision and artificial neural network. Biosyst Eng 83:275–280
Dille JA, Mortensen DA, Young LJ (2002) Predicting weed species occurrence based on site properties and previous year’s weed presence. Prec Agric 3:193–207
van Evert F, Polder G, van der Heijden G et al (2009) Real-time vision-based detection of Rumex obtusifolius in grassland. Weed Res 49:164–174
Gebhardt S, Kühbauch W (2007a) A new algorithm for automatic Rumex obtusifolius detection in digital images using colour and texture features and the influence of image resolution. Prec Agric 8:1–13
Gebhardt S, Kühbauch W (2007b) Continuous mapping of Rumex obtusifolius during different grassland growths based on automatic image classification and GIS-based post processing. In: Stafford J (ed) Precision agriculture ’07, 6th European Conference on Precision Agriculture (ECPA), Wageningen Academic Publishers, Wageningen, pp 499–506
Gerhards R, Christensen S (2003) Real-time weed detection, decision making and patch spraying in maize, sugar beet, winter wheat and winter barley. Weed Res 43:385–392
Girma K, Mosali J, Raun WR et al (2005) Identification of optical spectral signatures for detecting cheat and ryegrass in winter wheat. Crop Sci 45:477–485
Godwin R, Miller P (2003) A review of the technologies for mapping within-field variability. Biosyst Eng 84:393–407
Gorretta N, Fiorio C, Rabatel G, Marchant J (2005) Cabbage/weed discrimination with a region/contour based segmentation approach for multispectral images. In: Bellon Maurel V, Carbonneau A, Regnard JL et al (eds) Information and technology for sustainable fruit and vegetable production. Production, Proceedings of FRUTIC’05, AgroM ENSA Montpellier; Cemagref Montpellier; CIRAD; INRA, Cemagref, Montpellier France, pp 371–380, 7th Fruit nut and vegetable production engineering symposium, 12–16 Sep 2005
Guyot G, Baret F, Jacquemoud S (1992) Imaging spectroscopy for vegetation studies. In: Toselli F, Bodechtel J (eds) Spectroscopy: fundamentals and prospective applications. Kluwer Academic Publishers, Dordrecht, pp 145–165
Haboudane D, Miller JR, Pattey E et al (2004) Hyperspectral vegetation indices and novel algorithms for predicting green LAI of crop canopies: modeling and validation in the context of precision agriculture. Rem Sens Environ 90:337–352
Hamouz P, Novakova K, Soukup J, Tyser L (2006) Evaluation of sampling and interpolation methods used for weed mapping. J Plant Dis Prot XX (special issue):205–215
Heijting S, van der Werf W, Stein A, Kropff MJ (2007) Are weed patches stable in location? Application of an explicitly two-dimensional methodology. Weed Res 47:381–395
Hemming J, Rath T (2001) Computer-vision-based weed identification under field conditions using controlled lighting. J Agric Eng Res 78:233–243
Hu MK (1962) Visual pattern recognition by moment invariants. IRE Trans Inf Theory 8:179–187
Ishak AJ, Hussain A, Mustafa MM (2009) Weed image classification using gabor wavelet and gradient field distribution. Comput Electron Agric 66:53–61
Jähne B (2001) Digital image processing, 5th edn. Springer, Berlin
Jones G, Gée C, Truchetet F (2007) Simulation of perspective agronomic images for weed detection. In: Stafford J (ed) Precision agriculture ’07, 6th European Conference on Precision Agriculture (ECPA), Wageningen Academic Publishers, Wageningen, pp 507–515
Keränen M, Aro EM, Tyystjärvi E, Nevalainen O (2003) Automatic plant identification with chlorophyll fluorescence fingerprinting. Prec Agric 4:53–67
Klose R, Thiel M, Ruckelshausen A, Marquering J (2008) Weedy – a sensor fusion based autonomous field robot for selective weed control. In: VDI (ed) Land technik 2008. VDI Verlag, Stuttgart, pp 167–172
Lamb D, Brown R (2001) Remote-sensing and mapping of weeds in crops. J Agric Eng Res 78:117–125
Langner HR, Böttger H, Schmidt H (2006) A special vegetation index for the weed detection in sensor based precision agriculture. Environ Monit Assessm 117:505–518
López-Granados F, Jurado-Expósito M, Peña-Barragán JM, García-Torres L (2006) Using remote sensing for identification of late-season grass weed patches in wheat. Weed Sci 54:346–353
Manh A, Rabatel G, Assemat L, Aldon M (2001) Weed leaf image segmentation by deformable templates. J Agric Eng Res 80:139–146
Meyer GE, Neto JC (2008) Verification of color vegetation indices for automated crop imaging applications. Comput Electron Agric 63:282–293
Mokhtarian F, Abbasi S, Kittler J (1996) Robust and efficient shape indexing through curvature scale space. In: Pycock D (ed) Proceedings of the British Machine Vision Conference 1996, BMVC, British Machine Vision Association, Edinburgh, pp 53–62
Mortensen DA (2002) Crop/weed outcomes from site-specific and uniform soil-applied herbicide applications. Prec Agric 3:95
Neto JC, Meyer GE, Jones DD (2006a) Individual leaf extractions from young canopy images using Gustafson-Kessel clustering and a genetic algorithm. Comput Electron Agric 51: 66–85
Neto JC, Meyer GE, Jones DD, Samal AK (2006b) Plant species identification using elliptic Fourier leaf shape analysis. Comput Electron Agric 50:121–134
Oebel H (2006) Teilschlagspezifische Unkrautbekämpfung durch raumbezogene Bildverarbeitung im Offline- und (Online-) Verfahren (TURBO). PhD thesis, Universität Hohenheim, Fakultät Agrarwissenschaften
Oebel H, Gerhards R (2005) Site-specific weed control using digital image analysis and georeferenced application maps – first on-farm experiences. In: Stafford JV (ed) 5th ECPA, Uppsala, Wageningen Academic Publishers, Wageningen, pp 131–138
Okamoto H, Murata T, Kataoka T, Hata SI (2007) Plant classification for weed detection using hyperspectral imaging with wavelet analysis. Weed Biol Manag 7:31–37
Paap A, Askraba S, Alameh K, Rowe J (2008) Photonic-based spectral reflectance sensor for ground-based plant detection and weed discrimination. Opt Express 16:1051–1055
Pérez A, López F, Benlloch J, Christensen S (2000) Colour and shape analysis techniques for weed detection in cereal fields. Comput Electron Agric 25:197–212
Piron A, Leemans V, Kleynen O et al (2008) Selection of the most efficient wavelength bands for discriminating weeds from crop. Comput Electron Agric 62:141–148
Piron A, Leemans V, Lebeau F, Destain M (2009) Improving in-row weed detection in multispectral stereoscopic images. Comput Electron Agric 69:73–79, doi: 10.1016/j.compag.2009.07.001
Rasmussen J, Nørremark M, Bibby B (2007) Assessment of leaf cover and crop soil cover in weed harrowing research using digital images. Weed Res 47:299–310
Reyniers M, Vrindts E, De Baerdemaeker J (2006) Comparison of an aerial-based system and an on the ground continuous measuring device to predict yield of winter wheat. Eur J Agron 24:87–94
Shafri HZM, Salleh MAM, Ghiyamat A (2006) Hyperspectral remote sensing of vegetation using red edge position techniques. Am J Appl Sci 3:1864–1871
Slaughter DC, Giles DK, Downey D (2008) Autonomous robotic weed control systems: a review. Comput Electron Agric 61:63–78
Søgaard H, Heisel T (2002) Machine vision identification of weed species based on active shape models. In: van Laar HH, Bastiaans L, Baumann DT et al (eds) EWRS 12th EWRS Symposium, European Weed Research Society. Grafisch Service Centrum Van Gils BV, Wageningen, pp 402–403
Soille P (2003) Morphological image analysis, 2nd edn. Springer, Heidelberg
Sui R, Thomasson JA, Hanks J, Wooten J (2008) Ground-based sensing system for weed mapping in cotton. Comput Electron Agric 60:31–38
Thorp K, Tian L (2004) A review on remote sensing of weeds in agriculture. Prec Agric 5:477–508
Šeatović D (2008) A segmentation approach in novel real time 3D plant recognition system. In: Proceedings of the Computer Vision Systems, Lecture Notes in Computer Science, vol 5008. Springer, Berlin/Heidelberg, pp 363–372
Wang N, Zhang N, Dowell FE, Sun Y, Peterson DE (2001) Design of an optical weed sensor using plant spectral characteristics. In: ASAE (ed) Transactions of the ASAE, vol 44. American Society of Agricultural Engineers, St. Joseph, pp 409–419
Weis M, Gerhards R (2007) Feature extraction for the identification of weed species in digital images for the purpose of site-specific weed control. In: Stafford J (ed) Precision agriculture ’07, 6th European Conference on Precision Agriculture (ECPA). Wageningen Academic Publishers, Wageningen, pp 537–545
Woebbecke D, Meyer G, von Bargen K, Mortensen D (1995) Color indices for weed identification under various soil, residue and lighting conditions. American Society of Agricultural Engineers, St. Joseph, pp 259–269
Zhang D, Lu G (2004) Review of shape representation and description techniques. Pattern Recognit 37:1–19
Zwiggelaar R (1998) A review of spectral properties of plants and their potential use for crop/weed discrimination. Crop Prot 17:189–206
© 2010 Springer Science+Business Media B.V.
Weis, M., Sökefeld, M. (2010). Detection and Identification of Weeds. In: Oerke, EC., Gerhards, R., Menz, G., Sikora, R. (eds) Precision Crop Protection - the Challenge and Use of Heterogeneity. Springer, Dordrecht. https://doi.org/10.1007/978-90-481-9277-9_8
Print ISBN: 978-90-481-9276-2
Online ISBN: 978-90-481-9277-9