
1 Introduction

The detection of weeds is a prerequisite for successful site-specific weed management. For a uniform treatment, the average weed infestation level, the weed species composition and the growth stages of weeds and crop have to be known. Herbicides or mechanical weed control methods are applied uniformly across the whole field if the economic weed threshold is exceeded. If the treatment is to vary within a field, the spatial and temporal variation of weed populations needs to be assessed; this information is also needed to select and adapt the herbicide mixture. Commonly, the number of weeds per square metre and/or the weed coverage for each species is measured. These data can be used to estimate the expected yield loss and to decide for each part of the field which weed control method is warranted.

Different methods have been proposed to assess the weed infestation within a field. The most common approach is weed scouting by human experts, carried out either by the experienced farmer or by a consultant. An expert can take the history of the weed infestation over the years into account and focus on the most prominent weed species, which are relevant for the yield loss. Different sampling schemes for the within-field estimation have been used: weed infestation can be measured by regular or irregular sampling. Positions of the sampling points can be determined using a local coordinate system and regular distances between the sampling points, or their coordinates can be measured directly with GPS (Global Positioning System) technology. Most studies used a sampling scheme that was constrained by the time and manpower available. The effects of different grid sizes and interpolation techniques have been discussed by Backes et al. (2005), Hamouz et al. (2006), and Heijting et al. (2007). Many weed patches remained undetected if the grid size exceeded a distance of 15–30 m between the sampling points. An economic evaluation of manual sampling versus an automatic approach was done by Oebel and Gerhards (2005); estimated costs are about 60€/ha for manual sampling at regularly spaced grid points (8×8 m). The use of a mobile GIS (geographic information system) to map the infestation reduced the costs to 26€/ha.

Since manual weed sampling is too expensive for practice-oriented management, automatic methods to assess the infestation have been developed (Brown and Noble 2005). Automatic weed sampling provides a way to increase the amount of data gathered in the field (smaller sampling intervals) at lower overall costs of 6–11€/ha (Oebel and Gerhards 2005). Sensor technology has already been used to apply herbicides site-specifically, resulting in a 30–70% reduction of herbicide use. The sensor design has to be adapted to the application technology; if small robots are used to manage weeds, the driving speed may be lower than with a boom sprayer.

2 Properties to Distinguish Plant Species

To distinguish plant species from each other, certain characteristic properties have to be identified which can be measured automatically. Experts identify species by their shape and plant morphology. The location of a plant is also a useful property to distinguish species: on the large scale there are several habitats, and on the small scale there are locations within a field with a higher probability of occurrence, e.g. at the borders of a field, on certain soil types or between the rows in row cropping systems. In the following sections useful properties for distinguishing plant species are evaluated.

2.1 Spectral Properties

Intact green plants transform the incoming light by their chlorophyll pigments, which absorb mostly red as well as violet and blue light. Only a fraction of the green and most of the near-infrared light is reflected. The spectral reflectance of plants has a minimum in the visible wavelengths at about 650 nm and increases towards the invisible near infrared above 700 nm. The steep part of the curve is called the ‘red edge’ (Fig. 8.1; Guyot et al. 1992). Plant characteristics – chlorophyll content, leaf area index (LAI), biomass and water status, age, plant health levels (Shafri et al. 2006) – can be derived from the red edge position (REP), usually determined by the position of the turning point (point of maximum slope). The spectral curves of different plants have a similar nonlinear shape, whereas the soil curve in Fig. 8.1 is linear. The local extremes of the plant curves lie in the green band (550 nm, maximum), the red band (660 nm, minimum) and the near infrared (750 nm, maximum).

Fig. 8.1
figure 1

Reflectance curves for soil (filled dots) and different plant species with the typical steep incline (red edge) between 680 and 750 nm wavelength

Several spectral indices have been proposed that make use of the different reflectance in the green (G), infrared (IR) and red (R) parts of the spectrum. Ratios or differences of the values at the extremes lead to the highest contrast between plants and soil and are therefore useful for the discrimination of plants against their background. From Fig. 8.1 we can conclude that the highest difference exists between the near infrared and the red spectrum (see also the image example in Fig. 8.4). One important index is the normalised difference vegetation index NDVI (Eq. 1); its values are normalised to the interval [−1, 1], with values near one indicating a high amount of chlorophyll. This index correlates well with biomass and LAI and has been used in remote sensing applications (Godwin and Miller 2003, López-Granados et al. 2006, Reyniers et al. 2006) and with near-range sensors to measure plant biomass production and crop vitality and to forecast crop yield. A few commercial products for weed control with optoelectronic equipment exist that use this spectral information: DetectSpray® (evaluated by Biller 1998) and WeedSeeker® (used by Sui et al. 2008).

Depending on the availability of the measured wavelengths, several indices have been used and compared to identify living plant material against the background (Woebbecke et al. 1995, Meyer and Neto 2008). The soil-adjusted vegetation index (SAVI, Eq. 1) introduces a variable L into the formula of the NDVI. L can be used to adjust for the soil component; values near 0 are used for high vegetation cover. Variations of these indices exist; Haboudane et al. (2004) compared several indices for an estimation of the leaf area index. Langner et al. (2006) developed an index called DIRT (difference index with red threshold) to improve the contrast between plants and background in mulched areas (DIRT = sign(β − R)·NDVI, with β = 0.12).

$$\begin{array}{l} NDVI = (IR - R)/(IR + R) \\ SAVI = [(IR - R)/(IR + R + L)](1 + L);\quad L \in [0,1] \\ EGI = 2G - R - B \\ NDI = (G - R)/(G + R) \\ \end{array}$$
(1)

Transforming RGB colour space images into the HSI (hue, saturation, intensity) colour space leaves the brightness in the intensity channel and the colour information in the hue and saturation channels, which can then be used to identify green parts. For standard RGB images, the excess green index EGI has proven useful for the enhancement of green plant material in many studies (Rasmussen et al. 2007, Burgos-Artizzu et al. 2008). An example of the EGI is shown in Fig. 8.2. Equation (1) contains the formulae for the most important indices.

Fig. 8.2
figure 2

Green, red and blue components of a standard RGB camera combined to EGI image (from left to right), enhancing the plants (bright) against the background (dark). Gray values were stretched for better contrast in print

The spectral reflectance is influenced not only by the plant characteristics but also by the illumination conditions. Atmospheric changes lead on the one hand to different spectral characteristics of the illumination; on the other hand, the amplitudes can vary greatly: direct sun and cloudy conditions differ by factors of 1,000 or more in the amount of light. Therefore, some approaches use controlled conditions with artificial lighting and exclude the natural illumination, which makes the measurement independent of the external illumination conditions.

Piron et al. (2008) evaluated 22 wavelength bands for weed and crop (carrot) discrimination and found an optimum with three wavelengths at 450, 550 and 750 nm, reaching a classification accuracy of about 65% for carrots and 80% for weeds. They used artificial lighting to reduce the variability of the natural light conditions in the field. Paap et al. (2008) used a line sensor and LED illumination (635, 670 and 785 nm) to distinguish plants from background. Several approaches explored spectrometric properties to distinguish different species. Zwiggelaar (1998) found that spectral properties alone cannot discriminate all weed species. In more specific cases, the spectral information was successfully used to discriminate weed and crop. Borregaard et al. (2000) used a line-scanning spectrometer with artificial light and successfully discriminated plants and soil as well as crops (sugar beet and potatoes) and three weed species. They used stepwise linear discriminant analysis to select six wavelengths (694, 970, 856, 686, 726 and 897 nm), of which they found the first three able to discriminate the five species with an accuracy of 60% and crop and weeds with an accuracy of 90%. Girma et al. (2005) selected five bands between 515 and 865 nm and ratios of them (515/675, 555/675, 805/815, and 755) to distinguish two weed species and winter wheat under controlled conditions (greenhouse); two trials led to classification accuracies of 64 and 90%. Wang et al. (2001) also selected five wavelengths (496, 546, 614, 676, and 752 nm) and reached 62–86% classification accuracy for the discrimination of nine grouped weed species, soil and wheat. Okamoto et al. (2007) used a spectrometric line sensor with 420 channels of 10 nm width to distinguish sugar beet and four weed species with a success rate of about 75–89%, when the data were transformed by a wavelet decomposition and classified using selected wavelet coefficients.

2.1.1 Remote Sensing

Lamb and Brown (2001) reviewed the use of remote sensing (RS) imaging for weed detection. They conclude that the use of remote sensing is generally limited by its low spatial resolution, which does not permit the analysis of weeds at a sub-field scale.

A high infestation level of weeds within patches is accompanied by locally increased biomass production. Early in the season the effect can be used to locate the patches, if the weeds germinate earlier than the crop. Backes and Jacobi (2006) explored remote sensing techniques to detect patches of dicotyledonous weeds in sugar beet using the NDVI .

Thorp and Tian (2004) identified the problem that the spectral measurements are mixed signals of soil and plant material. The proposed analysis methods for weed detection have to be improved and further developed to reliably detect different weed species, not only local changes in biomass density. Another problem remains the availability of up-to-date imagery, since RS sensors need clear-sky conditions (no clouds) and their update cycles may be too long. Later in the season, patches can be identified using RS: López-Granados et al. (2006) used hyperspectral RS to map grass weed infestations in wheat late in the season; their accuracies for grass weed patch detection were about 90%.

2.1.2 Fluorescence

Chlorophyll fluorescence of the plant photosystem is an indicator of the effectiveness of photosynthesis. The fluorescence intensity shows a typical temporal change after saturation of the photosynthetic system with light, called the Kautsky effect. Kautsky functions indicate the healthiness of the plants but can also be used to distinguish species, due to the different leaf structure and leaf angle of grasses and dicotyledons. The fluorescence effect can be used to distinguish living plants from other objects and may lead to methods for species discrimination. A problem for online weed identification is the time of measurement, since the effect is best explored when measurements are taken over a certain period (seconds to minutes). Current research tries to shorten the measurements, which may lead to suitable sensing equipment for online species discrimination in the future. Keränen et al. (2003) reduced the measurement time by shortening the pre-measurement dark adaptation period to times practicable under field conditions. They were able to distinguish six species using a neural network classifier.

2.2 Location and Temporal Properties

The location of plant species can be used to identify them. Most weeds occur in patches within a field (Heijting et al. 2007), and their locations were found to be stable over years. This effect is due to persistent seed banks in the soil and variable germination conditions. The germination rate is higher in areas with a high seed density. Perennial weeds have additional vegetative reproduction organs such as rhizomes, tubers and roots, from which the plants regenerate (e.g. Convolvulus arvensis, Cyperus esculentus, Cirsium arvense, Agropyron repens). Therefore, patches of perennial weeds were found to be the most aggregated and stable. Historical maps can be used to predict the occurrence of weeds (Dille et al. 2002, Mortensen 2002). This information is especially useful for pre-emergence herbicide applications.

The position of weeds can also be helpful on a smaller scale, the plant level. In row crops, weeds can be detected between the rows, since no crop plant is expected to grow there. Sensors detecting green plants between the rows have successfully been used for this purpose (Åstrand and Baerveldt 2004). Slaughter et al. (2008) described robust weed detection as a primary obstacle for robotic weed control technology and reviewed the approaches for weed detection as well as actuator technology.

Several image processing approaches for row detection have been proposed, most of them using standard RGB images. Bossu et al. (2009) determined crop rows for intra-row weed detection, and Jones et al. (2007) developed a system to create artificial images for testing weed detection algorithms in crop rows. Bakker et al. (2008) used a Hough transformation to detect linear structures in images to find the rows. Åstrand and Baerveldt (2004) modelled Gaussian location probability functions for the crop plants in the row and located weed plants at positions with low probability values, either between the rows or within the row between crop plants. Burgos-Artizzu et al. (2008) used large row spacing (barley) and the column sums of the intensities to determine crop rows; additional (expert) knowledge about the scenes was used to determine optimal parameters for the image processing and feature extraction process.
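The column-sum idea can be sketched as follows. This is a simplified illustration, not the cited implementation: it assumes a binary plant/background image with plants as 1 and roughly vertical rows, and the parameters n_rows and min_sep are hypothetical:

```python
import numpy as np

def crop_row_columns(binary, n_rows, min_sep=20):
    """Locate crop row positions as peaks in the column sums of a
    binary plant image: columns covered by a crop row accumulate
    many plant pixels, columns between rows accumulate few."""
    sums = binary.sum(axis=0).astype(float)
    order = np.argsort(sums)[::-1]  # columns by descending plant count
    rows = []
    for col in order:
        if sums[col] <= 0:
            break  # only background left
        # keep a peak only if it is far enough from already found rows
        if all(abs(int(col) - r) >= min_sep for r in rows):
            rows.append(int(col))
        if len(rows) == n_rows:
            break
    return sorted(rows)
```

Foreground pixels that fall outside the detected row columns (beyond some tolerance) would then be candidate weed positions.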

2.2.1 Morphological Properties

The morphology of the plants is important for the determination of the species by a human expert. Dicotyledons and monocotyledons have a different morphology, e.g. the number of cotyledons and the structure, compactness and diameter of the leaves, which contribute to the overall appearance.

The third dimension can provide information about the orientation of the leaves, the height above ground and the leaf structure. The three-dimensional (3D) structure of the plants is a feature which has not yet been investigated often. The reasons are that the acquisition of suitable 3D data is computationally intensive or requires special 3D measuring equipment, which has become available only in recent years. Chapron et al. (1999) and Andersen et al. (2005) proposed a stereo vision method, extracting height information from two aligned images. The height information can be used to detect overlapping of leaves and can help to separate leaves above others from the ones below.

2.2.2 Overlapping

Occlusion and overlapping are among the main problems for all image processing approaches. The plants in the images, especially long-leaved ones like cereals and grass weeds, tend to overlap. Overlapping leaves are segmented as one object, since they form connected regions whose parts belong to different plants. It is difficult to detect and separate these leaves from each other, since context information is necessary to reassemble the occluded leaf shapes and assign them to plants. The mentioned 3D approaches provide segment information directly, and a few 2D image processing techniques have been used to overcome this situation (Søgaard and Heisel 2002, Manh et al. 2001, Neto et al. 2006a); these approaches are based on heuristics about the occluded parts. Piron et al. (2009) combined stereoscopic multispectral images with height information from a coded structured light technique, which uses a projected known pattern to derive the distance to the camera.

2.2.3 Texture

More general approaches distinguish plant species based on texture, which differs between overlapping broad-leaved and narrow-leaved plants in cluttered conditions. Ishak et al. (2009) presented a texture analysis for images of two weed species (a broad-leaved and a grass weed) at a late growth stage. Weeds in grassland require different approaches: the plants cannot be separated as single plants from the background (soil), because the overall coverage is very high and the plants overlap. But the most important weeds in grassland have leaves with a different morphology (bigger, broader and with a more homogeneous surface). These properties can be quantified by textural analysis of 2D images. Gebhardt and Kühbauch (2007b) segmented the image according to a homogeneity criterion and used textural and colour features to find Rumex obtusifolius, Taraxacum officinale and Plantago major in a grassland plant community with an accuracy of over 70%. Van Evert et al. (2009) used a partial 2D Fourier transformation to determine homogeneous regions, which were identified as the broad leaves of R. obtusifolius. From 3D sensor data, Šeatović (2008) segmented broad leaves and classified them as weeds in grassland. Klose et al. (2008) developed a robot with weed detection capabilities in maize using a sensor fusion approach: a vertical laser triangulation sensor measuring the thickness of the maize plant stem is combined with a horizontally mounted camera viewing the maize row from above to find weeds within the row.

Morphological properties can also be explored with 2D shape features, which is the focus of the following image processing part.

3 Image Processing for Automatic Weed Species Identification

In the following, the general image processing steps will be outlined. Fig. 8.3 shows the workflow of the basic steps: image creation, segmentation, feature extraction and classification.

Fig. 8.3
figure 3

General image processing steps leading from the image to a classification

Fig. 8.4
figure 4

Example for the difference (right) of an infrared (left) and red (middle) image. Plants are bright due to the spectral difference in the red and infrared, background objects like dead material (mulch, stones) disappear in the difference image. Gray values were stretched to increase the contrast for the print version

Imaging sensors like cameras or line sensors deliver 2D images of agricultural fields. These images are the input for the subsequent image processing procedures. Depending on the type of imaging sensor, the resulting images may have to be pre-processed to normalise the values or reduce noise. Noise can be reduced in the original images before segmentation into foreground and background objects takes place. Typical pre-processing steps include filtering with a low-pass filter to minimise the effect of Gaussian noise, or the use of median filters to suppress pixels with outlier values (zero or maximum values).
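A minimal sketch of such a pre-processing chain, using standard filters (the function name and the parameter defaults are illustrative assumptions):

```python
import numpy as np
from scipy import ndimage

def preprocess(image, sigma=1.0, median_size=3):
    """Typical pre-processing: a Gaussian low-pass filter against
    sensor noise, then a median filter to suppress outlier pixels
    (dead or saturated values)."""
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma=sigma)
    return ndimage.median_filter(smoothed, size=median_size)
```

The median filter is applied second here so that any outliers spread by the Gaussian are still damped; in practice the order and the filter sizes depend on the sensor's noise characteristics.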

3.1 Segmentation

A segmentation of the image into regions with homogeneous properties is the next step, which results in a separation of the image according to the measured properties. One or more intermediate images can be created that enhance the contrast between object and background. In this step, homogeneous regions with different gray or colour values are created. This image can be computed using one of the colour indices mentioned before, if colour images are the input, or using texture features, if the image should be segmented according to texture (e.g. grassland images). Fig. 8.4 gives an example of an IR and R difference image (IR−R); the resulting image enhances the plants (bright) while background objects are suppressed (dark). The enhanced image is then separated into foreground and background objects, resulting in a binary (black/white) image.

A threshold can be used to label the enhanced regions (e.g. white), which are above the threshold, and the background (e.g. black). More advanced methods use spatial homogeneity criteria to improve the segmentation (Gorretta et al. 2005). Once the foreground regions have been identified, connected foreground regions can be assembled into objects. Noise may have led to small regions in the thresholding step and can now be filtered using either a size criterion or morphological image processing (Soille 2003). Figure 8.5 shows the result of a segmentation using a threshold and pre-processing steps to reduce noise. Mathematical morphology provides erosion and dilation operators as basic filters for regions. Erosion of a region leads to shrinking; the borders of the region are cut. If an object has a hole (inner borders), this hole will grow bigger. The dilation operation does the opposite: the region grows around the border, and small holes can be closed this way. Both operators can be combined into the so-called opening (erosion, then dilation) and closing (dilation, then erosion) operators. Since both operators are nonlinear, the results of opening and closing are different: opening tends to separate an object at small connections and prune small elongated spikes, while closing can combine regions a small distance apart into one, e.g. leaves which have been separated by the thresholding. Small regions may also disappear in the erosion step of an opening and are then irrecoverably lost in the subsequent dilation. Figure 8.5 (right) shows the result of a morphological closing, leading to connected regions for the dicotyledonous leaves near the centre of the image and the elongated leaves in the top left.
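The thresholding, closing and size-filter chain can be sketched as follows (a simplified illustration with a square structuring element instead of the circular one used for Fig. 8.5; names and defaults are assumptions):

```python
import numpy as np
from scipy import ndimage

def segment(enhanced, threshold, min_area=30, close_size=5):
    """Binarise an enhanced (e.g. IR-R) image and clean it up:
    threshold, morphological closing, then removal of regions
    smaller than min_area pixels."""
    binary = enhanced > threshold
    # closing = dilation followed by erosion; merges nearby parts
    structure = np.ones((close_size, close_size), bool)
    closed = ndimage.binary_closing(binary, structure=structure)
    # label connected regions and keep only sufficiently large ones
    labels, n = ndimage.label(closed)
    sizes = ndimage.sum(closed, labels, range(1, n + 1))
    keep = np.isin(labels, np.nonzero(sizes >= min_area)[0] + 1)
    return keep
```

The output is a boolean mask whose connected regions are the candidate plant objects for the following feature extraction.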

Fig. 8.5
figure 5

Binarisation and preprocessing of the difference image in Fig. 8.4. Left: the result of the thresholding; right: the result after applying morphological operators (closing with a circle of 5 pixels diameter) and area size selection (regions with more than 30 pixels), as well as discarding regions which are cut by the image border. Foreground objects are black, the background is white

Morphological operators were used by Hemming and Rath (2001) to extract broad leaves from scenes with overlaps. Pérez et al. (2000) used morphological operators to separate the germination leaves of dicotyledonous weeds and analyse the shape of each leaf.

The resulting blobs are the objects of interest for the following feature extraction. Shape, texture or colour features (the latter derived from the input image) describe the properties of each foreground object in the image. These features are used for a classification of each object in the image.

3.2 Shape-Based Weed Discrimination

Several researchers used shape features to discriminate weed and crop (Gerhards and Christensen 2003, Åstrand and Baerveldt 2004, Berge et al. 2008). The shape features were derived for each connected foreground region. Image processing techniques provide a set of commonly used shape features. One of the simplest features to describe a region is its size, expressed either in number of pixels or scaled by the ground resolution. There may be objects of different size but with similar overall shape characteristics (geometrically similar). Therefore, shape descriptors have been developed which are invariant to the size of the region. Two other properties are often not relevant for the shape description: the position and the orientation of a region within the image. Certain shape descriptors are normalised and invariant to translation, rotation and size. Some well-known invariant features are derived from statistical moments of the pixel distribution (Hu features; Hu 1962). This type of feature is called region-based, since it is derived from the spatial distribution of the region pixels.

Other features are computed from the outline of a region, given by the border pixels that have neighbouring background pixels. Since the border of an object is a closed contour, a periodic representation can be derived (either using a chain code or polar coordinates; see Jähne 2001 for details). Fourier analysis can be used to analyse the periodic representation (Neto et al. 2006b). The resulting parameters are the phases and amplitudes of periodic functions, which can easily be normalised to translation-, rotation- and size-invariant parameters, since this information is located only in the first two of them. The lower-order parameters describe the overall shape of the object, and the higher-order parameters contain information about the small-scale curvature changes of the contour (notches and small convexities). A curvature description can be derived from the contour; if it is computed at different scales (by smoothing), this is called a CSS (curvature scale space) representation (Mokhtarian et al. 1996). Zhang and Lu (2004) review shape description techniques and distinguish between region-based and contour-based ones.
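The normalisation described above can be sketched for a contour sampled as complex numbers (a minimal illustration, assuming evenly sampled contour points; the function name and the number of coefficients are assumptions):

```python
import numpy as np

def fourier_descriptors(contour, n_coeffs=10):
    """Translation-, rotation- and scale-invariant Fourier shape
    descriptors: the closed contour (N x 2 array of x, y samples)
    is transformed, the DC term (position) is dropped, amplitudes
    are divided by the first harmonic (size), and phases are
    discarded (rotation / start point)."""
    z = contour[:, 0] + 1j * contour[:, 1]  # complex contour signal
    amps = np.abs(np.fft.fft(z))
    return amps[1:n_coeffs + 1] / (amps[1] + 1e-12)
```

Because position sits only in the DC coefficient, size in the overall amplitude scale, and rotation in the phases, translated, scaled or rotated copies of a shape yield the same descriptor vector.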

We also found skeleton features helpful for the discrimination of plant species (Weis and Gerhards 2007). The skeleton is the central line (also called the core) of a region and can be derived from a distance transform of the region or by morphological operators (Soille 2003). A distance transform assigns a distance value to each region pixel: the shortest distance to the contour. Local maxima form a line which is located in the middle of the object with maximum distance to the borders. Statistical measures (mean, maximum, variance, number of pixels) of these maxima yield a thickness description of the shape, which is especially useful to discriminate broad- and narrow-leaved species, since the core of a broad leaf has a bigger distance to the border than elongated, thin leaves. Figure 8.6 shows the distribution of four different classes in the feature space of two skeleton features. These features are well suited to discriminate these classes, since the classes form clusters in the feature space.
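A rough sketch of such thickness features, approximating the skeleton by the local maxima of the distance transform (this is an illustrative simplification, not the implementation of the cited work):

```python
import numpy as np
from scipy import ndimage

def thickness_features(region):
    """Thickness statistics along a region's central ridge.

    region: boolean mask of one segmented object.
    """
    dist = ndimage.distance_transform_edt(region)
    # pixels at a local maximum of the distance map approximate the core
    ridge = (dist == ndimage.maximum_filter(dist, size=3)) & region
    core = dist[ridge]
    return {
        "mean_dist": float(core.mean()),  # mean half-thickness
        "max_dist": float(core.max()),    # broadest point
        "var_dist": float(core.var()),
        "n_core": int(core.size),         # core length in pixels
    }
```

A broad leaf yields large core distances, whereas a thin grass leaf yields core distances near one pixel, which is exactly the separation visible in Fig. 8.6.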

Fig. 8.6
figure 6

Left: skeleton of image in Fig. 8.5. Right: two skeleton features (size and mean distance to leaf border) for Hordeum vulgare (HORVU), monocotyledons (MOKOT), Brassica napus (BRSSN) and dicotyledonous weeds (DIKOT) in the feature space

There also exist ‘high-level’ shape descriptions that involve models for the shape description and try to fit the model to the shape. Søgaard and Heisel (2002) and Manh et al. (2001) used active shape models and deformable templates, respectively, for species discrimination. Templates of various shapes are generated and parametrised (these parameters are the features), and the deformations necessary to match the templates to the shape lead to a similarity measure: the more a model has to be deformed to fit the shape, the higher the dissimilarity. One problem with these models is the comparatively high complexity of the description, leading to a high-dimensional search space of the parameters and therefore a high computational load. On the other hand, these models can deal with partial occlusion.

3.3 Classification

All numeric features can be combined into feature vectors. The corresponding feature space has as many dimensions as there are features and is usually high-dimensional. A high dimensionality of the feature space, set against a relatively low number of training samples, exposes the problem that the samples are ‘vanishing’ in the space, which can decrease the performance of a classifier; this is known as the ‘curse of dimensionality’. Features without discriminative ability for the problem introduce noise into the classification process. Therefore, a feature selection process should be performed before classification, aiming at the reduction of the number of features to the most relevant ones. Combinations of features can lead to new features with higher discriminative ability. An example of the combination of features are the spectral indices (see Eq. 1), which combine the amplitude values of different wavelengths into a new value. A popular feature selection algorithm is discriminant analysis (Cho et al. 2002, Borregaard et al. 2000, Gebhardt and Kühbauch 2007a, Neto et al. 2006b).

The classification is the last step of the analysis. Classification algorithms can be grouped into unsupervised classifiers, also known as clustering, and supervised classifiers. Unsupervised classifiers use the feature vectors without additional information and create groups of similar objects according to a distance measure of the vectors in the feature space. These groups are called clusters and may correspond to classes of the problem. A supervised classifier has to be trained with prototype information, i.e. selected feature vectors of known class. Classifiers compare the features of unknown objects to the trained ones and assign a class. The number of classification algorithms is large, ranging from simple algorithms like kNN (k-nearest-neighbour), which uses the training data directly, to complex functions and function systems like neural networks, tree classifiers or support vector machines, which generate a classifier model from the training set and use it for the classification. Cho et al. (2002) successfully trained neural networks, Pérez et al. (2000) used Bayes rules and a nearest-neighbour classifier with shape features, and Burks et al. (2005) used neural networks to classify texture features.
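As the simplest of the supervised classifiers mentioned above, kNN can be written in a few lines (a minimal sketch; ties between classes are broken arbitrarily here):

```python
import numpy as np

def knn_classify(train_X, train_y, x, k=3):
    """k-nearest-neighbour classification: assign the unknown
    feature vector x the majority class among its k closest
    training samples (Euclidean distance in feature space)."""
    d = np.linalg.norm(train_X - x, axis=1)     # distances to all prototypes
    nearest = train_y[np.argsort(d)[:k]]        # classes of the k nearest
    classes, counts = np.unique(nearest, return_counts=True)
    return classes[np.argmax(counts)]
```

In the weed identification setting, train_X would hold the shape feature vectors of labelled prototype regions and x the feature vector of an unknown segmented object.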

A shape-based approach was tested by Oebel (2006) under field conditions; the classification accuracies were suitable for the creation of application maps. Table 8.1 shows the detailed results for Zea mays and Hordeum vulgare crops using discriminant analysis.

Table 8.1 Confusion matrices (predicted and true class in percent) for Zea mays (corn, ZEAMX, left) and Hordeum vulgare (spring barley, HORVS, right), taken from Oebel (2006)

An example for a classification with shape features (region-based, Fourier and skeleton features ) is shown in Fig. 8.7. The image was composed of samples from several IR-R difference images. A small training set was created containing prototypes of the species. Nine different species have been classified using a radial basis function network classifier. The objects in the image were labelled according to the classification result.

Fig. 8.7
figure 7

Labelled image, each region is labelled with the classification result (the species)

The shape based approach has its limitations due to the number of plant species and the shape variability within different growth stages of each species. A class scheme was developed (Weis and Gerhards 2007) for these variations and used to create training data for various weed and crop species.

4 Conclusions

The automation of weed detection in the field is a very challenging task and a current research topic of several working groups. The complexity of this task originates in the variability of the plant species in the field. Several plant properties have been presented which can be used to distinguish species, and approaches and results achieved with available sensor technology were reviewed. Some sensors have already been used successfully for weed detection and discrimination under controlled conditions and in field experiments, but as yet there is no general best practice, especially under changing conditions within the field. The combination of different techniques might lead to robust solutions in the future. Sensor fusion and integrative analysis of multiple sensor data could improve the weed detection rate and also benefit other precision-farming technologies. Commercial products such as special sensors and analysis equipment for this task have yet to be developed. Once such systems are available, the weed infestation can be assessed for site-specific management and population dynamics research, adding valuable data for precision farming applications and decision support systems.