
1 Introduction

From the advent of GIS in the 1960s, the need for land cover and land use (LUC) mapping has grown steadily. This is particularly evidenced by the creation of national and international mapping programs such as Copernicus, and by the updating of global land cover/use repositories such as Corine Land Cover (CLC), Urban Atlas or, in France, the RGE (Référentiel à Grande Échelle) from the IGN (Institut Géographique National).

Currently, CLC is the European land cover reference for many studies at a scale of 1:100,000. However, its scale and/or its typology are often not suitable for local territorial management. With the development of Very High Resolution (VHR) sensors since 2000 and newly available drone technology, studies now call for scales between 1:10,000 and 1:25,000, more specific nomenclatures and faster data refreshment. Maps meeting these recent requirements in environmental studies or mapping surveys are produced mainly by manual digitizing of aerial and/or satellite imagery. This method can be extremely expensive in terms of both time and money, as evidenced in rural areas, where GIS databases related to land cover or land use are far less numerous than urban GIS databases. Therefore, questions about the accuracy and updating of spatio-temporal data over large areas have become essential research topics.

In the last decade, a new automatic image processing method called Geographic Object-Based Image Analysis (GEOBIA) has been developed to exploit VHR data and to improve land mapping (Blaschke 2010). According to Hay and Castilla (2008), GEOBIA is

“a subdiscipline of Geographic Information Science (GIScience) devoted to developing automated methods to partition remote sensing imagery into meaningful image-objects, and assessing their characteristics through spatial, spectral and temporal scales, so as to generate new geographic information in GIS-ready format.”

While traditional pixel-based classification approaches have been widely used to map general land cover and detect changes for urban, forest, water or agricultural monitoring from high and low resolution satellite sensors (Landsat, SPOT), object-oriented approaches depend not only on pixel information (Blaschke and Strobl 2001) but also on spatial image information to extract and identify land use features and man-made structures such as agricultural parcel size, house shape or forest texture (Blaschke 2010). These approaches support finer mapping typologies and achieve satisfactory results when applied to VHR images (Schiewe et al. 2001; Baatz et al. 2004). The object-based approach aims to replicate the approach of classical manual digitizing (Blaschke and Strobl 2001; Schiewe et al. 2001; Benz et al. 2004; Kim et al. 2009). GEOBIA consists of two steps:

  • Segmentation: extraction of different homogeneous areas from the image (agricultural land, forest, urban area).

  • Classification: characterization of previously identified areas from spectral and spatial information.

Although GEOBIA has become extremely popular in image processing for many reasons, there is still an important gap between the use of this new paradigm for LUC mapping from VHR images in research and its application in operational studies. The GEOBIA community is still fragile because it lacks protocols, formats and standards for robust segmentation (Hay and Castilla 2008; Kim et al. 2009; Arvor et al. 2013). It is difficult for a project manager to find an easy and efficient solution for mapping expectations that are rarely the same (data, localization, time, aims etc.). All these aspects still lead consulting firms to choose manual approaches rather than semi-automatic solutions, mainly because of a lack of skills and the fear of committing to an expensive method with uncertain results. Currently, human photo-interpretation seems the safer option.

In their chapter, Hay and Castilla (2008) offered an analysis of the current state of GEOBIA and described the weaknesses and threats involved in a GEOBIA project. Among the weaknesses, several important points stand out. Image segmentation is an ill-posed problem because its result depends on the choice of the segmentation scale parameter. As with manual digitizing, object delimitation will differ between successive image segmentations (Arvor et al. 2013). Moreover, the relationship between image-objects and landscape-objects rests on empirical acceptance. Given this, we can ask the following questions: What is a good image segmentation? And how can we obtain it for an operational mapping application?

In this chapter, we propose to analyse the different steps of image segmentation. We have identified three limiting points in the image-segmentation process for use in an operational LUC mapping application, leading to the following questions:

  • What is a suitable segmentation scale parameter?

  • How can the geometry of image-objects be simplified?

  • How can oversegmentation be reduced?

Section 2 considers the key points of image segmentation and these limitations. In Sect. 3, the materials and the method we propose are presented. Then, the results obtained on each of these points are presented and discussed in Sect. 4. Finally, the discussion and conclusions are given in Sect. 5.

2 Image Segmentation

2.1 Image-Objects

Object-based image analysis starts with image segmentation, which creates “image-objects” (or segments). Castilla and Hay (2008) defined an “image-object” as

“a discrete region of a digital image that is internally coherent and different from its surroundings.”

In theory, an image-object has no thematic meaning. It is simply a “discrete” or “unique” entity with internal coherence and external contrast with neighbouring objects, based on colour, tone, texture, shape or size patterns. In practice, a GEOBIA expert has their own idea (implicit or not) of the image-objects they want. Indeed, an image-object results from a choice of parameters based on expert knowledge and experience; it is therefore a subjective concept. No unique partitioning solution exists, and image segmentation methods are rarely transferable (Arvor et al. 2013). Nevertheless, image segmentation is a crucial step that influences classification accuracy and quality (Dorren et al. 2003; Meinel and Neubert 2004; Kim et al. 2009).

2.2 Image Segmentation Process

There are two main approaches to the image segmentation process: the top-down method, directed by knowledge, and the bottom-up method, directed by data (Baatz et al. 2004). The first method assumes that the object of study in the image is known; the model tries to find the best way to extract it. It is used to identify one or a few landscape elements. The second, bottom-up method assumes that the study objects are not well known. Image-objects are generated across the whole image without prior knowledge of targets. This method can be considered a clustering method, which means that image-objects have no thematic meaning; at this stage, it is best to call them “primitive image-objects” (Thomas 2005). Identification of these “primitive image-objects” is then performed by the user. Generally, the bottom-up method is used to map large LUC classes. It can be used to summarize pixels according to criteria of homogeneity and heterogeneity (Baatz et al. 2004). Our research was based on a bottom-up approach because it seems appropriate for meeting most mapping expectations.

However, as pointed out above, there are some limitations to the use of image segmentation. The three main limitations that affect a project manager performing land cover mapping are considered in turn as follows:

2.2.1 What Is the Suitable Segmentation Scale Parameter?

The image segmentation process requires different configuration settings depending on the software used. We used eCognition® Developer software. In this image analysis software, the segmentation scale parameter is the most important: it corresponds to the level of pixel aggregation and is expressed as the allowable limit on heterogeneity. The higher the scale factor, the larger the image-objects.

Nowadays, finding the optimal parameter for image segmentation remains problematic. There is no efficient method to determine a suitable scale for image segmentation according to the aims of a study (Kim et al. 2008; Drǎgut et al. 2010). Usually, the scale parameter is evaluated by feedback on the study area and image; around ten segmentations are necessary before confirming one (Meinel and Neubert 2004; Thomas 2005; Kim et al. 2008). Validation is often empirical (visual quality of objects, consistency with the aims of the study). Nevertheless, several semi-automatic evaluation methods for image segmentation exist. Supervised evaluation methods compare image segmentation with references usually produced by human interpretation (Neubert and Herold 2008; Marpu et al. 2009). Recently, much effort has been devoted to unsupervised evaluation methods that auto-adjust segmentation parameters (Zhang et al. 2012). Most of them are based on a variance measure for each image-object; these values are averaged into a global measure that indicates image segmentation suitability (Woodcock and Strahler 1987; Kim et al. 2008; Drǎgut et al. 2010; Zhang et al. 2012; Drǎgut et al. 2014).
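
Such a global measure is easy to make concrete. The following is a minimal R sketch, not taken from the cited papers: it assumes image-objects have been exported with a hypothetical per-object spectral standard deviation (sd) and an area column, and averages them (area-weighted) into one suitability indicator per segmentation.

```r
# Minimal sketch of an unsupervised, variance-based global measure
# (in the spirit of Woodcock and Strahler 1987; Kim et al. 2008).
# 'objects' is a data frame of image-objects from one segmentation,
# with hypothetical columns: sd (per-object spectral SD) and area.
global_lv <- function(objects) {
  weighted.mean(objects$sd, w = objects$area)  # area-weighted mean SD
}

# Toy example: larger objects dominate the global measure.
toy <- data.frame(sd = c(4.2, 9.8, 5.1), area = c(1200, 300, 900))
global_lv(toy)
```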

Perfect image segmentation does not exist; selecting a good segmentation is often a compromise between oversegmentation and undersegmentation (Castilla and Hay 2008). Both terms are subjective notions, since they are defined by the practitioner’s interpretation. Oversegmentation refers to a low spatial and spectral difference between several contiguous image-objects that should be merged; this phenomenon is accentuated when the segmentation scale parameter is low. By contrast, undersegmentation refers to high spatial and spectral heterogeneity within an image-object, which then contains several landscape elements. However, we consider oversegmentation less problematic than undersegmentation, because in practice post-processing aggregation of image-objects is easier than splitting them. Thus, a slightly oversegmented result seems to be a better segmentation (Castilla and Hay 2008).

2.2.2 How Can the Geometry of Image-Objects Be Simplified?

VHR images are a valuable resource for LUC mapping. Thanks to submetric accuracy, photo-interpreters can visually detect and identify small landscape entities (isolated dwellings, hedge networks, isolated trees). However, during the digitizing process, the photo-interpreter does not create image-objects according to the pixel resolution but according to the Minimal Collection Unit (UMC), the minimum size of objects, often expressed in square metres and usually imposed by the study’s aims. Moreover, a digitizing scale is set, which prevents vertices from being placed too close together. Usually, manual digitizing from submetric images has metric precision (between 5 and 10 m). In an automatic image segmentation approach, the delimitation of image-objects follows the pixel resolution and generates a “tread of a stair” effect, which is unpopular with project managers. Geometric simplification is therefore necessary for several reasons: first, it preserves homogeneity by removing the need for manual digitizing adjustments; second, the data is lighter (fewer vertices); and last, image-objects are more consistent and easier to handle for users of the map data.

2.2.3 How Can Oversegmentation Be Reduced?

According to Castilla and Hay (2008), a slightly oversegmented result is a good segmentation. Oversegmentation mainly affects large homogeneous areas such as forests, water or herbaceous areas, because the segmentation scale parameter caps the heterogeneity, and therefore the size, of objects. On a single image segmentation it is thus unlikely that a forest of several hectares will be represented by one image-object at the same time as an agricultural plot or a house object of a few hundred square metres. Even if oversegmentation is easier to correct than undersegmentation, it can lead to long and tedious post-processing aggregation.

3 Materials and Methods

3.1 Study Site

The study area is located in the south-east of France. The area, which extends over approximately 30 km, is included in the Pilat regional natural park, 50 km south of the city of Lyon (Fig. 1). It transects a rural area between the lower slopes of the Pilat mountains and the banks of the Rhône river, with a complex plot pattern composed of discontinuous urbanization, vineyards and a fragmented forest configuration. The choice of a study site in a rural area was deliberate. Mapping production generally distinguishes rural areas from urban areas. Indeed, the mapping needs of urban areas are mainly based on land use for the monitoring of urban planning (building permits, transport, activity or industrial areas etc.), and in France there are numerous urban GIS datasets (updated by the IGN or administrative authorities). Nevertheless, the majority of the territory is rural, and there the availability of GIS data is limited. Unlike urban landscapes, landscape changes in rural areas are not always reported to administrative authorities; they may be natural phenomena (landscape closure) or concern agricultural monitoring (undeclared croplands). Thus, VHR remote sensing data represents a major opportunity for rural area monitoring.

Fig. 1 Location map of the study area

3.2 Data

In the scientific literature, GEOBIA processes use specific and expensive VHR remote sensing data (WorldView, QuickBird, SPOT). At present, the availability of satellite images is little known to project owners, and financial budgets rarely allow their purchase. In France, project managers mainly use free data such as VHR aerial photography (BDORTHO® IGN) or free GIS databases (BDTOPO® IGN).

Therefore, our experimental data is the RGB ortho-photography BDORTHO® (IGN, 2010). BDORTHO® is extremely dense aerial photography data, owing to its spatial resolution (50 cm) and its spatial extent (5 \(\times \) 5 km, or 100 million pixels per tile). It is ideal data for human photo-interpretation, allowing small landscape entities to be detected and extracted. However, it is more difficult to handle with automatic image processing, so tiling or multiprocessing solutions must be considered to reduce the processing time (Hay and Castilla 2008). In addition, the spectral resolution of BDORTHO® (three visible bands) is too low for automatic LUC classification (Jappiot et al. 2003).
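
As an illustration of the tiling mentioned above, here is a minimal R sketch (raster package) that splits one ortho-image into a grid of smaller tiles before processing. The file name and the 4 \(\times \) 4 grid are assumptions, not part of the authors’ workflow.

```r
# Minimal tiling sketch for one large BDORTHO(R) tile (raster package).
# The file name and tile-grid size are illustrative assumptions.
library(raster)

r <- brick("bdortho_rgb.tif")   # e.g. 10,000 x 10,000 px at 50 cm = 5 x 5 km
n <- 4                          # split into a 4 x 4 grid of tiles
xs <- seq(xmin(r), xmax(r), length.out = n + 1)
ys <- seq(ymin(r), ymax(r), length.out = n + 1)

for (i in seq_len(n)) {
  for (j in seq_len(n)) {
    e <- extent(xs[i], xs[i + 1], ys[j], ys[j + 1])
    tile <- crop(r, e)          # extract one sub-image
    writeRaster(tile, sprintf("tile_%02d_%02d.tif", i, j), overwrite = TRUE)
  }
}
```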

3.3 Proposed Method

The objective of this study is to propose an enhancement to image segmentation. Our purpose is to always provide a polygonal “base” close to what a photo-interpreter could produce, in far less time. Thus, we explored several working processes to address the image-segmentation limits described in Sect. 2.2 for an operational land cover mapping application.

In this section, we present our image segmentation method, applied to BDORTHO® for LUC mapping, using eCognition Developer® software together with R and PostgreSQL/PostGIS (Fig. 2). Our procedure includes the following steps:

  1. Appropriate segmentation scale parameter estimation

  2. Image segmentation process

  3. Image-objects geometric simplification

  4. Oversegmentation reduction.

These steps are described in the following.

Fig. 2 General methodology for an image segmentation process enhancement

(1) Appropriate Segmentation Scale Parameter Estimation

In this working process, we explore unsupervised methods to evaluate image segmentation parameters. We experimented with a variant of the Estimation of Scale Parameter (ESP) tool. The ESP tool was developed by Lucian Drǎgut (Drǎgut et al. 2010) and can be integrated as a package into eCognition Developer® software.

This tool simulates a series of scale values and provides an indication of the suitability of each segmentation level. It is based on the work of Kim et al. (2008) and explores the relationship between image-object Local Variance (LV) and spatial autocorrelation at different scale parameters. According to Drǎgut et al. (2010), as the scale parameter increases, image-objects grow and their Standard Deviation (SD) increases, with small saturation plateaus where image-objects match objects in the real world (forest stands, houses, agricultural parcels...). To detect these breaks, the tool combines LV with another indicator, the Rate of Change (ROC), which measures the LV change between two consecutive scale values. Peaks of the LV-ROC curve indicate the scale values at which the image can be segmented in the most appropriate manner (Drǎgut et al. 2010).
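
The ROC indicator is straightforward to compute. Below is a hedged R sketch of the published formula, \(ROC_L = (LV_L - LV_{L-1}) / LV_{L-1} \times 100\); the LV curve here is synthetic (a logarithmic trend, cf. Sect. 4), standing in for the values produced by the scale simulation.

```r
# Rate of Change (ROC) of Local Variance between consecutive scale levels:
# ROC_L = (LV_L - LV_(L-1)) / LV_(L-1) * 100 (Dragut et al. 2010).
scales <- seq(50, 300, by = 5)     # simulated scale parameters (cf. Sect. 4)
set.seed(1)
lv <- log(scales) + rnorm(length(scales), sd = 0.02)  # synthetic LV curve

roc <- c(NA, diff(lv) / head(lv, -1) * 100)  # % change from previous level

# Local maxima of the LV-ROC curve: candidate segmentation scale values.
peaks <- scales[which(diff(sign(diff(roc))) == -2) + 1]
peaks
```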

The simulation of a series of segmentation scale values seems a good way to study segmentation performance, and it also provides information about the spatial structures of the landscape. However, in a complex landscape configuration it is difficult to justify the choice of a unique scale value for several geographic entities (houses, agricultural parcels, forest) from the LV-ROC curve. Thus, we developed a variant of the ESP tool that proposes a range of appropriate segmentation scale values.

The LV profile has a logarithmic trend: as the scale parameter increases, LV increases until global saturation is reached. Assuming that a good segmentation is a slight oversegmentation (Castilla and Hay 2008), we can define LV curve saturation as a relative stabilization of image-object creation. We therefore tried to detect scale values just before LV curve saturation.

We used Hubert segmentation (Hubert 2000), a method that finds “segments” in the LV curve. Segment locations in the LV curve were detected with R software, using an adaptation of the original R code. Post-processing was performed to identify the LV saturation segment; once it was identified, we applied another Hubert segmentation to this segment to isolate a specific rupture just before LV saturation. The results are presented in Sect. 4.
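
Hubert’s (2000) procedure itself is not shown here. As a hedged stand-in, the sketch below detects piecewise-constant segments of the LV curve with binary segmentation from the changepoint package, then treats the last detected break as the start of saturation. It illustrates the logic, not the authors’ actual code.

```r
# Stand-in for Hubert (2000) segmentation of the LV curve: detect breaks
# in the mean with binary segmentation (changepoint package). This is an
# illustrative substitute, not the authors' adaptation of Hubert's code.
library(changepoint)

scales <- seq(50, 300, by = 5)
set.seed(1)
lv <- log(scales) + rnorm(length(scales), sd = 0.02)  # synthetic LV curve

fit <- cpt.mean(lv, method = "BinSeg", Q = 5)  # at most 5 breaks in the mean
breaks <- scales[cpts(fit)]                    # scale values at the breaks

# Approximation: take the last break as the start of LV saturation, then
# re-run the same detection on that segment to isolate the rupture just
# before saturation (the range of appropriate scale values).
sat_start <- max(breaks)
```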

(2) Image Segmentation Process

The image segmentation process was performed with eCognition Developer® software (Fig. 3). Three steps are necessary, as described in the following.

Fig. 3 Image segmentation process with eCognition Developer® software. a raw image; b roads and railways segmentation (red); c export image segmentation

  (a) Roads and railways segmentation. The compatibility of LUC mapping with other geographic reference data, used in many studies, is very important. Road and rail networks from BDTOPO® IGN are often used as a “polygonal skeleton” onto which the image segmentation is grafted. There are several reasons for this. First, BDTOPO® is complete and accurate GIS data that references much information, including all transport networks as polylines and their associated spaces as polygons; each polyline is identified and prioritized according to an importance level (levels 1–5). Second, these features strongly structure the landscape and are difficult to digitize (because of tree cover or road narrowing). This approach was applied to our study area.

  (b) Image segmentation. This was performed with the eCognition® multiresolution segmentation algorithm, using the segmentation scale parameter determined in the previous working process.

  (c) Export of the image segmentation. eCognition® converts the image segmentation into a vector file (shapefile). Several descriptors (spectral, spatial or textural) can be calculated per image-object and exported into the attribute table of the vector file. This step is important for the fourth step of the process, concerning oversegmentation reduction.

(3) Image-Object Geometric Simplification

The objective of this working process is to simplify the image-objects’ geometry, erasing the “tread of a stair” effect without generating topological errors, and to reduce data size.

Several commercial software applications such as eCognition Developer® or ArcGIS® offer image-object geometric simplification with varying efficiency, depending on the software licence level. We decided to develop a script based on R and PostGIS.

Our scripts build on Sandro Santilli’s SQL simplification code (Santilli 2013; http://strk.keybit.net/blog/). With them, we can apply different geometric simplifications to different image-objects; for example, we simplified road and rail objects with a 1 m tolerance and other image-objects with a 2 m tolerance. Moreover, our SQL script was optimized to reduce processing time. Geometric simplification remains a relatively long process, but a necessary one. The results are presented in Sect. 4.
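
To give an idea of the class-dependent tolerances, here is a hedged R/PostGIS sketch. The table and column names (segments, geom, class) and the connection details are assumptions, and plain per-polygon ST_SimplifyPreserveTopology is a simplified stand-in: it keeps each polygon valid but does not preserve boundaries shared between neighbours, which is precisely what Santilli’s shared-edge script handles.

```r
# Hedged stand-in for the authors' R/PostGIS simplification script.
# Table and column names ('segments', 'geom', 'class') are assumptions.
library(DBI)

con <- dbConnect(RPostgres::Postgres(), dbname = "lucdb")  # assumed database

# 1 m tolerance on network objects (from the accurate BDTOPO(R) skeleton)...
dbExecute(con, "
  UPDATE segments
  SET geom = ST_SimplifyPreserveTopology(geom, 1.0)
  WHERE class IN ('road', 'rail');")

# ...and 2 m on all remaining image-objects.
dbExecute(con, "
  UPDATE segments
  SET geom = ST_SimplifyPreserveTopology(geom, 2.0)
  WHERE class NOT IN ('road', 'rail');")

dbDisconnect(con)
```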

(4) Oversegmentation Reduction

Generally, at this stage of the object-based approach, image segmentation is followed by the assignment of land cover or land use classes to image-objects. To reduce oversegmentation, contiguous image-objects of the same class are aggregated to form the final image segmentation. The classification is either a computer-aided photo-interpretation process or a semi-automatic process. The former is very long and tedious; the latter is generally inefficient, for several reasons. First, defining decision rules from spatial, spectral or textural descriptors for many classes is very difficult; the results usually depend on sampling or on human-defined thresholds, which generates much class confusion. Second, the spectral resolution of RGB aerial photographs (BDORTHO®) is not suited to a semi-automatic classification process. Last, decision rules are usually applied to the entire image without taking into account the local variability of the landscape.

Spatial autocorrelation is a property often observed in spatial data: two close spatial entities are more similar than two distant ones. This notion is widely recognized in geography (Griffith 1987). Applying it to the oversegmentation problem, we hypothesize that two neighbouring objects with close descriptors are likely to belong to the same thematic class and should be aggregated into a single object. The descriptors are indicators or properties used to describe each image-object; eCognition Developer® can generate a large number of them, of which we selected only two types. First, colour descriptors refer to the relative brightness or colour of the image-object. These descriptors are important for extracting image-objects; moreover, tone variations allow some shapes or textures to be identified (Provencher and Dubois 2007). Second, texture descriptors refer to the frequency variation and the arrangement of colour tones in the image. For example, we can distinguish oriented textures, such as cropland or vineyard entities, from homogeneous textures, such as forest entities (Bloch et al. 2004; Caloz and Collet 2001). The use of texture descriptors is particularly relevant with VHR images (Lefebvre et al. 2008). We used texture descriptors derived from the co-occurrence matrices developed by Haralick et al. (1973).
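
The authors computed these descriptors inside eCognition Developer®; as an illustration only, the R sketch below derives a few Haralick co-occurrence statistics from one band of the ortho-image with the glcm package. The package choice, file name, window size and statistics list are assumptions.

```r
# Haralick (1973) co-occurrence texture descriptors on one image band.
# Illustrative substitute for eCognition's texture features; the file
# name, window size and chosen statistics are assumptions.
library(raster)
library(glcm)

band <- raster("bdortho_rgb.tif", band = 1)   # red band of the ortho-image

textures <- glcm(band,
                 n_grey     = 32,             # grey-level quantization
                 window     = c(5, 5),        # moving-window size
                 shift      = c(1, 1),        # co-occurrence direction
                 statistics = c("homogeneity", "contrast", "entropy"))

# 'textures' is a multi-band raster; zonal means per image-object can then
# be joined to the segmentation's attribute table as texture descriptors.
```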

This working process, performed with the R/PostGIS script, was structured as follows. (1) The script identifies the neighbours of each image-object. (2) Pre-processing is performed on “urban image-objects”: these usually exhibit large pixel heterogeneity (houses, parking areas, gardens, trees), so their descriptors have a chaotic distribution. To isolate these objects, building information from BDTOPO® IGN (usually available) is intersected with the image-objects according to a building density threshold; an “urban object” can then only be aggregated with another “urban object”. (3) To aggregate neighbouring image-objects considered similar, we explored a statistical multi-criteria analysis, namely a Principal Component Analysis (PCA), which allowed us to study distances between image-objects. Instead of analysing the distances between all pairs of image-objects, we clustered the PCA results and, within each cluster, determined a maximum distance below which two image-objects are considered similar. The results are presented in the following section, after the sketch below.
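
A compact R sketch of step (3) follows, under stated assumptions: desc is a toy table of per-object descriptors (the real ones come from the eCognition export), nb is a table of contiguous pairs from the PostGIS neighbour query, and prcomp/kmeans with a fixed distance threshold stand in for the authors’ multi-criteria analysis and per-cluster thresholds (the 0.9 value echoes Sect. 4).

```r
# Sketch of step (3): PCA on image-object descriptors, clustering, then
# aggregation of similar neighbours. 'desc' and 'nb' are assumed inputs;
# descriptor names and values are illustrative.
set.seed(42)
desc <- data.frame(id = 1:6,
                   brightness  = c(80, 82, 81, 150, 152, 90),
                   homogeneity = c(0.90, 0.88, 0.91, 0.40, 0.42, 0.70))
nb <- data.frame(a = c(1, 2, 4, 5), b = c(2, 3, 5, 6))  # contiguous pairs

pca    <- prcomp(desc[, -1], scale. = TRUE)   # descriptors to PC space
scores <- pca$x[, 1:2]
cl     <- kmeans(scores, centers = 2)$cluster # coarse clustering

# Merge two neighbours only if they fall in the same cluster and their
# distance in PC space is below the similarity threshold.
d <- as.matrix(dist(scores))
threshold <- 0.9                              # similarity threshold (Sect. 4)
merge_ok  <- cl[nb$a] == cl[nb$b] & d[cbind(nb$a, nb$b)] < threshold
nb[merge_ok, ]                                # neighbour pairs to aggregate
```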

4 Results

In this section we present the results of each working process for the study area described in Sect. 3.

In Fig. 4 we can observe the results of the ESP tool variant used to determine a range of appropriate segmentation scale values. The first graph (Fig. 4a) presents the evolution of the LV curve (in blue) depending on the segmentation scale values; the simulation ranges from 50 to 300 with a spacing of 5. The red curve represents the detection of segments with Hubert segmentation, and the segment locations are highlighted by vertical markers. The green marker marks the start of LV saturation. We then extracted this first saturation segment and processed a second Hubert segmentation (Fig. 4b). The first segment corresponds to the phase just before saturation, that is, a slight oversegmentation. Thus, we propose this segment, a range of segmentation scale values between 90 and 105, as the most appropriate for processing the segmentation.

Fig. 4 Determination of the appropriate segmentation scale value range

We did not want to propose a single segmentation scale value that we could not justify; we wanted simply to indicate a range of appropriate scale values yielding a slight oversegmentation, given the study area and the remote sensing image.

Then, we processed the image segmentation with eCognition Developer®, using a scale value of 90. The result is presented in Fig. 5. Road and railway networks from BDTOPO® were incorporated into the image segmentation process (red image-objects). Next, we exported the image segmentation with the descriptors needed for oversegmentation reduction in the fourth step, computing each descriptor for each image-object. Table 1 presents the statistics of the vector file. The file is large (28,338 KB) because the number of vertices is very high, which makes it difficult to handle. The processing time was not very long (10 min), but the study area was small.

Fig. 5 Image segmentation output of eCognition Developer® with the communication network

To make this vector usable, we applied geometric simplification to the image-objects. The results are presented in Fig. 6. At this stage, the vector file has a resolution of 50 cm. We processed two levels of geometric simplification. The first level applies to network image-objects: since these come from an accurate database (BDTOPO®), we applied a slight geometric simplification (1 m) to erase the “tread of a stair” effect without losing precision. The second level applies to the remaining image-objects, for which we considered 2 m a good level of simplification: it erases the “tread of a stair” effect and considerably reduces the size of the vector file. Table 1 compares the image segmentation before and after geometric simplification; the number of image-objects is the same in both.

Table 1 Summary statistics of the image segmentation enhancement process
Fig. 6 Geometric simplification of image-objects

Fig. 7 Intersection of the image segmentation with building information (in red) from BDTOPO® IGN

Fig. 8 Identification of “urban image-objects”

Fig. 9 Result of oversegmentation reduction

Fig. 10 Detail of the oversegmentation reduction result

Last, we processed the oversegmentation reduction. First, we identified “urban image-objects” with BDTOPO® (Figs. 7 and 8). Second, we performed the PCA and then the clustering on the PCA result. We analysed the distances between image-objects in every cluster, which allowed us to determine more precisely a maximum distance for considering two image-objects as similar; the similarity threshold was set to 0.9. Figures 9 and 10 show the result of the oversegmentation reduction, and the statistics of the output vector file are presented in Table 1. The output image segmentation has 2,533 fewer image-objects, and the size of the vector file was reduced by 19,372 KB. The reduction operates especially in the forest area, where image-objects are very similar. In open country, the result is more contrasted and the oversegmentation reduction is less efficient than in the forest area (Fig. 10). However, the similarity threshold was deliberately kept low to avoid undersegmentation.

5 Discussion and Conclusions

This chapter presents an automatic image segmentation enhancement for land cover mapping from VHR images. It addresses three limitations of the GEOBIA approach to image segmentation for use in operational LUC mapping applications.

First, the determination of a suitable segmentation scale parameter remains problematic. It is an ill-posed problem: no objective protocol exists for setting a segmentation scale value, and scale selection is currently based on trial and error. To improve on this, and assuming that a good segmentation involves a slight oversegmentation, we proposed guiding the user to an appropriate scale range rather than a single scale value. The method is based on an analysis of LV information over a series of simulated scale values. The results, shown in Fig. 4, present a short range of scale values just before LV saturation; this range suggests scale values that produce a slight oversegmentation of the image. The approach makes the choice of an appropriate segmentation scale value easier, without claiming that it is the best one.

After the image segmentation process, we identified the image-objects’ geometry as a main limit to their use in operational LUC applications: the “tread of a stair” effect in image-object delimitation is problematic. We developed efficient R and PostGIS scripts to solve this problem (Fig. 6). The script is independent and can apply several simplification levels to different image-objects. The processing time remains long, but the step is necessary.

Last, we explored a method to reduce oversegmentation and make the image segmentation easier to use in the classification step. Although oversegmentation is less problematic than undersegmentation, post-processing aggregation of image-objects can be long and tedious. We propose a method based on spatial autocorrelation to automatically aggregate the most similar neighbouring image-objects (Fig. 9). Its main advantages are that it considers the local variability of the landscape and avoids a global LUC classification from ill-adapted remote sensing data such as BDORTHO®. The PostGIS script was tuned to keep the aggregation conservative and avoid an undersegmented result. The method gives satisfactory results in homogeneous areas such as forests or grasslands.

In conclusion, several limitations of the GEOBIA process can lead project managers to consider manual digitizing safer than automatic image segmentation: GEOBIA processes lack operational applications and robust methodologies, and commercially-oriented software is often overly complicated for a non-specialist user, offering many black-box options that cause much confusion. Nevertheless, the GEOBIA approach to capturing features automatically from VHR remote sensing data is a major opportunity for LUC mapping in the future, for a number of reasons: first, it could reduce financial and human production costs; second, it could process large datasets in less time; third, thanks to the increasing number of practitioners in the GEOBIA community, many opportunities will emerge to adapt it to specific mapping needs; and last, it can update geographic data faster.

Our development provides an operational answer to object-based image-segmentation problems for LUC mapping production. It proposes a way to optimize the image segmentation process so as always to provide a “polygonal base” close to human production. Future research will be dedicated to testing other study areas with different landscape configurations. Another major challenge will be testing very large datasets and experimenting with tiling solutions on large study areas. Finally, the proposed scheme could provide an interesting framework for the classification step of LUC mapping.