Article

A Region-Based GeneSIS Segmentation Algorithm for the Classification of Remotely Sensed Images

by Stelios K. Mylonas ¹, Dimitris G. Stavrakoudis ², John B. Theocharis ¹,* and Paris A. Mastorocostas ³

¹ Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, Thessaloniki 54124, Greece
² Laboratory of Forest Management and Remote Sensing, School of Forestry and Natural Environment, Aristotle University of Thessaloniki, Thessaloniki 54124, Greece
³ Department of Computer Engineering, Technological Education Institute of Central Macedonia, Serres 62124, Greece
* Author to whom correspondence should be addressed.
Remote Sens. 2015, 7(3), 2474-2508; https://doi.org/10.3390/rs70302474
Submission received: 14 November 2014 / Revised: 6 February 2015 / Accepted: 15 February 2015 / Published: 3 March 2015

Abstract:
This paper proposes an object-based segmentation/classification scheme for remotely sensed images, based on a novel variant of the recently proposed Genetic Sequential Image Segmentation (GeneSIS) algorithm. GeneSIS segments the image in an iterative manner, whereby at each iteration a single object is extracted via a genetic-based object extraction algorithm. Contrary to the previous pixel-based GeneSIS, where the candidate objects to be extracted were evaluated through the fuzzy content of their included pixels, in the newly developed region-based GeneSIS algorithm a watershed-driven fine segmentation map is initially obtained from the original image, which serves as the basis for the subsequent GeneSIS segmentation. Furthermore, in order to enhance the spatial search capabilities, we introduce a more descriptive encoding scheme in the object extraction algorithm, where the structural search modules are represented by polygonal shapes. Our objectives in the new framework are as follows: enhance the flexibility of the algorithm in extracting objects of more varied shapes, assure high classification accuracies, and reduce the execution time of the segmentation, while at the same time preserving all the inherent attributes of the GeneSIS approach. Finally, exploiting the inherent ability of GeneSIS to produce multiple segmentations, we also propose two segmentation fusion schemes that operate on the ensemble of segmentations generated by GeneSIS. Our approaches are tested on an urban and two agricultural images. The results show that region-based GeneSIS has considerably lower computational demands than the pixel-based one. Furthermore, the suggested methods achieve higher classification accuracies and good-quality segmentation maps compared with a series of existing algorithms.

1. Introduction

In recent years, the growing development and availability of satellite imagery with high spectral and spatial resolution (HSSR) have posed new challenges in the field of land cover classification. An attractive approach that has recently received considerable attention is the incorporation of spatial information to improve the results obtained by traditional pixel-based classifiers. One way to achieve this goal is to extract contextual information from fixed-window neighborhoods around pixels and incorporate it into their feature vector of spectral values. The drawback of this method is that it raises the issue of scale selection, owing to the existence of structures of different sizes within the image. A more effective alternative for integrating spatial information is to perform image segmentation. Segmentation is the partitioning of the image into disjoint regions, such that each region is connected and homogeneous with respect to some homogeneity criterion of interest.
Most existing image segmentation techniques fall into one of the following three categories [1]: clustering/feature thresholding, region growing, and edge detection. Clustering techniques operate in the spectral space, searching for significant modes in the pattern distribution [2,3]. The created clusters are then mapped back to the spatial domain to form the segmentation map. Important issues to be addressed with clustering methods are the determination of the proper number of clusters and the consideration of the spatial association of pixels, which is usually ignored. Region growing methods usually start at the pixel level and merge neighboring objects sequentially until a homogeneity criterion exceeds a user-defined threshold [4,5,6]. The selection of the termination conditions has always been a challenging task in these methods. Usually, a set of different scales is elaborated, giving rise to a hierarchy of coarser-to-finer segmentations. This multi-scale approach has been applied successfully in various remote sensing tasks [7,8,9,10,11]. Nevertheless, the problem of selecting the proper scale becomes of great importance, so that the map adequately represents all components of the different classes. The approach relies on the hypothesis that ground objects of the same land-cover category exhibit similar spectral, textural, and scale characteristics throughout the image, an assumption that does not always hold. Several methods therefore focus on strategies for automatically finding the most suitable scales, so as to avoid manual scale parameter selection [12,13].
Edge-based methods search for discontinuities in the image by examining the existence of local edges. The extracted edges finally enclose the created objects. The watershed transformation is the most commonly used method of this category and has been employed in various segmentation applications [14,15,16,17,18,19]. A significant limitation of watershed is its sensitivity to local variations, which typically results in severe over-segmentation of the image. For this reason, watershed is often incorporated into more sophisticated methods as a preliminary segmentation step. For instance, in [19] the initially created watershed objects are subsequently merged through graph partitioning techniques. To overcome the oversegmentation problem, markers are used in [15,16], while the authors of [17] consider a genetic algorithm to tune the segmentation parameters of the watershed algorithm.
Recently, the pixel-based Genetic Sequential Image Segmentation (GeneSIS) method has been suggested for the classification of remotely sensed images [20]. GeneSIS is a marker-driven iterative segmentation algorithm, whereby the global segmentation problem is broken down into a succession of simpler tasks, i.e., the extraction of a unique object at each iteration. GeneSIS exploits the searching capabilities of genetic algorithms (GAs) with the aim to locate spatially the proper objects to be extracted from the image. The method evaluates the fuzzy content of candidate regions, creating objects that exhibit an optimal balance locally between fuzzy coverage, consistency, and smoothness. GeneSIS exhibits a number of interesting properties such as reduced over-/under-segmentation, adaptive spatial search, and multi-scale search. Despite its high classification accuracies, two demerits of the pixel-based GeneSIS are the increased execution times required for the completion of segmentation and the fact that, under certain parameter settings, the created segments occasionally exhibit rough boundary shapes.
In this paper, we propose a region-based GeneSIS variant of our approach with enhanced capabilities. With regard to the pixel-based GeneSIS, the modifications and extensions introduced here are outlined as follows:
(1)
The original image is initially fine segmented via the watershed transform method with the goal of reducing the noise effect corrupting single pixels. The created watershed objects are now regarded as the structural elements, instead of the single pixels considered in our previous framework. The watershed segmentation map serves as the basis for the operation of GeneSIS algorithm, while the generated segments are considered as a collection of connected watersheds.
(2)
In view of the region-based representation, significant parts of the algorithm are properly redesigned, such as the marker selection and the fitness function calculation, making extensive usage of the region-adjacency graph (RAG) of the initial map. In addition, we develop a fuzzy integral-based decision fusion scheme for the labeling of watershed objects to the various classes.
(3)
At each iteration, GeneSIS evolves a population of structuring elements placed on the image, called the basic search frames (BSFs), which are continuously relocated over the generations, trying to find the best object for extraction. In a preliminary view of the region-based GeneSIS proposed herein [21], the BSFs were represented by rotating rectangles of varying size. In the new proposal, we adopt a more flexible polygonal representation of the BSFs, which enables GeneSIS to extract irregularly shaped ground structures. A special tuning operator is also designed to enhance the evolutionary search process.
(4)
Finally, exploiting the inherent property of GeneSIS to produce multiple segmentations, we propose two segmentation fusion approaches, namely, the fuzzy majority voting and a minimum spanning forest based scheme. The fusion methods combine the results of an ensemble of different segmentation maps to obtain a final single classification map.
The new algorithm, along with the two segmentation fusion schemes, is tested on three image datasets. The results show that region-based GeneSIS produces better and more robust results than pixel-based GeneSIS, improving the average and best overall classification accuracies while reducing their standard deviation. Moreover, the fusion methods achieve similar or higher accuracies compared to the best results obtained from an ensemble of segmentations by GeneSIS. Finally, a significant asset of the region-based GeneSIS is that, owing to the watershed representation and the RAG-based redesign, it has considerably lower computational demands than pixel-based GeneSIS.
The rest of the paper is organized as follows. In Section 2, we provide a general description of the proposed scheme, whereas Section 3 focuses on the GA part of GeneSIS, the object extraction algorithm (OEA). The description of the two segmentation fusion methods follows in Section 4. Experimental results on the classification of three remotely sensed images are presented in Section 5, and the paper concludes in Section 6 with some final remarks.

2. General Configuration

The architecture of the proposed scheme is depicted in Figure 1. Initially, the watershed algorithm is applied in order to create a preliminary fine segmentation map. In addition, supervised pixel-wise classification is performed by applying the fuzzy output SVM (FO-SVM) [22]. As a result, fuzzy classification provides a set of fuzzy membership maps (FMMs), which contain the membership values of image pixels to every class. In the next stage, the fuzzy degrees of pixels contained in each watershed object are combined through the fuzzy integral fusion method with two goals: compute the fuzzy values of the watershed to the different classes, and then assign a specific class label to this object. The class-labeled connected components of watershed objects, along with their membership values, serve as inputs to the GeneSIS segmentation algorithm. The segmentation result produced by GeneSIS provides the final classification map.
Figure 1. Flowchart of the proposed scheme.

2.1. Watershed Segmentation

Watershed transform is a morphological approach widely used in image segmentation. The image, considered as a topographic surface, is flooded from its minima and dams are built in order to prevent merging of water from different sources. Dams represent the watershed lines, enclosing the catchment basins. Watershed transformation is usually applied to a gradient image, so that catchment basins delineate the homogeneous regions of the image. Among the different approaches investigated in [14] to obtain gradient images from hyperspectral data, we use the Robust Color Morphological Gradient (RCMG) method [23]. To reduce the oversegmentation resulting from watershed, we perform an initial filtering using a 3 × 3 median filter to smooth the surface, while at the same time preserving the significant edges.
The watershed implementation presented in [24] is used in this work. As a result of the segmentation, a set $W = \{W_i \mid i = 1, \ldots, \Omega(W)\}$ of watershed objects is obtained, along with the set of watershed pixels representing the edges between regions ($\Omega(\cdot)$ denotes the crisp cardinality operator). The assignment of the watershed pixels to the neighboring objects is carried out as described in [14]. For each $W_i$, the standard vector median is computed:

$$\mathbf{x}_{VM}^{i} = \arg\min_{\mathbf{x} \in W_i} \Big\{ \sum_{\mathbf{x}_j \in W_i} \|\mathbf{x} - \mathbf{x}_j\|_1 \Big\}$$
Every watershed pixel is then assigned to its neighboring object with the “closest median”, i.e., the object exhibiting the minimal distance between the vector median of this region and the watershed pixel vector. It should be noted that the watershed transform creates an over-segmented map, containing mostly small and compact regions that enclose a few pixels.
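The following is a minimal sketch of this preliminary stage, assuming a single-band gradient image as input; the paper computes the gradient with RCMG [23] on the full hyperspectral cube, which is omitted here.

```python
# Sketch of the fine-segmentation step: 3 x 3 median pre-filtering followed by
# an unmarked watershed transform, plus the vector median of Equation above.
import numpy as np
from skimage.filters import median
from skimage.segmentation import watershed

def fine_segment(gradient):
    """Smooth the gradient surface and flood it from its local minima."""
    smoothed = median(gradient, np.ones((3, 3), dtype=bool))  # 3x3 median filter
    return watershed(smoothed)  # with no markers, floods from the local minima

def vector_median(pixels):
    """Standard vector median of a watershed region: the pixel vector
    minimizing the summed L1 distances to all other pixels (shape: N x bands)."""
    dists = np.abs(pixels[:, None, :] - pixels[None, :, :]).sum(axis=(1, 2))
    return pixels[np.argmin(dists)]
```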

2.2. Pixelwise SVM Classification

Support vector machines (SVMs) constitute a valuable machine learning classifier that has recently attracted considerable interest in the analysis of remote sensing images. Further, it is well recognized that the availability of fuzzy degrees of pixels to the various classes provides a better description of the image context. In this work, we perform a pixel-based classification using the fuzzy output SVM approach [22]. Following the one-versus-all (OVA) decomposition strategy, we first construct an ensemble of M binary SVMs $\{f_1(\mathbf{x}), \ldots, f_j(\mathbf{x}), \ldots, f_M(\mathbf{x})\}$, where M is the number of classes and $f_j(\mathbf{x})$ denotes the decision function of the jth classifier, trained independently to discriminate class j from the rest of the classes. Then, the method manipulates the SVM decision values, providing for each pixel $\mathbf{x}$ a membership vector:

$$\boldsymbol{\mu}(\mathbf{x}) = [\mu_1(\mathbf{x}), \ldots, \mu_j(\mathbf{x}), \ldots, \mu_M(\mathbf{x})]$$

As a result, M fuzzy membership maps (FMMs) are created, each one corresponding to a particular class. These maps contain all the important information required for the different stages of our method. Hence, the fuzzy classification process can be regarded as an image transformation from the spectral space to the space of membership values. Based on the above values, each pixel is assigned a class label, following the max argument principle:

$$\mathcal{L}(\mathbf{x}) = \arg\max_{j=1,\ldots,M} \{\mu_j(\mathbf{x})\}$$

where $\mathcal{L}(\cdot)$ is the class label assignment function.
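The sketch below illustrates the OVA fuzzy classification step using scikit-learn's SVC; the softmax normalization is a simple stand-in for the FO-SVM mapping of [22], whose exact form is not reproduced here, and the C and gamma values follow those reported for the Indiana experiment.

```python
# Hedged sketch: one binary RBF-SVM per class, decision values mapped to
# memberships that sum to one (the FMMs), then hardened via the max rule.
import numpy as np
from sklearn.svm import SVC

def fuzzy_membership_maps(X_train, y_train, X_pixels, M):
    classifiers = [SVC(kernel="rbf", C=512, gamma=2**-9).fit(X_train, y_train == j)
                   for j in range(M)]                  # OVA ensemble of M SVMs
    decisions = np.stack([c.decision_function(X_pixels) for c in classifiers],
                         axis=1)                       # (n_pixels, M) decision values
    exp_d = np.exp(decisions - decisions.max(axis=1, keepdims=True))  # stable softmax
    mu = exp_d / exp_d.sum(axis=1, keepdims=True)      # fuzzy membership maps
    return mu, mu.argmax(axis=1)                       # FMMs and hard class labels
```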

2.3. Assignment of Fuzzy Degrees to Watershed Objects

In the proposed classification scheme, we consider watershed objects as structural units, instead of pixels. Therefore, before proceeding to the image segmentation by GeneSIS, we need to determine the fuzzy content of each watershed object at the various class labels. To accomplish this task, we employ the decision fusion approach of fuzzy integral, which is defined with respect to a fuzzy measure, usually a gλ-fuzzy measure. Fuzzy integral has been used in previous works to combine the results of multiple classifiers [25]. Here, all pixels contained in a watershed object are regarded as different and equivalent sources of fuzzy information. Then, the fuzzy degrees conveyed by pixels are combined via decision aggregation to produce the membership values of watershed objects.
Let us consider an arbitrary watershed object $W_i = \{\mathbf{x}_j^i \mid j = 1, \ldots, \Omega(W_i)\}$, where each pixel $\mathbf{x}_j^i$ retains a vector $\boldsymbol{\mu}(\mathbf{x}_j^i)$ of fuzzy degrees. To compute the membership degree of $W_i$ to label $k \in \{1, \ldots, M\}$, we proceed along the following steps:
(1)
The fuzzy densities $g_k^j$ represent the degree of importance of $\mu_k(\mathbf{x}_j^i)$ toward the final evaluation. In our case, these densities are determined locally by considering a 3 × 3 neighborhood $N(\mathbf{x}_j^i)$ of each pixel. Specifically, $g_k^j$ is defined as the fuzzy coverage of label k in $N(\mathbf{x}_j^i)$ of the examined pixel:

$$g_k^j = \sum_{\substack{\mathbf{x} \in N(\mathbf{x}_j^i) \\ \mathcal{L}(\mathbf{x}) = L_k}} \mu_k(\mathbf{x})$$

These densities are then normalized so that $\sum_{j=1}^{\Omega(W_i)} \sum_{k=1}^{M} g_k^j = 1$.
(2)
Calculate the unique root $\lambda > -1$ of the equation:

$$\lambda + 1 = \prod_{j=1}^{\Omega(W_i)} \left( 1 + \lambda g_k^j \right)$$
(3)
Sort the elements of $\{\mu_k(\mathbf{x}_j^i)\}$ in descending order, $\mu_k(\mathbf{x}_{j_1}^i) \geq \cdots \geq \mu_k(\mathbf{x}_{j_{\Omega(W_i)}}^i)$, with $\mu_k(\mathbf{x}_{j_1}^i)$ denoting the highest membership value.
(4)
Sort the densities correspondingly, i.e., $g_k^{j_1}, \ldots, g_k^{j_{\Omega(W_i)}}$.
(5)
Set $g(1) = g_k^{j_1}$ and calculate the rest of the fuzzy measures according to the following recursion:

$$g(l) = g_k^{j_l} + g(l-1) + \lambda\, g_k^{j_l}\, g(l-1), \quad 2 \leq l \leq \Omega(W_i)$$
(6)
Finally, the membership value of $W_i$ to label k is computed as:

$$\mu_k(W_i) = \max_{l=1,\ldots,\Omega(W_i)} \Big\{ \min \big\{ \mu_k(\mathbf{x}_{j_l}^i),\; g(l) \big\} \Big\}$$
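A direct transcription of steps (1)–(6) for one watershed and one class k is sketched below, assuming the densities have already been computed and normalized; the log-domain root search and its brackets are practical assumptions (they also presume each density is below one, which the normalization guarantees).

```python
# Sugeno fuzzy integral of pixel memberships mu_k with fuzzy densities g_k.
import numpy as np
from scipy.optimize import brentq

def sugeno_fuzzy_integral(mu_k, g_k):
    """mu_k, g_k: 1-D arrays of equal length (one entry per pixel)."""
    s = g_k.sum()
    # Step 2: unique root lambda > -1 of  lam + 1 = prod(1 + lam * g_k),
    # solved in log form to avoid overflow of the product.
    f = lambda lam: np.sum(np.log1p(lam * g_k)) - np.log1p(lam)
    if np.isclose(s, 1.0):
        lam = 0.0                                   # additive measure
    elif s < 1.0:
        lam = brentq(f, 1e-10, 1e6)                 # root lies in (0, inf)
    else:
        lam = brentq(f, -1.0 + 1e-10, -1e-10)       # root lies in (-1, 0)
    # Steps 3-4: sort memberships descending and the densities correspondingly.
    order = np.argsort(mu_k)[::-1]
    mu_s, g_s = mu_k[order], g_k[order]
    # Step 5: recursive fuzzy measure g(l).
    G = np.empty_like(g_s)
    G[0] = g_s[0]
    for l in range(1, len(g_s)):
        G[l] = g_s[l] + G[l - 1] + lam * g_s[l] * G[l - 1]
    # Step 6: max of min(membership, measure).
    return float(np.max(np.minimum(mu_s, G)))
```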

2.4. Connected Component Labeling

Based on the above values, each watershed object is assigned to its dominant class label, following the max argument principle:

$$\mathcal{L}(W) = \arg\max_{j=1,\ldots,M} \{\mu_j(W)\}$$
Adjacent watershed objects of the same label can now be connected into a single hyper object. This is achieved by applying a connected-component (CC) labeling algorithm. As a result, we obtain an initial segmentation map containing the set of CCs, $C = \{C_j \mid j = 1, \ldots, \Omega(C)\}$. Each CC shares the same label with its watershed objects: $\mathcal{L}(C_j) \in \{L_1, \ldots, L_M\}$. Figure 2a illustrates the result of this stage, where three CCs containing watersheds of similar label are formed after CC labeling. The set $C$ of labeled CCs obtained at the output of this stage might be considered the final classification map of the image; therefore, it is examined as a separate classification approach in the simulation results. This map exhibits considerably lower over-segmentation compared to the pixel-wise SVM classification. Nevertheless, the result still suffers from substantial over-segmentation, mainly due to the appearance of incorrectly labeled small CCs lying in the interior of broader CCs of a different label. The proposed segmentation algorithm aims at removing this demerit, producing more homogeneous objects with reduced over-segmentation.
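As a simple illustration of this stage, the following sketch assumes a 2-D array `class_map` holding the dominant class id of each pixel's watershed object and produces image-wide unique CC ids.

```python
# Connected-component labeling per class: adjacent same-label regions merge
# into CCs whose ids are unique across classes.
import numpy as np
from scipy import ndimage

def label_connected_components(class_map, n_classes):
    cc_map = np.zeros(class_map.shape, dtype=np.int32)
    next_id = 0
    for j in range(n_classes):
        labels, count = ndimage.label(class_map == j)  # 4-connectivity by default
        mask = labels > 0
        cc_map[mask] = labels[mask] + next_id          # offset for unique ids
        next_id += count
    return cc_map                                      # Omega(C) labeled components
```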
Figure 2. (a) An example of initial map of labeled CCs with illustration of marked watersheds, (b) Possible segmentation of the same area after GeneSIS.

2.5. Marker Selection

After the formation of the initial CCs, we proceed to the marker selection step. Markers are confident regions of the image that should retain their label throughout the whole procedure. Contrary to [20], marking is now performed on watershed objects instead of pixels. The CCs to be marked are selected according to their size and their attributed fuzzy degrees. As a first step, we choose those CCs from $C$ with area larger than a specified threshold $\Omega_{min}^C$, which approximately represents the area of the smallest region of interest we want to recognize. Next, the most reliable watershed components contained in the selected CCs should be determined. For this reason, for each of these watersheds, we consider the difference $\Delta\mu(W) = \mu_{dom}(W) - \mu_{comp}(W)$, where $\mu_{dom}(W)$ is the highest fuzzy degree, attained in the dominant class, while $\mu_{comp}(W)$ denotes the degree associated with the most competing class. Since FO-SVM ties these membership values to sum to unity, $\Delta\mu$ is an indication of the confidence of the examined watershed.
Watershed components inside large CCs with degree difference above a defined fuzziness threshold, $\Delta\mu(W) > \Delta\mu_{th}$, are selected as markers. The value of $\Delta\mu_{th}$ depends on the level of image uncertainty, as described by the fuzzy classification output. Here, it is defined as $\mathrm{median}(\Delta_C)$, where $\Delta_C$ denotes the image map of differences between the dominant and most competing class degrees for every watershed object. Highly mixed images need a lower value, while confidently partitioned ones can take a higher one.
In summary, highly reliable watersheds are marked, and hence they retain their label after segmentation by GeneSIS. An initial CC containing marked watersheds will be denoted in the sequel as $C_j^{(m)}$. On the other hand, mixed watersheds with a small difference of degrees between the dominant and the most competing class remain unmarked, being considered ambiguous objects. As a result, despite the original labeling obtained by the fuzzy integral stage, their label might change after GeneSIS. Figure 2a shows that some confident watersheds within two of the connected components are marked (indicated with a colored circle), while the third component remains unmarked. The set of CCs is used by GeneSIS segmentation to estimate the size of components appearing in the uncovered area of the image and to delineate the active areas of the population chromosomes to be extracted as objects.
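The marker-selection rule can be summarized in a few lines; the sketch below assumes per-watershed membership vectors `mu` (shape: n_watersheds × M) and `cc_area`, the area of the CC containing each watershed, as hypothetical precomputed inputs.

```python
# Marker selection: large CC plus confidence margin above the image median.
import numpy as np

def select_markers(mu, cc_area, area_min):
    top2 = np.sort(mu, axis=1)[:, -2:]       # most competing and dominant degrees
    delta_mu = top2[:, 1] - top2[:, 0]       # confidence margin per watershed
    delta_th = np.median(delta_mu)           # global fuzziness threshold
    return (cc_area > area_min) & (delta_mu > delta_th)  # boolean marker mask
```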

2.6. Segmentation by GeneSIS

In this stage the GeneSIS algorithm is performed, adapted to operate on a region-based image representation obtained by watershed transform. Each object extracted by GeneSIS is now considered as an aggregation of connected watershed objects existing in the terrain. In this context, GeneSIS aims at fulfilling the following two objectives simultaneously. First, partition the image into larger, more homogeneous and well-shaped segments, including primarily highly confident watersheds of a specific label. Secondly, properly apportion the ambiguous (unmarked) watersheds to the neighboring segments according to spectral similarity, measured here in terms of fuzzy degrees to the various classes. This latter objective refers also to the small unmarked CCs of the initial map C , the watersheds of which are necessarily shared with the adjacent segments. As an illustration, Figure 2b shows that two segments are finally created upon the initial map of three CCs, shown in Figure 2a.
Figure 3. Outline of GeneSIS procedure.
An outline of the proposed segmentation algorithm is shown in Figure 3. In the following, we describe the different parts of GeneSIS. Notice that, owing to the initial segmentation into watershed objects, the image is now represented by the respective RAG. Accordingly, we have considerably redesigned our algorithm to comply with this new framework. Specifically, many parts of GeneSIS fully exploit the node and adjacency information provided by the RAG, especially those pertaining to the object extraction algorithm.

2.6.1. Iterative Object Extraction

After initialization, GeneSIS enters a sequential procedure of repeated extractions, where at each iteration t a unique object $S_t$ is extracted. Due to the iterative nature of GeneSIS, the covered part of the image gradually increases after each iteration. Henceforth, the set of extracted segments up to iteration t will be denoted as $S(t)$, with the initial condition $S(0) = \emptyset$. On the other hand, the uncovered part of the image constantly decreases. We therefore define the set of uncovered initial CCs after iteration t, which is denoted as $R_C(t)$ and initialized to the initial CCs, $R_C(0) = C$.

2.6.2. Size Estimation of Uncovered Area

Given $R_C(t-1)$ and prior to the object search at iteration t, we compute the mean $A_{avg}(t)$ and standard deviation $A_{std}(t)$ of the areas of all spatial structures existing in the uncovered part of the image. These quantities give an approximate view of the distribution of the remaining structures' areas, thus providing an estimate of the spatial scale to be searched in the sequel. They are used by the Object Extraction Algorithm (OEA) in order to adjust the region growing capabilities of the GA individuals and adapt the object search to the spatial characteristics of the currently uncovered area. In their calculation, we exclude insignificant CCs with area smaller than $\Omega_{min}^C$. Finally, $A_{avg}(t)$ and $A_{std}(t)$ are updated only after a fixed number of iterations (e.g., 20), in order to reduce computational demands.

2.6.3. Object Extraction Algorithm

The object extraction algorithm (OEA) is the fundamental part at each iteration, being implemented by a GA. Each individual in the population of GA represents a different object. The evolutionary process then tries to find the best possible object, by minimizing a specially designed fitness function. At the end of the GA, the elite individual contains the extracted object S t , tagged along its own class label. A detailed description of OEA is provided in Section 3.

2.6.4. Adaptation of Covered and Uncovered Areas

After the extraction of $S_t$, the set of extracted segments is updated as:

$$S(t) = S(t-1) \cup S_t$$

At the same time, we need to update the remaining part of the image. The watershed components of $S_t$ are removed from the set $R_C(t-1)$, and each $C_j \in R_C(t-1)$ is rearranged as follows:

$$C_j \leftarrow C_j \setminus (C_j \cap S_t)$$

where $A \setminus B = \{x \mid x \in A,\, x \notin B\}$, in order to create $R_C(t)$. The iterative process terminates when a specified percentage p of the whole image has been covered (e.g., $p = 90\%$).
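The overall control flow of Sections 2.6.1–2.6.4 can be sketched as follows; `estimate_scale`, `run_oea`, and `update_uncovered` are hypothetical stand-ins for the components described above, not actual functions of the published implementation.

```python
# Skeleton of the sequential extraction loop of GeneSIS.
def genesis_segmentation(initial_ccs, total_area, p=0.90, rescale_every=20):
    segments, uncovered = [], initial_ccs      # S(0) = empty, R_C(0) = C
    covered_area, t = 0.0, 0
    scale = estimate_scale(uncovered)          # A_avg(t), A_std(t)
    while covered_area < p * total_area:       # stop at p percent coverage
        t += 1
        if t % rescale_every == 0:             # periodic re-estimation only
            scale = estimate_scale(uncovered)
        s_t = run_oea(uncovered, scale)        # GA-based object extraction (Sec. 3)
        segments.append(s_t)                   # S(t) = S(t-1) U {s_t}
        covered_area += s_t.area
        uncovered = update_uncovered(uncovered, s_t)  # C_j <- C_j \ (C_j n s_t)
    return segments
```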

2.6.5. Assignment of Remaining Parts

The remaining part is mainly composed of small regions of uncertain class, dispersed around the image. These regions are finally apportioned to the already extracted objects via M-HSEG [26], a marker-based region growing method. The already extracted objects are considered as markers, with the same label assigned to them after GeneSIS. In addition, the markers set contains the initially marked watersheds that have not been extracted after completion of GeneSIS segmentation. During the iterative region growing, mergers between markers of different labels are prevented. The merging process terminates when all unmarked watersheds are absorbed. The decision upon which pair of objects should be merged each time is made using a dissimilarity criterion. Since our algorithm operates on the fuzzy space of membership values instead of the spectral space, we employ the fuzzy region dissimilarity measure proposed in [19].

3. Object Extraction Algorithm

As mentioned earlier, OEA is a GA-based routine, each time searching for the best possible object to be extracted from the uncovered area of the image. Over the next subsections, we describe the main issues involved in GA, such as the individual’s encoding, the population initialization, the fitness function and the genetic operators used.

3.1. Chromosome Encoding

Each individual represents a candidate object for extraction and is associated with a so-called basic search frame (BSF). In previous versions of GeneSIS, the BSFs were represented by rotating rectangles of varying size and orientation. Here, we enhance the flexibility of the chromosomes by considering the more descriptive polygonal shape (Figure 4a). A simpler polygon representation was preferred, by applying the following constraint on the angles: $\widehat{A_i C A_{i+1}} = 2\pi/n$, $i = 1, \ldots, n$, where n is the number of the polygon's vertices $A_i$. Since the directions of the vertices are predetermined, the only parameters left to set are the center of the polygon and the radii $r_i$ of the vertices. As a result, an individual of the population is encoded as a sequence of (n+2) real-coded genes:

$$O_k = \left( c_x^{(k)},\; c_y^{(k)},\; r_1^{(k)},\; R_2^{(k)},\; \ldots,\; R_n^{(k)} \right)$$

The above encoding enables a better representation of the polygon's three basic properties, i.e., location, size, and shape. $(c_x^{(k)}, c_y^{(k)})$ is the polygon's center, representing the location of the chromosome. $r_1^{(k)}$ is the radius of the first vertex, which operates as a scale factor. Its value is restricted by an upper limit $R_{max}$, which denotes the maximum allowable radius. The remaining vertices are coded in relation to the scale, through the ratios $R_i^{(k)} = r_i^{(k)} / r_1^{(k)}$, $i = 2, \ldots, n$.
The vector $\mathbf{R}^{(k)} = [1, R_2^{(k)}, \ldots, R_n^{(k)}]$ determines the shape of the polygon, being independent of the scale. In order to prevent the formation of highly irregular polygons, these ratios are restricted to the range [0.5, 2]. Any change in the first two genes leads to a spatial relocation of the polygon, with its shape and size remaining intact. When the scale factor $r_1^{(k)}$ changes, the polygon's size decreases or increases, while its shape and location are retained. Finally, changes in any of the remaining genes modify the polygon's shape only, leaving its scale and location unaffected.
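A minimal sketch of decoding a chromosome into polygon vertices follows, assuming the n vertex directions are fixed at angles $2\pi(i-1)/n$ around the center, per the constraint above.

```python
# Decode the (n+2)-gene chromosome into an (n, 2) array of polygon vertices.
import numpy as np

def decode_polygon(chrom, n=16, r_max=50.0):
    cx, cy = chrom[0], chrom[1]                       # polygon centre (location)
    r1 = min(chrom[2], r_max)                         # scale factor, capped at Rmax
    ratios = np.clip(chrom[3:3 + n - 1], 0.5, 2.0)    # shape genes R_2..R_n
    radii = r1 * np.concatenate(([1.0], ratios))      # r_i = R_i * r_1
    angles = 2.0 * np.pi * np.arange(n) / n           # predetermined directions
    return np.column_stack((cx + radii * np.cos(angles),
                            cy + radii * np.sin(angles)))
```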
Figure 4. Polygon representation of BSFs. (a) Typical polygonal BSF with n = 8 vertices, and (b) illustration of watersheds inclusion by the previous BSF.
In the current version of GeneSIS (watershed representation), it is not straightforward to define which objects should be contained in the BSF. We choose to consider as internal those objects whose geometric centroid is included within the borders delineated by BSF. In that respect, a BSF can be viewed as a spatial loop placed somewhere over the image, which embraces a collection of adjacent watershed regions. The above procedure is facilitated by using the connectivity information provided by RAG. An illustrative example is presented in Figure 4b. The polygonal BSF of Figure 4a is now placed upon a set of watershed objects, whose centers are marked with blue color. The border line of the area composed by the internal objects is indicated in red. As can be seen, the blue-lined polygon (genotype) is different from the actual content of the chromosome (phenotype).

3.2. Population Initialization

Exploiting the information contained in $R_C(t-1)$, the individuals of the initial population are placed at spatial regions covered by large and marked CCs. In particular, in order to create $O_k$, we randomly select a marked component $C_k \in R_C(t-1)$ with a probability proportional to its area. Next, we find its bounding box $BB(C_k)$, aligned with the image axes. The center of the bounding box is chosen as the polygon's center, while the scale factor $r_1^{(k)}$ is set to half the vertical height of $BB(C_k)$. The ratios $R_i^{(k)}$, $i = 2, \ldots, n$, are initialized randomly in the range [0.8, 1.2]. This initialization assures that the evolutionary search will be focused mostly on large and uncovered areas.

3.3. Active Region Determination

When evaluating candidate solutions, we are particularly interested in obtaining an object the major part of which is homogeneous, i.e., it contains watersheds with high fuzzy degrees in the same class. Nevertheless, owing to the genetic evolution, an object may be located spatially in such a way that some watersheds included in the BSF have already been extracted at previous invocations of the OEA, while some others are marked with a different label. To cope with this situation, an object $O_k$ is evaluated in terms of the so-called active area, denoted as $AR(O_k)$.
The determination of the active area is accomplished as follows. In the first step, we remove from $O_k$ the watersheds extracted in previous calls of the OEA, since our main objective is to segment currently uncovered regions of the image. Let us define the overlapping region between $O_k$ and the already extracted segments $S(t-1)$:

$$OVE(O_k) = O_k \cap S(t-1)$$

The remaining area $O_k'$, obtained by excluding $OVE(O_k)$ from $O_k$, is determined by:

$$O_k' = O_k \setminus OVE(O_k)$$

Next, we determine the dominant class label of the individual. This is decided on the basis of the fuzzy coverage of $O_k'$ for the different classes:

$$\tilde{\Omega}_j(O_k') = \sum_{\substack{W \in O_k' \\ \mathcal{L}(W) = L_j}} \Omega(W)\, \mu_j(W)$$

$\tilde{\Omega}_j(O_k')$ indicates the fuzzy degree to which watersheds of class j exist in $O_k'$. Finally, the dominant label of $O_k$ is derived via the max argument rule:

$$\mathcal{L}(O_k) = \arg\max_{j=1,\ldots,M} \{\tilde{\Omega}_j(O_k')\}$$
Generally, the sub-area $O_k'$ includes watersheds of the object's class, as well as watersheds assigned to different labels. The former are regarded as positive examples (PEs), whereas the latter are considered negative examples (NEs). The homogeneity property of a region dictates that $O_k'$ should contain as many PEs as possible with strong fuzzy degrees, and a smaller portion of NEs, preferably with lower degrees to other labels. A special occasion of interest occurs when $O_k'$ includes sections of NEs with marked watersheds inside. Let us define these sections as a set comprising the marked overlapping regions of $O_k$ with the uncovered CCs of different labels:

$$OVM(O_k) = \bigcup_{\substack{j = 1 \\ \mathcal{L}(C_j) \neq \mathcal{L}(O_k)}}^{\Omega(R_C(t-1))} \left( O_k \cap C_j \right)^{(m)}$$

In the following, $OVM(O_k)$ is excluded from $O_k'$. This is explained by noticing that, based on the marker selection scheme, marked image parts are considered large and confident regions. Thus, it seems reasonable to allow them to be absorbed by a different object at a subsequent invocation of the OEA. Moreover, with this removal we avoid under-segmentation, since the object is prevented from expanding into regions that possibly have different labels. The active area of a candidate solution is now formulated as follows:

$$AR(O_k) = O_k \setminus \left( OVE(O_k) \cup OVM(O_k) \right)$$

Finally, an important requirement of our method is that the active area should be a connected component of watersheds. This constraint is imposed in order to avoid the extraction of spatially disjoint segments from a single call to the OEA. In cases where the active area is not connected, we find the component with the largest area, $AR_{C_{max}}^{(k)}$, and consider this component as the new active region, i.e., $AR(O_k) = AR_{C_{max}}^{(k)}$.
After the previous readjustments, $AR(O_k)$ is a subset of $O_k$ and its location may differ significantly from the corresponding area of the BSF. For this reason, a chromosome is repaired so as to include only the active region. Henceforth, we will consider that chromosomes have been repaired and that their active region is connected. The active area represents the useful region of an individual. Its fuzzy content is consistently employed in the computation of the fitness function components, as discussed in the following.
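The active-region computation reduces to a few set operations; the sketch below treats regions as sets of watershed ids, with `largest_connected` a hypothetical helper that splits a set into RAG-connected components and returns the largest one.

```python
# Set-based sketch of AR(O_k) = O_k \ (OVE U OVM), with connectivity enforced.
def active_region(bsf_watersheds, extracted, marked_other_label, rag):
    ove = bsf_watersheds & extracted               # OVE: overlap with S(t-1)
    remaining = bsf_watersheds - ove               # O_k'
    ovm = remaining & marked_other_label           # OVM: marked parts, other labels
    ar = remaining - ovm                           # candidate active area
    return largest_connected(ar, rag)              # keep largest connected part
```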
Figure 5. Illustration of an object extraction (iteration t): (a) set of uncovered connected components RC(t − 1), (b) internal of the elite chromosome, (c) modified set of uncovered connected components RC(t).
An illustration of the chromosome evaluation process, together with a clarification of the relevant notation, is given in Figure 5 through an artificial example. Suppose we are at iteration t of GeneSIS, and the set $R_C(t-1)$ of uncovered connected components after the first (t−1) iterations is the one presented in Figure 5a, including three CCs from the initial map. The centers of the marked watersheds are depicted with colored circles in the various class colors, while the centers of unmarked watersheds are denoted with the symbol (*). Assume that the polygon appearing in the same figure represents the best chromosome attained after termination of the genetic evolution. Following the rationale described in Section 3.1, the internal area of this chromosome is enclosed by the red polygonal line, shown in more detail in Figure 5b. As a first step, the OVE sections (overlap with the previously extracted segments) are excluded. Next, the remaining part is used to determine the dominant class of the object; the resulting label is Class 2, as can be seen in Figure 5b. In the following, we check whether the chromosome intersects marked regions of other labels. The OVM section is indeed such a case and, as discussed previously, this undesirable region is excluded. The remaining part of the chromosome constitutes the active region (AR). This is the part of the chromosome that is evaluated by the fitness function and is finally extracted as segment $S_t$. After the extraction of $S_t$, the uncovered part of the image is rearranged, and the new set $R_C(t)$ of remaining CCs is shown in Figure 5c. Portions of the connected components of Classes 2 and 3 were removed, while the components of Class 1 remained intact.

3.4. Fitness Function

The determination of the fitness function is of particular importance for the GA and hence the OEA. The suggested fitness function design aims at fulfilling three goals simultaneously: the extracted objects should be large, homogeneous (that is, they should not contain mixed regions of different labels), and smoothly shaped. The first two objectives are attained by means of the coverage and consistency criteria, while for the third one we devise a suitable smoothness criterion. All fitness components are computed in a fuzzy manner by manipulating the fuzzy degrees of watersheds to the various classes. Given the dominant class of $O_k$, we define the fuzzy coverage of the PEs and NEs, respectively, covered by the active area of $O_k$:

$$\tilde{\Omega}_p(O_k) = \sum_{\substack{W \in AR(O_k) \\ \mathcal{L}(W) = L_j = \mathcal{L}(O_k)}} \Omega(W)\, \mu_j(W)$$

$$\tilde{\Omega}_n(O_k) = \sum_{\substack{W \in AR(O_k) \\ \mathcal{L}(W) = L_j \neq \mathcal{L}(O_k)}} \Omega(W)\, \mu_j(W)$$
The coverage criterion promotes the extraction of large objects by maximizing the fuzzy coverage of PEs. The notion of a large object is strongly related to the size of the existing components in the uncovered part of the image, and therefore differs along the various extractions of GeneSIS. In order to match the GA search to the currently available component sizes, we define a threshold value $A_{thr}(t)$ that is considered an estimate of a large object's area:

$$A_{thr}(t) = A_{avg}(t) + A_{std}(t)$$

The coverage fitness $f_{COV} \in [0, 1]$ is then defined by passing $\tilde{\Omega}_p(O_k)$ through the following monotonically increasing sigmoid function:

$$f_{COV} = \frac{1}{1 + e^{-b\left( \tilde{\Omega}_p(O_k) - A_{avg}(t) \right)}}$$

Parameter b controls the slope of the sigmoid; it is defined so that for the threshold value $A_{thr}(t)$ we obtain a large coverage value d (for example, $d = 0.99$). Notice that objects with $\tilde{\Omega}_p(O_k) = A_{avg}(t)$ are assigned a fitness value $f_{COV} = 0.5$, thereby being regarded as solutions of moderate quality. In addition, highly qualified solutions with $f_{COV} \approx 1.0$ are obtained for objects whose active areas fulfill the condition $\tilde{\Omega}_p(O_k) \geq A_{thr}(t)$. As a result, the GA search is properly adapted to the scale of the uncovered area of the image, while at the same time promoting the extraction of large objects, thus avoiding over-segmentation.
Consistency serves as a measure of the region's homogeneity, acting in the opposite direction to the coverage criterion. It prevents the continuous growth of an object and its expansion into highly mixed regions, thereby avoiding under-segmentation. Let $\tilde{G}_p(O_k)$ denote the cumulative degrees of the NEs to the object's label:

$$\tilde{G}_p(O_k) = \sum_{\substack{W \in AR(O_k) \\ \mathcal{L}(O_k) = L_j,\; \mathcal{L}(W) \neq \mathcal{L}(O_k)}} \Omega(W)\, \mu_j(W)$$

This term represents the penetration of the dominant class into the NEs and is an indication of their ambiguity. NEs with higher values of $\tilde{G}_p$ are more consistent than those with low values of $\tilde{G}_p$. Finally, the consistency fitness $f_{CONS} \in [0, 1]$ is defined as follows:

$$f_{CONS} = \begin{cases} 0, & \tilde{\Omega}_p + \tilde{G}_p \leq \tilde{\Omega}_n \\[4pt] \dfrac{\left( \tilde{\Omega}_p + \tilde{G}_p \right) - \tilde{\Omega}_n}{\tilde{\Omega}_p + \tilde{G}_p}, & \text{otherwise} \end{cases}$$
A zero consistency value is assigned to those objects that cover more NEs than PEs. The fitness value then increases linearly to 1 when the number of NEs diminishes. Thereby, consistency encourages the formation of objects covering a large number of confident PEs and fewer NEs.
The third fitness component quantifies the smoothness of the object by evaluating the shape of its external borders. Objects with a strongly irregular shape are penalized, to avoid the simultaneous extraction of spatially distant regions of the same label. Initially, we compute the following ratio:

$$\lambda = \frac{\Omega(CH_k) - \Omega(AR(O_k))}{\Omega(AR(O_k))}$$

where $CH_k$ is the convex hull of $AR(O_k)$, i.e., the smallest convex set that contains $AR(O_k)$. This quantity measures the matching degree of an object with a prototype convex shape. Objects with small λ are nearly convex ($\lambda = 0$ for an ideally convex shape), which is considered the ideal shape of an object. To obtain normalized fitness values in the range [0, 1], the smoothness fitness is defined as follows:

$$f_{SMO} = \frac{1}{1 + a\lambda}$$

Parameter a controls the slope of the function. It is defined so that the smoothness fitness takes a large value (e.g., $f_{SMO}(\lambda_{acc}) = 0.9$) for an acceptable value $\lambda_{acc}$ (e.g., $\lambda_{acc} = 0.1$). Objects with nearly convex shapes receive $f_{SMO}$ values close to unity, while those with highly irregular shapes are penalized.
The overall fitness function is obtained by combining the above three criteria:

$$f = f_{COV} \cdot f_{CONS} \cdot f_{SMO}$$
During the initial iterations, where the image is mostly uncovered, the OEA extracts large and pure objects, which fulfill both the coverage and consistency criteria to a high degree. As the image is progressively segmented, the OEA spatially achieves an optimal balance between coverage (region growing) and consistency (homogeneity), while keeping the shape of the object within acceptable limits.
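Putting the three criteria together, a minimal sketch of the composite fitness follows; the slopes b and a are solved from the conditions $f_{COV}(A_{thr}) = d$ and $f_{SMO}(\lambda_{acc}) = 0.9$ stated above, and the scalar inputs are assumed precomputed from the active region.

```python
# Composite fitness f = f_COV * f_CONS * f_SMO of Section 3.4.
import numpy as np

def fitness(omega_p, omega_n, g_p, area_hull, area_ar, a_avg, a_std,
            d=0.99, lam_acc=0.1):
    b = np.log(d / (1.0 - d)) / max(a_std, 1e-9)      # so f_COV(A_avg + A_std) = d
    f_cov = 1.0 / (1.0 + np.exp(-b * (omega_p - a_avg)))
    denom = omega_p + g_p
    f_cons = max(0.0, denom - omega_n) / max(denom, 1e-9)  # piecewise definition
    lam = (area_hull - area_ar) / area_ar             # convexity deficiency ratio
    a = (1.0 / 0.9 - 1.0) / lam_acc                   # so f_SMO(lam_acc) = 0.9
    f_smo = 1.0 / (1.0 + a * lam)
    return f_cov * f_cons * f_smo
```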

3.5. Genetic Operators

We apply the BLX-α crossover operator [27], suitable for real-coded GAs, with probability $p_c$. As regards mutation, each gene is chosen with probability $p_m$ and assigned a random value within its domain. The mutation rate is defined as the inverse of the number of genes in the solution encoding (e.g., for $n = 16$ we have $p_m \approx 0.05$). Tournament selection is used for selecting the individuals to be recombined for the next generation, while elitism ensures that the fittest solution is retained during evolution. Starting from the initial population of polygons, new polygons are created at each generation of the GA through the crossover and mutation operators. The search space is thus globally explored and, via the survival of the fittest individuals, the GA is led to a desirable solution. The algorithm terminates after a maximum number of generations or when the fitness value of the best individual has not increased for a fixed number of generations.
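For reference, a minimal sketch of the BLX-α recombination step: each offspring gene is drawn uniformly from the parents' interval extended by α times its width on both sides (α = 0.5 here, per Table 1).

```python
# BLX-alpha crossover for real-coded chromosomes.
import numpy as np

def blx_alpha(parent1, parent2, alpha=0.5, rng=None):
    rng = rng or np.random.default_rng()
    lo, hi = np.minimum(parent1, parent2), np.maximum(parent1, parent2)
    span = hi - lo                                     # per-gene interval width
    return rng.uniform(lo - alpha * span, hi + alpha * span)  # one offspring
```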
Figure 6. Description of the elite tuning operator.
In addition to the standard genetic operators, we also apply a specially designed RAG-based local tuning operator on the elite individual at each generation to improve its fitness (Figure 6). This operator is activated when, for a specific number of generations (e.g., 10), the fitness has increased by less than 1%. Our objective with this tuning process is to assist the elite chromosome in improving its fitness value by expanding into its fruitful neighboring regions. This can be achieved mainly by considering possible mergers with its adjacent neighbors of the same class (PEs). In this case, both the coverage and consistency criteria increase and, therefore, it is most likely that the overall fitness will also increase. In addition, we examine expansion into regions of NEs by considering the most ambiguous adjacent neighbor, i.e., the one with the smallest difference $\Delta\mu$. In this case, the consistency criterion decreases while the coverage one increases; hence, the balance between these two contradicting criteria will finally decide whether this merger is worthwhile. After the first generations the population usually converges to a specific region, so this operator assists in quickly finding a better solution. In that respect, the local tuning operator serves as a means to boost the spatial search capabilities of the OEA.

4. Segmentation Fusion

Owing to the stochastic nature of the object extraction mechanism, GeneSIS is able to produce multiple segmentations of the same image, emanating from different initializations of OEA. Exploiting this inherent property of GeneSIS, we propose two segmentation fusion schemes, where different segmentation maps are combined to decide the final class assignment for each watershed object. Thereby, we can eliminate the stochastic effect of our algorithm, since a single classification map is obtained after fusion. In addition, the effective combination of multiple segmentations can improve the classification accuracies compared to the ones provided by the individual segmentations participating in the ensemble of segmentations.
Initially, we have an ensemble of Q segmentations of the image, $\mathcal{S} = \{SM^{(1)}, \ldots, SM^{(Q)}\}$, after repeated runs of GeneSIS. These maps share all the properties underlying GeneSIS. Nevertheless, due to the random initializations, they provide different classification results, especially for the ambiguous regions of the image. Each segmentation map $SM^{(l)} = \{V_q^{(l)},\ q = 1, \ldots, N_l\}$ is considered a collection of objects $V_q^{(l)}$, each denoting the connected union of all spatially adjacent watersheds of the same class label in this segmentation map.

4.1. Fuzzy Majority Voting

The configuration of the fuzzy majority voting (FMV) scheme is depicted in Figure 7a. Decision fusion in this approach is performed across the different segmentations of the ensemble on a per watershed basis, i.e., the primitive structural elements appearing in all segmentations. For each watershed, the final label assigned by FMV is determined by combining the certainty degrees of the extracted segments in the different segmentations, which enclose this watershed.
Specifically, for every $SM^{(l)}$, $l = 1, \ldots, Q$, we compute the degree of certainty of $V_q^{(l)}$ to the various class labels, $R_j(V_q^{(l)}) \in [0, 1]$, $j = 1, \ldots, M$. This is achieved by considering the fuzzy degrees of the watershed objects:

$$R_j(V_q^{(l)}) = \frac{\tilde{\Omega}_j(V_q^{(l)})}{\sum_{r=1}^{M} \tilde{\Omega}_r(V_q^{(l)})}$$

The $\tilde{\Omega}_j(V_q^{(l)})$ are computed from the fuzzy coverage formula of Section 3.3, using the watersheds contained in $V_q^{(l)}$. $R_j(V_q^{(l)})$ indicates the degree of certainty to which class j exists in $V_q^{(l)}$ (as delineated by GeneSIS), according to the information offered by the SVM classifier. Highly confident objects take large certainty values in the dominant class and lower ones in the other classes. On the other hand, weakly segmented regions receive comparable certainties among the various classes. Consequently, the certainty degrees reflect the quality of a segmentation map, locally.
Figure 7. (a) Fuzzy Majority Voting Fusion; (b) MSF-based Fusion.
Next, we assume that, for every $SM^{(l)}$, a particular watershed W in the image shares the same certainty degrees as the ones apportioned to the segment $V_q^{(l)}$ that contains it. FMV operates on a watershed basis, using the following rule:

$$\mathcal{L}(W) = \arg\max_{j=1,\ldots,M} \left\{ \sum_{l=1}^{Q} \sum_{q=1}^{N_l} IND_q^{(l)}(W)\, R_j(V_q^{(l)}) \right\}$$

where

$$IND_q^{(l)}(W) = \begin{cases} 1, & \text{if } W \in V_q^{(l)} \\ 0, & \text{otherwise} \end{cases}$$

The class label of a watershed W is defined as the one exhibiting the largest cumulative certainty degree across the different segmentations of the ensemble. Towards the final decision, each segmentation $SM^{(l)}$ votes for the various classes according to the certainties of the segments including W.
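Since the indicator picks exactly one segment per map, the FMV rule collapses to a short accumulation; the sketch below assumes hypothetical precomputed inputs: `seg_of[l][w]`, the index of the segment containing watershed w in map l, and `R[l]`, the (N_l × M) certainty matrix of map l.

```python
# Fuzzy majority voting over Q segmentation maps for one watershed w.
import numpy as np

def fmv_label(w, seg_of, R, Q):
    votes = sum(R[l][seg_of[l][w]] for l in range(Q))  # cumulative certainties
    return int(np.argmax(votes))                       # class with largest total
```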

4.2. MSF-Based Fusion

The configuration of the second fusion scheme examined in this work is shown in Figure 7b. It relies on a region growing segmentation method, the minimum spanning forest (MSF) rooted from markers [28]. A similar approach has been previously applied in [29] for the pixel-based fusion of different classification maps, obtained by various segmentation algorithms.
This fusion strategy is a two-stage procedure, operating again on a watershed basis. In the first stage, the classification results obtained by the different segmentations in the ensemble are combined to select a set of confident region markers. Specifically, we select as markers those watersheds that are assigned to the same class by all segmentations $SM^{(l)}$, $l = 1, \ldots, Q$, i.e., there is full consensus among the segmentation maps on the class labels of these watersheds. These markers delineate the reliably classified regions of the image, each one receiving the corresponding class label.
The second stage entails the construction of a region-based MSF, rooted from the above markers. This step now undertakes the task of defining the final label assignments to the unmarked objects. Accordingly, markers start growing iteratively by absorbing at each time the most similar neighbor, according to a similarity criterion. This process continues until the entire image is covered. Upon completion of the expansion, the class of each marker is assigned to all watersheds grown from this marker.
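The consensus test of the first stage is a one-liner; the sketch assumes `labels` is a (Q × n_watersheds) array of class assignments, one row per segmentation map.

```python
# Consensus marker selection for the MSF-based fusion.
import numpy as np

def consensus_markers(labels):
    agree = (labels == labels[0]).all(axis=0)       # full agreement across maps
    marker_labels = np.where(agree, labels[0], -1)  # -1 flags unmarked watersheds
    return agree, marker_labels
```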

5. Experimental Results

The proposed methodology is tested on the land cover classification of two agricultural areas (the Indiana and Koronia datasets), and an urban area (the Pavia image). The images are acquired by different sensors, with varying spatial resolution and number of bands. Because of the stochastic nature of GeneSIS, we performed 30 independent runs (segmentations) with different initializations, to obtain a robust assessment of our methodology. Table 1 shows the GeneSIS parameters commonly used in all experiments. The remaining parameters are defined below for each dataset individually. GeneSIS was coded in C++ and all experiments were conducted on an Intel Core i5-4670 at 3.4 GHz.
Table 1. Parameters used in GeneSIS.

Parameter                                            Value
Number of polygon nodes (n)                          16
Maximum allowed radius (Rmax)                        50
Image coverage for termination of GeneSIS (%)       90
Maximum number of generations (Nmax)                 1000
Number of generations allowed without change (ng)    80
Population size (Np)                                 20
Tournament size                                      2
Crossover parameter α                                0.5
Crossover probability (pc)                           0.8
Mutation probability (pm)                            0.05
Smoothing factor λacc                                0.1
In the comparative analysis, we consider the pixel-based SVM classification and the results given by the initial segmentation map $C$, obtained after CC labeling of the labeled watersheds (Section 2.4). We also consider the results of the SVM classification after spatial post-regularization (PR) to reduce the noise. The SVM map is filtered using an 8-neighborhood pixel mask and majority voting. In particular, if more than five pixel neighbors have a class label different from that of the considered pixel, the pixel is reclassified to this label. The filtering is applied repeatedly until stability is reached. In addition, we test the results produced by other recently proposed segmentation-based methods from remote sensing. Specifically, we examine the CaHO [30], HSwC [31], and marker-based M-HSEGop [26] methods. All these algorithms are extensions of HSEG [6], automatically providing a unique segmentation map from the hierarchy of multi-scale maps generated by HSEG. We choose $S_{wght} = 0$ in order to avoid the merging of non-adjacent regions; in that case, HSEG is equivalent to the HSWO algorithm [4]. Finally, we consider the marker-based MSF [28], which operates on a set of labeled markers. For a fair comparison, all algorithms used the same supervised SVM map obtained from a dataset of training instances. The marker set utilized in MSF and M-HSEGop is the same as that employed in [20]. The dissimilarity criteria (SAM and L1) used by the different methods for region merging are described in the aforementioned original works.

5.1. Indiana Image

The Indiana image is a vegetation area acquired by the AVIRIS sensor over the Indian Pines site in Northern Indiana. The image has spatial dimensions of 145 × 145 pixels, 220 spectral channels, and a spatial resolution of 20 m/pixel. Twenty water absorption bands have been removed [32], and the remaining 200 bands were used in the experiments. A three-band false-color composite and the reference sites are shown in Figure 8a,b, respectively. The 16 classes of interest in this image (mostly different types of crop) are described in Table 3. The training set is randomly selected from the reference data, including 15 samples from each of the three smallest classes (alfalfa, grass/pasture-mowed, and oats) and 50 samples from each of the remaining classes. The remaining reference data comprise the test set.
Initially, watershed segmentation is performed as described in Section 2.1. The resulting map is shown in Figure 8c, where each watershed is represented by its mean spectral value on an arbitrarily chosen band (Band 120). As expected, the image is highly oversegmented, containing small, well-shaped, and compact watershed regions. This fine segmentation result forms the initial map of structural elements used as the basis for the GeneSIS operation. After the assignment of the watershed pixels to their neighboring objects, a segmentation map with 1109 initial watersheds is created.
Pixel-based classification is next performed by the fuzzy output SVM using the entire space of 200 spectral bands. The RBF kernel was considered, and the optimal parameters were chosen by 5-fold cross-validation: $C = 512$ and $\gamma = 2^{-9}$. After hardening of the fuzzy degrees, we obtain the supervised classification map shown in Figure 8d. As can be seen, the majority of the fields are correctly classified. Nevertheless, there exists a strong confusion between the spectrally similar corn and soybean types, which produces many misclassifications within certain fields of the corresponding classes. Apparently, the absence of contextual information leads to a highly fragmented SVM map.
Next, we compute the membership degrees of the watershed objects via decision fusion by the fuzzy integral. After CC labeling we obtain the map $C$, shown in Figure 8e. As can be seen, although the salt-and-pepper effect is considerably reduced, substantial misclassification between the spectrally mixed classes still remains. In the marker selection stage, we set $\Omega_{min}^C = 20$ as the size of structures to be marked, in order to enable GeneSIS to recognize the smallest reference field (oats). The global fuzziness threshold $\Delta\mu_{th}$ is set to a low value, specifically $\Delta\mu_{th} = 0.2$, due to the aforementioned spectral mixing. As a result, 714 watersheds are selected for marking.
In the following, we proceed to the image segmentation by GeneSIS. A typical segmentation map obtained by GeneSIS is displayed in Figure 8f. We can notice that the extracted segments cover mostly the large and homogeneous areas of the image, also achieving a good match with the respective reference fields. It is also remarkable that GeneSIS is now able to cover a whole reference field with a single extraction, without splitting it into multiple segments. In addition, it should be stressed that the extracted objects appear with varying shapes and irregular boundaries. In particular, their shapes are delineated by the boundaries of the watershed objects included in the BSFs, while the polygonal representation of the chromosome facilitates the extraction of non-convex objects. These are major differences from the pixel-based version of GeneSIS, where the delineated boundaries and the shapes of the objects were strongly constrained by the rectangular shape of the BSF. Finally, an interesting property of the GeneSIS approach is that it is a marker-driven but scale-free segmentation algorithm. Specifically, for each local region, the OEA automatically achieves the best compromise between coverage and consistency, according to its size and homogeneity, i.e., it adapts the segment to be extracted to the existing local scale. Hence, contrary to other segmentation methods, GeneSIS does not necessitate the prior determination of a scale parameter to control the segmentation results.
Figure 8. Indiana image: (a) three-band false color composite, (b) reference sites, (c) watershed segmentation map, (d) classification map by SVM, (e) initial segmentation via CC labeling, (f) segmentation map after GeneSIS (black areas denote the yet uncovered small regions of the image), (g) final classification map, (h) total agreement map after 30 runs, (i) classification map after FMV-fusion, and (j) classification map after MSF-based fusion.
As a last step, the remaining components are merged with the previously extracted objects via region growing, thus obtaining the final map in Figure 8g. This map is clearly more homogeneous than the initial map of CCs (Figure 8e), since many of the previous misclassifications have been resolved. This can also be deduced by considering the number of connected components in the two maps: GeneSIS generated on average 70 CCs, considerably fewer than the 217 components appearing in the initial map of Figure 8e.
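The final merging step can be pictured with the following sketch (illustrative only; the adjacency structure and the spectral-distance criterion are our assumptions): each uncovered component inherits the label of the most spectrally similar adjacent extracted object.

```python
import numpy as np

def merge_leftovers(leftovers, adjacency, mean_spectrum, label):
    """leftovers: ids of yet-uncovered components; adjacency[c]: ids of the
    extracted objects touching c; mean_spectrum/label: per-region attributes."""
    assigned = {}
    for c in leftovers:
        nbrs = adjacency[c]
        d = [np.linalg.norm(mean_spectrum[c] - mean_spectrum[o]) for o in nbrs]
        assigned[c] = label[nbrs[int(np.argmin(d))]]   # grow into nearest region
    return assigned
```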
The maps resulting after the fusion of the 30 independent segmentations, using the proposed FMV and the MSF-based fusion methods, are depicted in Figure 8i,j, respectively. Further, Figure 8h shows the set of objects (colored areas) that are assigned to the same class by all segmentations, which are used as markers for the MSF-based fusion method. As can be noticed, these objects cover a large portion of the image, comprising mainly the large and homogeneous regions. Hence, the fusion methods operate mostly on the mixed and uncertain parts of the image (white areas in Figure 8h).
In Table 2 we compare the two GeneSIS-based approaches, i.e., the previous pixel-based GeneSIS [20] and the currently proposed region-based approach, in terms of overall accuracy (OA) and execution time. The new scheme attains better maximum and average overall accuracy, while also exhibiting enhanced robustness through a reduced variance of the produced accuracies. However, the main impact of the region-based representation is reflected in the execution time, since the new scheme is 62% faster than the previous one: pixel-based GeneSIS requires 15.52 s on average, whereas the region-based one needs 5.87 s.
Table 2. Average OA, standard deviation of OA, maximum OA, and execution time for the two GeneSIS versions in Indiana Image.
| | Average | Standard Deviation | Max | Time (s) |
|---|---|---|---|---|
| Pixel-based GeneSIS | 93.61 | 0.39 | 94.51 | 15.52 |
| Region-based GeneSIS | 94.30 | 0.34 | 95.03 | 5.87 |
Table 3 reports the classification results of the fusion methods and of the competing segmentation algorithms from the literature for the Indiana image. The results are evaluated in terms of overall accuracy (OA), average accuracy (AA), kappa coefficient k, and class-specific accuracies. Several observations can be made. (1) The pixel-wise SVM classification yields by far the worst accuracy among all approaches, which justifies the need to formulate meaningful objects to be classified instead of handling single pixels. (2) Both the pixel-based and, especially, the new region-based GeneSIS outperform the initial segmentation map C by 2%–3%. This improvement implies that GeneSIS further homogenizes the input map C by creating a smaller number of segments, while correctly assigning the ambiguous areas of the image, which leads to higher classification accuracies. (3) Both fusion methods achieve better results than an average GeneSIS run, since their OAs are higher than the average OA obtained from the ensemble of segmentations. In particular, the MSF-based fusion method performs slightly better than the best accuracy attained in the ensemble of 30 GeneSIS segmentations, indicating that fusion effectively resolves a significant number of the disagreements between the different segmentations. (4) The spatial PR considerably improves the accuracy of the pixel-based SVM classification; nevertheless, both region-based GeneSIS and the two fusion schemes outperform the filtered SVM results by over 5%. This is because filtering refines the image components only locally, whereas GeneSIS evaluates much larger areas to formulate the optimal segments. (5) Compared to the other segmentation algorithms, GeneSIS achieves higher classification performance even in terms of average OA. Moreover, with the MSF-based fusion, GeneSIS attains the highest AA, which shows its ability to handle all classes adequately without underestimating any of them.
Table 3. Classification accuracies for the Indiana Image.
| Class | SVM | FMV-Fusion | MSF-Fusion | SVM + PR | Initial Map C | CaHO | HSwC | M-HSEGop | MSF |
|---|---|---|---|---|---|---|---|---|---|
| DC | – | – | – | – | SAM | SAM | SAM | SAM | L1 |
| OA | 76.22 | 94.51 | 95.08 | 89.99 | 91.71 | 93.48 | 92.56 | 93.18 | 93.27 |
| AA | 84.03 | 94.20 | 96.52 | 93.81 | 92.96 | 96.05 | 95.71 | 89.24 | 89.20 |
| k | 73.09 | 93.73 | 94.36 | 88.59 | 90.54 | 92.55 | 91.49 | 92.17 | 92.28 |
| Alfalfa | 87.18 | 92.31 | 92.31 | 100 | 92.31 | 89.74 | 89.74 | 89.74 | 89.74 |
| Corn-notill | 72.62 | 95.09 | 96.17 | 88.95 | 93.28 | 93.21 | 88.95 | 93.64 | 92.85 |
| Corn-min | 67.35 | 88.14 | 83.93 | 80.23 | 81.76 | 85.08 | 85.59 | 88.90 | 90.18 |
| Corn | 76.09 | 97.28 | 97.83 | 98.91 | 97.28 | 100 | 100 | 100 | 100 |
| Grass/Pasture | 92.39 | 96.20 | 96.20 | 95.30 | 96.20 | 96.42 | 96.20 | 96.20 | 94.18 |
| Grass/Trees | 95.41 | 99 | 97.99 | 99.43 | 98.71 | 99 | 99.14 | 96.84 | 100 |
| Grass/pasture-mowed | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
| Hay-windrowed | 97.04 | 99.77 | 99.77 | 98.86 | 99.77 | 99.77 | 99.77 | 99.77 | 99.77 |
| Oats | 80 | 60 | 100 | 80 | 60 | 100 | 100 | 0 | 0 |
| Soybeans-notill | 77.12 | 99.24 | 95.10 | 93.03 | 96.41 | 98.80 | 99.02 | 82.03 | 82.24 |
| Soybeans-min | 58.35 | 88.30 | 93.05 | 80.73 | 82.13 | 90.28 | 88.75 | 94.38 | 94.25 |
| Soybean-clean | 84.40 | 97.16 | 97.16 | 95.39 | 96.81 | 95.21 | 95.74 | 96.28 | 96.45 |
| Wheat | 99.38 | 99.38 | 99.38 | 99.38 | 99.38 | 100 | 99.38 | 99.38 | 100 |
| Woods | 88.91 | 97.91 | 97.91 | 95.66 | 96.78 | 90.84 | 90.84 | 91 | 91 |
| Bldg-Grass-Tree-Drives | 72.73 | 99.70 | 99.70 | 95.15 | 98.79 | 98.48 | 98.18 | 99.70 | 98.79 |
| Stone-steel towers | 95.56 | 97.78 | 97.78 | 100 | 97.78 | 100 | 100 | 100 | 97.78 |
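For reference, the accuracy measures reported in these tables follow directly from the confusion matrix; a small sketch is given below (rows taken as reference classes is our assumption).

```python
import numpy as np

def accuracy_measures(cm):
    """cm: confusion matrix with rows = reference classes, cols = predictions."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    oa = np.trace(cm) / n                                # overall accuracy
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))           # average (per-class) accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
    kappa = (oa - pe) / (1.0 - pe)                       # kappa coefficient
    return 100 * oa, 100 * aa, 100 * kappa
```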

5.2. Koronia Image

The Koronia image is an IKONOS bundle image acquired over a cultivated area around Lake Koronia, northern Greece. The image has four spectral channels (three visible and one near-infrared) with a spatial resolution of 4 m/pixel. Our experiments were conducted on a sub-image of 1000 × 1000 pixels, extracted from the agricultural zone near the lake. Five classes of interest were identified: alfalfa, cereals, maize, orchards, and urban areas, with the first three being the major ones. The reference sites were collected through extensive field survey and expert photo-interpretation, in combination with high-resolution orthophotos. The training set was selected randomly from the reference data and comprises 300 samples for the first three classes and 150 for the remaining two. The rest of the reference data formed the test set, as detailed in Table 5.
Pixel-wise SVM classification for this image is performed using an extended space of 53 features overall, including the original four bands of the image, transformed spectral features (TSF), and textural features. In particular, we consider the intensity (I) and hue (H) components of the HSI color space and the three components of the Tasseled Cap transformation, which are suitable for vegetation representation. Furthermore, we include 16 features from the Gray-Level Co-occurrence Matrix (GLCM) and 28 wavelet features. The textural features are computed from fixed local windows of appropriate size around each pixel. A detailed discussion on the derivation of the above features can be found in [33].
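As an illustration of the textural part of this feature space, the sketch below computes a few GLCM statistics over a fixed local window with scikit-image; the specific 16 GLCM features used in the paper are detailed in [33] and are not reproduced here.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

window = np.random.randint(0, 256, (21, 21), dtype=np.uint8)  # placeholder patch
glcm = graycomatrix(window, distances=[1],
                    angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                    levels=256, symmetric=True, normed=True)
texture = {p: graycoprops(glcm, p).mean()    # average over the four directions
           for p in ("contrast", "homogeneity", "energy", "correlation")}
```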
Figure 9. Koronia image: (a) three-band false color composite, (b) reference sites, (c) watershed segmentation map, (d) initial segmentation map after CC labeling, (e) segmentation map after GeneSIS (black areas denote the yet uncovered regions of the image), and (f) classification map after FMV-fusion.
For clarity of presentation, the obtained results are depicted on a 450 × 450 pixel portion of the whole study area. The three-band false color image and the reference data of this portion are shown in Figure 9a,b, respectively. The initially oversegmented map is shown in Figure 9c, where the watershed objects are depicted on Band 4. After the assignment of watershed pixels to their neighboring objects, the set of initial structural elements comprises 43,565 watersheds. For the pixel-wise SVM classification, the optimal parameters were chosen through five-fold cross-validation: $C = 512$ and $\gamma = 2^{-11}$. After derivation of the watershed fuzzy degrees, we obtain the map shown in Figure 9d. Visual assessment shows that the majority of the fields are correctly classified. However, within some large physical structures, i.e., crop fields, there exist small patches that are classified erroneously; for instance, some components within certain maize fields are wrongly assigned to the alfalfa class. Based on the size of the fields in the study area, the marking threshold is set to $\Omega_{min}^{C} = 100$. The global threshold of fuzziness $\Delta\mu_{th}$ is set to a higher value than for the Indiana image, specifically $\Delta\mu_{th} = 0.6$, since the classification results are more precise here. As a result, 21,945 watersheds are selected for marking.
A typical segmentation after GeneSIS is shown in Figure 9e. The extracted objects follow the size and orientation of the ground truth structures, notably avoiding oversegmentation of the large ground components. GeneSIS generated on average 475.5 CCs, considerably fewer than the 4178 connected components existing in the initial map C, shown in Figure 9d. Finally, the map obtained after FMV-fusion is depicted in Figure 9f.
In Table 4, we present the comparative results of the two GeneSIS variants. Conclusions similar to those for the Indiana image can be drawn for this case study. For the region-based GeneSIS, we notice a small increase in average and maximum OA, while the standard deviation of the results is slightly decreased. As in the previous image, the critical asset of the region-based representation lies in the substantial reduction of the average execution time, in this case by 57%. Table 5 summarizes the results from GeneSIS after fusion and those of the comparative methods. It can be seen that the proposed segmentation fusion methods exhibit similar or higher OA than the best one in the ensemble of segmentations. Both fusion methods outperform the competing methods in terms of OA and AA, as well as in the majority of the class-specific accuracies. Finally, the three GeneSIS-based methods improve on the results of the initial segmentation map C and the filtered SVM + PR results by about 2%.
Table 4. Average OA, standard deviation of OA, maximum OA, and execution time for the two GeneSIS versions in Koronia Image.
| | Average | Standard Deviation | Max | Time (m) |
|---|---|---|---|---|
| Pixel-based GeneSIS | 82.93 | 0.19 | 83.26 | 17.07 |
| Region-based GeneSIS | 83.53 | 0.12 | 83.74 | 7.40 |
Table 5. Classification accuracies for the Koronia image.
| Class | SVM | FMV-Fusion | MSF-Fusion | SVM + PR | Initial Map C | CaHO | HSwC | M-HSEGop | MSF |
|---|---|---|---|---|---|---|---|---|---|
| DC | – | – | – | – | L1 | SAM | SAM | SAM | L1 |
| OA | 77.37 | 83.87 | 83.59 | 81.48 | 81.28 | 82.30 | 82.49 | 80.52 | 81.44 |
| AA | 78.94 | 85.84 | 85.93 | 83.75 | 83.05 | 82.71 | 81.57 | 84.00 | 78.54 |
| k | 64.19 | 73.71 | 73.21 | 70.29 | 69.83 | 71.31 | 71.36 | 68.43 | 69.78 |
| Alfalfa | 64.78 | 70.01 | 69.40 | 69.49 | 67.18 | 68.58 | 66.09 | 63.95 | 66.47 |
| Cereals | 81.17 | 85.43 | 85.12 | 83.77 | 83.78 | 85.07 | 85.88 | 84.78 | 83.40 |
| Maize | 81.94 | 89.82 | 89.69 | 86.25 | 87.09 | 88.00 | 89.38 | 86.76 | 87.96 |
| Orchards | 80.61 | 93.40 | 94.42 | 88.24 | 92.14 | 93.40 | 91.82 | 92.93 | 65.09 |
| Urban | 86.19 | 90.53 | 91 | 90.99 | 85.03 | 78.47 | 74.69 | 91.58 | 89.80 |

5.3. University of Pavia Image

The University of Pavia image is a hyperspectral image acquired by the ROSIS-03 sensor over the University of Pavia, northern Italy. The spatial dimension of the image is 610 × 340 pixels and its spatial resolution is 1.3 m/pixel. The initially recorded image contains 115 bands (ranging from 0.43 to 0.86 μm); the 12 noisiest channels were removed and the remaining 103 spectral bands were used in our experiments. The comparison of the two GeneSIS approaches is shown in Table 6. The nine classes of interest existing in the terrain are detailed in Table 7. For the exact number of training and test samples per class, the reader is referred to [20]. A three-band false color composite and the reference sites are shown in Figure 10a,b, respectively.
Figure 10. University of Pavia image: (a) three-band false color composite, (b) reference sites, (c) watershed segmentation map, (d) initial segmentation map after CC labeling, (e) segmentation map after GeneSIS, and (f) classification map after FMV-fusion.
Table 6. Average OA, standard deviation of OA, maximum OA, and execution time for the two GeneSIS versions in University of Pavia Image.
| | Average | Standard Deviation | Max | Time (m) |
|---|---|---|---|---|
| Pixel-based GeneSIS | 88.41 | 0.22 | 88.96 | 3.92 |
| Region-based GeneSIS | 89.86 | 0.65 | 90.95 | 1.07 |
Table 7. Classification accuracies for the University of Pavia image.
| Class | SVM | FMV-Fusion | MSF-Fusion | SVM + PR | Initial Map C | CaHO | HSwC | M-HSEGop | MSF |
|---|---|---|---|---|---|---|---|---|---|
| DC | – | – | – | – | SAM | SAM | SAM | SAM | SAM |
| OA | 81 | 90.49 | 89.56 | 87.19 | 86.55 | 88.45 | 87.51 | 89.96 | 88.48 |
| AA | 88.15 | 94.95 | 95.01 | 92.73 | 93.41 | 94.45 | 93.21 | 95.39 | 93.30 |
| k | 75.74 | 87.59 | 86.43 | 83.46 | 82.70 | 85.07 | 83.87 | 86.97 | 85.11 |
| Asphalt | 76.51 | 94.91 | 93.78 | 89.58 | 90.40 | 93.51 | 90.75 | 97.73 | 97.26 |
| Meadows | 73.59 | 83.42 | 81.53 | 79.03 | 76.91 | 79.22 | 78.81 | 80.80 | 78.76 |
| Gravel | 71.35 | 85.51 | 89.20 | 75.37 | 81.10 | 86.39 | 85.40 | 92.29 | 89.48 |
| Trees | 98.70 | 96.46 | 96.09 | 99.52 | 98.21 | 98.73 | 98.52 | 96.91 | 95.54 |
| Metal sheets | 99.01 | 99.91 | 99.91 | 100 | 99.46 | 99.82 | 99.82 | 99.91 | 99.91 |
| Bare soil | 91.80 | 98.88 | 98.47 | 98.08 | 97.22 | 98.21 | 97.05 | 97.88 | 97.73 |
| Bitumen | 91.54 | 99.59 | 100 | 94.90 | 98.78 | 97.35 | 98.17 | 98.88 | 99.18 |
| Bricks | 91.14 | 99.26 | 99.11 | 98.19 | 98.72 | 99.20 | 98.81 | 99.79 | 99.85 |
| Shadows | 99.75 | 96.60 | 96.98 | 99.87 | 99.87 | 97.61 | 91.57 | 94.34 | 82.01 |
The initially obtained watershed segmentation map is depicted on Band 80 in Figure 10c. After the assignment of watershed pixels to their neighboring objects, the set of initial structural elements comprises 9152 watersheds. The optimal parameters of the pixel-based SVM classification were chosen through five-fold cross-validation: $C = 8$ and $\gamma = 2^{-5}$. The obtained SVM map is next combined with the watershed segmentation via the fuzzy integral approach, and the initial map C is obtained, as shown in Figure 10d. As can be seen, most of the class areas are correctly classified, except mainly for the large meadows region in the lower part of the image, which is confused with the trees and bare soil classes. This can be explained by observing Figure 10a, where it is evident that this region is spectrally heterogeneous although it belongs to a single class. In order for GeneSIS to be able to recognize some small components of trees and shadows, the marking threshold was set to $\Omega_{min}^{C} = 20$. The global threshold of fuzziness is set to a medium value, $\Delta\mu_{th} = 0.4$, since the classification results are of moderate precision compared to the previous two case studies. As a result, 4735 watersheds are selected for marking. Finally, Figure 10e,f show a typical segmentation after GeneSIS and the final map obtained after FMV fusion, respectively. Although most of the extracted segments follow the orientation and shape of the ground truth objects, some misclassifications remain at the lower part of the image. These can be attributed to the erroneous marking of some initial objects that were misclassified by the SVM.
The comparison of the two GeneSIS approaches in Table 6 leads to conclusions similar to those of the previous case studies. The region-based GeneSIS exhibits average and maximum accuracies higher by about 1.5%–2%, although the standard deviation of the results is increased. The most obvious effect of the region-based representation lies again in the execution time, which is decreased on average by 73%. Table 7 summarizes the results from GeneSIS after fusion and those of the comparative methods. It can be seen that the GeneSIS-based methods clearly outperform both the initial segmentation map C and the SVM + PR results. Notably, the latter method achieves high accuracies for the small classes, while it is unable to correctly classify the larger ones. Finally, the FMV-fusion method performs better than the competing methods in terms of OA and k, with the exception of M-HSEGop, which is superior in terms of AA.

6. Discussion

The purpose of the region-based GeneSIS and the two decision fusion schemes presented in this paper is to improve the performance of the previously developed pixel-based GeneSIS. The key difference between the two frameworks lies in the decisions made on two critical issues, namely, the selection of the structural element and the shape of the BSFs used by OEA. In our original proposal, we considered single pixels as the structural elements, while the BSFs were represented by rectangles of varying size and orientation. As a consequence of these settings, the pixel-based GeneSIS suffers from increased computational cost and, potentially, a rough description of the ambiguous areas of the image. The increased execution time is attributed to the fact that each pixel is repeatedly used for the evaluation of the fuzzy content of the BSFs, i.e., the computation of the coverage, consistency, and smoothness fitness components. Hence, in view of the global search strategy followed by OEA, the computational burden is considerably aggravated for larger images. Furthermore, the active areas are considered as connected components of pixels appearing within a BSF, and therefore the segments extracted by OEA remain close to the rectangular shape of the BSFs. As a result, for lower values of the smoothness controlling parameter $\lambda_{acc}$, the segments covering the boundary regions between different classes occasionally appear with irregular shapes, especially in more complicated landscapes. The newly developed schemes handle both critical issues differently: the watershed objects generated by a fine segmentation of the original image are regarded as the structural elements, and the BSFs are given a more expressive representation in the form of polygonal shapes. These settings enhance the performance of the resulting algorithms, as summarized in the following.

6.1. Computational Cost Reduction

The initial map, which serves as the basis for the GeneSIS operation, comprises a reduced number of watershed structural units, each carrying its own fuzzy degrees for the various classes. For instance, the initial map of the Koronia image contains 43,565 watersheds, far fewer than the $10^6$ pixels of the original image. This results in a substantial saving of computational cost, reducing the execution times of region-based GeneSIS by 56%–73% compared to the pixel-based variant for the images considered in our experiments. The required segmentation time may be of less concern for smaller images, but it is of particular importance when dealing with large images. Generally, the execution time of region-based GeneSIS depends on the number of watersheds in the initial map, which in turn is related to the image size and content. In particular, the Indiana image is a small image with 1109 watersheds requiring 5.87 s on average, while the Koronia image with 43,565 watersheds needs a much longer time of 7.40 m for the segmentation task.

6.2. Classification Accuracy

While accomplishing considerable computational savings, region-based GeneSIS also achieves better classification accuracies than the pixel-based algorithm. Specifically, it improves both the average OA and the best OA attained from an ensemble of different segmentations by approximately 1% for all images examined. In addition, the new algorithm proves more robust, since it reduces the standard deviation of the OAs, with the exception of the Pavia image. It should be noted that GeneSIS is an evolutionary segmentation algorithm, producing different segmentation results for different population initializations. The measures cited in Table 2, Table 4 and Table 6 show that a map obtained from a single GeneSIS run is statistically expected to achieve the average OA, with all runs exhibiting small deviations around these values.
In order to alleviate the stochastic effect of GeneSIS, we also introduce in this paper two segmentation fusion schemes (Section 4), both operating at the watershed level. In the former configuration, an ensemble of different segmentations is aggregated through a fuzzy majority voting rule, while in the latter the segmentation ensemble is combined to create a map of reliable region markers, which is subsequently used by MSF to complete the final classification map. Given a segmentation ensemble from GeneSIS, the above fusion schemes provide a single classification map. The results in Table 3, Table 5 and Table 7 reveal that the fusion methods produce similar or higher classification accuracies compared to the best accuracies attained by GeneSIS in the segmentation ensemble.
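A minimal sketch of the voting idea, operating on the watershed basis (our own simplified formulation; the paper's FMV rule may weight the votes differently), is:

```python
import numpy as np

def fmv_fusion(ensemble_degrees):
    """ensemble_degrees: (n_runs, n_watersheds, n_classes) fuzzy degrees
    assigned to each watershed by each segmentation in the ensemble."""
    votes = np.asarray(ensemble_degrees).sum(axis=0)  # accumulate fuzzy votes
    return votes.argmax(axis=1)                       # fused class per watershed
```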
The results offered by our methods compare favorably with those given by the initial map C after CC labeling of the watersheds. Specifically, the aforementioned tables indicate that both GeneSIS and the fusion approaches outperform the accuracies of the initial maps by 2%–4%. This implies that GeneSIS considerably reduces the oversegmentation appearing in C, generating larger and more homogeneous segments. Finally, our methods outperform the pixel-based SVM classifications, both with and without spatial filtering. Depending on the spatial mask used, filtering homogenizes the objects locally, reassigning the erroneously classified pixels in the maps; filtering with an 8-pixel neighborhood actually produces results similar to those given by the initial map C. As expected, the improvement over the SVM classification alone is prominent, an observation advocating the use of object-based classification. Nevertheless, the proposed approaches also outperform the filtered maps by 2%–5% for the different images examined. This is attributed to the fact that, contrary to the limited observation scale of filtering, GeneSIS evaluates broader areas of the image, namely, all watershed objects contained in the BSFs, and is thus able to formulate more compact segments that fit the ground truth structures.
Table 8. Summary of pair-wise classification comparisons using the McNemar test. Our region-based GeneSIS and the two fusion schemes (rows) are compared to the pixel-based GeneSIS and six other methods (columns). One-sided tests are performed with 5% level of significance.
| Image | Method | Pixel-Based GeneSIS | SVM + PR | Initial Map C | CaHO | HSwC | M-HSEGop | MSF |
|---|---|---|---|---|---|---|---|---|
| Indiana | Region-based GeneSIS | 2.24 | 17.46 | 16.10 | 6.48 | 9.57 | 6.86 | 6.61 |
| Indiana | FMV-fusion | 0.00 | 15.44 | 13.91 | 4.34 | 7.39 | 5.06 | 4.78 |
| Indiana | MSF-fusion | 2.58 | 17.04 | 13.76 | 6.89 | 9.85 | 7.77 | 7.41 |
| Koronia | Region-based GeneSIS | 27.03 | 69.25 | 88.89 | 49.54 | 42.77 | 86.27 | 64.41 |
| Koronia | FMV-fusion | 30.98 | 73.89 | 98.92 | 32.67 | 46.95 | 90.63 | 68.31 |
| Koronia | MSF-fusion | 22.63 | 63.03 | 74.45 | 24.61 | 41.51 | 85.37 | 62.70 |
| Pavia | Region-based GeneSIS | 15.61 | 26.59 | 32.37 | 18.58 | 24.86 | 7.86 | 17.91 |
| Pavia | FMV-fusion | 12.53 | 24.38 | 31.74 | 16.11 | 22.67 | 4.56 | 15.39 |
| Pavia | MSF-fusion | 4.96 | 17.84 | 23.01 | 8.66 | 15.63 | −3.40 | 8.61 |
Finally, Table 8 summarizes the statistical significance of the results reported in Section 5. For this purpose, we use the directional (one-sided) McNemar test with a 5% level of significance. The table reports the values of the z statistic, comparing the three suggested methods against all competing algorithms on a pair-wise basis. It can be seen that for all datasets and nearly all comparisons, the statistic exceeds the critical value, $z > z_{0.05} = 1.65$, which verifies the superiority of our approaches. The main exception appears for the Pavia image, where the M-HSEGop method outperforms the MSF-based fusion scheme.
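For completeness, the z statistic of this test can be computed from the paired per-sample outcomes as in the sketch below (standard formulation; variable names are ours).

```python
import numpy as np

def mcnemar_z(y_true, pred_a, pred_b):
    """One-sided McNemar z: positive values favor method A over method B."""
    ok_a = np.asarray(pred_a) == np.asarray(y_true)
    ok_b = np.asarray(pred_b) == np.asarray(y_true)
    f_ab = np.sum(ok_a & ~ok_b)    # A correct, B wrong
    f_ba = np.sum(~ok_a & ok_b)    # B correct, A wrong
    n = f_ab + f_ba
    return 0.0 if n == 0 else (f_ab - f_ba) / np.sqrt(n)
```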

6.3. Classification Map Quality

The properties of the segmentation/classification maps produced by the new methods emanate from the following four factors. First, the preliminary watershed segmentation into small and well-shaped objects eliminates the corrupting noise usually pertaining to pixel-wise representations; the watersheds retain their own boundaries along with the fuzzy degrees assigned to them by the fuzzy integral aggregation (Section 2.3). Second, the polygonal representation of the chromosomes allows the BSFs to take a great variety of flexible forms with different locations, scales, and shapes. Third, our choice to consider a watershed as contained in a BSF as soon as its geometric centroid falls within the area delineated by the BSF provides additional flexibility to OEA, since it decouples the polygon shape (genotype) from the shape of the actual solution of the chromosome (phenotype). The shapes of the extracted objects are thus defined by the borders of the watersheds contained in a BSF and do not necessarily comply with the BSF shape. The synergy of the above tools renders region-based GeneSIS capable of generating arbitrarily shaped, homogeneous segments with a good fit to the reference sites. This is demonstrated, for instance, by comparing the map created by GeneSIS (Figure 8g) to the reference map (Figure 8b) for the Indiana image: the soybean no-till and soybean min-till segments in the middle of the image are, among others, indicative cases of large and smooth segments of varying shapes and good matching. Note that the creation of large segments covering a whole reference field with a single OEA extraction is due to the fourth contributing factor, namely the incorporation of the tuning operator (Section 3.5), which adjusts the elite solutions by merging adjacent watersheds of the same class. Finally, observing the map of Figure 8j, it can be seen that the already good results attained by GeneSIS are further improved by the best performing MSF-based fusion scheme, which increases the degree of matching. Similar conclusions can be drawn for the rest of the images examined. Generally, the level of accuracy is closely reflected in the quality of the resulting classification maps.
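The centroid rule in the third factor amounts to a point-in-polygon test; a standard ray-casting sketch (ours, not the authors' code) is shown below.

```python
def centroid_in_bsf(cx, cy, polygon):
    """polygon: list of (x, y) BSF vertices; returns True if the watershed
    centroid (cx, cy) lies inside, via the even-odd ray-casting rule."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > cy) != (y2 > cy):               # edge straddles the ray
            if cx < x1 + (cy - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside
```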

6.4. Parametric Robustness

The parameter set used by GeneSIS can be divided into four groups. The first group includes the GA parameters ($N_{max}$, $n_g$, $N_p$, $a$, $p_c$, $p_m$) used to control the population evolution for each invocation of OEA. The second group includes the parameters involved in the chromosome encoding, namely, the number of polygon nodes ($n$) and the maximum allowable radius ($R_{max}$). The third group contains the parameter $\lambda_{acc}$ used in the calculation of the smoothness component $f_{SMO}$. Finally, the fourth group includes the parameters $\Omega_{min}^{C}$ and $\Delta\mu_{th}$ pertaining to the marker selection module. The majority of parameters receive common values for all test cases considered (Table 1), with the exception of $\Omega_{min}^{C}$ and $\Delta\mu_{th}$, which are adapted to each image individually.
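To fix ideas, a polygonal BSF of the second group might be decoded as in the following sketch (a hypothetical encoding: a center plus n radii at evenly spaced angles, each clipped to $R_{max}$; the actual chromosome layout is defined by OEA).

```python
import numpy as np

def decode_bsf(center, radii, r_max=50.0):
    """center: (x, y); radii: n gene values, one polygon node each."""
    r = np.clip(np.asarray(radii, dtype=float), 0.0, r_max)  # enforce R_max
    theta = np.linspace(0.0, 2.0 * np.pi, len(r), endpoint=False)
    return list(zip(center[0] + r * np.cos(theta),
                    center[1] + r * np.sin(theta)))

# e.g., a 16-node BSF (the value of n selected in the experiments):
vertices = decode_bsf((120.0, 80.0), np.random.uniform(5.0, 50.0, 16))
```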
The parameters of the first group take typical values from the GA literature, considering the search requirements of the optimization task undertaken by OEA. Preliminary experimentation with different settings showed that they have negligible impact on the results. In the following, we examine the influence of $n$ and $R_{max}$ in the second group. Specifically, Table 9 shows the results for different numbers of polygon nodes, considering the smaller Indiana image and the larger Koronia image. For the evaluation of the different choices, we also report the corresponding average execution times required by GeneSIS. It can be seen that all accuracy measures remain practically unchanged for different numbers of nodes, for both images. Nevertheless, increasing $n$ has an adverse effect on the computational cost: larger numbers of nodes require considerably longer execution times, an effect more prominent for the Koronia image. The rule we propose is that $n$ should be chosen according to the image demands, as a compromise between search flexibility and computational cost. On the one hand, $n$ should be high enough that OEA can locate irregularly shaped components in the terrain, thus providing acceptable classification accuracies. On the other hand, for larger images, $n$ should take a moderate value when segmentation time is an issue.
Table 9. Classification accuracies and execution times obtained by region-based GeneSIS for varying values of the number of polygon nodes. The experiments refer to the Indiana and Koronia images. The results corresponding to the selected parameter values are shown in bold.
Indiana:

| Polygon Nodes (n) | 8 | 16 | 24 | 32 |
|---|---|---|---|---|
| OAavg | 93.88 | **94.30** | 94.95 | 94.82 |
| OAbest | 94.62 | **95.03** | 96.10 | 95.53 |
| AAavg | 93.75 | **94.13** | 94.25 | 94.22 |
| AAbest | 94.31 | **96.39** | 94.90 | 94.69 |
| Time | 5.71 s | **5.87 s** | 5.70 s | 5.69 s |

Koronia:

| Polygon Nodes (n) | 8 | 16 | 24 | 32 |
|---|---|---|---|---|
| OAavg | 83.31 | **83.53** | 83.55 | 83.65 |
| OAbest | 83.48 | **83.74** | 83.73 | 83.84 |
| AAavg | 85.32 | **85.57** | 85.64 | 85.68 |
| AAbest | 85.65 | **86.13** | 85.92 | 85.99 |
| Time | 4.62 m | **7.40 m** | 9.45 m | 11.05 m |
In Table 10, we present the results for varying values of $R_{max}$ around the typical value ($R_{max} = 50$) selected in the experiments for the Indiana image. The results are evaluated in terms of OA and AA, considering both the average and the best records in the ensemble of 30 segmentations; in this way, we can assess the behavior of GeneSIS more robustly. The results show that $R_{max}$ has an insignificant influence on all accuracy measures. Table 10 also shows the results for the parameter $\lambda_{acc}$, which affects the smoothness of the extracted segments by evaluating the shape of their external borders. Again, different values of this parameter around the typical $\lambda_{acc} = 0.1$ have no significant influence on the obtained accuracies. Finally, the parameters of the fourth group are selected according to the image content: $\Omega_{min}^{C}$ is set to the size of the smallest reference component to be recognized, while $\Delta\mu_{th}$ is set to the median of the map comprising, for each watershed, the difference between the fuzzy degrees of the dominant and the most competing class. In conclusion, the previous analysis indicates that, with the exception of the number of nodes, the proposed region-based GeneSIS is largely insensitive to its parameter settings.
Table 10. Classification accuracies by region-based GeneSIS for varying values of Rmax and λacc in the Indiana dataset. The results corresponding to the selected parameter values are shown in bold.
| λacc | **0.1** | 0.3 | 0.5 | 0.7 |
|---|---|---|---|---|
| OAavg | **94.30** | 94.37 | 94.38 | 94.38 |
| OAbest | **95.05** | 95.02 | 95.70 | 95.11 |
| AAavg | **94.13** | 94.07 | 94.01 | 94.03 |
| AAbest | **96.39** | 94.56 | 94.73 | 94.40 |

| Rmax | 30 | 40 | **50** | 60 | 70 |
|---|---|---|---|---|---|
| OAavg | 94.30 | 94.39 | **94.30** | 94.36 | 94.40 |
| OAbest | 95.05 | 95.21 | **95.03** | 94.91 | 95.50 |
| AAavg | 94.04 | 94.11 | **94.13** | 94.07 | 94.13 |
| AAbest | 94.32 | 94.63 | **96.39** | 94.26 | 94.69 |

7. Conclusions

A novel version of the GeneSIS algorithm has been presented in this paper, where the main segmentation is performed on an initial region-based map of the image acquired via the watershed transform. The evolutionary part of GeneSIS is also enhanced by adopting a more descriptive polygonal shape in the chromosome encoding. As a final step, two fusion schemes are applied to overcome the stochastic effect of the algorithm. The effectiveness of the proposed scheme is validated on the classification of three remote sensing images. Compared with the pixel-based version, the execution time of region-based GeneSIS is considerably reduced in all test cases. At the same time, higher average accuracies are obtained, indicating enhanced robustness. Moreover, objects with more arbitrary shapes are extracted, since their boundaries are now formed by the watershed objects included in the BSFs, while the polygonal chromosome representation also enables the extraction of more irregular and non-convex structures. Finally, both fusion methods attain accuracies similar to or higher than the best in the ensemble of different segmentations.

Acknowledgments

This research has been co-financed by the European Union (European Social Fund, ESF) and by Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF), Research Funding Program ARCHIMEDES III: "Investing in knowledge society through the European Social Fund." The authors would also like to thank P. Gamba for providing the hyperspectral dataset of the University of Pavia.

Author Contributions

Stelios K. Mylonas implemented the proposed methodology, performed the experimental analysis, and contributed to manuscript writing and results interpretation. Dimitris G. Stavrakoudis performed the image preprocessing and contributed to results interpretation. John B. Theocharis proposed the overall methodology, developed the research design, and contributed to the discussion. Paris A. Mastorocostas contributed to the research design and manuscript revision.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Fu, K.S.; Mui, J.K. A survey on image segmentation. Pattern Recogn. 1981, 13, 3–16.
2. Huang, X.; Zhang, L. An adaptive mean-shift analysis approach for object extraction and classification from urban hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2008, 46, 4173–4185.
3. Yang, H.; Du, Q.; Ma, B. Decision fusion on supervised and unsupervised classifiers for hyperspectral imagery. IEEE Geosci. Remote Sens. Lett. 2010, 7, 875–879.
4. Beaulieu, J.; Goldberg, M. Hierarchy in picture segmentation: A stepwise optimal approach. IEEE Trans. Pattern Anal. 1989, 11, 150–163.
5. Baatz, M.; Schäpe, A. Multiresolution segmentation—An optimization approach for high quality multi-scale image segmentation. In Angewandte Geographische Informationsverarbeitung XII; Strobl, J., Blaschke, T., Griesbner, G., Eds.; Wichmann-Verlag: Heidelberg, Germany, 2000; pp. 12–23.
6. Tilton, J.C.; Tarabalka, Y.; Montesano, P.M.; Gofman, E. Best-merge region growing segmentation with integrated nonadjacent region object aggregation. IEEE Trans. Geosci. Remote Sens. 2012, 50, 4454–4467.
7. Hay, G.J.; Blaschke, T.; Marceau, D.J.; Bouchard, A. A comparison of three image-object methods for the multiscale analysis of landscape structure. ISPRS J. Photogramm. Remote Sens. 2003, 57, 327–345.
8. Benz, U.C.; Hofmann, P.; Willhauck, G.; Lingenfelder, I.; Heynen, M. Multi-resolution object-oriented fuzzy analysis of remote sensing data for GIS-ready information. ISPRS J. Photogramm. Remote Sens. 2004, 58, 239–258.
9. Yu, Q.; Gong, P.; Clinton, N.; Biging, G.; Kelly, M.; Schirokauer, D. Object-based detailed vegetation classification with airborne high spatial resolution remote sensing imagery. Photogramm. Eng. Remote Sens. 2006, 72, 799–811.
10. Mallinis, G.; Koutsias, N.; Tsakiri-Strati, M.; Karteris, M. Object-based classification using Quickbird imagery for delineating forest vegetation polygons in a Mediterranean test site. ISPRS J. Photogramm. Remote Sens. 2008, 63, 237–250.
11. Tzotsos, A.; Karantzalos, K.; Argialas, D. Object-based image analysis through nonlinear scale-space filtering. ISPRS J. Photogramm. Remote Sens. 2011, 66, 2–16.
12. Feitosa, R.Q.; Costa, G.A.O.P.; Cazes, T.B.; Feijo, B. A genetic approach for the automatic adaptation of segmentation parameters. In Proceedings of the First International Conference on Object-Based Image Analysis, Salzburg, Austria, 4–5 July 2006.
13. Drăguţ, L.; Csillik, O.; Eisank, C.; Tiede, D. Automated parameterisation for multi-scale image segmentation on multiple layers. ISPRS J. Photogramm. Remote Sens. 2014, 88, 119–127.
14. Tarabalka, Y.; Chanussot, J.; Benediktsson, J.A. Segmentation and classification of hyperspectral images using watershed transformation. Pattern Recogn. 2010, 43, 2367–2379.
15. Li, D.; Zhang, G.; Wu, Z.; Yi, L. An edge embedded marker-based watershed algorithm for high spatial resolution remote sensing image segmentation. IEEE Trans. Image Process. 2010, 19, 2781–2787.
16. Angulo, J.; Velasco-Forero, S. Semi-supervised hyperspectral image segmentation using regionalized stochastic watershed. Proc. SPIE 2010, 7695.
17. Derivaux, S.; Forestier, G.; Wemmert, C.; Lefevre, S. Supervised image segmentation using watershed transform, fuzzy classification and evolutionary computation. Pattern Recogn. Lett. 2010, 31, 2364–2374.
18. Pratikakis, I.; Vanhammel, I.; Sahli, H.; Gatos, B.; Perantonis, S.J. Unsupervised watershed-driven region-based image retrieval. Vis. Image Signal Process. 2006, 153, 313–322.
19. Makrogiannis, S.; Economou, G.; Fotopoulos, S. Region dissimilarity relation that combines feature-space and spatial information for color image segmentation. IEEE Trans. Syst. Man Cy. B 2005, 35, 44–53.
20. Mylonas, S.K.; Stavrakoudis, D.G.; Theocharis, J.B. GeneSIS: A GA-based fuzzy segmentation algorithm for remote sensing images. Knowl. Based Syst. 2013, 54, 86–102.
21. Mylonas, S.K.; Stavrakoudis, D.G.; Theocharis, J.B.; Mastorocostas, P.A. Spectral-spatial classification of remote sensing images using a region-based GeneSIS segmentation algorithm. In Proceedings of the IEEE International Conference on Fuzzy Systems, Beijing, China, 6–11 July 2014; pp. 1976–1984.
22. Moustakidis, S.P.; Mallinis, G.; Koutsias, N.; Theocharis, J.B. SVM-based fuzzy decision trees for classification of high spatial resolution remote sensing images. IEEE Trans. Geosci. Remote Sens. 2012, 50, 149–169.
23. Evans, A.N.; Liu, X.U. A morphological gradient approach to color edge detection. IEEE Trans. Image Process. 2006, 15, 1454–1463.
24. Meyer, F. Topographic distance and watershed lines. Signal Process. 1994, 38, 113–125.
25. Cho, S.-B.; Kim, J.H. Multiple network fusion using fuzzy logic. IEEE Trans. Neural Networks 1995, 6, 497–501.
26. Tarabalka, Y.; Tilton, J.C.; Benediktsson, J.A.; Chanussot, J. A marker-based approach for the automated selection of a single segmentation from a hierarchical set of image segmentations. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 262–272.
27. Herrera, F.; Lozano, M.; Sanchez, A.M. A taxonomy for the crossover operator for real-coded genetic algorithms: An experimental study. Int. J. Intell. Syst. 2003, 18, 309–338.
28. Tarabalka, Y.; Chanussot, J.; Benediktsson, J.A. Segmentation and classification of hyperspectral images using minimum spanning forest grown from automatically selected markers. IEEE Trans. Syst. Man Cy. B 2010, 40, 1267–1279.
29. Tarabalka, Y.; Benediktsson, J.A.; Chanussot, J.; Tilton, J.C. Multiple spectral-spatial classification approach for hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2010, 48, 4122–4132.
30. Tarabalka, Y.; Tilton, J.C. Spectral-spatial classification of hyperspectral images using hierarchical optimization. In Proceedings of the Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, Lisbon, Portugal, 6–9 June 2011; pp. 1–4.
31. Tarabalka, Y.; Tilton, J.C. Best merge region growing with integrated probabilistic classification for hyperspectral imagery. In Proceedings of the International Geoscience and Remote Sensing Symposium, Vancouver, BC, Canada, 24–29 July 2011; pp. 3724–3727.
32. Tadjudin, S.; Landgrebe, D.A. Covariance estimation with limited training samples. IEEE Trans. Geosci. Remote Sens. 1999, 37, 2113–2118.
33. Mitrakis, N.E.; Topaloglou, C.A.; Alexandridis, T.K.; Theocharis, J.B.; Zalidis, G.C. Decision fusion of GA self-organizing neuro-fuzzy multilayered classifiers for land cover classification using textural and spectral features. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2137–2152.
