Article

An Efficient Parallel Multi-Scale Segmentation Method for Remote Sensing Imagery

by Haiyan Gu, Yanshun Han, Yi Yang, Haitao Li, Zhengjun Liu, Uwe Soergel, Thomas Blaschke and Shiyong Cui

1 Institute of Photogrammetry and Remote Sensing, Chinese Academy of Surveying and Mapping, 28 Lianhuachi Road, Beijing 100830, China
2 Institute for Photogrammetry, University of Stuttgart, Geschwister-Scholl-Str. 24D, 70174 Stuttgart, Germany
3 Department of Geoinformatics—Z_GIS, University of Salzburg, Schillerstrasse 30, 5020 Salzburg, Austria
4 Remote Sensing Technology Institute (IMF), Earth Observation Center (EOC), German Aerospace Center (DLR), 82234 Weßling, Germany
* Author to whom correspondence should be addressed.
Remote Sens. 2018, 10(4), 590; https://doi.org/10.3390/rs10040590
Submission received: 16 January 2018 / Revised: 26 March 2018 / Accepted: 7 April 2018 / Published: 11 April 2018

Abstract

Remote sensing (RS) image segmentation is an essential step in geographic object-based image analysis (GEOBIA) to ultimately derive “meaningful objects”. While many segmentation methods exist, most of them are not efficient for large data sets. Thus, the goal of this research is to develop an efficient parallel multi-scale segmentation method for RS imagery by combining graph theory and the fractal net evolution approach (FNEA). Specifically, a minimum spanning tree (MST) algorithm in graph theory is proposed to be combined with a minimum heterogeneity rule (MHR) algorithm that is used in FNEA. The MST algorithm is used for the initial segmentation while the MHR algorithm is used for object merging. An efficient implementation of the segmentation strategy is presented using data partition and the “reverse searching-forward processing” chain based on message passing interface (MPI) parallel technology. Segmentation results of the proposed method using images from multiple sensors (airborne, SPECIM AISA EAGLE II, WorldView-2, RADARSAT-2) and different selected landscapes (residential/industrial, residential/agriculture) covering four test sites demonstrated its accuracy and computational efficiency. We conclude that the proposed method is applicable and efficient for the segmentation of a variety of RS imagery (airborne optical, satellite optical, SAR, hyperspectral), while the accuracy is comparable with that of the FNEA method.


1. Introduction

Remote sensing (RS) image segmentation is an essential step to ultimately derive “meaningful objects” in geographic object-based image analysis (GEOBIA) [1,2]. The optimal segmentation of an image should result in a balance of over-segmentation and under-segmentation of the scene. Many segmentation methods for RS imagery exist. They can be grouped into four categories: (a) pixel-based (thresholding, clustering, etc.) [3,4]; (b) edge detection (boundary tracking, curve fitting, etc.) [5]; (c) region-based (region growing, watershed, split and merge, level set, etc.) [6]; (d) others (wavelet, neural networks, mean-shift, fuzzy sets, etc.) [7,8]. Region-based segmentation methods are the dominant methods that can be used to create compact regions and identify suitable scales [9]. The fractal net evolution approach (FNEA) and graph-based methods are the two dominant region-based approaches. The FNEA, proposed by Baatz and Schäpe in 2000 [10] and commercialized in the software eCognition (http://www.ecognition.com), is a bottom-up region growing and merging technique based on local criteria: neighboring image objects are merged to form bigger objects based on a minimum heterogeneity rule (MHR) algorithm, thereby building a multi-scale hierarchical structure. In addition, spectral, shape and textural information can be extracted as well, which is helpful for advanced applications [10]. However, this method, as with most local region growing techniques, has some limitations: it is time-consuming to create a large number of initial image objects starting from individual pixels [11]; and choosing starting points in a random sequence is suboptimal, because percolation patterns or clusters increasingly appear once a certain density of points is reached [10].
Graph methods explicitly organize the image elements into mathematically sound structures and make the formulation of the problem more flexible and the computation more efficient [12]. There are four basic algorithms: best merge [13], minimum spanning tree (MST) [14], minimum mean cut [15,16], and normalized cut [17]. The best merge and MST algorithms follow a bottom-up approach: they start from a segmented image, i.e., each pixel constitutes a separate segment; then, these algorithms iteratively merge two adjacent segments. The minimum mean cut and normalized cut algorithms follow a top-down approach: the whole image is one segment initially; then, in each step, one of the segments is split into parts [18]. These graph-based algorithms have been applied to different RS imagery (such as hyperspectral, panchromatic, multispectral, LiDAR, SAR, etc.) [19,20,21,22].
Recently, two obvious trends within RS image segmentation can be observed: (a) new theories are constantly applied, leading to more and more specific RS image segmentation methods and methodologies; (b) different segmentation algorithms have been combined to overcome the disadvantages of each individual algorithm [1]. The key problem is how to combine these algorithms so as to fully exploit the advantages of each while compensating for its shortcomings. For instance, Li et al. proposed an efficient multi-scale segmentation method combining a statistical region merging algorithm and an MHR algorithm; the results showed that the combined algorithm captures the main structural components of imagery by a simple but effective statistical analysis, copes with significant noise corruption, and handles occlusions with a sort function [23]. Wang presented a multiresolution RS image segmentation method combining a rainfall watershed algorithm and a fast region merging method, yielding a stable performance for high spatial resolution RS imagery [24].
Our goal in this research is to develop an efficient segmentation method by combining MST and MHR, which should have the following properties: (a) be suitable for a variety of RS imagery, such as hyperspectral, multispectral, SAR, etc.; (b) be applicable to complex scenes (residential/industrial, residential/agriculture); (c) be efficient for large datasets; (d) produce image objects that represent at least the sub-objects of geo-objects, whereby in an ideal situation image objects are representations of geo-objects. The initial research was published at the 2015 International Workshop on Image and Data conference [25]. In this paper, we extend the 2015 work significantly by presenting the method in detail and performing a comprehensive evaluation. The main parts of this study are as follows: (a) the MST algorithm for initial segmentation is combined with the MHR algorithm, adopted from FNEA, for object merging; (b) an efficient implementation of the segmentation strategy is presented using data partition and a “reverse searching-forward processing” chain based on message passing interface (MPI) parallel technology; (c) high-resolution optical images, a hyperspectral image and a SAR image from four test sites are used to assess the proposed method.

2. Multi-Scale Segmentation Method Based on MST and MHR

The flowchart of our multi-scale segmentation method is shown in Figure 1. It consists of two main procedures: an initial segmentation using the MST algorithm from graph theory (Section 2.1), and an object merging step using the MHR algorithm from the FNEA method (Section 2.2).

2.1. Initial Segmentation Based on MST

Graph-based image segmentation techniques generally represent the problem in terms of a graph $G = (V, E)$, where each node $v_i \in V$ corresponds to a pixel in the image, and each edge $(v_i, v_j) \in E$ connects a pair of neighboring pixels. Each edge is associated with a weight $w((v_i, v_j))$, a non-negative measure of the dissimilarity (e.g., the difference in intensity, color, motion, location or some other local attribute) between neighboring elements $v_i$ and $v_j$ [14]. A segmentation $S$ is a partition of $V$ into components such that each component (or region) $C \subseteq V$ corresponds to a connected component in a graph $G' = (V, E')$, where $E' \subseteq E$. The pixel-based image is shown as a graph in Figure 2.
The merging criterion $D$ presented by Felzenszwalb and Huttenlocher [14], based on the MST algorithm, is used to decide whether to merge neighboring components:

$$D(C_i, C_j) = \begin{cases} \text{true}, & \text{if } Dif(C_i, C_j) > MInt(C_i, C_j) \\ \text{false}, & \text{otherwise} \end{cases} \qquad (1)$$
The difference between two components $C_i, C_j \subseteq V$, $Dif(C_i, C_j)$, is the minimum weight of an edge connecting the two components. That is,

$$Dif(C_i, C_j) = \min_{v_i \in C_i,\; v_j \in C_j,\; (v_i, v_j) \in E} w((v_i, v_j)) \qquad (2)$$
If there is no edge connecting $C_i$ and $C_j$, then $Dif(C_i, C_j) = \infty$.
The internal difference of a component $C \subseteq V$, $Int(C)$, is the largest weight in the minimum spanning tree of the component, $MST(C, E)$. That is,

$$Int(C) = \max_{e \in MST(C, E)} w(e) \qquad (3)$$
The minimum internal difference, $MInt(C_i, C_j)$, is defined as,

$$MInt(C_i, C_j) = \min\big(Int(C_i) + \tau(C_i),\; Int(C_j) + \tau(C_j)\big) \qquad (4)$$
The threshold function is defined as,

$$\tau(C) = k / |C| \qquad (5)$$
where $|C|$ denotes the size of $C$ and $k$ is a constant parameter. The role of this parameter is to avoid small regions caused by noise and to suppress the generation of excessively large connected regions: a larger $k$ causes a preference for larger components.
There are four steps in the initial segmentation procedure [14] (a minimal code sketch follows the list):
(1)
A graph $G = (V, E)$ with vertices $V$ and edges $E$ is built, where the weight of each edge is the Euclidean distance $d(v_i, v_j)$ between neighboring elements $v_i$ and $v_j$ in terms of the intensities of all bands.
(2)
Sort E by non-decreasing edge weight.
(3)
Start with an initial segmentation $S^0$, where each vertex $v_i$ is in its own component. Compute the threshold function $\tau(C_i)$ for each component using Formula (5). Then compute $Dif(C_i, C_j)$ and $MInt(C_i, C_j)$ for pairs of adjacent components using Formulas (2) and (4), and check whether $Dif(C_i, C_j) \le MInt(C_i, C_j)$. If the condition holds, the two components are merged; otherwise nothing is done. Repeat until all components have been processed.
(4)
The output is a segmentation of $V$ into components $S = (C_1, \ldots, C_R)$.
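The following Python sketch illustrates the four steps above. It is a minimal illustrative implementation of the Felzenszwalb-Huttenlocher scheme, not the authors' code; it assumes the input image is a float NumPy array of shape (height, width, bands), uses 4-connectivity, and all names are our own.

```python
import numpy as np

class DisjointSet:
    """Union-find over pixels, tracking component size |C| and Int(C)."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.int_diff = [0.0] * n  # largest MST edge weight inside each component

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b, w):
        if self.size[a] < self.size[b]:
            a, b = b, a
        self.parent[b] = a
        self.size[a] += self.size[b]
        self.int_diff[a] = w  # valid because edges arrive in non-decreasing order

def initial_segmentation(image, k=300.0):
    """Graph-based initial segmentation in the style of [14]; k is the
    constant of the threshold function, Formula (5)."""
    h, w, _ = image.shape
    idx = lambda r, c: r * w + c

    # Step 1: build the graph with one edge per 4-neighbor pair; the weight
    # is the Euclidean distance over all bands.
    edges = []
    for r in range(h):
        for c in range(w):
            if c + 1 < w:
                edges.append((float(np.linalg.norm(image[r, c] - image[r, c + 1])),
                              idx(r, c), idx(r, c + 1)))
            if r + 1 < h:
                edges.append((float(np.linalg.norm(image[r, c] - image[r + 1, c])),
                              idx(r, c), idx(r + 1, c)))

    # Step 2: sort edges by non-decreasing weight.
    edges.sort(key=lambda e: e[0])

    # Step 3: start from S^0 (one component per pixel) and merge greedily.
    ds = DisjointSet(h * w)
    for weight, a, b in edges:
        ca, cb = ds.find(a), ds.find(b)
        if ca == cb:
            continue
        m_int = min(ds.int_diff[ca] + k / ds.size[ca],   # MInt, Formulas (4)-(5)
                    ds.int_diff[cb] + k / ds.size[cb])
        if weight <= m_int:                              # Dif <= MInt: merge
            ds.union(ca, cb, weight)

    # Step 4: output one label per pixel, one label per component.
    return np.array([ds.find(i) for i in range(h * w)]).reshape(h, w)
```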

2.2. Object Merging Based on MHR

To implement the multi-scale segmentation, the MHR is introduced to merge adjacent regions from the initial segmentation, which reduces the disturbance from noise, reduces the fragmentation of object boundaries, and yields more regular objects [23]. Not only color heterogeneity ($h_{color}$) but also shape heterogeneity ($h_{shape}$) is considered. The heterogeneity $h$ of the MHR is defined as,
$$h = w_{color} h_{color} + w_{shape} h_{shape} \qquad (6)$$

where,

$$h_{color} = \sum_{c} w_c \Big( n_{merge}\, \sigma_{c,merge} - \big( n_{obj\_A}\, \sigma_{c,obj\_A} + n_{obj\_B}\, \sigma_{c,obj\_B} \big) \Big) \qquad (7)$$

$$h_{shape} = w_{compt} h_{compt} + w_{smooth} h_{smooth} \qquad (8)$$

$$h_{compt} = n_{merge} \frac{l_{merge}}{\sqrt{n_{merge}}} - \left( n_{obj\_A} \frac{l_{obj\_A}}{\sqrt{n_{obj\_A}}} + n_{obj\_B} \frac{l_{obj\_B}}{\sqrt{n_{obj\_B}}} \right) \qquad (9)$$

$$h_{smooth} = n_{merge} \frac{l_{merge}}{b_{merge}} - \left( n_{obj\_A} \frac{l_{obj\_A}}{b_{obj\_A}} + n_{obj\_B} \frac{l_{obj\_B}}{b_{obj\_B}} \right) \qquad (10)$$
The symbols $w_{color}$, $w_{shape}$, $w_{compt}$, $w_{smooth}$ are the weights of color, shape, compactness, and smoothness, respectively, with $w_{color}, w_{shape}, w_{compt}, w_{smooth} \in [0, 1]$, $w_{color} + w_{shape} = 1$, and $w_{compt} + w_{smooth} = 1$. The symbols $n_{merge}$, $n_{obj\_A}$, $n_{obj\_B}$ represent the size (in pixels) of the merged object, of object A before merging, and of object B before merging, while $w_c$ and $\sigma_c$ denote the weight and the standard deviation of the pixel values in band $c$. The symbols $l_{merge}$, $l_{obj\_A}$, $l_{obj\_B}$ denote the perimeters of the merged object, object A, and object B, and $b_{merge}$, $b_{obj\_A}$, $b_{obj\_B}$ are the corresponding bounding box perimeters.
There are three steps in the merging procedure (a code sketch of the heterogeneity computation follows the list):
(1)
Set the parameters of the MHR, i.e., the weights of color, shape, compactness, and smoothness ($w_{color}$, $w_{shape}$, $w_{compt}$, $w_{smooth}$) and the scale parameter $Q$. Then compute the heterogeneity $h$ of adjacent objects using Formula (6).
(2)
Decide whether $h$ satisfies the MHR: if $h < Q^2$, the adjacent smaller objects are merged into a bigger one. Meanwhile, the average size, standard deviation, and mean of all objects are recalculated. Repeat this process until no more adjacent objects can be merged.
(3)
Repeat steps 1–2 at successive scale parameters to accomplish the multi-scale segmentations.
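To make the merging criterion concrete, the following sketch computes $h$ for a candidate merge of two adjacent objects. It is an illustrative sketch under our own assumptions, not the authors' implementation: object statistics are assumed precomputed and stored in dictionaries with hypothetical keys.

```python
import numpy as np

def heterogeneity(obj_a, obj_b, merged, w_color=0.9, w_compt=0.5, band_weights=None):
    """Heterogeneity h of Formula (6) for a candidate merge.

    Each object is a dict with (hypothetical) keys:
      'n'   - object size in pixels
      'std' - per-band standard deviation of pixel values (NumPy array)
      'l'   - object perimeter
      'b'   - bounding-box perimeter
    """
    w_shape = 1.0 - w_color   # w_color + w_shape = 1
    w_smooth = 1.0 - w_compt  # w_compt + w_smooth = 1
    if band_weights is None:
        band_weights = np.ones_like(merged['std'])

    # Formula (7): color heterogeneity
    h_color = float(np.sum(band_weights * (
        merged['n'] * merged['std']
        - (obj_a['n'] * obj_a['std'] + obj_b['n'] * obj_b['std']))))

    # Formulas (9)-(10): compactness and smoothness heterogeneity
    compt = lambda o: o['n'] * o['l'] / np.sqrt(o['n'])
    smooth = lambda o: o['n'] * o['l'] / o['b']
    h_compt = compt(merged) - (compt(obj_a) + compt(obj_b))
    h_smooth = smooth(merged) - (smooth(obj_a) + smooth(obj_b))

    # Formulas (6) and (8)
    h_shape = w_compt * h_compt + w_smooth * h_smooth
    return w_color * h_color + w_shape * h_shape

# Merging rule of step (2): merge two adjacent objects only if h < Q ** 2.
```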

3. Parallel Segmentation Based on MPI

One problem with applying this method is that it is very time-consuming for large datasets. Thus, in this section, we present a parallel implementation that relies on a data partition strategy and a “reverse searching-forward processing” chain strategy to boost segmentation speed on a cluster platform using MPI parallel technology. MPI has the advantages of portability, standardization, and efficiency, and it solves the communication problems of parallel processing [26]. The computing resources of the cluster platform are divided into one master node and many slave nodes in a Master/Slave mode, which can handle dynamic load balancing [26].

3.1. Data Partition Strategy

Data partition is a common strategy for parallel processing [27]: the data is divided into rectangular blocks by the master node using the extended buffer strategy. The data partition strategy is shown in Figure 3.
It has the following three steps:
(1)
The data is divided into rectangular blocks, and every block is labelled with “1, 2, 3, …, i, …” from left to right and top to bottom.
(2)
Each block is assigned to a different slave node. The block assigned to the slave node at coordinate $(n, p)$ is labelled $i$ (see the short snippet after this list), that is,

$$i = p + n \times P$$

where $P$ is the number of slave nodes in the column direction. Here the column and row directions are equivalent to the x and y coordinate directions, respectively.
(3)
The slave nodes process their blocks according to the block labels and send the results to the master node until all the data has been processed.
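As a small illustration of the assignment rule (our own snippet, with hypothetical names):

```python
def block_label(n, p, P):
    """Label i of the block handled by the slave node at coordinate (n, p),
    with P slave nodes in the column direction: i = p + n * P."""
    return p + n * P

# With P = 4 slave nodes per column, the node at (n, p) = (2, 1) handles block 9.
assert block_label(2, 1, 4) == 9
```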

3.2. “Reverse Searching-Forward Processing” Chain

The “reverse searching-forward processing” chain is an interconnected system, which describes a series of steps. Each step can be performed simultaneously by different slave nodes [28]. The chain is shown in Figure 4.
At first, the slave nodes search backwards from the result for the relevant data and parameters until they have found all the data, parameters, and algorithms. Then the slave nodes use these data, parameters, and algorithms to process forwards from the source to the result until all the steps are completed.

3.3. Parallel Segmentation Method

The computing resources of the cluster platform are divided into one master node and many slave nodes. The master node is responsible for scheduling tasks and collecting results from the slave nodes. The slave nodes are responsible for processing and submitting the tasks. The source file is a series of images to be processed, and the target file is the segmentation result. The flowchart for the parallel segmentation method based on MPI is shown in Figure 5.
The three steps in the parallel segmentation method are as follows (an illustrative MPI sketch follows the list):
(1)
The master node is responsible for reading the image and dividing it into blocks using the extended buffer strategy. A regular data division is used to divide the original image into rectangular sub-blocks, which are used as input data and assigned to the slave nodes for parallel computing.
(2)
The slave nodes are responsible for receiving and processing the blocks. First, each block is segmented separately: the block is segmented into initial objects based on MST, and the objects are then merged based on MHR to obtain the result. In the end, all the segmentation results are sent to the master node.
(3)
The master node collects the results from each slave node, and outputs the segmentation image.
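The following mpi4py sketch illustrates the master/slave workflow of these three steps under simplifying assumptions: mpi4py stands in for the authors' MPI implementation, segment_block() is a placeholder for the per-block MST + MHR segmentation of Section 2, and the extended-buffer handling of block borders is omitted for brevity.

```python
# Run with e.g.: mpirun -np 4 python parallel_seg.py (requires at least 2 processes)
import numpy as np
from mpi4py import MPI

def segment_block(block):
    # Placeholder: return one label per pixel of the block.
    return np.zeros(block.shape[:2], dtype=np.int32)

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()

chunks = None
if rank == 0:
    # Master (step 1): read the scene and cut it into one strip per slave node.
    image = np.random.rand(2048, 2048, 4).astype(np.float32)  # stand-in for real data
    chunks = [None] + np.array_split(image, nprocs - 1, axis=0)

# Step 2: each slave receives its block and segments it; the master gets None.
block = comm.scatter(chunks, root=0)
labels = None if block is None else segment_block(block)

# Step 3: the master collects the per-block results and mosaics them.
results = comm.gather(labels, root=0)
if rank == 0:
    segmentation = np.concatenate([r for r in results if r is not None], axis=0)
    print("segmented image shape:", segmentation.shape)
```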
We apply two strategies to deal with objects adjacent to block borders. One is the extended buffer strategy: the block under consideration is enlarged into an extended block, the extended block is processed, and the block at its normal size is output (Figure 6). The other strategy removes the boundaries of the vectorized segmentation results based on the mean feature of the neighboring objects (Figure 7); the mean is calculated using statistical methods. These two strategies do not solve the boundary problem exhaustively; solving it entirely would be a research question in itself and the subject of another study. Our two-fold solution handles the vast majority of cases and represents a trade-off between perfection and effort.

4. Test Sites and Experiments

4.1. Test Sites

We tested the proposed method on a variety of images with different selected landscapes (Table 1). The test images were chosen with diverse scene complexities for exploring the general applicability of the proposed method.
Test data 1 (T1), covering the city of Potsdam in Germany, was selected from the ISPRS benchmark project [29]; it is an airborne true orthophoto with 0.05 m resolution and four bands (red, green, blue, and infrared). Test data 2 (T2), covering the city of DaFeng in China, is a hyperspectral SPECIM AISA EAGLE II dataset acquired in November 2014 with a spatial resolution of 0.78 m, a spectral resolution of 9.6 nm, and 64 bands with wavelengths from 400 nm to 970 nm. The image was atmospherically corrected using the FLAASH model implemented in the ENVI software (http://www.harrisgeospatial.com/), and then 10 bands were selected using the minimum noise fraction (MNF) method from ENVI. Test data 3 (T3), covering the city of Lintong in the northwest of China, consists of a panchromatic (Pan) WorldView-2 image with a resolution of 0.5 m and a multispectral (MS) WorldView-2 image acquired in July 2011 with a resolution of 2.0 m and eight bands (coastal, blue, green, yellow, red, red edge, near-infrared 1, and near-infrared 2). The WorldView-2 MS and Pan images were fused using the PanSharp fusion method as implemented in the PCI Geomatica software (http://www.pcigeomatics.com/). Test data 4 (T4), covering the city of Genhe in China, is a quad-polarization RADARSAT-2 dataset acquired in September 2012 with a spatial resolution of 8.0 m. It was multi-looked using SARScape (http://www.harrisgeospatial.com/) and filtered using the enhanced Frost method provided by ENVI.

4.2. Results

Our segmentation method was evaluated and compared with the FNEA method, which has the same three parameters, namely scale, weight of color, and weight of compactness. It is important to set these parameters properly. There are several methods for a fact-based determination of these parameters, such as the estimation of scale parameters (ESP) [30], optimized image segmentation [31], the segmentation parameter tuner (SPT) [32], the plateau objective function (POF) [33], or the work of Stumpf and Kerle (2011), who optimized segmentation through the optimal use of the derived object features in a random forest framework [34]. In FNEA, the color and shape parameters act in opposite directions: the larger the weight of color, the better the spectral consistency of the resulting objects; the smaller the compactness weight, the more complex the average shapes of the resulting objects. In this study, the selection of image segmentation parameters is based on an iterative trial-and-error approach that is often utilized in object-based classification [35,36].
It should be noted that the scale parameter ($Q$) is the most important factor for segmentation, as it controls the relative size of the resulting image objects and has a direct effect on the segmentation accuracy: the bigger the scale value, the larger the resulting objects. The weight of color ($w_{color}$) is set larger than the weight of shape ($w_{shape}$), and the weight of smoothness ($w_{smooth}$) is set similar to the weight of compactness ($w_{compt}$) [37]. We set the same parameters ($w_{color}$, $w_{smooth}$) for the different experiments to compare the methods under the same conditions: for all four test datasets, $w_{color}$ is 0.9 and $w_{smooth}$ is 0.5. The $Q$ of the proposed method is the same as for the FNEA method. The segmentation parameters for the four experiments are shown in Table 2.
The segmentation results of the proposed method and FNEA with a $Q$ of 240 for the first test data are shown in Figure 8. Visually, the image objects partially represent parts or sub-objects of the targeted geo-objects, such as buildings, trees, cars, and roads. The two segmentation methods were compared with manually interpreted buildings and individual trees. The segmentation results of the proposed method and FNEA with a $Q$ of 360 for the second test data are shown in Figure 9. Buildings, water bodies, and roads are well segmented. The two segmentation methods were again compared to manually interpreted buildings. The third test was conducted in a residential/agriculture area. Figure 10 depicts the results of the proposed method and the FNEA method with a $Q$ value of 260. As can be seen, buildings, roads and water bodies are well segmented. The two segmentation methods were also compared to manually interpreted objects. The segmentation results of the proposed method and FNEA with a $Q$ value of 2000 for the fourth test data are shown in Figure 11. The image objects partially represent sub-objects of geo-objects, such as water bodies, fields, and trees. The two results were compared to manually interpreted fields. Overall, the segmentation results of the two methods for the four experiments look similar, but there are some differences, such as those highlighted for areas A and B in Figure 8, Figure 9, Figure 10 and Figure 11. For area A, more highly detailed segments are found in Figure 9a than in Figure 9b; the opposite is true for area B. The same holds for Figure 10a,b and for Figure 11a,b.

4.3. Analysis

4.3.1. Accuracy Evaluation

The geometries of the image objects are compared through a supervised segmentation evaluation [38]. This includes methods that assess the geometric differences between the generated image objects and the ground truth data [39]. For this study, the SPT tool is used to evaluate the segmentation results with seven metrics, namely the Hoover Index (H), Area-Fit-Index (AFI), Shape Index (SI), Rand Index (RI), F-measure (F), Segmentation Covering (C), and Reference Bounded Segments Booster (RBSB) [32], as shown in Table 3. The ground truth data for the four experiments are shown in Table 4. The values of the respective metrics for the four experiments are shown in Table 5.
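As an illustration of how such a supervised metric is computed, the following sketch evaluates the Precision-Recall metric F of Table 3 for a single segment/ground-truth pair (our own code, with boolean masks as an assumed input format):

```python
import numpy as np

def f_metric(seg, gt):
    """Precision-Recall metric F = 1 - 2PR/(P + R) from Table 3.

    seg and gt are boolean masks of a segment and its ground-truth segment;
    0 stands for a perfect match.
    """
    tp = np.logical_and(seg, gt).sum()   # pixels in both segment and GT
    fp = np.logical_and(seg, ~gt).sum()  # pixels in segment but not GT
    fn = np.logical_and(~seg, gt).sum()  # pixels in GT but not segment
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 1.0 - 2 * precision * recall / (precision + recall)

# A toy check: a segment that matches its ground truth exactly scores 0.
mask = np.zeros((10, 10), dtype=bool)
mask[2:7, 3:8] = True
assert f_metric(mask, mask) == 0.0
```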
The values of the metrics for the airborne, SPECIM AISA EAGLE II, WorldView-2, and RADARSAT-2 image segmentations are shown in Figure 12.
All the values except AFI for the airborne, SPECIM AISA EAGLE II, and WorldView-2 images were relatively low; the lower the value, the better the result. The results shown in Table 5 demonstrate that the image objects do not over-estimate the ground truth polygons too much, which is desirable. However, the relatively high AFI values suggest that the ground truth polygons are usually far larger than the evaluated image objects. For the RADARSAT-2 image, all the values except H and SI are high, although the ground truth polygons seem to match the image objects. We segmented the RADARSAT-2 image with different scale parameters and evaluated the segmentation results with the seven metrics. The object boundaries are not clear due to the speckle noise of SAR images, resulting in segments with rough boundaries compared to the other images; consequently, the values of RI, F, C, and RBSB are close to 1, the AFI is very high, and the overlap between segmentation and reference ground truth is poor. Still, this radar-specific phenomenon does not affect the comparative analysis, as we use the same evaluation metrics and ground truth.
The curve of the proposed method for the airborne image is the lowest, as shown in Figure 12a. The values of four metrics (AFI, RI, F, RBSB) of the proposed method are the lowest, which shows that the proposed method is better than the FNEA method for airborne images. There is only a slight difference between the proposed method and FNEA for the SPECIM AISA EAGLE II image, as shown in Figure 12b: the values of four metrics (SI, RI, F, C) of the proposed method are the same as for the FNEA method, which indicates that the two methods perform similarly on this image. The curve of the proposed method is slightly higher than that of the FNEA method for the WorldView-2 image, as shown in Figure 12c, while there is little difference for the RADARSAT-2 image, as shown in Figure 12d. The values of three metrics (AFI, RI, F) are slightly smaller for the proposed method than for FNEA, which indicates that the proposed method is better than the FNEA method for the RADARSAT-2 image. In summary, the proposed method is generally applicable to different RS imagery and is at least comparable with the FNEA method in accuracy, or even slightly better.

4.3.2. Speed Evaluation

In this section, we analyze the segmentation speed using three metrics: speed-up, efficiency, and effectiveness. The speed evaluation metrics for image segmentation are shown in Table 6.
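For clarity, the three metrics of Table 6 can be computed as follows (an illustrative snippet with hypothetical timings, not measurements from this study):

```python
def speed_metrics(t_single, t_parallel, p):
    """Speed-up, efficiency, and effectiveness of Table 6 for P = p nodes."""
    speed_up = t_single / t_parallel          # S_P = T_S / T_P
    efficiency = speed_up / p                 # E_P = S_P / P
    effectiveness = efficiency / t_parallel   # F_P = S_P / (P * T_P) = E_P / T_P
    return speed_up, efficiency, effectiveness

# Hypothetical timings: a run taking 3000 s on one node and 400 s on 14 nodes
# yields a speed-up of 7.5 and an efficiency of about 0.54.
print(speed_metrics(3000.0, 400.0, 14))
```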
We use two WorldView-2 images with volumes of 1.1 GB and 4.3 GB to test the speed of our segmentation method. The segmentation efficiency is related to the segmentation parameters: the larger the block, the faster the segmentation, and the larger the scale, the faster the segmentation. In the two tests, the block size is 1024 pixels, and the segmentation parameters (scale, weight of color, weight of smoothness) are 200, 0.9, and 0.5, respectively. The experimental processor is an Intel(R) Xeon(R) CPU E5-2640 (24 CPUs at 2.50 GHz) with 16 GB of DDR3 memory. The speed evaluation of the parallel segmentation is shown in Figure 13.
For the two test datasets, the running time (Figure 13a) tends to decrease with the number of CPUs until it reaches 14; it then becomes stable and shows little variation because the I/O (input/output) reaches a bottleneck. The speed-up (Figure 13b) tends to increase with the number of CPUs until it reaches 14, at which point the maximum is 6 for the 1.1 GB data and 7.65 for the 4.3 GB data; then it becomes stable. The efficiency (Figure 13c) tends to decrease as the number of CPUs increases. The effectiveness (Figure 13d) tends to increase with the number of CPUs until it reaches 11, after which it tends to decrease. In summary, with 14 CPUs the parallel computation achieves the maximum speed-up for both test datasets; the maximum effectiveness is achieved with 11 CPUs for the 1.1 GB data and 13 CPUs for the 4.3 GB data; and the parallel computing strategy is more efficient than stand-alone computing. The larger the dataset, the higher the speed-up and the efficiency.

5. Conclusions

This study proposes an efficient parallel multi-scale segmentation method for RS imagery based on graph theory and FNEA, which fully exploits the advantages of both. There are two main contributions: (a) the MST algorithm for initial segmentation is combined with the MHR algorithm, which is used for object merging in FNEA; (b) an efficient implementation of the segmentation strategy is presented using data partition and the “reverse searching-forward processing” chain based on MPI parallel technology. Segmentation results using images from different sensors (airborne, SPECIM AISA EAGLE II, WorldView-2, RADARSAT-2) and complex scenes (residential/industrial, residential/agriculture) covering four sites indicated a general applicability to different RS imagery (airborne optical, satellite optical, SAR, hyperspectral) in terms of accuracy, assessed using seven metrics: Hoover Index, Area-Fit-Index, F-measure, Shape Index, Reference Bounded Segments Booster, Rand Index, and Segmentation Covering. Segmentation results for the large WorldView-2 datasets demonstrate high speed in terms of three metrics: speed-up, efficiency, and effectiveness.
The proposed method has the following advantages: (a) it is an effective method that uses a fast graph segmentation algorithm to create the initial objects, with a complexity linear in the number of image pixels; (b) it is a multi-scale segmentation method for RS imagery that uses the MHR algorithm to merge the initial objects and is suitable for a variety of landscapes (residential/industrial, residential/agriculture); (c) it is efficient for large datasets thanks to data partition and the “reverse searching-forward processing” chain based on MPI parallel technology. We conclude that the proposed method is generally applicable and highly efficient for a variety of RS imagery (airborne optical, satellite optical, SAR, hyperspectral) over a variety of landscapes (residential/industrial, residential/agriculture), and is at least comparable to the FNEA method in terms of accuracy; in some respects it even performs slightly better.
Nevertheless, as segmentation itself is a problem without a perfect solution (some scholars even claim it is an ill-posed problem), it is hard to find optimal segmentation parameters, and there is no optimal solution across a variety of landscapes. Different landscapes may contain inherent features at different scales, thus requiring multi-scale consideration, but recent developments allow for more objective comparisons between the various parameters and thresholds. We refer to the popular ESP tool [30] and to the work of Stumpf and Kerle, who determined multiple optimal scales objectively and subsequently linked them to landscape fragments for image-based identification of landslides [34]. Ma et al. [46] extensively analyzed feature selection methods for OBIA classifiers. These and other recent efforts pave the way for future workflows. The currently prevailing strategy of selecting segmentation parameters by an iterative trial-and-error approach still requires a lot of effort; thus, automated methods need to be investigated for a variety of landscapes. In addition, the boundary problem of parallel segmentation was tackled satisfactorily in this study, but not perfectly; a full exploration of this problem would be worth a study of its own. Future work needs to integrate concepts from the OBIA community as laid out in [2] as well as concepts from computer vision, and the segmentation method will be implemented as open source and made available to the whole RS community.

Acknowledgments

This research was funded by: (1) the National Natural Science Foundation of China (Project Nos. 41701506 and 41671440); (2) the Central Public-interest Scientific Institution Basal Research Fund (Project No. 7771712); and (3) DAAD Strategic Partnerships and Thematic Networks 2015–2018, Program Line B: Thematic Networks, Stuttgart University (Project No. 57173947).

Author Contributions

This research was mainly performed and prepared by Haiyan Gu, Yanshun Han and Yi Yang. Haiyan Gu, Yanshun Han and Yi Yang contributed with ideas, conceived and designed the study. Haitao Li and Zhengjun Liu supervised the study and their comments were considered throughout the paper. Thomas Blaschke, Uwe Soergel and Shiyong Cui analyzed the results of the experiment, reviewed, and edited the manuscript.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16. [Google Scholar] [CrossRef]
  2. Blaschke, T.; Hay, G.J.; Kelly, M.; Lang, S.; Hofmann, P.; Addink, E.; Feitosa, R.; van der Meer, F.; van der Werff, H.; Van Coillie, F.; et al. Geographic Object-based Image Analysis: A new paradigm in Remote Sensing and Geographic Information Science. ISPRS J. Photogramm. Remote Sens. 2014, 87, 180–191. [Google Scholar] [CrossRef] [PubMed]
  3. Yuan, J.Y.; Wang, D.L.; Li, R.X. Remote Sensing Image Segmentation by Combining Spectral and Texture Features. IEEE Trans. Geosci. Remote Sens. 2014, 52, 16–24. [Google Scholar] [CrossRef]
  4. Wang, J.; Tang, J.L.; Liu, J.B.; Ren, C.Y.; Liu, X.N.; Feng, J. Alternative Fuzzy Cluster Segmentation of Remote Sensing Images Based on Adaptive Genetic Algorithm. Chin. Geogr. Sci. 2009, 19, 83–88. [Google Scholar] [CrossRef]
  5. Judah, A.; Hu, B.X.; Wang, J.G. An Algorithm for Boundary Adjustment toward Multi-Scale Adaptive Segmentation of Remotely Sensed Imagery. Remote Sens. 2014, 6, 3583–3610. [Google Scholar] [CrossRef]
  6. Karantzalos, K.; Argialas, D. A Region-based Level Set Segmentation for Automatic Detection of Man-made Objects from Aerial and Satellite Images. Photogramm. Eng. Remote Sens. 2009, 75, 667–677. [Google Scholar] [CrossRef]
  7. Gaetano, R.; Masi, G.; Poggi, G. Marker-Controlled Watershed-Based Segmentation of Multiresolution Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2987–3004. [Google Scholar] [CrossRef]
  8. Michel, J.; Youssefi, D.; Grizonnet, M. Stable Mean-Shift Algorithm and Its Application to the Segmentation of Arbitrarily Large Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2015, 53, 952–964. [Google Scholar] [CrossRef]
  9. Marpu, P.R.; Neubert, M.; Herold, H.; Niemeyer, I. Enhanced evaluation of image segmentation results. J. Spat. Sci. 2010, 55, 55–68. [Google Scholar] [CrossRef]
  10. Baatz, M.; Schäpe, A. Multiresolution segmentation: An optimization approach for high quality multi-scale image segmentation. In Angewandte Geographische Informationsverarbeitung XII; Strobl, J., Blaschke, T., Griesebner, G., Eds.; Wichmann: Heidelberg, Germany, 2000; pp. 12–23. [Google Scholar]
  11. Deng, F.L.; Yang, C.J.; Cao, C.X.; Fan, X.Y. An Improved Method of FNEA for High Resolution Remote Sensing Image Segmentation. J. Geo-Inf. Sci. 2014, 16, 95–101. [Google Scholar]
  12. Peng, B.; Zhang, L.; Zhang, D. A Survey of Graph Theoretical Approaches to Image Segmentation. Pattern Recognit. 2013, 46, 1020–1038. [Google Scholar] [CrossRef]
  13. Beaulieu, J.M.; Goldberg, M. Hierarchy in Picture Segmentation: A Stepwise Optimization Approach. IEEE Trans. Pattern Anal. Mach. Intell. 1989, 11, 150–163. [Google Scholar] [CrossRef]
  14. Felzenszwalb, P.F.; Huttenlocher, D.P. Efficient Graph-Based Image Segmentation. Int. J. Comput. Vis. 2004, 59, 167–181. [Google Scholar] [CrossRef]
  15. Wang, S.; Siskind, J.M. Image Segmentation with Minimum Mean Cut. In Proceedings of the 8th IEEE International Conference on Computer Vision, Vancouver, BC, Canada, 7–14 July 2001; pp. 517–524. [Google Scholar]
  16. Wang, S.; Siskind, J.M. Image Segmentation with Ratio Cut. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 675–690. [Google Scholar] [CrossRef]
  17. Shi, J.B.; Malik, J. Normalized cuts and image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 888–905. [Google Scholar]
  18. Dezső, B.; Giachetta, R.; László, I.; Fekete, I. Experimental study on graph-based image segmentation methods in the classification of satellite images. EARSeL eProc. 2012, 11, 12–24. [Google Scholar]
  19. Bue, B.D.; Thompson, D.R.; Gilmore, M.S.; Castano, R. Metric learning for hyperspectral image segmentation. In Proceedings of the 3rd IEEE WHISPERS, Lisbon, Portugal, 6–9 June 2011; pp. 1–4. [Google Scholar]
  20. Sırmaçek, B. Urban-Area and Building Detection Using SIFT Keypoints and Graph Theory. IEEE Trans. Geosci. Remote Sens. 2009, 47, 1156–1167. [Google Scholar] [CrossRef]
  21. Strîmbu, V.F.; Strîmbu, B.M. A graph-based segmentation algorithm for tree crown extraction using airborne LiDAR data. ISPRS J. Photogramm. Remote Sens. 2015, 104, 30–43. [Google Scholar] [CrossRef]
  22. Sharifi, M.; Kiani, K.; Kheirkhahan, M. A Graph-Based Image Segmentation Approach for Image Classification and Its Application on SAR Images. Prz. Elektrotech. 2013, 89, 202–205. [Google Scholar]
  23. Li, H.T.; Gu, H.Y.; Han, Y.S.; Yang, J.H. An efficient multi-scale SRMMHR (statistical region merging and minimum heterogeneity rule) segmentation method for high-resolution remote sensing imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2009, 2, 67–73. [Google Scholar] [CrossRef]
  24. Wang, M. A Multiresolution Remotely Sensed Image Segmentation Method Combining Rainfalling Watershed Algorithm and Fast Region Merging. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 1213–1218. [Google Scholar]
  25. Yang, Y.; Li, H.T.; Han, Y.S.; Gu, H.Y. High resolution remote sensing image segmentation based on graph theory and fractal net evolution approach. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 40, 197–201. [Google Scholar] [CrossRef]
  26. Shen, Z.F.; Luo, J.C.; Chen, Q.X.; Sheng, H. High-efficiency Remotely Sensed Image Parallel Processing Method Study Based on MPI. J. Image Graph. 2007, 12, 133–136. [Google Scholar]
  27. Li, G.Q.; Liu, D.S. Key Technologies Research on Building a Cluster-based Parallel Computing System for Remote Sensing. In Proceedings of the 5th International Conference on Computational Science, Atlanta, GA, USA, 22–25 May 2005; Volume 3516, pp. 484–491. [Google Scholar]
  28. Schott, J.R. Remote Sensing: The Image Chain Approach; Oxford University Press: Oxford, UK, 2007; pp. 17–19. [Google Scholar]
  29. ISPRS Benchmarks. 2D Semantic Labeling Contest—Potsdam. Available online: http://www2.isprs.org/commissions/comm3/wg4/2d-sem-label-potsdam.html (accessed on 15 September 2016).
  30. Drǎguţ, L.; Tiede, D.; Levick, S.R. ESP: A tool to estimate scale parameter for multiresolution image segmentation of remotely sensed data. Int. J. Geogr. Inf. Sci. 2010, 24, 859–871. [Google Scholar] [CrossRef]
  31. Gao, Y.; Mas, J.F.; Kerle, N.; Navarrete Pacheco, J.A. Optimal region growing segmentation and its effect on classification accuracy. Int. J. Remote Sens. 2011, 32, 3747–3763. [Google Scholar] [CrossRef]
  32. Achanccaray, P.; Ayma, V.A.; Jimenez, L.I.; Bernabe, S.; Happ, P.N.; Costa, G.A.O.P.; Feitosa, R.Q.; Plaza, A. SPT 3.1: A free software tool for automatic tuning of segmentation parameters in optical, hyperspectral and SAR images. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Milan, Italy, 26–31 July 2015; pp. 4332–4335. [Google Scholar]
  33. Martha, T.R.; Kerle, N.; van Westen, C.J.; Jetten, V.; Kumar, K.V. Segment Optimization and Data-Driven Thresholding for Knowledge-Based Landslide Detection by Object-Based Image Analysis. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4928–4943. [Google Scholar] [CrossRef]
  34. Stumpf, A.; Kerle, N. Object-oriented mapping of landslides using Random Forests. Remote Sens. Environ. 2011, 115, 2564–2577. [Google Scholar] [CrossRef]
  35. Myint, S.W.; Gober, P.; Brazel, A.; Grossman-Clarke, S.; Weng, Q. Per-pixel vs. object-based classification of urban land-cover extraction using high spatial resolution imagery. Remote Sens. Environ. 2011, 115, 1145–1161. [Google Scholar] [CrossRef]
  36. Pu, R.L.; Landry, S.; Yu, Q. Object-based urban detailed land-cover classification with high spatial resolution IKONOS imagery. Int. J. Remote Sens. 2011, 32, 3285–3308. [Google Scholar] [CrossRef]
  37. Johnson, B.A. Mapping Urban Land Cover Using Multi-Scale and Spatial Autocorrelation Information in High Resolution Imagery. Ph.D. Thesis, Florida Atlantic University, Boca Raton, FL, USA, May 2012. [Google Scholar]
  38. Zhang, Y.J. A survey on evaluation methods for image segmentation. Pattern Recognit. 1996, 29, 1335–1346. [Google Scholar] [CrossRef]
  39. Belgiu, M.; Dragut, L. Comparing supervised and unsupervised multiresolution segmentation approaches for extracting buildings from very high resolution imagery. ISPRS J. Photogram. Remote Sens. 2014, 96, 67–75. [Google Scholar] [CrossRef] [PubMed]
  40. Hoover, A.; Jean-Baptiste, G.; Jiang, X.; Flynn, P.J.; Bunke, H.; Goldgof, D.; Bowyer, K.; Eggert, D.W.; Fitzgibbon, A.; Fisher, R.B. An Experimental Comparison of Range Image Segmentation Algorithms. IEEE Trans. Pattern Anal. Mach. Intell. 1996, 18, 673–689. [Google Scholar] [CrossRef]
  41. Lucieer, A. Uncertainties in Segmentation and Their Visualisation. Ph.D. Thesis, Utrecht University and International Institute for Geo-Information Science and Earth Observation (ITC), Utrecht, The Netherlands, 2004. [Google Scholar]
  42. Neubert, M.; Meinel, G. Evaluation of Segmentation programs for high resolution remote sensing applications. In Proceedings of the Joint ISPRS/EARSeL Workshop “High Resolution Mapping from Space 2003”, Hannover, Germany, 6–8 October 2003; p. 8. [Google Scholar]
  43. Rand, W. Objective criteria for the evaluation of clustering methods. J. Am. Stat. Assoc. 1971, 336, 846–850. [Google Scholar] [CrossRef]
  44. Pont-Tuset, J.; Marques, F. Measures and Meta-Measures for the Supervised Evaluation of Image Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Portland, OR, USA, 23–28 June 2013; pp. 2131–2138. [Google Scholar]
  45. Feitosa, R.Q.; Costa, G.A.O.P.; Cazes, T.B.; Feijo, B. A genetic approach for the automatic adaptation of segmentation parameters. In Proceedings of the 1st International Conference on Object Based Image Analysis, Salzburg, Austria, 4–5 July 2006; p. 31. [Google Scholar]
  46. Ma, L.; Fu, T.Y.; Blaschke, T.; Li, M.C.; Tiede, D.; Zhou, Z.J.; Ma, X.X.; Chen, D.L. Evaluation of feature selection methods for object-based land cover mapping of UAV imagery by RF and SVM classifiers. ISPRS Int. J. Geo-Inf. 2017, 6, 51. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the multi-scale segmentation method based on MST and MHR.
Figure 2. Illustration of a graph-based image segmentation.
Figure 3. Data partition strategy based on MPI.
Figure 4. “Reverse searching-forward processing” chain based on MPI.
Figure 5. Flowchart for parallel segmentation based on MPI.
Figure 6. Extended buffer strategy.
Figure 7. Remove the boundary.
Figure 8. Segmentation results of the airborne true orthoimage at Q 240 (yellow outlines) and ground truth (104 buildings and individual trees, blue outlines). For area A, more highly detailed segments are found in (a) than in (b); the opposite is true for area B.
Figure 9. Segmentation results of the SPECIM AISA EAGLE II image at Q 360 (yellow outlines) and ground truth (28 big buildings, blue outlines). For area A, more highly detailed segments are found in (a) than in (b); the opposite is true for area B.
Figure 10. Segmentation results of the WorldView-2 image at Q 260 (yellow outlines) and ground truth (28 fields, blue outlines). For areas A and B, more highly detailed segments are found in (b) than in (a).
Figure 11. Segmentation results of the RADARSAT-2 image at Q 2000 (yellow outlines) and ground truth (21 fields, blue outlines). For area A, more highly detailed segments are found in (a) than in (b); the opposite is true for area B.
Figure 12. Values of metrics for different sensors using the two segmentation methods described in Section 2. (a) Airborne image segmentation. (b) SPECIM AISA EAGLE II image segmentation. (c) WorldView-2 image segmentation. (d) RADARSAT-2 image segmentation.
Figure 13. Speed evaluation of parallel segmentation. (a) Running time. (b) Speed-up. (c) Efficiency. (d) Effectiveness.
Table 1. Summary of four test sites and imagery types.

| Test | Imagery | Spatial Resolution (m) | Number of Bands | Landscape Characteristics | Location |
|------|---------|------------------------|-----------------|---------------------------|----------|
| T1 | Airborne true orthophoto | 0.05 | 4 | residential/industrial area | Potsdam, Germany |
| T2 | SPECIM AISA EAGLE II image | 0.78 | 10 | residential/industrial area | DaFeng, Yancheng, Jiangsu province, China |
| T3 | WorldView-2 image | 0.5 | 8 | residential/agriculture area | Lintong, Xi’an, Shaanxi province, northwest China |
| T4 | RADARSAT-2 image | 8 | 4 (HH, HV, VH, VV) | agriculture area | Genhe, Hulunbeier, Inner Mongolia, China |
Table 2. The segmentation parameters for the four experiments.

| Segmentation Parameters | Airborne | SPECIM AISA EAGLE II | WorldView-2 | RADARSAT-2 |
|-------------------------|----------|----------------------|-------------|------------|
| weight of color ($w_{color}$) | 0.9 | 0.9 | 0.9 | 0.9 |
| weight of smoothness ($w_{smooth}$) | 0.5 | 0.5 | 0.5 | 0.5 |
| scale parameter (Q) | 240 | 360 | 260 | 2000 |
Table 3. The metrics for accuracy evaluation.

Hoover Index (H): $H = 1 - \frac{CD}{N_{GT}}$. Measures the number of correct detections based on the percentage of overlap between segmentation and reference ground truth (GT) [40]. $CD$ is the number of correct detections and $N_{GT}$ is the number of segments in the GT image. Range [0,1]; H = 0 stands for perfect segmentation.

Area-Fit-Index (AFI): $AFI = \frac{1}{N_{GT}} \sum_{k=1}^{N_{GT}} \frac{A_k - A_{l.i.k}}{A_k}$. Addresses over-/under-segmentation by analyzing the overlapping area between segmentation and reference GT [41]. $A_k$ is the area, in pixels, of a reference segment $C_k$ in the GT image, and $A_{l.i.k}$ is the area, in pixels, of the segment in the segmentation outcome $S$ with the largest intersection with the reference segment $C_k$. AFI = 0 stands for perfect overlap.

Shape Index (SI): $SI = \frac{1}{N_{GT}} \sum_{j=1}^{N_{GT}} \left( \frac{\rho_j}{4\sqrt{A_j}} - \frac{\rho_i}{4\sqrt{A_i}} \right)$. Addresses the shape conformity between segmentation and reference GT [42]. $\rho_i$ and $\rho_j$ are the perimeters of the segments $C_i$ and $C_j$, and $A_i$ and $A_j$ are their respective areas. Range [0,1]; SI = 0 stands for perfect segmentation.

Rand Index (RI): $RI = 1 - \frac{|\mathcal{P}_{00}| + |\mathcal{P}_{11}|}{|\mathcal{P}|}$. Measures the ratio between pairs of pixels that were correctly classified and the total pairs of pixels [43]. Let $I = \{p_1, \ldots, p_N\}$ be the set of pixels of the original image and consider the set of all pairs of pixels $\mathcal{P} = \{(p_i, p_j) \in I \times I \mid i < j\}$. Moreover, $C_i$, a segment in the segmentation $S$, and $C_j$, a segment in the GT image, are considered as partitions of the image $I$. Then, $\mathcal{P}$ is divided into four different sets depending on where a pair $(p_i, p_j)$ of pixels falls: $\mathcal{P}_{11}$, in the same segment both in $C_i$ and $C_j$; $\mathcal{P}_{10}$, in the same segment in $C_i$ but different in $C_j$; $\mathcal{P}_{01}$, in the same segment in $C_j$ but different in $C_i$; $\mathcal{P}_{00}$, in different segments both in $C_i$ and $C_j$. Range [0,1]; RI = 0 stands for perfect segmentation.

Precision-Recall (F): $F = 1 - \frac{2PR}{P + R}$, with $P = \frac{tp}{tp + fp}$ and $R = \frac{tp}{tp + fn}$. Measures the trade-off between precision and recall, considering segmentation as a classification process [44]. Given a segment from the segmentation outcome $S$ and a segment from its GT, four regions can be differentiated: true positives ($tp$), pixels that belong to both $S$ and GT; false positives ($fp$), pixels that belong to $S$ but not to GT; false negatives ($fn$), pixels that belong to GT but not to $S$; true negatives ($tn$), pixels that belong to neither $S$ nor GT. Range [0,1]; F = 0 stands for perfect segmentation.

Segmentation Covering (C): $C(S \to GT) = 1 - \frac{1}{N} \sum_{C_t \in GT} |C_t| \cdot \max_{C_i \in S} O(C_i, C_t)$. Measures the number of pixels in the intersection of two segments [44]. $N$ is the total number of pixels in the original image, and the overlap between two segments, $C_i$ in a segmentation $S$ and $C_t$ in its GT, is defined as $O(C_i, C_t) = \frac{|C_i \cap C_t|}{|C_i \cup C_t|}$. Range [0,1]; C = 0 stands for perfect segmentation.

Reference Bounded Segments Booster (RBSB): $RBSB = \frac{1}{N_{GT}} \sum_{t=1}^{N_{GT}} \frac{fn_t + fp_t}{fn_t + tp_t}$. Measures the ratio between the number of pixels outside the intersection of two segments and the area of the reference GT [45]. $t$ indexes segments of the GT and $N_{GT}$ is the number of segments in the GT image; $fn$, $fp$, $tp$ are defined as for Precision-Recall (F). Range [0,1]; RBSB = 0 stands for perfect segmentation.
Table 4. The ground truth data for the four experiments.

| Ground Truth Data | Airborne | SPECIM AISA EAGLE II | WorldView-2 | RADARSAT-2 |
|-------------------|----------|----------------------|-------------|------------|
| Classes, number of polygons | buildings and individual trees, 104 | big buildings, 28 | fields, 28 | fields, 21 |
Table 5. The values of the fitness functions for the four experiments.

| Metrics | Airborne (Proposed) | Airborne (FNEA) | AISA EAGLE II (Proposed) | AISA EAGLE II (FNEA) | WorldView-2 (Proposed) | WorldView-2 (FNEA) | RADARSAT-2 (Proposed) | RADARSAT-2 (FNEA) |
|---|---|---|---|---|---|---|---|---|
| Hoover Index (H) | 0.88 | 0.84 | 0.64 | 0.68 | 0.68 | 0.61 | 0.58 | 0.57 |
| Area-Fit-Index (AFI) | 2.18 | 2.5 | 1.33 | 1.19 | 2.03 | 1.48 | 122.46 | 310.56 |
| Shape Index (SI) | 0.03 | 0.02 | 0.05 | 0.05 | 0.05 | 0.04 | 0.03 | 0.03 |
| Rand Index (RI) | 0.35 | 0.38 | 0.26 | 0.26 | 0.30 | 0.26 | 0.98 | 0.99 |
| Precision-Recall (F) | 0.25 | 0.28 | 0.17 | 0.17 | 0.21 | 0.18 | 0.96 | 0.98 |
| Segmentation Covering (C) | 0.49 | 0.49 | 0.45 | 0.45 | 0.43 | 0.35 | 0.99 | 0.99 |
| Reference Bounded Segments Booster (RBSB) | 0.73 | 0.75 | 0.64 | 0.60 | 0.60 | 0.53 | 0.99 | 0.99 |
Table 6. Speed evaluation metrics for image segmentation.

| Metrics | Formula | Remarks |
|---------|---------|---------|
| Speed-up | $S_P = T_S / T_P$ | $T_S$ is the processing time on a single computing node; $T_P$ is the processing time on $P$ computing nodes. |
| Efficiency | $E_P = S_P / P$ | $S_P$ is the speed-up on $P$ computing nodes. |
| Effectiveness | $F_P = S_P / (P \times T_P) = E_P / T_P$ | $E_P$ is the efficiency on $P$ computing nodes. |
