
Algorithms 2013, 6(4), 762-781;

Very High Resolution Satellite Image Classification Using Fuzzy Rule-Based Systems
Department of Geodesy and Geomatics Engineering, University of New Brunswick, 15 Dineen Drive, Fredericton, NB E3B 5A3, Canada
Author to whom correspondence should be addressed.
Received: 23 August 2013; in revised form: 22 October 2013 / Accepted: 6 November 2013 / Published: 12 November 2013


The aim of this research is to present a detailed step-by-step method for the classification of very high resolution urban satellite images (VHRSI) into specific classes such as road, building, and vegetation using fuzzy logic. In this study, object-based image analysis is used for image classification. The main problems in high resolution image classification are the uncertainty in the position of object borders in satellite images and the resemblance of the segments to multiple classes. In order to solve these problems, fuzzy logic is used for image classification, since it provides the possibility of analyzing an image using multiple parameters without requiring crisp thresholds in the class assignment process. In this study, an inclusive semi-automatic method for image classification is offered, which presents the configuration of the related fuzzy functions as well as the fuzzy rules. The produced results are compared to the results of a conventional classification using the same parameters but with crisp rules. The overall accuracies and kappa coefficients of the presented method are higher than those of the check projects.
fuzzy rule based systems; object-based image classification; very high resolution satellite imagery; urban land cover

1. Introduction

As satellite images attain finer spatial resolutions, they provide more detail for urban mapping [1]. However, considering the high spectral variation within the same urban class and the low spectral variation between different urban classes, image classification has become a challenging issue [1,2].
Considering the extreme level of detail in very high resolution urban satellite images (VHRSI), object-based methods are increasingly employed for image classification, since they bear a higher resemblance to human interpretation skills, and in object-based image analysis, object characteristics such as shape, texture, topological information, and spectral response can also be used [3,4]. In object-based image classification, an image is divided into non-overlapping segments which are then assigned to different classes using specific methods; for example, [5] presented a method for object-based image classification using a neural network with a kernel called the “cloud base” function. In [6], rule-based classification of very high resolution images using cartographic data was presented: in the segmentation step, the seeds were picked at the centers of cartographic objects, and the image was then classified using contextual rules. Although object-based image analysis is not yet mature enough for fully automatic image analysis, it is still very promising [5].
Detecting various image objects such as buildings and roads in VHRSI is quite problematic due to uncertain object borders as well as the resemblance of segments to multiple classes. Sometimes even the human eye has difficulty differentiating some image objects. Thus, a significant number of studies have added ancillary data, such as road maps, building maps, etc., to make the procedure easier [7,8,9,10]. Needless to say, such ancillary data are not always available; in this project, no ancillary data are used. Another group of articles uses fuzzy logic to deal with the mentioned complexity.
Fuzzy logic, which was developed by [11], has been used for image classification in several studies [12,13,14,15,16,17,18,19,20]. In [12], a fuzzy membership matrix was used for supervised image classification; by introducing partial membership of pixels, mixed pixels could be identified and more accurate classification results achieved. In addition, [13] used fuzzy logic for Spot image classification. In [14], fuzzy borders were used for image segmentation. In [15], fuzzy segmentation was adopted for object-based image classification: a fuzzy classification method was applied to a segmented image to classify large-scale areas such as mining fields and transit sites. In [16], object-oriented fuzzy analysis of remote sensing data was used to produce GIS-ready information. In [17,18,19,20], methods for hierarchical image classification using fuzzy logic were also presented.
As can be seen, fuzzy logic is quite popular in satellite image analysis. Considering the uncertainties in image pixels/segments, a fuzzy inference system can be of great help in image classification. However, the existing literature still lacks a step-by-step image classification method based on fuzzy logic. In this study, a fuzzy inference system is used for image classification in order to detect urban features such as buildings, roads, and vegetation using the tools provided in eCognition software. In this project, no ancillary data such as building maps or road maps are used. The focus is on establishing fuzzy membership functions for object extraction.
Typically, the main concern in high resolution satellite image classification is to differentiate objects like vegetation, roads, buildings, etc., especially in urban environments. Vegetation extraction methods are probably among the most straightforward object recognition techniques in remote sensing. The Near Infrared (NIR) band plays a crucial role in this field: considering the high reflectivity of vegetation in the NIR region, it is straightforward to detect such features in remote sensing images.
On the other hand, extraction of urban objects such as roads and buildings is more challenging, since they have similar spectral reflectance and texture. However, buildings have more compact geometric shapes, while roads are typically elongated features. Hence, shape parameters can be of great help in delineating buildings from roads. Contextual information is also a good tool for VHRSI classification; for example, buildings are elevated objects, so there are shadows associated with them in the direction opposite to the sun's azimuth. Therefore, if an urban object has a shadow in the related direction, it is a building [9]. Here, the role of object-based image analysis is underscored, as the shape and neighborhood of objects can be defined in object-based image analysis, but not in pixel-based methods.
In this article, the data specifications are presented first. Then, the fuzzy-based methodology is explained and the fuzzy membership functions and fuzzy rules are introduced. Finally, the accuracy assessment is performed and the results are compared to those of the same classification method with crisp thresholds.
In this paper, the offered method is tested on two different data sets (GeoEye and QuickBird imagery), and the results are compared to those of test projects with crisp thresholds.

2. Data

This study uses a GeoEye-1 satellite image captured over the city of Hobart, Australia, in February 2009 and a QuickBird image of Fredericton, Canada, acquired in 2002. Generally, GeoEye and QuickBird imagery comprise a panchromatic band (Pan) and four multi-spectral (MS) bands (Blue, Green, Red, Near Infrared), all of which are used in this study. The Ground Sampling Distance (GSD) of GeoEye-1 at nadir is 41 cm for the Pan band and 1.64 m for the MS bands; for QuickBird images, the GSD at nadir is around 61 cm for the Pan band and 2.4 m for the MS bands [21]. These two satellites produce typical high-resolution imagery, with spatial and spectral resolutions approximately similar to those of other VHRSI; therefore, these two satellite products were selected for this research. The image coverage is chosen so that it includes urban or suburban structure types with typical one- or two-story buildings, in order to avoid large relief distortions, which reduce classification accuracy. The sun azimuth angles are 59.58 and 141.95 degrees for the GeoEye and QuickBird images, respectively. These angles form shadows on the southern side, the western side, or both sides of a building, depending on the building orientation, in the GeoEye image, and on the northern side, the eastern side, or both in the QuickBird image. This information is used for building detection (see Section 3.2.4).
In order to take advantage of both the Pan and the MS bands in image classification, the Pan band and MS bands are initially fused; in the output, the MS bands have the same high spatial resolution as the Pan band. This is done using the UNB (University of New Brunswick) pan-sharpening method available in the Fuze-Go software. More details about the method can be found in [22].

3. Methodology

In this project the image is classified into 5 major classes: Shadow, Vegetation, Road, Building, and Bare land. In the hierarchy of the classification, shadow is extracted first. Then, vegetation is extracted from both the shadow and the unclassified segments; that is, shadow is not excluded from the classification process in this step. The logic behind this is that some parts of vegetation are covered by the shadows of others while still demonstrating similarities to vegetation, and we do not want to exclude them from the vegetation class. After vegetation extraction, road classification, building detection, and contextual analysis are performed, respectively. Finally, the remaining unclassified features are assigned to bare land. Figure 1 shows the flow chart of the presented method.
Figure 1. Flow chart of the presented method.
In this study, in order to classify a VHRSI, the image is first segmented; then, using the related fuzzy rules, the segments are assigned to specific classes. This process is explained in the rest of this article.

3.1. Image Segmentation

Generally, object-based image classification is based on image segmentation, which is a procedure of dividing an image into separate, homogeneous, non-overlapping regions based on pixel gray values, texture, or other auxiliary data [23]. One of the most popular image segmentation methods is multi-resolution segmentation, which is used in this study for initial segmentation in eCognition software. For multi-resolution image segmentation in eCognition, three parameters must be specified: scale, shape, and compactness. The eCognition default values for shape and compactness, 0.1 and 0.5 respectively, are used for initial segmentation. The scale is specified so that the resulting segments are smaller than the real objects; considering the spatial resolution of the used data, which is around 0.5 m, a scale of 10 is suitable. The initial segmentation results are sufficient for defining shadows, vegetation, and roads, but building detection requires a second-level segmentation due to the existing complexity, which is described in Section 3.2.4.

3.2. Fuzzy Image Classification

In traditional classification methods, such as the minimum distance method, each pixel or segment in the image receives an attribute equal to 1 or 0, expressing whether or not it belongs to a certain class. In fuzzy classification, instead of binary decision-making, the possibility of each pixel/segment belonging to a specific class is considered, which is defined using membership functions. A membership function gives membership degrees ranging from 0 to 1, where 1 means fully belonging to the class and 0 means not belonging at all [24].
With fuzzy logic, the class borders are no longer crisp thresholds; instead, membership functions are used, within which each parameter value has a specific possibility of being assigned to a specific class. By appending more parameters to the classification, for example using both the NIR ratio and NDVI for vegetation classification, better results can be achieved. Using fuzzy logic, classification accuracy is less sensitive to the thresholds.
μ_A is a fuzzy membership function over domain X, and μ_A(x) is called the membership degree, which ranges from 0 to 1 over domain X [23,24]. μ_A(x) can be a Gaussian, triangular, trapezoidal, or other standard function, depending on the application. In this research, trapezoidal and triangular functions are used (Figure 2); the associated formulas are given in Equations (1) and (2) [25,26].
Figure 2. Typical (a) trapezoidal and (b) triangular fuzzy functions used in this study.
Triangular function:
μ_A(x) = 1 − |a − x|/λ,  for 0 ≤ |a − x| ≤ λ;  μ_A(x) = 0, otherwise (1)
Trapezoidal function:
μ_A(x) = min{2 − 2|a − x|/λ, 1},  for a − λ ≤ x ≤ a + λ;  μ_A(x) = 0, otherwise (2)
where a is the x coordinate of the middle point of the trapezoidal function or of the peak of the triangular function, and λ equals half of the base of the triangle or half of the long base of the trapezoid. All function parameters are specified based on human expertise [25].
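For illustration, Equations (1) and (2) can be sketched in Python as follows (a minimal sketch; a and lam are supplied by the caller, and any concrete values used below are placeholders rather than values from the paper):

```python
def triangular(x, a, lam):
    """Triangular membership, Equation (1): peak value 1 at x = a,
    falling linearly to 0 at a distance of lam (half the base)."""
    if abs(a - x) <= lam:
        return 1.0 - abs(a - x) / lam
    return 0.0

def trapezoidal(x, a, lam):
    """Trapezoidal membership, Equation (2): midpoint a, half of the
    long base lam; the plateau of degree 1 spans |a - x| <= lam / 2."""
    if abs(a - x) <= lam:
        return min(2.0 - 2.0 * abs(a - x) / lam, 1.0)
    return 0.0
```

For instance, trapezoidal(x, a=5, lam=4) returns 1 everywhere on [3, 7] and decays linearly to 0 at x = 1 and x = 9.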
Figure 3. Linguistic variable example.
In fuzzy rule-based systems, linguistic variables are introduced, which replace crisp thresholds. For example, instead of defining vegetation with a threshold for NDVI, a linguistic value such as high, medium, or low is identified, with a specific fuzzy function that assigns a membership degree to specific ranges of NDVI. Figure 3 shows an example of linguistic variables [25,26,27]. In this research, only the linguistic variables useful for this study are defined.
The other important component of a fuzzy rule-based system is the fuzzy inference system, which uses fuzzy rules for decision making. In this project, the inference system of eCognition software is used. More details on fuzzy classification in eCognition software are given in [28].
In the presented method, the specifications of each object are tested using the fuzzy rules defined for each class, based on the hierarchy mentioned in Section 3. Each segment receives a degree specifying its similarity to each class, and segments with high similarity degrees are assigned to the associated class after defuzzification of the results. This step is also done in eCognition software.
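The decision logic can be imitated outside eCognition as well. The sketch below assumes the common choices of taking the minimum for the fuzzy "and" and assigning each segment to the class with the highest rule degree; this is one plausible reading of the inference described here, not the software's exact implementation:

```python
def fuzzy_and(*degrees):
    """Fuzzy "and" of several antecedent degrees, taken as the minimum
    (a common choice for rule evaluation)."""
    return min(degrees)

def classify(segment_degrees):
    """Defuzzify by picking the class with the highest degree.

    segment_degrees: dict mapping class name -> membership degree in [0, 1].
    A segment with zero degree for every class stays unclassified."""
    best = max(segment_degrees, key=segment_degrees.get)
    return best if segment_degrees[best] > 0 else "unclassified"
```

For example, a segment with degrees {"Road": 0.7, "Building": 0.2} is assigned to Road.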
In the following, the details of the parameters used for classifying the different objects are presented.

3.2.1. Shadow

In satellite images, it is very likely that shadows will be mistaken for roads, since both are narrow elongated objects. Hence, it is necessary to label shadow segments in advance. In addition, shadows can help in defining elevated objects [9]. In this research, 2 parameters are used for shadow detection: Brightness and Density, which are explained in this section.
Brightness: Brightness is the mean of the gray values of all bands for each pixel/segment, given in Equation (3).
Brightness = (Red + Green + Blue + NIR)/4 (3)
Since shadows tend to have low brightness values, in order to specify the parameters of the fuzzy function associated with low brightness in the image, an unsupervised classification method, fuzzy k-means clustering, is used to find the darkest cluster. Fuzzy k-means clustering is applied to the image to generate 15 clusters (generally, the number of clusters in unsupervised classification should be between 3 and 5 times the number of actual classes in the image [29]). Then, the mean and standard deviation of the darkest cluster are used to define the fuzzy function for shadow brightness. The fuzzy function used in this process is presented in Figure 4a.
Figure 4. (a) fuzzy function for Brightness: M is the mean of the darkest cluster and sigma is the associated standard deviation; (b) fuzzy function for Density.
The upper threshold is taken as the mean of the darkest cluster plus three times the associated standard deviation; statistically speaking, in a normal distribution, values up to M + 3σ cover over 99% of the data, which is why in this study the upper threshold equals M + 3σ. The aim is to cover 99% of the shadows, although the membership degree is at its lowest value at this threshold.
Density: Density is a feature presented in eCognition software, which describes the distribution of the pixels of an image object in space [28].
density = √(#Pv) / (1 + √(σ_X² + σ_Y²)) (4)
where √(#Pv) is the diameter of a square object with #Pv pixels and √(σ_X² + σ_Y²) is the diameter of the ellipse fitted to the segment.
The more the object resembles a square, the higher the density value. Consequently, filament-shaped objects have low density values, which is consistent with the shape of the shadows sought in this study; here, shadows are sought to help in finding buildings. In this project, the majority of shadows had a density of less than 1. There are also shadows, as well as some other objects, with density between 1 and 1.2, which means this range is the fuzzy range of shadow classification. Thus, segments with density less than 1 receive a membership degree of 1, and those between 1 and 1.2 fall in the fuzzy range, with membership degrees decreasing from 1 to 0 (Figure 4b). Equations (5) and (6) give the formulas for low brightness and low density associated with Figure 4. In other projects, the density of shadows might differ slightly from the mentioned numbers.
μ_B(x) = 1, for x < M;  μ_B(x) = 1 − (x − M)/(3σ), for 0 ≤ x − M ≤ 3σ;  μ_B(x) = 0, otherwise (5)
μ_D(x) = 1, for x < 1;  μ_D(x) = 1 − (x − 1)/0.2, for 0 ≤ x − 1 ≤ 0.2;  μ_D(x) = 0, otherwise (6)
The fuzzy rule used for shadow detection is:
  • If Brightness is low and Density is low then segment is Shadow.
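Putting Equations (3), (5), and (6) and the rule together, a minimal Python sketch of the shadow test could look as follows; m and sigma are assumed to be supplied from the darkest fuzzy k-means cluster, and the fuzzy "and" is taken as the minimum:

```python
def brightness(red, green, blue, nir):
    """Equation (3): mean gray value of the four bands."""
    return (red + green + blue + nir) / 4.0

def mu_brightness(x, m, sigma):
    """Equation (5): full membership below the darkest-cluster mean m,
    then a linear decrease reaching 0 at m + 3*sigma."""
    if x < m:
        return 1.0
    if x <= m + 3.0 * sigma:
        return 1.0 - (x - m) / (3.0 * sigma)
    return 0.0

def mu_density(x):
    """Equation (6): full membership below 1, linear decrease to 0 at 1.2."""
    if x < 1.0:
        return 1.0
    if x <= 1.2:
        return 1.0 - (x - 1.0) / 0.2
    return 0.0

def shadow_degree(b, d, m, sigma):
    # degree of the rule "Brightness is low AND Density is low"
    return min(mu_brightness(b, m, sigma), mu_density(d))
```

A dark, filament-shaped segment (brightness well below m, density below 1) then receives a shadow degree of 1.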

3.2.2. Vegetation

As mentioned earlier in this article, vegetation extraction is one of the most straightforward tasks in image classification. In this research, 2 parameters are used for vegetation detection: the NDVI and the NIR ratio, which are explained in this section.
NDVI: Normalized Difference Vegetation Index (NDVI) is a proper tool for vegetation extraction [30].
NIR Ratio: Because of high reflectivity of vegetation in the NIR region of the electromagnetic wave spectrum, the value of the NIR ratio, which is given by Equation (7), is higher in vegetated areas compared to other parts of the image [28].
NIR Ratio = NIR / (NIR + R + G + B) (7)
The values of the NDVI and the NIR ratio in vegetated areas differ slightly from one image to another. Generally, the NDVI value for vegetation is around 0.2 and the NIR ratio is around 0.3, depending on the density of vegetation. However, if a crisp threshold is selected to classify the image into vegetated and non-vegetated areas, there might be some vegetation segments in the image with NDVI and NIR ratio values slightly below the thresholds, which would be misclassified as non-vegetation. In order to reduce the error of omission, the presented fuzzy rule-based system offers a wider margin to include uncertain segments, giving them a second chance to be examined for classification.
Figure 5. NDVI or NIR ratio fuzzy function. In this study, for NDVI, T1 = 0.05 and T2 = 0.25; for the NIR ratio, T1 = 0.15 and T2 = 0.4.
The used membership function is presented in Figure 5 and the associated formula is given in Equation (8).
μ_N(x) = 1, for x ≥ T2;  μ_N(x) = 1 − (T2 − x)/(T2 − T1), for 0 ≤ T2 − x ≤ T2 − T1;  μ_N(x) = 0, otherwise (8)
The related fuzzy rule is:
  • If NIR ratio is high and NDVI is high then segment is Vegetation.
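The vegetation rule can be sketched in the same style. The ramp below follows Equation (8) with the T1/T2 values quoted for Figure 5, and the rule degree is the minimum of the two antecedents:

```python
def ramp_high(x, t1, t2):
    """Equation (8): membership for the "high" linguistic value;
    0 below t1, linear ramp on [t1, t2], 1 at and above t2."""
    if x >= t2:
        return 1.0
    if x >= t1:
        return 1.0 - (t2 - x) / (t2 - t1)
    return 0.0

def vegetation_degree(ndvi, nir_ratio):
    # thresholds from Figure 5: NDVI T1 = 0.05, T2 = 0.25;
    # NIR ratio T1 = 0.15, T2 = 0.4
    return min(ramp_high(ndvi, 0.05, 0.25), ramp_high(nir_ratio, 0.15, 0.4))
```

A segment with NDVI 0.3 and NIR ratio 0.5 receives full vegetation membership, while one with NDVI slightly below 0.25 still receives a non-zero degree instead of being cut off by a crisp threshold.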

3.2.3. Road

Roads are elongated objects with smooth surfaces having low variation in gray values. Therefore, criteria which capture elongation and gray-value variance can be used for defining roads. Classifying road segments without any ancillary data is a challenging issue. In this project, strict limitations are first used for road detection, so that only genuine road segments are classified as Road. Then, using spectral similarity with neighboring objects, other parts of the roads can also be correctly classified. In this study, 3 parameters, called I_cm, I_e, and standard deviation, are used for road detection, which are explained in this section.
I_cm and I_e: Equations (9) and (10) are used for defining elongated objects [6].
I_cm = 2√(π · Area(object)) / perimeter(object) (9)
I_e = Area(object) / [length(object)]² (10)
Having compared segments of different classes, one comes to understand that road segments have low values of I_cm and I_e, while buildings have high values. The exact ranges of I_cm and I_e associated with road segments depend on the image spatial resolution, road width, etc. However, a rough estimate for the GeoEye image used here is that I_cm for road segments is less than 0.5 and I_e is less than 0.1 (these numbers are slightly different for the QuickBird imagery). Although some other road segments have higher values of I_cm and I_e, this tight limitation decreases the risk of errors of commission; that is, it prevents non-road elongated segments from being misclassified as road. Later, using contextual information, those unclassified road segments will be properly assigned to the road class (see Section 3.2.5). The fuzzy function defined for I_cm and I_e is presented in Figure 6.
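A small numeric illustration of Equations (9) and (10), assuming the reconstruction of I_cm as the ratio of the equal-area circle's perimeter to the object's perimeter:

```python
import math

def i_cm(area, perimeter):
    """Equation (9): perimeter of the equal-area circle divided by the
    object perimeter; close to 1 for compact objects, low for elongated ones."""
    return 2.0 * math.sqrt(math.pi * area) / perimeter

def i_e(area, length):
    """Equation (10): area divided by squared length; low for elongated objects."""
    return area / length ** 2
```

At a 0.5 m GSD, a 2 m × 50 m road strip (area 400 px, perimeter 208 px, length 100 px) yields I_cm and I_e well below the 0.5 and 0.1 road limits, while a 10 m × 10 m building block (20 × 20 px) yields values well above them.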
Standard deviation (Std): Std shows the variation of pixel values within a specific segment [28]. Since road surfaces are smooth with low variation in gray levels, the lower the standard deviation of a segment, the higher its possibility of being assigned to the road class. The upper limit of the standard deviation for roads depends on the radiometric resolution of the image and the distribution of gray values in the image histogram; for example, in this project, T = 150 for the standard deviation of the GeoEye image (Figure 6).
Figure 6. Fuzzy function for I_cm, I_e, or standard deviation; all have the same shape, but with different T values.
Equation (11) presents the associated fuzzy set for I_cm, I_e, or standard deviation.
μ_I(x) = 1 − x/T, for 0 ≤ x ≤ T;  μ_I(x) = 0, otherwise (11)
Considering the above specifications for road, the produced fuzzy rule is:
  • If Icm is low and Ie is low and Std is low then segment is Road.
Using this rule, some narrow elongated features not belonging to the road class might also be misclassified as road. Therefore, in this study, after applying this rule, all the resulting road segments are merged, and narrow features are then removed using a width threshold. Considering that the minimum width of a road is 3 m and the spatial resolution of the image is 0.5 m, all road segments with a width of less than 6 pixels are unlikely to be roads, and they are removed from the class. This procedure improves the error of commission.
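The road rule plus the post-classification width check can be sketched as follows, using the T values quoted for the GeoEye scene (0.5 for I_cm, 0.1 for I_e, 150 for the standard deviation); the 6-pixel width cutoff corresponds to the 3 m minimum road width at 0.5 m resolution:

```python
def mu_low(x, t):
    """Equation (11): linearly decreasing membership, reaching 0 at T."""
    return 1.0 - x / t if 0 <= x <= t else 0.0

def road_degree(icm, ie, std):
    # degree of "I_cm is low AND I_e is low AND Std is low";
    # T values are those quoted for the GeoEye image (other scenes differ)
    return min(mu_low(icm, 0.5), mu_low(ie, 0.1), mu_low(std, 150.0))

def passes_width_check(width_px, min_width_px=6):
    # cleanup after merging: merged "road" pieces narrower than
    # 6 px (3 m at 0.5 m GSD) are dropped from the road class
    return width_px >= min_width_px
```

A compact square segment (high I_cm and I_e) gets a road degree of 0 regardless of how smooth its surface is.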

3.2.4. Building

All the previously mentioned object classes can be identified with the first level of segmentation. However, building detection needs further processing because of the particular specifications of buildings. For example, in pitched-roof buildings, each side of a roof is assigned to a different segment due to the dissimilar amounts of solar radiation received on each side. Therefore, a second level of segmentation must be applied to unify those segments, which in this study requires training data. Afterwards, using building specifications, building segments are classified.

Second Level Segmentation

[31] presented a supervised method for optimal multi-resolution image segmentation. The method, called FbSP optimizer (Fuzzy-based Segmentation Parameter optimizer), offers a fuzzy-based process that generates optimized scale, shape, and compactness parameters for multi-resolution segmentation. The process is trained using the segments belonging to a specific training building; information about the training segments, such as texture, brightness, stability, rectangular fit, and compactness, is used to train the FbSP optimizer. Then, new scale, shape, and compactness values are generated for the second level of segmentation. Using the optimized parameters for image re-segmentation, precise building borders are delineated; the output segments have borders fairly fitting the real ones. The drawback of this method is the requirement of training data; therefore, it is only used for building classification, owing to its complicated nature. The full description of the FbSP optimizer can be found in [32]. After finding the optimized segments for buildings, the next step is to detect building segments among the unclassified ones.

Building Classification

The specification of buildings in VHRSI is that they are typically rectangular (mostly square) objects elevated above the ground. These specifications are used for building detection in this part. In this study, 4 parameters are used for building detection: Rectangular Fit, Elliptic Fit, Area, and Shadow Neighborhood, which are explained in this section.
Rectangular Fit and Elliptic Fit: The Rectangular Fit parameter describes how well an image object fits into a rectangle of similar size and proportions, while the Elliptic Fit feature describes how well it fits into an ellipse of similar size and proportions. Both range from 0 to 1, with 0 meaning no fit and 1 meaning a perfect fit [28]. Having assessed the different building-related image segments in this project, it is inferred that buildings bear high similarity to rectangular shapes (Figure 10a shows some typical buildings). Typically, buildings are square shaped, with quite high Rectangular Fit and fairly high Elliptic Fit (not as high as circles). Therefore, the Rectangular Fit and Elliptic Fit parameters can assist in building identification. If Elliptic Fit were omitted from the process, elongated rectangular features would also be misclassified as buildings; thus, both parameters must be used simultaneously. Figure 7 shows the high fuzzy membership function for Rectangular Fit as well as the fairly high function for Elliptic Fit; the high Elliptic Fit function is used for roundabout detection in Section 3.2.5.
Figure 7. Fuzzy function for: (a) Rectangular Fit; (b) Elliptic Fit.
The fuzzy set related to the high linguistic value of Rectangular Fit is presented in Equation (12).
μ_R(x) = 1 − (1 − x)/(1 − T), for 0 ≤ x − T ≤ 1 − T;  μ_R(x) = 0, otherwise (12)
In addition, the 2 fuzzy sets of Elliptic Fit, for the high and fairly high linguistic values, are presented in Equations (13) and (14).
fairly high elliptic fit:
μ_EF(x) = 1 − (T2 − x)/(T2 − T1), for 0 ≤ T2 − x ≤ T2 − T1;  μ_EF(x) = 1 − (x − T2)/(1 − T2), for 0 ≤ x − T2 ≤ 1 − T2;  μ_EF(x) = 0, otherwise (13)
high elliptic fit:
μ_EF(x) = 1 − (1 − x)/(1 − T2), for 0 ≤ x − T2 ≤ 1 − T2;  μ_EF(x) = 0, otherwise (14)
Shadow Neighborhood: Using just Rectangular Fit and Elliptic Fit for building detection, there might still be parcels of bare land or other classes with approximately square shapes misclassified as buildings, reducing the classification accuracy. Therefore, another parameter is needed to separate buildings from them. Here, shadows are of great importance: considering the height of a building and the azimuth angle of the sun at the time of exposure, a shadow with a specific length and direction should be visible beside buildings.
Figure 8. Direction of shadow in satellite images. Az is the azimuth angle of the sun. Depending on the orientation of the building, the shadow is formed on side A (western side), side B (southern side), or both.
The direction of the shadow depends on the azimuth angle of the sun at the time of image capture. Figure 8 shows how the sun azimuth angle specifies the direction of the shadow. Depending on the orientation of the building, the shadow is formed on side A (in Figure 8, the western side), side B (the southern side), or both. Therefore, whatever the sun azimuth angle is, the shadow of a building is formed in the opposite direction (in this research, the southern and/or western direction for the GeoEye image, and the northern and/or eastern direction for the QuickBird image).
Area: Besides shape and a shadow neighborhood in the proper direction, buildings have another specification: their area. In order not to misclassify other rectangular features, such as vehicles, as buildings, an area limitation must also be considered. The smallest building size in this study is 3 m × 4 m and the biggest is 25 m × 20 m. Given the image spatial resolution, the area limit for building segments is around 50 to 2000 pixels, within which areas from 100 to 1500 pixels are the most probable range (membership degree equal to 1) and the remaining regions are the fuzzy areas. In Figure 9, the medium fuzzy function is used for delineating buildings; the low fuzzy function will be used for vehicle detection in Section 3.2.5.
Figure 9. Fuzzy functions for area; the medium fuzzy function is used for building detection and the low fuzzy function for vehicle detection.
Equations (15) and (16) relate to the low and medium linguistic values for area:
low Area:  μ(x) = 1 − |T1 − x|/T1, for 0 ≤ |T1 − x| ≤ T1;  μ(x) = 0, otherwise (15)
medium Area:  μ(x) = min{2 − 2|a − x|/λ, 1}, for a − λ ≤ x ≤ a + λ;  μ(x) = 0, otherwise (16)
where a = (T2 + T3)/2 − T1/2 and λ = T1; in this project, T1, T2, and T3 equal 50, 100, and 1500 pixels, respectively.
In this research, rectangular shapes with medium area which neighbor shadows in the direction specified by the sun azimuth angle are classified as buildings. The resulting fuzzy rule for building detection is:
  • If Rectangular fit is high and elliptic fit is fairly high and Area is medium and southern/western-neighbor is shadow, then segment is Building.
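A sketch of the building rule. The Rectangular Fit and Elliptic Fit T values below are assumed placeholders (the paper does not quote them), the area membership follows the pixel ranges stated in the text (plateau 100 to 1500 px, support 50 to 2000 px) rather than the symmetric trapezoid of Equation (16), and the shadow-neighborhood antecedent is treated as a crisp flag:

```python
def mu_rect_high(x, t=0.7):
    """Equation (12) with an assumed placeholder T = 0.7."""
    return (x - t) / (1.0 - t) if t <= x <= 1.0 else 0.0

def mu_ell_fairly_high(x, t1=0.5, t2=0.8):
    """Equation (13): triangular, peaking at T2 (placeholder T values)."""
    if t1 <= x <= t2:
        return 1.0 - (t2 - x) / (t2 - t1)
    if t2 < x <= 1.0:
        return 1.0 - (x - t2) / (1.0 - t2)
    return 0.0

def mu_area_medium(a):
    """Medium-area membership as plotted in Figure 9: plateau on
    100-1500 px, fuzzy margins down to 50 px and up to 2000 px."""
    if 100 <= a <= 1500:
        return 1.0
    if 50 <= a < 100:
        return (a - 50) / 50.0
    if 1500 < a <= 2000:
        return (2000 - a) / 500.0
    return 0.0

def building_degree(rect_fit, ell_fit, area_px, shadow_on_sw_side):
    # contextual antecedent: a Shadow neighbor must lie on the
    # southern/western side (GeoEye scene); treated here as crisp
    if not shadow_on_sw_side:
        return 0.0
    return min(mu_rect_high(rect_fit), mu_ell_fairly_high(ell_fit),
               mu_area_medium(area_px))
```

A compact 800-pixel segment with high Rectangular Fit and a shadow to its south-west receives a high building degree; without the shadow, its degree is 0 no matter how rectangular it is.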

3.2.5. Contextual Check

Having applied all the above-mentioned rules to the image, some unclassified segments might still remain. The unclassified features are tested to see whether they resemble specific classes. For example, since strict rules are applied for road detection, some parts of the roads remain unclassified; therefore, among the unclassified objects, those which are in the neighborhood of roads and have spectral similarity to roads are classified as Road. Also, some parts of the buildings might remain unclassified, and a similar rule is applied to them for proper classification. The fuzzy rules used for further building or road classification are:
  • If spectral similarity to road is high and relative border to road is medium or relative border to road is high, then segment is Road.
  • If spectral similarity to building is high and relative border to building is medium or relative border to building is high, then segment is Building.
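These rules mix AND and OR. Taking min for AND and max for OR (a standard choice; the text does not name its operators), the contextual road rule can be sketched as:

```python
def road_by_context(similarity_high, border_medium, border_high):
    # "relative border to road is medium OR high" -> max of the two degrees,
    # combined with the spectral-similarity degree through a min (fuzzy AND).
    return min(similarity_high, max(border_medium, border_high))

# Hypothetical degrees: strong spectral similarity, border mostly "high".
degree = road_by_context(0.85, 0.3, 0.6)
```

The building version of the rule is identical in structure, with the similarity and border degrees measured against neighboring building segments instead.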
Apart from the objects mentioned above, there are some other image objects which are not of interest in this study. For example, vehicles are irrelevant to this project, even though they might be the topic of other studies; thus, vehicles are removed using contextual information. Courts and roundabouts must be assigned to the road class as well. In the following, some of these contextual features and the related fuzzy rules are described.
Vehicle removal: Considering the spatial resolution of the satellite images used, which is around 50 cm, and the size of a typical vehicle, approximately 1.5 m × 2.5 m, vehicles appear as rectangular objects in the satellite images with an area of around 15 pixels. The largest vehicle counted in this study is a truck of size 2.5 m × 10 m. Therefore, objects with an area between 15 and 100 pixels are likely to be vehicles. In addition, vehicles are found on roads; therefore, they must be connected to a road segment, which means their relative border with roads is greater than 50%.
Therefore, the associated fuzzy rule is:
  • If Area is small and Rectangular fit is high and relative border to road is high, then segment is Vehicle.
Since we do not deal with vehicles in this study, detected vehicle segments are assigned to the road class.
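The pixel figures above follow directly from the 50 cm ground sampling distance; a quick check of the arithmetic:

```python
GSD = 0.5  # ground sampling distance, metres per pixel

def area_in_pixels(width_m, length_m, gsd=GSD):
    """Convert an object footprint in metres into an approximate pixel count."""
    return (width_m / gsd) * (length_m / gsd)

car_pixels = area_in_pixels(1.5, 2.5)     # typical car  -> 15 pixels
truck_pixels = area_in_pixels(2.5, 10.0)  # largest counted vehicle -> 100 pixels
```

This reproduces the 15-to-100-pixel range used as the "small" area band for vehicle candidates.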
Courts and Roundabouts: Image segments corresponding to courts and roundabouts fail to be classified as roads because of their round shapes or because their spectral reflectance differs from that of roads. In this study, the unclassified segments are searched for circular or semi-circular features which are in the neighborhood of roads (covering roundabouts) or have spectral reflectance similar to roads (covering courts); these features are then also classified as roads. The related rules are:
  • If Elliptic Fit is high and relative border to road is high, then segment is Road.
  • If Elliptic Fit is high and spectral similarity to road is high, then segment is Road.

3.2.6. Bare Land

Distinguishing parking lots from roads is a complicated issue which cannot be solved without ancillary data [2]. Here, the study areas were selected such that they do not contain vast parking lots; only bare land around the houses can be detected. The areas also contain no water bodies; therefore, the remaining unclassified objects are assigned to the bare land class.

4. Results

In order to assess the accuracy of the presented fuzzy classification method, a check project is defined for each dataset, in which the same rules are used but with crisp thresholds. In other words, instead of using fuzzy membership functions for classification, crisp thresholds are used. For example, the fuzzy rule used for vegetation detection is:
  • If NIR ratio is high and NDVI is high then segment is Vegetation.
In the check project, the threshold values are calculated as the middle point of the fuzzy area. As explained earlier in Section 3.2.2, T1 and T2 specify the fuzzy area for vegetation. In the check project, the crisp threshold is taken as the middle point between T1 and T2, which is $\frac{T_1 + T_2}{2}$. Thus, segments with an NIR ratio greater than 0.275 and NDVI greater than 0.15 are assigned to the vegetation class. The same procedure is applied to the other classes as well.
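The check project's crisp decision can be sketched as follows. Note the T1/T2 pairs below are our back-calculation from the 0.275 and 0.15 midpoints quoted above (the actual vegetation thresholds are given in Section 3.2.2), so treat them as illustrative:

```python
def crisp_threshold(t1, t2):
    """Check-project threshold: midpoint of the fuzzy transition zone [t1, t2]."""
    return (t1 + t2) / 2.0

def crisp_vegetation(nir_ratio, ndvi):
    # Crisp (Boolean) version of the vegetation rule; illustrative T1/T2 pairs
    # chosen to reproduce the 0.275 (NIR ratio) and 0.15 (NDVI) midpoints.
    return nir_ratio > crisp_threshold(0.25, 0.30) and ndvi > crisp_threshold(0.10, 0.20)
```

A segment just inside the fuzzy zone (e.g. NIR ratio 0.26) is rejected outright by the crisp rule, whereas the fuzzy rule would still give it partial membership; this is exactly the behavioral difference the check project is designed to expose.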
Figure 10 shows some parts of the original images, images classified using crisp thresholds (check project) and the images classified using the presented fuzzy rule-based system for both datasets. As can be seen, the images classified using the presented fuzzy rule-based system (Figure 10b,e,h,k) are more successful in terms of defining image objects compared to the check projects (Figure 10c,f,i,l).
Figure 10. (a),(d) original image of GeoEye; (b),(e) GeoEye image classified using crisp borders; (c),(f) GeoEye image classified using fuzzy functions. (g),(j) original image of QB; (h),(k) QB image classified using crisp borders; (i),(l) QB image classified using fuzzy functions. In this figure red, green, magenta, blue, and yellow colors stand for shadow, vegetation, building, road, and bare land classes, respectively.
In order to evaluate the quality of the results of the presented method, a random part (400 pixels × 500 pixels) of each image is classified manually using human expertise. Then, using the manual classification result as control information, the confusion matrix is generated for both the presented method as well as the check project. Table 1 and Table 2 show the related confusion matrices.
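A confusion matrix of this kind can be tallied directly from the reference (manually classified) and predicted label maps; a minimal sketch with toy labels, not the paper's data:

```python
from collections import Counter

CLASSES = ["Building", "Road", "Shadow", "Vegetation", "Bare Land"]

def confusion_matrix(reference, predicted, classes=CLASSES):
    """Rows = predicted (classified) class, columns = reference class; counts in pixels."""
    counts = Counter(zip(predicted, reference))
    return [[counts[(p, r)] for r in classes] for p in classes]

# Toy example (the real matrices are tallied over the 400 x 500 pixel check areas):
ref = ["Road", "Road", "Building", "Vegetation"]
pred = ["Road", "Building", "Building", "Vegetation"]
cm = confusion_matrix(ref, pred)
```

With rows as classified output, each row sum divided into its diagonal entry gives the User's Accuracy, and each column sum the Producer's Accuracy, matching the layout of the tables.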
Table 1. Confusion Matrix for the presented method results (Geo-Eye Data).
                     Building   Road    Shadow   Vegetation   Bare Land   Sum      User's Accuracy
Bare Land            18,339     849     400      1,378        47,322      68,288   0.69
Producer's Accuracy  0.70       0.84    0.83     0.90         0.91
Table 2. Confusion Matrix for the check project (Geo-Eye Data).
                     Building   Road     Shadow   Vegetation   Bare Land   Sum      User's Accuracy
Bare Land            31,524     10,594   738      1,480        48,753      93,089   0.52
Producer's Accuracy  0.52       0.57     0.60     0.75         0.93
Table 3. Confusion Matrix for the presented method results (QB data).
                     Building   Road    Shadow   Vegetation   Bare Land   Sum      User's Accuracy
Bare Land            1,414      3,246   731      2,121        17,657      25,168   0.7
Producer's Accuracy  0.78       0.8     0.67     0.98         0.75
Table 4. Confusion Matrix for the check project (QB data).
                     Building   Road    Shadow   Vegetation   Bare Land   Sum      User's Accuracy
Bare Land            3,333      7,148   821      56,896       13,775      81,974   0.17
Producer's Accuracy  0.66       0.73    0.6      0.55         0.59
The overall accuracies and Kappa coefficients for both of the methods are also presented in Table 5.
Table 5. Overall Accuracy and Kappa Coefficient of the presented method and the check project.
                    Fuzzy Method   Crisp Method   Fuzzy Method   Crisp Method
                    (for GeoEye)   (for GeoEye)   (for QB)       (for QB)
Overall Accuracy    0.82           0.68           0.90           0.42
Kappa Coefficient   0.76           0.58           0.82           0.59
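Overall accuracy and the Kappa coefficient follow from any confusion matrix in the usual way; a sketch with a toy 2 × 2 matrix (not the paper's data):

```python
def overall_accuracy(cm):
    """Fraction of pixels on the diagonal of the confusion matrix."""
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(len(cm)))
    return correct / total

def kappa(cm):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    total = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(len(cm))) / total            # observed
    pe = sum(sum(cm[i]) * sum(row[i] for row in cm)                # expected by
             for i in range(len(cm))) / total ** 2                 # chance
    return (po - pe) / (1 - pe)

cm = [[45, 5], [5, 45]]  # toy confusion matrix: 90% agreement, balanced classes
```

For this toy matrix the overall accuracy is 0.9 while kappa is 0.8, illustrating why kappa sits below overall accuracy in Table 5: part of the raw agreement is what chance alone would produce.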

5. Discussion

As can be seen from Table 1 to Table 5, the overall accuracies as well as the Kappa coefficients of the presented fuzzy method stand higher than those of the crisp method; in addition, the majority of the User's and Producer's Accuracies of the presented method are higher than those of the crisp check method. This means that using a fuzzy rule-based system improved the accuracy of the classification.
Considering the User's/Producer's Accuracies of the presented method (Table 1 and Table 3), almost all segments are properly classified (with accuracies better than 80%) except those belonging to the building and bare land classes. The high User's Accuracy for buildings shows that the criteria for building extraction are adequate; meanwhile, the low Producer's Accuracy for the building class indicates that some buildings are not covered by this rule and are mistakenly labeled as another class. On the other hand, the User's Accuracy for the bare land class is around 70% for both datasets, which means plenty of segments are mistakenly committed to the bare land class. Comparing the related numbers, it can be inferred that some buildings are not properly extracted; they therefore remain unclassified, and in the last step of the classification (Section 3.2.6), in which unclassified segments are assigned to the bare land class, they are falsely classified as bare land, reducing the accuracies of both the building and bare land classes. The same contamination happens for road segments: those missed in road classification are transferred to the bare land class. In this study, building detection also depends on shadow detection; therefore, if the method fails to detect a shadow, the associated building will also be missed. If ancillary data for building extraction, such as a building map or road map, were added to this process, the accuracy of the classification would be remarkably improved.
The presented method performed quite well in vegetation detection, whereas the crisp method failed to classify a large number of vegetation segments.
Fredericton is a very green town; therefore, most of the QB image is vegetated. Since the presented method is very good at detecting vegetation, the overall accuracy and kappa coefficient for the QB image classification benefit from this and turn out higher than those of the GeoEye image.

6. Conclusion

Although the accuracies for building and bare land detection in this project are not as high as those of the other classes, they are still higher than those of the check project. In addition, the presented method succeeded in detecting roads, shadows, and vegetation; referring to the confusion matrices, the accuracies for these three classes are fairly promising. The crisp thresholds, however, did not produce comparably high accuracies.
Keeping in mind that, with the current knowledge of object-based methods, building detection is extremely challenging without ancillary data, and that offering a comprehensive image classification system applicable to every kind of image is not possible, the presented method can serve as a template for other fuzzy-based image classification projects. Of course, if other ancillary data are available, the produced results will be significantly better. Therefore, in similar projects, the presented specifications for each class can be reused, and any additional information will help to improve the accuracy of the classification.
In addition, the presented thresholds may vary slightly in other projects, depending on the image resolution (spatial, spectral, or radiometric), area coverage, and image acquisition time.
In this study, we aimed to present a step-by-step classification method using a fuzzy inference system with specific parameters gathered from the related literature. The presented method allows other useful parameters to be added to increase the classification accuracy in future projects.


Acknowledgments

This research was funded by the Natural Sciences and Engineering Research Council (NSERC) of Canada, and we appreciate their support. We also acknowledge the City of Fredericton for providing the QuickBird imagery, and the Space Agency for providing the GeoEye satellite images and publishing them freely on the ISPRS (International Society for Photogrammetry and Remote Sensing) website.

Conflicts of Interest

The authors declare no conflict of interest.


  1. Pacifici, F.; Chini, M.; Emery, W.J. A neural network approach using multi-scale textural metrics from very high-resolution panchromatic imagery for urban land-use classification. Remote Sens. Environ. 2009, 113, 1276–1292.
  2. Salehi, B.; Zhang, Y.; Zhong, M.; Dey, V. Object-based classification of urban areas using VHR imagery and height points ancillary data. Remote Sens. 2012, 4, 2256–2276.
  3. Bauer, T.; Steinnocher, K. Per-parcel land use classification in urban areas applying a rule-based technique. GeoBIT/GIS 2001, 6, 24–27.
  4. Hofmann, P. Detecting Informal Settlements from IKONOS Image Data Using Methods of Object Oriented Image Analysis—An Example from Cape Town (South Africa). In Remote Sensing of Urban Areas/Fernerkundung in urbanen Räumen; Jürgens, V., Ed.; Regensburg, Germany, 2001; pp. 41–42.
  5. Rizvi, I.A.; Mohan, B.K. Object-based image analysis of high-resolution satellite images using modified cloud basis function neural network and probabilistic relaxation labeling process. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4815–4820.
  6. Bouziani, M.; Goita, K.; He, D.C. Rule-based classification of a very high resolution image in an urban environment using multispectral segmentation guided by cartographic data. IEEE Trans. Geosci. Remote Sens. 2010, 48, 3198–3211.
  7. Moskal, M.; Styers, D.M.; Halabisky, M. Monitoring urban tree cover using object-based image analysis and public domain remotely sensed data. Remote Sens. 2011, 3, 2243–2262.
  8. Thomas, N.; Hendrix, C.; Congalton, R.G. A comparison of urban mapping methods using high-resolution digital imagery. Photogramm. Eng. Remote Sens. 2003, 69, 963–972.
  9. Watanachaturaporn, P.; Arora, M.K.; Varshney, P.K. Multisource classification using support vector machines: An empirical comparison with decision tree and neural network classifiers. Photogramm. Eng. Remote Sens. 2008, 74, 239–246.
  10. Salehi, B.; Zhang, Y.; Zhong, M.; Dey, V. A review of the effectiveness of spatial information used in urban land cover classification of VHR imagery. Int. J. Geo Inf. 2012, 8, 35–51.
  11. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353.
  12. Wang, F. Fuzzy supervised classification of remote sensing images. IEEE Trans. Geosci. Remote Sens. 1990, 28, 194–201.
  13. Nedeljkovic, I. Image classification based on fuzzy logic. Remote Sens. Spat. Inf. Sci. 2006, 34, 1–6.
  14. Lizarazo, I.; Barros, J. Fuzzy image segmentation for urban land-cover classification. Photogramm. Eng. Remote Sens. 2010, 76, 151–162.
  15. Lizarazo, I.; Elsner, P. Fuzzy segmentation for object-based image classification. Int. J. Remote Sens. 2009, 30, 1643–1649.
  16. Benz, U.C.; Hofmann, P.; Willhauck, G.; Lingenfelder, I.; Heynen, M. Multi-resolution, object-oriented fuzzy analysis of remote sensing data for GIS-ready information. J. Photogramm. Remote Sens. 2004, 58, 239–258.
  17. Shackelford, A.K.; Davis, C.H. A combined fuzzy pixel-based and object-based approach for classification of high-resolution multispectral data over urban areas. IEEE Trans. Geosci. Remote Sens. 2003, 41, 2354–2363.
  18. Shackelford, A.K.; Davis, C.H. A hierarchical fuzzy classification approach for high-resolution multispectral data over urban areas. IEEE Trans. Geosci. Remote Sens. 2003, 41, 1920–1932.
  19. Saberi, I.; He, D. Automatic fuzzy object-based analysis of VHSR images for urban objects extraction. J. Photogramm. Remote Sens. 2013, 79, 171–184.
  20. Digital Globe. Available online: (accessed on 7 November 2013).
  21. Walker, J.S.; Blaschke, T. Object-based land-cover classification for the Phoenix metropolitan area: Optimization vs. transportability. Int. J. Remote Sens. 2008, 29, 2021–2040.
  22. Zhang, Y. Highlight article: Understanding image fusion. Photogramm. Eng. Remote Sens. 2004, 70, 657–661.
  23. Jinmei, L.; Guoyu, W. A Refined Quadtree-Based Automatic Classification Method for Remote Sensing Image. In Proceedings of the 2011 International Conference on Computer Science and Network Technology (ICCSNT), Harbin, China, 24–26 December 2011; pp. 1703–1706.
  24. Zhang, C.; Zhao, Y.; Zhang, D.; Zhao, N. Application and Evaluation of Object-Oriented Technology in High-Resolution Remote Sensing Image Classification. In Proceedings of the 2011 International Conference on Control, Automation and Systems Engineering (CASE), Singapore, 30–31 July 2011; pp. 1–4.
  25. Jabari, S. Assessment of Intelligent Algorithms in Satellite Images for Change Detection due to Earthquake. Master's Thesis, University of Tehran, Tehran, Iran, 2009.
  26. Jang, J.S.R.; Sun, C.T.; Mizutani, E. Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence, 1st ed.; Prentice Hall: Upper Saddle River, NJ, USA, 1997; pp. 21–30.
  27. Shani, A. Landsat Image Classification Using Fuzzy Sets Rule Base Theory. Master's Thesis, San Jose State University, San Jose, CA, USA, 2006.
  28. eCognition Developer (8.64.0) User Guide; Trimble Germany GmbH: Munich, Germany, 2010.
  29. Richards, J.A. Image Classification Methods. In Remote Sensing Digital Image Analysis, 3rd ed.; Springer: Berlin, Germany, 1999.
  30. USGS. Available online: (accessed on 7 November 2013).
  31. Tong, H.; Maxwell, T.; Zhang, Y.; Dey, V. A supervised and fuzzy-based approach to determine optimal multi-resolution image segmentation parameters. Photogramm. Eng. Remote Sens. 2012, 78, 1029–1044.
  32. Zhang, Y.; Maxwell, T.; Tong, H.; Dey, V. Development of a Supervised Software Tool for Automated Determination of Optimal Segmentation Parameters for eCognition. In Proceedings of the ISPRS TC VII Symposium—100 Years ISPRS, Vienna, Austria, 5–7 July 2010.