Article

Exploring Land Use and Land Cover of Geotagged Social-Sensing Images Using Naive Bayes Classifier

1 Remote Sensing and GIS, School of Engineering and Technology, Asian Institute of Technology, P.O. Box 4, Klong Luang, Pathumthani 12120, Thailand
2 Computer Science and Information Management, School of Engineering and Technology, Asian Institute of Technology, P.O. Box 4, Klong Luang, Pathumthani 12120, Thailand
* Author to whom correspondence should be addressed.
Sustainability 2016, 8(9), 921; https://doi.org/10.3390/su8090921
Submission received: 16 July 2016 / Revised: 29 August 2016 / Accepted: 5 September 2016 / Published: 9 September 2016
(This article belongs to the Special Issue Sustainable Ecosystems and Society in the Context of Big and New Data)

Abstract

Online social media crowdsourced photos contain a vast amount of visual information about the physical properties and characteristics of the earth’s surface. Flickr is an important online social media platform for users seeking this information. Each day, users generate crowdsourced geotagged digital imagery containing an immense amount of information. In this paper, geotagged Flickr images are used for automatic extraction of low-level land use/land cover (LULC) features. The proposed method uses a naive Bayes classifier with color, shape, and color index descriptors. The classified images are mapped using a majority filtering approach. The classifier performance in overall accuracy, kappa coefficient, precision, recall, and f-measure was 87.94%, 82.89%, 88.20%, 87.90%, and 88%, respectively. Labeled crowdsourced images were filtered into 30 m × 30 m spatial tiles using the majority voting method to reduce geolocation uncertainty in the crowdsourced data. These tile datasets were used as training and validation samples to classify Landsat TM5 images. The supervised maximum likelihood method was used for the LULC classification. The results show that the geotagged Flickr images can classify LULC types with reasonable accuracy and that the proposed approach improves LULC classification efficiency if a sufficient spatial distribution of crowdsourced data exists.

1. Introduction

The availability of volunteered geographic information (VGI) from social media has exponentially increased over the last few years [1]. Since the emergence of Web 2.0, an increasing number of users have uploaded georeferenced photographs on social media websites, such as Flickr, Picasa, Webshoot, Panoramio, and Geograph [2,3,4,5,6]. Owing to advancements in camera and mobile technology, social media photos contain a vast amount of information, ranging from non-spatial information, such as tags, titles, and descriptions, to spatial and temporal information, such as the locations and times in which the photos were taken. Georeferenced Flickr images [6,7,8], for example, can be used in various applications, such as navigation, natural disaster response (e.g., wildfires, earthquakes, and floods), disease outbreak response, crisis management, and other emergency responses [9,10,11]. Collecting, searching, and analyzing these types of photo repositories can provide information of social and practical importance.
Land cover maps of the earth represent both human-made and natural characteristics of the earth’s surface. Various scientific land cover products, covering various spatial and temporal resolutions, have been created using remotely sensed imagery, such as the Moderate Resolution Imaging Spectroradiometer (MODIS), Global Land Cover SHARE (GLC-SHARE), Global Land Cover 2000 (GLC2000), International Geosphere-Biosphere Programme (IGBP), and GlobCover products [12]. The classification accuracies and validation of these regional and global products are of great concern for the scientific community because of the lack of training and validation data. Geotagged images, which have become increasingly prevalent on social media, can consequently be used as supporting data for such scientific analysis [13].
Performing feature extraction of crowdsourced information is now possible owing to the rapid development of information technology. Several researchers have demonstrated that social-sensing image features extracted from photographs (such as on Flickr) include visual descriptors of color, edges, and color indices that can be used for land cover mapping [14,15].
The critical issue with VGI is the need for a means to extract quality information from crowdsourced data and to determine its reliability against other LULC products [13]. To create meaningful information from VGI, a rigorous scientific experiment is required that includes sampling, training, validation, and suitable classification algorithms. Some studies have reviewed crowdsourced geographic information in these terms; nevertheless, few examples exist of these techniques being applied to land cover classification [5].
With the above considerations, this paper explores crowdsourced LULC types that share certain image characteristics with the major LULC types, including urban areas, forest, agricultural areas, water bodies, and grassland. Crowdsourced LULC maps can be linked to satellite remote-sensing data for creating LULC products [16]. The technical challenge is developing a method to convert visual features from crowdsourced data (e.g., Flickr images) into LULC types while acquiring a sufficient number of quality references for use in remote sensing applications.
The objective of this study is thus to automatically extract visual LULC descriptions from online photos and to estimate the probability distributions over LULC types for a particular area of interest. We herein demonstrate the: (1) automatic extraction of geotagged social-sensing images for visual LULC descriptions from Flickr; (2) classification of Flickr images into major LULC types (agricultural, forest, grassland, urban structures, and water bodies) using a naive Bayes classifier model; and (3) use of majority voting to reduce the uncertainty inherent in estimating LULC types from crowdsourced images.

2. Study Area and Dataset

2.1. Crowdsourced Data

For the study dataset, we acquired and downloaded more than 1,000,000 geotagged images from Flickr obtained in Sapporo City, Japan, using the Flickr Application Programming Interface (API). Sapporo City covers a total area of 1121 km2. Flickr contains a large volume of images distributed over Sapporo; Figure 1 shows the Flickr image density per square kilometer. The Flickr metadata for each image contain useful information, including the title, description, tags, author, GPS location, and the date and time when the image was taken and uploaded. From the available dataset, we selected all images obtained between 1 January and 31 December 2009. The Flickr metadata, such as tags, date taken, image location, and image characteristics, were used to produce a filtered list of images.

2.2. Landsat TM5 Data

In this study, we chose Landsat TM5 imagery for integration with the crowdsourced data. Landsat TM5 imagery is freely available and is one of the most commonly used satellite image sources for regional and global LULC applications. The Landsat TM5 image used in this study was obtained from the Land Processes Distributed Active Archive Center (LPDAAC) [17], scene path/row 107/030. The thermal band (band 6) was excluded from the analysis. The image is dated 28 May 2009 and has a resolution of 30 m × 30 m in the WGS84 geographic coordinate reference system (World Geodetic System 1984) covering Sapporo City. The image was converted to top-of-atmosphere (TOA) reflectance and then atmospherically corrected using a dark pixel subtraction method.
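The paper does not describe its preprocessing implementation, but the dark pixel subtraction step can be illustrated with a short sketch. The following Python snippet is a minimal illustration only; the band arrays and the no-data value are assumptions, and the TOA conversion is taken as already done.

```python
import numpy as np

def dark_pixel_subtraction(toa_band, nodata=0.0):
    """Subtract the darkest valid pixel value from a TOA reflectance band.

    The minimum valid reflectance is taken as an estimate of the additive
    atmospheric (haze) contribution and removed from every pixel.
    """
    valid = toa_band[toa_band > nodata]            # ignore fill/no-data pixels
    dark_value = valid.min()                       # darkest observed pixel
    return np.clip(toa_band - dark_value, 0.0, None)

# Example: corrected = {name: dark_pixel_subtraction(b) for name, b in toa_bands.items()}
# where toa_bands holds TOA reflectance arrays for bands 1-5 and 7 (band 6 excluded).
```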

3. Methodology

The aim of this research was to extract a precise and accurate LULC type for each geotagged image that can be used for practical LULC mapping. The image-based results are then used to classify the LULC of each tile in the map. Figure 2 presents a schematic of our approach. We first perform low-level image feature extraction to characterize the visual appearance of an image. These features include color, edge content, and color index descriptors. Low-level feature extraction is followed by classification. Then, the image-based results are mapped to geospatial locations. In this section, we provide details of our pipeline, including low-level feature extraction, naive Bayes classification, and majority voting for tiled (30 m × 30 m Landsat-scale pixel) LULC mapping.

3.1. Automatic Low-Level Image Feature Extraction from Flickr Data

Image analysis is a mathematical process that extracts, characterizes, and interprets information contained in the digital pixel elements of photographic images. Examples include finding shapes, counting objects, identifying colors, and measuring object properties [18,19]. In this study, we develop visual feature extractors for land cover extraction. We use color histograms to summarize the distribution of colors in an image, gradient orientation maps to indicate shape [14], and color indices to characterize vegetation [20]. Finally, we combine these image features with statistical analysis in the form of a Bayes classifier [19,21].
In the first step, the image is segmented to remove prospective sky regions. Then, the low-level color, edge, and vegetation features are calculated over the probable non-sky regions. In the next step, we build a naive Bayes classifier to perform maximum likelihood estimation of the LULC type for a given image. We use a majority filtering method to obtain the most likely LULC category for each tile on the map.

3.1.1. Image Segmentation

Image segmentation is the process of sub-dividing an image into regions, parts, or objects, where each part or object is homogeneous, so that each of the resulting regions in the image can be separately analyzed [22,23]. Background or foreground segmentation can use different techniques, such as image-based thresholding, edge-based segmentation, and color-based segmentation. The main purpose of this step is to automatically separate the subject from the sky and other irrelevant features [22].
Accordingly, we employ Otsu thresholding to separate the foreground and background into two non-overlapping binary sets of color pixels. The Otsu method calculates a probability distribution over pixel intensities and then finds the optimal threshold (T) separating the two assumed intensity classes by minimizing the variance within each class and maximizing the separation of the two classes [24,25,26,27]. The result is a binary mask, F, indicating the two classes. The connected components of this binary mask are considered foreground segments.
F(x, y) = \begin{cases} 1 & \text{if } I(x, y) < T, \\ 0 & \text{otherwise}, \end{cases}    (1)
where T is the threshold and I(x, y) is the intensity of the input pixel at location (x, y).
Pixels with intensities less than T are classified as foreground, whereas pixels with intensities greater than or equal to T are classified as sky/background according to Equation (1) [28]. RGB channel features are obtained by multiplying the original RGB image with the binary foreground mask. Images containing faces are filtered out before further feature extraction using a face detection approach [14]. Images of animals or other objects are additionally treated as noise or misclassified images and handled by the majority voting method described in Section 3.5.
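As a concrete illustration of this segmentation step, the sketch below applies Otsu thresholding to the grayscale intensity image and keeps only the darker (foreground) pixels, following Equation (1). OpenCV is used here purely for illustration; the paper does not specify its implementation.

```python
import cv2
import numpy as np

def segment_foreground(rgb_image):
    """Zero out probable sky/background pixels using Otsu's threshold T.

    Pixels with intensity below T are treated as foreground (F = 1),
    as in Equation (1); the rest are treated as sky/background.
    """
    gray = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2GRAY)
    # Otsu chooses T by minimizing the within-class intensity variance.
    T, _ = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    foreground_mask = (gray < T).astype(np.uint8)            # F(x, y)
    masked_rgb = rgb_image * foreground_mask[:, :, None]     # keep foreground only
    return masked_rgb, foreground_mask
```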

3.1.2. Feature Extraction

Content-based image retrieval (CBIR) is the application of computer vision techniques to image retrieval problems [29]. We adapt some of the features typically used in CBIR to perform automatic land cover classification. We automatically compute image features from a large database (Flickr). We extract RGB histograms, edge orientation maps, and vegetation indices (color indices) as our features.
(a)
RGB histogram: Color features are among the most widely used visual features in CBIR. Figure 3 summarizes our image segmentation and RGB feature extraction approach. We assume that the input is a three-channel RGB image with possible sky regions segmented as background and that the remaining pixels are foreground. Because color is more important than intensity, we normalize the RGB image as follows to reduce dependency on the illumination conditions (Equation (2)) [20,30].
R_n(x, y) = \frac{R(x, y)}{R(x, y) + G(x, y) + B(x, y)}, \quad G_n(x, y) = \frac{G(x, y)}{R(x, y) + G(x, y) + B(x, y)}, \quad B_n(x, y) = \frac{B(x, y)}{R(x, y) + G(x, y) + B(x, y)}.    (2)
Each pixel is then classified according to the dominant color following the scheme of Equation (3).
R^*(x, y) = \begin{cases} 1 & \text{if } R_n(x, y) > B_n(x, y) \text{ and } R_n(x, y) > G_n(x, y), \\ 0 & \text{otherwise}, \end{cases}
G^*(x, y) = \begin{cases} 1 & \text{if } G_n(x, y) > B_n(x, y) \text{ and } G_n(x, y) > R_n(x, y), \\ 0 & \text{otherwise}, \end{cases}
B^*(x, y) = \begin{cases} 1 & \text{if } B_n(x, y) > R_n(x, y) \text{ and } B_n(x, y) > G_n(x, y), \\ 0 & \text{otherwise}. \end{cases}    (3)
The resulting binary images (R*, G*, B*) indicate the locations of predominantly red, green, and blue pixels in the original image. The final RGB histogram features used as input for the Bayes classifier are simply the numbers of pixels classified as red, green, and blue, respectively. In sum, we compute the histogram counts (H_R, H_G, and H_B) according to Equation (4).
H_R = \sum_{x, y} R^*(x, y); \quad H_G = \sum_{x, y} G^*(x, y); \quad H_B = \sum_{x, y} B^*(x, y).    (4)
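A minimal sketch of Equations (2)–(4), computing the chromatic coordinates and the three dominant-color pixel counts over the foreground mask (variable names are ours, not from the paper):

```python
import numpy as np

def rgb_histogram_features(masked_rgb, foreground_mask):
    """Return H_R, H_G, H_B: counts of foreground pixels dominated by each channel."""
    rgb = masked_rgb.astype(np.float64)
    total = rgb.sum(axis=2)
    total[total == 0] = 1.0                                   # avoid division by zero
    rn, gn, bn = rgb[..., 0] / total, rgb[..., 1] / total, rgb[..., 2] / total

    fg = foreground_mask.astype(bool)
    h_r = int(np.count_nonzero((rn > gn) & (rn > bn) & fg))   # Equation (3), summed as in (4)
    h_g = int(np.count_nonzero((gn > rn) & (gn > bn) & fg))
    h_b = int(np.count_nonzero((bn > rn) & (bn > gn) & fg))
    return h_r, h_g, h_b
```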
(b)
Edge orientation: No universal definition of shape exists in an RGB image. However, shape impressions can be extracted by color, intensity patterns, or texture, while a geometric representation can be derived from these impressions. Image edges are characterized by location, magnitude, and orientation [31,32]. Edge orientation is especially useful in distinguishing urban or other developed scenes from undeveloped scenes based on the principle that images of developed scenes will have higher proportions of horizontal and vertical edges than images of undeveloped scenes.
We thus extract edge histogram descriptors to identify the distribution of edges at different orientations across the image. See Figure 4 for a schematic of our edge orientation histogram extraction approach. We use F, the binary result of Otsu segmentation from the previous step, and apply four 3 × 3 Sobel edge kernels tuned to detect edges in the horizontal (K_x), vertical (K_y), 45° diagonal (K_{45}), and −45° diagonal (K_{−45}) directions as follows:
K_x = \begin{pmatrix} 1 & 0 & -1 \\ 2 & 0 & -2 \\ 1 & 0 & -1 \end{pmatrix}, \quad K_y = \begin{pmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{pmatrix}, \quad K_{45} = \begin{pmatrix} 2 & 1 & 0 \\ 1 & 0 & -1 \\ 0 & -1 & -2 \end{pmatrix}, \quad K_{-45} = \begin{pmatrix} 0 & 1 & 2 \\ -1 & 0 & 1 \\ -2 & -1 & 0 \end{pmatrix}
The filter response magnitude indicates the intensity of the gradient at a given pixel in a particular direction [24]. We threshold the magnitudes of the filter outputs F ∗ K_x, F ∗ K_y, F ∗ K_{45}, and F ∗ K_{−45} to convert each edge magnitude image into a binary image, classifying each pixel with a gradient magnitude above the threshold as a possible horizontal, vertical, diagonal-up, or diagonal-down edge. The binary oriented-edge images are then summed, yielding four features (Hx, Hy, H45, and H−45).
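The edge orientation features can be sketched as follows; the fixed magnitude threshold is an illustrative placeholder, whereas the paper derives its thresholds automatically with Otsu's method:

```python
import numpy as np
from scipy.ndimage import convolve

# Sobel-style kernels for the horizontal, vertical, and two diagonal directions.
KERNELS = {
    "x":   np.array([[ 1, 0, -1], [ 2, 0, -2], [ 1, 0, -1]], dtype=float),
    "y":   np.array([[ 1, 2,  1], [ 0, 0,  0], [-1, -2, -1]], dtype=float),
    "45":  np.array([[ 2, 1,  0], [ 1, 0, -1], [ 0, -1, -2]], dtype=float),
    "-45": np.array([[ 0, 1,  2], [-1, 0,  1], [-2, -1,  0]], dtype=float),
}

def edge_orientation_features(foreground_mask, magnitude_threshold=0.5):
    """Count pixels whose gradient magnitude in each direction exceeds a threshold."""
    f = foreground_mask.astype(float)
    features = {}
    for name, kernel in KERNELS.items():
        response = np.abs(convolve(f, kernel, mode="constant", cval=0.0))
        features["H_" + name] = int(np.count_nonzero(response > magnitude_threshold))
    return features   # {"H_x": ..., "H_y": ..., "H_45": ..., "H_-45": ...}
```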
(c)
Vegetation Index (VI): Flickr images are normally three-layer images consisting of red, green and blue color planes, each representing the intensity of light in a range of color in the visible spectrum. Our vegetation descriptors (features) characterize vegetation content through the use of several VIs, particularly, excess green (ExG), excess red (ExR), normalized difference index (NDI), and difference of excess green and excess red (ExGR) [20], which are applied to each pixel’s chromatic coordinate. See Figure 5 for a schematic of the vegetation index histogram extraction approach.
The main aim of this approach is to extract prospective vegetation locations from the input image through analysis of the RGB channels (chromatic coordinates) of every pixel. Table 1 shows the four vegetation index operations derived from the R_n(x, y), G_n(x, y), and B_n(x, y) planes. These indices extract greenness information and differentiate between vegetation, soil, and residue. The advantage of using VIs is that they are well known for accurately representing characteristics such as plant greenness [23].
The vegetation index images are binarized and then summed to obtain four features (HNDI, HExG, HExR, and HExGR); each of the four VI images is binarized using Otsu thresholding prior to summation. ExG [33,34,35,36,37] selects plant regions of interest. ExR is related to the redness of the soil and residue; we use ExR to separate grassland from other types of vegetation, such as agriculture and forest. NDI is useful for separating plants from soil and residue in the background, although we determined that illumination has a significant impact on its usefulness. Meyer and Neto [20] used ExGR to extract dominantly green regions from images.
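The vegetation index features can be sketched from the Table 1 formulas as follows. For brevity, this sketch binarizes each index image with a simple positive-value rule rather than the per-index Otsu thresholding used in the paper:

```python
import numpy as np

def vegetation_index_features(rn, gn, bn, foreground_mask):
    """Return H_NDI, H_ExG, H_ExR, H_ExGR from the normalized chromatic coordinates."""
    exg = 2.0 * gn - rn - bn                                  # excess green
    exr = 1.4 * rn - gn                                       # excess red
    ndi = (gn - rn) / np.clip(gn + rn, 1e-6, None)            # normalized difference index
    exgr = exg - exr                                          # ExG minus ExR

    fg = foreground_mask.astype(bool)
    return {name: int(np.count_nonzero((index > 0) & fg))     # simplified binarization
            for name, index in [("H_NDI", ndi), ("H_ExG", exg),
                                ("H_ExR", exr), ("H_ExGR", exgr)]}
```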
After applying these feature extraction techniques to an image, 11 feature histograms are obtained: HR, HG, HB, Hx, Hy, H45, H−45, HNDI, HExG, HExR, and HExGR. These features serve as the input parameters for classifying each image into an LULC class.

3.2. Crowdsourced LULC Classification

We use the principle of probabilistic minimum risk classification to place each input image in a category. Each image is represented by the previously described vector of low-level features extracted from the image based on pixel-level properties. The simple but effective naive Bayes classifier is used because it is fast to train and not sensitive to irrelevant features [31].
We begin with the Bayes rule to compute the posterior probability P(c | d) of LULC category c given the observed data d, which is proportional to the product of the prior probability P(c) and the class-conditional probability P(d | c) of the data given the LULC class of the image, according to Equation (5).
P(c \mid d) = \frac{P(c) \, P(d \mid c)}{P(d)}.    (5)
In our approach, the observed data d are the vector containing the RGB histogram, edge orientation histogram, and vegetation index histogram features. P(c) can be directly estimated from the number of training images in each category. The exact joint class-conditional probability is
P(d \mid c) = \prod_{i=1}^{D} P(d_i \mid c, d_1, \ldots, d_{i-1}),    (6)
where d_i is the i-th element of d.
Naive Bayes makes the simplifying assumption that the elements of the image feature vector are conditionally independent given the class:
P(d_i \mid c, d_1, \ldots, d_{i-1}) \approx P(d_i \mid c).    (7)
Combining the above equations, we get
P(c \mid d) \propto P(c) \prod_{i=1}^{D} P(d_i \mid c).    (8)
For ease of computation, we convert the continuous variables, di, to discrete variables using the equal-width binning feature of the WEKA machine learning software, version 3.6.13 (Waikato Environment for Knowledge Analysis) [38]. We partition the range of each variable into five bins.
From Equation (8), we can calculate P(c | d) for each category and classify d into the LULC category c with the highest posterior probability. To estimate the parameters P(c) and P(d_i | c) required by the naive Bayes classifier, we use the hand-labeled LULC images as training data (see Section 3.3). We can then automatically classify unlabeled images by assigning probabilistic labels using the estimated parameters.
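The discretization and classification steps can be reproduced, for example, with scikit-learn instead of WEKA (an illustrative substitution; the feature matrix and label names below are assumptions):

```python
from sklearn.naive_bayes import CategoricalNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import KBinsDiscretizer

# X: (n_images, 11) array of the feature histograms
#    [H_R, H_G, H_B, H_x, H_y, H_45, H_-45, H_NDI, H_ExG, H_ExR, H_ExGR]
# y: LULC labels, e.g. "agriculture", "forest", "grassland", "urban", "water"

def build_lulc_classifier():
    """Equal-width binning into 5 bins per feature, followed by naive Bayes."""
    return make_pipeline(
        KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="uniform"),
        CategoricalNB(),
    )

# model = build_lulc_classifier().fit(X_train, y_train)
# predicted = model.predict(X_test)           # most probable LULC class per image
# posteriors = model.predict_proba(X_test)    # P(c | d) for every class
```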

3.3. Naive Bayes Classification Performance Evaluation

Evaluating the performance of a classifier on a dataset characterizes the classifier's capacity to predict the correct LULC class of previously unseen data. Here, we assess how well the patterns of LULC features observed in labeled images allow the LULC classes of previously unseen geotagged Flickr images to be estimated.
To test the stability of the naive Bayes classifier, different partitions of training and testing images were used. The training dataset was designed to ensure a minimum standard of data quality for the classification and was adjusted to minimize errors and improve image classification accuracy. When the accuracy rate exceeds 80%, the potential reference for each LULC class (output) is considered suitable for further mapping analysis [6].
The training sample size can affect classifier accuracy. In practice, a minimum number of training samples is necessary to achieve an acceptable level of classifier accuracy, and enlarging the training dataset beyond that point may not substantially improve accuracy. We therefore tested the image classification potential by increasing the training dataset size.
Of the total images, we considered random samples of 500 and 1000 images. Stratified training and validation samples were selected for each of the five LULC types, with the LULC types treated as strata. We provided five subjects with the same 100 images per class to label, creating a training and validation dataset of 500 images. We then gave the same five subjects an additional 100 images per class to create a second dataset of 1000 images in total. The remaining images were reserved as testing images. We trained the naive Bayes classifiers with balanced datasets of sizes 500 and 1000; of these, 350 and 700 images were used for training and 150 and 300 images for validation, respectively.
To evaluate the performance, we used common measures: the true-positive (TP) rate, false-positive (FP) rate, recall, precision, sensitivity, specificity, f-measure, kappa coefficient, and receiver operating characteristic (ROC) [39,40]. In this evaluation, accuracy was the proportion of Flickr images correctly classified into LULC types. The false-positive rate measured the proportion of images incorrectly classified as a particular LULC class. Precision was the proportion of images assigned to a particular class by the classifier that truly belonged to that class. Sensitivity, also known as recall, was the proportion of images in a particular class that were correctly classified. Specificity, also referred to as the true negative rate, was the proportion of actual negatives that were classified as negatives [39]. The f-measure was the combined measure for assessing the precision/recall balance, and the kappa coefficient compared the observed prediction accuracy to the accuracy expected through chance agreement [41,42].
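These metrics can be computed with standard tooling; the sketch below uses scikit-learn as an illustrative substitute for the WEKA output reported in the paper:

```python
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             precision_recall_fscore_support)

def evaluate_classifier(y_true, y_pred):
    """Overall accuracy, kappa coefficient, and weighted precision/recall/f-measure."""
    precision, recall, f_measure, _ = precision_recall_fscore_support(
        y_true, y_pred, average="weighted", zero_division=0)
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "kappa": cohen_kappa_score(y_true, y_pred),
        "precision": precision,
        "recall": recall,
        "f_measure": f_measure,
    }
```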

3.4. Determination of Free Parameters

At this point, the method has 275 conditional probability parameters (5 bins × 5 LULC classes × 11 image features) that describe the distribution of features for particular LULC categories (urban areas, water bodies, grassland, forest, and agricultural areas) in the training data. These parameters were directly estimated from the observed training set counts. In addition to the global background/foreground threshold, a threshold exists for each of the low-level feature histograms: three thresholds for the dominant color planes (H_R, H_G, and H_B), four thresholds for the magnitudes of the oriented gradients (Hx, Hy, H45, and H−45), and four thresholds for the vegetation histograms (HNDI, HExG, HExR, and HExGR). Together with the global background/foreground threshold, this gives 12 thresholds, all of which were calculated automatically using Otsu's approach.

3.5. Application of Majority Class Filter Determination of Crowdsourced LULC Mapping

After the determination of free parameters, we use the majority distribution in a geospatial tile to ameliorate the effect of geographically non-informative (positional error) or misclassified images. Majority class filtering inspects the LULC types in neighborhoods to reduce uncertainty or unreliable information. To localize our analysis, we partitioned the study region into 30 m × 30 m (Landsat pixel size: Section 2.2) tiles and mapped each labeled Flickr image to a tile. To label the LULC class of each tile, the neighboring images were considered for majority class filtering by selecting the highest frequency LULC class in the given tile. Figure 6 shows an example of majority class filtering in the case study region of Sapporo City, Japan.
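A minimal sketch of the tile-level majority filter, assuming the image locations have already been projected into a metric coordinate system; function and variable names are ours:

```python
from collections import Counter, defaultdict

TILE_SIZE = 30.0  # metres, matching the Landsat pixel size

def majority_filter(points):
    """Assign each 30 m x 30 m tile the most frequent LULC label.

    `points` is an iterable of (easting, northing, lulc_label) tuples.
    """
    tiles = defaultdict(list)
    for easting, northing, label in points:
        key = (int(easting // TILE_SIZE), int(northing // TILE_SIZE))
        tiles[key].append(label)
    # Majority vote per tile: the highest-frequency LULC class wins.
    return {key: Counter(labels).most_common(1)[0][0]
            for key, labels in tiles.items()}
```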
After obtaining the majority voting result, crowdsourced LULC mapping was used for training samples for LULC classification of the Landsat TM5 image using the maximum likelihood method (MLM), which is one of the most widely used methods for classification on account of its simplicity and robustness. The MLM classifier was trained using randomly sampled tiles for individual LULC types.
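For reference, maximum likelihood classification can be sketched as one multivariate Gaussian per LULC class with equal priors; this is a generic first-principles illustration, not the software actually used in the study, and the array names are assumptions:

```python
import numpy as np
from scipy.stats import multivariate_normal

def train_mlm(train_pixels, train_labels):
    """Fit one multivariate Gaussian per LULC class (equal priors assumed)."""
    models = {}
    for c in np.unique(train_labels):
        samples = train_pixels[train_labels == c]             # (n_c, n_bands)
        models[c] = multivariate_normal(mean=samples.mean(axis=0),
                                        cov=np.cov(samples, rowvar=False),
                                        allow_singular=True)
    return models

def classify_mlm(models, pixels):
    """Assign each pixel spectrum to the class with the maximum likelihood."""
    classes = list(models)
    log_likelihoods = np.column_stack([models[c].logpdf(pixels) for c in classes])
    return np.asarray(classes)[np.argmax(log_likelihoods, axis=1)]
```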
Training samples were collected on a tile basis to reduce redundancy and spatial autocorrelation. We uniformly selected training samples from the tiles of each LULC type, covering 70% of the samples per class; the remaining 30% of tiles were used to validate the obtained LULC map. Here, we compare the classification result at each tile of the validation locations [13]. Table 2 shows the structure of the confusion matrix between the LULC classes derived from the crowdsourced data and the LULC classes of the Landsat TM5 image.
The observed classification accuracy of the crowdsourced map was determined by the diagonal elements of the confusion matrix. Chance agreement incorporated the off-diagonals of the confusion matrix (diagonals represented items correctly classified according to the reference data; off-diagonals represented misclassified items).

4. Results and Discussion

Through our experiment, we examined how effectively LULC types can be predicted using geotagged ground level images from Flickr. We used independent training and testing datasets as described in the previous section. The results showed that the naive Bayes classifier provides good performance for LULC classification.

4.1. Foreground/Background Segmentation Using Otsu Thresholding

The proposed image segmentation process is based on the Otsu threshold. We carefully selected training images with homogeneous features; for example, images with an obvious skyline or linear structure allow the LULC features to be segmented accurately.
Sample results for different LULC types are shown in Figure 7. The obtained results clearly indicate that the white pixels identify agricultural, forest, grassland, urban structures, and water features in the original images. In Figure 7a, black pixels correctly identify background and other material present in the image (87.8% correctly detected). The misdetections sampled in Figure 7b account for the remaining 12.2% of incorrect segmentations. Background illumination, time of day, and the angle at which the image was captured contributed to misdetection. In some images, detection of the skyline was difficult, and the angle from which the image was obtained also had a significant effect on skyline detection [43].

4.2. Color, Shape, and Color Index Based Image Classification

The segmentation method extracts relatively homogeneous regions. Based on the segmentation results, we removed the background from each image and calculated the feature images for further analysis. Manually selected training sample images were placed into the categories of agriculture, forest, water, grassland, and urban. Sample color feature histograms are shown in Figure 8. Classes with vegetation (agriculture and forest) show a dominant green channel, while the water class shows a dominant blue channel. However, grassland and urban areas have no distinctive color channel patterns, possibly because of the variation in color of the targeted features.
Sample results of thresholding the outputs of the linear oriented edge filters are shown in Figure 9a. We found that images of urban scenes have a much higher proportion of horizontal and vertical edges than other non-urban scenes (forest, grassland, and agriculture). Developed areas show a visible pattern (spikes in the ratio of horizontal and vertical edges in Figure 9b). One issue with edge histograms is that the edges of a building may have diagonal orientations on account of perspective distortion.
We performed an initial comparison of VIs for plant and background features on the original color image, as shown in Figure 10. The greenness extraction approach combined the information provided by the different vegetation indices, as discussed in Section 3.1.2. We found that the ExG index successfully discriminated grassland, evergreen forest, and agriculture from the background; however, it separated deciduous forests with higher redness from other forests. ExR was useful for discriminating grassland from the background on account of its extraction of redness, soil, and residue. It was also effective at separating deciduous forests from evergreen forests. Nevertheless, it did not perform well for extracting dominant greenness (agriculture).
The NDI index could separate agriculture from soil and background. However, we found that illumination had a significant impact in not capturing the color of other vegetation categories (grassland, deciduous forest, and evergreen forest). The ExGR index appeared to accurately extract dominant greenness (agriculture) as long as the background and soil were detected as background in the segmentation. Figure 10 shows that ExG, NDI, and ExGR together separate the plant regions quite accurately from the soil and background. Both NDI and ExGR show the effect of separating forest and grassland. Hence, our combined VI feature extraction technique shows good results for separating different kinds of vegetation.

4.3. LULC Feature Classification Using Naive Bayes Classifier

We tested the performance of the proposed approach in extracting five major LULC types (water body, urban area, grassland, forest, and agriculture) on labeled Flickr images. The naive Bayes classifier enabled subjective definitions to be described in terms of our color, shape, and color index features. We arbitrarily selected training LULC images. We calculated a set of parameters from each training set and then applied those parameters to classify the images in the test set. Figure 11 shows sample images labeled by the naive Bayes classifier.

4.4. Measuring Performance of the Naive Bayes Classifier

We performed a model evaluation using precision, recall, f-measure, ROC curve, kappa coefficient, and overall accuracy (Figure 12 and Figure 13 present the results). Classification accuracy depended on the training dataset size: increasing the number of training samples increased the accuracy, so more training data may further improve LULC classification. Figure 12 shows that at least 1000 training image samples are required for naive Bayes to provide a satisfactory LULC class estimation. The performance obtained using 1000 training samples is clearly better than the results with 500 training samples. With 1000 training samples, we obtained an overall accuracy, kappa coefficient, precision, recall, and f-measure of 87.94%, 82.89%, 88.20%, 87.90%, and 88%, respectively.
Figure 13 shows the system performance with different combinations of color, edge, and VI features. The combination of multiple features contributed to the discrimination between different LULC types. The urban area, water, forest, and grassland categories are classified with satisfactory accuracy (>80%). However, the agricultural class shows only 63.9% positive classification, primarily because of the misclassification of agricultural areas as other classes (36.1%). This misclassification may result from the relatively small number of training samples in the dataset and the heterogeneity of features in the agricultural images.
ROC analysis was performed to estimate the classification accuracy (sensitivity plotted against 1 − specificity on a per-category and per-feature basis). Accordingly, we selected a threshold for a feature above which a labeled LULC class was considered positive. The area under the ROC curve (AUC) evaluated the classifier over all cutoff points, giving better insight into how well the classifier can distinguish the classes. An AUC of 0.5 indicates a random prediction.
For each LULC class, we compared correctly classified relevant images against incorrectly classified images. In the case of water, the ROC curve and area under the ROC curve for the blue descriptor performed well. The green descriptor showed outstanding performance (ExGR: 0.944 and G: 0.943) in discriminating grassland and forest from other classes, as shown in Figure 14. Urban images can be effectively differentiated by the vertical edge orientation descriptor. However, ROC curves for agricultural images are poorer for every descriptor on account of the heterogeneity of the objects (e.g., flowers, cropland, and paddy fields), which affected the classifier performance.
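The per-class, per-feature ROC analysis described above can be reproduced with a one-vs-rest comparison of a single feature; the sketch below uses scikit-learn and illustrative argument names:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def per_class_feature_roc(feature_values, labels, positive_class):
    """One-vs-rest ROC analysis for one feature and one LULC class.

    Returns the AUC together with the curve points (false positive rate,
    true positive rate); an AUC of 0.5 corresponds to a random prediction.
    """
    y_true = (np.asarray(labels) == positive_class).astype(int)
    auc = roc_auc_score(y_true, feature_values)
    fpr, tpr, _ = roc_curve(y_true, feature_values)
    return auc, fpr, tpr
```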
In summary, we investigated the application of crowdsourced data to LULC classification. We employed the renowned naive Bayes classifier model on account of its speed, efficiency, capability of handling large amounts of data, and insensitivity to irrelevant features. We determined that our combined feature descriptor yielded good overall accuracy of 87.94% of the test dataset of LULC classification. The use of features sensitive to vegetation characteristics is particularly useful in reducing the uncertainty of classification of the crowdsourced images.

4.5. Crowdsourced and Landsat TM 5 Based LULC Classification

After classifying individual images, we performed majority class filtering to obtain LULC predictions for neighborhoods to reduce uncertainty and ignore unreliable information. Images distributed in 30 m × 30 m areas were considered for majority class filtering by selecting the highest frequency. From this result, we obtained LULC classes for each tile in the map. Table 3 shows the obtained number of tiles in each of the LULC classes. Urban areas have the highest number of tiles for the overall area, followed by agriculture, grassland, water bodies, and forest, respectively.
Figure 15 shows the LULC map obtained from Flickr imagery, containing the majority LULC class for each tile in Sapporo City, Japan. The Flickr LULC map covers approximately 10% of the total area; the remaining areas have no Flickr images. Urban areas account for the largest share of the crowdsourced LULC map, most likely because the density and distribution of available images are sufficiently high in those tiles [16].
After filtering the individual tile, we performed supervised classification on the Landsat TM5 image using the majority filtered crowdsourced LULC tiles as training and validation datasets to acquire the LULC map. Figure 16 shows the Landsat TM5 based LULC mapping, which was trained by Flickr images. The accuracy evaluation was performed to compare the accuracy between the crowdsourced and Landsat TM5 LULC mapping coverage of the Sapporo City area.
Table 4 shows the accuracy assessment, which was calculated using the confusion matrix obtained to assess the quality of the classification [18]. The obtained overall accuracy and the kappa coefficient are 70% and 0.625, respectively, which are mostly driven by urban areas. This implies that crowdsourced images can support ground truth information as training/validation data for urban/non-urban mapping.
In short, the proposed approach uses the crowdsourced (Flickr image) dataset to produce training and validation data for use in various LULC applications. However, the performance of the image processing chain may be limited, particularly over large areas and for a greater variety of classes, which require reference data of sufficient quality.

5. Conclusions

In this paper, we proposed the use of Flickr images for LULC classification to automatically recognize specific types of LULC from visual properties of crowdsourced data. This method (the combination of thresholding, color, shape, and color indices) yielded good classification accuracy. The combined feature descriptor method with 1000 training images provided 88% accuracy over five LULC categories. Integration of color index descriptors with the naive Bayes classifier improved the separation of different types of vegetation. Especially for areas with green vegetation, the proposed strategy outperformed the simpler color and shape index combination in terms of accuracy. The number of training samples greatly affected the classifier performance. Increasing the number of training samples led to increased classification performance. Further improvement may be possible using larger training sets. However, the results show that an acceptable accuracy level can be obtained using a minimum number of training samples.
As a whole, these findings provide insight into the applicability of probabilistic classifiers and the number of training samples required when implementing an object-based approach to LULC classification using a large crowdsourced dataset. The results also highlight the utility of integrating color indices with machine learning. Crowdsourced data can provide potential training samples for various LULC applications. We assessed the classification performance for both image-based recognition and mapping-based classification. Having tuned the image recognition step to obtain the highest accuracy, we could then map these reference images in terms of spatial distribution using a majority voting approach.
For image recognition, we performed a model evaluation using precision, recall, f-measure, ROC curve, kappa coefficient, and overall accuracy. For mapping classification, we validated the classification result using 30% of the tiles of each LULC class with equal spatial distributions in a particular area. The accuracy evaluation compared the accuracy between the crowdsourced and Landsat TM5 LULC mapping coverage of the Sapporo City area, and the accuracy assessment was calculated using the confusion matrix to assess the quality of the LULC mapping classification.
This approach is extensible and can be useful for specific areas, depending especially on the availability of images. Its ultimate benefit may be the possibility of automatically monitoring heterogeneous subclasses or removing noise images, for example agriculture (flowers, cropland, and rice paddies), forest (deciduous and evergreen forest), long-term vegetation changes (changes in agricultural, forest, and grassland areas), and other objects (faces/people and animals). Social media can provide data sources for both urban and natural resource planning, allowing modeling and identification of historical and newly emerging LULC in a relatively inexpensive and near-real-time manner.
Another application of the proposed approach is validating or updating existing LULC maps using social sensing, which requires rigorous sampling design. The images could complement an existing validation dataset, which would be less costly and time consuming than traditional field survey methods. In sum, this work marks an initial step toward improving our ability to classify, detect changes in, and validate LULC maps obtained by remote sensing through the use of social-sensing databases. Moreover, it may provide information that can be helpful if combined with other crowdsourced data, such as Panoramio, Facebook, and Twitter [11].
In future work, to improve efficiency and obtain additional image feature descriptors, we plan to use more sophisticated texture extraction techniques [19,31] with other machine learning models, such as neural networks, random forest, and support vector machines, as well as other sources of geotagging crowdsourced data [11]. Finally, we intend to extend the scope of our approach to investigate how well naive Bayes and other classifiers can be trained using Flickr or other social media images from one region to classify LULC in other regions.
This study considered only the location of image acquisition; in most cases, however, majority voting adequately represents the LULC classes. We plan to advance our work by considering the direction in which each image was taken to increase classification efficiency. We hope that the idea of observing natural and man-made features through social-sensing photo-sharing websites can foster a greater understanding of earth-surface properties and characteristics in spatial and temporal terms.

Acknowledgments

This work (experiment accomplishment) was supported by Japanese Government Scholarships.

Author Contributions

Asamaporn Sitthi performed the data collection and data analysis, wrote the main manuscript, and prepared the figures and tables. Matthew Dailey and Masahiko Nagai provided conceptual advice and critical comments. Sarawut Ninsawat provided critical comments on the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Goodchild, M.F.; Li, L. Assuring the quality of volunteered geographic information. Spat. Stat. 2012, 1, 110–120. [Google Scholar] [CrossRef]
  2. Andrienko, G.; Andrienko, N.; Bak, P.; Kisilevich, S.; Keim, D. Analysis of community-contributed space-and time-referenced data (example of Panoramio photos). In Proceedings of the 17th ACM SIGSPATIAL International Conference on Advance Geographic Information System (GIS’09), Seattle, WA, USA, 4–6 November 2009; pp. 540–541.
  3. Chen, L.; Roy, A. Event detection from Flickr data through wavelet-based spatial analysis. In Proceedings of the 18th ACM Conference Information Knowledge Management (CIKM’09), Hong Kong, China, 2–6 November 2009; pp. 523–532.
  4. Antoniou, V.; Morley, J.; Haklay, M. Web 2.0 geotagged photos: Assessing the spatial dimension of the phenomenon. Geomatica 2010, 64, 99–110. [Google Scholar]
  5. Heipke, C. Crowdsourcing geospatial data. ISPRS J. Photogram. Remote Sens. 2010, 65, 550–557. [Google Scholar] [CrossRef]
  6. See, L.; Mooney, P.; Foody, G.; Bastin, L.; Comber, A.; Estima, J.; Fritz, S.; Kerle, N.; Jiang, B.; Laakso, M.; et al. Crowdsourcing, citizen science or volunteered geographic information? The current state of crowdsourced geographic information. ISPRS Int. J. Geoinf. 2016, 5. [Google Scholar] [CrossRef]
  7. Lee, I.; Cai, G.; Lee, K. Exploration of geotagged photos through data mining approaches. Expert Syst. Appl. 2014, 41, 397–405. [Google Scholar] [CrossRef]
  8. Majid, A.; Chen, L.; Chen, G.; Mirza, H.T.; Hussain, I.; Woodward, J. A context-aware personalized travel recommendation system based on geotagged social media data mining. Int. J. Geo. Inf. Sci. 2013, 27, 662–684. [Google Scholar] [CrossRef]
  9. Estima, J.; Painho, M. Photo based volunteered geographic information initiatives: A comparative study of their suitability for helping quality control of Corine land cover. Int. J. Agric. Environ. Inf. Syst. 2014, 5, 75–92. [Google Scholar] [CrossRef]
  10. Crooks, A.; Croitoru, A.; Stefanidis, A.; Radzikowski, J. Earthquake: Twitter as a distributed sensor system. Trans. GIS 2013, 17, 124–147. [Google Scholar] [CrossRef]
  11. Longueville, B.D.; Smith, R.S. “OMG, from here, I Can See the Flames!” A Use Case of Mining Location Based Social Networks to Acquire Spatio-Temporal Data on Forest Fires. Available online: http://dl.acm.org/citation.cfm?id=1629907 (accessed on 29 August 2016).
  12. Jokar Arsanjani, J.; See, L.; Tayyebi, A. Assessing the suitability of GlobeLand30 for mapping land cover in Germany. Int. J. Digit. Earth 2016, 9, 1–19. [Google Scholar] [CrossRef] [Green Version]
  13. Comber, A.; See, L.; Fritz, S.; van der Velde, M.; Perger, C.; Foody, G. Using control data to determine the reliability of volunteered geographic information about land cover. Int. J. Appl. Earth Obs. Geoinf. 2013, 23, 37–48. [Google Scholar] [CrossRef]
  14. Leung, D.; Newsam, S. Proximate sensing: Inferring what-is-where from georeferenced photo collections. In Proceedings of the 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA, 13–18 June 2010; pp. 2955–2962.
  15. Leung, D.; Newsam, S. Land cover classification using geo-referenced photos. Multimedia Tools Appl. 2014, 74, 11741–11761. [Google Scholar] [CrossRef]
  16. See, L.; McCallum, I.; Fritz, S.; Perger, C.; Kraxner, F.; Obersteiner, M.; Deka Baruah, U.; Mili, N.; Ram Kalita, N. Mapping cropland in Ethiopia using crowdsourcing. Int. J. Geosci. 2013, 4, 6–13. [Google Scholar] [CrossRef]
  17. The Land Processes Distributed Active Archive Center. Available online: https://lpdaac.usgs.gov/data_access/glovis (accessed on 2 February 2015).
  18. Fritz, S.; McCallum, I.; Schill, C.; Perger, C.; Grillmayer, R.; Achard, F.; Kraxner, F.; Obersteiner, M. Geo-Wiki.Org: The use of crowdsourcing to improve global land cover. Remote Sens. 2009, 1, 345–354. [Google Scholar] [CrossRef]
  19. Torralba, A.; Oliva, A. Statistics of natural image categories. Netw. Comput. Neural Syst. 2003, 14, 391–412. [Google Scholar] [CrossRef]
  20. Meyer, G.E.; Neto, J.C. Verification of color vegetation indices for automated crop imaging applications. Comput. Electron. Agric. 2008, 63, 282–293. [Google Scholar] [CrossRef]
  21. Leung, D.; Newsam, S. Can off-the-shelf object detectors be used to extract geographic information from geo-referenced social multimedia? In Proceedings of the 5th International Workshop on Location-Based Social Networks, Redondo Beach, CA, USA, 6 November 2012.
  22. Kumar, R.; Arthanari, M.; Sivakumar, M. Image segmentation using discontinuity-based approach. Int. J. Multimedia Image Process. 2011, 1, 72–78. [Google Scholar]
  23. Shrestha, D.S.; Steward, B.L.; Bartlett, E. Segmentation of plant from background using neural network approach. In Proceedings of the Artificial Neural Networks in Engineering Conference, St. Louis, MO, USA, 4–7 November 2001.
  24. Napoleon, D.; Lakshmi Priya, U.; Revathi, P. An efficient segmentation of insects from images using clustering algorithm and thresholding techniques. Int. J. Res. Advent Technol. 2013, 1, 194–199. [Google Scholar]
  25. Bindu, C.H.; Prasad, K.S. A new approach for segmentation of fused images using cluster based thresholding. Int. J. Signal Image Process. 2013, 4, 1–5. [Google Scholar]
  26. Vala, H.J.; Baxi, A. A review on Otsu image segmentation algorithm. Int. J. Adv. Res. Compt. Eng. Technol. 2013, 2, 387–389. [Google Scholar]
  27. Parmarl, S.P.; Shah, D.H. Otsu based segmentation for thermal image. Am. Int. J. Res. Sci. Technol. Eng. Math. 2015, 10, 190–193. [Google Scholar]
  28. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 2nd ed.; Pearson Prentice Hall: Upper Saddle River, NJ, USA, 2001. [Google Scholar]
  29. Chary, R.V.R.; Lakshmi, D.R.; Sunitha, K.V.N. Feature extraction methods for color image similarity. Adv. Comp. Int. J. 2012, 3, 147–157. [Google Scholar] [CrossRef]
  30. Font, D.; Tresanchez, M.; Martínez, D.; Moreno, J.; Clotet, E.; Palacín, J. Vineyard yield estimation based on the analysis of high resolution images obtained with artificial illumination at night. Sensors 2015, 15, 8284–8301. [Google Scholar] [CrossRef] [PubMed]
  31. Xiao, J.; Hays, J.; Russell, B.C.; Patterson, G.; Ehinger, K.A.; Torralba, A.; Oliva, A. Basic level scene understanding: Categories, attributes and structures. Front. Psychol. 2013, 4, 1–10. [Google Scholar] [CrossRef] [PubMed]
  32. Pratt, W.K. Digital Image Processing, 3rd ed.; John Wiley and Sons Inc.: Los Altos, CA, USA, 2001. [Google Scholar]
  33. Guerrero, J.M.; Pajares, G.; Montalvo, M.; Romeo, J.; Guijarro, M. Support Vector Machines for crop/weeds identification in maize fields. Expert Syst. Appl. 2012, 39, 11149–11155. [Google Scholar] [CrossRef]
  34. Giltelson, A.A.; Kaufman, Y.J.; Stark, R.; Rundquist, D. Novel algorithm for remote estimation of vegetation fraction. Remote Sens. Environ. 2002, 80, 76–87. [Google Scholar] [CrossRef]
  35. Lamm, R.D.; Slaughter, D.C.; Giles, D.K. Precision weed control for cotton. Trans. ASAE 2002, 45, 231–238. [Google Scholar]
  36. Mao, W.; Wang, Y.; Wang, Y. Real-time detection of between-row weeds using machine vision. In Proceedings of the 2003 ASAE Annual International Meeting, Las Vegas, NV, USA, 27–30 July 2003.
  37. Meyer, G.E.; Neto, J.C.; Jones, D.D.; Hindman, T.W. Intensified fuzzy clusters for determining plant, soil, and residue regions of interest from color images. Comput. Electron. Agric. 2004, 42, 161–180. [Google Scholar] [CrossRef]
  38. Hall, M.; Frank, E.; Holmes, G.; Pfahringer, B.; Reutemann, P.; Witten, I.H. WEKA data mining software: An update; SIGKDD explorations. ACM SIGKDD Explor. Newsl. 2009, 11, 10–18. [Google Scholar] [CrossRef]
  39. Wieland, M.; Pittore, M. Performance evaluation of machine learning algorithms for urban pattern recognition from multi-spectral satellite images. Remote Sens. 2014, 6, 2912–2939. [Google Scholar] [CrossRef]
  40. Powers, D.M.W. Evaluation: From precision, recall and F-factor to ROC, informedness, markedness & correlation. J. Mach. Learn. Technol. 2007, 2, 1–24. [Google Scholar]
  41. Foody, G.M.; See, L.; Fritz, S.; Van der Velde, M.; Perger, C.; Schill, C.; Boyd, D.S. Assessing the accuracy of volunteered geographic information arising from multiple contributors to an internet based collaborative project. Trans. GIS 2013, 17, 847–860. [Google Scholar] [CrossRef]
  42. Adam, A.H.M.; Elhag, A.M.H.; Salih, A.M. Accuracy assessment of land use & land cover classification (LU/LC), case study of Shomadi area, Renk County, Upper Nile State, South Sudan. Int. J. Sci. Res. Publ. 2013, 3, 1–6. [Google Scholar]
  43. Calbó, J.; Jeff, S. Feature extraction from whole-sky ground-based images for cloud-type recognition. J. Atmos. Ocean Technol. 2008, 1, 3–14. [Google Scholar] [CrossRef]
Figure 1. Sapporo Flickr image density in 2009.
Figure 2. Methodological approach to automatic low-level image feature extraction and land use/land cover (LULC) classification using Flickr.
Figure 3. Image segmentation and RGB feature extraction approach.
Figure 4. Extraction of edge histogram descriptors.
Figure 5. Schematic of vegetation index histogram approach.
Figure 6. Example of majority filtering in each LULC class (A: agricultural; F: forest; G: grassland; U: urban areas; and W: water bodies).
Figure 7. Image segmentation by Otsu thresholding. Sample images that are: (a) correctly segmented; and (b) incorrectly segmented.
Figure 8. Example RGB images and color histograms in different LULC categories.
Figure 9. Sample of edge orientation images from thresholding: (a) edge orientation histogram features with pixel counts; and (b) ratio of horizontal and vertical edges.
Figure 10. Comparison of vegetation indices. Color and binary images for grassland, deciduous forest, evergreen forest, and agriculture.
Figure 11. Sample images labeled by the maximum a posteriori classification of LULC using the naive Bayes classifier.
Figure 12. Accuracy, precision, recall, and f-measure for the naive Bayes classifier trained with balanced increasing numbers of training and validation samples (500 and 1000 images).
Figure 13. Precision of crowdsourced image classification with different combinations of features (RGB color, edge orientation, and vegetation indices).
Figure 14. ROC curve on different LULC categories (pH: edge horizontal, pV: edge vertical, NDI: normalized difference index, ExG: excess green, ExR: excess red, ExGR: difference of excess green and excess red, G: green, B: blue, and R: red).
Figure 15. Crowdsourced LULC mapping in Sapporo, Japan.
Figure 16. Landsat TM5 based LULC map of Sapporo City in 2009.
Table 1. Vegetation index operations.

Index    Equation
ExG      2Gn(x, y) − Rn(x, y) − Bn(x, y)
ExR      1.4Rn(x, y) − Gn(x, y)
NDI      (Gn(x, y) − Rn(x, y)) / (Gn(x, y) + Rn(x, y))
ExGR     ExG − ExR
Table 2. Confusion matrix for land use/land cover (LULC) classes from crowdsourced and Landsat TM5 imagery.

Observed Class from Crowdsourced Mapping    Predicted Class from Landsat TM5
                                            Class of Interest    Non-Class of Interest    Total
Class of interest                           TP                   FN                       TP + FN
Non-class of interest                       FP                   TN                       FP + TN
Total                                       TP + FP              FN + TN                  N = TP + TN + FP + FN
Table 3. Number of tiles in each of the classes from crowdsourced LULC mapping.

Crowdsourced LULC Class    Number of Tiles
Agricultural               744
Forest                     120
Grassland                  345
Urban areas                4050
Water bodies               93
Total                      5352
Table 4. Classification accuracy of Landsat TM5 LULC mapping.

Land Use/Land Cover Class    User's Accuracy (%)    Producer's Accuracy (%)
Agricultural                 64                     83
Forest                       68                     53
Grassland                    61                     64
Urban areas                  85                     69
Water bodies                 72                     94
Overall accuracy: 70%; Kappa coefficient: 0.625.
