Case Report

Semi-Automatic Oil Spill Detection on X-Band Marine Radar Images Using Texture Analysis, Machine Learning, and Adaptive Thresholding

1 Navigation College, Dalian Maritime University, Dalian 116026, China
2 Environmental Information Institute, Dalian Maritime University, Dalian 116026, China
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(7), 756; https://doi.org/10.3390/rs11070756
Submission received: 17 March 2019 / Revised: 25 March 2019 / Accepted: 26 March 2019 / Published: 28 March 2019
(This article belongs to the Special Issue Oil Spill Remote Sensing)

Abstract: Oil spills cause great damage to the environment and, in particular, to coastal ecosystems. The ability to identify them accurately is important for a prompt oil spill response. We propose a semi-automatic oil spill detection method in which texture analysis, machine learning, and adaptive thresholding are used to process X-band marine radar images. Coordinate transformation and noise reduction are first applied to the sampled radar images; coarse measurements of oil spills are then obtained by texture analysis and machine learning. To identify the loci of oil spills, a texture index calculated from four textural features of a grey level co-occurrence matrix is proposed. Machine learning methods, namely support vector machine, k-nearest neighbor, linear discriminant analysis, and ensemble learning, are adopted to extract the coarse oil spill areas indicated by the texture index. Finally, fine measurements are obtained by applying adaptive thresholding to the coarsely extracted oil spill areas. The fine measurements are insensitive to the results of the coarse measurement. The proposed method was applied to radar images sampled after an oil spill accident that occurred in the coastal region of Dalian, China on 21 July 2010. With this processing method, thresholds do not have to be set manually and oil spills can be extracted semi-automatically. The extracted oil spills are accurate and consistent with visual interpretation.

Graphical Abstract

1. Introduction

Oil spills can result in immediate and long-term environmental damage that can last for decades. In oil spill accidents, crude oil at and beneath the water’s surface can quickly spread and reach the coastal region under particular weather and oceanographic conditions [1,2,3,4,5]. Therefore, quick and accurate oil spill detection is a critical step in oil spill cleanup. Marine radars are normally installed on vessels for navigation [6], so the marine radar images used in this study are convenient and expedient to obtain. In marine radar images, the intensity of the backscattered signal in an oil spill area is weaker than in the neighboring waters, a phenomenon that can be exploited to detect oil spills [7]. Although several commercial systems have been developed, such as the oil spill detection (OSD) system of Miros (a Norwegian company) [8,9], the SeaDarQ radar system of Nortek Netherlands [10], and the sigma S6 OSD system of Rutter (a Canadian company) [11], the underlying oil spill extraction methods are seldom publicized due to commercial competition.
Some scholarly studies on using marine radar to detect oil spills were published as early as the 1980s [12,13], but they did not present a detailed oil spill segmentation method. Zhu et al. studied how an OSD method [14] can be used to obtain oil spill information. In their study, multiple radar images were used to map and predict the attenuation of signal intensity in the images, and a manually set global threshold was applied to extract the oil spills. In another study, Liu et al. proposed an adaptive thresholding method [15] to visualize the oil spills on individual sampled radar images. Similarly, a manually set threshold had to be used to select an available area for oil spill segmentation. Xu et al. detected oil spills on radar images by dual thresholds [16]; however, the thresholds still had to be selected manually.
In this paper, we propose a semi-automatic oil spill detection method using texture analysis, machine learning, and adaptive thresholding. In our method, manually set thresholds are not required. The workflow of the proposed method is shown in Figure 1, and the method is described in Section 2, Section 3, and Section 4. Section 2 describes how radar images were obtained and preprocessed by erasing co-channel interference and reducing speckles caused by small objects. Section 3 presents the coarse measurement method: an index calculated from the grey level co-occurrence matrix (GLCM) is used to indicate areas containing oil spills, and machine learning methods are used to separate the oil spill area. In Section 4, fine measurements of the oil spill area are obtained by an adaptive thresholding method. In Section 5, extraction results of oil spills for several radar images are presented, and the performance of the proposed method is discussed in Section 6. Finally, Section 7 presents the conclusions.

2. Radar Image Preprocessing

2.1. Radar Image Collection

An oil spill accident took place on the shore of Dalian on 16 July 2010, and a large amount of oil poured into the ocean. The teaching–training ship “YUKUN”, owned by Dalian Maritime University, was commissioned to measure the oil spills in the coastal area. An X-band marine radar from Sperry Marine (head office in London, UK) was installed on “YUKUN”, and its radar images were used to detect oil spills. The main parameters of the X-band marine radar are shown in Table 1. Figure 2a shows the navigation route of the teaching–training ship “YUKUN” on 21 July 2010. Figure 2b shows an example radar image obtained at the seashore near Dalian Bay at 23:19:32, covering the area shown in red in Figure 2a. The scanning radius of the radar image is 0.75 nautical miles. For more straightforward analysis, the marine radar images were transformed from the polar coordinate system into a Cartesian coordinate system, in which the horizontal axis indicates the azimuth (scanning) angle, the vertical axis the distance from the radar, and the brightness the strength of the reflected signal. Figure 3 shows the transformed image of Figure 2b in the Cartesian coordinate system; the image size is 512 × 2048 pixels.
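For readers who want to reproduce the coordinate transformation, the following is a minimal sketch of resampling a plan position indicator (PPI) style radar image onto a range–azimuth grid. It assumes the raw scan is available as a square image centred on the antenna and uses SciPy for interpolation; the grid size of 512 × 2048 follows the dimensions quoted above, while the function name and orientation convention are illustrative.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def ppi_to_range_azimuth(ppi, n_range=512, n_azimuth=2048):
    """Resample a square PPI radar image (radar at the centre) onto a
    range-azimuth grid: rows = distance from the antenna, columns = azimuth."""
    cy, cx = (np.array(ppi.shape[:2]) - 1) / 2.0            # image centre (antenna position)
    max_r = min(cy, cx)                                      # largest radius fully inside the image
    r = np.linspace(0.0, max_r, n_range)                     # range bins
    theta = np.linspace(0.0, 2 * np.pi, n_azimuth, endpoint=False)  # azimuth angles
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    rows = cy - rr * np.cos(tt)                              # 0 degrees pointing "up" in the image
    cols = cx + rr * np.sin(tt)
    return map_coordinates(ppi, [rows, cols], order=1, mode="constant", cval=0.0)
```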

2.2. Denoising

As shown in Figure 3, co-channel interference manifests as bright band effects. An index calculated by convolutional filters [15] was used to detect the co-channel interference. The index is expressed as follows:
C_{i,j} = \sum_{k=-(N-1)/2}^{(N-1)/2} I_{i,j+k}\, V_c(k) \;-\; \sum_{l=-(N-1)/2}^{(N-1)/2} I_{i+l,j}\, V_r(l), (1)
where C_{i,j} is the index value, V_c is the N × 1 convolutional filter, V_r is the 1 × N convolutional filter, and N is the length of the convolutional filters, which is set to an odd number.
Based on the calculated index value, the co-channel interference was detected by Otsu’s method [17,18]. The identified co-channel interference was then erased by a mean filter with a 1 × N kernel [19]. The radar image with co-channel interference erased is shown in Figure 4; here the two convolutional filters were [1, 1, 1, 1, 1, 1, 1] and [1; 1; 1; 1; 1; 1; 1], and the mean filter was [1/6, 1/6, 1/6, 0, 1/6, 1/6, 1/6]. With this convolutional filter method, only the pixel values at the loci of co-channel interference are modified.
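A compact sketch of this preprocessing step is given below. It assumes Equation (1) is the difference of the two directional convolutions as reconstructed above, and it uses SciPy and scikit-image for the convolution and Otsu threshold; the function name and the use of an absolute value are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.filters import threshold_otsu

def remove_cochannel_interference(img, N=7):
    """Detect and erase co-channel interference (bright streaks) in a
    range-azimuth radar image using a directional convolution index."""
    img = img.astype(float)
    ones = np.ones(N)
    horiz = convolve(img, ones.reshape(1, N), mode="nearest")   # sum along azimuth
    vert = convolve(img, ones.reshape(N, 1), mode="nearest")    # sum along range
    index = np.abs(horiz - vert)                                 # assumed form of Eq. (1)
    mask = index > threshold_otsu(index)                         # Otsu picks interference pixels
    # horizontal mean filter that skips the centre pixel, e.g. [1/6, ..., 0, ..., 1/6]
    w = np.full(N, 1.0 / (N - 1))
    w[N // 2] = 0.0
    smoothed = convolve(img, w.reshape(1, N), mode="nearest")
    out = img.copy()
    out[mask] = smoothed[mask]                                   # only interference pixels are modified
    return out
```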
Although the co-channel interference has been removed, Figure 4 is still littered with bright speckles caused by the reflected signals of small objects. These speckles are treated as noise in image processing for oil spill detection. To reduce them, the Fields-of-Experts (FoE) model [20] was introduced, in which the bright speckles are extracted as prior information. To obtain this prior, binarization with a threshold calculated by Otsu’s method was first used to identify the bright areas, and connected components (objects) of fewer than 200 pixels were then extracted as speckles. Square 3 × 3 patches with 8 filters were adopted to clean up the bright speckles. After applying the FoE model, the intensity of the bright speckles was reduced significantly. A radar image with co-channel interference erased and bright speckles reduced is shown in Figure 5.
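The speckle prior can be obtained with a few lines of scikit-image. The sketch below covers only the mask-extraction step (Otsu binarisation plus the 200-pixel size criterion); the FoE denoising itself is assumed to be handled by a separate implementation.

```python
from skimage.filters import threshold_otsu
from skimage.morphology import remove_small_objects

def bright_speckle_mask(img, max_size=200):
    """Identify bright speckles: Otsu binarisation, then keep only connected
    bright components with fewer than `max_size` pixels."""
    bright = img > threshold_otsu(img)
    large = remove_small_objects(bright, min_size=max_size)  # components with >= max_size pixels
    return bright & ~large                                    # small bright components = speckles
```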
Visual interpretation by experts is one of the most common methods for oil spill detection [21]. A visual interpretation result of the radar image in Figure 2, in the Cartesian coordinate system, is shown in Figure 6.

3. Coarse Measurement

3.1. Texture Analysis

Texture is a repetitive pattern that describes a limited area [22]. The GLCM was first introduced to describe statistical features in texture analysis by Haralick [23]. The GLCM is a two-dimensional histogram of grey levels for pairs of pixels separated by a fixed spatial relationship. Texture analysis of TerraSAR-X data has been used to enhance land use/land cover (LULC) classification near Pirna, Saxony, Germany [24], and the GLCM has been utilized to detect landslides on levees from Synthetic Aperture Radar (SAR) data [25]. Besides land applications, GLCM texture analysis has also been used for oil spill monitoring from an ERS-2 SAR image in the Bohai Sea, China [26]. These studies indicate that texture analysis is an effective tool for hazard mapping, so texture analysis using the GLCM is introduced here for oil spill detection by marine radar imaging. Compared with SAR images, the intensity of the sea-surface signal captured by an X-band marine radar is affected by distance, the roughness of the sea surface, and the incidence angle [27,28]. Hence, texture characteristics derived for SAR images cannot be applied to X-band marine radar images directly.
Four textural features, namely energy, entropy, contrast, and correlation, have been shown to be independent [29]. To decrease the computational complexity of computing the four texture features on marine radar images, the subsampling window was set to 32 × 32 pixels, the offset step was set to 4 pixels, and the number of grey levels used in the GLCM was set to 32. Based on these settings, the pooled images of the four textural features are shown in Figure 7. Figure 7a shows the entropy: the bright area in the radar image has small entropy values, and the oil spill area is also included in the region of small entropy values. Figure 7b shows the energy: both the oil spill areas and the areas farther away from the radar have weak energy, so energy alone is not useful in distinguishing oil spills. Figure 7c shows the correlation: the oil spill area and places farther away have high correlation values, but it is difficult to extract the oil spill area from correlation alone. The contrast feature is shown in Figure 7d: the contrast value in the oil spill area is high, but contrast alone cannot provide oil spill extraction information. Based on Figure 7, it is impossible to extract the oil spill areas with a single texture feature.
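A sliding-window computation of these feature maps can be sketched as follows with scikit-image (version 0.19 or later naming). Treating the 4-pixel value as the window stride and using a unit pixel-pair displacement inside each window are assumptions; entropy is computed directly from the normalised GLCM because it is not one of the built-in properties.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_feature_maps(img, win=32, step=4, levels=32):
    """Sliding-window GLCM feature maps (energy, entropy, contrast, correlation).
    Returns pooled maps with one value per window position."""
    img = img.astype(float)
    # quantise the image to `levels` grey levels
    q = np.floor(levels * (img - img.min()) / (img.ptp() + 1e-12)).astype(np.uint8)
    q = np.clip(q, 0, levels - 1)
    rows = range(0, img.shape[0] - win + 1, step)
    cols = range(0, img.shape[1] - win + 1, step)
    ene = np.zeros((len(rows), len(cols)))
    ent = np.zeros_like(ene)
    con = np.zeros_like(ene)
    cor = np.zeros_like(ene)
    for a, i in enumerate(rows):
        for b, j in enumerate(cols):
            patch = q[i:i + win, j:j + win]
            glcm = graycomatrix(patch, distances=[1], angles=[0],
                                levels=levels, symmetric=True, normed=True)
            p = glcm[:, :, 0, 0]
            ene[a, b] = graycoprops(glcm, "energy")[0, 0]
            con[a, b] = graycoprops(glcm, "contrast")[0, 0]
            cor[a, b] = graycoprops(glcm, "correlation")[0, 0]
            ent[a, b] = -np.sum(p[p > 0] * np.log(p[p > 0]))  # entropy is not in graycoprops
    return ene, ent, con, cor
```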
To highlight the region containing oil spills, an index calculated from the four independent textural features of the GLCM is proposed to distinguish oil slicks. In this calculation, we assume that the contribution of each feature is equivalent. The value of each textural feature is therefore normalized to the range from 0 to 1:
\mathrm{Norm}(M) = \frac{M - \min(M)}{\max(M) - \min(M)}, (2)
where M is the value of a textural feature on the marine radar image. In the texture analysis shown in Figure 7, the contrast between the oil spill area and the water area is not significant. A logarithmic operation can be used to enhance the contrast [30], so logarithmic operations are applied to the four independent textural features. Finally, according to the performance of each textural feature for oil spill detection, the index I used for oil spill detection by marine radar imaging is calculated as follows:
I = \log\big(1 - \mathrm{Norm}(ENT)\big) + \log\big(1 - \mathrm{Norm}(ENE)\big) + \log\big(\mathrm{Norm}(COR)\big) + \log\big(\mathrm{Norm}(CON)\big), (3)
where ENT is entropy, ENE is energy, COR is correlation, and CON is contrast.
According to Equation (3), the proposed texture index computed for Figure 5 is shown in Figure 8. In Figure 8, the color of the area containing oil slicks is warmer than that of other areas, an observation that can be used to carry out oil spill segmentation.
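Under the reconstruction of Equations (2) and (3) above, the index can be computed directly from the four pooled feature maps. The small epsilon guarding the logarithms is an implementation convenience and not part of the paper.

```python
import numpy as np

def norm01(m):
    """Normalise a feature map to [0, 1] (Eq. (2))."""
    return (m - m.min()) / (m.max() - m.min() + 1e-12)

def texture_index(ene, ent, con, cor, eps=1e-6):
    """Texture index of Eq. (3); large (warm) values indicate candidate oil areas."""
    return (np.log(1.0 - norm01(ent) + eps) + np.log(1.0 - norm01(ene) + eps)
            + np.log(norm01(cor) + eps) + np.log(norm01(con) + eps))
```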

3.2. Classification by Machine Learning Algorithm

After processing with the proposed texture index, the marine radar image can be divided into two categories: the area containing oil spill and the area without oil spill. To separate the area containing oil spill, machine learning is introduced; with this approach, we do not need to test multiple thresholds to pick the best one for oil spill segmentation. Several machine learning methods can be used for classification, and the following are discussed in this section: Support Vector Machine (SVM), k-Nearest Neighbor (k-NN), Linear Discriminant Analysis (LDA), and Ensemble Learning (EL).

3.2.1. SVM

SVM has been successfully used for classification [31,32,33] and pattern recognition [34]. To separate two classes with an SVM, the goal is to find the hyperplane that maximizes the minimum distance to any data point. Suppose there is a training dataset containing l points {(x_i, y_i)}_{i=1}^{l}, with input data x_i ∈ ℝ^n and corresponding labels y_i ∈ {−1, +1} denoting the two classes. In feature space, the SVM model can be expressed as:
y(x) = \omega^{T} \varphi(x) + b, (4)
where ω is a weight vector of the same dimension as the feature space, φ(·) is the nonlinear mapping that maps the input vector into a higher-dimensional feature space, and b is the bias.
The expression used for classification using the kernel trick is:
y(x) = \operatorname{sign}\Big(\sum_{i} \alpha_i\, y_i\, K(x, x_i) + b\Big), (5)
where α_i are the support vector coefficients and K(x_i, x_j) = φ(x_i)^T φ(x_j) is the kernel function.
Several types of kernels, such as linear, polynomial, spline, radial basis function, and multi-layer perceptron kernels, can be used within the SVM.
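Equation (5) amounts to a weighted vote of the support vectors through the kernel. A small illustrative sketch, with an RBF kernel as one possible choice and the training that produces the coefficients omitted, is:

```python
import numpy as np

def rbf_kernel(x, xi, gamma=0.5):
    """One possible kernel K(x, x_i); linear and polynomial kernels are alternatives."""
    return np.exp(-gamma * np.sum((x - xi) ** 2))

def svm_decision(x, support_vectors, alphas, labels, b, kernel=rbf_kernel):
    """Kernel SVM decision rule of Eq. (5): sign of the weighted kernel sum plus bias."""
    s = sum(a * y * kernel(x, sv) for a, y, sv in zip(alphas, labels, support_vectors))
    return np.sign(s + b)
```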

3.2.2. k-NN

The k-NN algorithm is one of the most commonly used classifiers [35,36]. Suppose the datasets (X_1, Y_1), (X_2, Y_2), (X_3, Y_3), … are independent and identically distributed pairs taking values in ℝ^d × {1, 2}. A classifier C is a Borel measurable function from ℝ^d to {1, 2}, with the interpretation that a point x ∈ ℝ^d is classified as belonging to class C(x). Let (X_(1), Y_(1)), …, (X_(n), Y_(n)) denote a permutation of the training sample (X_1, Y_1), …, (X_n, Y_n) such that ‖X_(1) − x‖ ≤ … ≤ ‖X_(n) − x‖, where ‖·‖ is an arbitrary norm on ℝ^d. We define the k-NN classifier to be
C_{kNN}(x) = \begin{cases} 1, & \text{if } \frac{1}{k}\sum_{i=1}^{k} \mathbb{1}\{Y_{(i)} = 1\} \ge \frac{1}{2}, \\ 2, & \text{otherwise}, \end{cases} (6)
where 𝟙{·} is the group indicator function.
In the k-NN method, each sample in the test dataset is classified according to the closest k samples in the training dataset: among the classes of these k samples, the class with the maximum count is assigned to the test sample [36]. The distances between test samples and training samples can be calculated with the Euclidean, Manhattan, or Hamming metric.
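Equation (6) with a Euclidean metric reduces to a majority vote among the k nearest training samples, as in the following sketch (array shapes and names are illustrative):

```python
import numpy as np

def knn_classify(x, X_train, y_train, k=10):
    """k-NN rule of Eq. (6): sort training points by distance to x and take a
    majority vote among the k closest labels (labels assumed to be 1 or 2)."""
    d = np.linalg.norm(X_train - x, axis=1)      # Euclidean distances to every training sample
    nearest = y_train[np.argsort(d)[:k]]         # labels of the k nearest neighbours
    return 1 if np.mean(nearest == 1) >= 0.5 else 2
```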

3.2.3. LDA

LDA is a commonly used method for classification [37,38,39]. Suppose a set of N samples {x_1, x_2, …, x_N} lies in an n-dimensional space and the samples belong to m classes {C_1, C_2, …, C_m}. The between-class scatter S_b and the within-class scatter S_w are defined as:
S_b = \sum_{i=1}^{m} N_i (\mu_i - \mu)(\mu_i - \mu)^{T}, (7)
S_w = \sum_{i=1}^{m} \sum_{x_k \in C_i} (x_k - \mu_i)(x_k - \mu_i)^{T}, (8)
where μ is the total sample mean vector, μ_i is the mean of the samples in class C_i, and N_i is the number of samples in class C_i.
LDA finds a projection matrix A by maximizing the following objective function:
A_{LDA} = \arg\max_{A} \frac{\left| A^{T} S_b A \right|}{\left| A^{T} S_w A \right|}. (9)
According to Equation (9), LDA seeks directions on which data points of different classes are far from each other while requiring data points of the same class to be close to each other.
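Equations (7)–(9) can be evaluated directly; the sketch below builds the two scatter matrices and solves the corresponding generalised eigenvalue problem with SciPy. The small ridge added to S_w is a numerical safeguard and not part of the formulation.

```python
import numpy as np
from scipy.linalg import eigh

def lda_projection(X, y, n_components=1):
    """Compute S_b and S_w (Eqs. (7)-(8)) and solve S_b a = lambda S_w a,
    whose leading eigenvectors maximise the objective of Eq. (9)."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    n = X.shape[1]
    Sb = np.zeros((n, n))
    Sw = np.zeros((n, n))
    for c in classes:
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mu_c - mu, mu_c - mu)
        Sw += (Xc - mu_c).T @ (Xc - mu_c)
    # small ridge keeps S_w invertible when features are nearly collinear
    vals, vecs = eigh(Sb, Sw + 1e-6 * np.eye(n))
    return vecs[:, np.argsort(vals)[::-1][:n_components]]
```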

3.2.4. Ensemble Learning (EL)

EL is an effective way to enhance classification performance and accuracy in the remote sensing community [40,41,42]. EL is a machine learning paradigm that focuses on using multiple learners to solve a problem: instead of searching for the best individual learner, better performance can be reached by combining the outputs of multiple weak learners. An EL system is built from two key components, a strategy for constructing an ensemble of diverse classifiers and a rule for combining their outputs. Diversity among the base classifiers is the foundation of an effective EL system, and a commonly used way to obtain it is to train the classifiers on different training sets produced by resampling techniques such as boosting and bagging.
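As a minimal illustration of the resampling idea, scikit-learn's bagging ensemble trains each base learner on a bootstrap sample and combines the predictions by voting; this is only a generic example, not the specific ensemble configuration used later in the paper.

```python
from sklearn.ensemble import BaggingClassifier

# Each base learner (a decision tree by default) is fitted on a bootstrap
# resample of the training set; predictions are combined by majority vote.
ensemble = BaggingClassifier(n_estimators=50, bootstrap=True, random_state=0)
# ensemble.fit(X_train, y_train); ensemble.predict(X_test)
```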

3.2.5. Oil Spill Detection on the Texture Analyzed Image

To apply the machine learning methods, training data are first needed to fit the parameters of each method; classification is then carried out by the trained models. The training data selected on the image of the proposed texture index are shown in Figure 9: the data in the blue polygons are used as oil spills, and the data in the green polygons are taken as water. In the SVM method, a polynomial kernel of degree 3 is used. In the k-NN method, the value of k is 10 and the Euclidean distance is used. In the EL method, the LogitBoost algorithm with decision tree weak learners is adopted. The parameters of the LDA method are estimated from the training data. Based on this training information and these parameter settings, the oil spill detection results obtained by SVM, k-NN, LDA, and EL are shown in Figure 10, where white areas are oil spills and black areas are water. The oil spill areas in Figure 10a,b are roughly the same as the dark red areas in Figure 8. Figure 10d indicates oil spill areas containing many small speckles. The classification result in Figure 10c differs the most from the oil spill region indicated in Figure 8, as it includes a large number of water areas.
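A scikit-learn sketch of this training step, using the hyperparameters quoted above, is shown below. The toy training arrays stand in for the pixel values sampled from the hand-drawn polygons, and GradientBoostingClassifier is used as a stand-in because LogitBoost is not available in scikit-learn; both substitutions are assumptions made for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import GradientBoostingClassifier

# Toy stand-in for the training pixels: texture-index values inside the
# blue (oil, label 1) and green (water, label 0) polygons.
rng = np.random.default_rng(0)
X_train = np.concatenate([rng.normal(0.8, 0.1, (200, 1)),
                          rng.normal(0.2, 0.1, (200, 1))])
y_train = np.concatenate([np.ones(200, dtype=int), np.zeros(200, dtype=int)])

classifiers = {
    "SVM": SVC(kernel="poly", degree=3),                        # polynomial kernel, degree 3
    "k-NN": KNeighborsClassifier(n_neighbors=10, metric="euclidean"),
    "LDA": LinearDiscriminantAnalysis(),
    "EL": GradientBoostingClassifier(),                         # stand-in for LogitBoost + trees
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)

# Example: classify every pixel of a pooled texture-index map `index_map`
# labels = classifiers["SVM"].predict(index_map.reshape(-1, 1)).reshape(index_map.shape)
```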
In the machine learning step, the image of the texture index is pooled by the texture analysis. Therefore, the regions of oil spills detected on the texture index image must be restored to the original image size. The restored areas containing oil spills extracted by the four machine learning methods are shown in Figure 11. As shown in Figure 11, the oil spill areas detected by textural analysis and machine learning still contain large water areas; hence, fine measurements of the oil spills are required.

4. Fine Measurements

As mentioned before, the intensity of the reflected signal from the sea surface is affected by distance, the roughness of the sea surface, and the incidence angle, so oil spills cannot be extracted accurately with a single global threshold. Adaptive thresholding methods [43,44,45,46,47] are commonly used in image processing for segmentation. To obtain accurate oil spill information from the regions selected by textural analysis and machine learning, the adaptive thresholding method proposed by Bradley et al. [48] was used.
In the adaptive thresholding method, the average grey value of the 32 × 32 pixels around the current pixel is calculated. If the grey value of the current pixel is more than η% below this average, the pixel is classified as oil spill; otherwise, it is classified as water.
The oil spills extracted by the adaptive thresholding method share common areas with the result of visual interpretation. The higher the proportion of the common area to the total area detected by the thresholding method (γ_a), the higher the correct rate of the adaptive thresholding method. The larger the proportion of the common area to the total area of the visual interpretation result (γ_v), the more comprehensive the recognition by the adaptive thresholding method. The oil spill extraction by adaptive thresholding varies with the value of η, so a value of η that yields both a high γ_a and a high γ_v is needed. Table 2 shows the values of γ_a and γ_v obtained with different values of η. As can be seen from Table 2, when η < 0.35, γ_a is less than 0.7; when η > 0.35, γ_v is less than 0.7; only when η = 0.35 is γ_a larger than 0.8 while γ_v is around 0.8. Hence, η = 0.35 (i.e., 35%) was used in the adaptive thresholding method.
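A sketch of this local-mean thresholding rule is given below. The original method computes the window mean with an integral image, which is equivalent to the uniform filter used here, and the function name is illustrative; the rule is applied only inside the coarse oil regions produced by the machine learning step.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def bradley_threshold(img, win=32, eta=0.35):
    """Local adaptive thresholding in the spirit of Bradley and Roth: a pixel is
    marked as oil if its grey value is more than eta (here 35%) below the mean
    of the surrounding win x win neighbourhood."""
    img = img.astype(float)
    local_mean = uniform_filter(img, size=win, mode="nearest")
    return img < (1.0 - eta) * local_mean          # True = candidate oil pixel
```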
Spilled oil floating on the water appears as continuous zones. If detected oil spill areas were small and not continuous, they were treated as incorrect extractions and removed as noise. During the voyage, the ship’s wake disturbs the sea surface considerably, producing dark areas in the radar images; such areas cannot be distinguished from oil spill areas on the marine radar image. In our research, the area containing the ship’s wake, a sector from 30° to port to 30° to starboard astern, was therefore excluded from oil spill detection. Finally, the fine measurements of oil spills in the Cartesian coordinate system are shown in Figure 12, and the detected oil spills plotted on the original radar image are shown in Figure 13.
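These post-processing rules can be sketched as follows; the minimum-area value and the way the stern direction is identified (as an image column) are assumptions made for illustration.

```python
import numpy as np
from skimage.morphology import remove_small_objects

def postprocess(oil_mask, heading_col, n_azimuth=2048, min_area=500):
    """Remove small, non-contiguous detections and blank out the ship-wake sector
    (30 degrees to port and to starboard astern) in a range-azimuth oil mask."""
    mask = remove_small_objects(oil_mask.astype(bool), min_size=min_area)
    cols_per_deg = n_azimuth / 360.0
    stern = int((heading_col + n_azimuth // 2) % n_azimuth)    # column opposite the bow
    half = int(30 * cols_per_deg)
    wake_cols = (np.arange(n_azimuth) - stern) % n_azimuth
    wake = (wake_cols <= half) | (wake_cols >= n_azimuth - half)
    mask[:, wake] = False                                      # exclude the ship-wake sector
    return mask
```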

5. Results

In the actual measurement of oil spills by marine radar images, interferences such as wind turbulence and swells also appear as dark areas. To exclude these interferences, a ship-borne thermal infrared sensor produced by Zhejiang Dali Technology Co., Ltd. (Hangzhou, Zhejiang, China) was used, as shown in Figure 14a. Wind turbulence and swells do not generate temperature differences, but oil spills do. According to the thermal infrared image captured at 23:19 on 21 July 2010 on “YUKUN” (Figure 14b), the region captured by the radar images contains spilled oil rather than interferences.
To evaluate the performance of oil spill detection based on the proposed combination of texture analysis, machine learning, and adaptive thresholding, two further randomly selected marine radar images, shown in Figure 15, were processed. The radar images of Figure 15 with co-channel interference erased and small speckles reduced, in the Cartesian coordinate system, are shown in Figure 16; the processed images are cleaner than the original sampled radar images, and the low-noise images are helpful for oil spill detection by textural analysis. The visual interpretation results of Figure 15 are shown in Figure 17. The texture index for oil spill detection calculated from Figure 16 is shown in Figure 18, which indicates the oil spill areas in deep red. Several areas on the texture index images were selected for machine learning training and are also plotted in Figure 18. The final oil spill detection results using the four machine learning methods are shown in Figure 19. The oil spill detection was carried out on a personal computer whose basic configuration is shown in Table 3; the processing times of the classification using the four machine learning methods on this computer are shown in Table 4.

6. Discussion

The sampled radar image in Figure 2 is processed step by step in Section 2, Section 3, and Section 4. As shown in Figure 11, the coarse measurements of oil spills produced by the four machine learning methods contain many water areas, and the extracted areas differ between the four methods. As shown by the fine measurement results in Figure 13, all the oil spill areas extracted from the coarse measurements are congruent with the visual interpretation, and the finely extracted oil spill areas differ little between the four machine learning methods. Figure 19a,c,e,g show the oil spill detection results for the radar image in Figure 15a using SVM, k-NN, LDA, and EL, respectively; these results are comparable to the visual interpretation shown in Figure 17a, and it is difficult to identify any visual difference between the four machine learning methods. The oil spill extraction results for Figure 15b are shown in Figure 19b,d,f,h. The extracted oil spill areas are again consistent with the visual interpretation shown in Figure 17b, with few differences between the four machine learning methods in the final results. Table 4 shows that the processing time of EL is significantly longer than that of the other three methods, and that LDA requires the shortest processing time of the four.
In the oil spill detection methods proposed by Zhu et al. [14] and Xu et al. [16], a threshold had to be manually selected for each radar image. In the proposed method, no manually selected threshold is needed. Comparing Figure 13 with Figure 11, the fine measurement by adaptive thresholding is insensitive to the coarse measurement results obtained by texture analysis and machine learning, which means that the final oil spill extraction results are insensitive to the manually selected training areas for machine learning. Therefore, the proposed method is robust.
The values of γ_a for the three radar images are shown in Table 5, which indicates that over 80% of the oil spill areas extracted by the proposed method coincide with the visual interpretation results. Although the extracted oil spill areas are close to the visual interpretations, some narrow oil bands are missed.

7. Conclusions

In this paper, we proposed an oil spill detection method using texture analysis, machine learning, and adaptive thresholding based on marine radar images. First, the radar images were preprocessed by coordinate transformation and denoising. Coarse detection of oil spills was then carried out using texture analysis and machine learning. In the texture analysis, a texture index calculated from energy, entropy, correlation, and contrast was proposed to indicate the location of oil spills. Four machine learning methods (support vector machine, k-nearest neighbor, linear discriminant analysis, and ensemble learning) were used to classify oil spills on the radar images coarsely; of these, ensemble learning required the longest processing time, while linear discriminant analysis was the fastest. Finally, fine measurements were carried out using the adaptive thresholding method.
The proposed method was applied to the radar images sampled during the oil spill accident in Dalian on 21 July 2010. With the proposed method, manually set thresholds are not required, and oil spill detection is achieved semi-automatically by selecting training data on the image of the proposed texture index. The selected training data affect the coarse oil spill detection by machine learning; however, the fine measurement is insensitive to the results of the coarse measurement, so the proposed semi-automatic oil spill detection method is robust. Over 80% of the oil spill areas extracted by the proposed method coincide with the visual interpretation results, making it an effective method for oil spill detection in emergency response. Compared with the visual interpretation results, the proposed method missed some narrow oil bands; future work should improve the extraction of narrow oil bands.

Author Contributions

P.L. conceived and designed the algorithm and contributed to the texture analysis and adaptive thresholding method; Y.L. and B.L. constructed the outline for the manuscript and made the first draft of the manuscript; P.C. and B.L. carried out oil spill detection by machine learning methods; J.X. worked on radar image preprocessing.

Funding

This research was funded by the Marine Public Welfare Projects of China, grant number 201305002; the Fundamental Research Funds for the Central Universities, grant number 3132019133; and the Guidance Plan of the Provincial Natural Science Fund, grant number 201601062.

Acknowledgments

The authors of this research would like to thank all the field management staff at the teaching-training ship “YUKUN” for their support of our research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lardner, R.; Zodiatis, G. Modelling oil plumes from subsurface spills. Mar. Pollut. Bull. 2017, 124, 94–101. [Google Scholar] [CrossRef]
  2. Alves, T.M.; Kokinou, E.; Zodiatis, G.; Radhakrishnan, H.; Panagiotakis, C.; Lardner, R. Multidisciplinary oil spill modeling to protect coastal communities and the environment of the Eastern Mediterranean Sea. Sci. Rep. 2016, 6, 36882. [Google Scholar] [CrossRef] [PubMed]
  3. Alves, T.M.; Kokinou, E.; Zodiatis, G.; Lardner, R.; Panagiotakis, C.; Radhakrishnan, H. Modelling of oil spills in confined maritime basins: The case for early response in the Eastern Mediterranean Sea. Environ. Pollut. 2015, 206, 390–399. [Google Scholar] [CrossRef] [PubMed]
  4. Alves, T.M.; Kokinou, E.; Zodiatis, G. A three-step model to assess shoreline and offshore susceptibility to oil spills: The South Aegean (Crete) as an analogue for confined marine basins. Mar. Pollut. Bull. 2014, 86, 443–457. [Google Scholar] [CrossRef]
  5. Delpeche-Ellmann, N.C.; Soomere, T. Investigating the marine protected areas most at risk of current-driven pollution in the Gulf of Finland, the Baltic Sea, using a Lagrangian transport model. Mar. Pollut. Bull. 2013, 67, 121–129. [Google Scholar] [CrossRef] [PubMed]
  6. International Maritime Organization. Appendix A: Extract from regulation 12. In Proceedings of the International Convention for the Safety of Life at Sea, London, UK, 1 November 1974. [Google Scholar]
  7. Atanassov, V.; Mladenov, L.; Rangelov, R.; Savchenko, A. Observation of oil slicks on the sea surface by using marine navigation radar. In Proceedings of the Remote Sensing, Global Monitoring for Earth Management: 1991 International Geoscience and Remote Sensing Symposium, Helsinki University of Technology, Espo, Finland, 3–6 June 1991; pp. 1323–1326. [Google Scholar]
  8. Nost, E.; Egset, C.N. Oil spill detection system—Results from field trials. In Proceedings of the OCEANS 2006, Boston, MA, USA, 18–21 September 2006. [Google Scholar]
  9. Gangeskar, R. Automatic oil-spill detection by marine X-band radars. Sea Technol. 2004, 45, 40–45. [Google Scholar]
  10. SeaDarQ. Available online: http://www.seadarq.com/seadarq/products/oil-spill-detection-1 (accessed on 30 December 2018).
  11. Rutter. Available online: http://www.rutter.ca/oil-spill-detection (accessed on 30 December 2018).
  12. Bartsch, N.; Gruner, K.; Keydel, W.; Witte, F. Contributions to oil-spill detection and analysis with radar and microwave radiometry: Results of the archimedes II campaign. IEEE Trans. Geosci. Remote Sens. 1987, 25, 677–690. [Google Scholar] [CrossRef]
  13. Tennyson, E.J. Shipboard navigational radar as an oil spill tracking tool-a preliminary assessment. In Proceedings of the IEEE OCEANS 1988, Baltimore, MD, USA, 31 October–2 November 1988. [Google Scholar]
  14. Zhu, X.; Li, Y.; Feng, H.; Liu, B.; Xu, J. Oil spill detection method using X-band marine radar imagery. J. Appl. Remote Sens. 2015, 9, 123–129. [Google Scholar] [CrossRef]
  15. Liu, P.; Li, Y.; Xu, J.; Zhu, X. Adaptive enhancement of X-band marine radar imagery to detect oil spill segments. Sensors 2017, 17, 2349. [Google Scholar]
  16. Xu, J.; Liu, P.; Wang, H.; Lian, J.; Li, B. Marine radar oil spill monitoring technology based on dual-threshold and c–v level set methods. J. Indian Soc. Remote. 2018, 46, 1949–1961. [Google Scholar] [CrossRef]
  17. Otsu, N. Threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  18. Sezgin, M.; Sankur, B. Survey over image thresholding techniques and quantitative performance evaluation. J. Electron. Imaging 2004, 13, 146–165. [Google Scholar]
  19. Kundu, A.; Mitra, S.K.; Vaidyanathan, P.P. Application of two-dimensional generalized mean filtering for removal of impulse noises from images. IEEE Trans. Acoust. Speech 1984, 32, 600–609. [Google Scholar] [CrossRef]
  20. Roth, S.; Black, M.J. Fields of Experts. Int. J. Comput. Vis. 2009, 82, 205–229. [Google Scholar] [CrossRef]
  21. Akar, S.; Mehmet, L.S.; Kaymakci, N. Detection and object-based classification of offshore oil slicks using ENVISAT-ASAR images. Environ. Monit. Assess. 2011, 183, 409–423. [Google Scholar] [CrossRef] [PubMed]
  22. Tamura, H.; Mori, S.; Yamawaki, T. Textural features corresponding to visual perception. IEEE Trans. Syst. Man Cybern. 1978, 8, 460–473. [Google Scholar] [CrossRef]
  23. Haralick, R.M. Statistical and structural approaches to texture. Proc. IEEE 1979, 67, 786–804. [Google Scholar] [CrossRef]
  24. Mahmoud, A.; Elbialy, S.; Pradhan, B.; Buchroithner, M. Field-based landcover classification using TerraSAR-X texture analysis. Adv. Space Res. 2011, 48, 799–805. [Google Scholar] [CrossRef]
  25. Lee, M.A.; Aanstoos, J.V.; Bruce, L.M.; Prasad, S. Application of omnidirectional texture analysis to SAR images for levee landslide detection. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Munich, Germany, 22–27 July 2012. [Google Scholar]
  26. Wei, L.; Hu, Z.; Guo, M.; Jiang, M.; Zhang, S. Texture feature analysis in oil spill monitoring by SAR image. In Proceedings of the 20th International Conference on Geoinformatics, Hong Kong, China, 15–17 June 2012. [Google Scholar]
  27. Nghiem, S.V.; Li, F.K.; Lou, S.H.; Neumann, G. Ocean remote sensing with airborne Ku-band scatterometer. In Proceedings of the OCEANS’93, Victoria, BC, Canada, 18–21 October 1993; pp. 120–124. [Google Scholar]
  28. Schroeder, L.; Schaffner, P.; Mitchell, J.; Jones, W. AAFE RADSCAT 13.9-GHz measurements and analysis: Wind-speed signature of the ocean. IEEE J. Ocean. Eng. 1985, 10, 346–357. [Google Scholar] [CrossRef]
  29. Ulaby, F.T.; Kouyate, F.; Brisco, B.; Williams, T.H.L. Textural information in SAR images. IEEE Trans. Geosci. Remote Sens. 1986, 24, 235–245. [Google Scholar] [CrossRef]
  30. Jähne, B. Digital Image Processing, 6th ed.; Springer: Berlin, Germany, 2005; p. 78. [Google Scholar]
  31. David, A.; Lerner, B. Support vector machine-based image classification for genetic syndrome diagnosis. Pattern Recogn. Lett. 2005, 26, 1029–1038. [Google Scholar] [CrossRef]
  32. Wang, X.Y.; Wang, Q.Y.; Yang, H.Y.; Bu, J. Color image segmentation using automatic pixel classification with support vector machine. Neurocomputing 2011, 74, 3898–3911. [Google Scholar] [CrossRef]
  33. Wang, X.Y.; Zhang, X.J.; Yang, H.Y.; Bu, J. A pixel-based color image segmentation using support vector machine and fuzzy C-means. Neural Netw. 2012, 33, 148–159. [Google Scholar] [CrossRef] [PubMed]
  34. Burges, C.J.C. A tutorial on support vector machines for pattern recognition. Data Min. Knowl. Disc. 1998, 2, 121–167. [Google Scholar] [CrossRef]
  35. Samworth, R.J. Optimal weighted nearest neighbour classifiers. Ann. Stat. 2012, 40, 2733–2763. [Google Scholar] [CrossRef]
  36. Decaestecker, C.; Salmon, I.; Dewitte, O.; Camby, I.; Ham, V.; Pasteels, J.L.; Brotchi, J.; Kiss, R. Nearest-neighbor classification for identification of aggressive versus nonaggressive low-grade astrocytic tumors by means of image cytometry-generated variables. J. Neurosurg. 1997, 86, 532–537. [Google Scholar] [CrossRef] [PubMed]
  37. Ng, M.K.; Liao, L.Z.; Zhang, L. On sparse linear discriminant analysis algorithm for high-dimensional data classification. Numer. Linear Algebra Appl. 2011, 18, 223–235. [Google Scholar] [CrossRef]
  38. Peng, J.; Luo, T. Sparse matrix transform-based linear discriminant analysis for hyperspectral image classification. Signal Image Video 2016, 10, 761–768. [Google Scholar] [CrossRef]
  39. Ye, Q.; Ye, N.; Yin, T. Fast orthogonal linear discriminant analysis with application to image classification. Neurocomputing 2015, 158, 216–224. [Google Scholar] [CrossRef]
  40. Samat, A.; Du, P.; Baig, M.H.A.; Chakravarty, S.; Cheng, L. Ensemble learning with multiple classifiers and polarimetric features for polarized SAR image classification. Photogramm. Eng. Rem. 2014, 80, 239–251. [Google Scholar] [CrossRef]
  41. Wu, Q.; Wang, L.W.; Wu, J. Ensemble learning on hyperspectral remote sensing image classification. Adv. Mater. Res. 2012, 546–547, 508–513. [Google Scholar] [CrossRef]
  42. Merentitis, A.; Debes, C.; Heremans, R. Ensemble learning in hyperspectral image classification: Toward selecting a favorable bias-variance tradeoff. IEEE J.-STARS 2014, 7, 1089–1102. [Google Scholar] [CrossRef]
  43. Niblack, W. An Introduction to Digital Image Processing; Prentice Hall: Upper Saddle River, NJ, USA, 1986. [Google Scholar]
  44. Sauvola, J.; Pietikäinen, M. Adaptive document image binarization. Pattern Recognit. 2000, 33, 225–236. [Google Scholar] [CrossRef]
  45. White, J.M.; Rohrer, G.D. Image thresholding for optical character recognition and other applications requiring character image extraction. IBM J. Res. Dev. 1983, 27, 400–411. [Google Scholar] [CrossRef]
  46. Yanowitz, S.D.; Bruckstein, A.M. A new method for image segmentation. Comput. Graph. Image Process. 1989, 46, 82–95. [Google Scholar] [CrossRef]
  47. Wall, R.J. The Gray Level Histogram for Threshold Boundary Determination in Image Processing to the Scene Segmentation Problem in Human Chromosome Analysis. Ph.D. Thesis, University of California at Los Angeles, Los Angeles, CA, USA, 1974. [Google Scholar]
  48. Bradley, D.; Roth, G. Adaptive thresholding using the Integral Image. J. Graph. Tools 2007, 12, 13–21. [Google Scholar] [CrossRef]
Figure 1. Flow chart of marine radar image processing for oil spill detection.
Figure 2. (a) Navigation route of ship “YUKUN” on 21 July 2010; (b) sampled marine radar image at 23:19:32 (red area of Figure 2a).
Figure 3. Converted radar image in the Cartesian coordinate system.
Figure 4. Radar image with co-channel interference removed.
Figure 5. Radar image with co-channel interference erased and bright speckles reduced.
Figure 6. Visual interpretation result of oil spills (shown in red) in the Cartesian coordinate system for the radar image sampled at 23:19:32, 21 July 2010.
Figure 7. Textural features of the grey level co-occurrence matrix (GLCM) on the marine radar image: (a) entropy, which indicates oil spills with small values; (b) energy, which indicates oil spills and far-distance areas with small values; (c) correlation, which indicates oil spills and certain areas farther away with high values; (d) contrast, which indicates oil spills and areas farther away with high values.
Figure 8. Texture index for oil spill detection on the marine radar image, which indicates oil spills in the dark red area.
Figure 9. Selected training data for the 4 machine learning methods, where green polygons are training data of water and blue polygons are training data of oil spills.
Figure 10. Oil spill segmentation based on the image of the proposed texture index by 4 machine learning methods: (a) Support Vector Machine (SVM), whose result is roughly the same as the oil region indicated in Figure 8; (b) k-Nearest Neighbor (k-NN), whose result is roughly the same as the oil region indicated in Figure 8; (c) Linear Discriminant Analysis (LDA), whose result contains many water areas; and (d) Ensemble Learning (EL), whose result contains many noisy speckles.
Figure 11. Selected areas containing oil spills and the surrounding water on the marine radar image at its original size by 4 machine learning methods: (a) SVM, (b) k-NN, (c) LDA, (d) EL.
Figure 12. Fine measurement results of oil spills (shown in white) based on coarsely extracted areas by 4 machine learning methods in the Cartesian coordinate system: (a) SVM, (b) k-NN, (c) LDA, (d) EL.
Figure 13. Fine measurements of oil spills (shown in red) marked on the marine radar image in the polar coordinate system based on coarsely extracted areas by 4 machine learning methods: (a) SVM, (b) k-NN, (c) LDA, (d) EL.
Figure 14. An oil spill image monitored by a thermal infrared sensor: (a) thermal infrared sensor used in this study; (b) an image captured by the thermal infrared sensor.
Figure 15. Two sampled radar images: (a) sampled at 23:15:45, 21 July 2010 and (b) sampled at 23:19:22, 21 July 2010.
Figure 16. Preprocessed radar images of: (a) Figure 15a and (b) Figure 15b.
Figure 17. Visual interpretation results of oil spills (shown in red) in two radar images: (a) Figure 15a and (b) Figure 15b.
Figure 18. Texture index images of: (a) Figure 16a and (b) Figure 16b. The green polygons are training data of water, and the blue polygons are training data of oil spills.
Figure 19. Final oil spill detection results (shown in red): (a) Figure 15a using SVM, (b) Figure 15b using SVM, (c) Figure 15a using k-NN, (d) Figure 15b using k-NN, (e) Figure 15a using LDA, (f) Figure 15b using LDA, (g) Figure 15a using EL, and (h) Figure 15b using EL.
Table 1. Main parameters of the X-band marine radar.

Name                         Parameter
Working frequency            9.41 GHz
Antenna length               8 ft
Detection range              0.5–12 nautical miles
Horizontal direction         360°
Vertical direction           ±10°
Peak power                   25 kW
Pulse width                  50 ns / 250 ns / 750 ns
Pulse repetition frequency   3000 Hz / 1800 Hz / 785 Hz
Table 2. The values of γ_a and γ_v using different values of η in the adaptive thresholding method.

η      Metric   SVM      k-NN     LDA      EL
0.10   γ_a      0.3014   0.2899   0.1744   0.2494
       γ_v      0.8788   0.8833   0.9496   0.9287
0.15   γ_a      0.3574   0.3411   0.2189   0.3122
       γ_v      0.8771   0.8815   0.9278   0.927
0.20   γ_a      0.4403   0.4351   0.3169   0.4105
       γ_v      0.8462   0.8506   0.9161   0.896
0.25   γ_a      0.5176   0.5111   0.4315   0.4972
       γ_v      0.8215   0.8348   0.8809   0.88
0.30   γ_a      0.6184   0.6143   0.582    0.6338
       γ_v      0.805    0.8053   0.8719   0.8623
0.35   γ_a      0.8009   0.813    0.876    0.8096
       γ_v      0.7782   0.7788   0.7936   0.8161
0.40   γ_a      0.9485   0.9484   0.9597   0.9583
       γ_v      0.6632   0.6641   0.547    0.6237
0.45   γ_a      0.9738   0.9736   0.972    0.9713
       γ_v      0.5032   0.5035   0.4597   0.4651
0.50   γ_a      0.9766   0.9763   0.9762   0.9764
       γ_v      0.3685   0.3682   0.318    0.3547
Table 3. The basic configuration of the personal computer used for oil spill detection.

Configuration   Type
CPU             Intel® Core™ i5-4300U
Memory          8 GB
Display card    Intel HD Graphics 4400
Hard disc       Solid State Drive, 128 GB
Table 4. Calculation time of the four machine learning methods on three marine radar images.

Machine Learning Method   Processing Time (s)
                          Figure 9    Figure 18a   Figure 18b
SVM                       0.122       0.077        0.315
k-NN                      0.327       0.330        0.670
LDA                       0.028       0.023        0.217
EL                        2.403       2.659        3.567
Table 5. The value of γ_a for the three radar images.

Radar Image   SVM      k-NN     LDA      EL
Figure 2      0.8009   0.813    0.876    0.8096
Figure 15a    0.8144   0.83     0.861    0.8379
Figure 15b    0.8731   0.873    0.8295   0.8669
