Exploring the Potential of Active Learning for Automatic Identification of Marine Oil Spills Using 10-Year (2004–2013) RADARSAT Data
Abstract
1. Introduction
2. Dataset and Methods
2.1. Dataset
2.2. Classifiers Used
2.2.1. Support Vector Machine (SVM)
2.2.2. K Nearest Neighbors
2.2.3. Linear Discriminant Analysis
2.2.4. Decision Tree
2.3. Active Learning
2.3.1. Six Basic ACS Methods
2.3.2. Adjusting Sample Preference in Iterations
2.3.3. Reducing Redundancy amongst Samples (RRAS)
2.4. Performance Measures
2.4.1. Overall Performance
2.4.2. High TPR Performance
2.4.3. Sorting Performance
2.5. Cost Reduction Measure
2.6. Initial Training Set
3. Results and Discussion
3.1. Performance of ACS Methods
3.2. Cost Reduction Using ACS Methods
3.3. Reducing Redundancy Amongst Samples (RRAS)
3.4. Adjusting Sample Preference in Iterations
4. Conclusions
Acknowledgments
Author Contributions
Conflicts of Interest
References
- Brekke, C.; Solberg, A.H.S. Oil spill detection by satellite remote sensing. Remote Sens. Environ. 2005, 95, 1–13.
- Salberg, A.B.; Rudjord, Ø.; Solberg, A.H.S. Oil spill detection in hybrid-polarimetric SAR images. IEEE Trans. Geosci. Remote Sens. 2014, 52, 6521–6533.
- Singha, S.; Vespe, M.; Trieschmann, O. Automatic Synthetic Aperture Radar based oil spill detection and performance estimation via a semi-automatic operational service benchmark. Mar. Pollut. Bull. 2013, 73, 199–209.
- Solberg, A.H.S.; Storvik, G.; Solberg, R.; Volden, E. Automatic detection of oil spills in ERS SAR images. IEEE Trans. Geosci. Remote Sens. 1999, 37, 1916–1924.
- Solberg, A.H.S. Remote Sensing of Ocean Oil-Spill Pollution. Proc. IEEE 2012, 100, 2931–2945.
- Topouzelis, K. Oil Spill Detection by SAR Images: Dark Formation Detection, Feature Extraction and Classification Algorithms. Sensors 2008, 8, 6642–6659.
- Topouzelis, K.; Stathakis, D.; Karathanassi, V. Investigation of genetic algorithms contribution to feature selection for oil spill detection. Int. J. Remote Sens. 2009, 30, 611–625.
- Topouzelis, K.; Psyllos, A. Oil spill feature selection and classification using decision tree forest on SAR image data. ISPRS J. Photogramm. Remote Sens. 2012, 68, 135–143.
- Brekke, C.; Solberg, A.H.S. Classifiers and confidence estimation for oil spill detection in ENVISAT ASAR images. IEEE Geosci. Remote Sens. Lett. 2008, 5, 65–69.
- Solberg, A.H.S.; Brekke, C.; Husøy, P.O. Oil spill detection in Radarsat and Envisat SAR images. IEEE Trans. Geosci. Remote Sens. 2007, 45, 746–754.
- Del Frate, F.; Petrocchi, A.; Lichtenegger, J.; Calabresi, G. Neural networks for oil spill detection using ERS-SAR data. IEEE Trans. Geosci. Remote Sens. 2000, 38, 2282–2287.
- Topouzelis, K.; Karathanassi, V.; Pavlakis, P.; Rokos, D. Detection and discrimination between oil spills and look-alike phenomena through neural networks. ISPRS J. Photogramm. Remote Sens. 2007, 62, 264–270.
- Topouzelis, K.; Karathanassi, V.; Pavlakis, P.; Rokos, D. Dark formation detection using neural networks. Int. J. Remote Sens. 2008, 29, 4705–4720.
- Topouzelis, K.; Karathanassi, V.; Pavlakis, P.; Rokos, D. Potentiality of feed-forward neural networks for classifying dark formations to oil spills and look-alikes. Geocarto Int. 2009, 24, 179–191.
- Mera, D.; Cotos, J.M.; Varela-Pet, J.; Rodríguez, P.G.; Caro, A. Automatic decision support system based on SAR data for oil spill detection. Comput. Geosci. 2014, 72, 184–191.
- Singha, S.; Velotto, D.; Lehner, S. Near real time monitoring of platform sourced pollution using TerraSAR-X over the North Sea. Mar. Pollut. Bull. 2014, 86, 379–390.
- Kubat, M.; Holte, R.C.; Matwin, S. Machine learning for the detection of oil spills in satellite radar images. Mach. Learn. 1998, 30, 195–215.
- Nirchio, F.; Sorgente, M.; Giancaspro, A.; Biamino, W.; Parisato, E.; Ravera, R.; Trivero, P. Automatic detection of oil spills from SAR images. Int. J. Remote Sens. 2005, 26, 1157–1174.
- Karathanassi, V.; Topouzelis, K.; Pavlakis, P.; Rokos, D. An object-oriented methodology to detect oil spills. Int. J. Remote Sens. 2006, 27, 5235–5251.
- Mercier, G.; Girard-Ardhuin, F. Partially supervised oil-slick detection by SAR imagery using kernel expansion. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2839–2846.
- Ramalho, G.L.B.; Medeiros, F.N.S. Oil Spill Detection in SAR Images Using Neural Networks. In Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), Hong Kong, China, 20–24 August 2006.
- Xu, L.; Li, J.; Brenning, A. A comparative study of different classification techniques for marine oil spill identification using RADARSAT-1 imagery. Remote Sens. Environ. 2014, 141, 14–23.
- Hastie, T.; Tibshirani, R.; Friedman, J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd ed.; Springer: New York, NY, USA, 2009; pp. 22–26. ISBN 978-0387848570.
- Pavlakis, P.; Tarchi, D.; Sieber, A.J. On the monitoring of illicit vessel discharges using spaceborne SAR remote sensing—A reconnaissance study in the Mediterranean Sea. Ann. Telecommun. 2001, 56, 700–718.
- Indregard, M.; Solberg, A.H.S.; Clayton, P. D2-Report on Benchmarking Oil Spill Recognition Approaches and Best Practice; Technical Report, Archive No. 04-10225-A-Doc, Contract No. EVK2-CT-2003-00177; European Commission: Brussels, Belgium, 2004.
- Ferraro, G.; Meyer-Roux, S.; Muellenhoff, O.; Pavliha, M.; Svetak, J.; Tarchi, D.; Topouzelis, K. Long term monitoring of oil spills in European seas. Int. J. Remote Sens. 2009, 30, 627–645.
- Settles, B. Active Learning Literature Survey; Computer Sciences Technical Report 1648; University of Wisconsin–Madison: Madison, WI, USA, 2009.
- Wang, M.; Hua, X.-S. Active learning in multimedia annotation and retrieval. ACM Trans. Intell. Syst. Technol. 2011, 2, 1–21.
- Fu, Y.; Zhu, X.; Li, B. A survey on instance selection for active learning. Knowl. Inf. Syst. 2013, 35, 249–283.
- Ferecatu, M.; Boujemaa, N. Interactive remote-sensing image retrieval using active relevance feedback. IEEE Trans. Geosci. Remote Sens. 2007, 45, 818–826.
- Rajan, S.; Ghosh, J.; Crawford, M.M. An Active Learning Approach to Hyperspectral Data Classification. IEEE Trans. Geosci. Remote Sens. 2008, 46, 1231–1242.
- Demir, B.; Persello, C.; Bruzzone, L. Batch-mode active-learning methods for the interactive classification of remote sensing images. IEEE Trans. Geosci. Remote Sens. 2011, 49, 1014–1031.
- Persello, C.; Bruzzone, L. Active and Semisupervised Learning for the Classification of Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2014, 52, 6937–6956.
- Chen, R.; Cao, Y.F.; Sun, H. Active sample-selecting and manifold learning-based relevance feedback method for synthetic aperture radar image retrieval. IET Radar Sonar Navig. 2011, 5, 118.
- Cui, S.; Dumitru, C.O.; Datcu, M. Semantic annotation in earth observation based on active learning. Int. J. Image Data Fusion 2014, 5, 152–174.
- Samat, A.; Gamba, P.; Du, P.; Luo, J. Active extreme learning machines for quad-polarimetric SAR imagery classification. Int. J. Appl. Earth Obs. Geoinf. 2015, 35, 305–319.
- Cao, Y.F.; Xu, L.; Clausi, D. Active learning for identifying marine oil spills using 10-year RADARSAT data. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 7722–7725.
- Espedal, H.A.; Wahl, T. Satellite SAR oil spill detection using wind history information. Int. J. Remote Sens. 1999, 20, 49–65.
- Hwang, P.A.; Stoffelen, A.; van Zadelhoff, G.-J.; Perrie, W.; Zhang, B.; Li, H.; Shen, H. Cross-polarization geophysical model function for C-band radar backscattering from the ocean surface and wind speed retrieval. J. Geophys. Res. Oceans 2015, 120, 893–909.
- Salvatori, L.; Bouchaib, S.; Del Frate, F.; Lichtenegger, J.; Smara, Y. Estimating the Wind Vector from Radar SAR Images When Applied to the Detection of Oil Spill Pollution. In Proceedings of the Fifth International Symposium on GIS and Computer Cartography for Coastal Zone Management (CoastGIS '03), Genoa, Italy, 16–18 October 2003.
- Hu, M.K. Visual pattern recognition by moment invariants. IEEE Trans. Inf. Theory 1962, IT-8, 179–187.
- Khotanzad, A.; Hong, Y.H. Invariant image recognition by Zernike moments. IEEE Trans. Pattern Anal. Mach. Intell. 1990, 12, 489–497.
- Mountrakis, G.; Im, J.; Ogole, C. Support vector machines in remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2011, 66, 247–259.
- Chang, C.C.; Lin, C.J. LIBSVM: A Library for Support Vector Machines. Available online: http://www.csie.ntu.edu.tw/~cjlin/libsvm/ (accessed on 1 September 2015).
- Zhu, H.; Basir, O. An adaptive fuzzy evidential nearest neighbor formulation for classifying remote sensing images. IEEE Trans. Geosci. Remote Sens. 2005, 43, 1874–1889.
- Blanzieri, E.; Melgani, F. Nearest neighbor classification of remote sensing images with the maximal margin principle. IEEE Trans. Geosci. Remote Sens. 2008, 46, 1804–1811.
- Yang, J.-M.; Yu, P.-T.; Kuo, B.-C. A Nonparametric Feature Extraction and Its Application to Nearest Neighbor Classification for Hyperspectral Image Data. IEEE Trans. Geosci. Remote Sens. 2010, 48, 1279–1293.
- Hastie, T.; Buja, A.; Tibshirani, R. Penalized Discriminant Analysis. Ann. Stat. 1995, 23, 73–102.
- Chen, L.F.; Liao, H.Y.M.; Ko, M.T.; Lin, J.C.; Yu, G.J. A new LDA-based face recognition system which can solve the small sample size problem. Pattern Recognit. 2000, 33, 1713–1726.
- Wang, X.; Tang, X. Dual-space linear discriminant analysis for face recognition. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, 27 June–2 July 2004; Volume 2, pp. 564–569.
- Li, P.; Fu, Y.; Mohammed, U.; Elder, J.H.; Prince, S.J.D. Probabilistic models for inference about identity. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 144–157.
- Zhang, D.; He, J.; Zhao, Y.; Luo, Z.; Du, M. Global plus local: A complete framework for feature extraction and recognition. Pattern Recognit. 2014, 47, 1433–1442.
- Breiman, L.; Friedman, J.H.; Olshen, R.A.; Stone, C.J. Classification and Regression Trees; Taylor & Francis Ltd.: Burlington, MA, USA, 1984; Volume 5.
- Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
- Hothorn, T.; Lausen, B. Bundling classifiers by bagging trees. Comput. Stat. Data Anal. 2005, 49, 1068–1078.
- Friedman, J.H. Greedy Function Approximation: A Gradient Boosting Machine. Ann. Stat. 2001, 29, 1189–1232.
- Wu, T.F.; Lin, C.J.; Weng, R.C. Probability estimates for multi-class classification by pairwise coupling. J. Mach. Learn. Res. 2004, 5, 975–1005.
- Alvarez, I.; Bernard, S.; Deffuant, G. Keep the decision tree and estimate the class probabilities using its decision boundary. In Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI), Hyderabad, India, 6–12 January 2007; pp. 654–659.
- Cebron, N.; Berthold, M.R. Active learning for object classification: From exploration to exploitation. Data Min. Knowl. Discov. 2009, 18, 283–299.
- Swets, J.A. Measuring the accuracy of diagnostic systems. Science 1988, 240, 1285–1293.
- Fawcett, T. An introduction to ROC analysis. Pattern Recognit. Lett. 2006, 27, 861–874.
- Davis, J.; Goadrich, M. The Relationship between Precision-Recall and ROC Curves. In Proceedings of the 23rd International Conference on Machine Learning (ICML '06), Pittsburgh, PA, USA, 25–29 June 2006; pp. 233–240.
Literature | SAR Sensor | # Images | # Samples | # Samples Verified | Location (Time Span)
---|---|---|---|---|---
[22] | RADARSAT-1 | 93 | (98, 94) | (21, N/A) | West and East coasts of Canada (2004–2008)
[9] | Envisat | 103 | (41, 12, 245) | N/A | Baltic Sea and North Sea (2003–2005)
[10] | Envisat, RADARSAT-1 | 127 | (37, 12, 110) | (29, N/A) | Baltic Sea and North Sea (2003–2004)
[4] | Envisat | 84 | (71, 6980) | N/A | European waters (N/A)
[7,8,12,13,14] | ERS-2 | 24 | (69, 90) | N/A | Mediterranean Sea (N/A)
[19] | ERS-1/2 | 12 | N/A | N/A | Mediterranean Sea (N/A)
[24] | ERS-1/2 | 1600 | (1638, N/A) | N/A | Mediterranean Sea (1999)
[18] | ERS-1/2, Envisat | N/A | (153, 237) | N/A | Mediterranean Sea (N/A)
[26] | ERS-1/2, Envisat | 15,533 | (9299, N/A) | N/A | Mediterranean Sea (1999–2004)
[26] | ERS-1/2, Envisat | 3165 | (1227, N/A) | N/A | Black Sea (1999–2004)
[15] | Envisat | 47 | (80, 155) | N/A | Galicia coast, Spain (2007–2011)
[3] | Envisat, RADARSAT | 118 | (361, 5728) | N/A | European waters (2009–2012)
[17] | ERS-1, RADARSAT-1 | 9 | (41, 896) | N/A | N/A
No. | Type | Features | Code
---|---|---|---
1 | Geometric | Target area in number of pixels | A
2 | | Target perimeter in number of pixels | P
3 | | Target complexity measure C1 = P^2/A | C1
4 | | Target complexity measure C2 = P/A | C2
5 | | Target complexity measure C3 = P/(2*sqrt(pi*A)) | C3
6 | | Length of the major axis of the ellipse that has the same normalized second central moments as the object region | Length
7 | | Length of the minor axis of the ellipse that has the same normalized second central moments as the object region | Width
8 | | Eccentricity of the ellipse that has the same second moments as the object region | Ecce.
9 | | Target spreading measure S = Length/Width | S
10–12 | | The first three of Hu's invariant planar moments [41] | H1–H3
13–19 | | The first seven Zernike moments [42] | Z1–Z7
20 | Physical | Average intensity value of the object | MeO
21 | | Standard deviation of the intensity values of the object | SDO
22 | | Average intensity value of the background area (a limited area near and outside the object) | MeB
23 | | Standard deviation of the intensity values of the background area | SDB
24 | | Maximum intensity value of the object | MaxO
25 | | Minimum intensity value of the object | MinO
26 | | Power-to-mean ratio of the object, SDO/MeO | PMRO
27 | | Power-to-mean ratio of the background area, SDB/MeB | PMRB
28 | | Ratio between MeO and MeB | MeR
29 | | Ratio between SDO and SDB | SDR
30 | | Ratio between PMRO and PMRB | PMRR
31 | | Difference between MeB and MeO | MeD
32 | | Difference between SDB and SDO | SDD
33 | | Difference between PMRB and PMRO | PMRD
34 | | Difference between MeB and MinO | MaxC
35 | | Average gradient value of the object area | MeGO
36 | | Standard deviation of the gradient values of the object area | SDGO
37 | | Average gradient value of the background area | MeGB
38 | | Standard deviation of the gradient values of the background area | SDGB
39 | | Average gradient value of the object border area | MeGBo
40 | | Standard deviation of the gradient values of the object border area | SDGBo
41 | | Maximum gradient value of the object | MaxGO
42 | | Minimum gradient value of the object | MinGO
43 | | Ratio between SDGO and MeGO | PMRGO
44 | | Ratio between SDGB and MeGB | PMRGB
45 | | Ratio between MeGO and MeGB | MeGR
46 | | Ratio between SDGO and SDGB | SDGR
47 | | Ratio between PMRGB and PMRGO | PMRGR
48 | | Difference between MeGB and MeGO | MeGD
49 | | Difference between SDGB and SDGO | SDGD
50 | | Difference between PMRGB and PMRGO | PMRGD
51 | | Difference between MeGB and MinGO | MaxGC
52 | Textural | GLCM contrast | Cont.
53 | | GLCM correlation | Corr.
54 | | GLCM energy | Ener.
55 | | GLCM homogeneity | Homo.
56 | | GLCM entropy | Entr.
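To make the feature definitions concrete, the sketch below computes a handful of them (A, P, C1–C3, the ellipse measures, and the main object/background contrast statistics) from a binary object mask and a SAR intensity image. This is a minimal illustration of the formulas in the table, not the authors' implementation: the function name, the use of NumPy/scikit-image, and the background ring of `bg_width` pixels standing in for the "limited area near and outside [the] object" are our assumptions.

```python
# Illustrative sketch (not the paper's code): computing a few of the
# table's features from a binary object mask and a SAR intensity image.
import numpy as np
from skimage.measure import label, regionprops
from skimage.morphology import binary_dilation, disk

def object_features(intensity, mask, bg_width=10):
    """Selected geometric and physical features for one dark object."""
    props = regionprops(label(mask.astype(int)))[0]
    A = props.area                           # feature 1: area in pixels
    P = props.perimeter                      # feature 2: perimeter in pixels
    feats = {
        "A": A,
        "P": P,
        "C1": P**2 / A,                      # complexity C1 = P^2 / A
        "C2": P / A,                         # complexity C2 = P / A
        "C3": P / (2 * np.sqrt(np.pi * A)),  # perimeter vs. equal-area circle
        "Length": props.major_axis_length,   # major axis of moment ellipse
        "Width": props.minor_axis_length,    # minor axis of moment ellipse
        "Ecce": props.eccentricity,
    }
    # Background = ring of bg_width pixels just outside the object mask.
    ring = binary_dilation(mask, disk(bg_width)) & ~mask
    obj_px, bg_px = intensity[mask], intensity[ring]
    MeO, SDO = obj_px.mean(), obj_px.std()
    MeB, SDB = bg_px.mean(), bg_px.std()
    feats.update({
        "MeO": MeO, "SDO": SDO, "MeB": MeB, "SDB": SDB,
        "PMRO": SDO / MeO,                   # power-to-mean ratio, object
        "PMRB": SDB / MeB,                   # power-to-mean ratio, background
        "MeR": MeO / MeB,                    # object/background mean ratio
        "MeD": MeB - MeO,                    # mean difference (contrast)
    })
    return feats
```

The gradient features (35–51) follow the same object/background pattern applied to a gradient-magnitude image rather than the raw intensities.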
Method | Parameters | Description |
---|---|---|
ACS-1 | NULL | Randomly select samples |
ACS-2 | w1 = 1, w2 = 0, w3 = 0 | Prefer samples more like oil spills |
ACS-3 | w1 = 0, w2 = 1, w3 = 0 | Prefer samples more like look-alikes |
ACS-4 | w1 = 0.5, w2 = 0.5, w3 = 0 | Take half of the samples from ACS-2 and half from ACS-3
ACS-5 | w1 = 0, w2 = 0, w3 = 1 | Prefer samples with high uncertainty of classification |
ACS-6 | w1 = 0.25, w2 = 0.25, w3 = 0.5 | Take half of the samples from ACS-4 and the other half from ACS-5 |
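Reading the weights w1–w3 as the fractions of each query batch drawn by the three ranking criteria (most oil-spill-like, most look-alike-like, most uncertain), one round of sample selection can be sketched as below. This interpretation, and every name in the code, is our assumption for illustration; the paper's exact ACS procedure is defined in Section 2.3.1.

```python
# Illustrative sketch (our naming, not the paper's code): one active
# sample-selection round drawing fractions w1/w2/w3 of the batch by
# the three criteria behind ACS-2, ACS-3, and ACS-5.
import numpy as np

def select_batch(proba_oil, batch_size, w=(0.25, 0.25, 0.5), rng=None):
    """Pick indices of unlabeled samples to send for manual labeling.

    proba_oil: classifier's P(oil spill) for every unlabeled sample.
    w = (w1, w2, w3): batch fractions for oil-like, look-alike-like,
    and uncertain samples; all-zero weights fall back to ACS-1 (random).
    """
    rng = rng or np.random.default_rng()
    if not any(w):                                    # ACS-1: random
        return rng.choice(len(proba_oil), size=batch_size, replace=False)
    quotas = [int(round(wi * batch_size)) for wi in w]
    quotas[2] = batch_size - quotas[0] - quotas[1]    # make counts add up
    rankings = [np.argsort(-proba_oil),               # most oil-like first
                np.argsort(proba_oil),                # most look-alike first
                np.argsort(np.abs(proba_oil - 0.5))]  # most uncertain first
    chosen = []
    for order, quota in zip(rankings, quotas):
        for i in order:
            if quota == 0:
                break
            if int(i) not in chosen:                  # avoid duplicates
                chosen.append(int(i))
                quota -= 1
    return np.asarray(chosen)
```

In each iteration, the selected samples would be labeled by the analyst, appended to the training set, and the classifier retrained before the next round.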
| | Predicted as Positive | Predicted as Negative |
|---|---|---|
| Actually Positive | True Positives (TP) | False Negatives (FN) |
| Actually Negative | False Positives (FP) | True Negatives (TN) |
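The measures in Section 2.4 derive from this table: the true positive rate is TPR = TP/(TP + FN), the false positive rate is FPR = FP/(FP + TN), and ranking ("sorting") quality is conventionally summarized by the area under the ROC curve [60,61]. A small sketch with function names of our own choosing, offered only to make the definitions concrete:

```python
# Illustrative sketch: confusion-matrix rates and ROC area (positive = oil spill).
import numpy as np

def confusion_rates(y_true, y_score, threshold=0.5):
    """TPR and FPR at one decision threshold."""
    y_pred = y_score >= threshold
    tp = np.sum(y_pred & (y_true == 1))
    fn = np.sum(~y_pred & (y_true == 1))
    fp = np.sum(y_pred & (y_true == 0))
    tn = np.sum(~y_pred & (y_true == 0))
    return tp / (tp + fn), fp / (fp + tn)

def roc_auc(y_true, y_score):
    """Area under the ROC curve via the rank identity: the probability
    that a random positive outscores a random negative (ties count half)."""
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties
```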
| Classifier | Overall Performance (%) | | | | | | High-TPR Performance (%) | | | | | | Sorting Performance (%) | | | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | D1 | D2 | D3 | D4 | D5 | D6 | D1 | D2 | D3 | D4 | D5 | D6 | D1 | D2 | D3 | D4 | D5 | D6 |
| SVM | 74 | 70 | 61 | 57 | 43 | 26 | 61 | 57 | 52 | 43 | 30 | 4 | 78 | 78 | 78 | 74 | 70 | 61 |
| KNN | 70 | 61 | 57 | 39 | 26 | 9 | 91 | 91 | 91 | 91 | 91 | 91 | 57 | 48 | 43 | 39 | 35 | 26 |
| LDA | 96 | 91 | 91 | 91 | 91 | 87 | 87 | 83 | 83 | 83 | 83 | 78 | 96 | 96 | 96 | 96 | 96 | 91 |
| DT | 91 | 87 | 83 | 61 | 57 | 26 | 96 | 96 | 96 | 96 | 96 | 96 | 96 | 96 | 83 | 70 | 57 | 48 |
| Classifier | Overall Performance (10⁻³) | | | | | | High-TPR Performance (10⁻³) | | | | | | Sorting Performance (10⁻³) | | | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | A1 | A2 | A3 | A4 | A5 | A6 | A1 | A2 | A3 | A4 | A5 | A6 | A1 | A2 | A3 | A4 | A5 | A6 |
| SVM | 1.3 | −17 | −28 | −19 | −14 | −26 | 0 | −5 | −14 | −8 | −6 | −15 | 4.2 | −22 | −28 | −23 | −15 | −22 |
| KNN | 0 | 5.9 | 2.1 | 1.8 | −5 | −9 | 1.1 | 8.3 | −3 | 5.3 | 1.7 | −7 | 1.5 | −6 | 6.8 | 10 | 1 | 1 |
| LDA | −7 | 15 | −4 | −2 | 6.3 | 1.8 | −1 | 0 | −34 | −15 | −12 | −3 | −2 | 14 | −1 | 1 | 6.6 | 2 |
| DT | −3 | 4.4 | 5 | 11 | 0 | −7 | 0.7 | 17 | 29 | −39 | −12 | −11 | −1 | −3 | 14 | 11 | −2.2 | −4 |
© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).