Change Detection from Remote Sensing to Guide OpenStreetMap Labeling
Abstract
1. Introduction
1.1. OpenStreetMap Data Generation Assisted by Artificial Neural Networks
1.2. An Approach to OSM Generation Based on Deep Learning
1.3. Related Work
1.4. Contributions
- We demonstrate the application of a modified CycleGAN with an attention mechanism trained on NAIP vertical aerial imagery and OSM raster tiles.
- We quantify the accuracy of house detection based on the approach above and compare it to a state-of-the-art house detection network architecture (U-Net) from the remote sensing literature.
- We exemplify the extraction of a heat map from the big geospatial data platform IBM PAIRS that stores OSM raster tiles and maps generated by the modified CycleGAN. This way, we successfully identify geospatial regions where OSM mappers should focus their labor force.
- We provide lessons learned on where our approach needs further research. In the context of OSM map tiles that assign different colors to various road hierarchies, we inspect the color interpolation performed by the modified CycleGAN architecture.
2. Materials and Methods
2.1. Scalable Geo-Data Platform, Data Curation, and Ingestion
2.2. Data Sources
2.2.1. NAIP Aerial Imagery
2.2.2. OSM Rasterized Map Tiles
2.2.3. Data Ingestion into PAIRS for NAIP and OSM
2.3. Deep Learning Methodology
- When training is complete, we use the ANN corresponding to Student 1 to infer OSM raster map tiles from NAIP vertical aerial imagery.
- At no point during training does the lecturer require pairs of vertical aerial imagery and maps. Since OSM relies on voluntary contributions and mapping the entire globe is an extensive manual labeling task, dropping this pairing requirement allows the use of inaccurate or incomplete maps at training time.
- To exploit the fact that, in our scenario, we do have an existing pairing of NAIP imagery and OSM map tiles, we let the lecturer focus her/his attention on human infrastructure (such as roads and buildings) when determining the difference between the NAIP imagery handed to Student 1 and what she/he gets back from Student 3. This deviation from the CycleGAN procedure is what we refer to as the fw-CycleGAN (feature-weighted CycleGAN) in Section 3.1.
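Conceptually, the feature weighting described above amounts to up-weighting infrastructure pixels in the cycle-consistency loss. The sketch below illustrates this idea in NumPy; it is not the paper's implementation, and the weight `w`, the per-channel tolerance `tol`, and the function names are our assumptions.

```python
import numpy as np

def feature_weight_mask(osm_map, feature_colors, tol=2):
    """Per-pixel mask marking OSM feature colors (e.g., roads, buildings).

    osm_map: (H, W, 3) RGB image in [0, 255]; feature_colors: list of (R, G, B)
    tuples to up-weight; tol: per-channel color tolerance (hypothetical value).
    """
    mask = np.zeros(osm_map.shape[:2], dtype=float)
    for rgb in feature_colors:
        # A pixel matches if all three channels lie within tol of the feature color.
        close = np.all(np.abs(osm_map.astype(float) - np.array(rgb, dtype=float)) <= tol,
                       axis=-1)
        mask = np.maximum(mask, close.astype(float))
    return mask

def weighted_cycle_loss(real_naip, reconstructed_naip, osm_map, feature_colors, w=10.0):
    """L1 cycle-consistency loss with extra weight w on infrastructure pixels."""
    weights = 1.0 + (w - 1.0) * feature_weight_mask(osm_map, feature_colors)
    per_pixel = np.abs(real_naip.astype(float)
                       - reconstructed_naip.astype(float)).mean(axis=-1)
    return float((weights * per_pixel).mean())
```

In a full training loop, this term would replace the plain L1 cycle-consistency loss of the original CycleGAN, so reconstruction errors on roads and buildings dominate the gradient.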
2.4. Computer Code
3. Numerical Experiments
3.1. Feature-Weighted CycleGAN for OSM-Style Map Generation
3.2. fw-CycleGAN for OSM Data Change Detection
4. Results and Discussion
4.1. Building Detection for OSM House Label Addition
- Given the OSM map tiles, it is apparent that, in the middle of the figure, Region b has residential house labels, while Region a has not yet been labeled by OSM mappers. Given NAIP data from 2016, the fw-CycleGAN was capable of identifying homes in Region a, such that the technique presented in Figure 3 revealed a heat map with an indicative signal. Spurious magnitudes in Region b stemmed from imprecise georeferencing of buildings in the OSM map, as well as from vegetation cover above rooftops in the aerial NAIP imagery. Referring back to Listing 1, we had to allow for a small color-value variation of magnitude 2 around the RGB feature color for buildings in the rasterized OSM map M. This way, our approach became tolerant to minor perturbations in the map background other than bare land ((R,G,B) = (242,239,233), cf. the sandy color in Region a), such as the grayish background in Region b. Similarly, a wider color-tolerance range of 18 was set for the generated map, which was noisier. Moreover, as we will discuss in the context of Figure 5, we observed that the fw-CycleGAN tried to interpolate colors smoothly based on the feature’s context.
- The fw-CycleGAN correctly identified roads and paths in areas where OSM carried only a park/recreational-area marker. However, since the heat map defined by Listing 1 restricts the analysis of the generated map to houses, the change in the road network in this part of the image was not reflected by the heat map.
- This section of the image demonstrated the limits of our current approach. In the generated map, colors fluctuated wildly, and patches of land were marked as bodies of water (blue) or forestry (green). Further investigation is required to determine whether these artifacts are the result of idiosyncratic map features scarcely represented in the training dataset. We are planning to train on significantly larger datasets to answer this question. Another challenge of our current approach was exhibited by more extensive bodies of water. Though not present in Figure 4, we noticed that the vertical aerial image-to-map ANN of the fw-CycleGAN generated complex compositions of patches in such areas. Nevertheless, the heat map generated by the procedure outlined in Section 3.2 did not develop a pronounced signal that could mislead OSM mappers. Thus, there were no false alarms due to these artifacts; regardless, an area to be labeled could be missed this way.
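The house-label heat map discussed above can be sketched as a color-tolerance comparison between the ground-truth OSM map and the generated map, with the looser tolerance applied to the noisier generated map. The snippet below is illustrative only; the building RGB value, the aggregation block size `cell`, and the function name are hypothetical, and the actual procedure is the one defined by Listing 1 and Section 3.2.

```python
import numpy as np

def building_heat_map(osm_map, generated_map, building_rgb,
                      tol_osm=2, tol_gen=18, cell=32):
    """Coarse heat map of candidate unlabeled buildings.

    Pixels matching building_rgb within a per-channel tolerance are extracted
    from both maps (tolerances 2 and 18 follow the values quoted in the text).
    Counts of "generated map shows a building, OSM does not" are aggregated
    over cell x cell blocks to yield a per-block fraction.
    """
    def color_mask(img, tol):
        return np.all(np.abs(img.astype(float)
                             - np.array(building_rgb, dtype=float)) <= tol, axis=-1)

    # Candidate change: building present in the generated map but absent in OSM.
    missing = color_mask(generated_map, tol_gen) & ~color_mask(osm_map, tol_osm)
    h, w = missing.shape
    hh, ww = h // cell, w // cell
    blocks = missing[:hh * cell, :ww * cell].reshape(hh, cell, ww, cell)
    return blocks.sum(axis=(1, 3)) / float(cell * cell)
```

Blocks with a high fraction would then be flagged to OSM mappers, while near-zero blocks are left untouched.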
4.2. Change of Road Hierarchy from Color Interpolation
- We begin by focusing on the circular highway exits. As apparent from the illustration, the fw-CycleGAN attempted to interpolate gradually from a major highway to a local street instead of assigning a discontinuous boundary.
- For our experiments, the feature weighting of the CycleGAN’s consistency loss was restricted to roads and buildings. Based on visual inspection of our training dataset of cities in Texas, we observed that rooftops (in particular, those of commercial buildings) and roads can share a similar sandy to grayish color tone. This might be the root cause of why, at inference, the typical brown color of OSM house labels became mixed into the road network, as clearly visible in Region c. Indeed, Regions a and b seemed to support such a hypothesis. More specifically, roads leading into a flat, extended, sandy parking lot (as in Region a) might be misinterpreted as the flat rooftops of, e.g., a shopping mall or depot, as in Regions IIIa and IIId.
- In general, context seemed to play a crucial role for inference: where Regions a to d met, sharp transitions in color patches were visible. The generated map was obtained by stitching 512 × 512 pixel image mosaics without overlap. Hence, the interpretation of an edge region is impacted by the information displayed southwest, northwest, northeast, and southeast of it, and variations can lead to a substantially different interpretation. Though we cannot prove it, the natural scene southwest of Region d may have induced the blueish (water body) tone, while, in contrast, the urban scene northeast of Region a may have triggered the alternating brown (building) and white (local road) inpainting.
- Finally, Regions a and b provided a hallmark of our feature-weighting procedure. Although the NAIP imagery contained extended regions of vegetation, the fw-CycleGAN inferred bare ground.
5. Conclusions and Perspectives
6. Patents
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
Abbreviations
AI | artificial intelligence |
ANN | artificial neural network |
CNN | convolutional neural network |
CONUS | Contiguous United States |
CRS | coordinate reference system |
Deg | unit of degrees to measure angles |
ED | encoder-decoder |
ESA | European Space Agency |
FN | false negative |
FP | false positive |
fw | feature-weighted |
GAN | generative adversarial network |
GPS | Global Positioning System |
IoU | intersection over union |
LiDAR | light detection and ranging |
M, M̂ | OSM rasterized map ground truth and its ANN-generated version |
NAIP | National Agriculture Imagery Product |
NASA | National Aeronautics and Space Administration |
OSM | OpenStreetMap |
PAIRS | Physical Analytics Integrated Repository and Services |
Radar | radio detection and ranging |
RGB | red green blue |
S, Ŝ | vertical aerial imagery ground truth and its ANN-generated version |
TN | true negative |
TP | true positive |
TX | Texas |
UDF | User-Defined Function |
USDA | U.S. Department of Agriculture |
USGS | U.S. Geological Survey |
{T,G,M,K}B | {Tera,Giga,Mega,Kilo}bytes |
XML | Extensible Markup Language |
Appendix A. Primer on ANNs from the Perspective of Our Work
References
- OpenStreetMap. Available online: https://www.openstreetmap.org/ (accessed on 25 June 2020).
- OpenStreetMap Editor. Available online: https://www.openstreetmap.org/edit (accessed on 25 June 2020).
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969.
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
- SegNet. Available online: https://mi.eng.cam.ac.uk/projects/segnet/ (accessed on 25 June 2020).
- Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-Image Translation with Conditional Adversarial Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5967–5976.
- Image-to-Image Translation with Conditional Adversarial Networks. Available online: https://phillipi.github.io/pix2pix/ (accessed on 25 June 2020).
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241.
- Schmidhuber, J. Unsupervised Minimax: Adversarial Curiosity, Generative Adversarial Networks, and Predictability Minimization. arXiv 2019, arXiv:1906.04493.
- Zhang, R.; Albrecht, C.; Zhang, W.; Cui, X.; Finkler, U.; Kung, D.; Lu, S. Map Generation from Large Scale Incomplete and Inaccurate Data Labels. arXiv 2020, arXiv:2005.10053.
- Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2242–2251.
- CycleGAN Project Page. Available online: https://junyanz.github.io/CycleGAN/ (accessed on 25 June 2020).
- Tiecke, T.G.; Liu, X.; Zhang, A.; Gros, A.; Li, N.; Yetman, G.; Kilic, T.; Murray, S.; Blankespoor, B.; Prydz, E.B.; et al. Mapping the World Population One Building at a Time. arXiv 2017, arXiv:1712.05839.
- Iglovikov, V.; Seferbekov, S.S.; Buslaev, A.; Shvets, A. TernausNetV2: Fully Convolutional Network for Instance Segmentation. In Proceedings of the Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; pp. 233–237.
- Microsoft/USBuildingFootprints. Available online: https://github.com/microsoft/USBuildingFootprints (accessed on 25 June 2020).
- Albert, A.; Kaur, J.; Gonzalez, M.C. Using convolutional networks and satellite imagery to identify patterns in urban environments at a large scale. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada, 13–17 August 2017; pp. 1357–1366.
- Rakhlin, A.; Davydow, A.; Nikolenko, S.I. Land Cover Classification from Satellite Imagery with U-Net and Lovasz-Softmax Loss. In Proceedings of the Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; pp. 262–266.
- Cao, R.; Zhu, J.; Tu, W.; Li, Q.; Cao, J.; Liu, B.; Zhang, Q.; Qiu, G. Integrating aerial and street view images for urban land use classification. Remote Sens. 2018, 10, 1553.
- Kuo, T.S.; Tseng, K.S.; Yan, J.W.; Liu, Y.C.; Wang, Y.C.F. Deep Aggregation Net for Land Cover Classification. In Proceedings of the Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; pp. 252–256.
- Zhou, L.; Zhang, C.; Wu, M. D-LinkNet: LinkNet with Pretrained Encoder and Dilated Convolution for High Resolution Satellite Imagery Road Extraction. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; pp. 192–1924.
- Oehmcke, S.; Thrysøe, C.; Borgstad, A.; Salles, M.A.V.; Brandt, M.; Gieseke, F. Detecting Hardly Visible Roads in Low-Resolution Satellite Time Series Data. arXiv 2019, arXiv:1912.05026.
- Buslaev, A.; Seferbekov, S.S.; Iglovikov, V.; Shvets, A. Fully Convolutional Network for Automatic Road Extraction From Satellite Imagery. In Proceedings of the Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; pp. 207–210.
- Xia, W.; Zhang, Y.Z.; Liu, J.; Luo, L.; Yang, K. Road extraction from high resolution image with deep convolution network—A case study of GF-2 image. In Multidisciplinary Digital Publishing Institute Proceedings; MDPI: Basel, Switzerland, 2018; Volume 2, p. 325.
- Wu, S.; Du, C.; Chen, H.; Xu, Y.; Guo, N.; Jing, N. Road Extraction from Very High Resolution Images Using Weakly Labeled OpenStreetMap Centerline. ISPRS Int. J. Geo-Inf. 2019, 8, 478.
- Xia, W.; Zhong, N.; Geng, D.; Luo, L. A weakly supervised road extraction approach via deep convolutional nets based image segmentation. In Proceedings of the 2017 International Workshop on Remote Sensing with Intelligent Processing (RSIP), Shanghai, China, 19–21 May 2017; pp. 1–5.
- Sun, T.; Di, Z.; Che, P.; Liu, C.; Wang, Y. Leveraging crowdsourced GPS data for road extraction from aerial imagery. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 7509–7518.
- Ruan, S.; Long, C.; Bao, J.; Li, C.; Yu, Z.; Li, R.; Liang, Y.; He, T.; Zheng, Y. Learning to generate maps from trajectories. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–8 February 2020.
- Liu, M.Y.; Breuel, T.; Kautz, J. Unsupervised image-to-image translation networks. In Advances in Neural Information Processing Systems; Curran Associates, Inc.: Red Hook, NY, USA, 2017; pp. 700–708.
- Bonafilia, D.; Gill, J.; Basu, S.; Yang, D. Building High Resolution Maps for Humanitarian Aid and Development with Weakly- and Semi-Supervised Learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–20 June 2019; pp. 1–9.
- Singh, S.; Batra, A.; Pang, G.; Torresani, L.; Basu, S.; Paluri, M.; Jawahar, C. Self-Supervised Feature Learning for Semantic Segmentation of Overhead Imagery. In Proceedings of the British Machine Vision Conference, Newcastle upon Tyne, UK, 3–6 September 2018; British Machine Vision Association: Durham, UK, 2018.
- Ganguli, S.; Garzon, P.; Glaser, N. GeoGAN: A conditional GAN with reconstruction and style loss to generate standard layer of maps from satellite images. arXiv 2019, arXiv:1902.05611.
- Machine Learning—OpenStreetMap Wiki. Available online: https://wiki.openstreetmap.org/wiki/Machine_learning (accessed on 25 June 2020).
- IBM PAIRS—Geoscope. Available online: https://ibmpairs.mybluemix.net/ (accessed on 25 June 2020).
- Klein, L.; Marianno, F.; Albrecht, C.; Freitag, M.; Lu, S.; Hinds, N.; Shao, X.; Rodriguez, S.; Hamann, H. PAIRS: A scalable geo-spatial data analytics platform. In Proceedings of the 2015 IEEE International Conference on Big Data (Big Data), Santa Clara, CA, USA, 29 October–1 November 2015; pp. 1290–1298.
- Lu, S.; Freitag, M.; Klein, L.J.; Renwick, J.; Marianno, F.J.; Albrecht, C.M.; Hamann, H.F. IBM PAIRS Curated Big Data Service for Accelerated Geospatial Data Analytics and Discovery. In Proceedings of the 2016 IEEE International Conference on Big Data (Big Data), Washington, DC, USA, 5–8 December 2016; p. 2672.
- Albrecht, C.M.; Bobroff, N.; Elmegreen, B.; Freitag, M.; Hamann, H.F.; Khabibrakhmanov, I.; Klein, L.; Lu, S.; Marianno, F.; Schmude, J.; et al. PAIRS (re)loaded: System design and benchmarking for scalable geospatial applications. ISPRS Annals Proceedings 2020, in press.
- Fecher, R.; Whitby, M.A. Optimizing Spatiotemporal Analysis Using Multidimensional Indexing with GeoWave. In Proceedings of the Free and Open Source Software for Geospatial (FOSS4G) Conference, Hyderabad, India, 26–29 January 2017; Volume 17, p. 10.
- Hughes, J.N.; Annex, A.; Eichelberger, C.N.; Fox, A.; Hulbert, A.; Ronquest, M. GeoMesa: A Distributed Architecture for Spatio-Temporal Fusion. In Proceedings of SPIE 9473, Geospatial Informatics, Fusion, and Motion Video Analytics V, Baltimore, MD, USA, 20–24 April 2015; SPIE: Bellingham, WA, USA, 2015.
- Whitman, R.T.; Park, M.B.; Ambrose, S.M.; Hoel, E.G. Spatial indexing and analytics on Hadoop. In Proceedings of the 22nd ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, Dallas, TX, USA, 4–7 November 2014; Association for Computing Machinery: New York, NY, USA, 2014.
- Albrecht, C.M.; Fisher, C.; Freitag, M.; Hamann, H.F.; Pankanti, S.; Pezzutti, F.; Rossi, F. Learning and Recognizing Archeological Features from LiDAR Data. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; pp. 5630–5636.
- Klein, L.J.; Albrecht, C.M.; Zhou, W.; Siebenschuh, C.; Pankanti, S.; Hamann, H.F.; Lu, S. N-Dimensional Geospatial Data and Analytics for Critical Infrastructure Risk Assessment. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; pp. 5637–5643.
- Elmegreen, B.; Albrecht, C.; Hamann, H.; Klein, L.; Lu, S.; Schmude, J. Physical Analytics Integrated Repository and Services for Astronomy: PAIRS-A. Bull. Am. Astron. Soc. 2019, 51, 28.
- Vora, M.N. Hadoop-HBase for Large-Scale Data. In Proceedings of the 2011 International Conference on Computer Science and Network Technology, Harbin, China, 24–26 December 2011; Volume 1, pp. 601–605.
- Home—Spatial Reference. Available online: https://spatialreference.org/ (accessed on 25 June 2020).
- Janssen, V. Understanding Coordinate Reference Systems, Datums and Transformations. Int. J. Geoinform. 2009, 5, 41–53.
- Samet, H. The Quadtree and Related Hierarchical Data Structures. ACM Comput. Surv. 1984, 16, 187–260.
- Drusch, M.; Del Bello, U.; Carlier, S.; Colin, O.; Fernandez, V.; Gascon, F.; Hoersch, B.; Isola, C.; Laberinti, P.; Martimort, P.; et al. Sentinel-2: ESA’s optical high-resolution mission for GMES operational services. Remote Sens. Environ. 2012, 120, 25–36.
- Roy, D.P.; Wulder, M.A.; Loveland, T.R.; Woodcock, C.E.; Allen, R.G.; Anderson, M.C.; Helder, D.; Irons, J.R.; Johnson, D.M.; Kennedy, R.; et al. Landsat-8: Science and Product Vision for Terrestrial Global Change Research. Remote Sens. Environ. 2014, 145, 154–172.
- Landsat Missions Webpage. Available online: https://www.usgs.gov/land-resources/nli/landsat/landsat-satellite-missions (accessed on 25 June 2020).
- Terra Mission Webpage. Available online: https://terra.nasa.gov/about/mission (accessed on 25 June 2020).
- Sentinel-2 Mission Webpage. Available online: https://sentinel.esa.int/web/sentinel/missions/sentinel-2 (accessed on 25 June 2020).
- Lim, K.; Treitz, P.; Wulder, M.; St-Onge, B.; Flood, M. LiDAR Remote Sensing of Forest Structure. Prog. Phys. Geogr. 2003, 27, 88–106.
- Meng, X.; Currit, N.; Zhao, K. Ground Filtering Algorithms for Airborne LiDAR Data: A Review of Critical Issues. Remote Sens. 2010, 2, 833–860.
- Soergel, U. (Ed.) Radar Remote Sensing of Urban Areas, 1st ed.; Book Series: Remote Sensing and Digital Image Processing; Springer: Heidelberg, Germany, 2010.
- Ouchi, K. Recent Trend and Advance of Synthetic Aperture Radar with Selected Topics. Remote Sens. 2013, 5, 716–807.
- Naip Data in Box. Available online: https://nrcs.app.box.com/v/naip (accessed on 25 June 2020).
- USGS EROS Archive—Aerial Photography—National Agriculture Imagery Program (NAIP). Available online: https://doi.org/10.5066/F7QN651G (accessed on 28 June 2020).
- WMF Labs Tile Server: “OSM No Labels”. Available online: https://tiles.wmflabs.org/osm-no-labels/ (accessed on 25 June 2020).
- OSM Server: “Tiles with Labels”. Available online: https://tile.openstreetmap.de/ (accessed on 25 June 2020).
- Rezatofighi, H.; Tsoi, N.; Gwak, J.; Sadeghian, A.; Reid, I.; Savarese, S. Generalized Intersection Over Union: A Metric and a Loss for Bounding Box Regression. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 658–666.
- Pytorch/Pytorch. Available online: https://github.com/pytorch/pytorch (accessed on 25 June 2020).
- milesial. Milesial/Pytorch-UNet. Available online: https://github.com/milesial/Pytorch-UNet (accessed on 25 June 2020).
- Zhu, J.Y. Junyanz/Pytorch-CycleGAN-and-Pix2pix. Available online: https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix (accessed on 25 June 2020).
- Mapnik/Mapnik. Available online: https://github.com/mapnik/mapnik (accessed on 25 June 2020).
- IBM/Ibmpairs. Available online: https://github.com/IBM/ibmpairs (accessed on 25 June 2020).
- IBM PAIRS—Tutorial. Available online: https://pairs.res.ibm.com/tutorial/ (accessed on 25 June 2020).
- Chu, C.; Zhmoginov, A.; Sandler, M. CycleGAN, a Master of Steganography. arXiv 2017, arXiv:1712.02950.
- Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Li, F.-F. ImageNet: A Large-Scale Hierarchical Image Database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Miami, FL, USA, 20–25 June 2009; pp. 248–255.
- ImageNet. Available online: http://www.image-net.org/ (accessed on 25 June 2020).
- Van Etten, A.; Lindenbaum, D.; Bacastow, T.M. SpaceNet: A Remote Sensing Dataset and Challenge Series. arXiv 2018, arXiv:1807.01232.
- SpaceNet. Available online: https://spacenetchallenge.github.io/ (accessed on 25 June 2020).
- Winning Solution for the Spacenet Challenge: Joint Learning with OpenStreetMap. Available online: https://i.ho.lc/winning-solution-for-the-spacenet-challenge-joint-learning-with-openstreetmap.html (accessed on 25 June 2020).
- Powers, D.M. Evaluation: From Precision, Recall and F-Measure to ROC, Informedness, Markedness and Correlation; Bioinfo Publications: Pune, India, 2011.
- Pan, S.J.; Yang, Q. A Survey on Transfer Learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359.
- Lu, J.; Behbood, V.; Hao, P.; Zuo, H.; Xue, S.; Zhang, G. Transfer Learning Using Computational Intelligence: A Survey. Knowl. Based Syst. 2015, 80, 14–23.
- Weiss, K.; Khoshgoftaar, T.M.; Wang, D. A Survey of Transfer Learning. J. Big Data 2016, 3, 9.
- Lin, J.; Jiang, Z.; Sarkaria, S.; Ma, D.; Zhao, Y. Special Issue Deep Transfer Learning for Remote Sensing. Remote Sensing (Journal). Available online: https://www.mdpi.com/journal/remotesensing/special_issues/DeepTransfer_Learning (accessed on 28 June 2020).
- Xie, M.; Jean, N.; Burke, M.; Lobell, D.; Ermon, S. Transfer Learning from Deep Features for Remote Sensing and Poverty Mapping. In Proceedings of the AAAI 2016: Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; p. 7.
- Huang, Z.; Pan, Z.; Lei, B. Transfer Learning with Deep Convolutional Neural Network for SAR Target Classification with Limited Labeled Data. Remote Sens. 2017, 9, 907.
- Lian, X.; Zhang, C.; Zhang, H.; Hsieh, C.J.; Zhang, W.; Liu, J. Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent. In Advances in Neural Information Processing Systems 30; Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2017; pp. 5330–5340.
- Zhang, W.; Cui, X.; Kayi, A.; Liu, M.; Finkler, U.; Kingsbury, B.; Saon, G.; Mroueh, Y.; Buyuktosunoglu, A.; Das, P.; et al. Improving Efficiency in Large-Scale Decentralized Distributed Training. In Proceedings of the ICASSP 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020.
- Bing Maps. Available online: https://www.bing.com/maps (accessed on 25 June 2020).
- Zhang, R.; Albrecht, C.M.; Freitag, M.; Lu, S.; Zhang, W.; Finkler, U.; Kung, D.S.; Cui, X. System and Methodology for Correcting Map Features Using Remote Sensing and Deep Learning. U.S. Patent application submitted, under review.
- Klein, L.J.; Lu, S.; Albrecht, C.M.; Marianno, F.J.; Hamann, H.F. Method and System for Crop Recognition and Boundary Delineation. U.S. Patent 10445877B2, 15 October 2019.
- Klein, L.; Marianno, F.J.; Freitag, M.; Hamann, H.F.; Rodriguez, S.B. Parallel Querying of Adjustable Resolution Geospatial Database. U.S. Patent 10372705B2, 6 August 2019.
- Freitag, M.; Albrecht, C.M.; Marianno, F.J.; Lu, S.; Hamann, H.F.; Schmude, J.W. Efficient Querying Using Overview Layers of Geospatial-Temporal Data in a Data Analytics Platform. U.S. Patent P201805207, 14 May 2020.
- Mapnik.Org—the Core of Geospatial Visualization and Processing. Available online: https://mapnik.org/ (accessed on 25 June 2020).
- Wikimedia Cloud Services Team. Available online: https://www.mediawiki.org/wiki/Wikimedia_Cloud_Services_team (accessed on 25 June 2020).
- Bottou, L.; Curtis, F.E.; Nocedal, J. Optimization Methods for Large-Scale Machine Learning. SIAM Rev. 2018, 60, 223–311.
- Nwankpa, C.; Ijomah, W.; Gachagan, A.; Marshall, S. Activation Functions: Comparison of Trends in Practice and Research for Deep Learning. arXiv 2018, arXiv:1811.03378.
- Shi, W.; Caballero, J.; Theis, L.; Huszar, F.; Aitken, A.; Ledig, C.; Wang, Z. Is the Deconvolution Layer the Same as a Convolutional Layer? arXiv 2016, arXiv:1609.07009.
- Kingma, D.P.; Welling, M. An Introduction to Variational Autoencoders. arXiv 2019, arXiv:1906.02691.
- Kurach, K.; Lučić, M.; Zhai, X.; Michalski, M.; Gelly, S. A Large-Scale Study on Regularization and Normalization in GANs. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019.
- Hinton, G.E.; Srivastava, N.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R.R. Improving Neural Networks by Preventing Co-Adaptation of Feature Detectors. arXiv 2012, arXiv:1207.0580.
- Ioffe, S.; Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. arXiv 2015, arXiv:1502.03167.
- Bengio, Y.; Louradour, J.; Collobert, R.; Weston, J. Curriculum Learning. In Proceedings of the 26th Annual International Conference on Machine Learning—ICML ’09, Montreal, QC, Canada, 14–18 June 2009; pp. 1–8.
- Parisi, G.I.; Kemker, R.; Part, J.L.; Kanan, C.; Wermter, S. Continual Lifelong Learning with Neural Networks: A Review. Neural Netw. 2019, 113, 54–71.
Training (90%) | Testing (10%) | Ground Truth | House Density | Network Architecture | F1-Score
---|---|---|---|---|---
Austin | Austin | OSM as-is | ∼1700 km⁻² | fw-CycleGAN | 0.35 *
 | | | | U-Net |
 | | visual inspection | | fw-CycleGAN | *
 | | | | U-Net |
 | Dallas | OSM as-is | ∼1300 km⁻² | fw-CycleGAN | 0.60 **
 | | | | U-Net |
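The F1-scores reported in the table derive from pixel-wise true-positive, false-positive, and false-negative counts (cf. the abbreviations TP, FP, FN, and IoU). The following is a minimal sketch of such an evaluation for binary building masks, not the paper's exact evaluation code.

```python
import numpy as np

def f1_and_iou(pred_mask, truth_mask):
    """Pixel-wise F1-score and intersection over union for binary masks."""
    pred = np.asarray(pred_mask).astype(bool)
    truth = np.asarray(truth_mask).astype(bool)
    tp = np.sum(pred & truth)    # true positives: predicted and labeled
    fp = np.sum(pred & ~truth)   # false positives: predicted but not labeled
    fn = np.sum(~pred & truth)   # false negatives: labeled but missed
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    return float(f1), float(iou)
```

Note that when OSM labels are taken "as-is" as ground truth, unlabeled houses detected by the network count as false positives, which depresses the reported scores.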
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Albrecht, C.M.; Zhang, R.; Cui, X.; Freitag, M.; Hamann, H.F.; Klein, L.J.; Finkler, U.; Marianno, F.; Schmude, J.; Bobroff, N.; et al. Change Detection from Remote Sensing to Guide OpenStreetMap Labeling. ISPRS Int. J. Geo-Inf. 2020, 9, 427. https://doi.org/10.3390/ijgi9070427