An Improved Aggregated-Mosaic Method for the Sparse Object Detection of Remote Sensing Imagery
Abstract
1. Introduction
- (1) We analyze the object distribution of remote sensing images and highlight the difficulty of augmenting object samples with a sparse distribution. We introduce Assigned-Stitch, a constraint for sparse object augmentation, which guarantees that no training sample is blank and ensures a smooth and stable training procedure;
- (2) We propose the Auto-Target-Duplication module to overcome the class-imbalance problem. This module balances the labels of each class and focuses on the poorly performing classes by efficiently selecting objects to duplicate.
2. Related Work
2.1. Object Detection
2.2. Data Augmentation
3. Methods
3.1. Sparse Object in Remote Sensing Images
3.2. Aggregated-Mosaic
3.3. Assigned-Stitch
Algorithm 1. Assigned-Stitch

Input: four randomly selected images (A1, B1, C1, and D1) with w × h resolution; random object ground truths (g1′, g2′, g3′, g4′).
Output: Assigned-Stitch training sample Ifu.
1. Initialize a grey image Igrey with value 127 and randomly select the slice center (xsc, ysc).
2. Purple rectangles on Igrey:
   Top-left: [xpmin1, ypmin1, xpmax1, ypmax1] = [max(xsc − w, 0), max(ysc − h, 0), xsc, ysc]
   Top-right: [xpmin2, ypmin2, xpmax2, ypmax2] = [xsc, max(ysc − h, 0), min(xsc + w, w × 2), ysc]
   Bottom-left: [xpmin3, ypmin3, xpmax3, ypmax3] = [max(xsc − w, 0), ysc, xsc, min(ysc + h, h × 2)]
   Bottom-right: [xpmin4, ypmin4, xpmax4, ypmax4] = [xsc, ysc, min(xsc + w, w × 2), min(ysc + h, h × 2)]
3. Input image slice (green rectangle) for each purple rectangle i:
   [xgmini, ygmini, xgmaxi, ygmaxi] = toRect([gxci′, gyci′, (xpmaxi − xpmini), (ypmaxi − ypmini)])
4. Clamp each image slice to the image bounds:
   if xgmini < 0: [xgmini, xgmaxi] = [0, xpmaxi − xpmini]
   if ygmini < 0: [ygmini, ygmaxi] = [0, ypmaxi − ypmini]
   if xgmaxi > w: [xgmini, xgmaxi] = [w − (xpmaxi − xpmini), w]
   if ygmaxi > h: [ygmini, ygmaxi] = [h − (ypmaxi − ypmini), h]
5. Place the image slices on Igrey:
   Igrey(xpmini:xpmaxi, ypmini:ypmaxi) = slicei(xgmini:xgmaxi, ygmini:ygmaxi), i ∈ {1, 2, 3, 4}
6. Perform an affine transformation:
   repeat gi = affine(gi′, AffineMatrix) until the top-left w × h image contains objects
7. Assigned-Stitch training sample: Ifu = affine(Igrey, AffineMatrix)(0:w, 0:h)
ymin = yc − 0.5 × hc
ymax = yc + 0.5 × hc
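The stitching steps of Algorithm 1 can be sketched in Python. This is an illustrative sketch, not the authors' implementation: the final affine transformation and crop are omitted, `assigned_stitch` and its signature are hypothetical, and the center-to-corner conversion mirrors the ymin/ymax equations above.

```python
import numpy as np

def to_rect(cx, cy, bw, bh):
    """Center-size box (cx, cy, bw, bh) -> corner form (xmin, ymin, xmax, ymax)."""
    return cx - 0.5 * bw, cy - 0.5 * bh, cx + 0.5 * bw, cy + 0.5 * bh

def assigned_stitch(images, centers, rng=None):
    """Sketch of Algorithm 1, steps 1-5 (without the final affine step).

    images  -- four h x w x 3 uint8 arrays of equal size
    centers -- four (cx, cy) object ground-truth centers, one per image,
               so every pasted slice contains an object
    Returns the 2h x 2w grey canvas with the four slices placed on it.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = images[0].shape[:2]
    canvas = np.full((2 * h, 2 * w, 3), 127, dtype=np.uint8)  # grey image Igrey
    # Step 1: random slice center inside the 2w x 2h canvas
    xsc, ysc = int(rng.integers(1, 2 * w)), int(rng.integers(1, 2 * h))
    # Step 2: four "purple" rectangles around (xsc, ysc)
    rects = [
        (max(xsc - w, 0), max(ysc - h, 0), xsc, ysc),          # top-left
        (xsc, max(ysc - h, 0), min(xsc + w, 2 * w), ysc),      # top-right
        (max(xsc - w, 0), ysc, xsc, min(ysc + h, 2 * h)),      # bottom-left
        (xsc, ysc, min(xsc + w, 2 * w), min(ysc + h, 2 * h)),  # bottom-right
    ]
    for img, (cx, cy), (x1, y1, x2, y2) in zip(images, centers, rects):
        rw, rh = x2 - x1, y2 - y1  # required slice size (<= w, <= h)
        # Steps 3-4: object-centred "green" rectangle, clamped to the image
        gx1 = int(round(cx - 0.5 * rw))
        gy1 = int(round(cy - 0.5 * rh))
        gx1 = min(max(gx1, 0), w - rw)
        gy1 = min(max(gy1, 0), h - rh)
        # Step 5: place the slice on the grey canvas
        canvas[y1:y2, x1:x2] = img[gy1:gy1 + rh, gx1:gx1 + rw]
    return canvas
```

Because each slice is centred on a ground-truth object, the assembled sample is guaranteed not to be blank, which is the constraint Assigned-Stitch adds over plain Mosaic.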
3.4. Auto-Target-Duplication
Algorithm 2. Auto-Target-Duplication

Input: detection network net(); training samples and ground truth for this epoch (img, label); validation results from the last trained model.
Output: Auto-Target-Duplication samples for this epoch (imgA, labelA).
Repeat for each epoch:
1. Generate the Assigned-Stitch samples: (imgA, labelA) = AS(img, label)
2. Sort the validation results and identify the class with the lowest result: classl = min(result)
3. Randomly select multiple objects of classl from the training dataset: (obji, labeli), i = 1, 2, 3, …
4. Duplicate (obji, labeli) into each Assigned-Stitch sample: (imgA, labelA) = Duplicate(imgA, labelA, obji, labeli), i = 1, 2, 3, …
5. Train the detection network with the Assigned-Stitch samples in each iteration: net = train(net(), imgA, labelA)
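The duplication step of Algorithm 2 can be sketched as follows. The names (`obj_bank`, `val_ap`) and the naive paste are illustrative assumptions, not the authors' implementation; the paper's ablation also compares this against Gaussian-blur blending of the pasted patches.

```python
import numpy as np

def auto_target_duplication(img_a, labels_a, obj_bank, val_ap, n_dup=5, rng=None):
    """Sketch of Algorithm 2, steps 2-4, for one Assigned-Stitch sample.

    img_a    -- one Assigned-Stitch sample, H x W x 3 uint8
    labels_a -- list of (cls, xmin, ymin, xmax, ymax) boxes on img_a
    obj_bank -- dict: class id -> list of object patches (h x w x 3 arrays)
    val_ap   -- dict: class id -> AP on the validation set from the last model
    Pastes n_dup patches of the worst-performing class at random positions.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = img_a.shape[:2]
    worst = min(val_ap, key=val_ap.get)  # step 2: class with the lowest result
    out, labels = img_a.copy(), list(labels_a)
    for _ in range(n_dup):               # steps 3-4: select and duplicate
        patch = obj_bank[worst][int(rng.integers(len(obj_bank[worst])))]
        ph, pw = patch.shape[:2]
        x = int(rng.integers(0, w - pw))
        y = int(rng.integers(0, h - ph))
        out[y:y + ph, x:x + pw] = patch  # naive paste, no blending
        labels.append((worst, x, y, x + pw, y + ph))
    return out, labels
```

Because the duplicated class is re-selected from the latest validation results each epoch, the augmentation keeps shifting toward whichever class the detector currently handles worst.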
4. Experiment
4.1. Datasets
4.1.1. VEDAI
4.1.2. NWPU VHR-10
4.2. Training Settings
4.2.1. Anchor Generation
4.2.2. Learning Policy
4.3. Experiment Results
4.3.1. VEDAI 512
4.3.2. VEDAI 1024
4.3.3. NWPU VHR-10
4.4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Hong, D.; Yokoya, N.; Xia, G.-S.; Chanussot, J.; Zhu, X.X. X-ModalNet: A semi-supervised deep cross-modal network for classification of remote sensing data. ISPRS J. Photogramm. Remote Sens. 2020, 167, 12–23.
- Hou, J.-B.; Zhu, X.; Yin, X.-C. Self-Adaptive Aspect Ratio Anchor for Oriented Object Detection in Remote Sensing Images. Remote Sens. 2021, 13, 1318.
- Wu, X.; Hong, D.; Chanussot, J.; Xu, Y.; Tao, R.; Wang, Y. Fourier-based rotation-invariant feature boosting: An efficient framework for geospatial object detection. IEEE Geosci. Remote Sens. Lett. 2019, 17, 302–306.
- Awad, M.M.; Lauteri, M. Self-Organizing Deep Learning (SO-UNet)—A Novel Framework to Classify Urban and Peri-Urban Forests. Sustainability 2021, 13, 5548.
- Shorten, C.; Khoshgoftaar, T.M. A survey on image data augmentation for deep learning. J. Big Data 2019, 6, 1–48.
- Nusrat, I.; Jang, S.-B. A comparison of regularization techniques in deep neural networks. Symmetry 2018, 10, 648.
- Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Diego, CA, USA, 20–25 June 2005; pp. 886–893.
- Yan, J.; Lei, Z.; Wen, L.; Li, S.Z. The fastest deformable part model for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 24–27 June 2014; pp. 2497–2504.
- Cao, J.; Cholakkal, H.; Anwer, R.M.; Khan, F.S.; Pang, Y.; Shao, L. D2det: Towards high quality object detection and instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020; pp. 11485–11494.
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 16–21 July 2017; pp. 2961–2969.
- Yun, S.; Han, D.; Oh, S.J.; Chun, S.; Choe, J.; Yoo, Y. CutMix: Regularization strategy to train strong classifiers with localizable features. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019; pp. 6023–6032.
- Zhang, H.; Cisse, M.; Dauphin, Y.N.; Lopez-Paz, D. Mixup: Beyond empirical risk minimization. In Proceedings of the 6th International Conference on Learning Representations (ICLR), Vancouver, BC, Canada, 30 April–3 May 2018.
- Ultralytics. YOLOv5. Available online: https://github.com/ultralytics/yolov5 (accessed on 8 May 2021).
- Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
- Rätsch, G.; Onoda, T.; Müller, K.-R. Soft margins for AdaBoost. Mach. Learn. 2001, 42, 287–320.
- Suykens, J.A.; Vandewalle, J. Least squares support vector machine classifiers. Neural Process. Lett. 1999, 9, 293–300.
- Felzenszwalb, P.; McAllester, D.; Ramanan, D. A discriminatively trained, multiscale, deformable part model. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Anchorage, AK, USA, 24–26 June 2008; pp. 1–8.
- Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M. Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 2015, 115, 211–252.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems (NIPS), Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105.
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single shot multibox detector. In Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 11–14 October 2016; pp. 21–37.
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 24–27 June 2014; pp. 580–587.
- Redmon, J.; Farhadi, A. YOLO9000: Better, faster, stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 16–21 July 2017; pp. 7263–7271.
- Redmon, J.; Farhadi, A. YOLOv3: An incremental improvement. arXiv 2018, arXiv:1804.02767.
- Choi, J.; Chun, D.; Kim, H.; Lee, H.-J. Gaussian YOLOv3: An accurate and fast object detector using localization uncertainty for autonomous driving. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019; pp. 502–511.
- Lin, T.-Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 16–21 July 2017; pp. 2117–2125.
- Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1440–1448.
- Bochkovskiy, A.; Wang, C.-Y.; Liao, H.-Y.M. YOLOv4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934.
- Wang, C.-Y.; Bochkovskiy, A.; Liao, H.-Y.M. Scaled-YOLOv4: Scaling cross stage partial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Virtual, 19–25 June 2021; pp. 13029–13038.
- Zheng, Z.; Wang, P.; Ren, D.; Liu, W.; Ye, R.; Hu, Q.; Zuo, W. Enhancing geometric factors in model learning and inference for object detection and instance segmentation. arXiv 2020, arXiv:2005.03572.
- Zheng, Z.; Wang, P.; Liu, W.; Li, J.; Ye, R.; Ren, D. Distance-IoU loss: Faster and better learning for bounding box regression. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; pp. 12993–13000.
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1137–1149.
- Duan, K.; Bai, S.; Xie, L.; Qi, H.; Huang, Q.; Tian, Q. Centernet: Keypoint triplets for object detection. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019; pp. 6569–6578.
- Ghiasi, G.; Lin, T.-Y.; Le, Q.V. Dropblock: A regularization method for convolutional networks. In Proceedings of the Advances in Neural Information Processing Systems (NIPS), Montréal, QC, Canada, 3–8 December 2018; pp. 10750–10760.
- Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958.
- Zhong, Z.; Zheng, L.; Kang, G.; Li, S.; Yang, Y. Random erasing data augmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; pp. 13001–13008.
- DeVries, T.; Taylor, G.W. Improved regularization of convolutional neural networks with cutout. arXiv 2017, arXiv:1708.04552.
- Real, E.; Aggarwal, A.; Huang, Y.; Le, Q.V. Regularized evolution for image classifier architecture search. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; pp. 4780–4789.
- Dwibedi, D.; Misra, I.; Hebert, M. Cut, paste and learn: Surprisingly easy synthesis for instance detection. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 1301–1310.
- Dvornik, N.; Mairal, J.; Schmid, C. Modeling visual context is key to augmenting object detection datasets. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 364–380.
- Geirhos, R.; Rubisch, P.; Michaelis, C.; Bethge, M.; Wichmann, F.A.; Brendel, W. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. In Proceedings of the 7th International Conference on Learning Representations (ICLR), New Orleans, LA, USA, 6–9 May 2019.
- Tokozume, Y.; Ushiku, Y.; Harada, T. Between-class learning for image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 5486–5494.
- Takahashi, R.; Matsubara, T.; Uehara, K. Data augmentation using random image cropping and patching for deep CNNs. IEEE Trans. Circuits Syst. Video Technol. 2019, 30, 2917–2931.
- Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning (ICML), Lille, France, 6–11 July 2015; pp. 448–456.
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015.
- Loshchilov, I.; Hutter, F. Decoupled weight decay regularization. In Proceedings of the 7th International Conference on Learning Representations (ICLR), New Orleans, LA, USA, 6–9 May 2019.
- Shafahi, A.; Najibi, M.; Xu, Z.; Dickerson, J.; Davis, L.S.; Goldstein, T. Universal adversarial training. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; pp. 5636–5643.
- Wang, J.; Yang, Y.; Chen, Y.; Han, Y. LighterGAN: An Illumination Enhancement Method for Urban UAV Imagery. Remote Sens. 2021, 13, 1371.
- Awad, M.M.; De Jong, K. Optimization of spectral signatures selection using multi-objective genetic algorithms. In Proceedings of the IEEE Congress of Evolutionary Computation (CEC), New Orleans, LA, USA, 5–8 June 2011; pp. 1620–1627.
- Ding, Y.; Zhou, Y.; Zhu, Y.; Ye, Q.; Jiao, J. Selective sparse sampling for fine-grained image recognition. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019; pp. 6599–6608.
- Zheng, S.; Zhang, Y.; Liu, W.; Zou, Y. Improved image representation and sparse representation for image classification. Appl. Intell. 2020, 1–12.
- Van Etten, A. You only look twice: Rapid multi-scale object detection in satellite imagery. arXiv 2018, arXiv:1805.09512.
- Ghiasi, G.; Cui, Y.; Srinivas, A.; Qian, R.; Lin, T.-Y.; Cubuk, E.D.; Le, Q.V.; Zoph, B. Simple copy-paste is a strong data augmentation method for instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Virtual, 19–25 June 2021; pp. 2918–2928.
- Razakarivony, S.; Jurie, F. Vehicle detection in aerial imagery: A small target detection benchmark. J. Vis. Commun. Image Represent. 2016, 34, 187–203.
- Cheng, G.; Han, J. A survey on object detection in optical remote sensing images. ISPRS J. Photogramm. Remote Sens. 2016, 117, 11–28.
- Chen, P.; Liu, S.; Zhao, H.; Jia, J. Gridmask data augmentation. arXiv 2020, arXiv:2001.04086.
- Wang, J.; Jin, S.; Liu, W.; Liu, W.; Qian, C.; Luo, P. When human pose estimation meets robustness: Adversarial algorithms and benchmarks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Virtual, 19–25 June 2021; pp. 11855–11864.
Dataset | Feature Maps (Size) | Anchor Shapes (w, h)
---|---|---
VEDAI_512 | P3 (80 × 80) | (27, 11) (13, 27) (27, 18)
VEDAI_512 | P4 (40 × 40) | (22, 25) (17, 43) (46, 17)
VEDAI_512 | P5 (20 × 20) | (35, 32) (33, 78) (82, 70)
VEDAI_1024 | P3 (80 × 80) | (44, 18) (19, 41) (42, 26)
VEDAI_1024 | P4 (40 × 40) | (28, 39) (29, 58) (42, 41)
VEDAI_1024 | P5 (20 × 20) | (71, 29) (48, 147) (115, 93)
NWPU VHR-10 | P3 (80 × 80) | (35, 34) (25, 54) (47, 52)
NWPU VHR-10 | P4 (40 × 40) | (64, 40) (70, 63) (55, 92)
NWPU VHR-10 | P5 (20 × 20) | (87, 83) (152, 110) (210, 248)
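Anchor shapes like those in the table are commonly derived by clustering the ground-truth box dimensions of each dataset, as in the YOLO family. The sketch below is an illustrative k-means with an IoU-based assignment, not the authors' exact procedure:

```python
import numpy as np

def kmeans_anchors(wh, k=9, iters=100, seed=0):
    """Illustrative k-means over ground-truth box (w, h) pairs.

    wh -- N x 2 array of box widths and heights
    Uses 1 - IoU (boxes aligned at a common corner) as the distance,
    and returns the k anchors sorted by area.
    """
    rng = np.random.default_rng(seed)
    anchors = wh[rng.choice(len(wh), k, replace=False)].astype(float)
    for _ in range(iters):
        # Pairwise IoU between every box and every anchor
        inter = (np.minimum(wh[:, None, 0], anchors[None, :, 0]) *
                 np.minimum(wh[:, None, 1], anchors[None, :, 1]))
        union = wh[:, 0:1] * wh[:, 1:2] + anchors[:, 0] * anchors[:, 1] - inter
        assign = np.argmax(inter / union, axis=1)  # nearest anchor by IoU
        for j in range(k):                          # recompute cluster means
            if np.any(assign == j):
                anchors[j] = wh[assign == j].mean(axis=0)
    return anchors[np.argsort(anchors.prod(axis=1))]
```

The nine resulting anchors would then be split across the P3/P4/P5 feature maps from smallest to largest, three per level, matching the layout of the table.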
Class | Basic [21] | Cutout [37] | GridMask [56] | MixUp [12] | CutMix [11] | Mosaic [28] | Proposed |
---|---|---|---|---|---|---|---|
Car | 84.2 | 86.1 | 84.5 | 83.4 | 78.6 | 89.9 | 90.7 |
Truck | 63.1 | 59.6 | 59.2 | 60.8 | 57.3 | 60.1 | 61.3 |
Ship | 53.5 | 55.8 | 56.3 | 55.9 | 56.2 | 54.4 | 58.2 |
Tractor | 76.6 | 80.3 | 79.7 | 78.5 | 79.5 | 82.1 | 83.1 |
Camping | 71.3 | 69.9 | 68.5 | 67.7 | 68.1 | 66.7 | 72.1 |
Van | 55.9 | 52.8 | 49.7 | 53.2 | 56.9 | 57.6 | 57.4 |
Vehicle | 44.8 | 39.2 | 42.9 | 47.5 | 43.5 | 46.2 | 48.1 |
PickUp | 72.6 | 73.6 | 71.9 | 74.6 | 75.3 | 74.8 | 76.1 |
Plane | 90.1 | 86.5 | 86.7 | 86.7 | 85.8 | 84.6 | 93.9 |
mAP | 68.01 | 67.09 | 66.60 | 67.59 | 66.80 | 68.49 | 71.21 |
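Each table entry is a per-class average precision (AP), and mAP is their mean over classes. A minimal all-point AP computation might look like the following sketch; the inputs are hypothetical and assume detections were already matched to ground truth (e.g., at IoU ≥ 0.5):

```python
import numpy as np

def average_precision(scores, matched, n_gt):
    """All-point interpolated AP for one class.

    scores  -- detection confidences
    matched -- 1 if the detection matched an unmatched ground truth, else 0
    n_gt    -- number of ground-truth boxes of this class
    """
    order = np.argsort(scores)[::-1]              # sort by confidence, descending
    hits = np.asarray(matched, float)[order]
    tp = np.cumsum(hits)                          # true positives so far
    fp = np.cumsum(1.0 - hits)                    # false positives so far
    recall = tp / n_gt
    precision = tp / (tp + fp)
    # Integrate the precision envelope over recall
    ap, prev_r = 0.0, 0.0
    for r in recall:
        ap += (r - prev_r) * precision[recall >= r].max()
        prev_r = r
    return ap
```

mAP is then simply `np.mean([average_precision(...) for each class])`, which is how a single row of per-class figures collapses into the final row of the table.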
Class | Basic [21] | Cutout [37] | GridMask [56] | MixUp [12] | CutMix [11] | Mosaic [28] | Proposed |
---|---|---|---|---|---|---|---|
Car | 90.4 | 91.1 | 88.2 | 85.4 | 81.3 | 91.1 | 92.9 |
Truck | 63.6 | 62.5 | 63.5 | 63.7 | 59.1 | 48.3 | 65.5 |
Ship | 66.4 | 57.3 | 58.7 | 62.1 | 62.4 | 67.3 | 69.7 |
Tractor | 83.5 | 86.7 | 82.9 | 77.3 | 80.3 | 82.4 | 87.2 |
Camping | 68.7 | 69.3 | 69.2 | 69.8 | 70.8 | 69.5 | 73.1 |
Van | 75.9 | 68.9 | 62.8 | 69.3 | 73.2 | 78.2 | 84.4 |
Vehicle | 51.2 | 48.7 | 49.8 | 48.7 | 45.6 | 49.3 | 53.3 |
PickUp | 81.3 | 78.9 | 80.2 | 79.1 | 75.8 | 82.3 | 83.5 |
Plane | 93.6 | 88.4 | 89.3 | 83.2 | 78.6 | 74.0 | 81.8
mAP | 74.96 | 72.42 | 71.62 | 70.96 | 69.68 | 71.38 | 76.82 |
Class | Basic [21] | Cutout [37] | GridMask [56] | MixUp [12] | CutMix [11] | Mosaic [28] | Proposed |
---|---|---|---|---|---|---|---|
PL | 91.3 | 89.1 | 92.8 | 95.9 | 90.3 | 96.5 | 98.2 |
SH | 90.1 | 92.3 | 82.6 | 89.6 | 94.1 | 93.1 | 88.9 |
ST | 85.2 | 91.2 | 87.9 | 95.3 | 89.9 | 83.3 | 98.4 |
BD | 94.3 | 95.4 | 94.7 | 96.4 | 94.3 | 99.2 | 99.5 |
TC | 92.1 | 93.5 | 92.5 | 95.7 | 96.1 | 92.0 | 99.6
BC | 85.3 | 82.6 | 91.9 | 93.9 | 87.6 | 88.6 | 99.5 |
GT | 89.1 | 93.8 | 95.1 | 83.9 | 92.8 | 96.5 | 98.0 |
HA | 88.5 | 91.7 | 90.3 | 88.5 | 91.5 | 90.7 | 92.0 |
BR | 76.2 | 82.5 | 88.2 | 84.6 | 78.9 | 76.4 | 89.8 |
VH | 91.1 | 89.1 | 91.7 | 95.3 | 92.7 | 90.1 | 97.3 |
mAP | 88.32 | 90.12 | 90.77 | 91.91 | 90.82 | 90.64 | 96.12 |
Duplication Times | Gaussian Blur | VEDAI_512 | VEDAI_1024 | NWPU VHR-10
---|---|---|---|---
×1 | Yes | 70.82 | 76.45 | 95.85
×5 | Yes | 70.85 | 76.25 | 96.04
×10 | Yes | 70.93 | 76.30 | 95.68
×1 | No | 70.98 | 76.32 | 95.37
×5 | No | 71.21 | 76.82 | 96.08
×10 | No | 71.13 | 76.53 | 96.12
Zhao, B.; Wu, Y.; Guan, X.; Gao, L.; Zhang, B. An Improved Aggregated-Mosaic Method for the Sparse Object Detection of Remote Sensing Imagery. Remote Sens. 2021, 13, 2602. https://doi.org/10.3390/rs13132602