New Approach for Mapping Land Cover from Archive Grayscale Satellite Imagery
Abstract
1. Introduction
2. Materials and Methods
2.1. Study Area
2.2. Workflow
2.3. Data Collection
2.4. Data Preparation
2.5. Training Models for Colorization
2.5.1. Training Data for Colorization
- Patching: The grayscale and RGB Landsat images were first padded so that their dimensions were divisible by the specified patch size. The images could then be uniformly divided into 256 × 256 pixel patches, yielding manageable and consistent training samples (a sketch of this step follows the list).
- Data Augmentation: To increase the diversity of the training dataset, data augmentation techniques were employed. These transformations included rotations, translations, horizontal flips, shear transformations, and zooming.
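As referenced above, the padding and patching step reduces to a few array operations. The following is a minimal NumPy sketch; the function names, zero-padding choice, and array layout are illustrative rather than the authors' exact implementation.

```python
import numpy as np

PATCH = 256  # patch size used in this work

def pad_to_multiple(img: np.ndarray, patch: int = PATCH) -> np.ndarray:
    """Zero-pad height and width so both become divisible by the patch size."""
    h, w = img.shape[:2]
    pad_h = (patch - h % patch) % patch
    pad_w = (patch - w % patch) % patch
    pad = [(0, pad_h), (0, pad_w)] + [(0, 0)] * (img.ndim - 2)
    return np.pad(img, pad, mode="constant")

def to_patches(img: np.ndarray, patch: int = PATCH) -> np.ndarray:
    """Cut an H x W [x C] image into non-overlapping patch x patch tiles."""
    img = pad_to_multiple(img, patch)
    h, w = img.shape[:2]
    c = img.shape[2] if img.ndim == 3 else 1
    tiles = img.reshape(h // patch, patch, w // patch, patch, c)
    return tiles.transpose(0, 2, 1, 3, 4).reshape(-1, patch, patch, c)

# Paired samples: grayscale and RGB scenes are tiled in the same order, so
# to_patches(gray)[k] and to_patches(rgb)[k] always cover the same ground.
```

The listed augmentations (rotations, translations, flips, shears, zooms) can then be applied per patch pair with any standard augmentation library, provided the identical transform is applied to both images of a pair.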
2.5.2. Generative Adversarial Networks
2.5.3. Conditional Generative Adversarial Networks
2.5.4. Pix2Pix-cGANs
2.5.5. Attention Mechanism for Pix2Pix-cGANs
- Attention in the Generator: Integrating an attention mechanism into the generator allows it to focus selectively on regions that require more detailed processing. In particular, the attention mechanism helps the generator detect areas with complex features, such as boundaries or intricate textures, where accurate colorization is most challenging. An attentive residual network was also employed to guide the generator in preserving important low-level details during colorization.
- Attention in the Discriminator: Incorporating an attention mechanism into the discriminator enhances its ability to focus on the specific regions where a generated image may differ from the real one. This region-specific focus lets the discriminator provide more targeted feedback to the generator, improving the adversarial training process. A generic attention gate of this kind is sketched below.
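The exact attention design used in this work is described in the authors' companion paper; purely as an illustration, the sketch below shows a generic additive attention gate of the kind often placed on U-Net skip connections in a Pix2Pix generator. The class name, channel arguments, and the assumption that `g` and `x` share spatial size are all illustrative.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate: a coarse gating signal g re-weights the
    skip-connection features x, so the network attends to regions such as
    boundaries and fine textures where colorization errors concentrate."""

    def __init__(self, g_channels: int, x_channels: int, inter_channels: int):
        super().__init__()
        self.w_g = nn.Conv2d(g_channels, inter_channels, kernel_size=1)
        self.w_x = nn.Conv2d(x_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)

    def forward(self, g: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # g and x are assumed to have the same spatial resolution here
        attn = torch.sigmoid(self.psi(torch.relu(self.w_g(g) + self.w_x(x))))
        return x * attn  # per-pixel weights in [0, 1] gate the skip features
```

The same idea carries over to the discriminator, where the attention map concentrates the real/fake signal on the regions where generated and real images are most likely to differ.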
2.5.6. BigColor
2.5.7. ChromaGAN
2.5.8. iColoriT
2.5.9. Colorization Model Selection
2.5.10. Setting Up the Colorization Models
2.5.11. Evaluate Colorization Models
- PSNR: PSNR compares the maximum possible power of an image's signal to the power of the noise that degrades it, expressed in decibels on a logarithmic scale, as in Equation (19). To compute it, the mean squared error (MSE) between the original image I and the distorted image K is first calculated over the image's M rows and N columns, as in Equation (20):

$$\mathrm{PSNR} = 10\log_{10}\left(\frac{\mathrm{MAX}_I^2}{\mathrm{MSE}}\right) \qquad (19)$$

$$\mathrm{MSE} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl(I(i,j) - K(i,j)\bigr)^2 \qquad (20)$$

where $\mathrm{MAX}_I$ is the maximum possible pixel value (255 for 8-bit images).
- SSIM: SSIM assesses how similar two images are by analyzing their structural features, in line with human visual perception. For images x (generated) and y (ground truth), it uses their mean values ($\mu_x$, $\mu_y$), standard deviations ($\sigma_x$, $\sigma_y$), and covariance ($\sigma_{xy}$), as shown in Equation (21):

$$\mathrm{SSIM}(x,y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)} \qquad (21)$$

Here, $C_1$ and $C_2$ are small constants that prevent division by zero, typically $C_1 = (0.01L)^2$ and $C_2 = (0.03L)^2$, where L is the maximum pixel value. Both metrics are computed in the sketch after this list.
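Both metrics map directly onto standard library calls. A minimal sketch using NumPy and scikit-image, assuming 8-bit RGB arrays (variable names are illustrative):

```python
import numpy as np
from skimage.metrics import structural_similarity

def colorization_scores(gt: np.ndarray, pred: np.ndarray) -> tuple[float, float]:
    """PSNR (dB) and SSIM between ground-truth and colorized uint8 RGB images."""
    # Equation (20): MSE over all pixels, then Equation (19): PSNR.
    mse = np.mean((gt.astype(np.float64) - pred.astype(np.float64)) ** 2)
    psnr = 10.0 * np.log10(255.0 ** 2 / mse)
    # Equation (21): SSIM, computed per channel and averaged by scikit-image.
    ssim = structural_similarity(gt, pred, data_range=255, channel_axis=-1)
    return psnr, ssim
```

scikit-image also provides `peak_signal_noise_ratio`, which returns the same value as the explicit computation above.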
2.6. Training Models for Segmentation
2.6.1. Training Data for Segmentation
- Patching: The colorized Landsat images produced by the best-performing colorization model, together with the corresponding built-up masks, were first padded so that their dimensions were divisible by the specified patch size, and then divided into patches of 256 × 256 pixels.
- Data Augmentation: The same augmentation techniques as in Section 2.5.1 were applied: rotations, translations, horizontal flips, shear transformations, and zooming. Pairing and splitting the patches is sketched below.
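Continuing the patching sketch from Section 2.5.1, the image/mask pairs fed to the segmentation models can be assembled as follows. This is a sketch under stated assumptions: the scenes are pre-padded, the masks are integer label maps, and the 80/20 split and fixed seed are illustrative choices, not values from the paper.

```python
import numpy as np

def tiles(a: np.ndarray, p: int = 256) -> np.ndarray:
    """Non-overlapping p x p tiles of a pre-padded H x W [x C] array."""
    h, w = a.shape[:2]
    c = a.shape[2] if a.ndim == 3 else 1
    return (a.reshape(h // p, p, w // p, p, c)
             .transpose(0, 2, 1, 3, 4)
             .reshape(-1, p, p, c))

def make_pairs(colorized: np.ndarray, mask: np.ndarray, seed: int = 42):
    """Shuffle image/mask patch pairs together, then split train/validation."""
    x, y = tiles(colorized), tiles(mask)
    order = np.random.default_rng(seed).permutation(len(x))
    split = int(0.8 * len(x))  # assumed 80/20 split
    return (x[order[:split]], y[order[:split]]), (x[order[split:]], y[order[split:]])
```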
2.6.2. SegFormer
2.6.3. U-Net++
- Accurate Mode: Outputs from all segmentation branches are averaged to produce the final segmentation map.
- Fast Mode: The final segmentation map is taken from a single segmentation branch, which enables model pruning and reduces inference time. Both modes are sketched below.
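The two modes reduce to a simple choice over the deep-supervision heads. A minimal PyTorch sketch, where `branch_logits` stands for the per-branch segmentation maps (names are illustrative):

```python
import torch

def unetpp_output(branch_logits: list[torch.Tensor], fast: bool = False) -> torch.Tensor:
    """Fuse UNet++ deep-supervision outputs.
    Accurate mode: average the maps from all branches.
    Fast mode: keep a single branch so the deeper ones can be pruned away."""
    if fast:
        return branch_logits[0]  # e.g. the shallowest branch after pruning
    return torch.stack(branch_logits).mean(dim=0)
```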
2.6.4. DeepLabv3+
2.6.5. FPN
- Bottom-up pathway: Feed-forward computation of the backbone ConvNet, computing a feature hierarchy at several scales with a scaling step of 2.
- Top-down pathway and lateral connections: Higher-resolution features are generated by upsampling coarser, semantically stronger feature maps from higher pyramid levels and merging them with the corresponding bottom-up features via lateral connections, as in Equation (29):

$$P_l = \mathrm{Conv}_{3\times3}\bigl(\mathrm{Upsample}_{\times 2}(P_{l+1}) + \mathrm{Conv}_{1\times1}(C_l)\bigr) \qquad (29)$$

where $C_l$ is the bottom-up feature map at level l and $P_{l+1}$ is the pyramid map one level coarser. This merge is implemented in the sketch after this list.
- Prediction heads: A shared prediction head is attached to each pyramid level.
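Equation (29)'s top-down merge is straightforward to express in code. A minimal PyTorch sketch; the 256 output channels follow the original FPN paper, while the class and layer names are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopDownFPN(nn.Module):
    """Builds pyramid maps P2..P5 from backbone maps C2..C5 (Equation (29))."""

    def __init__(self, in_channels: list[int], out_channels: int = 256):
        super().__init__()
        self.lateral = nn.ModuleList(nn.Conv2d(c, out_channels, 1) for c in in_channels)
        self.smooth = nn.ModuleList(nn.Conv2d(out_channels, out_channels, 3, padding=1)
                                    for _ in in_channels)

    def forward(self, feats: list[torch.Tensor]) -> list[torch.Tensor]:
        # feats: [C2, C3, C4, C5], highest resolution first
        laterals = [lat(c) for lat, c in zip(self.lateral, feats)]
        for i in range(len(laterals) - 2, -1, -1):  # coarse-to-fine merge
            laterals[i] = laterals[i] + F.interpolate(
                laterals[i + 1], scale_factor=2, mode="nearest")
        return [sm(p) for sm, p in zip(self.smooth, laterals)]
```

A shared prediction head can then be applied to each returned pyramid level, as described above.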
2.6.6. Segmentation Model Selection
2.6.7. Setting Up Segmentation Models
2.6.8. Evaluating Segmentation Models
- Accuracy: Accuracy measures the proportion of correctly classified pixels across all classes in the image. It is the ratio of true predictions (both positive and negative) to the total number of pixels, as expressed in Equation (31), where TP denotes true positives, TN true negatives, FP false positives, and FN false negatives:

$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \qquad (31)$$

- Precision: Precision quantifies the accuracy of positive predictions, i.e., the ratio of correctly predicted positive pixels to all predicted positive pixels, as in Equation (32):

$$\mathrm{Precision} = \frac{TP}{TP + FP} \qquad (32)$$

- Recall: Recall, or sensitivity, measures the model's ability to identify all relevant positive pixels. It is the ratio of correctly predicted positive pixels to all actual positive pixels, as in Equation (33):

$$\mathrm{Recall} = \frac{TP}{TP + FN} \qquad (33)$$

- F1 Score: The F1 score is the harmonic mean of precision and recall, providing a balanced measure of performance that is particularly useful when the class distribution is imbalanced, as in Equation (34):

$$F_1 = 2 \cdot \frac{\mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \qquad (34)$$

- IoU: Intersection over Union (IoU), also known as the Jaccard index, measures the overlap between the predicted segmentation mask and the ground truth by dividing the area of their intersection by the area of their union, where A is the predicted pixel set and B the ground-truth pixel set for a given class, as in Equation (35):

$$\mathrm{IoU}(A, B) = \frac{|A \cap B|}{|A \cup B|} \qquad (35)$$

- mIoU: Mean Intersection over Union (mIoU) averages the IoU scores over all classes, giving an overall measure of segmentation performance. For N classes, $A_i$ and $B_i$ denote the predicted and ground-truth pixel sets for the i-th class, as in Equation (36):

$$\mathrm{mIoU} = \frac{1}{N}\sum_{i=1}^{N}\frac{|A_i \cap B_i|}{|A_i \cup B_i|} \qquad (36)$$

A sketch computing these metrics follows this list.
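All of these metrics can be derived from per-class pixel counts, as in the NumPy sketch below (assuming integer label maps; function and key names are illustrative):

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray, n_classes: int) -> dict:
    """Pixel-level metrics of Equations (31)-(36) for integer label maps."""
    pred, gt = pred.ravel(), gt.ravel()
    eps = 1e-12  # guards against division by zero for empty classes
    precisions, recalls, f1s, ious = [], [], [], []
    for c in range(n_classes):
        tp = np.sum((pred == c) & (gt == c))
        fp = np.sum((pred == c) & (gt != c))
        fn = np.sum((pred != c) & (gt == c))
        p = tp / (tp + fp + eps)                     # Equation (32)
        r = tp / (tp + fn + eps)                     # Equation (33)
        precisions.append(p)
        recalls.append(r)
        f1s.append(2 * p * r / (p + r + eps))        # Equation (34)
        ious.append(tp / (tp + fp + fn + eps))       # Equation (35): |A∩B|/|A∪B|
    return {
        "accuracy": float(np.mean(pred == gt)),      # Equation (31)
        "precision": precisions, "recall": recalls, "f1": f1s, "iou": ious,
        "miou": float(np.mean(ious)),                # Equation (36)
    }
```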
3. Results
3.1. Evaluation of Colorization Methods
3.2. Evaluation of Segmentation Models
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
The following abbreviations are used in this manuscript:

Abbreviation | Definition |
---|---|
Att Pix2Pix | Attention-based Pix2Pix |
cGAN | Conditional Generative Adversarial Network |
FPN | Feature Pyramid Network |
GAN | Generative Adversarial Network |
GEE | Google Earth Engine |
HCP | Haut Commissariat au Plan |
INSEE | Institut National de la Statistique et des Études Économiques |
IoU | Intersection over Union |
LULC | Land Use and Land Cover |
mIoU | Mean Intersection over Union |
MSE | Mean Squared Error |
PSNR | Peak Signal-to-Noise Ratio |
RGB | Red, Green, Blue |
RGPH | Recensement Général de la Population et de l’Habitat |
SSIM | Structural Similarity Index |
TOA | Top of Atmosphere |
UNet++ | Enhanced U-Net |
USGS | United States Geological Survey |
References
- Jensen, J.R. Remote Sensing of the Environment: An Earth Resource Perspective 2/e; Pearson Education India: Tamil Nadu, India, 2009. [Google Scholar]
- Lillesand, T.; Kiefer, R.W.; Chipman, J. Remote Sensing and Image Interpretation; John Wiley & Sons: Hoboken, NJ, USA, 2015. [Google Scholar]
- Library of Congress. History of Remote Sensing. 2024. Available online: https://guides.loc.gov/geospatial/computer-cartography-archive/history-of-remote-sensing (accessed on 13 February 2025).
- Williams, D.L.; Goward, S.; Arvidson, T. Landsat. Photogramm. Eng. Remote Sens. 2006, 72, 1171–1178. [Google Scholar] [CrossRef]
- National Reconnaissance Office. The CORONA Story. 1995. Available online: https://www.nro.gov/Portals/65/documents/history/csnr/corona/The%20CORONA%20Story.pdf (accessed on 10 February 2025).
- Drusch, M.; Del Bello, U.; Carlier, S.; Colin, O.; Fernandez, V.; Gascon, F.; Hoersch, B.; Isola, C.; Laberinti, P.; Martimort, P.; et al. Sentinel-2: ESA’s optical high-resolution mission for GMES operational services. Remote Sens. Environ. 2012, 120, 25–36. [Google Scholar] [CrossRef]
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. Commun. ACM 2020, 63, 139–144. [Google Scholar] [CrossRef]
- Mirza, M.; Osindero, S. Conditional generative adversarial nets. arXiv 2014, arXiv:1411.1784. [Google Scholar]
- Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1125–1134. [Google Scholar]
- Vitoria, P.; Raad, L.; Ballester, C. Chromagan: Adversarial picture colorization with semantic class distribution. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Snowmass Village, CO, USA, 1–5 March 2020; pp. 2445–2454. [Google Scholar]
- Kim, G.; Kang, K.; Kim, S.; Lee, H.; Kim, S.; Kim, J.; Baek, S.H.; Cho, S. Bigcolor: Colorization using a generative color prior for natural images. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 350–366. [Google Scholar]
- Yun, J.; Lee, S.; Park, M.; Choo, J. iColoriT: Towards propagating local hints to the right region in interactive colorization by leveraging vision transformer. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 2–7 January 2023; pp. 1787–1796. [Google Scholar]
- Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125. [Google Scholar]
- Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818. [Google Scholar]
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III 18. Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241. [Google Scholar]
- Xie, E.; Wang, W.; Yu, Z.; Anandkumar, A.; Alvarez, J.M.; Luo, P. SegFormer: Simple and efficient design for semantic segmentation with transformers. Adv. Neural Inf. Process. Syst. 2021, 34, 12077–12090. [Google Scholar]
- Lee, C. Artificial Colorization of Grayscale Satellite Imagery via GANs: Part 1. 2017. Available online: https://medium.com/the-downlinq/artificial-colorization-of-grayscale-satellite-imagery-via-gans-part-1-79c8d137e97b (accessed on 20 January 2025).
- Gravey, M.; Rasera, L.G.; Mariethoz, G. Analogue-based colorization of remote sensing images using textural information. ISPRS J. Photogramm. Remote Sens. 2019, 147, 242–254. [Google Scholar] [CrossRef]
- Liu, H.; Fu, Z.; Han, J.; Shao, L.; Liu, H. Single satellite imagery simultaneous super-resolution and colorization using multi-task deep neural networks. J. Vis. Commun. Image Represent. 2018, 53, 20–30. [Google Scholar] [CrossRef]
- Agapiou, A. Land cover mapping from colorized CORONA archived greyscale satellite data and feature extraction classification. Land 2021, 10, 771. [Google Scholar] [CrossRef]
- Farella, E.M.; Malek, S.; Remondino, F. Colorizing the past: Deep learning for the automatic colorization of historical aerial images. J. Imaging 2022, 8, 269. [Google Scholar] [CrossRef] [PubMed]
- Shamsaliei, S.; Gundersen, O.E.; Alfredsen, K.T.; Halleraker, J.H.; Foldvik, A. Highlighting Challenges of State-of-the-Art Semantic Segmentation with HAIR: A Dataset of Historical Aerial Images. J. Data Centric Mach. Learn. Res. 2024, 8, 1–31. [Google Scholar]
- Anwar, S.; Tahir, M.; Li, C.; Mian, A.; Khan, F.S.; Muzaffar, A.W. Image colorization: A survey and dataset. Inf. Fusion 2025, 114, 102720. [Google Scholar] [CrossRef]
- Haut Commissariat au Plan (HCP). Résultats du Recensement Général de la Population et de l’Habitation 2024 (RGPH 2024). 2024. Available online: https://resultats2024.rgphapps.ma/ (accessed on 22 February 2025).
- Commune d’Agadir. Plan d’Action Communal d’Agadir 2022–2027. Technical Report, Commune d’Agadir. 2022. Available online: https://agadir2027.ma/wp-content/uploads/2023/04/Version-finale-du-Plan-dAction-Communal-2022-2027-1.pdf (accessed on 22 February 2025).
- United States Geological Survey (USGS). Impact of the 1960 Agadir Earthquake. 2023. Available online: https://earthquake.usgs.gov/earthquakes/eventpage/iscgem878424/impact (accessed on 22 February 2025).
- Institut National de la Statistique et des Études Économiques (INSEE). Population Data for Les Sables-d'Olonne 2021. 2025. Available online: https://www.insee.fr/fr/statistiques/2011101?geo=COM-85194 (accessed on 23 February 2025).
- Institut National de la Statistique et des Études Économiques (INSEE). Population Data for Vendée 2021. 2025. Available online: https://www.insee.fr/fr/statistiques/2011101?geo=DEP-85 (accessed on 23 February 2025).
- Météo France. Fiche du Poste 85060002—Château-d’Olonne. 2024. Available online: https://donneespubliques.meteofrance.fr/metadonnees_publiques/fiches/fiche_85060002.pdf (accessed on 24 February 2025).
- Vendée Globe. Vendée Globe Official Website. 2025. Available online: https://www.vendeeglobe.org/ (accessed on 25 February 2025).
- Kanan, C.; Cottrell, G.W. Color-to-grayscale: Does the method matter in image recognition? PLoS ONE 2012, 7, e29740. [Google Scholar] [CrossRef] [PubMed]
- Bourke, P. Histogram Matching. 2011. Available online: https://paulbourke.net/miscellaneous/equalisation/ (accessed on 13 December 2024).
- Simou, M.R.; Maanan, M.; Loulad, S.; Benayad, M.; Maanan, M.; Rhinane, H. An Improved Pix2Pix Approach for Colorizing Historical Grayscale Satellite Imagery. 2025; submitted. [Google Scholar]
- Shafiq, H.; Lee, B. Transforming color: A novel image colorization method. Electronics 2024, 13, 2511. [Google Scholar] [CrossRef]
- Tran, D.T.; Nguyen, N.D.H.; Pham, T.T.; Tran, P.N.; Vu, T.D.T.; Nguyen, C.T.; Dang-Ngoc, H.; Dang, D.N.M. SwinTExCo: Exemplar-based video colorization using Swin Transformer. Expert Syst. Appl. 2025, 260, 125437. [Google Scholar] [CrossRef]
- Bovik, A.C. The Essential Guide to Image Processing; Academic Press: Cambridge, MA, USA, 2009. [Google Scholar]
- Lin, X.; Cheng, Y.; Chen, G.; Chen, W.; Chen, R.; Gao, D.; Zhang, Y.; Wu, Y. Semantic segmentation of China’s coastal wetlands based on Sentinel-2 and Segformer. Remote Sens. 2023, 15, 3714. [Google Scholar] [CrossRef]
- Feng, X.; Wei, C.; Xue, X.; Zhang, Q.; Liu, X. RST-DeepLabv3+: Multi-Scale Attention for Tailings Pond Identification with DeepLab. Remote Sens. 2025, 17, 411. [Google Scholar] [CrossRef]
- Şengül, G.S.; Sertel, E. Automatic Building Extraction From VHR Remote Sensing Images Using Geoai Methods. In Proceedings of the IGARSS 2024-2024 IEEE International Geoscience and Remote Sensing Symposium, Athens, Greece, 7–12 July 2024; IEEE: New York, NY, USA, 2024; pp. 8109–8112. [Google Scholar]
- de Carvalho, O.L.F.; de Carvalho Júnior, O.A.; Silva, C.R.e.; de Albuquerque, A.O.; Santana, N.C.; Borges, D.L.; Gomes, R.A.T.; Guimarães, R.F. Panoptic segmentation meets remote sensing. Remote Sens. 2022, 14, 965. [Google Scholar] [CrossRef]
- Everingham, M.; Van Gool, L.; Williams, C.K.; Winn, J.; Zisserman, A. The pascal visual object classes (voc) challenge. Int. J. Comput. Vis. 2010, 88, 303–338. [Google Scholar] [CrossRef]
- Fisher, J.R.; Acosta, E.A.; Dennedy-Frank, P.J.; Kroeger, T.; Boucher, T.M. Impact of satellite imagery spatial resolution on land use classification accuracy and modeled water quality. Remote Sens. Ecol. Conserv. 2018, 4, 137–149. [Google Scholar] [CrossRef]
- Sumbul, G.; Charfuelan, M.; Demir, B.; Markl, V. Bigearthnet: A large-scale benchmark archive for remote sensing image understanding. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing symposium, Yokohama, Japan, 28 July–2 August 2019; IEEE: New York, NY, USA, 2019; pp. 5901–5904. [Google Scholar]
Archive grayscale satellite imagery used in this study:

City | System | Entity ID | Resolution | Date | Source |
---|---|---|---|---|---|
Agadir | KH-4B | DS1039-1011DA002 | 1.8 to 2.75 m | 23 February 1967 | USGS Earth Explorer |
Les Sables-d’Olonne | KH-9 | DZB1210-500097L005001 | 6 to 9 m | 2 July 1975 | USGS Earth Explorer |
Landsat reference imagery used for colorization training:

City | Type | Spatial Resolution | Date | Source |
---|---|---|---|---|
Greater Agadir | Landsat 5TM | 30 m | 11 February 1986 | GEE |
Vendée | Landsat 5TM | 30 m | 2 June 1985 | GEE |
Colorization results (PSNR in dB; SSIM) for the two study areas:

Model | PSNR (Greater Agadir) | SSIM (Greater Agadir) | PSNR (Vendée) | SSIM (Vendée) |
---|---|---|---|---|
Att Pix2Pix | 27.18 | 0.95 | 27.72 | 0.96 |
iColoriT | 23.87 | 0.88 | 25.33 | 0.84 |
BigColor | 11.92 | 0.56 | 12.11 | 0.58 |
ChromaGAN | 23.44 | 0.85 | 22.64 | 0.79 |