Applications of Machine Learning Algorithms in Remote Sensing

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Earth Sciences".

Deadline for manuscript submissions: closed (20 March 2025) | Viewed by 3881

Special Issue Editors


Guest Editor
Department of Geography, Simon Fraser University, 8888 University Drive, Burnaby, BC V5A 1S6, Canada
Interests: statistics; mathematics; GIS; remote sensing; image processing; machine learning; algorithm optimization

Guest Editor
Department of Geography, Simon Fraser University, 8888 University Drive, Burnaby, BC V5A 1S6, Canada
Interests: remote sensing; environmental change; grassland–wetland ecosystems; precision agriculture; estuarine and coastal dynamics; remote sensing big data

Special Issue Information

Dear Colleagues,

Deep learning techniques have been adopted in a wide range of applications, including the analysis of remote sensing images, owing to the growing availability of large-scale datasets, efficient training approaches, and high-performance computing hardware. In recent years, deep learning models have become a powerful tool for analyzing satellite imagery in tasks such as classification, clustering, forecasting, and regression. However, applying machine learning methods developed for computer vision to remote sensing data that are large, multivariate, noisy, and irregularly sampled presents new challenges. This Special Issue will publish review and research papers on cutting-edge CNN- and vision transformer-based deep learning methods, architectures, and frameworks for remote sensing applications, with an emphasis on tasks that address these challenges.

Potential topics of interest include, but are not limited to:

  • Shallow and deep learning remote sensing image interpretation and analysis (image classification, pan-sharpening, image enhancement, object detection, semantic segmentation, and change detection);
  • Graph, adversarial, unsupervised, semi-supervised, self-supervised, active, and transfer learning for dealing with limited and/or low-quality data;
  • Knowledge acquisition of deep learning models for remote sensing imagery;
  • Novel benchmark datasets for remote sensing image analysis;
  • Applications of vision transformers (ViTs) in remote sensing.

Dr. Ali Jamali
Dr. Bing Lu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • deep learning
  • CNNs
  • vision transformer
  • geography

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

13 pages, 7413 KiB  
Article
A Study on Enhancing the Visual Fidelity of Aviation Simulators Using WGAN-GP for Remote Sensing Image Color Correction
by Chanho Lee, Hyukjin Kwon, Hanseon Choi, Jonggeun Choi, Ilkyun Lee, Byungkyoo Kim, Jisoo Jang and Dongkyoo Shin
Appl. Sci. 2024, 14(20), 9227; https://doi.org/10.3390/app14209227 - 11 Oct 2024
Viewed by 1156
Abstract
When implementing outside-the-window (OTW) visuals in aviation tactical simulators, maintaining terrain image color consistency is critical for enhancing pilot immersion and focus. However, due to various environmental factors, inconsistent image colors in terrain can cause visual confusion and diminish realism. To address these issues, a color correction technique based on a Wasserstein Generative Adversarial Network with Gradient Penalty (WGAN-GP) is proposed. The proposed WGAN-GP model utilizes multi-scale feature extraction and Wasserstein distance to effectively measure and adjust the color distribution difference between the input image and the reference image. This approach can preserve the texture and structural characteristics of the image while maintaining color consistency. In particular, by converting Bands 2, 3, and 4 of the BigEarthNet-S2 dataset into RGB images as the reference image and preprocessing the reference image to serve as the input image, it is demonstrated that the proposed WGAN-GP model can handle large-scale remote sensing images containing various lighting conditions and color differences. The experimental results showed that the proposed WGAN-GP model outperformed traditional methods, such as histogram matching and color transfer, and was effective in reflecting the style of the reference image to the target image while maintaining the structural elements of the target image during the training process. Quantitative analysis demonstrated that the mid-stage model achieved a PSNR of 28.93 dB and an SSIM of 0.7116, which significantly outperforms traditional methods. Furthermore, the LPIPS score was reduced to 0.3978, indicating improved perceptual similarity. This approach can contribute to improving the visual elements of the simulator to enhance pilot immersion and has the potential to significantly reduce time and costs compared to the manual methods currently used by the Republic of Korea Air Force.
(This article belongs to the Special Issue Applications of Machine Learning Algorithms in Remote Sensing)
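
The defining term in the WGAN-GP objective described in the abstract is the gradient penalty, which pushes the critic's gradient norm toward 1 on points interpolated between real and generated images. The sketch below is a minimal, generic PyTorch illustration of that term and the resulting critic loss; it is not the authors' implementation, and the names `critic`, `real`, `fake`, and `lambda_gp` are placeholders (the paper's penalty weight is not stated in the abstract; 10 is the common default from the original WGAN-GP formulation).

```python
import torch
import torch.nn as nn

def gradient_penalty(critic: nn.Module, real: torch.Tensor, fake: torch.Tensor) -> torch.Tensor:
    """Penalize deviation of the critic's gradient norm from 1 on samples
    interpolated between real and generated images (the "GP" in WGAN-GP)."""
    batch_size = real.size(0)
    # One random mixing coefficient per sample, broadcast over channels and spatial dims.
    eps = torch.rand(batch_size, 1, 1, 1, device=real.device)
    mixed = (eps * real + (1.0 - eps) * fake).requires_grad_(True)

    scores = critic(mixed)
    grads = torch.autograd.grad(
        outputs=scores.sum(),   # scalar output, so grad() returns a single tensor
        inputs=mixed,
        create_graph=True,      # keep the graph so the penalty itself is differentiable
    )[0]
    grad_norm = grads.view(batch_size, -1).norm(2, dim=1)
    return ((grad_norm - 1.0) ** 2).mean()

# Critic loss for one training step (hypothetical variable names):
# loss_critic = critic(fake).mean() - critic(real).mean() \
#               + lambda_gp * gradient_penalty(critic, real, fake)
```

In this formulation the critic is trained to maximize the Wasserstein-distance estimate between real and generated color distributions while the penalty keeps it approximately 1-Lipschitz, which is what lets the generator adjust color statistics without destroying texture and structure.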

25 pages, 16838 KiB  
Article
Optimizing Mobile Vision Transformers for Land Cover Classification
by Papia F. Rozario, Ravi Gadgil, Junsu Lee, Rahul Gomes, Paige Keller, Yiheng Liu, Gabriel Sipos, Grace McDonnell, Westin Impola and Joseph Rudolph
Appl. Sci. 2024, 14(13), 5920; https://doi.org/10.3390/app14135920 - 6 Jul 2024
Cited by 1 | Viewed by 1967
Abstract
Image classification in remote sensing and geographic information system (GIS) data containing various land cover classes is essential for efficient and sustainable land use estimation and other tasks like object detection, localization, and segmentation. Deep learning (DL) techniques have shown tremendous potential in the GIS domain. While convolutional neural networks (CNNs) have dominated image analysis, transformers have proven to be a unifying solution for several AI-based processing pipelines. Vision transformers (ViTs) can have comparable and, in some cases, better accuracy than a CNN. However, they suffer from a significant drawback associated with the excessive use of training parameters. Using trainable parameters generously can have multiple advantages ranging from addressing model scalability to explainability. This can have a significant impact on model deployment in edge devices with limited resources, such as drones. In this research, we explore, without using pre-trained weights, how the inherent structure of vision transformers behaves with custom modifications. To verify our proposed approach, these architectures are trained on multiple land cover datasets. Experiments reveal that a combination of lightweight convolutional layers, including ShuffleNet, along with depthwise separable convolutions and average pooling can reduce the trainable parameters by 17.85% and yet achieve higher accuracy than the base mobile vision transformer (MViT). It is also observed that utilizing a combination of convolution layers along with multi-headed self-attention layers in MViT variants provides better performance for capturing local and global features, unlike the standalone ViT architecture, which utilizes almost 95% more parameters than the proposed MViT variant.
(This article belongs to the Special Issue Applications of Machine Learning Algorithms in Remote Sensing)
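
To make the parameter savings mentioned in the abstract concrete, the sketch below contrasts a standard 3×3 convolution with a depthwise separable equivalent in PyTorch. It is a generic illustration of the building block, not the authors' MViT variant; the channel sizes (64 → 128) and the helper names are arbitrary choices for the example.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """A 3x3 depthwise convolution (one filter per input channel) followed by a
    1x1 pointwise convolution that mixes channels."""
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

def count_params(module: nn.Module) -> int:
    return sum(p.numel() for p in module.parameters())

standard = nn.Conv2d(64, 128, kernel_size=3, padding=1, bias=False)
separable = DepthwiseSeparableConv(64, 128)
print(count_params(standard))   # 64 * 128 * 3 * 3 = 73,728 weights
print(count_params(separable))  # 64 * 3 * 3 + 64 * 128 + 2 * 128 (BN) = 9,024 weights
```

This roughly eight-fold reduction per block illustrates the kind of saving that, combined with ShuffleNet-style layers and average pooling, underlies the 17.85% cut in trainable parameters reported in the abstract.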
