Deep Learning and Computer Vision in Remote Sensing-III

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: 31 October 2024

Special Issue Editors


Pouya Jafarzadeh
Guest Editor Assistant
Department of Computing, University of Turku, Turku, Finland
Interests: machine learning; deep learning; computer vision; data analysis; pose estimation

Farshad Farahnakian
Guest Editor Assistant
Department of Computing, University of Turku, Turku, Finland
Interests: artificial intelligence; machine learning; deep learning; human-computer interaction

Special Issue Information

Dear Colleagues,

Deep learning (DL) has been successfully applied to a wide range of computer vision tasks, achieving state-of-the-art performance. For this reason, most data fusion architectures for computer vision are now built on DL. In addition, DL holds great potential for processing multi-sensor data, which typically contain rich information in their raw form but whose processing is sensitive to training time and model size.

We are pleased to announce this Part III Special Issue, which follows on from Parts I and II and focuses on deep learning and computer vision methods for remote sensing. This Special Issue will give researchers the opportunity to present recent advances in deep learning, with a specific focus on three main computer vision tasks: classification, detection, and segmentation. We welcome contributions from experts in academia and industry working in the fields of deep learning, computer vision, data science, and remote sensing.

The scope of this Special Issue includes, but is not limited to, the following topics:

  • Satellite image processing and analysis based on deep learning;
  • Deep learning for object detection, image classification, and semantic and instance segmentation;
  • Deep learning for remote sensing scene understanding and classification;
  • Transfer learning and deep reinforcement learning for remote sensing;
  • Supervised and unsupervised representation learning for remote sensing environments;
  • Applications of deep learning and computer vision in remote sensing.

Dr. Fahimeh Farahnakian
Prof. Dr. Jukka Heikkonen
Guest Editors

Pouya Jafarzadeh
Farshad Farahnakian
Guest Editor Assistants

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Computer vision
  • Deep learning
  • Machine learning
  • Remote sensing
  • Sensor fusion
  • Autonomous systems

Published Papers (3 papers)

Research

35 pages, 18681 KiB  
Article
Deep Learning Test Platform for Maritime Applications: Development of the eM/S Salama Unmanned Surface Vessel and Its Remote Operations Center for Sensor Data Collection and Algorithm Development
by Juha Kalliovaara, Tero Jokela, Mehdi Asadi, Amin Majd, Juhani Hallio, Jani Auranen, Mika Seppänen, Ari Putkonen, Juho Koskinen, Tommi Tuomola, Reza Mohammadi Moghaddam and Jarkko Paavola
Remote Sens. 2024, 16(9), 1545; https://doi.org/10.3390/rs16091545 - 26 Apr 2024
Abstract
In response to the global megatrends of digitalization and transportation automation, Turku University of Applied Sciences has developed a test platform to advance autonomous maritime operations. This platform includes the unmanned surface vessel eM/S Salama and a remote operations center, both of which are detailed in this article. The article highlights the importance of collecting and annotating multi-modal sensor data from the vessel. These data are vital for developing deep learning algorithms that enhance situational awareness and guide autonomous navigation. By securing relevant data from maritime environments, we aim to enhance the autonomous features of unmanned surface vessels using deep learning techniques. The annotated sensor data will be made available for further research through open access. An image dataset, which includes synthetically generated weather conditions, is published alongside this article. While existing maritime datasets predominantly rely on RGB cameras, our work underscores the need for multi-modal data to advance autonomous capabilities in maritime applications.
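The synthetically generated weather conditions mentioned above can be approximated in image space. Below is a minimal, illustrative sketch of a fog-style augmentation for RGB frames; the uniform alpha-blend, function name, and intensity parameter are assumptions for illustration, not the authors' actual generation pipeline.

```python
# Minimal, illustrative fog augmentation for an RGB frame. The uniform
# alpha-blend below is an assumption for illustration, not the dataset's
# actual weather-generation pipeline.
import numpy as np
from PIL import Image

def add_fog(image: Image.Image, intensity: float = 0.4) -> Image.Image:
    """Blend a light-grey fog layer into an RGB image.

    intensity in [0, 1]: 0 leaves the frame unchanged, 1 is a full white-out.
    """
    rgb = np.asarray(image.convert("RGB"), dtype=np.float32) / 255.0
    fog = np.full_like(rgb, 0.9)  # light-grey fog colour
    out = (1.0 - intensity) * rgb + intensity * fog
    return Image.fromarray((out * 255.0).astype(np.uint8))

# Usage (hypothetical file name): foggy = add_fog(Image.open("frame_0001.png"))
```

Physically grounded fog models additionally attenuate contrast with scene depth; the uniform blend above is the simplest possible variant.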
24 pages, 6324 KiB  
Article
A Bio-Inspired Visual Perception Transformer for Cross-Domain Semantic Segmentation of High-Resolution Remote Sensing Images
by Xinyao Wang, Haitao Wang, Yuqian Jing, Xianming Yang and Jianbo Chu
Remote Sens. 2024, 16(9), 1514; https://doi.org/10.3390/rs16091514 - 25 Apr 2024
Abstract
Pixel-level classification of very-high-resolution images is a crucial yet challenging task in remote sensing. While transformers have demonstrated effectiveness in capturing dependencies, their tendency to partition images into patches may restrict their applicability to highly detailed remote sensing images. To extract latent contextual semantic information from high-resolution remote sensing images, we proposed a gaze–saccade transformer (GSV-Trans) with visual perceptual attention. GSV-Trans incorporates a visual perceptual attention (VPA) mechanism that dynamically allocates computational resources based on the semantic complexity of the image. The VPA mechanism includes both gaze attention and eye movement attention, enabling the model to focus on the most critical parts of the image and acquire competitive semantic information. Additionally, to capture contextual semantic information across different levels in the image, we designed an inter-layer short-term visual memory module with bidirectional affinity propagation to guide attention allocation. Furthermore, we introduced a dual-branch pseudo-label module (DBPL) that imposes pixel-level and category-level semantic constraints on both gaze and saccade branches. DBPL encourages the model to extract domain-invariant features and align semantic information across different domains in the feature space. Extensive experiments on multiple pixel-level classification benchmarks confirm the effectiveness and superiority of our method over the state of the art.
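As an aid to intuition, the sketch below shows one way an attention layer can budget computation by content: a learned score selects the most "complex" patches, and full attention is computed only for those queries. This is a loose, illustrative analogue of the gaze/saccade allocation described in the abstract, not the GSV-Trans implementation; the scoring head, keep ratio, and top-k routing are all assumptions.

```python
# Minimal, illustrative sketch of content-dependent attention allocation,
# loosely analogous to the gaze/saccade idea above. The complexity-scoring
# head and top-k routing are assumptions, not the GSV-Trans architecture.
import torch
import torch.nn as nn

class ComplexityGatedAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4, keep_ratio: float = 0.5):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # learned per-patch "complexity" score
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.keep_ratio = keep_ratio

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_patches, dim) patch embeddings
        b, n, d = x.shape
        k = max(1, int(n * self.keep_ratio))
        scores = self.score(x).squeeze(-1)         # (b, n)
        idx = scores.topk(k, dim=1).indices        # most "complex" patches
        idx = idx.unsqueeze(-1).expand(-1, -1, d)  # (b, k, d)
        queries = x.gather(1, idx)
        # Full attention only for the selected queries; every patch is a key.
        attended, _ = self.attn(queries, x, x)
        out = x.clone()
        out.scatter_(1, idx, attended)             # write results back in place
        return out

# Usage: y = ComplexityGatedAttention(dim=64)(torch.randn(2, 196, 64))
```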

23 pages, 40077 KiB  
Article
MineCam: Application of Combined Remote Sensing and Machine Learning for Segmentation and Change Detection of Mining Areas Enabling Multi-Purpose Monitoring
by Katarzyna Jabłońska, Marcin Maksymowicz, Dariusz Tanajewski, Wojciech Kaczan, Maciej Zięba and Marek Wilgucki
Remote Sens. 2024, 16(6), 955; https://doi.org/10.3390/rs16060955 - 08 Mar 2024
Abstract
Our study addresses the need for universal monitoring solutions given the diverse environmental impacts of surface mining operations. We present a solution combining remote sensing and machine learning techniques, utilizing a dataset of over 2000 satellite images annotated with ten distinct labels indicating mining area components. We tested various approaches to develop comprehensive yet universal machine learning models for mining area segmentation. This involved considering different types of mines, raw materials, and geographical locations. We evaluated multiple satellite data set combinations to determine optimal outcomes. The results suggest that radar and multispectral data fusion did not significantly improve the models’ performance, and the addition of further channels led to the degradation of the metrics. Despite variations in mine type or extracted material, the models’ effectiveness remained within an Intersection over Union value range of 0.65–0.75. Further, in this research, we conducted a detailed visual analysis of the models’ outcomes to identify areas requiring additional attention, contributing to the discourse on effective mining area monitoring and management methodologies. The visual examination of models’ outputs provides insights for future model enhancement and highlights unique segmentation challenges within mining areas.
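For reference, the Intersection over Union metric cited above compares predicted and ground-truth masks class by class. A minimal sketch follows; the integer class-id mask encoding and the variable names are assumptions, with the ten-class setting taken from the dataset description.

```python
# Minimal sketch of per-class Intersection over Union (IoU) for semantic
# segmentation, the metric behind the 0.65-0.75 range reported above.
# Integer class-id masks are an assumed encoding.
import numpy as np

def per_class_iou(pred: np.ndarray, target: np.ndarray, num_classes: int):
    """Return one IoU score per class id in [0, num_classes)."""
    ious = []
    for c in range(num_classes):
        pred_c, target_c = pred == c, target == c
        intersection = np.logical_and(pred_c, target_c).sum()
        union = np.logical_or(pred_c, target_c).sum()
        ious.append(intersection / union if union > 0 else float("nan"))
    return ious

# Usage with the ten labels described above (integer arrays of shape H x W):
# scores = per_class_iou(predicted_mask, ground_truth_mask, num_classes=10)
```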
