Convolutional Neural Networks Application in Remote Sensing, Volume II

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "AI in Imaging".

Deadline for manuscript submissions: closed (31 March 2024) | Viewed by 1682

Special Issue Editors


Guest Editor
Information Technologies Institute, Centre for Research and Technology Hellas, 57001 Thessaloniki, Greece
Interests: artificial intelligence; computer vision; image and video processing; pattern recognition

Special Issue Information

Dear Colleagues,

Remote sensing is the process of acquiring information about, and monitoring the physical characteristics of, an area using specialized satellite- and aircraft-based sensors. It is crucial in a plethora of applications, ranging from detecting land use and land cover to observing climate and urban change, and from monitoring forest fires to assessing crop production and damage. Remote sensing began in the 1960s and 1970s with the development of image processing for satellite imagery, but it has benefited greatly from advances in machine and deep learning. Today, convolutional neural networks can process remote sensing images at high speed and achieve high accuracy and robustness in several applications. However, technological breakthroughs are still needed to enhance the performance of remote sensing applications and facilitate their use in real life.

This Special Issue of the Journal of Imaging aims to feature reports of recent advances in remote sensing technology; novel deep network architectures that enhance the accuracy and robustness of remote sensing applications, such as object segmentation and change detection; and innovative real-time applications that can be deployed in real life to cover important needs in remote sensing.

Dr. Dimitrios Konstantinidis
Dr. Kosmas Dimitropoulos
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing
  • multispectral imaging
  • deep networks
  • convolutional neural networks
  • computer vision
  • object detection and segmentation
  • change detection
  • scene understanding
  • attention-based networks
  • image fusion

Published Papers (1 paper)

Research

19 pages, 7045 KiB  
Article
An Effective Hyperspectral Image Classification Network Based on Multi-Head Self-Attention and Spectral-Coordinate Attention
by Minghua Zhang, Yuxia Duan, Wei Song, Haibin Mei and Qi He
J. Imaging 2023, 9(7), 141; https://doi.org/10.3390/jimaging9070141 - 10 Jul 2023
Cited by 1 | Viewed by 1267
Abstract
In hyperspectral image (HSI) classification, convolutional neural networks (CNNs) have been widely employed and achieved promising performance. However, CNN-based methods face difficulties in achieving both accurate and efficient HSI classification due to their limited receptive fields and deep architectures. To alleviate these limitations, we propose an effective HSI classification network based on multi-head self-attention and spectral-coordinate attention (MSSCA). Specifically, we first reduce the redundant spectral information of HSI by using a point-wise convolution network (PCN) to enhance the discriminability and robustness of the network. Then, we capture long-range dependencies among HSI pixels by introducing a modified multi-head self-attention (M-MHSA) model, which applies a down-sampling operation to alleviate the computational burden caused by the dot-product operation of MHSA. Furthermore, to enhance the performance of the proposed method, we introduce a lightweight spectral-coordinate attention fusion module. This module combines spectral attention (SA) and coordinate attention (CA) to enable the network to better weight the importance of useful bands and more accurately localize target objects. Importantly, our method achieves these improvements without increasing the complexity or computational cost of the network. To demonstrate the effectiveness of our proposed method, experiments were conducted on three classic HSI datasets: Indian Pines (IP), Pavia University (PU), and Salinas. The results show that our proposed method is highly competitive in terms of both efficiency and accuracy when compared to existing methods.
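
The abstract describes a three-stage pipeline: a point-wise (1×1) convolution that compresses redundant spectral bands, multi-head self-attention whose dot-product cost is reduced by down-sampling, and a fused spectral/coordinate attention module. The sketch below shows one way such a pipeline could be assembled in PyTorch; it is a minimal illustration, not the authors' implementation, and every module name, channel width, head count, and pooling factor is an assumption made for this example.

```python
# Minimal sketch of an MSSCA-style classifier (illustrative only; module names,
# channel widths, head count, and pooling factor are assumptions, not values
# taken from the paper).
import torch
import torch.nn as nn
import torch.nn.functional as F


class PointwiseConv(nn.Module):
    """1x1 convolution that compresses redundant spectral bands (the 'PCN' step)."""

    def __init__(self, in_bands: int, out_channels: int):
        super().__init__()
        self.conv = nn.Conv2d(in_bands, out_channels, kernel_size=1)
        self.bn = nn.BatchNorm2d(out_channels)

    def forward(self, x):
        return F.relu(self.bn(self.conv(x)))


class DownsampledMHSA(nn.Module):
    """Self-attention with spatially pooled keys/values, one way to realize a
    down-sampled 'M-MHSA' that cheapens the dot-product."""

    def __init__(self, channels: int, heads: int = 4, pool: int = 2):
        super().__init__()
        self.pool = nn.AvgPool2d(pool)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x):
        b, c, h, w = x.shape
        q = x.flatten(2).transpose(1, 2)              # (B, H*W, C) queries
        kv = self.pool(x).flatten(2).transpose(1, 2)  # fewer key/value tokens
        out, _ = self.attn(q, kv, kv)
        return out.transpose(1, 2).reshape(b, c, h, w) + x  # residual connection


class SpectralCoordinateAttention(nn.Module):
    """Fuses channel-wise (spectral) attention with a simplified coordinate
    attention that gates the H and W axes separately."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        mid = max(channels // reduction, 4)
        self.spectral = nn.Sequential(              # squeeze-excitation style
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, mid, 1), nn.ReLU(),
            nn.Conv2d(mid, channels, 1), nn.Sigmoid())
        self.squeeze = nn.Conv2d(channels, mid, 1)  # shared across both axes
        self.gate_h = nn.Conv2d(mid, channels, 1)
        self.gate_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):
        x = x * self.spectral(x)               # re-weight useful bands
        h_pool = x.mean(dim=3, keepdim=True)   # (B, C, H, 1)
        w_pool = x.mean(dim=2, keepdim=True)   # (B, C, 1, W)
        gh = torch.sigmoid(self.gate_h(F.relu(self.squeeze(h_pool))))
        gw = torch.sigmoid(self.gate_w(F.relu(self.squeeze(w_pool))))
        return x * gh * gw                     # position-aware re-weighting


class MSSCASketch(nn.Module):
    """PCN -> down-sampled MHSA -> spectral-coordinate attention -> classifier."""

    def __init__(self, in_bands: int, n_classes: int, channels: int = 64):
        super().__init__()
        self.pcn = PointwiseConv(in_bands, channels)
        self.mhsa = DownsampledMHSA(channels)
        self.sca = SpectralCoordinateAttention(channels)
        self.head = nn.Linear(channels, n_classes)

    def forward(self, x):                       # x: (B, bands, H, W) patch
        x = self.sca(self.mhsa(self.pcn(x)))
        return self.head(x.mean(dim=(2, 3)))    # global average pool + linear


# Example: a batch of 13x13 patches with 200 bands and 16 classes
# (Indian Pines-like dimensions):
# logits = MSSCASketch(in_bands=200, n_classes=16)(torch.randn(2, 200, 13, 13))
```

Pooling only the keys and values keeps the output at full spatial resolution while shrinking the attention matrix from (HW)×(HW) to (HW)×(hw) entries, which is the usual efficiency trade-off in down-sampled self-attention variants.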