Advances and Challenges on Multisource Remote Sensing Image Fusion: Datasets, New Technologies, and Applications

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (15 March 2024) | Viewed by 15086

Special Issue Editors

Chinese Academy of Surveying and Mapping (CASM), Beijing, China
Interests: remote sensing image processing; photogrammetry; 3D reconstruction; light field

Guest Editor
Institute of Photogrammetry and Remote Sensing, Chinese Academy of Surveying and Mapping (CASM), Beijing 100036, China
Interests: computer vision; photogrammetry; remote sensing; optical satellite image processing

Guest Editor
School of Computer Science & Informatics, Cardiff University, Cardiff CF10 3EU, UK
Interests: non-photorealistic rendering; neural style transfer; performance evaluation; cellular automata; shape analysis; cultural heritage; medical image analysis; facial analysis

Guest Editor
School of Computer Science and Informatics, Cardiff University, Queen’s Buildings, 5 The Parade, Roath, Cardiff CF24 3AA, UK
Interests: statistical signal/image processing; Bayesian data analysis; remote sensing; synthetic aperture radar imaging; inverse problems; convex/non-convex optimisation; Markov chain Monte Carlo methods; nonlinear time series modelling

Guest Editor
School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430072, China
Interests: remote sensing; feature extraction; image segmentation; object detection

Special Issue Information

Dear Colleagues,

Today, a wide variety of remote sensing image types is available, including optical/near-infrared satellite images, SAR images, LiDAR intensity/depth images, thermal images, vector map images, etc. Each source encodes one aspect of information, so fusing different sources makes it possible to exploit their complementary advantages. For example, SAR data, with its high geopositioning accuracy, can be used to geometrically orient optical satellite images without additional control points. Image fusion has been researched for decades, and a range of techniques related to photogrammetry, computer vision, and artificial intelligence have been developed. However, accurate multisource remote sensing image fusion is still challenging, due to: (1) large nonlinear intensity differences between the different sources of images; (2) ineffective rotation-handling strategies; (3) the absence of large-scale, multisource remote sensing image datasets that include various types of images; and (4) a lack of deep learning frameworks oriented to remote sensing images that consider spectral characteristics, geoscientific prior knowledge, etc. The topics of this Special Issue include, but are not limited to:

  • Large-scale, multisource remote sensing image datasets;
  • Multisource remote sensing data fusion methods that are robust to scale, rotation, and nonlinear radiation change;
  • Machine learning (including deep learning, multitask learning, and transfer learning) for multisource remotely sensed images;
  • Multisource image fusion for remote sensing applications, including, but not limited to, geometric orientation, 3D reconstruction, change detection, segmentation, etc.

Dr. Yuxuan Liu
Prof. Dr. Li Zhang
Prof. Dr. Paul Rosin
Dr. Oktay Karakus
Dr. Zhihua Hu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing image data
  • multisource remote sensing image processing
  • data fusion
  • deep learning
  • remote sensing image-based application
  • computer vision

Published Papers (13 papers)


Research

23 pages, 6833 KiB  
Article
Fine Calibration Method for Laser Altimeter Pointing and Ranging Based on Dense Control Points
by Chaopeng Xu, Fan Mo, Xiao Wang, Xiaomeng Yang, Junfeng Xie and Zhen Wen
Remote Sens. 2024, 16(4), 611; https://doi.org/10.3390/rs16040611 - 06 Feb 2024
Viewed by 639
Abstract
Satellite laser altimetry technology, a novel space remote sensing technique, actively acquires high-precision elevation information about the Earth’s surface. However, the accuracy of laser altimetry can be compromised by alterations in the satellite-ground environment, thermal dynamics, and cosmic radiation. These factors may induce subtle variations in the installation and internal structure of the spaceborne laser altimeter on the satellite platform, diminishing measurement precision. In-orbit calibration is thus essential to enhancing the precision of laser altimetry. Through collaborative calculations between satellite and ground stations, we can derive correction parameters for laser pointing and ranging, substantially improving the accuracy of satellite laser altimetry. This paper introduces a sophisticated calibration method for laser altimeter pointing and ranging that utilizes dense control points. The approach interpolates discrete ground control point data into continuous simulated terrain using empirical Bayesian kriging, subsequently categorizing the data for either pointing or ranging calibration according to their respective functions. Following this, a series of calibration experiments are conducted, prioritizing “pointing” followed by “ranging” and continuing until the variation in the ranging calibration results falls below a predefined threshold. We employed experimental data from ground control points (GCPs) in Xinjiang and Inner Mongolia, China, to calibrate the GaoFen-7 (GF-7) satellite Beam 2 laser altimeter as per the outlined method. The calibration outcomes were then benchmarked against those gleaned from infrared laser detector calibration, revealing disparities of 1.12 arcseconds in the pointing angle and 2 cm in the ranging correction value. After validation with ground control points, the measurement accuracy was refined to 0.15 m. 
The experiments confirm that the proposed calibration method offers accuracy comparable to that of infrared laser detector calibration and can facilitate the updating of 1:10,000 topographic maps utilizing stereo optical imagery. Furthermore, this method is more cost-effective and demands fewer personnel for ground control point collection, enhancing resource efficiency compared to traditional infrared laser detector calibration. The proposed approach surpasses terrain-matching limitations when calibrating laser ranging parameters and presents a viable solution for achieving frequent and high-precision in-orbit calibration of laser altimetry satellites. Full article
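The “pointing first, then ranging” iteration described above can be sketched as a simple alternating loop; the solver callbacks, tolerance, and toy fixed point below are hypothetical stand-ins for the paper's actual adjustment computations:

```python
def alternate_calibration(solve_pointing, solve_ranging, tol=1e-4, max_iter=50):
    """Alternate pointing and ranging calibration until the change in the
    ranging correction falls below tol (hypothetical interface)."""
    pointing = 0.0
    ranging = 0.0
    prev_ranging = float("inf")
    for _ in range(max_iter):
        pointing = solve_pointing(ranging)   # calibrate pointing first
        ranging = solve_ranging(pointing)    # then calibrate ranging
        if abs(ranging - prev_ranging) < tol:
            break                            # ranging variation below threshold
        prev_ranging = ranging
    return pointing, ranging

# Toy solvers whose alternation converges to pointing = 4/3, ranging = 2/3:
p, r = alternate_calibration(lambda rng: 0.5 * rng + 1.0, lambda pnt: 0.5 * pnt)
```

The loop terminates on the same criterion the abstract describes: the change in the ranging result between successive rounds dropping below a preset threshold.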

22 pages, 22679 KiB  
Article
SatellStitch: Satellite Imagery-Assisted UAV Image Seamless Stitching for Emergency Response without GCP and GNSS
by Zijun Wei, Chaozhen Lan, Qing Xu, Longhao Wang, Tian Gao, Fushan Yao and Huitai Hou
Remote Sens. 2024, 16(2), 309; https://doi.org/10.3390/rs16020309 - 11 Jan 2024
Viewed by 854
Abstract
Rapidly stitching unmanned aerial vehicle (UAV) imagery to produce high-resolution fast-stitch maps is key to UAV emergency mapping. However, common problems such as gaps and ghosting in image stitching remain challenging and directly affect the visual interpretation value of the imagery product. Inspired by the data characteristics of high-precision satellite images with rich access and geographic coordinates, a seamless stitching method is proposed for emergency response without the support of ground control points (GCPs) and global navigation satellite systems (GNSS). This method aims to eliminate stitching traces and solve the problem of stitching error accumulation. Firstly, satellite images are introduced to support image alignment and geographic coordinate acquisition simultaneously using matching relationships. Then a dynamic contour point set is constructed to locate the stitching region and adaptively extract the fused region of interest (FROI). Finally, the gradient weight cost map of the FROI image is computed and the Laplacian pyramid fusion rule is improved to achieve seamless production of the fast-stitch image map with geolocation information. Experimental results indicate that the method is well adapted to two representative sets of UAV images. Compared with the Laplacian pyramid fusion algorithm, the peak signal-to-noise ratio (PSNR) of the image stitching results can be improved by 31.73% on average, and the mutual information (MI) can be improved by 19.98% on average. With no reliance on GCPs or GNSS support, fast-stitch image maps are more robust in harsh environments, making them ideal for emergency mapping and security applications. Full article
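For context, the baseline that the improved fusion rule builds on is standard Laplacian pyramid blending; a minimal NumPy sketch of that baseline (not the paper's gradient-weighted variant) is:

```python
import numpy as np

KERNEL = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # 5-tap Gaussian

def _blur(img):
    # separable 5-tap blur with edge replication
    h, w = img.shape
    p = np.pad(img, 2, mode="edge")
    p = sum(KERNEL[i] * p[i:i + h, :] for i in range(5))
    return sum(KERNEL[j] * p[:, j:j + w] for j in range(5))

def _down(img):
    return _blur(img)[::2, ::2]

def _up(img, shape):
    big = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return _blur(big[:shape[0], :shape[1]])

def _laplacian(img, levels):
    pyr = []
    for _ in range(levels):
        small = _down(img)
        pyr.append(img - _up(small, img.shape))  # band-pass residual
        img = small
    pyr.append(img)                              # low-pass base
    return pyr

def pyramid_blend(a, b, mask, levels=3):
    """Blend images a and b (same shape) with a soft mask in [0, 1]."""
    la, lb = _laplacian(a, levels), _laplacian(b, levels)
    gm = [mask]
    for _ in range(levels):
        gm.append(_down(gm[-1]))                 # Gaussian pyramid of the mask
    fused = [m * x + (1.0 - m) * y for x, y, m in zip(la, lb, gm)]
    out = fused[-1]
    for lap in reversed(fused[:-1]):
        out = _up(out, lap.shape) + lap          # collapse the pyramid
    return out

a = np.zeros((64, 64))
b = np.ones((64, 64))
mask = np.zeros((64, 64))
mask[:, :32] = 1.0                               # take a on the left, b on the right
blended = pyramid_blend(a, b, mask)
```

Blending per pyramid level is what removes the hard seam: near the mask boundary the output transitions smoothly between the two images instead of showing a visible stitching trace.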

19 pages, 26173 KiB  
Article
Multi-Modal Image Registration Based on Phase Exponent Differences of the Gaussian Pyramid
by Xiaohu Yan, Yihang Cao, Yijun Yang and Yongxiang Yao
Remote Sens. 2023, 15(24), 5764; https://doi.org/10.3390/rs15245764 - 17 Dec 2023
Viewed by 961
Abstract
In multi-modal images (MMI), the differences in their imaging mechanisms lead to large signal-to-noise ratio differences, which means that the geometric invariance and the matching accuracy of matching algorithms often cannot be balanced. Therefore, how to weaken the signal-to-noise interference of MMI, maintain good scale and rotation invariance, and obtain high-precision matching correspondences becomes a challenge for multimodal remote sensing image matching. To this end, a lightweight MMI alignment method based on the phase exponent of differences of the Gaussian pyramid (PEDoG) is proposed, which takes into account the phase exponent differences of the Gaussian pyramid with normalized filtering, i.e., it achieves high-precision identification of matching correspondence points while maintaining the geometric invariance of multi-modal matching. The proposed PEDoG method consists of three main parts; first, the phase consistency model is introduced into the differential Gaussian pyramid to construct a new phase index. Then, three types of MMI (multi-temporal image, infrared–optical image, and map–optical image) are selected as the experimental datasets and compared with advanced matching methods, and the results show that the NCM (number of correct matches) of the PEDoG method displays a minimum improvement of 3.3 times compared with the other methods, and the average RMSE (root mean square error) is 1.69 pixels, which is the lowest value among all the matching methods. Finally, the alignment results of the image are shown in tessellated mosaic mode, which shows that the feature edges of the image are connected consistently without interlacing or artifacts. It can be seen that the proposed PEDoG method can realize high-precision alignment while taking geometric invariance into account. Full article

27 pages, 36179 KiB  
Article
Tree Species Classification from Airborne Hyperspectral Images Using Spatial–Spectral Network
by Chengchao Hou, Zhengjun Liu, Yiming Chen, Shuo Wang and Aixia Liu
Remote Sens. 2023, 15(24), 5679; https://doi.org/10.3390/rs15245679 - 10 Dec 2023
Viewed by 1311
Abstract
Tree species identification is a critical component of forest resource monitoring, and timely and accurate acquisition of tree species information is the basis for sustainable forest management and resource assessment. Airborne hyperspectral images have rich spectral and spatial information and can detect subtle differences among tree species. To fully utilize the advantages of hyperspectral images, we propose a double-branch spatial–spectral joint network based on the SimAM attention mechanism for tree species classification. This method achieved high classification accuracy on three tree species datasets (93.31% OA value obtained in the TEF dataset, 95.7% in the Tiegang Reservoir dataset, and 98.82% in the Xiongan New Area dataset). The network consists of three parts: spectral branch, spatial branch, and feature fusion, and both branches make full use of the spatial–spectral information of pixels to avoid the loss of information. In addition, the SimAM attention mechanism is added to the feature fusion part of the network to refine the features to extract more critical features for high-precision tree species classification. To validate the robustness of the proposed method, we compared this method with other advanced classification methods through a series of experiments. The results show that: (1) Compared with traditional machine learning methods (SVM, RF) and other state-of-the-art deep learning methods, the proposed method achieved the highest classification accuracy in all three tree datasets. (2) Combining spatial and spectral information and incorporating the SimAM attention mechanism into the network can improve the classification accuracy of tree species, and the classification performance of the double-branch network is better than that of the single-branch network. 
(3) The proposed method obtains the highest accuracy under different training sample proportions, and its accuracy does not change significantly as the training sample proportion varies, indicating stability. This study demonstrates that high-precision tree species classification can be achieved using airborne hyperspectral images and the methods proposed in this study, which have great potential for investigating and monitoring forest resources. Full article
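The SimAM attention mechanism added to the fusion branch is parameter-free; a NumPy sketch following the published SimAM energy formulation (the regularization value `lam` below is an assumed default, not necessarily the one used in this paper) is:

```python
import numpy as np

def simam(x, lam=1e-4):
    """Parameter-free SimAM attention over a (C, H, W) feature map.

    Each pixel is weighted by a sigmoid of the inverse of its closed-form
    minimal energy, so distinctive pixels are emphasized without adding
    any learnable parameters.
    """
    _, h, w = x.shape
    n = h * w - 1
    mu = x.mean(axis=(1, 2), keepdims=True)       # per-channel mean
    d = (x - mu) ** 2                             # squared deviation per pixel
    v = d.sum(axis=(1, 2), keepdims=True) / n     # per-channel variance estimate
    e_inv = d / (4.0 * (v + lam)) + 0.5           # inverse minimal energy
    return x * (1.0 / (1.0 + np.exp(-e_inv)))     # sigmoid gating

feat = np.random.default_rng(0).normal(size=(4, 8, 8))
refined = simam(feat)
```

The gate lies strictly between 0 and 1, so the refined map never amplifies a feature; it only re-scales responses toward the most informative pixels, which is why it can be dropped into the fusion stage without adding parameters.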

22 pages, 19803 KiB  
Article
MSISR-STF: Spatiotemporal Fusion via Multilevel Single-Image Super-Resolution
by Xiongwei Zheng, Ruyi Feng, Junqing Fan, Wei Han, Shengnan Yu and Jia Chen
Remote Sens. 2023, 15(24), 5675; https://doi.org/10.3390/rs15245675 - 08 Dec 2023
Cited by 1 | Viewed by 830
Abstract
Due to technological limitations and budget constraints, spatiotemporal image fusion uses the complementarity of high temporal–low spatial resolution (HTLS) and high spatial–low temporal resolution (HSLT) data to obtain high temporal and spatial resolution (HTHS) fusion data, which can effectively satisfy the demand for HTHS data. However, some existing spatiotemporal image fusion models ignore the large difference in spatial resolution, which yields worse results for spatial information under the same conditions. Based on the flexible spatiotemporal data fusion (FSDAF) framework, this paper proposes a multilevel single-image super-resolution (SISR) method to solve this issue under the large difference in spatial resolution. The following are the advantages of the proposed method. First, multilevel super-resolution (SR) can effectively avoid the limitation of a single SR method for a large spatial resolution difference. In addition, the issue of noise accumulation caused by multilevel SR can be alleviated by learning-based SR (the cross-scale internal graph neural network (IGNN)) and then interpolation-based SR (the thin plate spline (TPS)). Finally, we add the reference information to the super-resolution, which can effectively control the noise generation. This method has been subjected to comprehensive experimentation using two authentic datasets, affirming that our proposed method surpasses the current state-of-the-art spatiotemporal image fusion methodologies in terms of performance and effectiveness. Full article

22 pages, 6549 KiB  
Article
Intelligent Segmentation and Change Detection of Dams Based on UAV Remote Sensing Images
by Haimeng Zhao, Xiaojian Yin, Anran Li, Huimin Zhang, Danqing Pan, Jinjin Pan, Jianfang Zhu, Mingchun Wang, Shanlin Sun and Qiang Wang
Remote Sens. 2023, 15(23), 5526; https://doi.org/10.3390/rs15235526 - 27 Nov 2023
Viewed by 727
Abstract
Guilin is situated in the southern part of China with abundant rainfall. There are 137 reservoirs, which are widely used for irrigation, flood control, water supply and power generation. However, there has been a lack of systematic and full-coverage remote sensing monitoring of reservoir dams for a long time. According to the latest public literature, high-resolution unmanned aerial vehicle (UAV) remote sensing has not been used to detect changes on the reservoir dams of Guilin. In this paper, an intelligent segmentation change detection method is proposed to complete the detection of dam change based on multitemporal high-resolution UAV remote sensing data. Firstly, an enhanced GrabCut that fuses the linear spectral clustering (LSC) superpixel mapping matrix and the Sobel edge operator is proposed to extract the features of reservoir dams. The edge operator is introduced into GrabCut to redefine the new energy function’s smooth item, which makes the segmentation results of enhanced GrabCut more robust and accurate. Then, through image registration, the multitemporal dam extraction results are unified to the same coordinate system to complete the difference operation, and finally the dam change results are obtained. The experimental results of two representative reservoir dams in Guilin show that the proposed method can achieve a very high accuracy of change detection, which is an important reference for related research. Full article
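The Sobel edge operator that the enhanced GrabCut fuses into its smoothness term reduces to a gradient-magnitude map; a plain NumPy sketch of just that edge component (the max-normalization is an assumption, not the paper's exact energy redefinition) is:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def _conv3(img, k):
    # 3x3 correlation with edge replication, output same size as input
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(k[i, j] * p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def sobel_magnitude(img):
    gx, gy = _conv3(img, SOBEL_X), _conv3(img, SOBEL_Y)
    mag = np.hypot(gx, gy)
    return mag / mag.max() if mag.max() > 0 else mag   # normalize to [0, 1]

# A vertical step edge: the response concentrates on the two boundary columns.
step = np.zeros((8, 8))
step[:, 4:] = 1.0
edges = sobel_magnitude(step)
```

Feeding such a map into the smoothness term makes label changes cheap across strong edges and expensive inside flat regions, which is the intuition behind redefining the energy function's smooth item.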

22 pages, 18446 KiB  
Article
AEFormer: Zoom Camera Enables Remote Sensing Super-Resolution via Aligned and Enhanced Attention
by Ziming Tu, Xiubin Yang, Xingyu Tang, Tingting Xu, Xi He, Penglin Liu, Li Jiang and Zongqiang Fu
Remote Sens. 2023, 15(22), 5409; https://doi.org/10.3390/rs15225409 - 18 Nov 2023
Cited by 2 | Viewed by 880
Abstract
Reference-based super-resolution (RefSR) has achieved remarkable progress and shows promising potential applications in the field of remote sensing. However, previous studies heavily rely on an existing high-resolution reference image (Ref), which is hard to obtain in remote sensing practice. To address this issue, a novel structure based on a zoom camera structure (ZCS) together with a novel RefSR network, namely AEFormer, is proposed. The proposed ZCS provides a more accessible way to obtain a valid Ref than traditional fixed-length camera imaging or external datasets. The physics-enabled network, AEFormer, is proposed to super-resolve low-resolution images (LR). With reasonably aligned and enhanced attention, AEFormer alleviates the misalignment problem, which is challenging yet common in RefSR tasks. This contributes to maximizing the utilization of spatial information across the whole image and better fusion between Ref and LR. Extensive experimental results on the benchmark dataset RRSSRD and real-world prototype data both verify the effectiveness of the proposed method. Hopefully, ZCS and AEFormer can serve as a model for future remote sensing imagery super-resolution. Full article

19 pages, 11970 KiB  
Article
A Novel Shipyard Production State Monitoring Method Based on Satellite Remote Sensing Images
by Wanrou Qin, Yan Song, Haitian Zhu, Xinli Yu and Yuhong Tu
Remote Sens. 2023, 15(20), 4958; https://doi.org/10.3390/rs15204958 - 13 Oct 2023
Viewed by 956
Abstract
Monitoring the shipyard production state is of great significance to shipbuilding industry development and coastal resource utilization. In this article, satellite remote sensing (RS) data are utilized for the first time to monitor the shipyard production state dynamically and efficiently, which can make up for the traditional production state data collection mode. According to the imaging characteristics of optical remote sensing images of shipyards in different production states, these characteristics are analyzed to establish reliable production state evidence. Firstly, to characterize the production state in optical remote sensing data, the high-level semantic information in the shipyard is extracted by transfer learning with convolutional neural networks (CNNs). Secondly, for conflicting evidence from the core sites of the shipyard, an improved DS evidence fusion method is proposed, which constructs a correlation metric to measure the degree of conflict in evidence and designs a similarity metric to measure the credibility of evidence. Thirdly, the weight of all the evidence is calculated according to the similarity metric to correct the conflicting evidence. Because iteration brings the fusion result closer to the desired result, an iterative scheme is introduced to correct the fusion result. This method can effectively resolve evidence conflicts and improve the monitoring accuracy of the shipyard production state. In the experiments, the Yangtze River Delta and the Bohai Rim regions are selected to verify that the proposed method can accurately recognize the shipyard production state, which reveals the potential of satellite RS images in shipyard production state monitoring and also provides a new research perspective for other industrial production state monitoring. Full article
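The starting point for the improved DS evidence fusion is Dempster's classical combination rule; a minimal sketch for two mass functions over singleton hypotheses (omitting compound focal elements and the paper's conflict-weighting corrections) is:

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions with singleton focal elements."""
    hypotheses = set(m1) | set(m2)
    # Mass assigned to incompatible hypothesis pairs is the conflict K.
    conflict = sum(m1.get(a, 0.0) * m2.get(b, 0.0)
                   for a in hypotheses for b in hypotheses if a != b)
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    norm = 1.0 - conflict
    return {h: m1.get(h, 0.0) * m2.get(h, 0.0) / norm for h in hypotheses}

# Two sources of evidence that mostly agree the shipyard is active:
m_a = {"active": 0.8, "idle": 0.2}
m_b = {"active": 0.7, "idle": 0.3}
fused = dempster_combine(m_a, m_b)
```

Agreement reinforces itself: the fused belief in "active" exceeds either source alone. Highly conflicting sources, where this rule behaves badly, are exactly the case the article's correlation and similarity metrics are designed to handle.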

24 pages, 9298 KiB  
Article
High-Quality Object Detection Method for UAV Images Based on Improved DINO and Masked Image Modeling
by Wanjie Lu, Chaoyang Niu, Chaozhen Lan, Wei Liu, Shiju Wang, Junming Yu and Tao Hu
Remote Sens. 2023, 15(19), 4740; https://doi.org/10.3390/rs15194740 - 28 Sep 2023
Cited by 1 | Viewed by 1468
Abstract
The extensive application of unmanned aerial vehicle (UAV) technology has increased academic interest in object detection algorithms for UAV images. Nevertheless, these algorithms present issues such as low accuracy, inadequate stability, and insufficient pre-training model utilization. Therefore, a high-quality object detection method based on a performance-improved object detection baseline and pretraining algorithm is proposed. To fully extract global and local feature information, a hybrid backbone based on the combination of convolutional neural network (CNN) and vision transformer (ViT) is constructed using an excellent object detection method as the baseline network for feature extraction. This backbone is then combined with a more stable and generalizable optimizer to obtain high-quality object detection results. Because the domain gap between natural and UAV aerial photography scenes hinders the application of mainstream pre-training models to downstream UAV image object detection tasks, this study applies the masked image modeling (MIM) method to aerospace remote sensing datasets with a lower volume than mainstream natural scene datasets to produce a pre-training model for the proposed method and further improve UAV image object detection accuracy. Experimental results for two UAV imagery datasets show that the proposed method achieves better object detection performance compared to state-of-the-art (SOTA) methods with fewer pre-training datasets and parameters. Full article

20 pages, 27024 KiB  
Article
Vehicle Detection in Multisource Remote Sensing Images Based on Edge-Preserving Super-Resolution Reconstruction
by Hong Zhu, Yanan Lv, Jian Meng, Yuxuan Liu, Liuru Hu, Jiaqi Yao and Xionghanxuan Lu
Remote Sens. 2023, 15(17), 4281; https://doi.org/10.3390/rs15174281 - 31 Aug 2023
Viewed by 1003
Abstract
As an essential technology for intelligent transportation management and traffic risk prevention and control, vehicle detection plays a significant role in the comprehensive evaluation of the intelligent transportation system. However, limited by the small size of vehicles in satellite remote sensing images and the lack of sufficient texture features, its detection performance is far from satisfactory. Because the edge structure of small objects becomes unclear in the super-resolution (SR) reconstruction process, deep convolutional neural networks are no longer effective in extracting small-scale feature information. Therefore, a vehicle detection network based on remote sensing images (VDNET-RSI) is constructed in this article. VDNET-RSI contains a two-stage convolutional neural network for vehicle detection. In the first stage, partial convolution-based padding combined with an improved Local Implicit Image Function (LIIF) is adopted to reconstruct high-resolution remote sensing images. Then, the network associated with the results from the first stage is used in the second stage for vehicle detection. In the second stage, a super-resolution module, detection head module, and convolutional block attention module are incorporated into the object detection framework to improve the performance of small object detection in large-scale remote sensing images. The publicly available DIOR dataset is selected as the experimental dataset to compare the performance of VDNET-RSI with that of state-of-the-art models in vehicle detection based on satellite remote sensing images. The experimental results demonstrate that the overall precision of VDNET-RSI reached 62.9%, about 6.3%, 38.6%, and 39.8% higher than that of YOLOv5, Faster R-CNN, and FCOS, respectively. The conclusions of this paper can provide a theoretical basis and key technical support for the development of intelligent transportation. Full article

22 pages, 23048 KiB  
Article
A Novel Adaptively Optimized PCNN Model for Hyperspectral Image Sharpening
by Xinyu Xu, Xiaojun Li, Yikun Li, Lu Kang and Junfei Ge
Remote Sens. 2023, 15(17), 4205; https://doi.org/10.3390/rs15174205 - 26 Aug 2023
Viewed by 1044
Abstract
Hyperspectral satellite imagery has developed rapidly over the last decade because of its high spectral resolution and strong material recognition capability. Nonetheless, the spatial resolution of available hyperspectral imagery is inferior, severely affecting the accuracy of ground object identification. In this paper, we propose an adaptively optimized pulse-coupled neural network (PCNN) model to sharpen the spatial resolution of hyperspectral imagery to the scale of multispectral imagery. Firstly, a SAM-CC strategy is designed to assign hyperspectral bands to the multispectral bands. Subsequently, an improved PCNN (IPCNN) is proposed, which considers the differences of the neighboring neurons. Furthermore, the Chameleon Swarm Algorithm (CSA) is adopted to generate the optimum fusion parameters for the IPCNN. Hence, the injected spatial details are acquired in the irregular regions generated by the IPCNN. Extensive experiments are carried out to validate the superiority of the proposed model, which confirm that our method can realize hyperspectral imagery with high spatial resolution, yielding the best spatial details and spectral information among the state-of-the-art approaches. Several ablation studies further corroborate the efficiency of our method. Full article
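For reference, the baseline pulse-coupled neural network that the IPCNN extends iterates a firing map under a decaying threshold; a simplified NumPy sketch (the parameter values are illustrative, not the CSA-optimized ones) is:

```python
import numpy as np

def pcnn_fire(stim, steps=10, beta=0.2, v_l=1.0, v_e=20.0, a_e=0.3):
    """Simplified PCNN: returns how many times each pixel fired.

    stim: 2-D stimulus in [0, 1]; parameter values here are illustrative.
    """
    y = np.zeros_like(stim)          # pulse output
    e = np.ones_like(stim)           # dynamic threshold
    fired = np.zeros_like(stim)
    for _ in range(steps):
        # linking input: sum of 8-neighbour firings from the previous step
        p = np.pad(y, 1, mode="constant")
        link = sum(p[i:i + stim.shape[0], j:j + stim.shape[1]]
                   for i in range(3) for j in range(3)) - y
        u = stim * (1.0 + beta * v_l * link)    # internal activity
        y = (u > e).astype(stim.dtype)          # neurons pulse when u exceeds e
        e = np.exp(-a_e) * e + v_e * y          # threshold decays, then jumps on firing
        fired += y
    return fired

img = np.linspace(0, 1, 64).reshape(8, 8)
counts = pcnn_fire(img)
```

Neighbouring neurons with similar intensity tend to pulse together through the linking term, which is what produces the irregular regions the abstract refers to for injecting spatial details.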

22 pages, 6076 KiB  
Article
Guided Local Feature Matching with Transformer
by Siliang Du, Yilin Xiao, Jingwei Huang, Mingwei Sun and Mingzhong Liu
Remote Sens. 2023, 15(16), 3989; https://doi.org/10.3390/rs15163989 - 11 Aug 2023
Abstract
We propose GLFNet for detecting and matching local features among remote-sensing images, leveraging existing sparse feature points as guided points. Local feature matching is a crucial step in remote-sensing applications and 3D reconstruction. However, existing methods that detect feature points in each image and match them separately may fail to establish correct matches between images with significant differences in lighting or perspective. To address this issue, we reformulate the problem as extracting corresponding features in the target image, given guided points from the source image as explicit guidance. The approach encourages the sharing of landmarks by searching the target image for regions whose features resemble the guided points in the source image; GLFNet is developed as the feature extraction and search network for this purpose. The main challenge lies in searching efficiently for accurate matches among a massive number of guided points. To tackle this, the search network is divided into a coarse-level matching network, based on a guided point transformer, that narrows the search space, and a fine-level regression network that produces accurate matches. Experimental results on challenging datasets demonstrate that the proposed method provides robust matching and benefits various applications, including remote-sensing image registration, optical flow estimation, visual localization, and reconstruction registration. Overall, this approach offers a promising solution to local feature matching in remote-sensing applications.
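The coarse-to-fine search structure can be sketched as follows. This is not GLFNet's architecture: the coarse transformer stage is reduced to a cosine-similarity lookup over candidate target cells, and the fine regression head is replaced by a supplied offset, purely to show the two-level narrowing; all names here are hypothetical.

```python
import numpy as np

def coarse_match(query_desc, cell_descs, cell_coords):
    """Coarse stage: among candidate target cells, return the coordinates of
    the cell whose descriptor is most cosine-similar to the guided point's
    descriptor, narrowing the search space to one region."""
    sims = cell_descs @ query_desc / (
        np.linalg.norm(cell_descs, axis=1) * np.linalg.norm(query_desc) + 1e-8)
    return cell_coords[int(np.argmax(sims))]

def fine_refine(coarse_xy, offset):
    """Fine stage: a regression head would predict a sub-cell offset; here the
    offset is supplied directly, just to show the coarse-to-fine structure."""
    return coarse_xy + offset

# Four candidate cells in the target image, each with a 2-D toy descriptor.
cells = np.array([[0.0, 0.0], [8.0, 0.0], [0.0, 8.0], [8.0, 8.0]])
descs = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7], [0.5, -0.5]])
guided = np.array([0.1, 1.0])                 # guided point's descriptor
coarse = coarse_match(guided, descs, cells)   # -> cell [8., 0.]
match = fine_refine(coarse, np.array([0.3, -0.2]))
print(match)
```

The design choice this mirrors is cost control: the expensive comparison runs only once per coarse cell, and the per-match regression then works inside a single cell, which is how a massive number of guided points stays tractable.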

29 pages, 3979 KiB  
Article
An ISAR and Visible Image Fusion Algorithm Based on Adaptive Guided Multi-Layer Side Window Box Filter Decomposition
by Jiajia Zhang, Huan Li, Dong Zhao, Pattathal V. Arun, Wei Tan, Pei Xiang, Huixin Zhou, Jianling Hu and Juan Du
Remote Sens. 2023, 15(11), 2784; https://doi.org/10.3390/rs15112784 - 26 May 2023
Abstract
Traditional image fusion techniques generally use symmetrical methods to extract features from the different source images. However, these conventional approaches do not resolve the information-domain discrepancy among sources, resulting in incomplete fusion. To solve this problem, we propose an asymmetric decomposition method. Firstly, an information-abundance discrimination method sorts images into detailed and coarse categories. Then, different decomposition methods are applied to extract features at different scales. Next, different fusion strategies are adopted for the features at each scale, including sum fusion, variance-based transformation, integrated fusion, and energy-based fusion. Finally, the fusion result is obtained through summation, retaining vital features from both images. Eight fusion metrics and two datasets containing registered visible, ISAR, and infrared images were used to evaluate the proposed method. The experimental results demonstrate that the asymmetric decomposition preserves more details than its symmetric counterpart and performs better in both objective and subjective evaluations than fifteen state-of-the-art fusion methods. These findings can inspire researchers to consider asymmetric fusion frameworks that adapt to differences in the information richness of the images, and promote the development of fusion technology.
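Two of the named fusion rules, sum fusion and energy-based fusion, can be sketched on toy detail layers. The local-energy window rule below is one common form of energy-based fusion and may differ from the paper's exact variant; the function names are ours.

```python
import numpy as np

def sum_fusion(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Sum fusion: simply add the two layers."""
    return a + b

def energy_fusion(a: np.ndarray, b: np.ndarray, win: int = 3) -> np.ndarray:
    """Energy-based fusion: at each pixel, keep the value from the detail
    layer with the higher local energy (sum of squares in a win x win
    window) -- a common selection rule for high-frequency layers."""
    pad = win // 2

    def local_energy(x):
        xp = np.pad(x.astype(float) ** 2, pad, mode="edge")
        h, w = x.shape
        return np.array([[xp[i:i + win, j:j + win].sum() for j in range(w)]
                         for i in range(h)])

    return np.where(local_energy(a) >= local_energy(b), a, b)

# Toy detail layers: each carries strong structure in a different corner.
a = np.zeros((4, 4)); a[0, 0] = 3.0   # detail in top-left of layer a
b = np.zeros((4, 4)); b[3, 3] = 2.0   # detail in bottom-right of layer b
fused = energy_fusion(a, b)
print(fused[0, 0], fused[3, 3])   # 3.0 2.0 -- each corner's detail survives
```

Sum fusion suits base layers, where both inputs contribute complementary low-frequency content, while the energy rule suits detail layers, where only the locally stronger structure should survive; applying different rules per scale is the scale-wise asymmetry the abstract describes.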
