Open Access Article
Sensors 2017, 17(5), 1127;

Airborne Infrared and Visible Image Fusion Combined with Region Segmentation

Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, #3888 Dongnanhu Road, Changchun 130033, China
University of Chinese Academy of Sciences, #19 Yuquan Road, Beijing 100049, China
Author to whom correspondence should be addressed.
Academic Editors: Cheng Wang, Julian Smit, Ayman F. Habib and Michael Ying Yang
Received: 24 March 2017 / Revised: 4 May 2017 / Accepted: 9 May 2017 / Published: 15 May 2017
(This article belongs to the Special Issue Multi-Sensor Integration and Fusion)


This paper proposes an infrared (IR) and visible image fusion method that introduces region segmentation into the dual-tree complex wavelet transform (DTCWT) domain. The method is intended to improve both the target indication and the scene spectral features of fused images, as well as the target identification and tracking reliability of the fusion system, on an airborne photoelectric platform. The method first segments the IR image by saliency, identifying the target region and the background region; the low-frequency components in the DTCWT domain are then fused according to the segmentation result. For the high-frequency components, region weights are assigned according to the richness of region detail, fusion is conducted using both the weights and adaptive phases, and a shrinkage function is introduced to suppress noise. Finally, the fused low-frequency and high-frequency components are reconstructed to obtain the fusion image. Experimental results show that the proposed method fully extracts complementary information from the source images, producing a fusion image with good target indication and rich scene detail. It also yields fusion results superior to popular existing fusion methods under both subjective and objective evaluation. With good stability and high fusion accuracy, the method can meet the requirements of IR-visible image fusion systems.
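The pipeline described in the abstract can be illustrated with a minimal sketch. Note the assumptions: the paper's DTCWT decomposition is replaced here by a simple 3x3 mean filter (low-pass) and its residual (high-pass), the saliency measure is just deviation from the global IR mean, and detail richness is approximated by gradient magnitude; the function names (`fuse_ir_visible`, `box_blur`) and thresholds are illustrative, not from the paper.

```python
import numpy as np

def box_blur(img):
    """3x3 mean filter: a crude stand-in for the DTCWT low-pass band."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    p = np.pad(img, 1, mode='edge')
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def fuse_ir_visible(ir, vis, k=1.0, shrink=0.02):
    ir = np.asarray(ir, dtype=float)
    vis = np.asarray(vis, dtype=float)

    # 1. Saliency-based segmentation of the IR image: pixels far from the
    #    global mean form the target region, the rest is background.
    sal = np.abs(ir - ir.mean())
    target = sal > k * sal.std()

    # 2. Low-frequency fusion guided by the segmentation: IR low-pass in the
    #    target region (keeps target indication), visible low-pass elsewhere.
    low_ir, low_vis = box_blur(ir), box_blur(vis)
    low = np.where(target, low_ir, low_vis)

    # 3. High-frequency fusion weighted by detail richness; here the source
    #    with the larger local gradient magnitude wins per pixel.
    hi_ir, hi_vis = ir - low_ir, vis - low_vis
    def detail(img):
        gy, gx = np.gradient(img)
        return np.hypot(gx, gy)
    high = np.where(detail(ir) >= detail(vis), hi_ir, hi_vis)

    # 4. Soft shrinkage suppresses small, noise-like high-pass coefficients.
    high = np.sign(high) * np.maximum(np.abs(high) - shrink, 0.0)

    # 5. Reconstruction: recombine the fused low- and high-frequency parts.
    return low + high
```

The real method additionally uses adaptive phases when combining the complex DTCWT subbands and assigns weights per region rather than per pixel; this sketch only mirrors the overall structure of the pipeline.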
Keywords: airborne optoelectronic platform; image fusion; image segmentation; saliency extraction; dual-tree complex wavelet transform (DTCWT)

Figure 1

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


MDPI and ACS Style

Zuo, Y.; Liu, J.; Bai, G.; Wang, X.; Sun, M. Airborne Infrared and Visible Image Fusion Combined with Region Segmentation. Sensors 2017, 17, 1127.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Sensors EISSN 1424-8220, published by MDPI AG, Basel, Switzerland.