Open Access Article

Multifocus Image Fusion Using a Sparse and Low-Rank Matrix Decomposition for Aviator’s Night Vision Goggle

1 Department of Electrical Engineering, National Chin-Yi University of Technology, Taichung 41170, Taiwan
2 Department of Mechanical Engineering, National Chin-Yi University of Technology, Taichung 44170, Taiwan
3 Department of Mechanical Engineering, National Cheng Kung University, Tainan 70101, Taiwan
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(6), 2178; https://doi.org/10.3390/app10062178
Received: 20 February 2020 / Revised: 13 March 2020 / Accepted: 17 March 2020 / Published: 23 March 2020
(This article belongs to the Special Issue Intelligent Processing on Image and Optical Information)
This study applied sparse and low-rank matrix decomposition to the automated inspection of aviator's night vision goggles (NVG), in which equipment availability must be verified. The automated setup combines a motor-driven mechanism that turns the NVG focus knob with a camera that captures image frames to achieve autofocus. Traditional passive autofocus first computes the sharpness of each frame and then uses a search algorithm to quickly locate the sharpest focus. In this study, sparse and low-rank matrix decomposition was adopted for both the autofocus calculation and image fusion; image fusion resolves the multifocus problem caused by mechanism errors. Experimental results showed that fusing the sharpest frame with its neighboring frame compensates for minor errors arising from the image-capture mechanism. Seven samples and 12 image-fusion indicators were used to compare fusion based on variance computed in the discrete cosine transform domain without consistency verification, with consistency verification, structure-aware image fusion, and the proposed fusion method. The proposed method outperformed the other methods, and the proposed autofocus was compared against normalized gray-level variance sharpness results reported in the literature to verify its accuracy.
Keywords: autofocus; night vision goggles; image fusion; sparse and low-rank matrix decomposition
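
The abstract describes two ingredients: per-frame sharpness scoring followed by a search (passive autofocus), and a sparse and low-rank matrix decomposition applied to a stack of focus-sweep frames. The Python sketch below is only an illustration of those ideas under assumptions: the function names (normalized_graylevel_variance, sparse_lowrank_decompose) are hypothetical, the decomposition is a generic RPCA-style alternating-thresholding scheme rather than the authors' exact algorithm, and the frame data are stand-ins.

# Illustrative sketch, not the paper's implementation.
# Sharpness baseline: normalized gray-level variance.
# Decomposition: generic sparse + low-rank (RPCA-style) split of a frame stack.

import numpy as np


def normalized_graylevel_variance(frame):
    """Baseline sharpness score: gray-level variance divided by the mean."""
    mu = frame.mean()
    return float(((frame - mu) ** 2).mean() / (mu + 1e-12))


def sparse_lowrank_decompose(D, lam=None, n_iter=100, tol=1e-6):
    """Split D into L (low rank) + S (sparse) by alternating singular-value
    thresholding and entrywise soft thresholding (simplified RPCA scheme)."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    mu = 0.25 * m * n / (np.abs(D).sum() + 1e-12)  # common step-size heuristic
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    Y = np.zeros_like(D)  # Lagrange multiplier
    for _ in range(n_iter):
        # Low-rank update: singular value thresholding
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        sig = np.maximum(sig - 1.0 / mu, 0.0)
        L = (U * sig) @ Vt
        # Sparse update: soft thresholding of the residual
        R = D - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # Dual update and convergence check
        Y += mu * (D - L - S)
        if np.linalg.norm(D - L - S) / (np.linalg.norm(D) + 1e-12) < tol:
            break
    return L, S


# Usage: stack focus-sweep frames as columns, decompose, and rank frames by
# the energy of their sparse (detail) component as a hypothetical focus score.
frames = [np.random.rand(64, 64) for _ in range(10)]    # stand-in frames
D = np.stack([f.ravel() for f in frames], axis=1)       # pixels x frames
L, S = sparse_lowrank_decompose(D)
focus_scores = (S ** 2).sum(axis=0)                     # per-frame sparse energy
best = int(np.argmax(focus_scores))
print("sharpest frame (sparse energy):", best)
print("baseline NGLV:", [round(normalized_graylevel_variance(f), 3) for f in frames])

In this sketch the low-rank component captures content shared across the focus sweep, while the sparse component concentrates frame-specific detail, which is why its per-column energy is used as a stand-in focus score; the actual scoring and fusion rules of the paper may differ.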

MDPI and ACS Style

Jian, B.-L.; Chu, W.-L.; Li, Y.-C.; Yau, H.-T. Multifocus Image Fusion Using a Sparse and Low-Rank Matrix Decomposition for Aviator’s Night Vision Goggle. Appl. Sci. 2020, 10, 2178.
