Special Issue "Advances in Image Fusion"

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: 14 September 2021.

Special Issue Editors

Prof. Dr. Jiayi Ma
Guest Editor
Electronic Information School, Wuhan University, Wuhan 430072, China
Interests: machine learning; computer vision; information fusion; image super resolution; hyperspectral image analysis; infrared imaging; image denoising
Dr. Yu Liu
Co-Guest Editor
Department of Biomedical Engineering, Hefei University of Technology, Hefei 230009, China
Interests: image fusion; image super-resolution; visual recognition; biomedical image analysis; machine learning; computer vision
Prof. Dr. Junjun Jiang
Co-Guest Editor
School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
Interests: image super-resolution; image denoising; video processing; hyperspectral image analysis; image fusion; visual recognition; machine learning
Dr. Zheng Wang
Co-Guest Editor
Department of Information and Communication Engineering, The University of Tokyo, Tokyo 113-8656, Japan
Interests: person re-identification; image retrieval; crowd counting; image enhancement; multimedia content analysis
Ms. Han Xu
Assistant Guest Editor
Electronic Information School, Wuhan University, Wuhan 430070, China
Interests: computer vision; image fusion; deep learning

Special Issue Information

Dear Colleagues,

Many engineering, medical, remote sensing, environmental, national defense, and civilian applications require multiple types of information. Examples include multimodality images; images captured with multiple exposure or focus settings; and multispectral, hyperspectral, and panchromatic images, etc. A single type of information can represent only part of the scene, while the combination of multiple types can provide a comprehensive characterization. The drawback is that redundancy across these multiple sources consumes unnecessary storage space. Thus, the challenge of generating aligned and synthesized results by integrating complementary information has gained significant attention, from both storage and visual perception viewpoints.

The implementation of information fusion is often hindered by differing understandings of information, the definition of what information is meaningful for subsequent tasks, the choice of information decomposition, the methods used to distinguish complementary information from redundant information, the design of fusion rules, etc. Further progress on these issues calls for clearer physical explanations of these methods and definitions. Contributions addressing any of these issues are welcome.

This Special Issue aims to be a forum for new and improved information fusion techniques, which are not restricted to image fusion. Considering that there are inevitable offsets in the information collection process and that source images may sometimes be of low quality, it is likely that fusion techniques involving image registration and image enhancement will gain more popularity. In addition, fusion performance evaluation is also a significant factor in designing a good fusion algorithm. Therefore, studies on image quality assessment techniques, such as image-entropy-based methods, are welcome in this Special Issue.
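As a concrete illustration of the image-entropy-based assessment mentioned above, the following is a minimal sketch (an illustrative helper, not any specific published metric) of how the Shannon entropy of an image's gray-level histogram can serve as a simple information measure: a fused image that preserves more detail tends to have a richer histogram and hence higher entropy.

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (in bits) of an 8-bit grayscale image's histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins: 0 * log 0 := 0
    return float(-(p * np.log2(p)).sum())

# A constant image carries no information; uniform noise is near-maximal (8 bits).
flat = np.zeros((64, 64), dtype=np.uint8)
rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
```

In fusion evaluation, such a measure is typically computed on the fused result and compared against the source images; higher entropy alone does not guarantee better perceptual quality, which is why it is usually combined with other metrics.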

Prof. Dr. Jiayi Ma
Dr. Yu Liu
Prof. Dr. Junjun Jiang
Dr. Zheng Wang
Ms. Han Xu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • visible and infrared image fusion
  • medical image fusion
  • multi-exposure image fusion
  • multi-focus image fusion
  • remote sensing image fusion
  • image registration
  • image super-resolution
  • image enhancement
  • image quality assessment
  • information fusion
  • fusion applications

Published Papers (3 papers)


Research

Open Access Article
An Improvised Machine Learning Model Based on Mutual Information Feature Selection Approach for Microbes Classification
Entropy 2021, 23(2), 257; https://doi.org/10.3390/e23020257 - 23 Feb 2021
Abstract
The accurate classification of microbes is critical in today’s context for monitoring the ecological balance of a habitat. Hence, in this research work, a novel method to automate the process of identifying microorganisms has been implemented. To extract the bodies of microorganisms accurately, a generalized segmentation mechanism combining a convolution filter (Kirsch) and a variance-based pixel clustering algorithm (Otsu) is proposed. With exhaustive corroboration, a set of twenty-five features were identified to map the characteristics and morphology for all kinds of microbes. Multiple techniques for feature selection were tested and it was found that mutual information (MI)-based models gave the best performance. Exhaustive hyperparameter tuning of multilayer perceptron (MLP), k-nearest neighbors (KNN), quadratic discriminant analysis (QDA), logistic regression (LR), and support vector machine (SVM) was done. It was found that SVM radial required further improvisation to attain a maximum possible level of accuracy. Comparative analysis between SVM and improvised SVM (ISVM) through a 10-fold cross validation method ultimately showed that ISVM resulted in a 2% higher performance in terms of accuracy (98.2%), precision (98.2%), recall (98.1%), and F1 score (98.1%).
(This article belongs to the Special Issue Advances in Image Fusion)
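The mutual-information criterion at the heart of the feature-selection step above can be sketched as follows. This is an illustrative helper only; it does not reproduce the paper's segmentation pipeline, twenty-five-feature set, or ISVM classifier. MI between a discretized feature and the class label measures how much knowing the feature reduces uncertainty about the class, so features with near-zero MI can be discarded.

```python
import numpy as np

def mutual_information(x, y):
    """Mutual information (in bits) between two discrete sequences,
    estimated from their joint histogram."""
    n = len(x)
    joint, px, py = {}, {}, {}
    for xi, yi in zip(x, y):
        joint[(xi, yi)] = joint.get((xi, yi), 0) + 1
        px[xi] = px.get(xi, 0) + 1
        py[yi] = py.get(yi, 0) + 1
    mi = 0.0
    for (xi, yi), c in joint.items():
        pxy = c / n
        mi += pxy * np.log2(pxy / ((px[xi] / n) * (py[yi] / n)))
    return float(mi)

# A feature identical to the class label is maximally informative (1 bit for
# a balanced binary label); a feature independent of the label scores ~0.
y  = [0, 0, 1, 1] * 25
f1 = list(y)          # perfectly predictive feature
f2 = [0, 1] * 50      # feature independent of y
```

Selecting the top-k features by this score is the usual MI-based filter approach; continuous features must first be discretized or handled with a density-based estimator.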

Open Access Article
Exploiting Superpixels for Multi-Focus Image Fusion
Entropy 2021, 23(2), 247; https://doi.org/10.3390/e23020247 - 21 Feb 2021
Abstract
Multi-focus image fusion is the process of combining focused regions of two or more images to obtain a single all-in-focus image. It is an important research area because a fused image is of high quality and contains more details than the source images. This makes it useful for numerous applications in image enhancement, remote sensing, object recognition, medical imaging, etc. This paper presents a novel multi-focus image fusion algorithm that proposes to group the local connected pixels with similar colors and patterns, usually referred to as superpixels, and use them to separate the focused and de-focused regions of an image. We note that these superpixels are more expressive than individual pixels, and they carry more distinctive statistical properties when compared with other superpixels. The statistical properties of superpixels are analyzed to categorize the pixels as focused or de-focused and to estimate a focus map. A spatial consistency constraint is ensured on the initial focus map to obtain a refined map, which is used in the fusion rule to obtain a single all-in-focus image. Qualitative and quantitative evaluations are performed to assess the performance of the proposed method on a benchmark multi-focus image fusion dataset. The results show that our method produces better quality fused images than existing image fusion techniques.
(This article belongs to the Special Issue Advances in Image Fusion)
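The focus-map idea described above can be illustrated with a simplified stand-in: square blocks instead of superpixels, and local variance as the sharpness statistic. The paper's actual method groups pixels into superpixels and analyzes richer statistical properties, so this sketch only conveys the region-wise decide-then-fuse structure.

```python
import numpy as np

def focus_map(img_a, img_b, block=8):
    """Per-block focus decision: mark a block True where source A has the
    higher local variance (a crude sharpness proxy standing in for the
    superpixel statistics used in the paper)."""
    h, w = img_a.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(0, h, block):
        for j in range(0, w, block):
            var_a = img_a[i:i + block, j:j + block].var()
            var_b = img_b[i:i + block, j:j + block].var()
            mask[i:i + block, j:j + block] = var_a >= var_b
    return mask

def fuse(img_a, img_b, block=8):
    """Copy each region from whichever source image is locally sharper."""
    m = focus_map(img_a, img_b, block)
    return np.where(m, img_a, img_b)

# Toy example: A is "sharp" (textured) on the left half, B on the right.
rng = np.random.default_rng(1)
a = np.zeros((16, 16)); a[:, :8] = rng.normal(size=(16, 8))
b = np.zeros((16, 16)); b[:, 8:] = rng.normal(size=(16, 8))
fused = fuse(a, b)
```

A real pipeline would also enforce the spatial consistency constraint mentioned in the abstract, e.g. by smoothing or morphologically filtering the binary map before fusion.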

Open Access Article
Advanced Driving Assistance Based on the Fusion of Infrared and Visible Images
Entropy 2021, 23(2), 239; https://doi.org/10.3390/e23020239 - 19 Feb 2021
Abstract
Obtaining key and rich visual information under sophisticated road conditions is one of the key requirements for advanced driving assistance. In this paper, a novel end-to-end model is proposed for advanced driving assistance based on the fusion of infrared and visible images, termed FusionADA. In our model, we are committed to extracting and fusing the optimal texture details and salient thermal targets from the source images. To achieve this goal, our model constitutes an adversarial framework between the generator and the discriminator. Specifically, the generator aims to generate a fused image with basic intensity information together with the optimal texture details from source images, while the discriminator aims to force the fused image to restore the salient thermal targets from the source infrared image. In addition, our FusionADA is a fully end-to-end model, avoiding the manual design of complicated activity level measurements and fusion rules required by traditional methods. Qualitative and quantitative experiments on the publicly available datasets RoadScene and TNO demonstrate the superiority of our FusionADA over the state-of-the-art approaches.
(This article belongs to the Special Issue Advances in Image Fusion)
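The generator's objective in such adversarial fusion models can be sketched as a content loss that pulls the fused image toward infrared intensity and visible-image gradients (texture). The loss terms and the `alpha` weight below are illustrative assumptions, not FusionADA's published formulation, and the adversarial term supplied by the discriminator is omitted.

```python
import numpy as np

def content_loss(fused, ir, vis, alpha=10.0):
    """Illustrative generator content loss for infrared/visible fusion:
    an intensity term toward the infrared image plus a gradient (texture)
    term toward the visible image. `alpha` is a hypothetical trade-off weight."""
    intensity = np.mean((fused - ir) ** 2)
    gx = lambda im: np.diff(im, axis=1)     # horizontal gradients
    gy = lambda im: np.diff(im, axis=0)     # vertical gradients
    texture = (np.mean((gx(fused) - gx(vis)) ** 2)
               + np.mean((gy(fused) - gy(vis)) ** 2))
    return float(intensity + alpha * texture)

# Toy inputs: a flat infrared image and a visible image with a texture ramp.
ir = np.ones((8, 8))
vis = np.tile(np.arange(8.0), (8, 1))
```

In training, this content loss would be minimized jointly with an adversarial loss, with the discriminator pushing the fused output to retain the salient thermal targets of the infrared source.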
