Article

Improved Procedure for Multi-Focus Images Using Image Fusion with qshiftN DTCWT and MPCA in Laplacian Pyramid Domain

by
Chinnem Rama Mohan
1,
Kuldeep Chouhan
2,
Ranjeet Kumar Rout
3,
Kshira Sagar Sahoo
4,
Noor Zaman Jhanjhi
5,*,
Ashraf Osman Ibrahim
6 and
Abdelzahir Abdelmaboud
7
1
Department of Computer Science and Engineering, Visvesvaraya Technological University, Belgaum 590018, India
2
Department of Computer Science and Engineering, Shivalik College of Engineering, Dehradun 248197, India
3
Computer Science and Engineering, National Institute of Technology Srinagar, Srinagar 190006, India
4
Department of Computer Science and Engineering, SRM University, Amaravati 522240, India
5
School of Computer Science and Engineering, Taylor’s University, Subang Jaya 47500, Malaysia
6
Faculty of Computing and Informatics, University Malaysia Sabah, Kota Kinabalu 88400, Malaysia
7
Department of Information Systems, King Khalid University, Muhayil Asir 61913, Saudi Arabia
*
Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(19), 9495; https://doi.org/10.3390/app12199495
Submission received: 24 July 2022 / Revised: 29 August 2022 / Accepted: 2 September 2022 / Published: 22 September 2022

Abstract

Multi-focus image fusion (MIF) uses fusion rules to combine two or more images of the same scene, captured with different focus settings, into a fully focused image. An all-in-focus image refers to a fully focused image that is more informative and useful for visual perception. Shift-invariance and directional selectivity are essential characteristics for producing a fused image of high quality. Traditional wavelet-based fusion methods, however, create ringing distortions in the fused image because they lack directional selectivity and shift-invariance. In this paper, a classical MIF system based on the quarter-shift dual-tree complex wavelet transform (qshiftN DTCWT) and modified principal component analysis (MPCA) in the Laplacian pyramid (LP) domain is proposed to extract the focused image from multiple source images. In the proposed fusion approach, the LP first decomposes the multi-focus source images into low-frequency (LF) and high-frequency (HF) components. Then, qshiftN DTCWT is used to fuse the low- and high-frequency components to produce a fused image. Finally, to improve the effectiveness of the qshiftN DTCWT and LP-based method, the MPCA algorithm is utilized to generate an all-in-focus image. Owing to its directionality and shift-invariance, this transform can provide high-quality information in a fused image. Experimental results demonstrate that the proposed method outperforms many state-of-the-art techniques in terms of visual and quantitative evaluations.

1. Introduction

The statistical analysis of images is restricted by the limited depth of focus of imaging sensors. The main problem is that the focus is not equally concentrated on all objects that exist in an image [1]. A feasible solution to this problem is composite imaging. Composite imaging is one of the techniques used in Multi-focus Image Fusion (MIF), which combines multiple images of the same scene captured at different focus levels [2]. Both spatial-domain and transform-domain methods are applicable in MIF [3]. Transform-domain methods are also called multiresolution algorithms. The main principle of transform-domain algorithms is to maintain perceptual vision with accurate information in a multiresolution representation. Various studies indicate that several multiresolution methods have been developed, such as the discrete wavelet transform (DWT), stationary wavelet transform (SWT), double density discrete wavelet transform (DDDWT), etc. [4,5,6,7,8,9,10,11,12,13,14,15]. Lack of spatial orientation selectivity is the main issue with pyramid-based approaches, which causes blocking effects in the fused image. These pitfalls can be avoided by using the DWT. However, the DWT has issues with directionality, shift-invariance, and aliasing. The primary factors influencing the quality of fused images are shift-invariance and directional selectivity. Traditional wavelet-based fusion techniques introduce ringing artifacts into the fused images, which restricts the use of the DWT for image fusion.
The DTCWT [16,17], one of the most accurate transforms for this task, overcomes the DWT's limitations in shift invariance and directional sensitivity. The directional selectivity and near-shift invariance of the DTCWT allow it to properly represent features in the fused image. Designing filters for the DTCWT is somewhat more challenging, since bi-orthogonality and phase constraints must be met. The qshift DTCWT is a technique that simplifies filter design in the DTCWT and produces superior fusion outcomes. The qshift DTCWT has succeeded as a multi-resolution transform intended for image fusion because it can capture directional and shift-invariant characteristics.
The objective of the proposed approach is to create a high-quality fused image that is smoother, has improved visual quality, and is free of distortions and noise. Users can easily perceive details in such images. The majority of MIF algorithms suffer from inadequate spatial resolution, which causes blurring. The qshiftN DTCWT approach has a significant impact on fused images: it effectively enhances the resolution of fused images and yields high-quality results. The LP [18,19,20] and MPCA [19] methods also perform better in terms of lowering additive noise, reducing distortion, and maintaining edges and other crucial details, such as image sectors with higher contrast. As shown by the visual and quantitative results, the proposed method eliminates these problems and produces better quality measurement results. Furthermore, the proposed formulation performs well in MIF.
Several approaches for MIF have been proposed in the past decades. For example, in the Nonsubsampled Contourlet Transform (NSCT) domain, Chinmaya Panigrahy et al. proposed an effective image fusion model using an enhanced adaptive pulse coupled neural network (PCNN). Their methodology uses the subbands of the source images obtained by the NSCT algorithm in the image fusion process, and the adaptive linking strength is estimated using a new fractal-dimension-based focus measure (FDFM) algorithm [21]. A review of region-based fusion techniques was presented by Bikash Meher et al. based on the classification of region-based fusion approaches; fusion objective assessment indicators are emphasized for the comparison of the existing approaches [22]. Lin He and colleagues proposed a MIF approach for improving imaging systems, in which a cascade forest was incorporated into MIF to estimate the influence of fusion rules [23]. Samet Aymaz et al. proposed a unique MIF approach based on a super-resolution hybrid method [24].
In the DWT domain, Zeyu Wang et al. [25] proposed a novel MIF approach that uses a convolutional neural network (CNN) to combine the benefits of both spatial- and transform-domain approaches. Instead of using image blocks or source images, the CNN is employed to amplify features and build decision maps for the different frequency subbands; an additional benefit of the CNN approach is that it allows an adaptive fusion rule in the fusion process. Amin-Naji et al. [26] derived two important focus metrics, the energy of the Laplacian and the variance of the Laplacian. The idea of their work is to evaluate the correlation coefficient between the source blocks and artificially blurred blocks in the discrete cosine transform (DCT) domain using these focus metrics. A new approach for MIF was proposed by Samet Aymaz et al. [27]: a super-resolution method is used for contrast enhancement, the SWT combined with a discrete Meyer filter is used for decomposition, and the final image is obtained by applying a new gradient-based fusion rule. Wavelet transforms are used by Jinjiang Li et al. [28] to extract high- and low-frequency coefficients; in addition, deep convolutional neural networks are employed to generate a high-quality fused image by directly mapping between the high-frequency and low-frequency components of the source images [29]. Mansour Nejati et al. [30] presented a new focus metric based on the surface area of regions obtained using the encircled method; this measure's ability to discriminate blurred regions in the fusion method is demonstrated. Bingzhe Wei et al. [31] proposed a novel fusion method that applies a CNN to assist sparse representation (SR) for the purpose of obtaining a fused image with more precise and abundant information; the computational complexity of this fusion method is impressively reduced. Chenglang Zhang [32] proposed a novel MIF approach based on the multiscale transform (MST) and convolutional sparse representation (CSR) to address the inherent defects of both the MST- and SR-based fusion methods. The proposed approach is compared against the approaches discussed in the literature [21,22,23,24,25,26,27,28,30].
The following are the essential contributions of this work:
(i)
A hybrid method (i.e., qshiftN DTCWT and LP) with MPCA is introduced for the fusion of multifocus images;
(ii)
The method combines multiple source images into a fused image that has better image quality, good directionality, a high degree of shift-invariance, better visual quality, and retains more information than the source images;
(iii)
Using the MPCA method, the amount of redundant data is decreased, and the most significant components of the source images are extracted;
(iv)
The depth-of-field (DOF) of the advanced imaging system is extended;
(v)
The approach is analyzed both quantitatively and qualitatively;
(vi)
The performance of the proposed approach improves on state-of-the-art techniques developed in recent years.
The rest of the paper is organized as follows: Section 2 explains the proposed fusion methodology as well as the fusion methods implemented. Section 3 presents the results of the experimentation. Section 4 presents the conclusions.

2. The Proposed Fusion Approach

This paper proposes a hybrid approach with MPCA to overcome the blurring and spatial distortions of other algorithms. The algorithm is a novel approach in MIF because this hybrid technique with MPCA performs well compared to other algorithms developed in recent years. In the proposed method, the fusion procedure is performed individually on row and column images, which are then averaged to eliminate any noise or distortion generated by the fusion process. The noise elimination process is explained in Section 2.1. Then, the source images are decomposed into LF and HF components using the LP, which provides information on the sharp contrast changes to which the human visual system is principally sensitive. The LP method is explained in Section 2.2. Then, qshiftN DTCWT is used to fuse the low- and high-frequency components to produce a fused image with good directionality and a high degree of shift-invariance. The qshiftN DTCWT method is explained in detail in Section 2.3. After fusing the low- and high-frequency components, the inverse DTCWT (IDTCWT) is applied to reconstruct the fused low- and high-frequency components. In the proposed method, MPCA is used to improve the efficiency of the hybrid approach (i.e., qshiftN DTCWT and LP) by reducing redundant data and extracting the essential components of the fused image (i.e., the all-in-focus image). MPCA also emphasizes the elements that have the most significant impact and are robust to noise. The MPCA therefore reduces blurring and spatial distortions; thus, the fused image has more detailed clarity, clearer edges, and better visual and machine perception. The MPCA method is explained in Section 2.4. Finally, the fused image is formed and available for comparison. Various objective quality metrics are calculated to assess the proposed method's quality; these measures are described in Section 2.6 and Section 2.7, respectively. Figure 1 depicts the proposed technique's flow diagram, detailed in Section 2.5.

2.1. Noise Elimination Process

The image g(x, y) of size M × N is separated into rows, and the rows are concatenated to generate a 1-D vector data g(x) of size MN [18]. It is shown in Algorithm 1.
Algorithm 1 Converting a two-dimensional array to a one-dimensional array
Input: Two-Dimensional Image (I), No. of Rows (M), and No. of Columns (N)
Output: One-Dimensional Vector Data (I)
Steps:
   Begin
      I(2:2:END, :) = I(2:2:END, END:-1:1)
      I = RESHAPE(I, 1, M*N)
   End
By inverting the technique, as shown in Algorithm 2, the 2-D image can be restored from the 1-D vector data.
Algorithm 2 Converting a one-dimensional array to a two-dimensional array
Input: One-Dimensional Vector Data (I), No. of Rows (M), and No. of Columns (N)
Output: Two-Dimensional Image (I)
Steps:
   Begin
      I = RESHAPE(I, M, N)
      I(2:2:END, :) = I(2:2:END, END:-1:1)
   End
Likewise, the image g(x, y) of size M × N is separated into columns, and these columns are concatenated to generate 1-D vector data g(x) of size MN. The operation is denoted I = C2DT1D(I′, M, N).
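As an illustration, the following Python sketch mirrors Algorithms 1 and 2 under the assumption that every second row is reversed before the rows are concatenated, so the 1-D vector traces the image in a snake-like order; the function names to_1d and to_2d are ours, not from the paper.

import numpy as np

def to_1d(image):
    # Algorithm 1: reverse every second row, then concatenate the rows into one vector
    img = np.array(image, dtype=float)
    img[1::2, :] = img[1::2, ::-1]
    return img.reshape(-1)

def to_2d(vector, m, n):
    # Algorithm 2: reshape back to an M x N image and undo the row reversal
    img = np.array(vector, dtype=float).reshape(m, n)
    img[1::2, :] = img[1::2, ::-1]
    return img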

2.2. Laplacian Pyramid (LP)

The Laplacian pyramid [18,19,20] reveals the strong contrast changes to which the human visual system is most highly sensitive. It can localize in both the spatial and frequency domains. The LP is used to extract the most relevant elements of the fused image. The LP also places a premium on the elements that have the greatest effect and are resistant to noise. As a result, the LP minimizes blurring and spatial distortions. The technique for constructing and reconstructing a Laplacian pyramid is shown below. On the vector data, image reduction is performed by taking the DCT and applying the inverse DCT (IDCT) to the first half of the coefficients. The reduction function IR is used to conduct level-to-level image reduction.
Reduction Function (Image):
Image Reduction (IR) using DCT:
n = length(Image)  (1)
Y = DCT(Image, n)  (2)
Image.LF = IDCT(Y(1:n/2))  (3)
Image.HF = Image − IDCT(Y(1:n/2), n)  (4)
Expand Image (IE) using DCT:
n = length(Image.HF)  (5)
Image = DCT(Image.LF)  (6)
Image = IDCT(Image, n) + Image.HF  (7)
Pyramid Construction:
X = IR(X)  (8)
l_k = x − IE(X)  (9)
Each image to be fused is formed into a pyramid using Equations (8) and (9). The constructed stages of the Laplacian pyramid are denoted by I1 in the first image and I2 in the second image. The following is the image fusion rule:
for i = 1:J
   IMAGE1{i} = reduce(I1)  (10)
   IMAGE2{i} = reduce(I2)  (11)
   image1 = IMAGE1{i}.L  (12)
   image2 = IMAGE2{i}.L  (13)
end
At the Jth level, IMAGEf.L = 0.5 × (IMAGE1{J}.L + IMAGE2{J}.L)  (14)
for i = J − 1 to 1 levels
   D = (abs(IMAGE1{i}.H) − abs(IMAGE2{i}.H)) >= 0  (15)
   IMAGEf.H = D .* IMAGE1{i}.H + (~D) .* IMAGE2{i}.H  (16)
   IMAGEf.L = expand(IMAGEf)  (17)
end
FusedImage = convert_1D_to_2D(IMAGEf.L, m, n)  (18)
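A minimal Python sketch of the DCT-based reduce/expand operators and the fusion rule above is given below, assuming SciPy's orthonormal DCT-II; the function names are ours, and the scaling convention may differ slightly from the MATLAB DCT used by the authors.

import numpy as np
from scipy.fft import dct, idct

def reduce_dct(signal):
    # Eqs. (1)-(4): keep the first half of the DCT spectrum as the low-frequency part;
    # the high-frequency part is the full-length residual
    n = len(signal)
    coeffs = dct(signal, norm='ortho')
    low = idct(coeffs[:n // 2], norm='ortho')
    high = signal - idct(coeffs[:n // 2], n=n, norm='ortho')
    return low, high

def expand_dct(low, high):
    # Eqs. (5)-(7): zero-pad the low-frequency spectrum to the length of `high` and add the detail back
    n = len(high)
    return idct(dct(low, norm='ortho'), n=n, norm='ortho') + high

def fuse_lp_1d(x1, x2, levels=3):
    # Laplacian-pyramid fusion of two 1-D vectors following the rule above
    low1, low2 = np.asarray(x1, float), np.asarray(x2, float)
    highs1, highs2 = [], []
    for _ in range(levels):
        low1, h1 = reduce_dct(low1)
        low2, h2 = reduce_dct(low2)
        highs1.append(h1)
        highs2.append(h2)
    fused = 0.5 * (low1 + low2)                               # average the coarsest approximation
    for h1, h2 in zip(reversed(highs1), reversed(highs2)):
        detail = np.where(np.abs(h1) >= np.abs(h2), h1, h2)   # keep the stronger detail coefficient
        fused = expand_dct(fused, detail)
    return fused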

2.3. qshiftN Dual-Tree Complex Wavelet Transform

The critically sampled DWT exhibits shift-variance issues in 1-D and poor directional selectivity in N-D. The DTCWT approach is nearly shift-invariant, economical, and directionally selective. The DTCWT [33,34] is an improved wavelet transform that generates real and imaginary transform coefficients. The DTCWT uses two 2-channel FIR filter banks: the output of one filter bank (Tree A) is taken as the real part of the coefficients, whereas the output of the other (Tree B) is taken as the imaginary part.
For a d-dimensional signal, the DTCWT uses two critically sampled filter banks with a redundancy of 2^d. The three stages of a 1-D DTCWT filter bank are shown in Figure 2. While DWT-fused images have broken borders, DTCWT-fused images are smooth and unbroken. Compared to the DWT, which only delivers the constrained directions (0°, 45°, 90°), the DTCWT produces six directional subbands (±15°, ±45°, ±75°), each with real and imaginary parts, which improves transform accuracy and preserves more detailed features.
The odd/even filter approach of the original DTCWT, however, has a number of drawbacks:
  • There is no clear symmetry in the sub-sampling structure;
  • The frequency responses of the two trees differ slightly;
  • Because both filters must be linear-phase, the filter sets have to be biorthogonal rather than orthogonal, so energy preservation does not hold for the transform.
Each of these problems is reduced and solved by the qshiftN DTCWT illustrated in Figure 3, in which all filters above level 1 are of even length. Above level 1, the required quarter-sample delay between the two trees is achieved by using filters with group delays of 1/4 and 3/4 of a sample period, rather than the 0 and 1/2 of the original DTCWT. This is accomplished with asymmetric filters of equal length.
Because of this asymmetry, perfect reconstruction and wavelet orthonormality can still be achieved. The reconstruction filters are the time-reverse of the analysis filters, and the Tree-B filters are the time-reverse of the Tree-A filters, since they all belong to the same orthonormal set; both trees therefore have the same magnitude frequency response. The individual filter impulse responses are no longer symmetric about their midpoints, but the combined complex impulse responses are, so symmetric extension still works at the frame edges.
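For reference, a minimal sketch of fusing two registered grayscale images with a qshift DTCWT is shown below, using the open-source dtcwt Python package (an implementation of Kingsbury's transform). The qshift_b filter choice and the maximum-magnitude selection rule are illustrative assumptions and are not necessarily the exact settings used in this work.

import numpy as np
import dtcwt

def qshift_dtcwt_fuse(img1, img2, nlevels=4):
    # forward qshift DTCWT of both source images
    transform = dtcwt.Transform2d(biort='near_sym_b', qshift='qshift_b')
    p1 = transform.forward(img1.astype(float), nlevels=nlevels)
    p2 = transform.forward(img2.astype(float), nlevels=nlevels)

    # average the coarse (lowpass) bands and keep the larger-magnitude complex
    # coefficient in each of the six directional highpass subbands
    fused_low = 0.5 * (p1.lowpass + p2.lowpass)
    fused_high = tuple(
        np.where(np.abs(h1) >= np.abs(h2), h1, h2)
        for h1, h2 in zip(p1.highpasses, p2.highpasses)
    )

    # inverse transform of the fused pyramid gives the fused image
    return transform.inverse(dtcwt.Pyramid(fused_low, fused_high))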

2.4. Modified Principal Component Analysis (MPCA)

MPCA is used to turn correlated variables into uncorrelated variables. This method is useful for analyzing data and determining the optimal features for representing the data. The first principal component represents the data with the greatest variance; each succeeding component accounts for as much of the remaining variance as possible. The data are well represented by the first principal component, which also indicates the direction of maximum variation. In this paper, the MPCA approach is used to determine the best-represented value of each subband of the source images after implementing the LP-based qshiftN DTCWT method. These values are then multiplied by the matched source image subbands. MPCA's goal is to transfer the data from the original space to the eigenspace. By retaining the components with the largest eigenvalues, the variance of the data is enhanced and the covariance is lowered.
Specifically, this method removes redundant data from source images and extracts the most significant components. Furthermore, MPCA prioritizes components with the greatest impact and resistance to noise. As a result, the MPCA decreases blurring and spatial distortions. The steps of the MPCA algorithm are as follows:
  1. Create a vector from the data;
  2. Determine the covariance matrix of the given vector, i.e., C = cov(im1);
  3. Calculate the eigenvalues and eigenvectors of the covariance matrix, i.e., [V, D] = eig(C);
  4. Choose the first principal component by sorting the eigenvalues in descending order, i.e., [max, ind] = sort(diag(D), 'descend'), and a = V(:, ind(1)) ./ sum(V(:, ind(1)));
  5. Finally, obtain the feature-extracted image, i.e., F_E_image = a(1) × im1.
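A sketch of this weighting in Python is given below, assuming the covariance is taken over the image columns (mirroring MATLAB's column-wise cov) and that the image is scaled by the first element of the normalised leading eigenvector; the function name is ours.

import numpy as np

def mpca_feature_extract(fused_img):
    data = np.asarray(fused_img, dtype=float)
    c = np.cov(data, rowvar=False)              # covariance matrix of the image columns
    eigvals, eigvecs = np.linalg.eigh(c)        # eigenvalues/eigenvectors of the symmetric matrix
    leading = eigvecs[:, np.argmax(eigvals)]    # eigenvector of the largest eigenvalue
    a = leading / np.sum(leading)               # normalise so the weights sum to one
    return a[0] * data                          # scale the image by the first weight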

2.5. Flow Diagram of Proposed Approach

The flow diagram of the complete fusion algorithm is depicted in Figure 1, which comprises two processes: LP-based qshiftN-DTCWT image fusion and MPCA. LP is used for decomposition, DTCWT is used for image fusion, and MPCA is used for feature extraction, as shown in Algorithm 3.
Algorithm 3 LP-based qshiftN DTCWT image fusion process and MPCA
Input: Multi-focus images.
Output: All-in-Focus Image
Steps:
(i)
Take the multi-focus images from the source and load them;
(ii)
To use the image fusion technique, two multi-focus images (I1 and I2) are taken as source images. The raw images are divided into row (I1 and I2) and column (I1 and I2) pixels;
(iii)
Multi-focus image row and column pixels are converted from a Two-Dimensional image to a One-Dimensional array of data;
(iv)
The Laplacian pyramid is used to divide the resulting 1-D array data (I1) into low-frequency (row and column) and high-frequency (row and column) elements. The I2 image is split into low (row and column) and high (row and column) frequency components in the same way;
(v)
To produce the low- and high-frequency row components, the primary fusion procedure is performed on the row elements (both low- and high-frequency elements) of I1 and I2. Similarly, this fusion technique is applied to the column elements to generate the low- and high-frequency column components;
(vi)
To produce the row and column elements, the fused row and column frequency components are filtered utilizing the inverse Laplacian pyramid algorithm;
(vii)
The row and column elements of the 1-D array data are converted into a 2-D image;
(viii)
Using qshiftN DTCWT, a final fused image is created from the filtered row and column frequency elements;
(ix)
Apply MPCA on a fused image by qshiftN DTCWT-LP;
(x)
The feature-extracted image, i.e., the all-in-focus image, is obtained (a Python sketch of this overall procedure follows the list).
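One plausible reading of Algorithm 3 is sketched below, reusing the hypothetical helpers from Sections 2.1, 2.2, 2.3 and 2.4 (to_1d, to_2d, fuse_lp_1d, qshift_dtcwt_fuse, and mpca_feature_extract) and assuming i1 and i2 are NumPy arrays of the same shape; how the row-wise and column-wise results are recombined in the authors' implementation may differ from this sketch.

def all_in_focus(i1, i2, levels=3):
    m, n = i1.shape
    # steps (ii)-(vii): fuse along rows and along columns in the 1-D Laplacian-pyramid domain
    row_fused = to_2d(fuse_lp_1d(to_1d(i1), to_1d(i2), levels), m, n)
    col_fused = to_2d(fuse_lp_1d(to_1d(i1.T), to_1d(i2.T), levels), n, m).T
    # step (viii): combine the row-wise and column-wise results with the qshift DTCWT fusion
    dtcwt_fused = qshift_dtcwt_fuse(row_fused, col_fused)
    # steps (ix)-(x): MPCA feature extraction yields the all-in-focus image
    return mpca_feature_extract(dtcwt_fused)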

2.6. Evaluation of the Proposed Method’s Effectiveness

In this section, the performance of the proposed technique is compared to that of state-of-the-art techniques in two ways: subjectively and objectively. Subjective assessment is a qualitative evaluation of how good the fused image looks. Objective assessment, also called quantitative evaluation, is done by comparing the values of many image fusion efficiency metrics. This quantitative approach, called "objective analysis," is based on mathematical modeling and examines how similar the fused image is to the images from which it was made. Quantitative analysis can be carried out in two ways: with or without a reference image [11,21,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51].
This paper uses fourteen metrics: SF, E(F), SD, AG, RMSE, CC, QAB/F, LAB/F, SSIM, QE, NAB/F, PSNR, QW, and FMI; these measures are explained in Section 2.7.

2.7. Measuring Performance with Objective Quality Metrics

E(F) (Entropy): It assists in the extraction of meaningful information from an image. A high entropy value indicates that the image carries more information.
AG (Average-Gradient): It determines the sharpness and clarity of an image. It shows that when the value of AG is high, the fused image has more clarity and sharpness.
CC (Correlation-Coefficient): It assesses the similarity of the all-in-focus image to the input images. For a better fusion process, a higher CC value is desired.
SSIM (Structural-Similarity-Index-Measure): It assists in the correlation of two images’ local patterns of the brightness of pixels. SSIM has a range of −1 to 1 in its value.
QE (Edge-dependent Fusion Quality): This metric considers features of the human visual system, such as edge detail sensitivity. A greater QE value suggests that the fusion process is more efficient.
SD (Standard Deviation): It measures the contrast of the fused image; a higher SD value indicates higher contrast. Noise is more likely to impact images with lower contrast.
SF (Spatial-Frequency): It is used to determine the overall activity level of an image. A high SF value indicates that the all-in-focus image has a high activity level.
RMSE (Root Mean Square Error): It is useful for calculating the variations per pixel caused by image fusion methods. The value of RMSE rises as the similarity decreases.
PSNR (Peak-Signal-to-Noise-Ratio): It compares the similarity of the produced fused image and the reference image to determine image quality. The better the PSNR number, the better the fusion results.
In addition, objective image fusion effectiveness assessment via gradient information [11] is examined. Assessing total fusion performance (TFP), fusion loss (FL), and fusion artifacts (FA) provides a complete analysis of fusion performance. The process for calculating these metrics is detailed in [11], and their symbolic representation is presented below:
QAB/F denotes the total amount of information transferred from the source images to the all-in-focus image; the method's performance is good if the QAB/F values are high. LAB/F denotes the total loss of information; the method's performance is good if the LAB/F values are low. NAB/F denotes the noise or artifacts added to the fused image by the fusion process; the method's performance is good if the NAB/F values are low.
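For reference, a few of the metrics listed above (E(F), AG, SF, RMSE, and PSNR) are sketched in Python below using common textbook definitions; exact formulas vary slightly between papers, so these are illustrative rather than the authors' implementation.

import numpy as np

def entropy(img):
    # E(F): Shannon entropy of the grey-level histogram
    hist, _ = np.histogram(img, bins=256, range=(0, 256), density=True)
    hist = hist[hist > 0]
    return -np.sum(hist * np.log2(hist))

def average_gradient(img):
    # AG: mean magnitude of the local intensity gradient
    gx, gy = np.gradient(np.asarray(img, dtype=float))
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def spatial_frequency(img):
    # SF: combination of row and column frequencies
    img = np.asarray(img, dtype=float)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)

def rmse(reference, fused):
    # RMSE between a reference image and the fused image
    diff = np.asarray(reference, dtype=float) - np.asarray(fused, dtype=float)
    return np.sqrt(np.mean(diff ** 2))

def psnr(reference, fused, peak=255.0):
    # PSNR in decibels; higher values indicate a closer match to the reference
    err = rmse(reference, fused)
    return float('inf') if err == 0 else 20.0 * np.log10(peak / err)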

3. Experimental Results

This paper proposes a qshiftN DTCWT and MPCA-based method in the Laplacian pyramid domain. Quality measures including SD, QAB/F, E(F), AG, SF, CC, SSIM, QE, QW, FMI, LAB/F, NAB/F, RMSE, and PSNR were employed to assess the algorithm's quality. These metrics are used to compare the proposed technique with the methods that have been published in the past. The resemblance of the fused images and their robustness against distortions are measured using these criteria. The source images used for comparison are commonly used in MIF research. Experiments were also carried out on many images from various areas and datasets [52]. On these images, the proposed approach yields good results; however, they are not included in the paper because the techniques that are contrasted with the proposed approach do not report results for them. The desk, balloon, book, clock, flower, lab, leaf, leopard, flowerpot, Pepsi, wine, and craft images are used to compare with the methodologies in the literature [21,22,23,24,25,26,27,28,30]. In addition, the outcomes of the proposed technique for certain tested source images are presented. The images are of various sizes and qualities. The proposed method is applicable to any multi-focus images, not only those presented in this work.

3.1. The Outcomes of Some of the Images That Were Tried

Several grayscale images are used to evaluate the proposed technique. To analyze these images, SF, QAB/F, QE, AG, E(F), SSIM, SD, and CC were used. Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8 show the visual outcomes for the images of a balloon, a leopard, a calendar, a bottle of wine, and a craft, respectively. Table 1 displays the results of the proposed method for certain tested images, including the RMSE and PSNR outcomes for these multi-focus images. Table 2 compares the proposed technique to methods in the literature using these criteria; the letter X indicates that the measurement for the stated image is not reported in the mentioned article. To compare the proposed technique with methods in the literature, the flowerpot, clock, Pepsi, cameraman, desk, book, lab, and flower images are used. The best outcomes are shown in bold. The robustness of the proposed technique to deformation is measured using these criteria. The outcomes suggest that the proposed technique performs well in these measurements.

3.2. Comparison of Multi-Focus Image (i.e., Clock)

The evaluation of the first multi-focus image is the clock, illustrated in Figure 9. Figure 9a represents the original image. Figure 9b,c illustrate left-focused and right-focused images, respectively. The term “left-focused image” refers to the fact that the image’s left side is focused while the right side is not. A “right-focused image” is one in which the image’s right side is focused, but its left side is not. Figure 9d shows that the all-in-focus image is created when the approach is implemented. E(F), AG, CC, QAB/F, SSIM, QE, SF, and QW are calculated to assess the proposed methodology performance. Finally, the performance of the proposed approach is compared to that of other methods previously published in the literature. The results of the comparison are shown in Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8 of the report. The letter X indicates that metrics are not calculated for the article depicted in the image. According to the literature [21,22,24,27,28,30], the proposed method is more successful than those approaches, and the best outcomes of methods are indicated in bold.

3.3. Comparison of Multi-Focus Image (i.e., Desk)

The evaluation of the second multi-focus image is the desk, illustrated in Figure 10. Figure 10a represents the original image. Figure 10b,c illustrates left-focused and right-focused images, respectively. The term “left-focused image” refers to the fact that the image’s left side is focused while the right side is not. An image with a right focus indicates that it has been focused on its right side only, while its left side has not been focused on. Figure 10d shows the process of creating the all-in-focus image after the method has been successfully implemented. The following parameters are computed to evaluate the proposed methodology performance: E(F), AG, CC, QAB/F, SSIM, QE, FMI, SD, QW, LAB/F, NAB/F, and SF. Finally, the performance of the proposed approach is compared to that of other methods previously published in the literature. The results of the comparison are shown in Table 9, Table 10, Table 11, Table 12, Table 13, Table 14 and Table 15 of the report. The letter X indicates that metrics are not calculated for the article depicted in the image. According to the literature [21,23,24,25,26,27,30], the proposed method is more successful than those approaches, and the best outcomes of methods are indicated in bold.

3.4. Comparison of Multi-Focus Image (i.e., Book)

The evaluation of the third multi-focus image is the book, illustrated in Figure 11. Figure 11a represents the original image. Figure 11b,c illustrate left-focused and right-focused images, respectively. The term “left-focused image” refers to the fact that the image’s left side is focused, while the right side is not. The right side of the image is focused while the left is not. Figure 11d shows the process of creating the all-in-focus image after the method has been successfully implemented. The following parameters are computed to evaluate the proposed methodology performance: E(F), AG, CC, QAB/F, SSIM, QE, QW, LAB/F, NAB/F, and SF. Finally, the performance of the proposed approach is compared to that of other methods previously published in the literature. The results of the comparison are shown in Table 16, Table 17, Table 18, Table 19, Table 20 and Table 21 of the report. The letter X indicates that metrics are not calculated for the article depicted in the image. According to the literature [21,24,25,26,27,30], the proposed method is more successful than those approaches, and the best outcomes of methods are indicated in bold.

3.5. Comparison of Multi-Focus Image (i.e., Flower)

The evaluation of the fourth multi-focus image is the flower, illustrated in Figure 12. Figure 12a represents the original image. Figure 12b,c illustrates left-focused and right-focused images, respectively. The term “left-focused image” refers to the fact that the image’s left side is focused while the right side is not. A right-focused image is one in which the image’s right side is focused, but its left side is not. Figure 12d shows that the all-in-focus image is created when the approach is implemented. E(F), AGF, CC, QAB/F, SSIM, and QE are calculated to assess the proposed methodology performance. Finally, the performance of the proposed approach is compared to that of other methods previously published in the literature. The comparison results are shown in Table 22 and Table 23 of the report. The letter X indicates that metrics are not calculated for the article depicted in the image. According to the literature [21,24], the proposed method is more successful than those approaches, and the best outcomes of methods are indicated in bold.

3.6. Comparison of Multi-Focus Image (i.e., Lab)

The evaluation of the fifth multi-focus image is the lab, which is illustrated in Figure 13. Figure 13a represents the original image. Figure 13b,c illustrate left-focused and right-focused images, respectively. The term “left-focused image” refers to the fact that the image’s left side is focused while the right side is not. An image with a right focus indicates that it has been focused on its right side only, while its left side has not been focused on. Figure 13d shows the process of creating the all-in-focus image after the method has been successfully implemented. The following parameters are computed to evaluate the proposed methodology performance: E(F), AG, CC, QAB/F, SSIM, QE, QW, and SF. Finally, the performance of the proposed approach is compared to that of other methods previously published in the literature. The results of the comparison are shown in Table 24, Table 25, Table 26, Table 27, Table 28 and Table 29 of the report. The letter X indicates that metrics are not calculated for the article depicted in the image. According to the literature [21,24,25,27,28,30], the proposed method is more successful than those approaches, and the best outcomes of methods are indicated in bold.

3.7. Comparison of Multi-Focus Image (i.e., Leaf)

The evaluation of the sixth multi-focus image is the leaf, which is illustrated in Figure 14. Figure 14a represents the original image. Figure 14b,c illustrates left-focused and right-focused images, respectively. The term “left-focused image” refers to the fact that the image’s left side is focused, while the right side is not. The right side of the image is focused while the left is not. Figure 14d shows the process of creating the all-in-focus image after the method has been successfully implemented. The following parameters are computed to evaluate the proposed methodology performance: E(F), AG, CC, QAB/F, SSIM, QE, and SF. Finally, the performance of the proposed approach is compared to that of other methods previously published in the literature. The results of the comparison are shown in Table 30, Table 31 and Table 32 of the report. The letter X indicates that metrics are not calculated for the article depicted in the image. According to the literature [21,24,30], the proposed method is more successful than those approaches, and the best outcomes of methods are indicated in bold.

3.8. Comparison of Multi-Focus Image (i.e., Pepsi)

The evaluation of the seventh multi-focus image is the pepsi, which is illustrated in Figure 15. Figure 15a represents the original image. Figure 15b,c illustrate left-focused and right-focused images, respectively. The term “left-focused image” refers to the fact that the image’s left side is focused while the right side is not. A right-focused image is one in which the image’s right side is focused, but its left side is not. Figure 15d shows that the all-in-focus image is created when the approach is implemented. AG, QAB/F, QE, SF, and QW are calculated to assess the proposed methodology performance. Finally, the performance of the proposed approach is compared to that of other methods previously published in the literature. The results of the comparison are shown in Table 33, Table 34, Table 35 and Table 36 of the report. The letter X indicates that metrics are not calculated for the article depicted in the image. According to the literature [24,25,27,30], the proposed method is more successful than those approaches, and the best outcomes of methods are indicated in bold.

3.9. Comparison of Multi-Focus Image (i.e., Flowerpot)

The evaluation of the eighth multi-focus image is the flowerpot, which is illustrated in Figure 16. Figure 16a represents the original image. Figure 16b,c illustrate left-focused and right-focused images, respectively. The term “left-focused image” refers to the fact that the image’s left side is focused while the right side is not. An image with a right focus indicates that it has been focused on its right side only, while its left side has not been focused on. Figure 16d shows the process of creating the all-in-focus image after the method has been successfully implemented. The following parameters are computed to evaluate the proposed methodology performance: QE, and QW. Finally, the performance of the proposed approach is compared to that of other methods previously published in the literature. The results of the comparison are shown in Table 37 of the report. The letter X indicates that metrics are not calculated for the article depicted in the image. According to the literature [25], the proposed method is more successful than those approaches, and the best outcomes of methods are indicated in bold.

3.10. Analysis of a Few More Image Pairs

A single strategy will never produce the ideal subjective and objective results for all image pairs. For this reason, eight multi-focus image pairs (shown in Figure 17) are used in the next experiment to demonstrate the average performance of the various techniques. For the image pairs in Figure 17, the proposed method produced the fused images depicted in Figure 18. As shown in Figure 18, the fusion results of the proposed approach are satisfactory for all of the image pairs tested. For the image pairs in Figure 17, the average objective assessment of several methodologies is shown in Table 38. Compared with the approach described in the literature [21], the proposed method is more successful, and the best outcomes of the various methods are highlighted in bold.

4. Conclusions

Traditional wavelet-based fusion algorithms create ringing distortions in the fused image due to a lack of directional selectivity and shift-invariance. The proposed methodology utilizes the benefits of a hybrid approach for the image fusion process: the hybrid method uses the LP for decomposition, the DTCWT for image fusion, and MPCA for feature extraction. The proposed fused image has better image quality and extracts the relevant information from the source images with good directionality and a high degree of shift-invariance, thanks to the hybrid approach with MPCA, thereby achieving better visual quality. Several pairs of multifocus images are used to assess the performance of the proposed method. Through the experiments conducted on standard test pairs of multifocus images, it was found that the proposed method shows superior performance in most cases compared with other methods, both in terms of quantitative parameters and in terms of visual quality. The proposed work is therefore validated on many data sets by evaluating quantitative measures such as E(F), AG, SD, SSIM, QAB/F, etc. It is evident from the results that the proposed method produces better visual perception, better clarity, and less distortion. In this work, the proposed technique is used to fuse only grayscale images. The application of the proposed method to other areas, such as medical image processing and infrared-visible image processing, should be part of future exploration.

Author Contributions

Conceptualization, C.R.M., K.C., A.O.I. and N.Z.J.; Methodology, C.R.M., N.Z.J., R.K.R., K.S.S. and K.C.; Coding, C.R.M., A.A., A.O.I. and K.C.; Validation, N.Z.J., R.K.R., K.S.S., A.A. and C.R.M.; Investigation, R.K.R., K.C. and N.Z.J.; Resources, C.R.M., K.C. and A.O.I.; Writing—original draft preparation, C.R.M., K.C. and R.K.R.; Writing—review and editing, C.R.M., K.S.S., A.A. and K.C. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through the Large Groups Project under grant number RGP.2/111/43.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not Applicable.

Data Availability Statement

Data can be available on request.

Acknowledgments

We thank the Center for Smart Society 5.0 (CSS5), Taylor's University, Malaysia.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shah, P.; Merchant, S.N.; Desai, U.B. Multi-focus and multispectral image fusion based on pixel significance using multiresolution decomposition. Signal Image Video Processing 2013, 7, 95–109.
  2. Chai, Y.; Li, H.; Li, Z. Multi-focus image fusion scheme using focused region detection and multiresolution. Opt. Commun. 2011, 284, 4376–4389.
  3. Zhang, B.; Zhang, C.; Yuanyuan, L.; Jianshuai, W.; He, L. Multi-focus image fusion algorithm based on compound PCNN in Surfacelet domain. Optik 2014, 125, 296–300.
  4. Wahyuni, I.S.; Sabre, R. Wavelet Decomposition in Laplacian Pyramid for Image Fusion. Int. J. Signal Processing Syst. 2016, 4, 37–44.
  5. Petrovic, V.S.; Xydeas, C.S. Gradient-based multiresolution image fusion. IEEE Trans. Image Processing 2004, 13, 228–237.
  6. Wang, W.W.; Shui, P.; Song, G. Multi-focus Image Fusion in Wavelet Domain. Proceedings of the Second International Conference on Machine Learning and Cybernetics. IEEE Comput. Soc. 2003, 5, 2887–2890.
  7. Li, S.; Yang, B.; Hu, J. Performance comparison of different multi-resolution transforms for image fusion. Inf. Fusion 2011, 12, 74–84.
  8. Sharma, A.; Gulati, T. Change Detection from Remotely Sensed Images Based on Stationary Wavelet Transform. Int. J. Electr. Comput. Eng. 2017, 7, 3395–3401.
  9. Borwonwatanadelok, P.; Rattanapitak, W.; Udomhunsakul, S. Multi-focus Image Fusion Based on Stationary Wavelet Transform and Extended Spatial Frequency Measurement. In Proceedings of the 2009 International Conference on Electronic Computer Technology, Macau, China, 20–22 February 2009; pp. 77–81.
  10. Naidu, V. Image Fusion Technique using Multi-resolution Singular Value Decomposition. Def. Sci. J. 2011, 61, 479–484.
  11. Shreyamsha Kumar, B.K. Multi-focus and multispectral image fusion based on pixel significance using discrete cosine harmonic wavelet transform. Signal Image Video Processing 2013, 7, 1125–1143.
  12. Li, H.; Wei, S.; Chai, Y. Multi-focus image fusion scheme based on feature contrast in the lifting stationary wavelet domain. EURASIP J. Adv. Signal Processing 2012, 39.
  13. Yuelin, Z.; Xiaoqiang, L.; Wang, T. Visible and Infrared Image Fusion using the Lifting Wavelet. Telecommun. Comput. Electron. Control. 2013, 11, 6290–6295.
  14. Pujar, J.; Itkarkar, R.R. Image Fusion Using Double Density Discrete Wavelet Transform. Int. J. Comput. Sci. Netw. 2016, 5, 6–10.
  15. Liu, J.; Yang, J.; Li, B. Multi-focus Image Fusion by SML in the Shearlet Subbands. TELKOMNIKA Indones. J. Electr. Eng. 2014, 12, 618–626.
  16. Selesnick, I.W.; Baraniuk, R.G.; Kingsbury, N.C. The dual-tree complex wavelet transform. IEEE Signal Processing Mag. 2005, 22, 123–151.
  17. Radha, N.; Ranga Babu, T. Performance evaluation of quarter shift dual tree complex wavelet transform based multi-focus image fusion using fusion rules. Int. J. Electr. Comput. Eng. 2019, 9, 2377–2385.
  18. Naidu, V.P.S. Novel Image Fusion Techniques using DCT. Int. J. Comput. Sci. Bus. Inform. 2013, 5, 1–18.
  19. Rama Mohan, C.; Kiran, S.; Vasudeva; Ashok Kumar, A. Image Enhancement based on Fusion using 2D LPDCT and Modified PCA. Int. J. Eng. Adv. Technol. 2019, 8, 1482–1492.
  20. Rama Mohan, C.; Ashok Kumar, A.; Kiran, S.; Vasudeva. An Enhancement Process for Gray-Scale Images Resulted from Image Fusion using Multiresolution and Laplacian Pyramid. ICTACT J. Image Video Proc. 2021, 11, 2391–2399.
  21. Panigrahy, C.; Seal, A.; Mahato, N.K. Fractal dimension based parameter adaptive dual channel PCNN for multi-focus image fusion. Opt. Lasers Eng. 2020, 133, 106141–106163.
  22. Meher, B.; Agrawal, S.; Panda, R.; Abraham, A. A survey on region based image fusion methods. Inf. Fusion 2018, 48, 119–132.
  23. He, L.; Yang, X.; Lu, L.; Wu, W.; Ahmad, A.; Jeon, G. A novel multi-focus image fusion method for improving imaging systems by using cascade-forest model. J. Image Video Proc. 2020, 2020, 1–14.
  24. Aymaz, S.; Köse, C. A novel image decomposition-based hybrid technique with super-resolution method for multi-focus image fusion. Inf. Fusion 2019, 45, 113–127.
  25. Wang, Z.; Li, X.; Duan, H.; Zhang, X.; Wang, H. Multi-focus image fusion using convolutional neural networks in the discrete wavelet transform domain. Multimed. Tools Appl. 2019, 78, 34483–34512.
  26. Amin-Naji, M.; Aghagolzadeh, A. Multi-Focus Image Fusion in DCT Domain using Variance and Energy of Laplacian and Correlation Coefficient for Visual Sensor Networks. J. AI Data Min. 2018, 6, 233–250.
  27. Aymaz, S.; Köse, C.; Aymaz, Ş. Multi-focus image fusion for different datasets with super-resolution using gradient-based new fusion rule. Multimed Tools Appl. 2020, 79, 13311–13350.
  28. Li, J.; Yuan, G.; Fan, H. Multi-focus Image Fusion Using Wavelet-Domain-Based Deep CNN. Comput. Intell. Neurosci. 2019, 2019, 4179397.
  29. Chouhan, K.; Kumar, A.; Chakraverti, A.K.; Cholla, R.R. Human fall detection analysis with image recognition using convolutional neural network approach. In Proceedings of the International Conference on Trends in Computational and Cognitive Engineering, Lecture Notes in Network and Systems; Springer Nature: Singapore, 2022; Volume 376.
  30. Nejati, M.; Samavi, S.; Karimi, N.; Soroushmehr, S.R.; Shirani, S.; Roosta, I.; Najarian, K. Surface area-based focus criterion for multi-focus image fusion. Inf. Fusion 2017, 36, 284–295.
  31. Wei, B.; Feng, X.; Wang, K.; Gao, B. The Multi-Focus-Image-Fusion Method Based on Convolutional Neural Network and Sparse Representation. Entropy 2021, 23, 827.
  32. Zhang, C. Multifocus image fusion using multiscale transform and convolutional sparse representation. Int. J. Wavelets Multiresolution Inf. Processing 2021, 19, 2050061.
  33. Yang, Y.; Tong, S.; Huang, S.; Lin, P. Dual-Tree Complex Wavelet Transform and Image Block Residual-Based Multi-Focus Image Fusion in Visual Sensor Networks. Sensors 2014, 14, 22408–22430.
  34. Xiao, Y.; Hong, Y.; Chen, X.; Chen, W. The Application of Dual-Tree Complex Wavelet Transform (DTCWT) Energy Entropy in Misalignment Fault Diagnosis of Doubly-Fed Wind Turbine (DFWT). Entropy 2017, 19, 587.
  35. Jagalingam, P.; Hegde, A.V. A Review of Quality Metrics for Fused Image. Aquat. Procedia 2015, 4, 133–142.
  36. Zafar, R.; Farid, M.S.; Khan, M.H. Multi-Focus Image Fusion: Algorithms, Evaluation, and a Library. J. Imaging 2020, 6, 60.
  37. Pistonesi, S.; Martinez, J.; Ojeda, S.M.; Vallejos, R. A Novel Quality Image Fusion Assessment Based on Maximum Codispersion. In Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications; CIARP 2015. Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2015; Volume 9423.
  38. Sun, C.; Zhang, C.; Xiong, N. Infrared and Visible Image Fusion Techniques Based on Deep Learning: A Review. Electronics 2020, 9, 2162.
  39. Anisha, M.L.; Margret Anouncia, S. Enhanced Dictionary based Sparse Representation Fusion for Multi-temporal Remote Sensing Images. Eur. J. Remote Sens. 2016, 49, 317–336.
  40. Xydeas, C.S.; Petrovic, V. Objective image fusion performance measure. Electron. Lett. 2000, 36, 308–309.
  41. Liu, Z.; Blasch, E.; Xue, Z.; Zhao, J.; Laganiere, R.; Wu, W. Objective Assessment of Multiresolution Image Fusion Algorithms for Context Enhancement in Night Vision: A Comparative Study. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 94–109.
  42. Qu, G.; Zhang, D.; Yan, P. Information measure for performance of image fusion. Electron. Lett. 2002, 38, 313–315.
  43. Hossny, M.; Nahavandi, S.; Creighton, D. Comments on ‘Information measure for performance of image fusion’. Electron. Lett. 2008, 44, 1066–1067.
  44. Han, Y.; Cai, Y.; Cao, Y.; Xu, X. A New Image Fusion Performance Metric Based on Visual Information Fidelity. Inf. Fusion 2013, 14, 127–135.
  45. Wang, Q.; Shen, Y.; Jin, J. Performance evaluation of image fusion techniques. In Image Fusion: Algorithms and Applications; Stathaki, T., Ed.; Academic Press: Oxford, UK, 2008; pp. 469–492.
  46. Cvejic, N.; Canagarajah, C.; Bull, D. Image fusion metric based on mutual information and Tsallis entropy. Electron. Lett. 2006, 42, 626–627.
  47. Zheng, Y.; Essock, E.A.; Hansen, B.C.; Haun, A.M. A new metric based on extended spatial frequency and its application to DWT based fusion algorithms. Inf. Fusion 2007, 8, 177–192.
  48. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Processing 2004, 13, 600–612.
  49. Yang, C.; Zhang, J.-Q.; Wang, X.-R.; Liu, X. A novel similarity based quality metric for image fusion. Inf. Fusion 2008, 9, 156–160.
  50. Chen, H.; Varshney, P.K. A human perception inspired quality metric for image fusion based on regional information. Inf. Fusion 2007, 8, 193–207.
  51. Chen, Y.; Blum, R.S. A new automated quality assessment algorithm for image fusion. Image Vis. Comput. 2009, 27, 1421–1432.
  52. Available online: https://sites.google.com/view/durgaprasadbavirisetti/datasets (accessed on 1 December 2021).
  53. Rama Mohan, C.; Kiran, S.; Ashok Kumar, A. Advanced Multi-focus Image Fusion algorithm using FPDCT with Modified PCA. Int. J. Innov. Technol. Explor. Eng. 2019, 9, 175–184.
  54. Li, H.; Chai, Y.; Yin, H.; Liu, G. Multi-focus image fusion denoising scheme based on homogeneity similarity. Opt. Commun. 2012, 285, 91–100.
  55. Moushmi, S.; Sowmya, V.; Soman, K.P. Empirical wavelet transform for multi-focus image fusion. In Proceedings of the International Conference on Soft Computing Systems, Advances in Intelligent Systems and Computing 2016; Springer: Berlin/Heidelberg, Germany, 2018.
Figure 1. The flow diagram of the proposed qshiftN DTCWT-LP and MPCA-based image fusion algorithm.
Figure 2. DTCWT bank filter’s structure.
Figure 3. The qshiftN DTCWT filter’s structure.
Figure 4. (Balloon): (a) Original Image; (b,c) Multi-focus input images; (d) Proposed fusion.
Figure 5. (Leopard): (a) Original Image; (b,c) Multi-focus input images; (d) Proposed fusion.
Figure 6. (Calendar): (a) original image; (b,c) multi-focus input images; and (d) proposed fusion.
Figure 7. (Wine): (a) Original Image; (b,c) Multi-focus input images; (d) Proposed fusion.
Figure 8. (Craft): (a) original image; (b,c) multi-focus input images; and (d) proposed fusion.
Figure 9. (Clock): (a) original image; (b,c) multi-focus input images; and (d) proposed fusion.
Figure 10. (Desk): (a) original image; (b,c) multi-focus input images; and (d) proposed fusion.
Figure 11. (Book): (a) original Image; (b,c) multi-focus input images; and (d) proposed fusion.
Figure 12. (Flower): (a) original image; (b,c) multi-focus input images; and (d) proposed fusion.
Figure 13. (Lab): (a) original image; (b,c) multi-focus input images; and (d) proposed fusion.
Figure 14. (Leaf): (a) original image; (b,c) multi-focus input images; and (d) proposed fusion.
Figure 15. (Pepsi): (a) original image; (b,c) multi-focus input images; and (d) proposed fusion.
Figure 16. (Flowerpot): (a) original image; (b,c) multi-focus input images; and (d) proposed fusion.
Figure 17. A few pairs of multi-focus images.
Figure 18. Fusion outcomes of the proposed technique for the multi-focus image sets in Figure 17.
Table 1. The outcomes of the proposed method for certain tested images.
Input Images | RMSE | PSNR | SD | SF | SSIM | E(F) | QAB/F | AG | CC | QE
Book | 9.8416 | 38.2341 | 60.9389 | 25.0770 | 0.9085 | 7.4362 | 0.9376 | 13.0210 | 0.9892 | 0.9165
Clock | 4.4502 | 41.6810 | 51.1783 | 9.2224 | 0.9576 | 7.4180 | 0.9835 | 6.1301 | 0.9816 | 0.8997
Flower | 5.1433 | 41.0524 | 39.6769 | 22.3338 | 0.9753 | 7.2315 | 0.9915 | 14.7327 | 0.9669 | 0.9058
Lab | 4.0437 | 42.0970 | 47.7172 | 13.4651 | 0.9639 | 7.1517 | 0.9908 | 7.5892 | 0.9776 | 0.9196
Leaf | 11.9434 | 37.3935 | 46.7165 | 32.0547 | 0.7898 | 7.4633 | 0.9582 | 26.9816 | 0.9219 | 0.8305
Flowerpot | 5.3262 | 40.9006 | 53.5593 | 24.1377 | 0.9638 | 7.5211 | 0.9908 | 14.4599 | 0.9750 | 0.9131
Pepsi | 2.7651 | 43.7477 | 45.6264 | 14.2884 | 0.9730 | 7.1431 | 0.9955 | 8.5574 | 0.9819 | 0.9439
Balloon | 1.7621 | 45.7045 | 48.5287 | 21.0675 | 0.9904 | 7.4875 | 0.9990 | 10.2507 | 0.9840 | 0.9534
Leopard | 2.1108 | 44.9204 | 66.0408 | 20.0953 | 0.9909 | 7.4777 | 0.9988 | 13.6941 | 0.9893 | 0.9491
Wine | 10.0433 | 38.1460 | 72.2601 | 51.7072 | 0.8938 | 7.6116 | 0.9784 | 35.6507 | 0.9492 | 0.8705
Craft | 4.6617 | 41.4794 | 31.8556 | 13.4799 | 0.9374 | 6.5221 | 0.9823 | 7.5230 | 0.9660 | 0.8807
Desk | 4.8765 | 41.2838 | 47.6417 | 16.1341 | 0.9448 | 7.3882 | 0.9871 | 9.4234 | 0.9624 | 0.9216
Table 2. For some images, the comparisons with approaches in the literature.
IF-Methods | Image Metrics | Lab | Desk | Clock | Book | Cameraman | Flower | Pepsi | Flowerpot
C. Rama Mohan et al. [53] | RMSE | X | 7.44 | 5.85 | X | 9.06 | X | 3.83 | 7.43
C. Rama Mohan et al. [53] | PSNR | X | 39.45 | 40.50 | X | 38.59 | X | 42.34 | 39.46
Li et al. [54] | RMSE | 4.65 | X | X | X | X | 7.84 | X | X
Li et al. [54] | PSNR | X | X | X | X | X | X | X | X
Moushmi et al. [55] | RMSE | X | X | 4.51 | 7.04 | X | X | X | X
Moushmi et al. [55] | PSNR | X | X | X | X | X | X | X | X
Proposed Method | RMSE | 5.14 | 9.84 | 4.45 | 4.04 | 44.88 | 5.33 | 2.77 | 5.33
Proposed Method | PSNR | 41.05 | 38.23 | 41.68 | 42.09 | 41.28 | 40.90 | 43.75 | 40.90
Table 3. The outcomes for the clock image and comparisons with existing techniques in the literature ([21]).
Evaluation Metric | GFDF | NSC-PCNN | LG-MW | IM | DCT-CV | QT | GF | MST-SR | DSIFT | CSR | SSDI | CNN | BF | BRW-TS | PA-DCPCNN | Proposed Method
E(F) | 7.077 | 7.304 | 7.067 | 7.170 | 6.992 | 6.986 | 7.249 | 7.321 | 7.014 | 7.318 | 7.123 | 7.128 | 7.008 | 7.078 | 7.385 | 7.418
QE | 0.853 | 0.803 | 0.840 | 0.848 | 0.841 | 0.848 | 0.852 | 0.850 | 0.850 | 0.850 | 0.853 | 0.855 | 0.848 | 0.854 | 0.854 | 0.899
AG | 5.802 | 5.411 | 5.860 | 5.746 | 5.810 | 5.866 | 5.841 | 5.879 | 5.857 | 5.382 | 5.848 | 5.751 | 5.774 | 5.757 | 6.072 | 6.130
SSIM | 0.895 | 0.900 | 0.894 | 0.900 | 0.894 | 0.894 | 0.897 | 0.903 | 0.894 | 0.897 | 0.896 | 0.896 | 0.894 | 0.896 | 0.903 | 0.958
QAB/F | 0.892 | 0.873 | 0.880 | 0.850 | 0.869 | 0.880 | 0.895 | 0.893 | 0.894 | 0.889 | 0.893 | 0.891 | 0.881 | 0.890 | 0.897 | 0.984
CC | 0.978 | 0.981 | 0.978 | 0.978 | 0.978 | 0.978 | 0.980 | 0.981 | 0.978 | 0.979 | 0.979 | 0.979 | 0.978 | 0.979 | 0.981 | 0.982
Table 4. The outcomes for the clock image and comparisons with existing techniques in the literature ([22]).
Evaluation Metric | QWT Normalized Cut | RF-SSIM | NSCT & Focus Area Detection | BEMD | Surface Area Based | RSSF | SR | Shearlet & GBVS | CS | LSWT | Proposed Method
QAB/F | 0.744 | 0.429 | 0.750 | 0.483 | 0.74 | 0.703 | 0.753 | 0.717 | 0.426 | 0.725 | 0.984
SF | 8.398 | 9.100 | 8.473 | 9.193 | 13.65 | 8.986 | 8.468 | 8.708 | 8.562 | 8.046 | 9.2224
QE | 0.665 | 0.665 | 0.582 | 0.663 | X | 0.652 | 0.592 | 0.586 | 0.494 | 0.583 | 0.899
Qw | 0.843 | 0.852 | 0.834 | 0.834 | X | 0.821 | 0.781 | 0.736 | 0.776 | 0.763 | 0.9192
H | 7.342 | 7.426 | 7.291 | 7.346 | X | 7.369 | 7.066 | 7.434 | 7.412 | 7.155 | 7.418
Table 5. The outcomes for the clock image and comparisons with existing techniques in the literature ([24]).
Evaluation Metric | Samet Aymaz et al. SR | Baohua et al. | Li et al. | Hua et al. | Zhang et al. | Samet Aymaz et al. without SR | Yin et al. | Proposed Method
QAB/F | 0.9 | 0.7 | 0.68 | 0.73 | 0.71 | 0.78 | 0.71 | 0.984
AG | 6.97 | X | X | X | X | 4.26 | 3.46 | 6.130
Table 6. The outcomes for the clock image and comparisons with existing techniques in the literature ([27]).
Methods | QAB/F
Nejati et al. | 0.72
Samet Aymaz et al.—with SR-2 | 0.87
Du et al. | X
Jiang et al. | 0.71
Samet Aymaz et al.—with SR-4 | 0.89
Li et al. | 0.68
Chaudhary et al. | X
Amin-Naji et al. | X
Abdipour et al. | 0.65
Hua et al. | 0.73
He et al. | 0.69
He et al. | X
Samet Aymaz et al.—with SR-3 | 0.88
Chen et al. | X
Aymaz et al. | 0.9
Yin et al. | 0.71
Zhang et al. | 0.71
Yang et al. | 0.74
Proposed Method | 0.984
Table 7. The outcomes for the clock image and comparisons with existing techniques in the literature ([28]).
Evaluation Metric | PCNN | NSCT | DCNN | SR | DSIFT | MWG | NSCT-SR | GF | WDCNN | Proposed Method
AG | 5.57 | 4.53 | 4.56 | 4.34 | 4.44 | 4.44 | 4.55 | 4.58 | 6.52 | 6.130
SSIM | 0.65 | 0.54 | 0.65 | 0.53 | 0.58 | 0.60 | 0.53 | 0.61 | 0.69 | 0.958
Table 8. The outcomes for the clock image and comparisons with existing techniques in the literature ([30]).
Evaluation Metric | CBF | DCHWT | DCTVAR | GFF | IFM | MWGF | WSSM | SA-FC | Proposed Method
SF | 12.87 | 12.28 | 13.52 | 13.43 | 13.29 | 13.42 | 13.6 | 13.65 | 9.2224
QAB/F | 0.726 | 0.694 | 0.735 | 0.733 | 0.735 | 0.731 | 0.709 | 0.74 | 0.984
Table 9. The outcomes for the desk image and comparisons with existing techniques in the literature ([21]).
Evaluation Metric | PA-DCPCNN | NSC-PCNN | BRW-TS | LG-MW | GFDF | DCT-CV | IM | CNN | GF | BF | SSDI | QT | DSIFT | CSR | MST-SR | Proposed Method
E(F) | 7.346 | 7.298 | 7.284 | 7.296 | 7.279 | 7.272 | 7.284 | 7.280 | 7.309 | 7.270 | 7.308 | 7.280 | 7.291 | 7.294 | 7.311 | 7.388
QE | 0.867 | 0.849 | 0.865 | 0.864 | 0.864 | 0.829 | 0.848 | 0.863 | 0.866 | 0.834 | 0.865 | 0.864 | 0.863 | 0.863 | 0.869 | 0.922
SSIM | 0.869 | 0.860 | 0.858 | 0.853 | 0.856 | 0.855 | 0.856 | 0.858 | 0.862 | 0.856 | 0.857 | 0.855 | 0.855 | 0.860 | 0.867 | 0.945
QAB/F | 0.896 | 0.887 | 0.891 | 0.888 | 0.891 | 0.875 | 0.883 | 0.890 | 0.893 | 0.874 | 0.892 | 0.892 | 0.891 | 0.888 | 0.896 | 0.987
CC | 0.964 | 0.964 | 0.961 | 0.960 | 0.961 | 0.960 | 0.961 | 0.961 | 0.962 | 0.960 | 0.961 | 0.960 | 0.960 | 0.961 | 0.964 | 0.962
AG | 8.215 | 7.822 | 7.973 | 8.057 | 7.995 | 7.953 | 7.956 | 7.957 | 7.921 | 7.897 | 8.053 | 8.068 | 8.071 | 7.570 | 8.122 | 9.423
Table 10. The outcomes for the desk image and comparisons with existing techniques in the literature ([23]).
Evaluation Metric | CF | SR | CVT | CNN | NSCT | GFF | NSCT-SR | Proposed Method
SD | 46.860 | 46.589 | 46.766 | 46.817 | 46.869 | 46.860 | 46.576 | 47.642
FMI | 0.675 | 0.579 | 0.533 | 0.674 | 0.575 | 0.592 | 0.574 | 0.390
QAB/F | 0.734 | 0.702 | 0.686 | 0.734 | 0.702 | 0.711 | 0.702 | 0.987
Table 11. The outcomes for the desk image and comparisons with existing techniques in the literature ([24]).
Evaluation Metric | Samet Aymaz et al. without SR | Baohua et al. | Zhang et al. | Hua et al. | Chen et al. | Samet Aymaz et al. SR | Proposed Method
AG | 6.6 | X | X | X | X | 11.86 | 9.423
QAB/F | 0.84 | 0.71 | 0.68 | 0.73 | 0.71 | 0.88 | 0.987
Table 12. The outcomes for the desk image and comparisons with existing techniques in the literature ([25]).
Evaluation Metric | DWT | LP | Curvelet | IMF | CNN | DS | DCTCV | MWGF | GFF | CNN-DWT | Proposed Method
QE | 0.8896 | 0.8958 | 0.8974 | 0.8923 | 0.8982 | 0.8971 | 0.8878 | 0.8982 | 0.8973 | 0.8985 | 0.9216
Qw | 0.8655 | 0.8727 | 0.8779 | 0.8711 | 0.8775 | 0.8761 | 0.868 | 0.8757 | 0.8775 | 0.8782 | 0.9090
Table 13. The outcomes for the desk image and comparisons with existing techniques in the literature ([26]).
Methods | QAB/F | LAB/F | NAB/F
DCT + Average | 0.5187 | 0.4782 | 0.0063
DCT + Variance | 0.7165 | 0.2612 | 0.0478
DCT + Contrast | 0.6212 | 0.2554 | 0.3629
DWT | 0.6302 | 0.2552 | 0.3362
SIDWT | 0.6694 | 0.2764 | 0.1564
DCHWT | 0.6529 | 0.314 | 0.0789
DCT + SML | 0.6774 | 0.3074 | 0.0324
DCT + Eng_Corr | 0.7288 | 0.253 | 0.0391
DCT + SF | 0.7213 | 0.26 | 0.0415
DCT + VOL | 0.7285 | 0.2519 | 0.0421
DCT + EOL | 0.728 | 0.2522 | 0.0425
DCT + AC_Max | 0.6763 | 0.291 | 0.0696
DCT + Corr | 0.7246 | 0.2541 | 0.0456
Proposed Method | 0.9871 | 0.0071 | 0.0117
Table 14. The outcomes for the desk image and comparisons with existing techniques in the literature ([27]).
Methods | QAB/F
Samet Aymaz et al.—with SR-4 | 0.89
Jiang et al. | 0.72
Hua et al. | 0.73
Li et al. | X
Samet Aymaz et al.—with SR-2 | 0.87
Nejati et al. | 0.73
Yang et al. | 0.73
Abdipour et al. | X
Zhang et al. | 0.68
Yin et al. | X
Chaudhary et al. | 0.71
He et al. | X
He et al. | X
Du et al. | X
Aymaz et al. | 0.88
Amin-Naji et al. | X
Samet Aymaz et al.—with SR-3 | 0.88
Chen et al. | 0.71
Proposed Method | 0.9871
Table 15. The outcomes for the desk image and comparisons with existing techniques in the literature ([30]).
Evaluation Metric | SA-FC | CBF | DCHWT | MWGF | DCTVAR | GFF | IFM | WSSM | Proposed Method
SF | 15.54 | 14.92 | 13.83 | 15.45 | 15.39 | 15.42 | 15.42 | 15.47 | 16.1341
QAB/F | 0.739 | 0.699 | 0.655 | 0.728 | 0.732 | 0.726 | 0.725 | 0.702 | 0.9871
Table 16. The outcomes for the book image and comparisons with existing techniques in the literature ([21]).
Evaluation Metric | PA-DCPCNN | NSC-PCNN | BRW-TS | LG-MW | GFDF | DCT-CV | IM | CNN | GF | MST-SR | BF | QT | CSR | DSIFT | SSDI | Proposed Method
SSIM | 0.954 | 0.952 | 0.952 | 0.951 | 0.952 | 0.952 | 0.951 | 0.952 | 0.952 | 0.954 | 0.952 | 0.952 | 0.953 | 0.952 | 0.952 | 0.909
QAB/F | 0.915 | 0.906 | 0.907 | 0.905 | 0.907 | 0.905 | 0.905 | 0.908 | 0.908 | 0.908 | 0.906 | 0.905 | 0.908 | 0.906 | 0.907 | 0.938
CC | 0.983 | 0.982 | 0.982 | 0.982 | 0.982 | 0.982 | 0.982 | 0.982 | 0.982 | 0.982 | 0.982 | 0.982 | 0.982 | 0.982 | 0.982 | 0.989
AG | 13.706 | 13.377 | 13.412 | 13.436 | 13.409 | 13.418 | 13.354 | 13.373 | 13.373 | 13.518 | 13.411 | 13.428 | 12.645 | 13.457 | 13.438 | 13.021
QE | 0.884 | 0.879 | 0.882 | 0.881 | 0.882 | 0.879 | 0.876 | 0.883 | 0.883 | 0.883 | 0.881 | 0.880 | 0.883 | 0.881 | 0.882 | 0.917
E(F) | 7.296 | 7.295 | 7.271 | 7.270 | 7.268 | 7.270 | 7.266 | 7.270 | 7.265 | 7.275 | 7.269 | 7.271 | 7.249 | 7.273 | 7.271 | 7.436
Table 17. The outcomes for the book image and comparisons with existing techniques in the literature ([24]).
Evaluation Metric | Samet Aymaz et al. without SR | Chen et al. | Li et al. | Zhang et al. | Liu et al. | Samet Aymaz et al. SR | Hua et al. | Proposed Method
AG | 10.83 | X | X | X | 9.36 | 13.9 | X | 13.021
QAB/F | 0.81 | 0.71 | 0.71 | 0.72 | 0.79 | 0.92 | 0.73 | 0.938
Table 18. The outcomes for the book image and comparisons with existing techniques in the literature ([25]).
Evaluation Metric | DWT | LP | Curvelet | IMF | CNN | DS | DCTCV | MWGF | GFF | CNN-DWT | Proposed Method
QE | 0.8942 | 0.8962 | 0.8999 | 0.8911 | 0.9007 | 0.9002 | 0.8827 | 0.9 | 0.901 | 0.8993 | 0.917
Qw | 0.8932 | 0.8969 | 0.9024 | 0.8957 | 0.9029 | 0.9017 | 0.8897 | 0.9014 | 0.9026 | 0.9031 | 0.9363
Table 19. The outcomes for the book image and comparisons with existing techniques in the literature ([26]).
Methods | LAB/F | QAB/F | NAB/F
DCT + Average | 0.5002 | 0.4985 | 0.0025
DCT + Variance | 0.266 | 0.721 | 0.0277
DCT + Contrast | 0.2384 | 0.647 | 0.3736
DWT | 0.2294 | 0.6621 | 0.3569
DCT + Eng_Corr | 0.2622 | 0.7284 | 0.0202
DCHWT | 0.3014 | 0.6684 | 0.0705
SIDWT | 0.2637 | 0.6932 | 0.1279
DCT + EOL | 0.262 | 0.7283 | 0.0206
DCT + SML | 0.2928 | 0.696 | 0.0241
DCT + SF | 0.2757 | 0.7151 | 0.0197
DCT + VOL | 0.2619 | 0.7284 | 0.0207
DCT + Corr | 0.2622 | 0.7281 | 0.0207
DCT + AC_Max | 0.2781 | 0.7081 | 0.0294
Proposed Method | 0.0422 | 0.938 | 0.0403
Table 20. The outcomes for the book image and comparisons with existing techniques in the literature ([27]).
Methods | QAB/F
Chen et al. | 0.71
Abdipour et al. | 0.73
Samet Aymaz et al.—with SR-4 | 0.93
Hua et al. | 0.73
Jiang et al. | 0.73
Zhang et al. | X
Chaudhary et al. | X
Nejati et al. | 0.73
Amin-Naji et al. | X
Samet Aymaz et al.—with SR-3 | 0.91
Yang et al. | 0.76
He et al. | X
He et al. | 0.76
Yin et al. | X
Aymaz et al. | 0.92
Samet Aymaz et al.—with SR-2 | 0.9
Du et al. | 0.72
Li et al. | 0.71
Proposed Method | 0.938
Table 21. The outcomes for the book image and comparisons with existing techniques in the literature ([30]).
Evaluation Metric | CBF | DCHWT | DCTVAR | GFF | IFM | MWGF | WSSM | SA-FC | Proposed Method
QAB/F | 0.728 | 0.688 | 0.731 | 0.732 | 0.729 | 0.731 | 0.728 | 0.734 | 0.938
SF | 28.08 | 25.11 | 30.08 | 29.98 | 30.38 | 29.91 | 27.49 | 30.1 | 25.0770
Table 22. The outcomes for the flower image and comparisons with existing techniques in the literature ([21]).
Evaluation Metric | PA-DCPCNN | NSC-PCNN | BRW-TS | LG-MW | GFDF | DCT-CV | SSDI | CNN | IM | GF | QT | CSR | DSIFT | MST-SR | BF | Proposed Method
QE | 0.862 | 0.850 | 0.855 | 0.853 | 0.856 | 0.850 | 0.855 | 0.856 | 0.852 | 0.856 | 0.854 | 0.857 | 0.854 | 0.860 | 0.851 | 0.906
AG | 14.316 | 12.813 | 14.114 | 14.182 | 14.102 | 14.126 | 14.141 | 14.053 | 14.115 | 14.083 | 14.207 | 13.348 | 14.217 | 14.148 | 13.969 | 14.733
SSIM | 0.948 | 0.950 | 0.941 | 0.940 | 0.941 | 0.940 | 0.940 | 0.941 | 0.939 | 0.941 | 0.940 | 0.942 | 0.940 | 0.947 | 0.940 | 0.975
QAB/F | 0.887 | 0.885 | 0.876 | 0.874 | 0.877 | 0.872 | 0.876 | 0.878 | 0.875 | 0.878 | 0.874 | 0.878 | 0.874 | 0.878 | 0.871 | 0.992
CC | 0.969 | 0.969 | 0.962 | 0.961 | 0.962 | 0.961 | 0.962 | 0.962 | 0.961 | 0.963 | 0.961 | 0.963 | 0.961 | 0.967 | 0.961 | 0.967
E(F) | 7.221 | 7.152 | 7.181 | 7.181 | 7.181 | 7.182 | 7.180 | 7.180 | 7.181 | 7.185 | 7.182 | 7.168 | 7.182 | 7.191 | 7.179 | 7.232
Table 23. The outcomes for the flower image and comparisons with existing techniques in the literature ([24]).
Evaluation Metric | Liu et al. | Samet Aymaz et al. without SR | Samet Aymaz et al. SR | Proposed Method
AG | 9.22 | 9.47 | 18.08 | 14.733
QAB/F | 0.71 | 0.79 | 0.85 | 0.992
Table 24. The outcomes for the lab image and comparisons with existing techniques in the literature ([21]).
Evaluation Metric | PA-DCPCNN | NSC-PCNN | BRW-TS | LG-MW | CNN | DCT-CV | IM | GF | CSR | QT | DSIFT | MST-SR | SSDI | BF | GFDF | Proposed Method
SSIM | 0.912 | 0.909 | 0.907 | 0.905 | 0.907 | 0.906 | 0.904 | 0.910 | 0.909 | 0.906 | 0.906 | 0.911 | 0.907 | 0.907 | 0.907 | 0.964
AG | 6.647 | 6.336 | 6.482 | 6.514 | 6.457 | 6.466 | 6.484 | 6.425 | 6.126 | 6.504 | 6.534 | 6.594 | 6.539 | 6.456 | 6.479 | 7.589
QAB/F | 0.900 | 0.893 | 0.896 | 0.892 | 0.896 | 0.893 | 0.893 | 0.898 | 0.896 | 0.896 | 0.896 | 0.900 | 0.897 | 0.895 | 0.896 | 0.991
E(F) | 7.118 | 6.992 | 7.075 | 7.037 | 7.022 | 6.982 | 7.056 | 7.060 | 7.043 | 7.039 | 7.074 | 7.110 | 7.096 | 6.989 | 7.032 | 7.152
CC | 0.979 | 0.979 | 0.977 | 0.977 | 0.977 | 0.977 | 0.977 | 0.978 | 0.978 | 0.977 | 0.977 | 0.979 | 0.977 | 0.977 | 0.977 | 0.978
QE | 0.868 | 0.850 | 0.865 | 0.862 | 0.865 | 0.863 | 0.861 | 0.867 | 0.864 | 0.865 | 0.864 | 0.868 | 0.865 | 0.863 | 0.865 | 0.919
Table 25. The outcomes for the lab image and comparisons with existing techniques in the literature ([24]).
Evaluation Metric | Chen et al. | Hua et al. | Zhang et al. | Li et al. | Samet Aymaz et al. without SR | Samet Aymaz et al. SR | Proposed Method
AG | X | X | X | X | 4.8 | 7.81 | 7.589
QAB/F | 0.73 | 0.74 | 0.73 | 0.73 | 0.84 | 0.89 | 0.991
Table 26. The outcomes for the lab image and comparisons with existing techniques in the literature ([25]).
Evaluation Metric | DWT | LP | Curvelet | IMF | CNN | DS | DCTCV | MWGF | GFF | CNN-DWT | Proposed Method
QE | 0.8787 | 0.8855 | 0.8871 | 0.8849 | 0.8885 | 0.8883 | 0.8806 | 0.8884 | 0.8892 | 0.8875 | 0.919
Qw | 0.8748 | 0.8807 | 0.8849 | 0.8825 | 0.8844 | 0.8831 | 0.8799 | 0.8832 | 0.8829 | 0.885 | 0.9147
Table 27. The outcomes for the lab image and comparisons with existing techniques in the literature ([27]).
Methods | QAB/F
Zhang et al. | 0.73
Abdipour et al. | 0.75
Jiang et al. | 0.73
Hua et al. | 0.74
Amin-Naji et al. | 0.75
He et al. | 0.73
Chaudhary et al. | 0.68
Samet Aymaz et al.—with SR-2 | 0.88
Yang et al. | X
Aymaz et al. | 0.89
Du et al. | 0.75
Yin et al. | X
Samet Aymaz et al.—with SR-4 | 0.9
He et al. | X
Nejati et al. | 0.74
Chen et al. | 0.73
Samet Aymaz et al.—with SR-3 | 0.89
Li et al. | 0.73
Proposed Method | 0.991
Table 28. The outcomes for the lab image and comparisons with existing techniques in the literature ([28]).
Evaluation Metric | NSCT | GF | NSCT-SR | MWG | DSIFT | DCNN | PCNN | SR | WDCNN | Proposed Method
AG | 9.35 | 9.56 | 9.54 | 9.50 | 9.45 | 9.89 | 9.70 | 9.46 | 10.35 | 7.59
SSIM | 0.48 | 0.51 | 0.50 | 0.51 | 0.50 | 0.50 | 0.51 | 0.48 | 0.51 | 0.96
Table 29. The outcomes for the lab image and comparisons with existing techniques in the literature ([30]).
Evaluation Metric | WSSM | CBF | DCHWT | SA-FC | DCTVAR | GFF | IFM | MWGF | Proposed Method
SF | 11.94 | 12.24 | 11.2 | 12.97 | 12.96 | 12.86 | 12.94 | 13.01 | 13.4651
QAB/F | 0.707 | 0.712 | 0.663 | 0.748 | 0.746 | 0.738 | 0.738 | 0.737 | 0.991
Table 30. The outcomes for the leaf image and comparisons with existing techniques in the literature ([21]).
Evaluation Metric | NSC-PCNN | GFDF | LG-MW | DCT-CV | IM | GF | QT | BRW-TS | DSIFT | MST-SR | SSDI | PA-DCPCNN | CSR | BF | CNN | Proposed Method
QAB/F | 0.871 | 0.880 | 0.873 | 0.839 | 0.880 | 0.881 | 0.877 | 0.880 | 0.877 | 0.883 | 0.880 | 0.887 | 0.877 | 0.875 | 0.880 | 0.958
QE | 0.797 | 0.822 | 0.814 | 0.783 | 0.818 | 0.822 | 0.820 | 0.822 | 0.821 | 0.824 | 0.823 | 0.812 | 0.820 | 0.810 | 0.819 | 0.831
AG | 18.318 | 18.713 | 18.923 | 18.103 | 18.878 | 18.635 | 19.038 | 18.756 | 19.054 | 19.031 | 18.929 | 19.176 | 18.179 | 18.775 | 18.449 | 26.982
SSIM | 0.742 | 0.742 | 0.736 | 0.729 | 0.740 | 0.745 | 0.735 | 0.743 | 0.736 | 0.746 | 0.740 | 0.759 | 0.741 | 0.739 | 0.746 | 0.789
E(F) | 7.313 | 7.339 | 7.344 | 7.268 | 7.343 | 7.347 | 7.343 | 7.340 | 7.345 | 7.375 | 7.343 | 7.406 | 7.334 | 7.341 | 7.334 | 7.463
CC | 0.926 | 0.927 | 0.925 | 0.904 | 0.926 | 0.929 | 0.925 | 0.926 | 0.925 | 0.932 | 0.926 | 0.937 | 0.927 | 0.925 | 0.928 | 0.922
Table 31. The outcomes for the leaf image and comparisons with existing techniques in the literature ([24]).
Evaluation Metric | Yin et al. | Samet Aymaz et al. SR | Baohua et al. | Zhang et al. | Samet Aymaz et al. without SR | Proposed Method
QAB/F | 0.73 | 0.88 | 0.73 | 0.69 | 0.75 | 0.958
AG | 10.88 | 24.18 | X | X | 11.97 | 26.982
Table 32. The outcomes for the leaf image and comparisons with existing techniques in the literature ([30]).
Evaluation Metric | CBF | DCHWT | DCTVAR | GFF | IFM | MWGF | WSSM | SA-FC | Proposed Method
SF | 13.88 | 11.97 | 14.02 | 14.24 | 14.3 | 11.57 | 9.48 | 14.34 | 32.0547
QAB/F | 0.721 | 0.689 | 0.751 | 0.751 | 0.746 | 0.608 | 0.735 | 0.763 | 0.958
Table 33. The outcomes for the Pepsi image and comparisons with existing techniques in the literature ([24]).
Evaluation Metric | Hua et al. | Li et al. | Yin et al. | Zhang et al. | Baohua et al. | Samet Aymaz et al. without SR | Chen et al. | Samet Aymaz et al. SR | Proposed Method
AG | X | X | 4.01 | X | X | 6 | X | 15.06 | 8.5574
QAB/F | 0.76 | 0.76 | 0.78 | 0.78 | 0.75 | 0.81 | 0.75 | 0.93 | 0.9955
Table 34. The outcomes for the Pepsi image and comparisons with existing techniques in the literature ([25]).
Evaluation Metric | DWT | LP | Curvelet | IMF | CNN | DS | DCTCV | MWGF | GFF | CNN-DWT | Proposed Method
QE | 0.9154 | 0.9186 | 0.9216 | 0.9025 | 0.9223 | 0.9208 | 0.9185 | 0.9211 | 0.9232 | 0.9237 | 0.9439
Qw | 0.9118 | 0.915 | 0.9154 | 0.9095 | 0.9144 | 0.9132 | 0.9117 | 0.9132 | 0.9179 | 0.9184 | 0.9381
Table 35. The outcomes for the pepsi image and comparisons with existing techniques in the literature ([27]).
Methods | QAB/F
He et al. | 0.78
Aymaz et al. | 0.93
Yang et al. | X
Jiang et al. | 0.78
Hua et al. | 0.76
Li et al. | 0.76
Zhang et al. | 0.78
Chaudhary et al. | 0.76
Nejati et al. | 0.78
Amin-Naji et al. | X
Du et al. | X
Yin et al. | 0.78
Chen et al. | 0.75
He et al. | 0.78
Abdipour et al. | 0.79
Samet Aymaz et al.—with SR-2 | 0.91
Samet Aymaz et al.—with SR-3 | 0.92
Samet Aymaz et al.—with SR-4 | 0.93
Proposed Method | 0.99
Table 36. The outcomes for the Pepsi image and comparisons with existing techniques in the literature ([30]).
Evaluation Metric | CBF | DCHWT | DCTVAR | GFF | IFM | MWGF | WSSM | SA-FC | Proposed Method
QAB/F | 0.769 | 0.753 | 0.785 | 0.778 | 0.771 | 0.777 | 0.729 | 0.786 | 0.9955
SF | 13.5 | 12.81 | 13.81 | 13.72 | 13.96 | 13.73 | 13.96 | 13.99 | 14.2884
Table 37. The outcomes for the flowerpot image and comparisons with existing techniques in the literature ([25]).
Evaluation Metric | DWT | LP | Curvelet | IMF | CNN | DS | DCTCV | MWGF | GFF | CNN-DWT | Proposed Method
QE | 0.8603 | 0.8687 | 0.8699 | 0.8679 | 0.8722 | 0.87 | 0.8692 | 0.8699 | 0.8723 | 0.8748 | 0.9131
Qw | 0.907 | 0.9147 | 0.925 | 0.9217 | 0.925 | 0.9233 | 0.9233 | 0.9215 | 0.925 | 0.9281 | 0.9223
Table 38. Comparative analysis of quantitative measures (average values) ([21]).
Evaluation Metric | PA-DCPCNN | NSC-PCNN | CNN | LG-MW | GFDF | DCT-CV | IM | GF | BF | QT | DSIFT | MST-SR | SSDI | CSR | BRW-TS | Proposed Method
SSIM | 0.872 | 0.860 | 0.859 | 0.856 | 0.859 | 0.854 | 0.859 | 0.864 | 0.857 | 0.856 | 0.856 | 0.868 | 0.859 | 0.861 | 0.859 | 0.958
QAB/F | 0.898 | 0.881 | 0.886 | 0.882 | 0.885 | 0.877 | 0.883 | 0.887 | 0.883 | 0.883 | 0.883 | 0.891 | 0.885 | 0.883 | 0.885 | 0.991
CC | 0.974 | 0.972 | 0.969 | 0.969 | 0.969 | 0.966 | 0.969 | 0.97 | 0.969 | 0.969 | 0.969 | 0.973 | 0.969 | 0.969 | 0.969 | 0.974
AG | 15.120 | 14.419 | 14.629 | 14.848 | 14.699 | 14.723 | 14.686 | 14.524 | 14.744 | 14.843 | 14.846 | 14.835 | 14.749 | 13.849 | 14.703 | 15.023
QE | 0.834 | 0.807 | 0.831 | 0.827 | 0.831 | 0.822 | 0.827 | 0.832 | 0.827 | 0.828 | 0.828 | 0.835 | 0.830 | 0.829 | 0.829 | 0.918
E(F) | 7.277 | 7.30 | 7.226 | 7.229 | 7.227 | 7.212 | 7.227 | 7.234 | 7.224 | 7.225 | 7.225 | 7.257 | 7.228 | 7.232 | 7.226 | 7.292
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
