Improved Procedure for Multi-Focus Images Using Image Fusion with qshiftN DTCWT and MPCA in Laplacian Pyramid Domain

Multi-focus image fusion (MIF) uses fusion rules to combine two or more images of the same scene, captured with different focus settings, into a single fully focused image. Such an all-in-focus image is more informative and more useful for visual perception. A high-quality fused image must preserve the shift-invariance and directional-selectivity characteristics of the source imagery. Traditional wavelet-based fusion methods, however, create ringing distortions in the fused image because they lack directional selectivity and shift-invariance. In this paper, a classical MIF system based on the quarter-shift dual-tree complex wavelet transform (qshiftN DTCWT) and modified principal component analysis (MPCA) in the Laplacian pyramid (LP) domain is proposed to extract the focused image from multiple source images. In the proposed fusion approach, the LP first decomposes the multi-focus source images into low-frequency (LF) and high-frequency (HF) components. Then, qshiftN DTCWT is used to fuse the low- and high-frequency components to produce a fused image. Finally, to improve the effectiveness of the qshiftN DTCWT and LP-based method, the MPCA algorithm is applied to generate an all-in-focus image. Owing to its directionality and shift-invariance, this transform provides high-quality information in the fused image. Experimental results demonstrate that the proposed method outperforms many state-of-the-art techniques in both visual and quantitative evaluations.


Introduction
The statistical analysis of images is restricted by the depth of focus while sensing different images. The main problem is that the focus is not equally concentrated on all objects present in an image [1]. A feasible solution to this problem is composite imaging. Composite imaging is one of the techniques used in multi-focus image fusion (MIF), which combines multiple images of the same scene captured at different focus levels [2]. Both spatial-domain and transform-domain methods are applicable to MIF [3]. Transform-based methods are also called multiresolution algorithms. The main principle of transform-domain algorithms is to maintain perceptual vision with accurate information in a multiresolution representation. Various studies indicate that several multiresolution methods have been developed, such as the discrete wavelet transform (DWT), stationary wavelet transform (SWT), and double-density discrete wavelet transform area of regions using the encircled method. This measure's ability to discriminate blurred regions in the fusion method is demonstrated. Bingzhe Wei et al. [31] proposed a novel fusion method that applies a CNN to assist sparse representation (SR) for the purpose of obtaining a fused image with more precise and abundant information; the computational complexity of this fusion method is impressively reduced. Chenglang Zhang [32] proposed a novel MIF approach based on the multiscale transform (MST) and convolutional sparse representation (CSR) to address the inherent defects of both MST- and SR-based fusion methods. The proposed approach is compared against the approaches discussed in the literature [21][22][23][24][25][26][27][28]30].
The essential contributions of this work are as follows: (i) a hybrid method (i.e., qshiftN DTCWT and LP) with MPCA is introduced for the fusion of multi-focus images; (ii) the method combines multiple source images into a fused image of better quality, with good directionality, a high degree of shift-invariance, better visual quality, and more retained information than the source images; (iii) using the MPCA method, the amount of redundant data is decreased and the most significant components of the source images are extracted; (iv) the depth-of-field (DOF) of the advanced imaging system is extended; (v) the performance is analyzed both quantitatively and qualitatively; (vi) the proposed approach's performance is improved compared with state-of-the-art techniques developed in recent years.
The rest of the paper is organized as follows: Section 2 explains the proposed fusion methodology as well as the fusion methods implemented. Section 3 presents the experimental results. Section 4 presents the conclusions.

The Proposed Fusion Approach
This paper proposes a hybrid approach with MPCA to overcome the blurring and spatial distortions of other algorithms. The algorithm is a novel approach in MIF because this hybrid technique with MPCA gives good performance compared to other algorithms from recent years. In the proposed method, the fusion procedure is performed individually on row and column images, which are then averaged to eliminate any noise or distortion generated by the fusion process. The noise elimination process is explained in Section 2.1. Then, the source images are decomposed into LF and HF components using the LP, which provides information on the sharp contrast changes to which the human visual system is principally sensitive. The LP method is explained in Section 2.2. Next, qshiftN DTCWT is used to fuse the low- and high-frequency components to produce a fused image with good directionality and a high degree of shift-invariance. The qshiftN DTCWT method is explained in detail in Section 2.3. After fusing the low- and high-frequency components, the inverse DTCWT (IDTCWT) is applied to reconstruct them. In the proposed method, MPCA is used to improve the efficiency of the hybrid approach (i.e., qshiftN DTCWT and LP) by reducing the redundant data and extracting the essential components of the fused image (i.e., the all-in-focus image). MPCA also emphasizes the elements that have the most significant impact and are robust to noise. Consequently, MPCA reduces the blurring and spatial distortions; thus, the fused image has more detailed clarity, clearer edges, and better visual and machine perception. The MPCA method is explained in Section 2.4. Finally, the fused image is formed and available for comparison. Various objective quality metrics are calculated to assess the proposed method's quality; these measures are described in Sections 2.6 and 2.7. Figure 1 depicts the flow diagram of the proposed technique, which is detailed in Section 2.5.

Noise Elimination Process
The image g(x, y) of size M × N is separated into rows, and the rows are concatenated to generate a 1-D vector g(x) of size MN [18], as shown in Algorithm 1. Likewise, the image g(x, y) is separated into columns, and these columns are concatenated to generate a 1-D vector g(x) of size MN. The operation is I = C2DT1D(I, M, N).
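As a sketch of this step, the row-wise and column-wise 2-D-to-1-D conversions (and the inverse operation, analogous to the C2DT1D pair above) can be written as follows; the function names are illustrative, not the authors' implementation:

```python
import numpy as np

def c2d_to_1d_rows(img):
    """Concatenate the rows of an M x N image into a 1-D vector of length MN."""
    return img.reshape(-1)                 # row-major flattening

def c2d_to_1d_cols(img):
    """Concatenate the columns of an M x N image into a 1-D vector of length MN."""
    return img.T.reshape(-1)               # column-major flattening

def c1d_to_2d(vec, m, n, by_cols=False):
    """Inverse operation: rebuild the M x N image from the 1-D vector."""
    return vec.reshape(n, m).T if by_cols else vec.reshape(m, n)
```

In the proposed scheme the fusion is run on both 1-D orderings and the two reconstructed results are averaged, which is the noise-elimination idea described above.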

Laplacian Pyramid (LP)
The Laplacian pyramid [18][19][20] reveals the strong contrast changes to which the human visual system is most sensitive. It can localize information in both the spatial and frequency domains. The LP is used to extract the most relevant elements of the fused image. The LP also emphasizes the elements that have the greatest effect and are resistant to noise; as a result, it minimizes blurring and spatial distortions. The technique for constructing and reconstructing a Laplacian pyramid is described below. On the vector data, image reduction is performed by taking the DCT and applying the inverse DCT (IDCT) to the first half of the coefficients. The reduction function IR is used to conduct the level-to-level image reduction.
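A minimal sketch of this DCT-based reduce/expand step and of one Laplacian level is given below, assuming scipy's DCT routines; the exact coefficient handling in the authors' IR function may differ:

```python
import numpy as np
from scipy.fft import dctn, idctn

def reduce_dct(img):
    """Halve the resolution by keeping the first half of the DCT coefficients."""
    m, n = img.shape
    coeffs = dctn(img, norm='ortho')
    return idctn(coeffs[:m // 2, :n // 2], norm='ortho')

def expand_dct(img, shape):
    """Upsample by zero-padding the DCT coefficients back to `shape`."""
    coeffs = np.zeros(shape)
    small = dctn(img, norm='ortho')
    coeffs[:small.shape[0], :small.shape[1]] = small
    return idctn(coeffs, norm='ortho')

def laplacian_level(img):
    """One LP level: the detail band is the residual after reduce/expand."""
    low = reduce_dct(img)
    detail = img - expand_dct(low, img.shape)
    return low, detail
```

By construction, `expand_dct(low, img.shape) + detail` reconstructs the input exactly, which is what makes the pyramid invertible.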

qshiftN Dual-Tree Complex Wavelet Transform
The critically sampled DWT exhibits shift-variance issues in 1-D and poor directional selectivity in N-D. The DTCWT approach is shift-invariant, economical, and directionally selective. The DTCWT [33,34] is an improved wavelet transform that generates real and imaginary transform coefficients. The DTCWT uses two 2-channel FIR filter banks: the output of one filter bank (Tree A) forms the real component, whereas the output of the other (Tree B) forms the imaginary component.
For a d-dimensional signal, the DTCWT uses two critically sampled filter banks with a redundancy of 2^d. The three stages of a 1-D DTCWT filter bank are shown in Figure 2. While DWT-fused images have broken borders, DTCWT-fused images are smooth and unbroken. Compared to the DWT, which only delivers constrained directions (0°, 45°, 90°), the DTCWT produces six subbands oriented at ±15°, ±45°, and ±75°, both real and imaginary, which improves transform accuracy and preserves more detailed features. The odd/even filter approach originally proposed for the DTCWT, however, has a number of drawbacks: 1. There is no clear symmetry in the sub-sampling structure; 2. The frequency responses of the two trees differ slightly; 3. For the two trees to be linear-phase, the filter sets must be biorthogonal rather than orthogonal, which means that energy preservation does not hold for signals and fields.
Each of these drawbacks is reduced or solved by the qshiftN DTCWT, as illustrated in Figure 3, with all filters above level 1 much shorter. The required sampling offset above level 1 is achieved by using filters with delays of 1/4 and 3/4 of a sample, rather than the 0 and 1/2 of the original DTCWT. This is accomplished with an asymmetric equal-length filter.
Wavelet orthonormality can be retained perfectly despite the asymmetry. Tree-A filters are used as the reverse filters, while Tree-B filters are used for both the reverse and reconstruction filters, because they all belong to the same orthonormal set. Both trees have the same frequency response. The individual impulse responses are symmetric about their midpoints, but the combined complex impulse responses are asymmetric; because of this, asymmetric extension continues to work at the frame's edges.
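The shift-invariance benefit of the dual tree can be illustrated numerically. Tree B's filters approximate the Hilbert transform of Tree A's, so the magnitude of the complex coefficient is nearly unchanged by a one-sample input shift, whereas a single decimated real tree is highly shift-sensitive. The sketch below uses a generic modulated-Gaussian filter and its Hilbert pair, not the actual qshiftN filter designs:

```python
import numpy as np
from scipy.signal import hilbert

n = np.arange(-16, 17)
h = np.cos(0.5 * np.pi * n) * np.exp(-n**2 / 32)  # Tree A: real bandpass filter
g = np.imag(hilbert(h))                           # Tree B: approximate Hilbert pair

def subband_energies(sig):
    """Decimated subband energy: real tree alone vs. the complex pair."""
    a = np.convolve(sig, h, 'same')[::2]          # Tree A coefficients, decimated
    b = np.convolve(sig, g, 'same')[::2]          # Tree B coefficients, decimated
    return np.sum(a**2), np.sum(a**2 + b**2)

x = np.zeros(128)
x[40] = 1.0                                       # impulse input
real0, cplx0 = subband_energies(x)
real1, cplx1 = subband_energies(np.roll(x, 1))    # same impulse, shifted by 1

# The real-tree energy collapses under the shift, while the complex-magnitude
# energy stays almost constant -- the dual tree's shift-invariance.
```

The same effect is what removes the ringing artifacts that plain DWT fusion produces around shifted edges.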

Modified Principal Component Analysis (MPCA)
MPCA is used to transform correlated variables into uncorrelated variables. This method is useful for analyzing data and determining the optimal features for data representation. The first principal component represents the data with the greatest variance; each succeeding component accounts for as much of the remaining variance as possible. The first principal component thus represents the data well and indicates the direction of maximum variation. In this paper, the MPCA approach is used to determine the best-represented value of each subband of the source images after applying the LP-based qshiftN DTCWT method; these values are then multiplied by the matched source-image subbands. MPCA's goal is to transfer data from the original space to the eigenspace. By keeping the components with the largest eigenvalues, the variance of the data is enhanced and the covariance is lowered.
Specifically, this method removes redundant data from the source images and extracts the most significant components. Furthermore, MPCA prioritizes the components with the greatest impact and resistance to noise. As a result, MPCA decreases blurring and spatial distortions. The steps of the MPCA algorithm are as follows: 1. Create a vector from the data; 2. Determine the covariance matrix of the given vector (i.e., cov([im1(:)])).
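The covariance/eigenvector steps above can be sketched as a standard two-image PCA fusion; this is a simplified stand-in for the paper's MPCA, whose exact modifications are not reproduced here:

```python
import numpy as np

def pca_fusion_weights(im1, im2):
    """Weights from the principal eigenvector of the 2 x 2 covariance matrix."""
    data = np.stack([im1.ravel(), im2.ravel()])   # each row: one flattened image
    cov = np.cov(data)                            # 2 x 2 covariance matrix
    _, vecs = np.linalg.eigh(cov)                 # eigenvalues in ascending order
    v = np.abs(vecs[:, -1])                       # principal eigenvector
    return v / v.sum()                            # normalize so weights sum to 1

def pca_fuse(im1, im2):
    """Weighted combination of the two source images."""
    w = pca_fusion_weights(im1, im2)
    return w[0] * im1 + w[1] * im2
```

The source image with the larger share of the total variance receives the larger weight, which is how PCA-style fusion emphasizes the more informative input.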

Flow Diagram of Proposed Approach
The flow diagram of the complete fusion algorithm is depicted in Figure 1, which comprises two processes: LP-based qshiftN-DTCWT image fusion and MPCA. The LP is used for decomposition, the DTCWT for image fusion, and MPCA for feature extraction, as shown in Algorithm 3.

Evaluation of the Proposed Method's Effectiveness
In this section, the performance of the proposed technique is compared to that of state-of-the-art techniques in two ways: subjectively and objectively. Subjective assessment is a qualitative evaluation of how good the fused image looks. Objective assessment, also called quantitative evaluation, is carried out by comparing the values of several image-fusion efficiency metrics. This quantitative method, known as objective analysis, is based on mathematical modeling and examines how similar the fused image is to the source images. Quantitative analysis can be performed either with or without a reference image [11,21,[35][36][37][38][39][40][41][42][43][44][45][46][47][48][49][50][51].
This paper compares fourteen metrics: SF, E(F), SD, AG, RMSE, CC, Q AB/F , L AB/F , N AB/F , SSIM, Q E , Q W , FMI, and PSNR; these measures are explained in Section 2.7.
AG (Average Gradient): It determines the sharpness and clarity of an image. A high AG value indicates that the fused image has more clarity and sharpness.

Measuring Performance with Objective Quality Metrics
CC (Correlation Coefficient): It assesses the similarity of the all-in-focus image to the input images. For a better fusion process, a higher CC value is desired.
SSIM (Structural Similarity Index Measure): It correlates the local patterns of pixel brightness between two images. SSIM values range from −1 to 1.
Q E (Edge-dependent Fusion Quality): This metric considers features of the human visual system, such as sensitivity to edge detail. A greater Q E value suggests a more efficient fusion process.
SD (Standard Deviation): The higher the SD value, the noisier the final image; noise is more likely to impact images with lower contrast.
SF (Spatial Frequency): It measures the overall activity level in the image. A high activity level in the all-in-focus image gives a high SF value.
RMSE (Root Mean Square Error): It measures the per-pixel variation introduced by image fusion methods. The RMSE value rises as the similarity decreases.
PSNR (Peak Signal-to-Noise Ratio): It compares the produced fused image with the reference image to determine image quality. The higher the PSNR value, the better the fusion results.
In addition, objective fusion-performance assessment via gradient information [11] is examined. Assessing the total fusion performance (TFP), fusion loss (FL), and fusion artifacts (FA) provides a complete analysis of fusion performance. The procedure for calculating these metrics is detailed in [11], and their symbolic representation is as follows: Q AB/F denotes the total amount of information transferred from the source images to the all-in-focus image (higher values indicate better performance); L AB/F denotes the total loss of information (lower values indicate better performance); and N AB/F denotes the noise or artifacts added to the fused image by the fusion process (lower values indicate better performance).
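Several of the reference-free metrics (SF, AG, E(F)) and reference-based metrics (RMSE, PSNR, CC) described above can be computed as below. These are common textbook definitions; the paper's exact formulas may use slightly different normalizations:

```python
import numpy as np

def spatial_frequency(img):
    """SF: combined row- and column-frequency of intensity changes."""
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    return np.sqrt(rf**2 + cf**2)

def average_gradient(img):
    """AG: mean local gradient magnitude (sharpness/clarity)."""
    gx = np.diff(img, axis=1)[:-1, :]   # trim so the two gradients align
    gy = np.diff(img, axis=0)[:, :-1]
    return np.mean(np.sqrt((gx**2 + gy**2) / 2))

def entropy(img):
    """E(F): Shannon entropy of the grey-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def rmse(ref, fused):
    """RMSE between the reference and the fused image."""
    return np.sqrt(np.mean((np.asarray(ref, float) - np.asarray(fused, float)) ** 2))

def psnr(ref, fused, peak=255.0):
    """PSNR in dB; higher means the fused image is closer to the reference."""
    e = rmse(ref, fused)
    return float('inf') if e == 0 else 20 * np.log10(peak / e)

def correlation_coefficient(a, b):
    """CC between two images (1 means perfectly correlated)."""
    return np.corrcoef(np.ravel(a), np.ravel(b))[0, 1]
```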

Experimental Results
This paper proposes qshiftN DTCWT and MPCA in the Laplacian pyramid domain. The quality measures SD, Q AB/F , E(F), AG, SF, CC, SSIM, Q E , Q W , FMI, L AB/F , N AB/F , RMSE, and PSNR were employed to assess the algorithm's quality. These metrics are used to compare the proposed technique with previously published methods; they measure the resemblance and robustness of the fused images against distortions. Source images commonly used in MIF are employed for comparison. Experiments were also carried out on many images from various areas and datasets [52], on which the proposed approach yields good results; however, these images are not included in the paper because the techniques contrasted with the proposed approach do not report outcomes for them. Desk, balloon, book, clock, flower, lab, leaf, leopard, flowerpot, Pepsi, wine, and craft images are used for comparison with the methodologies in the literature [21][22][23][24][25][26][27][28]30]. In addition, the outcomes of the proposed technique for certain tried source images are presented. The images are of various sizes and qualities. The proposed method is applicable to any multi-focus images, not only those presented in this work.

The Outcomes of Some of the Images That Were Tried
Several grayscale images are used to evaluate the proposed technique. To analyze these images, SF, Q AB/F , Q E , AG, E(F), SSIM, SD, and CC were used. Figures 4-8 show the visual outcomes for the balloon, leopard, calendar, wine, and craft images, respectively. Table 1 displays the results of the proposed method for certain tried images, including the RMSE and PSNR outcomes for the tried multi-focus images. Table 2 compares the proposed technique to methods in the literature using these criteria; the letter X indicates that a measurement for the stated image is not reported in the cited article. For this comparison, the flowerpot, clock, Pepsi, cameraman, desk, book, lab, and flower images are used, and the best outcomes are shown in bold. The robustness of the proposed technique to deformation is measured using these criteria, and the outcomes suggest that it performs well. The evaluation of the first multi-focus image, the clock, is illustrated in Figure 9.
Figure 9a represents the original image. Figure 9b,c illustrate the left-focused and right-focused images, respectively. A left-focused image is one in which the image's left side is focused while the right side is not; a right-focused image is one in which the right side is focused while the left side is not. Figure 9d shows the all-in-focus image created when the approach is applied. E(F), AG, CC, Q AB/F , SSIM, Q E , SF, and QW are calculated to assess the performance of the proposed methodology. Finally, the performance of the proposed approach is compared to that of other methods previously published in the literature; the comparison results are shown in Tables 3-8. The letter X indicates that a metric is not calculated in the cited article for this image. Compared with the literature [21,22,24,27,28,30], the proposed method is more successful, and the best outcomes are indicated in bold. The evaluation of the second multi-focus image, the desk, is illustrated in Figure 10.
Figure 10a represents the original image. Figure 10b,c illustrate the left-focused and right-focused images, respectively. A left-focused image is one in which the image's left side is focused while the right side is not; a right-focused image is one in which the right side is focused while the left side is not. Figure 10d shows the all-in-focus image created after the method is applied. The following parameters are computed to evaluate the proposed methodology's performance: E(F), AG, CC, Q AB/F , SSIM, Q E , FMI, SD, QW, L AB/F , N AB/F , and SF. Finally, the performance of the proposed approach is compared to that of other methods previously published in the literature; the comparison results are shown in Tables 9-15. The letter X indicates that a metric is not calculated in the cited article for this image. Compared with the literature [21,[23][24][25][26][27]30], the proposed method is more successful, and the best outcomes are indicated in bold. The evaluation of the third multi-focus image, the book, is illustrated in Figure 11.
Figure 11a represents the original image. Figure 11b,c illustrate the left-focused and right-focused images, respectively. A left-focused image is one in which the image's left side is focused while the right side is not; a right-focused image is one in which the right side is focused while the left side is not. Figure 11d shows the all-in-focus image created after the method is applied. The following parameters are computed to evaluate the proposed methodology's performance: E(F), AG, CC, Q AB/F , SSIM, Q E , QW, L AB/F , N AB/F , and SF. Finally, the performance of the proposed approach is compared to that of other methods previously published in the literature; the comparison results are shown in Tables 16-21. The letter X indicates that a metric is not calculated in the cited article for this image. Compared with the literature [21,[24][25][26][27]30], the proposed method is more successful, and the best outcomes are indicated in bold.

Comparison of Multi-Focus Image (i.e., Flower)
The evaluation of the fourth multi-focus image, the flower, is illustrated in Figure 12. Figure 12a represents the original image. Figure 12b,c illustrate the left-focused and right-focused images, respectively. A left-focused image is one in which the image's left side is focused while the right side is not; a right-focused image is one in which the right side is focused while the left side is not. Figure 12d shows the all-in-focus image created when the approach is applied. E(F), AG, CC, Q AB/F , SSIM, and Q E are calculated to assess the proposed methodology's performance. Finally, the performance of the proposed approach is compared to that of other methods previously published in the literature; the comparison results are shown in Tables 22 and 23. The letter X indicates that a metric is not calculated in the cited article for this image. Compared with the literature [21,24], the proposed method is more successful, and the best outcomes are indicated in bold.

Comparison of Multi-Focus Image (i.e., Lab)
The evaluation of the fifth multi-focus image, the lab, is illustrated in Figure 13. Figure 13a represents the original image. Figure 13b,c illustrate the left-focused and right-focused images, respectively. A left-focused image is one in which the image's left side is focused while the right side is not; a right-focused image is one in which the right side is focused while the left side is not. Figure 13d shows the all-in-focus image created after the method is applied. The following parameters are computed to evaluate the proposed methodology's performance: E(F), AG, CC, Q AB/F , SSIM, Q E , QW, and SF. Finally, the performance of the proposed approach is compared to that of other methods previously published in the literature; the comparison results are shown in Tables 24-29. The letter X indicates that a metric is not calculated in the cited article for this image. Compared with the literature [21,24,25,27,28,30], the proposed method is more successful, and the best outcomes are indicated in bold.

Comparison of Multi-Focus Image (i.e., Leaf)
The evaluation of the sixth multi-focus image, the leaf, is illustrated in Figure 14.
Figure 14a represents the original image. Figure 14b,c illustrate the left-focused and right-focused images, respectively. A left-focused image is one in which the image's left side is focused while the right side is not; a right-focused image is one in which the right side is focused while the left side is not. Figure 14d shows the all-in-focus image created after the method is applied. The following parameters are computed to evaluate the proposed methodology's performance: E(F), AG, CC, Q AB/F , SSIM, Q E , and SF. Finally, the performance of the proposed approach is compared to that of other methods previously published in the literature; the comparison results are shown in Tables 30-32. The letter X indicates that a metric is not calculated in the cited article for this image. Compared with the literature [21,24,30], the proposed method is more successful, and the best outcomes are indicated in bold.

Comparison of Multi-Focus Image (i.e., Pepsi)
The evaluation of the seventh multi-focus image, the Pepsi, is illustrated in Figure 15. Figure 15a represents the original image. Figure 15b,c illustrate the left-focused and right-focused images, respectively. A left-focused image is one in which the image's left side is focused while the right side is not; a right-focused image is one in which the right side is focused while the left side is not. Figure 15d shows the all-in-focus image created when the approach is applied. AG, Q AB/F , Q E , SF, and QW are calculated to assess the proposed methodology's performance. Finally, the performance of the proposed approach is compared to that of other methods previously published in the literature; the comparison results are shown in Tables 33-36. The letter X indicates that a metric is not calculated in the cited article for this image. Compared with the literature [24,25,27,30], the proposed method is more successful, and the best outcomes are indicated in bold.

Comparison of Multi-Focus Image (i.e., Flowerpot)
The evaluation of the eighth multi-focus image, the flowerpot, is illustrated in Figure 16. Figure 16a represents the original image. Figure 16b,c illustrate the left-focused and right-focused images, respectively. A left-focused image is one in which the image's left side is focused while the right side is not; a right-focused image is one in which the right side is focused while the left side is not. Figure 16d shows the all-in-focus image created after the method is applied. The parameters Q E and QW are computed to evaluate the proposed methodology's performance. Finally, the performance of the proposed approach is compared to that of other methods previously published in the literature; the comparison results are shown in Table 37. The letter X indicates that a metric is not calculated in the cited article for this image. Compared with the literature [25], the proposed method is more successful, and the best outcomes are indicated in bold.
Figure 14a represents the original image.Figure 14b,c illustrates left-focused and rightfocused images, respectively.The term "left-focused image" refers to the fact that the image's left side is focused, while the right side is not.The right side of the image is focused while the left is not.Figure 14d shows the process of creating the all-in-focus image after the method has been successfully implemented.The following parameters are computed to evaluate the proposed methodology performance: E(F), AG, CC, Q AB/F , SSIM, QE, and SF.Finally, the performance of the proposed approach is compared to that of other methods previously published in the literature.The results of the comparison are shown in Tables 30-32 of the report.The letter X indicates that metrics are not calculated for the article depicted in the image.According to the literature [21,24,30], the proposed method is more successful than those approaches, and the best outcomes of methods are indicated in bold.The evaluation of the seventh multi-focus image is the pepsi, which is illustrated in Figure 15. 
Figure 15a represents the original image. Figure 15b,c illustrate the left-focused and right-focused images, respectively. Figure 15d shows the all-in-focus image produced by the proposed method. AG, QAB/F, QE, SF, and QW are calculated to assess the performance of the proposed methodology. Finally, the performance of the proposed approach is compared to that of other methods previously published in the literature. The results of the comparison are shown in Tables 33-36. The letter X indicates that a metric is not reported in the corresponding article. Compared with the approaches in the literature [24,25,27,30], the proposed method is more successful, and the best outcomes are indicated in bold. The evaluation of the eighth multi-focus image is the flowerpot, which is illustrated in Figure 16.
Figure 16a represents the original image. Figure 16b,c illustrate the left-focused and right-focused images, respectively. Figure 16d shows the all-in-focus image produced by the proposed method. QE and QW are computed to evaluate the performance of the proposed methodology. Finally, the performance of the proposed approach is compared to that of other methods previously published in the literature. The results of the comparison are shown in Table 37. The letter X indicates that a metric is not reported in the corresponding article. Compared with the approach in the literature [25], the proposed method is more successful, and the best outcomes are indicated in bold.

Analysis of a Few More Image Pairs
A single strategy will never produce ideal subjective and objective results for all image pairs. For this reason, eight multi-focus image pairs (shown in Figure 17) are used in the next experiment to demonstrate the average performance of the various techniques. For the image pairs in Figure 17, the proposed method produced the fused images depicted in Figure 18. As Figure 18 shows, the fusion results of the proposed approach are satisfactory for all of the image pairs tested. The average objective assessment of the several methodologies for the image pairs in Figure 17 is presented in Table 38. Compared with the approaches described in the literature [21], the proposed method is more successful, and the best outcomes of the various methods are highlighted in bold.

Conclusions
Traditional wavelet-based fusion algorithms create ringing distortions in the fused image due to a lack of directional selectivity and shift-invariance. The proposed methodology exploits the benefits of a hybrid approach for the image fusion process: LP for decomposition, qshiftN DTCWT for image fusion, and MPCA for feature extraction. The proposed fusion produces an image of better quality because it extracts relevant information from the source images with good directionality and a high degree of shift-invariance, which in turn yields better visual quality. Several pairs of multi-focus images are used to assess the performance of the proposed method. The experiments conducted on standard test pairs of multi-focus images show that the proposed method outperforms other methods in most cases, both in terms of quantitative parameters and in terms of visual quality. Therefore, the proposed work is validated on many data sets by evaluating quantitative measures such as E(F), AG, SD,
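The overall shape of the hybrid pipeline (pyramid decomposition, band-wise fusion, reconstruction) can be illustrated with a minimal sketch. The Python code below is not the authors' implementation: it uses a simplified Laplacian pyramid with a max-absolute selection rule for the high-frequency bands and averaging for the low-frequency base, and it omits the qshiftN DTCWT and MPCA stages entirely; all function names are illustrative.

```python
import numpy as np

def _down(img):
    """2x2 block average: a crude low-pass filter plus downsampling."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def _up(img, shape):
    """Nearest-neighbour upsampling back to the finer level's shape."""
    return img.repeat(2, axis=0).repeat(2, axis=1)[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels=3):
    """Decompose into high-frequency residual bands plus a low-frequency base."""
    bands, cur = [], img.astype(float)
    for _ in range(levels):
        low = _down(cur)
        bands.append(cur - _up(low, cur.shape))  # HF residual at this level
        cur = low
    bands.append(cur)                            # LF base
    return bands

def lp_fuse(img_a, img_b, levels=3):
    """Fuse two registered sources whose sides are divisible by 2**levels."""
    pa = laplacian_pyramid(img_a, levels)
    pb = laplacian_pyramid(img_b, levels)
    # Max-absolute selection on the HF bands keeps the sharper (in-focus)
    # detail; the LF bases are simply averaged.
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))
    # Reconstruct: expand the base and add back each residual band.
    out = fused[-1]
    for band in reversed(fused[:-1]):
        out = _up(out, band.shape) + band
    return out
```

The max-absolute rule is a common default for multi-focus fusion because in-focus regions produce larger detail coefficients than defocused ones; fusing two identical inputs reconstructs the input exactly, which is a useful sanity check for any pyramid-based scheme.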

Figure 1. The flow diagram of the proposed qshiftN DTCWT-LP and MPCA-based image fusion algorithm.

E(F) (Entropy): It assists in the extraction of meaningful information from an image. A high entropy value indicates that the image carries more information.
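As an illustration of how this metric is computed, the sketch below (an illustrative helper, not the paper's code) estimates E(F) from the normalized gray-level histogram as E(F) = -Σ p_i log2 p_i.

```python
import numpy as np

def entropy(image, levels=256):
    """Shannon entropy E(F) of a grayscale image, in bits per pixel.

    p_i is the fraction of pixels at gray level i; empty bins are skipped
    so that 0 * log2(0) contributes nothing.
    """
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(np.sum(-p * np.log2(p)))

# A constant image carries no information (entropy 0), while a uniformly
# random 8-bit image approaches the 8-bit maximum of log2(256) = 8.
```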

Figure 8. (Craft): (a) original image; (b,c) multi-focus input images; and (d) proposed fusion.

3.2. Comparison of Multi-Focus Image (i.e., Clock)

The evaluation of the first multi-focus image is the clock, illustrated in Figure 9. Figure 9a represents the original image. Figure 9b,c illustrate left-focused and right-focused

Appl. Sci. 2022, 12, x FOR PEER REVIEW

performance of the proposed approach is compared to that of other methods previously published in the literature. The results of the comparison are shown in Tables 3-8. The letter X indicates that a metric is not reported in the corresponding article. Compared with the approaches in the literature [21,22,24,27,28,30], the proposed method is more successful, and the best outcomes are indicated in bold.

Figure 17. A few pairs of multi-focus images.

Figure 18. The fusion outcomes of the proposed technique for the multi-focus image sets in Figure 17.

Table 1. The outcomes of the proposed method for certain test images.

Table 2. Comparisons with approaches in the literature for some images.

Table 3. The outcomes for the clock image and comparisons with existing techniques in the literature ([21]).

Table 4. The outcomes for the clock image and comparisons with existing techniques in the literature ([22]).

Table 5. The outcomes for the clock image and comparisons with existing techniques in the literature ([24]).

Table 6. The outcomes for the clock image and comparisons with existing techniques in the literature ([27]).

Table 7. The outcomes for the clock image and comparisons with existing techniques in the literature ([28]).

Table 8. The outcomes for the clock image and comparisons with existing techniques in the literature ([30]).

Table 9. The outcomes for the desk image and comparisons with existing techniques in the literature ([21]).

Table 10. The outcomes for the desk image and comparisons with existing techniques in the literature ([23]).

Table 11. The outcomes for the desk image and comparisons with existing techniques in the literature ([24]).

Table 12. The outcomes for the desk image and comparisons with existing techniques in the literature ([25]).

Table 13. The outcomes for the desk image and comparisons with existing techniques in the literature ([26]).

Table 14. The outcomes for the desk image and comparisons with existing techniques in the literature ([27]).

Table 15. The outcomes for the desk image and comparisons with existing techniques in the literature ([30]).

Table 16. The outcomes for the book image and comparisons with existing techniques in the literature ([21]).

Table 17. The outcomes for the book image and comparisons with existing techniques in the literature ([24]).

Table 18. The outcomes for the book image and comparisons with existing techniques in the literature ([25]).

Table 19. The outcomes for the book image and comparisons with existing techniques in the literature ([26]).

Table 20. The outcomes for the book image and comparisons with existing techniques in the literature ([27]).

Table 21. The outcomes for the book image and comparisons with existing techniques in the literature ([30]).

Table 22. The outcomes for the flower image and comparisons with existing techniques in the literature ([21]).

Table 23. The outcomes for the flower image and comparisons with existing techniques in the literature ([24]).

Table 24. The outcomes for the lab image and comparisons with existing techniques in the literature ([21]).

Table 25. The outcomes for the lab image and comparisons with existing techniques in the literature ([24]).

Table 26. The outcomes for the lab image and comparisons with existing techniques in the literature ([25]).

Table 27. The outcomes for the lab image and comparisons with existing techniques in the literature ([27]).

Table 28. The outcomes for the lab image and comparisons with existing techniques in the literature ([28]).

Table 29. The outcomes for the lab image and comparisons with existing techniques in the literature ([30]).

Table 30. The outcomes for the leaf image and comparisons with existing techniques in the literature ([21]).

Table 31. The outcomes for the leaf image and comparisons with existing techniques in the literature ([24]).

Table 32. The outcomes for the leaf image and comparisons with existing techniques in the literature ([30]).

Table 33. The outcomes for the Pepsi image and comparisons with existing techniques in the literature ([24]).

Table 34. The outcomes for the Pepsi image and comparisons with existing techniques in the literature ([25]).

Table 35. The outcomes for the Pepsi image and comparisons with existing techniques in the literature ([27]).

Table 36. The outcomes for the Pepsi image and comparisons with existing techniques in the literature ([30]).

Table 37. The outcomes for the flowerpot image and comparisons with existing techniques in the literature ([25]).