Efficient Pre-Processing and Segmentation for Lung Cancer Detection Using Fused CT Images

Over the last two decades, radiologists have been using multi-view images to detect tumors. Computed Tomography (CT) imaging is considered one of the most reliable imaging techniques. Many medical-image-processing techniques have been developed to diagnose lung cancer at early or later stages through CT images; however, improving the accuracy and sensitivity of these algorithms remains a big challenge. In this paper, we propose an algorithm based on image fusion for lung segmentation to optimize lung cancer diagnosis. The image fusion technique was developed through Laplacian Pyramid (LP) decomposition along with Adaptive Sparse Representation (ASR). The suggested fusion technique decomposes medical images into layers of different sizes using the LP; the ASR method is then used to fuse the four decomposed layers. For evaluation purposes, the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset was used. The results showed that the Dice Similarity Coefficient (DSC) index of the proposed method was 0.9929, which is better than recently published results. Furthermore, the values of other evaluation parameters, such as the sensitivity, specificity, and accuracy, were 89%, 98%, and 99%, respectively, which are also competitive with recently published results. The proposed fusion approach showed better overall performance than other fusion techniques, producing a high-quality fused image with no distortion while preserving the information and structure of the source images.


Introduction
Cancer is one of the most dangerous types of disease, spreading day by day across the globe, and lung cancer is one of the main causes of death. The presence of cancer poses a great risk of complications and death. The underlying causes of cancer are not wholly known, which contributes to the frequent occurrence of the disease. According to the World Health Organization fact sheet, cancer is ranked as the leading cause of death across the globe. Cancer caused approximately 10 million deaths in 2020 alone, with more than 70% of these deaths occurring in low- and middle-income countries. Lung cancer is the most commonly occurring cancer, with 2.21 million cases identified, leading to 1.80 million deaths. However, early detection of lung cancer can significantly help decrease the death toll while saving the lives of many people. The advancement of technology has considerably helped cancer diagnosis, with commonly used techniques such as Magnetic Resonance Imaging (MRI), CT scans, X-rays, Positron Emission Tomography (PET), lung biopsy, and High-Resolution Computed Tomography (HRCT). The advancement of CT technology has caused a remarkable expansion in the amount of information in clinical CT. The development of Computer-Aided Diagnosis (CAD) systems for lung segmentation and fusion depends on computer vision and medical imaging technology.
Traditional SR algorithms that use a fixed dictionary, however, have a number of drawbacks in the image fusion process. Liu et al. [8] suggested ASR for both image fusion and denoising, which can adaptively create a compact dictionary for the fusion of images. Aishwarya et al. [9] applied the adjusted spatial frequency to image fusion and introduced the basic concept of an adaptive selection dictionary to SR. In 2015, Singh and Khare [10] investigated an image fusion method for multi-view medical images based on two redundant wavelet transforms, the Redundant Wavelet Transform (RWT) and the Redundant Discrete Wavelet Transform (R-DWT). In their proposed method, they found that quality image fusion can be achieved through the shift-invariance of the R-DWT. Numerous multi-modal MRI, CT, and PET medical images have been used in experiments, and the results were analyzed using mutual information and strength metrics [11]. Pyramid transformation is a technique that can be used to accomplish the fusion of multi-view images. When first proposed, it was mostly adopted in computer vision, image compression, and image segmentation [12]. Presently, the pyramid transform is extensively used to combine multi-view clinical images. A union-LP method was proposed by Du et al. [13] to extract many important features, which helped to enhance the outline structure and color contrast of the fused images. To fuse images captured by a microscope, Kou et al. [14] suggested Region Mosaicking on Laplacian Pyramids (RMLP), but it was found to be sensitive to noise. An LP algorithm including joint averaging was then suggested, which effectively improved the output by preserving the rich background details of the image. Li and Zhao in 2020 [15] worked on a novel multi-modal medical image fusion algorithm.
In their study, CT and MR images were first decomposed into low- and high-frequency sub-bands using the Non-Subsampled Contourlet Transform (NSCT) of multi-scale geometric transformation; second, the local area standard deviation method was selected for fusion of the low-frequency sub-band, while an adaptive pulse-coupled neural network model was constructed and used for fusion of the high-frequency sub-band. The fusion results of their algorithm significantly enhanced the image fusion accuracy, with advantages for both visual effects and objective assessment indices, providing a more accurate basis for the clinical diagnosis and treatment of diseases. Moreover, Soliman et al. worked on accurate lung segmentation of CT chest images by adaptive appearance-guided shape modeling and reported high Dice Similarity (DS), Bidirectional Hausdorff Distance (BHD), and Percentage Volume Difference (PVD) accuracy of their lung segmentation framework on multiple in vivo 3D CT image datasets [16]. Khan et al. in 2020 also worked on an integrated design of contrast-based classical feature fusion and selection [17]. Firstly, the gamma correction max intensity weight approach improves the contrast of the original CT images. Secondly, multiple texture, point, and geometric features are extracted from the contrast images, and then a serial canonical correlation-based fusion is performed. Finally, an entropy-based approach is used to substitute zero values and negative features, followed by weighted Neighborhood Component Analysis (NCA) for selection. Their method achieved the maximum accuracy on the Lungs Data Science Bowl (LDSB) 2017 dataset. Similarly, in 2021, Azam et al. proposed multi-modal medical image registration and fusion for quality enhancement [18]. The proposed approach was validated using the CT and MRI imaging modalities of the Harvard dataset.
For the statistical comparison of the proposed system, quality evaluation metrics such as the Mutual Information (MI), Normalized Cross-Correlation (NCC), and Feature Mutual Information (FMI) were computed. The suggested technique yielded more precise outcomes, higher image quality, and useful data for medical diagnosis.
Recently, the American Cancer Society noted that there is a high likelihood of more severe COVID-19 in cancer patients, recommending that patients and their caregivers take special precautions to reduce the risk of contracting the disease. This new type of coronavirus, SARS-CoV-2, is a beta-coronavirus and a primary cause of Acute Respiratory Syndrome (ARS). In this regard, lung cancer is closely linked with ARS because it belongs to a disease group based on the progression and expansion of abnormal cells within the human body. American scientists have analyzed the course of COVID-19 in patients with cancer. Therefore, the diagnostic and examination features are of particular importance, since these include not only determining the causative agent of an infectious disease, but also the main indicators determining the severity of the clinical picture, the prognosis, the nature, and the amount of medical care.
In this paper, we propose a lung image segmentation and fusion method. The segmentation method is based on an approach by which we optimize the computational time of CT image segmentation with the help of a well-known and very effective method, the adaptive global threshold. The proposed algorithm also incorporates morphological operations and masking, which have proven very helpful in CT image segmentation. This enabled us to reduce the computational time with improved accuracy in complicated scenarios while eliminating the need for post-processing tasks. The lung image fusion method is based on the LP and ASR [19] methods of image fusion, resulting in a better outcome and a better method of medical image fusion in the treatment of lung cancer [20]. We used LP decomposition for the multi-view clinical CT images to increase the speed of constructing the sub-dictionaries using the ASR method.
The remainder of this paper is organized as follows: The background of the theory is given in Section 2. The materials and techniques are introduced in Section 3. Section 4 gives the experimental results. Section 5 provides the conclusions.

Background of the Theory
In this section, we review various image fusion methods for multi-view images.

Sparse Representation Method
Several SR-based fusion approaches have been studied in recent years [21]. According to Zhu et al. [22], image patches were generated using a sampling approach and classified by a clustering algorithm, and then, a dictionary was constructed using the K-SVD methodology. A medical image fusion scheme based on discriminative low-rank sparse dictionary learning was proposed by Li et al. [23]. Convolutional-sparsity-based morphological component analysis was introduced by Liu et al. in 2019 [24] as a sparse representation model for pixel-level medical image fusion.
In SR methods, natural signals are linearly explained using a small number of dictionary atoms. Since SR can compactly represent natural images, it has been broadly utilized in different fields recently. Nevertheless, the use of SR in fusion methods differs significantly from that in other areas. An over-complete dictionary is expected to represent the signal y ∈ R^N [25]. The SR can be depicted as:

y = Eα,   (1)

where E = [e_1, e_2, e_3, . . . , e_M] ∈ R^(N×M) (N < M), e_i is a dictionary atom, and α = [α_1, α_2, α_3, . . . , α_M]^T is the set of sparse coefficients. E has over-complete features; as a result, Equation (1) has an infinite number of solutions. The purpose of this procedure is to find a single solution vector α that contains mostly zero values. Normally, we choose the largest-l_1-norm rule to fuse {α_i}. {α_i} is solved by the equation:

α̂ = argmin_α ||y − Eα||_2^2 + λ||α||_1,   (2)

where λ has a significant role in sparsity: when λ is large, the solution is sparser but the reconstruction error is larger; when λ is small, the final error is smaller.
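As an illustration, the sparse coding step can be sketched in Python with NumPy (the paper's own implementation is in MATLAB). Greedy Orthogonal Matching Pursuit is one common way to approximate the sparse solution; the dictionary and the 2-sparse signal below are synthetic and purely illustrative:

```python
import numpy as np

def omp(E, y, k_max, tol=1e-8):
    """Greedy Orthogonal Matching Pursuit: approximate the sparsest alpha
    with y ~ E @ alpha, using at most k_max dictionary atoms."""
    alpha = np.zeros(E.shape[1])
    residual = y.copy()
    support, coef = [], np.zeros(0)
    for _ in range(k_max):
        if np.linalg.norm(residual) < tol:
            break
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(E.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit of y on the selected atoms
        coef, *_ = np.linalg.lstsq(E[:, support], y, rcond=None)
        residual = y - E[:, support] @ coef
    if support:
        alpha[support] = coef
    return alpha

rng = np.random.default_rng(0)
N, M = 16, 64                       # N < M: over-complete dictionary
E = rng.standard_normal((N, M))
E /= np.linalg.norm(E, axis=0)      # unit-norm atoms e_i
true_alpha = np.zeros(M)
true_alpha[[3, 20]] = [1.5, -0.8]   # a 2-sparse ground truth
y = E @ true_alpha
alpha = omp(E, y, k_max=6)          # sparse code with y ~ E @ alpha
```

OMP solves the l_0-flavored problem greedily rather than the l_1-penalized form of Equation (2), but it conveys the same idea: a few atoms of an over-complete dictionary suffice to explain the signal.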

Image-Decomposition-Based Fusion Methods
A novel multi-component fusion method has been presented to generate superior fused images by efficiently exploiting the morphological diversity features of the images [26]. Maqsood and Javed [27] proposed a two-scale Image Decomposition (ID) and sparse representation method for the integration of multi-modal medical images in 2020.

Deep-Learning-Based Fusion Methods
Several DL-based fusion approaches for multi-modality image fusion have recently been developed. Gao et al. [28] studied the use of a deep network for creating an initial decision map in a CNN for multi-focus image fusion. Li et al. [29] developed a DL architecture for multi-modality image fusion in 2018, which included encoder and decoder networks. Zhang et al. [30] proposed the general Image Fusion Framework based on a Convolutional Neural Network (IFCNN) (2020), which is a broad multi-modality image fusion framework based on CNNs. The performance of these DL-based fusion algorithms has been proven to be competitive. For the merging of images with different resolutions, Ma et al. [31] proposed a Dual-Discriminator conditional Generative Adversarial Network (DDcGAN) in 2020.

Rolling Guidance Filtering
The Rolling Guidance Filtering (RGF) algorithm, an edge-preserving smoothing filter, was presented by Zhang et al. [32] in 2014. Rolling guidance is implemented in an iterative way, with rapid convergence characteristics. Unlike other edge-preserving filters, RGF can fully control detail smoothing under a scale measure. Small structure removal and edge recovery are the two main steps in RGF.
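A minimal sketch of the RGF idea in Python/NumPy, assuming a Gaussian spatial kernel and a joint bilateral range kernel guided by the previous iterate; the parameter values are illustrative, not those of [32]:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def rolling_guidance(img, sigma_s=2.0, sigma_r=0.1, iters=4):
    """Simplified rolling guidance filtering: a Gaussian blur removes small
    structures, then a joint bilateral step (guided by the previous result)
    iteratively recovers large-scale edges."""
    half = int(2 * sigma_s)
    J = gaussian_filter(img, sigma_s)              # small structure removal
    H, W = img.shape
    Ip = np.pad(img, half, mode="reflect")
    for _ in range(iters):                         # edge recovery iterations
        Jp = np.pad(J, half, mode="reflect")
        num = np.zeros((H, W))
        den = np.zeros((H, W))
        for dy in range(-half, half + 1):
            for dx in range(-half, half + 1):
                ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                Js = Jp[half + dy:half + dy + H, half + dx:half + dx + W]
                Is = Ip[half + dy:half + dy + H, half + dx:half + dx + W]
                wr = np.exp(-(J - Js) ** 2 / (2 * sigma_r ** 2))
                w = ws * wr                        # joint bilateral weight
                num += w * Is
                den += w
        J = num / den
    return J

# a step edge plus fine-grained noise: RGF keeps the edge, smooths the noise
rng = np.random.default_rng(1)
img = np.zeros((32, 32)); img[:, 16:] = 1.0
noisy = img + 0.05 * rng.standard_normal(img.shape)
out = rolling_guidance(noisy)
```

On the toy step image, the small-amplitude noise (below sigma_r) is smoothed away while the large step edge (well above sigma_r) survives the iterations, which is exactly the "remove small structures, recover edges" behavior described above.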

Dictionary Learning
Aishwarya and Thangammal [9] suggested a multi-modal medical image fusion adaptive dictionary learning algorithm. Useful information blocks were isolated for dictionary learning by removing zero information blocks and estimating the remaining image patches with a Modified Spatial Frequency (MSF).
The creation of an over-complete dictionary has a major impact on SR. There are two basic approaches to creating one. The first is pre-setting a transformation matrix, for example the contourlet transform or the DCT. This method yields a dictionary that is fundamentally fixed: although the multi-source images have various attributes, using one fixed sparse dictionary to fuse the images can result in poor performance. The second is to train a dictionary using methods such as the PCA and K-SVD strategies. This generates a dictionary from the source images' structure, allowing the trained atoms to represent the original image more sparsely. As a result, the dictionary produced by the latter method has better performance and efficiency, making it more appropriate for clinical image fusion. Now, consider how the dictionary atoms are trained. Let {x_i}_{i=1}^{e} be the database samples obtained through a fixed-size window (of size √n × √n), where e represents the number of samples and n the dimension of each vectorized patch. The window performs random sampling from a compilation of clinical images with multiple views. The dictionary learning model for E can be defined as:

min_{E, {α_i}} Σ_{i=1}^{e} ||α_i||_0   subject to   ||x_i − Eα_i||_2 ≤ ε,   (3)

where ε > 0 is the tolerance factor and the samples are drawn from the M multi-view clinical images.
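The patch-sampling and training-based dictionary construction can be sketched as follows. This sketch uses PCA (one of the two training strategies mentioned above) rather than K-SVD, and all sizes are illustrative:

```python
import numpy as np

def sample_patches(img, patch=8, n_samples=500, seed=0):
    """Randomly sample patch x patch windows and return them as columns,
    i.e., the database samples {x_i} with n = patch * patch."""
    rng = np.random.default_rng(seed)
    H, W = img.shape
    cols = []
    for _ in range(n_samples):
        y = rng.integers(0, H - patch + 1)
        x = rng.integers(0, W - patch + 1)
        cols.append(img[y:y + patch, x:x + patch].ravel())
    return np.array(cols).T                  # shape (n, n_samples)

def pca_dictionary(X, n_atoms=32):
    """Train a dictionary from patch samples via PCA: the leading principal
    directions of the mean-removed patches become the atoms."""
    Xc = X - X.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :n_atoms]                    # orthonormal atoms

rng = np.random.default_rng(0)
img = rng.random((128, 128))                 # stand-in for a clinical image
X = sample_patches(img)                      # n = 64 for 8 x 8 patches
E = pca_dictionary(X)
```

A K-SVD-trained dictionary would replace `pca_dictionary` with an alternating sparse-coding/atom-update loop, but the sampling stage is the same.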

Laplacian Pyramid Method
Liu et al. [20] proposed a deep-learning technique for medical image fusion. The strategy uses the Laplacian Pyramid to reconstruct the image in the fusion process after generating a weighted map of the source image using a deep network. Chen et al. [19] used the Laplacian pyramid to describe the high-frequency detail information lost through the convolution and down-sampling operations of the Gaussian Pyramid (GP) method.
The LP technique decomposes an input image into a sequence of multi-scale, multi-layer, pyramid-shaped output images [33]. This technique is used to break down medical images so that useful data can be distinguished in the clinical images. The LP method decomposes an image into a pyramid of images of progressively lower resolution: the upper layers contain low-resolution images, while the lower layers contain high-resolution images, with each lower image being four times the size of the image above it. In the LP technique, the difference between two adjacent layers of the Gaussian pyramid is used as a pyramid layer, so that different layers carry information at different frequencies. The first step of the LP decomposition process is Gaussian pyramid decomposition, which loses some high-frequency data due to the convolution and down-sampling operations. The steps involved in image decomposition are as follows. First, the initial Gaussian pyramid is created from the input images (multi-view medical images): a 5 × 5 2D separable Gaussian filter ω(m, n) is used to convolve the source images, and P_l is built from bottom to top by down-sampling, where P_l is the Gaussian pyramid, l is the current layer, and W_l is the number of rows in the l-th layer. Next, the Gaussian pyramid obtained in the previous step is used to construct the corresponding LP: the (l + 1)-th layer P_{l+1} is up-sampled and convolved with the Gaussian filter, giving P*_{l+1}, which is subtracted from the l-th layer P_l; the difference is the LP's l-th layer. From the bottom layer to the top layer, the LP is constructed as:

LP_l = P_l − P*_{l+1},  for 0 ≤ l < L;   LP_L = P_L.

The corresponding Gaussian pyramid for the fused LP can be restored layer by layer from top to bottom, resulting in the source image P_0; this indicates that the interpolation method is used at the start. The inverse LP transform is defined as:

P_l = LP_l + P*_{l+1},  for 0 ≤ l < L;   P_L = LP_L.
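The decomposition and the inverse transform can be sketched in Python/NumPy (the paper's implementation is in MATLAB). The separable Gaussian blur with sigma 1 and the bilinear up-sampling below are assumptions for illustration:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def build_laplacian_pyramid(img, levels=4):
    """Gaussian pyramid by blur + 2x down-sampling; each Laplacian layer
    LP_l is the Gaussian layer P_l minus its up-sampled successor P*_{l+1}."""
    gp = [img.astype(float)]
    for _ in range(levels - 1):
        gp.append(gaussian_filter(gp[-1], 1.0)[::2, ::2])
    lp = []
    for l in range(levels - 1):
        up = zoom(gp[l + 1], 2, order=1)[:gp[l].shape[0], :gp[l].shape[1]]
        lp.append(gp[l] - up)
    lp.append(gp[-1])          # top layer keeps the low-pass residual
    return lp

def reconstruct(lp):
    """Inverse LP transform: up-sample from the top layer and add back each
    Laplacian layer, P_l = LP_l + P*_{l+1}."""
    img = lp[-1]
    for l in range(len(lp) - 2, -1, -1):
        up = zoom(img, 2, order=1)[:lp[l].shape[0], :lp[l].shape[1]]
        img = up + lp[l]
    return img

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
lp = build_laplacian_pyramid(img)
rec = reconstruct(lp)
```

Because the up-sampling operator used in the forward and inverse passes is identical, the reconstruction is exact, which is the property the fusion method relies on when rebuilding the fused image from fused layers.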

Image-Segmentation Method
Lung parenchyma segmentation is significantly helpful in locating and analyzing the nearby lesions, but it requires certain methodologies and frameworks. In the CAD system of lung nodules based on CT image sequences, lung parenchyma segmentation is an important pre-processing stage. We used an optimal thresholding method to reduce the complexity of lung segmentation in the quest to improve the computational time along with the accuracy. The approach was validated through experimentation on several CT images taken from the LIDC-IDRI. The flowchart of the proposed segmentation method is given in Figure 1. All the steps of the proposed segmentation technique are also summarised in Algorithm 1. Let A(x, y) be the input CT image of the lungs. The adaptive global threshold was used to perform the segmentation of the lungs through intense thresholding of the lung region of the CT image. The value of the threshold was then picked from the CT image histogram to provide the output.
σ is the specific global threshold value applied to the original input image A(x, y). After thresholding A(x, y), we obtain the resultant binary image A_δ(x, y) (Equation (8)). Next, we take the image complement and clear the border:

A_α(x, y) = C − A_δ(x, y),   (9)

where C represents an image with all pixel values equal to 1, and A_α(x, y) is the output of the complement-and-clear-border stage. A morphological closing operation is then performed on A_α(x, y) using the mask (structuring element) B:

A_β(x, y) = A_α(x, y) • B.   (10)

Taking the complement of A_β(x, y) gives:

A_γ(x, y) = C − A_β(x, y).   (11)

The binary image A_δ(x, y) from Equation (8) is then multiplied by the image A_γ(x, y) from Equation (11):

A_τ(x, y) = A_δ(x, y) A_γ(x, y).   (12)

Morphological closing is applied to A_τ(x, y) using the structuring element B:

A_θ(x, y) = A_τ(x, y) • B.   (13)

Next, the morphological opening operation is applied to A_θ(x, y):

A_ω(x, y) = A_θ(x, y) ◦ B.   (14)

In the last step, the output segmented image µ(x, y) is generated by multiplying A_ω(x, y) with A_α(x, y):

µ(x, y) = A_ω(x, y) A_α(x, y).   (15)
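A sketch of the thresholding-and-morphology pipeline in Python with scipy.ndimage. The toy "slice", the 5 × 5 structuring element, and the 2/3 grey-level threshold are illustrative, and only the first stages (A_δ, A_α, A_β), which already isolate the lung regions, are shown:

```python
import numpy as np
from scipy import ndimage

def clear_border(mask):
    """Remove connected components that touch the image border."""
    labels, _ = ndimage.label(mask)
    touching = np.unique(np.concatenate(
        [labels[0, :], labels[-1, :], labels[:, 0], labels[:, -1]]))
    return mask & ~np.isin(labels, touching[touching > 0])

def lung_mask(A, thresh=None, se=np.ones((5, 5), bool)):
    """First stages of the pipeline: global threshold A_delta, complement
    with cleared border A_alpha, and morphological closing A_beta."""
    if thresh is None:                  # 2/3 of the grey range, as in the text
        thresh = A.min() + (2 / 3) * (A.max() - A.min())
    A_delta = A > thresh                # bright body tissue
    A_alpha = clear_border(~A_delta)    # dark regions inside the body: lungs
    A_beta = ndimage.binary_closing(A_alpha, structure=se)
    return A_beta

# toy slice: bright body containing two dark "lung" cavities
A = np.zeros((64, 64)); A[8:56, 8:56] = 1.0
A[20:44, 14:28] = 0.1; A[20:44, 36:50] = 0.1
mask = lung_mask(A)
```

The complement step turns the dark lung cavities into foreground, clearing the border discards the air surrounding the body, and the closing smooths the cavity boundaries; Equations (11)-(15) then refine this mask further.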

Image Fusion Method
In this section, the proposed image fusion algorithm is presented. The proposed method has three steps, as shown in Figure 2: decomposition of the source segmented image, hierarchical fusion, and reconstruction of the image. The complete proposed fusion method is also summarised in Algorithm 2. The method of LP decomposition is used to decompose each multi-view medical image into four layers in the initial step. The next step is to build a dictionary for each layer, which is then fused using the ASR method in sequence. In the last step, the inverse LP transform is used to obtain the reconstructed resultant image.


Decomposition of the Segmented Source Image
To obtain the features of the segmented source images µ(x, y) at various sizes, the LP decomposition technique was applied. To begin, we need the Gaussian pyramid of an image of size M × N. The source image is on the P_0 layer. To obtain the P_1 layer (0.5M × 0.5N), the image µ(x, y) on layer P_0 was down-sampled with the help of the Gaussian kernel function. By repeating the above steps, the LP decomposition was formed. The three-stage decomposition of an LP is shown in Figure 3. The decomposition of the (l − 1)-th layer P_{l−1} into the l-th layer P_l can be expressed as:

P_l(i, j) = Σ_{m=−2}^{2} Σ_{n=−2}^{2} ω(m, n) P_{l−1}(2i + m, 2j + n).   (16)

The first step in making the LP is up-sampling each layer of the Gaussian pyramid. Let the image to be enlarged be of dimension m × n; an inverse Gaussian pyramid step extends it into a 2m × 2n image, which can be interpreted as:

P*_l(i, j) = 4 Σ_{m=−2}^{2} Σ_{n=−2}^{2} ω(m, n) P_l((i + m)/2, (j + n)/2),   (17)

where only the terms with integer pixel positions contribute. Now, the reconstruction of µ(x, y) can be done from the Laplacian Pyramid layers as:

P_l = LP_l + P*_{l+1},  for 0 ≤ l < L;   P_l = LP_l,  for l = L.   (18)

ASR Method
After decomposition, the ASR method was used to fuse the corresponding layer groups (LP_0 to LP_3) of the two source images [34]. As shown in Figure 4, the most critical step in ASR is selecting and composing the adaptive dictionary. The segmented source images are represented by {µ_1, µ_2, . . . , µ_j}, and all have the same M × N size. Medical images meet the ASR model's requirement that the source images be of the same size; as a result, ASR is an excellent option for fusing multi-view images. The relevant layers of the two images of the LP were used to create a new LP of the fused image using the learned sub-dictionaries {E_0, E_1, E_2, . . . , E_k}. The sub-dictionaries {E_0, E_1, E_2, . . . , E_k} were generated through the following five steps:

1. For each input image µ_j, a sliding window of size √n × √n was used to extract all patches with a step length of one pixel, from top to bottom and left to right;
2. Each patch was rearranged as a column vector v^i_j, and its mean value was removed to obtain v̂^i_j = v^i_j − (1/n)(1^T v^i_j)1, where 1 is the unit vector of size n × 1;
3. From the set {v̂^i_1, v̂^i_2, . . . , v̂^i_J}, the v̂^i_m with the greatest variance was chosen. Then, using v̂^i_m, a gradient orientation histogram {θ_0, θ_1, . . . , θ_K} was generated, and one sub-dictionary was chosen from E = {E_0, E_1, . . . , E_K}, which has a total of K + 1 sub-dictionaries. E_{k_i} is defined as the adaptive sub-dictionary, with k_i being the index of the sub-dictionary E_k into which the patch v^i should be divided. The index k_i is chosen as k_i = k*, where θ_max = max{θ_0, θ_1, . . . , θ_K} and k* = argmax{θ_k | k = 1, . . . , K};
4. The dictionary chosen for SR fusion was E_{k_i}. The sparse vectors {α^i_1, α^i_2, . . . , α^i_J} were obtained by sparse coding the vectors v̂^i_j extracted from the LP_0 of both source images over E_{k_i}, subject to a constant C > 0 and an error tolerance ε > 0. The steps of this method are shown in Figure 5. The Max-L1 fusion rule was used for the fusion of the sparse vectors {α^i_1, α^i_2, . . . , α^i_J}: the vector with the largest l_1-norm was selected as α^i_F, and the merged mean value v̄^i_F was set to the mean value of the patch whose sparse vector was selected;
5. Finally, the fused patches of each layer were reconstructed from α^i_F and v̄^i_F and placed back into their positions to form the fused layers of the new LP.
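Two of these steps can be illustrated compactly in Python/NumPy: sub-dictionary selection from a gradient orientation histogram (a simplified stand-in for the rule in step 3) and the Max-L1 rule of step 4. All names and sizes are illustrative:

```python
import numpy as np

def select_subdictionary(patch, K=6):
    """Pick a sub-dictionary index k* from the magnitude-weighted gradient
    orientation histogram of a patch (simplified version of step 3)."""
    gy, gx = np.gradient(patch)
    theta = np.arctan2(gy, gx).ravel() % np.pi   # orientations in [0, pi)
    mag = np.hypot(gx, gy).ravel()
    hist, _ = np.histogram(theta, bins=K, range=(0, np.pi), weights=mag)
    return int(np.argmax(hist))                  # index of theta_max

def max_l1_fusion(alphas):
    """Max-L1 rule: keep the sparse vector with the largest l1-norm."""
    norms = [np.abs(a).sum() for a in alphas]
    return alphas[int(np.argmax(norms))]

# patch with a vertical edge -> horizontal gradient, orientation 0
patch = np.zeros((8, 8)); patch[:, 4:] = 1.0
k = select_subdictionary(patch)

a1 = np.array([0.0, 2.0, 0.0]); a2 = np.array([0.5, 0.0, 0.0])
fused = max_l1_fusion([a1, a2])   # a1 wins: 2.0 > 0.5
```

Grouping patches by dominant orientation is what makes the dictionary "adaptive": each sub-dictionary only has to represent patches with a similar structure, so fewer atoms suffice per patch.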

Image Reconstruction and Fusion
The inverse LP transform is represented by:

P^l_F = LP^l_F + (P^{l+1}_F)*,  for 0 ≤ l < L;   P^L_F = LP^L_F,   (27)

where P^l_F is the Gaussian pyramid layer retrieved from LP^l_F. According to Equation (27), the corresponding Gaussian pyramid is obtained by recursion from the top layer of the LP, and then the fused image I_F = P^0_F is acquired.

Results
In this section, the experimental results of the proposed technique are presented and evaluated by comparison with other recently published techniques.

Dataset
The LIDC-IDRI dataset of lung CT images was used to evaluate the performance of the proposed algorithm. The Cancer Imaging Archive (TCIA) hosts the LIDC, which is freely accessible on the TCIA website [35]. This dataset was created through the collaboration of seven academic centers and eight medical imaging organizations. Each of the four expert radiologists independently assessed his/her own marks, as well as the anonymized marks of the three other radiologists, before rendering a final decision. We considered 4682 scans of 61 different patients from this dataset, which contains nodules of a size of 3-30 mm. Each patient has 60-120 slices. The dataset is in DICOM format, containing 512 × 512 × 16 bit images with 4096 gray-level values in HU. The pixel spacing ranges from 0.78 mm to 1 mm, whereas the reconstruction interval ranges from 1 mm to 3 mm. We implemented our algorithm in MATLAB R2019a.

Image Segmentation
The first part of the proposed algorithm is image segmentation. For the evaluation of our proposed technique for lung segmentation, the DSC index was used to estimate the consistency between the original segmentation and our calculated results. The Dice coefficient is calculated using the formula:

DSC = 2|O_img ∩ F_img| / (|O_img| + |F_img|),

where O_img is the original image and F_img is the segmented result. The results of image segmentation are shown in Figures 6 and 7. The DSC index value of the proposed method was 0.9929, which is better than the published result of 0.9874 [36]. In Figure 7, three cases are presented to show the results: the first column displays the original images taken for lung segmentation, the second column displays the outcomes with a thick boundary around the selected region, and the third column presents the final results of the segmentation. In Figure 8, the segmentation results of the proposed method are compared with the results of recently published techniques. Figure 9 and Table 1 provide a precise quantitative comparison of the conventional methods with the proposed method; the DSC index value of the proposed method, 0.9929, is better than the other listed results. Table 2 compares the overall performance of the proposed technique with existing techniques using three evaluation parameters: sensitivity, specificity, and accuracy. The quantitative results showed that the proposed technique outperformed the U-Net [37], AWEU-Net [38], 2D U-Net [9], 2D Seg U Det [39], 3D FCN [40], 3D nodule R-CNN [41], 2D AE [42], 2D CNN [43], 2D LGAN [44], and 2D encoder-decoder [45]. The accuracy of the proposed method was 99%, which is much better than the other listed methods, as shown in Table 2. The sensitivity of the proposed method was 89%, higher than all the listed methods except the published results of the AWEU-Net and the 2D encoder-decoder.
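The DSC computation itself is straightforward; a small Python/NumPy sketch on toy masks:

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Coefficient: 2|A intersect B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

truth = np.zeros((8, 8), bool); truth[2:6, 2:6] = True   # 16 pixels
pred = np.zeros((8, 8), bool); pred[2:6, 2:7] = True     # 20 pixels
d = dice(truth, pred)    # 2*16 / (16 + 20) = 0.888...
```

A DSC of 1.0 means the segmentation matches the reference exactly; the 0.9929 reported above therefore indicates near-perfect overlap with the reference masks.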
However, the values of the other parameters of those methods were lower than those of the proposed method.

Image Fusion Results
This section describes the results of the proposed fusion method. A comparative experiment was performed with single-patient multi-view diagnostic CT images of the lungs to check the feasibility of the proposed procedure. Six indices were used to assess the fusion results. The contrast was measured using the Average Pixel Intensity (API). The arithmetic square root of the variance is the Standard Deviation (SD), which represents the degree of dispersion. The total amount of information in the image is represented by the entropy (H) [46]. The resolution of the fusion result is measured by the Average Gradient (AG). Mutual Information (MI) reflects the energy transferred from the input image to the fused output image [47]. The Spatial Frequency (SF) was used to analyze the total level of information in the fused output image. Edge retention (Q^AB/F) [48] refers to how much of the input image edge information is preserved in the final result. The total information loss was determined using L^AB/F, and the level of noise and other artifacts was calculated using N_m^AB/F [49]. Figures 10-12 present the fusion results obtained using the various fusion methods. In general, the proposed approach produced a fused image that retained both the edges and the information. Figure 10 shows the source multi-view images: the first column shows the different multi-view source CT images, and the second, third, and fourth columns show the different layers of the Gaussian pyramid. The proposed approach was applied to the source images at different levels.
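The single-image indices (API, SD, H, AG, SF) can be sketched in Python/NumPy as below; MI and the Q/L/N metrics require a reference image and are omitted. The definitions follow the usual formulations of these indices and may differ in constants from [46-48]:

```python
import numpy as np

def fusion_metrics(img):
    """API, SD, entropy H (bits), average gradient AG, and spatial
    frequency SF of a single 8-bit-range image."""
    img = img.astype(float)
    api = img.mean()                                  # average pixel intensity
    sd = img.std()                                    # standard deviation
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    h = -np.sum(p[p > 0] * np.log2(p[p > 0]))         # entropy
    gy, gx = np.gradient(img)
    ag = np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2))    # average gradient
    rf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # column frequency
    sf = np.sqrt(rf ** 2 + cf ** 2)                   # spatial frequency
    return api, sd, h, ag, sf

rng = np.random.default_rng(0)
flat = np.full((64, 64), 128.0)     # constant image: SD, H, AG, SF all zero
tex = rng.integers(0, 256, (64, 64)).astype(float)   # textured image
```

On a constant image, every index except the API vanishes, while a richly textured image scores high on SD, H, AG, and SF, which is why higher values of these indicators signal a more informative fused image.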
The SR method easily created a block effect, as seen in Figure 11. The ASR method failed to eliminate the block effect, and the gradient contrast was poor, resulting in a fusion result with a blurred texture and structure. The blurry edges, low contrast, and lack of structure in the fused lungs images would have a huge impact on the doctor's treatment accuracy. The proposed method, on the other hand, can produce better fusion results and is consistent with the human visual system, as shown in Figure 12. As a result, the proposed method achieved the highest level of medical image fusion efficiency and can be used for medical care.
To evaluate the experimental results, six statistical indicators, SF, MI, API, SD, AG, and H, were used: the higher the value of each indicator, the better the quality of the fused image. Since the values of the API, SD, and SF were large, we divided them by ten for easier observation. Q^AB/F, L^AB/F, and N_m^AB/F are the fusion efficiency metrics (Table 3); a higher Q^AB/F is better, while the L^AB/F and N_m^AB/F values should be lower. The proposed method consistently had better results with respect to the API, SD, and MI, indicating that the suggested technique has a good capacity to maintain details. Because of the block effect, the SR method outperformed the proposed method in terms of the AG and SF. As shown in Figure 12, the resultant images acquired using the SR method contained several artifacts and became smooth due to the loss of internal information in the fused image. The proposed approach had the best L^AB/F and Q^AB/F ratings, meaning that it kept the most information from the source images while still preserving the edges and structure. This shows that our method is effective in general. According to the analysis of the fusion results, the suggested technique had a better overall performance than the other fusion techniques. A doctor must closely examine the fused CT image before making a diagnosis. As a result, when evaluating multi-view medical image fusion, not only the suitability of the evaluation indices, but also whether the indices comply with the human visual system must be addressed. The results showed that the proposed approach produced the highest-quality fused image with no distortion while recreating the fused image with all the information and structure preserved.

Discussion
In view of the global pandemic and its effect on lung cancer patients, early diagnosis using segmentation of lung CT images has received greater attention from clinical analysts and research scholars, who have proposed many algorithms to achieve preciseness and accuracy. Taking this into consideration, a novel method based on the adaptive global threshold was proposed and assessed from three different aspects: the DSC, accuracy, and a time-based analysis. First, the DSC results are computed in Table 1, which can be further validated by Figure 9. In order to evaluate the proposed method, the results were compared with those of recently published methods and with the manual segmentation made by experts. From Figure 8, it can be observed that the proposed method provides accurate lung segmentation results. The proposed method extracts the lung region accurately, as it uses a modified algorithm and mathematical morphological operations. In Figure 6, the specific value of the threshold is σ, which was applied to the original input image; the grey-level threshold was 2/3 in our experimentation. With this value, the segmented lung boundary is flawless and clear. In this way, the accuracy perfectly aligned with the requirements.
Next, the fusion was performed, which improved the classification parameters, as given in Table 2 and Figure 12. The fusion accuracy achieved by the proposed method was 99%. From the review of the existing methods, we found it very hard to compare results with previously published work because of their non-uniform performance metrics and different evaluation criteria, including the datasets and types of nodules considered. Despite this constraint, we compared the performance of our proposed system with the other lung CAD systems, as shown in Table 2. It can be seen that our proposed system performed better than the other systems with regard to the sensitivity, specificity, and accuracy. The systems that were closest on the remaining performance indicators, i.e., the API, SD, AG, H, MI, SF, Q^{AB/F}, L^{AB/F}, and N_m^{AB/F}, are shown in Table 3. It is clear that the proposed method obtained the optimum values of the API, SD, and MI, which shows that it is able to retain the detailed information. The small values of L^{AB/F} and N_m^{AB/F} in Table 3 indicate that the image suffered only a minor loss of information and few artifacts during the fusion process. From the analysis of the fusion results, it can be concluded that the proposed method has better overall performance than the other fusion methods.
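The three classification parameters compared in Table 2 follow the standard confusion-matrix definitions; a short sketch (not the paper's code) makes the relationship explicit:

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts.

    tp/fn: nodules correctly/incorrectly detected (positives);
    tn/fp: non-nodules correctly/incorrectly rejected (negatives).
    """
    sensitivity = tp / (tp + fn)                # true-positive rate
    specificity = tn / (tn + fp)                # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)  # overall correctness
    return sensitivity, specificity, accuracy
```

A high specificity with a lower sensitivity, as in our results (98% vs. 89%), means false positives are rare while a fraction of true nodules is still missed.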
The fusion approach has a significant disadvantage in terms of computational time, since fusing many features increases the overall classification time; this can be reduced by the selection process, and in the proposed method, it was brought down to 1.22 s. A comparison of the computing time against the final segmented results revealed that our proposed adaptive-global-threshold method is more efficient for lung segmentation. The proposed approach also improves the contrast and brightness of the fused images. The experiments showed that the suggested technique can significantly preserve detail information within a given range, provide a clear view of the input image data, and ensure that no additional objects or information are introduced during the fusion process. In particular, the fused result retains the edge and structure information of all CT image slices. The proposed method was applied to a single dataset only, which is a limitation of this study.

Conclusions and Future Work
Lung segmentation has gained much attention in the past due to its effectiveness in lung CT image processing and in the clinical analysis of lung disease, and various segmentation methods have been suggested. A robust lung segmentation method is also required to support computer-aided planning of lung lesion treatment and the quantitative evaluation of the response to lung cancer treatment. The improved global threshold approach has seen significant development in the fields of computer vision and image processing, which prompted us to study its utility in lung CT image segmentation. Selecting an appropriate set of characteristics can improve the system's overall accuracy by increasing the sensitivity and decreasing the number of false positives. To evaluate the system's effectiveness, we also used fusion methods (LP and ASR); the findings clearly revealed that these methods reduce image noise and enhance the image quality while keeping the time complexity low.
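To make the LP fusion step concrete, the following sketch builds a four-level Laplacian pyramid for each source image, fuses the layers, and reconstructs the result. It is a simplified stand-in: the detail layers are fused with a plain max-absolute rule instead of the ASR step used in our method, and the base layer is averaged.

```python
import numpy as np
from scipy import ndimage

def _reduce(img):
    # Gaussian blur, then downsample by a factor of two.
    return ndimage.gaussian_filter(img, sigma=1.0)[::2, ::2]

def _expand(img, shape):
    # Upsample by pixel replication, crop to the target shape, then blur.
    up = np.kron(img, np.ones((2, 2)))[:shape[0], :shape[1]]
    return ndimage.gaussian_filter(up, sigma=1.0)

def laplacian_pyramid(img, levels=4):
    """Decompose an image into detail layers plus a low-pass residual."""
    pyr, cur = [], img.astype(np.float64)
    for _ in range(levels - 1):
        small = _reduce(cur)
        pyr.append(cur - _expand(small, cur.shape))  # detail layer
        cur = small
    pyr.append(cur)                                  # low-pass base layer
    return pyr

def fuse_lp(a, b, levels=4):
    """Fuse two registered images of equal shape via Laplacian pyramids.

    Detail layers: keep the coefficient with the larger magnitude
    (an illustrative substitute for the ASR fusion rule);
    base layer: average the two residuals.
    """
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    fused = [np.where(np.abs(la) >= np.abs(lb), la, lb)
             for la, lb in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))
    # Reconstruct by successive expand-and-add, coarsest layer first.
    out = fused[-1]
    for layer in reversed(fused[:-1]):
        out = layer + _expand(out, layer.shape)
    return out
```

Because the reconstruction exactly inverts the decomposition, fusing an image with itself returns the image unchanged, which is a useful sanity check for any pyramid implementation.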
Our proposed method produced satisfactory results, but it still has room for improvement. First, the fusion rule for the detail layer requires further research. Second, the system should be evaluated on larger and more diverse datasets to achieve greater robustness.