Precise Segmentation of COVID-19 Infected Lung from CT Images Based on Adaptive First-Order Appearance Model with Morphological/Anatomical Constraints

A new segmentation technique is introduced for delineating the lung region in 3D computed tomography (CT) images. To accurately model the distribution of Hounsfield scale values within both chest and lung regions, a new probabilistic model is developed that depends on a linear combination of Gaussians (LCG). Moreover, we modified the conventional expectation-maximization (EM) algorithm to run sequentially, estimating both the dominant Gaussian components (one for the lung region and one for the chest region) and the subdominant Gaussian components, which are used to refine the final estimated joint density. To estimate the marginal densities from the mixed density, a modified k-means clustering approach is employed to assign each subdominant Gaussian component to either the lung or the chest. The initial segmentation, based on the LCG model, is then refined by imposing 3D morphological constraints based on a 3D Markov–Gibbs random field (MGRF) with analytically estimated potentials. The proposed approach was tested on CT data from 32 coronavirus disease 2019 (COVID-19) patients. Segmentation quality was quantitatively evaluated using four metrics: Dice similarity coefficient (DSC), overlap coefficient, 95th-percentile bidirectional Hausdorff distance (BHD), and absolute lung volume difference (ALVD), on which the approach achieved 95.67±1.83%, 91.76±3.29%, 4.86±5.01, and 2.93±2.39, respectively. The reported results show the capability of the proposed approach to accurately segment healthy lung tissues as well as pathological lung tissues caused by COVID-19, outperforming four current, state-of-the-art deep learning-based lung segmentation approaches.


Introduction
Pulmonary diseases are serious public health threats that may arise from inflammation or fluid accumulation in the lungs, causing respiratory failure, as in coronavirus disease 2019 (COVID-19). The primary cause of COVID-19 death is acute respiratory distress syndrome (ARDS) [1]. According to Gupta et al. [2], 83.9% of the COVID-19 patients in their study needed mechanical ventilation support, of whom 87.95% had ARDS. Therefore, detecting and grading COVID-19 is vital to prioritize patients' need for ventilator support. The accuracy attainable by a computer-aided diagnostic (CAD) system using lung imaging data for COVID-19 depends on how accurate the segmentation is. Accurate lung segmentation is challenging because different pathologies alter the appearance of the lung, and infected regions missed during segmentation degrade the entire downstream task. Therefore, this paper focuses on developing an automatic system to detect and segment the lungs in chest computed tomography (CT), one of the most popular noninvasive clinical modalities used by physicians to diagnose lung pathologies.
In the last few years, many preliminary studies have been conducted to detect and segment the lung as well as pathological lesions. Some of these studies [3][4][5][6][7][8][9] proposed threshold-based approaches for lung segmentation, which performed well on normal CT scans but failed in pathological cases, especially severe ones, since lungs in normal CT scans can easily be discriminated from the background due to large differences in attenuation [10]. Therefore, to overcome this problem, more recent studies employed texture, shape, deep learning, or hybrid techniques to accurately segment normal lungs and different lung pathologies. These studies are briefly discussed below.
In [11][12][13][14], the authors considered texture analysis, shape analysis, or both in their systems to discriminate between objects. A recent study by Oulefki et al. [15] proposed a system to automatically segment the COVID-19-infected lung region by applying a multilevel entropy-based threshold approach, namely a modified Kapur method. Their system achieved a sensitivity, specificity, Dice similarity coefficient (DSC), and precision of 73.3%, 99.4%, 71.4%, and 73.9%, respectively. Another study by Korfiatis et al. [16] employed k-means clustering to partition CT voxels into four classes (lung, muscle, fat, and bone) based on intensity values. After that, the initial lung region was extracted by applying a filling operation. Finally, a support vector machine (SVM) was used to determine the final border of the lung based on intensity and wavelet-based descriptors. In [17], the authors proposed a segmentation system that eliminated unwanted regions and segmented the lung initially using a threshold approach. Moreover, a 3D gray-level co-occurrence matrix (GLCM) was constructed for a window of size 15 × 15 × 15 centered on each voxel. Then, predefined features were extracted from the GLCM, and a new image was constructed as the product of the entropy and the inverse difference moment of the GLCM. Subsequently, the abnormal regions were identified from the constructed image using a threshold approach. Finally, the latter and the initial segmentation were merged to determine the final segmentation. Dehmeshki et al. [18] used a genetic algorithm (GA) to construct a system to identify spherical nodules within CT images. First, the lung was segmented using adaptive thresholding. Then, the authors used a geometric feature of the segmented lung, namely the volumetric shape index (VSI), as a weighting factor in the fitness function of the GA.
The VSI of a spherical object is 1, while that of a cylindrical object is 0.75, so the fitness-function values for nodules were higher than for blood vessels. The convergence criterion of the GA for selecting a shape as a nodule was threshold-based. The detection rate of their system was approximately 90%, with 14.6 false positives per scan. Moreover, Nakagomi et al. [19] presented a min-cut graph segmentation algorithm based on multiple shapes and prior information about neighboring structures to detect and segment lungs affected by pleural effusion. In [20], the authors presented a lung segmentation system for different lung pathologies. Their system first determined two seed points within both lungs using a thresholding approach; then, a fuzzy connectedness (FC) algorithm was used to extract the lung. Furthermore, multiple refinement stages based on machine learning classification and neighboring anatomy-guided learning mechanisms were included in their system to detect pathological regions during FC segmentation. A recent study by Houssein et al. [21] developed a segmentation system that employed a heuristic method, called manta ray foraging optimization (MRFO), based on opposition-based learning (OBL), using Otsu's method as the fitness function, to find the best threshold values for COVID-19 CT images. More information about texture- and shape-based lung segmentation can be found in [22].
Recently, deep learning approaches have been employed to segment normal lungs as well as pathological lungs affected by COVID-19. For example, Saood et al. [23] investigated two deep learning approaches to semantically segment infected/non-infected lungs using CT images: the SegNet [24] and U-Net [25] networks. The authors employed these networks for binary and multi-class classification and conducted multiple experiments with different hyperparameters. The best reported results for binary (multi-class) classification gave accuracies of 95.4 ±2.9% (90.7 ±6%) and 94.9 ±4.3% (90.8 ±6.5%) using SegNet and U-Net, respectively. A similar study [26] proposed a segmentation system using a convolutional neural network (CNN). Their network employed a feature variation block to enhance the efficiency of feature representation, as well as progressive atrous spatial pyramid pooling to deal with the appearance and shape differences caused by complex infections. The DSC (sensitivity, specificity) of their system was 72.6% (75.1%, 72.6%) and 98.7% (98.6%, 99%) for COVID-19 infections and normal lung CT images, respectively. A multi-task deep learning-based system was implemented by Amyar et al. [27]. This study included reconstruction for better feature representation; segmentation to extract lesion regions; and classification to categorize the scan into normal, COVID-19, and other diseases. Their system employed an encoder-decoder architecture based on the U-Net network, with a common encoder for the three tasks. The best reported DSC of their segmentation task was 88%. A recent study by Fan et al. [28] developed a binary and multi-class segmentation system using chest CT images, called Inf-Net. This system was mainly based on deep learning. Moreover, to compensate for the limited number of labeled images, they included random-sampling-based semi-supervised learning, namely Semi-Inf-Net.
Their system employed edge attention as well as reverse attention to improve the feature representation by modeling lung boundaries. In addition, high-level features were exploited by their network and combined by a parallel partial decoder. Their infection segmentation system achieved a DSC of 68.2% and 73.9%, sensitivity of 69.2% and 72.5%, and specificity of 94.3% and 96% using Inf-Net and Semi-Inf-Net, respectively. A similar study [29] proposed an automatic deep learning-based multi-class segmentation system for COVID-19 using chest CT images. The latter exploited aggregated residual transformations in addition to a soft attention mechanism to better represent the features and to increase the system's ability to distinguish between different COVID-19 lesions. The reported DSC (accuracy, precision) of their system was 94% (89%, 95%) and 83% (79%, 82%) with and without data augmentation, respectively. Another study [30] proposed a semi-supervised deep learning-based segmentation system, called FSS-2019-nCov, to detect lesion infection in COVID-19 patients. The latter was based on an encoder-decoder architecture with a Res2Net [31] encoder backbone. In the proposed encoder-decoder architecture, the authors used a context enrichment module, namely a smoothed atrous convolution block and a multi-scale pyramid pooling block, to overcome any degradation of the knowledge representation generated in the encoder phase. This system consisted of three modules: a conditioner path, an adaptive interaction module, and a segmentation path. The conditioner path was responsible for learning feature maps from support sets, each containing a CT slice and its ground truth. Subsequently, these feature maps were transmitted to the segmentation path through the adaptive interaction module, which was responsible for detecting lesions in the CT slice. Their system achieved a DSC of 79.8%, sensitivity of 80.3%, and specificity of 98.6%.
In [32], the authors employed V-Net [33] to segment the lung in COVID-19 CT images, which was then refined by a shape deformation module. A similar study by Li et al. [34] employed U-Net to segment the lung on CT images. Then, they proposed a deep learning network, called COVNet, with a ResNet-50 [35] backbone to detect COVID-19 lesions. A recent study [36] developed a deep learning-based segmentation system, called LungINFseg, to detect COVID-19 lesions in CT images. This system was built on an encoder-decoder architecture. The authors employed a 2D discrete wavelet transform (DWT) with four Haar filters and a receptive field aware (RFA) module in the encoder phase, which was able to change the size of the receptive field to capture more relevant features related to infected regions. Their system achieved a DSC and intersection over union (IoU) score of 80.34% and 68.77%, respectively. Other studies have also employed deep learning for segmentation with varying accuracy, as reported in [36][37][38][39][40][41][42][43].
Deep learning segmentation techniques for CT data are considered the current state of the art. However, they have some drawbacks in practical applications, such as the need for huge databases to learn the different pathologies of the lung regions, which makes training such networks computationally expensive [44]. Moreover, segmentation approaches based on deformable models, which optimize a trade-off between smoothness of the deformable boundary and homogeneity of the region inside the boundary, suffer from high computational complexity and limited capabilities when the desired boundary has concavities or encompasses a region that is naturally inhomogeneous, such as infected lung regions. To overcome the aforementioned limitations, we propose an unsupervised lung segmentation approach that models the first-order appearance of CT data using a probabilistic model based on a linear combination of Gaussians (LCG), which estimates the dominant components, corresponding to the lung and chest regions, as well as subdominant components. Subsequently, the subdominant components are clustered to one of the dominant components for marginal density estimation. This model can capture the variability in the Hounsfield distributions that may arise from changes in the screening protocol and the severity of lung infection. Finally, we refine the lung segmentation by applying 3D morphological constraints based on a Markov-Gibbs random field (MGRF) model with analytical parameter estimation.

Methods
A fully automated segmentation framework is presented to extract both healthy lung tissues and pathological lung tissues that may be caused by COVID-19. The major steps of the framework, depicted in Figure 1, are as follows: (i) preprocessing 3D chest CT scans to identify background voxels; (ii) modeling the gray-level distribution of the CT data as a Gaussian mixture model with parameters estimated using a novel, sequential, expectation-maximization (EM)-based approach; (iii) preliminary segmentation of the lung region based on a Bayes classifier; and (iv) refining the segmentation using a three-dimensional, rotation- and translation-invariant MGRF to impose morphological constraints. The details of each step are described below.

First-Order Visual Appearance Model
The ultimate goal is accurate labeling of voxels as belonging to lung tissue or background, where accuracy is defined as close agreement with the "ground-truth" lung region delineated by a radiologist. The main challenge is that the distribution of the radiodensities (in Hounsfield units) of lung and chest tissues, i.e., the relative frequency histogram of CT voxel values, depends on slice thickness and the severity of lung infection, as shown in Figure 2.

Figure 2 contrasts thick-section and thin-section acquisitions.

To address this challenge, we assume that the first-order visual appearance of the CT data (H) can be modeled as a linear combination of Gaussian distributions with K ≥ 2 components [45]. The first two components, called the dominant modes, correspond to the lung region (k = 1) and the chest region exterior to the lungs (k = 2). The remaining Gaussian components k = 3, . . . , K are called subdominant modes. Thus, the proposed probabilistic model is

p(h) = Σ_{k=1}^{K} w_k ϕ(h; θ_k), (1)

where the w_k > 0 are mixing weights and ϕ is a Gaussian density with parameters θ_k = (µ_k, σ_k). In order for p(h) to be a density function, the weights must satisfy the constraint

Σ_{k=1}^{K} w_k = 1. (2)

Given the number K of Gaussian components, the 3K parameters of Equation (1), including the mixing weights W and the means and variances Θ, are estimated by maximizing the log-likelihood of the empirical data,

L(W, Θ) = Σ_{h=0}^{H} f(h) log p(h), (3)

where n(h) is the histogram of the CT data, whose voxel values range from 0 to H. The corresponding relative frequency histogram is denoted f(h) = n(h)/N, N being the total number of voxels. To maximize the likelihood in Equation (3), we employ an iterative block relaxation process as follows.
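The LCG mixture density and its unit-sum weight constraint can be sketched in a few lines of NumPy; the function and variable names below are illustrative, not taken from the paper:

```python
import numpy as np

def gaussian(h, mu, sigma):
    # Gaussian density phi(h; mu, sigma)
    return np.exp(-0.5 * ((h - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def lcg_density(h, weights, mus, sigmas):
    """p(h) = sum_k w_k * phi(h; mu_k, sigma_k); the mixing weights must
    sum to 1 for p to be a proper density."""
    h = np.asarray(h, dtype=float)
    assert abs(sum(weights) - 1.0) < 1e-9, "mixing weights must sum to 1"
    p = np.zeros_like(h)
    for w, mu, sigma in zip(weights, mus, sigmas):
        p += w * gaussian(h, mu, sigma)
    return p
```

With valid weights, the mixture integrates to one over the gray-level range, which is what makes the subsequent Bayes classification well defined.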
Let τ indicate an iteration, such that (W^[τ], Θ^[τ]) are the parameter estimates at that iteration, and

p^[τ](h) = Σ_{k=1}^{K} w_k^[τ] ϕ(h; θ_k^[τ]) (4)

is the proposed probabilistic model for the CT data. The conditional weights are estimated as follows:

π^[τ](k|h) = w_k^[τ] ϕ(h; θ_k^[τ]) / p^[τ](h). (5)

This conditional probability specifies the relative contribution of voxel value h to each component at step τ. Using these variables, Equation (3) can be written in the equivalent form:

L(W, Θ) = Σ_{h=0}^{H} f(h) Σ_{k=1}^{K} π^[τ](k|h) log( w_k ϕ(h; θ_k) ). (6)

From given starting values at τ = 0, the block relaxation scheme converges to a local maximum of the likelihood function in Equation (6) by iterating the following two steps: (i) the E-step, which finds the parameters (W^[τ+1], Θ^[τ+1]) that maximize L(W, Θ) under the fixed conditional weights of Equation (5), and (ii) the M-step, which recomputes the conditional weights of Equation (5) with W^[τ+1] and Θ^[τ+1] fixed.
The process is repeated until the changes in all parameters become small. The E-step maximizes the likelihood function of Equation (6) subject to the constraint of Equation (2). The solution for the weights is

w_k^[τ+1] = Σ_{h=0}^{H} f(h) π^[τ](k|h). (7)

Then, the parameters of each Gaussian component are found using the ordinary (unconstrained) maximum likelihood estimates:

µ_k^[τ+1] = (1 / w_k^[τ+1]) Σ_{h=0}^{H} h f(h) π^[τ](k|h), (8)

(σ_k^[τ+1])² = (1 / w_k^[τ+1]) Σ_{h=0}^{H} (h − µ_k^[τ+1])² f(h) π^[τ](k|h). (9)

Algorithm 1 summarizes the steps for estimating the parameters of the proposed probabilistic model. The final estimated density consists of the two dominant Gaussian components and K − 2 subdominant Gaussian components. The Jensen-Shannon divergence (JSD) [46], a symmetrized version of the Kullback-Leibler (KL) divergence [47], is employed in this algorithm as the convergence criterion for determining the number of Gaussian components: it measures the similarity between the empirical density and the mixed density as

JSD(P‖Q) = ½ KL(P‖M) + ½ KL(Q‖M), where M = ½(P + Q).

Algorithm 2 (the modified k-means clustering) then initializes the marginal densities with the dominant components, p_l(h) = w_1 ϕ(h; θ_1) with (µ_l, σ_l) = (µ_1, σ_1), and p_c(h) = w_2 ϕ(h; θ_2) with (µ_c, σ_c) = (µ_2, σ_2), and assigns each subdominant component to the lung or chest cluster, yielding the final marginal density functions p_l(h) and p_c(h).
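One block-relaxation iteration (the conditional weights of Equation (5) followed by the weight/mean/variance updates) and the JSD convergence measure can be sketched as follows; the helper names are ours, and the histogram here is synthetic:

```python
import numpy as np

def phi(h, mu, sigma):
    # Gaussian density
    return np.exp(-0.5 * ((h - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def em_step(f, h, w, mu, sigma):
    """One iteration of the block-relaxation scheme.
    f: relative-frequency histogram over gray levels h (sums to 1)."""
    comp = np.array([w[k] * phi(h, mu[k], sigma[k]) for k in range(len(w))])
    pi = comp / np.maximum(comp.sum(axis=0), 1e-300)          # Eq. (5)
    w_new = (f * pi).sum(axis=1)                              # updated weights
    mu_new = (f * pi * h).sum(axis=1) / np.maximum(w_new, 1e-300)
    var = (f * pi * (h - mu_new[:, None]) ** 2).sum(axis=1) \
          / np.maximum(w_new, 1e-300)
    return w_new, mu_new, np.sqrt(np.maximum(var, 1e-12))

def jsd(p, q):
    # Jensen-Shannon divergence: symmetrized Kullback-Leibler divergence
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a[a > 0] * np.log(a[a > 0] / b[a > 0]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Iterating `em_step` on a bimodal histogram recovers the two dominant modes, and the JSD between the empirical and fitted densities shrinks toward zero as the fit improves.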

MGRF-Based Morphological Constraints
To obtain a consistent segmentation of the lung region, we applied rotation-invariant spatial constraints using a generic Markov-Gibbs model of region maps [45]. The model, which incorporates voxel-voxel interaction effects as shown in Figure 3, has, in general, an arbitrary interaction structure and corresponding Gibbs potentials. For simplicity, we restrict the interaction neighborhood system (N) to the nearest 9 neighbors in the CT slice above and 9 neighbors in the CT slice below. To model the interactions between CT voxels, we assume all interactions are the same within each region. To estimate this interaction analytically, let V : X × X → {V_eq, V_ne} denote a bi-valued Gibbs potential describing pairwise interactions, where V(λ, λ') = V_eq if λ = λ' and V_ne otherwise. Then, the Gibbs probability distribution (GPD) of region maps on the 3D lattice R is as follows [45]:

P(m) ∝ exp( Σ_{(i,j,z)∈R} Σ_{(ξ,η,ζ)∈N} V(m_{i,j,z}, m_{i+ξ,j+η,z+ζ}) ). (10)

By modifying the derivation scheme in [45] to fit our model, the following first approximation of the maximum likelihood estimate (MLE) of the potential values for a given map m is obtained:

V_eq = 2 f_eq(m) − 1, V_ne = 2 f_ne(m) − 1 = −V_eq, (11)

where f_eq(m) and f_ne(m) denote the relative frequencies of equal and non-equal pairs of labels over all voxel pairs {((i, j, z), (i + ξ, j + η, z + ζ)) : (i, j, z) ∈ R; (i + ξ, j + η, z + ζ) ∈ R; (ξ, η, ζ) ∈ N}, respectively.
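Because the potential estimate only requires counting equal label pairs, it reduces to array slicing; the sketch below assumes the simplification V_ne = −V_eq, and the three offsets used are a small illustrative subset of the paper's 9+9 inter-slice neighborhood:

```python
import numpy as np

def estimate_potentials(m, shifts=((1, 0, 0), (0, 1, 0), (0, 0, 1))):
    """First-approximation analytic estimate of the bi-valued Gibbs
    potentials from a 3D label map m: V_eq = 2*f_eq(m) - 1, V_ne = -V_eq.
    `shifts` is an illustrative neighborhood, not the paper's full system."""
    eq = total = 0
    for dz, dy, dx in shifts:
        a = m[dz:, dy:, dx:]                       # shifted copy
        b = m[:a.shape[0], :a.shape[1], :a.shape[2]]
        eq += np.count_nonzero(a == b)             # equal label pairs
        total += a.size
    f_eq = eq / total
    v_eq = 2.0 * f_eq - 1.0
    return v_eq, -v_eq
```

A perfectly homogeneous map gives V_eq = 1 (strong smoothing), while a map of independent random labels gives a potential near 0, i.e., no spatial constraint.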

Joint MGRF Model and Lung Segmentation Algorithm
To integrate the first-order appearance model with the spatial probabilistic model that describes the morphological/anatomical constraints, we assume that the CT data g, consisting of the visual appearance and its spatial map m (the data labels), follow the two-level MGRF model

P(g, m) = P(g|m) P(m). (12)

Here, P(m) is the unconditional distribution of maps, modeled by the MGRF probabilistic model of Equation (10), and P(g|m) is the conditional distribution of gray levels for a given labeling. The Bayesian maximum a posteriori (MAP) estimate of the labeling, given the image g, m* = argmax_m L(g, m), maximizes the log-likelihood

L(g, m) = log P(g|m) + log P(m). (13)
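The MAP labeling that maximizes L(g, m) is found with ICM in the algorithm below; a toy sketch of one such refinement follows, with simplifications of our own (a 6-neighborhood with wrap-around boundary handling via `np.roll`, and the neighbor-agreement term folded into a single score):

```python
import numpy as np

def neighbor_votes(mask):
    # Number of 6-neighbors carrying the "lung" label; np.roll wraps at the
    # volume boundary, which is acceptable for this sketch only.
    votes = np.zeros(mask.shape, dtype=float)
    for axis in range(3):
        for shift in (1, -1):
            votes += np.roll(mask, shift, axis=axis)
    return votes

def icm_refine(log_pl, log_pc, mask, v_eq, iters=3):
    """Iterated conditional modes: each sweep relabels every voxel with the
    class maximizing appearance log-likelihood + Gibbs neighbor agreement."""
    m = mask.astype(bool).copy()
    for _ in range(iters):
        n_lung = neighbor_votes(m.astype(float))
        n_chest = neighbor_votes((~m).astype(float))
        m = (log_pl + v_eq * n_lung) > (log_pc + v_eq * n_chest)
    return m
```

With a positive V_eq, isolated mislabeled voxels are outvoted by their neighbors and flipped, which is exactly the smoothing effect the MGRF prior contributes.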
In order to summarize the proposed segmentation system, the basic steps are demonstrated in Algorithm 3.

Algorithm 3: Lung Extraction Algorithm
input: A test 3D CT image.
output: Final 3D lung segmentation.

1. First-Order Density Estimation: Estimate the marginal density function for the lung (p_l(h)) and the marginal density function for the chest (p_c(h)).
2. Initial Segmentation/Labeling: Use a Bayes classifier to delineate the initial lung region using the estimated marginal densities.
3. Gibbs Potential Estimation: Apply Equation (11) to the initial segmentation to estimate the Gibbs potentials.
4. Refine Segmentation: Use the iterated conditional modes (ICM) algorithm [45] to find the map that maximizes the likelihood of the joint MGRF model shown in Equation (13).
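Step 2 (the Bayes classifier over the estimated marginals) reduces to a per-voxel comparison; a minimal sketch with illustrative names, assuming p_l and p_c are tabulated per gray level:

```python
import numpy as np

def initial_labeling(volume, p_l, p_c):
    """Bayes classifier for the initial segmentation: a voxel with gray
    level h is labeled "lung" when p_l(h) > p_c(h).
    p_l, p_c: 1D arrays indexed by gray level."""
    h = volume.astype(int)
    return p_l[h] > p_c[h]
```

The result is the binary map that seeds the Gibbs-potential estimation and ICM refinement in steps 3 and 4.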

Evaluation Metrics
This section describes the metrics used to gauge the performance of our proposed system: Dice similarity coefficient (DSC), overlap coefficient, and absolute lung volume difference (ALVD). Each of these quantifies in some way the agreement or dissimilarity between the segmentation algorithm's result and the corresponding ground-truth segmentation. More detailed explanations are presented in Sections 3.1-3.3, respectively. Furthermore, a fourth metric, the 95th-percentile bidirectional Hausdorff distance (BHD) (Section 3.4), is employed to quantify the accuracy of the boundary of the segmented region relative to the ground truth.

Dice Similarity Coefficient (DSC)
The Dice similarity coefficient (DSC) is one of the most common similarity metrics for measuring the agreement between two regions. This metric evaluates the result of the proposed system by estimating the similarity between the binary segmented lung (L) and the ground truth (G), i.e., the percentage of common region (the green part) in both images, as shown in Figure 4a. The metric ranges between 0 and 1, where 0 means completely dissimilar and 1 means identical. It is computed as follows:

DSC(L, G) = 2 n(L ∩ G) / (n(L) + n(G)), (14)

where n(L ∩ G) is the number of white pixels in the intersection of the segmented lung (L) and the ground truth (G), while n(L) and n(G) are the numbers of white pixels in the segmentation (L) and the ground truth (G), respectively.
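The DSC translates directly to boolean-mask arithmetic; a minimal sketch (names are ours):

```python
import numpy as np

def dsc(L, G):
    """Dice similarity coefficient: 2*n(L & G) / (n(L) + n(G))."""
    L, G = np.asarray(L, bool), np.asarray(G, bool)
    inter = np.count_nonzero(L & G)
    return 2.0 * inter / (np.count_nonzero(L) + np.count_nonzero(G))
```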

Overlap Coefficient
The overlap coefficient is used in our assessment pipeline to measure the similarity between the predicted object and its ground truth by computing the overlap percentage between them; see Figure 4c. The overlap coefficient of identical objects is 1, while it is 0 for disjoint ones. It is estimated as follows:

Overlap(L, G) = n(L ∩ G) / n(L ∪ G), (15)

where n(L ∪ G) is the number of white pixels in the union of the segmented lung (L) and the ground truth (G).
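As an intersection-over-union ratio, the overlap coefficient is equally short to compute (a sketch with our own names):

```python
import numpy as np

def overlap(L, G):
    """Overlap coefficient: n(L & G) / n(L | G)."""
    L, G = np.asarray(L, bool), np.asarray(G, bool)
    return np.count_nonzero(L & G) / np.count_nonzero(L | G)
```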

Absolute Lung Volume Difference (ALVD)
Another metric used to assess our work is the absolute lung volume difference (ALVD). ALVD compares two images by measuring the difference between the ground truth (G) and the binary segmented lung (L) (Figure 4d). The ALVD of identical objects is 0. This metric is defined as the relative volume error,

ALVD(L, G) = |n(G) − n(L)| / n(G), (16)

where |n(G) − n(L)| is the absolute difference between the numbers of white pixels in the ground truth (G) and the segmentation (L).
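The ALVD is a voxel-count comparison; the sketch below assumes the absolute difference is normalized by the ground-truth volume and reported as a percentage, a common convention that is an assumption on our part:

```python
import numpy as np

def alvd(L, G):
    """Absolute lung volume difference, as a percentage of the ground-truth
    volume (the normalization by n(G) is an assumption)."""
    L, G = np.asarray(L, bool), np.asarray(G, bool)
    nL, nG = np.count_nonzero(L), np.count_nonzero(G)
    return 100.0 * abs(nG - nL) / nG
```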

Bidirectional Hausdorff Distance (BHD)
This section describes the last metric, the bidirectional Hausdorff distance (BHD), which is used to evaluate our proposed system in addition to the previous three metrics. BHD is the bidirectional version of the Hausdorff distance (HD) between the binary segmented lung (L) and the ground truth (G). HD is the maximum Euclidean distance from a point on the border of the binary segmented lung (L) to its closest point on the border of the ground truth (G), as visualized in Figure 4b; it is computed as follows [48,49]:

HD(L, G) = max_{l∈L} min_{g∈G} d(l, g), (17)

where l and g are border points of L and G, respectively, and d(l, g) is the Euclidean distance between two points. Then, BHD(L, G) is estimated as

BHD(L, G) = max{HD(L, G), HD(G, L)}. (18)

In this paper, the 95th-percentile BHD is used to evaluate our proposed system: instead of taking the maximum Euclidean distance between L and G, the 95th percentile of all computed distances is selected to suppress outliers.
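The 95th-percentile BHD can be sketched with brute-force nearest-point distances between border-point sets (fine for small boundaries; the names are ours):

```python
import numpy as np

def directed_distances(A, B):
    """Distance from each border point in A (n,3) to its nearest point in B (m,3)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.sqrt(d2.min(axis=1))

def bhd95(A, B):
    """95th-percentile bidirectional Hausdorff distance: the per-point
    maxima of HD are replaced by 95th percentiles, then the larger
    direction is taken."""
    hd_ab = np.percentile(directed_distances(A, B), 95)
    hd_ba = np.percentile(directed_distances(B, A), 95)
    return max(hd_ab, hd_ba)
```

For large boundary sets, the pairwise matrix becomes expensive; a k-d tree or a distance transform would be the practical alternative, but the definition is identical.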

Experimental Results
The segmentation framework described above was applied to the problem of segmenting lungs with pathological tissue in COVID-19 patients. The proposed segmentation system was evaluated and tested on 32 chest CT volumes with different severities of COVID-19 infection, selected from the 249 CT volumes in the COVID-19 dataset [50]. Four of them had healthy/mild COVID-19 infections, with image sizes ranging from 512 × 512 × 51 to 512 × 512 × 125; 17 patients, with sizes of 512 × 512 × 36-607, had moderate infections; and 11 chest CT volumes, of size 512 × 512 × 44-577, had severe COVID-19 infections. To compare our framework with other approaches that depend on a training dataset, we selected another 34 3D chest CT volumes (i.e., 3713 images in total) from the same dataset for training. Table 1 summarizes the dataset characteristics used in our experimental results. The data are graded according to the radiology protocol in [51]. To obtain a more accurate segmentation, we included morphological/anatomical constraints based on a rotation-invariant MGRF model. To demonstrate step by step how our proposed approach works, Figure 5b shows the empirical density for the 3D chest CT volume in Figure 5a, and the two Gaussian mixtures approximating its dominant modes are presented in Figure 5c. Furthermore, Figure 5c reports the JSD between the empirical density and the two estimated dominant Gaussian components. Figure 5d shows the evolution of the JSD, and the best-estimated number of Gaussian components is demonstrated in Figure 5e. Figure 5f demonstrates the classification of the subdominant Gaussian components using the proposed clustering algorithm (Algorithm 2). Figure 5g,h demonstrate the final marginal densities for lung and chest as well as the final estimated mixed density.
Figures 6 and 7 demonstrate the ability of the proposed probabilistic model to handle the variability in the empirical density that may occur due to the severity of infection or the variability of the scanning protocol. To highlight the benefit of combining the rotation-invariant MGRF-based morphological/anatomical constraints with the adaptive first-order appearance model, the system's performance was evaluated before and after their inclusion, as demonstrated in Table 2. As shown in the table, the proposed segmentation improves after including the MGRF-based morphological/anatomical constraints, particularly in severe cases, where the DSC increases significantly from 82.51 ±13.13% to 95.15 ±1.91%. To further demonstrate this enhancement, Figure 8 presents three examples of the proposed system before and after the inclusion for healthy/mild, moderate, and severe COVID-19-infected lungs. As shown in the figure, the developed system outperforms the appearance model alone (i.e., LCG) in all three examples; for instance, the proposed system segments a severe COVID-19 infection with 94.55% DSC, compared with 76.49% DSC for the appearance model alone. Moreover, Figure 9 presents the proposed segmentation of a severely COVID-19-infected lung at different cross-sections (2D axial, coronal, and sagittal) to visually show the efficiency of the proposed system. Overall, the proposed system achieves a DSC, overlap, BHD, and ALVD of 95.67 ±1.83%, 91.76 ±3.29%, 4.86 ±5.01, and 2.93 ±2.39, respectively. Finally, to demonstrate the robustness of the proposed segmentation approach, four deep learning approaches were adopted for comparison: DeepLabv3+ [52] with a ResNet-50 backbone, Inf-Net [28] with a ResNet-50 backbone, U-Net [25], and 3D U-Net [53]. The results are reported in Table 3.
As demonstrated in the table, the 3D U-Net approach gives the worst performance, with 66.08 ±35.99% DSC, 58.30 ±34.81% overlap, 44.44 ±48.60 BHD, and 66.77 ±133.01 ALVD, while the proposed segmentation approach gives the best performance among these approaches. Moreover, to visually demonstrate the capability of the proposed system, three different examples of healthy/mild, moderate, and severe lung COVID-19 infections were segmented using these approaches, as presented in Figure 10. As demonstrated in the figure, the proposed approach segments the three examples better than the other four approaches. The U-Net approach has a DSC close to that of the proposed approach; however, some parts are segmented incorrectly, e.g., parts of the trachea or chest are classified as lung. The proposed system's segmentation is therefore closer to the ground truth. Accordingly, our approach is highly recommended for segmenting lungs infected by COVID-19, as it outperforms the state-of-the-art deep learning approaches. In addition, being an unsupervised technique, it does not suffer from underfitting or overfitting.

Figure 10. An illustrative example of the proposed segmentation compared to other deep learning approaches for (a) healthy/mild, (b) moderate, and (c) severe COVID-19-infected lungs. Note that the red border (green border or yellow region) refers to the ground truth (segmentation).

Discussion and Conclusions
Experiments demonstrate that the proposed framework is promising and achieves high accuracy: identification of the first-order appearance model, followed by 3D morphological constraints based on analytically estimated MGRF parameters, produces good results when segmenting the COVID-19-infected lung region from CT images. Quantitative accuracy metrics, including the DSC, overlap coefficient, 95th-percentile BHD, and ALVD, all show consistent performance on our sample data set of 32 subjects, outperforming current, state-of-the-art deep learning-based lung segmentation approaches. The results demonstrate the ability of the developed system to segment the lung in a CT image, with the DSC improving from 89.59 ±9.76% to 95.67 ±1.83% when the 3D morphological MGRF-based constraints are included in the system pipeline. However, the accuracy of the proposed segmentation system will be affected if the lung is significantly damaged or filled with fluid, or if the appearance of the lung is close to that of the chest; in such cases, separation based on the appearance model alone becomes a very challenging task. Therefore, we plan to add a shape model to our system to overcome these problems. Moreover, a future extension of this work would integrate the proposed segmentation approach into a computer-aided diagnostic system to assess pulmonary function and risk of mortality in COVID-19 patients, which is the ultimate goal of our research group. Furthermore, the morphological constraints could be extended to support the large-scale inhomogeneity seen in severe lung infection. This would be accomplished by expanding the neighborhood system to include larger cliques so that the MGRF model incorporates higher-order interaction effects.