Article

Level-Set-Based Kidney Segmentation from DCE-MRI Using Fuzzy Clustering with Population-Based and Subject-Specific Shape Statistics

1 Electrical Engineering Department, Assiut University, Assiut 71515, Egypt
2 Computer Science Department, Assiut University, Assiut 71515, Egypt
3 Radiology Department, Urology and Nephrology Center, Mansoura University, Mansoura 35516, Egypt
4 Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
5 Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
* Author to whom correspondence should be addressed.
Bioengineering 2022, 9(11), 654; https://doi.org/10.3390/bioengineering9110654
Submission received: 25 September 2022 / Revised: 23 October 2022 / Accepted: 2 November 2022 / Published: 5 November 2022
(This article belongs to the Special Issue Machine Learning for Biomedical Applications)

Abstract

The segmentation of dynamic contrast-enhanced magnetic resonance images (DCE-MRI) of the kidney is a fundamental step in the early and noninvasive detection of acute renal allograft rejection. In this paper, a new and accurate DCE-MRI kidney segmentation method is proposed. In this method, fuzzy c-means (FCM) clustering is embedded into a level set method, with the fuzzy memberships being iteratively updated during the level set contour evolution. Moreover, population-based shape (PB-shape) and subject-specific shape (SS-shape) statistics are both exploited. The PB-shape model is trained offline from ground-truth kidney segmentations of various subjects, whereas the SS-shape model is trained on the fly using the segmentation results that are obtained for a specific subject. The proposed method was evaluated on real medical datasets of 45 subjects, achieving a Dice similarity coefficient (DSC) of 0.953 ± 0.018, an intersection-over-union (IoU) of 0.91 ± 0.033, and a 95th-percentile Hausdorff distance (HD95) of 1.10 ± 1.4. Extensive experiments confirm the superiority of the proposed method over several state-of-the-art level set methods, with an average improvement of 0.7 in terms of HD95. It also offers HD95 improvements of 9.5 and 3.8 over two deep neural networks based on the U-Net architecture. The accuracy improvements have been found experimentally to be more prominent on low-contrast and noisy images.


1. Introduction

Acute rejection is the most frequent cause of graft failure after kidney transplantation [1]. However, acute renal rejection is treatable, and early detection is critical to ensure graft survival. The diagnosis of renal transplant dysfunction using traditional blood and urine tests is inaccurate because the failure can only be detected after 60% of the kidney function has been lost [1]. In this respect, the DCE-MRI technique has achieved an increasingly important role in measuring the physiological parameters of the kidney and in the follow-up of patients. DCE-MRI data acquisition is carried out by injecting the patient with a contrast agent and, during the perfusion, capturing kidney images quickly and repeatedly at three-second intervals. The contrast agent perfusion leads to contrast variation in the acquired images. Consequently, the intensity of the images at the beginning of the sequence is low (pre-contrast interval), gradually increases until reaching its maximum (post-contrast interval), and then decreases slowly (late-contrast interval). Figure 1 shows a time sequence of DCE-MRI kidney images of one patient taken during the contrast agent perfusion. Accurate kidney segmentation from these images is an important first step towards a complete noninvasive characterization of the renal status. However, segmenting the kidneys is challenging due to the motion caused by the patient’s breathing, the contrast variation, and the low spatial resolution of DCE-MRI acquisitions [1,2].
In order to overcome these problems, several researchers have proposed multiple techniques to segment the kidney from DCE-MRI images. A careful examination of the related literature reveals that level-set-based segmentation methods [3,4,5,6,7,8] have been the most popular for this purpose. In these methods, a deformable model adapts to the shape of the kidney, its evolution being constrained by the image properties and prior knowledge of the expected kidney shape. In [3], the authors developed a DCE-MRI kidney segmentation method employing prior kidney shape and gray-level distribution density in the level set speed function in order to constrain the evolution of the level set contour. However, their method had large segmentation errors on noisy and low-contrast images. Thus, in [4,5], Khalifa et al. proposed a speed function combining the intensity information, the shape prior information, and the spatial information modeled by 2nd- and 4th-order Markov Gibbs random field (MGRF) models, respectively. In order to circumvent the issue of the rather similar appearance between the kidney and the background tissues, Liu et al. [6] proposed to remove the intensity information from the speed function in [5] and to use a 5th-order MGRF to model the spatial information.
Incorporating the shape information into the level set method typically requires a separate registration step [3,4,5,6] to align an input DCE-MRI image to the shape prior model in order to compensate for the motion that is caused by the patient’s breathing and movement during data acquisition. In a different manner, Hodneland et al. [7] proposed a new model that jointly combines the segmentation and registration into the level set’s energy function and applied it to segment kidneys from 4D DCE-MRI images.
From another perspective, the level set contour evolution is guided by a partial differential equation derived to minimize a predefined cost functional that contains several weighting parameters requiring manual tuning [3,4,5,6,7]. In contrast, Eltanboly et al. [8] proposed a level set segmentation method employing the gray-level intensity and shape information without using weighting parameters. Some work [9,10] has also been carried out to address the intensity inhomogeneity and low-contrast problems of DCE-MRI images that are caused during the acquisition process. Based on fractional calculus, Al-Shamasneh et al. [9] proposed a local fractional entropy model to enhance the contrast of DCE-MRI images. Later, in [10], they presented a fractional Mittag-Leffler energy function based on the Chan-Vese algorithm for segmenting the kidneys from low-contrast and degraded MR images.
More recently, convolutional neural networks (CNNs) have been successfully used for several image segmentation tasks, including kidney segmentation. For example, Lundervold et al. [11] developed a CNN-based approach for segmenting kidneys from 3D DCE-MRI data using a transfer learning technique from a network trained for brain hippocampus segmentation. Haghighi et al. [12] employed two cascaded U-Net models [13] to segment kidneys from 4D DCE-MRI data. Later on, Milecki et al. [14] developed a 3D unsupervised CNN-based approach for the same purpose. Bevilacqua et al. [15] presented two different CNN-based approaches for accurate kidney segmentation from MRI data. On the other hand, the authors in [16] integrated a mono-objective genetic algorithm and deep learning for an MRI kidney segmentation task. Isensee et al. [17] presented the top-scoring model in the CHAOS challenge [18] for abdominal organ segmentation, in which they used an nnU-Net model to segment the left and right kidneys from MRI data. The CHAOS challenge dataset includes the data of 80 different subjects, comprising 40 CTs and 40 MRIs. Each sequence contains an average of 90 scans in CT and 36 in MRI, in the DICOM format.
The research gap is as follows: the common stumbling block facing CNN methods is that they typically require a large amount of annotated data to train the network, which is often difficult to obtain in the medical field. Thus, the aforementioned deep learning methods struggle to achieve high segmentation accuracy. On the other hand, the level-set-based kidney segmentation methods [3,4,5,6,7,8] have proved their effectiveness in achieving superior performance with more accurate segmentation. Unfortunately, however, almost all of them require an accurate level set contour initialization to be performed manually by the user. Inaccurate initialization may cause a drop in the segmentation accuracy or even cause the method to fail. In order to overcome this problem, in [19] we presented an automated DCE-MRI kidney segmentation method, called FCMLS, based on FCM clustering [20] and level sets [21]. In our FCMLS method, we constrain the contour evolution by the shape prior information and the intensity information represented in the fuzzy memberships. In addition, in order to ensure the robustness of the FCMLS method against contour initialization, we employ smeared-out Heaviside and Dirac delta functions in the level set method. The FCMLS method has indeed demonstrated its efficiency in segmenting the kidneys from DCE-MRI images. However, it still has some limitations. First, its performance drops on low-contrast images, such as those in the pre- and late-contrast parts of the time sequence in Figure 1. Second, the FCM algorithm is used to compute the fuzzy memberships of the image pixels before the level set evolution begins; once the level set starts evolving, the obtained memberships are not changed, which might not be accurate enough in some cases.
In order to enhance the segmentation accuracy of FCMLS, and to improve its robustness on low-contrast images, we developed a new kidney segmentation method, named the FML method, in [22]. In this method, we model the correlation between neighboring pixels by a Markov random field energy term in the level set’s objective functional. We also embed the FCM algorithm into the level set method and iteratively update the fuzzy memberships of the image pixels during contour evolution. The experimental results have confirmed the improved accuracy and robustness of this method. However, the integration of the Markov random field model within the level set formulation has significantly increased the computational complexity of the FML method.
In this paper, we follow a different strategy in order to improve the segmentation performance of our previous method without increasing its computational cost. The shape information plays a key role in kidney segmentation, since human kidneys tend to have a common shape, with between-subject variations. Thus, we seek to take full advantage of this in our new level set formulation by exploiting the level set method’s flexibility to accommodate the shape information about the target object to be segmented [23]. Inspired by [24], we employ PB-shape and SS-shape models for kidney segmentation. The PB-shape model is built offline from a range of kidney images from various subjects that are manually segmented by human experts, whereas the SS-shape model is constructed on the fly from the segmented kidneys of a specific patient.
This new methodology is able to generate high segmentation accuracy because the PB-shape model is used on images with high contrast in the post-contrast interval of the image sequence. Moreover, the SS-shape model that is generated from those accurate segmentations is employed on the more challenging, lower-contrast images from the pre- and late-contrast intervals of the sequence, as it more accurately reflects the kidney’s shape of the same patient. Our early work on this new methodology was drafted in [25], on which we build and develop several novel contributions in the present paper. First, we embed FCM clustering into the level set evolution, so the kidney/background fuzzy memberships are computed and updated every time the level set contour evolves. Second, the representation of the shape information in [25] is based on a 1st-order shape method, which might be inaccurate when some kidney pixels are not observed at all in the images used to construct the shape model. In this paper, we adopt an efficient Bayesian parameter estimation method [26] for computing the PB-shape and SS-shape models, which more accurately accounts for the kidney pixels that are possibly not observed during the model building. Third, we propose an automated and time-efficient, yet effective, strategy to determine, for each image in the patient’s sequence, whether the PB-shape model, the SS-shape model, or a blend of both models is applied.
The proposed method is used to segment the kidneys of 45 subjects from DCE-MRI sequences, and the segmentation accuracy is assessed using the Dice similarity coefficient (DSC), the intersection-over-union (IoU), and the 95th-percentile Hausdorff distance (HD95) metrics [2,27]. Our experimental results prove that the proposed method can achieve high accuracy, even on noisy and low-contrast images, with no need for tuning the weighting parameters. The experiments also show that the segmentation accuracy is not affected by changing the position of the initial level set contour, which demonstrates the high consistency of the proposed method. We compare our method’s segmentation accuracy with several state-of-the-art level set methods, as well as our own earlier methods [19,22,25]. Furthermore, we compare its performance against the base U-Net model and one of its modifications, named BCDU-Net [28], trained for the same kidney segmentation task. The two networks are trained from scratch on our DCE-MRI data, which are augmented with the KiTS19 challenge dataset [29]. This dataset contains the data of 300 subjects, of which 210 are publicly released for training and the remaining 90 are held out for testing. Each subject has a sequence of high-quality CT scans, with ground-truth labels manually segmented by medical students. It also includes a chart review that provides all of the relevant clinical information about each patient. All of the CT images and segmentation annotations are provided in an anonymized NIFTI format. The comparison results confirm that the proposed method outperforms all of the other methods.
The remainder of this paper is structured as follows: Section 2 introduces the mathematical formulation of the proposed kidney segmentation method. Then, Section 3 provides the experimental results and the comparisons. Finally, a discussion and the conclusions are presented in Section 4.

2. Materials and Methods

In this section, we present the formulation of the proposed segmentation method in detail.

2.1. Materials

DCE-MRI data were collected from 45 subjects who underwent kidney transplantation at Mansoura University Hospital, Egypt. In order to acquire the data, a dose of 0.2 mL/kg body weight of Gd-DTPA contrast agent was injected intravenously at a rate of 3–4 mL/s. Meanwhile, the kidney was scanned quickly and repeatedly, at 3 s intervals, using a 1.5T MRI scanner with a phased-array torso surface coil. The transit of the contrast agent results in a variation in the contrast of the images across the sequence. Each subject has a dataset of about 80 repeated temporal frames, which are 256 × 256 pixels in size. Each image in the sequence was manually segmented by an expert radiologist at the hospital. A sample sequence of one subject is shown in Figure 1.

2.2. Problem Statement and Notations

In DCE-MRI, to evaluate the transplanted kidney function, the kidney needs to be accurately segmented from each image separately. Let $I_t$, $t = 1, \ldots, T$, be a time-point image captured at time $t$ from a DCE-MRI sequence of length $T$. $I_t(x,y)$ is the intensity of a pixel $(x,y)$ in the image domain $\Omega$. The target is to label each pixel $(x,y)$ in the image as kidney ($K$) or background ($B$).

2.3. Level-Set-Based Segmentation Model with Fuzzy Clustering and Shape Statistics

Given a DCE-MRI time-point image $I_t$, the level set contour $\partial\Omega$ partitions the domain $\Omega$ of the image into a kidney region $\Omega_K$ and a background region $\Omega_B$. At any time $t$, $\partial\Omega$ corresponds to the zero level set of a higher-dimensional function $\phi_t(x,y)$, i.e., $\partial\Omega_t = \{(x,y) \mid \phi_t(x,y) = 0\}$. The function $\phi$ is defined as the shortest Euclidean distance between every pixel $(x,y)$ in the image and the contour. The distance is positive for the pixels inside the contour, negative outside, and zero on the contour. The level set contour iteratively evolves in the direction minimizing the following energy function:
$E(\phi(x,y)) = \lambda_1\, L(\phi(x,y)) + \lambda_2\, E_{FCM}(\phi(x,y))$   (1)
where $\lambda_1$ and $\lambda_2$ are positive normalizing parameters that control the impact of the energy terms. $E_{FCM}(\phi(x,y))$ is an FCM-based energy function computed from the input image $I_t$ to attract the contour towards the position of the kidney in the image, and is defined as follows:
$E_{FCM}(\phi) = \int_{\Omega} H_\varepsilon(\phi)\, F_B(x,y)\, dx\, dy + \int_{\Omega} \big(1 - H_\varepsilon(\phi)\big)\, F_K(x,y)\, dx\, dy$   (2)
where $H_\varepsilon(\phi) = H_\varepsilon(\phi(x,y))$ is the smeared-out Heaviside function, which is defined as follows:
$H_\varepsilon(\phi) = \begin{cases} 1 & \phi > \varepsilon \\ \dfrac{1}{2} + \dfrac{\phi}{2\varepsilon} + \dfrac{1}{2\pi}\sin\!\left(\dfrac{\pi\phi}{\varepsilon}\right) & -\varepsilon \le \phi \le \varepsilon \\ 0 & \phi < -\varepsilon \end{cases}$   (3)
where the parameter $\varepsilon$ determines the degree of smearing. $L(\phi)$ in (1) is a length term responsible for keeping the level set contour $\phi(x,y)$ smooth, and is defined as follows:
$L(\phi(x,y)) = \int_{\Omega} \delta_\varepsilon(\phi)\, \lvert \nabla\phi(x,y) \rvert\, dx\, dy$   (4)
where $\delta_\varepsilon(\phi) = \delta_\varepsilon(\phi(x,y))$ is the Dirac delta function, which is the derivative of $H_\varepsilon(\phi)$, and is given as follows:
$\delta_\varepsilon(\phi) = \begin{cases} 0 & \lvert\phi\rvert > \varepsilon \\ \dfrac{1}{2\varepsilon} + \dfrac{1}{2\varepsilon}\cos\!\left(\dfrac{\pi\phi}{\varepsilon}\right) & \lvert\phi\rvert \le \varepsilon \end{cases}$   (5)
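To make the smearing concrete, the following NumPy sketch (our own illustration, not the authors' MATLAB implementation) evaluates (3) and (5) on a level set array `phi`; the default `eps=1.5` mirrors the parameter setting reported in Section 3.

```python
import numpy as np

def smeared_heaviside(phi, eps=1.5):
    """H_eps(phi) of (3): 1 for phi > eps, 0 for phi < -eps, smooth ramp in between."""
    h = 0.5 + phi / (2.0 * eps) + np.sin(np.pi * phi / eps) / (2.0 * np.pi)
    return np.where(phi > eps, 1.0, np.where(phi < -eps, 0.0, h))

def smeared_dirac(phi, eps=1.5):
    """delta_eps(phi) of (5): derivative of H_eps, zero outside the band |phi| <= eps."""
    d = (1.0 + np.cos(np.pi * phi / eps)) / (2.0 * eps)
    return np.where(np.abs(phi) <= eps, d, 0.0)
```

The `np.where` calls implement the three cases of (3) and the two cases of (5) element-wise over the whole pixel grid.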
$F_L(x,y)$ in (2) represents the kidney (for $L = K$) or background (for $L = B$) energy function of the pixel $(x,y)$ in the image and is defined as follows:
$F_L(x,y) = \omega_t\, \mu_L(x,y)\, P_L(x,y) + (1 - \omega_t)\, \mu_L(x,y)\, S_L(x,y)$   (6)
where $\mu_L(x,y)$ is the kidney/background fuzzy membership degree of the pixel $(x,y)$, and $P_L(x,y)$ and $S_L(x,y)$ are the prior probabilities of the pixel $(x,y)$ derived from the PB-shape and SS-shape models, respectively. The weight factor $\omega_t$ controls the contribution of the two models to the segmentation; how the value of $\omega_t$ is computed for each image in the sequence is explained in Section 2.6.
According to the calculus of variations, the minimization of the functional in (1) with respect to $\phi$ is given as follows:
$\dfrac{\partial\phi}{\partial t} = \delta_\varepsilon(\phi)\left[\lambda_1\, \mathrm{div}\!\left(\dfrac{\nabla\phi}{\lvert\nabla\phi\rvert}\right) + \lambda_2\, F_K(x,y) - \lambda_2\, F_B(x,y)\right]$   (7)
Finally, the level set contour is iteratively evolved to the boundary of the object as follows:
$\phi^{n+1}(x,y) = \phi^{n}(x,y) + \tau\, \dfrac{\partial\phi^{n}(x,y)}{\partial t}$   (8)
where the integer $n$ is the number of time steps, with $t = n\tau$ for a step size $\tau > 0$. It is worth noting that using the smeared-out Heaviside and Dirac delta functions is important in order to obtain a global minimizer of the functional in (1), irrespective of the level set initialization in the image [21].
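As a sketch of how one evolution step can be realized (an illustration under our own assumptions, not the authors' code), the snippet below forms the region energies of (6) and applies (7) and (8); `smeared_dirac` is the function sketched in the previous block, `mu_L`, `P_L`, and `S_L` are 2D maps, and the step size `tau=0.1` is a hypothetical choice for the example.

```python
import numpy as np

def region_energy(mu_L, P_L, S_L, omega_t):
    """Per-pixel energy F_L of (6) from the membership map and the two shape priors."""
    return omega_t * mu_L * P_L + (1.0 - omega_t) * mu_L * S_L

def curvature(phi, eps_div=1e-8):
    """div(grad(phi)/|grad(phi)|), approximated with central differences."""
    gy, gx = np.gradient(phi)                       # gradients along rows and columns
    norm = np.sqrt(gx ** 2 + gy ** 2) + eps_div     # small constant avoids division by zero
    return np.gradient(gx / norm, axis=1) + np.gradient(gy / norm, axis=0)

def evolve_once(phi, F_K, F_B, lam1=6.0, lam2=6.0, eps=1.5, tau=0.1):
    """One explicit Euler step of (8) along the descent direction (7)."""
    dphi_dt = smeared_dirac(phi, eps) * (lam1 * curvature(phi) + lam2 * F_K - lam2 * F_B)
    return phi + tau * dphi_dt
```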

2.4. FCM Membership Function

Given an image $I_t$, the FCM clustering algorithm divides the pixels in the image domain $\Omega$ into two separate clusters, kidney and background, as shown in Figure 2. According to this algorithm, the optimal centroid values of the clusters and the corresponding membership degrees are obtained by iteratively minimizing an objective function of the following form [30]:
$J = \sum_{(x,y)\in\Omega} \sum_{L} \mu_L^2(x,y)\, \lVert I_t(x,y) - C_L \rVert^2$   (9)
where $C_L$ is the centroid value of the kidney ($L = K$) or background ($L = B$) cluster, and $\mu_L(x,y) \in [0,1]$ is the fuzzy membership degree of the pixel $(x,y)$ in the cluster $L$, satisfying the condition $\mu_K(x,y) + \mu_B(x,y) = 1$. $\lVert \cdot \rVert$ represents the Euclidean distance between the pixel’s intensity and the cluster’s centroid.
In our earlier method [25], the FCM algorithm is used to compute the fuzzy memberships of the image pixels before the level set evolution begins; once the level set starts evolving, the obtained memberships are not changed, which might not be accurate enough in some cases. We improve this approach in the present paper. First, the kidney and background centroid values are initialized as the means of the pixel intensities inside and outside of the initial level set contour, respectively. Then, the kidney/background fuzzy membership degrees of each pixel $(x,y)$ are iteratively updated during the level set evolution as follows:
$\mu_L(x,y) = \dfrac{\lVert I_t(x,y) - C_L \rVert^{-2}}{\lVert I_t(x,y) - C_K \rVert^{-2} + \lVert I_t(x,y) - C_B \rVert^{-2}}$   (10)
Similarly, the centroid values of the kidney/background clusters are computed as follows:
$C_L = \dfrac{\sum_{(x,y)\in\Omega} R_L(\phi(x,y))\, I_t(x,y)\, \mu_L^2(x,y)}{\sum_{(x,y)\in\Omega} R_L(\phi(x,y))\, \mu_L^2(x,y)}$   (11)
where $R_L(\phi(x,y)) = R_K(\phi(x,y)) = H_\varepsilon(\phi)$ for $L = K$, and $R_L(\phi(x,y)) = R_B(\phi(x,y)) = 1 - H_\varepsilon(\phi)$ for $L = B$. As such, the per-pixel fuzzy memberships and the kidney/background centroids are coupled with the level set function via (10) and (11) and are updated in each evolution step.
Overall, the membership value of a pixel in a specific cluster depends on the distance between the pixel’s intensity and the cluster centroid. This means that pixels are assigned high membership values (close to 1) in a certain cluster when their intensities are close to the centroid value, and low membership values (close to 0) when they are far from the centroid. As illustrated in Figure 2, the higher the brightness of a pixel in the kidney/background cluster, the higher its probability of belonging to this cluster. As also shown in Figure 2, relying only on the fuzzy memberships is often not enough to obtain accurate kidney segmentation, especially on low-contrast images. Thus, we incorporate the shape prior information with the fuzzy memberships to control the level set evolution.
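A compact sketch of the coupled updates (10) and (11) follows. It is our own rendering rather than the authors' implementation: `I` is the current time-point image as a 2D float array, `phi` is the level set, `smeared_heaviside` is the function sketched in Section 2.3, and the small constant `eps_d` is a numerical safeguard of our own, not part of the formulation.

```python
import numpy as np

def update_memberships_and_centroids(I, phi, C_K, C_B, eps=1.5, eps_d=1e-8):
    """One pass of the FCM updates (10)-(11) coupled to the current level set."""
    d_K = (I - C_K) ** 2 + eps_d                   # squared distance to the kidney centroid
    d_B = (I - C_B) ** 2 + eps_d                   # squared distance to the background centroid
    mu_K = (1.0 / d_K) / (1.0 / d_K + 1.0 / d_B)   # eq. (10); mu_K + mu_B = 1
    mu_B = 1.0 - mu_K

    H = smeared_heaviside(phi, eps)                # R_K = H_eps(phi), R_B = 1 - H_eps(phi)
    C_K = np.sum(H * I * mu_K ** 2) / np.sum(H * mu_K ** 2)                  # eq. (11)
    C_B = np.sum((1.0 - H) * I * mu_B ** 2) / np.sum((1.0 - H) * mu_B ** 2)  # eq. (11)
    return mu_K, mu_B, C_K, C_B
```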

2.5. Statistical Kidney Shape Model

Some earlier approaches (e.g., [6,25]) employ a 1st-order shape method in the construction of a kidney shape model. The major drawback of this method appears when a pixel is classified as kidney or background in all of the training images. In such cases, the pixel-wise probability of the observed label will be exactly 1, and the unobserved label’s probability will be exactly 0, which is often unreasonable. To circumvent this issue, we adopt the Bayesian parameter estimation method [26] in the construction of the PB-shape and SS-shape models in our work here. For the PB-shape model, a number $N$ of DCE-MRI kidney images are selected from varying subjects, and one among them is chosen as a reference image. These images are mutually registered to the selected reference image, assuming a 2D affine transformation, by the maximization of mutual information [31] (Figure 3). Then, the co-aligned images are manually segmented by an expert. Finally, the obtained ground-truth segmentations are used to build the shape model, as follows:
For each pixel $(x,y)$ in the co-aligned ground-truth images, when the kidney and background labels are both observed, the empirical kidney/background probability of this pixel is computed as follows [26,32]:
$P_L(x,y) = \dfrac{N_L(x,y) + \beta}{N + \beta\, O(x,y)} \cdot \dfrac{N}{N + \ell - O(x,y)}$   (12)
where $O(x,y)$ denotes the number of observed labels, which in this case equals 2, because both labels are observed. $N_L(x,y)$ indicates how many times the label $L$ is observed. $\beta$ is a pseudo-count added to the count of each observed label, and $\ell$ is the total number of possible region labels (kidney and background). On the other hand, when only the kidney or only the background label is observed in all of the images, $O(x,y)$ equals 1 and the probability of the observed label is computed from (12), while the probability of the unobserved label is computed as follows:
$P_L(x,y) = \dfrac{1}{\ell - O(x,y)} \left(1 - \dfrac{N}{N + \ell - O(x,y)}\right)$   (13)
According to the above steps, an example PB-shape model is shown in Figure 3. The same methodology is also adopted in the construction of the SS-shape model, but from a set of images selected on the fly from the specific patient’s sequence being segmented.
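The sketch below estimates the pixel-wise shape prior of (12) and (13) from a stack of co-aligned binary ground-truth masks for the binary kidney/background case ($\ell = 2$). It is an illustration under our own assumptions (mask values 1 = kidney, 0 = background; $\beta = 1$ as in Section 3), not the authors' implementation.

```python
import numpy as np

def shape_prior(masks, beta=1.0):
    """Bayesian shape prior (12)-(13) from an N x H x W stack of binary masks."""
    N = masks.shape[0]
    n_K = masks.sum(axis=0).astype(float)                     # times each pixel was labeled kidney
    n_B = N - n_K
    O = (n_K > 0).astype(float) + (n_B > 0).astype(float)     # number of observed labels (1 or 2)
    ell = 2.0                                                 # kidney and background

    # Eq. (12) for the labels that were observed at a pixel.
    P_K = (n_K + beta) / (N + beta * O) * N / (N + ell - O)
    P_B = (n_B + beta) / (N + beta * O) * N / (N + ell - O)

    # Eq. (13) replaces the probability of a label never observed at a pixel.
    unseen = (1.0 / np.maximum(ell - O, 1.0)) * (1.0 - N / (N + ell - O))
    P_K = np.where(n_K == 0, unseen, P_K)
    P_B = np.where(n_B == 0, unseen, P_B)
    return P_K, P_B
```

At every pixel the two returned probabilities sum to one, and a label that was never observed still receives the small but nonzero probability prescribed by (13).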

2.6. Sequence Partitioning and the Weight Factor

In order to segment the kidney of a specific patient, we partition the patient’s DCE-MRI sequence into three subsets. The already-constructed PB-shape model is employed to segment the kidneys from the images in the first subset $S_1$. The obtained kidney segmentations are used to construct the SS-shape model, which is blended with the PB-shape model to segment the kidneys from the images in the second subset $S_2$. The images in the third subset $S_3$ are segmented using only the SS-shape model.
We propose to employ an automated, fast approach for this sequence partitioning. First, all of the images in a given patient sequence are co-aligned via affine transformations to the reference image used in the PB-shape model construction. Then, for each image $I_t$ in the sequence, the mean of the pixel intensities in the kidney region is computed using the PB-shape model as follows:
$m_t = \dfrac{\sum_{(x,y)\in\Omega} P_K(x,y)\, I_t(x,y)}{\sum_{(x,y)\in\Omega} P_K(x,y)}$   (14)
Note that this step does not require any kidney segmentation beforehand and thus can be carried out before starting our segmentation method. Figure 4 shows these mean values across the sequence in Figure 1. The $T_1$ images with the highest mean values (indicated by red circles in Figure 4) are selected to constitute the subset $S_1$. The $T_2$ images whose $m_t$ values come next (indicated by black diamonds) are selected to form the subset $S_2$. Finally, the remaining $T_3$ images ($\sum_{i=1}^{3} T_i = T$) in the sequence constitute the subset $S_3$. Accordingly, the weight factor $\omega_t$ in (6) is computed for each image in these subsets as follows:
$\omega_t = \begin{cases} 1 & I_t \in S_1 \\ (T_2 - i)/T_2 & I_t \in S_2 \\ 0 & I_t \in S_3 \end{cases}$   (15)
where $i$ is the index of the image $I_t$ in $S_2$ when the $S_2$ images are ordered by decreasing $m_t$ value. The dashed green line in Figure 4 shows the values of $\omega_t$ across the subject’s sequence shown in Figure 1.
Note that this partitioning procedure collects the high-contrast images of the post-contrast interval of the MRI sequence in $S_1$, thus allowing the PB-shape model alone ($\omega_t = 1$) to accurately segment the kidneys from the $S_1$ images. The SS-shape model is constructed from the kidneys segmented from the $S_1$ images and is used together with the PB-shape model (while $\omega_t$ gradually decreases) to segment the images in $S_2$. The SS-shape model is incrementally updated on the fly while working on the $S_2$ images (Figure 5). As a new segmentation becomes available, it is added to the set used to update the SS-shape model. Once all of the $S_2$ images are segmented, the SS-shape model is not updated anymore.
Note that the partitioning procedure keeps the more challenging, lower-contrast images from the pre- and late-contrast intervals of the sequence in $S_3$. However, the SS-shape model alone ($\omega_t = 0$) is able to precisely segment those $S_3$ images, as it more accurately captures the kidney’s shape of this specific patient.
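The partitioning and weighting procedure of (14) and (15) can be sketched as follows; `images` is the list of co-aligned time-point images, `P_K` is the PB-shape kidney probability map, and the defaults $T_1 = 20$ and $T_2 = 10$ follow the setting in Section 3. The sketch is our own illustration of the procedure rather than the authors' code.

```python
import numpy as np

def partition_and_weights(images, P_K, T1=20, T2=10):
    """Split the sequence into S1/S2/S3 via (14) and assign the weights of (15)."""
    m = np.array([np.sum(P_K * I) / np.sum(P_K) for I in images])   # eq. (14) per image
    order = np.argsort(-m)                                          # indices by decreasing m_t
    S1, S2, S3 = order[:T1], order[T1:T1 + T2], order[T1 + T2:]

    omega = np.zeros(len(images))
    omega[S1] = 1.0                                   # PB-shape model only
    n2 = len(S2)
    for i, t in enumerate(S2, start=1):               # i = rank within S2 (1..T2)
        omega[t] = (n2 - i) / n2                      # eq. (15): blend, decaying to 0
    # omega stays 0 on S3: SS-shape model only.
    return S1, S2, S3, omega
```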
Finally, a flowchart of the proposed kidney segmentation method is shown in Figure 6.

3. Results

The performance of the proposed method was evaluated on the DCE-MRI datasets of 45 subjects. The segmentation accuracy was assessed using the DSC (mean ± standard deviation), IoU (mean ± standard deviation), and HD95 (mean ± standard deviation) metrics [2]. The PB-shape model was trained from the 30 ground-truth images of 30 different subjects. The parameters of the proposed method were experimentally set as follows: $\varepsilon = 1.5$, $\lambda_1 = 6$, $\lambda_2 = 6$, $T_1 = 20$, $T_2 = 10$, and $\beta = 1$. The values of all of the parameters were fixed and not further tuned in all of the conducted experiments.
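For reference, a minimal sketch of the three metrics on binary masks is given below; it follows the common pixel-grid definitions of DSC, IoU, and HD95 (boundary distances via a Euclidean distance transform) and is our own illustration, not the evaluation code used in the paper.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dsc(seg, gt):
    seg, gt = seg.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(seg, gt).sum() / (seg.sum() + gt.sum())

def iou(seg, gt):
    seg, gt = seg.astype(bool), gt.astype(bool)
    return np.logical_and(seg, gt).sum() / np.logical_or(seg, gt).sum()

def hd95(seg, gt):
    seg, gt = seg.astype(bool), gt.astype(bool)
    b_seg = seg & ~binary_erosion(seg)            # boundary pixels of the segmentation
    b_gt = gt & ~binary_erosion(gt)               # boundary pixels of the ground truth
    d_to_gt = distance_transform_edt(~b_gt)       # distance of every pixel to the gt boundary
    d_to_seg = distance_transform_edt(~b_seg)
    dists = np.concatenate([d_to_gt[b_seg], d_to_seg[b_gt]])
    return np.percentile(dists, 95)               # 95th percentile of symmetric boundary distances
```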

3.1. Method Performance with Comparisons to Other Methods

We first evaluated the performance of the proposed method on the gathered DCE-MRIs. Figure 7 depicts the segmentation process by our method for two different images. It shows the level set contour evolution during the segmentation procedure after different iterations. The figure also shows the final segmentation result. As shown in Figure 7, the proposed method can efficiently drive the contour towards the boundary of the kidneys in the images.
We then compared the segmentation performance of this new method against the following previous methods: FCMLS [19], FML [22], and PBPSFL [25]. In our experiments, we deliberately initialized the level set contour extremely far away from the kidney in all of the methods. We report the performances on all of the images, and also on a particular set of low-contrast images (the first 5 images from each subject, totaling 225 images), in terms of DSC, IoU, and HD95 in Table 1.
The results in Table 1 demonstrate the improvement of the proposed method over our previous methods by achieving the highest mean DSC and IoU values and the lowest mean HD95 values, with a noticeable advantage on the low-contrast images. The lower standard deviation values of all of the evaluation metrics confirm the new method’s more consistent performance compared to the other methods. Figure 8 shows a qualitative comparison between these methods on two low-contrast images from two different subjects. Clearly, the proposed method achieves notably better segmentation accuracy than the other methods.
In order to further confirm the high performance of the proposed method over the PBPSFL method [25], the two methods were used to segment the kidneys from images corrupted by additive Gaussian noise (mean 0, variance 0.01, with the image intensities normalized to the range [0, 1]). Figure 9 visually compares the segmentation performances of both methods on a number of noisy images, while quantitative comparison results are given in Table 2.
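The corruption used in this robustness test can be reproduced with a few lines; the sketch below normalizes an image to [0, 1] and adds zero-mean Gaussian noise of variance 0.01, with the final clipping back to [0, 1] being our own choice rather than part of the protocol.

```python
import numpy as np

def add_gaussian_noise(I, var=0.01, seed=0):
    """Normalize I to [0, 1] and add zero-mean Gaussian noise of the given variance."""
    rng = np.random.default_rng(seed)
    I = (I - I.min()) / (I.max() - I.min())
    return np.clip(I + rng.normal(0.0, np.sqrt(var), I.shape), 0.0, 1.0)
```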
The proposed method clearly outperforms the PBPSFL method in the presence of noise. It has a higher mean DSC and IoU, and lower mean HD95 values. While the two methods share the idea of using both the PB-shape and the SS-shape models, the higher performance of the new method can be attributed to the better shape model that was constructed, as explained in Section 2.5, and to updating the FCM memberships during the level set evolution.
The efficiency of the proposed method is further demonstrated by comparing its accuracy against those of a number of state-of-the-art level-set-based methods. Table 3 compares the accuracy, on all of the images, of the proposed method, our previous FCMLS [19], PBPSFL [25], and FML [22] methods, the shape-based (SB) method [33], the vector level set (VLS) [34], the 2nd-order MGRF level set (2nd-MGRF) [4], and a parametric kernel graph cut (PKGC) [35]. The DSC values of the PKGC and the 2nd-MGRF methods are reported in [5,6], using the same DCE-MRI datasets that were used in our study. As neither the output segmented kidneys obtained by these two methods nor faithful implementations of them are available to us, we are not able to compute or report their IoU and HD95 values. Clearly, as shown in Table 3, the proposed method achieves the best segmentation accuracy compared to the other methods.
While we have not yet tried to optimize the time performance of the implementation of the new method, it takes about 8.4 min on average to segment a sequence of 80 images of size 256 × 256 pixels. In comparison, the execution time of our previous method [25] on the same sequence is 11.2 min, which demonstrates that the proposed method is faster. All of the runtimes were measured using MATLAB (R2015a) implementations of the methods on a 1.80 GHz Intel Core i7 CPU with 16 GB of RAM.

3.2. Ablation Experiments

We have performed an ablation study in order to assess the contribution of each component to the proposed method’s performance. We then evaluated the effect of some user-supplied parameters on the obtained segmentation accuracy. In our ablation study, we compared three scenarios. First, we evaluated the performance of our level-set-based method incorporating only fuzzy memberships and the PB-shape model. The FCM algorithm was used to compute the fuzzy memberships of the input image before the level set evolution began, and the memberships were not changed afterwards. In the second scenario, the FCM algorithm was embedded into the level set method, and the fuzzy memberships were updated as the level set evolved. The third scenario represented our complete approach, integrating the SS-shape model with the PB-shape model and the embedded fuzzy memberships. In all three of the scenarios, we assessed the impact on the final segmentation accuracy. The quantitative and qualitative comparison results are reported in Table 4 and Figure 10.
While the segmentation accuracy in Table 4 for all of the images improved from Scenario 1 to Scenario 2 to, eventually, Scenario 3, the impact was more prominent on low-contrast images. Updating the fuzzy memberships during the contour evolution improved the segmentation accuracy of the low-contrast images by about 3%, 4%, and 2.5 mm in terms of the mean DSC, IoU, and HD95, respectively. Moreover, the incorporation of the SS-shape model yielded a further improvement of about 3% in DSC, 5% in IoU, and 1.6 mm in HD95. As shown in Figure 10, the proposed method in Scenario 3 could more efficiently segment and catch the boundary of the target kidneys, thus generating more accurate segmentations. Overall, the results in Table 4 and Figure 10 highlight the benefit of the proposed integration of the embedded fuzzy memberships and the SS-shape model along with the PB-shape model into our level set framework.
Then, we studied the effect of some user-supplied parameters on the proposed method. We first investigated the impact of changing the values of $T_1$ and $T_2$ on the segmentation accuracy. Table 5 reports these results for three combinations of $T_1$ and $T_2$ values, demonstrating that the proposed method achieved the best segmentation accuracy when $T_1 = 20$ and $T_2 = 10$, as this allows more images to be used in building the SS-shape model.
Afterwards, we evaluated the proposed method’s performance against different level set initializations. Figure 11 shows the accuracy of the proposed method on a sample DCE-MRI image with the level set contour initialized in different positions in the image. From the visual results and the reported DSC values in Figure 11, the segmentation accuracy did not change in any of the cases. This confirms the method’s high and consistent performance, regardless of where the level set contour is initialized in the image.

3.3. Comparison to U-Net-Based Deep Neural Networks

In recent years, CNNs in general, and the U-Net architecture in particular [13], have been applied to various medical image segmentation problems with good results [11,12,13,14,15,16,17,18]. Therefore, we have compared the proposed method against a base U-Net CNN and one of its variants, named BCDU-Net [28]. Both of the networks were trained from scratch on data from 18 subjects, validated on data from 12 subjects, and tested on the remaining 15 subjects. In order to prevent the models from overfitting, the data of each subject were augmented by performing the following operations on each image in the sequence: vertical and horizontal flipping, random x- and y-translations, rotations by ±45°, ±90°, and 180°, and the addition of Gaussian noise with zero mean and a variance of 0.01, 0.02, and 0.05 (the image intensities were normalized to the range [0, 1]). The augmentation results in a total of 16,404 images for training and 10,980 images for validation.
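The sketch below illustrates these augmentation operations for one image/mask pair using scipy.ndimage; the image is assumed to be normalized to [0, 1], and the ±10-pixel translation range is a hypothetical choice of ours, as the exact offsets are not specified above.

```python
import numpy as np
from scipy.ndimage import rotate, shift

def augment(image, mask, seed=0):
    """Return augmented (image, mask) pairs: flips, rotations, a random shift, and noise."""
    rng = np.random.default_rng(seed)
    pairs = [(np.flipud(image), np.flipud(mask)),          # vertical flip
             (np.fliplr(image), np.fliplr(mask))]          # horizontal flip
    for angle in (45, -45, 90, -90, 180):                  # rotations by +/-45, +/-90, 180 degrees
        pairs.append((rotate(image, angle, reshape=False),
                      rotate(mask, angle, reshape=False, order=0)))
    dy, dx = rng.integers(-10, 11, size=2)                 # random x/y translation (range assumed)
    pairs.append((shift(image, (dy, dx)), shift(mask, (dy, dx), order=0)))
    for var in (0.01, 0.02, 0.05):                         # additive zero-mean Gaussian noise
        noisy = np.clip(image + rng.normal(0.0, np.sqrt(var), image.shape), 0.0, 1.0)
        pairs.append((noisy, mask))
    return pairs
```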
In order to further enlarge the data, following [18,36], we used the high-quality CT scans of 210 subjects from the KiTS19 dataset [29]. We manually split each image into two sub-images of size 256 × 256 for the left and right kidneys. This eventually increased the number of training and validation images to 40,050 and 10,980, respectively. The two networks were trained for 200 epochs using the Adam optimizer and an initial learning rate of 0.0001, which decays by a factor of 0.1 whenever the validation loss is not reduced for 10 consecutive epochs. In order to further avoid overfitting, we used dropout regularization with a 50% ratio during the training. The training was carried out on a workstation with dual 2.20 GHz Intel Xeon Silver 4114 CPUs, 128 GB of RAM, and two Nvidia GPUs, in a Python environment using the Keras API with a TensorFlow backend. The trained networks were then used to segment the kidneys of the test subjects. Table 6 presents a comparison between the segmentation accuracies obtained by the proposed method, the U-Net model, and the BCDU-Net model with three dense blocks.
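A minimal, runnable Keras sketch of this training configuration is shown below. Only the optimizer, the initial learning rate, the plateau-based decay, the dropout ratio, and the epoch budget follow the description above; the tiny stand-in model, the loss, the batch size, and the dummy arrays are our own placeholders, not the actual U-Net/BCDU-Net setup.

```python
import numpy as np
import tensorflow as tf

# Stand-in model: a tiny conv net in place of the actual U-Net/BCDU-Net,
# just to make the configuration runnable; dropout is set to 50% as above.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(256, 256, 1)),
    tf.keras.layers.Conv2D(8, 3, padding="same", activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Conv2D(1, 1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="binary_crossentropy")

# Decay the learning rate by a factor of 0.1 when the validation loss
# has not improved for 10 consecutive epochs.
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss",
                                                 factor=0.1, patience=10)

# Dummy arrays stand in for the augmented training/validation images and masks.
x_tr, y_tr = np.zeros((4, 256, 256, 1)), np.zeros((4, 256, 256, 1))
x_va, y_va = np.zeros((2, 256, 256, 1)), np.zeros((2, 256, 256, 1))
model.fit(x_tr, y_tr, validation_data=(x_va, y_va),
          epochs=2, batch_size=2, callbacks=[reduce_lr])   # 200 epochs in the paper
```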
It can be seen from Table 6 that the BCDU-Net model performed considerably better than the base U-Net model, yet our method performed notably better than both. The mean DSC and IoU of the proposed method were higher than those of the U-Net and BCDU-Net models. Moreover, the mean HD95 values clearly show the gap between the accuracy of our method and that of the two models. The HD95 metric characterizes the divergence between the boundary surfaces of the segmentation result and the ground-truth kidney [2,27]. As such, unlike DSC, it is more sensitive to shape deviations of the segmentation result from the ground truth [27]. On the other hand, the standard deviations of DSC, IoU, and HD95 indicate that the proposed method is much more consistent and stable than the U-Net and BCDU-Net models. The improvement was more profound on the low-contrast images. It is also important to mention that, compared to the deep U-Nets, the behavior of the proposed method and the interpretation of its results are easier to explain. For example, obtaining a rather noisy kidney contour in the segmentation result would suggest increasing the weighting factor $\lambda_1$ in our method as a corrective action.

4. Conclusions

Kidney segmentation from DCE-MRI images is important for the assessment of renal transplant function. This paper has proposed a new and accurate method to automatically segment kidneys from DCE-MRI image sequences. The paper makes the following contributions:
  • It integrates the FCM clustering algorithm, the level set method, and both PB-shape and SS-shape statistics for this problem for the first time in the literature;
  • The FCM clustering algorithm is embedded into the level set method; a pixel’s kidney/background fuzzy memberships are coupled with the level set evolution, considering the image intensities directly, as well as the kidney’s shape indirectly. This allows the proposed method to precisely capture the kidney, even on noisy and low-contrast images;
  • The PB-shape and the SS-shape models are built using Bayesian parameter estimation, which statistically accounts for kidney pixels that are possibly not observed in the images that are used for the model building, thus rendering more accurate shape models;
  • An automated, simple, and time-efficient strategy is proposed for partitioning the patient’s sequence into three subsets in order to properly determine the blending factor between the PB-shape and the SS-shape models;
  • The experiments that were performed on 45 subjects demonstrate the accuracy of the proposed method and its robustness against noise, low contrast, and contour initialization, with no need for tuning the method’s parameters. The comparisons with several state-of-the-art level set methods, and with two CNNs based on the U-Net architecture, confirm the superior and consistent performance of the proposed method.
Nevertheless, the proposed method has some limitations. First, incorporating the shape information into the level set method requires a prerequisite registration step that aligns an input DCE-MRI image to the shape prior model in order to compensate for the motion due to the patient’s breathing and movement during the data acquisition. Any errors occurring in this alignment step would affect the segmentation performance. Second, similar to all level-set-based methods, the level set contour evolution in our method is guided by a partial differential equation containing several weighting parameters. Moreover, we use a weight factor that controls the contribution of the two shape statistics in the segmentation procedure. All of these weighting parameters require proper setting. Third, our new method takes about 7 s to segment one image of size 256 × 256 pixels, which is not yet suitable for real-time operation.
Our current research is directed towards improving the proposed method and alleviating its limitations. In our experiments, the values of the weighting parameters are experimentally chosen and then fixed throughout all of the conducted experiments without further tuning. We plan to investigate other weighting strategies in order to systematically find the proper values for these weights, similarly to the scheme proposed in [37]. We also plan to investigate combining kidney segmentation and registration into the level set’s energy function; solving both tasks simultaneously would diminish the propagation of errors from one task to the other. Last but not least, in an attempt to improve the time performance of the proposed method, we are working on converting the MATLAB code to C++ code optimized for GPU computing.

Author Contributions

Conceptualization, M.E.-M., N.S.A., and A.E.-B.; methodology, M.E.-M. and A.E.-B.; software, R.K.; validation, R.K., M.E.-M., and M.A.E.-G.; formal analysis, M.E.-M., R.K., and A.E.-B.; investigation, M.E.-M. and A.E.-B.; resources, R.K.; data curation, M.A.E.-G. and A.E.-B.; writing—original draft preparation, R.K. and M.E.-M.; writing—review and editing, M.E.-M., N.S.A., and A.E.-B.; visualization, R.K.; supervision, M.E.-M. and A.E.-B.; project administration, M.E.-M. and A.E.-B.; funding acquisition, M.E.-M. and A.E.-B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by (1) the Science and Technology Development Fund (STDF), Egypt (grant USC 17:253), and (2) the Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R40), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board of University of Louisville (protocol code 14.1052).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are available upon reasonable request to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mostapha, M.; Khalifa, F.; Alansary, A.; Soliman, A.; Suri, J.; El-Baz, A.S. Computer-aided diagnosis systems for acute renal transplant rejection: Challenges and methodologies. In Abdomen And Thoracic Imaging; Springer: Berlin/Heidelberg, Germany, 2014; pp. 1–35.
  2. Zöllner, F.G.; Kociński, M.; Hansen, L.; Golla, A.K.; Trbalić, A.Š.; Lundervold, A.; Materka, A.; Rogelj, P. Kidney segmentation in renal magnetic resonance imaging-current status and prospects. IEEE Access 2021, 9, 71577–71605.
  3. Yuksel, S.E.; El-Baz, A.; Farag, A.A.; El-Ghar, M.; Eldiasty, T.; Ghoneim, M.A. A kidney segmentation framework for dynamic contrast enhanced magnetic resonance imaging. J. Vib. Control. 2007, 13, 1505–1516.
  4. Khalifa, F.; El-Baz, A.; Gimel’farb, G.; El-Ghar, M.A. Non-invasive image-based approach for early detection of acute renal rejection. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Beijing, China, 20–24 September 2010; pp. 10–18.
  5. Khalifa, F.; Beache, G.M.; El-Ghar, M.A.; El-Diasty, T.; Gimel’farb, G.; Kong, M.; El-Baz, A. Dynamic contrast-enhanced MRI-based early detection of acute renal transplant rejection. IEEE Trans. Med. Imaging 2013, 32, 1910–1927.
  6. Liu, N.; Soliman, A.; Gimel’farb, G.; El-Baz, A. Segmenting kidney DCE-MRI using 1st-order shape and 5th-order appearance priors. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 77–84.
  7. Hodneland, E.; Hanson, E.A.; Lundervold, A.; Modersitzki, J.; Eikefjord, E.; Munthe-Kaas, A.Z. Segmentation-driven image registration-application to 4D DCE-MRI recordings of the moving kidneys. IEEE Trans. Image Process. 2014, 23, 2392–2404.
  8. Eltanboly, A.; Ghazal, M.; Hajjdiab, H.; Shalaby, A.; Switala, A.; Mahmoud, A.; Sahoo, P.; El-Azab, M.; El-Baz, A. Level sets-based image segmentation approach using statistical shape priors. Appl. Math. Comput. 2019, 340, 164–179.
  9. Al-Shamasneh, A.R.; Jalab, H.A.; Palaiahnakote, S.; Obaidellah, U.H.; Ibrahim, R.W.; El-Melegy, M.T. A new local fractional entropy-based model for kidney MRI image enhancement. Entropy 2018, 20, 344.
  10. Al-Shamasneh, A.R.; Jalab, H.A.; Shivakumara, P.; Ibrahim, R.W.; Obaidellah, U.H. Kidney segmentation in MR images using active contour model driven by fractional-based energy minimization. Signal Image Video Process. 2020, 14, 1361–1368.
  11. Lundervold, A.S.; Rørvik, J.; Lundervold, A. Fast semi-supervised segmentation of the kidneys in DCE-MRI using convolutional neural networks and transfer learning. In Proceedings of the 2nd International Scientific Symposium, Functional Renal Imaging: Where Physiology, Nephrology, Radiology and Physics Meet, Berlin, Germany, 11–13 October 2017.
  12. Haghighi, M.; Warfield, S.K.; Kurugol, S. Automatic renal segmentation in DCE-MRI using convolutional neural networks. In Proceedings of the IEEE International Symposium on Biomedical Imaging (ISBI), Washington, DC, USA, 4–7 April 2018; pp. 1534–1537.
  13. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241.
  14. Milecki, L.; Bodard, S.; Correas, J.M.; Timsit, M.O.; Vakalopoulou, M. 3D unsupervised kidney graft segmentation based on deep learning and multi-sequence MRI. In Proceedings of the IEEE International Symposium on Biomedical Imaging (ISBI), Nice, France, 13–16 April 2021; pp. 1781–1785.
  15. Bevilacqua, V.; Brunetti, A.; Cascarano, G.D.; Guerriero, A.; Pesce, F.; Moschetta, M.; Gesualdo, L. A comparison between two semantic deep learning frameworks for the autosomal dominant polycystic kidney disease segmentation based on magnetic resonance images. BMC Med. Inform. Decis. Mak. 2019, 19, 244.
  16. Brunetti, A.; Cascarano, G.D.; Feudis, I.D.; Moschetta, M.; Gesualdo, L.; Bevilacqua, V. Detection and segmentation of kidneys from magnetic resonance images in patients with autosomal dominant polycystic kidney disease. In Proceedings of the International Conference on Intelligent Computing, Nanchang, China, 3–6 August 2019; pp. 639–650.
  17. Isensee, F.; Jaeger, P.F.; Kohl, S.A.; Petersen, J.; Maier-Hein, K.H. nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 2021, 18, 203–211.
  18. Kavur, A.E.; Gezer, N.S.; Barış, M.; Aslan, S.; Conze, P.H.; Groza, V.; Pham, D.D.; Chatterjee, S.; Ernst, P.; Özkan, S.; et al. CHAOS challenge-combined (CT-MR) healthy abdominal organ segmentation. Med. Image Anal. 2021, 69, 101950.
  19. El-Melegy, M.T.; Abd El-karim, R.M.; El-Baz, A.; El-Ghar, M.A. Fuzzy membership-driven level set for automatic kidney segmentation from DCE-MRI. In Proceedings of the IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8.
  20. Nayak, J.; Naik, B.; Behera, H.S. Fuzzy C-means (FCM) clustering algorithm: A decade review from 2000 to 2014. In Computational Intelligence in Data Mining; Springer: Berlin/Heidelberg, Germany, 2015; pp. 133–149.
  21. Fedkiw, R.; Osher, S. Level Set Methods and Dynamic Implicit Surfaces; Springer: Berlin/Heidelberg, Germany, 2002.
  22. El-Melegy, M.T.; Abd El-Karim, R.M.; Abou El-Ghar, M.; Shehata, M.; Khalifa, F.; El-Baz, A.S. Kidney segmentation from DCE-MRI converging level set methods, fuzzy clustering and Markov random field modeling. Sci. Rep. 2022. In Press.
  23. Heimann, T.; Meinzer, H.P. Statistical shape models for 3D medical image segmentation: A review. Med. Image Anal. 2009, 13, 543–563.
  24. Shi, Y.; Qi, F.; Xue, Z.; Chen, L.; Ito, K.; Matsuo, H.; Shen, D. Segmenting lung fields in serial chest radiographs using both population-based and subject-specific shape statistics. IEEE Trans. Med. Imaging 2008, 27, 481–494.
  25. El-Melegy, M.T.; Abd El-Karim, R.M.; El-Baz, A.S.; Abou El-Ghar, M. A Combined Fuzzy C-means and level set method for automatic DCE-MRI kidney segmentation using both population-based and patient-specific shape statistics. In Proceedings of the IEEE International Conference on Fuzzy Systems, Glasgow, UK, 19–24 July 2020; pp. 1–8.
  26. Friedman, N.; Singer, Y. Efficient Bayesian parameter estimation in large discrete domains. In Proceedings of the 12th International Conference on Advances in Neural Information Processing Systems (NIPS’98), Denver, CO, USA, 30 November–5 December 1998; pp. 417–423.
  27. Reinke, A.; Eisenmann, M.; Tizabi, M.D.; Sudre, C.H.; Rädsch, T.; Antonelli, M.; Arbel, T.; Bakas, S.; Cardoso, M.J.; Cheplygina, V.; et al. Common limitations of image processing metrics: A picture story. arXiv 2021.
  28. Azad, R.; Asadi-Aghbolaghi, M.; Fathy, M.; Escalera, S. Bi-Directional ConvLSTM U-Net with Densley connected convolutions. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, Seoul, Korea, 27–28 October 2019; pp. 1–10.
  29. Heller, N.; Sathianathen, N.; Kalapara, A.; Walczak, E.; Moore, K.; Kaluzniak, H.; Rosenberg, J.; Blake, P.; Rengel, Z.; Oestreich, M.; et al. The kits19 challenge data: 300 kidney tumor cases with clinical context, CT semantic segmentations, and surgical outcomes. arXiv 2019.
  30. El-Melegy, M.T.; Mokhtar, H. Tumor segmentation in brain MRI using a fuzzy approach with class center priors. EURASIP J. Image Video Process. 2014, 2014, 21.
  31. Viola, P.; Wells, W.M., III. Alignment by maximization of mutual information. Int. J. Comput. Vis. 1997, 24, 137–154.
  32. Heller, K.A.; Svore, K.M.; Keromytis, A.D.; Stolfo, S.J. One class support vector machines for detecting anomalous windows registry accesses. In Proceedings of the ICDM Workshop on Data Mining for Computer Security, Melbourne, FL, USA, 19 November 2003.
  33. Tsai, A.; Yezzi, A.; Wells, W.; Tempany, C.; Tucker, D.; Fan, A.; Willsky, A. A shape-based approach to the segmentation of medical imagery using level sets. IEEE Trans. Med. Imaging 2003, 22, 137–154.
  34. El Munim, H.E.A.; Farag, A.A. Curve/surface representation and evolution using vector level sets with application to the shape-based segmentation problem. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 945–958.
  35. Salah, M.B.; Mitiche, A.; Ayed, I.B. Multiregion image segmentation by parametric kernel graph cuts. IEEE Trans. Image Process. 2010, 20, 545–557.
  36. Villarini, B.; Asaturyan, H.; Kurugol, S.; Afacan, O.; Bell, J.D.; Thomas, E.L. 3D Deep learning for anatomical structure segmentation in multiple imaging modalities. In Proceedings of the IEEE 34th International Symposium on Computer-Based Medical Systems (CBMS), Aveiro, Portugal, 7–9 June 2021; pp. 166–171.
  37. Leng, L.; Zhang, J.S.; Khan, M.K.; Chen, X.; Alghathbar, K. Dynamic weighted discrimination power analysis: A novel approach for face and palmprint recognition in DCT domain. Int. J. Phys. Sci. 2010, 5, 2543–2554.
Figure 1. Contrast variation in DCE-MRI images of a patient’s kidney scanned at different time instants t after bolus injection. The concentration of the contrast agent in the kidney tissue is low at the beginning of the acquisition process, yielding low-intensity images (pre-contrast interval), reaches its maximum, generating high-intensity images (post-contrast interval), and then decreases slowly, resulting again in low-intensity images (late-contrast interval).
Figure 2. FCM clustering segmentation for a DCE-MRI grayscale image (a) into kidney cluster (b) and background cluster (c). The values of pixels in 5 × 5 windows centered at the red point are shown for the original DCE-MRI image in (d), the kidney cluster in (e), and the background cluster in (f), where $C_K = 248.3$ and $C_B = 96.2$.
Figure 3. PB-shape model constructed using the Bayesian parameter estimation method: Some DCE-MRI kidney images before (a) and after (b) affine registration. Column (c) shows manually segmented kidneys after alignment. Column (d) shows the PB-shape model constructed before (top) and after (bottom) affine registration.
Figure 4. Changes of the mean of pixel intensities in the kidney region $m_t$ and the weight factor $\omega_t$ across the subject’s DCE-MRI sequence of Figure 1. Red circles indicate the highest-contrast images included in subset $S_1$ ($T_1 = 20$), while black diamonds refer to the next highest-contrast images that comprise subset $S_2$ ($T_2 = 10$).
Figure 5. An SS-shape model constructed using Bayesian parameter estimation and updated during segmentation with S 2 images of the subject’s sequence in Figure 1. As 𝒾 increases, the model more precisely captures the patient’s kidney shape ( 2 = 10).
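Figure 5 shows the SS-shape model being refined as more S_2 frames of the same subject are segmented. The paper updates it via Bayesian parameter estimation; the simpler running-mean update below only illustrates the "improves as i grows" behaviour and is not the authors' update rule.

```python
import numpy as np

def update_ss_shape(ss_prior, new_mask, i):
    """Fold the i-th newly segmented mask (i = 1, 2, ...) into the
    subject-specific shape map as an incremental running mean."""
    new_mask = new_mask.astype(float)
    # Running mean over the i masks seen so far; for i = 1 this returns new_mask itself.
    return ss_prior + (new_mask - ss_prior) / float(i)
```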
Figure 6. Flowchart of the proposed level-set-based kidney segmentation method.
Figure 7. Evolution of the level set contour during the segmentation of the kidney from two DCE-MRI images (one per row) by the proposed method. (a) Initial level set contour. (b–d) Contour after 10, 30, and 40 iterations. (e) Final segmented kidneys obtained after 60 iterations.
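Figure 7 shows the contour converging within roughly 60 iterations. The snippet below is a deliberately stripped-down, Chan-Vese-style region evolution intended only to convey what one iteration of a level set update looks like; it omits the fuzzy memberships, the PB-/SS-shape terms, and the curvature regularization of the proposed energy, so it should be read as a generic sketch rather than the authors' algorithm.

```python
import numpy as np

def evolve_region_level_set(phi, image, n_iter=60, dt=0.5):
    """Very simplified region-based level set evolution (Chan-Vese flavour).

    phi > 0 is treated as 'inside'; the zero level set is the contour.
    """
    image = image.astype(float)
    for _ in range(n_iter):
        inside, outside = phi > 0, phi <= 0
        c_in = image[inside].mean() if inside.any() else 0.0
        c_out = image[outside].mean() if outside.any() else 0.0
        # Positive force where a pixel resembles the inside mean, so phi grows there.
        force = (image - c_out) ** 2 - (image - c_in) ** 2
        force /= np.abs(force).max() + 1e-8            # normalize the step size
        gy, gx = np.gradient(phi)
        grad_mag = np.sqrt(gx ** 2 + gy ** 2)
        phi = phi + dt * force * grad_mag              # advect the level set function
    return phi
```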
Figure 8. Segmentation results of the proposed method and our previous methods. (a) DCE-MRI kidney images with the initial level set contour. Segmentation results (red outlines) with overlaid ground-truth segmentations (green outlines), along with the corresponding DSC values, are shown for: (b) the FCMLS method [19], (c) the PBPSFL method [25], (d) the FML method [22], and (e) the proposed method.
Figure 9. Performance of our method in the presence of noise compared with PBPSFL [25]. (a) DCE-MRI kidney images with added Gaussian noise; the initial level set contour is shown in red. Segmented kidneys (red), with their DSC values, extracted from the original and noisy images by the PBPSFL method [25] (b,c) and by the proposed method (d,e). Ground-truth segmentations are shown in green.
Figure 10. Segmentation results of two DCE-MRI images reflecting the effective role of each component in the proposed method. (a) DCE-MRI kidney images with the initial contour in red. Segmented kidneys in red, alongside their DSC values, obtained by our level-set-based method incorporating the PB-shape model with (b) fuzzy memberships, (c) embedded fuzzy memberships, and (d) embedded fuzzy memberships and the SS-shape model. Ground-truth segmentations are in green.
Figure 11. Segmentation results using the proposed method on a DCE-MRI image with different level set contour initializations (top row, red outlines). The segmented kidney (bottom row, red outlines) closely matches the ground truth (green outlines) for every initialization, as evidenced by the associated DSC values.
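Figure 11 indicates that the method is not sensitive to where the initial contour is placed. A circular initialization such as the one sketched below (positive inside the circle, negative outside) is a common, assumed choice; the paper's exact initialization procedure is not reproduced here.

```python
import numpy as np

def circular_level_set(shape, center, radius):
    """Initial level set function: positive inside a circle with the given
    center (row, col) and radius, negative outside, zero on the contour."""
    rows, cols = np.mgrid[:shape[0], :shape[1]]
    return radius - np.sqrt((rows - center[0]) ** 2 + (cols - center[1]) ** 2)
```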
Table 1. Comparison between the segmentation performance of the proposed method and our previous methods.

| Method | DSC (all images) | IoU (all images) | HD95 (all images) | DSC (low-contrast) | IoU (low-contrast) | HD95 (low-contrast) |
|---|---|---|---|---|---|---|
| FCMLS [19] | 0.941 ± 0.042 | 0.89 ± 0.056 | 1.78 ± 6.21 | 0.88 ± 0.137 | 0.80 ± 0.156 | 8.18 ± 22.8 |
| PBPSFL [25] | 0.952 ± 0.041 | 0.90 ± 0.043 | 1.11 ± 1.7 | 0.923 ± 0.13 | 0.88 ± 0.056 | 1.93 ± 2.32 |
| FML [22] | 0.956 ± 0.019 | 0.91 ± 0.035 | 1.15 ± 1.46 | 0.936 ± 0.024 | 0.88 ± 0.042 | 1.94 ± 1.58 |
| Proposed | 0.953 ± 0.018 | 0.91 ± 0.033 | 1.10 ± 1.4 | 0.942 ± 0.02 | 0.90 ± 0.034 | 1.56 ± 1.46 |
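As a reference for the DSC, IoU, and HD95 columns reported in Tables 1–6, the following is a minimal sketch of how these metrics are typically computed from binary masks. HD95 in particular has several common variants; the pooled-percentile version below is only one of them and is not necessarily the exact implementation used by the authors.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dsc_iou(pred, gt):
    """Dice similarity coefficient and intersection-over-union of two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    dsc = 2.0 * inter / (pred.sum() + gt.sum())
    iou = inter / np.logical_or(pred, gt).sum()
    return dsc, iou

def hd95(pred, gt):
    """95th-percentile symmetric boundary distance in pixels (one common HD95 variant).

    Assumes non-empty boolean masks.
    """
    pred_border = pred ^ binary_erosion(pred)          # boundary pixels of the prediction
    gt_border = gt ^ binary_erosion(gt)                # boundary pixels of the ground truth
    dist_to_gt = distance_transform_edt(~gt_border)    # distance of every pixel to the gt boundary
    dist_to_pred = distance_transform_edt(~pred_border)
    surface_dists = np.concatenate([dist_to_gt[pred_border], dist_to_pred[gt_border]])
    return np.percentile(surface_dists, 95)
```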
Table 2. Comparison between the segmentation performance of the proposed method and the PBPSFL method on noisy images.

| Method | DSC (all images) | IoU (all images) | HD95 (all images) | DSC (low-contrast) | IoU (low-contrast) | HD95 (low-contrast) |
|---|---|---|---|---|---|---|
| PBPSFL [25] | 0.944 ± 0.022 | 0.89 ± 0.039 | 1.71 ± 1.7 | 0.93 ± 0.025 | 0.87 ± 0.042 | 2.47 ± 1.85 |
| Proposed | 0.952 ± 0.016 | 0.91 ± 0.029 | 1.20 ± 1.0 | 0.95 ± 0.018 | 0.90 ± 0.033 | 1.41 ± 1.24 |
Table 3. Comparison between the segmentation accuracy of the proposed method and the existing methods.

| Method | DSC | IoU | HD95 |
|---|---|---|---|
| PKGC [35] | 0.820 ± 0.180 | – | – |
| VLS [34] | 0.902 ± 0.083 | 0.84 ± 0.12 | 3.62 ± 7.29 |
| SB [33] | 0.912 ± 0.043 | 0.84 ± 0.07 | 2.64 ± 1.63 |
| FCMLS [19] | 0.941 ± 0.042 | 0.89 ± 0.056 | 1.78 ± 6.21 |
| 2nd-MGRF [4] | 0.943 ± 0.028 | – | – |
| PBPSFL [25] | 0.952 ± 0.041 | 0.90 ± 0.043 | 1.10 ± 1.69 |
| FML [22] | 0.956 ± 0.019 | 0.91 ± 0.035 | 1.15 ± 1.46 |
| Proposed | 0.953 ± 0.018 | 0.91 ± 0.033 | 1.10 ± 1.4 |
Table 4. Ablation study: segmentation performance of the proposed method in the three scenarios.

| Method | DSC (all images) | IoU (all images) | HD95 (all images) | DSC (low-contrast) | IoU (low-contrast) | HD95 (low-contrast) |
|---|---|---|---|---|---|---|
| PB-shape + fuzzy memberships | 0.945 ± 0.055 | 0.89 ± 0.056 | 1.63 ± 3.87 | 0.884 ± 0.12 | 0.81 ± 0.128 | 5.61 ± 12.54 |
| PB-shape + embedded fuzzy memberships | 0.946 ± 0.029 | 0.89 ± 0.048 | 1.63 ± 1.97 | 0.918 ± 0.06 | 0.85 ± 0.096 | 3.18 ± 4.28 |
| PB-shape + embedded memberships + SS-shape | 0.953 ± 0.018 | 0.91 ± 0.033 | 1.10 ± 1.4 | 0.942 ± 0.02 | 0.90 ± 0.034 | 1.56 ± 1.46 |
Table 5. Segmentation performance of the proposed method for different ℓ_1 and ℓ_2 values.

| Experiment | ℓ_1 | ℓ_2 | DSC (all images) | IoU (all images) | HD95 (all images) | DSC (low-contrast) | IoU (low-contrast) | HD95 (low-contrast) |
|---|---|---|---|---|---|---|---|---|
| 1 | 15 | 15 | 0.949 ± 0.021 | 0.90 ± 0.038 | 1.34 ± 1.43 | 0.942 ± 0.022 | 0.89 ± 0.038 | 1.58 ± 1.46 |
| 2 | 20 | 10 | 0.953 ± 0.018 | 0.91 ± 0.033 | 1.10 ± 1.4 | 0.942 ± 0.02 | 0.90 ± 0.034 | 1.56 ± 1.46 |
| 3 | 10 | 20 | 0.946 ± 0.027 | 0.89 ± 0.038 | 1.41 ± 1.62 | 0.94 ± 0.023 | 0.88 ± 0.041 | 1.61 ± 1.48 |
Table 6. Comparison between the segmentation performance of the proposed method and the U-Net and BCDU-Net models.

| Method | DSC (all images) | IoU (all images) | HD95 (all images) | DSC (low-contrast) | IoU (low-contrast) | HD95 (low-contrast) |
|---|---|---|---|---|---|---|
| U-Net [13] | 0.940 ± 0.041 | 0.89 ± 0.069 | 10.30 ± 23.8 | 0.88 ± 0.071 | 0.77 ± 0.13 | 19.9 ± 28.8 |
| BCDU-Net [28] | 0.942 ± 0.038 | 0.89 ± 0.062 | 4.62 ± 12.35 | 0.90 ± 0.057 | 0.82 ± 0.089 | 7.89 ± 12.27 |
| Proposed | 0.957 ± 0.016 | 0.93 ± 0.019 | 0.80 ± 1.03 | 0.952 ± 0.014 | 0.90 ± 0.026 | 0.85 ± 0.76 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
