Article

Automatic Diagnosis, Classification, and Segmentation of Abdominal Aortic Aneurysm and Dissection from Computed Tomography Images

1 Cardiovascular Surgery Clinic, Elazig Fethi Sekin City Hospital, Elazığ 23280, Turkey
2 Department of Computer Engineering, Adiyaman University, Adiyaman 02040, Turkey
3 Department of Computer Engineering, Malatya Turgut Ozal University, Malatya 44210, Turkey
4 Department of Software Engineering, Malatya Turgut Ozal University, Malatya 44210, Turkey
* Author to whom correspondence should be addressed.
Diagnostics 2025, 15(19), 2476; https://doi.org/10.3390/diagnostics15192476
Submission received: 6 July 2025 / Revised: 18 September 2025 / Accepted: 26 September 2025 / Published: 27 September 2025
(This article belongs to the Special Issue Artificial Intelligence and Computational Methods in Cardiology 2025)

Abstract

Background/Objectives: Diagnosis of abdominal aortic aneurysm (AAA) and abdominal aortic dissection (AAD) is of strategic importance, as cardiovascular disease has fatal implications worldwide. This study presents a novel deep learning-based approach for the accurate and efficient diagnosis of AAAs and AADs from CT images. Methods: Our proposed convolutional neural network (CNN) architecture effectively extracts relevant features from CT scans and classifies regions as normal or diseased. Additionally, the model accurately delineates the boundaries of detected aneurysms and dissections, aiding clinical decision-making. A pyramid scene parsing network has been built into a hybrid method. The block after the classification layer splits into two tasks: determining whether an AAA or AAD region is present in the abdominal CT image, and delineating the borders of the detected diseased region in the medical image. Results: In this way, both detection and segmentation are performed for AAA and AAD. Python was used to assess the accuracy and performance of the proposed strategy. Average accuracy rates of 83.48%, 86.9%, 88.25%, and 89.64% were achieved using ResDenseUNet, INet, C-Net, and the proposed strategy, respectively. Likewise, intersection over union (IoU) values of 79.24%, 81.63%, 82.48%, and 83.76% were achieved using ResDenseUNet, INet, C-Net, and the proposed method, respectively. Conclusions: The proposed strategy is a promising technique for automatically diagnosing AAA and AAD, thereby reducing the workload of cardiovascular surgeons.

1. Introduction

Artificial intelligence (AI) aims to make machines capable of performing tasks that parallel human thought [1,2]. A branch of AI called machine learning enables patterns to be discovered and decisions to be made by analyzing large datasets without the need for explicit instructions or assumptions [3]. Artificial neural networks (ANNs) are an AI technology inspired by the neurons of the human nervous system [4,5,6]. Deep learning, in turn, is a machine learning method that is more advanced than standard neural networks and has more learning layers [7,8]. Convolutional neural networks (CNNs) [9,10], which are AI-based deep learning methods [11,12], are used in a wide variety of medical fields, including image analysis, object identification, classification and segmentation, and biomedical image recognition [13,14]. Medicine has also made great strides using CNNs, which potentially offer new approaches for the diagnosis, prognosis, and treatment of diseases [15,16].
Many CNN architectures have been proposed for disease detection, biomedical image processing, and classification [17,18]. SegNet is a CNN deep learning technique based on semantic pixel-wise segmentation [19]. The encoder–decoder CNN model known as UNet was created for quick and accurate image segmentation [20]. The major goal of this model is to build a contracting network with successive layers in which pooling operators are replaced by upsampling operators. The advances made by INet are not solely attributable to the adoption of the residual shortcuts of ResUNet [21,22]. When dense layers are inserted into the original ResUNet, it is known as ResDenseUNet [23]; similarly, INet equipped with dense connections is defined as DenseINet [22]. This method was created around the idea that INet can serve as an alternative backbone CNN for the segmentation of biomedical images. Barzekar and Yu [24] presented a novel CNN method called C-Net, which combines multiple networks to classify biomedical images. The C-Net model consists of multiple CNNs, comprising an outer, a middle, and an inner network, and was used for histopathological image classification on common datasets.
AI-based CNN architectures are also applied to the diagnosis, classification, and segmentation of cardiovascular diseases [25]. Abdominal aortic aneurysms (AAAs) and abdominal aortic dissections (AADs) are cardiovascular diseases that must be taken very seriously [26]. These diseases usually progress insidiously and may lead to fatal consequences in the later years of a patient’s life [27]. Although the aortic diameter is usually less than approximately 30 mm in healthy adults, it increases under prolonged blood flow and the stimulation of pathological factors such as heredity and inflammation. An uncontrolled increase in aortic diameter can cause AAAs and even AADs. An AAA is a disease of the abdominal aorta [28]. Conventional AAA treatment is accomplished with open surgery or endovascular aneurysm repair. Advances in modern medical imaging technology have prompted the creation of software that allows the recognition and examination of AAAs [29]. On the other hand, most recent techniques lack automatic recognition and require humans to determine aortic localization and measure vessel diameters. In addition, these techniques cannot perform automated quantitative analysis of AAAs with anatomical features such as vascular calcification or intraluminal thrombus formation [30]. Therefore, with the co-development of AI and medical technology, AAA diagnoses are likely to become more accurate and faster [31]. An AAD is an event in which a rupture occurs in the inner layer of the aorta [31,32]. In this case, blood flows through the resulting tear and separates the inner and middle layers of the aorta, which is called “dissection”. In severe AAD, blood filters through the outer aortic wall, separating the layers; death occurs due to hypovolemic and pain-induced shock [33]. Aortic dissection is rare and usually occurs in men in their 60s and 70s [34]. The symptoms of aortic dissection are vague in the early stages and may resemble those of other conditions, including dizziness, fainting, abdominal pain, and swelling. Hence, patients often learn that they have AAD by chance during an examination for another disease, and treatment is unintentionally delayed. However, when an aortic dissection is diagnosed early and treated promptly, the patient’s chances of survival increase. The Stanford system divides aortic dissections into two classes based on their anatomical features: Stanford type A and Stanford type B [31,35]. In a Stanford type A dissection, the ascending aorta, as well as the aortic arch and descending aorta, may be dissected. A Stanford type B dissection involves the descending aorta, as it occurs distal to the left subclavian artery. While aortic dissection is rare, its mortality rate is quite high: half of patients with a Stanford type A dissection die within 3 days without treatment, while at least 10% of patients with a Stanford type B dissection die within 30 days [31,36].
Many machine learning- and deep learning-based methods have been proposed for AAA and AAD detection and classification. Lyu et al. [42] offer a deep learning-based system to segment the dissected aorta on computed tomography angiography (CTA) images. The algorithm has two steps. First, the 3D volume is split into two anatomical sections using a 3D CNN. Second, based on a pyramid scene parsing network, two 2D CNNs segment each section individually. To increase segmentation accuracy in the intimal flap area, an edge extraction branch (EEB) was added to the 2D model. Yu et al. [37] sequentially recorded aortic computed tomography angiographic images of 139 patients. They implemented a deep learning technique based on a 3D deep CNN that enables true lumen measurement and autonomous segmentation of the whole aorta and false lumen. To determine the likelihood that these pathologies would be present in potential patients, Harris et al. [38] created a CNN model trained on data from aortic dissections and ruptures. This aortic damage model was used to prioritize studies over a period of 4 weeks, and model accuracy measures were determined by comparing model outcomes with doctors’ reports. For aortic dissection and aortic rupture, the model’s sensitivity and specificity were 87.8% and 96.0%, and 100% and 96.0%, respectively. A set of 153 CT images was gathered by Bonechi et al. [39], and their 3D annotations at the voxel level were obtained using a semi-automated method. Although less precise, using a semi-supervised labeling method instead of complete supervision proved essential to collecting enough data quickly. A total of three 2D segmentation networks, one for each CT view (axial, coronal, and sagittal), were used to examine the 3D volume. Two distinct network topologies, U-Net and LinkNet, were used and contrasted. The suggested method’s principal benefits come from its capacity to operate with less data, even for noisy targets. Additionally, basing the analysis on 2D slices makes it possible to process 3D scans with only a low amount of CPU resources. The results obtained are encouraging and demonstrate that the neural networks used can provide precise aorta segmentation. The strategy proposed by Mohammadi et al. [40] comprises three phases: (1) creating a CNN classifier to divide the various abdominal sections into four categories, namely the bone, aorta, interior area, and body boundary; (2) defining the edge of the aorta and measuring its diameter using the Hough Circle Algorithm, which finds arbitrary shapes in images and calculates their diameters in pixels; and (3) classifying the identified aorta by its diameter into one of three categories: no risk of AAA, medium risk of AAA, or high risk of AAA. A CNN trained for binary classification of the input was proposed by Camara et al. [41]. Transfer learning was applied to the VGG-16 base model, utilizing the ImageNet database for model development and testing. The CNN model’s preliminary results demonstrated that it is capable of properly screening and identifying CTA findings of infrarenal AAAs. Using UNet and ENet approaches, Comelli et al. [43] suggested a deep learning method for the automated segmentation of AAAs. With Mimics software, the CT angiographies performed on 72 patients with AAAs and different valve morphologies, such as bicuspid aortic valve and tricuspid aortic valve, were semi-automatically segmented before being used to train the tested deep learning models. Cao et al. [44] introduced a radiomic model that uses high-resolution vascular wall magnetic resonance imaging (VW-MRI) and a machine learning model. Chhabra et al. [45] presented a deep learning-based approach for fingerprint recognition, which is relevant to this work, as both tasks involve image analysis and pattern recognition. Madhu et al. [46] demonstrated the application of deep learning to medical image analysis, specifically in the context of disease detection, which is directly related to this research. Madhu et al. [47] highlight the use of deep learning for medical image classification, a key component of our research; their focus on point-of-care ultrasound images is also relevant, as this study involves analyzing CT images. Chen et al. [48] performed a detailed analysis of a patient’s Stanford type B aortic dissection to better understand the disease’s effects on blood flow. They created a 3D computer model of the patient’s aorta using medical scans and then simulated the hemodynamic parameters of the blood within it. The study focused on several key metrics: the velocity of the blood flow as it moved through the aorta, the wall shear stress on the aortic walls, and the flow patterns at the tear site. The goal was to provide a comprehensive evaluation of the condition’s impact on the patient’s circulatory system. To assess the impact of treatment on type B aortic dissection, a non-invasive computational study was conducted on 12 male patients by Polanczyk et al. [49]. They created 3D models of the patients’ aortas using pre- and post-operative CT scans. These models were then used to run computational fluid dynamics (CFD) simulations analyzing several key factors [50]. The study focused on measuring and quantifying the mass flow rate, blood velocity, wall stress, and shear rate within the aorta. The analysis aimed to identify areas where blood flow was slow, which could increase the likelihood of blood clots. Ultimately, the study provided insights into the displacement forces on the aortic wall and the overall hemodynamic changes following treatment.
In this study, a novel CNN-based deep learning scheme is proposed for diagnosing AAA and AAD. The scheme determines whether there is an AAA, an AAD, or neither in the abdominal CT images of a real dataset. The proposed scheme classifies AAAs and AADs and detects the minimum bounding box (MBB) coordinates of the diseased regions in the abdominal images. In addition, several performance analyses have been executed using Python software (v3.8.5). The performance results of the proposed model have been compared with those of current CNN-based models, such as ResDenseUNet [23], INet [22], and C-Net [24], in terms of the precision, recall, and accuracy benchmarks.

1.1. The Main Contributions

The main novel contributions of this work can be listed as follows:
  • A novel deep CNN model is proposed for the classification and segmentation of the CT images. The proposed model might allow for the early diagnosis and treatment of AAAs and AADs.
  • A unique CNN model is proposed in terms of the sequence of convolutional, pooling, activation, dropout, fully connected (FC), and classifier layers. To the best of our knowledge, this sequence is unique, and the model is highly successful in diagnosing both AAA and AAD.
  • A real dataset obtained from the Ministry of Health of the Republic of Turkey was used. The dataset consists of non-disease, AAA, and AAD abdominal CT images in Digital Imaging and Communications in Medicine (DICOM) format.
  • The experimental results indicate that the proposed method outperforms ResDenseUNet, INet, and C-Net.

1.2. Paper Organization

This paper is structured as follows: Section 2 provides a comprehensive overview of the proposed study, outlining the materials and methods employed. Within this section, we delve into the specific detection techniques utilized for abdominal aortic aneurysm (AAA) and abdominal aortic dissection (AAD). Section 3 presents a detailed analysis of the experimental results obtained, including a thorough evaluation of the performance metrics achieved in both AAA and AAD detection and classification. Finally, Section 4 concludes the study by summarizing the key findings and discussing potential future research directions.

2. Materials and Methods

2.1. The CNN Architecture of the Proposed Scheme

In this section, a CNN architecture is proposed for AAA and AAD diagnosis. The proposed CNN processes 2D axial slices of abdominal CT images, because the analysis of 3D volumes is less efficient in terms of memory consumption and computation. The general scheme of the proposed hybrid method is shown in Figure 1. The CNN model for deep learning-based diagnosis and classification classifies vectors of quantitative feature maps. In this way, the abdominal CT images given as input are classified as no disease, aortic aneurysm, or aortic dissection.
The CNN model proposed in this study works as a classifier that can better learn important image features. The proposed CNN shown in Figure 1 includes four (3 × 3) convolutions with filter counts ranging from 64 to 512, four (2 × 2) convolutions, and four maximum pooling layers. These layers are joined via a dense layer with two dropout layers and FC layers of sizes 128 and 4, respectively. A rectified linear unit (ReLU) activation follows each convolutional layer. The (3 × 3) convolutional layers of the proposed CNN model are designed with residual shortcuts; that is, the residual shortcuts of the INet architecture are used in the proposed method. The FC layers end with the SoftMax activation function to obtain the final class probabilities. In the pyramid scene parsing network of the hybrid method, a CNN is first used to obtain the feature map of the last convolutional layer, and a pyramid parsing module is then applied to collect representations of different subregions. Next, upsampling and merging layers are applied along with global context information to create the final feature representation, carrying both global and local information. The convolution over a multi-channel feature map in the x–y dimension is given in Equation (1):
$$ (I \ast K_{3d})_{i,j} = \sum_{a}\sum_{b}\sum_{c} I_{3d}(a,b,c)\, K_{3d}(a+i-1,\; b+j-1,\; c) \quad (1) $$
where $I$ denotes a 3D abdominal CT image, which is convolved with the 3D kernel $K_{3d}$. The convolution is performed over all spatial locations. The ReLU activation function $r_{a,b,c}$ is used in the convolution layers and is defined in Equation (2):
$$ r_{a,b,c} = \max(0,\; w_{c}^{T} x_{a,b}) \quad (2) $$
where $(a, b)$ denotes the coordinates in the feature map, $c$ denotes the channel index, $w$ denotes the filter, and $x_{a,b}$ indicates the input at location $(a, b)$. The resulting output features are then concatenated, represented by the $Q$ operator together with a dense layer. This operation $f$ is applied pairwise to all of the outputs of the outer networks, as defined in Equation (3):
$$ f(y, w) = y_{a,b,c_i} \; Q \; w_{a,b,c_j} = I\left(y_{a,b,\,c_i + c_j}\right) \quad (3) $$
where $y$ and $w$ indicate the feature maps of different deep networks, and $(c_i, c_j)$ denote the numbers of channels in each output. The method also takes advantage of identity mapping, as the basic residual shortcut can be expressed as $H_i(G_i(x_i)) = x_i$, as in INet [22]. The concatenation of feature maps gives equal importance to all preceding convolutional layers in INet. The convolutional index places a larger weight on the output feature maps of the preceding convolutional layer, which carries the highest-level semantics, and thereby allows INet to dispense with feature-map concatenation.
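To make the layer ordering concrete, the following is a minimal Keras sketch of a classifier with this general structure. It is a sketch under stated assumptions, not the authors’ exact design: the 384 × 384 single-channel input, the 0.5 dropout rate, and the omission of the INet-style residual shortcuts are all simplifications, and the head uses three output nodes for the no disease, AAA, and AAD classes.

import tensorflow as tf
from tensorflow.keras import layers, models

# Sketch: four conv(3x3) + conv(2x2) + max-pooling blocks with 64-512 filters,
# then dropout, FC layers, and a softmax head (assumed sizes, see lead-in).
def build_classifier(input_shape=(384, 384, 1), n_classes=3):
    model = models.Sequential()
    model.add(layers.Input(shape=input_shape))
    for filters in (64, 128, 256, 512):
        model.add(layers.Conv2D(filters, (3, 3), padding="same", activation="relu"))
        model.add(layers.Conv2D(filters, (2, 2), padding="same", activation="relu"))
        model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Flatten())
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(128, activation="relu"))
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(n_classes, activation="softmax"))  # no disease / AAA / AAD
    return model

model = build_classifier()
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])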

2.2. Abdominal Aortic Aneurysm (AAA) Detection

The aim of this study is to detect AAA and AAD using abdominal CT images. In the previous section, a CNN was proposed that can successfully classify these diseases. In this section, the diameter of the aorta is measured first to determine the presence of AAA. To handle this task, the Circle Hough Transform (CHT) is used to calculate the aortic diameter and detect AAAs. The CHT is a feature extraction method based on digital image processing that detects circles in an image. It builds on three basic image processing operations: image filtering, edge detection, and the Hough transform. The CHT works by voting in the Hough parameter space [40]. A Gaussian filter (GF) is applied to the image to remove unwanted noise. Edges are detected as a basic outline in the image using the Zero Cross Gauss Laplace (ZCGL) operator. For each point on an edge, all possible circles are voted on in Hough space, yielding circles with local maxima in Hough space; a threshold is set to select these local maxima. A circle is represented as $(x - a)^2 + (y - b)^2 = r^2$, with center $(a, b)$ and radius $r$, so the triple $(a, b, r)$ is required to define a circle. Whereas a line can be described by a 2D accumulator array with two parameters $(r, \theta)$, a 3D accumulator array is required to define a circle. A CHT algorithm for aortic diameter determination is given in Algorithm 1.
Algorithm 1: Hough circle transform method for aorta diameter calculation
1: Initialize the accumulator H[a,b,r] to all zeros
2: Detect the edge image using the Canny edge detector
3:      for each edge pixel (x,y) in the abdomen image do
4:          for θ = 0 to 360 do
5:            a = x − r*cos θ
6:            b = y − r*sin θ
7:            H[a,b,r] = H[a,b,r] + 1
8:          end for
9:      end for
10: Determine the [a,b,r] values where H[a,b,r] is above an appropriate threshold
Herein, it is assumed that the aortic radius ($r$) is known. The Open Source Computer Vision Library (OpenCV) has been used to determine the aortic borders and measure the aortic diameter. The output of Algorithm 1 is the aortic diameter in pixels, so this value needs to be converted to millimeters to be clinically interpretable. Assuming the capture threshold is 480 and the image is scaled down from its original 512 × 512 size, the captured size is 384 × 384 pixels, as in [40]. In addition, the scan window for patch extraction has been set to 64 × 64 pixels; this scanning window covers 80 × 80 mm of the whole image each time, so each pixel corresponds to 1.25 mm. The output of the CHT method, which is a number of pixels, must therefore be multiplied by 1.25. Finally, after the aortic diameter is determined, the presence of AAA is classified as follows: if the diameter of the aorta is less than 24 pixels (30 mm), the algorithm reports no presence of AAA; if the diameter of the aorta is more than 24 pixels, the algorithm reports the presence of AAA.
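As an illustration of this pipeline, the sketch below applies a Gaussian filter and OpenCV’s built-in Hough circle detector to a rescaled slice and then applies the 24-pixel (30 mm) rule. The file name and the HoughCircles parameter values are illustrative assumptions, not the study’s tuned settings.

import cv2

PX_TO_MM = 1.25          # 80 mm window / 64 px patch, as derived above
AAA_THRESHOLD_PX = 24    # 24 px * 1.25 mm/px = 30 mm

def detect_aaa(slice_path):
    img = cv2.imread(slice_path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (384, 384))        # captured size used in the study
    img = cv2.GaussianBlur(img, (5, 5), 0)   # Gaussian filter against noise
    circles = cv2.HoughCircles(
        img, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
        param1=100,    # upper threshold of the internal Canny edge detector
        param2=30,     # accumulator vote threshold for circle centers
        minRadius=5, maxRadius=40)
    if circles is None:
        return None
    x, y, r = circles[0][0]                  # strongest circle taken as the aorta
    diameter_px = 2.0 * r
    return {"center_px": (float(x), float(y)),
            "diameter_mm": diameter_px * PX_TO_MM,
            "aaa_suspected": diameter_px > AAA_THRESHOLD_PX}

print(detect_aaa("abdominal_slice.png"))     # hypothetical file name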

2.3. Abdominal Aortic Dissection (AAD) Detection

In this section, an EEB based on the holistically nested edge detection (HED) network is used to detect AADs as well as aortic borders [51]. AADs manifest as intimal flaps arising within the aorta. HED is a network for edge detection that uses deep supervision to accompany the segmentation results in the early layers [52]. The main branch of a HED is a conventional CNN consisting of convolution layers and cascading convolution blocks [31]. After each convolution block, a side output layer reveals edge features at that resolution. Each side output layer $l_k$ corresponds to a sigmoid cross-entropy loss function $loss_{edge}^{k}$.
In the proposed CNN architecture, each convolution layer is followed by a batch normalization layer and an activation layer. In this architecture, the extraction branch is implemented as a single network rather than through multitask learning, since the edges obtained from the extraction branch are used to improve the segmentation operation. Each side output is resized to the original resolution. These outputs are then concatenated and forwarded through convolution layers to obtain the output $E_{out}$. A sigmoid cross-entropy loss function $loss_{out}$ is used as the objective function on $E_{out}$. The loss function for the EEB, $loss_{edge}$, is given in Equation (4):
$$ loss_{edge} = loss_{out} + \sum_{k=1}^{N} W_{k}\, loss_{edge}^{k} \quad (4) $$
where $N$ is the number of side output layers, and $W_k$ is the weighting parameter for the loss function $loss_{edge}^{k}$ of the side output layer $l_k$. The aorta label is used to derive the edge label $label_{edge}$ at the original resolution. Equation (5) supplies the edge label:
$$ label_{edge} = (L \oplus SE) - L \quad (5) $$
where $L$ is the label of the aorta in a given slice, $\oplus$ is the morphological dilation operator, and $SE$ is the 3 × 3 structural element. For lower resolutions, edge labels are generated by downsampling $label_{edge}$ using maximum pooling over 2 × 2 neighborhoods. The loss function $loss_{edge}$ is calculated around the labeled aortic area (within an MBB) to learn the specified information. Aortic dissection margins and intimal flaps should be predicted as 1 and other aortic voxels as 0; predictions in other regions are insignificant for this branch. Hence, the boundary area $\theta$ is defined as in Equation (6):
$$ \theta = \{\, (x, y) \mid L_{x,y} = 1 \;\vee\; label_{edge}(x, y) = 1 \,\} \quad (6) $$
where $\vee$ denotes the logical OR operation.
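As a small worked example of the edge label in Equation (5), the sketch below dilates a toy aorta mask with a 3 × 3 structural element and subtracts the mask; the mask itself is illustrative only.

import numpy as np
import cv2

# Equation (5): label_edge = (L ⊕ SE) − L, with SE a 3 × 3 structural element.
L = np.zeros((8, 8), dtype=np.uint8)
L[2:6, 2:6] = 1                              # toy aorta label
SE = np.ones((3, 3), dtype=np.uint8)         # 3 × 3 structural element
label_edge = cv2.dilate(L, SE) - L           # one-pixel-wide outer boundary
print(label_edge)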
To output the final segmentation, a label fusion network combines the outputs of the EEB. Another cross-entropy loss function, $loss_{fusion}$, is used as the objective function of our 2D model. In addition, a focal loss (FL) is used to measure differences from the segmentations that radiologists approve (the ground truth). The FL is an improvement on the binary cross-entropy loss (BCL): it adds a modulating factor that raises the contribution of hard samples as the loss drops and lowers the loss contributed by easy samples [12]. The calculation of the FL ($loss_{focal}$), derived from the BCL, is given in Equation (7):
$$ loss_{focal}(p, g) = \begin{cases} -\alpha \sum_{i=1}^{N_f} (1-p)^{\gamma} \log p, & \text{if } g = 1 \\ -\alpha \sum_{i=1}^{N_b} p^{\gamma} \log(1-p), & \text{otherwise} \end{cases} \quad (7) $$
where $p \in [0, 1]$ indicates the predicted probability, and 1 and 0 stand for the foreground and background, respectively. Here, the pixel-level ground truth is indicated by $g \in \{0, 1\}$. Additionally, $N_f$ and $N_b$ represent the numbers of foreground and background pixels, respectively. The modulating factors are $\alpha \in (0, 1]$ and $\gamma \in [0, 5]$; these variables are set dynamically according to the circumstances. The negative logarithm is used in the loss function to keep its value positive [12].
Based on the loss functions given in Equations (4) and (7), an advanced 2D loss ($loss_{2d}$) is presented to increase the convergence rate. In this manner, the gradient acquired in each cycle is updated adaptively. It has also been examined whether the two loss functions mutually reinforce each other, and the FL has been included in this function to help with this; the FL equals the sum over all voxel probabilities. Deep supervision and end-to-end learning are thus driven by the weighted sum of the aforementioned three loss functions, which is represented in Equation (8):
$$ loss_{2d} = p \cdot loss_{edge} + q \cdot loss_{fusion} + r \cdot loss_{focal} \quad (8) $$
where $p$, $q$, and $r$ are the weights balancing the three loss functions.
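A minimal TensorFlow sketch of Equations (7) and (8) follows. It assumes sigmoid outputs and collapses the side-output sum of Equation (4) into a single edge term for brevity; the α = 0.9 and γ = 3 defaults mirror the settings reported in Section 3.2, with p = q = r = 1.

import tensorflow as tf

# Focal loss of Equation (7): -α(1-p)^γ log p on foreground pixels,
# -α p^γ log(1-p) on background pixels.
def focal_loss(y_true, y_pred, alpha=0.9, gamma=3.0, eps=1e-7):
    y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
    fg = -alpha * tf.pow(1.0 - y_pred, gamma) * tf.math.log(y_pred)
    bg = -alpha * tf.pow(y_pred, gamma) * tf.math.log(1.0 - y_pred)
    return tf.reduce_sum(tf.where(y_true > 0.5, fg, bg))

# Weighted 2D loss of Equation (8); loss_edge stands in for Equation (4)
# with the side-output terms omitted for brevity.
def loss_2d(edge_true, edge_pred, fusion_true, fusion_pred, p=1.0, q=1.0, r=1.0):
    bce = tf.keras.losses.BinaryCrossentropy()
    loss_edge = bce(edge_true, edge_pred)
    loss_fusion = bce(fusion_true, fusion_pred)
    loss_focal = focal_loss(fusion_true, fusion_pred)
    return p * loss_edge + q * loss_fusion + r * loss_focal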
As a result, the output of the proposed CNN model consists of three nodes: no disease, aortic aneurysm, and aortic dissection. As mentioned before, a SoftMax layer is used as the classification method over these three classes. Thus, it is determined from the abdominal CT images whether there is an aortic aneurysm, an aortic dissection, or neither.

3. Results and Discussion

3.1. Dataset

In this study, a dataset provided by the Republic of Turkey Ministry of Health Teleradiology System has been used. The abdominal CT images in the dataset are 512 × 512 pixels in DICOM format and were obtained from 56 female and 70 male patients aged between 20 and 87. AAA and AAD typically occur within a specific age range and are more prevalent in males. A total of 13,294 abdominal CT scans are used from the dataset, of which 4286 are from patients without aortic disease and 9008 are from patients with aortic disease [53,54]. Half of the CT images with aortic disease show AAA, and the remaining half show AAD. In the experiments, 80% of the total images were chosen randomly for training, 10% for testing, and the remaining 10% for validation. Table 1 presents the numbers of CT images used for training, testing, and validation. Figure 2 shows several abdominal CT images: Figure 2a, Figure 2b, and Figure 2c present examples of no disease, aortic aneurysm, and aortic dissection, respectively.

3.2. Experimental Setup

The proposed method and the other models were run on a Windows 10 operating system installed on a computer with an Intel Core™ i7-8700 processor, 16 GB of RAM, and an NVIDIA GeForce graphics card with 4 GB of memory. All models in the experiments were implemented in Python 3.8.5. The Keras [55] and TensorFlow [56] libraries were utilized for training the proposed networks. Table 2 presents several significant parameters used in the experiments. To use memory effectively, the batch size was set to 32 for the 2D CNN model; a batch thus contains 32 slices per training iteration. Because the vascular pixels occupy only a small part of the CT images, the parameters $\alpha$ and $\gamma$ were set to 0.9 and 3 in the weighting matrix. The parameters $p$, $q$, and $r$, and all $W_k$ values, were set to 1, as applied in [40], for the calculation of $loss_{2d}$ and $loss_{edge}$.
To analyze the proposed method, the evaluation metrics of precision ($P_{rc}$), recall ($R_{call}$), F1-score ($F_{scr}$), and accuracy ($A_{cc}$) are computed as defined in Equations (9)–(12), respectively:
$$ P_{rc} = \frac{TP}{TP + FP} \quad (9) $$
$$ R_{call} = \frac{TP}{TP + FN} \quad (10) $$
$$ F_{scr} = \frac{2 \times P_{rc} \times R_{call}}{P_{rc} + R_{call}} \quad (11) $$
$$ A_{cc} = \frac{TP + TN}{TP + TN + FP + FN} \quad (12) $$
where $TP$, $TN$, $FP$, and $FN$ stand for the numbers of true positives, true negatives, false positives, and false negatives, respectively.
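For reference, the helper below evaluates Equations (9)–(12) directly from the four confusion-matrix counts; the example counts are hypothetical, not values from Table 3.

# Equations (9)-(12) from binary confusion-matrix counts.
def metrics(tp, tn, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return {"precision": precision, "recall": recall, "f1": f1, "accuracy": accuracy}

print(metrics(tp=880, tn=850, fp=120, fn=130))  # hypothetical counts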

3.3. The Performance Results of the AAA and AAD Diagnosis and Classification

This section discusses and analyzes the results of AAA and AAD diagnosis and classification obtained from abdominal CT images. The classification results of no disease, aneurysm, or dissection are evaluated. The verification and validation of the proposed algorithm were conducted through a two-pronged approach: verification to ensure the code's accuracy, and validation to confirm that the model reflects physical reality.
Figure 3, Figure 4, Figure 5 and Figure 6 illustrate the confusion matrices of the AAA and AAD classification results for training, testing, and validation using ResDenseUNet, INet, C-Net, and the proposed model, respectively. Each confusion matrix contrasts the true and predicted classes, showing how many abdominal images were correctly predicted as AAA, AAD, or no aortic disease; the numbers of correct predictions appear in the confusion matrices. The performance rates for training, testing, and validation were calculated using the metrics in Equations (9)–(12) and the values in the confusion matrices, and the results are presented in Table 3. For the aneurysm tests, ResDenseUNet, INet, C-Net, and the proposed model achieved 79.51%, 84.06%, 85.58%, and 87.93% in precision; 80.22%, 85.55%, 87.11%, and 87.17% in recall; and 79.86%, 84.8%, 86.34%, and 87.55% in F1-score, respectively. For aneurysm validation, the corresponding scores were 81.51%, 85.77%, 87.38%, and 88.37% in precision; 81.15%, 85.58%, 87.38%, and 88.47% in recall; and 81.33%, 85.68%, 87.48%, and 88.47% in F1-score. For the dissection tests, ResDenseUNet, INet, C-Net, and the proposed model achieved 81.46%, 86.49%, 87.69%, and 90.04% in precision; 79.11%, 84%, 85.55%, and 88.44% in recall; and 80.27%, 85.23%, 86.61%, and 89.23% in F1-score, respectively. For dissection validation, the scores were 80.43%, 85.36%, 87.36%, and 88.83% in precision; 81.15%, 85.36%, 87.36%, and 88.24% in recall; and 80.79%, 85.36%, 87.36%, and 88.54% in F1-score. Accuracy rates of 89.49%, 80%, and 80.97% using ResDenseUNet, 90.26%, 84.88%, and 85.56% using INet, and 91.01%, 86.31%, and 87.44% using C-Net were obtained for training, testing, and validation, respectively. The highest and lowest performances were observed for training and validation, respectively; accuracy is expected to drop outside training because far less data is allocated to testing and validation than to training. Nevertheless, the proposed algorithm retained high success in testing. In general, the algorithms differ in AAA and AAD detection performance: C-Net is more successful than INet, and INet is more successful than ResDenseUNet. The proposed method achieves higher performance in AAD diagnosis than in AAA diagnosis, which is not surprising, since intimal flaps are easier to distinguish in pixel space. After 50 epochs, the accuracy rate of the proposed model was 91.63%, 88.72%, and 88.57% for training, testing, and validation, respectively. The proposed method thus produced the fewest erroneous AAA and AAD predictions; in other words, it delivered the best overall performance and outperformed the other models on the accuracy benchmark.
This is due to the CNN architecture of the proposed model, in which activation and dropout layers are used after each convolution layer. In addition, the training of the data passed to the dropout layer is strengthened by the dense network layer. The designed CNN architecture has shown fruitful and encouraging results in the detection and classification of aortic diseases. The proposed model does not rely solely on residual shortcuts and encoder–decoder structures, as ResDenseUNet does, nor solely on outer, middle, and inner networks, as C-Net does; instead, a more stable and robust CNN classification model is presented.
As seen in Figure 7, notable performance enhancements were observed throughout the training and optimization cycles of the proposed hybrid CNN. As training progressed over more epochs, the frequency of weight adjustments within the neural network diminished, leading to consistent convergence and improved model fitting. This process also led to a corresponding increase in overall accuracy (see Figure 7a) and a reduction in the loss function (see Figure 7b), contributing positively to the model's performance.
Figure 8 and Figure 9 illustrate several diagnostic results showing the detected MBB coordinates for AAA and AAD, respectively. Figure 8 shows the aortic aneurysm diagnosis results estimated from abdominal CT images. In all images in Figure 8 and Figure 9, the MBB coordinates of the diagnosed diseased aortic region were found, and the diseased region is plotted as a red rectangle. In the testing phase, each 16-pixel slice of the image was analyzed separately by the CNN, and the final probabilities were obtained by averaging the class probabilities over the slices. The results in Figure 8 and Figure 9 show that the detection of AAA and AAD borders in abdominal CT images is quite successful.
The analysis of misjudgments involved a study of heat maps generated using gradient-weighted class activation mapping, overlaid on the original CT images, as seen in Figure 10. In particular, a notable frequency of misjudgments was observed for small AAAs.
This success was confirmed by assessing how accurate the MBB coordinates of the AAA and AAD regions were, as measured by the intersection over union (IoU) value. The IoU value is calculated from the proportion of non-zero (true positive) pixels in the intersection of the $M_p$ and $M_d$ images, together with the proportion of non-zero pixels in the intersection of the $M_p$ and $M_e$ images, as shown in Equation (13):
$$ IoU = \frac{TP}{TP + FN + FP} \quad (13) $$
where $M_p$, $M_d$, and $M_e$ refer, respectively, to the image produced by the segmentation model, the result of the dilation operation, and the result of eroding the mask image with a 3 × 3 convolution matrix. Each image's IoU value is calculated independently, and the average of these values was considered when testing the models. The IoU calculation is shown in Figure 11. Squares 1 and 2 in Figure 11 are represented by [x1, y1, x2, y2] and [x3, y3, x4, y4], respectively; this convention was created to calculate the MBBs of the AAA and AAD regions. The ground-truth MBB coordinates were verified by ten radiologists. The IoU values thus measure how precisely the detected AAA or AAD regions are localized spatially in the abdominal CT images.
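As a concrete illustration of the box convention in Figure 11, the sketch below computes the IoU of two rectangles given in [x1, y1, x2, y2] form; the coordinates are hypothetical.

# IoU of two axis-aligned boxes in [x1, y1, x2, y2] form (Figure 11 convention).
def box_iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # intersection height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

print(box_iou([50, 60, 120, 140], [60, 70, 130, 150]))  # hypothetical boxes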
The proposed method generates a higher IoU accuracy rate and a lower loss rate in the experiments. Figure 12a,b demonstrate the IoU performance of the proposed model in AAA and AAD detection, respectively. The average IoU results are given in Table 4. The proposed method, C-Net, INet, and ResDenseUNet achieved IoU values of 83.76%, 82.48%, 81.63%, and 79.24% in testing, respectively. For the proposed method, the IoU values after 50 epochs were 84.96% for training and 83.76% for testing, which are very successful results compared with INet and ResDenseUNet. The accompanying classification task enhances the success of the proposed method and leads it to predict the input image more accurately.
We also recognize the importance of providing specific case studies and experimental results to demonstrate the practical application of our AI technology in the field of medical image processing, particularly for the diagnosis and segmentation of aortic aneurysm and aortic dissection.
A 60-year-old female patient presented with acute chest pain [57]. A CT scan revealed a complex aortic dissection with multiple false lumens. The AI system accurately segmented the diseased regions, providing valuable information for surgical planning and treatment. In another study [58], two feature selection techniques were employed to classify patients into “control” and “aortic aneurysm” groups using Naive Bayes and K-Nearest-Neighbor algorithms. These methods were applied to in vivo data collected from a clinical study involving 55 patients. The first approach utilized parameter estimation of autoregressive moving average with exogenous inputs (ARMAX) models, while the second focused on the frequency response of the transfer function. Both techniques were based on two peripheral photoplethysmographic signals. Despite achieving an overall accuracy of approximately 60%, the classifiers demonstrated an intrinsic effect of aneurysms, as evidenced by their lower accuracy when trained and tested with randomly permuted labels. The lower accuracy compared with previous in silico results can be attributed to the low signal-to-noise ratio of the sensors used, the reduced peripheral perfusion in the highly morbid patients, and the variability and limited number of patients in the clinical study.
While both approaches have demonstrated basic classification capabilities in a proof-of-concept clinical setting, further research is necessary to address the remaining uncertainties. Training the classifier with a larger patient cohort is essential to improve accuracy and reliability.
This study also conducted a comprehensive analysis of Stanford type B aortic dissection using a non-invasive CFD approach. We created 3D computer models of the aorta based on CTA images from 126 patients, drawn from a dataset comprising over 13,000 scans in DICOM format. The core of this analysis involved using CFD simulations to examine critical hemodynamic parameters, including blood velocity, wall shear stress, and mass flow rate, with the aim of identifying areas prone to thrombus formation. The study's limitations, such as dataset constraints, model generalizability, and the challenges of accurately segmenting subtle lesions, were also explicitly discussed, emphasizing that the model is intended as a supplementary tool for clinicians rather than a standalone diagnostic solution.
We addressed challenges such as interpretability, regulatory approval, and user acceptance. For interpretability, while deep learning models often operate as “black boxes”, we aim to incorporate explainable AI (XAI) techniques in future iterations, such as saliency maps or attention mechanisms, to highlight the specific image regions driving the model's decisions, thereby increasing radiologist trust and understanding. Regarding regulatory approval, Turkey's legal framework for AI in healthcare is still evolving, with the Ministry of Health and the Turkish Medicines and Medical Devices Agency (TİTCK) being the key authorities. Our strategy will involve close collaboration with these bodies, ensuring compliance with existing medical device regulations, under which software serving a medical diagnostic purpose can be classified as a medical device, and proactive engagement with forthcoming AI-specific legislation, potentially aligning with EU AI Act standards. For user acceptance, we plan a phased-integration approach, starting with pilot programs in select clinical environments to gather direct feedback from radiologists. This will allow us to refine the tool based on real-world usage, ensuring that it integrates seamlessly into existing Picture Archiving and Communication Systems (PACS) and Radiology Information Systems (RIS) workflows, minimizes disruption, and demonstrably reduces workload, ultimately fostering a collaborative rather than a displacing role for AI. Future research will aim to incorporate crucial features such as aortic tortuosity, aneurysm location, wall elasticity, and the morphology of dissection flaps to provide a more clinically relevant tool; these features are critical for prognosis and surgical planning. We believe that this iterative approach will lead to more sophisticated diagnostic aids that can serve as powerful decision-support tools for cardiologists and surgeons, ultimately improving the efficiency of diagnosis and patient outcomes.

4. Conclusions

The development of AI-based algorithms for cardiovascular diseases is extremely important for rapid diagnosis and treatment in medical applications. Abdominal aortic aneurysm (AAA) and abdominal aortic dissection (AAD) are among the most important cardiovascular diseases in the world. This paper therefore proposes a classification strategy to automatically diagnose AAA and AAD. Since CNN architectures are more successful than other AI technologies at image processing, object recognition, and medical image training, a robust CNN model has been put forward for diagnosing aortic disease from abdominal CT images. The proposed CNN strategy includes various convolutional layers followed by activation and dropout layers, FC layers, and finally a SoftMax classifier layer. Several analyses and comparisons of the proposed model have been carried out using Python and its libraries in terms of precision, recall, F1-score, and accuracy. The proposed model achieves 1.55%, 2.6%, and 5.7% higher IoU performance than C-Net, INet, and ResDenseUNet, respectively. The CNN strategy can be useful for many medical procedures. The results of this study have several practical implications for clinical practice, offering a promising tool to assist cardiologists and surgeons; our proposed deep learning model can be applied directly in a clinical setting to streamline the diagnostic workflow for AAA and AAD. In future studies, we plan to use more data and integrate a novel segmentation method into the proposed CNN strategy. We also plan to incorporate XAI techniques to provide insights into the model's decision-making process. Specifically, methods such as gradient-weighted class activation mapping (Grad-CAM) or integrated gradients will be employed to generate visual saliency maps that highlight the regions within the CT images that are most influential in the model's predictions for AAA and AAD detection and segmentation. By visually demonstrating why the model arrives at a particular conclusion, clinicians can verify the model's focus on clinically relevant anatomical features, identify potential biases, and ultimately develop greater confidence in the system's recommendations. This transparency is paramount for the eventual clinical adoption of our tool, enabling radiologists to confidently integrate AI-driven insights with their expert judgment, thereby improving diagnostic accuracy and efficiency.

Author Contributions

Conceptualization, H.B. (Hakan Baltaci), S.Y., M.Y. and H.B. (Harun Bingol); Methodology, S.Y., M.Y. and H.B. (Harun Bingol); Software, S.Y., M.Y. and H.B. (Harun Bingol); Validation, H.B. (Hakan Baltaci), S.Y., M.Y. and H.B. (Harun Bingol); Formal analysis, H.B. (Hakan Baltaci), S.Y., M.Y. and H.B. (Harun Bingol); Investigation, H.B. (Hakan Baltaci), S.Y., M.Y. and H.B. (Harun Bingol); Resources, H.B. (Hakan Baltaci), S.Y., M.Y. and H.B. (Harun Bingol); Data curation, H.B. (Hakan Baltaci), S.Y., M.Y. and H.B. (Harun Bingol); Writing—original draft, H.B. (Hakan Baltaci), S.Y., M.Y. and H.B. (Harun Bingol); Writing—review & editing, H.B. (Hakan Baltaci), S.Y., M.Y. and H.B. (Harun Bingol); Visualization, H.B. (Hakan Baltaci), S.Y., M.Y. and H.B. (Harun Bingol); Supervision, H.B. (Hakan Baltaci), S.Y., M.Y. and H.B. (Harun Bingol); Project administration, H.B. (Hakan Baltaci), S.Y., M.Y. and H.B. (Harun Bingol). All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

A public dataset was used in the study [53,54].

Acknowledgments

The authors thank the General Directorate of Health Information Systems of the Ministry of Health of the Republic of Turkey for providing the data used in this study. This scientific study has been carried out in accordance with Article 28 of the Law on the Protection of Personal Data No. 6698 of the Republic of Turkey and Article 16 of the Regulation on Personal Health Data No. 30808, which is enacted in accordance with the relevant law. This work has been supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under grant no. 124E130.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this article.

References

  1. Selvaraj, J.; Prabha, T.; Kumar, T.D.A.; Palaniappan, S. Artificial Intelligence in Biomedical Image Processing. In Machine Learning and Systems Biology in Genomics and Health; Singh, S., Ed.; Springer: Singapore, 2022. [Google Scholar]
  2. Devi, K.G.; Balasubramanian, K. Machine Learning and Deep Learning Techniques for Medical Science; CRC Press: Boca Raton, FL, USA, 2022; 412p. [Google Scholar]
  3. Payal, M.; Kumar, K.S.; Kumar, T.A. Recent advances of Machine Learning Techniques in Biomedicine. Int. J. Multidiscip. Res. Sci. Eng. Technol. 2022, 5, 772–779. [Google Scholar]
  4. Das, A.; Mohapatra, S.K.; Mohanty, M.N. Design of deep ensemble classifier with fuzzy decision method for biomedical image classification. Appl. Soft Comput. 2022, 115, 108178. [Google Scholar] [CrossRef]
  5. van der Velden, B.H.M.; Kuijf, H.J.; Gilhuijs, K.G.A.; Viergever, M.A. Explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Med. Image Anal. 2022, 79, 102470. [Google Scholar] [CrossRef] [PubMed]
  6. Rahimzadeh, M.; Attar, A. A modified deep convolutional neural network for detecting COVID-19 and pneumonia from chest X-ray images based on the concatenation of Xception and ResNet50V2. Inform. Med. Unlocked 2020, 19, 100360. [Google Scholar] [CrossRef] [PubMed]
  7. Chen, L.-C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking Atrous Convolution for Semantic Image Segmentation. arXiv 2017, arXiv:1706.05587. [Google Scholar] [CrossRef]
  8. Kumara, K.S.A.; Prasad, A.Y.; Metan, J. A hybrid deep CNN-Cov-19-Res-Net Transfer learning architype for an enhanced Brain tumor Detection and Classification scheme in medical image processing. Biomed. Signal Process. Control 2022, 76, 103631. [Google Scholar]
  10. Nisha, J.S.; Gopi, V.P.; Palanisamy, P. Automated colorectal polyp detection based on image enhancement and dual-path CNN architecture. Biomed. Signal Process. Control 2022, 73, 103465. [Google Scholar] [CrossRef]
  10. Xu, Z.; Ren, H.; Zhou, W.; Liu, Z. ISANET: Non-small cell lung cancer classification and detection based on CNN and attention mechanism. Biomed. Signal Process. Control 2022, 77, 103773. [Google Scholar] [CrossRef]
  11. Ma, D.; Shang, L.; Tang, J.; Bao, Y.; Fu, J.; Yin, J. Classifying breast cancer tissue by Raman spectroscopy with one-dimensional convolutional neural network. Spectrochim. Acta 2021, 256, 119732. [Google Scholar] [CrossRef]
  12. Yalçın, S.; Herdem, M.S. A CNN-ABC model for estimation and optimization of heat generation rate and voltage distributions of lithium-ion batteries for electric vehicles. Int. J. Heat Mass Transf. 2022, 199, 123486. [Google Scholar] [CrossRef]
  13. Zhou, T.; Ruan, S.; Canu, S. A review: Deep learning for medical image segmentation using multi-modality fusion. Array 2019, 3, 100004. [Google Scholar] [CrossRef]
  14. Mansour, R.F.; Alfar, N.M.; Abdel-Khalek, S.; Abdelhaq, M.; Saeed, R.A.; Alsaqour, R. Optimal deep learning based fusion model for biomedical image classification. Expert Syst. 2021, 39, e12764. [Google Scholar] [CrossRef]
  15. Punn, N.S.; Agarwal, S. Modality specific U-Net variants for biomedical image segmentation: A survey. Artif. Intell. Rev. 2022, 55, 5845–5889. [Google Scholar] [CrossRef] [PubMed]
  16. Yang, J.; Faraji, M.; Basu, A. Robust segmentation of arterial walls in intravascular ultrasound images using dual path u-net. Ultrasonics 2019, 96, 24–33. [Google Scholar] [CrossRef]
  17. Wang, H.; Xie, S.; Lin, L.; Iwamoto, Y.; Han, X.H.; Chen, Y.W.; Tong, R. Mixed transformer u-net for medical image segmentation. arXiv 2021, arXiv:2111.04734. [Google Scholar] [CrossRef]
  18. Xia, H.; Ma, M.; Li, H.; Song, S. Mc-net: Multi-scale context-attention network for medical CT image segmentation. Appl. Intell. 2022, 52, 1508–1519. [Google Scholar] [CrossRef]
  19. Badrinarayanan, V.; Kendall, A.; Cipolla, R. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef]
  20. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
  21. Zhang, Z.; Liu, Q.; Wang, Y. Road extraction by deep residual UNet. IEEE Geosci. Remote Sens. Lett. 2018, 15, 749–753. [Google Scholar] [CrossRef]
  22. Weng, W.; Zhu, X. INet: Convolutional Networks for Biomedical Image Segmentation. IEEE Access 2021, 9, 16591–16603. [Google Scholar] [CrossRef]
  23. Khened, M.; Kollerathu, V.A.; Krishnamurthi, G. Fully convolutional multi-scale residual DenseNets for cardiac segmentation and automated cardiac diagnosis using ensemble of classifiers. Med. Image Anal. 2019, 51, 21–45. [Google Scholar] [CrossRef]
  24. Barzekar, H.; Yu, Z. C-Net: A reliable convolutional neural network for biomedical image classification. Expert Syst. Appl. 2022, 187, 116003. [Google Scholar] [CrossRef]
  25. Artzner, C.; Bongers, M.N.; Kärgel, R.; Faby, S.; Hefferman, G.; Herrmann, J.; Nopper, S.L.; Perl, R.M.; Walter, S.S. Assessing the Accuracy of an Artificial Intelligence-Based Segmentation Algorithm for the Thoracic Aorta in Computed Tomography Applications. Diagnostics 2022, 12, 1790. [Google Scholar] [CrossRef] [PubMed]
  26. Secchi, F.; Interlenghi, M.; Alì, M.; Schiavon, E.; Monti, C.B.; Capra, D.; Salvatore, C.; Castiglioni, I.; Papa, S.; Sardanelli, F.; et al. A Combined Deep Learning System for Automatic Detection of “Bovine” Aortic Arch on Computed Tomography Scans. Appl. Sci. 2022, 12, 2056. [Google Scholar] [CrossRef]
  27. Qi, F.; Liu, Y.; Zhang, K.; Zhang, Y.; Xu, K.; Zhou, M.; Zhao, H.; Zhu, S.; Chen, J.; Li, P.; et al. Artificial Intelligence Uncovers Natural MMP Inhibitor Crocin as a Potential Treatment of Thoracic Aortic Aneurysm and Dissection. Front. Cardiovasc. Med. 2022, 9, 871486. [Google Scholar] [CrossRef] [PubMed]
  28. Huang, Y.; Shi, D.; Zhang, X.; Zhu, Z.; Zhang, X.; Tang, S.; Shang, X.; Li, C.; He, M. Retinal vascular features as new biomarkers for aortic aneurysms and aortic dissections. Investig. Ophthalmol. Vis. Sci. 2022, 63, 2666. [Google Scholar]
  29. Fanga, Z.M.; Feng, X.; Chen, Y.; Luo, H.; Jiang, D.S.; Yi, X. Targeting autophagy in aortic aneurysm and dissection. Biomed. Pharmacother. 2022, 153, 113547. [Google Scholar] [CrossRef]
  30. Golledge, J. Abdominal aortic aneurysm: Update on pathogenesis and medical treatments. Nat. Rev. Cardiol. 2019, 16, 225–242. [Google Scholar] [CrossRef]
  31. Raffort, J.; Adam, C.; Carrier, M.; Ballaith, A.; Coscas, R.; Jean-Baptiste, E.; Hassen-Khodja, R.; Chakfé, N.; Lareyre, F. Artificial intelligence in abdominal aortic aneurysm. J. Vasc. Surg. 2020, 72, 321–333. [Google Scholar] [CrossRef]
  32. Criado, F.J. Aortic dissection: A 250-year perspective. Tex. Heart Inst. J. 2011, 38, 694. [Google Scholar]
  33. White, A.; Broder, J.; Mando-Vandrick, J.; Wendell, J.; Crowe, J. Acute aortic emergencies—Part 2 aortic dissections. Adv. Emerg. Nurs. J. 2013, 35, 28–52. [Google Scholar] [CrossRef]
  34. Borioni, R.; Garofalo, M.; De Paulis, R.; Nardi, P.; Scaffa, R.; Chiariello, L. Abdominal Aortic dissections: Anatomic and clinical features and therapeutic options. Tex. Heart Inst. J. 2005, 32, 70–73. [Google Scholar]
  35. Huang, L.T.; Tsai, Y.S.; Liou, C.F.; Lee, T.H.; Kuo, P.T.P.; Huang, H.S.; Wang, C.K. Automated Stanford classification of aortic dissection using a 2-step hierarchical neural network at computed tomography angiography. Eur. Radiol. 2022, 32, 2277–2285. [Google Scholar] [CrossRef]
  36. Chen, H.; Yan, S.; Xie, M.; Huang, J. Application of cascaded GAN based on CT scan in the diagnosis of aortic dissection. Comput. Methods Programs Biomed. 2022, 226, 107130. [Google Scholar] [CrossRef]
  37. Yu, Y.; Gao, Y.; Wei, J.; Liao, F.; Xiao, Q.; Zhang, J.; Yin, W.; Lu, B. A Three-Dimensional Deep Convolutional Neural Network for Automatic Segmentation and Diameter Measurement of Type B Aortic Dissection. Korean J. Radiol. 2021, 22, 168–178. [Google Scholar] [CrossRef]
  38. Harris, R.J.; Kim, S.; Lohr, J.; Towey, S.; Velichkovich, Z.; Kabachenko, T.; Driscoll, I.; Baker, B. Classification of Aortic Dissection and Rupture on Post-contrast CT Images Using a Convolutional Neural Network. J. Digit. Imaging 2019, 32, 939–946. [Google Scholar] [CrossRef]
  39. Bonechi, S.; Andreini, P.; Mecocci, A.; Giannelli, N.; Scarselli, F.; Neri, E.; Bianchini, M.; Dimitri, G.M. Segmentation of Aorta 3D CT Images Based on 2D Convolutional Neural Networks. Electronics 2021, 10, 2559. [Google Scholar] [CrossRef]
  40. Mohammadi, S.; Mohammadi, M.; Dehlaghi, V.; Ahmadi, A. Automatic Segmentation, Detection, and Diagnosis of Abdominal Aortic Aneurysm (AAA) Using Convolutional Neural Networks and Hough Circles Algorithm. Cardiovasc. Eng. Technol. 2019, 10, 490–499. [Google Scholar] [CrossRef]
  41. Camara, J.R.; Tomihama, R.T.; Pop, A.; Shedd, M.P.; Dobrowski, B.S.; Knox, C.J.; Abou-Zamzam, A.M.; Kiang, S.C. Development of a convolutional neural network to detect abdominal aortic aneurysms. J. Vasc. Surg. Cases Innov. Tech. 2022, 8, 305–311. [Google Scholar] [CrossRef] [PubMed]
  42. Lyu, T.; Yang, G.; Zhao, X.; Shu, H.; Luo, L.; Chen, D.; Xiong, J.; Yang, J.; Li, S.; Coatrieux, J.L.; et al. Dissected aorta segmentation using convolutional neural networks. Comput. Methods Programs Biomed. 2021, 211, 106417. [Google Scholar] [CrossRef] [PubMed]
  43. Comelli, A.; Dahiya, N.; Stefano, A.; Benfante, V.; Gentile, G.; Agnese, V.; Raffa, G.M.; Pilato, M.; Yezzi, A.; Petrucci, G.; et al. Deep learning approach for the segmentation of aneurysmal ascending aorta. Biomed. Eng. Lett. 2021, 11, 15–24. [Google Scholar] [CrossRef] [PubMed]
  44. Cao, X.; Zeng, Y.; Wang, J.; Cao, Y.; Wu, Y.; Xia, W. Differentiation of Cerebral Dissecting Aneurysm from Hemorrhagic Saccular Aneurysm by Machine-Learning Based on Vessel Wall MRI: A Multicenter Study. J. Clin. Med. 2022, 11, 3623. [Google Scholar] [CrossRef] [PubMed]
  45. Chhabra, M.; Ravulakollu, K.K.; Kumar, M.; Sharma, A.; Nayyar, A. Improving automated latent fingerprint detection and segmentation using deep convolutional neural network. Neural Comput. Appl. 2023, 35, 6471–6497. [Google Scholar] [CrossRef]
  46. Madhu, G.; Govardhan, A.; Ravi, V.; Kautish, S.; Srinivas, B.S.; Chaudhary, T.; Kumar, M. DSCN-net: A deep Siamese capsule neural network model for automatic diagnosis of malaria parasites detection. Multimed. Tools Appl. 2022, 81, 34105–34127. [Google Scholar] [CrossRef]
  47. Madhu, G.; Kautish, S.; Gupta, Y.; Nagachandrika, G.; Biju, S.M.; Kumar, M. XCovNet: An optimized xception convolutional neural network for classification of COVID-19 from point-of-care lung ultrasound images. Multimed. Tools Appl. 2024, 83, 33653–33674. [Google Scholar] [CrossRef]
  48. Chen, Y.; Wei, X.; Zhang, Y.; Yuan, D.; Tian, X.; Jiang, W.; Li, Z. Hemodynamic Analysis of Stanford Type B Aortic Dissection Based on Computational Fluid Dynamics. J. Med. Biomech. 2018, 33, 490–495. [Google Scholar]
  49. Polanczyk, A.; Piechota-Polanczyk, A.; Huk, I.; Neumayer, C.; Balcer, J.; Strzelecki, M. Computational Fluid Dynamic Technique for Assessment of How Changing Character of Blood Flow and Different Value of Hct Influence Blood Hemodynamic in Dissected Aorta. Diagnostics 2021, 11, 1866. [Google Scholar] [CrossRef]
  50. Siddiqi, Z.F. Computational Fluid Dynamics: Modeling and Analysis of Blood Flow in Arteries. In Motion Analysis of Biological Systems; Springer: New York, NY, USA, 2024; pp. 89–121. [Google Scholar]
  51. Xie, S.; Tu, Z. Holistically-nested edge detection. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1395–1403. [Google Scholar]
  52. Lee, C.Y.; Xie, S.; Gallagher, P.; Zhang, Z.; Tu, Z. Deeply-supervised nets. In Proceedings of the Artificial Intelligence and Statistics, San Diego, CA, USA, 9–12 May 2015; pp. 562–570. [Google Scholar]
  53. TR_Abdomen Rad_Emergency Data Set. Available online: https://acikveri.saglik.gov.tr/Home/DataSetDetail/2 (accessed on 10 January 2025).
  54. Koç, U.; Sezer, E.A.; Özkaya, Y.A.; Yarbay, Y.; Beşler, M.S.; Taydaş, O.; Yalçın, A.; Evrimler, Ş.; Kızıloğlu, H.A.; Kesimal, U.; et al. Elevating healthcare through artificial intelligence: Analyzing the abdominal emergencies data set (TR_ABDOMEN_RAD_EMERGENCY) at TEKNOFEST-2022. Eur. Radiol. 2024, 34, 3588–3597. [Google Scholar] [CrossRef]
  55. Chollet, F. Keras. 2015. Available online: https://github.com/fchollet/keras (accessed on 5 July 2025).
  56. Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M.; et al. Tensorflow: A system for large-scale machine learning. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), Berkeley, CA, USA, 2–4 November 2016; pp. 265–283. [Google Scholar]
  57. Chen, D.; Zhang, X.; Mei, Y.; Liao, F.; Xu, H.; Li, Z.; Xiao, Q.; Guo, W.; Zhang, H.; Yan, T.; et al. Multi-stage learning for segmentation of aortic dissections using a prior aortic anatomy simplification. Med. Image Anal. 2021, 69, 101931. [Google Scholar] [CrossRef]
  58. Hackstein, U.; Krüger, T.; Mair, A.; Degünther, C.; Krickl, S.; Schlensak, C.; Bernhard, S. Early diagnosis of aortic aneurysms based on the classification of transfer function parameters estimated from two photoplethysmographic signals. Inform. Med. Unlocked 2021, 25, 100652. [Google Scholar] [CrossRef]
Figure 1. The hybrid CNN architecture of the proposed scheme.
Figure 2. Representative abdominal CT images from the dataset: (a) no disease, (b) AAA, and (c) AAD.
Figure 3. Confusion matrices of the AAA and AAD classification results using ResDenseUNet for (a) training, (b) testing, and (c) validation.
Figure 4. Confusion matrices of the AAA and AAD classification results using INet for (a) training, (b) testing, and (c) validation.
Figure 5. Confusion matrices of the AAA and AAD classification results using C-Net for (a) training, (b) testing, and (c) validation.
Figure 6. Confusion matrices of the AAA and AAD classification results using the proposed model for (a) training, (b) testing, and (c) validation.
Figure 7. The performance results of the proposed model: (a) accuracy and (b) loss function.
Figure 8. Representative results for the diagnosis of AAA and detection of the MBB coordinates.
Figure 9. Representative results for the diagnosis of AAD and detection of the MBB coordinates.
Figure 10. Heat maps generated using gradient-weighted class activation mapping (Grad-CAM), overlaid on the original CT images: (a) the proposed CNN correctly detected the AAA; (b) a relatively small aneurysm with a mural clot caused a misjudgment, resulting in a false-negative diagnosis.
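For readers who wish to reproduce heat maps like those in Figure 10, the sketch below shows a standard Grad-CAM computation in TensorFlow/Keras, the framework listed in Table 2. It is a generic illustration under stated assumptions, not the study's exact code: the trained `model`, the `conv_layer_name` identifying its last convolutional layer, and the input `image` are hypothetical placeholders.

```python
# A minimal Grad-CAM sketch (generic; not the study's exact implementation).
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name, class_index):
    # Sub-model mapping the input to (last conv feature maps, predictions).
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        class_score = preds[:, class_index]
    # Gradient of the class score with respect to the conv feature maps.
    grads = tape.gradient(class_score, conv_out)
    # Global-average-pool the gradients to obtain per-channel weights.
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    # Weighted sum of the feature maps, then ReLU and max-normalization.
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    cam = cam / (tf.reduce_max(cam) + 1e-8)
    return cam.numpy()  # upsample to the CT image size before overlaying
```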
Figure 11. Illustration of the intersection over union (IoU) metric.
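As a concrete complement to Figure 11, the minimal sketch below computes IoU for two axis-aligned bounding boxes given as (x1, y1, x2, y2) corner coordinates; the function name, box format, and example coordinates are illustrative assumptions rather than the paper's implementation.

```python
def bbox_iou(box_a, box_b):
    """IoU of two axis-aligned boxes in (x1, y1, x2, y2) form."""
    # Corners of the intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Example: a predicted MBB vs. a ground-truth MBB (coordinates are made up).
print(bbox_iou((30, 40, 120, 130), (35, 45, 125, 140)))  # ~0.77
```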
Figure 12. The IoU results of the proposed model (a) for AAA and (b) for AAD.
Table 1. Number of abdominal CT images used in the dataset for training, testing, and validation.

Phase | No Disease | Abdominal Aortic Aneurysm | Abdominal Aortic Dissection
Training | 3428 | 3603 | 3603
Test | 430 | 450 | 450
Validation | 428 | 451 | 451
Table 2. Several significant parameters used in the study.

Parameter | Definition
Hardware | NVIDIA GeForce RTX 3090
Software | Python 3.8, TensorFlow 2.5, CUDA 11.4
Data preprocessing | Image normalization to the range 0–1; random cropping and rotation for data augmentation
Evaluation metrics | Accuracy, sensitivity, specificity, F1-score, and Dice coefficient
Convolution layer kernel size | 3 × 3
Output nodes | 3 classes (no disease, aneurysm, or dissection)
Learning rate | 0.001
Optimization method | Adam
Batch size | 32
Number of epochs | 50
Dropout | 0.5
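To make the Table 2 settings concrete, the sketch below wires them into a deliberately simple Keras classifier. The architecture is a toy stand-in, not the proposed hybrid CNN; `build_model`, the 256 × 256 input size, and the training tensors `x_train`/`y_train` are all hypothetical placeholders.

```python
# A minimal sketch of the Table 2 training setup (TensorFlow/Keras).
# The toy architecture below is NOT the paper's hybrid CNN.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(256, 256, 1), num_classes=3):
    # 3 x 3 convolution kernels and 0.5 dropout, as listed in Table 2.
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),  # 3 output nodes
    ])

model = build_model()
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),  # Table 2
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(x_train, y_train, batch_size=32, epochs=50)  # Table 2 settings
```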
Table 3. Performance results of the AAA and AAD classification for training, testing, and validation. Accuracy is reported once per method and phase.

Method | Phase | Disease Type | Precision | Recall | F1-Score | Accuracy
ResDenseUNet | Training | No | 0.8997 | 0.9022 | 0.9009 | 0.8949
ResDenseUNet | Training | Aneurysm | 0.8931 | 0.8956 | 0.8944 |
ResDenseUNet | Training | Dissection | 0.8922 | 0.8873 | 0.8897 |
ResDenseUNet | Test | No | 0.7904 | 0.8069 | 0.7986 | 0.8000
ResDenseUNet | Test | Aneurysm | 0.7951 | 0.8022 | 0.7986 |
ResDenseUNet | Test | Dissection | 0.8146 | 0.7911 | 0.8027 |
ResDenseUNet | Validation | No | 0.8098 | 0.8060 | 0.8079 | 0.8097
ResDenseUNet | Validation | Aneurysm | 0.8151 | 0.8115 | 0.8133 |
ResDenseUNet | Validation | Dissection | 0.8043 | 0.8115 | 0.8079 |
INet | Training | No | 0.9071 | 0.9095 | 0.9083 | 0.9026
INet | Training | Aneurysm | 0.9010 | 0.9025 | 0.9018 |
INet | Training | Dissection | 0.8999 | 0.8961 | 0.8980 |
INet | Test | No | 0.8413 | 0.8511 | 0.8462 | 0.8488
INet | Test | Aneurysm | 0.8406 | 0.8555 | 0.8480 |
INet | Test | Dissection | 0.8649 | 0.8400 | 0.8523 |
INet | Validation | No | 0.8554 | 0.8574 | 0.8564 | 0.8556
INet | Validation | Aneurysm | 0.8577 | 0.8558 | 0.8568 |
INet | Validation | Dissection | 0.8536 | 0.8536 | 0.8536 |
C-Net | Training | No | 0.9161 | 0.9151 | 0.9156 | 0.9101
C-Net | Training | Aneurysm | 0.9083 | 0.9100 | 0.9091 |
C-Net | Training | Dissection | 0.9063 | 0.9056 | 0.9060 |
C-Net | Test | No | 0.8568 | 0.8627 | 0.8597 | 0.8631
C-Net | Test | Aneurysm | 0.8558 | 0.8711 | 0.8634 |
C-Net | Test | Dissection | 0.8769 | 0.8555 | 0.8661 |
C-Net | Validation | No | 0.8758 | 0.8738 | 0.8748 | 0.8744
C-Net | Validation | Aneurysm | 0.8738 | 0.8758 | 0.8748 |
C-Net | Validation | Dissection | 0.8736 | 0.8736 | 0.8736 |
Proposed CNN | Training | No | 0.9234 | 0.9189 | 0.9211 | 0.9163
Proposed CNN | Training | Aneurysm | 0.9137 | 0.9172 | 0.9155 |
Proposed CNN | Training | Dissection | 0.9120 | 0.9128 | 0.9124 |
Proposed CNN | Test | No | 0.8778 | 0.8860 | 0.8819 | 0.8872
Proposed CNN | Test | Aneurysm | 0.8793 | 0.8717 | 0.8755 |
Proposed CNN | Test | Dissection | 0.9004 | 0.8844 | 0.8923 |
Proposed CNN | Validation | No | 0.8839 | 0.8901 | 0.8870 | 0.8857
Proposed CNN | Validation | Aneurysm | 0.8847 | 0.8847 | 0.8847 |
Proposed CNN | Validation | Dissection | 0.8883 | 0.8824 | 0.8854 |
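The per-class scores in Table 3 follow the usual definitions: precision = TP/(TP + FP), recall = TP/(TP + FN), and F1 = 2 × precision × recall / (precision + recall). They can be computed directly from confusion matrices such as those in Figures 3–6; the NumPy sketch below does so for an illustrative 3-class matrix whose values are made up, not read from the figures.

```python
import numpy as np

def per_class_metrics(cm):
    """Per-class precision, recall, and F1, plus overall accuracy, from a
    K x K confusion matrix with rows = true class, columns = predicted."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / cm.sum(axis=0)  # TP / (TP + FP), per predicted class
    recall = tp / cm.sum(axis=1)     # TP / (TP + FN), per true class
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = tp.sum() / cm.sum()
    return precision, recall, f1, accuracy

# Illustrative 3-class matrix (no disease, aneurysm, dissection).
cm = [[390,  22,  18],
      [ 25, 395,  30],
      [ 20,  28, 402]]
precision, recall, f1, accuracy = per_class_metrics(cm)
print(precision.round(4), recall.round(4), f1.round(4), round(accuracy, 4))
```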
Table 4. The comparison of the IoU accuracy rates of AAA and AAD detection.

Authors | Publication Year | Method | Intersection over Union (IoU)
Khened et al. [23] | 2019 | ResDenseUNet | 0.7924
Weng and Zhu [22] | 2021 | INet | 0.8163
Barzekar and Yu [24] | 2022 | C-Net | 0.8248
Chen et al. [57] | 2021 | Cascaded neural networks | 0.8251
Hackstein et al. [58] | 2021 | Naive Bayes and K-Nearest-Neighbor | 0.8328
Proposed | | Proposed CNN method | 0.8376
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
