Detecting Coronary Artery Disease from Computed Tomography Images Using a Deep Learning Technique

In recent times, coronary artery disease (CAD) has become one of the leading causes of morbidity and mortality across the globe. Diagnosing the presence and severity of CAD is essential for choosing the best course of treatment. Computed tomography (CT) provides high-spatial-resolution images of the heart and coronary arteries in a short time and allows excellent visualization of the coronary arteries. However, analyzing cardiac CT scans for signs of CAD presents many challenges. Research studies apply machine learning (ML) to overcome these limitations with high accuracy and consistent performance, and convolutional neural networks (CNN) are widely applied in medical image processing to identify diseases. Nevertheless, efficient feature extraction remains in demand, as the feature extraction process is one of the key factors in improving the efficiency of ML techniques. Thus, this study develops a method to detect CAD from CT angiography images, proposing a feature extraction method and a CNN model that detect CAD in minimum time with optimal accuracy. Two datasets are utilized to evaluate the performance of the proposed model. The present work is unique in applying a feature extraction model with a CNN for CAD detection. The experimental analysis shows that the proposed method achieves 99.2% and 98.73% prediction accuracy, with F1 scores of 98.95 and 98.82, for the benchmark datasets. In addition, the proposed CNN model achieves areas under the receiver operating characteristic and precision-recall curves of 0.92 and 0.96 for dataset 1 and 0.91 and 0.90 for dataset 2, respectively. The findings highlight that the performance of the proposed feature extraction and CNN model is superior to that of the existing models.


Introduction
Coronary artery disease (CAD) has recently become regarded as one of the most dangerous and life-threatening chronic diseases [1]. Blockage and narrowing of the coronary arteries are the primary causes of heart failure; the coronary arteries must remain open to provide the heart with adequate blood [2][3][4]. According to a recent survey, the United States has the highest prevalence of heart disease and the highest ratio of heart disease patients [5]. Among the most frequent symptoms of heart disease are shortness of breath, swollen feet, and fatigue. CAD is the most common type of heart disease and can cause chest discomfort, stroke, and heart attack. Besides CAD, there are heart rhythm issues, congestive heart failure, congenital heart disease, and other cardiovascular diseases [6].
Traditional methods of investigating cardiac disease are complex [7][8][9][10]. The lack of medical diagnostic instruments and automated systems makes pulmonary heart disease detection and treatment challenging in developing nations. However, to reduce the impact of CAD, the study sets the following objectives:

•	To build a CNN model to predict CAD from CT images.
•	To improve the performance of the CNN by reducing the number of features.

The rest of the paper is organized as follows. Section 2 reviews the related literature, and Section 3 describes the research methodology. Results and discussion are highlighted in Section 4. Finally, Section 5 concludes the study with its future improvements.

Literature Review
High-accuracy data-mining techniques can identify risk factors for heart disease, and studies on the diagnosis of CAD are available in the literature [1][2][3][4][5]. An artificial immune recognition system (AIRS) with K-nearest neighbor (KNN) and clinical data was used to develop a system for diagnosing CAD, achieving an accuracy rate of 87%.
The authors [1] developed and evaluated a deep-learning algorithm for diagnosing CAD based on facial photographs. Patients who underwent coronary angiography or CT angiography at nine Chinese locations participated in a multicenter cross-sectional study to train and evaluate a deep CNN to detect CAD from patient facial images. In total, 5796 patients were included in the study and randomly assigned to training and validation groups for algorithm development. According to the findings, a deep-learning algorithm based on facial photographs can help predict CAD.
According to a study [2], the combination of semi-upright and supine stress myocardial perfusion imaging with deep learning can be used to predict the presence of obstructive disease. The total perfusion deficit was calculated using standard gender and camera-type limits. A study [3] employed interferometric optical coherence tomography (OCT) in cardiology to characterize coronary artery tissues, yielding a resolution of 10-20 µm. Using OCT, the authors [3] investigated various deep learning models for robust tissue characterization to learn the intracoronary pathological formations induced by Kawasaki disease (KD). A total of 33 historical cases of intracoronary cross-sectional images from different pediatric patients with KD were used in the experimentation. The authors extracted and compared in-depth features generated from three pre-trained convolutional networks, and voting determined the final classification.
The authors [6] used deep-learning analysis of the left ventricular myocardium to identify individuals with functionally significant coronary stenosis in rest coronary CT angiography (CCTA). The study included 166 participants who had sequentially acquired invasive fractional flow reserve (FFR) measurements and CCTA scans. Analyses were carried out in stages to identify patients with functionally significant stenosis of the coronary arteries.
Using deep learning, the researchers [7] investigated the accuracy of automatic prediction of obstructive disease from myocardial perfusion imaging compared to the total perfusion deficit. Single-photon emission computed tomography may be used to build deep convolutional neural networks that better predict coronary artery disease in individual patients and individual vessels. Obstructive disease was found in 1018 patients (62%) and 1797 of 4914 (37%) arteries in this study. Deep learning achieved a larger area under the receiver operating characteristic curve for disease prediction than the total perfusion deficit. Myocardial perfusion imaging can thus be improved using deep learning compared to existing clinical techniques.
In the study [8], several deep-learning algorithms were used to classify electrocardiogram (ECG) data into CAD, myocardial infarction, and congestive heart failure. In terms of classification, CNNs and LSTMs tend to be the most effective architectures to use. This study built and verified a 16-layer LSTM model using a 10-fold cross-validation procedure. The accuracy of the classification was 98.5%. They claimed their algorithm might be used in hospitals to identify and classify aberrant ECG patterns.
The authors [9] proposed an enhanced DenseNet algorithm based on transfer learning techniques for fundus medical imaging. Fundus medical imaging data were the subject of two separate experiments: a DenseNet model trained from scratch and one fine-tuned using transfer learning. Models pre-trained on a natural image dataset are transferred to fundus medical images to improve performance. This method improves fundus medical image categorization accuracy, which is critical for determining a patient's medical condition.
The study [10] developed and implemented a heterogeneous low-light image-enhancing approach based on DenseNet generative adversarial network. Initially, a generative adversarial network is implemented using the DenseNet framework. The generative adversarial network is employed to learn the feature map from low-light to normal-light images.
To overcome the gradient vanishing problem in deep networks, the DenseNet convolutional neural network combines the strengths of ResNet and Highway networks through dense connections [11,12]. As a result, all network layers can be directly connected in DenseNet: each subsequent layer's input is the concatenation of the outputs of all preceding layers. Weak information transmission in the deep network is the primary cause of vanishing gradients [13]. The dense block design, in which each layer is directly coupled to the input and the loss, is a more efficient way to reduce gradient disappearance and improve network convergence [14].
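The dense connectivity described above can be illustrated with a minimal sketch; the toy list-valued "layers" below are assumptions standing in for real convolutions, but the wiring is the point: each layer receives the concatenation of the outputs of every preceding layer.

```python
def dense_block(x, layers):
    """DenseNet-style connectivity sketch: each layer's input is the
    concatenation of all preceding feature lists; the block output
    concatenates everything produced so far."""
    features = [x]                        # the initial input
    for layer in layers:
        out = layer(sum(features, []))    # concatenate all previous outputs
        features.append(out)
    return sum(features, [])              # final concatenation

# toy "layer": maps a feature list to a new 2-element feature list
double_first_two = lambda feats: [2 * f for f in feats[:2]]
out = dense_block([1, 2], [double_first_two, double_first_two])
```

Because every layer sees the input and every earlier output directly, gradients have a short path back to the input during back-propagation, which is the convergence benefit the text describes.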
The authors [15] employed a bright-pass filter and logarithmic transformation to improve image quality. The authors [16] proposed a weighted variational model for simultaneous reflectance and illumination estimation (SRIE) to deal with overly enhanced dark areas. The authors [17] developed low-light image enhancement by illumination map estimation (LIME), which estimates only the illumination component; the reflection component was calculated using local consistency and structural perception constraints, and the output was based on this calculation.
The study [18] used the Doppler signal and a neural network to gain the best possible CAD diagnosis. By combining the exercise test data with a support vector machine (SVM), the authors [19] achieved an accuracy of 81.46% in the diagnosis of coronary artery disease (CAD). By employing multiple neural networks, authors [20] achieved an accuracy of 89.01% for CAD diagnosis using the Cleveland dataset [21]. It is possible to forecast artery stenosis disease using various feature selection approaches, including CBA, filter, genetic algorithm, wrapper, and numerical and nominal attribute selection. Also, Ref. [22] uses a new feature creation method to diagnose CAD.
Inception-v3 [24] is an enhanced version of GoogleNet and is applied in medical image analysis. One study categorized knee images by training support vector machines on deep features extracted from CaffeNets. Adult retinal fundus images were analyzed using a fine-tuned network to detect diabetic retinopathy [24]. Classification results using fine-tuned networks compete with human expert performance [25]. Recent research has focused on applying deep learning techniques to segment retinal optical coherence tomography (OCT) images [26][27][28]. Combining CNN and graph search methods, OCT retinal images are segmented; layer border classification probabilities are used in the Cifar-CNN architecture to partition the graph search layer [29,30].
The authors [31] proposed a deep learning technique to quantify and segment intraretinal cystoid fluid using a fuzzy CNN. Geographic atrophy (GA) segmentation using a deep network is the subject of another study [33]. An automated CAD detector was developed using a CNN with an encoder-decoder architecture [34]. In another study, researchers employed GoogleNet to identify retinal diseases in OCT images [35].
Several grayscale features collected from echocardiogram images of normal and CAD participants were proposed in [36] as a computer-aided diagnosis approach. In [24], heart rate (HR) signals derived from ECG data of normal and CAD participants were evaluated using non-linear, frequency-domain, and time-domain analysis; the study found that CAD participants' heart rate signals were less erratic than those of normal subjects. Recent CNN models are widely applied in CAD diagnostics [36]. In [37], the authors proposed a model for identifying cardiovascular diseases and obtained a prediction accuracy of 96.75%. Ali Md Mamun et al. [38] argued that a simple supervised ML algorithm can predict heart disease with high accuracy. The authors [39] developed a biomedical electrocardiogram (ECG)-based ML technique for detecting heart disease. Yan Jielu et al. [40] proposed a model to predict ion channel peptides from images. Table 1 outlines the features and limitations of the existing CNN models.

Table 1. Features and limitations of the existing CNN models.

Authors | Methodology | Features | Limitations
Lin S. et al. [1] | Conducted a cross-sectional study of CAD patients to validate CNN-based CAD detection. | The deep-learning algorithm could support physicians in detecting cardiovascular diseases. | Findings are based on specific locations; no benchmark dataset was used to evaluate the CNN model.
Jingsi Z. et al. [10] | Proposed a low-light image enhancement method. | The DenseNet framework reduced the noise in the images. | Lacks discussion of the application to bright images.
Abdar M. et al. [13] | Integrated a genetic algorithm and support vector machine for feature extraction. | N2Genetic-nuSVM showed better accuracy. | Lacks comparison with recent techniques.
Wolterink J.M. et al. [20] | Developed a 3D-dilated CNN to predict the radius of an artery from CCTA images. | The method extracted 92% of clinically relevant coronary artery segments. | Trained with a small dataset; the outcome may vary with the size of the dataset.
Papandrianos N. and Papageorgiou E. [21] | Applied a CNN model for CAD detection from images. | The method can differentiate infarction from healthy patients. | Classification accuracy is good, but benchmark evaluation techniques are lacking.
Nishi et al. [27] | Developed an image segmentation technique for predicting CAD. | The method could produce effective results. | Performance is based on a single dataset.
Cho et al. [30] | Proposed an intravascular ultrasound-based algorithm for classifying attenuation and calcified plaques. | The model achieved 98% accuracy. | Performance is based on a dataset of 598 patients.
Morris S.A. and Lopez K.N. [31] | Developed a detection model for congenital heart disease in the fetus. | The model's performance is better than recent models. | Evaluated using only 1326 fetal echocardiograms.
Cheung et al. [36] | Proposed an image segmentation approach using the U-Net model. | The model achieved a 91.32% dice similarity coefficient. | Lacks discussion of the quality of the images used in the study.
Bhanu Prakash Doppala et al. [37] | Developed an ensemble model for cardiovascular disease detection. | The model achieves an accuracy of 96.75%. | The voting mechanism may lead to a larger computation time.
Ali Md Mamun et al. [38] | Proposed an ML algorithm for heart disease detection. | The model achieved 100% accuracy on a Kaggle dataset. | Not tested with different datasets.
Khanna Ashish et al. [39] | Developed an ML technique for heart disease detection from ECG. | Employed a regression model to predict heart disease from ECG. | Limited discussion of model uncertainty.
Yan Jielu et al. [40] | Proposed an ML technique for predicting ion channel peptides. | The model achieves highly accurate results. | The dataset is relatively small.

Research Methodology
According to the research questions, the researchers developed a CNN architecture to predict positive CAD patients from CT images. Figure 1 presents the proposed architecture. Initially, the images are processed to extract the features. The CNN model treats the extracted features, generating output through an activation function. The following part of this section provides the information related to datasets, feature extraction, CNN construction, and evaluation metrics.

In this study, researchers employed two datasets of CT angiography images. The details of the datasets are as follows. Dataset 1 [4] contains coronary artery image sets of 500 patients. Eighteen views of the same straightened coronary artery are shown in each mosaic projection view (MPV). The training-validation-test image sets have a 3/1/1 ratio (300/100/100), with 50% normal and 50% diseased cases in each subset. To improve modeling and dataset balance, 2364 (i.e., 394 × 6) artery images were obtained from the 300 training instances. Only 2304 images of the training dataset were augmented; the standard component, all validation images, and all testing images were left unchanged. Balance was maintained in the validation dataset by randomly selecting one artery per normal case (50 images) and per diseased case (50 images). Figure 2a,b shows the CT images of positive and negative CAD patients.

Dataset 2 [5] consists of CT angiography images of 200 patients. This dataset used images from a multicenter registry of patients who had undergone clinically indicated coronary computed tomography angiography (CCTA). The annotated ground truth included the ascending and descending aortas (PAA, DA), superior and inferior venae cavae (SVC, IVC), pulmonary artery (PA), coronary sinus (CS), right ventricular wall (RVW), and left atrial wall (LAW). Figure 3 shows the CT images of dataset_2. Table 2 outlines the description of the datasets. Both datasets contain CT images of CAD and non-CAD patients.

The study applies the following steps for identifying CAD from the datasets using the CNN architecture.

Step 1: Preprocess images. The CCTA images are processed to fit the feature extraction phase. All images are resized to 600 × 600 pixels. This size suits the feature extraction process, generating a reduced set of features without losing any valuable data.
Step 2: Feature extraction. The proposed study applies an enhanced features from accelerated segment test (FAST) [6] algorithm for extracting features to support the pooling layer of the CNN in producing effective feature maps, answering RQ1. To reduce the processing time of the FAST algorithm, researchers employed the enhanced FAST [5]. Figure 4 showcases the features extracted from a 4 × 4 image into a 2 × 2 image; it also highlights that the actual image can be reconstructed from the 2 × 2 image back to the 4 × 4 image. The extraction process is described as follows. Let an image I of M1 × M2 pixels be divided into segments of size S1 × Sn. The number of segments is N1 × N2, where N1 = M1/S1 and N2 = M2/Sn. The segments are represented in Equation (1).
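The segmentation in Equation (1) can be sketched as follows, a minimal illustration assuming the image dimensions are exact multiples of the segment size (plain nested lists stand in for real CT images):

```python
def segment_image(img, s1, sn):
    """Split an M1 x M2 image (list of rows) into non-overlapping
    segments of size s1 x sn. The number of segments is N1 x N2,
    with N1 = M1/s1 and N2 = M2/sn, as in the text."""
    m1, m2 = len(img), len(img[0])
    n1, n2 = m1 // s1, m2 // sn
    segments = []
    for x in range(n1):
        for y in range(n2):
            seg = [row[y * sn:(y + 1) * sn]
                   for row in img[x * s1:(x + 1) * s1]]
            segments.append(seg)
    return segments                       # N1 * N2 segments in row order

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
segs = segment_image(img, 2, 2)           # four 2 x 2 segments
```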
where Sd_{x,y} refers to the image segment in the x and y directions and is described in Equation (2), and where i and j represent the size of the image segment Sd_{x,y}. Equations (3) and (4) describe the pixel values of the image segments.
The transformation function ensures that the image or segment can be reconstructed to its original form. It allows the proposed method to backtrack through the CNN network to fine-tune its performance. The transformation function for each segment is given in Equation (5), where φMd_{x,y} represents a part of an extracted feature from the image segment, x = 1, ..., N1, y = 1, ..., Nn, and T represents the transform matrix, Z_{M1} ∈ Z^O_{M1}, with O the order of the transformation. The segment can be reconstructed as in Equation (6).
Sequentially, the process must be repeated N1 × Nn times to extract a set of features from the image. Thus, the transform coefficients of all image segments can be integrated using Equations (7)-(11).
Equations (8) and (9) denote the features F_{S1} and F_{Sn}, which can be constructed using Z_{s1} and Z_{sn}. Equation (10) integrates the features, and Equation (11) defines the reconstruction of the image using the extracted features.
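As a minimal sketch of the transform-and-reconstruct idea in Equations (5) and (6), the following uses a 2 × 2 orthonormal Haar matrix as an assumed stand-in for the transform matrix T (the paper's exact transform is not reproduced here); because T is orthonormal, the segment is recovered exactly:

```python
import math

s = 1 / math.sqrt(2)
T = [[s, s], [s, -s]]                      # orthonormal and symmetric: T^-1 = T^T = T

def matmul(a, b):
    """Plain-Python matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transform(seg):
    """Forward transform of one segment: Md = T * Sd * T^T."""
    return matmul(matmul(T, seg), T)

def reconstruct(md):
    """Inverse transform: Sd = T^T * Md * T (here T^T = T)."""
    return matmul(matmul(T, md), T)

seg = [[10.0, 20.0], [30.0, 40.0]]
coeffs = transform(seg)                    # transform coefficients for this segment
back = reconstruct(coeffs)                 # recovers seg up to float rounding
```

The reconstruction property is what lets the method backtrack from extracted features to the original image, as the text describes.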
Step 3: Processing features. The extracted features F_{S1}, ..., F_{Sn} are treated as input to the proposed CNN. DenseNet ensures the transmission of information between the layers through the direct link between each layer, so a back-propagation method can be implemented in DenseNet. The feature extraction process reduces the number of blocks in DenseNet and improves its performance; therefore, the modified DenseNet contains a smaller number of blocks and parameters. Research studies highlight that complex networks require a greater number of samples. This study applies DenseNet-161 (K = 48), which includes three block modules. Figure 5 illustrates the proposed DenseNet model. Most CNN models depend on the features to make a decision; thus, the feature extraction process is crucial in disease detection techniques. A minimal set of features reduces the training time of the CNN model, and the features should support the CNN in generating effective results. The researchers also applied an edge-detection technique.
Step 3.1: Pooling layer. Two-dimensional filters integrate the features in the area they cover as they slide over each feature-map channel. The dimension of the pooling layer output is given in Equation (12), where Ih is the height of the feature map, Iw the width, Ic the number of channels, f the filter size, and l the stride length.
Step 3.2: Generating output. Transfer learning is adopted to alter the architecture of DenseNet, and Leaky ReLU is used as the activation function. Existing CNN architectures are employed for comparison; the GitHub portal (https://github.com/titu1994/DenseNet, accessed on 7 December 2021) is utilized to implement them. The studies [10,18,21] are employed to evaluate the performance of the proposed CNN (PCNN) model. In addition, CNN models including GoogleNet and Inception V3 are used for performance evaluation. The following form of the sigmoid function is applied for implementing the modified DenseNet. Figure 6 represents the proposed feature extraction for pre-processing the CT images and extracting the valuable features.
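Assuming the standard no-padding form of the pooling output-size relation in Equation (12), each spatial dimension shrinks to floor((size − f)/l) + 1 while the channel count is unchanged:

```python
def pool_output_dims(ih, iw, ic, f, l):
    """Output dimensions of a 2D pooling layer with no padding:
    height and width become floor((dim - f) / l) + 1, and the
    number of channels ic is preserved."""
    return ((ih - f) // l + 1, (iw - f) // l + 1, ic)

# e.g., 2x2 pooling with stride 2 over a 600 x 600 x 3 feature map
dims = pool_output_dims(600, 600, 3, 2, 2)
```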
Furthermore, Figure 7 highlights the proposed CNN technique for predicting CAD from the CT images. The study constructs a feed-forward back-propagation network; thus, Leaky ReLU is employed as the activation function in Equation (13) to produce an outcome. Leaky ReLU treats a negative value as a minimal linear component of x: f(x) = x if x ≥ 0, and f(x) = 0.01x otherwise.
Step 4: Evaluation metrics. The study applies the benchmark evaluation metrics, including accuracy, recall, precision, and F-measure, to provide a solution for RQ2. The metrics are computed as shown in Equations (14)-(18). In addition, the Matthews correlation coefficient (MCC) (Equation (19)) and Cohen's Kappa (K) (Equation (20)) are employed to ensure the performance of the proposed method.
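The metrics in Equations (14)-(19) follow the standard confusion-matrix definitions (assumed here, as the equations are not reproduced above), which can be sketched as:

```python
def metrics(tp, fp, fn, tn):
    """Standard binary-classification metrics from the confusion
    matrix: accuracy, precision, recall, F-measure, and MCC."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    mcc = (tp * tn - fp * fn) / (
        ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5)
    return accuracy, precision, recall, f1, mcc

acc, prec, rec, f1, mcc = metrics(tp=90, fp=10, fn=10, tn=90)
```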
The minimum MCC is −1, which indicates a wrong prediction, whereas the maximum MCC is +1, which denotes a perfect prediction.
MCC and K are class-symmetric, reflecting the classification accuracy of the ML techniques. Finally, the computational complexity of the CNN technique is presented in terms of time and space.
To assess the predictive uncertainty of the proposed CNN (PCNN), the researchers applied the standard deviation (SD), entropy (E), and confidence interval (CI). The mathematical expression of the CI is defined in Equation (21).
where ā represents the mean of the predictive distribution of an image a(i), N is the total number of predictions, and z is the critical value of the distribution. The researchers computed the CI at 95% confidence; thus, the value of z is 1.96. Finally, the researchers used the entropy E of the prediction, calculated over the mean predictive distribution, to evaluate the uncertainty of the proposed model. The mathematical expression of E is defined in Equation (22).
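A minimal sketch of Equations (21) and (22), assuming the standard forms CI = ā ± z·SD/√N and the binary predictive entropy −(p log p + (1 − p) log(1 − p)):

```python
import math

def confidence_interval(preds, z=1.96):
    """95% CI of a predictive distribution: mean +/- z * SD / sqrt(N)."""
    n = len(preds)
    mean = sum(preds) / n
    sd = math.sqrt(sum((p - mean) ** 2 for p in preds) / n)
    half = z * sd / math.sqrt(n)
    return mean - half, mean + half

def entropy(p):
    """Binary predictive entropy; low values indicate a confident model."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

lo, hi = confidence_interval([0.91, 0.93, 0.92, 0.94, 0.90])
h = entropy(0.92)
```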

Experiment and Results
The PCNN is implemented in Python on the Windows 10 Professional platform. The existing algorithms are developed using the GitHub portal. Both datasets are divided into training and testing sets, and the CNN architectures are trained with the relevant training sets of dataset_1 and dataset_2.
To evaluate the performance of PCNN, the datasets are utilized with 5-fold cross-validation. Statistical tests, including SD, CI using binary classification, and E, are applied to dataset_1 and dataset_2. Table 3 presents the performance of PCNN during cross-validation using dataset_1; it highlights that PCNN achieves more than 98% accuracy, precision, recall, F-measure, and specificity. Likewise, Table 4 denotes the cross-validation outcome for dataset_2.
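The 5-fold split can be sketched as follows; contiguous folds are assumed, as the study's exact fold assignment is not specified:

```python
def kfold_indices(n, k=5):
    """Split n sample indices into k contiguous, near-equal folds and
    return (train, test) index lists, as in k-fold cross-validation."""
    fold = n // k
    idx = list(range(n))
    splits = []
    for i in range(k):
        start = i * fold
        end = start + fold if i < k - 1 else n   # last fold takes any remainder
        test = idx[start:end]
        train = idx[:start] + idx[end:]
        splits.append((train, test))
    return splits

splits = kfold_indices(10, 5)   # five folds; each sample is tested exactly once
```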

Uncertainty Estimation
In this study, the researchers apply Monte Carlo dropout (MC dropout) to compute the model uncertainty. The dropout value is chosen so that the predictive distribution is not overly diverse and the CI remains narrow. The researchers experimentally found that an MC dropout value of 0.379 is optimal for this model. The predictive distribution is obtained by evaluating PCNN 200 times for each image. Furthermore, model uncertainty is computed using CI, SD, and E.
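The MC-dropout procedure can be sketched as below; the toy linear-sigmoid "model" and its inputs are assumptions for illustration, not the paper's network, but the mechanism is the same: keep dropout active at test time, run many stochastic forward passes, and summarize the predictive distribution by its mean and SD.

```python
import math
import random

def mc_dropout_predict(model, x, p=0.379, runs=200, seed=0):
    """Monte Carlo dropout sketch: apply an inverted-dropout mask to
    the input on every forward pass and collect the predictions;
    their mean and SD summarize the predictive distribution."""
    rng = random.Random(seed)
    preds = []
    for _ in range(runs):
        mask = [0.0 if rng.random() < p else 1.0 / (1 - p) for _ in x]
        preds.append(model([xi * mi for xi, mi in zip(x, mask)]))
    mean = sum(preds) / runs
    sd = (sum((v - mean) ** 2 for v in preds) / runs) ** 0.5
    return mean, sd

# toy "model": sum of inputs passed through a sigmoid
model = lambda z: 1 / (1 + math.exp(-sum(z)))
mean, sd = mc_dropout_predict(model, [0.2, -0.1, 0.4])
```

A small SD across the 200 passes corresponds to the low-uncertainty behavior reported in Tables 5 and 6.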
Tables 5 and 6 highlight the model uncertainty for Dataset_1 and Dataset_2, respectively. The proposed model achieved low entropy and standard deviation (SD) for both datasets. It can be observed from Tables 5 and 6 that the uncertainty of the proposed model is lower than that of the other CNN architectures [18]; thus, PCNN performs better than the existing CNN models for CAD prediction. Dataset_1 contains a larger number of images, and the mapping of features made the CNN architectures generate more features. However, the feature extraction process of the proposed method enabled PCNN to produce fewer features while maintaining better performance than the existing architectures. Figure 8 presents the comparative analysis of the CNN architectures; it is evident from Figure 8 that PCNN outperforms the current architectures.

Likewise, Table 8 outlines the performance of the CNN architectures on Dataset_2. PCNN achieves an accuracy, precision, recall, F-measure, and specificity of 98.96, 98.2, 98.52, 98.36, and 98.7, respectively, whereas GoogleNet scores the lowest at 97.1, 96.7, 97.1, 96.9, and 96.4. The absence of temporary memory is one of the limitations of the Banerjee model that reduces its predictive performance. In addition, the outcomes in Tables 5 and 6 suggest that the performance of PCNN is higher than that of the existing CNN architectures. Figure 9 shows the corresponding graph for Table 6.

In addition to the initial comparative analysis, the researchers applied the Matthews correlation coefficient (MCC) and Cohen's Kappa to evaluate the performance of PCNN. Figures 10 and 11 reveal that PCNN achieved superior MCC and Kappa scores compared to the existing models.

Figure 12 highlights each CNN's space and computation time for both Dataset_1 and Dataset_2.

Table 10 outlines the error rate of the CNN architectures during the testing phase. The error rate of PCNN is 15.1 and 13.9 for Dataset_1 and Dataset_2, respectively, whereas the Jingsi model scores 20.5 and 19.6, higher than the other CNN models. The outcome emphasizes the efficiency of the feature extraction process of PCNN. Figure 13 illustrates the error rates of the CNN models.

Figure 14 presents the receiver operating characteristic (ROC) and precision-recall (PR) curves for Dataset_1 during the testing phase. It shows that PCNN achieves a better area under the ROC curve (AUC) for both the CAD and No-CAD classes. Similarly, Figure 15 reflects the ROC and PR curves for Dataset_2: PCNN achieves a better ROC AUC score of 0.93, and the PR-curve AUC of 0.91 indicates that PCNN predicts CAD better than the existing models.

Likewise, Table 12 reflects the outcome for Dataset_2. It shows that PCNN generates an output with fewer parameters, FLOPs, and learning rates than the existing CNN models.
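The evaluation quantities used above can be illustrated with a minimal pure-Python sketch: predictive entropy, as an uncertainty measure of the kind reported in Tables 5 and 6, and MCC and Cohen's Kappa, as used in Figures 10 and 11. The function names and toy values below are illustrative only and are not taken from the paper.

```python
import math

def predictive_entropy(probs):
    """Shannon entropy of a softmax output; lower means a more certain prediction."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def mcc(tp, fp, fn, tn):
    """Matthews correlation coefficient from binary confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

def cohens_kappa(tp, fp, fn, tn):
    """Cohen's Kappa: agreement beyond chance for a binary classifier."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n  # observed agreement (accuracy)
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe)

# A confident softmax output has lower entropy than an ambiguous one.
print(predictive_entropy([0.98, 0.02]) < predictive_entropy([0.55, 0.45]))  # True
```

In practice these metrics would be computed per test image (entropy) or over the whole confusion matrix (MCC, Kappa); library routines such as `sklearn.metrics.matthews_corrcoef` and `sklearn.metrics.cohen_kappa_score` give the same values.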

Clinical Insights and Limitations
PCNN generates outcomes that are superior to the existing CNN models, so it can be employed in real-time applications to support physicians in diagnosing CAD. However, training the model takes more time and requires a better graphics processing unit. An imbalanced dataset may also reduce the performance of the proposed method. The researchers introduced the concept of temporary storage to hold intermediate results. Nonetheless, there is a possibility of losing information due to the multiple features. The lack of coordinate frames may lead to adversarial visualization of images. The feature selection process can improve the images' internal representation. Finally, the structure of PCNN requires a considerable amount of data to produce a compelling result. To maintain better performance, data pre-processing is necessary to handle image rotation and scaling tasks.
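As one concrete form of the rotation handling mentioned above, orientation can be normalised by inverse-mapping each output pixel back into the source image. The following nearest-neighbour sketch in pure Python is illustrative only (the function name and toy inputs are assumptions, not the paper's actual pre-processing pipeline):

```python
import math

def rotate_nn(img, degrees):
    """Nearest-neighbour rotation of a 2-D list-of-lists image about its centre."""
    h, w = len(img), len(img[0])
    theta = math.radians(degrees)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Inverse mapping: find the source pixel that lands at (y, x).
            sx = math.cos(theta) * (x - cx) + math.sin(theta) * (y - cy) + cx
            sy = -math.sin(theta) * (x - cx) + math.cos(theta) * (y - cy) + cy
            si, sj = round(sy), round(sx)
            if 0 <= si < h and 0 <= sj < w:
                out[y][x] = img[si][sj]
    return out
```

On real CT volumes a library routine such as `scipy.ndimage.rotate` with proper interpolation would be used instead; the sketch only shows the underlying coordinate mapping that such a pre-processing step performs.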

Conclusions
This study developed a CNN model for predicting CAD from CT images. The existing CNN architectures require a high-end hardware configuration for processing complex images, so a feature extraction technique is employed to support the proposed CNN model. The proposed method modifies the existing DenseNet architecture in order to implement a feed-forward back-propagation network. Two benchmark datasets are used for the performance evaluation. The outcome of the experimental analysis highlights the superior performance of the proposed CNN model in terms of accuracy, precision, recall, F-measure, and specificity. Moreover, the proposed CNN's memory consumption and computation time during the training phase are lower than those of the existing CNNs. In addition, the ROC and PR curve analyses suggest that the proposed method can predict CAD with a lower false-positive rate and higher prediction accuracy. Thus, the proposed method can support physicians in detecting and preventing CAD. In the future, the proposed model can be extended to predict CAD from electronic health records.

Data Availability Statement:
The data presented in this study are available on request from the corresponding author.

Conflicts of Interest:
The authors declare that they have no conflict of interest.