Article

Cardiovascular Disease Diagnosis from DXA Scan and Retinal Images Using Deep Learning

by Hamada R. H. Al-Absi 1, Mohammad Tariqul Islam 2, Mahmoud Ahmed Refaee 3, Muhammad E. H. Chowdhury 4 and Tanvir Alam 1,*

1 College of Science and Engineering, Hamad Bin Khalifa University, Doha 34110, Qatar
2 Computer Science Department, Southern Connecticut State University, New Haven, CT 06515, USA
3 Geriatric Department, Hamad Medical Corporation, Doha 3050, Qatar
4 Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
* Author to whom correspondence should be addressed.
Sensors 2022, 22(12), 4310; https://doi.org/10.3390/s22124310
Submission received: 17 April 2022 / Revised: 17 May 2022 / Accepted: 17 May 2022 / Published: 7 June 2022
(This article belongs to the Section Biomedical Sensors)

Abstract

Cardiovascular diseases (CVD) are the leading cause of death worldwide. People affected by CVDs may go undiagnosed until the occurrence of a serious cardiovascular event such as a stroke, heart attack, or myocardial infarction. In Qatar, there is a lack of studies focusing on CVD diagnosis based on non-invasive methods such as retinal images or dual-energy X-ray absorptiometry (DXA). In this study, we aimed at diagnosing CVD using a novel approach integrating information from retinal images and DXA data. We considered an adult Qatari cohort of 500 participants from Qatar Biobank (QBB) with an equal number of participants from the CVD and the control groups. We designed a case-control study and proposed a novel multi-modal, deep learning (DL)-based technique, combining data from two modalities (DXA and retinal images), to distinguish the CVD group from the control group. Uni-modal models based on retinal images and DXA data achieved 75.6% and 77.4% accuracy, respectively. The multi-modal model showed an improved accuracy of 78.3% in classifying the CVD and control groups. We used the gradient class activation map (GradCAM) to highlight the areas of interest in the retinal images that most influenced the decisions of the proposed DL model. It was observed that the model focused mostly on the centre of the retinal images, where signs of CVD such as hemorrhages were present. This indicates that our model can identify and make use of certain prognostic markers for hypertension and ischemic heart disease. From the DXA data, we found higher values for bone mineral density, fat content, muscle mass, and bone area across the majority of body parts in the CVD group compared to the control group, indicating better bone health in the Qatari CVD cohort. This seminal method based on DXA scans and retinal images demonstrates major potential for the early detection of CVD in a fast and relatively non-invasive manner.

1. Introduction

Globally, cardiovascular diseases (CVDs) remain the leading cause of mortality and hence of increasing healthcare costs in many countries [1]. According to the World Health Organization (WHO), CVDs account for 32% of overall global deaths, two-thirds of which occur in low- and middle-income countries [2]. Furthermore, CVD has caused 38% of non-communicable disease (NCD) deaths among people below 70 years of age (i.e., premature deaths) [3]. In the East Mediterranean region, 54% of total NCD deaths are caused by CVD, according to the WHO’s Eastern Mediterranean Regional Office [4]. In the State of Qatar, according to the Planning and Statistics Authority, 33.1% of total deaths in 2018 were caused by diseases related to the circulatory system including blood pressure, making CVD the top concern for Qatar in combating NCDs [5]. Although diagnosis and treatment methods and tools for CVD have advanced throughout the years, many people still go undiagnosed until the occurrence of serious CVD events such as stroke, heart attack, or myocardial infarction [3].
Risk factors such as age, gender, smoking, obesity, diabetes, hypertension, low-density lipoprotein (LDL) cholesterol, and a sedentary lifestyle [6] are known to be linked to CVD, and some of these factors have been studied and confirmed for the Qatari population [7,8]. In fact, there exist risk score calculators for CVD, such as the Framingham Risk Score [9] and ASCVD [10], which use some of the above factors to predict the 10-year risk of developing CVD. There also exist multiple clinical biomarkers, such as cardiac troponins I and T, C-reactive protein, D-dimer, and B-type natriuretic peptides, which have been shown to be linked to CVD [6]. Furthermore, with the advancement of medical imaging techniques, several imaging modalities have been used to diagnose CVD. Imaging modalities such as delayed enhancement cardiac magnetic resonance (DE-CMR) [11], echocardiography (ECHO) [12,13], ultrasound imaging [14], magnetic resonance imaging (MRI) [15,16,17], computed tomography (CT) [18,19,20], single-photon emission computerized tomography (SPECT) [21,22], and coronary CT angiography (CCTA) [23] are used to detect CVDs. Recently, the retinal fundus image has emerged as a non-invasive data modality for examining the heart condition. Retinal microvascular abnormalities have been found to be linked to CVDs [24,25]. For example, Wong et al. [25] showed that generalized arteriolar narrowing, focal arteriolar narrowing, arteriovenous nicking, and retinopathy, which are all abnormalities of the retinal vessels, can be used as indicators of cardiovascular disease. They also showed that fundus images can capture these abnormalities. In [26], the authors predicted a variety of CVD risk factors, such as hypertension, hyperglycemia, and dyslipidemia, from retinal fundus images. Moreover, retinal fundus images have been utilized to predict diabetes [27] and CVD risk factors [28,29].
Another imaging technique that could be used to diagnose CVD events is dual-energy X-ray absorptiometry (DXA), through lean and fat mass [30] and/or bone mineral density (BMD) [31]. DXA measurements of fat mass have been shown to have a strong correlation with CVD events as well as CVD risk [32]. Furthermore, a recent study has shown an independent association between lower BMD and a high risk of ASCVD events and death among women in South Korea [33]. DXA provides a number of advantages, including the ability to accurately and precisely measure and differentiate fat, lean, and bone components. It can assess the entire body in a single scan, making acquisition fast, while its low radiation dose makes it less hazardous. In addition, it is a non-invasive procedure [34].
In this paper, we focused on two imaging techniques, namely the DXA scan and retinal fundus images, to demonstrate their contribution to diagnosing CVD in the Qatari population. We selected these techniques considering their non-invasive nature as well as the ease of access to the data source, as both have already been incorporated in the Qatar Biobank (QBB) protocol to collect participants’ health status. Moreover, the capability of retinal images and DXA scans for CVD diagnosis has not been investigated thoroughly for the Qatari population. The contributions of this work are as follows:
  • We proposed a novel technique that uses retinal images to distinguish the CVD group from the control group with over 75% accuracy. To the best of our knowledge, this is a seminal work in CVD diagnosis from retinal images.
  • We conducted extensive experiments on DXA scan data and showed that it has reasonable discriminating ability to separate the CVD group from the control group, yielding over 77% accuracy in the Qatari population.
  • We proposed a superior multi-modal approach for CVD diagnosis that fuses tabular data (DXA) and image data (retinal images) to distinguish the CVD group from the control group with an accuracy of 78.3%. Hence, our study proposes a fast and relatively non-invasive approach to diagnose patients with CVD.

2. Related Work

2.1. Retinal Fundus Images

Retinal fundus images provide an easy-to-use, non-invasive, and accessible way to screen human eye health [35]. Retinal images allow medical professionals to examine the eye and diagnose diseases such as diabetic retinopathy, hypertension, or arteriosclerosis. They also provide diameter and tortuosity measurements of the retinal blood vessels, enabling the diagnosis of CVD [36]. ML-based techniques have been utilized to process retinal images for CVD risk factor prediction. Guo et al. [37] presented a study to investigate the association between CVD and a series of retinal measurements, and whether this association is independent of CVD risk factors in patients with type 2 diabetes. The age- and gender-based case-control study recruited 79 eligible patients with CVD and 150 non-CVD (control) participants and used three stepwise logistic regression models to evaluate CVD risk factors. The Area Under the Curve (AUC) was used to evaluate the three models, which achieved AUC values of 0.692, 0.661, and 0.775, respectively. Results of the first model showed that hypertension, longer diabetes duration, and decreased high-density lipoprotein (HDL) cholesterol were associated with CVD. Furthermore, the second model showed that patients with diabetic retinopathy, a smaller arteriolar-to-venular diameter ratio (AVR), and a smaller arteriolar junctional exponent (JE) were likely to develop CVD. Finally, the third model showed that CVD is associated with hypertension, longer diabetes duration, higher HbA1c level, smaller AVR, arteriolar branching coefficient (BC), and JE, as well as a larger venular length-to-diameter ratio. This shows that retinal information is associated with CVD independently of CVD risk factors in patients with type 2 diabetes, although further investigation is needed. Cheung et al. [29] developed a DL-based model that can measure retinal vascular calibre from retinal images. The authors used the Singapore I Vessel Assessment (SIVA) to compare the assessments produced by the DL system with those of human graders and found a strong correlation between the two. The dataset used in that study was a composite of images from 15 different retinal imaging datasets, totaling over 70,000 images. The results indicated strong correlations (between 0.82 and 0.95) between the DL system and human graders using SIVA across the various datasets. Furthermore, multivariable linear regression analysis was carried out between CVD risk factors and the vessel caliber measurements made by both the DL system and humans. The central retinal artery equivalent (CRAE) and central retinal vein equivalent (CRVE) were both assessed using regression analysis, and the results showed that the R-squared (R²) for all CVD risk factors was greater when using the DL system compared to human measurement. Zhang et al. [26] published a study that used retinal images to predict hypertension, hyperglycemia, dyslipidemia, and other CVD risk factors. A total of 1222 retinal images were collected from 625 Chinese participants, and the researchers applied transfer learning to create the prediction model. For hyperglycemia detection, the model had an accuracy of 78.7% and an AUC of 0.880; for hypertension detection, an accuracy of 68.8% and an AUC of 0.766; and for dyslipidemia detection, an accuracy of 66.7% and an AUC of 0.703. Age, gender, drinking status, smoking status, salty taste, BMI, waist-hip ratio (WHR), and hematocrit were also used to train the model, which was able to predict them with an AUC over 0.7. Gerrits et al. [38] used retinal images to develop a deep neural network approach for predicting cardiometabolic risk variables. The study employed data from QBB, comprising 3000 people and 12,000 retinal scans. Age, sex, smoking, total lipids, blood pressure, glycaemic state, sex steroid hormones, and bioimpedance measures were all investigated as risk factors. In addition, the study looked into the impact of age and sex as mediating factors in cardiometabolic risk prediction. When four images per person were utilized, good results were achieved for predicting age and sex; acceptable results were obtained for the prediction of diastolic blood pressure (DBP), HbA1c, relative fat mass (RFM), testosterone, and smoking habit; however, poor results were obtained for total lipid prediction. Van Craenendonck et al. [39] used retinal images to investigate the relationship between mono- and multi-fractal retinal vessel metrics and cardiometabolic risk variables. The study considered data from 2333 QBB participants. To estimate retinal blood vessel topological complexity, mono- and multi-fractal metrics were derived from the dataset. Results from multiple linear regression analysis revealed a significant relationship between one or more fractal metrics and age, sex, systolic blood pressure (SBP), DBP, BMI, insulin, HbA1c, glucose, albumin, and LDL cholesterol. Based on the above-mentioned studies, we can observe that there exist multiple studies focusing on CVD risk factor estimation based on retinal fundus images (Table 1). However, to the best of our knowledge, no study has focused on diagnosing CVD by classifying CVD from non-CVD (control) subjects based on retinal images for the Qatari population.
Table 1. Summary of DXA and retinal image-based works for CVD-associated risk factors (AUC: area under the curve; MAE: mean absolute error; N/A: not available). Each entry lists the reference, year, dataset, cohort, ML/DL results, and findings.

Retinal Images Data

[37] (2016). Dataset: retina images. Cohort: images of 79 CVD and 150 non-CVD patients in Hong Kong. ML/DL results: AUC of 0.692 (Model 1), 0.661 (Model 2), and 0.775 (Model 3). Findings: the paper evaluated three stepwise logistic regression models to investigate the association of CVD with retinal information and whether the association is independent of having type 2 diabetes. The findings reveal that information obtained from the retina is independently linked to CVD in type 2 diabetic patients.

[29] (2020). Dataset: retina images. Cohort: >70,000 images (a collection from 15 datasets covering multiple countries and ethnicities). ML/DL results: N/A. Findings: the paper developed a DL system to assess retinal vessel caliber (which measures microvascular structure changes that are correlated with CVD risk factors). The output of the DL system was compared with human graders, and a comparison with associated risk factors was then carried out. The DL system was able to predict CVD risk factors better than, or comparably to, human graders.

[26] (2020). Dataset: retina images. Cohort: retina images of 625 patients from China. ML/DL results: accuracy of 78.7% for hyperglycemia detection, 68.8% for hypertension detection, and 66.7% for dyslipidemia detection. Findings: the study aimed at predicting hypertension, hyperglycemia, dyslipidemia, and other CVD risk factors from retinal images. Transfer learning was utilized for model development and obtained good results.

[38] (2020). Dataset: retina images. Cohort: 12,000 images from QBB. ML/DL results: AUC (sex): 0.97; MAE (age): 2.78 years; MAE (SBP): 8.96 mmHg; MAE (DBP): 6.84 mmHg; MAE (HbA1c): 0.61%; MAE (relative fat mass): 5.68 units; MAE (testosterone): 3.76 nmol/L. Findings: the paper investigated the potential of fundus images for predicting cardiometabolic risk factors such as age, gender, smoking habit, blood pressure, lipid profile, and bioimpedance using DL.

[39] (2020). Dataset: retina images. Cohort: images of 2333 participants from QBB. ML/DL results: N/A. Findings: the study investigated the association between cardiometabolic risk factors and mono- and multifractal retinal vessel metrics using retinal images. Fractal metrics were calculated, and then linear regression analysis was carried out. One or more fractals are linked to sex, age, BMI, SBP, DBP, glucose, insulin, HbA1c, albumin, and LDL, according to the findings.

DXA Data

[40] (2012). Dataset: DXA data. Cohort: 409 participants. ML/DL results: N/A. Findings: the study compared BMI with direct measures of fat and lean mass for predicting CVD and diabetes among Buffalo (New York, US) police officers. Multiple DXA indices of obesity showed a strong correlation with cardiovascular disease.

[41] (2014). Dataset: DXA data. Cohort: 616 ambulatory patients, excluding pregnant women and those with self-reported cardiac failure, a cardiac pacemaker, or limb amputation. ML/DL results: N/A. Findings: the researchers looked at how body composition factors affect BMI and whether they may be used as markers for metabolic and cardiovascular health in Switzerland. They observed that fat mass and muscle mass were important nutritional status markers, and they broadened their investigation to the impact on health outcomes across all BMI categories. The authors also underlined the need to evaluate body composition during medical examinations to predict metabolic and cardiovascular diseases.

[42] (2016). Dataset: DXA data. Cohort: 117 patients with heart failure with preserved ejection fraction. ML/DL results: N/A. Findings: the study, conducted on patients from Germany, England, and Slovenia, aimed to find out how sarcopenia in individuals with heart failure with preserved ejection fraction relates to exercise ability, muscle strength, and quality of life. It was found that heart failure has a detrimental effect on appendicular skeletal muscle mass.

[43] (2020). Dataset: DXA data. Cohort: 570 patients with and without heart failure. ML/DL results: N/A. Findings: the goal of the study was to see how aging and heart failure treatments affected bone mineral density. Heart failure was observed to be linked to a greater prevalence of osteoporosis and to exacerbate the loss of bone mineral density that comes with aging.

[44] (2020). Dataset: anthropometric and DXA data. Cohort: 558 participants not diagnosed with diabetes, hypertension, dyslipidemia, or CVD. ML/DL results: N/A. Findings: the study, based on the Qatari population from QBB, compared anthropometric and DXA data in predicting cardio-metabolic risk factors; healthy participants were randomly selected. The study revealed a more in-depth relationship for DXA-based assessment of adiposity as a cardio-metabolic risk predictor in Qatar compared to anthropometric markers.

[45] (2020). Dataset: demographic, cardio-metabolic, and DXA data. Cohort: 2802 participants from QBB. ML/DL results: N/A. Findings: the goal of the study was to find body fat composition cut-off values to predict metabolic risk in the Qatari population. For Qatari adults of various ages and genders, the study developed cut-off values for body fat measurements that may be used as a reference for assessing obesity-related metabolic risks. According to the findings, there is a substantial link between body fatness and the likelihood of developing metabolic illnesses.

2.2. Dual-Energy X-ray Absorptiometry (DXA)

Bone densitometry, also known as dual-energy X-ray absorptiometry (DXA), is a form of X-ray technology used to analyze bone health and bone loss. DXA is currently a widely accepted procedure for measuring BMD [46]. It has been reported that reduced BMD has an adverse effect on the cardiovascular system, and osteoporosis poses a risk to patients with a history of heart failure [47]. Furthermore, another study with 570 patients found that heart failure was linked to a faster loss of BMD, regardless of other osteoporosis risk factors [43].
In another study involving 117 people with a history of heart failure, DXA revealed that heart failure has a negative impact on appendicular skeletal muscle mass [42]. In a study conducted on 409 police officers in Buffalo, New York, multiple DXA indices of obesity showed a strong correlation with CVD [40]. Lang et al. [41] discovered that DXA readouts of fat mass and muscle mass were key nutritional status indicators, and they expanded their research to look at the influence on health outcomes for all BMI categories. Furthermore, the authors emphasized the necessity of assessing body composition during medical examinations in order to anticipate metabolic and cardiovascular disorders. As seen in the above-mentioned research, advanced-stage CVD has been reported to have a detrimental impact on BMD, muscle mass, and fat content. Reid et al. [48] investigated the use of a CNN to predict abdominal aortic calcification (AAC) from DXA. High AAC values could be used as a predictor of coronary artery calcium, cardiovascular outcomes, or even death. In their work, they used vertebral fracture assessment (VFA) lateral spine images extracted from DXA, and an ensemble CNN was used for training and evaluation. The computational predictions showed high correlation with human-level annotation. In Qatar, the use of DXA in the prediction of CVD, its risk factors, and associated medical conditions has only recently gained traction. Bawadi et al. [49] examined data from QBB in 2019 to investigate whether the body shape index could be used as a predictor of diabetes mellitus (DM). Kerkadi et al. [44] studied DXA-based measures in cardio-metabolic risk prediction a year later, in 2020, and this study revealed a more in-depth relationship for DXA-based assessment of adiposity as a cardio-metabolic risk predictor in Qatar. In the same year, Bawadi et al. [45] carried out a study in Qatar on age- and gender-specific cut-off points for body fat among adults. From the above-mentioned articles (Table 1), we can say that DXA data are linked to multiple metabolic risk factors related to CVD. However, to the best of our knowledge, there exists no study that applies machine or deep learning techniques to diagnose CVD using DXA measurements.

2.3. Multi-Modal Approaches

Both the DXA and retinal image data collection processes are non-invasive and require little preparation overhead; however, there is no published work that uses a combination of DXA and retinal image data for CVD detection. We, therefore, present a summary of existing methods (Table 2) that use a multi-modal approach, albeit with modalities different from DXA and retinal images. The existing multi-modal approaches mainly rely on modalities such as Magnetic Resonance Imaging (MRI), X-ray, Myocardial Perfusion Imaging (MPI), Electrocardiogram (ECG), Echocardiography (ECHO), and Phonocardiogram (PCG) (Table 2). We would like to mention that none of these more invasive multi-modal methods is directly comparable to ours due to the use of modalities different from our approach.
Table 2. CVD detection based on multi-modal datasets. Each entry lists the reference, year, country, fused data, results, and summary.

[50] (2019, USA). Fused data: electronic health record (EHR) and genetic data. Results: AUROC: 0.790; AUPRC: 0.285. Summary: the study used 10 years of EHR and genetic data to predict CVD events using random forest, gradient boosting trees, logistic regression, CNN, and long short-term memory (LSTM) models. Chi-squared feature selection was applied to the EHR data. Results show an improved prediction of CVD with an AUROC of 0.790, compared to EHR data alone (AUC of 0.71) or genetic data alone (AUC of 0.698).

[51] (2020, China). Fused data: electrocardiogram (ECG), phonocardiogram (PCG), Holter monitoring, echocardiography (ECHO), and biomarker levels (BIO). Results: accuracy: 96.67%; sensitivity: 96.67%; specificity: 96.67%; F1-score: 96.64%. Summary: the study aimed at the detection of coronary artery disease (CAD). ECG and PCG data from 62 patients were used, along with data from Holter monitoring, ECHO, and BIO. Feature selection was applied to obtain optimal features, and a support vector machine was used for classification. Results show the best performance when features were fused from all sources.

[52] (2020, USA). Fused data: sensors (collecting blood pressure, oxygen, respiration rate, etc.) and medical records (history, lab tests, etc.). Results (after the feature weighting method): accuracy: 98.5%; recall: 96.4%; precision: 98.2%; F1-score: 97.2%; RMSE: 0.21; MAE: 0.12. Summary: the study aimed at predicting heart diseases (such as heart attack or stroke) using data gathered from sensors and medical records. Features such as age, height, BMI, respiration rate, and blood pressure were extracted, and data from both sources were fused. Conditional probability was utilized for feature weighting to improve accuracy, and an ensemble deep learning model was then used for the prediction of heart disease.

[53] (2021, Greece). Fused data: myocardial perfusion imaging (MPI) and clinical data. Results: accuracy: 78.44%; sensitivity: 77.36%; specificity: 79.25%; F1-score: 75.50%; AUC: 79.26. Summary: the study aimed at cardiovascular disease diagnosis using MPI and clinical data. Polar maps were derived from the MPI data and fused with the clinical data of 566 patients. Random forest, a neural network, and deep learning with Inception V3 were used for classification. A hybrid model of Inception V3 with random forest achieved an accuracy of 78.44%, compared to an accuracy of 79.15% achieved by medical experts.

[54] (2021, USA). Fused data: electronic medical records (EMR) and abdominopelvic CT imaging. Results: AUROC: 0.86; AUCPR: 0.70. Summary: the study aimed at developing a risk assessment model for ischemic heart disease (IHD) using combined information from patients' EMR and features extracted from abdominopelvic CT imaging. A CNN was used to extract features from the images, and XGBoost was used as the learning algorithm. Results show improved prediction performance with an AUROC of 0.86 and an AUCPR of 0.70.

[55] (2021, USA). Fused data: genetic, clinical, demographic, imaging, and lifestyle data. Results: not reported. Summary: the study evaluated the ability of machine learning to detect CAD subgroups using multimodal data consisting of genetic, clinical, demographic, imaging, and lifestyle data. K-means clustering as well as Generalized Low Rank Models were utilized. Results show that four subgroups were uniquely identified.

3. Materials and Methods

3.1. Ethical Approval

This research was carried out under the regulation of Qatar’s Ministry of Public Health. This work was approved by the Institutional Review Board of Qatar Biobank (QBB) and used a de-identified dataset from QBB.

3.2. Data Collection from QBB

The data used in this study were collected from QBB. The details of the data collection protocol adopted by QBB are described in [56,57]. In brief, participants were invited to QBB and interviewed by staff nurses to collect their background history. Then, multiple lab tests were performed and imaging data such as DXA scans and retinal images were collected. The dataset considered for this study comprises 500 participants equally divided into the CVD group and the control group. There were 262 male participants (CVD:Control = 137:125) and 238 female participants (CVD:Control = 113:125) in the studied cohort. Participants were all adult Qatari nationals aged between 18 and 84 years. Overall BMI was higher in the CVD group compared to the control group (26.10 ± 2.8 vs. 23.20 ± 2.8).

3.3. DXA Scan Data Preprocessing

The DXA data consisted of measurements related to bone mineral density, lean mass, fat content, and bone area from different body parts of the participants. The dataset was cleaned by removing columns with more than 50% missing values in either class (i.e., CVD or control). For the remaining columns, missing values were imputed with the median. After the data cleaning stage, 122 features were selected for this study. Supplementary File S1 provides the summary statistics of these features. The CVD and control groups’ data were then normalized using the min-max normalization technique [58]:
$$x_i' = \frac{x_i - \min_i \{x_i\}}{\max_i \{x_i\} - \min_i \{x_i\}}$$

where $x_i$ denotes the value of a specific DXA feature and $\max_i \{x_i\}$ and $\min_i \{x_i\}$ denote the largest and smallest values of that feature, respectively.
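This preprocessing pipeline can be sketched in a few lines of Python; the DataFrame layout and the "group" label column below are illustrative assumptions, since the QBB dataset itself is not publicly available.

```python
# Hedged sketch of the DXA preprocessing steps above (not the authors' code).
# `dxa` is a hypothetical DataFrame of DXA measurements with a "group" column
# holding the class label ("CVD" or "control").
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

def preprocess_dxa(dxa: pd.DataFrame) -> pd.DataFrame:
    features = dxa.drop(columns=["group"])
    # Keep only columns with at most 50% missing values in every class.
    keep = [c for c in features.columns
            if all(g[c].isna().mean() <= 0.5 for _, g in dxa.groupby("group"))]
    features = features[keep]
    # Impute remaining missing values with the per-feature median.
    features = features.fillna(features.median())
    # Min-max normalization, matching the equation above.
    return pd.DataFrame(MinMaxScaler().fit_transform(features),
                        columns=keep, index=dxa.index)
```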

3.4. Retinal Image Collection and Preprocessing

The retinal fundus images were acquired at QBB utilizing a Topcon TRC-NW6S retinal camera to capture the “microscopic” characteristics of the optic nerve and macula of the participants. At least two images (one for each retina) were collected from each participant, but in some cases, multiple images (three or four) were collected from both eyes. Both (a) macula-centered and (b) disc-centered images were captured for both eyes. A few participants had no retinal images, and we discarded them from our analysis. Figure 1 shows a few randomly selected images from each group.
In the dataset, we had 1839 retinal images from all participants. We then removed low-quality images by visual inspection, since they may have a negative impact on the downstream classification task. After removing the low-quality images (all of which were from the CVD group), the number of images was reduced to 1805 (874 and 931 images from the CVD and control groups, respectively). For some participants, all images were removed due to low quality; therefore, the number of participants in our study was reduced to 483, of whom 250 were from the control group and 233 from the CVD group. Examples of low-quality images can be seen in Figure 2.

4. Experiment Setup

To investigate the effect of different types of input data on the outcome of our study, we conducted multiple experiments, both uni-modal and multi-modal. Specifically, we used the tabular and image datasets in isolation in the uni-modal experiments and a combination of them in the multi-modal one. This resulted in three experiment configurations: (i) DXA model: applying traditional machine learning techniques to the DXA data; (ii) Retinal image model: applying deep learning to the retinal images; (iii) Hybrid model: applying deep learning to both the tabular DXA data and the retinal image data. All experiments were performed on an Intel(R) Core(TM) i9 CPU @ 3.60 GHz machine with 64 GB RAM, equipped with an NVIDIA GeForce RTX 2080 Ti GPU. For implementation, we used Scikit-learn for the DXA model and fastai for the other two models. Details about each experiment are given in the following subsections.

4.1. DXA Model

In the first experiment, we applied six different machine learning algorithms: Decision Tree (DT) [59], (Shallow) Artificial Neural Network (ANN) [60], Random Forest (RF) [61], Extreme Gradient Boosting (XGBoost) [62], CatBoost [63], and Logistic Regression (LR) [64]. We utilized the GridSearchCV utility from Python’s Scikit-learn package for hyperparameter tuning with nested cross validation [65]. Supplementary File S2 includes all the tuned parameters for the ML models.
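A minimal sketch of this tuning setup is shown below; the XGBoost parameter grid is a placeholder (the actual tuned grids are in Supplementary File S2), and the synthetic data stands in for the 122 DXA features.

```python
# Sketch of hyperparameter tuning with nested cross-validation (illustrative;
# see Supplementary File S2 for the tuned parameter grids).
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from xgboost import XGBClassifier

# Synthetic stand-in for the 500-participant, 122-feature DXA matrix.
X, y = make_classification(n_samples=500, n_features=122, random_state=0)

param_grid = {"max_depth": [3, 5, 7], "n_estimators": [100, 300]}  # placeholder grid
inner_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
outer_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)

# The inner loop selects hyperparameters; the outer loop gives an
# (approximately) unbiased performance estimate.
search = GridSearchCV(XGBClassifier(), param_grid, cv=inner_cv, scoring="accuracy")
scores = cross_val_score(search, X, y, cv=outer_cv, scoring="accuracy")
print(f"Nested CV accuracy: {scores.mean():.3f} ± {scores.std():.3f}")
```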

4.2. Retinal Image Model

In the second experiment, we applied DL-based techniques to distinguish the CVD group from the control group based on the retinal images only. For this experiment, two different image pre-processing steps were applied to generate two sets of images. For the first set, the circular region of each image was extracted; then, we removed border noise by cropping away the outer 10% of each image. For the second set, the local mean of a 4 × 4-pixel neighborhood was subtracted from each cropped image, which was then placed on a dark background within a square-shaped image with tight boundaries. At the end, all images had a size of 540 × 540 pixels on a black background. Figure 3 presents samples of original and pre-processed images. We also applied data augmentation techniques, such as random horizontal flips and random brightness and contrast perturbations, to enhance the robustness of the model. We experimented with eight popular image classification models, namely AlexNet [66], VGGNet-11 [67], VGGNet-16 [67], ResNet-18 [68], ResNet-34 [68], DenseNet-121 [69], SqueezeNet-0, and SqueezeNet-1 [70]. We used super-convergence in our experiments, which allowed the network to converge faster [71]. Each model was fine-tuned for 10 epochs, with all layers (except the last layer) frozen, using a one-cycle policy [72] for scheduling the learning rate, with a maximum learning rate of 0.01 and a batch size of 8. The model took around 30 min to train.
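The training loop can be sketched with fastai as follows; the directory layout with CVD/ and control/ subfolders is an assumption for illustration, and the augmentation arguments approximate, rather than reproduce, the exact perturbations used.

```python
# Minimal fastai sketch of the retinal-image experiment (illustrative only;
# the folder layout with "CVD"/"control" subdirectories is an assumption).
from fastai.vision.all import *

dls = ImageDataLoaders.from_folder(
    Path("retina_images"),                   # hypothetical dataset root
    valid_pct=0.2, seed=42, bs=8,            # batch size 8, as in the paper
    item_tfms=Resize(540),                   # images standardized to 540 x 540
    batch_tfms=aug_transforms(do_flip=True, flip_vert=False,
                              max_rotate=0.0, max_lighting=0.2),  # flips + lighting jitter
)

learn = vision_learner(dls, densenet121, metrics=accuracy)
# Train the head for 10 epochs with the one-cycle policy and a maximum
# learning rate of 0.01; the pretrained body stays frozen.
learn.fit_one_cycle(10, lr_max=1e-2)
```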

4.3. Hybrid Model

In the third experiment setup, we aimed to combine the DXA and the retinal image data in a deep learning-based approach. To this end, we designed a deep neural network that accepts a multi-modal input. The network consists of three components: the CNN stem, the MLP stem, and the Classification Head (Figure 4). The CNN stem is responsible for processing the retinal images, while the MLP stem processes the DXA data. Hence, in our work, the fusion between the two data modalities does not take place at the image level. The data from the two modalities, tabular data (DXA) and images (retinal fundus images), are passed through the two stem networks (the MLP stem and the CNN stem, respectively), and the feature vectors produced from these are fused (vector concatenation) into a single feature vector, which is finally passed through the Classification Head to produce the output classification probability. The CNN stems we experimented with were extensions of AlexNet, VGGNet-11, VGGNet-16, ResNet-18, ResNet-34, DenseNet-121, SqueezeNet-0, and SqueezeNet-1. We tried multiple configurations of the MLP stem and the Classification Head with different numbers of layers and neurons. The best configuration we found for the MLP stem and the Classification Head is shown in Table 3.
For the retinal images, we used the cropped images and the mean-subtracted images shown in Figure 3 and concatenated their features with the tabular data. To achieve this, we used fastai and the image_tabular library (https://github.com/naity/image_tabular, accessed on 15 August 2021) to integrate the image data and the DXA (tabular) data. Figure 4 shows a high-level diagram of our proposed network architecture for the hybrid model. For the CNN stem, multiple data augmentation techniques, such as brightness, contrast, flipping, rotation, and scaling of the images, were applied.
The hybrid model was fine-tuned for 10 epochs, with all layers (except the last layer) frozen, using a one-cycle scheduler with a $1 \times 10^{-2}$ learning rate and a batch size of 64. Then, after unfreezing the whole network, 10 more epochs were employed for training, utilizing discriminative learning rates in the range of $1 \times 10^{-3}$ to $1 \times 10^{-2}$. With the 20 epochs, the model took around 30 min to train.
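To make the fusion step concrete, the sketch below shows an equivalent late-fusion architecture in plain PyTorch; the stem output sizes and head widths are illustrative placeholders, not the tuned configuration reported in Table 3.

```python
# Architectural sketch of the hybrid network in Figure 4 (layer widths are
# placeholders; the best configuration found by the authors is in Table 3).
import torch
import torch.nn as nn
from torchvision.models import resnet34

class HybridModel(nn.Module):
    def __init__(self, n_dxa_features: int = 122):
        super().__init__()
        # CNN stem: retinal image -> 512-d feature vector.
        self.cnn_stem = resnet34(num_classes=512)
        # MLP stem: 122 DXA features -> 64-d feature vector.
        self.mlp_stem = nn.Sequential(
            nn.Linear(n_dxa_features, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )
        # Classification head on the fused (concatenated) vector.
        self.head = nn.Sequential(
            nn.Linear(512 + 64, 128), nn.ReLU(),
            nn.Linear(128, 2),          # CVD vs. control logits
        )

    def forward(self, image, dxa):
        fused = torch.cat([self.cnn_stem(image), self.mlp_stem(dxa)], dim=1)
        return self.head(fused)

model = HybridModel()
logits = model(torch.randn(4, 3, 540, 540), torch.randn(4, 122))  # dummy batch
```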

4.4. Performance Evaluation Metrics

We evaluated all models with 5-fold cross-validation (CV) [65]. The models were evaluated on accuracy, sensitivity, precision, F1-score, and Matthews Correlation Coefficient (MCC), defined in the following equations:

$$\text{Accuracy} = \frac{tp + tn}{tp + tn + fp + fn}$$

$$\text{Sensitivity (recall)} = \frac{tp}{tp + fn}$$

$$\text{Precision} = \frac{tp}{tp + fp}$$

$$\text{F1-score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$$

$$\text{MCC} = \frac{(tp \times tn) - (fp \times fn)}{\sqrt{(tp + fp)(tp + fn)(tn + fp)(tn + fn)}}$$

where true positives, false negatives, false positives, and true negatives are denoted as tp, fn, fp, and tn, respectively. Moreover, we computed an empirical p-value for evaluating the significance of the cross-validated performance scores using permutation tests.
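A hedged sketch of this evaluation protocol is given below, using scikit-learn's built-in permutation test to obtain the empirical p-value; the synthetic data is a placeholder for the study's features.

```python
# Sketch of the evaluation protocol: 5-fold CV metrics plus an empirical
# p-value from label permutations. Synthetic data stands in for the real features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import (StratifiedKFold, cross_validate,
                                     permutation_test_score)

X, y = make_classification(n_samples=500, n_features=122, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
clf = RandomForestClassifier(random_state=0)

metrics = ["accuracy", "recall", "precision", "f1", "matthews_corrcoef"]
scores = cross_validate(clf, X, y, cv=cv, scoring=metrics)

# Empirical p-value: fraction of label permutations scoring at least as well.
score, _, p_value = permutation_test_score(
    clf, X, y, cv=cv, scoring="accuracy", n_permutations=100, random_state=0)
print(f"accuracy={score:.3f}, permutation p-value={p_value:.3f}")
```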

5. Results

5.1. Experimental Results from the Machine Learning Models

The results obtained from the three experiments are presented in this section. For the DXA Model, an ablation study was conducted on the different DXA feature types separately and on all of them together, with 5-fold CV. The ablation study shows that the area measurement features contribute the least, with 66.5% accuracy, while body fat composition, BMD, and lean mass achieved accuracies of 77.0%, 73.2%, and 70.2%, respectively. Considering all 122 features, the XGBoost model performed best, achieving the highest accuracy of 77.4%, the highest F1-score of 76.8%, and the highest MCC of 55.5% (see Table 4). For all DXA-based experiments, a p-value of <0.05 was obtained, indicating their statistical significance.
For the Retinal Image Model, where only retinal images were used to distinguish the CVD group from the control group, the DenseNet-121 model achieved the highest accuracy of 75.6% on the cropped image set based on 5-fold CV, and 73.0% accuracy on the mean-subtracted image set. Table 5 highlights the comparison of performance among all DL models used in this study. For all retinal image-based experiments, a p-value of <0.05 was obtained, indicating their statistical significance.
Finally, for the Hybrid Model, where the DXA tabular data and retinal images were combined, ResNet-34 achieved the highest accuracy of 78.3% with 5-fold CV. Table 6 shows a comparison between the eight DL models that we tested in this experiment. For all hybrid model-based experiments, a p-value of <0.05 was obtained, indicating their statistical significance. We also calculated the area under the curve (AUC) of the receiver operating characteristic (ROC) for all DL models in this experiment on the cropped images (Supplementary File S3).

5.2. Performance of the Hybrid Model Based on Gender and Age Stratified Dataset

To check the effectiveness of the proposed model on different subgroups of the population, we tested the Hybrid Model (incorporating the DXA and retinal image data) on age- and gender-stratified samples. Though the performance of the model dropped slightly on the gender-stratified dataset, the ResNet-34-based model achieved the highest performance, with 75% and 72.9% accuracy for males and females, respectively (Figure 5). For the gender-stratified samples, all eight models achieved better performance for males than for females (Figure 5).
Table 7 provides information on the numbers of participants and images used for the different age groups. For the age-stratified samples, the performance of the model was close to that obtained on the whole dataset. The highest performance across all DL models was obtained by the ResNet-34 model, with 76.5% accuracy for the 40-and-above age group. For the other age group (below 40), the ResNet-34 model also achieved the best performance, with 76.2% accuracy. Figure 6 shows the detailed results and comparison of model performance for the age-stratified participants.

5.3. Statistical Analysis on DXA Data

We conducted statistical analysis on the DXA data for comparing the CVD group against the control group. After analyzing the 122 features, 95 features were statistically significant (p-value < 0.05) (Supplementary File S1).
When compared against the control group, the CVD group accumulated more fat in different body parts, including the arms, legs, trunk, android, gynoid, and android visceral regions.
Measurements of lean mass in the android region, legs, arms, trunk, etc., were also higher in the CVD group. For instance, lean mass in the trunk (20,329.32 ± 3809 vs. 18,779.82 ± 3965) was greater in the CVD group than in the control group. Overall, the total fat mass (25,908.58 ± 6342 vs. 20,962.12 ± 4946) as well as the total lean mass (43,643.57 ± 8617 vs. 40,614.88 ± 9059) were higher in the CVD group, and the higher BMI in the CVD group reflects its fat and lean mass content.
We also observed higher BMD and anthropometric measurements in different body parts in the CVD group compared to the control group (Supplementary File S1). For BMD measurements in different body parts, e.g., head, arms, spine, troch, and trunk, the CVD group had greater BMD than the control group. Overall, the total BMD (1.20 ± 0.11 vs. 1.17 ± 0.12) was higher in the CVD group compared to the control group. Individuals with higher weight and BMI tend to have high BMD, which might help to reduce the risk of bone fracture [73,74,75]. A similar trend was seen in our cohort.
Furthermore, when it comes to anthropometric measurements, we observed a larger bone area in the troch, lumbar spine (L1, L2, L3, L4), and pelvis for the CVD group compared to the control group. In summary, most measurements of BMD, fat content, muscle mass, and bone area were higher in the CVD group than in the control group, according to the analysis. Details of the analysis can be found in Supplementary File S1.

5.4. Class Activation Map for Highlighting the Region of Interest in CVD Patients

We used Gradient-weighted Class Activation Mapping (GradCAM) [76] to highlight the regions of interest that influenced the DL model's predictions. Figure 7 shows the results of GradCAM on images of the CVD class. It was observed that the regions of interest, shown in the color-coded heat map to the right of each image, are mostly in the central region of the retina. Microhemorrhages, which are associated with hypertension [77,78], can be observed in all images, especially in the bottom-left images.
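For reference, the mechanism can be sketched in a few lines of plain PyTorch; the snippet below is a minimal, self-contained GradCAM implementation using a randomly initialized torchvision DenseNet-121 and a dummy input as stand-ins for the trained model and a real retinal image.

```python
# Minimal GradCAM sketch (illustrative; not the trained model from this study).
import torch
import torch.nn.functional as F
from torchvision.models import densenet121

model = densenet121(num_classes=2).eval()   # placeholder CVD-vs-control classifier
target_layer = model.features[-1]           # last feature layer: source of activation maps

acts, grads = {}, {}
target_layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

def gradcam(x, class_idx):
    """Return a heatmap in [0, 1] over the input for the given class."""
    logits = model(x)
    model.zero_grad()
    logits[0, class_idx].backward()
    weights = grads["v"].mean(dim=(2, 3), keepdim=True)   # global-average-pooled gradients
    cam = F.relu((weights * acts["v"]).sum(dim=1))        # weighted sum of activation maps
    cam = F.interpolate(cam[None], size=x.shape[-2:], mode="bilinear")[0, 0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

heatmap = gradcam(torch.randn(1, 3, 540, 540), class_idx=1)  # dummy retinal-image batch
```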

6. Discussion

6.1. Principal Findings

Integrating retinal images with DXA improves the performance of CVD detection. In our experiments, the classification accuracy for the retinal image data only and for the DXA data only was 75.6% and 77.4%, respectively. However, when both datasets were integrated using a joint deep learning model, the performance improved, reaching 78.3% accuracy with the ResNet-34-based model. This indicates that integrating the multi-modal dataset improved the performance of the proposed model. For the gender-stratified participants, the proposed model achieved almost similar performance for both genders (Figure 5). The performance of the model on the age-stratified participants was close to the performance of the model on all participants (Figure 6). This indicates that the performance of the proposed model is unbiased across age strata of the adult population.
The majority of DXA readings in the CVD group were higher than in the control group. Among the statistically significant variables, the majority of the BMD, fat content, muscle mass, and bone area measurements for the CVD group had greater average values than for the control group (Supplementary File S1). Obesity has been shown to act as a protective factor against osteoporosis through different mechanisms, including mechanical and biochemical ones [79]. In the current study, although obesity (higher BMI) has a deleterious effect on cardiovascular risk factors, we can also observe its protective effect on bone health. In summary, our results indicate a better bone health condition for the CVD group, which had a higher BMI than the control group. The ablation study revealed the better discriminatory power of fat content and BMD compared to muscle mass and bone area (Table 4). This highlights the discriminatory power of DXA measurements, which could open new avenues in the diagnosis plan of CVD.

6.2. Comparison against Other Tools

We could not find any work that used retinal images and DXA for CVD detection, either in the Qatari population or elsewhere. The published work we reviewed focused on predicting risk factors associated with CVD (refer to Table 1) instead of predicting CVD directly; therefore, we could not compare the performance of the proposed model against any existing model.
We, however, present a summary of the existing methods (Table 2) that use a multi-modal approach, albeit with modalities different from DXA and retinal images. We would like to reiterate that a direct comparison of our proposed multi-modal approach with any of these would not be fair for the following reasons. First, none of the existing research uses this particular combination of multi-modal data (DXA and retinal images). To the best of our knowledge, ours is the first study that considers these two data modalities in a machine learning approach. Second, the data used in those approaches (Table 2) include biomarkers related directly to the health of the heart, and hence have an unfair advantage in CVD prediction over our approach, at the cost of being more invasive. Our method stands out in scenarios involving health centers located in remote regions with access to limited non-invasive resources.

6.3. Motivation for Using a Multi-Modal Approach

We were motivated to use a “multi-modal” approach to predict CVD using retinal images and DXA data for multiple reasons. First, we have shown that the multi-modal approach is superior (Table 6) to its uni-modal counterparts (Table 4 and Table 5) for predicting CVD. Second, the particular combination of modalities (DXA and retinal images) used in our work has never been explored, hence providing a completely novel insight into possible ways of predicting CVD in a non-invasive manner. Last but not least, since neither DXA nor retinal data collection is invasive, our proposed multi-modal approach is also, overall, non-invasive, and hence applicable to a wide variety of health centers, possibly located in remote regions with limited resources.

7. Limitations

In this work, we used a dataset from QBB that consisted of participants from Qatar only. This means that the outcome of this study could be specific to the Qatari population and to people of Gulf countries with similar lifestyles and ethnicities, and the results might not generalize to other populations. Moreover, human retinal images contain a vascular tree that orchestrates a complex branching pattern. To identify different aspects of this tree, it is important to have retinal images with high contrast, color balance, and quality. Hence, having more and better quality-controlled images could have pushed the accuracy of the proposed model even higher. Despite these limitations, this study serves as a proof of concept that incorporating information from retinal images and DXA provides a novel avenue to diagnose CVD with reasonable accuracy in a non-invasive manner.

8. Conclusions

In this work, we presented a novel deep learning-based model to distinguish the CVD group from the control group by integrating information from retinal images and DXA scans. The proposed multi-modal approach achieved an accuracy of 78.3%, performing better than the individual uni-modal (DXA and retinal image) models. To the best of our knowledge, this is the first study to diagnose CVD from DXA data and retinal images. The achieved performance demonstrates that signals from fast and relatively non-invasive techniques such as DXA and retinal imaging can be used to diagnose patients with CVD. The findings from our study need to be validated in a clinical setup for a proper understanding of their links to CVD diagnosis.

Supplementary Materials

The following are available online at https://www.mdpi.com/article/10.3390/s22124310/s1. Supplementary File S1: Summary Statistics of the DXA features. Supplementary File S2: Details for parameter optimization for the models. Supplementary File S3: ROC Curve Analysis.

Author Contributions

H.R.H.A.-A. and T.A. conceived and planned the experiments. H.R.H.A.-A. and M.T.I. performed the experiments. H.R.H.A.-A. analyzed the data. H.R.H.A.-A., T.A. and M.T.I. prepared the manuscript with assistance from the other authors. Data curation, M.A.R.; validation, M.E.H.C. All authors have read and agreed to the final version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This research was carried out under the regulation of Qatar’s Ministry of Public Health. This work was approved by the Institutional Review Board of Qatar Biobank (QBB) in Qatar, and used a de-identified dataset from QBB.

Informed Consent Statement

QBB obtained the informed consent of all participants.

Data Availability Statement

Restrictions apply to the availability of these data. The data were obtained from QBB under a non-disclosure agreement (NDA).

Acknowledgments

We thank Qatar Biobank (QBB) for providing access to the de-identified dataset. The open-access publication of this article is funded by the College of Science and Engineering, Hamad Bin Khalifa University (HBKU), Doha 34110, Qatar.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Abbreviations

The following abbreviations are used in this manuscript:
ASCVD   Atherosclerotic Cardiovascular Disease
AUC     Area Under the Curve
BMI     Body Mass Index
BMD     Bone Mineral Density
CCTA    Coronary CT Angiography
CT      Computed Tomography
CVD     Cardiovascular Disease
DL      Deep Learning
DXA     Dual-energy X-ray Absorptiometry
ECG     Electrocardiogram
ECHO    Echocardiography
ML      Machine Learning
MPI     Myocardial Perfusion Imaging
MRI     Magnetic Resonance Imaging
PCG     Phonocardiogram
ROC     Receiver Operating Characteristics
WHO     World Health Organization
WHR     Waist-to-hip Ratio

References

  1. Roth, G.A.; Mensah, G.A.; Johnson, C.O.; Addolorato, G.; Ammirati, E.; Baddour, L.M.; Barengo, N.C.; Beaton, A.Z.; Benjamin, E.J.; Benziger, C.P.; et al. Global burden of cardiovascular diseases and risk factors, 1990–2019: Update from the GBD 2019 study. J. Am. Coll. Cardiol. 2020, 76, 2982–3021.
  2. World Health Organization. Cardiovascular Diseases (CVDs); World Health Organization: Geneva, Switzerland, 2021.
  3. Long, C.P.; Chan, A.X.; Bakhoum, C.Y.; Toomey, C.B.; Madala, S.; Garg, A.K.; Freeman, W.R.; Goldbaum, M.H.; DeMaria, A.N.; Bakhoum, M.F. Prevalence of subclinical retinal ischemia in patients with cardiovascular disease—A hypothesis driven study. EClinicalMedicine 2021, 33, 100775.
  4. WHO-EMRO. Cardiovascular Diseases; World Health Organization, Regional Office for the Eastern Mediterranean: Cairo, Egypt, 2021.
  5. Planning and Statistics Authority (Qatar). Births & Deaths in the State of Qatar (Review & Analysis). Report 2018. Available online: https://www.psa.gov.qa/en/statistics/Statistical%20Releases/General/StatisticalAbstract/2018/Birth_death_2018_EN.pdf (accessed on 18 September 2021).
  6. Ghantous, C.M.; Kamareddine, L.; Farhat, R.; Zouein, F.A.; Mondello, S.; Kobeissy, F.; Zeidan, A. Advances in cardiovascular biomarker discovery. Biomedicines 2020, 8, 552.
  7. Al-Absi, H.R.; Refaee, M.A.; Rehman, A.U.; Islam, M.T.; Belhaouari, S.B.; Alam, T. Risk Factors and Comorbidities Associated to Cardiovascular Disease in Qatar: A Machine Learning Based Case-Control Study. IEEE Access 2021, 9, 29929–29941.
  8. Rehman, A.U.; Alam, T.; Belhaouari, S.B. Investigating Potential Risk Factors for Cardiovascular Diseases in Adult Qatari Population. In Proceedings of the 2020 IEEE International Conference on Informatics, IoT, and Enabling Technologies (ICIoT), Doha, Qatar, 2–5 February 2020; pp. 267–270.
  9. Wilson, P.W.; D’Agostino, R.B.; Levy, D.; Belanger, A.M.; Silbershatz, H.; Kannel, W.B. Prediction of coronary heart disease using risk factor categories. Circulation 1998, 97, 1837–1847.
  10. American College of Cardiology. ASCVD Risk Estimator; American College of Cardiology: Washington, DC, USA, 2022. Available online: https://tools.acc.org/ascvd-risk-estimator-plus/ (accessed on 18 September 2021).
  11. Xu, C.; Xu, L.; Gao, Z.; Zhao, S.; Zhang, H.; Zhang, Y.; Du, X.; Zhao, S.; Ghista, D.; Liu, H.; et al. Direct delineation of myocardial infarction without contrast agents using a joint motion feature learning architecture. Med. Image Anal. 2018, 50, 82–94.
  12. Sudarshan, V.K.; Acharya, U.R.; Ng, E.; San Tan, R.; Chou, S.M.; Ghista, D.N. An integrated index for automated detection of infarcted myocardium from cross-sectional echocardiograms using texton-based features (Part 1). Comput. Biol. Med. 2016, 71, 231–240.
  13. Madani, A.; Ong, J.R.; Tibrewal, A.; Mofrad, M.R. Deep echocardiography: Data-efficient supervised and semi-supervised deep learning towards automated diagnosis of cardiac disease. NPJ Digit. Med. 2018, 1, 1–11.
  14. Vidya, K.S.; Ng, E.; Acharya, U.R.; Chou, S.M.; San Tan, R.; Ghista, D.N. Computer-aided diagnosis of myocardial infarction using ultrasound images with DWT, GLCM and HOS methods: A comparative study. Comput. Biol. Med. 2015, 62, 86–93.
  15. Larroza, A.; López-Lereu, M.P.; Monmeneu, J.V.; Gavara, J.; Chorro, F.J.; Bodí, V.; Moratal, D. Texture analysis of cardiac cine magnetic resonance imaging to detect nonviable segments in patients with chronic myocardial infarction. Med. Phys. 2018, 45, 1471–1480.
  16. Snaauw, G.; Gong, D.; Maicas, G.; Van Den Hengel, A.; Niessen, W.J.; Verjans, J.; Carneiro, G. End-to-end diagnosis and segmentation learning from cardiac magnetic resonance imaging. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; pp. 802–805.
  17. Baessler, B.; Luecke, C.; Lurz, J.; Klingel, K.; Das, A.; von Roeder, M.; de Waha-Thiele, S.; Besler, C.; Rommel, K.P.; Maintz, D.; et al. Cardiac MRI and texture analysis of myocardial T1 and T2 maps in myocarditis with acute versus chronic symptoms of heart failure. Radiology 2019, 292, 608–617.
  18. Mannil, M.; von Spiczak, J.; Manka, R.; Alkadhi, H. Texture analysis and machine learning for detecting myocardial infarction in noncontrast low-dose computed tomography: Unveiling the invisible. Investig. Radiol. 2018, 53, 338–343.
  19. Coenen, A.; Kim, Y.H.; Kruk, M.; Tesche, C.; De Geer, J.; Kurata, A.; Lubbers, M.L.; Daemen, J.; Itu, L.; Rapaka, S.; et al. Diagnostic accuracy of a machine-learning approach to coronary computed tomographic angiography–based fractional flow reserve: Result from the MACHINE consortium. Circ. Cardiovasc. Imaging 2018, 11, e007217.
  20. Zreik, M.; Van Hamersvelt, R.W.; Wolterink, J.M.; Leiner, T.; Viergever, M.A.; Išgum, I. A recurrent CNN for automatic detection and classification of coronary artery plaque and stenosis in coronary CT angiography. IEEE Trans. Med. Imaging 2018, 38, 1588–1598.
  21. Sacha, J.P.; Goodenday, L.S.; Cios, K.J. Bayesian learning for cardiac SPECT image interpretation. Artif. Intell. Med. 2002, 26, 109–143.
  22. Shibutani, T.; Nakajima, K.; Wakabayashi, H.; Mori, H.; Matsuo, S.; Yoneyama, H.; Konishi, T.; Okuda, K.; Onoguchi, M.; Kinuya, S. Accuracy of an artificial neural network for detecting a regional abnormality in myocardial perfusion SPECT. Ann. Nucl. Med. 2019, 33, 86–92.
  23. Liu, C.Y.; Tang, C.X.; Zhang, X.L.; Chen, S.; Xie, Y.; Zhang, X.Y.; Qiao, H.Y.; Zhou, C.S.; Xu, P.P.; Lu, M.J.; et al. Deep learning powered coronary CT angiography for detecting obstructive coronary artery disease: The effect of reader experience, calcification and image quality. Eur. J. Radiol. 2021, 142, 109835.
  24. Farrah, T.E.; Dhillon, B.; Keane, P.A.; Webb, D.J.; Dhaun, N. The eye, the kidney, and cardiovascular disease: Old concepts, better tools, and new horizons. Kidney Int. 2020, 98, 323–342.
  25. Wong, T.Y.; Klein, R.; Klein, B.E.; Tielsch, J.M.; Hubbard, L.; Nieto, F.J. Retinal microvascular abnormalities and their relationship with hypertension, cardiovascular disease, and mortality. Surv. Ophthalmol. 2001, 46, 59–80.
  26. Zhang, L.; Yuan, M.; An, Z.; Zhao, X.; Wu, H.; Li, H.; Wang, Y.; Sun, B.; Li, H.; Ding, S.; et al. Prediction of hypertension, hyperglycemia and dyslipidemia from retinal fundus photographs via deep learning: A cross-sectional study of chronic diseases in central China. PLoS ONE 2020, 15, e0233166.
  27. Islam, M.T.; Al-Absi, H.R.; Ruagh, E.A.; Alam, T. DiaNet: A deep learning based architecture to diagnose diabetes using retinal images only. IEEE Access 2021, 9, 15686–15695.
  28. Poplin, R.; Varadarajan, A.V.; Blumer, K.; Liu, Y.; McConnell, M.V.; Corrado, G.S.; Peng, L.; Webster, D.R. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat. Biomed. Eng. 2018, 2, 158–164.
  29. Cheung, C.Y.; Xu, D.; Cheng, C.Y.; Sabanayagam, C.; Tham, Y.C.; Yu, M.; Rim, T.H.; Chai, C.Y.; Gopinath, B.; Mitchell, P.; et al. A deep-learning system for the assessment of cardiovascular disease risk via the measurement of retinal-vessel calibre. Nat. Biomed. Eng. 2021, 5, 498–508.
  30. Spahillari, A.; Mukamal, K.; DeFilippi, C.; Kizer, J.R.; Gottdiener, J.S.; Djoussé, L.; Lyles, M.F.; Bartz, T.M.; Murthy, V.L.; Shah, R.V. The association of lean and fat mass with all-cause mortality in older adults: The Cardiovascular Health Study. Nutr. Metab. Cardiovasc. Dis. 2016, 26, 1039–1047.
  31. Chuang, T.L.; Lin, J.W.; Wang, Y.F. Bone Mineral Density as a Predictor of Atherogenic Indexes of Cardiovascular Disease, Especially in Nonobese Adults. Dis. Markers 2019, 2019, 1045098.
  32. Messina, C.; Albano, D.; Gitto, S.; Tofanelli, L.; Bazzocchi, A.; Ulivieri, F.M.; Guglielmi, G.; Sconfienza, L.M. Body composition with dual energy X-ray absorptiometry: From basics to new tools. Quant. Imaging Med. Surg. 2020, 10, 1687.
  33. Park, J.; Yoon, Y.E.; Kim, K.M.; Hwang, I.C.; Lee, W.; Cho, G.Y. Prognostic value of lower bone mineral density in predicting adverse cardiovascular disease in Asian women. Heart 2021, 107, 1040–1046.
  34. Ceniccola, G.D.; Castro, M.G.; Piovacari, S.M.F.; Horie, L.M.; Corrêa, F.G.; Barrere, A.P.N.; Toledo, D.O. Current technologies in body composition assessment: Advantages and disadvantages. Nutrition 2019, 62, 25–31.
  35. Hansen, A.B.; Sander, B.; Larsen, M.; Kleener, J.; Borch-Johnsen, K.; Klein, R.; Lund-Andersen, H. Screening for diabetic retinopathy using a digital non-mydriatic camera compared with standard 35-mm stereo colour transparencies. Acta Ophthalmol. Scand. 2004, 82, 656–665.
  36. Oloumi, F.; Rangayyan, R.M.; Ells, A.L. Digital Image Processing for Ophthalmology: Detection and Modeling of Retinal Vascular Architecture. Synth. Lect. Biomed. Eng. 2014, 9, 1–185.
  37. Guo, V.Y.; Chan, J.C.N.; Chung, H.; Ozaki, R.; So, W.; Luk, A.; Lam, A.; Lee, J.; Zee, B.C.Y. Retinal information is independently associated with cardiovascular disease in patients with type 2 diabetes. Sci. Rep. 2016, 6, 19053.
  38. Gerrits, N.; Elen, B.; Van Craenendonck, T.; Triantafyllidou, D.; Petropoulos, I.N.; Malik, R.A.; De Boever, P. Age and sex affect deep learning prediction of cardiometabolic risk factors from retinal images. Sci. Rep. 2020, 10, 9432.
  39. Van Craenendonck, T.; Gerrits, N.; Buelens, B.; Petropoulos, I.N.; Shuaib, A.; Standaert, A.; Malik, R.A.; De Boever, P. Retinal microvascular complexity comparing mono- and multifractal dimensions in relation to cardiometabolic risk factors in a Middle Eastern population. Acta Ophthalmol. 2021, 99, e368–e377.
  40. Sharp, D.S.; Andrew, M.E.; Burchfiel, C.M.; Violanti, J.M.; Wactawski-Wende, J. Body mass index versus dual energy X-ray absorptiometry-derived indexes: Predictors of cardiovascular and diabetic disease risk factors. Am. J. Hum. Biol. 2012, 24, 400–405.
  41. Lang, P.O.; Trivalle, C.; Vogel, T.; Proust, J.; Papazian, J.P. Markers of metabolic and cardiovascular health in adults: Comparative analysis of DEXA-based body composition components and BMI categories. J. Cardiol. 2015, 65, 42–49.
  42. Bekfani, T.; Pellicori, P.; Morris, D.A.; Ebner, N.; Valentova, M.; Steinbeck, L.; Wachter, R.; Elsner, S.; Sliziuk, V.; Schefold, J.C.; et al. Sarcopenia in patients with heart failure with preserved ejection fraction: Impact on muscle strength, exercise capacity and quality of life. Int. J. Cardiol. 2016, 222, 41–46.
  43. Martens, P.; Ter Maaten, J.M.; Vanhaen, D.; Heeren, E.; Caers, T.; Bovens, B.; Dauw, J.; Dupont, M.; Mullens, W. Heart failure is associated with accelerated age related metabolic bone disease. Acta Cardiol. 2021, 76, 718–726.
  28. Poplin, R.; Varadarajan, A.V.; Blumer, K.; Liu, Y.; McConnell, M.V.; Corrado, G.S.; Peng, L.; Webster, D.R. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat. Biomed. Eng. 2018, 2, 158–164. [Google Scholar] [CrossRef] [PubMed]
  29. Cheung, C.Y.; Xu, D.; Cheng, C.Y.; Sabanayagam, C.; Tham, Y.C.; Yu, M.; Rim, T.H.; Chai, C.Y.; Gopinath, B.; Mitchell, P.; et al. A deep-learning system for the assessment of cardiovascular disease risk via the measurement of retinal-vessel calibre. Nat. Biomed. Eng. 2021, 5, 498–508. [Google Scholar] [CrossRef] [PubMed]
  30. Spahillari, A.; Mukamal, K.; DeFilippi, C.; Kizer, J.R.; Gottdiener, J.S.; Djoussé, L.; Lyles, M.F.; Bartz, T.M.; Murthy, V.L.; Shah, R.V. The association of lean and fat mass with all-cause mortality in older adults: The Cardiovascular Health Study. Nutr. Metab. Cardiovasc. Dis. 2016, 26, 1039–1047. [Google Scholar] [CrossRef] [Green Version]
  31. Chuang, T.L.; Lin, J.W.; Wang, Y.F. Bone Mineral Density as a Predictor of Atherogenic Indexes of Cardiovascular Disease, Especially in Nonobese Adults. Dis. Markers 2019, 2019, 1045098. [Google Scholar] [CrossRef] [Green Version]
  32. Messina, C.; Albano, D.; Gitto, S.; Tofanelli, L.; Bazzocchi, A.; Ulivieri, F.M.; Guglielmi, G.; Sconfienza, L.M. Body composition with dual energy X-ray absorptiometry: From basics to new tools. Quant. Imaging Med. Surg. 2020, 10, 1687. [Google Scholar] [CrossRef]
  33. Park, J.; Yoon, Y.E.; Kim, K.M.; Hwang, I.C.; Lee, W.; Cho, G.Y. Prognostic value of lower bone mineral density in predicting adverse cardiovascular disease in Asian women. Heart 2021, 107, 1040–1046. [Google Scholar] [CrossRef]
  34. Ceniccola, G.D.; Castro, M.G.; Piovacari, S.M.F.; Horie, L.M.; Corrêa, F.G.; Barrere, A.P.N.; Toledo, D.O. Current technologies in body composition assessment: Advantages and disadvantages. Nutrition 2019, 62, 25–31. [Google Scholar] [CrossRef]
  35. Hansen, A.B.; Sander, B.; Larsen, M.; Kleener, J.; Borch-Johnsen, K.; Klein, R.; Lund-Andersen, H. Screening for diabetic retinopathy using a digital non-mydriatic camera compared with standard 35-mm stereo colour transparencies. Acta Ophthalmol. Scand. 2004, 82, 656–665. [Google Scholar] [CrossRef]
  36. Oloumi, F.; Rangayyan, R.M.; Ells, A.L. Digital Image Processing for Ophthalmology: Detection and Modeling of Retinal Vascular Architecture. Synth. Lect. Biomed. Eng. 2014, 9, 1–185. [Google Scholar] [CrossRef]
  37. Guo, V.Y.; Chan, J.C.N.; Chung, H.; Ozaki, R.; So, W.; Luk, A.; Lam, A.; Lee, J.; Zee, B.C.Y. Retinal information is independently associated with cardiovascular disease in patients with type 2 diabetes. Sci. Rep. 2016, 6, 19053. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  38. Gerrits, N.; Elen, B.; Van Craenendonck, T.; Triantafyllidou, D.; Petropoulos, I.N.; Malik, R.A.; De Boever, P. Age and sex affect deep learning prediction of cardiometabolic risk factors from retinal images. Sci. Rep. 2020, 10, 9432. [Google Scholar] [CrossRef] [PubMed]
  39. Van Craenendonck, T.; Gerrits, N.; Buelens, B.; Petropoulos, I.N.; Shuaib, A.; Standaert, A.; Malik, R.A.; De Boever, P. Retinal microvascular complexity comparing mono-and multifractal dimensions in relation to cardiometabolic risk factors in a Middle Eastern population. Acta Ophthalmol. 2021, 99, e368–e377. [Google Scholar] [CrossRef] [PubMed]
  40. Sharp, D.S.; Andrew, M.E.; Burchfiel, C.M.; Violanti, J.M.; Wactawski-Wende, J. Body mass index versus dual energy X-ray absorptiometry-derived indexes: Predictors of cardiovascular and diabetic disease risk factors. Am. J. Hum. Biol. 2012, 24, 400–405. [Google Scholar] [CrossRef] [PubMed]
  41. Lang, P.O.; Trivalle, C.; Vogel, T.; Proust, J.; Papazian, J.P. Markers of metabolic and cardiovascular health in adults: Comparative analysis of DEXA-based body composition components and BMI categories. J. Cardiol. 2015, 65, 42–49. [Google Scholar] [CrossRef] [Green Version]
  42. Bekfani, T.; Pellicori, P.; Morris, D.A.; Ebner, N.; Valentova, M.; Steinbeck, L.; Wachter, R.; Elsner, S.; Sliziuk, V.; Schefold, J.C.; et al. Sarcopenia in patients with heart failure with preserved ejection fraction: Impact on muscle strength, exercise capacity and quality of life. Int. J. Cardiol. 2016, 222, 41–46. [Google Scholar] [CrossRef] [Green Version]
  43. Martens, P.; Ter Maaten, J.M.; Vanhaen, D.; Heeren, E.; Caers, T.; Bovens, B.; Dauw, J.; Dupont, M.; Mullens, W. Heart failure is associated with accelerated age related metabolic bone disease. Acta Cardiol. 2021, 76, 718–726. [Google Scholar] [CrossRef]
  44. Kerkadi, A.; Suleman, D.; Salah, L.A.; Lotfy, C.; Attieh, G.; Bawadi, H.; Shi, Z. Adiposity indicators as cardio-metabolic risk predictors in adults from country with high burden of obesity. Diabetes Metab. Syndr. Obesity Targets Ther. 2020, 13, 175. [Google Scholar] [CrossRef] [Green Version]
  45. Bawadi, H.; Hassan, S.; Zadeh, A.S.; Sarv, H.; Kerkadi, A.; Tur, J.A.; Shi, Z. Age and gender specific cut-off points for body fat parameters among adults in Qatar. Nutr. J. 2020, 19, 75. [Google Scholar] [CrossRef]
  46. Bartl, R.; Bartl, C. Bone densitometry. In The Osteoporosis Manual: Prevention, Diagnosis and Management; Springer International Publishing: Cham, Switzerland, 2019; pp. 67–75. [Google Scholar] [CrossRef]
  47. Loncar, G.; Cvetinovic, N.; Lainscak, M.; Isaković, A.; von Haehling, S. Bone in heart failure. J. Cachexia Sarcopenia Muscle 2020, 11, 381–393. [Google Scholar] [CrossRef] [Green Version]
  48. Reid, S.; Schousboe, J.T.; Kimelman, D.; Monchka, B.A.; Jozani, M.J.; Leslie, W.D. Machine learning for automated abdominal aortic calcification scoring of DXA vertebral fracture assessment images: A pilot study. Bone 2021, 148, 115943. [Google Scholar] [CrossRef] [PubMed]
  49. Bawadi, H.; Abouwatfa, M.; Alsaeed, S.; Kerkadi, A.; Shi, Z. Body shape index is a stronger predictor of diabetes. Nutrients 2019, 11, 1018. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  50. Zhao, J.; Feng, Q.; Wu, P.; Lupu, R.A.; Wilke, R.A.; Wells, Q.S.; Denny, J.C.; Wei, W.Q. Learning from longitudinal data in electronic health record and genetic data to improve cardiovascular event prediction. Sci. Rep. 2019, 9, 717. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  51. Zhang, H.; Wang, X.; Liu, C.; Liu, Y.; Li, P.; Yao, L.; Li, H.; Wang, J.; Jiao, Y. Detection of coronary artery disease using multi-modal feature fusion and hybrid feature selection. Physiol. Meas. 2020, 41, 115007. [Google Scholar] [CrossRef] [PubMed]
  52. Ali, F.; El-Sappagh, S.; Islam, S.R.; Kwak, D.; Ali, A.; Imran, M.; Kwak, K.S. A smart healthcare monitoring system for heart disease prediction based on ensemble deep learning and feature fusion. Inf. Fusion 2020, 63, 208–222. [Google Scholar] [CrossRef]
  53. Apostolopoulos, I.D.; Apostolopoulos, D.I.; Spyridonidis, T.I.; Papathanasiou, N.D.; Panayiotakis, G.S. Multi-input deep learning approach for cardiovascular disease diagnosis using myocardial perfusion imaging and clinical data. Phys. Medica 2021, 84, 168–177. [Google Scholar] [CrossRef]
  54. Chaves, J.M.Z.; Chaudhari, A.S.; Wentland, A.L.; Desai, A.D.; Banerjee, I.; Boutin, R.D.; Maron, D.J.; Rodriguez, F.; Sandhu, A.T.; Jeffrey, R.B.; et al. Opportunistic assessment of ischemic heart disease risk using abdominopelvic computed tomography and medical record data: A multimodal explainable artificial intelligence approach. medRxiv 2021. [Google Scholar] [CrossRef]
  55. Flores, A.M.; Schuler, A.; Eberhard, A.V.; Olin, J.W.; Cooke, J.P.; Leeper, N.J.; Shah, N.H.; Ross, E.G. Unsupervised Learning for Automated Detection of Coronary Artery Disease Subgroups. J. Am. Heart Assoc. 2021, 10, e021976. [Google Scholar] [CrossRef]
  56. Al Thani, A.; Fthenou, E.; Paparrodopoulos, S.; Al Marri, A.; Shi, Z.; Qafoud, F.; Afifi, N. Qatar biobank cohort study: Study design and first results. Am. J. Epidemiol. 2019, 188, 1420–1433. [Google Scholar] [CrossRef]
  57. Al Kuwari, H.; Al Thani, A.; Al Marri, A.; Al Kaabi, A.; Abderrahim, H.; Afifi, N.; Qafoud, F.; Chan, Q.; Tzoulaki, I.; Downey, P.; et al. The Qatar Biobank: Background and methods. BMC Public Health 2015, 15, 1–9. [Google Scholar] [CrossRef] [Green Version]
  58. Jain, Y.K.; Bhandare, S.K. Min max normalization based data perturbation method for privacy protection. Int. J. Comput. Commun. Technol. 2011, 2, 45–50. [Google Scholar] [CrossRef]
  59. Breiman, L.; Friedman, J.H.; Olshen, R.A.; Stone, C.J. Classification and Regression Trees; Routledge: London, UK, 2017. [Google Scholar]
  60. Hagan, M.; Demuth, H.; Beale, M. Neural Net and Traditional Classifiers; Lincoln Laboratory: Lexington, MA, USA, 1987. [Google Scholar]
  61. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  62. Brownlee, J. XGBoost with Python: Gradient Boosted Trees with XGBoost and Scikit-Learn. Machine Learning Mastery 2016. Available online: https://machinelearningmastery.com/xgboost-with-python/ (accessed on 27 October 2021).
  63. Dorogush, A.V.; Ershov, V.; Gulin, A. CatBoost: Gradient boosting with categorical features support. arXiv 2018, arXiv:1810.11363. [Google Scholar]
  64. Hoffman, J.I. Biostatistics for Medical and Biomedical Practitioners; Academic Press: Cambridge, MA, USA, 2015. [Google Scholar]
  65. Cawley, G.C.; Talbot, N.L. On over-fitting in model selection and subsequent selection bias in performance evaluation. J. Mach. Learn. Res. 2010, 11, 2079–2107. [Google Scholar]
  66. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105. [Google Scholar] [CrossRef]
  67. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  68. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  69. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  70. Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50× fewer parameters and< 0.5 MB model size. arXiv 2016, arXiv:1602.07360. [Google Scholar]
  71. Smith, L.N.; Topin, N. Super-convergence: Very fast training of neural networks using large learning rates. In Proceedings of the Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications, Baltimore, MD, USA, 10 May 2019; Volume 11006, p. 1100612. [Google Scholar]
  72. Smith, L.N. A disciplined approach to neural network hyper-parameters: Part 1–learning rate, batch size, momentum, and weight decay. arXiv 2018, arXiv:1803.09820. [Google Scholar]
  73. Nguyen, T.; Kelly, P.; Sambrook, P.; Gilbert, C.; Pocock, N.; Eisman, J. Lifestyle factors and bone density in the elderly: Implications for osteoporosis prevention. J. Bone Miner. Res. 1994, 9, 1339–1346. [Google Scholar] [CrossRef]
  74. Ngugyen, T.V.; Eisman, J.A.; Kelly, P.J.; Sambroak, P.N. Risk factors for osteoporotic fractures in elderly men. Am. J. Epidemiol. 1996, 144, 255–263. [Google Scholar] [CrossRef] [Green Version]
  75. Khondaker, M.; Islam, T.; Khan, J.Y.; Refaee, M.A.; Hajj, N.E.; Rahman, M.S.; Alam, T. Obesity in Qatar: A Case-Control Study on the Identification of Associated Risk Factors. Diagnostics 2020, 10, 883. [Google Scholar] [CrossRef] [PubMed]
  76. Gildenblat, J. PyTorch Library for CAM Methods. Available online: https://github.com/jacobgil/pytorch-grad-cam (accessed on 2 January 2022).
  77. Wong, T.Y.; Mitchell, P. Hypertensive retinopathy. N. Engl. J. Med. 2004, 351, 2310–2317. [Google Scholar] [CrossRef] [PubMed]
  78. Kanukollu, V.M.; Ahmad, S.S. Retinal Hemorrhage. StatPearls. 2021. Available online: https://www.ncbi.nlm.nih.gov/books/NBK560777/ (accessed on 16 January 2022).
  79. Gkastaris, K.; Goulis, D.G.; Potoupnis, M.; Anastasilakis, A.D.; Kapetanos, G. Obesity, osteoporosis and bone metabolism. J. Musculoskelet. Neuronal Interact. 2020, 20, 372. [Google Scholar] [PubMed]
Figure 1. Some randomly selected images from the QBB retinal image dataset.
Figure 2. Examples of a few low-quality images.
Figure 3. Examples of images from the QBB dataset before and after pre-processing.
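For illustration only, the sketch below shows one plausible pre-processing step behind Figure 3 and the "cropped images" rows of Tables 5 and 6: trimming the black background around the circular fundus region. The thresholding rule, the threshold value, and the function itself are assumptions for illustration, not the authors' exact pipeline.

```python
# Hedged sketch: crop a fundus photograph to the bounding box of the bright
# (non-background) pixels. The threshold of 10 is an illustrative assumption.
import numpy as np

def crop_fundus(image: np.ndarray, threshold: int = 10) -> np.ndarray:
    """image: H x W x 3 uint8 retinal photograph with a dark background."""
    mask = image.max(axis=2) > threshold          # pixels brighter than the background
    rows, cols = np.where(mask)
    # Keep the tight bounding box around the fundus disc.
    return image[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
```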
Figure 4. Hybrid model to distinguish CVD from non-CVD using retinal images and DXA tabular data. The CNN stem for the retinal image is based on the ResNet-34 architecture [68]. The MLP stem includes Linear (Lin), Batch Normalization (BN), and Dropout (Dr) layers, as shown in the diagram. The outputs of the two stems are combined and fed into the classification head, which consists of BN, Dr, Lin, ReLU, BN, and Dr layers, followed by a single linear output layer (CVD or non-CVD).
Figure 5. Performance of the hybrid models on gender-stratified participants (based on cropped images).
Figure 6. Performance of the hybrid models on age-stratified participants (based on cropped images).
Figure 7. Retinal images from the CVD group with overlaid GradCAM heatmaps. Reddish colours indicate regions with a higher influence on the prediction model's decision, whereas bluish colours indicate less influence.
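Readers who wish to reproduce overlays like Figure 7 can use the pytorch-grad-cam package cited in [76]. The minimal sketch below follows the usage pattern from that library's documentation; the image-only ResNet-34 classifier, the choice of target layer, and the class index for the CVD group are illustrative assumptions, and argument names may differ across package versions.

```python
# Hedged sketch of a GradCAM overlay with the pytorch-grad-cam package [76].
import numpy as np
import torch
from torchvision import models
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.image import show_cam_on_image
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

model = models.resnet34(weights=None)
model.fc = torch.nn.Linear(512, 2)   # hypothetical CVD/control head
model.eval()

rgb_img = np.random.rand(224, 224, 3).astype(np.float32)          # stand-in fundus image in [0, 1]
input_tensor = torch.from_numpy(rgb_img).permute(2, 0, 1)[None]   # 1 x 3 x 224 x 224

# Use the last residual block as the target layer; class index 1 (CVD) is assumed.
cam = GradCAM(model=model, target_layers=[model.layer4[-1]])
heatmap = cam(input_tensor=input_tensor, targets=[ClassifierOutputTarget(1)])[0]

overlay = show_cam_on_image(rgb_img, heatmap, use_rgb=True)  # red = high influence, blue = low
```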
Table 3. Details of the layers in the MLP stem and the Classification Head of the Hybrid model.

Layer Name            Output Size
MLP Stem
  Linear              8
  ReLU                8
  BatchNorm1d         8
  Dropout             8
  Linear              8
Classification Head
  BatchNorm1d         264
  Dropout             264
  Linear              32
  ReLU                32
  BatchNorm1d         32
  Dropout             32
  Linear              2
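For illustration, the following minimal PyTorch sketch assembles the architecture described in Figure 4 with the layer sizes reported in Table 3. The 512-to-256 projection in the CNN stem is an assumption introduced so that the concatenated feature vector (256 + 8) matches the 264-dimensional input of the classification head; the dropout rate and the 122-feature DXA input (the "All" set of Table 4) are likewise illustrative choices, not the authors' exact code.

```python
# Hedged sketch of the hybrid model (Figure 4, Table 3). Assumptions are
# flagged in comments; this is not the authors' released implementation.
import torch
import torch.nn as nn
from torchvision import models


class HybridCVDNet(nn.Module):
    def __init__(self, n_dxa_features: int = 122, p_drop: float = 0.5):
        super().__init__()
        # CNN stem: ResNet-34 backbone with its classification layer removed.
        backbone = models.resnet34(weights=None)
        backbone.fc = nn.Identity()                                   # 512-d image embedding
        self.cnn_stem = nn.Sequential(backbone, nn.Linear(512, 256))  # assumed projection to 256

        # MLP stem for DXA tabular data (Table 3: Linear, ReLU, BN, Dropout, Linear; all 8-d).
        self.mlp_stem = nn.Sequential(
            nn.Linear(n_dxa_features, 8), nn.ReLU(),
            nn.BatchNorm1d(8), nn.Dropout(p_drop), nn.Linear(8, 8),
        )

        # Classification head (Table 3): BN, Dropout, Linear, ReLU, BN, Dropout, Linear.
        self.head = nn.Sequential(
            nn.BatchNorm1d(264), nn.Dropout(p_drop),
            nn.Linear(264, 32), nn.ReLU(),
            nn.BatchNorm1d(32), nn.Dropout(p_drop),
            nn.Linear(32, 2),                                         # CVD vs. control logits
        )

    def forward(self, image: torch.Tensor, dxa: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.cnn_stem(image), self.mlp_stem(dxa)], dim=1)
        return self.head(fused)


# Smoke test with dummy inputs (batch of 4).
model = HybridCVDNet()
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 122))
print(logits.shape)  # torch.Size([4, 2])
```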
Table 4. Performance of the ML techniques in the ablation study on the DXA model.

Property (No. of Features)   Metric        DT       MLP      RF       LR       CatBoost  XGBoost
Bone Mineral Density (55)    Accuracy      0.620    0.682    0.686    0.732    0.710     0.726
                             Sensitivity   0.617    0.574    0.647    0.678    0.635     0.672
                             Specificity   0.628    0.795    0.724    0.785    0.784     0.780
                             Precision     0.624    0.740    0.701    0.758    0.746     0.758
                             F1-score      0.609    0.639    0.672    0.716    0.685     0.710
                             MCC           0.250    0.382    0.373    0.466    0.424     0.456
                             p-value       1.996 × 10^-3   1.089 × 10^-2   1.664 × 10^-3   1.536 × 10^-3   6.623 × 10^-3   1.332 × 10^-3
Body Fat Composition (15)    Accuracy      0.720    0.770    0.746    0.742    0.740     0.754
                             Sensitivity   0.594    0.640    0.697    0.741    0.673     0.723
                             Specificity   0.832    0.902    0.789    0.749    0.806     0.787
                             Precision     0.790    0.867    0.770    0.747    0.777     0.767
                             F1-score      0.669    0.734    0.731    0.741    0.720     0.743
                             MCC           0.445    0.560    0.489    0.488    0.482     0.508
                             p-value       1.248 × 10^-3   1.175 × 10^-3   1.110 × 10^-3   1.052 × 10^-3   4.975 × 10^-3   9.515 × 10^-4
Lean Mass (7)                Accuracy      0.576    0.652    0.690    0.634    0.668     0.702
                             Sensitivity   0.556    0.652    0.674    0.613    0.702     0.669
                             Specificity   0.596    0.664    0.710    0.657    0.638     0.736
                             Precision     0.582    0.717    0.699    0.641    0.661     0.716
                             F1-score      0.560    0.650    0.683    0.625    0.678     0.691
                             MCC           0.156    0.329    0.386    0.270    0.342     0.406
                             p-value       9.083 × 10^-4   6.950 × 10^-3   8.326 × 10^-4   7.994 × 10^-4   3.984 × 10^-3   7.402 × 10^-4
Area Measurements (45)       Accuracy      0.580    0.614    0.600    0.664    0.644     0.598
                             Sensitivity   0.546    0.528    0.624    0.665    0.660     0.605
                             Specificity   0.623    0.698    0.584    0.667    0.633     0.597
                             Precision     0.597    0.636    0.602    0.675    0.644     0.598
                             F1-score      0.556    0.575    0.607    0.664    0.647     0.598
                             MCC           0.175    0.230    0.210    0.335    0.295     0.203
                             p-value       7.138 × 10^-4   4.135 × 10^-3   6.662 × 10^-4   6.447 × 10^-4   3.332 × 10^-3   6.057 × 10^-4
All (122)                    Accuracy      0.672    0.750    0.748    0.768    0.750     0.774
                             Sensitivity   0.658    0.656    0.704    0.761    0.694     0.754
                             Specificity   0.691    0.855    0.797    0.780    0.812     0.800
                             Precision     0.694    0.827    0.777    0.777    0.789     0.790
                             F1-score      0.663    0.722    0.736    0.766    0.734     0.768
                             MCC           0.363    0.526    0.504    0.542    0.511     0.555
                             p-value       5.879 × 10^-4   5.711 × 10^-4   5.552 × 10^-4   5.402 × 10^-4   2.849 × 10^-3   5.126 × 10^-4
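A hedged sketch of how an ablation like Table 4 could be reproduced is shown below: each classifier family is trained on one DXA feature group at a time and scored on a held-out split. The feature-group column lists, the 80/20 stratified split, and the default hyper-parameters are illustrative assumptions rather than the authors' exact protocol; sensitivity, specificity, and precision can be derived analogously from the confusion matrix.

```python
# Hedged sketch of a per-feature-group ablation over the six classifier
# families listed in Table 4 (requires scikit-learn, catboost, xgboost).
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, matthews_corrcoef
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from catboost import CatBoostClassifier
from xgboost import XGBClassifier

CLASSIFIERS = {
    "DT": DecisionTreeClassifier(),
    "MLP": MLPClassifier(max_iter=1000),
    "RF": RandomForestClassifier(),
    "LR": LogisticRegression(max_iter=1000),
    "CatBoost": CatBoostClassifier(verbose=0),
    "XGBoost": XGBClassifier(),
}

def run_ablation(X, y, feature_groups):
    """X: pandas DataFrame of the 122 DXA measurements; y: CVD/control labels;
    feature_groups: dict mapping each Table 4 property to its column names (assumed)."""
    for group, cols in feature_groups.items():
        X_tr, X_te, y_tr, y_te = train_test_split(
            X[cols], y, test_size=0.2, stratify=y, random_state=0)
        for name, clf in CLASSIFIERS.items():
            clf.fit(X_tr, y_tr)
            pred = clf.predict(X_te)
            print(group, name,
                  f"acc={accuracy_score(y_te, pred):.3f}",
                  f"mcc={matthews_corrcoef(y_te, pred):.3f}")
```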
Table 5. Comparison of the performance of deep learning models built on retinal images.

Type of Images           DL Model        Accuracy  Sensitivity  Specificity  Precision  F1-score  MCC    p-Value
Cropped images           DenseNet-121    0.756     0.753        0.758        0.740      0.746     0.511  2.023 × 10^-10
                         ResNet-18       0.694     0.735        0.656        0.661      0.696     0.392  1.732 × 10^-2
                         ResNet-34       0.753     0.682        0.817        0.773      0.725     0.505  8.824 × 10^-4
                         VGGNet-11       0.744     0.712        0.774        0.742      0.727     0.487  5.529 × 10^-8
                         VGGNet-16       0.739     0.700        0.774        0.739      0.719     0.476  4.612 × 10^-16
                         AlexNet         0.699     0.659        0.737        0.696      0.677     0.397  3.519 × 10^-2
                         SqueezeNet1_0   0.719     0.665        0.769        0.724      0.693     0.436  3.130 × 10^-5
                         SqueezeNet1_1   0.685     0.729        0.645        0.653      0.689     0.375  6.687 × 10^-3
Mean-subtracted images   DenseNet-121    0.730     0.712        0.747        0.720      0.716     0.459  3.545 × 10^-2
                         ResNet-18       0.713     0.682        0.742        0.707      0.695     0.425  1.953 × 10^-2
                         ResNet-34       0.713     0.635        0.785        0.730      0.679     0.426  6.846 × 10^-5
                         VGGNet-11       0.685     0.735        0.640        0.651      0.691     0.376  5.984 × 10^-4
                         VGGNet-16       0.725     0.724        0.726        0.707      0.715     0.449  3.542 × 10^-2
                         AlexNet         0.683     0.688        0.677        0.661      0.674     0.365  2.806 × 10^-3
                         SqueezeNet1_0   0.677     0.635        0.715        0.671      0.653     0.352  2.390 × 10^-3
                         SqueezeNet1_1   0.669     0.612        0.720        0.667      0.638     0.334  1.390 × 10^-3
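The "mean-subtracted" rows of Tables 5 and 6 presumably denote images with a dataset mean image removed; a minimal sketch of one plausible implementation (subtracting the per-pixel mean of the training images and rescaling to [0, 1]) is given below. The rescaling step is an illustrative choice, not necessarily the authors' exact normalisation.

```python
# Hedged sketch of mean-image subtraction, assuming a per-pixel dataset mean.
import numpy as np

def mean_subtract(images: np.ndarray) -> np.ndarray:
    """images: N x H x W x C array of retinal images scaled to [0, 1]."""
    mean_image = images.mean(axis=0, keepdims=True)   # per-pixel mean over the dataset
    centered = images - mean_image
    # Rescale back to [0, 1] so the result fits the same CNN input pipeline.
    return (centered - centered.min()) / (centered.max() - centered.min() + 1e-8)
```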
Table 6. Comparison of the performance of hybrid models built on both retinal images and DXA data.

Type of Images           DL Model              Accuracy  Sensitivity  Specificity  Precision  F1-score  MCC    p-Value
Cropped images           DenseNet-121 + DXA    0.740     0.688        0.793        0.771      0.719     0.492  1.371 × 10^-3
                         ResNet-18 + DXA       0.756     0.666        0.842        0.802      0.725     0.519  1.255 × 10^-3
                         ResNet-34 + DXA       0.783     0.747        0.816        0.793      0.767     0.566  1.290 × 10^-3
                         VGGNet-11 + DXA       0.752     0.691        0.812        0.784      0.729     0.512  1.297 × 10^-3
                         VGGNet-16 + DXA       0.739     0.675        0.800        0.773      0.710     0.490  1.608 × 10^-3
                         AlexNet + DXA         0.778     0.698        0.854        0.815      0.751     0.559  1.166 × 10^-3
                         SqueezeNet1_0 + DXA   0.748     0.653        0.836        0.786      0.713     0.498  3.795 × 10^-3
                         SqueezeNet1_1 + DXA   0.767     0.736        0.795        0.773      0.753     0.534  1.243 × 10^-3
Mean-subtracted images   DenseNet-121 + DXA    0.736     0.669        0.800        0.760      0.710     0.475  1.409 × 10^-3
                         ResNet-18 + DXA       0.734     0.639        0.825        0.775      0.699     0.474  1.355 × 10^-3
                         ResNet-34 + DXA       0.757     0.755        0.761        0.754      0.750     0.520  1.173 × 10^-3
                         VGGNet-11 + DXA       0.734     0.707        0.760        0.737      0.715     0.474  1.323 × 10^-3
                         VGGNet-16 + DXA       0.723     0.702        0.746        0.727      0.705     0.456  1.608 × 10^-3
                         AlexNet + DXA         0.753     0.658        0.841        0.796      0.718     0.511  1.466 × 10^-3
                         SqueezeNet1_0 + DXA   0.754     0.673        0.829        0.789      0.725     0.511  1.241 × 10^-3
                         SqueezeNet1_1 + DXA   0.770     0.732        0.804        0.780      0.754     0.540  1.464 × 10^-3
Table 7. Number of participants and number of images used for each age group.

Class     Age Group (18–39)            Age Group (40 and Above)
          Participants   Images        Participants   Images
CVD       115            440           118            434
Control   210            782           40             140
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
