Sensors
  • Article
  • Open Access

7 June 2022

Cardiovascular Disease Diagnosis from DXA Scan and Retinal Images Using Deep Learning

1 College of Science and Engineering, Hamad Bin Khalifa University, Doha 34110, Qatar
2 Computer Science Department, Southern Connecticut State University, New Haven, CT 06515, USA
3 Geriatric Department, Hamad Medical Corporation, Doha 3050, Qatar
4 Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
This article belongs to the Section Biomedical Sensors

Abstract

Cardiovascular diseases (CVD) are the leading cause of death worldwide. People affected by CVDs may go undiagnosed until the occurrence of a serious cardiovascular event such as stroke, heart attack, or myocardial infarction. In Qatar, there is a lack of studies focusing on CVD diagnosis based on non-invasive methods such as retinal images or dual-energy X-ray absorptiometry (DXA). In this study, we aimed at diagnosing CVD using a novel approach integrating information from retinal images and DXA data. We considered an adult Qatari cohort of 500 participants from Qatar Biobank (QBB) with an equal number of participants from the CVD and the control groups. We designed a case-control study with a novel multi-modal design (combining data from two modalities, DXA and retinal images) and propose a deep learning (DL)-based technique to distinguish the CVD group from the control group. Uni-modal models based on retinal images and DXA data achieved 75.6% and 77.4% accuracy, respectively. The multi-modal model showed an improved accuracy of 78.3% in classifying the CVD group and the control group. We used gradient-weighted class activation mapping (GradCAM) to highlight the areas of interest in the retinal images that most influenced the decisions of the proposed DL model. It was observed that the model focused mostly on the centre of the retinal images, where signs of CVD such as hemorrhages were present. This indicates that our model can identify and make use of certain prognostic markers for hypertension and ischemic heart disease. From the DXA data, we found higher values for bone mineral density, fat content, muscle mass, and bone area across the majority of body parts in the CVD group compared to the control group, indicating better bone health in the Qatari CVD cohort. This seminal method based on DXA scans and retinal images demonstrates major potential for the early detection of CVD in a fast and relatively non-invasive manner.

1. Introduction

Globally, cardiovascular diseases (CVDs) remain the leading cause of mortality and, hence, of rising healthcare costs in many countries [1]. According to the World Health Organization (WHO), CVDs account for 32% of all global deaths, two-thirds of which occur in low- and middle-income countries [2]. Furthermore, among non-communicable diseases (NCDs), CVD causes 38% of deaths of people below 70 years of age (i.e., premature deaths) [3]. In the Eastern Mediterranean region, 54% of all NCD deaths are caused by CVD, according to the WHO’s Eastern Mediterranean Regional Office [4]. In the State of Qatar, according to the Planning and Statistics Authority, 33.1% of all deaths in 2018 were caused by diseases of the circulatory system, including high blood pressure, which makes CVD the top concern for Qatar in combating NCDs [5]. Although diagnosis and treatment methods for CVD have advanced throughout the years, many people still go undiagnosed until the occurrence of a serious CVD event such as stroke, heart attack, or myocardial infarction [3].
Risk factors such as age, gender, smoking, obesity, diabetes, hypertension, low-density lipoprotein (LDL) cholesterol, and a sedentary lifestyle [6] are known to be linked to CVD, and some of these factors have been studied and confirmed for the Qatari population [7,8]. In fact, there exist risk score calculators for CVD, such as the Framingham Risk Score [9] and ASCVD [10], which use some of the above factors to predict the 10-year risk of developing CVD. There also exist multiple clinical biomarkers, such as cardiac troponins I and T, C-reactive protein, D-dimer, and B-type natriuretic peptides, which have been shown to be linked to CVD [6]. Furthermore, with the advancement of medical imaging techniques, several imaging modalities have been used to diagnose CVD, such as delayed enhancement cardiac magnetic resonance (DE-CMR) [11], echocardiography (ECHO) [12,13], ultrasound imaging [14], magnetic resonance imaging (MRI) [15,16,17], computed tomography (CT) [18,19,20], single-photon emission computed tomography (SPECT) [21,22], and coronary CT angiography (CCTA) [23]. Recently, retinal fundus imaging has emerged as a non-invasive data modality for examining the condition of the heart. Retinal microvascular abnormalities have been found to be linked to CVDs [24,25]. For example, [25] showed that generalized arteriolar narrowing, focal arteriolar narrowing, arteriovenous nicking, and retinopathy, which are all abnormalities in retinal vessels, can be used as indicators for cardiovascular diseases; they also showed that fundus images can capture these abnormalities. In [26], the authors predicted a variety of CVD risk factors, such as hypertension, hyperglycemia, and dyslipidemia, from retinal fundus images. Moreover, retinal fundus images have been utilized to predict diabetes [27] and CVD risk factors [28,29].
Another imaging technique that can be used in CVD diagnosis is dual-energy X-ray absorptiometry (DXA), through measurements of lean and fat mass [30] and/or bone mineral density (BMD) [31]. DXA measurements of fat mass have been shown to correlate strongly with CVD events as well as CVD risk [32]. Furthermore, a recent study showed an independent association between lower BMD and a higher risk of ASCVD events and death among women in South Korea [33]. DXA provides a number of advantages, including the ability to accurately and precisely measure and differentiate fat, lean, and bone components. It can assess the entire body in a single scan, making acquisition fast, and its low radiation dose makes it relatively safe. In addition, it is a non-invasive procedure [34].
In this paper, we focused on two imaging techniques, namely DXA scans and retinal fundus images, to demonstrate their contribution to diagnosing CVD in the Qatari population. We selected these techniques considering their non-invasive nature as well as the ease of access to the data source, as both have already been incorporated into the QBB protocol for recording participants’ health status. Moreover, the capability of retinal images and DXA scans in CVD diagnosis has not been investigated thoroughly for the Qatari population. The contributions of this work are as follows:
  • We proposed a novel technique that uses retinal images to distinguish the CVD group from the control group with over 75% accuracy. To the best of our knowledge, this is a seminal work in CVD diagnosis from retinal images.
  • We conducted extensive experiments on DXA scan data and showed that it has reasonable discriminating ability to separate the CVD group from the control group, yielding over 77% accuracy in the Qatari population.
  • We proposed a superior multi-modal approach for CVD diagnosis that fuses both tabular (DXA) and image data (retinal images) to distinguish the CVD group from the control group with an accuracy of 78.3%. Hence, our study proposes a fast and relatively non-invasive approach to diagnosing patients with CVD.

3. Materials and Methods

3.1. Ethical Approval

This research was carried out under the regulation of Qatar’s Ministry of Public Health. This work was approved by the Institutional Review Board of Qatar Biobank (QBB) in Qatar and used a de-identified dataset from QBB.

3.2. Data Collection from QBB

The data used in this study were collected from QBB. The details of the data collection protocol adopted by QBB are described in [56,57]. In brief, participants were invited to QBB and interviewed by staff nurses to collect their background history. Then, multiple lab tests and imaging data, such as DXA scans and retinal images, were collected. The dataset considered for this study comprises 500 participants equally divided into a CVD group and a control group. There were 262 male participants (CVD:Control = 137:125) and 238 female participants (CVD:Control = 113:125) in the studied cohort. Participants were all adult Qatari nationals aged between 18 and 84 years. Overall, BMI was higher in the CVD group than in the control group (26.10 ± 2.8 vs. 23.20 ± 2.8).

3.3. DXA Scan Data Preprocessing

The DXA data consisted of measurements related to bone mineral density, lean mass, fat content, and bone area measurements from different body parts of the participants. The dataset was cleaned by removing columns with missing values of more than 50% from each class (i.e., CVD and control). For the remaining columns, the missing values were imputed by the median value. After the data cleaning stage, 122 features were finally selected for this study. Supplementary File S1 provides the summary statistics of these features. The CVD and control groups’ data were then normalized using the min-max normalization technique [58]:
$$x_i' = \frac{x_i - \min_i\{x_i\}}{\max_i\{x_i\} - \min_i\{x_i\}}$$
where $x_i$ denotes the value of a specific DXA feature for participant $i$, and $\max_i\{x_i\}$ and $\min_i\{x_i\}$ denote the largest and smallest values of that feature, respectively.
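As a rough illustration, a minimal pandas sketch of this preprocessing pipeline (dropping mostly-missing columns, median imputation, and min-max normalization) might look as follows; the file name and column layout are hypothetical:

```python
import pandas as pd

# Hypothetical file holding the raw DXA measurements, one row per participant.
df = pd.read_csv("dxa_features.csv")

# Drop columns with more than 50% missing values, then median-impute the rest.
df = df.loc[:, df.isna().mean() <= 0.5]
df = df.fillna(df.median(numeric_only=True))

# Min-max normalization per feature, as in the equation above.
df_norm = (df - df.min()) / (df.max() - df.min())
```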

3.4. Retinal Image Collection and Preprocessing

The retinal fundus images were acquired at QBB using a Topcon TRC-NW6S retinal camera to capture the “microscopic” characteristics of the optic nerve and macula of the participants. At least two images (one per retina) were collected from each participant, but in some cases, multiple images (three or four) were collected from both eyes. Both (a) macula-centered and (b) disc-centered images were captured for both eyes. A few participants had no retinal images, and we discarded them from our analysis. Figure 1 shows a few randomly selected images from each group.
Figure 1. Some randomly selected images from the QBB retinal image dataset.
In the dataset, we had 1839 retinal images from all participants. We then removed low-quality images by visual inspection, since they may have a negative impact on the downstream classification task. After removing low-quality images (all of which were from the CVD group), the number of images was reduced to 1805 (874 and 931 images from the CVD and control groups, respectively). For some participants, all images were removed due to low quality; therefore, the number of participants in our study was reduced to 483, of whom 250 were from the control group and 233 from the CVD group. Examples of some low-quality images can be seen in Figure 2.
Figure 2. Examples of a few low-quality images.

4. Experiment Setup

To investigate the effect of different types of input data on the outcome of our study, we conducted multiple experiments, both uni-modal and multi-modal. Specifically, we used the tabular and image datasets in isolation in the uni-modal experiments and a combination of them in the multi-modal one. This resulted in three experiment configurations: (i) DXA model: applying traditional machine learning techniques to the DXA data. (ii) Retinal image model: applying deep learning to the retinal images. (iii) Hybrid model: applying deep learning to both the tabular DXA data and the retinal image data. All experiments were performed on an Intel(R) Core(TM) i9 CPU @ 3.60 GHz machine with 64 GB RAM, equipped with an NVIDIA GeForce RTX 2080 Ti GPU. For implementation, we used Scikit-learn for the DXA model and fastai for the other two models. Details about each experiment are given in the following subsections.

4.1. DXA Model

In the first experiment, we applied six different machine learning algorithms: Decision Tree (DT) [59], (Shallow) Artificial Neural Network (ANN) [60], Random Forest (RF) [61], Extreme Gradient Boosting (XGBoost) [62], CatBoost [63], and Logistic Regression (LR) [64]. We utilized the GridSearchCV utility from Python’s Scikit-learn package for hyperparameter tuning with nested cross validation [65]. Supplementary File S2 includes all the tuned parameters for the ML models.
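The nested cross-validation pattern described above can be sketched with scikit-learn as follows; the estimator and grid shown are illustrative (the actual tuned parameters are listed in Supplementary File S2), and `X`, `y` stand for the 122 DXA features and the CVD/control labels:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

# Illustrative grid; not the actual search space used in the study.
param_grid = {"n_estimators": [100, 300], "max_depth": [3, 5, None]}

inner_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)  # tuning
outer_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)  # evaluation

# Inner loop tunes hyperparameters; outer loop gives an unbiased score estimate.
clf = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=inner_cv)
nested_scores = cross_val_score(clf, X, y, cv=outer_cv, scoring="accuracy")
print(f"nested CV accuracy: {nested_scores.mean():.3f}")
```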

4.2. Retinal Image Model

In the second experiment, we applied DL-based techniques to distinguish the CVD group from the control group based on the retinal images only. For this experiment, two different image pre-processing steps were applied to generate two sets of images. For the first set, the circular region of each image was extracted; then, we removed the border noise by cropping the outer 10% of each image. For the second set, for each cropped image, the local mean computed over a 4 × 4-pixel neighborhood was subtracted, and the result was placed on a dark background within a square-shaped image with tight boundaries. In the end, all images had a size of 540 × 540 pixels with a black background. Figure 3 presents samples of original and pre-processed images. We also applied data augmentation techniques, such as random horizontal flips as well as random brightness and contrast perturbations, to enhance the robustness of the model. We experimented with eight popular image classification models, namely AlexNet [66], VGGNet-11 [67], VGGNet-16 [67], ResNet-18 [68], ResNet-34 [68], DenseNet-121 [69], SqueezeNet-0, and SqueezeNet-1 [70]. We used super-convergence in our experiments, which allowed the networks to converge faster [71]. Each model was fine-tuned for 10 epochs, with all layers (except the last layer) frozen, using a one-cycle policy [72] for scheduling the learning rate with a maximum learning rate of 0.01 and a batch size of 8. Each model took around 30 min to train; a minimal fastai sketch of this fine-tuning recipe is given after Figure 3.
Figure 3. Examples of images from the QBB dataset before and after pre-processing.
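A rough fastai sketch of the frozen fine-tuning stage described above might look as follows. The data directory and loader setup are assumptions, and `vision_learner` is named `cnn_learner` in older fastai releases:

```python
from fastai.vision.all import *

# Assumed layout: one subfolder per class (CVD / control) of 540x540 images.
path = Path("retinal_images")
dls = ImageDataLoaders.from_folder(
    path, valid_pct=0.2, bs=8, item_tfms=Resize(540),
    # Random flips plus brightness/contrast perturbations, as described above.
    batch_tfms=aug_transforms(do_flip=True, max_rotate=0.0,
                              max_lighting=0.2, max_warp=0.0),
)

learn = vision_learner(dls, densenet121, metrics=accuracy)
learn.freeze()                        # train only the final layer group
learn.fit_one_cycle(10, lr_max=1e-2)  # one-cycle policy, max LR 0.01, 10 epochs
```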

4.3. Hybrid Model

In the third experiment setup, we aimed to combine the DXA and retinal image data in a deep learning-based approach. To this end, we designed a deep neural network that accepts a multi-modal input. The network consists of three components: the CNN stem, the MLP stem, and the Classification Head (Figure 4). The CNN stem is responsible for processing the retinal images, while the MLP stem processes the DXA data. Hence, in our work, the fusion between the two data modalities does not take place at the image level. The data from the two modalities, tabular data (DXA) and images (retinal fundus images), are passed through the two stem networks (the MLP stem and the CNN stem, respectively), and the feature vectors they produce are fused (vector concatenation) into a single feature vector, which is finally passed through the Classification Head to produce the output classification probability. The CNN stems we experimented with were extensions of AlexNet, VGGNet-11, VGGNet-16, ResNet-18, ResNet-34, DenseNet-121, SqueezeNet-0, and SqueezeNet-1. We tried multiple configurations of the MLP stem and the Classification Head with different numbers of layers and neurons. The best configuration we found for the MLP stem and the Classification Head is shown in Table 3.
Figure 4. Hybrid model to distinguish CVD from non-CVD using retinal images and DXA tabular data. The CNN stem for the retinal image was based on the ResNet-34 architecture [68]. The MLP stem includes Linear (Lin), Batch Normalization (BN), and Dropout (Dr) layers as shown in the diagram. Both stems were integrated, and their output was fed into the Classification Head, which has multiple layers (BN, Dr, Lin, ReLU, BN, Dr) followed by a single linear output layer (CVD or non-CVD).
Table 3. Details of the layers in the MLP stem and the Classification Head of Hybrid model.
For the retinal images, we used the cropped images and the mean-subtracted images shown in Figure 3 and concatenated their features with the tabular data. To achieve this, we used fastai and the image_tabular library (https://github.com/naity/image_tabular, accessed on 15 August 2021) to integrate the image data and the DXA (tabular) data. Figure 4 shows a high-level diagram of our proposed network architecture for the hybrid model. For the CNN stem, multiple data augmentation techniques, such as brightness, contrast, flipping, rotation, and scaling of the images, were applied.
The hybrid model was fine-tuned for 10 epochs, with all layers (except the last layer) frozen, using a one-cycle scheduler with a $1 \times 10^{-2}$ learning rate and a batch size of 64. Then, after unfreezing the whole network, 10 more epochs of training were employed using discriminative learning rates in the range of $1 \times 10^{-3}$ to $1 \times 10^{-2}$. With these 20 epochs, the model took around 30 min to train.
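To make the late-fusion design concrete, a PyTorch sketch of the three-component architecture is given below. The layer sizes of the MLP stem and Classification Head are illustrative placeholders (the actual configuration is in Table 3), and `HybridNet` is a hypothetical name:

```python
import torch
import torch.nn as nn
from torchvision import models

class HybridNet(nn.Module):
    """Sketch of CNN stem + MLP stem + Classification Head (assumed dimensions)."""
    def __init__(self, n_tabular=122, n_classes=2):
        super().__init__()
        # CNN stem: ResNet-34 backbone with its final classifier removed.
        backbone = models.resnet34(weights="IMAGENET1K_V1")
        self.cnn_stem = nn.Sequential(*list(backbone.children())[:-1])  # -> (B, 512, 1, 1)
        # MLP stem for the DXA features (layer sizes are illustrative).
        self.mlp_stem = nn.Sequential(
            nn.Linear(n_tabular, 64), nn.BatchNorm1d(64), nn.ReLU(), nn.Dropout(0.3),
        )
        # Classification Head applied to the fused feature vector.
        self.head = nn.Sequential(
            nn.BatchNorm1d(512 + 64), nn.Dropout(0.3),
            nn.Linear(512 + 64, 128), nn.ReLU(),
            nn.BatchNorm1d(128), nn.Dropout(0.3),
            nn.Linear(128, n_classes),  # logits for CVD vs. control
        )

    def forward(self, image, tabular):
        img_feat = self.cnn_stem(image).flatten(1)      # (B, 512)
        tab_feat = self.mlp_stem(tabular)               # (B, 64)
        fused = torch.cat([img_feat, tab_feat], dim=1)  # vector concatenation
        return self.head(fused)
```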

4.4. Performance Evaluation Metrics

We evaluated all models with 5-fold cross-validation (CV) [65]. The models were evaluated on accuracy, sensitivity, precision, F1-score, and Matthews correlation coefficient (MCC). These metrics are defined in the following equations:
$$\text{Accuracy} = \frac{tp + tn}{tp + tn + fp + fn}$$
$$\text{Sensitivity (recall)} = \frac{tp}{tp + fn}$$
$$\text{Precision} = \frac{tp}{tp + fp}$$
$$\text{F1-score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$$
$$\text{MCC} = \frac{(tp \times tn) - (fp \times fn)}{\sqrt{(tp + fp)(tp + fn)(tn + fp)(tn + fn)}}$$
where true positive, false negative, false positive, and true negative counts are represented as tp, fn, fp, and tn, respectively. Moreover, we computed an empirical p-value for evaluating the significance of the cross-validated performance scores using permutations.
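One way to obtain such an empirical p-value is scikit-learn's permutation test, which refits the model many times on label-shuffled data. A sketch, assuming an XGBoost classifier on the DXA features (`X`, `y` are placeholders for the feature matrix and labels), follows:

```python
from sklearn.model_selection import StratifiedKFold, permutation_test_score
from xgboost import XGBClassifier  # assumption: same model family as the DXA experiments

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
score, perm_scores, p_value = permutation_test_score(
    XGBClassifier(), X, y,          # X, y: DXA features and CVD/control labels
    scoring="accuracy", cv=cv,
    n_permutations=1000, random_state=0,
)
# The empirical p-value is the fraction of permuted-label scores >= the real score.
print(f"accuracy={score:.3f}, empirical p-value={p_value:.4f}")
```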

5. Results

5.1. Experimental Results from the Machine Learning Models

The results obtained from the three experiments are presented in this section. For the DXA model, an ablation study was conducted on the different DXA feature types separately and on all of them together, with 5-fold CV. The ablation study shows that the area measurement features have a low contribution, with 66.5% accuracy, whereas body fat composition, BMD, and lean mass achieved accuracies of 77.0%, 73.2%, and 70.2%, respectively. Considering all 122 features, the XGBoost model performed best, achieving the highest accuracy of 77.4%, the highest F1-score of 76.8%, and the highest MCC of 55.5% (see Table 4). For all DXA-based experiments, a p-value of <0.05 was obtained, indicating their statistical significance.
Table 4. Performance of ML techniques for ablation study on DXA Model.
For the Retinal Image Model, where only retinal images were used to distinguish the CVD group from the control group, the DenseNet-121 model achieved the highest accuracy of 75.6% on the cropped image set based on 5-fold CV. The DenseNet-121 model achieved 73.0% accuracy on the mean-subtracted image set. Table 5 compares the performance of all DL models used in this study. For all retinal image-based experiments, a p-value of <0.05 was obtained, indicating their statistical significance.
Table 5. A comparison of the performance of deep learning models built based on retinal images.
Finally, for the Hybrid Model, where the DXA tabular data and retinal images were combined, ResNet-34 achieved the highest accuracy of 78.3% with 5-fold CV. Table 6 shows a comparison between the eight DL models that we tested in this experiment. For all hybrid model-based experiments, a p-value of <0.05 was obtained, indicating their statistical significance. We also calculated the area under the curve (AUC) of the receiver operating characteristic (ROC) for all DL models in this experiment on the cropped images (Supplementary File S3).
Table 6. A comparison of the performance of hybrid models built based on both retinal image and DXA data.

5.2. Performance of the Hybrid Model Based on Gender and Age Stratified Dataset

To check the effectiveness of the proposed model on different subgroups of the population, we tested the Hybrid Model (incorporating the DXA and retinal image data) on age- and gender-stratified samples. Although the performance of the model dropped slightly on the gender-stratified dataset, the ResNet-34-based model achieved the highest performance, with 75% and 72.9% accuracy for males and females, respectively (Figure 5). For the gender-stratified samples, all eight models achieved better performance for males than for females (Figure 5).
Figure 5. Performance of the hybrid models on gender-stratified participants (based on cropped image).
Table 7 provides the numbers of participants and images used for the different age groups. For the age-stratified samples, the performance of the model was close to that of the model trained on the whole dataset. The highest performance across all DL models was obtained from the ResNet-34 model, with 76.5% accuracy for the 40-and-above age group. For the other age group (below 40), the ResNet-34 model also achieved the best performance, with 76.2% accuracy. Figure 6 shows the detailed results and a comparison of model performance for the age-stratified participants.
Table 7. Details of number of participants and number of images used for each age group.
Figure 6. Performance of the hybrid models on age-stratified participants (based on cropped image).

5.3. Statistical Analysis on DXA Data

We conducted a statistical analysis of the DXA data to compare the CVD group against the control group. Of the 122 features analyzed, 95 were statistically significant (p-value < 0.05) (Supplementary File S1).
Compared with the control group, the CVD group accumulated more fat in different body parts, including the arms, legs, trunk, and the android, gynoid, and android visceral regions.
Measurements of lean mass in the android region, legs, arms, trunk, etc., were also higher in the CVD group. For instance, lean mass in the trunk (20,329.32 ± 3809 vs. 18,779.82 ± 3965) was greater in the CVD group than in the control group. Overall, the total fat mass (25,908.58 ± 6342 vs. 20,962.12 ± 4946) as well as the total lean mass (43,643.57 ± 8617 vs. 40,614.88 ± 9059) was higher in the CVD group, and the higher BMI value in the CVD group reflects its higher fat and lean mass content.
We also observed higher BMD and anthropometric measurements in different body parts in the CVD group compared to the control group (Supplementary File S1). For BMD measurements in different body parts, e.g., head, arms, spine, troch, and trunk, the CVD group had greater BMD than the control group. Overall, total BMD (1.20 ± 0.11 vs. 1.17 ± 0.12) was higher in the CVD group than in the control group. Individuals with higher weight and BMI tend to have higher BMD, which might help reduce the risk of bone fracture [73,74,75]. A similar trend was seen in our cohort.
Furthermore, regarding the anthropometric measurements, we observed a larger bone area in the troch, lumbar spine (L1, L2, L3, L4), and pelvis for the CVD group compared to the control group. In summary, according to the analysis, most measurements of BMD, fat content, muscle mass, and bone area were higher in the CVD group than in the control group. Details of the analysis can be found in Supplementary File S1.
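The text above reports per-feature p-values but does not name the significance test used; a hedged sketch of one plausible choice (a Mann-Whitney U test per DXA feature, which does not assume normality) is shown below:

```python
import pandas as pd
from scipy.stats import mannwhitneyu

# Hedged sketch: the exact test used in the study is not specified in this section.
def significant_features(cvd: pd.DataFrame, control: pd.DataFrame, alpha: float = 0.05):
    """Return {feature: p-value} for features differing significantly between groups."""
    results = {}
    for col in cvd.columns:
        _, p = mannwhitneyu(cvd[col].dropna(), control[col].dropna())
        results[col] = p
    return {c: p for c, p in results.items() if p < alpha}
```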

5.4. Class Activation Map for Highlighting the Region of Interest in CVD Patients

We used Gradient-weighted Class Activation Mapping (GradCAM) [76] to highlight the regions of interest that influenced the DL model’s predictions. Figure 7 shows the results of GradCAM on images of the CVD class. It was observed that the regions of interest, as shown in the color-coded heat map to the right of each image, lie mostly in the central region of the retina. Microhemorrhages, which are associated with hypertension [77,78], can be observed in all images, especially in the bottom-left image.
Figure 7. A few retinal images from the CVD group with overlaid GradCAM heat maps. Reddish colors indicate a higher influence on the prediction model’s decision; bluish colors indicate less influence.
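A minimal sketch using the pytorch-grad-cam library cited in [76] is shown below; `img_tensor` (a normalized (1, 3, H, W) batch) and `rgb_img` (the same image as a float array in [0, 1]) are assumed inputs, and a ResNet-34 stands in for the trained classifier:

```python
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.image import show_cam_on_image
from torchvision import models

model = models.resnet34(weights="IMAGENET1K_V1").eval()  # stand-in for the trained model
target_layers = [model.layer4[-1]]  # last conv block: a common choice for GradCAM

cam = GradCAM(model=model, target_layers=target_layers)
grayscale_cam = cam(input_tensor=img_tensor)[0]          # (H, W) heat map in [0, 1]
overlay = show_cam_on_image(rgb_img, grayscale_cam, use_rgb=True)  # color-coded overlay
```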

6. Discussion

6.1. Principal Findings

Integrating retinal images with DXA improves the performance of CVD detection. In our experiments, the classification accuracy for the retinal image data alone and for the DXA data alone was 75.6% and 77.4%, respectively. However, when both datasets were integrated using a joint deep learning model, the performance improved, reaching 78.3% accuracy with the ResNet-34-based model. This indicates that integrating the multi-modal dataset improved the performance of the proposed model. For the gender-stratified participants, the proposed model achieved similar performance for both genders (Figure 5). The performance of the model on the age-stratified participants was close to that of the model trained on all participants (Figure 6). This indicates that the performance of the proposed model is unbiased with respect to age in the adult population.
The majority of DXA readings in the CVD group were higher than in the control group. Among the statistically significant variables, the majority of BMD, fat content, muscle mass, and bone area measurements had greater average values in the CVD group than in the control group (Supplementary File S1). Obesity has been shown to be a protective factor against osteoporosis through different mechanisms, including mechanical and biochemical ones [79]. In the current study, although obesity (higher BMI) has a deleterious effect on cardiovascular risk factors, we can also observe its protective effect on bone health. In summary, our results indicate a better bone health condition for the CVD group, which had a higher BMI than the control group. The ablation study revealed the better discriminatory power of fat content and BMD compared to muscle mass and bone area (Table 4). This highlights the discriminatory power of DXA measurements, which could open new avenues in the diagnosis of CVD.

6.2. Comparison against Other Tools

We could not find any work that used retinal images and DXA for CVD detection, either for the Qatari population or elsewhere. The published work we reviewed focused on the prediction of risk factors associated with CVD (refer to Table 1) instead of predicting CVD directly; therefore, we could not compare the performance of the proposed model against any existing model.
We, however, present a summary of the existing methods (Table 2) that use a multi-modal approach, albeit with modalities different from DXA and retinal images. We would like to reiterate that a direct comparison of our proposed multi-modal approach with any of these would not be fair for the following reasons. First, none of the existing research uses this particular combination of multi-modal data (DXA and retinal images); to the best of our knowledge, ours is the first work to consider these two data modalities in a machine learning approach. Second, the data used in those approaches (Table 2) include biomarkers related directly to the health of the heart, and hence have an unfair advantage in CVD prediction over our approach, at the cost of being more invasive. Our method is better suited to scenarios involving health centers in remote regions with access only to limited, non-invasive resources.

6.3. Motivation for Using a Multi-Modal Approach

We were motivated to use a “multi-modal” approach to predict CVD using retinal images and DXA data for multiple reasons. First, we have shown that the multi-modal approach is a superior technique (Table 6) to its uni-modal counterparts (Table 4 and Table 5) for predicting CVD. Second, the particular combination of modalities we used (DXA and retinal images) has never been explored, hence providing a completely novel insight into possible ways of predicting CVD in a non-invasive manner. Last but not least, since neither DXA nor retinal data collection is invasive, our proposed multi-modal approach is, overall, non-invasive and hence applicable to a wide variety of health centers, including those located in remote regions with limited resources.

7. Limitations

In this work, we used a dataset from QBB that consisted of participants from Qatar only. This means that the outcome of this study could be specific to the Qatari population and to people of Gulf countries with a similar lifestyle and ethnicity; the results might not generalize to other populations. Moreover, human retinal images contain a vascular tree with a complex branching pattern. To identify different aspects of this tree, it is important to have retinal images with high contrast, color balance, and quality. Hence, having more and better quality-controlled images could have pushed the accuracy of the proposed model even higher. Despite these limitations, this study serves as a proof of concept that incorporating information from retinal images and DXA provides a novel avenue for diagnosing CVD with reasonable accuracy in a non-invasive manner.

8. Conclusions

In this work, we presented a novel deep learning-based model to distinguish the CVD group from the control group by integrating information from retinal images and DXA scans. The proposed multi-modal approach achieved an accuracy of 78.3%, outperforming the individual uni-modal (DXA and retinal image) models. To the best of our knowledge, this is the first study to diagnose CVD from DXA data and retinal images. The achieved performance demonstrates that signals from fast and relatively non-invasive techniques such as DXA and retinal imaging can be used to diagnose patients with CVD. The findings of our study need to be validated in a clinical setting for a proper understanding of their links to CVD diagnosis.

Supplementary Materials

The following are available online at https://www.mdpi.com/article/10.3390/s22124310/s1. Supplementary File S1: Summary Statistics of the DXA features. Supplementary File S2: Details for parameter optimization for the models. Supplementary File S3: ROC Curve Analysis.

Author Contributions

H.R.H.A.-A. and T.A. conceived and planned the experiments. H.R.H.A.-A. and M.T.I. performed the experiments. H.R.H.A.-A. analyzed the data. H.R.H.A.-A., T.A. and M.T.I. prepared the manuscript with assistance from the other authors. Data curation, M.A.R.; validation, M.E.H.C. All authors have read and agreed to the final version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This research was carried out under the regulation of Qatar’s Ministry of Public Health. This work was approved by the Institutional Review Board of Qatar Biobank (QBB) in Qatar, and used a de-identified dataset from QBB.

Data Availability Statement

Restrictions apply to the availability of these data. Data was obtained from QBB under non-disclosure agreement (NDA).

Acknowledgments

We thank Qatar Biobank (QBB) for providing access to the de-identified dataset. The open-access publication of this article is funded by the College of Science and Engineering, Hamad Bin Khalifa University (HBKU), Doha 34110, Qatar.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Abbreviations

The following abbreviations are used in this manuscript:
ASCVD: Atherosclerotic Cardiovascular Disease
AUC: Area Under the Curve
BMI: Body Mass Index
BMD: Bone Mineral Density
CCTA: Coronary CT Angiography
CT: Computed Tomography
CVD: Cardiovascular Disease
DL: Deep Learning
DXA: Dual-energy X-ray Absorptiometry
ECG: Electrocardiogram
ECHO: Echocardiography
ML: Machine Learning
MPI: Myocardial Perfusion Imaging
MRI: Magnetic Resonance Imaging
PCG: Phonocardiogram
ROC: Receiver Operating Characteristics
WHO: World Health Organization
WHR: Waist-to-hip Ratio

References

  1. Roth, G.A.; Mensah, G.A.; Johnson, C.O.; Addolorato, G.; Ammirati, E.; Baddour, L.M.; Barengo, N.C.; Beaton, A.Z.; Benjamin, E.J.; Benziger, C.P.; et al. Global burden of cardiovascular diseases and risk factors, 1990–2019: Update from the GBD 2019 study. J. Am. Coll. Cardiol. 2020, 76, 2982–3021. [Google Scholar] [CrossRef] [PubMed]
  2. World Health Organization. Cardiovascular Diseases (CVDs); World Health Organization: Geneva, Switzerland, 2021. [Google Scholar]
  3. Long, C.P.; Chan, A.X.; Bakhoum, C.Y.; Toomey, C.B.; Madala, S.; Garg, A.K.; Freeman, W.R.; Goldbaum, M.H.; DeMaria, A.N.; Bakhoum, M.F. Prevalence of subclinical retinal ischemia in patients with cardiovascular disease—A hypothesis driven study. EClinicalMedicine 2021, 33, 100775. [Google Scholar] [CrossRef]
  4. WHO-EMRO. Cardiovascular Diseases; World Health Organization, Regional Office for the Eastern Mediterranean: Cairo, Egypt, 2021. [Google Scholar]
  5. Planning and Statistics Authority (Qatar). Births & Deaths in the State of Qatar (Review & Analysis). Report 2018. Available online: https://www.psa.gov.qa/en/statistics/Statistical%20Releases/General/StatisticalAbstract/2018/Birth_death_2018_EN.pdf (accessed on 18 September 2021).
  6. Ghantous, C.M.; Kamareddine, L.; Farhat, R.; Zouein, F.A.; Mondello, S.; Kobeissy, F.; Zeidan, A. Advances in cardiovascular biomarker discovery. Biomedicines 2020, 8, 552. [Google Scholar] [CrossRef] [PubMed]
  7. Al-Absi, H.R.; Refaee, M.A.; Rehman, A.U.; Islam, M.T.; Belhaouari, S.B.; Alam, T. Risk Factors and Comorbidities Associated to Cardiovascular Disease in Qatar: A Machine Learning Based Case-Control Study. IEEE Access 2021, 9, 29929–29941. [Google Scholar] [CrossRef]
  8. Rehman, A.U.; Alam, T.; Belhaouari, S.B. Investigating Potential Risk Factors for Cardiovascular Diseases in Adult Qatari Population. In Proceedings of the 2020 IEEE International Conference on Informatics, IoT, and Enabling Technologies (ICIoT), Doha, Qatar, 2–5 February 2020; pp. 267–270. [Google Scholar]
  9. Wilson, P.W.; D’Agostino, R.B.; Levy, D.; Belanger, A.M.; Silbershatz, H.; Kannel, W.B. Prediction of coronary heart disease using risk factor categories. Circulation 1998, 97, 1837–1847. [Google Scholar] [CrossRef] [Green Version]
  10. American College of Cardiology. ASCVD Risk Estimator; American College of Cardiology: Washington, DC, USA, 2022; Available online: https://tools.acc.org/ascvd-risk-estimator-plus/ (accessed on 18 September 2021).
  11. Xu, C.; Xu, L.; Gao, Z.; Zhao, S.; Zhang, H.; Zhang, Y.; Du, X.; Zhao, S.; Ghista, D.; Liu, H.; et al. Direct delineation of myocardial infarction without contrast agents using a joint motion feature learning architecture. Med. Image Anal. 2018, 50, 82–94. [Google Scholar] [CrossRef]
  12. Sudarshan, V.K.; Acharya, U.R.; Ng, E.; San Tan, R.; Chou, S.M.; Ghista, D.N. An integrated index for automated detection of infarcted myocardium from cross-sectional echocardiograms using texton-based features (Part 1). Comput. Biol. Med. 2016, 71, 231–240. [Google Scholar] [CrossRef]
  13. Madani, A.; Ong, J.R.; Tibrewal, A.; Mofrad, M.R. Deep echocardiography: Data-efficient supervised and semi-supervised deep learning towards automated diagnosis of cardiac disease. NPJ Digit. Med. 2018, 1, 1–11. [Google Scholar] [CrossRef]
  14. Vidya, K.S.; Ng, E.; Acharya, U.R.; Chou, S.M.; San Tan, R.; Ghista, D.N. Computer-aided diagnosis of myocardial infarction using ultrasound images with DWT, GLCM and HOS methods: A comparative study. Comput. Biol. Med. 2015, 62, 86–93. [Google Scholar] [CrossRef]
  15. Larroza, A.; López-Lereu, M.P.; Monmeneu, J.V.; Gavara, J.; Chorro, F.J.; Bodí, V.; Moratal, D. Texture analysis of cardiac cine magnetic resonance imaging to detect nonviable segments in patients with chronic myocardial infarction. Med. Phys. 2018, 45, 1471–1480. [Google Scholar] [CrossRef]
  16. Snaauw, G.; Gong, D.; Maicas, G.; Van Den Hengel, A.; Niessen, W.J.; Verjans, J.; Carneiro, G. End-to-end diagnosis and segmentation learning from cardiac magnetic resonance imaging. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; pp. 802–805. [Google Scholar]
  17. Baessler, B.; Luecke, C.; Lurz, J.; Klingel, K.; Das, A.; von Roeder, M.; de Waha-Thiele, S.; Besler, C.; Rommel, K.P.; Maintz, D.; et al. Cardiac MRI and texture analysis of myocardial T1 and T2 maps in myocarditis with acute versus chronic symptoms of heart failure. Radiology 2019, 292, 608–617. [Google Scholar] [CrossRef] [PubMed]
  18. Mannil, M.; von Spiczak, J.; Manka, R.; Alkadhi, H. Texture analysis and machine learning for detecting myocardial infarction in noncontrast low-dose computed tomography: Unveiling the invisible. Investig. Radiol. 2018, 53, 338–343. [Google Scholar] [CrossRef] [PubMed]
  19. Coenen, A.; Kim, Y.H.; Kruk, M.; Tesche, C.; De Geer, J.; Kurata, A.; Lubbers, M.L.; Daemen, J.; Itu, L.; Rapaka, S.; et al. Diagnostic accuracy of a machine-learning approach to coronary computed tomographic angiography–based fractional flow reserve: Result from the MACHINE consortium. Circ. Cardiovasc. Imaging 2018, 11, e007217. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Zreik, M.; Van Hamersvelt, R.W.; Wolterink, J.M.; Leiner, T.; Viergever, M.A.; Išgum, I. A recurrent CNN for automatic detection and classification of coronary artery plaque and stenosis in coronary CT angiography. IEEE Trans. Med Imaging 2018, 38, 1588–1598. [Google Scholar] [CrossRef] [Green Version]
  21. Sacha, J.P.; Goodenday, L.S.; Cios, K.J. Bayesian learning for cardiac SPECT image interpretation. Artif. Intell. Med. 2002, 26, 109–143. [Google Scholar] [CrossRef]
  22. Shibutani, T.; Nakajima, K.; Wakabayashi, H.; Mori, H.; Matsuo, S.; Yoneyama, H.; Konishi, T.; Okuda, K.; Onoguchi, M.; Kinuya, S. Accuracy of an artificial neural network for detecting a regional abnormality in myocardial perfusion SPECT. Ann. Nucl. Med. 2019, 33, 86–92. [Google Scholar] [CrossRef]
  23. Liu, C.Y.; Tang, C.X.; Zhang, X.L.; Chen, S.; Xie, Y.; Zhang, X.Y.; Qiao, H.Y.; Zhou, C.S.; Xu, P.P.; Lu, M.J.; et al. Deep learning powered coronary CT angiography for detecting obstructive coronary artery disease: The effect of reader experience, calcification and image quality. Eur. J. Radiol. 2021, 142, 109835. [Google Scholar] [CrossRef]
  24. Farrah, T.E.; Dhillon, B.; Keane, P.A.; Webb, D.J.; Dhaun, N. The eye, the kidney, and cardiovascular disease: Old concepts, better tools, and new horizons. Kidney Int. 2020, 98, 323–342. [Google Scholar] [CrossRef] [Green Version]
  25. Wong, T.Y.; Klein, R.; Klein, B.E.; Tielsch, J.M.; Hubbard, L.; Nieto, F.J. Retinal microvascular abnormalities and their relationship with hypertension, cardiovascular disease, and mortality. Surv. Ophthalmol. 2001, 46, 59–80. [Google Scholar] [CrossRef]
  26. Zhang, L.; Yuan, M.; An, Z.; Zhao, X.; Wu, H.; Li, H.; Wang, Y.; Sun, B.; Li, H.; Ding, S.; et al. Prediction of hypertension, hyperglycemia and dyslipidemia from retinal fundus photographs via deep learning: A cross-sectional study of chronic diseases in central China. PLoS ONE 2020, 15, e0233166. [Google Scholar] [CrossRef]
  27. Islam, M.T.; Al-Absi, H.R.; Ruagh, E.A.; Alam, T. DiaNet: A deep learning based architecture to diagnose diabetes using retinal images only. IEEE Access 2021, 9, 15686–15695. [Google Scholar] [CrossRef]
  28. Poplin, R.; Varadarajan, A.V.; Blumer, K.; Liu, Y.; McConnell, M.V.; Corrado, G.S.; Peng, L.; Webster, D.R. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat. Biomed. Eng. 2018, 2, 158–164. [Google Scholar] [CrossRef] [PubMed]
  29. Cheung, C.Y.; Xu, D.; Cheng, C.Y.; Sabanayagam, C.; Tham, Y.C.; Yu, M.; Rim, T.H.; Chai, C.Y.; Gopinath, B.; Mitchell, P.; et al. A deep-learning system for the assessment of cardiovascular disease risk via the measurement of retinal-vessel calibre. Nat. Biomed. Eng. 2021, 5, 498–508. [Google Scholar] [CrossRef] [PubMed]
  30. Spahillari, A.; Mukamal, K.; DeFilippi, C.; Kizer, J.R.; Gottdiener, J.S.; Djoussé, L.; Lyles, M.F.; Bartz, T.M.; Murthy, V.L.; Shah, R.V. The association of lean and fat mass with all-cause mortality in older adults: The Cardiovascular Health Study. Nutr. Metab. Cardiovasc. Dis. 2016, 26, 1039–1047. [Google Scholar] [CrossRef] [Green Version]
  31. Chuang, T.L.; Lin, J.W.; Wang, Y.F. Bone Mineral Density as a Predictor of Atherogenic Indexes of Cardiovascular Disease, Especially in Nonobese Adults. Dis. Markers 2019, 2019, 1045098. [Google Scholar] [CrossRef] [Green Version]
  32. Messina, C.; Albano, D.; Gitto, S.; Tofanelli, L.; Bazzocchi, A.; Ulivieri, F.M.; Guglielmi, G.; Sconfienza, L.M. Body composition with dual energy X-ray absorptiometry: From basics to new tools. Quant. Imaging Med. Surg. 2020, 10, 1687. [Google Scholar] [CrossRef]
  33. Park, J.; Yoon, Y.E.; Kim, K.M.; Hwang, I.C.; Lee, W.; Cho, G.Y. Prognostic value of lower bone mineral density in predicting adverse cardiovascular disease in Asian women. Heart 2021, 107, 1040–1046. [Google Scholar] [CrossRef]
  34. Ceniccola, G.D.; Castro, M.G.; Piovacari, S.M.F.; Horie, L.M.; Corrêa, F.G.; Barrere, A.P.N.; Toledo, D.O. Current technologies in body composition assessment: Advantages and disadvantages. Nutrition 2019, 62, 25–31. [Google Scholar] [CrossRef]
  35. Hansen, A.B.; Sander, B.; Larsen, M.; Kleener, J.; Borch-Johnsen, K.; Klein, R.; Lund-Andersen, H. Screening for diabetic retinopathy using a digital non-mydriatic camera compared with standard 35-mm stereo colour transparencies. Acta Ophthalmol. Scand. 2004, 82, 656–665. [Google Scholar] [CrossRef]
  36. Oloumi, F.; Rangayyan, R.M.; Ells, A.L. Digital Image Processing for Ophthalmology: Detection and Modeling of Retinal Vascular Architecture. Synth. Lect. Biomed. Eng. 2014, 9, 1–185. [Google Scholar] [CrossRef]
  37. Guo, V.Y.; Chan, J.C.N.; Chung, H.; Ozaki, R.; So, W.; Luk, A.; Lam, A.; Lee, J.; Zee, B.C.Y. Retinal information is independently associated with cardiovascular disease in patients with type 2 diabetes. Sci. Rep. 2016, 6, 19053. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  38. Gerrits, N.; Elen, B.; Van Craenendonck, T.; Triantafyllidou, D.; Petropoulos, I.N.; Malik, R.A.; De Boever, P. Age and sex affect deep learning prediction of cardiometabolic risk factors from retinal images. Sci. Rep. 2020, 10, 9432. [Google Scholar] [CrossRef] [PubMed]
  39. Van Craenendonck, T.; Gerrits, N.; Buelens, B.; Petropoulos, I.N.; Shuaib, A.; Standaert, A.; Malik, R.A.; De Boever, P. Retinal microvascular complexity comparing mono-and multifractal dimensions in relation to cardiometabolic risk factors in a Middle Eastern population. Acta Ophthalmol. 2021, 99, e368–e377. [Google Scholar] [CrossRef] [PubMed]
  40. Sharp, D.S.; Andrew, M.E.; Burchfiel, C.M.; Violanti, J.M.; Wactawski-Wende, J. Body mass index versus dual energy X-ray absorptiometry-derived indexes: Predictors of cardiovascular and diabetic disease risk factors. Am. J. Hum. Biol. 2012, 24, 400–405. [Google Scholar] [CrossRef] [PubMed]
  41. Lang, P.O.; Trivalle, C.; Vogel, T.; Proust, J.; Papazian, J.P. Markers of metabolic and cardiovascular health in adults: Comparative analysis of DEXA-based body composition components and BMI categories. J. Cardiol. 2015, 65, 42–49. [Google Scholar] [CrossRef] [Green Version]
  42. Bekfani, T.; Pellicori, P.; Morris, D.A.; Ebner, N.; Valentova, M.; Steinbeck, L.; Wachter, R.; Elsner, S.; Sliziuk, V.; Schefold, J.C.; et al. Sarcopenia in patients with heart failure with preserved ejection fraction: Impact on muscle strength, exercise capacity and quality of life. Int. J. Cardiol. 2016, 222, 41–46. [Google Scholar] [CrossRef] [Green Version]
  43. Martens, P.; Ter Maaten, J.M.; Vanhaen, D.; Heeren, E.; Caers, T.; Bovens, B.; Dauw, J.; Dupont, M.; Mullens, W. Heart failure is associated with accelerated age related metabolic bone disease. Acta Cardiol. 2021, 76, 718–726. [Google Scholar] [CrossRef]
  44. Kerkadi, A.; Suleman, D.; Salah, L.A.; Lotfy, C.; Attieh, G.; Bawadi, H.; Shi, Z. Adiposity indicators as cardio-metabolic risk predictors in adults from country with high burden of obesity. Diabetes Metab. Syndr. Obesity Targets Ther. 2020, 13, 175. [Google Scholar] [CrossRef] [Green Version]
  45. Bawadi, H.; Hassan, S.; Zadeh, A.S.; Sarv, H.; Kerkadi, A.; Tur, J.A.; Shi, Z. Age and gender specific cut-off points for body fat parameters among adults in Qatar. Nutr. J. 2020, 19, 75. [Google Scholar] [CrossRef]
  46. Bartl, R.; Bartl, C. Bone densitometry. In The Osteoporosis Manual: Prevention, Diagnosis and Management; Springer International Publishing: Cham, Switzerland, 2019; pp. 67–75. [Google Scholar] [CrossRef]
  47. Loncar, G.; Cvetinovic, N.; Lainscak, M.; Isaković, A.; von Haehling, S. Bone in heart failure. J. Cachexia Sarcopenia Muscle 2020, 11, 381–393. [Google Scholar] [CrossRef] [Green Version]
  48. Reid, S.; Schousboe, J.T.; Kimelman, D.; Monchka, B.A.; Jozani, M.J.; Leslie, W.D. Machine learning for automated abdominal aortic calcification scoring of DXA vertebral fracture assessment images: A pilot study. Bone 2021, 148, 115943. [Google Scholar] [CrossRef] [PubMed]
  49. Bawadi, H.; Abouwatfa, M.; Alsaeed, S.; Kerkadi, A.; Shi, Z. Body shape index is a stronger predictor of diabetes. Nutrients 2019, 11, 1018. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  50. Zhao, J.; Feng, Q.; Wu, P.; Lupu, R.A.; Wilke, R.A.; Wells, Q.S.; Denny, J.C.; Wei, W.Q. Learning from longitudinal data in electronic health record and genetic data to improve cardiovascular event prediction. Sci. Rep. 2019, 9, 717. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  51. Zhang, H.; Wang, X.; Liu, C.; Liu, Y.; Li, P.; Yao, L.; Li, H.; Wang, J.; Jiao, Y. Detection of coronary artery disease using multi-modal feature fusion and hybrid feature selection. Physiol. Meas. 2020, 41, 115007. [Google Scholar] [CrossRef] [PubMed]
  52. Ali, F.; El-Sappagh, S.; Islam, S.R.; Kwak, D.; Ali, A.; Imran, M.; Kwak, K.S. A smart healthcare monitoring system for heart disease prediction based on ensemble deep learning and feature fusion. Inf. Fusion 2020, 63, 208–222. [Google Scholar] [CrossRef]
  53. Apostolopoulos, I.D.; Apostolopoulos, D.I.; Spyridonidis, T.I.; Papathanasiou, N.D.; Panayiotakis, G.S. Multi-input deep learning approach for cardiovascular disease diagnosis using myocardial perfusion imaging and clinical data. Phys. Medica 2021, 84, 168–177. [Google Scholar] [CrossRef]
  54. Chaves, J.M.Z.; Chaudhari, A.S.; Wentland, A.L.; Desai, A.D.; Banerjee, I.; Boutin, R.D.; Maron, D.J.; Rodriguez, F.; Sandhu, A.T.; Jeffrey, R.B.; et al. Opportunistic assessment of ischemic heart disease risk using abdominopelvic computed tomography and medical record data: A multimodal explainable artificial intelligence approach. medRxiv 2021. [Google Scholar] [CrossRef]
  55. Flores, A.M.; Schuler, A.; Eberhard, A.V.; Olin, J.W.; Cooke, J.P.; Leeper, N.J.; Shah, N.H.; Ross, E.G. Unsupervised Learning for Automated Detection of Coronary Artery Disease Subgroups. J. Am. Heart Assoc. 2021, 10, e021976. [Google Scholar] [CrossRef]
  56. Al Thani, A.; Fthenou, E.; Paparrodopoulos, S.; Al Marri, A.; Shi, Z.; Qafoud, F.; Afifi, N. Qatar biobank cohort study: Study design and first results. Am. J. Epidemiol. 2019, 188, 1420–1433. [Google Scholar] [CrossRef]
  57. Al Kuwari, H.; Al Thani, A.; Al Marri, A.; Al Kaabi, A.; Abderrahim, H.; Afifi, N.; Qafoud, F.; Chan, Q.; Tzoulaki, I.; Downey, P.; et al. The Qatar Biobank: Background and methods. BMC Public Health 2015, 15, 1–9. [Google Scholar] [CrossRef] [Green Version]
  58. Jain, Y.K.; Bhandare, S.K. Min max normalization based data perturbation method for privacy protection. Int. J. Comput. Commun. Technol. 2011, 2, 45–50. [Google Scholar] [CrossRef]
  59. Breiman, L.; Friedman, J.H.; Olshen, R.A.; Stone, C.J. Classification and Regression Trees; Routledge: London, UK, 2017. [Google Scholar]
  60. Hagan, M.; Demuth, H.; Beale, M. Neural Net and Traditional Classifiers; Lincoln Laboratory: Lexington, MA, USA, 1987. [Google Scholar]
  61. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  62. Brownlee, J. XGBoost with Python: Gradient Boosted Trees with XGBoost and Scikit-Learn. Machine Learning Mastery 2016. Available online: https://machinelearningmastery.com/xgboost-with-python/ (accessed on 27 October 2021).
  63. Dorogush, A.V.; Ershov, V.; Gulin, A. CatBoost: Gradient boosting with categorical features support. arXiv 2018, arXiv:1810.11363. [Google Scholar]
  64. Hoffman, J.I. Biostatistics for Medical and Biomedical Practitioners; Academic Press: Cambridge, MA, USA, 2015. [Google Scholar]
  65. Cawley, G.C.; Talbot, N.L. On over-fitting in model selection and subsequent selection bias in performance evaluation. J. Mach. Learn. Res. 2010, 11, 2079–2107. [Google Scholar]
  66. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105. [Google Scholar] [CrossRef]
  67. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  68. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  69. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  70. Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50× fewer parameters and <0.5 MB model size. arXiv 2016, arXiv:1602.07360. [Google Scholar]
  71. Smith, L.N.; Topin, N. Super-convergence: Very fast training of neural networks using large learning rates. In Proceedings of the Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications, Baltimore, MD, USA, 10 May 2019; Volume 11006, p. 1100612. [Google Scholar]
  72. Smith, L.N. A disciplined approach to neural network hyper-parameters: Part 1–learning rate, batch size, momentum, and weight decay. arXiv 2018, arXiv:1803.09820. [Google Scholar]
  73. Nguyen, T.; Kelly, P.; Sambrook, P.; Gilbert, C.; Pocock, N.; Eisman, J. Lifestyle factors and bone density in the elderly: Implications for osteoporosis prevention. J. Bone Miner. Res. 1994, 9, 1339–1346. [Google Scholar] [CrossRef]
  74. Nguyen, T.V.; Eisman, J.A.; Kelly, P.J.; Sambrook, P.N. Risk factors for osteoporotic fractures in elderly men. Am. J. Epidemiol. 1996, 144, 255–263. [Google Scholar] [CrossRef] [Green Version]
  75. Khondaker, M.; Islam, T.; Khan, J.Y.; Refaee, M.A.; Hajj, N.E.; Rahman, M.S.; Alam, T. Obesity in Qatar: A Case-Control Study on the Identification of Associated Risk Factors. Diagnostics 2020, 10, 883. [Google Scholar] [CrossRef] [PubMed]
  76. Gildenblat, J. PyTorch Library for CAM Methods. Available online: https://github.com/jacobgil/pytorch-grad-cam (accessed on 2 January 2022).
  77. Wong, T.Y.; Mitchell, P. Hypertensive retinopathy. N. Engl. J. Med. 2004, 351, 2310–2317. [Google Scholar] [CrossRef] [PubMed]
  78. Kanukollu, V.M.; Ahmad, S.S. Retinal Hemorrhage. StatPearls. 2021. Available online: https://www.ncbi.nlm.nih.gov/books/NBK560777/ (accessed on 16 January 2022).
  79. Gkastaris, K.; Goulis, D.G.; Potoupnis, M.; Anastasilakis, A.D.; Kapetanos, G. Obesity, osteoporosis and bone metabolism. J. Musculoskelet. Neuronal Interact. 2020, 20, 372. [Google Scholar] [PubMed]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
