Search Results (9)

Search Parameters:
Keywords = DenseNet 169

15 pages, 1718 KiB  
Article
Classification of Intraoral Photographs with Deep Learning Algorithms Trained According to Cephalometric Measurements
by Sultan Büşra Ay Kartbak, Mehmet Birol Özel, Duygu Nur Cesur Kocakaya, Muhammet Çakmak and Enver Alper Sinanoğlu
Diagnostics 2025, 15(9), 1059; https://doi.org/10.3390/diagnostics15091059 - 22 Apr 2025
Cited by 2 | Viewed by 759
Abstract
Background/Objectives: Clinical intraoral photographs are important for orthodontic diagnosis, treatment planning, and documentation. This study aimed to evaluate deep learning algorithms trained on actual cephalometric measurements for the classification of intraoral clinical photographs. Methods: This study was executed on lateral cephalograms and intraoral right-side images of 990 patients. IMPA, interincisal angle, U1–palatal plane angle, and Wits appraisal values were measured using WebCeph. Intraoral photographs were divided into three groups based on cephalometric measurements. A total of 14 deep learning models (DenseNet 121, DenseNet 169, DenseNet 201, EfficientNet B0, EfficientNet V2, Inception V3, MobileNet V2, NasNetMobile, ResNet101, ResNet152, ResNet50, VGG16, VGG19, and Xception) were employed to classify the intraoral photographs. Performance metrics (F1 scores, accuracy, precision, and recall) were calculated and confusion matrices were formed. Results: The highest accuracy rates were 98.33% for IMPA groups, 99.00% for interincisal angle groups, 96.67% for U1–palatal plane angle groups, and 98.33% for Wits measurement groups. The lowest accuracy rates were 59% for IMPA groups, 53% for interincisal angle groups, 33.33% for U1–palatal plane angle groups, and 83.67% for Wits measurement groups. Conclusions: Although accuracy rates varied among classifications and DL algorithms, successful classification could be achieved in the majority of cases. Our results may be promising for case classification and analysis without the need for lateral cephalometric radiographs.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
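A minimal sketch of the transfer-learning setup this abstract describes, assuming a Keras DenseNet 169 backbone with a small classification head for one three-group cephalometric split; the input size, head design, and training hyperparameters are illustrative assumptions, not the paper's configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet169

# Pretrained backbone; only the new classification head is trained at first.
base = DenseNet169(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(3, activation="softmax"),  # e.g., three IMPA groups
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # dataset objects assumed
```

Any of the other 13 backbones named in the abstract can be swapped in by replacing the `DenseNet169` constructor.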

31 pages, 12013 KiB  
Article
Detection of Atrial Fibrillation in Holter ECG Recordings by ECHOView Images: A Deep Transfer Learning Study
by Vessela Krasteva, Todor Stoyanov, Stefan Naydenov, Ramun Schmid and Irena Jekova
Diagnostics 2025, 15(7), 865; https://doi.org/10.3390/diagnostics15070865 - 28 Mar 2025
Viewed by 784
Abstract
Background/Objectives: The timely and accurate detection of atrial fibrillation (AF) is critical from a clinical perspective. Detecting short or transient AF events is challenging in 24–72 h Holter ECG recordings, especially when symptoms are infrequent. This study aims to explore the potential of deep transfer learning with ImageNet deep neural networks (DNNs) to improve the interpretation of short-term ECHOView images for the presence of AF. Methods: Thirty-second ECHOView images, composed of stacked heartbeat amplitudes, were rescaled to fit the input of 18 pretrained ImageNet DNNs with the top layers modified for binary classification (AF, non-AF). Transfer learning provided both retrained DNNs by training only the top layers (513–2048 trainable parameters) and fine-tuned DNNs by slowly training retrained DNNs (0.38–23.48 M parameters). Results: Transfer learning used 13,536 training and 6624 validation samples from the two leads in the IRIDIA-AF Holter ECG database, evenly split between AF and non-AF cases. The top-ranked DNNs evaluated on 11,400 test samples from independent records are the retrained EfficientNetV2B1 (96.3% accuracy with minimal inter-patient (1%) and inter-lead (0.3%) drops), and fine-tuned EfficientNetV2B1 and DenseNet-121, -169, -201 (97.2–97.6% accuracy with inter-patient (1.4–1.6%) and inter-lead (0.5–1.2%) drops). These models can process shorter ECG episodes with a tolerable accuracy drop of up to 0.6% for 20 s and 4–15% for 10 s. Case studies present the GradCAM heatmaps of retrained EfficientNetV2B1 overlaid on raw ECG and ECHOView images to illustrate model interpretability. Conclusions: In an extended deep transfer learning study, we validate that ImageNet DNNs applied to short-term ECHOView images through retraining and fine-tuning can significantly enhance automated AF diagnoses. GradCAM heatmaps provide meaningful model interpretability, highlighting ECG regions of interest aligned with cardiologist focus.
(This article belongs to the Special Issue Diagnosis and Management of Arrhythmias)
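The abstract distinguishes retrained DNNs (only the top layers trained) from fine-tuned DNNs (the retrained network slowly trained further). A hedged two-stage sketch with EfficientNetV2B1 in Keras; the binary head, input size, and learning rates are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import EfficientNetV2B1

base = EfficientNetV2B1(weights="imagenet", include_top=False, input_shape=(240, 240, 3))
base.trainable = False  # stage 1: retrain only the new top layers

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),  # AF vs. non-AF
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)

base.trainable = True  # stage 2: fine-tune the whole network slowly
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```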

19 pages, 3915 KiB  
Article
Detection of Malignant Skin Lesions Based on Decision Fusion of Ensembles of Neural Networks
by Loretta Ichim, Razvan-Ionut Mitrica, Madalina-Oana Serghei and Dan Popescu
Cancers 2023, 15(20), 4946; https://doi.org/10.3390/cancers15204946 - 11 Oct 2023
Cited by 2 | Viewed by 3122
Abstract
Skin cancer, and especially melanoma, is an increasingly common and dangerous disease. Skin cancers with high mortality rates need to be detected in the early stages and treated urgently. The use of neural network ensembles for the detection of objects of interest in images has gained increasing attention due to the improved performance of the results. In this sense, this paper proposes two ensembles of neural networks, based on the fusion of the decisions of the component neural networks, for the detection of four skin lesions (basal cell cancer, melanoma, benign keratosis, and melanocytic nevi). The first system is based on separate learning of three neural networks (MobileNet V2, DenseNet 169, and EfficientNet B2), with multiple weights for the four classes of lesions and a weighted overall prediction. The second system is made up of six binary models (one for each pair of classes) for each network; the fusion and prediction are conducted by weighted summation per class and per model. In total, 18 such binary models were considered. The 91.04% global accuracy of this set of binary models is superior to that of the first system (89.62%). Separately, only for the binary classifications within the second system was the individual accuracy better. The individual F1 scores for each class and for the global system varied from 81.36% to 94.17%. Finally, a critical comparison is made with similar works from the literature.
(This article belongs to the Special Issue Diagnosis of Melanoma and Non-melanoma Skin Cancer)
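The second system's "weighted summation per class and per model" reduces to a weighted average of softmax outputs. A small sketch under that reading; the weight matrix and the synthetic outputs are placeholders, not the paper's learned weights:

```python
import numpy as np

def fuse_decisions(probs_per_model, class_weights):
    """probs_per_model: list of (n_samples, n_classes) softmax outputs;
    class_weights: (n_models, n_classes) per-model, per-class weights."""
    fused = np.zeros_like(probs_per_model[0])
    for probs, w in zip(probs_per_model, class_weights):
        fused += probs * w  # weighted summation per class and per model
    return fused.argmax(axis=1)

# Three models (e.g., MobileNet V2, DenseNet 169, EfficientNet B2), four classes.
rng = np.random.default_rng(0)
outputs = [rng.dirichlet(np.ones(4), size=5) for _ in range(3)]
weights = np.ones((3, 4)) / 3.0  # uniform placeholder weights
print(fuse_decisions(outputs, weights))
```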

19 pages, 3763 KiB  
Article
Rethinking Densely Connected Convolutional Networks for Diagnosing Infectious Diseases
by Prajoy Podder, Fatema Binte Alam, M. Rubaiyat Hossain Mondal, Md Junayed Hasan, Ali Rohan and Subrato Bharati
Computers 2023, 12(5), 95; https://doi.org/10.3390/computers12050095 - 2 May 2023
Cited by 17 | Viewed by 4531
Abstract
Due to its high transmissibility, the COVID-19 pandemic has placed an unprecedented burden on healthcare systems worldwide. X-ray imaging of the chest has emerged as a valuable and cost-effective tool for detecting and diagnosing COVID-19 patients. In this study, we developed a deep learning model using transfer learning with optimized DenseNet-169 and DenseNet-201 models for three-class classification, utilizing the Nadam optimizer. We modified the traditional DenseNet architecture and tuned the hyperparameters to improve the model's performance. The model was evaluated on a novel dataset of 3312 X-ray images from publicly available datasets, using metrics such as accuracy, recall, precision, F1-score, and the area under the receiver operating characteristic curve. Our results showed impressive detection accuracy and recall for COVID-19 patients: 95.98% and 96% using DenseNet-169, and 96.18% and 99% using DenseNet-201. Unique layer configurations and the Nadam optimization algorithm enabled our deep learning model to achieve high rates of accuracy not only for detecting COVID-19 patients but also for identifying normal and pneumonia-affected patients. The model's ability to detect lung problems early on, as well as its low false-positive and false-negative rates, suggests that it has the potential to serve as a reliable diagnostic tool for a variety of lung diseases.
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain)
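A hedged sketch of the training configuration named here: a DenseNet-201 backbone with a new three-class head compiled with the Nadam optimizer. The head layers and learning rate are assumptions, and the paper's custom layer modifications are not reproduced:

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet201

base = DenseNet201(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(3, activation="softmax"),  # COVID-19 / normal / pneumonia
])
model.compile(optimizer=tf.keras.optimizers.Nadam(learning_rate=1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
```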

29 pages, 21299 KiB  
Article
Sensing and Detection of Traffic Signs Using CNNs: An Assessment on Their Performance
by Lorenzo Canese, Gian Carlo Cardarilli, Luca Di Nunzio, Rocco Fazzolari, Hamed Famil Ghadakchi, Marco Re and Sergio Spanò
Sensors 2022, 22(22), 8830; https://doi.org/10.3390/s22228830 - 15 Nov 2022
Cited by 22 | Viewed by 3697
Abstract
Traffic sign detection systems constitute a key component in trending real-world applications such as autonomous driving and driver safety and assistance. In recent years, many learning systems have been used to help detect traffic signs more accurately, such as ResNet, Vgg, SqueezeNet, and DenseNet, but which of these systems performs best remains debatable. They must be examined carefully and under the same conditions: the same dataset structure, the same number of training epochs, the same implementation language, and the same way of invoking the training routine. Only under such equal conditions is a comparison between different learning systems valid. In this article, traffic sign detection was first performed with the AlexNet and XresNet 50 training methods, which had not been used for this task before, and then with the ResNet 18, 34, and 50, DenseNet 121, 169, and 201, Vgg 16_bn and Vgg19_bn, AlexNet, SqueezeNet1_0, and SqueezeNet1_1 training methods, all under completely identical conditions. The results were compared with each other, and the best methods for detecting traffic signs are identified. The experimental results showed that, considering training loss, validation loss, accuracy, error rate, and time, three CNN learning models (Vgg 16_bn, Vgg19_bn, and AlexNet) performed better for the intended purpose. As a result, these three types of learning models can be considered for further studies.
(This article belongs to the Topic Applied Computing and Machine Intelligence (ACMI))
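The "equal conditions" argument above amounts to a fixed harness: the same data pipeline, epochs, optimizer, and loss for every candidate architecture. A sketch of such a harness in Keras; the backbone list, class count, and epoch budget are illustrative assumptions:

```python
from tensorflow.keras import layers, models
from tensorflow.keras import applications as apps

CANDIDATES = {"ResNet50": apps.ResNet50,
              "DenseNet169": apps.DenseNet169,
              "VGG16": apps.VGG16}

def build(backbone_fn, n_classes=43):  # e.g., 43 sign classes (assumption)
    base = backbone_fn(weights="imagenet", include_top=False,
                       input_shape=(224, 224, 3))
    return models.Sequential([base,
                              layers.GlobalAveragePooling2D(),
                              layers.Dense(n_classes, activation="softmax")])

results = {}
for name, backbone in CANDIDATES.items():
    model = build(backbone)
    model.compile(optimizer="adam",  # same optimizer and loss for all candidates
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # hist = model.fit(train_ds, validation_data=val_ds, epochs=25)  # same epochs
    # results[name] = max(hist.history["val_accuracy"])
```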

16 pages, 2874 KiB  
Article
Image Classification of Wheat Rust Based on Ensemble Learning
by Qian Pan, Maofang Gao, Pingbo Wu, Jingwen Yan and Mohamed A. E. AbdelRahman
Sensors 2022, 22(16), 6047; https://doi.org/10.3390/s22166047 - 12 Aug 2022
Cited by 36 | Viewed by 3515
Abstract
Rust is a common disease in wheat that significantly impacts its growth and yield. Stem rust and leaf rust of wheat are difficult to distinguish, and manual detection is time-consuming. With the aim of improving this situation, this study proposes a method for identifying wheat rust based on ensemble learning (WR-EL). The WR-EL method extracts and integrates multiple convolutional neural network (CNN) models, namely VGG, ResNet 101, ResNet 152, DenseNet 169, and DenseNet 201, based on bagging, snapshot ensembling, and the stochastic gradient descent with warm restarts (SGDR) algorithm. The identification results of the WR-EL method were compared to those of the five individual CNN models; the results show that the identification accuracy increases by 32%, 19%, 15%, 11%, and 8%, respectively. Additionally, we proposed the SGDR-S algorithm, which improved the F1 scores of healthy wheat, stem rust wheat, and leaf rust wheat by 2%, 3%, and 2%, respectively, compared to the SGDR algorithm. This method can identify wheat rust disease more accurately and can support timely prevention and control, which can not only prevent economic losses caused by the disease but also improve the yield and quality of wheat.
(This article belongs to the Section Sensing and Imaging)
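Snapshot ensembling with SGDR saves a model checkpoint at the end of each cosine learning-rate cycle and ensembles the checkpoints at test time. A PyTorch sketch of that mechanism; the cycle length, learning rates, and three-class head are assumptions, and it uses the stock scheduler rather than the paper's modified SGDR-S:

```python
import torch
from torch import nn, optim
from torchvision import models

model = models.densenet169(weights="IMAGENET1K_V1")
model.classifier = nn.Linear(model.classifier.in_features, 3)  # healthy / stem rust / leaf rust

optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
scheduler = optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=10)

snapshots = []
for epoch in range(30):
    # ... one training epoch over the wheat images goes here ...
    scheduler.step()
    if (epoch + 1) % 10 == 0:  # end of a cosine cycle: keep a snapshot
        snapshots.append({k: v.clone() for k, v in model.state_dict().items()})
# At inference, average the softmax outputs of the snapshot models.
```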

31 pages, 10028 KiB  
Article
Ten Fast Transfer Learning Models for Carotid Ultrasound Plaque Tissue Characterization in Augmentation Framework Embedded with Heatmaps for Stroke Risk Stratification
by Skandha S. Sanagala, Andrew Nicolaides, Suneet K. Gupta, Vijaya K. Koppula, Luca Saba, Sushant Agarwal, Amer M. Johri, Manudeep S. Kalra and Jasjit S. Suri
Diagnostics 2021, 11(11), 2109; https://doi.org/10.3390/diagnostics11112109 - 15 Nov 2021
Cited by 50 | Viewed by 3868
Abstract
Background and Purpose: Only 1–2% of the internal carotid artery asymptomatic plaques are unstable as a result of >80% stenosis. Thus, unnecessary efforts can be saved if these plaques can be characterized and classified into symptomatic and asymptomatic using non-invasive B-mode ultrasound. Earlier plaque tissue characterization (PTC) methods were machine learning (ML)-based and used hand-crafted features that yielded lower accuracy and reliability. The proposed study shows the role of transfer learning (TL)-based deep learning models for PTC. Methods: As pretrained weights were used in the supercomputer framework, we hypothesize that transfer learning (TL) provides improved performance compared with deep learning. We applied 11 kinds of artificial intelligence (AI) models; 10 of them were augmented and optimized using TL approaches—a class of Atheromatic™ 2.0 TL (AtheroPoint™, Roseville, CA, USA) that consisted of (i–ii) Visual Geometric Group-16, 19 (VGG16, 19); (iii) Inception V3 (IV3); (iv–v) DenseNet121, 169; (vi) XceptionNet; (vii) ResNet50; (viii) MobileNet; (ix) AlexNet; (x) SqueezeNet; and one was DL-based, (xi) SuriNet, derived from UNet. We benchmarked the 11 AI models against our earlier deep convolutional neural network (DCNN) model. Results: The best performing TL model was MobileNet, with accuracy and area-under-the-curve (AUC) pairs of 96.10 ± 3% and 0.961 (p < 0.0001), respectively. In DL, DCNN was comparable to SuriNet, with accuracies of 95.66% and 92.7 ± 5.66%, and AUCs of 0.956 (p < 0.0001) and 0.927 (p < 0.0001), respectively. We validated the performance of the AI architectures with established biomarkers such as greyscale median (GSM), fractal dimension (FD), higher-order spectra (HOS), and visual heatmaps. We benchmarked against the previously developed Atheromatic™ 1.0 ML system and showed an improvement of 12.9%. Conclusions: TL is a powerful AI tool for PTC into symptomatic and asymptomatic plaques.
(This article belongs to the Special Issue Advances in Carotid Artery Imaging)
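For the accuracy and AUC pairs reported above, the computation reduces to standard binary metrics over held-out predictions. A small sketch with scikit-learn; the labels and scores are synthetic placeholders:

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])   # 0 = asymptomatic, 1 = symptomatic
y_score = np.array([0.1, 0.4, 0.8, 0.7, 0.9, 0.2, 0.6, 0.3])  # model probabilities

print("accuracy:", accuracy_score(y_true, (y_score >= 0.5).astype(int)))
print("AUC:", roc_auc_score(y_true, y_score))
```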

15 pages, 7031 KiB  
Article
Evaluation of Deep Learning for Automatic Multi-View Face Detection in Cattle
by Beibei Xu, Wensheng Wang, Leifeng Guo, Guipeng Chen, Yaowu Wang, Wenju Zhang and Yongfeng Li
Agriculture 2021, 11(11), 1062; https://doi.org/10.3390/agriculture11111062 - 28 Oct 2021
Cited by 45 | Viewed by 5827
Abstract
Individual identification plays an important part in disease prevention and control, traceability of meat products, and the reduction of false agricultural insurance claims. Automatic and accurate detection of cattle faces is a prerequisite for individual identification and facial expression recognition based on image analysis technology. This paper evaluated the ability of the cutting-edge object detection algorithm RetinaNet to perform multi-view cattle face detection in housing farms with fluctuating illumination, overlapping, and occlusion. Seven different pretrained CNN models (ResNet 50, ResNet 101, ResNet 152, VGG 16, VGG 19, DenseNet 121, and DenseNet 169) were fine-tuned by transfer learning and retrained on the dataset in this paper. Experimental results showed that RetinaNet incorporating ResNet 50 was superior in accuracy and speed, yielding an average precision score of 99.8% and an average processing time of 0.0438 s per image. Compared with typical competing algorithms, the proposed method was preferable for cattle face detection, especially in particularly challenging scenarios. This research work demonstrated the potential of artificial intelligence for incorporating computer vision systems into individual identification and other animal welfare improvements.
(This article belongs to the Special Issue Digital Innovations in Agriculture)
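A hedged sketch of the winning configuration, RetinaNet with a pretrained ResNet 50 backbone, using torchvision's stock implementation; the image size and single foreground class are assumptions, and the training loop is omitted:

```python
import torch
from torchvision.models import ResNet50_Weights
from torchvision.models.detection import retinanet_resnet50_fpn

model = retinanet_resnet50_fpn(
    weights=None,                                     # detection head trained from scratch
    weights_backbone=ResNet50_Weights.IMAGENET1K_V1,  # transfer-learned backbone
    num_classes=2,                                    # cattle face + background slot
)
model.eval()
with torch.no_grad():
    preds = model([torch.rand(3, 512, 512)])  # one dummy image
print(preds[0]["boxes"].shape, preds[0]["scores"].shape)
```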

15 pages, 1798 KiB  
Article
On the Effect of Training Convolution Neural Network for Millimeter-Wave Radar-Based Hand Gesture Recognition
by Kang Zhang, Shengchang Lan and Guiyuan Zhang
Sensors 2021, 21(1), 259; https://doi.org/10.3390/s21010259 - 2 Jan 2021
Cited by 12 | Viewed by 3719
Abstract
The purpose of this paper was to investigate the effect of training state-of-the-art convolutional neural networks (CNNs) for millimeter-wave radar-based hand gesture recognition (MR-HGR). Focusing on the small training dataset problem in MR-HGR, this paper first proposed transferring the knowledge of CNN models from computer vision to MR-HGR by fine-tuning the models with radar data samples. Meanwhile, to address the different data modality in MR-HGR, a parameterized representation, the temporal space-velocity (TSV) spectrogram, was proposed as an integrated data modality of the time-evolving hand gesture features in the radar echo signals. TSV spectrograms representing six common gestures in human–computer interaction (HCI) from nine volunteers were used as the data samples in the experiment. The evaluated models included ResNet with 50, 101, and 152 layers, DenseNet with 121, 161, and 169 layers, as well as the lightweight MobileNet V2 and ShuffleNet V2, most of them proposed in recent publications. In the experiment, not only self-testing (ST) but also the more persuasive cross-testing (CT) was implemented to evaluate whether the fine-tuned models generalize to the radar data samples. The CT results show that the best fine-tuned models can reach an average accuracy higher than 93%, with a comparable ST average accuracy of almost 100%. Moreover, in order to alleviate the problem caused by personal gesture habits, an auxiliary test was performed by augmenting the training set with four shots of the gestures with the heaviest misclassifications. This enrichment test is similar to the scenario in which a tablet adapts to a new user. The results for two different volunteers in the enrichment test show that the average accuracy for the enriched gestures can be improved from 55.59% and 65.58% to 90.66% and 95.95%, respectively. Compared with baseline work in MR-HGR, the investigation in this paper can be beneficial in promoting MR-HGR in future industrial applications and consumer electronics design.
(This article belongs to the Special Issue Sensors for Posture and Human Motion Recognition)
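A hedged sketch of the fine-tuning step the abstract describes: a pretrained CNN adapted to six gesture classes, with TSV spectrograms assumed to be rendered as 3-channel images; the layer-wise learning rates and the dummy batch are placeholders:

```python
import torch
from torch import nn, optim
from torchvision import models

model = models.densenet121(weights="IMAGENET1K_V1")
model.classifier = nn.Linear(model.classifier.in_features, 6)  # six HCI gestures

# Fine-tune everything, with a larger learning rate on the fresh head.
optimizer = optim.SGD([
    {"params": model.features.parameters(), "lr": 1e-4},
    {"params": model.classifier.parameters(), "lr": 1e-3},
], momentum=0.9)
criterion = nn.CrossEntropyLoss()

dummy = torch.rand(2, 3, 224, 224)  # stand-in for a spectrogram batch
loss = criterion(model(dummy), torch.tensor([0, 1]))
loss.backward()
optimizer.step()
```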
