Search Results (5,562)

Search Parameters:
Keywords = deep convolutional neural networks (CNNs)

19 pages, 7531 KiB  
Article
Evaluating the Impact of 2D MRI Slice Orientation and Location on Alzheimer’s Disease Diagnosis Using a Lightweight Convolutional Neural Network
by Nadia A. Mohsin and Mohammed H. Abdulameer
J. Imaging 2025, 11(8), 260; https://doi.org/10.3390/jimaging11080260 - 5 Aug 2025
Abstract
Accurate detection of Alzheimer’s disease (AD) is critical yet challenging for early medical intervention. Deep learning methods, especially convolutional neural networks (CNNs), have shown promising potential for improving diagnostic accuracy using magnetic resonance imaging (MRI). This study aims to identify the most informative combination of MRI slice orientation and anatomical location for AD classification. We propose an automated framework that first selects the most relevant slices using a feature entropy-based method applied to activation maps from a pretrained CNN model. For classification, we employ a lightweight CNN architecture based on depthwise separable convolutions to efficiently analyze the selected 2D MRI slices extracted from preprocessed 3D brain scans. To further interpret model behavior, an attention mechanism is integrated to analyze which feature level contributes the most to the classification process. The model is evaluated on three binary tasks: AD vs. mild cognitive impairment (MCI), AD vs. cognitively normal (CN), and MCI vs. CN. The experimental results show the highest accuracy (97.4%) in distinguishing AD from CN when utilizing the selected slices from the ninth axial segment, followed by the tenth segment of coronal and sagittal orientations. These findings demonstrate the significance of slice location and orientation in MRI-based AD diagnosis and highlight the potential of lightweight CNNs for clinical use. Full article
(This article belongs to the Section AI in Imaging)
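The entropy-based slice selection described in the abstract can be sketched in a few lines. This is a hypothetical simplification (the function names and ranking rule are assumptions, not the paper's code): each candidate slice is scored by the Shannon entropy of its non-negative activation map, and the most informative slices rank highest.

```python
import math

def shannon_entropy(activations):
    """Shannon entropy (bits) of a flattened, non-negative activation map."""
    total = sum(activations)
    if total == 0:
        return 0.0
    probs = [a / total for a in activations if a > 0]
    return -sum(p * math.log2(p) for p in probs)

def rank_slices(slice_maps):
    """Return slice indices sorted by descending activation entropy."""
    scores = [(i, shannon_entropy(m)) for i, m in enumerate(slice_maps)]
    return [i for i, _ in sorted(scores, key=lambda t: -t[1])]
```

A uniform activation map attains the maximal entropy log2(N) bits, while a map dominated by a single peak scores near zero, so the ranking prefers slices whose activations are spread over many informative regions.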

25 pages, 3310 KiB  
Article
Real-Time Signal Quality Assessment and Power Adaptation of FSO Links Operating Under All-Weather Conditions Using Deep Learning Exploiting Eye Diagrams
by Somia A. Abd El-Mottaleb and Ahmad Atieh
Photonics 2025, 12(8), 789; https://doi.org/10.3390/photonics12080789 - 4 Aug 2025
Abstract
This paper proposes an intelligent power adaptation framework for Free-Space Optics (FSO) communication systems operating under different weather conditions, exploiting deep learning (DL) analysis of received eye diagram images. The system incorporates two Convolutional Neural Network (CNN) architectures, LeNet and Wide Residual Network (Wide ResNet), to perform regression tasks that predict received signal quality metrics such as the Quality Factor (Q-factor) and Bit Error Rate (BER) from the received eye diagram. These models are evaluated using Mean Squared Error (MSE) and the coefficient of determination (R² score) to assess prediction accuracy. Additionally, a custom CNN-based classifier is trained to determine whether the BER reading from the eye diagram exceeds a critical threshold of 10⁻⁴; this classifier achieves an overall accuracy of 99%, correctly detecting 194/195 “acceptable” and 4/5 “unacceptable” instances. Based on the predicted signal quality, the framework activates a dual-amplifier configuration comprising a pre-channel amplifier with a maximum gain of 25 dB and a post-channel amplifier with a maximum gain of 10 dB. The total gain of the amplifiers is adjusted to support the operation of the FSO system under all-weather conditions. The FSO system uses a 15 dBm laser source at 1550 nm. The DL models are tested on both internal and external datasets to validate their generalization capability. The results show that the regression models achieve strong predictive performance, and the classifier reliably detects degraded signal conditions, enabling real-time gain control of the amplifiers to maintain the quality of transmission. The proposed solution supports robust FSO communication under challenging atmospheric conditions, including dry snow, making it suitable for deployment in regions like Northern Europe, Canada, and Northern Japan. Full article
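The Q-factor and BER predicted by the regression models are linked by the standard Gaussian-noise approximation BER = ½·erfc(Q/√2). A small stdlib sketch (helper names are illustrative, not the paper's code) shows how a BER-threshold classifier target can be derived from a Q-factor reading:

```python
import math

def q_to_ber(q):
    """Gaussian-noise approximation linking Q-factor to BER: BER = 0.5*erfc(Q/sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2))

def link_acceptable(q, ber_threshold=1e-4):
    """Label the link acceptable when the predicted BER is at or below the threshold."""
    return q_to_ber(q) <= ber_threshold
```

Under this approximation a Q-factor of about 3.72 corresponds to a BER near 10⁻⁴, which is why a regression on the Q-factor and a classification against a BER threshold carry essentially the same information.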

20 pages, 1688 KiB  
Article
Spectrum Sensing for Noncircular Signals Using Augmented Covariance-Matrix-Aware Deep Convolutional Neural Network
by Songlin Chen, Zhenqing He, Wenze Song and Guohao Sun
Sensors 2025, 25(15), 4791; https://doi.org/10.3390/s25154791 - 4 Aug 2025
Abstract
This work investigates spectrum sensing in cognitive radio networks, where multi-antenna secondary users aim to detect the spectral occupancy of noncircular signals transmitted by primary users. Specifically, we propose a deep-learning-based spectrum sensing approach using an augmented covariance-matrix-aware convolutional neural network (CNN). The core innovation of our approach lies in employing an augmented sample covariance matrix, which integrates both a standard covariance matrix and complementary covariance matrix, thereby fully exploiting the statistical properties of noncircular signals. By feeding augmented sample covariance matrices into the designed CNN architecture, the proposed approach effectively learns discriminative patterns from the underlying data structure, without stringent model constraints. Meanwhile, our approach eliminates the need for restrictive model assumptions and significantly enhances the detection performance by fully exploiting noncircular signal characteristics. Various experimental results demonstrate the significant performance improvement and generalization capability of the proposed approach compared to existing benchmark methods. Full article
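The augmented sample covariance described above stacks each snapshot with its conjugate, so that both the standard covariance E[xxᴴ] and the complementary covariance E[xxᵀ] of a noncircular signal appear in one matrix. A minimal pure-Python sketch (an illustration of the construction, not the authors' implementation):

```python
def augmented_covariance(snapshots):
    """Sample augmented covariance of complex snapshots (list of length-M lists).

    Stacks x with conj(x) into x_aug = [x; conj(x)] and averages
    x_aug * x_aug^H over snapshots. The off-diagonal M x M blocks hold the
    complementary covariance, which is nonzero only for noncircular signals.
    """
    n = len(snapshots)
    m = len(snapshots[0])
    dim = 2 * m
    r = [[0j] * dim for _ in range(dim)]
    for x in snapshots:
        x_aug = list(x) + [v.conjugate() for v in x]
        for i in range(dim):
            for j in range(dim):
                r[i][j] += x_aug[i] * x_aug[j].conjugate() / n
    return r
```

For a real-valued (BPSK-like, hence noncircular) signal the complementary block is nonzero, whereas for a circular signal it averages toward zero; this is exactly the extra structure the CNN input exploits.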

33 pages, 5056 KiB  
Article
Interpretable Deep Learning Models for Arrhythmia Classification Based on ECG Signals Using PTB-XL Dataset
by Ahmed E. Mansour Atwa, El-Sayed Atlam, Ali Ahmed, Mohamed Ahmed Atwa, Elsaid Md. Abdelrahim and Ali I. Siam
Diagnostics 2025, 15(15), 1950; https://doi.org/10.3390/diagnostics15151950 - 4 Aug 2025
Abstract
Background/Objectives: Automatic classification of ECG signal arrhythmias plays a vital role in early cardiovascular diagnostics by enabling prompt detection of life-threatening conditions. Manual ECG interpretation is labor-intensive and susceptible to errors, highlighting the demand for automated, scalable approaches. Deep learning (DL) methods are effective in ECG analysis due to their ability to learn complex patterns from raw signals. Methods: This study introduces two models: a custom convolutional neural network (CNN) with a dual-branch architecture for processing ECG signals and demographic data (e.g., age, gender), and a modified VGG16 model adapted for multi-branch input. Using the PTB-XL dataset, a widely adopted large-scale ECG database with over 20,000 recordings, the models were evaluated on binary, multiclass, and subclass classification tasks across 2, 5, 10, and 15 disease categories. Advanced preprocessing techniques, combined with demographic features, significantly enhanced performance. Results: The CNN model achieved up to 97.78% accuracy in binary classification and 79.7% in multiclass tasks, outperforming the VGG16 model (97.38% and 76.53%, respectively) and state-of-the-art benchmarks like CNN-LSTM and CNN entropy features. This study also emphasizes interpretability, providing lead-specific insights into ECG contributions to promote clinical transparency. Conclusions: These results confirm the models’ potential for accurate, explainable arrhythmia detection and their applicability in real-world healthcare diagnostics. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

44 pages, 4499 KiB  
Article
A Hybrid Deep Reinforcement Learning Architecture for Optimizing Concrete Mix Design Through Precision Strength Prediction
by Ali Mirzaei and Amir Aghsami
Math. Comput. Appl. 2025, 30(4), 83; https://doi.org/10.3390/mca30040083 - 3 Aug 2025
Abstract
Concrete mix design plays a pivotal role in ensuring the mechanical performance, durability, and sustainability of construction projects. However, the nonlinear interactions among the mix components challenge traditional approaches in predicting compressive strength and optimizing proportions. This study presents a two-stage hybrid framework that integrates deep learning with reinforcement learning to overcome these limitations. First, a Convolutional Neural Network–Long Short-Term Memory (CNN–LSTM) model was developed to capture spatial–temporal patterns from a dataset of 1030 historical concrete samples. The extracted features were enhanced using an eXtreme Gradient Boosting (XGBoost) meta-model to improve generalizability and noise resistance. Then, a Dueling Double Deep Q-Network (Dueling DDQN) agent was used to iteratively identify optimal mix ratios that maximize the predicted compressive strength. The proposed framework outperformed ten benchmark models, achieving an MAE of 2.97, RMSE of 4.08, and R2 of 0.94. Feature attribution methods—including SHapley Additive exPlanations (SHAP), Elasticity-Based Feature Importance (EFI), and Permutation Feature Importance (PFI)—highlighted the dominant influence of cement content and curing age, as well as revealing non-intuitive effects such as the compensatory role of superplasticizers in low-water mixtures. These findings demonstrate the potential of the proposed approach to support intelligent concrete mix design and real-time optimization in smart construction environments. Full article
(This article belongs to the Section Engineering)
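The Dueling DDQN agent mentioned above decomposes each action value into a state value V(s) and advantages A(s,a). A sketch of the standard aggregation used in dueling heads (illustrative only, not the authors' network):

```python
def dueling_q_values(state_value, advantages):
    """Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a).

    Subtracting the mean advantage keeps the V/A decomposition identifiable;
    without it, any constant could be shifted between V and A.
    """
    mean_adv = sum(advantages) / len(advantages)
    return [state_value + a - mean_adv for a in advantages]

def greedy_action(q_values):
    """Pick the mix-adjustment action with the highest Q-value."""
    return max(range(len(q_values)), key=lambda i: q_values[i])
```

Note that the mean of the resulting Q-values equals V(s), so the agent's value estimate and its action preferences can be trained and inspected separately.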
20 pages, 3468 KiB  
Article
Fine-Tuning Models for Histopathological Classification of Colorectal Cancer
by Houda Saif ALGhafri and Chia S. Lim
Diagnostics 2025, 15(15), 1947; https://doi.org/10.3390/diagnostics15151947 - 3 Aug 2025
Abstract
Background/Objectives: This study aims to design and evaluate transfer learning strategies that fine-tune multiple pre-trained convolutional neural network architectures based on their characteristics to improve the accuracy and generalizability of colorectal cancer histopathological image classification. Methods: The application of transfer learning with pre-trained models on specialized and multiple datasets is proposed, where the proposed models, CRCHistoDense, CRCHistoIncep, and CRCHistoXcep, are algorithmically fine-tuned at varying depths to improve the performance of colorectal cancer classification. These models were applied to datasets of 10,613 images from public and private repositories, external sources, and unseen data. To validate the models’ decision-making and improve transparency, we integrated Grad-CAM to provide visual explanations of the regions that influence classification decisions. Results and Conclusions: On average across all datasets, CRCHistoDense, CRCHistoIncep, and CRCHistoXcep achieved test accuracies of 99.34%, 99.48%, and 99.45%, respectively, highlighting the effectiveness of fine-tuning in improving classification performance and generalization. Statistical methods, including paired t-tests, ANOVA, and the Kruskal–Wallis test, confirmed significant improvements in the proposed methods’ performance, with p-values below 0.05. These findings demonstrate that fine-tuning based on the characteristics of each CNN architecture enhances colorectal cancer classification in histopathology, thereby improving the diagnostic potential of deep learning models. Full article

24 pages, 997 KiB  
Article
A Spatiotemporal Deep Learning Framework for Joint Load and Renewable Energy Forecasting in Stability-Constrained Power Systems
by Min Cheng, Jiawei Yu, Mingkang Wu, Yihua Zhu, Yayao Zhang and Yuanfu Zhu
Information 2025, 16(8), 662; https://doi.org/10.3390/info16080662 - 3 Aug 2025
Abstract
With the increasing uncertainty introduced by the large-scale integration of renewable energy sources, traditional power dispatching methods face significant challenges, including severe frequency fluctuations, substantial forecasting deviations, and the difficulty of balancing economic efficiency with system stability. To address these issues, a deep learning-based dispatching framework is proposed, which integrates spatiotemporal feature extraction with a stability-aware mechanism. A joint forecasting model is constructed using Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) to handle multi-source inputs, while a reinforcement learning-based stability-aware scheduler is developed to manage dynamic system responses. In addition, an uncertainty modeling mechanism combining Dropout and Bayesian networks is incorporated to enhance dispatch robustness. Experiments conducted on real-world power grid and renewable generation datasets demonstrate that the proposed forecasting module achieves approximately a 2.1% improvement in accuracy compared with Autoformer and reduces Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) by 18.1% and 14.1%, respectively, compared with traditional LSTM models. The achieved Mean Absolute Percentage Error (MAPE) of 5.82% outperforms all baseline models. In terms of scheduling performance, the proposed method reduces the total operating cost by 5.8% relative to Autoformer, decreases the frequency deviation from 0.158 Hz to 0.129 Hz, and increases the Critical Clearing Time (CCT) to 2.74 s, significantly enhancing dynamic system stability. Ablation studies reveal that removing the uncertainty modeling module increases the frequency deviation to 0.153 Hz and raises operational costs by approximately 6.9%, confirming the critical role of this module in maintaining robustness. Furthermore, under diverse load profiles and meteorological disturbances, the proposed method maintains stable forecasting accuracy and scheduling policy outputs, demonstrating strong generalization capabilities. Overall, the proposed approach achieves a well-balanced performance in terms of forecasting precision, system stability, and economic efficiency in power grids with high renewable energy penetration, indicating substantial potential for practical deployment and further research. Full article
(This article belongs to the Special Issue Real-World Applications of Machine Learning Techniques)
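The MAE, RMSE, and MAPE figures quoted in the abstract follow the usual definitions; for reference, a generic stdlib sketch (not the authors' evaluation code):

```python
import math

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean square error; penalizes large deviations more than MAE."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent; assumes no zero targets."""
    return 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)
```

Because MAPE normalizes by the target magnitude, it is the natural headline metric for load series whose scale varies through the day, while MAE and RMSE report absolute deviations in the load's own units.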

30 pages, 1142 KiB  
Review
Beyond the Backbone: A Quantitative Review of Deep-Learning Architectures for Tropical Cyclone Track Forecasting
by He Huang, Difei Deng, Liang Hu, Yawen Chen and Nan Sun
Remote Sens. 2025, 17(15), 2675; https://doi.org/10.3390/rs17152675 - 2 Aug 2025
Abstract
Accurate forecasting of tropical cyclone (TC) tracks is critical for disaster preparedness and risk mitigation. While traditional numerical weather prediction (NWP) systems have long served as the backbone of operational forecasting, they face limitations in computational cost and sensitivity to initial conditions. In recent years, deep learning (DL) has emerged as a promising alternative, offering data-driven modeling capabilities for capturing nonlinear spatiotemporal patterns. This paper presents a comprehensive review of DL-based approaches for TC track forecasting. We categorize all DL-based TC tracking models according to the architecture, including recurrent neural networks (RNNs), convolutional neural networks (CNNs), Transformers, graph neural networks (GNNs), generative models, and Fourier-based operators. To enable rigorous performance comparison, we introduce a Unified Geodesic Distance Error (UGDE) metric that standardizes evaluation across diverse studies and lead times. Based on this metric, we conduct a critical comparison of state-of-the-art models and identify key insights into their relative strengths, limitations, and suitable application scenarios. Building on this framework, we conduct a critical cross-model analysis that reveals key trends, performance disparities, and architectural tradeoffs. Our analysis also highlights several persistent challenges, such as long-term forecast degradation, limited physical integration, and generalization to extreme events, pointing toward future directions for developing more robust and operationally viable DL models for TC track forecasting. To support reproducibility and facilitate standardized evaluation, we release an open-source UGDE conversion tool on GitHub. Full article
(This article belongs to the Section AI Remote Sensing)
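A UGDE-style metric reduces each forecast-observation pair of cyclone centers to a great-circle distance. The paper's actual conversion tool is released on GitHub; the sketch below is only the underlying haversine geodesic distance, with an assumed mean Earth radius:

```python
import math

def geodesic_error_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle (haversine) distance between forecast and observed TC centers, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))
```

Reporting track error this way, rather than in raw latitude/longitude residuals, is what makes results from studies with different grids and projections directly comparable.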

18 pages, 7062 KiB  
Article
Multimodal Feature Inputs Enable Improved Automated Textile Identification
by Magken George Enow Gnoupa, Andy T. Augousti, Olga Duran, Olena Lanets and Solomiia Liaskovska
Textiles 2025, 5(3), 31; https://doi.org/10.3390/textiles5030031 - 2 Aug 2025
Abstract
This study presents an advanced framework for fabric texture classification by leveraging macro- and micro-texture extraction techniques integrated with deep learning architectures. Co-occurrence histograms, local binary patterns (LBPs), and albedo-dependent feature maps were employed to comprehensively capture the surface properties of fabrics. A late fusion approach was applied using four state-of-the-art convolutional neural networks (CNNs): InceptionV3, ResNet50_V2, DenseNet, and VGG-19. Excellent results were obtained, with the ResNet50_V2 achieving a precision of 0.929, recall of 0.914, and F1 score of 0.913. Notably, the integration of multimodal inputs allowed the models to effectively distinguish challenging fabric types, such as cotton–polyester and satin–silk pairs, which exhibit overlapping texture characteristics. This research not only enhances the accuracy of textile classification but also provides a robust methodology for material analysis, with significant implications for industrial applications in fashion, quality control, and robotics. Full article
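The local binary patterns (LBPs) used above as micro-texture features threshold each pixel's 3×3 neighbourhood against its centre and pack the results into an 8-bit code. A minimal sketch (the clockwise bit ordering is a convention chosen here, not necessarily the one the authors used):

```python
def lbp_code(patch):
    """8-bit LBP code for a 3x3 patch: threshold neighbours against the centre.

    Neighbours are read clockwise from the top-left; bit i is set when that
    neighbour is >= the centre value.
    """
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2],
                  patch[1][2], patch[2][2], patch[2][1],
                  patch[2][0], patch[1][0]]
    return sum((1 << i) for i, v in enumerate(neighbours) if v >= c)
```

A histogram of these codes over an image is invariant to monotonic lighting changes, which is why LBPs complement the albedo-dependent maps when fabrics differ mainly in weave rather than brightness.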
14 pages, 841 KiB  
Article
Enhanced Deep Learning for Robust Stress Classification in Sows from Facial Images
by Syed U. Yunas, Ajmal Shahbaz, Emma M. Baxter, Mark F. Hansen, Melvyn L. Smith and Lyndon N. Smith
Agriculture 2025, 15(15), 1675; https://doi.org/10.3390/agriculture15151675 - 2 Aug 2025
Abstract
Stress in pigs poses significant challenges to animal welfare and productivity in modern pig farming, contributing to increased antimicrobial use and the rise of antimicrobial resistance (AMR). This study involves stress classification in pregnant sows by exploring five deep learning models: ConvNeXt, EfficientNet_V2, MobileNet_V3, RegNet, and Vision Transformer (ViT). These models are used for stress detection from facial images, leveraging an expanded dataset. A facial image dataset of sows was collected at Scotland’s Rural College (SRUC) and the images were categorized into primiparous Low-Stressed (LS) and High-Stress (HS) groups based on expert behavioural assessments and cortisol level analysis. The selected deep learning models were then trained on this enriched dataset and their performance was evaluated using cross-validation on unseen data. The Vision Transformer (ViT) model outperformed the others across the dataset of annotated facial images, achieving an average accuracy of 0.75, an F1 score of 0.78 for high-stress detection, and consistent batch-level performance (up to 0.88 F1 score). These findings highlight the efficacy of transformer-based models for automated stress detection in sows, supporting early intervention strategies to enhance welfare, optimize productivity, and mitigate AMR risks in livestock production. Full article

27 pages, 1326 KiB  
Systematic Review
Application of Artificial Intelligence in Pancreatic Cyst Management: A Systematic Review
by Donghyun Lee, Fadel Jesry, John J. Maliekkal, Lewis Goulder, Benjamin Huntly, Andrew M. Smith and Yazan S. Khaled
Cancers 2025, 17(15), 2558; https://doi.org/10.3390/cancers17152558 - 2 Aug 2025
Abstract
Background: Pancreatic cystic lesions (PCLs), including intraductal papillary mucinous neoplasms (IPMNs) and mucinous cystic neoplasms (MCNs), pose a diagnostic challenge due to their variable malignant potential. Current guidelines, such as Fukuoka and American Gastroenterological Association (AGA), have moderate predictive accuracy and may lead to overtreatment or missed malignancies. Artificial intelligence (AI), incorporating machine learning (ML) and deep learning (DL), offers the potential to improve risk stratification, diagnosis, and management of PCLs by integrating clinical, radiological, and molecular data. This is the first systematic review to evaluate the application, performance, and clinical utility of AI models in the diagnosis, classification, prognosis, and management of pancreatic cysts. Methods: A systematic review was conducted in accordance with PRISMA guidelines and registered on PROSPERO (CRD420251008593). Databases searched included PubMed, EMBASE, Scopus, and Cochrane Library up to March 2025. The inclusion criteria encompassed original studies employing AI, ML, or DL in human subjects with pancreatic cysts, evaluating diagnostic, classification, or prognostic outcomes. Data were extracted on the study design, imaging modality, model type, sample size, performance metrics (accuracy, sensitivity, specificity, and area under the curve (AUC)), and validation methods. Study quality and bias were assessed using the PROBAST and adherence to TRIPOD reporting guidelines. Results: From 847 records, 31 studies met the inclusion criteria. Most were retrospective observational (n = 27, 87%) and focused on preoperative diagnostic applications (n = 30, 97%), with only one addressing prognosis. Imaging modalities included Computed Tomography (CT) (48%), endoscopic ultrasound (EUS) (26%), and Magnetic Resonance Imaging (MRI) (9.7%). Neural networks, particularly convolutional neural networks (CNNs), were the most common AI models (n = 16), followed by logistic regression (n = 4) and support vector machines (n = 3). The median reported AUC across studies was 0.912, with 55% of models achieving AUC ≥ 0.80. The models outperformed clinicians or existing guidelines in 11 studies. IPMN stratification and subtype classification were common focuses, with CNN-based EUS models achieving accuracies of up to 99.6%. Only 10 studies (32%) performed external validation. The risk of bias was high in 93.5% of studies, and TRIPOD adherence averaged 48%. Conclusions: AI demonstrates strong potential in improving the diagnosis and risk stratification of pancreatic cysts, with several models outperforming current clinical guidelines and human readers. However, widespread clinical adoption is hindered by high risk of bias, lack of external validation, and limited interpretability of complex models. Future work should prioritise multicentre prospective studies, standardised model reporting, and development of interpretable, externally validated tools to support clinical integration. Full article
(This article belongs to the Section Methods and Technologies Development)

22 pages, 4300 KiB  
Article
Optimised DNN-Based Agricultural Land Mapping Using Sentinel-2 and Landsat-8 with Google Earth Engine
by Nisha Sharma, Sartajvir Singh and Kawaljit Kaur
Land 2025, 14(8), 1578; https://doi.org/10.3390/land14081578 - 1 Aug 2025
Abstract
Agriculture is the backbone of Punjab’s economy, and with much of India’s population dependent on agriculture, the requirement for accurate and timely monitoring of land has become even more crucial. Blending remote sensing with state-of-the-art machine learning algorithms enables the detailed classification of agricultural lands through thematic mapping, which is critical for crop monitoring, land management, and sustainable development. Here, a Hyper-tuned Deep Neural Network (Hy-DNN) model was created and used for land use and land cover (LULC) classification into four classes: agricultural land, vegetation, water bodies, and built-up areas. The technique made use of multispectral data from Sentinel-2 and Landsat-8, processed on the Google Earth Engine (GEE) platform. To measure classification performance, Hy-DNN was contrasted with traditional classifiers—Convolutional Neural Network (CNN), Random Forest (RF), Classification and Regression Tree (CART), Minimum Distance Classifier (MDC), and Naive Bayes (NB)—using performance metrics including producer’s and consumer’s accuracy, Kappa coefficient, and overall accuracy. Hy-DNN performed the best, with overall accuracy being 97.60% using Sentinel-2 and 91.10% using Landsat-8, outperforming all base models. These results further highlight the superiority of the optimised Hy-DNN in agricultural land mapping and its potential use in crop health monitoring, disease diagnosis, and strategic agricultural planning. Full article

17 pages, 1530 KiB  
Article
Enhanced Respiratory Sound Classification Using Deep Learning and Multi-Channel Auscultation
by Yeonkyeong Kim, Kyu Bom Kim, Ah Young Leem, Kyuseok Kim and Su Hwan Lee
J. Clin. Med. 2025, 14(15), 5437; https://doi.org/10.3390/jcm14155437 - 1 Aug 2025
Abstract
Background/Objectives: Identifying and classifying abnormal lung sounds is essential for diagnosing patients with respiratory disorders. In particular, the simultaneous recording of auscultation signals from multiple clinically relevant positions offers greater diagnostic potential compared to traditional single-channel measurements. This study aims to improve the accuracy of respiratory sound classification by leveraging multichannel signals and capturing positional characteristics from multiple sites in the same patient. Methods: We evaluated the performance of respiratory sound classification using multichannel lung sound data with a deep learning model that combines a convolutional neural network (CNN) and long short-term memory (LSTM), based on mel-frequency cepstral coefficients (MFCCs). We analyzed the impact of the number and placement of channels on classification performance. Results: The results demonstrated that using four-channel recordings improved accuracy, sensitivity, specificity, precision, and F1-score by approximately 1.11, 1.15, 1.05, 1.08, and 1.13 times, respectively, compared to using three, two, or single-channel recordings. Conclusions: This study confirms that multichannel data capture a richer set of features corresponding to various respiratory sound characteristics, leading to significantly improved classification performance. The proposed method holds promise for enhancing sound classification accuracy not only in clinical applications but also in broader domains such as speech and audio processing. Full article
(This article belongs to the Section Respiratory Medicine)
19 pages, 1889 KiB  
Article
Infrared Thermographic Signal Analysis of Bioactive Edible Oils Using CNNs for Quality Assessment
by Danilo Pratticò and Filippo Laganà
Signals 2025, 6(3), 38; https://doi.org/10.3390/signals6030038 - 1 Aug 2025
Abstract
Nutrition plays a fundamental role in promoting health and preventing chronic diseases, with bioactive food components offering therapeutic potential in biomedical applications. Among these, edible oils are recognised for their functional properties, which contribute to disease prevention and metabolic regulation. The proposed study aims to evaluate the quality of four bioactive oils (olive oil, sunflower oil, tomato seed oil, and pumpkin seed oil) by analysing their thermal behaviour through infrared (IR) imaging. A customised electronic system was designed to acquire thermographic signals under controlled temperature and humidity conditions and to extract thermal data. Analysis of the acquired thermal signals revealed characteristic heat absorption profiles used to infer differences in oil properties related to stability and degradation potential. A hybrid deep learning model that integrates Convolutional Neural Networks (CNNs) with Long Short-Term Memory (LSTM) units was used to classify and differentiate the oils based on stability, thermal reactivity, and potential health benefits. Signal analysis showed that the AI-based method improves both the accuracy (achieving an F1-score of 93.66%) and the repeatability of quality assessments, providing a non-invasive and intelligent framework for the validation and traceability of nutritional compounds. Full article
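To make the notion of thermal reactivity concrete, here is a minimal sketch (not the paper's pipeline) that fits an initial heating rate to temperature-versus-time curves such as those an IR camera might produce. The oil names come from the abstract, but the curves and rate constants are synthetic, invented purely for illustration.

```python
import numpy as np

# Illustrative, synthetic heating curves: temperature rises roughly
# linearly with a per-oil rate constant (hypothetical degC/s values).
t = np.linspace(0.0, 60.0, 61)               # seconds
rates = {"olive": 0.20, "sunflower": 0.35}   # invented for the sketch
features = {}
for oil, k in rates.items():
    temp = 25.0 + k * t + 0.01 * np.sin(t)   # synthetic IR-derived curve
    slope = np.polyfit(t, temp, 1)[0]        # least-squares heating rate
    features[oil] = slope

# A higher fitted slope indicates faster heat absorption, one of the
# cues a CNN-LSTM could learn from the raw thermographic sequence.
most_reactive = max(features, key=features.get)
print(most_reactive)  # -> sunflower
```

A deep model replaces this hand-crafted slope with features learned end-to-end from the full thermal image sequence, which is what makes the CNN-LSTM combination attractive here.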
26 pages, 1790 KiB  
Article
A Hybrid Deep Learning Model for Aromatic and Medicinal Plant Species Classification Using a Curated Leaf Image Dataset
by Shareena E. M., D. Abraham Chandy, Shemi P. M. and Alwin Poulose
AgriEngineering 2025, 7(8), 243; https://doi.org/10.3390/agriengineering7080243 - 1 Aug 2025
Abstract
In the era of smart agriculture, accurate identification of plant species is critical for effective crop management, biodiversity monitoring, and the sustainable use of medicinal resources. However, existing deep learning approaches often underperform when applied to fine-grained plant classification tasks due to the lack of domain-specific, high-quality datasets and the limited representational capacity of traditional architectures. This study addresses these challenges by introducing a novel, well-curated leaf image dataset consisting of 39 classes of medicinal and aromatic plants collected from the Aromatic and Medicinal Plant Research Station in Odakkali, Kerala, India. To overcome performance bottlenecks observed with a baseline Convolutional Neural Network (CNN) that achieved only 44.94% accuracy, we progressively enhanced model performance through a series of architectural innovations. These included the use of a pre-trained VGG16 network, data augmentation techniques, and fine-tuning of deeper convolutional layers, followed by the integration of Squeeze-and-Excitation (SE) attention blocks. Ultimately, we propose a hybrid deep learning architecture that combines VGG16 with Batch Normalization, Gated Recurrent Units (GRUs), Transformer modules, and Dilated Convolutions. This final model achieved a peak validation accuracy of 95.24%, significantly outperforming the baseline models: custom CNN (44.94%), VGG-19 (59.49%), VGG-16 before augmentation (71.52%), Xception (85.44%), Inception v3 (87.97%), VGG-16 after data augmentation (89.24%), VGG-16 after fine-tuning (90.51%), MobileNetV2 (93.67%), and VGG16 with SE block (94.94%). These results demonstrate superior capability in capturing both local textures and global morphological features. The proposed solution not only advances the state of the art in plant classification but also contributes a valuable dataset to the research community. Its real-world applicability spans field-based plant identification, biodiversity conservation, and precision agriculture, offering a scalable tool for automated plant recognition in complex ecological and agricultural environments. Full article
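The Squeeze-and-Excitation attention mentioned in this abstract can be sketched in a few lines. The didactic NumPy version below uses random weights where the paper's SE blocks would use trained parameters inside the VGG16 backbone; shapes and the reduction ratio are illustrative assumptions.

```python
import numpy as np

def se_block(feature_maps: np.ndarray, reduction: int = 4) -> np.ndarray:
    """Squeeze-and-Excitation channel gating, in plain NumPy.

    A sketch of the SE mechanism only: the two fully connected
    layers use random weights here, where a trained network would
    use learned parameters.
    """
    c, h, w = feature_maps.shape
    rng = np.random.default_rng(42)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1   # squeeze FC
    w2 = rng.standard_normal((c, c // reduction)) * 0.1   # excite FC

    s = feature_maps.mean(axis=(1, 2))             # squeeze: global avg pool -> (c,)
    z = np.maximum(w1 @ s, 0.0)                    # FC + ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ z)))         # FC + sigmoid -> per-channel gate
    return feature_maps * gate[:, None, None]      # reweight each channel

x = np.random.default_rng(0).standard_normal((8, 14, 14))
y = se_block(x)
assert y.shape == x.shape
```

Because each gate lies in (0, 1), the block can only attenuate channels, letting the network emphasise feature maps that matter for a given leaf image, which is the intuition behind the accuracy gain reported for the SE-augmented VGG16.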
(This article belongs to the Special Issue Implementation of Artificial Intelligence in Agriculture)