
Search Results (300)

Search Parameters:
Keywords = 3D deep convolutional neural network (3D DCNN)

17 pages, 2212 KB  
Article
A Lightweight Model for Power Quality Disturbance Recognition Targeting Edge Deployment
by Hao Bai, Ruotian Yao, Tong Liu, Ziji Ma, Shangyu Liu, Yiyong Lei and Yawen Zheng
Energies 2026, 19(2), 368; https://doi.org/10.3390/en19020368 - 12 Jan 2026
Abstract
To address the dual demands of accuracy and real-time performance in power quality disturbance (PQD) recognition for new power systems, this paper proposes a lightweight model named the Cross-Channel Attention Three-Layer Convolutional Model (1D-CCANet-3), specifically designed for edge deployment. Based on the one-dimensional convolutional neural network (1D-CNN), the model features an ultra-compact architecture with only three convolutional layers and one fully connected layer. By incorporating a set of cross-channel attention (CCA) mechanisms in the final convolutional layer, the model further enhances disturbance recognition accuracy. Compared to other deep learning models, 1D-CCANet-3 significantly reduces computational and storage requirements for edge devices while achieving accurate and efficient PQD recognition. The model demonstrates robust performance in recognizing 10 types of PQD under varying signal-to-noise ratio (SNR) conditions. Furthermore, the model has been successfully deployed on an FPGA platform and exhibits high recognition accuracy and efficiency in real-world data validation. This work provides a feasible and effective solution for accurate and real-time PQD monitoring on edge devices in new power systems.
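The abstract describes the architecture only at a high level: three convolutional layers, one fully connected layer, and cross-channel attention on the final feature map. A minimal numpy sketch of that shape, assuming a squeeze-and-softmax style channel reweighting for the CCA step (the paper's exact CCA formulation is not given here, so this is an illustrative stand-in):

```python
import numpy as np

def conv1d(x, w):
    """'Valid' 1-D convolution with ReLU: x (C_in, T), w (C_out, C_in, K) -> (C_out, T-K+1)."""
    c_out, c_in, k = w.shape
    t_out = x.shape[1] - k + 1
    out = np.zeros((c_out, t_out))
    for o in range(c_out):
        for t in range(t_out):
            out[o, t] = np.sum(w[o] * x[:, t:t + k])
    return np.maximum(out, 0.0)

def cross_channel_attention(f):
    """Reweight channels by a softmax over their global average responses (assumed CCA form)."""
    s = f.mean(axis=1)                        # squeeze: (C,)
    a = np.exp(s - s.max()); a /= a.sum()     # channel weights, sum to 1
    return f * a[:, None]

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 64))              # one-channel PQD waveform snippet
w1 = rng.standard_normal((4, 1, 5)) * 0.1
w2 = rng.standard_normal((8, 4, 5)) * 0.1
w3 = rng.standard_normal((16, 8, 5)) * 0.1
h = conv1d(conv1d(conv1d(x, w1), w2), w3)     # three convolutional layers
h = cross_channel_attention(h)                # CCA on the final feature map
logits = h.mean(axis=1) @ rng.standard_normal((16, 10))  # FC head, 10 PQD classes
```

The channel counts, kernel size, and input length are placeholders; only the layer count follows the abstract.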

41 pages, 25791 KB  
Article
TGDHTL: Hyperspectral Image Classification via Transformer–Graph Convolutional Network–Diffusion with Hybrid Domain Adaptation
by Zarrin Mahdavipour, Nashwan Alromema, Abdolraheem Khader, Ghulam Farooque, Ali Ahmed and Mohamed A. Damos
Remote Sens. 2026, 18(2), 189; https://doi.org/10.3390/rs18020189 - 6 Jan 2026
Abstract
Hyperspectral image (HSI) classification is pivotal for remote sensing applications, including environmental monitoring, precision agriculture, and urban land-use analysis. However, its accuracy is often limited by scarce labeled data, class imbalance, and domain discrepancies between standard RGB and HSI imagery. Although recent deep learning approaches, such as 3D convolutional neural networks (3D-CNNs), transformers, and generative adversarial networks (GANs), show promise, they struggle with spectral fidelity, computational efficiency, and cross-domain adaptation in label-scarce scenarios. To address these challenges, we propose the Transformer–Graph Convolutional Network–Diffusion with Hybrid Domain Adaptation (TGDHTL) framework. This framework integrates domain-adaptive alignment of RGB and HSI data, efficient synthetic data generation, and multi-scale spectral–spatial modeling. Specifically, a lightweight transformer, guided by Maximum Mean Discrepancy (MMD) loss, aligns feature distributions across domains. A class-conditional diffusion model generates high-quality samples for underrepresented classes in only 15 inference steps, reducing labeled data needs by approximately 25% and computational costs by up to 80% compared to traditional 1000-step diffusion models. Additionally, a Multi-Scale Stripe Attention (MSSA) mechanism, combined with a Graph Convolutional Network (GCN), enhances pixel-level spatial coherence. Evaluated on six benchmark datasets, including HJ-1A and WHU-OHS, TGDHTL consistently achieves high overall accuracy (e.g., 97.89% on University of Pavia) with just 11.9 GFLOPs, surpassing state-of-the-art methods. This framework provides a scalable, data-efficient solution for HSI classification under domain shifts and resource constraints.
(This article belongs to the Section Remote Sensing Image Processing)
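The abstract's transformer is guided by a Maximum Mean Discrepancy (MMD) loss to align RGB and HSI feature distributions. A minimal numpy sketch of squared MMD with an RBF kernel; the bandwidth `gamma` and feature dimensions are illustrative choices, not the paper's:

```python
import numpy as np

def mmd_rbf(x, y, gamma=0.05):
    """Squared MMD between samples x, y with kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    def k(a, b):
        d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
        return np.exp(-gamma * d)
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

rng = np.random.default_rng(1)
src = rng.standard_normal((100, 8))              # "RGB-domain" features
tgt_same = rng.standard_normal((100, 8))         # same distribution: MMD near 0
tgt_shift = rng.standard_normal((100, 8)) + 2.0  # shifted distribution: MMD large
```

In training, such a term would be added to the classification loss so the feature extractor is penalized for domain-distinguishable representations.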

30 pages, 12301 KB  
Article
Deep Learning 1D-CNN-Based Ground Contact Detection in Sprint Acceleration Using Inertial Measurement Units
by Felix Friedl, Thorben Menrad and Jürgen Edelmann-Nusser
Sensors 2026, 26(1), 342; https://doi.org/10.3390/s26010342 - 5 Jan 2026
Abstract
Background: Ground contact (GC) detection is essential for sprint performance analysis. Inertial measurement units (IMUs) enable field-based assessment, but their reliability during sprint acceleration remains limited when using heuristic and recently used machine learning algorithms. This study introduces a deep learning one-dimensional convolutional neural network (1D-CNN) to improve the detection of GC events and GC times in sprint acceleration. Methods: Twelve sprint-trained athletes performed 60 m sprints while bilateral shank-mounted IMUs (1125 Hz) and synchronized high-speed video (250 Hz) captured the first 15 m. Video-derived GC events served as reference labels for model training, validation, and testing, using resultant acceleration and angular velocity as model inputs. Results: The optimized model (18 inception blocks, window = 100, stride = 15) achieved mean Hausdorff distances ≤ 6 ms and 100% precision and recall for both validation and test datasets (Rand Index ≥ 0.977). Agreement with video references was excellent (bias < 1 ms, limits of agreement ± 15 ms, r > 0.90, p < 0.001). Conclusions: The 1D-CNN surpassed heuristic and prior machine learning approaches in the sprint acceleration phase, offering robust, near-perfect GC detection. These findings highlight the promise of deep learning-based time-series models for reliable, real-world biomechanical monitoring in sprint acceleration tasks.
(This article belongs to the Special Issue Inertial Sensing System for Motion Monitoring)
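The stated model input (window = 100 samples, stride = 15) implies a strided segmentation of the IMU stream. A sketch of that windowing step; the one-second signal length and the exact framing convention are assumptions for illustration:

```python
import numpy as np

def sliding_windows(signal, window=100, stride=15):
    """Segment a 1-D signal into overlapping windows of the stated size and stride."""
    n = 1 + (len(signal) - window) // stride          # number of full windows
    idx = np.arange(window)[None, :] + stride * np.arange(n)[:, None]
    return signal[idx]                                # shape (n, window)

sig = np.arange(1125)            # one second of shank-IMU samples at 1125 Hz
win = sliding_windows(sig, window=100, stride=15)
```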

18 pages, 2548 KB  
Article
Quantitative Analysis Model for the Powder Content of Zanthoxylum bungeanum Based on IncepSpect-CBAM
by Yue Wang, Pingzeng Liu, Sicheng Liang, Yan Zhang, Ke Zhu and Qun Yu
Foods 2026, 15(1), 169; https://doi.org/10.3390/foods15010169 - 4 Jan 2026
Abstract
The adulteration of Zanthoxylum bungeanum powder presents a complex challenge, as current near-infrared spectroscopy (NIRS) models are typically designed for specific adulterants and require extensive preprocessing, limiting their practical utility. To overcome these limitations, this study proposes IncepSpect-CBAM, an end-to-end one-dimensional convolutional neural network that integrates multi-scale Inception modules, a Convolutional Block Attention Module (CBAM), and residual connections. The model directly learns features from raw spectra while maintaining robustness across multiple adulteration scenarios, focusing specifically on quantifying Zanthoxylum bungeanum powder content. When evaluated on a dataset containing four common adulterants (corn flour, wheat bran powder, rice bran powder, and Zanthoxylum bungeanum stem powder), the model achieved a Root Mean Square Error of Prediction (RMSEP) of 0.058 and a coefficient of determination for prediction (R²P) of 0.980, demonstrating superior performance over traditional methods including Partial Least Squares Regression (PLSR) and Support Vector Regression (SVR), as well as deep learning benchmarks such as 1D-CNN and DeepSpectra. The results establish that the proposed model enables high-precision quantitative analysis of Zanthoxylum bungeanum powder content across diverse adulteration types, providing a robust technical framework for rapid, non-destructive quality assessment of powdered food products using near-infrared spectroscopy.
(This article belongs to the Section Food Analytical Methods)

19 pages, 1646 KB  
Article
Sim-to-Real Domain Adaptation for Early Alzheimer’s Detection from Handwriting Kinematics Using Hybrid Deep Learning
by Ikram Bazarbekov, Ali Almisreb, Madina Ipalakova, Madina Bazarbekova and Yevgeniya Daineko
Sensors 2026, 26(1), 298; https://doi.org/10.3390/s26010298 - 2 Jan 2026
Abstract
Alzheimer’s disease (AD) is a progressive neurodegenerative disorder characterized by cognitive and motor decline. Early detection remains challenging, as traditional neuroimaging and neuropsychological assessments often fail to capture subtle, preclinical changes. Recent advances in digital health and artificial intelligence (AI) offer new opportunities to identify non-invasive biomarkers of cognitive impairment. In this study, we propose an AI-driven framework for early AD detection based on handwriting motion data captured using a sensor-integrated Smart Pen. The system employs an inertial measurement unit (MPU-9250) to record fine-grained kinematic and dynamic signals during handwriting and drawing tasks. Multiple machine learning (ML) algorithms—Logistic Regression, Support Vector Machine (SVM), Random Forest (RF), and k-Nearest Neighbors (kNN)—and deep learning (DL) architectures, including one-dimensional Convolutional Neural Networks (1D-CNN), Long Short-Term Memory (LSTM), and a hybrid CNN-BiLSTM network, were systematically evaluated. To address data scarcity, we implemented a Sim-to-Real Domain Adaptation strategy, augmenting the training set with physics-based synthetic samples. Results show that classical ML models achieved moderate diagnostic performance (AUC: 0.62–0.76), while the proposed hybrid DL model demonstrated superior predictive capability (accuracy: 0.91, AUC: 0.96). These findings underscore the potential of motion-based digital biomarkers for the automated, non-invasive detection of AD. The proposed framework represents a cost-effective and clinically scalable informatics solution for digital cognitive assessment.
(This article belongs to the Section Biomedical Sensors)

17 pages, 42997 KB  
Article
State-of-Charge Estimation of Lithium-Ion Batteries Based on the CNN-Bi-LSTM-AM Model Under Low-Temperature Environments
by Ran Li, Yiming Hao, Mingze Zhang and Yanling Lv
Sensors 2026, 26(1), 264; https://doi.org/10.3390/s26010264 - 1 Jan 2026
Abstract
Accurate state-of-charge (SOC) estimation is essential for lithium-ion battery management, especially under low temperatures where traditional methods suffer from noise sensitivity and nonlinear dynamics. In this paper, a hybrid deep learning model integrating a one-dimensional convolutional neural network (1D-CNN), bidirectional long short-term memory (Bi-LSTM), and an attention mechanism (AM) is introduced to enhance SOC estimation accuracy. The 1D-CNN extracts local features from voltage and current sequences, while Bi-LSTM captures bidirectional temporal dependencies, and the AM dynamically emphasizes critical time steps. Experiments conducted on the Panasonic 18650PF dataset at temperatures ranging from −20 °C to 0 °C show that the proposed CNN-Bi-LSTM-AM model achieves a mean absolute error (MAE) of 0.17–0.77% and a root mean square error (RMSE) of 0.33–0.94% under US06 and UDDS driving cycles, outperforming CNN-LSTM and CNN-Bi-LSTM benchmarks. The results demonstrate that the model effectively handles voltage distortion and nonlinearities in low-temperature environments, offering a reliable solution for battery management systems operating under extreme conditions.
(This article belongs to the Section Electronic Sensors)
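The attention mechanism that "dynamically emphasizes critical time steps" is commonly realized as a learned scoring of each hidden state followed by softmax pooling. A numpy sketch of that pattern; the scoring vector, sequence length, and hidden size are hypothetical, since the paper's exact attention form is not given in the abstract:

```python
import numpy as np

def temporal_attention(h, w):
    """Score each time step of a (T, D) hidden sequence with vector w,
    then pool the sequence with softmax attention weights."""
    e = h @ w                                  # (T,) unnormalized scores
    a = np.exp(e - e.max()); a /= a.sum()      # attention weights over time
    return a, a @ h                            # weights and (D,) context vector

rng = np.random.default_rng(3)
h = rng.standard_normal((50, 32))              # e.g. Bi-LSTM outputs: 50 steps, dim 32
w = rng.standard_normal(32) * 0.1              # hypothetical learned scoring vector
a, ctx = temporal_attention(h, w)              # ctx feeds the SOC regression head
```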

25 pages, 2546 KB  
Article
From Joint Distribution Alignment to Spatial Configuration Learning: A Multimodal Financial Governance Diagnostic Framework to Enhance Capital Market Sustainability
by Wenjuan Li, Xinghua Liu, Ziyi Li, Zulei Qin, Jinxian Dong and Shugang Li
Sustainability 2025, 17(24), 11236; https://doi.org/10.3390/su172411236 - 15 Dec 2025
Abstract
Financial fraud, as a salient manifestation of corporate governance failure, erodes investor confidence and threatens the long-term sustainability of capital markets. This study aims to develop and validate SFG-2DCNN, a multimodal deep learning framework that adopts a configurational perspective to diagnose financial fraud under class-imbalanced conditions and support sustainable corporate governance. Conventional diagnostic approaches struggle to capture the higher-order interactions within covert fraud patterns due to scarce fraud samples and complex multimodal signals. To overcome these limitations, SFG-2DCNN adopts a systematic two-stage mechanism. First, to ensure a logically consistent data foundation, the framework builds a domain-adaptive generative model (SMOTE-FraudGAN) that enforces joint distribution alignment to fundamentally resolve the issue of economic logic coherence in synthetic samples. Subsequently, the framework pioneers a feature topology mapping strategy that spatializes extracted multimodal covert signals, including non-traditional indicators (e.g., Total Liabilities/Operating Costs) and affective dissonance in managerial narratives, into an ordered two-dimensional matrix, enabling a two-dimensional Convolutional Neural Network (2D-CNN) to efficiently identify potential governance failure patterns through deep spatial fusion. Experiments on Chinese A-share listed firms demonstrate that SFG-2DCNN achieves an F1-score of 0.917 and an AUC of 0.942, significantly outperforming baseline models. By advancing the analytical paradigm from isolated variable assessment to holistic multimodal configurational analysis, this research provides a high-fidelity tool for strengthening sustainable corporate governance and market transparency.
(This article belongs to the Section Economic and Business Aspects of Sustainability)
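The feature topology mapping step spatializes a flat vector of multimodal indicators into an ordered 2-D matrix that a 2D-CNN can convolve over. A toy numpy sketch, with a random permutation standing in for the paper's learned or designed layout and an 8×8 grid as an arbitrary size:

```python
import numpy as np

def to_feature_matrix(features, order, side=8):
    """Place a flat feature vector into a 2-D grid; 'order' encodes the topology
    mapping (which indicator lands in which cell). Unused cells stay zero."""
    grid = np.zeros(side * side)
    grid[:len(order)] = features[order]
    return grid.reshape(side, side)          # 2D-CNN input: an (8, 8) "image"

rng = np.random.default_rng(4)
feats = rng.standard_normal(60)              # e.g. 60 multimodal fraud indicators
order = rng.permutation(60)                  # hypothetical layout, not the paper's
img = to_feature_matrix(feats, order)
```

The point of the mapping is that related indicators end up adjacent, so 2-D convolutions can capture their higher-order interactions.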

28 pages, 33315 KB  
Article
Hyperspectral Image Classification with Multi-Path 3D-CNN and Coordinated Hierarchical Attention
by Wenyi Hu, Wei Shi, Chunjie Lan, Yuxia Li and Lei He
Remote Sens. 2025, 17(24), 4035; https://doi.org/10.3390/rs17244035 - 15 Dec 2025
Abstract
Convolutional Neural Networks (CNNs) have been extensively applied for the extraction of deep features in hyperspectral imagery tasks. However, traditional 3D-CNNs are limited by their fixed-size receptive fields and inherent locality. This restricts their ability to capture multi-scale objects and model long-range dependencies, ultimately hindering the representation of large-area land-cover structures. To overcome these drawbacks, we present a new framework designed to integrate multi-scale feature fusion and a hierarchical attention mechanism for hyperspectral image classification. Channel-wise Squeeze-and-Excitation (SE) and Convolutional Block Attention Module (CBAM) spatial attention are combined to enhance feature representation from both spectral bands and spatial locations, allowing the network to emphasize critical wavelengths and salient spatial structures. Finally, by integrating the self-attention inherent in the Transformer architecture with a Cross-Attention Fusion (CAF) mechanism, a local-global feature fusion module is developed. This module effectively captures extended-span interdependencies present in hyperspectral remote sensing images, and this process facilitates the effective integration of both localized and holistic attributes. On the Salinas Valley dataset, the proposed method delivers an Overall Accuracy (OA) of 0.9929 and an Average Accuracy (AA) of 0.9949, attaining perfect recognition accuracy for certain classes. The proposed model demonstrates commendable class balance and classification stability. Across multiple publicly available hyperspectral remote sensing image datasets, it systematically produces classification outcomes that significantly outperform those of established benchmark methods, exhibiting distinct advantages in feature representation, structural modeling, and the discrimination of complex ground objects.
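The channel-wise Squeeze-and-Excitation attention referenced in the abstract follows a standard pattern: global average pooling, a small bottleneck MLP, and sigmoid gating of the channels. A numpy sketch (channel count and reduction ratio are illustrative, not the paper's):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def squeeze_excitation(x, w1, w2):
    """SE channel attention on a (C, H, W) feature map:
    squeeze (global average pool) -> bottleneck MLP -> per-channel gates in (0, 1)."""
    s = x.mean(axis=(1, 2))                     # squeeze: (C,)
    a = sigmoid(np.maximum(s @ w1, 0.0) @ w2)   # excitation gates
    return x * a[:, None, None]                 # rescale channels

rng = np.random.default_rng(5)
x = rng.standard_normal((16, 9, 9))             # spectral-spatial feature patch
w1 = rng.standard_normal((16, 4)) * 0.3         # reduction ratio r = 4 (assumed)
w2 = rng.standard_normal((4, 16)) * 0.3
y = squeeze_excitation(x, w1, w2)
```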

21 pages, 1138 KB  
Article
Explainable Deep Learning for Bearing Fault Diagnosis: Architectural Superiority of ResNet-1D Validated by SHAP
by Milos Poliak, Lukasz Pawlik and Damian Frej
Electronics 2025, 14(24), 4875; https://doi.org/10.3390/electronics14244875 - 11 Dec 2025
Abstract
Rolling element bearing fault diagnosis (BFD) is fundamental to Predictive Maintenance (PdM) strategies for rotating machinery, as early anomaly detection prevents catastrophic failures, reduces unplanned downtime, and optimizes operational costs. This study introduces an interpretable Deep Learning (DL) framework that rigorously compares the performance of an Artificial Neural Network–Multilayer Perceptron (ANN-MLP), a one-dimensional Convolutional Neural Network (1D-CNN), and a ResNet-1D architecture for classifying seven bearing health states using a compact vector of 15 statistical features extracted from vibration signals. Both baseline models (ANN-MLP and 1D-CNN) failed to detect the critical Abrasive Particles fault (F1 = 0.0000). In contrast, the ResNet-1D architecture achieved statistically superior diagnostic performance, successfully resolving the most challenging class with a perfect F1-score of 1.0000 and an overall macro F1-score of 0.9913. This superiority was confirmed by a paired t-test on 100 bootstrap samples, establishing a highly significant difference in performance against the 1D-CNN (t = 592.702, p < 0.001). To boost transparency and trust, the SHapley Additive exPlanations (SHAP) method was applied to interpret the ResNet-1D’s decisions. The SHAP analysis revealed that the Crest Factor from Sensor 1 (Crest_1) exerts the strongest influence on the critical Abrasive Particles fault predictions, physically validating the model’s intelligence against established domain knowledge of impulsive wear events. These findings support transparent, highly reliable, and evidence-based decision-making in industrial PdM applications within Industry 4.0 environments.
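The paired t-test on per-bootstrap F1 scores can be reproduced in outline. The scores below are synthetic placeholders, not the paper's data; only the test statistic itself is standard:

```python
import numpy as np

def paired_t(a, b):
    """Paired t statistic: mean of the per-sample differences over its standard error."""
    d = np.asarray(a) - np.asarray(b)
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

rng = np.random.default_rng(6)
f1_resnet = 0.99 + 0.002 * rng.standard_normal(100)  # hypothetical bootstrap F1 scores
f1_cnn = 0.85 + 0.002 * rng.standard_normal(100)     # hypothetical baseline scores
t = paired_t(f1_resnet, f1_cnn)                       # large t => significant gap
```

With 100 paired bootstrap samples and a consistent gap, t is in the hundreds, which is the regime the abstract reports.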

31 pages, 9303 KB  
Article
Automatic Quadrotor Dispatch Missions Based on Air-Writing Gesture Recognition
by Pu-Sheng Tsai, Ter-Feng Wu and Yen-Chun Wang
Processes 2025, 13(12), 3984; https://doi.org/10.3390/pr13123984 - 9 Dec 2025
Abstract
This study develops an automatic dispatch system for quadrotor UAVs that integrates air-writing gesture recognition with a graphical user interface (GUI). The DJI RoboMaster quadrotor UAV (DJI, Shenzhen, China) was employed as the experimental platform, combined with an ESP32 microcontroller (Espressif Systems, Shanghai, China) and the RoboMaster SDK (version 3.0). On the Python (version 3.12.7) platform, a GUI was implemented using Tkinter (version 8.6), allowing users to input addresses or landmarks, which were then automatically converted into geographic coordinates and imported into Google Maps for route planning. The generated flight commands were transmitted to the UAV via a UDP socket, enabling remote autonomous flight. For gesture recognition, a Raspberry Pi integrated with the MediaPipe Hands module was used to capture 16 types of air-written flight commands in real time through a camera. The training samples were categorized into one-dimensional coordinates and two-dimensional images. In the one-dimensional case, X/Y axis coordinates were concatenated after data augmentation, interpolation, and normalization. In the two-dimensional case, three types of images were generated, namely font trajectory plots (T-plots), coordinate-axis plots (XY-plots), and composite plots combining the two (XYT-plots). To evaluate classification performance, several machine learning and deep learning architectures were employed, including a multi-layer perceptron (MLP), support vector machine (SVM), one-dimensional convolutional neural network (1D-CNN), and two-dimensional convolutional neural network (2D-CNN). The results demonstrated effective recognition accuracy across different models and sample formats, verifying the feasibility of the proposed air-writing trajectory framework for non-contact gesture-based UAV control. Furthermore, by combining gesture recognition with a GUI-based map planning interface, the system enhances the intuitiveness and convenience of UAV operation. Future extensions, such as incorporating aerial image object recognition, could extend the framework’s applications to scenarios including forest disaster management, vehicle license plate recognition, and air pollution monitoring.
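For the one-dimensional sample format, the abstract describes interpolation, normalization, and concatenation of the X/Y coordinates. A numpy sketch of that preprocessing; the fixed length n = 64 and min-max normalization are illustrative assumptions:

```python
import numpy as np

def preprocess_trajectory(xs, ys, n=64):
    """Resample an air-written X/Y fingertip trajectory to n points, min-max
    normalize each axis to [0, 1], and concatenate into one 1-D feature vector."""
    t_old = np.linspace(0.0, 1.0, len(xs))
    t_new = np.linspace(0.0, 1.0, n)
    out = []
    for c in (np.asarray(xs, float), np.asarray(ys, float)):
        c = np.interp(t_new, t_old, c)                    # interpolation to n points
        c = (c - c.min()) / (c.max() - c.min() + 1e-9)    # per-axis normalization
        out.append(c)
    return np.concatenate(out)                            # shape (2n,), 1D-CNN input

xs = [0, 1, 2, 3, 4, 5]          # toy fingertip path from MediaPipe landmarks
ys = [0, 1, 0, 1, 0, 1]
v = preprocess_trajectory(xs, ys)
```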

21 pages, 8629 KB  
Article
Nondestructive Identification of Eggshell Cracks Using Hyperspectral Imaging Combined with Attention-Enhanced 3D-CNN
by Hao Li, Aoyun Zheng, Chaoxian Liu, Jun Huang, Yong Ma, Huanjun Hu and You Du
Foods 2025, 14(24), 4183; https://doi.org/10.3390/foods14244183 - 5 Dec 2025
Abstract
Eggshell cracks are a critical factor affecting egg quality and food safety, with traditional detection methods often struggling to detect fine cracks, especially under multi-colored shells and complex backgrounds. To address this issue, we propose a non-destructive detection approach based on an enhanced three-dimensional convolutional neural network (3D-CNN), named 3D-CrackNet, integrated with hyperspectral imaging (HSI) for high-precision identification and localization of eggshell cracks. Operating within the 1000–2500 nm spectral range, the proposed framework employs spectral preprocessing and optimal band selection to improve discriminative feature representation. A residual learning module is incorporated to mitigate gradient degradation during deep joint spectral-spatial feature extraction, while a parameter-free SimAM attention mechanism adaptively enhances crack-related regions and suppresses background interference. This architecture enables the network to effectively capture both fine-grained spatial textures and contiguous spectral patterns associated with cracks. Experiments on a self-constructed dataset of 400 egg samples show that 3D-CrackNet achieves an F1-score of 75.49% and an Intersection over Union (IoU) of 60.62%, significantly outperforming conventional 1D-CNN and 2D-CNN models. These findings validate that 3D-CrackNet offers a robust, non-destructive, and efficient solution for accurately detecting and localizing subtle eggshell cracks, demonstrating strong potential for intelligent online egg quality grading and micro-defect monitoring in industrial applications.
(This article belongs to the Section Food Analytical Methods)
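SimAM, the parameter-free attention named in the abstract, weights each neuron by a sigmoid of its inverse energy, computed from per-channel statistics with no learned parameters. A numpy sketch following the published SimAM formula, using the biased variance for brevity:

```python
import numpy as np

def simam(x, lam=1e-4):
    """Parameter-free SimAM attention over each channel of a (C, H, W) map.
    Neurons that deviate from their channel mean get higher inverse energy,
    hence larger sigmoid gates."""
    mu = x.mean(axis=(1, 2), keepdims=True)
    var = ((x - mu) ** 2).mean(axis=(1, 2), keepdims=True)
    e_inv = (x - mu) ** 2 / (4.0 * (var + lam)) + 0.5   # per-neuron inverse energy
    return x / (1.0 + np.exp(-e_inv))                   # x * sigmoid(e_inv)

rng = np.random.default_rng(7)
x = rng.standard_normal((8, 16, 16))                    # spectral-spatial features
y = simam(x)
```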

21 pages, 2016 KB  
Article
Molecular-Level Identification of Liquor Vintage via an Intelligent Electronic Tongue Integrated with a One-Dimensional Convolutional Neural Network
by Yali Bi, Yalong Zhu, Jiaming Liu, Digan Yu, Qiqing Fan, Xuefeng Hu and Wei Zhang
Sensors 2025, 25(23), 7350; https://doi.org/10.3390/s25237350 - 3 Dec 2025
Abstract
Accurate identification of liquor vintage is crucial for ensuring product authenticity and optimizing market value, as the price and sensory quality of liquor increase with age. Traditional sensory evaluation by sommeliers is inherently limited by subjectivity, physiological fatigue, and inconsistency, posing challenges for reliable large-scale quality assessment. To address these limitations, this study introduces an innovative homemade electronic tongue (ET) system integrated with machine learning and deep learning algorithms for rapid and precise vintage identification. The ET system, consisting of six metallic electrodes and a MEMS-based temperature sensor, successfully discriminated five consecutive liquor vintages produced at one-year intervals. Using Support Vector Machine (SVM) and Random Forest (RF) algorithms, classification accuracies of 91.0% and 78.0% were achieved, respectively. Remarkably, the proposed one-dimensional convolutional neural network (1D-CNN) model further improved the recognition accuracy to 94.0%, representing the highest reported performance for ET-based vintage prediction to date. The findings demonstrate that the integration of multi-electrode electrochemical sensing with artificial intelligence enables objective, reproducible, and high-throughput evaluation of liquor aging characteristics. This approach provides a scientifically robust alternative to human sensory analysis, offering significant potential for counterfeit detection, liquor authentication, and the broader assessment of food and beverage quality within molecular sensing frameworks.
(This article belongs to the Section Electronic Sensors)

22 pages, 4161 KB  
Article
Hybrid One-Dimensional Convolutional Neural Network—Recurrent Neural Network Model for Reconstructing Missing Data in Structural Health Monitoring Systems
by Nguyen Thi Thu Nga, Jose C. Matos and Son Dang Ngoc
Machines 2025, 13(12), 1101; https://doi.org/10.3390/machines13121101 - 27 Nov 2025
Abstract
Data loss is a recurring and critical issue in Structural Health Monitoring (SHM) systems, often arising from a range of factors including sensor malfunction, communication breakdown, and exposure to adverse environmental conditions. Such interruptions in data availability can significantly compromise the accuracy and reliability of structural performance assessments, thereby hindering effective decision-making in safety evaluation and maintenance planning. In this study, a novel deep learning-based framework is proposed for data reconstruction in SHM, employing a hybrid architecture that integrates one-dimensional convolutional neural networks (1D-CNNs) with recurrent neural networks (RNNs). By combining these complementary strengths, the hybrid 1D-CNN–RNN model demonstrates superior capacity for accurate signal reconstruction. A real-world case study was conducted using vibration data from the Trai Hut Bridge in Vietnam. Five network configurations with varying depths were examined under single- and multi-channel loss scenarios. The results confirm that the method can accurately reconstruct lost signals. For single-channel loss, the best configuration achieved an MAE = 0.019 m/s² and R² = 0.987, while for multi-channel loss, a deeper network yielded an MAE = 0.044 m/s² and R² = 0.974. Furthermore, the model exhibits robust and stable performance even under more demanding multi-channel data loss conditions, highlighting its resilience to practical operational challenges. The results demonstrate that the proposed CNN–RNN framework is accurate, robust, and adaptable for practical SHM data reconstruction applications.

17 pages, 3038 KB  
Article
Research on Deep Learning-Based Human–Robot Static/Dynamic Gesture-Driven Control Framework
by Gong Zhang, Jiahong Su, Shuzhong Zhang, Jianzheng Qi, Zhicheng Hou and Qunxu Lin
Sensors 2025, 25(23), 7203; https://doi.org/10.3390/s25237203 - 25 Nov 2025
Cited by 1 | Viewed by 718
Abstract
For human–robot gesture-driven control, this paper proposes a deep learning-based approach that employs both static and dynamic gestures to drive and control robots for object-grasping and delivery tasks. The method utilizes two-dimensional Convolutional Neural Networks (2D-CNNs) for static gesture recognition and a hybrid architecture combining three-dimensional Convolutional Neural Networks (3D-CNNs) and Long Short-Term Memory networks (3D-CNN+LSTM) for dynamic gesture recognition. Results on a custom gesture dataset demonstrate validation accuracies of 95.38% for static gestures and 93.18% for dynamic gestures. Hand pose estimation was then performed so that the robot could be driven to perform the corresponding tasks. The MediaPipe machine learning framework was first employed to extract hand feature points. These 2D feature points were then converted into 3D coordinates using a depth camera-based pose estimation method, followed by coordinate system transformation to obtain hand poses relative to the robot’s base coordinate system. Finally, an experimental platform for human–robot gesture-driven interaction was established, deploying both gesture recognition models. Four participants were invited to perform 100 trials each of gesture-driven object-grasping and delivery tasks under three lighting conditions: natural light, low light, and strong light. Experimental results show that the average success rates for completing tasks via static and dynamic gestures are no less than 96.88% and 94.63%, respectively, with task completion times consistently within 20 s. These findings demonstrate that the proposed approach enables robust vision-based robotic control through natural hand gestures, showing great prospects for human–robot collaboration applications. Full article
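The 2D-to-3D conversion step described above is typically done via pinhole back-projection with a depth camera. A minimal sketch of that step, where the intrinsics `fx`, `fy`, `cx`, `cy` are hypothetical illustrative values rather than the paper's calibration:

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection: pixel (u, v) at range `depth` (m) -> camera-frame XYZ."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Hypothetical intrinsics for illustration only.
fx = fy = 600.0
cx, cy = 320.0, 240.0

# A keypoint at the principal point maps straight down the optical axis.
print(backproject(320.0, 240.0, 0.5, fx, fy, cx, cy))  # -> (0.0, 0.0, 0.5)
```

The resulting camera-frame coordinates would then be transformed by the hand-eye calibration matrix to obtain poses in the robot's base frame, as the abstract describes.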
36 pages, 2334 KB  
Article
Fair and Explainable Multitask Deep Learning on Synthetic Endocrine Trajectories for Real-Time Prediction of Stress, Performance, and Neuroendocrine States
by Abdullah, Zulaikha Fatima, Carlos Guzman Sánchez Mejorada, Muhammad Ateeb Ather, José Luis Oropeza Rodríguez and Grigori Sidorov
Computers 2025, 14(12), 515; https://doi.org/10.3390/computers14120515 - 25 Nov 2025
Viewed by 577
Abstract
Cortisol and testosterone are key digital biomarkers reflecting neuroendocrine activity across the hypothalamic–pituitary–adrenal (HPA) and hypothalamic–pituitary–gonadal (HPG) axes, encoding stress adaptation and behavioral regulation. Continuous real-world monitoring remains challenging due to the sparsity of sensing and the complexity of multimodal data. This study introduces a synthetic sensor-driven computational framework that models hormone variability through data-driven simulation and predictive learning, eliminating the need for continuous biosensor input. A hybrid deep ensemble integrates biological, behavioral, and contextual data using bidirectional multitask learning with one-dimensional convolutional neural network (1D-CNN) and long short-term memory (LSTM) branches, meta-gated expert fusion, Bayesian variational layers with Monte Carlo Dropout, and adversarial debiasing. Synthetically derived longitudinal hormone profiles, validated by Kolmogorov–Smirnov (KS), Wasserstein, maximum mean discrepancy (MMD), and dynamic time warping (DTW) metrics, account for class imbalance and temporal sparsity. Our framework achieved up to a 99.99% macro F1-score on augmented samples and more than 97% on unseen data, with ECE below 0.001. Selective prediction further improved accuracy by abstaining on low-confidence cases, achieving 99.9992–99.9998% accuracy on 99.5% of samples; the resulting models are smaller than 5 MB, so they can run in real time when deployed on wearable devices. Explainability investigations revealed the most important features on both the physiological and behavioral levels, demonstrating the framework's capabilities for adaptive clinical or organizational stress monitoring. Full article
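The calibration figure reported above (ECE below 0.001) refers to expected calibration error. A minimal plain-Python sketch of how ECE is conventionally computed by confidence binning (the binning scheme and toy values are illustrative, not the paper's implementation):

```python
def ece(confidences, correct, n_bins=10):
    """Expected calibration error: weighted |accuracy - confidence| gap per bin."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf = 1.0 into last bin
        bins[idx].append((conf, ok))
    total = len(confidences)
    err = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        acc = sum(1 for _, ok in b if ok) / len(b)
        err += (len(b) / total) * abs(acc - avg_conf)
    return err

# Toy case: predictions at 95% confidence that are all correct leave a 0.05 gap.
print(ece([0.95, 0.95, 0.95, 0.95], [True, True, True, True]))
```

Selective prediction interacts with this directly: abstaining on low-confidence cases removes exactly the samples that dominate the calibration gap, which is consistent with the near-perfect accuracy reported on the retained 99.5% of samples.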
(This article belongs to the Special Issue Wearable Computing and Activity Recognition)
(This article belongs to the Special Issue Wearable Computing and Activity Recognition)