Search Results (3,773)

Search Parameters:
Keywords = contrast learning

20 pages, 5369 KiB  
Article
Smart Postharvest Management of Strawberries: YOLOv8-Driven Detection of Defects, Diseases, and Maturity
by Luana dos Santos Cordeiro, Irenilza de Alencar Nääs and Marcelo Tsuguio Okano
AgriEngineering 2025, 7(8), 246; https://doi.org/10.3390/agriengineering7080246 (registering DOI) - 1 Aug 2025
Abstract
Strawberries are highly perishable fruits prone to postharvest losses due to defects, diseases, and uneven ripening. This study proposes a deep learning-based approach for automated quality assessment using the YOLOv8n object detection model. A custom dataset of 5663 annotated strawberry images was compiled, covering eight quality categories, including anthracnose, gray mold, powdery mildew, uneven ripening, and physical defects. Data augmentation techniques, such as rotation and Gaussian blur, were applied to enhance model generalization and robustness. The model was trained over 100 and 200 epochs, and its performance was evaluated using standard metrics: Precision, Recall, and mean Average Precision (mAP). The 200-epoch model achieved the best results, with a mAP50 of 0.79 and an inference time of 1 ms per image, demonstrating suitability for real-time applications. Classes with distinct visual features, such as anthracnose and gray mold, were accurately classified. In contrast, visually similar categories, such as ‘Good Quality’ and ‘Unripe’ strawberries, presented classification challenges.
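The mAP50, Precision, and Recall figures above all rest on intersection-over-union (IoU) matching between predicted and annotated boxes: a detection counts as a true positive at mAP50 when its IoU with a ground-truth box of the same class reaches 0.5. A minimal plain-Python illustration (not the authors' code):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A box covering the right half of another: IoU = 50 / 100 = 0.5,
# exactly at the mAP50 matching threshold.
half = iou((0, 0, 10, 10), (5, 0, 10, 10))  # → 0.5
```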

15 pages, 1767 KiB  
Article
A Contrastive Representation Learning Method for Event Classification in Φ-OTDR Systems
by Tong Zhang, Xinjie Peng, Yifan Liu, Kaiyang Yin and Pengfei Li
Sensors 2025, 25(15), 4744; https://doi.org/10.3390/s25154744 (registering DOI) - 1 Aug 2025
Abstract
The phase-sensitive optical time-domain reflectometry (Φ-OTDR) system has shown substantial potential in distributed acoustic sensing applications. Accurate event classification is crucial for effective deployment of Φ-OTDR systems, and various methods have been proposed for event classification in Φ-OTDR systems. However, most existing methods typically rely on sufficient labeled signal data for model training, which poses a major bottleneck in applying these methods due to the expensive and laborious process of labeling extensive data. To address this limitation, we propose CLWTNet, a novel contrastive representation learning method enhanced with wavelet transform convolution for event classification in Φ-OTDR systems. CLWTNet learns robust and discriminative representations directly from unlabeled signal data by transforming time-domain signals into STFT images and employing contrastive learning to maximize inter-class separation while preserving intra-class similarity. Furthermore, CLWTNet incorporates wavelet transform convolution to enhance its capacity to capture intricate features of event signals. The experimental results demonstrate that CLWTNet achieves competitive performance with the supervised representation learning methods and superior performance to unsupervised representation learning methods, even when training with unlabeled signal data. These findings highlight the effectiveness of CLWTNet in extracting discriminative representations without relying on labeled data, thereby enhancing data efficiency and reducing the costs and effort involved in extensive data labeling in practical Φ-OTDR system applications.
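The contrastive objective described above, which pulls representations of the same event together while pushing other events apart, can be sketched as a toy InfoNCE-style loss. This is an illustrative sketch in plain Python, not the authors' implementation; the cosine similarity and the temperature value are generic choices:

```python
import math

def contrastive_loss(anchor, positive, negatives, temperature=0.5):
    """InfoNCE-style loss: low when the anchor is close to its positive
    view and far from all negatives, high otherwise."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    pos = math.exp(cos(anchor, positive) / temperature)
    negs = sum(math.exp(cos(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + negs))

# Aligned positive pair -> small loss; misaligned pair -> large loss.
aligned = contrastive_loss([1, 0], [1, 0], [[0, 1], [-1, 0]])
mismatched = contrastive_loss([1, 0], [0, 1], [[1, 0], [-1, 0]])
```

Minimizing this loss over many unlabeled pairs is what lets a network learn discriminative features without any event labels.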
(This article belongs to the Topic Distributed Optical Fiber Sensors)

23 pages, 13529 KiB  
Article
A Self-Supervised Contrastive Framework for Specific Emitter Identification with Limited Labeled Data
by Jiaqi Wang, Lishu Guo, Pengfei Liu, Peng Shang, Xiaochun Lu and Hang Zhao
Remote Sens. 2025, 17(15), 2659; https://doi.org/10.3390/rs17152659 (registering DOI) - 1 Aug 2025
Abstract
Specific Emitter Identification (SEI) is a specialized technique for identifying different emitters by analyzing the unique characteristics embedded in received signals, known as Radio Frequency Fingerprints (RFFs), and SEI plays a crucial role in civilian applications. Recently, various SEI methods based on deep learning have been proposed. However, in real-world scenarios, the scarcity of accurately labeled data poses a significant challenge to these methods, which typically rely on large-scale supervised training. To address this issue, we propose a novel SEI framework based on self-supervised contrastive learning. Our approach comprises two stages: an unsupervised pretraining phase that uses contrastive loss to learn discriminative RFF representations from unlabeled data, and a supervised fine-tuning stage regularized through virtual adversarial training (VAT) to improve generalization under limited labels. This framework enables effective feature learning while mitigating overfitting. To validate the effectiveness of the proposed method, we collected real-world satellite navigation signals using a 40-meter antenna and conducted extensive experiments. The results demonstrate that our approach achieves outstanding SEI performance, significantly outperforming several mainstream SEI methods, thereby highlighting the practical potential of contrastive self-supervised learning in satellite transmitter identification.

23 pages, 4379 KiB  
Article
Large Vision Language Model: Enhanced-RSCLIP with Exemplar-Image Prompting for Uncommon Object Detection in Satellite Imagery
by Taiwo Efunogbon, Abimbola Efunogbon, Enjie Liu, Dayou Li and Renxi Qiu
Electronics 2025, 14(15), 3071; https://doi.org/10.3390/electronics14153071 (registering DOI) - 31 Jul 2025
Abstract
Large Vision Language Models (LVLMs) have shown promise in remote sensing applications, yet struggle with “uncommon” objects that lack sufficient public labeled data. This paper presents Enhanced-RSCLIP, a novel dual-prompt architecture that combines text prompting with exemplar-image processing for cattle herd detection in satellite imagery. Our approach introduces a key innovation where an exemplar-image preprocessing module using crop-based or attention-based algorithms extracts focused object features which are fed as a dual stream to a contrastive learning framework that fuses textual descriptions with visual exemplar embeddings. We evaluated our method on a custom dataset of 260 satellite images across UK and Nigerian regions. Enhanced-RSCLIP with crop-based exemplar processing achieved 72% accuracy in cattle detection and 56.2% overall accuracy on cross-domain transfer tasks, significantly outperforming text-only CLIP (31% overall accuracy). The dual-prompt architecture enables effective few-shot learning and cross-regional transfer from data-rich (UK) to data-sparse (Nigeria) environments, demonstrating a 41% improvement over baseline approaches for uncommon object detection in satellite imagery.
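The dual-prompt idea of fusing a text embedding with an exemplar-image embedding before scoring image regions can be sketched abstractly. The linear blend and the `alpha` mixing weight below are assumptions made for illustration only, not details taken from the paper:

```python
def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / ((sum(a * a for a in u) ** 0.5) * (sum(b * b for b in v) ** 0.5))

def dual_prompt_score(region_emb, text_emb, exemplar_emb, alpha=0.5):
    """Blend the text-prompt embedding with the exemplar-image embedding
    (hypothetical linear fusion), then score a candidate image region by
    cosine similarity to the fused prompt, CLIP-style."""
    fused = [alpha * t + (1 - alpha) * e for t, e in zip(text_emb, exemplar_emb)]
    return cosine(region_emb, fused)
```

In a CLIP-like embedding space, regions containing the target object should score higher against the fused prompt than against the text prompt alone when labeled text data is scarce.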

21 pages, 2909 KiB  
Article
Novel Federated Graph Contrastive Learning for IoMT Security: Protecting Data Poisoning and Inference Attacks
by Amarudin Daulay, Kalamullah Ramli, Ruki Harwahyu, Taufik Hidayat and Bernardi Pranggono
Mathematics 2025, 13(15), 2471; https://doi.org/10.3390/math13152471 (registering DOI) - 31 Jul 2025
Abstract
Malware evolution presents growing security threats for resource-constrained Internet of Medical Things (IoMT) devices. Conventional federated learning (FL) often suffers from slow convergence, high communication overhead, and fairness issues in dynamic IoMT environments. In this paper, we propose FedGCL, a secure and efficient FL framework integrating contrastive graph representation learning for enhanced feature discrimination, a Jain-index-based fairness-aware aggregation mechanism, an adaptive synchronization scheduler to optimize communication rounds, and secure aggregation via homomorphic encryption within a Trusted Execution Environment. We evaluate FedGCL on four benchmark malware datasets (Drebin, Malgenome, Kronodroid, and TUANDROMD) using 5 to 15 graph neural network clients over 20 communication rounds. Our experiments demonstrate that FedGCL achieves 96.3% global accuracy within three rounds and converges to 98.9% by round twenty—reducing required training rounds by 45% compared to FedAvg—while incurring only approximately 10% additional computational overhead. By preserving patient data privacy at the edge, FedGCL enhances system resilience without sacrificing model performance. These results indicate FedGCL’s promise as a secure, efficient, and fair federated malware detection solution for IoMT ecosystems.
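The Jain index behind the fairness-aware aggregation has a standard closed form: for client contributions x_1..x_n it is (Σx)² / (n·Σx²), equal to 1 when all clients contribute equally and approaching 1/n when one client dominates. A one-function sketch (the "contributions" fed to it here are hypothetical, not quantities defined in the paper):

```python
def jain_index(contributions):
    """Jain's fairness index over per-client contributions:
    1.0 for perfect equality, down to 1/n when one client dominates."""
    n = len(contributions)
    total = sum(contributions)
    squares = sum(x * x for x in contributions)
    return (total * total) / (n * squares)

balanced = jain_index([1, 1, 1, 1])   # → 1.0
skewed = jain_index([4, 0, 0, 0])     # → 0.25, i.e. 1/n for n = 4
```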

20 pages, 2320 KiB  
Article
Electric Vehicle Energy Management Under Unknown Disturbances from Undefined Power Demand: Online Co-State Estimation via Reinforcement Learning
by C. Treesatayapun, A. D. Munoz-Vazquez, S. K. Korkua, B. Srikarun and C. Pochaiya
Energies 2025, 18(15), 4062; https://doi.org/10.3390/en18154062 (registering DOI) - 31 Jul 2025
Abstract
This paper presents a data-driven energy management scheme for fuel cell and battery electric vehicles, formulated as a constrained optimal control problem. The proposed method employs a co-state network trained using real-time measurements to estimate the control law without requiring prior knowledge of the system model or a complete dataset across the full operating domain. In contrast to conventional reinforcement learning approaches, this method avoids the issue of high dimensionality and does not depend on extensive offline training. Robustness is demonstrated by treating uncertain and time-varying elements, including power consumption from air conditioning systems, variations in road slope, and passenger-related demands, as unknown disturbances. The desired state of charge is defined as a reference trajectory, and the control input is computed while ensuring compliance with all operational constraints. Validation results based on a combined driving profile confirm the effectiveness of the proposed controller in maintaining the battery charge, reducing fluctuations in fuel cell power output, and ensuring reliable performance under practical conditions. Comparative evaluations are conducted against two benchmark controllers: one designed to maintain a constant state of charge and another based on a soft actor–critic learning algorithm.
(This article belongs to the Special Issue Forecasting and Optimization in Transport Energy Management Systems)

29 pages, 15488 KiB  
Article
GOFENet: A Hybrid Transformer–CNN Network Integrating GEOBIA-Based Object Priors for Semantic Segmentation of Remote Sensing Images
by Tao He, Jianyu Chen and Delu Pan
Remote Sens. 2025, 17(15), 2652; https://doi.org/10.3390/rs17152652 (registering DOI) - 31 Jul 2025
Abstract
Geographic object-based image analysis (GEOBIA) has demonstrated substantial utility in remote sensing tasks. However, its integration with deep learning remains largely confined to image-level classification. This is primarily due to the irregular shapes and fragmented boundaries of segmented objects, which limit its applicability in semantic segmentation. While convolutional neural networks (CNNs) excel at local feature extraction, they inherently struggle to capture long-range dependencies. In contrast, Transformer-based models are well suited for global context modeling but often lack fine-grained local detail. To overcome these limitations, we propose GOFENet (Geo-Object Feature Enhanced Network)—a hybrid semantic segmentation architecture that effectively fuses object-level priors into deep feature representations. GOFENet employs a dual-encoder design combining CNN and Swin Transformer architectures, enabling multi-scale feature fusion through skip connections to preserve both local and global semantics. An auxiliary branch incorporating cascaded atrous convolutions is introduced to inject information of segmented objects into the learning process. Furthermore, we develop a cross-channel selection module (CSM) for refined channel-wise attention, a feature enhancement module (FEM) to merge global and local representations, and a shallow–deep feature fusion module (SDFM) to integrate pixel- and object-level cues across scales. Experimental results on the GID and LoveDA datasets demonstrate that GOFENet achieves superior segmentation performance, with 66.02% mIoU and 51.92% mIoU, respectively. The model exhibits strong capability in delineating large-scale land cover features, producing sharper object boundaries and reducing classification noise, while preserving the integrity and discriminability of land cover categories.
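The mIoU figures reported for GID and LoveDA average per-class intersection-over-union over the label classes. A compact reference computation on flattened label lists, for illustration only (not the authors' evaluation code, which would operate on full label rasters):

```python
def mean_iou(pred, truth, num_classes):
    """Mean intersection-over-union: per-class IoU on flat label lists,
    averaged over classes that appear in prediction or ground truth."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, truth) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, truth) if p == c or t == c)
        if union:  # skip classes absent from both maps
            ious.append(inter / union)
    return sum(ious) / len(ious)

# One pixel of class 0 mislabeled: class-0 IoU = 1/2, class-1 IoU = 2/3.
score = mean_iou([0, 0, 1, 1], [0, 1, 1, 1], num_classes=2)
```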

17 pages, 5062 KiB  
Article
DropDAE: Denoising Autoencoder with Contrastive Learning for Addressing Dropout Events in scRNA-seq Data
by Wanlin Juan, Kwang Woo Ahn, Yi-Guang Chen and Chien-Wei Lin
Bioengineering 2025, 12(8), 829; https://doi.org/10.3390/bioengineering12080829 (registering DOI) - 31 Jul 2025
Abstract
Single-cell RNA sequencing (scRNA-seq) has revolutionized molecular biology and genomics by enabling the profiling of individual cell types, providing insights into cellular heterogeneity. Deep learning methods have become popular in single cell analysis for tasks such as dimension reduction, cell clustering, and data imputation. In this work, we introduce DropDAE, a denoising autoencoder (DAE) model enhanced with contrastive learning, to specifically address the dropout events in scRNA-seq data, where certain genes show very low or even zero expression levels due to technical limitations. DropDAE uses the architecture of a denoising autoencoder to recover the underlying data patterns while leveraging contrastive learning to enhance group separation. Our extensive evaluations across multiple simulation settings based on synthetic data and a real-world dataset demonstrate that DropDAE not only reconstructs data effectively but also further improves clustering performance, outperforming existing methods in terms of accuracy and robustness.
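Dropout events of the kind DropDAE targets are commonly simulated by zeroing random entries of a clean expression matrix; the denoising autoencoder is then trained to map the corrupted matrix back to the original. A toy corruption step (the dropout rate and seed are arbitrary choices, not the paper's simulation settings):

```python
import random

def add_dropout(expression, rate, seed=0):
    """Simulate scRNA-seq dropout by zeroing a random fraction of entries.
    Returns a corrupted copy; the original matrix is left untouched."""
    rng = random.Random(seed)
    return [[0.0 if rng.random() < rate else v for v in row]
            for row in expression]

# Toy "clean" matrix: 10 cells x 100 genes, all expressed at 1.0.
expression = [[1.0] * 100 for _ in range(10)]
corrupted = add_dropout(expression, rate=0.3)  # ~30% of entries zeroed
```

Training pairs of (corrupted, clean) matrices like these are exactly what a denoising autoencoder consumes.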

17 pages, 920 KiB  
Article
Enhancing Early GI Disease Detection with Spectral Visualization and Deep Learning
by Tsung-Jung Tsai, Kun-Hua Lee, Chu-Kuang Chou, Riya Karmakar, Arvind Mukundan, Tsung-Hsien Chen, Devansh Gupta, Gargi Ghosh, Tao-Yuan Liu and Hsiang-Chen Wang
Bioengineering 2025, 12(8), 828; https://doi.org/10.3390/bioengineering12080828 - 30 Jul 2025
Abstract
Timely and accurate diagnosis of gastrointestinal diseases (GIDs) remains a critical bottleneck in clinical endoscopy, particularly due to the limited contrast and sensitivity of conventional white light imaging (WLI) in detecting early-stage mucosal abnormalities. To overcome this, this research presents the Spectrum Aided Vision Enhancer (SAVE), an innovative, software-driven framework that transforms standard WLI into high-fidelity hyperspectral imaging (HSI) and simulated narrow-band imaging (NBI) without any hardware modification. SAVE leverages advanced spectral reconstruction techniques, including Macbeth Color Checker-based calibration, principal component analysis (PCA), and multivariate polynomial regression, achieving a root mean square error (RMSE) of 0.056 and a structural similarity index (SSIM) exceeding 90%. Deep learning models (ResNet-50, ResNet-101, EfficientNet-B2, EfficientNet-B5, and EfficientNetV2-B0) were trained and validated on the Kvasir v2 dataset (n = 6490) to assess diagnostic performance across six key GI conditions. Results demonstrated that SAVE-enhanced imagery consistently outperformed raw WLI across precision, recall, and F1-score metrics, with EfficientNet-B2 and EfficientNetV2-B0 achieving the highest classification accuracy. Notably, this performance gain was achieved without the need for specialized imaging hardware. These findings highlight SAVE as a transformative solution for augmenting GI diagnostics, with the potential to significantly improve early detection, streamline clinical workflows, and broaden access to advanced imaging, especially in resource-constrained settings.
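The RMSE of 0.056 quoted above is a standard spectral-reconstruction error between a reconstructed spectrum and its reference. For orientation, the computation is simply:

```python
import math

def rmse(predicted, reference):
    """Root mean square error between two equal-length signals,
    e.g. a reconstructed spectrum vs. a measured reference spectrum."""
    return math.sqrt(
        sum((p - r) ** 2 for p, r in zip(predicted, reference)) / len(reference)
    )

perfect = rmse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])  # → 0.0
```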

18 pages, 1777 KiB  
Article
Machine Learning in Sensory Analysis of Mead—A Case Study: Ensembles of Classifiers
by Krzysztof Przybył, Daria Cicha-Wojciechowicz, Natalia Drabińska and Małgorzata Anna Majcher
Molecules 2025, 30(15), 3199; https://doi.org/10.3390/molecules30153199 - 30 Jul 2025
Abstract
The aim was to explore using machine learning (including cluster mapping and k-means methods) to classify types of mead based on sensory analysis and aromatic compounds. Machine learning is a modern tool that helps with detailed analysis, especially because verifying aromatic compounds is challenging. In the first stage, a cluster map analysis was conducted, allowing for the exploratory identification of the most characteristic features of mead. Based on this, k-means clustering was performed to evaluate how well the identified sensory features align with logically consistent groups of observations. In the next stage, experiments were carried out to classify the type of mead using algorithms such as Random Forest (RF), adaptive boosting (AdaBoost), Bootstrap aggregation (Bagging), K-Nearest Neighbors (KNN), and Decision Tree (DT). The analysis revealed that the RF and KNN algorithms were the most effective in classifying mead based on sensory characteristics, achieving the highest accuracy. In contrast, the AdaBoost algorithm consistently produced the lowest accuracy results. However, the Decision Tree algorithm achieved the highest accuracy value (0.909), demonstrating its potential for precise classification based on aroma characteristics. The error matrix analysis also indicated that acacia mead was easier for the algorithms to identify than tilia or buckwheat mead. The results show the potential of combining an exploratory approach (cluster map with the k-means method) with machine learning. It is also important to focus on selecting and optimizing classification models used in practice because, as the results so far indicate, choosing the right algorithm greatly affects the success of mead identification.
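The k-means step used above alternates between assigning each sample to its nearest centroid and recomputing centroids from the assigned members. A plain-Python sketch; the fixed iteration count and first-k initialization are simplifications for readability, not the authors' setup (practical implementations use random or k-means++ seeding and a convergence test):

```python
def kmeans(points, k, iters=20):
    """Basic k-means on numeric feature tuples: assign points to the
    nearest centroid by squared distance, then recompute centroids."""
    centroids = [list(p) for p in points[:k]]  # naive seeding
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid for each point.
        for i, p in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        # Update step: mean of each cluster's members.
        for c in range(k):
            members = [p for i, p in enumerate(points) if labels[i] == c]
            if members:
                centroids[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return labels, centroids
```

On two well-separated groups of sensory feature vectors, the assignments settle into the two obvious clusters within a couple of iterations.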
(This article belongs to the Special Issue Analytical Technologies and Intelligent Applications in Future Food)

26 pages, 7736 KiB  
Article
Integrating Remote Sensing and Ground Data to Assess the Effects of Subsoiling on Drought Stress in Maize and Sunflower Grown on Haplic Chernozem
by Milena Kercheva, Dessislava Ganeva, Zlatomir Dimitrov, Atanas Z. Atanasov, Gergana Kuncheva, Viktor Kolchakov, Plamena Nikolova, Stelian Dimitrov, Martin Nenov, Lachezar Filchev, Petar Nikolov, Galin Ginchev, Maria Ivanova, Iliana Ivanova, Katerina Doneva, Tsvetina Paparkova, Milena Mitova and Martin Banov
Agriculture 2025, 15(15), 1644; https://doi.org/10.3390/agriculture15151644 - 30 Jul 2025
Abstract
In drought-prone regions without irrigation systems, effective agrotechnologies such as subsoiling are crucial for enhancing soil infiltration and water retention. However, the effects of subsoiling can vary depending on crop type and environmental conditions. Despite previous research, there is limited understanding of the contrasting responses of C3 (sunflower) and C4 (maize) crops to subsoiling under drought stress. This study addresses this knowledge gap by assessing the effectiveness of subsoiling as a drought mitigation practice on Haplic Chernozem in Northern Bulgaria, integrating ground-based and remote sensing data. Soil physical parameters, leaf area index (LAI), canopy temperature, crop water stress index (CWSI), soil moisture, and yield were evaluated under both conventional tillage and subsoiling for the two crops. A variety of optical and radar descriptive remote sensing products derived from Sentinel-1 and Sentinel-2 satellite data were calculated for different crop types. Consequently, the use of machine learning, utilizing all the processed remote sensing products, enabled the reasonable prediction of LAI, achieving a coefficient of determination (R2) after a cross-validation greater than 0.42 and demonstrating good agreement with in situ observations. Results revealed differing responses: subsoiling had a positive effect on sunflower, improving LAI, water status, and slightly increasing yield, while it had no positive effect on maize. These findings highlight the importance of crop-specific responses in evaluating subsoiling practices and demonstrate the added value of integrating unmanned aerial systems (UAS) and satellite-based remote sensing data into agricultural drought monitoring.
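The crop water stress index (CWSI) evaluated above is conventionally computed by normalizing canopy temperature between a fully transpiring (wet) baseline and a non-transpiring (dry) baseline. A minimal version; the baseline temperatures in the example are placeholders, not values from this study:

```python
def cwsi(canopy_t, wet_baseline, dry_baseline):
    """Crop water stress index: 0 for a cool, fully transpiring canopy,
    1 for a canopy at the dry (fully stressed) baseline temperature."""
    return (canopy_t - wet_baseline) / (dry_baseline - wet_baseline)

# A canopy halfway between the baselines is moderately stressed.
moderate = cwsi(30.0, wet_baseline=25.0, dry_baseline=35.0)  # → 0.5
```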
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

30 pages, 37977 KiB  
Article
Text-Guided Visual Representation Optimization for Sensor-Acquired Video Temporal Grounding
by Yun Tian, Xiaobo Guo, Jinsong Wang and Xinyue Liang
Sensors 2025, 25(15), 4704; https://doi.org/10.3390/s25154704 - 30 Jul 2025
Abstract
Video temporal grounding (VTG) aims to localize a semantically relevant temporal segment within an untrimmed video based on a natural language query. The task continues to face challenges arising from cross-modal semantic misalignment, which is largely attributed to redundant visual content in sensor-acquired video streams, linguistic ambiguity, and discrepancies in modality-specific representations. Most existing approaches rely on intra-modal feature modeling, processing video and text independently throughout the representation learning stage. However, this isolation undermines semantic alignment by neglecting the potential of cross-modal interactions. In practice, a natural language query typically corresponds to spatiotemporal content in video signals collected through camera-based sensing systems, encompassing a particular sequence of frames and its associated salient subregions. We propose a text-guided visual representation optimization framework tailored to enhance semantic interpretation over video signals captured by visual sensors. This framework leverages textual information to focus on spatiotemporal video content, thereby narrowing the cross-modal gap. Built upon the unified cross-modal embedding space provided by CLIP, our model leverages video data from sensing devices to structure representations and introduces two dedicated modules to semantically refine visual representations across spatial and temporal dimensions. First, we design a Spatial Visual Representation Optimization (SVRO) module to learn spatial information within intra-frames. It selects salient patches related to the text, capturing more fine-grained visual details. Second, we introduce a Temporal Visual Representation Optimization (TVRO) module to learn temporal relations from inter-frames. Temporal triplet loss is employed in TVRO to enhance attention on text-relevant frames and capture clip semantics. Additionally, a self-supervised contrastive loss is introduced at the clip–text level to improve inter-clip discrimination by maximizing semantic variance during training. Experiments on Charades-STA, ActivityNet Captions, and TACoS, widely used benchmark datasets, demonstrate that our method outperforms state-of-the-art methods across multiple metrics.
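The temporal triplet loss mentioned above has the standard margin form: the anchor should sit closer to the text-relevant (positive) frame feature than to an irrelevant (negative) one, by at least a margin. A sketch on toy feature vectors (the Euclidean distance and margin value are generic choices, not details from the paper):

```python
def triplet_loss(anchor, positive, negative, margin=1.0):
    """Margin-based triplet loss: zero once the positive is closer to the
    anchor than the negative by at least `margin`, positive otherwise."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

    return max(0.0, dist(anchor, positive) - dist(anchor, negative) + margin)

satisfied = triplet_loss([0.0, 0.0], [0.0, 1.0], [5.0, 0.0])  # → 0.0
violated = triplet_loss([0.0, 0.0], [3.0, 4.0], [0.0, 1.0])   # → 5.0
```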
(This article belongs to the Section Sensing and Imaging)

18 pages, 4452 KiB  
Article
Upper Limb Joint Angle Estimation Using a Reduced Number of IMU Sensors and Recurrent Neural Networks
by Kevin Niño-Tejada, Laura Saldaña-Aristizábal, Jhonathan L. Rivas-Caicedo and Juan F. Patarroyo-Montenegro
Electronics 2025, 14(15), 3039; https://doi.org/10.3390/electronics14153039 - 30 Jul 2025
Abstract
Accurate estimation of upper-limb joint angles is essential in biomechanics, rehabilitation, and wearable robotics. While inertial measurement units (IMUs) offer portability and flexibility, systems requiring multiple inertial sensors can be intrusive and complex to deploy. In contrast, optical motion capture (MoCap) systems provide precise tracking but are constrained to controlled laboratory environments. This study presents a deep learning-based approach for estimating shoulder and elbow joint angles using only three IMU sensors positioned on the chest and both wrists, validated against reference angles obtained from a MoCap system. The input data includes Euler angles, accelerometer, and gyroscope data, synchronized and segmented into sliding windows. Two recurrent neural network architectures, Convolutional Neural Network with Long Short-Term Memory (CNN-LSTM) and Bidirectional LSTM (BLSTM), were trained and evaluated using identical conditions. The CNN component enabled the LSTM to extract spatial features that enhance sequential pattern learning, improving angle reconstruction. Both models achieved accurate estimation performance: CNN-LSTM yielded lower Mean Absolute Error (MAE) in smooth trajectories, while BLSTM provided smoother predictions but underestimated some peak movements, especially in the primary axes of rotation. These findings support the development of scalable, deep learning-based wearable systems and contribute to future applications in clinical assessment, sports performance analysis, and human motion research.
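The sliding-window segmentation of the synchronized IMU streams can be sketched generically; the window and step sizes below are illustrative, not the paper's settings:

```python
def sliding_windows(samples, window, step):
    """Segment a synchronized sensor stream into fixed-length, possibly
    overlapping windows, the usual input shape for recurrent models."""
    return [samples[i:i + window]
            for i in range(0, len(samples) - window + 1, step)]

# 10 samples, window of 4, 50% overlap -> 4 windows.
windows = sliding_windows(list(range(10)), window=4, step=2)
```

Each window (here a list of sample indices, in practice a window of Euler-angle, accelerometer, and gyroscope readings) becomes one training sequence for the CNN-LSTM or BLSTM.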
(This article belongs to the Special Issue Wearable Sensors for Human Position, Attitude and Motion Tracking)

16 pages, 5655 KiB  
Article
A Multi-Branch Deep Learning Framework with Frequency–Channel Attention for Liquid-State Recognition
by Minghao Wu, Jiajun Zhou, Shuaiyu Yang, Hao Wang, Xiaomin Wang, Haigang Gong and Ming Liu
Electronics 2025, 14(15), 3028; https://doi.org/10.3390/electronics14153028 - 29 Jul 2025
Abstract
In the industrial production of polytetrafluoroethylene (PTFE), accurately recognizing the liquid state within the coagulation vessel is critical to achieving better product quality and higher production efficiency. However, the complex and subtle changes in the coagulation process pose significant challenges for traditional sensing methods, calling for more reliable visual approaches that can handle varying scales and dynamic state changes. This study proposes a multi-branch deep learning framework for classifying the liquid state of PTFE emulsions based on high-resolution images captured in real-world factory conditions. The framework incorporates multi-scale feature extraction through a three-branch network and introduces a frequency–channel attention module to enhance feature discrimination. To address optimization challenges across branches, contrastive learning is employed for deep supervision, encouraging consistent and informative feature learning. The experimental results show that the proposed method significantly improves classification accuracy, achieving a mean F1-score of 94.3% across key production states. This work demonstrates the potential of deep learning-based visual classification methods for improving automation and reliability in industrial production.
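The mean F1-score reported above averages, over the production states, the harmonic mean of precision and recall. For reference, the per-class computation from true positive, false positive, and false negative counts:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# 8 correct detections of a state, 2 false alarms, 2 misses -> F1 = 0.8.
per_class = f1_score(tp=8, fp=2, fn=2)
```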

13 pages, 1945 KiB  
Article
An Explainable AI Exploration of the Machine Learning Classification of Neoplastic Intracerebral Hemorrhage from Non-Contrast CT
by Sophia Schulze-Weddige, Georg Lukas Baumgärtner, Tobias Orth, Anna Tietze, Michael Scheel, David Wasilewski, Mike P. Wattjes, Uta Hanning, Helge Kniep, Tobias Penzkofer and Jawed Nawabi
Cancers 2025, 17(15), 2502; https://doi.org/10.3390/cancers17152502 - 29 Jul 2025
Abstract
Intracerebral hemorrhage (ICH) associated with primary and metastatic brain tumors presents a significant challenge in neuro-oncology due to the substantial risk of complications [...]
(This article belongs to the Special Issue Medical Imaging and Artificial Intelligence in Cancer)