Search Results (414)

Search Parameters:
Keywords = multimodal integrated sensors

29 pages, 1036 KB  
Article
ConMonity: An IoT-Enabled LoRa/LTE-M Platform for Multimodal, Real-Time Monitoring of Concrete Curing in Construction Environments
by Ivars Namatēvs, Gatis Gaigals and Kaspars Ozols
Sensors 2026, 26(1), 14; https://doi.org/10.3390/s26010014 - 19 Dec 2025
Abstract
Monitoring the curing process of concrete remains a challenging and critical aspect of modern construction, often hindered by labour-intensive, invasive, and inflexible methods. The primary aim of this study is to develop an integrated IoT-enabled platform for automated, real-time monitoring of concrete curing, using a combination of LoRa-based sensor networks and an LTE-M backhaul. The resulting ConMonity system employs embedded multi-sensor nodes—capable of measuring strain, temperature, and humidity—connected via an energy-efficient, TDMA-based LoRa wireless protocol to an LTE-M gateway with cloud-based management and analytics. By employing a robust architecture with battery-powered embedded nodes and adaptive firmware, ConMonity enables multi-modal, multi-site assessments and demonstrates stable, autonomous operation over multi-month field deployments. Measured data are transmitted in a compact binary MQTT format, optimising cellular bandwidth and allowing secure, remote access via a dedicated mobile application. Operation in laboratory construction environments indicates that ConMonity outperforms conventional and earlier wireless monitoring systems in scalability and automation, delivering actionable real-time data and proactive alerts. The platform establishes a foundation for intelligent, scalable, and cost-effective monitoring of concrete curing, with future work focused on extending sensor modalities and enhancing resilience under diverse site conditions. Full article
(This article belongs to the Section Sensor Networks)
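The "compact binary MQTT format" mentioned in the abstract can be illustrated with a short sketch. The field layout below (node ID, strain, temperature, humidity) and the `struct` format string are assumptions for illustration only; the actual ConMonity wire format is not specified in the abstract.

```python
import struct

# Hypothetical compact payload: node ID (uint16), strain in microstrain (int32),
# temperature in centi-degrees C (int16), relative humidity in tenths of a percent (uint16).
FMT = ">HihH"  # big-endian, 10 bytes total

def pack_reading(node_id, strain_ue, temp_c, rh_pct):
    """Encode one sensor reading into a fixed-size binary payload."""
    return struct.pack(FMT, node_id, strain_ue,
                       round(temp_c * 100), round(rh_pct * 10))

def unpack_reading(payload):
    """Decode a payload back into engineering units."""
    node_id, strain_ue, temp_raw, rh_raw = struct.unpack(FMT, payload)
    return node_id, strain_ue, temp_raw / 100.0, rh_raw / 10.0
```

A 10-byte payload per reading is roughly an order of magnitude smaller than an equivalent JSON object, which is the kind of saving that matters on a metered LTE-M uplink.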
14 pages, 2983 KB  
Article
Lightweight Multimodal Fusion for Urban Tree Health and Ecosystem Services
by Abror Buriboev, Djamshid Sultanov, Ilhom Rahmatullaev, Ozod Yusupov, Erali Eshonqulov, Dilshod Bekmuradov, Nodir Egamberdiev and Andrew Jaeyong Choi
Sensors 2026, 26(1), 7; https://doi.org/10.3390/s26010007 - 19 Dec 2025
Abstract
Rapid urban expansion has heightened the demand for accurate, scalable, and real-time methods to assess tree health and the provision of ecosystem services. Urban trees are major contributors to air-quality improvement and climate change mitigation; however, their monitoring is mostly constrained to inherently subjective and inefficient manual inspections. In order to break this barrier, we put forward a lightweight multimodal deep-learning framework that fuses RGB imagery with environmental and biometric sensor data for a combined evaluation of tree-health condition as well as the estimation of daily oxygen production and CO2 absorption. The proposed architecture features an EfficientNet-B0 vision encoder upgraded with Mobile Inverted Bottleneck Convolutions (MBConv) and a squeeze-and-excitation attention mechanism, along with a small multilayer perceptron for sensor processing. A common multimodal representation facilitates a three-task learning set-up, thus allowing simultaneous classification and regression within a single model. Our experiments with a carefully curated dataset of segmented tree images accompanied by synchronized sensor measurements show that our method attains a health-classification accuracy of 92.03% while also lowering the regression error for O2 (MAE = 1.28) and CO2 (MAE = 1.70) in comparison with unimodal and multimodal baselines. The proposed architecture, with its 5.4 million parameters and an inference latency of 38 ms, can be readily deployed on edge devices and real-time monitoring platforms. Full article
24 pages, 2676 KB  
Article
The Adaptive Lab Mentor (ALM): An AI-Driven IoT Framework for Real-Time Personalized Guidance in Hands-On Engineering Education
by Md Shakib Hasan, Awais Ahmed, Nouman Rasool, MST Mosaddeka Naher Jabe, Xiaoyang Zeng and Farman Ali Pirzado
Sensors 2025, 25(24), 7688; https://doi.org/10.3390/s25247688 - 18 Dec 2025
Abstract
Engineering education is grounded in experiential learning, yet in laboratory conditions it is difficult to give students feedback that is both real-time and personalized. This paper proposes an innovative approach to such laboratories, called the Adaptive Lab Mentor (ALM), which combines Artificial Intelligence (AI), Internet of Things (IoT), and sensor technologies to create an intelligent, customized laboratory setting. ALM is built on a new real-time multimodal sensor fusion model: a sensor-instrumented laboratory records live electrical measurements (voltage and current), which are processed in parallel with symbolic component measurements (target resistance) by a lightweight, dual-input Convolutional Neural Network (1D-CNN) running on an edge device. In this initial validation, visual context is represented as a symbolic target value, which establishes a pathway for the future integration of full computer vision. The architecture enables monitoring of student progress, rapid error diagnosis, and adaptive feedback based on contextual information. To test this strategy, a high-fidelity model of an Ohm Laboratory was developed, and LTspice was used to generate a large set of current and voltage time series covering various circuit states. The trained model achieved 93.3% test accuracy, demonstrating the feasibility of the proposed system. Compared with current Intelligent Tutoring Systems, ALM is grounded in physical sensing, real-time edge AI inference, and adaptive, safety-sensitive feedback throughout hands-on engineering demonstrations. The ALM framework serves as a blueprint for a new class of smart laboratory assistants. Full article
(This article belongs to the Special Issue AI and Sensors in Computer-Based Educational Systems)
48 pages, 6449 KB  
Review
Flexible Sensing for Precise Lithium-Ion Battery Swelling Monitoring: Mechanisms, Integration Strategies, and Outlook
by Yusheng Lei, Jinwei Zhao, Yihang Wang, Chenyang Xue and Libo Gao
Sensors 2025, 25(24), 7677; https://doi.org/10.3390/s25247677 - 18 Dec 2025
Abstract
The expansion force generated by lithium-ion batteries during charge–discharge cycles is a key indicator of their structural safety and health. Recently, flexible pressure-sensing technologies have emerged as promising solutions for in situ swelling monitoring, owing to their high flexibility, sensitivity and integration capability. This review provides a systematic summary of progress in this field. We first discuss the mechanisms of battery swelling and the principles of conventional measurement methods, then compare their accuracy, dynamic response and environmental adaptability. Subsequently, the main flexible pressure-sensing mechanisms are categorized, including piezoresistive, capacitive, piezoelectric and triboelectric types, and their material designs, structural configurations and sensing behaviors are discussed. Building on this, we examine integration strategies for flexible pressure sensors in battery systems, covering surface-mounted and embedded approaches at the cell level, as well as array-based and distributed schemes at the module level. A comparative analysis highlights the differences in installation constraints and monitoring capabilities between these approaches. We also summarize the characteristics of swelling signals and recent advances in data processing techniques, including AI-assisted feature extraction, fault detection and health-state correlation. Despite their promise, challenges such as long-term material stability and signal interference remain. Future research is expected to focus on high-performance sensing materials, multimodal sensing fusion and intelligent data processing, with the aim of further advancing the integration of flexible sensing technologies into battery management systems and enhancing early warning and safety protection capabilities. Full article
20 pages, 4309 KB  
Article
Targetless Radar–Camera Calibration via Trajectory Alignment
by Ozan Durmaz and Hakan Cevikalp
Sensors 2025, 25(24), 7574; https://doi.org/10.3390/s25247574 - 13 Dec 2025
Viewed by 311
Abstract
Accurate extrinsic calibration between radar and camera sensors is essential for reliable multi-modal perception in robotics and autonomous navigation. Traditional calibration methods often rely on artificial targets such as checkerboards or corner reflectors, which can be impractical in dynamic or large-scale environments. This study presents a fully targetless calibration framework that estimates the rigid spatial transformation between radar and camera coordinate frames by aligning their observed trajectories of a moving object. The proposed method integrates You Only Look Once version 5 (YOLOv5)-based 3D object localization for the camera stream with Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and Random Sample Consensus (RANSAC) filtering for sparse and noisy radar measurements. A passive temporal synchronization technique, based on Root Mean Square Error (RMSE) minimization, corrects timestamp offsets without requiring hardware triggers. Rigid transformation parameters are computed using Kabsch and Umeyama algorithms, ensuring robust alignment even under millimeter-wave (mmWave) radar sparsity and measurement bias. The framework is experimentally validated in an indoor OptiTrack-equipped laboratory using a Skydio 2 drone as the dynamic target. Results demonstrate sub-degree rotational accuracy and decimeter-level translational error (approximately 0.12–0.27 m depending on the metric), with successful generalization to unseen motion trajectories. The findings highlight the method’s applicability for real-world autonomous systems requiring practical, markerless multi-sensor calibration. Full article
(This article belongs to the Section Radar Sensors)
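The trajectory-alignment step the abstract attributes to the Kabsch and Umeyama algorithms can be sketched in a few lines of NumPy. The function below is the standard SVD-based Kabsch solution on synchronized point pairs, not the authors' implementation; variable names and the synthetic usage are illustrative.

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    src, dst: (N, 3) arrays of matched trajectory points (e.g. radar vs.
    camera detections after temporal synchronization).
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

In practice the radar points would first pass through the DBSCAN/RANSAC filtering and RMSE-based time-offset correction described in the abstract before being fed to this solver.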
31 pages, 2824 KB  
Article
A Digital Health Platform for Remote and Multimodal Monitoring in Neurodegenerative Diseases
by Adrian-Victor Vevera, Marilena Ianculescu and Adriana Alexandru
Future Internet 2025, 17(12), 571; https://doi.org/10.3390/fi17120571 - 13 Dec 2025
Viewed by 234
Abstract
Continuous and personalized monitoring are beneficial for patients suffering from neurodegenerative diseases such as Alzheimer’s disease, Parkinson’s disease and multiple sclerosis. However, such levels of monitoring are seldom ensured by traditional models of care. This paper presents NeuroPredict, a secure edge–cloud Internet of Medical Things (IoMT) platform that addresses this problem by integrating commercial wearables and in-house sensors with cognitive and behavioral evaluations. The NeuroPredict platform links high-frequency physiological signals with periodic cognitive tests through the use of a modular architecture with lightweight device connectivity, a semantic integration layer for timestamp alignment and feature harmonization across heterogeneous streams, and multi-timescale data fusion. Its use of encrypted transport and storage, role-based access control, token-based authentication, identifier separation, and GDPR-aligned governance addresses security and privacy concerns. Moreover, the platform’s user interface was built by considering human-centered design principles and includes role-specific dashboards, alerts, and patient-facing summaries that are meant to encourage engagement and decision-making for patients and healthcare providers. Experimental evaluation demonstrated the NeuroPredict platform’s data acquisition reliability, coherence in multimodal synchronization, and correctness in role-based personalization and reporting. The NeuroPredict platform provides a smart system infrastructure for eHealth and remote monitoring in neurodegenerative care, aligned with priorities on wearables/IoMT integration, data security and privacy, interoperability, and human-centered design. Full article
(This article belongs to the Special Issue eHealth and mHealth—2nd Edition)
21 pages, 542 KB  
Systematic Review
Application of Augmented Reality Technology as a Dietary Monitoring and Control Measure Among Adults: A Systematic Review
by Gabrielle Victoria Gonzalez, Bingjing Mao, Ruxin Wang, Wen Liu, Chen Wang and Tung Sung Tseng
Nutrients 2025, 17(24), 3893; https://doi.org/10.3390/nu17243893 - 12 Dec 2025
Viewed by 160
Abstract
Background/Objectives: Traditional dietary monitoring methods such as 24 h recalls rely on self-report, leading to recall bias and underreporting. Similarly, dietary control approaches, including portion control and calorie restriction, depend on user accuracy and consistency. Augmented reality (AR) offers a promising alternative for improving dietary monitoring and control by enhancing engagement, feedback accuracy, and user learning. This systematic review aimed to examine how AR technologies are implemented to support dietary monitoring and control and to evaluate their usability and effectiveness among adults. Methods: A systematic search of PubMed, CINAHL, and Embase identified studies published between 2000 and 2025 that evaluated augmented reality for dietary monitoring and control among adults. Eligible studies included peer-reviewed and gray literature in English. Data extraction focused on study design, AR system type, usability, and effectiveness outcomes. Risk of bias was assessed using the Cochrane RoB 2 tool for randomized controlled trials and ROBINS-I for non-randomized studies. Results: Thirteen studies met inclusion criteria. Since the evidence base was heterogeneous in design, outcomes, and measurement, findings were synthesized qualitatively rather than pooled. Most studies utilized smartphone-based AR systems for portion size estimation, nutrition education, and behavior modification. Usability and satisfaction varied by study: one study found that 80% of participants (N = 15) were satisfied or extremely satisfied with the AR tool, another reported that 100% of users (N = 26) rated the app easy to use, and a separate study observed a 72.5% agreement rate on ease of use among participants (N = 40). Several studies also examined portion size estimation, with one reporting a 12.2% improvement in estimation accuracy and another showing a −6% estimation error, though a 12.7% overestimation in energy intake persisted. Additional outcomes related to behavior, dietary knowledge, and physiological or psychological effects were also identified across the review. Common limitations included difficulty aligning markers, overestimation of amorphous foods, and short intervention durations. Despite these promising findings, the existing evidence is limited by small sample sizes, heterogeneity in intervention and device design, short study durations, and variability in usability and accuracy measures. These limitations warrant cautious interpretation of the findings. Conclusions: AR technologies show promise for improving dietary monitoring and control by enhancing accuracy, engagement, and behavior change. Future research should focus on longitudinal designs, diverse populations, and integration with multimodal sensors and artificial intelligence. Full article
(This article belongs to the Section Nutrition Methodology & Assessment)
15 pages, 2248 KB  
Article
A Multimodal Sensor Fusion and Dynamic Prediction-Based Personnel Intrusion Detection System for Crane Operations
by Fengyu Wu, Maoqian Hu, Fangcheng Xie, Wenxie Bu and Zongxi Zhang
Processes 2025, 13(12), 4017; https://doi.org/10.3390/pr13124017 - 12 Dec 2025
Viewed by 204
Abstract
With the rapid development of industries such as construction and port hoisting, the operational safety of truck cranes in crowded areas has become a critical issue. Under complex working conditions, traditional monitoring methods are often plagued by issues such as compromised image quality, increased parallax computation errors, delayed fence response times, and inadequate accuracy in dynamic target recognition. To address these challenges, this study proposes a personnel intrusion detection system based on multimodal sensor fusion and dynamic prediction. The system utilizes the combined application of a binocular camera and a lidar, integrates the spatiotemporal attention mechanism and an improved LSTM network to predict the movement trajectory of the crane boom in real time, and generates a dynamic 3D fence with an advance margin. It classifies intrusion risks by matching the spatiotemporal prediction of pedestrian trajectories with the fence boundaries, and finally generates early warning information. The experimental results show that this method can significantly improve the detection accuracy of personnel intrusion under complex environments such as rain, fog, and strong light. This system provides a feasible solution for the safety monitoring of truck crane operations and significantly enhances operational safety. Full article
(This article belongs to the Section Chemical Processes and Systems)
22 pages, 3733 KB  
Article
LightEdu-Net: Noise-Resilient Multimodal Edge Intelligence for Student-State Monitoring in Resource-Limited Environments
by Chenjia Huang, Yanli Chen, Bocheng Zhou, Xiuqi Cai, Ziying Zhai, Jiarui Zhang and Yan Zhan
Sensors 2025, 25(24), 7529; https://doi.org/10.3390/s25247529 - 11 Dec 2025
Viewed by 247
Abstract
Multimodal perception for student-state monitoring is difficult to deploy in rural classrooms because sensors are noisy and computing resources are highly constrained. This work targets these challenges by enabling noise-resilient, multimodal, real-time student-state recognition on low-cost edge devices. We propose LightEdu-Net, a sensor-noise-adaptive Transformer-based multimodal network that integrates visual, physiological, and environmental signals in a unified lightweight architecture. The model incorporates three key components: a sensor noise adaptive module (SNAM) to suppress degraded sensor inputs, a cross-modal attention fusion module (CMAF) to capture complementary temporal dependencies across modalities, and an edge-aware knowledge distillation module (EAKD) to transfer knowledge from high-capacity teachers to an embedded-friendly student network. We construct a multimodal behavioral dataset from several rural schools and formulate student-state recognition as a multimodal classification task with explicit evaluation of noise robustness and edge deployability. Experiments show that LightEdu-Net achieves 92.4% accuracy with an F1-score of 91.4%, outperforming representative lightweight CNN and Transformer baselines. Under a noise level of 0.3, accuracy drops by only 1.1%, indicating strong robustness to sensor degradation. Deployment experiments further show that the model operates in real time on Jetson Nano with a latency of 42.8 ms (23.4 FPS) and maintains stable high accuracy on Raspberry Pi 4B and Intel NUC platforms. Beyond technical performance, the proposed system provides a low-cost and quantifiable mechanism for capturing fine-grained learning process indicators, offering new data support for educational economics studies on instructional efficiency and resource allocation in underdeveloped regions. Full article
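The edge-aware knowledge distillation module (EAKD) is described only at a high level. The sketch below shows the standard temperature-scaled distillation loss (Hinton-style) that such a teacher-to-student transfer typically builds on, not the paper's exact formulation; all names and hyperparameters are illustrative.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend hard-label cross-entropy with KL divergence to the
    temperature-softened teacher distribution."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    # T^2 rescaling keeps soft-target gradients comparable across temperatures.
    return float(np.mean(alpha * ce + (1 - alpha) * (T ** 2) * kl))
```

The student network trained against such a loss is what would then be quantized or pruned for the Jetson Nano / Raspberry Pi targets the abstract benchmarks.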
18 pages, 268 KB  
Review
AI-Enabled Technologies and Biomarker Analysis for the Early Identification of Autism and Related Neurodevelopmental Disorders
by Rohan Patel, Beth A. Jerskey, Jennifer Shannon, Neelkamal Soares and Jason M. Fogler
Children 2025, 12(12), 1670; https://doi.org/10.3390/children12121670 - 9 Dec 2025
Viewed by 462
Abstract
Background: Autism spectrum disorder (ASD) and related neurodevelopmental conditions are a significant public health concern, with diagnostic delays hindering timely intervention. Traditional assessments often lead to waiting times exceeding a year. Advances in artificial intelligence (AI) and biomarker-based screening offer objective, efficient alternatives for early identification. Objective: This review synthesizes the latest evidence for AI-enabled technologies aimed at improving early ASD identification. Modalities covered include eye-tracking, acoustic analysis, video- and sensor-based behavioral screening, neuroimaging, molecular/genetic assays, electronic health record prediction, and home-based digital applications (apps). This manuscript critically evaluates their diagnostic accuracy, clinical feasibility, scalability, and implementation hurdles, while highlighting regulatory and ethical considerations. Findings: Across modalities, machine learning approaches demonstrate strong accuracy and specificity in ASD detection. Eye-tracking and voice-acoustic classifiers reliably differentiate autistic children, while home-video analysis and Electronic Health Record (EHR)-based algorithms show promise for scalable screening. Multimodal integration significantly enhances predictive power. Several tools have received Food and Drug Administration clearance, signaling momentum for wider clinical deployment. Issues persist regarding equity, data privacy, algorithmic bias, and real-world performance. Conclusions: AI-enabled screeners and diagnostic aids have the potential to transform ASD detection and access to early intervention. Integrating these technologies into clinical workflows must safeguard equity, privacy, and clinician oversight. Ongoing longitudinal research and robust regulatory frameworks are essential to ensure these advances benefit diverse populations and deliver meaningful outcomes for children and families. Full article
33 pages, 7725 KB  
Review
Self-Powered Strain Sensing System: A Cutting-Edge Review Paving the Way for Autonomous Wearable Electronics
by Hui Song
Polymers 2025, 17(24), 3256; https://doi.org/10.3390/polym17243256 - 6 Dec 2025
Viewed by 639
Abstract
Self-powered strain sensing technology represents a pivotal frontier in overcoming the energy constraints of wearable electronics, thereby enabling their long-term intelligence and operational autonomy. This review systematically summarizes recent advances in integrated strain sensing systems, with a particular focus on three primary strategies for achieving self-powered functionality: integration with energy storage devices (e.g., flexible supercapacitors and microbatteries); integration with energy harvesters (e.g., triboelectric and piezoelectric nanogenerators); and advanced systems that synergistically combine energy harvesting, storage, and management modules. The article begins by outlining the fundamental working mechanisms and key performance parameters of strain sensors. It then provides a detailed analysis of the material systems, innovative structural designs, operational mechanisms, and applications in health monitoring and human-computer interaction associated with the different self-powered strategies. Finally, the review critically examines the persistent challenges in this field, including energy balance, mechanical robustness, and environmental stability, and offers perspectives on future research directions such as multimodal energy harvesting, intelligent data processing, and the development of biocompatible materials. This work aims to serve as a valuable reference for advancing the practical implementation of truly autonomous and wearable strain sensing systems. Full article
(This article belongs to the Special Issue Polymeric Materials for Flexible Electronics)
20 pages, 7111 KB  
Article
Machine Learning-Assisted Simultaneous Measurement of Salinity and Temperature Using OCHFI Cascaded Sensor Structure
by Anirban Majee, Koustav Dey, Nikhil Vangety and Sourabh Roy
Photonics 2025, 12(12), 1203; https://doi.org/10.3390/photonics12121203 - 5 Dec 2025
Viewed by 345
Abstract
A compact offset-coupled hybrid fiber interferometer (OCHFI) is designed and experimentally demonstrated for simultaneous measurement of salinity and temperature. The sensor integrates multimode fiber (MMF) and offset no-core fiber (NCF) through an intermediate single-mode fiber (SMF), producing distinct interference patterns for multi-parameter sensing. The optimal SMF length was determined through COMSOL simulations (version 6.2) and fixed at 50 cm to achieve stable and well-separated interference dips. Fast Fourier Transform analysis confirmed that the modal behavior originates from the single-mode-multimode-single-mode (SMS) and single-mode-no-core-single-mode (SNS) segments. Experimentally, Dip 1 exhibits a salinity sensitivity of 0.62206 nm/‰, while Dip 2 shows a temperature sensitivity of 0.09318 nm/°C, both with high linearity (R2 > 0.99), excellent repeatability, and stability, with fluctuations within 0.15 nm over 60 min. To remove cross-sensitivity, both the transfer matrix method and an Artificial Neural Network (ANN) model were employed. The ANN approach significantly enhanced prediction accuracy (R2 = 0.9999), with an approximately 539-fold RMSE improvement for salinity and a 56-fold improvement for temperature compared with the analytical model. The proposed OCHFI sensor provides a compact, low-cost, and intelligent solution for precise simultaneous salinity and temperature measurement, with strong potential for applications in marine, chemical, and industrial process control. Full article
(This article belongs to the Special Issue Optical Fiber Sensors: Shedding More Light with Machine Learning)
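Removing cross-sensitivity with the transfer (sensitivity) matrix method amounts to inverting a 2×2 linear system relating the two dip shifts to salinity and temperature changes. In the sketch below, the diagonal sensitivities come from the abstract; the off-diagonal cross-sensitivities are placeholder values, since the abstract does not report them.

```python
import numpy as np

# Sensitivity matrix K: wavelength shifts [dL1, dL2] = K @ [dS, dT].
# Diagonal entries from the abstract (0.62206 nm per permille salinity for
# Dip 1, 0.09318 nm/degC temperature for Dip 2); the cross terms here are
# placeholder values for illustration only.
K = np.array([[0.62206, 0.012],
              [0.045,   0.09318]])

def decouple(dL1, dL2):
    """Invert the 2x2 sensitivity matrix to recover (dSalinity, dTemperature)
    from the two measured dip shifts in nm."""
    return np.linalg.solve(K, np.array([dL1, dL2]))
```

The ANN model mentioned in the abstract replaces this fixed linear inversion with a learned nonlinear mapping, which is where the reported RMSE gains come from.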
17 pages, 5283 KB  
Article
VAE-Based Rhythm Disturbance Index Correlates with Bilateral Symmetry Breakdown in Human Motion
by Yadong Liang, Jingsong Liu, Xilin Cui, Xuanyong Zhu, Jie Liu and Xingbin Du
Symmetry 2025, 17(12), 2092; https://doi.org/10.3390/sym17122092 - 5 Dec 2025
Viewed by 255
Abstract
Rhythm disturbances during human exercise represent a critical challenge for both physiological monitoring and athlete safety. To address this, a structure-enhanced β-TCVAE framework was proposed that derives a Rhythm Disturbance Index (RDI) from multimodal wearable sensor signals. RDI demonstrated a strong correlation with bilateral imbalance (r = 0.838, R2 = 0.702) and achieved high discriminative performance (ROC-AUC = 0.823). Importantly, its weak and non-significant correlation with heart rate (r = 0.0569, p > 0.05) supported independence from cardiovascular load, underscoring its specificity to motor rhythm rather than systemic exertion. Analyses conducted on multimodal datasets further validated the robustness of this correlation, showing that RDI consistently aligns with disruptions in locomotor symmetry even after controlling for heart rate. This quantifiable coupling between rhythmic instability and symmetry loss positions RDI as a dual correlational indicator, sensitively reflecting both neuromuscular rhythm irregularities and axial imbalance. Such dual insight enables continuous and objective monitoring of locomotor quality, empowering coaches, clinicians, and sports scientists to tailor training strategies, optimize performance, and reduce the risk of injury. By integrating advanced variational reasoning with real-time wearable sensing, the proposed framework offers an evidence-based step forward in precision monitoring and risk assessment for athletes. Full article
(This article belongs to the Special Issue Symmetry/Asymmetry in Sport Biomechanics)
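The headline statistics in this abstract (Pearson r, R2, ROC-AUC) are standard association and discrimination metrics. A minimal pure-Python sketch follows; the `rdi`, `imbalance`, and `disturbed` values are invented toy data for illustration, not measurements from the paper:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def roc_auc(scores, labels):
    """ROC-AUC via the rank-sum (Mann-Whitney) formulation: the
    probability that a randomly chosen positive outscores a
    randomly chosen negative (ties count half)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical RDI-like scores vs. a bilateral-imbalance measure
# and a binary "disturbed rhythm" label (all values invented).
rdi = [0.1, 0.3, 0.2, 0.7, 0.8, 0.9, 0.4, 0.6]
imbalance = [0.15, 0.35, 0.25, 0.65, 0.75, 0.95, 0.45, 0.55]
disturbed = [0, 0, 0, 1, 1, 1, 0, 1]

r = pearson_r(rdi, imbalance)
print(f"r = {r:.3f}, R2 = {r * r:.3f}, AUC = {roc_auc(rdi, disturbed):.3f}")
```

The rank-sum form of ROC-AUC is equivalent to the area under the empirical ROC curve and avoids an explicit thresholding sweep.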

45 pages, 1164 KB  
Review
Integrating Cutting-Edge Technologies in Food Sensory and Consumer Science: Applications and Future Directions
by Dongju Lee, Hyemin Jeon, Yoonseo Kim and Youngseung Lee
Foods 2025, 14(24), 4169; https://doi.org/10.3390/foods14244169 - 5 Dec 2025
Abstract
With the introduction of emerging digital technologies, sensory and consumer science has evolved beyond traditional laboratory-based and self-response-centered sensory evaluations toward more objective assessments that reflect real-world consumption contexts. This review examines recent trends and potential applications in sensory evaluation research focusing on key enabling technologies—artificial intelligence (AI) and machine learning (ML), extended reality (XR), biometrics, and digital sensors. Furthermore, it explores strategies for establishing personalized, multimodal, and intelligent–adaptive sensory evaluation systems through the integration of these technologies, as well as the applicability of sensory evaluation software. Recent studies report that AI/ML models used for sensory or preference prediction commonly achieve RMSE values of approximately 0.04–24.698, with prediction accuracy ranging from 79 to 100% (R2 = 0.643–0.999). In XR environments, presence measured by the IPQ (7-point scale) is generally considered adequate when scores exceed 3. Finally, the review discusses ethical considerations arising throughout data collection, interpretation, and utilization processes and proposes future directions for the advancement of sensory and consumer science research. This systematic literature review aims to identify emerging technologies rather than provide a quantitative meta-analysis and therefore does not cover domain-specific analytical areas such as chemometrics beyond ML approaches or detailed flavor and aroma chemistry.
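The RMSE and R2 ranges cited for AI/ML prediction models are ordinary regression metrics. A minimal pure-Python sketch, using hypothetical 9-point hedonic-scale scores rather than any dataset from the reviewed studies:

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-square error of predictions against observed values."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical panel scores (9-point hedonic scale) and model
# predictions -- illustrative values only.
y_true = [6.2, 7.1, 5.4, 8.0, 6.8]
y_pred = [6.0, 7.3, 5.5, 7.8, 7.0]
print(f"RMSE = {rmse(y_true, y_pred):.3f}, R2 = {r_squared(y_true, y_pred):.3f}")
```

The wide RMSE range reported across studies (0.04–24.698) mainly reflects differing response scales, which is why R2 is the more comparable figure between models.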

19 pages, 2271 KB  
Article
Plasmonic Nanopore Sensing to Probe the DNA Loading Status of Adeno-Associated Viruses
by Scott Renkes, Steven J. Gray, Minjun Kim and George Alexandrakis
Chemosensors 2025, 13(12), 418; https://doi.org/10.3390/chemosensors13120418 - 4 Dec 2025
Abstract
Adeno-associated viruses (AAVs) are a leading vector for gene therapy, yet their clinical utility is limited by the lack of robust quality control methods to distinguish between empty (AAVempty), partially loaded (AAVpartial), and fully DNA loaded (AAVfull) capsids. Current analytical techniques provide partial insights but remain limited in sensitivity, throughput, or resolution. Here we present a multimodal plasmonic nanopore sensor that integrates optical trapping with electrical resistive-pulse sensing to characterize AAV9 capsids at the single-particle level in tens of μL sample volumes and fM range concentrations. As a model system, we employed AAV9 capsids not loaded with DNA, capsids loaded with a self-complementary 4.7 kbp DNA (AAVscDNA), and ones loaded with single-stranded 4.7 kbp DNA (AAVssDNA). Ground-truth validation was performed with analytical ultracentrifugation (AUC). Nanosensor data were acquired concurrently for optical step changes (occurring at AAV trapping and un-trapping) both in transmittance and reflectance geometries, and electrical nanopore resistive pulse signatures, making for a total of five data dimensions. The acquired data were then filtered and clustered by Gaussian mixture models (GMMs), accompanied by spectral clustering stability analysis, to successfully separate between AAV species based on their DNA load status (AAVempty, AAVpartial, AAVfull) and DNA load type (AAVscDNA versus AAVssDNA). The motivation for quantifying the AAVempty and AAVpartial population fractions is that they reduce treatment efficacy and increase immunogenicity. Likewise, the motivation to identify AAVscDNA population fractions is that these have much higher transfection rates. Importantly, the results showed that the nanosensor could differentiate between AAVscDNA and AAVssDNA despite their identical masses. In contrast, AUC could not differentiate between AAVscDNA and AAVssDNA.
An equimolar mixture of AAVscDNA, AAVssDNA, and AAVempty was also measured with the sensor, and the results showed the expected population fractions, supporting the capacity of the method to differentiate AAV load status in heterogeneous solutions. In addition, less common optical and electrical signal signatures were identified in the acquired data, which were attributed to debris, rapid entry and re-entry to the optical trap, or weak optical trap exits, representing critical artifacts to recognize for correct interpretation of the data. Together, these findings establish plasmonic nanopore sensing as a promising platform for quantifying AAV DNA loading status and genome type with the potential to extend ultra-sensitive single-particle characterization beyond the capabilities of existing methods.
(This article belongs to the Special Issue Electrochemical Sensors Based on Various Materials)
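The GMM clustering step described in this abstract can be illustrated with a stripped-down, one-dimensional expectation-maximization fit. This is a pure-Python stand-in for the paper's five-dimensional GMM with spectral clustering stability analysis; the simulated "blockade depth" values and the two population centers are invented for illustration, not taken from the study:

```python
import math
import random

def em_gmm_1d(data, k=2, iters=100):
    """Fit a 1-D Gaussian mixture with k components by EM.
    Returns (weights, means, stds)."""
    mn, mx = min(data), max(data)
    # Deterministic init: means spread evenly over the data range.
    means = [mn + (mx - mn) * (i + 0.5) / k for i in range(k)]
    stds = [(mx - mn) / (2 * k) + 1e-6] * k
    weights = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in data:
            dens = [w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
                    for w, m, s in zip(weights, means, stds)]
            total = sum(dens) or 1e-300
            resp.append([d / total for d in dens])
        # M-step: re-estimate weights, means, stds from responsibilities.
        for j in range(k):
            nj = sum(r[j] for r in resp) or 1e-12
            weights[j] = nj / len(data)
            means[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var = sum(r[j] * (x - means[j]) ** 2 for r, x in zip(resp, data)) / nj
            stds[j] = math.sqrt(var) + 1e-6
    return weights, means, stds

# Simulate two hypothetical single-particle populations (e.g. empty
# vs. loaded capsids) along one signal dimension -- invented values.
rng = random.Random(1)
data = [rng.gauss(0.2, 0.03) for _ in range(200)] + \
       [rng.gauss(0.8, 0.05) for _ in range(200)]
w, m, s = em_gmm_1d(data, k=2)
print("component means:", sorted(round(x, 2) for x in m))
```

In the paper the same idea is applied jointly across the five optical and electrical dimensions, where the mixture components correspond to AAV species rather than to a single simulated signal.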
