Search Results (878)

Search Parameters:
Keywords = best sensor selection

24 pages, 3082 KB  
Article
When Does Geostatistical Interpolation Work? Monthly and Hourly Sensitivity of Ordinary Kriging for Urban Air Pollutant Mapping in Mexico City
by Eva Selene Hernández-Gress and David Conchouso González
Algorithms 2026, 19(3), 213; https://doi.org/10.3390/a19030213 - 12 Mar 2026
Abstract
Urban air quality assessment increasingly relies on spatial interpolation to complement fixed monitoring networks; however, the reliability of geostatistical methods depends strongly on temporal conditions and pollutant characteristics. Despite extensive application, limited attention has been paid to how kriging performance varies across hours of the day and months of the year, particularly when contrasting primary pollutants driven by local emissions with secondary pollutants formed through atmospheric chemistry. This study evaluates the temporal sensitivity of Ordinary Kriging (OK) for mapping urban air pollutants in the Mexico City Metropolitan Area. Using hourly observations from the official air quality monitoring network (2021), we analyze ozone (O3), a secondary pollutant, and sulfur dioxide (SO2), a primary pollutant, under representative diurnal and monthly scenarios. Variogram model selection and predictive performance are assessed through leave-one-out cross-validation and external hold-out validation across multiple temporal blocks and months. Results indicate that kriging performance is highly sensitive to both hour of day and month. For O3, smoother Gaussian variogram structures perform best during peak photochemical conditions, producing coherent regional concentration fields with gradual spatial gradients. In contrast, SO2 exhibits stronger local variability and sharper spatial gradients, favoring exponential variogram models, particularly under stable morning atmospheric conditions associated with primary emission accumulation. Sensitivity analyses further reveal that no single variogram model is universally optimal and that interpolation accuracy depends more on temporal stratification and pollutant behavior than on variogram form alone. These findings demonstrate that geostatistical interpolation is a valuable tool for urban air quality assessment only when temporal sensitivity and pollutant-specific dynamics are explicitly incorporated. 
The proposed framework provides practical guidance for the responsible use of interpolated air quality maps, supports sustainable urban monitoring strategies, and contributes to more reliable exposure assessment in megacities with limited sensor coverage. Full article
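The variogram comparison this abstract describes can be sketched in a few lines. Below is a minimal, self-contained illustration (not the authors' implementation; the station layout, synthetic field, and variogram parameters are invented) of ordinary kriging under Gaussian versus exponential variogram models, scored by leave-one-out cross-validation:

```python
import numpy as np

def gaussian_vario(h, sill=1.0, rng=2.0):
    # Gaussian model: very smooth near the origin (suits regional O3 fields)
    return sill * (1.0 - np.exp(-(h / rng) ** 2))

def exponential_vario(h, sill=1.0, rng=2.0):
    # Exponential model: sharper short-range variability (suits local SO2)
    return sill * (1.0 - np.exp(-h / rng))

def ordinary_kriging(xy, z, x0, vario):
    """Ordinary-kriging prediction at x0 from stations xy with values z."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = vario(d)
    A[n, n] = 0.0                       # Lagrange-multiplier corner
    b = np.ones(n + 1)
    b[:n] = vario(np.linalg.norm(xy - x0, axis=1))
    w = np.linalg.solve(A, b)           # kriging weights + multiplier
    return float(w[:n] @ z)

def loocv_rmse(xy, z, vario):
    """Leave-one-out cross-validation RMSE for one variogram choice."""
    errs = [ordinary_kriging(np.delete(xy, i, 0), np.delete(z, i), xy[i], vario) - z[i]
            for i in range(len(z))]
    return float(np.sqrt(np.mean(np.square(errs))))
```

Running `loocv_rmse` with both variogram functions on one hourly or monthly block mirrors the paper's model-selection step: the variogram with the lower LOOCV RMSE wins for that temporal stratum.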

47 pages, 8613 KB  
Review
2D-to-3D Image Reconstruction in Agriculture: A Review of Methods, Challenges, and AI-Driven Opportunities
by Hemanth Reddy Sankaramaddi, Won Suk Lee, Kyoungchul Kim and Youngki Hong
Sensors 2026, 26(6), 1775; https://doi.org/10.3390/s26061775 - 11 Mar 2026
Abstract
Agriculture is rapidly becoming a data-driven field where automation relies on transforming 2D images into accurate 3D models. However, selecting the most effective method remains challenging due to the unconstrained nature of the environment. This review assesses the effectiveness of geometry-based, sensor-based, and learning-based reconstruction methodologies in agricultural settings. We analyze photogrammetric pipelines, active sensing, and neural rendering methods based on their geometric accuracy, data processing speed, and field performance against wind or occlusion. Our analysis indicates that while Light Detection and Ranging (LiDAR) is highly accurate, it is too expensive for widespread adoption. Conversely, geometry-based methods are inexpensive but struggle with complex biological structures. Learning-based methods, especially 3D Gaussian Splatting (3DGS), have revolutionized the field by enabling a balance between visual fidelity and real-time inference speed. We conclude that the best chance for scalability and accuracy lies in hybrid pipelines that integrate Vision Foundation Models (VFMs) with geometric priors. We believe that “hybrid intelligence” systems, such as edge-native 3D Gaussian Splatting combined with semantic priors, are the future of 3D reconstruction. These systems will enable the creation of real-time, spatiotemporal (4D) digital twins that drive automated decision-making in precision agriculture. Full article
(This article belongs to the Special Issue Feature Papers in Smart Agriculture 2025)

25 pages, 9221 KB  
Article
Research on Building Recognition in Ethnic Minority Villages Based on Multi-Feature Fusion
by Xiaoqiong Sun, Jiafang Yang, Wei Li, Ting Luo and Dongdong Xie
Buildings 2026, 16(6), 1099; https://doi.org/10.3390/buildings16061099 - 10 Mar 2026
Abstract
As a unique cultural heritage of Chinese ethnic minorities, Dong architecture provides rich historical and cultural information. Rapid and accurate extraction of ethnic building information from remote sensing images in complex terrain and high-density settlement environments is highly important for the protection of architectural heritage and the management of rural space. Huanggang Dong Village in Liping County, Guizhou Province, China, is taken as a case study. This paper develops a multifeature fusion machine learning framework for the automatic recognition of Dong ethnic architecture based on centimeter-level visible images captured by UAV. First, the vegetation index, HSI color features and texture features based on the gray level co-occurrence matrix are extracted from the UAV visible light orthophoto image. Through the random forest feature importance ranking and correlation test, six key features, namely, the VDVI, HSI-S, HSI-I, mean, variance and contrast, are selected to construct a multifeature space. This step constitutes the feature construction stage of the proposed methodology and provides the basis for subsequent classification. Second, on the basis of a support vector machine (SVM) and random forest (RF), classification models are constructed. The effects of different feature combinations and different algorithms on classification accuracy are systematically compared, and the results are evaluated in terms of overall accuracy (OA), the kappa coefficient, user accuracy (UA) and producer accuracy (PA). This second part highlights the classification phase of the methodology, which tests the feature space using different algorithms and evaluates the performance of the models. 
The experimental data fully show that under the condition of a single feature, the SVM model dominated by texture features performs best, with an OA of 85.33% and a kappa of 0.799; under the condition of multifeature fusion, the RF algorithm has a stronger ability to integrate multisource features. The accuracy of building category recognition based on the total feature and dimensionality reduction feature space is particularly prominent. The total feature and overall accuracy reach 89.00%, and the kappa coefficient is 0.850. The UA and PA reached 89.66% and 94.55%, respectively. Through in-depth comparative analysis, the vegetation index–color–texture multifeature fusion and machine learning classification framework based on UAV visible light images can achieve high-precision extraction of Dong architecture without relying on high-cost sensors. It can effectively alleviate the confusion between water bodies and shadows and between dark roofs and vegetation and effectively separate traditional Dong architecture from roads, vegetation and other elements. It provides a low-cost and feasible way for digital archiving, dynamic monitoring and protection management of the traditional village architectural heritage of ethnic minorities. Full article
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)
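The two-stage methodology in this abstract (random-forest importance ranking to build the feature space, then SVM/RF classification) follows a common scikit-learn pattern. A hedged sketch, with synthetic data standing in for the VDVI/HSI/GLCM pixel features (none of the code below is the authors'):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the per-pixel features (VDVI, HSI-S, HSI-I, GLCM
# mean/variance/contrast, ...); data and dimensions are illustrative only.
X, y = make_classification(n_samples=400, n_features=12, n_informative=6,
                           random_state=0)

# Stage 1: importance ranking, keep the six strongest features.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top6 = np.argsort(rf.feature_importances_)[::-1][:6]

# Stage 2: compare classifiers on the full vs. reduced feature space.
acc_full = cross_val_score(SVC(), X, y, cv=5).mean()
acc_top6 = cross_val_score(SVC(), X[:, top6], y, cv=5).mean()
```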

20 pages, 77395 KB  
Article
Underwater Moving Target Localization Based on High-Density Pressure Array Sensing
by Jiamin Chen, Yilin Li, Ruixin Chen, Wenjun Li, Keqiang Yue and Ruixue Li
J. Mar. Sci. Eng. 2026, 14(5), 484; https://doi.org/10.3390/jmse14050484 - 3 Mar 2026
Abstract
The artificial lateral line sensing principle provides a promising approach for underwater target perception and the navigation of underwater vehicles in complex flow environments. However, the highly nonlinear hydrodynamic mechanisms in complex flow fields make it difficult to establish accurate analytical models, which limits the development of high-precision perception and localization methods for underwater moving targets. In this study, a high-fidelity simulation model is established to characterize the pressure field variations induced by a moving source on an artificial lateral line pressure array. The influences of source velocity and sensing distance on the sensitivity and discretization characteristics of the pressure array are systematically investigated. Simulation results indicate that the sensor density of the pressure array is strongly correlated with the spatial resolution of the acquired pressure data, and a resolution of 50 sensors per meter is selected as the best-performing configuration by balancing sensing accuracy and sensor quantity. Under this configuration, the pressure distribution induced by the moving source exhibits clear and distinguishable spatiotemporal features, making it suitable for deep learning-based modeling. Furthermore, a large-scale temporal pressure dataset is constructed based on high-fidelity simulations under multiple motion directions and velocity conditions, and a spatiotemporal neural network is employed to predict the position of the underwater moving source. Experimental results demonstrate that, for straight-line underwater motion scenarios, the average localization error is within 7 cm, and a classification accuracy of 71% is achieved in practical engineering experiments. 
These results indicate that the proposed artificial lateral line pressure array design and deep learning-based prediction framework provide a feasible and effective solution for underwater target perception and localization in complex flow environments. Full article
(This article belongs to the Section Ocean Engineering)

36 pages, 6481 KB  
Review
Advances in Photonic Gas Sensors Operating in the VIS–NIR Spectrum: Structures, Materials, and Performance
by Nourhan Rasheed, Xun Li and Mohamed Bakr
Sensors 2026, 26(5), 1568; https://doi.org/10.3390/s26051568 - 2 Mar 2026
Abstract
The growing need for real-time, accurate monitoring of hazardous gases in environmental, industrial, and healthcare settings has highlighted the limitations of traditional sensing methods. Photonic Integrated Circuits (PICs) have become a revolutionary platform due to their high sensitivity, accurate selectivity, compact size and cost-effectiveness. We present in this work a comprehensive overview of the best-reported PIC-based gas sensors. We discuss the basic concepts behind resonance-based and absorption-based sensing. A detailed overview of the various material platforms, from well-known silicon and silicon nitride to new polymers, chalcogenide glasses, and 2D materials, is presented. A comparison of key device topologies, such as waveguides, microring resonators, Mach–Zehnder interferometers, and metasurfaces, is conducted, with performance benchmarks indicating the limit of detection (LoD). The main limitations of PIC sensors are discussed in this review. We also discuss promising technologies, especially the game-changing potential of artificial intelligence to create fully autonomous devices. Full article
(This article belongs to the Special Issue Optical Sensors for Industry Applications)

24 pages, 4999 KB  
Article
PhysGMM-MoE: A Physics-Aware GMM-Mixture-of-Experts Framework for Small-Sample Engine Fault Classification
by Qingang Xu, Hongwei Wang, Yunhang Wang and Xicong Chen
Appl. Sci. 2026, 16(5), 2417; https://doi.org/10.3390/app16052417 - 2 Mar 2026
Abstract
Accurate engine fault classification with limited labeled data is critical for the safety and reliability of rotating machinery. This task is challenging because operating regimes are time-varying, and key variables must satisfy physical constraints, under which traditional feature classifier pipelines degrade and deep networks tend to overfit. We propose PhysGMM-MoE, a physics-aware Gaussian Mixture Model (GMM)-Mixture-of-Experts (MoE) framework for small-sample engine fault classification. At the data level, PhysGMM-MoE fits class-conditional, regime-aware GMMs and performs physically constrained, distance-based quality control to selectively augment minority classes while preserving engine operating semantics. At the model level, a heterogeneous pool of lightweight statistical experts and a lightweight Transformer-based deep expert (ECFT-Transformer) capture complementary neighborhood cues and high order multi-sensor correlations, and an L2-regularized logistic regression meta-learner fuses expert outputs via stacking. We evaluate fault classification on the 3500-DEFault diesel-engine dataset using the adopted eight-class cylinder-fault labeling (H, F1–F7) built from in-cylinder pressure statistics and torsional-vibration harmonics; although severity levels exist in the dataset, this study focuses on classification rather than severity estimation. With 40 training samples per class, PhysGMM-MoE achieves a mean accuracy of 0.9875, exceeding SMOTE+XGBoost by 0.0086, and attains the best macro precision/recall/F1 of 0.9878/0.9826/0.9889, demonstrating strong performance under the adopted small-sample setting. Full article
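The data-level step this abstract describes (class-conditional GMM fitting plus distance-based quality control of synthetic samples) can be illustrated with scikit-learn's GaussianMixture. This is a toy sketch on invented data; the paper's physical constraints are reduced here to a nearest-real-sample distance filter:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rs = np.random.RandomState(0)
X_minority = rs.randn(40, 6) + 2.0             # 40 samples of one fault class

gmm = GaussianMixture(n_components=2, random_state=0).fit(X_minority)
cand, _ = gmm.sample(200)                      # candidate synthetic samples

# Distance-based quality control: keep candidates close to a real sample,
# a stand-in for the paper's physically constrained filtering.
d = np.linalg.norm(cand[:, None, :] - X_minority[None, :, :], axis=-1).min(axis=1)
X_aug = cand[d < np.percentile(d, 50)]         # retain the closest half
```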

50 pages, 3734 KB  
Article
DT-LCAF: Digital Twin-Enabled Life Cycle Assessment Framework for Real-Time Embodied Carbon Optimization in Smart Building Construction
by Naif Albelwi
Sustainability 2026, 18(5), 2321; https://doi.org/10.3390/su18052321 - 27 Feb 2026
Abstract
The construction sector contributes approximately 39% of global carbon emissions, with embodied carbon—emissions from material extraction, manufacturing, transportation, and construction—representing a systematically underestimated yet increasingly critical component of building life cycle environmental impacts. Traditional Life Cycle Assessment (LCA) methods suffer from static database dependencies, delayed feedback cycles, and limited integration with active construction decision-making, creating a fundamental gap between environmental assessment and construction operations. This paper presents the Digital Twin-Enabled Life Cycle Assessment Framework (DT-LCAF), a dynamic construction-phase embodied carbon accounting system aligned with the EN 15978 standard (stages A1–A5) that integrates Building Information Modeling (BIM), Internet of Things (IoT) sensor networks, and machine learning designed to support real-time sustainability decision-making during smart building construction, with computational performance validated through the offline processing of historical datasets. The framework introduces two enabling mechanisms: (1) a Multi-Scale Carbon Prediction Network (MSCPN) employing hierarchical graph attention networks to capture material interdependencies across component, system, and building scales; and (2) a Reinforcement Learning-based Carbon Optimization Engine (RL-COE) that generates constraint-aware recommendations for material substitution, supplier selection, and construction sequencing while respecting structural, economic, and temporal constraints. 
Experimental evaluation employs two complementary validation strategies using proxy embodied carbon labels (not ground-truth construction measurements): embodied carbon prediction accuracy is assessed using proxy carbon labels derived from the CBECS dataset (5900 commercial buildings) combined with the ICE Database v3.0 emission factors, achieving a 10.24% MAPE, representing a 23.7% improvement over the best-performing baseline in predicting these proxy estimates; temporal responsiveness and streaming data ingestion capabilities are validated using the Building Data Genome Project 2 (1636 buildings, 3053 energy meters). The RL-COE optimization engine demonstrates an 18.4% mean carbon reduction rate within the proxy label framework across building types while maintaining cost and schedule feasibility. A BIM-based case study illustrates the framework’s construction-phase update loop, showing how embodied carbon estimates evolve dynamically as construction progresses. The limitations regarding the proxy-based nature of embodied carbon labels and the absence of ground-truth construction-phase measurements are explicitly discussed. The framework contributes to smart city sustainability by enabling scalable, data-driven embodied carbon intelligence across building portfolios. All quantitative results are based on proxy embodied carbon estimates derived from building characteristics and standard emission factor databases, rather than measured project data. The reported performance therefore demonstrates a proof-of-concept within the proxy system, and real-project, measurement-based validation remains future work. Full article

27 pages, 1683 KB  
Article
Prediction of Blaine Fineness of Final Product in Cement Production Using Industrial Quality Control Data Based on Chemical and Granulometric Inputs Using Machine Learning
by Mustafa Taha Topaloğlu, Cevher Kürşat Macit, Ukbe Usame Uçar and Burak Tanyeri
Appl. Sci. 2026, 16(4), 2046; https://doi.org/10.3390/app16042046 - 19 Feb 2026
Abstract
The cement industry is central to sustainable manufacturing due to its high energy demand and associated CO2 emissions. In cement production, a substantial share of electrical energy is consumed in the clinker grinding circuit, where Blaine fineness (specific surface area, cm2/g), a key quality output, affects both cement performance and specific energy consumption. However, laboratory Blaine measurements are typically available with a 30–60 min delay, which limits timely process interventions and may promote conservative operating practices (e.g., precautionary over-grinding) to secure quality. This study develops machine-learning models to predict the finished-product Blaine fineness (Blaine-F) from routinely recorded industrial quality-control inputs, including XRF-based oxide composition, derived chemical moduli (lime saturation factor, LSF; silica modulus, SM; alumina modulus, AM), laser-diffraction particle-size distribution descriptors (Q10/Q50/Q90 corresponding to D10/D50/D90 percentile diameters; and R3 residual fractions at selected cut sizes), and intermediate in-process fineness (Blaine-P). The models were trained on over 200 finished-product samples obtained from the quality-control laboratory information management system (LIMS) of Seza Cement Factory (SYCS Group, Turkey). Ridge regression, Random Forest, XGBoost, LightGBM, and CatBoost were tuned using RandomizedSearchCV with five-fold cross-validation and evaluated on a held-out test set using MAE, RMSE, and R2. The results show that the linear baseline provides limited explanatory power (Ridge: R2 ≈ 0.50), consistent with the strongly non-linear behavior of the grinding–separation system, whereas tree-based ensemble methods achieve higher predictive accuracy. XGBoost yields the best overall performance (R2 = 0.754; RMSE = 76.9 cm2/g), while Random Forest attains R2 = 0.744 with the lowest MAE (61.7 cm2/g). 
Explainability analyses indicate that Blaine-F is primarily influenced by the fine-tail PSD descriptor Q10 (D10 particle size) and the intermediate fineness Blaine-P, whereas chemistry-related variables (e.g., LSF and SiO2, and particularly SM) provide secondary yet meaningful contributions. These findings support the use of the proposed model as a virtual sensor to reduce decision latency associated with delayed laboratory Blaine measurements and to enable tighter fineness targeting. Potential energy and CO2 implications should be quantified using site-specific, plant-calibrated relationships between kWh/t and Blaine fineness, rather than inferred as measured outcomes within the present study. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Industrial Engineering)
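The tuning-and-evaluation protocol this abstract lays out (RandomizedSearchCV with five-fold cross-validation, a held-out test set, and MAE/R² scoring) maps directly onto scikit-learn. A minimal sketch with synthetic data in place of the LIMS samples; a Random Forest stands in for the tree-ensemble side and the gradient-boosting libraries are omitted:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import RandomizedSearchCV, train_test_split

# Stand-in for the ~200 LIMS samples: chemical/PSD inputs -> Blaine-F target.
X, y = make_regression(n_samples=200, n_features=10, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

search = RandomizedSearchCV(
    RandomForestRegressor(random_state=0),
    {"n_estimators": [100, 200], "max_depth": [None, 5, 10]},
    n_iter=4, cv=5, random_state=0,          # five-fold CV, as in the paper
).fit(X_tr, y_tr)

# Linear baseline vs. tuned ensemble on the held-out test set.
ridge_r2 = r2_score(y_te, Ridge().fit(X_tr, y_tr).predict(X_te))
rf_pred = search.best_estimator_.predict(X_te)
rf_mae = mean_absolute_error(y_te, rf_pred)
rf_r2 = r2_score(y_te, rf_pred)
```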

17 pages, 4471 KB  
Article
Utilizing Data Quality Indices for Strategic Sensor Channel Selection to Enhance Performance of Hand Gesture Recognition Systems
by Shen Zhang, Hao Zhou, Rayane Tchantchane and Gursel Alici
Sensors 2026, 26(4), 1213; https://doi.org/10.3390/s26041213 - 12 Feb 2026
Abstract
This study proposes a data quality-driven channel selection methodology to improve hand gesture recognition performance in multi-channel wearable Human–Machine Interface (HMI) systems. The methodology centers around calculating (i) five data quality indices for both surface electromyography (sEMG) and pressure-based force myography (pFMG) signals and (ii) establishing a relationship between these data quality indices and the accuracy of gesture recognition for applications typified by prosthetic hand control. Machine learning (ML)-based and correlation-based methods were used to select three optimal channel/pair configurations from an eight-channel/pair system. Evaluations on the UOW and Ninapro DB2 datasets showed that the proposed methods consistently outperformed random channel selection, with the ML-based approach achieving the best results (76.36% for sEMG, 71.59% for pFMG, and 88.2% for fused sEMG-pFMG on the UOW dataset and 70.28% on Ninapro DB2). Notably, using three pairs of strategically selected sEMG-pFMG channels generated 88.2%, which is comparable to the 88.38% accuracy obtained with a full eight-channel sEMG system on the UOW dataset, highlighting the efficacy of our channel selection methodologies. These results highlight the value of data quality indices for sensor selection and provide a foundation for developing more efficient wearable HMI systems. Full article
(This article belongs to the Section Intelligent Sensors)
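The core idea of ML-based channel selection, scoring each channel independently and keeping the best three of eight, can be sketched as follows. The synthetic windows and the logistic-regression scorer are assumptions; the paper's quality indices and classifiers differ:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rs = np.random.RandomState(0)
n_win, n_ch, n_feat = 300, 8, 4        # windows, channels, features/channel
y = rs.randint(0, 5, n_win)            # five hypothetical gestures
X = rs.randn(n_win, n_ch, n_feat)
X[:, :3, 0] += y[:, None]              # make channels 0-2 informative

# Score every channel on its own, then keep the three best.
scores = [cross_val_score(LogisticRegression(max_iter=1000),
                          X[:, c, :], y, cv=3).mean()
          for c in range(n_ch)]
best3 = np.argsort(scores)[::-1][:3]
X_sel = X[:, best3, :].reshape(n_win, -1)   # fused vector from kept channels
```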

31 pages, 2826 KB  
Article
HEOCP: Hybrid Energy-Optimized Clustering Protocol for WSNs Using Analytical Modeling and Deep Learning Integration
by Yen-Wu Ti, Rei-Heng Cheng, Songlin Wei and Chih-Min Yu
Sensors 2026, 26(4), 1188; https://doi.org/10.3390/s26041188 - 12 Feb 2026
Abstract
Wireless Sensor Networks (WSNs) play a pivotal role in Internet of Things (IoT) applications; however, their lifetime is fundamentally constrained by the limited energy of sensor nodes. This paper introduces a Hybrid Energy-Optimized Clustering Protocol (HEOCP) that combines analytical modeling of radio energy consumption with deep learning–assisted cluster-head (CH) selection. First, an analytical framework is developed to determine the distance-constrained CH eligibility region and the optimal number of clusters, thereby minimizing redundant transmissions and balancing energy consumption. Then, a genetic algorithm (GA) is used to determine the best cluster head configuration. These configurations are then trained by a ResNet-50 deep network and averaged to reduce noise, allowing for real-time cluster head prediction without repeatedly performing expensive heuristic optimization, resulting in more steady performance. Extensive simulations under various network scales demonstrate that HEOCP extends network lifetime by up to 60% compared with conventional LEACH and GA-based approaches, effectively delaying the first-node death and improving overall energy efficiency. Furthermore, the hybrid GA–ResNet framework exhibits high scalability and computational efficiency, making it suitable for large-scale IoT deployments. The results confirm that integrating analytical energy modeling with deep learning provides a powerful and sustainable paradigm for intelligent energy management in future IoT-enabled WSNs. Full article
(This article belongs to the Special Issue IoT/AIoT-Enabled Wireless Sensor Networks: Issues and Challenges)
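The abstract's analytical step (deriving an optimal cluster count from a first-order radio energy model) is not spelled out, but the classical LEACH-family result such modeling typically builds on looks like this. The amplifier constants are common literature values, not HEOCP's, and this is only an illustration of the kind of closed form involved:

```python
import math

def optimal_clusters(n_nodes, side_m, d_bs_m, eps_fs=10e-12, eps_mp=0.0013e-12):
    """First-order radio-model estimate of the optimal number of cluster
    heads for n_nodes on a side_m x side_m field with the base station
    d_bs_m away (classical LEACH-family formula; eps_fs/eps_mp are
    typical textbook amplifier constants, not taken from this paper)."""
    return (math.sqrt(n_nodes / (2 * math.pi))
            * math.sqrt(eps_fs / eps_mp)
            * side_m / d_bs_m ** 2)

k_opt = optimal_clusters(100, 100, 75)  # 100 nodes, 100 m field, BS 75 m away
```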

22 pages, 3535 KB  
Article
Bridge Health Monitoring and Assessment in Industry 5.0: Lessons Learned from Long-Term Real-Time Field Monitoring of Highway Bridges
by Prakash Bhandari, Shinae Jang, Song Han and Ramesh B. Malla
Infrastructures 2026, 11(2), 55; https://doi.org/10.3390/infrastructures11020055 - 7 Feb 2026
Abstract
The rapid aging of bridges has increased interest in real-time, data-driven monitoring for predictive maintenance and safety management; however, practical deployment on in-service bridges remains limited. This paper presents lessons learned from long-term field deployment of real-time bridge joint monitoring systems on three in-service highway bridges and demonstrates how these insights can support the transition toward Industry 5.0. A unified framework is introduced to integrate key enabling technologies, including Internet of Things (IoT), digital twins, and artificial intelligence (AI), into a practical, human-centric monitoring architecture. Best practices for achieving durable, site-compliant, and cost-effective system design are summarized, with emphasis on sensor selection, wireless communication strategies, modular system development, and maintaining seamless operation. The development of a Docker-based analytics and visualization platform illustrates how interactive dashboards enhance human–machine collaboration and support informed decision-making. The role of advanced analytical tools, including digital twins, AI, and statistical modeling, in providing reliable structural assessments is highlighted, along with guidance on balancing cloud and edge computing for energy-efficient performance under constraints such as limited power, weather exposure, and site accessibility. Overall, the findings support the development of scalable, resilient, and human-centric real-time monitoring systems that advance data-driven decision-making and directly contribute to the realization of Industry 5.0 objectives in bridge health management. Full article

21 pages, 5931 KB  
Article
Validation of Inertial Sensor-Based Step Detection Algorithms for Edge Device Deployment
by Maksymilian Kisiel, Arslan Amjad and Agnieszka Szczęsna
Sensors 2026, 26(3), 876; https://doi.org/10.3390/s26030876 - 29 Jan 2026
Abstract
Step detection based on measurements of inertial measurement units (IMUs) is fundamental for human activity recognition, indoor navigation, and health monitoring applications. This study validates and compares five fundamentally different step detection algorithms for potential implementation on edge devices. A dedicated measurement system based on the Raspberry Pi Pico 2W microcontroller with two IMU sensors (Waveshare Pico-10DOF-IMU and Adafruit ST-9-DOF-Combo) was designed. The implemented algorithms include Peak Detection, Zero-Crossing, Spectral Analysis, Adaptive Threshold, and SHOE (Step Heading Offset Estimator). Validation was performed across 84 measurement sessions covering seven test scenarios (Timed Up and Go test, natural and fast walking, jogging, and stair climbing) and four sensor mounting locations (thigh pocket, ankle, wrist, and upper arm). Results demonstrate that Peak Detection achieved the best overall performance, with an average F1-score of 0.82, while Spectral Analysis excelled in stair scenarios (F1 = 0.86–0.92). Surprisingly, upper arm mounting yielded the highest accuracy (F1 = 0.84), outperforming ankle placement. The TUG clinical test proved most challenging (average F1 = 0.68), while fast walking was easiest (F1 = 0.87). Additionally, a preliminary application to 668 clinical TUG recordings from the open-access FRAILPOL database revealed algorithm-specific failure modes when continuous gait assumptions are violated. These findings provide practical guidelines for algorithm selection in edge computing applications and activity monitoring systems. Full article
(This article belongs to the Collection Sensors for Gait, Human Movement Analysis, and Health Monitoring)
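The peak-detection approach that performed best in the study above can be illustrated in a few lines. This is a generic sketch over accelerometer-magnitude samples, not the authors' edge implementation; the threshold and debounce values are assumptions chosen for illustration:

```python
import math

def count_steps_peak(acc, threshold=1.2, min_gap=10):
    """Count steps as local maxima of the accelerometer magnitude.

    acc: sequence of (ax, ay, az) samples in g.
    threshold: minimum magnitude (g) for a candidate peak (assumed value).
    min_gap: minimum number of samples between consecutive steps (debounce).
    """
    mag = [math.sqrt(x * x + y * y + z * z) for x, y, z in acc]
    steps, last = 0, -min_gap
    for i in range(1, len(mag) - 1):
        # A peak is a sample strictly above its left neighbor and
        # at least as large as its right neighbor.
        is_peak = mag[i] > mag[i - 1] and mag[i] >= mag[i + 1]
        if is_peak and mag[i] > threshold and i - last >= min_gap:
            steps += 1
            last = i
    return steps
```

In practice the threshold and debounce window would be tuned per mounting location, which is consistent with the study's finding that placement strongly affects accuracy.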
20 pages, 4893 KB  
Article
Ethyl 2-Cyanoacrylate as a Promising Matrix for Carbon Nanomaterial-Based Amperometric Sensors for Neurotransmitter Monitoring
by Riccarda Zappino, Ylenia Spissu, Antonio Barberis, Salvatore Marceddu, Pier Andrea Serra and Gaia Rocchitta
Appl. Sci. 2026, 16(3), 1255; https://doi.org/10.3390/app16031255 - 26 Jan 2026
Viewed by 449
Abstract
Dopamine (DA) is a critical catecholaminergic neurotransmitter that facilitates signal transduction across synaptic junctions and modulates essential neurophysiological processes, including motor coordination, motivational drive, and reward-motivated behaviors. The fabrication of cost-effective, miniaturized, and high-fidelity analytical platforms is imperative for real-time DA monitoring. Owing to DA’s inherent electrochemical activity, carbon-based amperometric sensors constitute the primary modality for its quantification. In this study, graphite, multi-walled carbon nanotubes (MWCNTs), and graphene were immobilized within an ethyl 2-cyanoacrylate (ECA) polymer matrix. ECA was selected for its rapid polymerization kinetics and established biocompatibility in electrochemical frameworks. All fabricated composites demonstrated robust electrocatalytic activity toward DA; however, MWCNT- and graphene-based sensors exhibited superior analytical performance, characterized by highly competitive limits of detection (LOD) and quantification (LOQ). Specifically, MWCNT-modified electrodes achieved a notably low LOD of 0.030 ± 0.001 µM and an LOQ of 0.101 ± 0.008 µM. Discrepancies in baseline current amplitudes suggest that the spatial orientation of carbonaceous nanomaterials within the cyanoacrylate matrix significantly influences the electrochemical surface area and resulting baseline characteristics. The impact of interfering species commonly found in biological environments on the sensors’ response was systematically evaluated. The best-performing sensor, the graphene-based one, was used to measure the intracellular DA content of PC12 cells. Full article
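LOD and LOQ figures like those reported above are commonly derived from a linear calibration using the ICH-style formulas LOD = 3.3·σ/S and LOQ = 10·σ/S, where σ is the standard deviation of the blank response and S the calibration slope. A minimal sketch of that convention (not necessarily the exact procedure used in the article):

```python
def lod_loq(slope, sigma_blank):
    """ICH-style detection/quantification limits from a linear calibration.

    slope: calibration sensitivity (e.g. nA per µM).
    sigma_blank: standard deviation of the blank signal (same units as response).
    Returns (LOD, LOQ) in concentration units.
    """
    lod = 3.3 * sigma_blank / slope
    loq = 10.0 * sigma_blank / slope
    return lod, loq
```

For example, a sensitivity of 10 nA/µM with a blank noise of 0.1 nA gives an LOD of 0.033 µM, the same order of magnitude as the MWCNT result quoted above.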
18 pages, 1419 KB  
Review
How the Vestibular Labyrinth Encodes Air-Conducted Sound: From Pressure Waves to Jerk-Sensitive Afferent Pathways
by Leonardo Manzari
J. Otorhinolaryngol. Hear. Balance Med. 2026, 7(1), 5; https://doi.org/10.3390/ohbm7010005 - 14 Jan 2026
Viewed by 705
Abstract
Background/Objectives: The vestibular labyrinth is classically viewed as a sensor of low-frequency head motion—linear acceleration for the otoliths and angular velocity/acceleration for the semicircular canals. However, there is now substantial evidence that air-conducted sound (ACS) can also activate vestibular receptors and afferents in mammals and other vertebrates. This sound sensitivity underlies sound-evoked vestibular-evoked myogenic potentials (VEMPs), sound-induced eye movements, and several clinical phenomena in third-window pathologies. The cellular and biophysical mechanisms by which a pressure wave in the cochlear fluids is transformed into a vestibular neural signal remain incompletely integrated into a single framework. This study aimed to provide a narrative synthesis of how ACS activates the vestibular labyrinth, with emphasis on (1) the anatomical and biophysical specializations of the maculae and cristae, (2) the dual-channel organization of vestibular hair cells and afferents, and (3) the encoding of fast, jerk-rich acoustic transients by irregular, striolar/central afferents. Methods: We integrate experimental evidence from single-unit recordings in animals, in vitro hair cell and calyx physiology, anatomical studies of macular structure, and human clinical data on sound-evoked VEMPs and sound-induced eye movements. Key concepts from vestibular cellular neurophysiology and from the physics of sinusoidal motion (displacement, velocity, acceleration, jerk) are combined into a unified interpretative scheme. Results: ACS transmitted through the middle ear generates pressure waves in the perilymph and endolymph not only in the cochlea but also in vestibular compartments. These waves produce local fluid particle motions and pressure gradients that can deflect hair bundles in selected regions of the otolith maculae and canal cristae. 
Irregular afferents innervating type I hair cells in the striola (maculae) and central zones (cristae) exhibit phase locking to ACS up to at least 1–2 kHz, with much lower thresholds than regular afferents. Cellular and synaptic specializations—transducer adaptation, low-voltage-activated K+ conductances (KLV), fast quantal and non-quantal transmission, and afferent spike-generator properties—implement effective high-pass filtering and phase lead, making these pathways particularly sensitive to rapid changes in acceleration, i.e., mechanical jerk, rather than to slowly varying displacement or acceleration. Clinically, short-rise-time ACS stimuli (clicks and brief tone bursts) elicit robust cervical and ocular VEMPs with clear thresholds and input–output relationships, reflecting the recruitment of these jerk-sensitive utricular and saccular pathways. Sound-induced eye movements and nystagmus in third-window syndromes similarly reflect abnormally enhanced access of ACS-generated pressure waves to canal and otolith receptors. Conclusions: The vestibular labyrinth does not merely “tolerate” air-conducted sound as a spill-over from cochlear mechanics; it contains a dedicated high-frequency, transient-sensitive channel—dominated by type I hair cells and irregular afferents—that is well suited to encoding jerk-rich acoustic events. We propose that ACS-evoked vestibular responses, including VEMPs, are best interpreted within a dual-channel framework in which (1) regular, extrastriolar/peripheral pathways encode sustained head motion and low-frequency acceleration, while (2) irregular, striolar/central pathways encode fast, sound-driven transients distinguished by high jerk, steep onset, and precise spike timing. Full article
(This article belongs to the Section Otology and Neurotology)
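The displacement/velocity/acceleration/jerk hierarchy invoked above can be made concrete for sinusoidal motion. This is the standard kinematic relationship, sketched here for context rather than taken from the article:

```latex
x(t) = A\sin(\omega t), \qquad
\dot{x}(t) = A\omega\cos(\omega t), \qquad
\ddot{x}(t) = -A\omega^{2}\sin(\omega t), \qquad
\dddot{x}(t) = -A\omega^{3}\cos(\omega t).
```

Because the jerk amplitude scales as \(A\omega^{3}\), a stimulus at acoustic frequencies produces far more jerk per unit displacement than low-frequency head motion, which is consistent with the review's argument that high-pass, phase-leading afferent pathways respond preferentially to short-rise-time sounds.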
37 pages, 2730 KB  
Article
Identification of a Flexible Fixed-Wing Aircraft Using Different Artificial Neural Network Structures
by Rodrigo Costa do Nascimento, Éder Alves de Moura, Thiago Rosado de Paula, Vitor Paixão Fernandes, Luiz Carlos Sandoval Góes and Roberto Gil Annes da Silva
Aerospace 2026, 13(1), 53; https://doi.org/10.3390/aerospace13010053 - 5 Jan 2026
Viewed by 368
Abstract
This work proposes an analysis of the capability of three deep learning models—the feedforward neural network (FFNN), long short-term memory (LSTM) network, and physics-informed neural network (PINN)—to identify the parameters of a flexible fixed-wing aircraft using in-flight data. These neural networks, composed of multiple hidden layers, are evaluated for their ability to perform system identification and to capture the nonlinear and dynamic behavior of the aircraft. The FFNN and LSTM models are compared to assess the impact of temporal dependency learning on parameter estimation, while the PINN integrates prior knowledge of the system’s governing ordinary differential equations (ODEs) to enhance physical consistency in the identification process. The objective is to exploit the generalization capability of neural network-based models while preserving the accurate estimation of the physical parameters that characterize the analyzed system. The results show that the FFNN achieved the best overall performance, with average Theil’s inequality coefficient (TIC) values of 0.162 during training and 0.386 during testing, efficiently modeling the input-output relationships but tending to fit high-frequency measurement noise. The LSTM network demonstrated superior noise robustness due to its temporal filtering capability, producing smoother predictions with average TIC values of 0.398 (training) and 0.408 (testing), albeit with some amplitude underestimation. The PINN, while successfully integrating physical constraints through pretraining with target aerodynamic derivatives, showed more complex convergence, with average TIC values of 0.243 (training) and 0.475 (testing), and its estimated aerodynamic coefficients differed significantly from the conventional values.
All three architectures effectively captured the coupled rigid-body and flexible dynamics when trained with distributed wing sensor data, demonstrating that neural network-based approaches can model aeroelastic phenomena without requiring explicit high-fidelity flexible-body models. This study provides a comparative framework for selecting appropriate neural network architectures based on the specific requirements of aircraft system identification tasks. Full article
(This article belongs to the Section Aeronautics)
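The TIC metric quoted throughout the abstract above is, in its commonly used form, the RMS prediction error normalized by the sum of the RMS magnitudes of the measured and predicted signals, so 0 indicates a perfect fit and 1 no fit. A minimal sketch of that form (the article may use a variant):

```python
import math

def theil_tic(y_true, y_pred):
    """Theil's inequality coefficient (common form): 0 = perfect fit, 1 = no fit.

    TIC = RMSE(y_true, y_pred) / (RMS(y_true) + RMS(y_pred))
    """
    n = len(y_true)
    rmse = math.sqrt(sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / n)
    denom = (math.sqrt(sum(a * a for a in y_true) / n)
             + math.sqrt(sum(b * b for b in y_pred) / n))
    return rmse / denom
```

Under this convention, the FFNN's training TIC of 0.162 versus its testing TIC of 0.386 quantifies the generalization gap the abstract attributes to fitting high-frequency measurement noise.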