Search Results (832)

Search Parameters:
Keywords = RealSense D455

27 pages, 24387 KB  
Article
Green Pepper Harvesting Robot System Based on Multi-Target Tracking with Filtering and Intelligent Scheduling
by Tianyu Liu, Zelong Liu, Jianmin Wang, Dongxin Guo, Yuxuan Tan and Ping Jiang
Horticulturae 2026, 12(4), 464; https://doi.org/10.3390/horticulturae12040464 - 8 Apr 2026
Viewed by 195
Abstract
To address the challenges of unstable target localization and poor multi-module coordination in automated green pepper harvesting—caused by occlusions from branches and leaves, as well as varying lighting conditions—this paper presents the design and implementation of a modular robotic picking system. At the perception level, the system integrates a YOLOv8 detector with a RealSense D435i camera to identify and locate the calyx–ectocarp junctions of green peppers. An integrated multi-target tracking and filtering framework is proposed, which fuses multi-feature association, trajectory smoothing and coordinate denoising strategies to suppress depth noise and trajectory jitter, thereby enhancing the stability and accuracy of 3D localization. At the control and execution level, a depth-first picking sequence strategy with ID freeze-state management is implemented within a multithreaded software–hardware co-design architecture. This approach avoids task conflicts and duplicate operations while supporting continuous multi-fruit harvesting. Field experiments under natural outdoor lighting and varying occlusion levels demonstrate that the proposed system achieves recognition rates of 91.57% and 80.29% and harvesting success rates of 82.85% and 77.68% for non-occluded and lightly occluded fruits, respectively. The average picking cycle per pepper fruit is 9.8 s. This system provides an effective technical solution for addressing stability control challenges in the automated harvesting process of green peppers. Full article
(This article belongs to the Section Vegetable Production Systems)
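The depth-first picking sequence with ID freeze-state management described in this abstract can be sketched as follows. This is a hypothetical illustration of the scheduling idea only; the class and field names are assumptions, not the authors' implementation.

```python
# Hypothetical sketch: pick the nearest tracked fruit first, and "freeze" its
# track ID so concurrent threads never schedule the same fruit twice.
from dataclasses import dataclass, field

@dataclass
class TrackedFruit:
    track_id: int
    xyz: tuple  # smoothed 3D position in the camera frame (x, y, z-depth), metres

@dataclass
class PickScheduler:
    frozen: set = field(default_factory=set)  # IDs already dispatched for picking

    def next_target(self, fruits):
        """Return the nearest (smallest depth z) unfrozen fruit, or None."""
        candidates = [f for f in fruits if f.track_id not in self.frozen]
        if not candidates:
            return None
        target = min(candidates, key=lambda f: f.xyz[2])  # depth-first order
        self.frozen.add(target.track_id)  # freeze to avoid duplicate operations
        return target
```

Called once per arm cycle, this yields each fruit exactly once in near-to-far order, which matches the "continuous multi-fruit harvesting without duplicate operations" behaviour the abstract claims.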

17 pages, 2592 KB  
Technical Note
SpecResNet: Hyperspectral Image Compression via Hybrid Residual Learning and Spectral Calibration
by Fahad Saeed, Shumin Liu and Jie Chen
Remote Sens. 2026, 18(7), 1074; https://doi.org/10.3390/rs18071074 - 3 Apr 2026
Viewed by 245
Abstract
Hyperspectral imaging provides rich spatial–spectral information but generates huge data volumes, posing significant challenges for storage, transmission, and real-time processing in remote sensing applications. In this study, we propose SpecResNet, a 3D autoencoder-based model for hyperspectral image compression. This framework introduces hybrid residual blocks for preserving representational power and a spectral calibration (SC) block to enhance spectral fidelity. It also uses Squeeze-and-Excitation (SE) blocks for adaptive feature recalibration. Our model obtains different compression operating points by varying model capacity, with bitrate emerging implicitly from the learned latent representations. Experiments on several benchmark datasets show that SpecResNet surpasses the performance of existing frameworks on most datasets in terms of PSNR, MS-SSIM, and SAM, demonstrating its strong potential. Our results suggest that SpecResNet offers a promising trade-off for efficient hyperspectral image compression, with potential for further refinement in complex scenes. Full article

26 pages, 55794 KB  
Article
Distortion-Aware Routing and Parameter-Shared MoE for Multispectral Remote Sensing Super-Resolution
by Shuo Yang, Shi Chen, Yuxuan Liu and Tianhui Zhang
Sensors 2026, 26(7), 2186; https://doi.org/10.3390/s26072186 - 1 Apr 2026
Viewed by 498
Abstract
Multispectral remote sensing image super-resolution (RSISR) aims to reconstruct high-frequency details while preserving cross-band structural consistency under strict computational budgets. However, real-world satellite imagery exhibits heterogeneous distortions, ranging from band-dependent noise to spatially varying texture degradation, rendering uniform restoration strategies suboptimal. To address these challenges, we propose a unified framework that integrates cue extraction, expert specialization, and efficiency-aware restoration. Specifically, a Distortion-Aware Feature Extractor (DAFE) explicitly encodes distortion cues by synthesizing fixed frequency bases, learnable residual components, lightweight spatial edge representations, and noise proxies. Subsequently, a Distortion-Aware Expert Choice (DAEC) router utilizes these cues to establish distortion-conditioned affinities and performs capacity-constrained, load-balanced expert assignment. Finally, a parameter-shared Mixture-of-Experts (PS-MoE) architecture employs shared expert parameters across spectral bands, augmented by band-wise low-rank adapters, to enable coarse-to-fine restoration with minimal computational overhead. Extensive experiments on the SEN2VENμS and OLI2MSI datasets demonstrate that the proposed method achieves a PSNR of 49.38 dB on SEN2VENμS 2×, 45.91 dB on SEN2VENμS 4×, and 45.94 dB on OLI2MSI 3×. Compared to the strongest baseline for each task, our method yields PSNR improvements of 0.12 dB, 0.10 dB, and 0.09 dB, respectively, while simultaneously reducing FLOPs and parameter counts. These results confirm that explicit distortion modeling and parameter-shared expert specialization provide an effective and computationally efficient solution for multispectral remote sensing image super-resolution. Full article
(This article belongs to the Section Remote Sensors)
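The capacity-constrained, load-balanced "expert choice" assignment performed by the DAEC router can be illustrated with a toy routine: instead of each token choosing an expert, each expert selects its top-c tokens by affinity, so no expert exceeds its capacity. This is a generic sketch of the routing family, not the paper's implementation.

```python
# Toy expert-choice routing: affinity[e][t] scores expert e for token t.
def expert_choice_route(affinity, capacity):
    """Return {expert: sorted token indices}, at most `capacity` tokens each."""
    assignment = {}
    for e, scores in enumerate(affinity):
        ranked = sorted(range(len(scores)), key=lambda t: scores[t], reverse=True)
        assignment[e] = sorted(ranked[:capacity])  # each expert keeps its top-c
    return assignment
```

Because capacity bounds the per-expert token count by construction, load balancing falls out of the selection rule rather than requiring an auxiliary loss.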

31 pages, 7441 KB  
Article
Non-Contact Characterization of TPA-like Texture Properties of Gel-Based Soft Foods Using a Controlled Airflow–Laser System
by Hui Yu, Shi Yu, Meng He and Xiuying Tang
Foods 2026, 15(7), 1166; https://doi.org/10.3390/foods15071166 - 30 Mar 2026
Viewed by 343
Abstract
Texture characteristics are critical quality evaluation indicators for soft foods. Traditional texture profile analysis (TPA) relies on probe–sample contact and may cause irreversible structural damage, limiting its application in nondestructive or online detection. In this study, a non-contact and nondestructive Controlled Airflow–Laser Texturemeter (CAFLT) system was developed to achieve rapid multi-parameter texture characterization. The system integrates programmable airflow loading with laser displacement sensing to implement a TPA-like double-cycle loading protocol, simultaneously acquiring time–applied airflow pressure (T–AP) and time–displacement (T–D) responses. Gelatin–maltose composite gels with graded Bloom strengths (CL50–CL250) were used as model samples. Texture-related descriptors were extracted using a dual-curve feature framework and compared with traditional TPA measurements. The CAFLT system produced a double-peak response pattern resembling that of traditional TPA and showed clear monotonic trends with increasing gel strength. Hardness_CAFLT exhibited a strong correlation with the reference TPA hardness value (r = 0.97). In addition, Gumminess_CAFLT showed a positive association with traditional gumminess (r = 0.87), but should be interpreted within the CAFLT-specific loading framework. Multivariate principal coordinates analysis further demonstrated clear multivariate discrimination among samples. Additionally, the time-domain descriptor tPeak1 showed a strong power-law relationship with Bloom strength (R² = 0.96), indicating enhanced sensitivity to mechanical differences under small-deformation conditions. Overall, the CAFLT system provides a feasible approach for non-contact, nondestructive, and quantitative texture evaluation of soft foods, and shows strong potential for real-time quality monitoring and intelligent food inspection. Full article
(This article belongs to the Section Food Engineering and Technology)
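A power-law relationship like the one reported between tPeak1 and Bloom strength is conventionally fitted by ordinary least squares in log-log space, since y = a·x^b becomes linear there. The sketch below uses synthetic data; the coefficients are illustrative, not the study's.

```python
# Fit y = a * x**b by least squares on log y = log a + b * log x.
import math

def fit_power_law(xs, ys):
    """Return (a, b) of the power law best fitting the positive data points."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / sum((u - mx) ** 2 for u in lx)
    a = math.exp(my - b * mx)
    return a, b
```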

26 pages, 4885 KB  
Article
Reading Noise: Integrating Physiological Sensing and Sound-Driven Visualization to Externalize Noise-Related Cognitive Disruption During Reading
by Xueyi Li, Yonghong Liu, Zihui Jiang and Yangcheng Wang
Multimodal Technol. Interact. 2026, 10(4), 35; https://doi.org/10.3390/mti10040035 - 30 Mar 2026
Viewed by 312
Abstract
Environmental noise may interfere with the reading experience by increasing cognitive load and psychophysiological arousal, yet these effects are difficult to perceive and communicate in real time. This study presents Reading Noise, an interactive installation that combines physiological sensing and sound-driven visualization to externalize perceived noise-related disturbance and psychophysiological strain during reading. In a controlled experiment, 46 participants completed reading tasks under four levels of background conversational noise (0–30, 31–60, 61–90, and >90 dB) while ambient sound level, electrodermal activity (EDA), and electrocardiogram (ECG) were recorded in real time. Following data quality screening, inferential statistical analyses were performed on the analyzable physiological subset (n = 16). Based on these data, a hybrid mapping strategy combining rule-based assignment and LMM-informed exploratory calibration was developed to map acoustic and physiological changes onto dynamic text-based visual parameters, including deformation intensity, jitter, and motion instability, for real-time feedback. Within the analyzable subset, noise level was associated with significant changes in the recorded physiological indicators (all p < 0.05): skin conductance level (SCL) and skin conductance responses per minute (SCRs/min) increased (4.69 ± 2.13 to 5.93 ± 2.19 μS; 1.49 ± 1.59 to 2.51 ± 2.13), whereas the percentage of successive RR intervals differing by more than 50 ms (pNN50) and the root mean square of successive differences (RMSSD) decreased (15.84 ± 16.52% to 10.57 ± 11.35%; 36.63 ± 17.62 to 29.67 ± 16.66 ms). Subjective cognitive load also increased significantly (2.06 ± 0.29 to 6.38 ± 0.31). 
A follow-up installation study with 24 cross-disciplinary participants, with reported group interaction observations drawn from a 12-participant subset, suggested that the installation may facilitate shared interpretation of attention-related disruption and cognitive strain, indicating the potential of physiology-informed visual translation as a boundary object approach for empathetic, sound-mediated communication. Full article
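The HRV descriptors reported above, RMSSD and pNN50, have standard time-domain definitions over successive RR intervals. The sketch below computes them from a list of RR intervals in milliseconds; the sample values in the usage are illustrative, not the study's recordings.

```python
# Standard time-domain HRV descriptors from RR intervals (milliseconds).
import math

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences, in ms."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def pnn50(rr_ms):
    """Percentage of successive RR differences exceeding 50 ms."""
    diffs = [abs(b - a) for a, b in zip(rr_ms, rr_ms[1:])]
    return 100.0 * sum(d > 50 for d in diffs) / len(diffs)
```

Both decrease as parasympathetic activity drops, which is consistent with the direction of change the study reports under louder noise.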

35 pages, 51987 KB  
Article
Structurally Consistent and Grounding-Aware Stagewise Reasoning for Referring Remote Sensing Image Segmentation
by Shan Dong, Jianlin Xie, Liang Chen, He Chen, Baogui Qi and Yunqiu Ge
Remote Sens. 2026, 18(7), 1015; https://doi.org/10.3390/rs18071015 - 28 Mar 2026
Viewed by 261
Abstract
Referring Remote Sensing Image Segmentation (RRSIS) is a representative multimodal understanding task for remote sensing, which segments designated targets from remote sensing images according to free-form natural language descriptions. However, complex remote sensing characteristics, such as cluttered backgrounds, large-scale variations, small scattered targets and repetitive textures, lead to unstable visual grounding and further spatial grounding drift, resulting in inaccurate segmentation results. Existing approaches typically perform implicit visual–linguistic fusion across encoding and decoding stages, entangling spatial grounding with mask refinement. This tightly coupled formulation lacks explicit structural constraints and is prone to cross-modal ambiguity, especially in complex remote sensing layouts. To address these limitations, we propose a Structurally consistent and Grounding-aware Stagewise Reasoning Framework (SGSRF) that follows a grounding-first, segmentation-second paradigm. The framework decomposes inference into three cascaded stages with progressively imposed structural constraints. First, Cross-modal Consistency Refinement (CCR) lays the foundation for stable spatial grounding by enhancing visual–textual structural alignment via CLIP-based features and Structural Consistency Regularization (SCR), producing well-aligned multimodal representations and reliable grounding cues. Second, Grounding-aware Prompt Generation (GPG) bridges grounding and segmentation by converting aligned representations into complementary sparse and dense prompts, which serve as explicit grounding guidance for the segmentation model. Third, Grounding Modulated Segmentation (GMS) leverages the Segment Anything Model (SAM) to generate fine-grained mask prediction under the joint guidance of prompts and grounding cues, improving spatial grounding stability and robustness to background interference and scale variation. 
Extensive experiments on three remote sensing benchmarks, namely RefSegRS, RRSIS-D, and RISBench, demonstrate that SGSRF achieves state-of-the-art performance. The proposed stagewise paradigm integrates structural alignment, explicit grounding, and prompt-driven segmentation into a unified framework, providing a practical and robust solution for RRSIS in real-world Earth observation applications. Full article
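The visual–textual alignment that the CCR stage strengthens is, at its core, a similarity score between feature vectors from the two modalities. A minimal illustration is cosine similarity between a visual embedding and a text embedding; the vectors here are toy values, and the paper's actual alignment uses CLIP-based features with a structural regularizer.

```python
# Cosine similarity: the basic cross-modal affinity between two embeddings.
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)
```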

63 pages, 32785 KB  
Article
Cost-Effective TinyML-Ready Design and Field Deployment of a Solar-Powered Environmental Monitoring Data Collector Using LTE-M Communication
by Emanuel-Crăciun Trînc, Valentin Niţă, Cristina Stolojescu-Crisan, Cosmin Ancuţi, Răzvan Marius Mihai and Cristian Pațachia Sultănoiu
Appl. Sci. 2026, 16(7), 3237; https://doi.org/10.3390/app16073237 - 27 Mar 2026
Viewed by 437
Abstract
Environmental monitoring is essential for smart agriculture, renewable energy assessment, and climate-aware farm management. However, deploying autonomous sensing platforms in rural environments remains challenging because of energy constraints, communication reliability, and real-time processing requirements. This paper presents a modular, solar-powered environmental monitoring platform integrating LTE-M communication and TinyML-enabled edge sensing. The proposed system adopts a dual-microcontroller architecture that combines an Arduino Nano 33 BLE for real-time sensor acquisition and edge processing with an Arduino MKR NB 1500 dedicated to low-power wide-area communication. The platform integrates temperature, humidity, atmospheric pressure, rainfall, wind, and light sensors within a scalable framework. Two monitoring stations were deployed in rural regions of Romania to evaluate communication robustness, sensing stability, and energy autonomy. Field results demonstrated reliable LTE-M connectivity (4306 received signal strength indicator [RSSI] samples; mean −75.51 dBm) and strong agreement with a regional weather station, with mean deviations of −0.71 °C (temperature), 4.98% (humidity), and a stable pressure offset of 9.58 hPa attributable to altitude differences. Despite a total system cost of only €315, the platform achieved measurement performance comparable to that of professional meteorological stations while maintaining long-term solar-powered operation. The proposed architecture provides a scalable and cost-effective solution for distributed smart agriculture and environmental monitoring applications. Full article
(This article belongs to the Special Issue The Internet of Things (IoT) and Its Application in Monitoring)
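The agreement figures quoted against the regional weather station are mean signed deviations over paired readings, which is a one-line computation. The sample values below are illustrative, not the deployment's data.

```python
# Mean signed deviation (measured - reference) over paired sensor readings.
def mean_deviation(measured, reference):
    pairs = list(zip(measured, reference))
    return sum(m - r for m, r in pairs) / len(pairs)
```

A signed mean (rather than absolute error) is the right statistic here because a constant offset, like the altitude-related pressure bias the abstract mentions, shows up directly as a nonzero mean.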

22 pages, 3943 KB  
Article
Modeling and Manufacturing Error Analysis of a Magnetic Off-Axis Rotor Position Sensor for Synchronous Motors
by Selma Čorović, Kris Ambroželi, Roman Manko and Damijan Miljavec
Machines 2026, 14(4), 361; https://doi.org/10.3390/machines14040361 - 25 Mar 2026
Viewed by 377
Abstract
In the vehicle electrification sector, the precise and reliable control of e-motors is of the utmost importance for ensuring the efficient and safe operation of the whole electric vehicle drivetrain. Specifically, the assessment of the absolute rotor position of the permanent magnet-based synchronous motors is necessary for precise e-motor control, which is strongly determined by the precision of the sensing device used for the absolute rotor position assessment. Magnetic rotational position sensing devices/encoders are predominantly used in the automotive sector. The accuracy of a magnetic-based rotational position sensing device can be affected by defects/errors which may occur during its manufacturing and/or assembly process. These defects may in turn affect the accuracy of the e-motor’s control and operation. The primary objective of this study was to numerically and experimentally design and investigate the accuracy of a magnetic-based off-axis rotational position sensing device intended for the control of a new permanent magnet e-motor, which was developed for a two-wheeler electric vehicle drivetrain. First, a 3D parametric numerical model of a magnetic rotational position sensing device mounted on the motor shaft was built by virtue of the finite element method (FEM). Based on numerical simulations, the appropriate dimensions of the magnetic ring were determined and the possible errors which may have occurred during its manufacturing process have been numerically imposed and analyzed. Second, the rotor position sensing device was prototyped based on the recommendations obtained with the 3D FEM model. Finally, the accuracy of the designed rotational position device was then experimentally assessed by comparing it to a standardized end-of-shaft rotational position encoder. 
To evaluate the influence of the possible errors on the e-motor rotor position measurement, the output torque–speed characteristics of a real permanent magnet e-motor were experimentally assessed using two different rotational position devices. Based on the numerical and experimental results, we identified the manufacturing errors of the magnetic ring and analyzed their influence on the resulting output characteristics of the e-motor. The results revealed that the magnetic ring eccentricity and its magnetization process could affect the accuracy of the e-motor’s output torque characteristics. Full article
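The effect of ring eccentricity on the measured angle can be illustrated with the textbook geometric model: a ring of radius R whose centre is offset from the rotation axis by e produces an apparent angle that deviates from the true mechanical angle by roughly (e/R)·sin θ at first order. This is a generic approximation for intuition, not the paper's 3D FEM model.

```python
# Apparent-angle error from magnetic-ring eccentricity (offset along x-axis).
import math

def eccentricity_angle_error(theta, e, R):
    """Exact geometric error phi - theta for a ring point at true angle theta,
    with ring centre offset e from the rotation axis; theta in (-pi, pi)."""
    phi = math.atan2(R * math.sin(theta), e + R * math.cos(theta))
    return phi - theta
```

The error is periodic in θ with peak magnitude near e/R, which is why even small assembly offsets translate into a once-per-revolution position ripple and, through the controller, into torque ripple.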

30 pages, 5330 KB  
Review
Real-Time and Spatially Resolved Epigenetic Dynamics Tracking Beyond DNA Methylation via Live-Cell Epigenetic Sensors in 3D Systems
by Aqsa Tariq, Iram Naz, Fareeha Arshad, Raja Chinnappan, Tanveer Ahmad Mir, Mohammed Imran Khan and Ahmed Yaqinuddin
Biosensors 2026, 16(4), 188; https://doi.org/10.3390/bios16040188 - 25 Mar 2026
Viewed by 558
Abstract
Background: Gene expression and cellular identity are regulated by epigenetics that occurs through chromatin modifications, RNA changes, chromatin accessibility, and three-dimensional genome organization. Although DNA methylation has been the focus of most epigenetics studies in the past, other non-methyl epigenetic processes, including histone post-translational modifications (PTMs), epitranscriptomic marks, and chromatin remodeling, are dynamic, reversible, and context-dependent, and thus are difficult to accurately interrogate using endpoint sequencing-based assays, especially in heterogeneous tissues, developing systems, and therapeutic response environments. Scope and Approach: The present review discusses epigenetic modifications other than DNA methylation regarding sensor-based technologies that can measure live, dynamic, and spatially resolved measurements. Epigenetic sensors include any genetically encoded sensors (GECs) based on resonance energy transfer, CRISPR/dCas-derived sensors, or aptamer-based sensors, and hybrid biochemical/imaging sensors that can be used in live or semi-live settings. It lays emphasis on the technologies, which have been developed recently, that allow real-time kinetic measurements, working in three-dimensional and organoid models, and being applied to disease-relevant perturbations. On these platforms, performance properties such as specificity, sensitivity, spatial and temporal resolution, ability to perform dynamic versus locus-specific interrogation, and perturbed endogenous chromatin states are compared. Key Conclusions and Outlook: Together, these sensing strategies are complementary to the traditional methods of measuring epigenomics in that they show epigenetic dynamics unobservable with static measurements. We list the important technical issues, including specificity, quantitation, multiplexing, and chromatin perturbation, and report the barriers and solutions in development and design. 
Lastly, we provide a conceptual map of how live epigenetic sensing and multi-omics and translational models can be integrated, and how the two methodologies can be used to develop functional epigenetics and guide disease modeling and drug development. Full article
(This article belongs to the Section Biosensors and Healthcare)

31 pages, 16969 KB  
Article
Research on Cooperative Vehicle–Infrastructure Perception Integrating Enhanced Point-Cloud Features and Spatial Attention
by Shiyang Yan, Yanfeng Wu, Zhennan Liu and Chengwei Xie
World Electr. Veh. J. 2026, 17(4), 164; https://doi.org/10.3390/wevj17040164 - 24 Mar 2026
Viewed by 334
Abstract
Vehicle–infrastructure cooperative perception (VICP) extends the sensing capability of single-vehicle systems by integrating multi-source information from onboard and roadside sensors, thereby alleviating limitations in sensing range and field-of-view coverage. However, in complex urban environments, the robustness of such systems—particularly in terms of blind-spot coverage and feature representation—is severely affected by both static and dynamic occlusions, as well as distance-induced sparsity in point cloud data. To address these challenges, a 3D object detection framework incorporating point cloud feature enhancement and spatially adaptive fusion is proposed. First, to mitigate feature degradation under sparse and occluded conditions, a Redefined Squeeze-and-Excitation Network (R-SENet) attention module is integrated into the feature encoding stage. This module employs a dual-dimensional squeeze-and-excitation mechanism operating across pillars and intra-pillar points, enabling adaptive recalibration of critical geometric features. In addition, a Feature Pyramid Backbone Network (FPB-Net) is designed to improve target representation across varying distances through multi-scale feature extraction and cross-layer aggregation. Second, to address feature heterogeneity and spatial misalignment between heterogeneous sensing agents, a Spatial Adaptive Feature Fusion (SAFF) module is introduced. By explicitly encoding the origin of features and leveraging spatial attention mechanisms, the SAFF module enables dynamic weighting and complementary fusion between fine-grained vehicle-side features and globally informative roadside semantics. Extensive experiments conducted on the DAIR-V2X benchmark and a custom dataset demonstrate that the proposed approach outperforms several state-of-the-art methods. 
Specifically, Average Precision (AP) scores of 0.762 and 0.694 are achieved at an IoU threshold of 0.5, while AP scores of 0.617 and 0.563 are obtained at an IoU threshold of 0.7 on the two datasets, respectively. Furthermore, the proposed framework maintains real-time inference performance, highlighting its effectiveness and practical potential for real-world deployment. Full article
(This article belongs to the Section Automated and Connected Vehicles)
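The AP figures above are reported at IoU thresholds of 0.5 and 0.7, where IoU (intersection over union) measures the overlap between a predicted and a ground-truth box. For 2D axis-aligned boxes the computation is a few lines; 3D detection uses the rotated-box analogue, so this is a simplified illustration.

```python
# IoU of two axis-aligned boxes given as (x1, y1, x2, y2).
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))  # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))  # intersection height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```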

23 pages, 5784 KB  
Article
Learning Italian Hand Gesture Culture Through an Automatic Gesture Recognition Approach
by Chiara Innocente, Giorgio Di Pisa, Irene Lionetti, Andrea Mamoli, Manuela Vitulano, Giorgia Marullo, Simone Maffei, Enrico Vezzetti and Luca Ulrich
Future Internet 2026, 18(4), 177; https://doi.org/10.3390/fi18040177 - 24 Mar 2026
Viewed by 231
Abstract
Italian hand gestures constitute a distinctive and widely recognized form of nonverbal communication, deeply embedded in everyday interaction and cultural identity. Despite their prominence, these gestures are rarely formalized or systematically taught, posing challenges for foreign speakers and visitors seeking to interpret their meaning and pragmatic use. Moreover, their ephemeral and embodied nature complicates traditional preservation and transmission approaches, positioning them within the broader domain of intangible cultural heritage. This paper introduces a machine learning–based framework for recognizing iconic Italian hand gestures, designed to support cultural learning and engagement among foreign speakers and visitors. The approach combines RGB–D sensing with depth-enhanced geometric feature extraction, employing interpretable classification models trained on a purpose-built dataset. The recognition system is integrated into a non-immersive virtual reality application simulating an interactive digital totem conceived for public arrival spaces, providing tutorial content, real-time gesture recognition, and immediate feedback within a playful and accessible learning environment. Three supervised machine learning pipelines were evaluated, and Random Forest achieved the best overall performance. Its integration with an Isolation Forest module was further considered for deployment, achieving a macro-averaged accuracy and F1-score of 0.82 under a 5-fold cross-validation protocol. An experimental user study was conducted with 25 subjects to evaluate the proposed interactive system in terms of usability, user engagement, and learning effectiveness, obtaining favorable results and demonstrating its potential as a practical tool for cultural education and intercultural communication. Full article
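The macro-averaged F1 score reported for the deployed classifier is the unweighted mean of per-class F1 values, so rare gestures count as much as common ones. A self-contained reference computation, on toy labels rather than the study's data:

```python
# Macro-averaged F1 from true and predicted labels.
def macro_f1(y_true, y_pred, labels):
    """Unweighted mean of per-class F1 scores over `labels`."""
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```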

28 pages, 22901 KB  
Article
IAMS (Interior-Anchored Mean-Shift) Algorithm for Supervoxel Segmentation of Airborne LiDAR Roof Points
by Hanyu Zhou, Liang Zhang, Zhiyue Zhang, Haiqiong Yang, Xiongfei Tang, Hongchao Ma and Chunjing Yao
Remote Sens. 2026, 18(6), 965; https://doi.org/10.3390/rs18060965 - 23 Mar 2026
Viewed by 231
Abstract
Accurate building roof classification from airborne LiDAR point clouds is fundamental to reliable three-dimensional (3D) urban reconstruction. While supervoxel-based methods offer efficiency and resilience to uneven point density, their performance is critically undermined by cross-boundary segmentation errors—a direct consequence of random seed initialization that merges geometrically similar yet semantically distinct objects. To address this root cause, this study proposes Interior-Anchored Mean-Shift (IAMS), a novel supervoxel segmentation framework that rethinks seed placement as a geometry-aware interior localization problem. By integrating local geometric consistency, point density, and spatial correlation into a unified kernel density estimator, supplemented by density-adaptive voxel weighting and a semi-variogram-driven bandwidth, IAMS reliably anchors seeds within object interiors, yielding highly homogeneous supervoxels without post-processing. Extensive experiments on three diverse airborne LiDAR datasets demonstrated that IAMS consistently outperformed state-of-the-art baselines. On the International Society for Photogrammetry and Remote Sensing (ISPRS) Vaihingen benchmark, our approach improved roof classification completeness, correctness, and quality by up to 7.1% (per-object) over the conventional Voxel Cloud Connectivity Segmentation (VCCS) algorithm while being significantly faster than recent boundary-preserving alternatives. Critically, IAMS maintains robust performance under challenging conditions, including sparse sampling and dense vegetation occlusion, making it a practical solution for real-world urban remote sensing. Full article
(This article belongs to the Section Urban Remote Sensing)
30 pages, 2392 KB  
Review
Lab-on-a-Chip and Microfluidics Technologies for Nano Drug Delivery
by Bochun Guo, Yuchao Zhao and Xunli Zhang
Bioengineering 2026, 13(3), 363; https://doi.org/10.3390/bioengineering13030363 - 20 Mar 2026
Viewed by 850
Abstract
Lab-on-a-Chip (LoC) and microfluidic technologies are rapidly reshaping the development pipeline for nano drug delivery systems (DDSs) by enabling precise control of physicochemical properties, high-throughput screening, and integrated biological evaluation within miniaturized platforms. This review synthesizes recent advances in microfluidic principles, fabrication strategies, and sensing modalities that facilitate continuous flow synthesis, real-time characterization, and adaptive formulation of nanoparticles. We highlight how LoC-enabled systems improve monodispersity, reproducibility, and tunability of liposomes, polymeric nanoparticles, and metallic nanocarriers, while providing powerful tools for assessing pharmacokinetics, drug release, and systemic responses using organ-on-chip (OoC) models. Emerging trends, including AI-driven autonomous optimization, stimuli-responsive materials, 3D-printed hybrid architectures, and self-powered portable devices, are discussed in the context of future integrated nano-pharmaceutics platforms. Despite existing challenges related to biocompatibility, standardization, data integration, and translation to industrial and clinical applications, the synergistic evolution of LoC engineering and nanomedicine holds transformative potential for personalized and next-generation therapeutic strategies. Full article
(This article belongs to the Special Issue Bioengineering Platforms for Drug Delivery)
25 pages, 6467 KB  
Review
Ultrasound Patches Toward Intelligent Theranostics: From Flexible Materials to Closed-Loop Biomedical Systems
by Jinpeng Zhao, Yi Huang, Yuan Zhang, Yuhang Xie, Wei Guo, Yang Li and Shidong Wang
Bioengineering 2026, 13(3), 345; https://doi.org/10.3390/bioengineering13030345 - 17 Mar 2026
Viewed by 530
Abstract
Ultrasound patches represent a transformative advancement beyond conventional ultrasonography, evolving into intelligent theranostic systems for personalized healthcare. This evolution is propelled by synergistic innovations in flexible piezoelectric materials and integrated designs. The development of piezoelectric polymers, lead-free ceramics, and bio-composite materials has laid the foundation for long-term, conformal, and biosafe interfacing with the human body. Structurally, miniaturized transducer arrays (e.g., CMOS-integrated arrays achieving ~200 μm focal spots and 100 kPa focal pressure), multimodal integration, and bioinspired interfaces have enabled high-precision deep-tissue sensing and spatiotemporally controlled energy delivery—exemplified by strain-sensing feedback improving the signal-to-noise ratio by 5 dB for precise neuromodulation. These capabilities are converging to create closed-loop platforms, as demonstrated in continuous cardiovascular monitoring (up to 164 mm depth for 12 h), image-guided neuromodulation for neurological disorders, on-demand drug delivery (achieving 100% higher plasma concentration than ultrasound alone), and integrated tumor therapy with real-time feedback. Despite persistent challenges in material biocompatibility, energy efficiency, and clinical standardization, the future of ultrasound patches lies in their deep integration with multimodal sensing, machine learning, and adaptive control algorithms. This path will ultimately realize their potential for intelligent, closed-loop theranostics in chronic disease management, telemedicine, and personalized therapy. Full article
(This article belongs to the Section Biomedical Engineering and Biomaterials)
30 pages, 11789 KB  
Article
A Multi-Source Data Fusion-Based Method for Safety Monitoring of Construction Workers on Concrete Placement Surfaces
by Jijiang Chen, Zijun Zhang, Xiao Sun, Yanyin Zhou, Yao Zhou, Yingjie Zhao and Jun Shi
Buildings 2026, 16(6), 1165; https://doi.org/10.3390/buildings16061165 - 16 Mar 2026
Viewed by 231
Abstract
Concrete placement surfaces are characterized by intensive construction processes, frequent equipment interactions, and strong spatial dynamics, which make it difficult to identify unsafe actions of construction workers in real time and to accurately quantify and warn about regional safety risks. To address these challenges, this study proposes a safety monitoring method for construction workers operating on complex concrete placement surfaces. First, a coupled risk assessment framework integrating regional hazard levels, unsafe action risks, and worker authorization is established based on trajectory intersection theory (TIT). Subsequently, a multi-source continuous sensing system is developed by integrating global navigation satellite system (GNSS) positioning, inertial measurement unit (IMU)-based human activity recognition (HAR) using a BiLSTM-Attention model, and unmanned aerial vehicle (UAV)-based 3D realistic scene modeling. On this basis, real-time visualization and risk warning of worker trajectories, action states, and spatial risks are achieved through multi-source data fusion and a WebGL-based visualization platform. Field validation results indicate that the proposed system can generate alarm outputs that are consistent with the predefined risk rules within 3 s in typical construction scenarios, demonstrating rule-consistent real-time feasibility and stable system response performance. Full article
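The coupled risk assessment the abstract describes — combining a regional hazard level, an unsafe-action risk, and worker authorization into a single alarm decision — can be illustrated with a toy rule. Everything here is a hedged sketch: the function name, the linear weighting, the threshold, and the authorization override are assumptions for illustration, not the paper's trajectory-intersection formulation.

```python
def coupled_risk_alarm(region_hazard, action_risk, authorized,
                       weights=(0.6, 0.4), threshold=0.5):
    """Toy coupling rule in the spirit of the described framework.

    `region_hazard` and `action_risk` are assumed normalized to [0, 1];
    an unauthorized worker inside any hazardous region triggers an
    alarm outright, otherwise the two risk terms are blended with
    illustrative `weights` and compared against `threshold`.
    """
    if not authorized and region_hazard > 0.0:
        return True  # authorization violation dominates the decision
    score = weights[0] * region_hazard + weights[1] * action_risk
    return score >= threshold

# An authorized worker performing a risky action in a high-hazard zone:
# 0.6 * 0.9 + 0.4 * 0.4 = 0.70, above the 0.5 threshold.
alarm = coupled_risk_alarm(region_hazard=0.9, action_risk=0.4, authorized=True)
```

In a real deployment the two inputs would come from the fused sensing streams the abstract lists (GNSS trajectories for the regional term, BiLSTM-Attention HAR output for the action term), with the rule evaluated inside the visualization platform's alert loop.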
(This article belongs to the Section Construction Management, and Computers & Digitization)