Search Results (132)

Search Parameters:
Keywords = 3D multi-object tracking

15 pages, 1518 KB  
Article
Biophysical Features of Outer Membrane Vesicles (OMVs) from Pathogenic Escherichia coli: Methodological Implications for Reproducible OMV Characterization
by Giorgia Barbieri, Linda Maurizi, Maurizio Zini, Federica Fratini, Agostina Pietrantoni, Ilaria Bellini, Serena Cavallero, Eleonora D’Intino, Federica Rinaldi, Paola Chiani, Valeria Michelacci, Stefano Morabito, Barbara Chirullo and Catia Longhi
Antibiotics 2026, 15(2), 117; https://doi.org/10.3390/antibiotics15020117 - 26 Jan 2026
Viewed by 80
Abstract
Background/Objectives: Bacterial outer membrane vesicles (OMVs) play a role in bacterial communication, virulence, antimicrobial resistance, and host–pathogen interaction. OMV isolation is a key step for studying these particles’ functions; nevertheless, isolation procedures can greatly influence the yield, purity, and structural integrity of OMVs, thereby affecting downstream biological analyses and functional interpretation. Methods: In this study, we compared the efficacy of two OMV isolation techniques, differential ultracentrifugation (dUC) and size-exclusion chromatography (SEC), in separating and concentrating vesicles produced by two Escherichia coli strains belonging to uropathogenic (UPEC) and Shiga toxin-producing (STEC) pathotypes. The isolated OMVs were characterized using a multi-analytical approach including transmission and scanning electron microscopy (TEM, SEM), nanoparticle tracking analysis (NTA), dynamic light scattering (DLS), ζ-potential measurement, and protein quantification to assess the purity of the preparations. Results: Samples obtained by dUC exhibited higher total protein content, broader particle size distributions, and more pronounced contamination by non-vesicular material. In contrast, SEC yielded morphologically homogeneous and structurally well-preserved vesicles, higher particle-to-protein ratios, and lower total protein content, reflecting reduced co-isolation of protein aggregates. NTA and DLS analyses revealed polydisperse populations in samples obtained with both isolation methods, with DLS measurements highlighting the contribution of larger or transient aggregates. ζ-potential values were close to neutrality for all samples, consistent with limited electrostatic repulsion and with the aggregation tendencies observed in some preparations. Conclusions: This study describes the features of OMVs produced by two relevant E. coli strains under two isolation strategies, which exert method- and strain-dependent effects on vesicle properties, including size distribution and surface charge, and emphasizes the trade-offs between yield, purity, and vesicle integrity. Full article

21 pages, 15860 KB  
Article
Robot Object Detection and Tracking Based on Image–Point Cloud Instance Matching
by Hongxing Wang, Rui Zhu, Zelin Ye and Yaxin Li
Sensors 2026, 26(2), 718; https://doi.org/10.3390/s26020718 - 21 Jan 2026
Viewed by 173
Abstract
Effectively fusing the rich semantic information from camera images with the high-precision geometric measurements provided by LiDAR point clouds is a key challenge in mobile robot environmental perception. To address this problem, this paper proposes a highly extensible instance-aware fusion framework designed to achieve efficient alignment and unified modeling of heterogeneous sensory data. The proposed approach adopts a modular processing pipeline. First, semantic instance masks are extracted from RGB images using an instance segmentation network, and a projection mechanism is employed to establish spatial correspondences between image pixels and LiDAR point cloud measurements. Subsequently, three-dimensional bounding boxes are reconstructed through point cloud clustering and geometric fitting, and a reprojection-based validation mechanism is introduced to ensure consistency across modalities. Building upon this representation, the system integrates a data association module with a Kalman filter-based state estimator to form a closed-loop multi-object tracking framework. Experimental results on the KITTI dataset demonstrate that the proposed system achieves strong 2D and 3D detection performance across different difficulty levels. In multi-object tracking evaluation, the method attains a MOTA score of 47.8 and an IDF1 score of 71.93, validating the stability of the association strategy and the continuity of object trajectories in complex scenes. Furthermore, real-world experiments on a mobile computing platform show an average end-to-end latency of only 173.9 ms, while ablation studies further confirm the effectiveness of individual system components. Overall, the proposed framework exhibits strong performance in terms of geometric reconstruction accuracy and tracking robustness, and its lightweight design and low latency satisfy the stringent requirements of practical robotic deployment. Full article
(This article belongs to the Section Sensors and Robotics)
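The pixel-to-point correspondence step described above amounts to projecting LiDAR points through the camera projection matrix and testing membership in an instance mask. A minimal sketch, assuming a known 3x4 projection matrix P and a boolean segmentation mask; all values below are illustrative, not the authors' setup:

```python
# Illustrative sketch (not the authors' code): assign LiDAR points to an image
# instance mask via pinhole projection.
import numpy as np

def points_in_instance_mask(points_xyz, P, mask):
    """Return the subset of 3D points whose image projection lies inside `mask`.

    points_xyz : (N, 3) points in the frame P projects from.
    P          : (3, 4) camera projection matrix.
    mask       : (H, W) boolean instance mask.
    """
    N = points_xyz.shape[0]
    homo = np.hstack([points_xyz, np.ones((N, 1))])        # homogeneous coordinates
    uvw = (P @ homo.T).T                                   # (N, 3)
    in_front = uvw[:, 2] > 0                               # keep points ahead of the camera
    uv = uvw[:, :2] / np.clip(uvw[:, 2:3], 1e-6, None)     # perspective divide
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    H, W = mask.shape
    in_image = (u >= 0) & (u < W) & (v >= 0) & (v < H) & in_front
    hits = np.zeros(N, dtype=bool)
    hits[in_image] = mask[v[in_image], u[in_image]]
    return points_xyz[hits]

# Toy usage: a fake mask covering the image centre and two synthetic points.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
P = np.hstack([K, np.zeros((3, 1))])
mask = np.zeros((480, 640), dtype=bool)
mask[200:280, 280:360] = True
pts = np.array([[0.0, 0.0, 10.0], [5.0, 0.0, 10.0]])
print(points_in_instance_mask(pts, P, mask))               # only the first point hits
```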

23 pages, 21878 KB  
Article
STC-SORT: A Dynamic Spatio-Temporal Consistency Framework for Multi-Object Tracking in UAV Videos
by Ziang Ma, Chuanzhi Chen, Jinbao Chen and Yuhan Jiang
Appl. Sci. 2026, 16(2), 1062; https://doi.org/10.3390/app16021062 - 20 Jan 2026
Viewed by 109
Abstract
Multi-object tracking (MOT) in videos captured by Unmanned Aerial Vehicles (UAVs) is critically challenged by significant camera ego-motion, frequent occlusions, and complex object interactions. To address the limitations of conventional trackers that depend on static, rule-based association strategies, this paper introduces STC-SORT, a novel tracking framework whose core is a two-level reasoning architecture for data association. First, a Spatio-Temporal Consistency Graph Network (STC-GN) models inter-object relationships via graph attention to learn adaptive weights for fusing motion, appearance, and geometric cues. Second, these dynamic weights are integrated into a 4D association cost volume, enabling globally optimal matching across a temporal window. When integrated with an enhanced AEE-YOLO detector, STC-SORT achieves significant and statistically robust improvements on major UAV tracking benchmarks. It elevates MOTA by 13.0% on UAVDT and 6.5% on VisDrone, while boosting IDF1 by 9.7% and 9.9%, respectively. The framework also maintains real-time inference speed (75.5 FPS) and demonstrates substantial reductions in identity switches. These results validate STC-SORT as having strong potential for robust multi-object tracking in challenging UAV scenarios. Full article
(This article belongs to the Section Aerospace Science and Engineering)
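The final association stage described above reduces to solving an assignment problem over a fused cost matrix. A generic sketch of that last step with fixed fusion weights (the paper learns the weights with its graph attention network; the matrices, weights, and gate threshold below are illustrative assumptions):

```python
# Minimal sketch of cue fusion plus assignment; costs are assumed to be
# pre-computed (num_tracks, num_detections) matrices in [0, 1].
import numpy as np
from scipy.optimize import linear_sum_assignment

def fuse_and_match(motion_cost, appearance_cost, geometry_cost,
                   weights=(0.5, 0.3, 0.2), gate=0.8):
    """Fuse per-cue costs with given weights and solve the assignment problem."""
    w_m, w_a, w_g = weights
    cost = w_m * motion_cost + w_a * appearance_cost + w_g * geometry_cost
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= gate]
    matched_r = {r for r, _ in matches}
    matched_c = {c for _, c in matches}
    unmatched_tracks = [r for r in range(cost.shape[0]) if r not in matched_r]
    unmatched_dets = [c for c in range(cost.shape[1]) if c not in matched_c]
    return matches, unmatched_tracks, unmatched_dets

rng = np.random.default_rng(0)
m, a, g = rng.random((3, 4)), rng.random((3, 4)), rng.random((3, 4))
print(fuse_and_match(m, a, g))
```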

41 pages, 7497 KB  
Article
Vertically Constrained LiDAR-Inertial SLAM in Dynamic Environments
by Shuangfeng Wei, Junfeng Qiu, Anpeng Shen, Keming Qu and Tong Yang
Appl. Sci. 2026, 16(2), 1046; https://doi.org/10.3390/app16021046 - 20 Jan 2026
Viewed by 86
Abstract
With the advancement of Light Detection and Ranging (LiDAR) technology and computer science, LiDAR–Inertial Simultaneous Localization and Mapping (SLAM) has become essential in autonomous driving, robotic navigation, and 3D reconstruction. However, dynamic objects such as pedestrians and vehicles, together with complex terrain conditions, pose serious challenges to existing SLAM systems. These factors introduce artifacts into the acquired point clouds and result in significant vertical drift in SLAM trajectories. To address these challenges, this study focuses on controlling vertical drift errors in LiDAR–Inertial SLAM systems operating in dynamic environments. The work addresses three key aspects: ground point segmentation, dynamic artifact removal, and vertical drift optimization. To improve the robustness of ground point segmentation, this study proposes a method based on a concentric sector model. This method divides point clouds into concentric regions and fits flat surfaces within each region to accurately extract ground points. To mitigate the impact of dynamic objects on map quality, this study proposes a removal algorithm that combines multi-frame residual analysis with curvature-based filtering. Specifically, the algorithm tracks residual changes in non-ground points across consecutive frames to detect inconsistencies caused by motion, while curvature features are used to further distinguish moving objects from static structures. This combined approach enables effective identification and removal of dynamic artifacts, resulting in a reduction in vertical drift. Full article
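The concentric sector idea can be illustrated with simple range-and-azimuth binning followed by a per-cell plane fit. The sketch below is a generic approximation, not the paper's implementation, and its cell counts and thresholds are arbitrary:

```python
# Minimal concentric-sector ground segmentation sketch: bin points by range and
# azimuth, fit a plane per occupied cell, mark points close to that plane as ground.
import numpy as np

def ground_mask(points, n_rings=4, n_sectors=16, max_range=50.0, dist_thresh=0.15):
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng_idx = np.clip((np.hypot(x, y) / max_range * n_rings).astype(int), 0, n_rings - 1)
    sec_idx = ((np.arctan2(y, x) + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    ground = np.zeros(len(points), dtype=bool)
    for r in range(n_rings):
        for s in range(n_sectors):
            idx = np.where((rng_idx == r) & (sec_idx == s))[0]
            if len(idx) < 3:
                continue
            # Fit z = a*x + b*y + c by least squares over the cell's points.
            A = np.c_[x[idx], y[idx], np.ones(len(idx))]
            coeff, *_ = np.linalg.lstsq(A, z[idx], rcond=None)
            residual = np.abs(A @ coeff - z[idx])
            ground[idx] = residual < dist_thresh
    return ground

pts = np.random.default_rng(1).uniform([-20, -20, -0.05], [20, 20, 0.05], (500, 3))
pts = np.vstack([pts, [[5.0, 5.0, 1.5]]])          # one clearly non-ground point
mask = ground_mask(pts)
print(mask.sum(), "ground points of", len(pts))
```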

24 pages, 13052 KB  
Article
FGO-PMB: A Factor Graph Optimized Poisson Multi-Bernoulli Filter for Accurate Online 3D Multi-Object Tracking
by Jingyi Jin, Jindong Zhang, Yiming Wang and Yitong Liu
Sensors 2026, 26(2), 591; https://doi.org/10.3390/s26020591 - 15 Jan 2026
Viewed by 190
Abstract
Three-dimensional multi-object tracking (3D MOT) plays a vital role in enabling reliable perception for LiDAR-based autonomous systems. However, LiDAR measurements often exhibit sparsity, occlusion, and sensor noise that lead to uncertainty and instability in downstream tracking. To address these challenges, we propose FGO-PMB, a unified probabilistic framework that integrates the Poisson Multi-Bernoulli (PMB) filter from Random Finite Set (RFS) theory with Factor Graph Optimization (FGO) for robust LiDAR-based object tracking. In the proposed framework, object states, existence probabilities, and association weights are jointly formulated as optimizable variables within a factor graph. Four factors, including state transition, observation, existence, and association consistency, are formulated to uniformly encode the spatio-temporal constraints among these variables. By unifying the uncertainty modeling capability of RFS with the global optimization strength of FGO, the proposed framework achieves temporally consistent and uncertainty-aware estimation across continuous LiDAR scans. Experiments on KITTI and nuScenes indicate that the proposed method achieves competitive 3D MOT accuracy while maintaining real-time performance. Full article
(This article belongs to the Special Issue Recent Advances in LiDAR Sensing Technology for Autonomous Vehicles)
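The factor-graph formulation can be made concrete with a much smaller example: jointly optimizing one track's states under constant-velocity transition factors and position observation factors. The sketch below uses scipy's nonlinear least-squares solver and omits the paper's existence and association factors as well as the PMB machinery; all noise values are assumed:

```python
# Illustrative sketch of the factor-graph idea only, applied to a single track.
import numpy as np
from scipy.optimize import least_squares

dt, T = 0.1, 10
true_x0, true_v = np.array([0.0, 0.0]), np.array([1.0, 0.5])
meas = np.array([true_x0 + true_v * dt * k for k in range(T)])
meas += np.random.default_rng(2).normal(0, 0.05, meas.shape)   # noisy position observations

def residuals(flat):
    # State per frame: [px, py, vx, vy]; stack transition and observation residuals.
    X = flat.reshape(T, 4)
    res = []
    for k in range(T - 1):                                     # transition factors
        pred = np.r_[X[k, :2] + dt * X[k, 2:], X[k, 2:]]
        res.append((X[k + 1] - pred) / 0.02)
    for k in range(T):                                         # observation factors
        res.append((X[k, :2] - meas[k]) / 0.05)
    return np.concatenate(res)

x0 = np.hstack([meas, np.zeros((T, 2))]).ravel()               # initial guess from measurements
sol = least_squares(residuals, x0)
est = sol.x.reshape(T, 4)
print("estimated velocity at last frame:", est[-1, 2:])        # should approach (1.0, 0.5)
```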

26 pages, 3417 KB  
Article
Optimal Fractional Order PID Controller Design for Hydraulic Turbines Using a Multi-Objective Imperialist Competitive Algorithm
by Mohamed Nejlaoui, Abdullah Alghafis and Nasser Ayidh Alqahtani
Fractal Fract. 2026, 10(1), 46; https://doi.org/10.3390/fractalfract10010046 - 11 Jan 2026
Cited by 1 | Viewed by 193
Abstract
This paper introduces a novel approach for designing a Fractional Order Proportional-Integral-Derivative (FOPID) controller for the Hydraulic Turbine Regulating System (HTRS), aiming to overcome the challenge of tuning its five complex parameters (Kp, Ki, Kd, λ, and μ). The design is formulated as a multi-objective optimization problem, minimized using the Multi-Objective Imperialist Competitive Algorithm (MOICA). The goal is to minimize two key transient performance metrics: the Integral of Squared Error (ISE) and the Integral of the Time Multiplied Squared Error (ITSE). MOICA efficiently generates a Pareto-front of non-dominated solutions, providing control system designers with diverse trade-off options. The resulting optimal FOPID controller demonstrated superior robustness when evaluated against simulated variations in key HTRS parameters (mg, eg, and Tw). Comparative simulations against an optimally tuned integer-order PID and established literature methods (FOPID-GA, FOPID-MOPSO and FOPID-MOHHO) confirm the enhanced dynamic response and stable operation of the MOICA-based FOPID. The MOICA-tuned FOPID demonstrated superior performance for Setpoint Tracking, achieving up to a 26% faster settling speed (ITSE) and an 8% higher accuracy (ISE). Furthermore, for Disturbance Rejection, it showed enhanced robustness, leading to up to a 23% quicker recovery speed (ITSE) and an 18.9% greater error suppression (ISE). Full article
(This article belongs to the Section Engineering)
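The two objectives named in this abstract, ISE and ITSE, are easy to reproduce on a toy loop. The sketch below evaluates them on the step response of a first-order plant under an ordinary integer-order PID (the fractional operators of a true FOPID are not modeled); the plant and gain values are arbitrary:

```python
# Worked example of ISE = sum(e^2)*dt and ITSE = sum(t*e^2)*dt on a toy closed loop.
import numpy as np

def closed_loop_step(kp, ki, kd, tau=1.0, dt=0.001, t_end=10.0):
    """Simulate y' = (-y + u)/tau with u = PID(e), e = 1 - y; return (t, e)."""
    n = int(t_end / dt)
    t = np.arange(n) * dt
    y, integ, prev_e = 0.0, 0.0, 1.0        # prev_e = 1 avoids a derivative kick at t = 0
    err = np.empty(n)
    for k in range(n):
        e = 1.0 - y
        integ += e * dt
        deriv = (e - prev_e) / dt
        u = kp * e + ki * integ + kd * deriv
        y += dt * (-y + u) / tau            # forward-Euler plant update
        prev_e = e
        err[k] = e
    return t, err

t, e = closed_loop_step(kp=2.0, ki=1.0, kd=0.1)
dt = t[1] - t[0]
ise = float(np.sum(e**2) * dt)              # Integral of Squared Error
itse = float(np.sum(t * e**2) * dt)         # Integral of Time-multiplied Squared Error
print(f"ISE = {ise:.4f}, ITSE = {itse:.4f}")
```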

57 pages, 12554 KB  
Article
Multi-Fidelity Surrogate Models for Accelerated Multi-Objective Analog Circuit Design and Optimization
by Gianluca Cornetta, Abdellah Touhafi, Jorge Contreras and Alberto Zaragoza
Electronics 2026, 15(1), 105; https://doi.org/10.3390/electronics15010105 - 25 Dec 2025
Viewed by 568
Abstract
This work presents a unified framework for multiobjective analog circuit optimization that combines surrogate modeling, uncertainty-aware evolutionary search, and adaptive high-fidelity verification. The approach integrates ensemble regressors and graph-based surrogate models with a closed-loop multi-fidelity controller that selectively invokes SPICE evaluations based on predictive uncertainty and diversity criteria. The framework includes reproducible caching, metadata tracking, and process- and Dask-based parallelism to reduce redundant simulations and improve throughput. The methodology is evaluated on four CMOS operational-amplifier topologies using NSGA-II, NSGA-III, SPEA2, and MOEA/D under a uniform configuration to ensure fair comparison. Surrogate-Guided Optimization (SGO) replaces approximately 96.5% of SPICE calls with fast model predictions, achieving about a 20× reduction in total simulation time while maintaining close agreement with ground-truth Pareto fronts. Multi-Fidelity Optimization (MFO) further improves robustness through adaptive verification, reducing SPICE usage by roughly 90%. The results show that the proposed workflow provides substantial computational savings with consistent Pareto-front quality across circuit families and algorithms. The framework is modular and extensible, enabling quantitative evaluation of analog circuits with significantly reduced simulation cost. Full article
(This article belongs to the Special Issue Machine/Deep Learning Applications and Intelligent Systems)
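The multi-fidelity gating idea, calling the expensive simulator only when the surrogate is uncertain, can be sketched independently of any circuit detail. Below, a bootstrap ensemble of polynomial fits stands in for the surrogate and a cheap analytic function stands in for SPICE; every name and threshold is an assumption for illustration:

```python
# Schematic of an uncertainty-gated multi-fidelity loop with result caching.
import numpy as np

def expensive_eval(x):                     # stand-in for a high-fidelity SPICE run
    return np.sin(3 * x) + 0.1 * x

class BootstrapSurrogate:
    """Ensemble of polynomial fits; the spread across members is the uncertainty."""
    def __init__(self, xs, ys, n_members=20, degree=3, seed=0):
        rng = np.random.default_rng(seed)
        self.members = []
        for _ in range(n_members):
            idx = rng.integers(0, len(xs), len(xs))            # bootstrap resample
            self.members.append(np.polyfit(xs[idx], ys[idx], degree))
    def predict(self, x):
        preds = np.array([np.polyval(c, x) for c in self.members])
        return preds.mean(), preds.std()

xs = np.linspace(-1, 1, 15)
surrogate = BootstrapSurrogate(xs, expensive_eval(xs))
cache, n_expensive = {}, 0

def evaluate(x, std_gate=0.05):
    """Use the surrogate when it is confident; otherwise verify with the expensive model."""
    global n_expensive
    if x in cache:
        return cache[x]
    mean, std = surrogate.predict(x)
    if std <= std_gate:
        value = mean
    else:
        value = expensive_eval(x)          # high-fidelity verification
        n_expensive += 1
    cache[x] = value
    return value

for x in np.linspace(-1.5, 1.5, 31):       # queries outside the training range are less certain
    evaluate(float(x))
print(f"expensive calls: {n_expensive} of 31")
```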

18 pages, 10407 KB  
Article
Multi-Object Tracking with Distributed Drones’ RGB Cameras Considering Object Localization Uncertainty
by Xin Liao, Bohui Fang, Weiyu Shao, Wenxing Fu and Tao Yang
Drones 2025, 9(12), 867; https://doi.org/10.3390/drones9120867 - 16 Dec 2025
Viewed by 447
Abstract
Reliable 3D multi-object tracking (MOT) using distributed drones remains challenging due to the lack of active sensing and the ambiguity in associating detections from different views. This paper presents a passive sensing framework that integrates multi-view data association and 3D MOT for aerial objects. First, object localization is achieved via triangulation using two onboard RGB cameras. To mitigate false positive objects caused by crossing bearings, spatial–temporal cues derived from 2D image detections and tracking results are exploited to establish a likelihood-based association matrix, enabling robust multi-view data association. Subsequently, optimized process and observation noise covariance matrices are formulated to quantitatively model localization uncertainty, and a Mahalanobis distance-based data association is introduced to improve the consistency of 3D tracking. Both simulation and real-world experiments demonstrate that the proposed approach achieves accurate and stable tracking performance under passive sensing conditions. Full article
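The passive localization step rests on two-view triangulation. A minimal linear (DLT) sketch, assuming known camera projection matrices and noiseless pixel detections; all numbers are synthetic:

```python
# Linear two-view triangulation of a single 3D point.
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Recover one 3D point from two pixel observations via the DLT system."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
# Camera 1 at the origin; camera 2 shifted 2 m along x, both looking along +z.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-2.0], [0.0], [0.0]])])
X_true = np.array([1.0, 0.5, 10.0, 1.0])
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, uv1, uv2))       # recovers approximately [1.0, 0.5, 10.0]
```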

20 pages, 4309 KB  
Article
Targetless Radar–Camera Calibration via Trajectory Alignment
by Ozan Durmaz and Hakan Cevikalp
Sensors 2025, 25(24), 7574; https://doi.org/10.3390/s25247574 - 13 Dec 2025
Viewed by 753
Abstract
Accurate extrinsic calibration between radar and camera sensors is essential for reliable multi-modal perception in robotics and autonomous navigation. Traditional calibration methods often rely on artificial targets such as checkerboards or corner reflectors, which can be impractical in dynamic or large-scale environments. This study presents a fully targetless calibration framework that estimates the rigid spatial transformation between radar and camera coordinate frames by aligning their observed trajectories of a moving object. The proposed method integrates You Only Look Once version 5 (YOLOv5)-based 3D object localization for the camera stream with Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and Random Sample Consensus (RANSAC) filtering for sparse and noisy radar measurements. A passive temporal synchronization technique, based on Root Mean Square Error (RMSE) minimization, corrects timestamp offsets without requiring hardware triggers. Rigid transformation parameters are computed using Kabsch and Umeyama algorithms, ensuring robust alignment even under millimeter-wave (mmWave) radar sparsity and measurement bias. The framework is experimentally validated in an indoor OptiTrack-equipped laboratory using a Skydio 2 drone as the dynamic target. Results demonstrate sub-degree rotational accuracy and decimeter-level translational error (approximately 0.12–0.27 m depending on the metric), with successful generalization to unseen motion trajectories. The findings highlight the method’s applicability for real-world autonomous systems requiring practical, markerless multi-sensor calibration. Full article
(This article belongs to the Section Radar Sensors)
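The trajectory-alignment core of this method is the classical Kabsch fit. A sketch that recovers a rigid transform from time-synchronized trajectory correspondences (scale estimation as in Umeyama is omitted; the synthetic rotation and translation below are illustrative):

```python
# Kabsch rigid alignment of two corresponding point sets.
import numpy as np

def kabsch(src, dst):
    """Return R, t such that dst ≈ R @ src + t in the least-squares sense."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # reflection correction
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Synthetic check: a known rotation about z and a translation.
rng = np.random.default_rng(3)
traj_radar = rng.uniform(-5, 5, (50, 3))
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.4, -0.2, 1.0])
traj_cam = traj_radar @ R_true.T + t_true
R_est, t_est = kabsch(traj_radar, traj_cam)
print(np.allclose(R_est, R_true), np.round(t_est, 3))
```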

19 pages, 24785 KB  
Article
Capsicum Counting Algorithm Using Infrared Imaging and YOLO11
by Enrico Mendez, Jesús Arturo Escobedo Cabello, Alfonso Gómez-Espinosa, Jose Antonio Cantoral-Ceballos and Oscar Ochoa
Agriculture 2025, 15(24), 2574; https://doi.org/10.3390/agriculture15242574 - 12 Dec 2025
Viewed by 489
Abstract
Fruit detection and counting is a key component of data-driven resource management and yield estimation in greenhouses. This work presents a novel infrared-based approach to capsicum counting in greenhouses that takes advantage of the light penetration of infrared (IR) imaging to enhance detection under challenging lighting conditions. The proposed capsicum counting pipeline integrates the YOLO11 detection model for capsicum identification and the BoT-SORT multi-object tracker to track detections across a video stream, enabling accurate fruit counting. The detector model is trained on a dataset of 1000 images, with 11,916 labeled capsicums, captured with an OAK-D pro camera mounted on a mobile robot inside a capsicum greenhouse. On the IR test set, the YOLO11m model achieved an F1-score of 0.82, while the tracker obtained a multiple object tracking accuracy (MOTA) of 0.85, correctly counting 67 of 70 capsicums in a representative greenhouse row. The results demonstrate the effectiveness of this IR-based approach in automating fruit counting in greenhouse environments, offering potential applications in yield estimation. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
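The counting step on top of the tracker is essentially bookkeeping over track IDs. A sketch under assumed per-frame ID lists (a real pipeline would take the IDs from BoT-SORT running on YOLO11 detections; the minimum-hits rule below is a generic choice, not the paper's):

```python
# Count each track ID once, but only after it has been seen in enough frames.
from collections import defaultdict

def count_fruits(frames, min_hits=3):
    """`frames` is an iterable of per-frame lists of track IDs."""
    hits = defaultdict(int)
    counted = set()
    for ids in frames:
        for tid in ids:
            hits[tid] += 1
            if hits[tid] >= min_hits:
                counted.add(tid)      # counted once, even if tracked for many frames
    return len(counted)

frames = [[1, 2], [1, 2, 3], [1, 3], [3], [4], [4]]
print(count_fruits(frames))            # IDs 1 and 3 reach 3 hits -> 2 fruits
```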

34 pages, 6823 KB  
Article
Three-Dimensional Autonomous Navigation of Unmanned Underwater Vehicle Based on Deep Reinforcement Learning and Adaptive Line-of-Sight Guidance
by Jianya Yuan, Hongjian Wang, Bo Zhong, Chengfeng Li, Yutong Huang and Shaozheng Song
J. Mar. Sci. Eng. 2025, 13(12), 2360; https://doi.org/10.3390/jmse13122360 - 11 Dec 2025
Viewed by 409
Abstract
Unmanned underwater vehicles (UUVs) face significant challenges in achieving safe and efficient autonomous navigation in complex marine environments due to uncertain perception, dynamic obstacles, and nonlinear coupled motion control. This study proposes a hierarchical autonomous navigation framework that integrates improved particle swarm optimization (PSO) for 3D global route planning, and a deep deterministic policy gradient (DDPG) algorithm enhanced by noisy networks and proportional prioritized experience replay (PPER) for local collision avoidance. To address dynamic sideslip and current-induced deviations during execution, a novel 3D adaptive line-of-sight (ALOS) guidance method is developed, which decouples nonlinear motion in horizontal and vertical planes and ensures robust tracking. The global planner incorporates a multi-objective cost function that considers yaw and pitch adjustments, while the improved PSO employs nonlinearly synchronized adaptive weights to enhance convergence and avoid local minima. For local avoidance, the proposed DDPG framework incorporates a memory-enhanced state–action representation, GRU-based temporal processing, and stratified sample replay to enhance learning stability and exploration. Simulation results indicate that the proposed method reduces route length by 5.96% and planning time by 82.9% compared to baseline algorithms in dynamic scenarios, and achieves up to an 11% higher success rate and 10% better efficiency than SAC and standard DDPG. The 3D ALOS controller outperforms existing guidance strategies under time-varying currents, ensuring smoother tracking and reduced actuator effort. Full article
(This article belongs to the Special Issue Design and Application of Underwater Vehicles)
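The decoupled line-of-sight guidance idea can be sketched with fixed lookahead distances; the paper's adaptive variant varies them online. All waypoints, sign conventions, and lookahead values below are illustrative assumptions:

```python
# Generic decoupled 3D LOS guidance: heading from the horizontal cross-track
# error, pitch from the vertical error, each with its own lookahead distance.
import numpy as np

def los_commands(pos, wp_a, wp_b, delta_h=5.0, delta_v=5.0):
    """Return (heading_cmd, pitch_cmd) in radians for tracking segment wp_a -> wp_b."""
    d = wp_b - wp_a
    path_yaw = np.arctan2(d[1], d[0])
    e = pos - wp_a
    # Along-track and cross-track errors in the horizontal path-tangential frame.
    x_e =  np.cos(path_yaw) * e[0] + np.sin(path_yaw) * e[1]
    y_e = -np.sin(path_yaw) * e[0] + np.cos(path_yaw) * e[1]
    heading_cmd = path_yaw + np.arctan2(-y_e, delta_h)
    # Vertical plane: compare the actual depth offset with the path's offset at x_e.
    horiz_len = max(np.hypot(d[0], d[1]), 1e-6)
    path_pitch = np.arctan2(d[2], horiz_len)
    z_e = e[2] - (d[2] / horiz_len) * x_e
    pitch_cmd = path_pitch + np.arctan2(-z_e, delta_v)
    return heading_cmd, pitch_cmd

pos = np.array([2.0, 3.0, -1.0])                      # vehicle slightly off the path
wp_a, wp_b = np.array([0.0, 0.0, 0.0]), np.array([20.0, 0.0, -4.0])
hdg, pitch = los_commands(pos, wp_a, wp_b)
print(np.rad2deg(hdg), np.rad2deg(pitch))
```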

26 pages, 11944 KB  
Article
Lightweight 3D Multi-Object Tracking via Collaborative Camera and LiDAR Sensors
by Dong Feng, Hengyuan Liu and Zhiyu Liu
Sensors 2025, 25(23), 7351; https://doi.org/10.3390/s25237351 - 3 Dec 2025
Viewed by 764
Abstract
With the widespread adoption of camera and LiDAR sensors, 3D multi-object tracking (MOT) technology has been extensively applied across numerous fields such as robotics, autonomous driving, and surveillance. However, existing 3D MOT methods still face significant challenges in addressing issues such as false detections, ghost trajectories, incorrect associations, and identity switches. To address these challenges, we propose a lightweight 3D multi-object tracking framework via collaborative camera and LiDAR sensors. Firstly, we design a confidence inverse normalization guided ghost trajectories suppression module (CIGTS). This module suppresses false detections and ghost trajectories at their source using inverse normalization and a virtual trajectory survival frame strategy. Secondly, an adaptive matching space-driven lightweight association module (AMSLA) is proposed. By discarding global association strategies, this module improves association efficiency and accuracy using low-cost decision factors. Finally, a multi-factor collaborative perception-based intelligent trajectory management module (MFCTM) is constructed. This module enables accurate retention or deletion decisions for unmatched trajectories, thereby reducing computational overhead and the risk of identity mismatches. Extensive experiments on the KITTI dataset show that the proposed method outperforms state-of-the-art methods across multiple performance metrics, achieving Higher Order Tracking Accuracy (HOTA) scores of 80.13% and 53.24% for the Car and Pedestrian categories, respectively. Full article
(This article belongs to the Special Issue Vision Sensors for Object Detection and Tracking)
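The trajectory management problem, deciding whether to retain or delete unmatched tracks, can be illustrated with generic SORT-style bookkeeping. This is not the paper's MFCTM module; the age, confidence, and decay parameters below are assumptions:

```python
# Generic track lifecycle bookkeeping: hits, misses, and confidence decay.
class Track:
    def __init__(self, track_id, confidence):
        self.id = track_id
        self.confidence = confidence
        self.hits = 1                 # frames with a matched detection
        self.misses = 0               # consecutive frames without a match

def manage_tracks(tracks, matched_ids, max_age=3, min_conf=0.2, decay=0.8):
    """Update hit/miss counters and drop stale or low-confidence tracks."""
    kept = []
    for trk in tracks:
        if trk.id in matched_ids:
            trk.hits += 1
            trk.misses = 0
        else:
            trk.misses += 1
            trk.confidence *= decay   # unmatched tracks lose confidence
        if trk.misses <= max_age and trk.confidence >= min_conf:
            kept.append(trk)          # retain; otherwise the track is deleted
    return kept

tracks = [Track(1, 0.9), Track(2, 0.25)]
for frame_matches in [{1}, {1}, {1}, set()]:
    tracks = manage_tracks(tracks, frame_matches)
print([t.id for t in tracks])         # track 2 decays below the threshold and is dropped
```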

26 pages, 8517 KB  
Article
Seeing the City Live: Bridging Edge Vehicle Perception and Cloud Digital Twins to Empower Smart Cities
by Hafsa Iqbal, Jaime Godoy, Beatriz Martin, Abdulla Al-kaff and Fernando Garcia
Smart Cities 2025, 8(6), 197; https://doi.org/10.3390/smartcities8060197 - 25 Nov 2025
Viewed by 966
Abstract
This paper presents a framework that integrates a real-time onboard (ego vehicle) perception module with edge processing capabilities and a cloud-based digital twin for intelligent transportation systems (ITSs) in smart city applications. The proposed system combines onboard 3D object detection and tracking with low-latency edge-to-cloud communication, achieving an average end-to-end latency below 0.02 s at a 10 Hz update frequency. Experiments conducted on a real autonomous vehicle platform demonstrate a mean Average Precision (mAP@40) of 83.5% for the 3D perception module. The proposed system enables real-time traffic visualization and scalable data management by reducing communication overhead. Future work will extend the system to multi-vehicle deployments and incorporate additional environmental semantics such as traffic signal states, road conditions, and predictive Artificial Intelligence (AI) models to enhance decision support in dynamic urban environments. Full article
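The edge-to-cloud message path underlying the reported latency figure can be sketched generically: serialize tracked-object states with a capture timestamp and measure end-to-end latency at the receiver. The field names and the in-process queue below are illustrative assumptions, not the paper's interface:

```python
# Schematic of timestamped edge-to-cloud messaging with latency measurement.
import json, time, queue

channel = queue.Queue()                       # stands in for the edge-to-cloud link

def publish(objects):
    msg = {"stamp": time.time(), "objects": objects}
    channel.put(json.dumps(msg))

def consume():
    msg = json.loads(channel.get())
    latency = time.time() - msg["stamp"]      # end-to-end latency from the capture stamp
    return msg["objects"], latency

publish([{"id": 7, "cls": "car", "xyz": [12.3, -4.1, 0.0], "yaw": 1.57}])
objs, latency = consume()
print(f"{len(objs)} object(s), end-to-end latency {latency * 1e3:.2f} ms")
```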

26 pages, 2125 KB  
Review
Vitamin D as a Systemic Regulatory Axis: From Homeostasis to Multiorgan Disease
by María Rodríguez-Rivero and Miguel Ángel Medina
Biomedicines 2025, 13(11), 2733; https://doi.org/10.3390/biomedicines13112733 - 7 Nov 2025
Cited by 1 | Viewed by 1209
Abstract
Background/Objectives: To critically evaluate the current scientific literature on the physiological and preventive functions of vitamin D, with special emphasis on its possible involvement in multi-organ pathologies, and to assess the effectiveness of supplementation strategies for maintaining homeostasis. Methods: A review of the literature was conducted following a methodological approach in accordance with the PRISMA 2020 statement for systematic reviews. The bibliographic search was carried out in the PubMed, Scopus, and Web of Science databases, using controlled terms and Boolean operators. Rigorous inclusion and exclusion criteria were applied in three phases: blind search, selection by title/abstract, and full-text evaluation. Articles published in first quartile journals (JCR 2023) were prioritized. The search was complemented with targeted strategies such as consulting ORCID profiles, using the Jábega tool, and tracking cross-references. Results: The selected studies reinforce that vitamin D acts as a transcriptional modulator with effects beyond the skeletal system, including immunomodulatory, neuroprotective, and antitumor functions. Associations were identified between low levels of 25(OH)D and a higher prevalence of autoimmune, neurodegenerative, and metabolic diseases, as well as certain types of cancer. However, evidence of causality is still limited, and clinical trials have shown mixed results regarding its preventive efficacy. Supplementation strategies are useful in vulnerable populations, although their indiscriminate use without a documented deficiency is not recommended. Conclusions: Vitamin D is emerging as a potentially relevant agent in preventive medicine. While its benefits extrapolated from bone metabolism still require robust clinical validation, current findings support its role in regulating key systemic functions. A balanced approach combining sun protection, health education, food fortification, and targeted supplementation, tailored to the clinical context of each individual, is recommended. Full article
(This article belongs to the Section Cell Biology and Pathology)

20 pages, 1597 KB  
Article
Three-Level MIFT: A Novel Multi-Source Information Fusion Waterway Tracking Framework
by Wanqing Liang, Chen Qiu, Mei Wang and Ruixiang Kan
Electronics 2025, 14(21), 4344; https://doi.org/10.3390/electronics14214344 - 5 Nov 2025
Viewed by 506
Abstract
To address the limitations of single-sensor perception in inland vessel monitoring and the lack of robustness of traditional tracking methods in occlusion and maneuvering scenarios, this paper proposes a hierarchical multi-target tracking framework that fuses Light Detection and Ranging (LiDAR) data with Automatic Identification System (AIS) information. First, an improved adaptive LiDAR tracking algorithm is introduced: stable trajectory tracking and state estimation are achieved through hybrid cost association and an Adaptive Kalman Filter (AKF). Experimental results demonstrate that the LiDAR module achieves a Multi-Object Tracking Accuracy (MOTA) of 89.03%, an Identity F1 Score (IDF1) of 89.80%, and an Identity Switch count (IDSW) as low as 5.1, demonstrating competitive performance compared with representative non-deep-learning-based approaches. Furthermore, by incorporating a fusion mechanism based on improved Dempster–Shafer (D-S) evidence theory and Covariance Intersection (CI), the system achieves further improvements in MOTA (90.33%) and IDF1 (90.82%), while the root mean square error (RMSE) of vessel size estimation decreases from 3.41 m to 1.97 m. Finally, the system outputs structured three-level tracks: AIS early-warning tracks, LiDAR-confirmed tracks, and LiDAR-AIS fused tracks. This hierarchical design not only enables beyond-visual-range (BVR) early warning but also enhances perception coverage and estimation accuracy. Full article
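The Covariance Intersection rule referenced above fuses two estimates whose cross-correlation is unknown. A sketch for two 2D position estimates (for instance one LiDAR-derived, one AIS-derived), choosing the weight by a coarse trace-minimizing grid search; the D-S evidence step is not included and all numbers are illustrative:

```python
# Covariance Intersection: P^-1 = w*P1^-1 + (1-w)*P2^-1, with w chosen on a grid.
import numpy as np

def covariance_intersection(x1, P1, x2, P2):
    """Fuse two Gaussian estimates without knowing their cross-correlation."""
    best = None
    for omega in np.linspace(0.01, 0.99, 99):
        info = omega * np.linalg.inv(P1) + (1 - omega) * np.linalg.inv(P2)
        P = np.linalg.inv(info)
        if best is None or np.trace(P) < np.trace(best[1]):
            x = P @ (omega * np.linalg.inv(P1) @ x1 + (1 - omega) * np.linalg.inv(P2) @ x2)
            best = (x, P, omega)
    return best

x_lidar, P_lidar = np.array([100.0, 50.0]), np.diag([1.0, 4.0])   # assumed example values
x_ais,   P_ais   = np.array([101.5, 49.0]), np.diag([9.0, 1.0])
x_fused, P_fused, omega = covariance_intersection(x_lidar, P_lidar, x_ais, P_ais)
print(np.round(x_fused, 2), round(float(omega), 2))
```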