Search Results (923)

Search Parameters:
Keywords = lidar simulator

26 pages, 23293 KB  
Article
A Deep Learning Approach to Lidar Signal Denoising and Atmospheric Feature Detection
by Joseph Gomes, Matthew J. McGill, Patrick A. Selmer and Shi Kuang
Remote Sens. 2025, 17(24), 4060; https://doi.org/10.3390/rs17244060 - 18 Dec 2025
Abstract
Laser-based remote sensing (lidar) is a proven technique for detecting atmospheric features such as clouds and aerosols as well as for determining their vertical distribution with high accuracy. Even simple elastic backscatter lidars can distinguish clouds from aerosols, and accurate knowledge of their vertical location is essential for air quality assessment, hazard avoidance, and operational decision-making. However, daytime lidar measurements suffer from reduced signal-to-noise ratio (SNR) due to solar background contamination. Conventional processing approaches mitigate this by applying horizontal and vertical averaging, which improves SNR at the expense of spatial resolution and feature detectability. This work presents a deep learning-based framework that enhances lidar SNR at native resolution and performs fast layer detection and cloud–aerosol discrimination. We apply this approach to ICESat-2 532 nm photon-counting data, using artificially noised nighttime profiles to generate simulated daytime observations for training and evaluation. Relative to the simulated daytime data, our method improves peak SNR by more than a factor of three while preserving structural similarity with true nighttime profiles. After recalibration, the denoised photon counts yield an order-of-magnitude reduction in mean absolute percentage error in calibrated attenuated backscatter compared with the simulated daytime data, when validated against real nighttime measurements. We further apply the trained model to a full month of real daytime ICESat-2 observations (April 2023) and demonstrate effective layer detection and cloud–aerosol discrimination, maintaining high recall for both clouds and aerosols and showing qualitative improvement relative to the standard ATL09 data products. As an alternative to traditional averaging-based workflows, this deep learning approach offers accurate, near real-time data processing at native resolution. A key implication is the potential to enable smaller, lower-power spaceborne lidar systems that perform as well as larger instruments.
(This article belongs to the Section Atmospheric Remote Sensing)
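The training-data strategy above, simulating daytime observations by artificially noising nighttime profiles, can be sketched in a few lines of Python. This is not the authors' code: the background rate and toy profile are invented, and the sketch assumes the solar background is well modeled as an additive Poisson count per range bin.

```python
import numpy as np

def simulate_daytime(night_counts, background_rate, rng):
    """Add Poisson-distributed solar background counts to a nighttime
    photon-counting profile to mimic daytime noise conditions."""
    background = rng.poisson(background_rate, size=night_counts.shape)
    return night_counts + background

def peak_snr_db(denoised, reference):
    """Peak SNR (dB) of a profile against the true nighttime reference."""
    mse = np.mean((denoised - reference) ** 2)
    return 10.0 * np.log10(reference.max() ** 2 / mse)

rng = np.random.default_rng(0)
night = rng.poisson(5.0, size=1000)                  # toy nighttime profile
day = simulate_daytime(night, background_rate=20.0, rng=rng)
# after denoising, peak_snr_db(denoised, night) quantifies the improvement
```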

33 pages, 3854 KB  
Article
Introducing a Development Method for Active Perception Sensor Simulations Using Continuous Verification and Validation
by Kristof Hofrichter, Lukas Elster, Clemens Linnhoff, Timm Ruppert and Steven Peters
Sensors 2025, 25(24), 7642; https://doi.org/10.3390/s25247642 - 17 Dec 2025
Abstract
Simulation-based testing is playing an increasingly important role in the development and validation of automated driving functions, as real-world testing is often limited by cost, safety, and scalability. An essential part of this is the simulation of active perception sensors such as lidar and radar, which enable accurate perception of the vehicle’s environment. In this context, the particular challenge lies in ensuring the credibility of these sensor simulations. This paper presents a novel method for the efficient and credible realization and validation of active perception sensor simulations in the context of the overall development process. Since the validity of these simulations is crucial for the safety argumentation of automated driving functions, the proposed method integrates a continuous verification and validation approach into the development process. Using this method, requirements such as individual sensor effects are iteratively implemented in the simulation, and every iteration ends with verification and validation of the resulting simulation. In addition, initial practical approaches are presented for validating the measurement data required during development, to avoid errors in data acquisition, and for deriving quantified acceptance criteria as part of the validation process. All new approaches and methods are then demonstrated using the example of a ray-tracing-based lidar sensor simulation.
(This article belongs to the Section Vehicular Sensing)
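As a loose illustration of the per-iteration verification-and-validation gate the method describes (metric names and thresholds here are hypothetical, not from the paper), each development iteration could end with a pass/fail check against quantified acceptance criteria:

```python
def vv_gate(metrics, acceptance_criteria):
    """Per-iteration V&V gate: the simulation increment is accepted only if
    every metric meets its quantified acceptance criterion (here: smaller is
    better, e.g. deviation between simulated and measured sensor data)."""
    failures = {name: (value, acceptance_criteria[name])
                for name, value in metrics.items()
                if value > acceptance_criteria[name]}
    return not failures, failures

# hypothetical metrics for one iteration of a lidar sensor simulation
ok, failed = vv_gate(
    metrics={"range_bias_m": 0.04, "intensity_rmse": 0.12},
    acceptance_criteria={"range_bias_m": 0.05, "intensity_rmse": 0.10},
)
```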

25 pages, 3700 KB  
Article
SP-LiDAR for Fast and Robust Depth Imaging at Low SBR and Few Photons
by Kehao Chi, Xialin Liu, Ruikai Xue and Genghua Huang
Photonics 2025, 12(12), 1229; https://doi.org/10.3390/photonics12121229 - 12 Dec 2025
Viewed by 134
Abstract
Single-photon LiDAR has demonstrated remarkable proficiency in long-range sensing under conditions of weak returns. However, in the few-photon regime (SPPP ≈ 1) and at low signal-to-background ratios (SBR ≤ 0.1), depth estimation degrades significantly due to Poisson fluctuations and background contamination. To address these challenges, we propose GLARE-Depth, a patch-wise Poisson-GLRT framework with reflectance-guided spatial fusion. In the temporal domain, our method employs a continuous-time Poisson-GLRT peak search with a physically consistent exponentially modified Gaussian (EMG) kernel, complemented by closed-form amplitude updates and mode-bias correction. In the spatial domain, we incorporate reflectance-guided, edge-preserving aggregation and confidence-gated lightweight hole filling to enhance effective coverage for few-photon pixels. In controlled simulations derived from the Middlebury dataset, under high-background conditions (SPPP ≈ 1, SBR ≈ 0.06–0.10), GLARE-Depth demonstrates substantial gains over representative baselines in RMSE, MAE, and valid-pixel ratio (insert concrete numbers when finalized) while maintaining smoothness in planar regions and sharpness at geometric boundaries. These results highlight the robustness of GLARE-Depth and its practical potential for low-SBR scenarios.
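A simplified sketch of the Poisson-GLRT idea behind GLARE-Depth follows. The paper uses a continuous-time search with closed-form amplitude updates and mode-bias correction; this toy version instead grid-searches delay and amplitude, assuming a known background rate and EMG pulse parameters:

```python
import numpy as np
from scipy.stats import exponnorm

def emg_kernel(t, mu, sigma, lam):
    """Exponentially modified Gaussian pulse shape (unit area)."""
    return exponnorm.pdf(t, 1.0 / (lam * sigma), loc=mu, scale=sigma)

def glrt_peak_search(counts, t, bg_rate, sigma, lam, amplitudes):
    """Grid-based Poisson GLRT: maximize the log-likelihood ratio of
    'EMG-shaped signal + background' over 'background only' across
    candidate delays tau and amplitudes a."""
    best_llr, best_tau, best_a = -np.inf, None, None
    for tau in t:
        k = emg_kernel(t, tau, sigma, lam)
        for a in amplitudes:
            # Poisson LLR: sum_i [ y_i * log(1 + a*k_i/b) - a*k_i ]
            llr = np.sum(counts * np.log1p(a * k / bg_rate) - a * k)
            if llr > best_llr:
                best_llr, best_tau, best_a = llr, tau, a
    return best_tau, best_a, best_llr
```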

25 pages, 3616 KB  
Article
A Deep Learning-Driven Semantic Mapping Strategy for Robotic Inspection of Desalination Facilities
by Albandari Alotaibi, Reem Alrashidi, Hanan Alatawi, Lamaa Duwayriat, Aseel Binnouh, Tareq Alhmiedat and Ahmad Al-Qerem
Machines 2025, 13(12), 1129; https://doi.org/10.3390/machines13121129 - 8 Dec 2025
Viewed by 218
Abstract
Autonomous robot navigation has become essential for reducing labor-intensive tasks. The navigation systems of such robots are based on the sensed geometric structure of the environment, employing an array of sensor units such as laser scanners, range-finders, and light detection and ranging (LiDAR) to obtain the environment layout. Scene understanding is an important task in the development of robots that need to act autonomously. Hence, this paper presents an efficient semantic mapping system that integrates LiDAR, RGB-D, and odometry data to generate precise and information-rich maps. The proposed system enables the automatic detection and labeling of critical infrastructure components while preserving high spatial accuracy. As a case study, the system was applied to a desalination plant, where it interactively labeled key entities by integrating Simultaneous Localization and Mapping (SLAM) with vision-based techniques to determine the location of installed pipes. The developed system was validated using the Robot Operating System (ROS) development environment and a two-wheel-drive robot platform. Several simulations and real-world experiments were conducted to validate the efficiency of the developed semantic mapping system. The results are promising: the semantic map generation system achieves an average object detection accuracy of 84.97% and an average localization error of 1.79 m.

17 pages, 4453 KB  
Article
Robust Positioning Scheme Based on Deep Reinforcement Learning with Context-Aware Intelligence
by Seongwoo Lee, Byungsun Hwang, Joonho Seon, Jinwook Kim, Kyounghun Kim, Jeongho Kim, Mingyu Lee, Soohyun Kim, Youngghyu Sun and Jinyoung Kim
Appl. Sci. 2025, 15(23), 12785; https://doi.org/10.3390/app152312785 - 3 Dec 2025
Viewed by 187
Abstract
This work integrates deep learning with classical filtering techniques to achieve an accurate and robust positioning system. Despite significant technological advances, positioning systems often face fundamental challenges such as signal obstruction, multipath interference, and error drift. Although 3D map matching techniques using LiDAR sensors can mitigate these challenges, their practical implementation is limited by high costs, computational burdens, and dependency on pre-modeled environments. In response, recent research has increasingly focused on enhancing positioning systems through deep learning-based sensor fusion and adaptive filtering methods, emphasizing improvements in accuracy and operational robustness. In this paper, a seamless hierarchical context-aware intelligent positioning system (SH-CAIPS) is proposed, integrating sensor fusion, two-stage outlier mitigation gates, and deep reinforcement learning (DRL)-based noise scaling with an innovation-based adaptive Kalman filter (IAKF). As a result, the proposed positioning system enhances robustness by adaptively adjusting measurement noise and dynamically updating the position based on the confidence score between model predictions and sensor measurements. Simulation results confirm that the proposed SH-CAIPS significantly improves positioning performance compared with conventional positioning systems.
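The abstract does not give the SH-CAIPS equations, but a generic innovation-based adaptive Kalman filter update, the IAKF core the system builds on, can be sketched as follows. The sliding-window handling and blending factor are illustrative assumptions, and the paper's DRL-based noise scaling and outlier gates are not shown:

```python
import numpy as np

def iakf_update(x, P, z, H, R, innovations, alpha=0.3):
    """One innovation-based adaptive Kalman filter (IAKF) measurement update.

    The empirical innovation covariance (from a sliding window the caller
    maintains) is compared with its theoretical value S = H P H^T + R; when
    measurements are inconsistent with the prediction, R is inflated, which
    down-weights them."""
    y = z - H @ x                                   # innovation
    innovations.append(np.outer(y, y))
    C = np.mean(innovations, axis=0)                # empirical innovation cov.
    S = H @ P @ H.T + R
    ratio = max(1.0, np.trace(C) / np.trace(S))     # > 1 => inflate R
    R_adapt = R * ((1.0 - alpha) + alpha * ratio)
    S = H @ P @ H.T + R_adapt
    K = P @ H.T @ np.linalg.inv(S)                  # adaptive Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new, R_adapt
```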

26 pages, 18496 KB  
Article
Turbulence and Windshear Study for Typhoon Wipha in 2025
by Ka Wai Lo, Ming Chun Lam, Kai Kwong Lai, Man Lok Chong, Pak Wai Chan, Yu Cheng Xue and E Deng
Appl. Sci. 2025, 15(23), 12772; https://doi.org/10.3390/app152312772 - 2 Dec 2025
Viewed by 334
Abstract
This paper studies turbulence at various locations in Hong Kong during Typhoon Wipha in July 2025, including turbulence intensity based on Doppler Light Detection and Ranging (LIDAR) systems and radiosondes, observations by microclimate stations, and low-level windshear and turbulence at the Hong Kong International Airport (HKIA) from LIDAR, flight data, and pilot reports. Although the observation period was largely limited to 20 July 2025, the passage of a typhoon over a densely instrumented urban area is uncommon, so these observations can serve as valuable benchmarks for studies of typhoon-related turbulent flow in other coastal areas, particularly for operational alerts in aviation. To assess the predictability of turbulence, the eddy dissipation rate (EDR) was derived from a high-resolution numerical weather prediction (NWP) model using diagnostic and reconstruction approaches. Compared with radiosonde data, both approaches performed similarly in the shear-dominated low-level atmosphere, while the diagnostic approach outperformed when buoyancy became important. This result highlights the importance of incorporating buoyancy effects in the reconstruction approach if the EDR diagnostic is not available. The high-resolution NWP model was also used to provide time-varying boundary conditions for computational fluid dynamics simulations in urban areas, and its limitations are discussed. This study also demonstrates the difficulty of capturing low-level windshear encountered by departing aircraft in an operational environment and shows that a trajectory-aware method for deriving headwind can align more closely with onboard measurements than the standard fixed-path product.
(This article belongs to the Special Issue Transportation and Infrastructures Under Extreme Weather Conditions)
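A trajectory-aware headwind, as contrasted with the standard fixed-path product, can be sketched as projecting the local wind vector onto the aircraft's actual direction of travel. The array conventions below are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def trajectory_headwind(track, winds):
    """Trajectory-aware headwind: project the local wind vector onto the
    aircraft's instantaneous direction of travel at each trajectory point.

    track: (N, 2) horizontal positions along the departure path (m)
    winds: (N, 2) wind components (u, v) at those points (m/s)
    Headwind is positive when the wind opposes the direction of travel."""
    track, winds = np.asarray(track, float), np.asarray(winds, float)
    seg = np.diff(track, axis=0)                        # path segments
    unit = seg / np.linalg.norm(seg, axis=1, keepdims=True)
    return -np.einsum("ij,ij->i", winds[:-1], unit)     # per-segment headwind
```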

21 pages, 3387 KB  
Article
Development of an Autonomous and Interactive Robot Guide for Industrial Museum Environments Using IoT and AI Technologies
by Andrés Arteaga-Vargas, David Velásquez, Juan Pablo Giraldo-Pérez and Daniel Sanin-Villa
Sci 2025, 7(4), 175; https://doi.org/10.3390/sci7040175 - 1 Dec 2025
Viewed by 434
Abstract
This paper presents the design of an autonomous robot guide for a museum-like environment in a motorcycle assembly plant. The system integrates Industry 4.0 technologies such as artificial vision, indoor positioning, generative artificial intelligence, and cloud connectivity to enhance the visitor experience. The development follows the Design Inclusive Research (DIR) methodology and the VDI 2206 standard to ensure a structured scientific and engineering process. A key innovation is the integration of mmWave sensors alongside LiDAR and RGB-D cameras, enabling reliable human detection and improved navigation safety in reflective indoor environments, as well as the deployment of an open-source large language model for natural, on-device interaction with visitors. The current results include the complete mechanical, electronic, and software architecture; simulation validation; and a preliminary implementation in the real museum environment, where the system demonstrated consistent autonomous navigation, stable performance, and effective user interaction.
(This article belongs to the Section Computer Sciences, Mathematics and AI)

15 pages, 2240 KB  
Article
On-Site Localization of Unmanned Vehicles in Large-Scale Outdoor Environments
by Jianbiao Yan, Lizuo Xin, Hongjin Fang and Hanxiao Zhou
World Electr. Veh. J. 2025, 16(12), 650; https://doi.org/10.3390/wevj16120650 - 28 Nov 2025
Viewed by 194
Abstract
This paper proposes a method for the on-site localization of autonomous vehicles in large-scale outdoor environments, where a single sensor alone cannot achieve high-precision localization. The method is based on an improved Kalman filter that fuses odometry and LiDAR, and it is intended to address the challenge of localization in large-scale environments. Given the complexity of such environments and the difficulty of accurately identifying natural features at the worksite, the paper uses artificial landmarks to model the working environment. The Iterative Closest Point (ICP) algorithm matches local landmark features scanned by LiDAR at the current time against previously recorded landmark features to obtain the vehicle’s on-site pose. Within the extended Kalman filter (EKF) framework, odometry information is fused with the pose obtained by the ICP algorithm to further enhance localization accuracy. Simulation results demonstrate that the localization accuracy of unmanned vehicles optimized by the EKF algorithm improves by 9.21% and 53.91% compared to the ICP algorithm and odometry, respectively. This reduces measurement noise, improving the precise movement and on-site localization performance of unmanned vehicles in large-scale outdoor environments.
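The landmark-matching step can be illustrated with a minimal single ICP iteration: nearest-neighbour matching plus a closed-form Kabsch/SVD rigid fit. This is a generic sketch, not the paper's implementation, which further fuses the resulting pose with odometry in an EKF:

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: match each currently scanned landmark point to its
    nearest stored map point, then solve the best-fit rigid transform in
    closed form (Kabsch/SVD). Returns R, t with dst ~ R @ src + t."""
    # nearest-neighbour correspondences (brute force; a KD-tree scales better)
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
    matched = dst[d2.argmin(axis=1)]
    src_c = src - src.mean(axis=0)
    dst_c = matched - matched.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                        # guard against reflection
        Vt[-1] *= -1.0
        R = Vt.T @ U.T
    t = matched.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```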

15 pages, 1071 KB  
Article
Analysis of Automotive Lidar Corner Cases Under Adverse Weather Conditions
by Behrus Alavi, Thomas Illing, Felician Campean, Paul Spencer and Amr Abdullatif
Electronics 2025, 14(23), 4695; https://doi.org/10.3390/electronics14234695 - 28 Nov 2025
Viewed by 316
Abstract
The validation of sensor systems, particularly lidar, is crucial in advancing autonomous vehicle technology. Despite their robust perception capabilities, certain weather conditions and object characteristics can challenge detection performance, leading to potential safety concerns. This study investigates corner cases where object detection may fail due to physical constraints. Using virtual testing environments such as Carla and ROS2, simulations incorporating weather models analyze in real time how reflection characteristics affect detectability. Results reveal challenges in detecting black objects compared to white ones, particularly in adverse weather conditions. A time-sensitive corner case was analyzed, revealing that while bad weather and wet roads restrict the safe driving speed range, complete deactivation of the driving assistant at certain speeds may be unnecessary despite current manufacturer practices. The study underscores the importance of considering such factors in future safety protocols to mitigate accidents and ensure reliable autonomous driving systems.
(This article belongs to the Special Issue Autonomous Vehicles: Sensing, Mapping, and Positioning)
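The black-versus-white detectability gap has a simple first-order explanation: received power scales with target reflectivity and decays with range squared and two-way extinction, which rain or fog increases. The sketch below is an illustrative link-budget model, not the weather model the study implemented in Carla; all constants are invented:

```python
import numpy as np

def received_power(p_sys, reflectivity, rng_m, alpha):
    """Simplified lidar link budget: the return falls with range squared and
    with two-way Beer-Lambert extinction; rain or fog raises alpha (1/m).
    Aperture, efficiency, etc. are folded into the constant p_sys."""
    return p_sys * reflectivity * np.exp(-2.0 * alpha * rng_m) / rng_m**2

def max_detection_range(p_sys, reflectivity, alpha, p_min, r_max=300.0):
    """Largest range at which the return still clears the detector threshold."""
    r = np.linspace(1.0, r_max, 3000)
    ok = received_power(p_sys, reflectivity, r, alpha) >= p_min
    return r[ok].max() if ok.any() else 0.0

# toy comparison: dark target in rain vs bright target in clear air
dark_rain = max_detection_range(1.0, 0.05, alpha=5e-3, p_min=1e-6)
bright_clear = max_detection_range(1.0, 0.80, alpha=5e-4, p_min=1e-6)
```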

18 pages, 1972 KB  
Article
Automatic Reconstruction of 3D Building Models from ALS Point Clouds Based on Façade Geometry
by Tingting Zhao, Tao Xiong, Muzi Li and Zhilin Li
ISPRS Int. J. Geo-Inf. 2025, 14(12), 462; https://doi.org/10.3390/ijgi14120462 - 25 Nov 2025
Viewed by 520
Abstract
Three-dimensional (3D) building models are essential for urban planning, spatial analysis, and virtual simulations. However, most reconstruction methods based on Airborne Laser Scanning (ALS) rely primarily on rooftop information, often resulting in distorted footprints and the omission of façade semantics such as windows and doors. To address these limitations, this study proposes an automatic 3D building reconstruction method driven by façade geometry. The proposed method introduces three key contributions: (1) a façade-guided footprint generation strategy that eliminates the geometric distortions associated with roof projection methods; (2) robust detection and reconstruction of façade openings, enabling reliable identification of windows and doors even under sparse ALS conditions; and (3) an integrated volumetric modeling pipeline that produces watertight models with embedded façade details, ensuring both structural accuracy and semantic completeness. Experimental results show that the proposed method achieves geometric deviations at the decimeter level and feature recognition accuracy exceeding 97%. On average, the reconstruction of a single building takes 91 s, demonstrating reliable reconstruction accuracy and satisfactory computational performance. These findings highlight the potential of the method as a robust and scalable solution for large-scale ALS-based urban modeling, offering substantial improvements in both structural precision and semantic richness compared with conventional roof-based approaches.
(This article belongs to the Special Issue Knowledge-Guided Map Representation and Understanding)
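The abstract does not detail the façade-guided footprint step, but a common building block for it is extracting dominant vertical planes from the point cloud. One generic way, shown purely as an assumption-laden sketch rather than the authors' pipeline, is RANSAC constrained to planes with horizontal normals:

```python
import numpy as np

def ransac_facade_plane(points, n_iter=500, tol=0.05, seed=0):
    """RANSAC fit of a dominant vertical plane (a facade) to ALS points.
    The plane normal is constrained to be horizontal, so roof planes are
    rejected by construction. tol is the inlier distance in metres."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p1, p2 = points[rng.choice(len(points), size=2, replace=False)]
        d = (p2 - p1)[:2]                      # facade direction in plan view
        if np.linalg.norm(d) < 1e-6:
            continue
        n = np.array([-d[1], d[0], 0.0])
        n /= np.linalg.norm(n)                 # horizontal unit normal
        inliers = np.abs((points - p1) @ n) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best
```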

16 pages, 3861 KB  
Article
Evaluation of Non-Spherical Particle Models for Mineral Dust in Multi-Wavelength Polarization Lidar Applications: Comparison of Spheroid, Super-Ellipsoid, and Irregular-Hexagonal Models
by Laibin Wang and Dong Liu
Remote Sens. 2025, 17(23), 3804; https://doi.org/10.3390/rs17233804 - 24 Nov 2025
Viewed by 265
Abstract
Mineral dust aerosols play an important role in the Earth’s radiation budget and climate system, and their optical properties strongly influence the accuracy of polarization lidar retrievals. Developing realistic optical models is therefore essential for representing dust scattering and polarization behaviors. This study compares three shape models—spheroid, super-ellipsoid, and irregular-hexagonal—in reproducing the optical characteristics of mineral dust at wavelengths of 355, 532, and 1064 nm. By combining theoretical simulations with multi-wavelength field and laboratory observations, the suitability of each model is evaluated. Optimal combinations of particle shape and refractive index are obtained using the sum of squared errors criterion. The results show that both the irregular-hexagonal and super-ellipsoid models reproduce the observed relationships between the particle linear depolarization ratio and the lidar ratio, whereas the spheroid model fails to capture high depolarization values. Shape mixing analyses indicate that the irregular-hexagonal model is relatively insensitive to morphological variation, while the super-ellipsoid model exhibits higher adaptability. When measurement uncertainties are considered, the super-ellipsoid model performs best under low error conditions, whereas the irregular-hexagonal model remains more robust under high error conditions. These findings highlight the importance of realistic shape distributions in dust optical modeling and support more reliable polarization lidar retrievals.
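The model-selection step based on the sum of squared errors criterion can be sketched as below. The data layout is an assumption, and in practice the two observables would be normalized before summing, since the lidar ratio (tens of sr) and the depolarization ratio (roughly 0.1 to 0.4) differ in scale:

```python
import numpy as np

def select_dust_model(observed, simulated):
    """Pick the (shape, refractive index) candidate minimizing the sum of
    squared errors between simulated and observed (depolarization ratio,
    lidar ratio) pairs across the measured wavelengths.

    observed:  {wavelength_nm: (depol, lidar_ratio)}
    simulated: {(shape, m): {wavelength_nm: (depol, lidar_ratio)}}"""
    def sse(candidate):
        return sum(float(np.sum((np.asarray(candidate[w]) - np.asarray(obs)) ** 2))
                   for w, obs in observed.items())
    return min(simulated, key=lambda key: sse(simulated[key]))
```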

15 pages, 1414 KB  
Article
Gait Cycle Duration Analysis in Lower Limb Amputees Using an IoT-Based Photonic Wearable Sensor: A Preliminary Proof-of-Concept Study
by Bruna Alves, Alessandro Fantoni, José Pedro Matos, João Costa and Manuela Vieira
Sensors 2025, 25(23), 7148; https://doi.org/10.3390/s25237148 - 23 Nov 2025
Viewed by 589
Abstract
This study is a preliminary proof of concept intended to demonstrate the feasibility of using a single-point LiDAR sensor for wearable gait analysis. It presents a low-cost wearable sensor system that integrates a single-point LiDAR module and IoT connectivity to assess Gait Cycle Duration (GCD) and gait symmetry in real time. The device is positioned on the medial side of the calf to detect the contralateral limb crossing—used as a proxy for mid-stance—enabling the computation of GCD for both limbs and the derivation of the Symmetry Ratio and Symmetry Index. Testing was conducted under simulated walking at three cadences (slow, normal, and fast). GCD estimates from the sensor were compared against visual annotation in Kinovea®, showing reasonable agreement, with most cycle-wise relative differences below approximately 13% and both methods capturing similar symmetry trends. The wearable system operated reliably across the different speeds, with an estimated materials cost of under 100 € and wireless data streaming to a cloud dashboard for real-time visualization. Although the validation is preliminary and limited to a single healthy participant and a video-based reference, the results support the feasibility of a photonic, IoT-based approach to portable and objective gait assessment, motivating future studies with larger and clinical cohorts and gold-standard references to quantify accuracy, repeatability, and clinical utility.
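The Symmetry Ratio and Symmetry Index derived from per-limb GCDs are standard gait metrics. A minimal sketch follows; conventions vary slightly across studies, so treat the exact formulas as one common choice rather than necessarily the paper's:

```python
import numpy as np

def gait_symmetry(gcd_left, gcd_right):
    """Symmetry metrics from per-limb Gait Cycle Durations (seconds).

    Symmetry Ratio: ratio of mean GCDs (1.0 = perfect symmetry).
    Symmetry Index: percentage difference normalized by the mean
    (0% = perfect symmetry)."""
    l, r = float(np.mean(gcd_left)), float(np.mean(gcd_right))
    symmetry_ratio = l / r
    symmetry_index = 100.0 * (l - r) / (0.5 * (l + r))
    return symmetry_ratio, symmetry_index

# e.g. a gait with a slightly longer cycle on the left side:
sr, si = gait_symmetry([1.24, 1.26, 1.25], [1.18, 1.20, 1.19])
```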

30 pages, 40146 KB  
Article
Blast Hole Seeking and Dipping: Navigation and Perception Framework in a Mine Site Inspection Robot
by Liyang Liu, Ehsan Mihankhah, Nathan D. Wallace, Javier Martinez and Andrew J. Hill
Robotics 2025, 14(12), 173; https://doi.org/10.3390/robotics14120173 - 21 Nov 2025
Viewed by 429
Abstract
In open-pit mining, holes are drilled into the surface of the excavation site and then charged with explosives, which are detonated to facilitate digging. These blast holes need to be inspected internally for quality assurance, as well as for operational and geological reasons. Manual hole inspection is slow, expensive, and limited in its ability to capture the geometric and geological characteristics of holes. This is the motivation behind the development of our autonomous mine site inspection robot, “DIPPeR”. In this paper, the automation aspect of the project is explained. We present a robust navigation and perception framework that provides streamlined blast hole detection, tracking, and precise down-hole sensor insertion during repetitive inspection tasks. To mitigate the effects of noisy GPS and odometry data typical of surface mining environments, we employ a proximity-based adaptive navigation system that enables the vehicle to dynamically adjust its operations according to target detectability and localisation accuracy. For perception, we process LiDAR data to extract the cone-shaped volume of drill waste above ground, and then project the 3D cone points into a virtual depth image to form an accurate 2D segmentation of hole regions. To ensure continuous target tracking as the robot approaches the goal, our system automatically adjusts the projection parameters to maintain a consistent appearance of the hole in the image. In the vicinity of the hole, we apply least squares circle fitting combined with non-maximum candidate suppression to achieve accurate hole localisation and collision-free down-hole sensor insertion. We demonstrate the effectiveness and robustness of our framework through dedicated perception and navigation feature tests, as well as streamlined mission trials conducted in high-fidelity simulations and real mine-site field experiments.
(This article belongs to the Section Agricultural and Field Robotics)
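The hole-localisation step combines least squares circle fitting with non-maximum candidate suppression. The fitting half can be sketched with the algebraic Kåsa formulation; the paper's exact least-squares variant may differ, and the hole radius and noise level in the demo are invented:

```python
import numpy as np

def fit_circle_kasa(xy):
    """Algebraic least-squares (Kasa) circle fit to 2D hole-boundary points.
    Solves x^2 + y^2 + a*x + b*y + c = 0 for (a, b, c), then recovers the
    centre and radius of the blast hole opening."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    radius = float(np.sqrt(cx**2 + cy**2 - c))
    return (float(cx), float(cy)), radius

# quick check on noisy points of a 0.15 m-radius hole centred at (2.0, 3.0)
theta = np.linspace(0.0, 2.0 * np.pi, 60)
pts = np.column_stack([2.0 + 0.15 * np.cos(theta), 3.0 + 0.15 * np.sin(theta)])
pts += np.random.default_rng(1).normal(scale=0.005, size=pts.shape)
centre, r = fit_circle_kasa(pts)
```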

9 pages, 1953 KB  
Proceeding Paper
Visual Mapping and Autonomous Navigation Using AprilTags in Omnidirectional Mobile Robots: A Realistic ROS-Gazebo Simulation Framework
by Brad Steven Herrera, Mateo García, William Chamorro and Diego Maldonado
Eng. Proc. 2025, 115(1), 5; https://doi.org/10.3390/engproc2025115005 - 15 Nov 2025
Viewed by 732
Abstract
This paper presents a modular, high-fidelity simulation framework for the autonomous navigation of omnidirectional mobile robots using visual localization with AprilTags. The proposed system integrates realistic robot dynamics, a dual-layer path planning architecture, and interchangeable trajectory tracking controllers, all within the ROS Noetic and Gazebo simulation environment. AprilTags are employed as low-cost fiducial markers for map construction, eliminating the need for LiDAR or GPS. The architecture supports global path planning via the A* algorithm and local reactive replanning to avoid unexpected obstacles while preserving trajectory continuity. Three control strategies—Lyapunov-based, null-space Lyapunov, and proportional–integral (PI) control—are implemented and evaluated in multiple maze-like environments. Experimental results in simulation demonstrate accurate trajectory tracking, successful visual mapping, and effective obstacle avoidance under realistic conditions.
(This article belongs to the Proceedings of The XXXIII Conference on Electrical and Electronic Engineering)
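Of the three controllers evaluated, the Lyapunov-based law is the easiest to sketch. The minimal form below, with an invented gain, drives the tracking error e to zero because the world-frame command yields dV/dt = -k e^T e for the candidate V = 0.5 e^T e; it is a plausible textbook version, not necessarily the authors' controller:

```python
import numpy as np

def lyapunov_tracking(pose, ref, ref_vel, k=1.5):
    """Kinematic Lyapunov-based tracking law for an omnidirectional robot.

    With error e = ref - p, the world-frame command ref_vel + k*e gives
    e_dot = -k*e, so the tracking error decays exponentially. The command
    is then rotated into the body frame for the omnidirectional base."""
    x, y, theta = pose
    e = np.asarray(ref, float) - np.array([x, y])
    world_cmd = np.asarray(ref_vel, float) + k * e
    c, s = np.cos(theta), np.sin(theta)
    body_cmd = np.array([[c, s], [-s, c]]) @ world_cmd   # world -> body frame
    return body_cmd                                      # (vx, vy) command
```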

27 pages, 40043 KB  
Article
Collaborative Infrastructure-Free Aerial–Ground Robotic System for Warehouse Inventory Data Capture
by Rafaela Chaffilla, Paulo Alvito and Meysam Basiri
Drones 2025, 9(11), 792; https://doi.org/10.3390/drones9110792 - 13 Nov 2025
Viewed by 669
Abstract
Efficient and reliable inventory management remains a challenge in modern warehouses, where manual counting is time-consuming, error-prone, and costly. We present an autonomous aerial–ground system for warehouse inventory data capture that operates without external infrastructure or prior mapping operations. A differential-drive unmanned ground vehicle (UGV) performs global localization and navigation from a simple 2D floor plan via 2D LiDAR scan-to-map matching fused in an Extended Kalman Filter. An unmanned aerial vehicle (UAV) uses fiducial-based relative localization to execute short autonomous take-off, following, precision landing, and close-range imaging of high shelves. By ferrying the UAV between aisles, the UGV extends the UAV’s effective endurance and coverage, limiting flight to brief, high-value segments. We validate the system in simulated and real environments. In simulation, the proposed localization method achieves higher accuracy and consistency than AMCL, GMapping, and KartoSLAM across varied layouts. In experiments, the UAV reliably follows and lands on the UGV, producing geo-referenced imagery of high shelves suitable for downstream inventory recognition.
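The abstract does not spell out the fiducial-based relative localization, but such pipelines typically reduce to composing homogeneous transforms from the tag detection and known calibrations. The frame names and the T_a_b convention below are assumptions for illustration, not the paper's code:

```python
import numpy as np

def uav_in_pad_frame(T_cam_tag, T_uav_cam, T_pad_tag):
    """Fiducial-based relative localization by composing 4x4 homogeneous
    transforms (convention: T_a_b = pose of frame b expressed in frame a).

    T_cam_tag : tag pose from the fiducial detector, in the camera frame
    T_uav_cam : camera mounting pose on the UAV (extrinsic calibration)
    T_pad_tag : tag pose on the landing pad (known by construction)"""
    T_uav_tag = T_uav_cam @ T_cam_tag               # tag in the UAV body frame
    return T_pad_tag @ np.linalg.inv(T_uav_tag)     # UAV pose in the pad frame
```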
