Search Results (1,738)

Search Parameters:
Keywords = lidar sensor

68 pages, 5859 KB  
Review
A Comprehensive Review of Sensing, Control, and Networking in Agricultural Robots: From Perception to Coordination
by Chijioke Leonard Nkwocha, Adeayo Adewumi, Samuel Oluwadare Folorunsho, Chrisantus Eze, Pius Jjagwe, James Kemeshi and Ning Wang
Robotics 2025, 14(11), 159; https://doi.org/10.3390/robotics14110159 - 29 Oct 2025
Abstract
This review critically examines advancements in sensing, control, and networking technologies for agricultural robots (AgRobots) and their impact on modern farming. AgRobots—including Unmanned Aerial Vehicles (UAVs), Unmanned Ground Vehicles (UGVs), Unmanned Surface Vehicles (USVs), and robotic arms—are increasingly adopted to address labour shortages, sustainability challenges, and rising food demand. This paper reviews sensing technologies such as cameras, LiDAR, and multispectral sensors for navigation, object detection, and environmental perception. Control approaches, from classical PID (Proportional-Integral-Derivative) to advanced nonlinear and learning-based methods, are analysed to ensure precision, adaptability, and stability in dynamic agricultural settings. Networking solutions, including ZigBee, LoRaWAN, 5G, and emerging 6G, are evaluated for enabling real-time communication, multi-robot coordination, and data management. Swarm robotics and hybrid decentralized architectures are highlighted for efficient collective operations. This review is based on the literature published between 2015 and 2025 to identify key trends, challenges, and future directions in AgRobots. While AgRobots promise enhanced productivity, reduced environmental impact, and sustainable practices, barriers such as high costs, complex field conditions, and regulatory limitations remain. This review is expected to provide a foundation for guiding research and development toward innovative, integrated solutions for global food security and sustainable agriculture.
(This article belongs to the Special Issue Smart Agriculture with AI and Robotics)
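As a point of reference for the "classical PID" end of the control spectrum discussed above, a minimal discrete-time PID loop might look like the following sketch; the gains, timestep, and speed setpoint are illustrative, not taken from the review.

```python
# Minimal discrete-time PID controller sketch (illustrative gains, not from the review).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                 # accumulate error over time
        derivative = (error - self.prev_error) / self.dt  # rate of change of error
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: regulate a ground vehicle's forward speed toward 1.5 m/s.
pid = PID(kp=0.8, ki=0.1, kd=0.05, dt=0.02)
command = pid.step(setpoint=1.5, measurement=1.2)
```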

6 pages, 1514 KB  
Proceeding Paper
ROS 2-Based Framework for Semi-Automatic Vector Map Creation in Autonomous Driving Systems
by Abdelrahman Alabdallah, Barham Jeries Barham Farraj and Ernő Horváth
Eng. Proc. 2025, 113(1), 13; https://doi.org/10.3390/engproc2025113013 - 28 Oct 2025
Abstract
High-definition vector maps, such as Lanelet2, are critical for autonomous driving systems, enabling precise localization, path planning, and regulatory compliance. However, creating and maintaining these maps traditionally demands labor-intensive manual annotation or resource-heavy automated pipelines. This paper presents a ROS 2-based framework for semi-automatic vector map generation, leveraging Lanelet2 primitives to streamline map creation while balancing automation with human oversight. The framework integrates multi-sensor inputs (LIDAR, GPS/IMU) within ROS 2 to extract and fuse road features such as lanes, traffic signs, and curbs. The pipeline employs modular ROS 2 nodes for tasks including NDT- and SLAM-based pose estimation and the semantic segmentation of drivable areas, which serve as a basis for Lanelet2 primitives. To promote adoption, the implementation is released as open source. This work bridges the gap between automated map generation and human expertise, advancing the practical deployment of dynamic vector maps in autonomous systems.
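To illustrate the modular-node pattern the paper describes, here is a minimal rclpy sketch of a node that consumes LiDAR clouds and fused poses as inputs to a map-building step; the topic names and the feature-extraction placeholder are assumptions, not the paper's actual interfaces.

```python
# Minimal rclpy node sketch: subscribe to LiDAR scans and fused poses as inputs
# to a map-building step. Topic names and the map-building callback are hypothetical.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import PointCloud2
from geometry_msgs.msg import PoseStamped

class MapBuilderNode(Node):
    def __init__(self):
        super().__init__('vector_map_builder')
        self.create_subscription(PointCloud2, '/lidar/points', self.on_cloud, 10)
        self.create_subscription(PoseStamped, '/localization/pose', self.on_pose, 10)
        self.latest_pose = None

    def on_pose(self, msg):
        self.latest_pose = msg  # cache the most recent fused GNSS/IMU/NDT pose

    def on_cloud(self, msg):
        if self.latest_pose is None:
            return
        # A real pipeline would extract lane/curb features from the cloud here,
        # transform them with latest_pose, and emit Lanelet2 primitives.
        self.get_logger().info('received cloud with %d bytes' % len(msg.data))

def main():
    rclpy.init()
    rclpy.spin(MapBuilderNode())

if __name__ == '__main__':
    main()
```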

9 pages, 889 KB  
Proceeding Paper
Integrating a Stereo Vision System on the F1Tenth Platform for Enhanced Perception
by Péter Farkas, Bence Török and Szilárd Aradi
Eng. Proc. 2025, 113(1), 10; https://doi.org/10.3390/engproc2025113010 - 28 Oct 2025
Abstract
During the development of vehicle control algorithms, effective real-world validation is crucial. Model vehicle platforms provide a cost-effective and accessible method for such testing. The open-source F1Tenth project is a popular choice, but its reliance on lidar sensors limits certain applications. To enable more universal environmental perception, integrating a stereo camera system could be advantageous, although existing software packages do not yet support this functionality. Therefore, our research focuses on developing a modular software architecture for the F1Tenth platform, incorporating real-time stereo vision-based environment perception, robust state representation, and clear actuator interfaces. The system simplifies the integration and testing of control algorithms, while minimizing the simulation-to-reality gap. The framework’s operation is demonstrated through a real-world control problem. Environmental sensing, representation, and the control method combine classical and deep learning techniques to ensure real-time performance and robust operation. Our platform facilitates real-world testing and is suitable for validating research projects.
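As a rough illustration of the stereo-perception building block this platform adds, the following OpenCV sketch computes dense disparity with semi-global matching and converts it to depth; the calibration values and image paths are placeholders.

```python
# Sketch: dense disparity from a calibrated stereo pair with OpenCV's SGBM,
# then depth via Z = f * B / d. Focal length, baseline, and paths are placeholders.
import cv2
import numpy as np

left = cv2.imread('left.png', cv2.IMREAD_GRAYSCALE)
right = cv2.imread('right.png', cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM output is fixed-point

f_px = 700.0        # focal length in pixels (placeholder; comes from calibration)
baseline_m = 0.06   # stereo baseline in metres (placeholder)
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f_px * baseline_m / disparity[valid]  # metric depth per pixel
```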

19 pages, 10509 KB  
Article
High-Precision Mapping and Real-Time Localization for Agricultural Machinery Sheds and Farm Access Roads Environments
by Yang Yu, Zengyao Li, Buwang Dai, Jiahui Pan and Lizhang Xu
Agriculture 2025, 15(21), 2248; https://doi.org/10.3390/agriculture15212248 - 28 Oct 2025
Abstract
To address the issues of signal loss and insufficient accuracy of traditional GNSS (Global Navigation Satellite System) navigation in agricultural machinery sheds and farm access road environments, this paper proposes a high-precision mapping method for such complex environments and a real-time localization system for agricultural vehicles. First, an autonomous navigation system was developed by integrating multi-sensor data from LiDAR (Light Detection and Ranging), GNSS, and IMU (Inertial Measurement Unit), with functional modules for mapping, localization, planning, and control implemented within the ROS (Robot Operating System) framework. Second, an improved LeGO-LOAM algorithm is introduced for constructing maps of machinery sheds and farm access roads. The mapping accuracy is enhanced through reflectivity filtering, ground constraint optimization, and ScanContext-based loop closure detection. Finally, a localization method combining NDT (Normal Distribution Transform), IMU, and a UKF (Unscented Kalman Filter) is proposed for tracked grain transport vehicles. The UKF and IMU measurements are used to predict the vehicle state, while the NDT algorithm provides pose estimates for the state update, yielding a fused and more accurate pose estimate. Experimental results demonstrate that the proposed mapping method reduces APE (absolute pose error) by 79.99% and 49.04% in the machinery shed and farm access road environments, respectively, indicating a significant improvement over conventional methods. The real-time localization module achieves an average processing time of 26.49 ms with an average error of 3.97 cm, enhancing localization accuracy without compromising output frequency. This study provides technical support for fully autonomous operation of agricultural machinery.
(This article belongs to the Topic Digital Agriculture, Smart Farming and Crop Monitoring)
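To make the predict/update pattern concrete, here is a simplified sketch of the fusion structure described above, with an IMU-driven prediction and an NDT pose measurement update; a linear Kalman filter stands in for the paper's UKF, and all matrices and noise values are illustrative.

```python
# Simplified predict/update sketch: IMU acceleration drives the prediction,
# an NDT-derived position serves as the measurement. A linear Kalman filter
# stands in for the paper's UKF; every matrix and noise value is illustrative.
import numpy as np

x = np.zeros(2)           # state: [position, velocity] along one axis
P = np.eye(2)             # state covariance
Q = np.diag([0.01, 0.1])  # process noise (illustrative)
R = np.array([[0.05]])    # NDT position measurement noise (illustrative)
H = np.array([[1.0, 0.0]])
dt = 0.02

def predict(x, P, imu_accel):
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt**2, dt])
    x = F @ x + B * imu_accel          # propagate state with IMU input
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, ndt_position):
    y = ndt_position - H @ x           # innovation against NDT pose estimate
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = predict(x, P, imu_accel=0.3)
x, P = update(x, P, ndt_position=np.array([0.01]))
```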

21 pages, 5023 KB  
Article
Robust 3D Target Detection Based on LiDAR and Camera Fusion
by Miao Jin, Bing Lu, Gang Liu, Yinglong Diao, Xiwen Chen and Gaoning Nie
Electronics 2025, 14(21), 4186; https://doi.org/10.3390/electronics14214186 - 27 Oct 2025
Abstract
Autonomous driving relies on multimodal sensors to acquire environmental information for supporting decision making and control. While significant progress has been made in 3D object detection regarding point cloud processing and multi-sensor fusion, existing methods still suffer from shortcomings—such as sparse point clouds of foreground targets, fusion instability caused by fluctuating sensor data quality, and inadequate modeling of cross-frame temporal consistency in video streams—which severely restrict the practical performance of perception systems. To address these issues, this paper proposes a multimodal video stream 3D object detection framework based on reliability evaluation. Specifically, it dynamically perceives the reliability of each modal feature by evaluating the Region of Interest (RoI) features of cameras and LiDARs, and adaptively adjusts their contribution ratios in the fusion process accordingly. Additionally, a target-level semantic soft matching graph is constructed within the RoI region. Combined with spatial self-attention and temporal cross-attention mechanisms, the spatio-temporal correlations between consecutive frames are fully explored to achieve feature completion and enhancement. Verification on the nuScenes dataset shows that the proposed algorithm achieves an optimal performance of 67.3% and 70.6% in terms of the two core metrics, mAP and NDS, respectively—outperforming existing mainstream 3D object detection algorithms. Ablation experiments confirm that each module plays a crucial role in improving overall performance, and the algorithm exhibits better robustness and generalization in dynamically complex scenarios.
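A minimal sketch of the reliability-weighted fusion idea: score each modality's RoI feature, turn the scores into softmax weights, and blend. The scores here are placeholder inputs; the paper's actual reliability evaluation and attention mechanisms are considerably richer.

```python
# Sketch of reliability-weighted fusion: per-modality reliability scores are
# softmax-normalized into weights that blend camera and LiDAR RoI features.
# The scoring values are stand-ins for the paper's learned reliability evaluation.
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def fuse(cam_feat, lidar_feat, cam_score, lidar_score):
    w = softmax(np.array([cam_score, lidar_score]))
    return w[0] * cam_feat + w[1] * lidar_feat   # reliability-weighted blend

cam_feat = np.random.randn(256)     # RoI feature from the camera branch (placeholder)
lidar_feat = np.random.randn(256)   # RoI feature from the LiDAR branch (placeholder)
fused = fuse(cam_feat, lidar_feat, cam_score=1.2, lidar_score=0.4)
```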

30 pages, 11870 KB  
Article
Early Mapping of Farmland and Crop Planting Structures Using Multi-Temporal UAV Remote Sensing
by Lu Wang, Yuan Qi, Juan Zhang, Rui Yang, Hongwei Wang, Jinlong Zhang and Chao Ma
Agriculture 2025, 15(21), 2186; https://doi.org/10.3390/agriculture15212186 - 22 Oct 2025
Abstract
Fine-grained identification of crop planting structures provides key data for precision agriculture, thereby supporting scientific production and evidence-based policy making. This study selected a representative experimental farmland in Qingyang, Gansu Province, and acquired Unmanned Aerial Vehicle (UAV) multi-temporal data (six epochs) from multiple sensors (multispectral [visible–NIR], thermal infrared, and LiDAR). By fusing 59 feature indices, we achieved high-accuracy extraction of cropland and planting structures and identified the key feature combinations that discriminate among crops. The results show that (1) multi-source UAV data from April + June can effectively delineate cropland and enable accurate plot segmentation; (2) July is the optimal time window for fine-scale extraction of all planting-structure types in the area (legumes, millet, maize, buckwheat, wheat, sorghum, maize–legume intercropping, and vegetables), with a cumulative importance of 72.26% for the top ten features, while the April + June combination retains most of the separability (67.36%), enabling earlier but slightly less precise mapping; and (3) under July imagery, the SAM (Segment Anything Model) segmentation + RF (Random Forest) classification approach—using the RF-selected top 10 of the 59 features—achieved an overall accuracy of 92.66% with a Kappa of 0.9163, representing a 7.57% improvement over the contemporaneous SAM + CNN (Convolutional Neural Network) method. This work establishes a basis for UAV-based recognition of typical crops in the Qingyang sector of the Loess Plateau and, by deriving optimal recognition timelines and feature combinations from multi-epoch data, offers useful guidance for satellite-based mapping of planting structures across the Loess Plateau following multi-scale data fusion.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
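The "RF-selected top 10 of the 59 features" step can be sketched with scikit-learn as below; the feature matrix and labels are synthetic placeholders standing in for the 59 fused indices and crop-class labels.

```python
# Sketch of random-forest feature selection: rank the 59 fused indices by
# importance, keep the ten best, and refit on the reduced set.
# X and y are synthetic placeholders for the study's data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(500, 59)        # placeholder: 500 plots x 59 feature indices
y = np.random.randint(0, 8, 500)   # placeholder labels for 8 crop classes

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
top10 = np.argsort(rf.feature_importances_)[::-1][:10]   # ten most important features

rf_top = RandomForestClassifier(n_estimators=300, random_state=0).fit(X[:, top10], y)
print('selected feature indices:', top10)
```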

15 pages, 17822 KB  
Article
Dust Filtering in LIDAR Point Clouds Using Deep Learning for Mining Applications
by Bruno Cavieres, Nicolás Cruz and Javier Ruiz-del-Solar
Sensors 2025, 25(20), 6441; https://doi.org/10.3390/s25206441 - 18 Oct 2025
Abstract
In the domain of mining and mineral processing, LIDAR sensors are employed to obtain precise three-dimensional measurements of the surrounding environment. However, the functionality of these sensors is hindered by the dust produced by mining operations. To address this problem, a neural network-based method is proposed that filters dust measurements in real time from point clouds obtained using LIDARs. The proposed method is trained and validated using real data, yielding state-of-the-art results. Furthermore, a public database is constructed using LIDAR sensor data from diverse dusty environments. The database is made public for use in the training and benchmarking of dust filtering methods.
(This article belongs to the Section Intelligent Sensors)
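A minimal sketch of the real-time filtering pattern: score each LiDAR return with a model and drop points whose dust probability exceeds a threshold. The scorer below is a random stand-in for the paper's trained network, and the threshold is illustrative.

```python
# Sketch of per-point dust filtering: a model assigns each LiDAR return a
# dust probability, and high-probability points are removed from the cloud.
# predict_dust_prob is a stand-in for the paper's neural network.
import numpy as np

def predict_dust_prob(points):
    # Placeholder scorer: a trained network would consume per-point features
    # (range, intensity, local density, ...) and return P(dust) per point.
    return np.random.rand(points.shape[0])

def filter_dust(points, threshold=0.5):
    keep = predict_dust_prob(points) < threshold   # retain likely non-dust returns
    return points[keep]

cloud = np.random.randn(10000, 4)   # columns: x, y, z, intensity (placeholder cloud)
clean = filter_dust(cloud)
```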

25 pages, 4025 KB  
Review
Precision Forestry Revisited
by Can Vatandaslar, Kevin Boston, Zennure Ucar, Lana L. Narine, Marguerite Madden and Abdullah Emin Akay
Remote Sens. 2025, 17(20), 3465; https://doi.org/10.3390/rs17203465 - 17 Oct 2025
Abstract
This review presents a synthesis of global research on precision forestry, a field that integrates advanced technologies to enhance—rather than replace—established tools and methods used in operational forest management and the wood products industry. By evaluating 210 peer-reviewed publications indexed in Web of Science (up to 2025), the study identifies six main categories and eight components of precision forestry. The findings indicate that “forest management and planning” is the most common category, with nearly half of the studies focusing on this topic. “Remote sensing platforms and sensors” emerged as the most frequently used component, with unmanned aerial vehicle (UAV) and light detection and ranging (LiDAR) systems being the most widely adopted tools. The analysis also reveals a notable increase in precision forestry research since the early 2010s, coinciding with rapid developments in small UAVs and mobile sensor technologies. Despite growing interest, robotics and real-time process control systems remain underutilized, mainly due to challenging forest conditions and high implementation costs. The research highlights geographical disparities, with Europe, Asia, and North America hosting the majority of studies. Italy, China, Finland, and the United States stand out as the most active countries in terms of research output. Notably, the review emphasizes the need to integrate precision forestry into academic curricula and support industry adoption through dedicated information and technology specialists. As the forestry workforce ages and technology advances rapidly, a growing skills gap exists between industry needs and traditional forestry education. Equipping the next generation with hands-on experience in big data analysis, geospatial technologies, automation, and Artificial Intelligence (AI) is critical for ensuring the effective adoption and application of precision forestry.
(This article belongs to the Special Issue Digital Modeling for Sustainable Forest Management)

15 pages, 516 KB  
Perspective
Advances in High-Resolution Spatiotemporal Monitoring Techniques for Indoor PM2.5 Distribution
by Qingyang Liu
Atmosphere 2025, 16(10), 1196; https://doi.org/10.3390/atmos16101196 - 17 Oct 2025
Abstract
Indoor air pollution, including fine particulate matter (PM2.5), poses a severe threat to human health. Due to the diverse sources of indoor PM2.5 and its high spatial heterogeneity in distribution, traditional single-point fixed monitoring fails to accurately reflect the actual human exposure level. In recent years, the development of high spatiotemporal resolution monitoring technologies has provided a new perspective for revealing the dynamic distribution patterns of indoor PM2.5. This study discusses two cutting-edge monitoring strategies: (1) mobile monitoring technology based on Indoor Positioning Systems (IPS) and portable sensors, which maps 2D exposure trajectories and concentration fields by having personnel carry sensors while moving; and (2) 3D dynamic monitoring technology based on in situ Lateral Scattering LiDAR (I-LiDAR), which non-intrusively reconstructs the 3D dynamic distribution of PM2.5 concentrations using laser arrays. This study elaborates on the principles, calibration methods, application cases, advantages, and disadvantages of the two technologies, compares their applicable scenarios, and outlines future research directions in multi-technology integration, intelligent calibration, and public health applications. It aims to provide a theoretical basis and technical reference for the accurate assessment of indoor air quality and the prevention and control of health risks.

29 pages, 7085 KB  
Article
Marine Boundary Layer Cloud Boundaries and Phase Estimation Using Airborne Radar and In Situ Measurements During the SOCRATES Campaign over Southern Ocean
by Anik Das, Baike Xi, Xiaojian Zheng and Xiquan Dong
Atmosphere 2025, 16(10), 1195; https://doi.org/10.3390/atmos16101195 - 16 Oct 2025
Abstract
The Southern Ocean Clouds, Radiation, Aerosol Transport Experimental Study (SOCRATES) was an aircraft-based campaign (15 January–26 February 2018) that deployed in situ probes and remote sensors to investigate low-level clouds over the Southern Ocean (SO). A novel methodology was developed to identify cloud boundaries and classify cloud phases in single-layer, low-level marine boundary layer (MBL) clouds below 3 km using the HIAPER Cloud Radar (HCR) and in situ measurements. The cloud base and top heights derived from HCR reflectivity, Doppler velocity, and spectrum width measurements agreed well with corresponding lidar-based and in situ estimates of cloud boundaries, with mean differences below 100 m. A liquid water content–reflectivity (LWC-Z) relationship, LWC = 0.70Z^0.29, was derived to retrieve the LWC and liquid water path (LWP) from HCR profiles. The cloud phase was classified using HCR measurements, temperature, and LWP, yielding 40.6% liquid, 18.3% mixed-phase, and 5.1% ice samples, along with drizzle (29.1%), rain (3.2%), and snow (3.7%) for drizzling cloud cases. The classification algorithm demonstrates good consistency with established methods. This study provides a framework for the boundary and phase detection of MBL clouds, offering insights into SO cloud microphysics and supporting future efforts in satellite retrievals and climate model evaluation.
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
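As a worked example of the retrieved power law LWC = 0.70Z^0.29, the snippet below evaluates it for a few reflectivity values; the unit conventions (input in dBZ converted to linear Z, output in g m^-3) are assumptions for illustration, since the abstract does not state them.

```python
# Worked example of the LWC-Z power law, LWC = 0.70 * Z**0.29.
# Assumed conventions: reflectivity given in dBZ, converted to linear Z;
# output interpreted as LWC in g m^-3. The paper defines the exact units.
def lwc_from_dbz(dbz):
    z_linear = 10.0 ** (dbz / 10.0)      # dBZ -> linear reflectivity
    return 0.70 * z_linear ** 0.29

for dbz in (-20.0, -10.0, 0.0):
    print(f'{dbz:6.1f} dBZ -> LWC ~= {lwc_from_dbz(dbz):.3f}')
```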

25 pages, 10766 KB  
Article
Prediction of Thermal Response of Burning Outdoor Vegetation Using UAS-Based Remote Sensing and Artificial Intelligence
by Pirunthan Keerthinathan, Imanthi Kalanika Subasinghe, Thanirosan Krishnakumar, Anthony Ariyanayagam, Grant Hamilton and Felipe Gonzalez
Remote Sens. 2025, 17(20), 3454; https://doi.org/10.3390/rs17203454 - 16 Oct 2025
Abstract
The increasing frequency and intensity of wildfires pose severe risks to ecosystems, infrastructure, and human safety. In wildland–urban interface (WUI) areas, nearby vegetation strongly influences building ignition risk through flame contact and radiant heat exposure. However, limited research has leveraged Unmanned Aerial Systems (UAS) remote sensing (RS) to capture species-specific vegetation geometry and predict thermal responses during ignition events. This study proposes a two-stage framework integrating UAS-based multispectral (MS) imagery, LiDAR data, and Fire Dynamics Simulator (FDS) modeling to estimate the maximum temperature (T) and heat flux (HF) of outdoor vegetation, focusing on Syzygium smithii (Lilly Pilly). The study data were collected at a plant nursery in Queensland, Australia. A total of 72 commercially available outdoor vegetation samples were classified into 11 classes based on pixel counts. In the first stage, ensemble learning and watershed segmentation were employed to segment target vegetation patches. Vegetation UAS-LiDAR point cloud delineation was performed using Raycloudtools, then projected onto a 2D raster to generate instance ID maps. The delineated point clouds associated with the target vegetation were filtered using georeferenced vegetation patches. In the second stage, cone-shaped synthetic models of Lilly Pilly were simulated in FDS, and the resulting data from the sensor grid placed near the vegetation in the simulation environment were used to train an XGBoost model to predict T and HF based on vegetation height (H) and crown diameter (D). The point cloud delineation successfully extracted all Lilly Pilly vegetation within the test region. The thermal response prediction model demonstrated high accuracy, achieving an RMSE of 0.0547 °C and R² of 0.9971 for T, and an RMSE of 0.1372 kW/m² with an R² of 0.9933 for HF. This study demonstrates the framework’s feasibility using a single vegetation species under controlled ignition simulation conditions and establishes a scalable foundation for extending its applicability to diverse vegetation types and environmental conditions.
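The second-stage surrogate (an XGBoost model mapping vegetation height H and crown diameter D to peak temperature and heat flux) might be set up as in this sketch; the training data is synthetic, standing in for the FDS sensor-grid outputs.

```python
# Sketch of the second-stage surrogate: XGBoost regressors mapping vegetation
# height (H) and crown diameter (D) to peak temperature and heat flux.
# All training data below is a synthetic placeholder for the FDS outputs.
import numpy as np
from xgboost import XGBRegressor

X = np.random.uniform([0.5, 0.3], [3.0, 2.0], size=(200, 2))   # columns: H (m), D (m)
y_T = 400 + 80 * X[:, 0] + np.random.randn(200)                # placeholder max temperature
y_HF = 5 + 3 * X[:, 1] + 0.1 * np.random.randn(200)            # placeholder heat flux

model_T = XGBRegressor(n_estimators=200).fit(X, y_T)
model_HF = XGBRegressor(n_estimators=200).fit(X, y_HF)

sample = np.array([[1.8, 1.1]])   # a 1.8 m tall plant with a 1.1 m crown
print(model_T.predict(sample), model_HF.predict(sample))
```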

37 pages, 1690 KB  
Review
Advances in Crop Row Detection for Agricultural Robots: Methods, Performance Indicators, and Scene Adaptability
by Zhen Ma, Xinzhong Wang, Xuegeng Chen, Bin Hu and Jingbin Li
Agriculture 2025, 15(20), 2151; https://doi.org/10.3390/agriculture15202151 - 16 Oct 2025
Abstract
Crop row detection is a key technology for agricultural robots to achieve autonomous navigation and precise operations, and it directly affects the precision and stability of agricultural machinery; its development will significantly shape the progress of intelligent agriculture. This paper first summarizes the mainstream technical methods, performance evaluation systems, and adaptability to typical agricultural scenes for crop row detection. It explains the technical principles and characteristics of traditional methods based on visual sensors, LiDAR-based point cloud preprocessing, line structure extraction and 3D feature calculation methods, and multi-sensor fusion methods. Second, performance evaluation criteria such as accuracy, efficiency, robustness, and practicality are reviewed, analyzing and comparing the applicability of different methods in typical scenarios such as open fields, facility agriculture, orchards, and special terrains. Based on this multidimensional analysis, it is concluded that any single technology has specific environmental adaptability limitations; multi-sensor fusion can improve robustness in complex scenarios, and the fusion advantage grows as the number of sensors increases. Suggestions for the development of agricultural robot navigation technology are made based on the status of technological applications over the past five years and future needs. This review provides a clear technical framework and scene-adaptation reference for research in this field, aiming to promote precision and efficiency in agricultural production.
(This article belongs to the Section Agricultural Technology)
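For a concrete instance of the classical vision-based methods the review covers, the sketch below segments vegetation with an excess-green index and fits candidate row lines with a probabilistic Hough transform; the thresholds and image path are illustrative.

```python
# Sketch of a classical vision-based crop row detector: excess-green (ExG)
# vegetation segmentation followed by probabilistic Hough line fitting.
# The image path and all thresholds are illustrative.
import cv2
import numpy as np

img = cv2.imread('field.png')                     # placeholder field image
b, g, r = cv2.split(img.astype(np.float32))
exg = 2 * g - r - b                               # excess-green vegetation index
mask = (exg > 20).astype(np.uint8) * 255          # illustrative binarization threshold

lines = cv2.HoughLinesP(mask, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=20)   # candidate crop rows
```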

32 pages, 2733 KB  
Article
Collaborative Multi-Agent Platform with LIDAR Recognition and Web Integration for STEM Education
by David Cruz García, Sergio García González, Arturo Álvarez Sanchez, Rubén Herrero Pérez and Gabriel Villarrubia González
Appl. Sci. 2025, 15(20), 11053; https://doi.org/10.3390/app152011053 - 15 Oct 2025
Abstract
STEM (Science, Technology, Engineering, and Mathematics) education faces the challenge of incorporating advanced technologies that foster motivation, collaboration, and hands-on learning. This study proposes a portable system capable of transforming ordinary surfaces into interactive learning spaces through gamification and spatial perception. A prototype based on a multi-agent architecture was developed on the PANGEA (Platform for automatic coNstruction of orGanizations of intElligent agents) platform, integrating LIDAR (Light Detection and Ranging) sensors for gesture detection, an ultra-short-throw projector for visual interaction, and a web platform to manage educational content, organize activities, and evaluate student performance. The data from the sensors are processed in real time using ROS (Robot Operating System), generating precise virtual interactions on the projected surface, while the web platform allows physical and pedagogical parameters to be configured. Preliminary tests show that the system accurately detects gestures, translates them into digital interactions, and maintains low latency in different classroom environments, demonstrating robustness, modularity, and portability. The results suggest that the combination of multi-agent architectures, LIDAR sensors, and gamified platforms offers an effective approach to promote active learning in STEM, facilitate the adoption of advanced technologies in diverse educational settings, and improve student engagement and experience.
(This article belongs to the Section Computing and Artificial Intelligence)
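The gesture-detection idea, a planar LIDAR watching the projected surface, can be sketched as converting scan returns to Cartesian points and treating near-range returns as touch candidates; the range threshold and scan geometry below are illustrative, not the system's actual parameters.

```python
# Sketch of planar-LIDAR touch detection: convert a scan to Cartesian points
# and keep returns close enough to the surface to count as "touch" candidates.
# Scan geometry and the range threshold are illustrative.
import numpy as np

def scan_to_touches(ranges, angle_min, angle_increment, max_touch_range=1.5):
    r = np.asarray(ranges)
    angles = angle_min + angle_increment * np.arange(len(r))
    near = r < max_touch_range                        # returns likely caused by a hand
    x, y = r[near] * np.cos(angles[near]), r[near] * np.sin(angles[near])
    return np.stack([x, y], axis=1)                   # candidate touch points in metres

touches = scan_to_touches(np.random.uniform(0.2, 5.0, 360), -np.pi, 2 * np.pi / 360)
```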

19 pages, 4546 KB  
Article
LiDAR Dreamer: Efficient World Model for Autonomous Racing with Cartesian-Polar Encoding and Lightweight State-Space Cells
by Myeongjun Kim, Jong-Chan Park, Sang-Min Choi and Gun-Woo Kim
Information 2025, 16(10), 898; https://doi.org/10.3390/info16100898 - 14 Oct 2025
Abstract
Autonomous racing serves as a challenging testbed that exposes the limitations of perception-decision-control algorithms in extreme high-speed environments, revealing safety gaps not addressed in existing autonomous driving research. However, traditional control techniques (e.g., FGM and MPC) and reinforcement learning-based approaches (including model-free and Dreamer variants) struggle to simultaneously satisfy sample efficiency, prediction reliability, and real-time control performance, making them difficult to apply in actual high-speed racing environments. To address these challenges, we propose LiDAR Dreamer, a novel world model specialized for LiDAR sensor data. LiDAR Dreamer introduces three core techniques: (1) efficient point cloud preprocessing and encoding via Cartesian Polar Bar Charts, (2) Light Structured State-Space Cells (LS3C) that reduce RSSM parameters by 14.2% while preserving key dynamic information, and (3) a Displacement Covariance Distance divergence function, which enhances both learning stability and expressiveness. Experiments in PyBullet F1TENTH simulation environments demonstrate that LiDAR Dreamer achieves competitive performance across different track complexities. On the Austria track with complex corners, it reaches 90% of DreamerV3’s performance (1.14 vs. 1.27 progress) while using 81.7% fewer parameters. On the simpler Columbia track, while model-free methods achieve higher absolute performance, LiDAR Dreamer shows improved sample efficiency compared to baseline Dreamer models, converging faster to stable performance. The Treitlstrasse environment results demonstrate comparable performance to baseline methods. Furthermore, beyond the 14.2% RSSM parameter reduction, reward loss converged more stably without spikes, improving overall training efficiency and stability.
(This article belongs to the Section Artificial Intelligence)
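The polar-binning intuition behind the Cartesian Polar Bar Chart encoding can be sketched as collapsing a planar scan into fixed angular bins, keeping the minimum range per bin as a compact observation; the bin and beam counts below are illustrative, and the paper's actual encoding pairs Cartesian and polar views rather than using this reduction alone.

```python
# Sketch of the polar-binning idea: reduce a dense planar LiDAR scan to a
# fixed-size "bar chart" of minimum range per angular sector, giving the
# world model a compact observation. Bin and beam counts are illustrative.
import numpy as np

def polar_bar_chart(ranges, num_bins=36):
    beams_per_bin = len(ranges) // num_bins
    trimmed = np.asarray(ranges)[:beams_per_bin * num_bins]
    return trimmed.reshape(num_bins, beams_per_bin).min(axis=1)   # min range per sector

scan = np.random.uniform(0.1, 10.0, 1080)   # e.g. a 1080-beam planar scan
obs = polar_bar_chart(scan)                 # 36-dimensional encoded observation
```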

36 pages, 18073 KB  
Article
Multi-Domain Robot Swarm for Industrial Mapping and Asset Monitoring: Technical Challenges and Solutions
by Fethi Ouerdane, Ahmed Abubaker, Mubarak Badamasi Aremu, Mohammed Abdel-Nasser, Ahmed Eltayeb, Karim Asif Sattar, Abdulrahman Javaid, Ahmed Ibnouf, Sami El Ferik and Mustafa Alnasser
Sensors 2025, 25(20), 6295; https://doi.org/10.3390/s25206295 - 11 Oct 2025
Abstract
Industrial environments are complex, making the monitoring of gauge meters challenging. This is especially true in confined spaces, underground, or at high altitudes. These difficulties underscore the need for intelligent solutions in the inspection and monitoring of plant assets, such as gauge meters. In this study, we integrate unmanned ground vehicles and unmanned aerial vehicles to address this challenge, though the integration of these heterogeneous systems introduces additional complexities in terms of coordination, interoperability, and communication. Our goal is to develop a multi-domain robotic swarm system for industrial mapping and asset monitoring. We created an experimental setup to simulate industrial inspection tasks, involving the integration of a TurtleBot 2 and a QDrone 2. The TurtleBot 2 utilizes simultaneous localization and mapping (SLAM) technology, along with a LiDAR sensor, for mapping and navigation purposes. The QDrone 2 captures high-resolution images of meter gauges. We evaluated the system’s performance in both simulation and real-world environments. The system achieved accurate mapping, high localization, and landing precision, with 84% accuracy in detecting meter gauges. It also reached 87.5% accuracy in reading gauge indicators using the PaddleOCR algorithm. The system navigated complex environments effectively, showcasing the potential for real-time collaboration between ground and aerial robotic platforms.
(This article belongs to the Section Sensors and Robotics)
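The gauge-reading step might look like the following PaddleOCR sketch; the constructor arguments and result layout follow the commonly documented PaddleOCR Python API, which is version-dependent, and the image path is a placeholder.

```python
# Sketch of the gauge-reading step with PaddleOCR. API usage follows the
# commonly documented paddleocr package and may differ across versions;
# the image path is a placeholder for a drone-captured gauge photo.
from paddleocr import PaddleOCR

ocr = PaddleOCR(lang='en')              # load detector + recognizer once
result = ocr.ocr('gauge_photo.png')     # run detection + recognition on the image

for line in result[0]:
    box, (text, confidence) = line      # bounding box, recognized text, score
    print(f'{text!r} (confidence {confidence:.2f})')
```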
