Search Results (50)

Search Parameters:
Keywords = static environment assumption

28 pages, 3832 KiB  
Article
Design of Message Formatting and Utilization Strategies for UAV-Based Pseudolite Systems Compatible with GNSS Receivers
by Guanbing Zhang, Yang Zhang, Hong Yuan, Yi Lu and Ruocheng Guo
Drones 2025, 9(8), 526; https://doi.org/10.3390/drones9080526 - 25 Jul 2025
Viewed by 202
Abstract
This paper proposes a GNSS-compatible method for characterizing the motion of UAV-based navigation enhancement platforms, designed to provide reliable navigation and positioning services in emergency scenarios where GNSS signals are unavailable or severely degraded. The method maps UAV trajectories into standard GNSS navigation messages by establishing a correspondence between ephemeris parameters and platform positions through coordinate transformation and Taylor series expansion. To address modeling inaccuracies, the approach incorporates truncation error analysis and motion-assumption compensation via parameter optimization. This design enables UAV-mounted pseudolite systems to broadcast GNSS-compatible signals without modifying existing receivers, significantly enhancing rapid deployment capabilities in complex or degraded environments. Simulation results confirm precise positional representation in static scenarios and robust error control under dynamic motion through higher-order modeling and optimized broadcast strategies. UAV flight tests demonstrated a theoretical maximum error of 0.4262 m and an actual maximum error of 3.1878 m under real-world disturbances, which is within operational limits. Additional experiments confirmed successful message parsing with standard GNSS receivers. The proposed method offers a lightweight, interoperable solution for integrating UAV platforms into GNSS-enhanced positioning systems, supporting timely and accurate navigation services in emergency and disaster relief operations. Full article
(This article belongs to the Special Issue Unmanned Aerial Vehicles for Enhanced Emergency Response)
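
Illustrative sketch (not from the paper): the abstract above maps UAV positions into ephemeris-style parameters via a Taylor series expansion. The snippet below only illustrates that idea in isolation, fitting per-axis second-order polynomials (position, velocity, acceleration terms) to a short trajectory segment and reporting the resulting truncation error; the segment length, sampling rate, and synthetic trajectory are assumptions, not the authors' message design.

```python
import numpy as np

def fit_segment(t, pos, order=2):
    """Per-axis polynomial coefficients (Taylor-like expansion about t[0])."""
    tau = t - t[0]
    return np.array([np.polynomial.polynomial.polyfit(tau, pos[:, k], order)
                     for k in range(3)])

def eval_segment(coeffs, t0, t):
    tau = t - t0
    return np.stack([np.polynomial.polynomial.polyval(tau, c) for c in coeffs], axis=1)

# Synthetic 10 s UAV trajectory segment in ECEF-like metres, sampled at 10 Hz (assumed)
t = np.linspace(0.0, 10.0, 101)
truth = np.stack([
    -2.85e6 + 12.0 * t + 0.15 * np.sin(0.8 * t),   # cruise plus a small oscillation
     4.65e6 +  3.0 * t,                            # constant velocity
     3.28e6 +  0.5 * t - 0.05 * t**2,              # gentle descent
], axis=1)

coeffs = fit_segment(t, truth, order=2)            # position/velocity/acceleration terms
recon = eval_segment(coeffs, t[0], t)
print("max truncation error over the segment: %.3f m"
      % np.max(np.linalg.norm(recon - truth, axis=1)))
```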

16 pages, 3775 KiB  
Article
Optimizing Energy Efficiency in Last-Mile Delivery: A Collaborative Approach with Public Transportation System and Drones
by Pierre Romet, Charbel Hage, El-Hassane Aglzim, Tonino Sophy and Franck Gechter
Drones 2025, 9(8), 513; https://doi.org/10.3390/drones9080513 - 22 Jul 2025
Viewed by 291
Abstract
Accurately estimating the energy consumption of unmanned aerial vehicles (UAVs) in real-world delivery scenarios remains a critical challenge, particularly when UAVs operate in complex urban environments and are coupled with public transportation systems. Most existing models rely on oversimplified assumptions or static mission profiles, limiting their applicability to realistic, scalable drone-based logistics. In this paper, we propose a physically-grounded and scenario-aware energy sizing methodology for UAVs operating as part of a last-mile delivery system integrated with a city’s bus network. The model incorporates detailed physical dynamics—including lift, drag, thrust, and payload variations—and considers real-time mission constraints such as delivery execution windows and infrastructure interactions. To enhance the realism of the energy estimation, we integrate computational fluid dynamics (CFD) simulations that quantify the impact of surrounding structures and moving buses on UAV thrust efficiency. Four mission scenarios of increasing complexity are defined to evaluate the effects of delivery delays, obstacle-induced aerodynamic perturbations, and early return strategies on energy consumption. The methodology is applied to a real-world transport network in Belfort, France, using a graph-based digital twin. Results show that environmental and operational constraints can lead to up to 16% additional energy consumption compared to idealized mission models. The proposed framework provides a robust foundation for UAV battery sizing, mission planning, and sustainable integration of aerial delivery into multimodal urban transport systems. Full article
(This article belongs to the Special Issue Urban Air Mobility Solutions: UAVs for Smarter Cities)
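
Illustrative sketch (not from the paper): a deliberately simplified, steady-state energy estimate for one delivery leg, using momentum-theory induced power plus parasitic drag and an overall efficiency factor. The paper's CFD-informed, mission-aware model is far richer; all vehicle parameters below are assumed values.

```python
import numpy as np

RHO = 1.225          # air density, kg/m^3
G = 9.81

def leg_energy(distance_m, speed_ms, mass_kg, rotor_area_m2,
               drag_coeff=0.9, frontal_area_m2=0.05, eta=0.7):
    """Very simplified quadrotor energy for a level cruise leg (Wh).

    Induced power from momentum theory at the required thrust, plus parasitic
    power to overcome airframe drag, divided by an overall efficiency eta."""
    drag = 0.5 * RHO * drag_coeff * frontal_area_m2 * speed_ms**2
    thrust = np.hypot(mass_kg * G, drag)                  # tilt to balance drag
    p_induced = thrust**1.5 / np.sqrt(2.0 * RHO * rotor_area_m2)
    p_parasitic = drag * speed_ms
    p_total = (p_induced + p_parasitic) / eta             # W
    return p_total * (distance_m / speed_ms) / 3600.0     # Wh

# Outbound with a 1.5 kg parcel, return empty (assumed 3.5 kg airframe)
out = leg_energy(2500, 12.0, mass_kg=5.0, rotor_area_m2=0.35)
back = leg_energy(2500, 12.0, mass_kg=3.5, rotor_area_m2=0.35)
print(f"outbound {out:.1f} Wh, return {back:.1f} Wh, total {out + back:.1f} Wh")
```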

30 pages, 956 KiB  
Article
Stochastic Production Planning with Regime-Switching: Sensitivity Analysis, Optimal Control, and Numerical Implementation
by Dragos-Patru Covei
Axioms 2025, 14(7), 524; https://doi.org/10.3390/axioms14070524 - 8 Jul 2025
Viewed by 187
Abstract
This study investigates a stochastic production planning problem with regime-switching parameters, inspired by economic cycles impacting production and inventory costs. The model considers multiple types of goods and employs a Markov chain to capture probabilistic regime transitions, coupled with a multidimensional Brownian motion representing stochastic demand dynamics. The production and inventory cost optimization problem is formulated as a quadratic cost functional, with the solution characterized by a regime-dependent system of elliptic partial differential equations (PDEs). Numerical solutions to the PDE system are computed using a monotone iteration algorithm, enabling quantitative analysis. Sensitivity analysis and model risk evaluation illustrate the effects of regime-dependent volatility, holding costs, and discount factors, revealing the conservative bias of regime-switching models when compared to static alternatives. Practical implications include optimizing production strategies under fluctuating economic conditions and exploring future extensions such as correlated Brownian dynamics, non-quadratic cost functions, and geometric inventory frameworks. In contrast to earlier studies that imposed static or overly simplified regime-switching assumptions, our work presents a fully integrated framework—combining optimal control theory, a regime-dependent system of elliptic PDEs, and comprehensive numerical and sensitivity analyses—to more accurately capture the complex stochastic dynamics of production planning and thereby deliver enhanced, actionable insights for modern manufacturing environments. Full article
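
Illustrative sketch (not from the paper): a toy one-dimensional, two-regime elliptic system solved by a Jacobi-style monotone iteration on a finite-difference grid, in the spirit of the monotone iteration algorithm mentioned above. Coefficients, switching intensities, and boundary conditions are assumptions chosen only to make the example run.

```python
import numpy as np

# Toy regime-switching elliptic system on (0, 1) with u_i(0) = u_i(1) = 0:
#   -0.5 * sigma_i^2 * u_i'' + (r + q_i) * u_i = f_i(x) + q_i * u_j   (j != i),
# where q_i is the switching intensity out of regime i.  Each sweep freezes the
# other regime and solves a linear tridiagonal problem, repeated to convergence.
n = 200
x = np.linspace(0.0, 1.0, n + 2)[1:-1]
h = x[1] - x[0]
sigma = [0.3, 0.6]                          # regime volatilities (assumed)
q = [1.0, 2.0]                              # switching intensities (assumed)
r = 0.05                                    # discount rate (assumed)
f = [np.ones(n), 2.0 + np.sin(np.pi * x)]   # regime running costs (assumed)

def operator(sig, zeroth_order):
    lap = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    return 0.5 * sig**2 * lap + zeroth_order * np.eye(n)

A = [operator(sigma[i], r + q[i]) for i in range(2)]
u = [np.zeros(n), np.zeros(n)]
for sweep in range(500):
    new = [np.linalg.solve(A[i], f[i] + q[i] * u[1 - i]) for i in range(2)]
    gap = max(np.max(np.abs(new[i] - u[i])) for i in range(2))
    u = new
    if gap < 1e-10:
        break
print("iterations:", sweep + 1,
      "values at x = 0.5:", [round(float(np.interp(0.5, x, ui)), 4) for ui in u])
```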

35 pages, 21267 KiB  
Article
Unmanned Aerial Vehicle–Unmanned Ground Vehicle Centric Visual Semantic Simultaneous Localization and Mapping Framework with Remote Interaction for Dynamic Scenarios
by Chang Liu, Yang Zhang, Liqun Ma, Yong Huang, Keyan Liu and Guangwei Wang
Drones 2025, 9(6), 424; https://doi.org/10.3390/drones9060424 - 10 Jun 2025
Viewed by 1223
Abstract
In this study, we introduce an Unmanned Aerial Vehicle (UAV) centric visual semantic simultaneous localization and mapping (SLAM) framework that integrates RGB–D cameras, inertial measurement units (IMUs), and a 5G–enabled remote interaction module. Our system addresses three critical limitations in existing approaches: (1) Distance constraints in remote operations; (2) Static map assumptions in dynamic environments; and (3) High–dimensional perception requirements for UAV–based applications. By combining YOLO–based object detection with epipolar–constraint-based dynamic feature removal, our method achieves real-time semantic mapping while rejecting motion artifacts. The framework further incorporates a dual–channel communication architecture to enable seamless human–in–the–loop control over UAV–Unmanned Ground Vehicle (UGV) teams in large–scale scenarios. Experimental validation across indoor and outdoor environments indicates that the system can achieve a detection rate of up to 75 frames per second (FPS) on an NVIDIA Jetson AGX Xavier using YOLO–FASTEST, ensuring the rapid identification of dynamic objects. In dynamic scenarios, the localization accuracy attains an average absolute pose error (APE) of 0.1275 m. This outperforms state–of–the–art methods like Dynamic–VINS (0.211 m) and ORB–SLAM3 (0.148 m) on the EuRoC MAV Dataset. The dual-channel communication architecture (Web Real–Time Communication (WebRTC) for video and Message Queuing Telemetry Transport (MQTT) for telemetry) reduces bandwidth consumption by 65% compared to traditional TCP–based protocols. Moreover, our hybrid dynamic feature filtering can reject 89% of dynamic features in occluded scenarios, guaranteeing accurate mapping in complex environments. Our framework represents a significant advancement in enabling intelligent UAVs/UGVs to navigate and interact in complex, dynamic environments, offering real-time semantic understanding and accurate localization. Full article
(This article belongs to the Special Issue Advances in Perception, Communications, and Control for Drones)
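
Illustrative sketch (not from the paper): one way to combine detector boxes with an epipolar-distance test to flag dynamic feature matches, as the abstract describes. The OpenCV fundamental-matrix call and the pixel threshold are assumptions about the setup, not the authors' implementation.

```python
import numpy as np
import cv2

def epipolar_distances(F, pts1, pts2):
    """Symmetric point-to-epipolar-line distance for matched points (N x 2 arrays)."""
    p1 = np.hstack([pts1, np.ones((len(pts1), 1))])
    p2 = np.hstack([pts2, np.ones((len(pts2), 1))])
    lines2 = p1 @ F.T                               # epipolar lines in image 2 (F @ p1)
    lines1 = p2 @ F                                 # epipolar lines in image 1 (F.T @ p2)
    d2 = np.abs(np.sum(lines2 * p2, axis=1)) / np.hypot(lines2[:, 0], lines2[:, 1])
    d1 = np.abs(np.sum(lines1 * p1, axis=1)) / np.hypot(lines1[:, 0], lines1[:, 1])
    return 0.5 * (d1 + d2)

def static_match_mask(pts1, pts2, det_boxes, thresh_px=1.5):
    """True for matches that are epipolar-consistent and outside detected object boxes."""
    F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    d = epipolar_distances(F, pts1, pts2)
    in_box = np.zeros(len(pts2), dtype=bool)
    for x1, y1, x2, y2 in det_boxes:                # detector boxes in the current frame
        in_box |= ((pts2[:, 0] >= x1) & (pts2[:, 0] <= x2) &
                   (pts2[:, 1] >= y1) & (pts2[:, 1] <= y2))
    return (d < thresh_px) & ~in_box
```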

17 pages, 10094 KiB  
Article
EMS-SLAM: Dynamic RGB-D SLAM with Semantic-Geometric Constraints for GNSS-Denied Environments
by Jinlong Fan, Yipeng Ning, Jian Wang, Xiang Jia, Dashuai Chai, Xiqi Wang and Ying Xu
Remote Sens. 2025, 17(10), 1691; https://doi.org/10.3390/rs17101691 - 12 May 2025
Viewed by 620
Abstract
Global navigation satellite systems (GNSSs) exhibit significant performance limitations in signal-deprived environments such as indoor and underground spaces. Although visual SLAM has emerged as a viable solution for ego-motion estimation in GNSS-denied areas, conventional approaches remain constrained by static environment assumptions, resulting in a substantial degradation in accuracy when handling dynamic scenarios. The EMS-SLAM framework combines geometric constraints with semantic information to provide a real-time solution to the challenges of robustness and accuracy in dynamic environments. To improve the accuracy of the initial pose, EMS-SLAM employs a feature-matching algorithm based on graph-cut RANSAC. In addition, a degeneracy-resistant geometric constraint method is proposed, which effectively addresses the degeneracy issues of purely epipolar approaches. Finally, EMS-SLAM combines semantic information with geometric constraints to maintain high accuracy while quickly eliminating dynamic feature points. Experiments were conducted on public datasets and on datasets we collected. The results demonstrate that our method outperformed current SLAM algorithms in highly dynamic environments. Full article
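
Illustrative sketch (not from the paper): OpenCV's USAC back end with the accurate preset (cv2.USAC_ACCURATE), which applies graph-cut-based local optimization, is used here as an off-the-shelf stand-in for a graph-cut RANSAC matching step; the ORB detector choice and the thresholds are assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def gc_ransac_verified_matches(img1, img2):
    """ORB matching followed by a robust fundamental-matrix check using OpenCV's
    USAC 'accurate' preset (graph-cut local optimization); returns verified matches."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.USAC_ACCURATE,
                                     ransacReprojThreshold=1.0, confidence=0.999)
    keep = mask.ravel().astype(bool)
    return [m for m, ok in zip(matches, keep) if ok], F

# usage (assumed file names):
# good, F = gc_ransac_verified_matches(cv2.imread("a.png", 0), cv2.imread("b.png", 0))
```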

32 pages, 2699 KiB  
Article
Dynamic Marketing Uplift Modeling: A Symmetry-Preserving Framework Integrating Causal Forests with Deep Reinforcement Learning for Personalized Intervention Strategies
by Jiyuan Wang, Yutong Tan, Bingying Jiang, Bi Wu and Wenhe Liu
Symmetry 2025, 17(4), 610; https://doi.org/10.3390/sym17040610 - 17 Apr 2025
Cited by 2 | Viewed by 2510
Abstract
Traditional marketing uplift models suffer from a fundamental limitation: they typically operate under static assumptions that fail to capture the temporal dynamics of customer responses to marketing interventions. This paper introduces a novel framework that combines causal forest algorithms with deep reinforcement learning to dynamically model marketing uplift effects. Our approach enables the real-time identification of heterogeneous treatment effects across customer segments while simultaneously optimizing intervention strategies through an adaptive learning mechanism. The key innovations of our framework include the following: (1) a counterfactual simulation environment that emulates diverse customer response patterns; (2) an adaptive reward mechanism that captures both immediate and long-term intervention outcomes; and (3) a dynamic policy optimization process that continually refines targeting strategies based on evolving customer behaviors. Empirical evaluations on both simulated and real-world marketing campaign data demonstrate that our approach significantly outperforms traditional static uplift models, achieving up to a 27% improvement in targeting efficiency and an 18% increase in the return on marketing investment. The framework leverages inherent symmetries in customer-intervention interactions, where balanced and symmetric reward structures ensure fair optimization across diverse customer segments. The proposed framework addresses the limitations of existing methods by effectively modeling the dynamic and heterogeneous nature of customer responses to marketing interventions, providing marketers with a powerful tool for implementing personalized and adaptive campaign strategies. Full article
(This article belongs to the Section Mathematics)
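
Illustrative sketch (not from the paper): only the causal-forest half of the framework, showing how per-customer treatment effects (uplift) could be estimated with EconML's CausalForestDML and used for simple cost-aware targeting. EconML, the synthetic data, and the targeting threshold are assumptions, not the authors' stack.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from econml.dml import CausalForestDML

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 5))                       # customer features (synthetic)
T = rng.binomial(1, 0.5, size=n)                  # randomized campaign exposure
tau_true = 2.0 * (X[:, 0] > 0)                    # heterogeneous uplift (ground truth)
Y = X[:, 1] + tau_true * T + rng.normal(size=n)   # observed response

est = CausalForestDML(model_y=RandomForestRegressor(min_samples_leaf=20),
                      model_t=RandomForestClassifier(min_samples_leaf=20),
                      discrete_treatment=True, n_estimators=500, random_state=0)
est.fit(Y, T, X=X)

uplift = est.effect(X)                            # estimated CATE per customer
target = uplift > 0.5                             # contact only where uplift exceeds the cost
print("targeted fraction:", target.mean(),
      "mean true uplift among targeted:", tau_true[target].mean())
```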

19 pages, 24555 KiB  
Article
A Multi-Strategy Visual SLAM System for Motion Blur Handling in Indoor Dynamic Environments
by Shuo Huai, Long Cao, Yang Zhou, Zhiyang Guo and Jingyao Gai
Sensors 2025, 25(6), 1696; https://doi.org/10.3390/s25061696 - 9 Mar 2025
Cited by 2 | Viewed by 976
Abstract
Typical SLAM systems adhere to the assumption of environment rigidity, which limits their functionality when deployed in the dynamic indoor environments commonly encountered by household robots. Prevailing methods address this issue by employing semantic information for the identification and processing of dynamic objects in scenes. However, extracting reliable semantic information remains challenging due to the presence of motion blur. In this paper, a novel visual SLAM algorithm is proposed in which various approaches are integrated to obtain more reliable semantic information, consequently reducing the impact of motion blur on visual SLAM systems. Specifically, to accurately distinguish moving objects and static objects, we introduce a missed segmentation compensation mechanism into our SLAM system for predicting and restoring semantic information, and depth and semantic information is then leveraged to generate masks of dynamic objects. Additionally, to refine keypoint filtering, a probability-based algorithm for dynamic feature detection and elimination is incorporated into our SLAM system. Evaluation experiments using the TUM and Bonn RGB-D datasets demonstrated that our SLAM system achieves lower absolute trajectory error (ATE) than existing systems in different dynamic indoor environments, particularly those with large view angle variations. Our system can be applied to enhance the autonomous navigation and scene understanding capabilities of domestic robots. Full article
(This article belongs to the Section Sensors and Robotics)
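
Illustrative sketch (not from the paper): a minimal running per-feature dynamic-probability filter that fuses a semantic cue (inside a moving-object mask) with a geometric cue (large reprojection residual), loosely mirroring the probability-based dynamic feature detection step described above. Weights, smoothing factor, and thresholds are assumed values.

```python
class DynamicFeatureFilter:
    """Running per-feature probability of being dynamic, fused from a semantic cue
    (inside a detected moving-object mask) and a geometric cue (large reprojection
    residual).  All weights and thresholds are illustrative."""

    def __init__(self, prior=0.2, w_semantic=0.6, w_geometric=0.4,
                 alpha=0.5, drop_at=0.6):
        self.p = {}                  # feature id -> current dynamic probability
        self.prior = prior
        self.ws, self.wg = w_semantic, w_geometric
        self.alpha = alpha           # smoothing factor of the running estimate
        self.drop_at = drop_at

    def update(self, feat_id, in_dynamic_mask, reproj_err_px, err_thresh_px=2.0):
        score = self.ws * float(in_dynamic_mask) \
              + self.wg * float(reproj_err_px > err_thresh_px)
        p = (1.0 - self.alpha) * self.p.get(feat_id, self.prior) + self.alpha * score
        self.p[feat_id] = p
        return p < self.drop_at      # True -> keep the feature for pose estimation

filt = DynamicFeatureFilter()
print(filt.update(17, in_dynamic_mask=True, reproj_err_px=3.5))    # False: rejected
print(filt.update(42, in_dynamic_mask=False, reproj_err_px=0.4))   # True: kept
```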

20 pages, 8888 KiB  
Article
E2-VINS: An Event-Enhanced Visual–Inertial SLAM Scheme for Dynamic Environments
by Jiafeng Huang, Shengjie Zhao and Lin Zhang
Appl. Sci. 2025, 15(3), 1314; https://doi.org/10.3390/app15031314 - 27 Jan 2025
Viewed by 1513
Abstract
Simultaneous Localization and Mapping (SLAM) technology has garnered significant interest in the robotic vision community over the past few decades. The rapid development of SLAM technology has resulted in its widespread application across various fields, including autonomous driving, robot navigation, and virtual reality. Although SLAM, especially Visual–Inertial SLAM (VI-SLAM), has made substantial progress, most classic algorithms in this field are designed based on the assumption that the observed scene is static. In complex real-world environments, the presence of dynamic objects such as pedestrians and vehicles can seriously affect the robustness and accuracy of such systems. Event cameras, recently introduced motion-sensitive biomimetic sensors, efficiently capture scene changes (referred to as “events”) with high temporal resolution, offering new opportunities to enhance VI-SLAM performance in dynamic environments. Integrating this kind of innovative sensor, we propose the first event-enhanced Visual–Inertial SLAM framework specifically designed for dynamic environments, termed E2-VINS. Specifically, the system uses a visual–inertial alignment strategy to estimate IMU biases and correct IMU measurements. The calibrated IMU measurements are used to assist in motion compensation, achieving spatiotemporal alignment of events. Event-based dynamicity metrics, which measure the dynamicity of each pixel, are then generated from these aligned events. Based on these metrics, the visual residual terms of different pixels are adaptively assigned weights, namely, dynamicity weights. Subsequently, E2-VINS jointly and alternately optimizes the system state (camera poses and map points) and dynamicity weights, effectively filtering out dynamic features through a soft-threshold mechanism. Our scheme enhances the robustness of classic VI-SLAM against dynamic features, significantly improving VI-SLAM performance in dynamic environments and yielding an average improvement of 1.884% in the mean position error compared to state-of-the-art methods. The superior performance of E2-VINS is validated through both qualitative and quantitative experimental results. To ensure that our results are fully reproducible, all the relevant data and code have been released. Full article
(This article belongs to the Special Issue Advances in Audio/Image Signals Processing)
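
Illustrative sketch (not from the paper): turning a motion-compensated event-count image into per-pixel dynamicity scores and then into soft-threshold residual weights, as the abstract outlines. The normalization and the sigmoid weighting function are assumptions.

```python
import numpy as np

def dynamicity_weights(event_count_img, tau=0.3, sharpness=10.0):
    """Map a motion-compensated event-count image to per-pixel residual weights.

    Pixels with many residual events after compensation are likely dynamic and
    receive weights near 0; quiet pixels keep weights near 1 (soft threshold)."""
    c = event_count_img.astype(np.float32)
    dyn = c / (c.max() + 1e-6)                             # dynamicity metric in [0, 1]
    return 1.0 / (1.0 + np.exp(sharpness * (dyn - tau)))   # sigmoid soft threshold

# toy 4x4 example: a "moving object" in the lower-right corner
counts = np.array([[0, 0, 1, 0],
                   [0, 1, 0, 0],
                   [0, 0, 9, 8],
                   [0, 1, 9, 9]])
w = dynamicity_weights(counts)
print(np.round(w, 2))   # weights near 1 on quiet pixels, near 0 on the moving object
```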

15 pages, 10531 KiB  
Article
Mechanical Characterization of Main Minerals in Carbonate Rock at the Micro Scale Based on Nanoindentation
by Ting Deng, Junliang Zhao, Hongchuan Yin, Qiang Xie and Ling Gou
Processes 2024, 12(12), 2727; https://doi.org/10.3390/pr12122727 - 2 Dec 2024
Viewed by 1145
Abstract
The mechanical characterization of carbonate rock is crucial for the development of a hydrocarbon reservoir and underground gas storage. As a kind of natural composite material, the mechanical properties of carbonate rock exhibit multiscale characteristics. The macroscopic mechanical properties of carbonate rock are determined by the mineral composition and structure at the micro scale. To achieve a mechanical investigation at the micro scale, this study designed a scheme for micromechanical characterization of carbonate rock. First, scanning electron microscope observation and energy dispersive spectroscopy analysis were combined to select the appropriate micromechanical test areas and to identify the mineral types in each area. Second, the selected test area was positioned in the nanoindentation instrument through the comparison of different-type microscopic images. Finally, quasi-static nanoindentation was carried out on the surface of different minerals in the selected test area to obtain quantitative mechanical evaluation results. A typical carbonate rock sample from the Huangcaoxia gas storage was investigated in this study. The experimental results indicated apparent micromechanical heterogeneity in the carbonate rock. The Young’s modulus of pyrite was over 200 GPa, while that of clay minerals was only approximately 50 GPa. In addition, the proposed micromechanical characterization scheme was discussed based on experimental results. For minerals with an unknown Poisson’s ratio, the maximum error introduced by the 0.25 assumption was lower than 15%. To discuss the effectiveness of the nanoindentation results, the characterization abilities constituted by lateral spatial resolution and elastic response depth were analyzed. The analysis results revealed that the nanoindentation measurement of clay was more susceptible to influence by the surrounding environment as compared to other kinds of minerals with the experimental setup in this study. The micromechanical characterization scheme for clay minerals can be optimized in future research. The mechanical data obtained at the micro scale can be used for the interpretation of the macroscopic mechanical features of carbonate rock for the parameter input and validation of mineral-related simulation and for the construction of a mechanical upscaling model. Full article
(This article belongs to the Special Issue Advances in Enhancing Unconventional Oil/Gas Recovery, 2nd Edition)
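
Worked check (not from the paper): the standard nanoindentation relation 1/Er = (1 - nu^2)/E + (1 - nu_i^2)/E_i makes the stated Poisson's-ratio sensitivity easy to verify. The reduced-modulus value below is illustrative; the diamond-indenter constants are the usual textbook figures.

```python
E_I, NU_I = 1141.0, 0.07     # diamond indenter: modulus (GPa) and Poisson's ratio

def youngs_modulus(e_reduced_gpa, nu_sample):
    """Sample Young's modulus from the reduced modulus (Oliver-Pharr convention):
    1/Er = (1 - nu^2)/E + (1 - nu_i^2)/E_i, solved for E."""
    return (1.0 - nu_sample**2) / (1.0 / e_reduced_gpa - (1.0 - NU_I**2) / E_I)

e_r = 55.0                   # illustrative reduced modulus for a clay-rich spot, GPa
for nu in (0.15, 0.25, 0.35):
    print(f"assumed nu = {nu:.2f} -> E = {youngs_modulus(e_r, nu):.1f} GPa")
# For this example the spread across plausible nu values stays well inside the
# <15% error bound quoted in the abstract for the nu = 0.25 assumption.
```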

32 pages, 3912 KiB  
Article
Proposed Multi-ST Model for Collaborating Multiple Robots in Dynamic Environments
by Hai Van Pham, Huy Quoc Do, Minh Nguyen Quang, Farzin Asadi and Philip Moore
Machines 2024, 12(11), 797; https://doi.org/10.3390/machines12110797 - 11 Nov 2024
Cited by 2 | Viewed by 1446
Abstract
Coverage path planning describes the process of finding an effective path robots can take to traverse a defined dynamic operating environment where there are static (fixed) and dynamic (mobile) obstacles that must be located and avoided in coverage path planning. However, most coverage path planning methods are limited in their ability to effectively manage the coordination of multiple robots operating in concert. In this paper, we propose a novel coverage path planning model (termed Multi-ST) which utilizes the spiral-spanning tree coverage algorithm with intelligent reasoning and knowledge-based methods to achieve optimal coverage, obstacle avoidance, and robot coordination. In experimental testing, we have evaluated the proposed model with a comparative analysis of alternative current approaches under the same conditions. The reported results show that the proposed model enables the avoidance of static and moving obstacles by multiple robots operating in concert in a dynamic operating environment. Moreover, the results demonstrate that the proposed model outperforms existing coverage path planning methods in terms of coverage quality, robustness, scalability, and efficiency. In this paper, the assumptions, limitations, and constraints applicable to this study are set out along with related challenges, open research questions, and proposed directions for future research. We posit that our proposed approach can provide an effective basis upon which multiple robots can operate in concert in a range of ‘real-world’ domains and systems where coverage path planning and the avoidance of static and dynamic obstacles encountered in completing tasks is a systemic requirement. Full article
(This article belongs to the Special Issue Recent Developments in Machine Design, Automation and Robotics)

20 pages, 6263 KiB  
Article
YPR-SLAM: A SLAM System Combining Object Detection and Geometric Constraints for Dynamic Scenes
by Xukang Kan, Gefei Shi, Xuerong Yang and Xinwei Hu
Sensors 2024, 24(20), 6576; https://doi.org/10.3390/s24206576 - 12 Oct 2024
Cited by 4 | Viewed by 1439
Abstract
Traditional SLAM systems assume a static environment, but moving objects break this ideal assumption. In the real world, moving objects can greatly influence the precision of image matching and camera pose estimation. To solve these problems, the YPR-SLAM system is proposed. First, the system includes a lightweight YOLOv5 detection network for detecting both dynamic and static objects, which provides prior information about dynamic objects to the SLAM system. Second, utilizing this prior information on dynamic targets together with the depth image, a geometric-constraint method for removing moving feature points from the depth image is proposed: the Depth-PROSAC algorithm is used to differentiate dynamic from static feature points so that the dynamic feature points can be removed. Finally, the dense point cloud map is constructed from the static feature points. The YPR-SLAM system efficiently combines object detection and geometric constraints in a tightly coupled way, eliminating moving feature points and minimizing their adverse effects on the SLAM system. The performance of YPR-SLAM was assessed on the public TUM RGB-D dataset, and it was found to be suitable for dynamic situations. Full article
(This article belongs to the Section Sensing and Imaging)

22 pages, 16538 KiB  
Article
BY-SLAM: Dynamic Visual SLAM System Based on BEBLID and Semantic Information Extraction
by Daixian Zhu, Peixuan Liu, Qiang Qiu, Jiaxin Wei and Ruolin Gong
Sensors 2024, 24(14), 4693; https://doi.org/10.3390/s24144693 - 19 Jul 2024
Cited by 2 | Viewed by 2063
Abstract
SLAM is a critical technology for enabling autonomous navigation and positioning in unmanned vehicles. Traditional visual simultaneous localization and mapping algorithms are built upon the assumption of a static scene, overlooking the impact of dynamic targets within real-world environments. Interference from dynamic targets can significantly degrade the system’s localization accuracy or even lead to tracking failure. To address these issues, we propose a dynamic visual SLAM system named BY-SLAM, which is based on BEBLID and semantic information extraction. Initially, the BEBLID descriptor is introduced to describe Oriented FAST feature points, enhancing both feature point matching accuracy and speed. Subsequently, FasterNet replaces the backbone network of YOLOv8s to expedite semantic information extraction. By applying DBSCAN clustering to the object detection results, a more refined semantic mask is obtained. Finally, by leveraging the semantic mask and epipolar constraints, dynamic feature points are discerned and eliminated, allowing for the utilization of only static feature points for pose estimation and the construction of a dense 3D map that excludes dynamic targets. Experimental evaluations are conducted on both the TUM RGB-D dataset and real-world scenarios and demonstrate the effectiveness of the proposed algorithm at filtering out dynamic targets within the scenes. On average, the localization accuracy for the TUM RGB-D dataset improves by 95.53% compared to ORB-SLAM3. Comparative analyses against classical dynamic SLAM systems further corroborate the improvement in localization accuracy, map readability, and robustness achieved by BY-SLAM. Full article
(This article belongs to the Section Navigation and Positioning)
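
Illustrative sketch (not from the paper): the ORB-plus-BEBLID description step, using the BEBLID implementation shipped in opencv-contrib; the scale factor of 0.75 (commonly suggested for ORB keypoints) and the matcher settings are assumptions about the setup, not the authors' code.

```python
import cv2

def orb_beblid_features(gray):
    """Detect Oriented FAST / ORB keypoints, then describe them with BEBLID
    (opencv-contrib).  Matching uses Hamming distance, as with ORB descriptors."""
    detector = cv2.ORB_create(nfeatures=2000)
    descriptor = cv2.xfeatures2d.BEBLID_create(0.75)   # 0.75: scale suggested for ORB keypoints
    kps = detector.detect(gray, None)
    kps, desc = descriptor.compute(gray, kps)
    return kps, desc

# usage with two frames (assumed file names):
# k1, d1 = orb_beblid_features(cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE))
# k2, d2 = orb_beblid_features(cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE))
# matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
```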

27 pages, 3382 KiB  
Article
DOT-SLAM: A Stereo Visual Simultaneous Localization and Mapping (SLAM) System with Dynamic Object Tracking Based on Graph Optimization
by Yuan Zhu, Hao An, Huaide Wang, Ruidong Xu, Zhipeng Sun and Ke Lu
Sensors 2024, 24(14), 4676; https://doi.org/10.3390/s24144676 - 18 Jul 2024
Cited by 5 | Viewed by 2535
Abstract
Most visual simultaneous localization and mapping (SLAM) systems are based on the assumption of a static environment in autonomous vehicles. However, when dynamic objects, particularly vehicles, occupy a large portion of the image, the localization accuracy of the system decreases significantly. To mitigate this challenge, this paper unveils DOT-SLAM, a novel stereo visual SLAM system that integrates dynamic object tracking through graph optimization. By integrating dynamic object pose estimation into the SLAM system, the system can effectively utilize both foreground and background points for ego vehicle localization and obtain a static feature-point map. To rectify the inaccuracies in depth estimation from stereo disparity directly on the foreground points of dynamic objects due to their self-similarity characteristics, a coarse-to-fine depth estimation method based on camera–road plane geometry is presented. This method uses rough depth to guide fine stereo matching, thereby obtaining the three-dimensional (3D) spatial positions of feature points on dynamic objects. Subsequently, constraints on the dynamic object’s pose are established using the road plane and the non-holonomic constraints (NHCs) of the vehicle, reducing the initial pose uncertainty of dynamic objects and leading to more accurate dynamic object initialization. Finally, by considering foreground points, background points, the local road plane, the ego vehicle pose, and dynamic object poses as optimization nodes, and by establishing and jointly optimizing a nonlinear model based on graph optimization, accurate six-degrees-of-freedom (DoF) pose estimates are obtained for both the ego vehicle and dynamic objects. Experimental validation on the KITTI-360 dataset demonstrates that DOT-SLAM effectively utilizes features from the background and dynamic objects in the environment, resulting in more accurate vehicle trajectory estimation and a static environment map. Results obtained from a real-world dataset further confirm its effectiveness. Full article
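
Illustrative sketch (not from the paper): the coarse step of a camera–road-plane depth prior can be written with the classic flat-ground relation Z = fy * h / (v - v_horizon) for a pixel at image row v; the calibration values below are illustrative, and the fine stereo-matching stage is not shown.

```python
import numpy as np

def ground_plane_depth(v_rows, fy, cy, cam_height_m, pitch_rad=0.0):
    """Coarse depth (m) for pixels assumed to lie on a flat road plane.

    For a camera at height h above the road, a pixel at image row v below the
    horizon maps to ground depth Z = fy * h / (v - v_horizon)."""
    v_horizon = cy - fy * np.tan(pitch_rad)      # horizon row for a downward pitch
    dv = np.asarray(v_rows, dtype=float) - v_horizon
    dv = np.where(dv > 1e-3, dv, np.nan)         # rows above the horizon: no ground depth
    return fy * cam_height_m / dv

# KITTI-like calibration (illustrative): fy ~ 718 px, cy ~ 185 px, camera 1.65 m high
rows = [200, 250, 300, 350]                      # e.g. bottom edges of vehicle boxes
print(np.round(ground_plane_depth(rows, fy=718.0, cy=185.0, cam_height_m=1.65), 1))
# these coarse depths would then seed a narrow disparity band for fine stereo matching
```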

24 pages, 6026 KiB  
Article
Monocular Depth Estimation via Self-Supervised Self-Distillation
by Haifeng Hu, Yuyang Feng, Dapeng Li, Suofei Zhang and Haitao Zhao
Sensors 2024, 24(13), 4090; https://doi.org/10.3390/s24134090 - 24 Jun 2024
Cited by 3 | Viewed by 2836
Abstract
Self-supervised monocular depth estimation can exhibit excellent performance in static environments due to the multi-view consistency assumption during the training process. However, it is hard to maintain depth consistency in dynamic scenes when considering the occlusion problem caused by moving objects. For this reason, we propose a method of self-supervised self-distillation for monocular depth estimation (SS-MDE) in dynamic scenes, where a deep network with a multi-scale decoder and a lightweight pose network are designed to predict depth in a self-supervised manner via the disparity, motion information, and the association between two adjacent frames in the image sequence. Meanwhile, in order to improve the depth estimation accuracy of static areas, the pseudo-depth images generated by the LeReS network are used to provide the pseudo-supervision information, enhancing the effect of depth refinement in static areas. Furthermore, a forgetting factor is leveraged to alleviate the dependency on the pseudo-supervision. In addition, a teacher model is introduced to generate depth prior information, and a multi-view mask filter module is designed to implement feature extraction and noise filtering. This can enable the student model to better learn the deep structure of dynamic scenes, enhancing the generalization and robustness of the entire model in a self-distillation manner. Finally, on four public datasets, the proposed SS-MDE method outperformed several state-of-the-art monocular depth estimation techniques, achieving an accuracy (δ1) of 89% with an error (AbsRel) of 0.102 on NYU-Depth V2 and an accuracy (δ1) of 87% with an error (AbsRel) of 0.111 on KITTI. Full article
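
Illustrative sketch (not from the paper): a PyTorch-style combination of a self-supervised photometric loss with a LeReS-style pseudo-depth term whose weight decays through a forgetting factor. The function names, the median normalization, and the exponential decay schedule are assumptions, not the authors' exact loss.

```python
import torch
import torch.nn.functional as F

def total_depth_loss(photometric_loss, pred_depth, pseudo_depth, step,
                     lambda0=0.3, decay=1e-4):
    """Self-supervised loss plus pseudo-depth supervision whose influence is
    reduced over training by a forgetting factor.

    photometric_loss : scalar tensor from view-synthesis reprojection
    pred_depth       : (B, 1, H, W) student prediction
    pseudo_depth     : (B, 1, H, W) pseudo ground truth (e.g. from LeReS)
    step             : global training step used by the forgetting factor"""
    forget = lambda0 * torch.exp(torch.tensor(-decay * float(step)))
    # scale-invariant-ish pseudo supervision: L1 on median-normalised depth
    p = pred_depth / (pred_depth.flatten(1).median(dim=1).values.view(-1, 1, 1, 1) + 1e-6)
    q = pseudo_depth / (pseudo_depth.flatten(1).median(dim=1).values.view(-1, 1, 1, 1) + 1e-6)
    return photometric_loss + forget * F.l1_loss(p, q)

# toy usage
pred = torch.rand(2, 1, 64, 64) + 0.1
pseudo = torch.rand(2, 1, 64, 64) + 0.1
print(total_depth_loss(torch.tensor(0.15), pred, pseudo, step=10000).item())
```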

27 pages, 14033 KiB  
Article
MOLO-SLAM: A Semantic SLAM for Accurate Removal of Dynamic Objects in Agricultural Environments
by Jinhong Lv, Beihuo Yao, Haijun Guo, Changlun Gao, Weibin Wu, Junlin Li, Shunli Sun and Qing Luo
Agriculture 2024, 14(6), 819; https://doi.org/10.3390/agriculture14060819 - 24 May 2024
Cited by 3 | Viewed by 2620
Abstract
Visual simultaneous localization and mapping (VSLAM) is a foundational technology that enables robots to achieve fully autonomous locomotion, exploration, inspection, and more within complex environments. Its applicability also extends significantly to agricultural settings. While numerous impressive VSLAM systems have emerged, a majority of them rely on static world assumptions. This reliance constrains their use in real dynamic scenarios and leads to increased instability when applied to agricultural contexts. To address the problem of detecting and eliminating slow dynamic objects in outdoor forest and tea garden agricultural scenarios, this paper presents a dynamic VSLAM innovation called MOLO-SLAM (mask ORB label optimization SLAM). MOLO-SLAM merges the ORBSLAM2 framework with the Mask-RCNN instance segmentation network, utilizing masks and bounding boxes to enhance the accuracy and cleanliness of 3D point clouds. Additionally, we used the BundleFusion reconstruction algorithm for 3D mesh model reconstruction. By comparing our algorithm with various dynamic VSLAM algorithms on the TUM and KITTI datasets, the results demonstrate significant improvements, with enhancements of up to 97.72%, 98.51%, and 28.07% relative to the original ORBSLAM2 on the three datasets. This showcases the outstanding advantages of our algorithm. Full article
(This article belongs to the Special Issue Advanced Image Processing in Agricultural Applications)
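
Illustrative sketch (not from the paper): the mask-based dynamic-point removal step, approximated with torchvision's pretrained Mask R-CNN rather than the authors' Mask-RCNN pipeline; keypoints falling inside high-confidence masks of a likely-moving class are dropped before tracking. The model choice, class list, and thresholds are assumptions.

```python
import numpy as np
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT").eval()
PERSON = 1   # COCO class id for "person" (extend with other moving classes as needed)

def dynamic_mask(rgb_u8, score_thresh=0.7):
    """Union of instance masks for likely-moving classes (here just people)."""
    img = torch.from_numpy(rgb_u8).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        out = model([img])[0]
    keep = (out["scores"] > score_thresh) & (out["labels"] == PERSON)
    if keep.sum() == 0:
        return np.zeros(rgb_u8.shape[:2], dtype=bool)
    masks = out["masks"][keep, 0] > 0.5          # (N, H, W) boolean instance masks
    return masks.any(dim=0).numpy()

def remove_dynamic_keypoints(keypoints_xy, mask):
    """Drop keypoints whose pixel falls inside the dynamic-object mask."""
    kp = np.round(keypoints_xy).astype(int)
    inside = mask[kp[:, 1].clip(0, mask.shape[0] - 1),
                  kp[:, 0].clip(0, mask.shape[1] - 1)]
    return keypoints_xy[~inside]
```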
