Search Results (8)

Search Parameters:
Keywords = covisibility

21 pages, 8490 KB  
Article
BDGS-SLAM: A Probabilistic 3D Gaussian Splatting Framework for Robust SLAM in Dynamic Environments
by Tianyu Yang, Shuangfeng Wei, Jingxuan Nan, Mingyang Li and Mingrui Li
Sensors 2025, 25(21), 6641; https://doi.org/10.3390/s25216641 - 30 Oct 2025
Viewed by 2459
Abstract
Simultaneous Localization and Mapping (SLAM) uses sensor data to concurrently construct environmental maps and estimate the sensor's own position, and finds wide application in scenarios such as robotic navigation and augmented reality. SLAM systems based on 3D Gaussian Splatting (3DGS) have garnered significant attention for their real-time, high-fidelity rendering capabilities. However, in real-world environments containing dynamic objects, existing 3DGS-SLAM methods often suffer from mapping errors and tracking drift due to dynamic interference. To address this challenge, this paper proposes BDGS-SLAM, a Bayesian Dynamic Gaussian Splatting SLAM framework designed specifically for dynamic environments. During the tracking phase, the system integrates semantic detection results from YOLOv5 into a dynamic prior probability model based on Bayesian filtering, enabling accurate identification of dynamic Gaussians. In the mapping phase, a multi-view probabilistic update mechanism aggregates historical observations from co-visible keyframes; by introducing an exponential decay factor to adjust weights dynamically, this mechanism effectively restores static Gaussians that were mistakenly culled. Furthermore, an adaptive dynamic Gaussian optimization strategy applies penalty constraints to suppress the negative impact of dynamic Gaussians on rendering while avoiding the erroneous removal of static Gaussians and preserving critical scene information. Experimental results demonstrate that, compared to baseline methods, BDGS-SLAM achieves comparable tracking accuracy while producing fewer artifacts and higher-fidelity scene reconstructions.
(This article belongs to the Special Issue Indoor Localization Technologies and Applications)
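
The abstract describes two mechanisms that a short sketch can make concrete: a per-Gaussian Bayesian update driven by detector evidence, and a multi-view fusion over co-visible keyframes weighted by an exponential decay factor. A minimal Python sketch, with illustrative likelihood and decay values; the function names and parameters are ours, not the paper's:

```python
import math

def update_dynamic_probability(prior, detected_dynamic,
                               p_det_given_dyn=0.9, p_det_given_static=0.1):
    """One Bayesian filtering step for a single Gaussian's P(dynamic).

    `detected_dynamic` says whether a YOLO-style detector flagged the
    region this Gaussian projects into as a dynamic class. The two
    likelihood values are illustrative, not the paper's.
    """
    if detected_dynamic:
        num = p_det_given_dyn * prior
        den = num + p_det_given_static * (1.0 - prior)
    else:
        num = (1.0 - p_det_given_dyn) * prior
        den = num + (1.0 - p_det_given_static) * (1.0 - prior)
    return num / den

def fuse_covisible_observations(observations, decay=0.3):
    """Fuse P(dynamic) estimates from co-visible keyframes.

    `observations` is a list of (age_in_keyframes, p_dynamic) pairs.
    Exponentially decayed weights favor recent views, so a static
    Gaussian culled on one noisy detection can be restored later.
    """
    weights = [math.exp(-decay * age) for age, _ in observations]
    total = sum(weights)
    return sum(w * p for w, (_, p) in zip(weights, observations)) / total
```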

26 pages, 8883 KB  
Article
Enhancing Machine Learning Techniques in VSLAM for Robust Autonomous Unmanned Aerial Vehicle Navigation
by Hussam Rostum and József Vásárhelyi
Electronics 2025, 14(7), 1440; https://doi.org/10.3390/electronics14071440 - 2 Apr 2025
Cited by 2 | Viewed by 1642
Abstract
This study introduces a real-time visual SLAM system designed for small indoor environments. The system is resilient to significant motion clutter and supports wide-baseline loop closing, re-localization, and automatic initialization. Leveraging state-of-the-art algorithms, the approach presented in this article uses adapted Oriented FAST and Rotated BRIEF (ORB) features for tracking, mapping, re-localization, and loop closing. In addition, it uses an adaptive threshold to find putative feature matches, providing efficient map initialization and accurate tracking. The task is to process visual information from the camera of a DJI Tello drone to construct an indoor map and estimate the camera's trajectory. In a 'survival of the fittest' style, the algorithms selectively pick adaptive points and keyframes for reconstruction. This yields robustness and a concise, traceable map that grows as scene content emerges, making lifelong operation possible. The results show an improved RMSE (3.280) for the adaptive ORB algorithm with the adaptive threshold, whereas the standard ORB algorithm failed to complete the mapping process.
(This article belongs to the Special Issue Development and Advances in Autonomous Driving Technology)
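
As a rough illustration of adaptive-threshold feature matching of the kind described above, the sketch below matches ORB features with OpenCV and derives the distance cutoff from the statistics of the current frame pair; the mean-minus-k*std rule is an assumption, not the paper's exact criterion:

```python
import cv2
import numpy as np

def adaptive_orb_match(img1, img2, n_features=1000, k=0.5):
    """Match ORB features with a data-driven distance threshold.

    Rather than a fixed Hamming-distance cutoff, the cutoff adapts to
    the distance distribution of the current frame pair. Assumes both
    grayscale images yield descriptors.
    """
    orb = cv2.ORB_create(nfeatures=n_features)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    dists = np.array([m.distance for m in matches])
    # Keep matches clearly better (smaller distance) than the average,
    # but never discard the single best match.
    threshold = max(dists.mean() - k * dists.std(), dists.min() + 1.0)
    good = [m for m in matches if m.distance < threshold]
    return good, kp1, kp2
```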

16 pages, 19990 KB  
Article
Implicit–Explicit Coupling Enhancement for UAV Scene 3D Reconstruction
by Xiaobo Lin and Shibiao Xu
Appl. Sci. 2024, 14(6), 2425; https://doi.org/10.3390/app14062425 - 13 Mar 2024
Cited by 2 | Viewed by 1927
Abstract
In unmanned aerial vehicle (UAV) large-scale scene modeling, challenges such as missed shots, low overlap, and data gaps caused by flight paths and environmental factors (variations in lighting, occlusion, and weak textures) often lead to incomplete 3D models with blurred geometric structures and textures. To address these challenges, an implicit–explicit coupling enhancement framework for UAV large-scale scene modeling is proposed. Benefiting from the mutual reinforcement of implicit and explicit models, we first address the issue of missing co-visibility clusters caused by environmental noise through large-scale implicit modeling with UAVs, which enhances inter-frame photometric and geometric consistency. Subsequently, we increase the density of the multi-view point cloud reconstruction via synthetic co-visibility clusters, effectively recovering missing spatial information and constructing a more complete dense point cloud. Finally, during the mesh modeling phase, high-quality 3D modeling of large-scale UAV scenes is achieved by inversely radiating and mapping additional texture details into 3D voxels. The experimental results demonstrate that our method achieves state-of-the-art modeling accuracy across various scenarios, outperforming existing commercial UAV aerial photography software (COLMAP 3.9, Context Capture 2023, PhotoScan 2023, Pix4D 4.5.6) and related algorithms.
(This article belongs to the Special Issue UAV Remote Sensing and 3D Reconstruction)
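
One ingredient here is checking inter-frame photometric consistency between views synthesized by the implicit model and the captured frames. A minimal sketch of such a check, with an assumed error metric and threshold; the paper's actual consistency measure is not specified here:

```python
import numpy as np

def photometric_consistency(rendered, observed, valid_mask, tau=0.05):
    """Score photometric consistency between an implicit-model rendering
    and a captured UAV frame.

    `rendered` and `observed` are (H, W, 3) float images in [0, 1];
    `valid_mask` marks pixels the implicit model actually covers.
    Returns the mean absolute error and the fraction of consistent
    pixels, a simple proxy for deciding whether a synthetic
    co-visibility cluster is reliable enough to fill a data gap.
    `tau` is illustrative.
    """
    err = np.abs(rendered - observed).mean(axis=-1)[valid_mask]
    return float(err.mean()), float((err < tau).mean())
```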

12 pages, 13941 KB  
Communication
Accurate and Serialized Dense Point Cloud Reconstruction for Aerial Video Sequences
by Shibiao Xu, Bingbing Pan, Jiguang Zhang and Xiaopeng Zhang
Remote Sens. 2023, 15(6), 1625; https://doi.org/10.3390/rs15061625 - 17 Mar 2023
Cited by 2 | Viewed by 3730
Abstract
Traditional multi-view stereo (MVS) is not well suited to point cloud reconstruction from serialized video frames: exhaustive feature extraction and matching across all prepared frames is time-consuming, and the search scope must cover all keyframes. In this paper, we propose a novel serialized reconstruction method to solve these issues. Specifically, a covisibility cluster generation strategy based on joint feature descriptors is designed to accelerate feature matching and improve pose estimation. Then, a serialized structure-from-motion (SfM) and dense point cloud reconstruction framework is designed to achieve highly efficient reconstruction with competitive precision for serialized frames. To demonstrate the superiority of our method, we collected a public aerial-sequence dataset with referable ground truth for evaluating dense point cloud reconstruction. A time complexity analysis and experimental validation on this dataset show that the comprehensive performance of our algorithm is better than that of the other outstanding methods compared.
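
The covisibility-cluster idea is easy to illustrate: frames that share enough 3D point observations are grouped, and matching is restricted to frames within a cluster instead of all pairs. A hypothetical sketch of the clustering step; the `min_shared` threshold and graph construction are assumptions, not the paper's joint-descriptor strategy:

```python
from collections import defaultdict

def covisibility_clusters(observations, min_shared=30):
    """Group frames into co-visibility clusters via shared 3D points.

    `observations` maps frame_id -> set of observed 3D point ids. Two
    frames are connected when they share at least `min_shared` points;
    clusters are the connected components of that graph.
    """
    frames = list(observations)
    adj = defaultdict(set)
    for i, a in enumerate(frames):
        for b in frames[i + 1:]:
            if len(observations[a] & observations[b]) >= min_shared:
                adj[a].add(b)
                adj[b].add(a)
    seen, clusters = set(), []
    for f in frames:
        if f in seen:
            continue
        stack, component = [f], set()
        while stack:
            u = stack.pop()
            if u not in component:
                component.add(u)
                stack.extend(adj[u] - component)
        seen |= component
        clusters.append(component)
    return clusters
```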

17 pages, 9894 KB  
Article
An Online 3D Modeling Method for Pose Measurement under Uncertain Dynamic Occlusion Based on Binocular Camera
by Xuanchang Gao, Junzhi Yu and Min Tan
Sensors 2023, 23(5), 2871; https://doi.org/10.3390/s23052871 - 6 Mar 2023
Cited by 4 | Viewed by 2901
Abstract
3D modeling plays a significant role in many industrial applications that require geometry information for pose measurement, such as grasping and spraying. Due to random pose changes of the workpieces on the production line, demand for online 3D modeling has increased and many researchers have focused on it. However, online 3D modeling remains unsolved because of occlusion by uncertain dynamic objects that disturb the modeling process. In this study, we propose an online 3D modeling method under uncertain dynamic occlusion based on a binocular camera. First, focusing on uncertain dynamic objects, a novel dynamic object segmentation method based on motion-consistency constraints is proposed, which achieves segmentation by random sampling and pose-hypothesis clustering without any prior knowledge of the objects. Then, to better register the incomplete point cloud of each frame, an optimization method based on local constraints in overlapping view regions and a global loop closure is introduced. It establishes constraints in the covisibility regions between adjacent frames to optimize the registration of each frame, and also between the global closed-loop frames to jointly optimize the entire 3D model. Finally, a confirmatory experimental workspace was designed and built to verify and evaluate the method. Our method achieves online 3D modeling under uncertain dynamic occlusion and acquires a complete 3D model; the pose measurement results further confirm its effectiveness.
(This article belongs to the Special Issue Recent Advances in Robotics and Intelligent Mechatronics Systems)
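
Motion-consistency segmentation by random sampling and pose-hypothesis clustering can be sketched as RANSAC-style voting: rigid-motion hypotheses are estimated from random 3-point samples, and points inconsistent with the dominant hypothesis are flagged as dynamic. A minimal illustration, not the authors' implementation:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q (Kabsch)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # repair an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cQ - R @ cP

def segment_dynamic(P, Q, n_hyp=100, tol=0.01, seed=0):
    """Flag dynamic points by motion-consistency voting.

    `P`, `Q` are (N, 3) matched 3D points from two frames. Rigid-motion
    hypotheses are sampled from random 3-point subsets; the hypothesis
    explaining the most points is taken as the background motion, and
    points it fails to explain are labeled dynamic. No object prior.
    """
    rng = np.random.default_rng(seed)
    best = np.zeros(len(P), dtype=bool)
    for _ in range(n_hyp):
        idx = rng.choice(len(P), size=3, replace=False)
        R, t = rigid_transform(P[idx], Q[idx])
        residual = np.linalg.norm(Q - (P @ R.T + t), axis=1)
        inliers = residual < tol
        if inliers.sum() > best.sum():
            best = inliers
    return ~best   # True = dynamic
```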

20 pages, 5814 KB  
Article
Attitude Determination for Unmanned Cooperative Navigation Swarm Based on Multivectors in Covisibility Graph
by Yilin Liu, Ruochen Liu, Ruihang Yu, Zhiming Xiong, Yan Guo, Shaokun Cai and Pengfei Jiang
Drones 2023, 7(1), 40; https://doi.org/10.3390/drones7010040 - 6 Jan 2023
Cited by 5 | Viewed by 2995
Abstract
To reduce costs, an unmanned swarm usually consists of nodes with high-accuracy navigation sensors (HANs) and nodes with low-accuracy navigation sensors (LANs). Transmitting and fusing the navigation information obtained by HANs enables LANs to improve their positioning accuracy, which is generally called cooperative navigation (CN). In this method, the accuracy of relative observations between platforms in the swarm has a dramatic effect on the positioning results. In most existing research, constructing constraints in the three-dimensional (3D) frame optimizes only the position and velocity of LANs and neglects attitude estimation, so LANs cannot maintain high attitude accuracy over long maneuvers when relying on their installed sensors. Considering the performance of the inertial measurement unit (IMU) and other common sensors, this paper proposes a new method to estimate the attitude of LANs in a swarm. Because small unmanned nodes are strictly limited by practical engineering constraints such as size, weight, and power, the proposed method compensates for the attitude error caused by strapdown gyroscope drift using only visual vectors built from targets detected by cameras with a range-finding function. In our method, the coordinates of targets are mainly given by the You Only Look Once (YOLO) algorithm; the visual vectors are then built by connecting the targets in the covisibility graph of the swarm's nodes. The attitude transformation matrices between camera frames are calculated using a multivector attitude determination algorithm. Finally, we design an information filter (IF) to determine the attitude of LANs based on the observations of HANs. Considering the problem of positioning reference, a field test was conducted in the open air using two-wheeled robots and one UAV. The results show that the relative attitude error between nodes is less than 4 degrees using the visual vectors. After filtering, the attitude divergence of the low-precision IMUs installed on LANs can be effectively constrained, and high-precision attitude estimation in an unmanned CN swarm can be realized.
(This article belongs to the Special Issue Drone-Based Information Fusion to Improve Autonomous Navigation)
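
Multivector attitude determination between two camera frames amounts to Wahba's problem: find the rotation that best aligns paired unit vectors to covisible targets. A standard q-method sketch under assumed conventions; the paper's exact multivector algorithm may differ:

```python
import numpy as np

def attitude_from_vectors(v_body, v_ref, weights=None):
    """Solve Wahba's problem with Davenport's q-method.

    `v_body`, `v_ref` are (N, 3) unit vectors to the same covisible
    targets expressed in two frames. Returns a scalar-first quaternion;
    frame-direction and ordering conventions are assumptions to verify
    against your own definitions.
    """
    if weights is None:
        weights = np.ones(len(v_body))
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, v_body, v_ref))
    S, sigma = B + B.T, np.trace(B)
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    K = np.empty((4, 4))
    K[0, 0], K[0, 1:], K[1:, 0] = sigma, z, z
    K[1:, 1:] = S - sigma * np.eye(3)
    _, eigvecs = np.linalg.eigh(K)     # eigenvalues in ascending order
    q = eigvecs[:, -1]                 # eigenvector of the largest eigenvalue
    return q / np.linalg.norm(q)
```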

33 pages, 6231 KB  
Article
Branding4Resilience: Explorative and Collaborative Approaches for Inner Territories
by Maddalena Ferretti, Sara Favargiotti, Barbara Lino and Diana Rolando
Sustainability 2022, 14(18), 11235; https://doi.org/10.3390/su141811235 - 7 Sep 2022
Cited by 22 | Viewed by 3395
Abstract
This article analyzes inner and marginal territories in four Italian peripheral contexts, discussing some of the results and future steps of the "B4R Branding4Resilience" research project, funded by the Italian Ministry of Research from 2020 to 2023. The overall research is based on three phases: (1) an exploration phase to analyze socio-economic data and territorial dynamics; (2) a co-design phase involving local actors to develop ideas for a selected pilot case; and (3) a co-visioning phase in which a future transformative perspective for the whole area was shared with the institutions. The article focuses on phase 1 and presents first results achieved by applying a methodological approach based on the integration of different qualitative and quantitative tools and methods. The results outline the exploration of the four selected territories through data analyses and mapping, perceptive-narrative explorations, field research, and explorative designs. The concept of peripherality is addressed critically, going beyond standardized definitions and including interdisciplinarity as an essential tool for territorial enhancement and branding. The main findings not only outline possible strategies and actions for the four analyzed inner territories but also support the application of the proposed methodological approach in other complex socio-economic contexts.

19 pages, 5563 KB  
Article
Image-Based Localization Aided Indoor Pedestrian Trajectory Estimation Using Smartphones
by Yan Zhou, Xianwei Zheng, Ruizhi Chen, Hanjiang Xiong and Sheng Guo
Sensors 2018, 18(1), 258; https://doi.org/10.3390/s18010258 - 17 Jan 2018
Cited by 24 | Viewed by 7044
Abstract
Accurately determining pedestrian location in indoor environments using consumer smartphones is a significant step in the development of ubiquitous localization services. Many map-matching methods have been combined with pedestrian dead reckoning (PDR) to achieve low-cost, bias-free pedestrian tracking. However, this works only in areas with dense map constraints, and the error accumulates in open areas. To achieve reliable localization without map constraints, an improved image-based localization aided pedestrian trajectory estimation method is proposed in this paper. The image-based localization recovers the camera pose from 2D-3D correspondences between 2D image positions and the 3D points of a scene model previously reconstructed by a structure-from-motion (SfM) pipeline. This enables us to determine the initial location and eliminate the accumulated error of PDR whenever an image is successfully registered. However, an image is not always registered, since traditional 2D-to-3D matching rejects more and more correct matches as the scene becomes large. We therefore adopt a robust image registration strategy that recovers initially unregistered images by integrating a 3D-to-2D search, using visibility and co-visibility information to improve the efficiency of searching for correspondences from both sides. The performance of the proposed method was evaluated in several experiments, and the results demonstrate that it offers highly acceptable pedestrian localization in long-term tracking, with an error of only 0.56 m, without the need for dedicated infrastructure.
(This article belongs to the Special Issue Smartphone-based Pedestrian Localization and Navigation)
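
The correction logic, PDR integration with a drift reset at each successful image registration, can be sketched minimally as follows; the step model and fix format are illustrative assumptions:

```python
import math

def pdr_step(pos, heading, step_len):
    """Advance a 2D PDR position by one detected step."""
    return (pos[0] + step_len * math.cos(heading),
            pos[1] + step_len * math.sin(heading))

def fuse_trajectory(steps, image_fixes, start=(0.0, 0.0)):
    """Integrate PDR steps, resetting drift at image registrations.

    `steps` is a list of (heading_rad, step_len_m) per detected step;
    `image_fixes` maps step index -> (x, y) pose from image-based
    localization. Whenever an image registers, the accumulated PDR
    error is simply discarded; a real system would smooth instead.
    """
    pos, track = start, []
    for i, (heading, step_len) in enumerate(steps):
        pos = pdr_step(pos, heading, step_len)
        if i in image_fixes:             # successful registration
            pos = image_fixes[i]         # absolute fix overrides PDR drift
        track.append(pos)
    return track
```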
