Search Results (10)

Search Parameters:
Keywords = next-best-view (NBV)

18 pages, 2469 KiB  
Article
A Next-Best-View Method for Complex 3D Environment Exploration Using Robotic Arm with Hand-Eye System
by Michal Dobiš, Jakub Ivan, Martin Dekan, František Duchoň, Andrej Babinec and Róbert Málik
Appl. Sci. 2025, 15(14), 7757; https://doi.org/10.3390/app15147757 - 10 Jul 2025
Viewed by 299
Abstract
The ability to autonomously generate up-to-date 3D models of robotic workcells is critical for advancing smart manufacturing, yet existing Next-Best-View (NBV) methods often rely on paradigms ill-suited for the fixed-base manipulators found in dynamic industrial environments. To address this gap, this paper proposes a novel NBV method for the complete exploration of a 6-DOF robotic arm’s workspace. Our approach integrates a collision-based information gain metric, a potential field technique to generate candidate views from exploration frontiers, and a tunable fitness function to balance information gain with motion cost. The method was rigorously tested in three simulated scenarios and validated on a physical industrial robot. Results demonstrate that our approach successfully maps the majority of the workspace in all setups, with a balanced weighting strategy proving most effective at combining exploration speed with path efficiency, a finding confirmed in the real-world experiment. We conclude that our method provides a practical and robust solution for autonomous workspace mapping, offering a flexible, training-free approach that advances the state of the art for on-demand 3D model generation in industrial robotics.
(This article belongs to the Special Issue Smart Manufacturing and Industry 4.0, 2nd Edition)
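The tunable fitness function described in this abstract lends itself to a compact sketch. The snippet below is a minimal illustration, assuming each candidate view carries a precomputed, normalized information gain and motion cost; the function names, tuple layout, and default weights are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a tunable NBV fitness function (hypothetical names/weights).

def select_next_best_view(candidates, w_gain=0.5, w_cost=0.5):
    """Pick the candidate maximizing weighted gain minus weighted motion cost."""
    def fitness(view):
        gain, cost = view          # both assumed normalized to [0, 1]
        return w_gain * gain - w_cost * cost
    return max(candidates, key=fitness)

# Balanced weighting (w_gain == w_cost), the strategy the abstract reports
# as most effective.
views = [(0.9, 0.8), (0.6, 0.2), (0.4, 0.1)]   # (information_gain, motion_cost)
print(select_next_best_view(views))             # -> (0.6, 0.2)
```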

22 pages, 6640 KiB  
Article
Efficient 3D Exploration with Distributed Multi-UAV Teams: Integrating Frontier-Based and Next-Best-View Planning
by André Ribeiro and Meysam Basiri
Drones 2024, 8(11), 630; https://doi.org/10.3390/drones8110630 - 31 Oct 2024
Cited by 1 | Viewed by 2025
Abstract
Autonomous exploration of unknown environments poses many challenges in robotics, particularly when dealing with vast and complex landscapes. This paper presents a novel framework tailored for distributed multi-robot systems, harnessing the 3D mobility capabilities of Unmanned Aerial Vehicles (UAVs) equipped with advanced LiDAR sensors for the rapid and effective exploration of uncharted territories. The proposed approach uniquely integrates the robustness of frontier-based exploration with the precision of Next-Best-View (NBV) planning, supplemented by a distance-based assignment cooperative strategy, offering a comprehensive and adaptive strategy for these systems. Through extensive experiments conducted across distinct environments using up to three UAVs, the efficacy of the exploration planner and cooperative strategy is rigorously validated. Benchmarking against existing methods further underscores the superiority of the proposed approach. The results demonstrate successful navigation through complex 3D landscapes, showcasing the framework’s capability in both single- and multi-UAV scenarios. While the benefits of employing multiple UAVs are evident, exhibiting reductions in exploration time and individual travel distance, this study also reveals findings regarding the optimal number of UAVs, particularly in smaller and wider environments.
(This article belongs to the Special Issue Recent Advances in UAV Navigation)
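The distance-based assignment strategy mentioned above can be sketched as a greedy nearest-goal allocation. This is a hedged toy version: the data layout and the greedy ordering are assumptions, not the paper's exact cooperative strategy.

```python
# Toy distance-based assignment: each UAV claims the nearest unassigned goal.
import math

def assign_goals(uav_positions, goals):
    remaining = list(goals)
    assignment = {}
    for uav_id, pos in uav_positions.items():
        if not remaining:
            break
        nearest = min(remaining, key=lambda g: math.dist(pos, g))
        assignment[uav_id] = nearest
        remaining.remove(nearest)
    return assignment

print(assign_goals({"uav0": (0, 0, 5), "uav1": (10, 0, 5)},
                   [(1, 1, 5), (9, 2, 5), (5, 5, 5)]))
# -> {'uav0': (1, 1, 5), 'uav1': (9, 2, 5)}
```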

18 pages, 8968 KiB  
Article
BIMBot for Autonomous Laser Scanning in Built Environments
by Nanying Liang, Yu Pin Ang, Kaiyun Yeo, Xiao Wu, Yuan Xie and Yiyu Cai
Robotics 2024, 13(2), 22; https://doi.org/10.3390/robotics13020022 - 26 Jan 2024
Cited by 1 | Viewed by 2862
Abstract
Accurate and complete 3D point clouds are essential for creating as-built building information modeling (BIM) models, although automating 3D point cloud creation in complex environments remains challenging. In this paper, an autonomous scanning system named BIMBot is introduced, which integrates advanced light detection and ranging (LiDAR) technology with robotics to create 3D point clouds. Using our specially developed algorithmic pipeline for point cloud processing, iterative registration refinement, and next best view (NBV) calculation, this system facilitates an efficient, accurate, and fully autonomous scanning process. The BIMBot’s performance was validated in a case study of a campus laboratory featuring complex structural and mechanical, electrical, and plumbing (MEP) elements. The experimental results showed that the autonomous scanning system produced 3D point cloud mappings in fewer scans than the manual method while maintaining comparable detail and accuracy, demonstrating its potential for wider application in complex built environments.
(This article belongs to the Special Issue The State-of-the-Art of Robotics in Asia)
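The scan–register–plan loop this abstract outlines can be mocked in a few lines. The MockRobot sensor model below is a toy stand-in invented for illustration; it is not BIMBot's LiDAR pipeline or API.

```python
# Runnable toy of the scan-register-NBV loop (all names hypothetical).

class MockRobot:
    def __init__(self, poses):
        self.poses = poses      # candidate scan poses
        self.seen = set()       # toy "point cloud": ids of covered regions

    def capture_scan(self, pose):
        # toy sensor: each pose sees two neighbouring regions
        return {pose, (pose + 1) % len(self.poses)}

def next_best_view(robot):
    # NBV = the pose whose scan would add the most unseen regions
    best = max(robot.poses, key=lambda p: len(robot.capture_scan(p) - robot.seen))
    return best if robot.capture_scan(best) - robot.seen else None

robot = MockRobot(poses=list(range(4)))
while (nbv := next_best_view(robot)) is not None:
    robot.seen |= robot.capture_scan(nbv)   # "register" the new scan into the map
print(robot.seen)  # full coverage in fewer scans than candidate poses
```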

21 pages, 3955 KiB  
Article
VP-SOM: View-Planning Method for Indoor Active Sparse Object Mapping Based on Information Abundance and Observation Continuity
by Jiadong Zhang and Wei Wang
Sensors 2023, 23(23), 9415; https://doi.org/10.3390/s23239415 - 26 Nov 2023
Viewed by 1165
Abstract
Active mapping is an important technique for mobile robots to autonomously explore and recognize indoor environments. View planning, as the core of active mapping, determines the quality of the map and the efficiency of exploration. However, most current view-planning methods focus on low-level geometric information like point clouds and neglect the indoor objects that are important for human–robot interaction. We propose a novel View-Planning method for indoor active Sparse Object Mapping (VP-SOM). VP-SOM is the first to take into account the properties of object clusters in coexisting human–robot environments. We categorized views into global views and local views based on the object cluster, to balance exploration efficiency and mapping accuracy. We developed a new view-evaluation function based on objects’ information abundance and observation continuity, to select the Next-Best-View (NBV). In particular, to calculate the uncertainty of the sparse object model, we built an object surface occupancy probability map. Our experimental results demonstrated that our view-planning method can explore indoor environments and build object maps more accurately, efficiently, and robustly.
(This article belongs to the Special Issue Advances in Mobile Robot Perceptions, Planning, Control and Learning)
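A two-term view score in the spirit of the evaluation function this abstract describes might look as follows; the field names, the yaw-based continuity proxy, and the weights are assumptions made here for illustration.

```python
# Hypothetical view score: information abundance + observation continuity.
import math

def evaluate_view(view, current_view, alpha=0.7, beta=0.3):
    abundance = view["visible_object_info"]            # assumed normalized to [0, 1]
    yaw_change = abs(view["yaw"] - current_view["yaw"])
    continuity = 1.0 - min(1.0, yaw_change / math.pi)  # small jumps score higher
    return alpha * abundance + beta * continuity

candidates = [{"visible_object_info": 0.9, "yaw": 2.0},
              {"visible_object_info": 0.7, "yaw": 0.3}]
print(max(candidates, key=lambda v: evaluate_view(v, {"yaw": 0.0})))
```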

15 pages, 6758 KiB  
Article
Decentralized Multi-UAV Cooperative Exploration Using Dynamic Centroid-Based Area Partition
by Jianjun Gui, Tianyou Yu, Baosong Deng, Xiaozhou Zhu and Wen Yao
Drones 2023, 7(6), 337; https://doi.org/10.3390/drones7060337 - 23 May 2023
Cited by 8 | Viewed by 3265
Abstract
Efficient exploration is a critical issue for swarm UAVs, attracting substantial research interest due to its applications in search and rescue missions. In this study, we propose a cooperative exploration approach that uses multiple unmanned aerial vehicles (UAVs). Our approach allows UAVs to explore separate areas dynamically, resulting in increased efficiency and decreased redundancy. We use a novel dynamic centroid-based method to partition the 3D working area for each UAV, with each UAV generating new targets in its partitioned area using only the onboard computational resources. To ensure cooperation and exploration of unknown space, we use a next-best-view (NBV) method based on the rapidly-exploring random tree (RRT), which grows a tree in the partitioned area until a threshold is reached. We compare this approach with three classical methods in Gazebo simulation: a Voronoi-based area partition method, a coordination method for reducing scanning repetition between UAVs, and a greedy method that works according to its exploration planner without any interaction. We also conduct practical experiments to verify the effectiveness of our proposed method.
(This article belongs to the Special Issue Multi-UAV Networks)
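The centroid-based partition can be illustrated with a toy grid: every cell is claimed by the UAV whose centroid is closest, so each UAV samples its RRT/NBV targets only inside its own region. The cell and centroid representations below are simplified assumptions.

```python
# Toy centroid-based partition of a 3D cell set between UAVs.
import math

def partition(cells, centroids):
    """Map each cell to the index of the nearest UAV centroid."""
    return {c: min(range(len(centroids)), key=lambda i: math.dist(c, centroids[i]))
            for c in cells}

cells = [(x, y, 1.0) for x in range(4) for y in range(4)]
centroids = [(0.0, 0.0, 1.0), (3.0, 3.0, 1.0)]
owner = partition(cells, centroids)
print(sum(1 for v in owner.values() if v == 0), "cells for UAV 0")  # -> 10
```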

30 pages, 25100 KiB  
Article
Autonomous Scene Exploration for Robotics: A Conditional Random View-Sampling and Evaluation Using a Voxel-Sorting Mechanism for Efficient Ray Casting
by João Santos, Miguel Oliveira, Rafael Arrais and Germano Veiga
Sensors 2020, 20(15), 4331; https://doi.org/10.3390/s20154331 - 4 Aug 2020
Cited by 7 | Viewed by 4097
Abstract
Exploring a scene with an autonomous robot entails a set of complex skills: the ability to create and update a representation of the scene, knowledge of which regions of the scene are still unexplored, the ability to estimate the most efficient point of view from the perspective of an explorer agent and, finally, the ability to physically move the system to the selected Next Best View (NBV). This paper proposes an autonomous exploration system that makes use of a dual OcTree representation to encode the regions in the scene which are occupied, free, and unknown. The NBV is estimated through a discrete approach that samples and evaluates a set of view hypotheses created by a conditioned random process, which ensures that the views have some chance of adding novel information to the scene. The algorithm uses ray casting defined according to the characteristics of the RGB-D sensor, and a mechanism that sorts the voxels to be tested in a way that considerably speeds up the assessment. The sampled view that is estimated to provide the largest amount of novel information is selected, and the system moves to that location, where a new exploration step begins. The exploration session is terminated when there are no more unknown regions in the scene or when those that remain cannot be observed by the system. The experimental setup consisted of a robotic manipulator with an RGB-D sensor mounted on its end-effector, all managed by a Robot Operating System (ROS) based architecture. The manipulator provides movement, while the sensor collects information about the scene. Experimental results span three test scenarios designed to evaluate the performance of the proposed system. In particular, the exploration performance of the proposed system is compared against that of human subjects. Results show that the proposed approach is able to carry out the exploration of a scene, even when starting from scratch, building up knowledge as the exploration progresses. Furthermore, in these experiments, the system completed the exploration of the scene in less time than the human subjects.
(This article belongs to the Section Sensors and Robotics)
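The voxel-sorting idea is easy to illustrate: when testing visibility along a ray, visiting candidate voxels nearest-first lets the test stop at the first occupied voxel, skipping everything occluded behind it. The grid and ray representation below are simplified assumptions, not the paper's OcTree machinery.

```python
# Nearest-first voxel traversal along a ray with early termination.
import math

def first_hit(ray_origin, voxels_on_ray, occupied):
    """Return the first occupied voxel along the ray, nearest-first."""
    for v in sorted(voxels_on_ray, key=lambda v: math.dist(ray_origin, v)):
        if v in occupied:
            return v   # everything behind v is occluded: stop early
    return None

print(first_hit((0, 0, 0),
                [(3, 0, 0), (1, 0, 0), (2, 0, 0)],
                {(2, 0, 0), (3, 0, 0)}))
# -> (2, 0, 0); the farther occupied voxel is never mistaken for visible
```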

23 pages, 19467 KiB  
Article
Automated 3D Reconstruction Using Optimized View-Planning Algorithms for Iterative Development of Structure-from-Motion Models
by Samuel Arce, Cory A. Vernon, Joshua Hammond, Valerie Newell, Joseph Janson, Kevin W. Franke and John D. Hedengren
Remote Sens. 2020, 12(13), 2169; https://doi.org/10.3390/rs12132169 - 7 Jul 2020
Cited by 24 | Viewed by 6066
Abstract
Unsupervised machine learning algorithms (clustering, genetic, and principal component analysis) automate Unmanned Aerial Vehicle (UAV) missions as well as the creation and refinement of iterative 3D photogrammetric models with a next best view (NBV) approach. The novel approach uses Structure-from-Motion (SfM) to achieve convergence to a specified orthomosaic resolution by identifying edges in the point cloud and planning cameras that “view” the holes identified by edges, without requiring an initial model. This iterative UAV photogrammetric method runs successfully in various Microsoft AirSim environments. Simulated ground sampling distance (GSD) of models reaches as low as 3.4 cm per pixel, and successive iterations generally improve resolution. Beyond its application in simulated environments, a field study of a retired municipal water tank illustrates the practical application and advantages of automated UAV iterative inspection of infrastructure, using 63% fewer photographs than a comparable manual flight while producing point clouds of comparable density with a GSD of less than 3 cm per pixel. Each iteration qualitatively increases resolution according to a logarithmic regression, reduces holes in models, and adds details to model edges.
(This article belongs to the Special Issue Latest Developments in 3D Mapping with Unmanned Aerial Vehicles)
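The camera-planning step this abstract describes, placing views that “view” the holes found via point-cloud edges, can be sketched as a stand-off pose along each hole's outward normal. The stand-off distance, data layout, and upstream hole detection are assumptions, not the paper's parameters.

```python
# Hypothetical NBV camera placement: one pose per detected hole.
import numpy as np

def plan_cameras(hole_centers, hole_normals, standoff=10.0):
    """Place a camera `standoff` metres along each hole's outward normal."""
    poses = []
    for c, n in zip(hole_centers, hole_normals):
        n = n / np.linalg.norm(n)
        poses.append({"position": c + standoff * n, "view_dir": -n})
    return poses

holes = [np.array([5.0, 5.0, 12.0])]      # hole centre from edge clustering
normals = [np.array([0.0, -1.0, 0.0])]    # outward surface normal at the hole
print(plan_cameras(holes, normals))
```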

25 pages, 7958 KiB  
Article
Victim Localization in USAR Scenario Exploiting Multi-Layer Mapping Structure
by Abdulrahman Goian, Reem Ashour, Ubaid Ahmad, Tarek Taha, Nawaf Almoosa and Lakmal Seneviratne
Remote Sens. 2019, 11(22), 2704; https://doi.org/10.3390/rs11222704 - 19 Nov 2019
Cited by 13 | Viewed by 3947
Abstract
Urban search and rescue missions require rapid intervention to locate victims and survivors in the affected environments. To facilitate this activity, Unmanned Aerial Vehicles (UAVs) have been recently used to explore the environment and locate possible victims. In this paper, a UAV equipped with multiple complementary sensors is used to detect the presence of a human in an unknown environment. A novel human localization approach in unknown environments is proposed that merges information gathered from deep-learning-based human detection, wireless signal mapping, and thermal signature mapping to build an accurate global human location map. A next-best-view (NBV) approach with a proposed multi-objective utility function is used to iteratively evaluate the map to locate the presence of humans rapidly. Results demonstrate that the proposed strategy outperforms other methods in several performance measures such as the number of iterations, entropy reduction, and traveled distance.
(This article belongs to the Section Remote Sensing Image Processing)
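The multi-layer fusion this abstract describes, merging vision, wireless, and thermal evidence into one victim-probability map, can be sketched as a per-cell weighted sum. The layer weights below are illustrative assumptions, not the paper's tuned values.

```python
# Toy fusion of three detection layers into one probability map.
import numpy as np

def fuse_layers(vision, wireless, thermal, weights=(0.5, 0.2, 0.3)):
    fused = weights[0] * vision + weights[1] * wireless + weights[2] * thermal
    return np.clip(fused, 0.0, 1.0)

shape = (4, 4)
vision = np.zeros(shape);       vision[2, 2] = 0.9    # deep-learning detection
wireless = np.full(shape, 0.1); wireless[2, 2] = 0.8  # wireless signal map
thermal = np.zeros(shape);      thermal[2, 2] = 0.7   # thermal signature map
print(fuse_layers(vision, wireless, thermal)[2, 2])   # agreement -> 0.82
```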

20 pages, 5956 KiB  
Article
Guided Next Best View for 3D Reconstruction of Large Complex Structures
by Randa Almadhoun, Abdullah Abduldayem, Tarek Taha, Lakmal Seneviratne and Yahya Zweiri
Remote Sens. 2019, 11(20), 2440; https://doi.org/10.3390/rs11202440 - 21 Oct 2019
Cited by 22 | Viewed by 5491
Abstract
In this paper, a Next Best View (NBV) approach with a profiling stage and a novel utility function for 3D reconstruction using an Unmanned Aerial Vehicle (UAV) is proposed. The proposed approach performs an initial scan in order to build a rough model of the structure that is later used to improve coverage completeness and reduce flight time. Then, a more thorough NBV process is initiated, utilizing the rough model in order to create a dense 3D reconstruction of the structure of interest. The proposed approach exploits the reflectional symmetry feature if it exists in the initial scan of the structure. The proposed NBV approach is implemented with a novel utility function, which consists of four main components: information theory, model density, traveled distance, and predictive measures based on symmetries in the structure. This system outperforms classic information gain approaches, achieving higher model density, greater entropy reduction, and better coverage completeness. Simulated and real experiments were conducted, and the results show the effectiveness and applicability of the proposed approach.
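The four-component utility function can be written down directly from the abstract's description; the weights and the normalization of each term to [0, 1] are assumptions made for illustration.

```python
# Hypothetical four-term NBV utility (weights are illustrative).

def view_utility(info_gain, density_gain, distance, symmetry_gain,
                 w=(1.0, 0.5, 0.3, 0.5)):
    return (w[0] * info_gain        # expected entropy reduction (information theory)
            + w[1] * density_gain   # expected increase in model density
            - w[2] * distance       # normalized travel cost to the viewpoint
            + w[3] * symmetry_gain) # predicted coverage from structural symmetry

print(view_utility(0.8, 0.4, 0.2, 0.6))  # -> 1.24
```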

22 pages, 21714 KiB  
Article
Two-Dimensional Frontier-Based Viewpoint Generation for Exploring and Mapping Underwater Environments
by Eduard Vidal, Narcís Palomeras, Klemen Istenič, Juan David Hernández and Marc Carreras
Sensors 2019, 19(6), 1460; https://doi.org/10.3390/s19061460 - 25 Mar 2019
Cited by 23 | Viewed by 5206
Abstract
To autonomously explore complex underwater environments, it is convenient to develop motion planning strategies that do not depend on prior information. In this publication, we present a robotic exploration algorithm for autonomous underwater vehicles (AUVs) that is able to guide the robot so that it explores an unknown 2-dimensional (2D) environment. The algorithm is built upon view planning (VP) and frontier-based (FB) strategies. Traditional robotic exploration algorithms seek full coverage of the scene with data from only one sensor. If data coverage is required for multiple sensors, multiple exploration missions are required. Our approach has been designed to sense the environment achieving full coverage with data from two sensors in a single exploration mission: occupancy data from the profiling sonar, from which the shape of the environment is perceived, and optical data from the camera, to capture the details of the environment. This saves time and mission costs. The algorithm has been designed to be computationally efficient, so that it can run online in the AUV’s onboard computer. In our approach, the environment is represented using a labeled quadtree occupancy map which, at the same time, is used to generate the viewpoints that guide the exploration. We have tested the algorithm in different environments through numerous experiments, which include sea operations using the Sparus II AUV and its sensor suite.
(This article belongs to the Section Physical Sensors)
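Frontier-based viewpoint generation like that described above reduces, in its simplest form, to finding free cells that border unknown space. The paper uses a labeled quadtree; a plain 2D grid is used below for brevity, so treat this as a hedged illustration only.

```python
# Frontier cells = free cells adjacent to unknown cells (4-connectivity).
FREE, OCCUPIED, UNKNOWN = 0, 1, 2

def frontiers(grid):
    rows, cols = len(grid), len(grid[0])
    out = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != FREE:
                continue
            neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(0 <= i < rows and 0 <= j < cols and grid[i][j] == UNKNOWN
                   for i, j in neighbours):
                out.append((r, c))
    return out

grid = [[0, 0, 2],
        [0, 1, 2],
        [0, 0, 2]]
print(frontiers(grid))  # -> [(0, 1), (2, 1)]
```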
