Search Results (7)

Search Parameters:
Keywords = artificial fiducial markers

21 pages, 3130 KiB  
Article
Large-Scale Indoor Camera Positioning Using Fiducial Markers
by Pablo García-Ruiz, Francisco J. Romero-Ramirez, Rafael Muñoz-Salinas, Manuel J. Marín-Jiménez and Rafael Medina-Carnicer
Sensors 2024, 24(13), 4303; https://doi.org/10.3390/s24134303 - 2 Jul 2024
Cited by 2 | Viewed by 2203
Abstract
Estimating the pose of a large set of fixed indoor cameras is a requirement for certain applications in augmented reality, autonomous navigation, video surveillance, and logistics. However, accurately mapping the positions of these cameras remains an unsolved problem. While providing partial solutions, existing alternatives are limited by their dependence on distinct environmental features, the requirement for large overlapping camera views, and specific conditions. This paper introduces a novel approach to estimating the pose of a large set of cameras using a small subset of fiducial markers printed on regular pieces of paper. By placing the markers in areas visible to multiple cameras, we can obtain an initial estimation of the pair-wise spatial relationship between them. The markers can be moved throughout the environment to obtain the relationship between all cameras, thus creating a graph connecting all cameras. In the final step, our method performs a full optimization, minimizing the reprojection errors of the observed markers and enforcing physical constraints, such as camera and marker coplanarity and control points. We validated our approach using novel artificial and real datasets with varying levels of complexity. Our experiments demonstrated superior performance over existing state-of-the-art techniques and increased effectiveness in real-world applications. Accompanying this paper, we provide the research community with access to our code, tutorials, and an application framework to support the deployment of our methodology.
(This article belongs to the Special Issue Sensor Fusion Applications for Navigation and Indoor Positioning)
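As a rough illustration of the pairwise step described in this abstract, the sketch below chains two marker-to-camera poses (obtained with OpenCV's solvePnP on a shared printed marker) into the relative pose between two fixed cameras. It is not the authors' released code; the marker side length, camera intrinsics and detected corner arrays (e.g., from an ArUco detector) are assumed inputs, and the function names are placeholders.

```python
# Minimal sketch (not the authors' code): estimate the relative pose between two
# fixed cameras from one printed square marker that both of them observe.
import numpy as np
import cv2

MARKER_LEN = 0.15  # marker side length in metres (assumed)

# 3D corner coordinates of the marker in its own frame (z = 0 plane),
# in the order required by cv2.SOLVEPNP_IPPE_SQUARE.
OBJ_PTS = np.array([
    [-MARKER_LEN / 2,  MARKER_LEN / 2, 0],
    [ MARKER_LEN / 2,  MARKER_LEN / 2, 0],
    [ MARKER_LEN / 2, -MARKER_LEN / 2, 0],
    [-MARKER_LEN / 2, -MARKER_LEN / 2, 0],
], dtype=np.float64)

def marker_to_camera(corners_px, K, dist):
    """4x4 transform taking marker coordinates into camera coordinates.

    corners_px: (4, 2) float array of detected marker corners in pixels.
    """
    ok, rvec, tvec = cv2.solvePnP(OBJ_PTS, corners_px, K, dist,
                                  flags=cv2.SOLVEPNP_IPPE_SQUARE)
    if not ok:
        raise RuntimeError("PnP failed")
    T = np.eye(4)
    T[:3, :3], _ = cv2.Rodrigues(rvec)
    T[:3, 3] = tvec.ravel()
    return T

def relative_camera_pose(corners_cam_a, corners_cam_b, K_a, dist_a, K_b, dist_b):
    """Pose of camera B expressed in camera A's frame, via the shared marker."""
    T_a_m = marker_to_camera(corners_cam_a, K_a, dist_a)   # marker -> camera A
    T_b_m = marker_to_camera(corners_cam_b, K_b, dist_b)   # marker -> camera B
    return T_a_m @ np.linalg.inv(T_b_m)                    # camera B -> camera A
```

The paper's full pipeline repeats this pairwise estimation as markers are moved around the environment, builds a graph over all cameras, and then refines everything by minimizing marker reprojection error; the sketch covers only the first step.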

13 pages, 1493 KiB  
Perspective
Integration of Square Fiducial Markers in Patient-Specific Instrumentation and Their Applicability in Knee Surgery
by Vicente J. León-Muñoz, Joaquín Moya-Angeler, Mirian López-López, Alonso J. Lisón-Almagro, Francisco Martínez-Martínez and Fernando Santonja-Medina
J. Pers. Med. 2023, 13(5), 727; https://doi.org/10.3390/jpm13050727 - 25 Apr 2023
Cited by 4 | Viewed by 2513
Abstract
Computer technologies play a crucial role in orthopaedic surgery and are essential in personalising different treatments. Recent advances allow the usage of augmented reality (AR) for many orthopaedic procedures, including different types of knee surgery. AR combines virtual environments with the physical world, allowing both to intermingle (AR superimposes information on real objects in real time) through an optical device, and allows different processes to be personalised for each patient. This article describes the integration of fiducial markers in planning knee surgeries and provides a narrative review of the latest publications on AR applications in knee surgery. Augmented reality-assisted knee surgery is an emerging set of techniques that can increase accuracy, efficiency, and safety, and decrease the radiation exposure of conventional methods in some surgical procedures, such as osteotomies. Initial clinical experience with AR projection based on ArUco-type artificial marker sensors has shown promising results and received positive operator feedback. Once initial clinical safety and efficacy have been demonstrated, continued experience should be studied to validate this technology and generate further innovation in this rapidly evolving field.
(This article belongs to the Special Issue Latest Advances in Musculoskeletal (Orthopedic) Surgery)

17 pages, 2452 KiB  
Article
A Machine Learning Approach to Robot Localization Using Fiducial Markers in RobotAtFactory 4.0 Competition
by Luan C. Klein, João Braun, João Mendes, Vítor H. Pinto, Felipe N. Martins, Andre Schneider de Oliveira, Heinrich Wörtche, Paulo Costa and José Lima
Sensors 2023, 23(6), 3128; https://doi.org/10.3390/s23063128 - 15 Mar 2023
Cited by 10 | Viewed by 4998
Abstract
Localization is a crucial skill in mobile robotics because the robot needs to make reasonable navigation decisions to complete its mission. Many approaches exist to implement localization, but artificial intelligence can be an interesting alternative to traditional localization techniques based on model calculations. This work proposes a machine learning approach to solve the localization problem in the RobotAtFactory 4.0 competition. The idea is to obtain the relative pose of an onboard camera with respect to fiducial markers (ArUcos) and then estimate the robot pose with machine learning. The approaches were validated in simulation. Several algorithms were tested, and the best results were obtained using a Random Forest Regressor, with errors on the millimeter scale. The proposed solution matches the accuracy of the analytical approach to the localization problem in the RobotAtFactory 4.0 scenario, with the advantage of not requiring explicit knowledge of the exact positions of the fiducial markers, as the analytical approach does.
(This article belongs to the Collection Smart Robotics for Automation)
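A minimal sketch of the learning step summarized above, assuming the camera-relative marker observations have already been extracted: a multi-output Random Forest maps them to the robot pose. The feature layout and the random training data are illustrative placeholders, not the competition dataset.

```python
# Hedged sketch: learn a mapping from ArUco observations to robot pose.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Illustrative dataset: each row holds the camera-relative (x, y, z, yaw) of a
# detected marker plus its ID; the target is the robot pose (x, y, theta).
X = rng.normal(size=(5000, 5))
y = rng.normal(size=(5000, 3))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
rmse = np.sqrt(np.mean((pred - y_te) ** 2, axis=0))
print("per-axis RMSE:", rmse)
```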

18 pages, 12953 KiB  
Article
Vision-Based Autonomous Following of a Moving Platform and Landing for an Unmanned Aerial Vehicle
by Jesús Morales, Isabel Castelo, Rodrigo Serra, Pedro U. Lima and Meysam Basiri
Sensors 2023, 23(2), 829; https://doi.org/10.3390/s23020829 - 11 Jan 2023
Cited by 26 | Viewed by 6068
Abstract
Interest in Unmanned Aerial Vehicles (UAVs) has increased due to their versatility and variety of applications; however, their limited battery life restricts their use. Heterogeneous multi-robot systems can offer a solution to this limitation by allowing an Unmanned Ground Vehicle (UGV) to serve as a recharging station for the aerial one. Moreover, cooperation between aerial and terrestrial robots allows them to overcome other individual limitations, such as communication link coverage or accessibility, and to solve highly complex tasks, e.g., environment exploration, infrastructure inspection or search and rescue. This work proposes a vision-based approach that enables an aerial robot to autonomously detect, follow, and land on a mobile ground platform. For this purpose, ArUco fiducial markers are used to estimate the relative pose between the UAV and UGV by processing RGB images provided by a monocular camera on board the UAV. The pose estimation is fed to a trajectory planner and four decoupled controllers to generate speed set-points relative to the UAV. Using a cascade loop strategy, these set-points are then sent to the UAV autopilot for inner-loop control. The proposed solution has been tested both in simulation, with a digital twin of a solar farm using ROS, Gazebo and Ardupilot Software-in-the-Loop (SiL), and in the real world at IST Lisbon’s outdoor facilities, with a UAV built on the basis of a DJ550 Hexacopter and a modified Jackal ground robot from DJI and Clearpath Robotics, respectively. Pose estimation, trajectory planning and speed set-points are computed on board the UAV, using a Single Board Computer (SBC) running Ubuntu and ROS, without the need for external infrastructure.
(This article belongs to the Special Issue Sensors for Smart Vehicle Applications)
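As a hedged sketch of the control side described above (not the authors' four-controller implementation), the snippet below turns a marker-derived relative pose of the platform into decoupled velocity set-points with simple proportional gains; the gains, saturation limits and the descend flag are assumptions.

```python
# Illustrative sketch: decoupled proportional velocity set-points from the
# marker-derived relative pose of the landing platform. Gains are assumed.
import numpy as np

GAINS = {"x": 0.8, "y": 0.8, "z": 0.5, "yaw": 1.0}
V_MAX = 1.5      # m/s saturation for linear velocities (assumed)
YAW_MAX = 0.8    # rad/s saturation for yaw rate (assumed)

def velocity_setpoints(rel_pos, rel_yaw, descend=False):
    """rel_pos: platform position in the UAV body frame (x, y, z) in metres."""
    vx = np.clip(GAINS["x"] * rel_pos[0], -V_MAX, V_MAX)
    vy = np.clip(GAINS["y"] * rel_pos[1], -V_MAX, V_MAX)
    # Only command a descent once the caller decides horizontal error is small.
    vz = np.clip(GAINS["z"] * rel_pos[2], -V_MAX, V_MAX) if descend else 0.0
    wz = np.clip(GAINS["yaw"] * rel_yaw, -YAW_MAX, YAW_MAX)
    return vx, vy, vz, wz

print(velocity_setpoints(rel_pos=(1.2, -0.4, -2.0), rel_yaw=0.3, descend=True))
```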

14 pages, 5375 KiB  
Article
An Underwater Visual Navigation Method Based on Multiple ArUco Markers
by Zhizun Xu, Maryam Haroutunian, Alan J. Murphy, Jeff Neasham and Rose Norman
J. Mar. Sci. Eng. 2021, 9(12), 1432; https://doi.org/10.3390/jmse9121432 - 15 Dec 2021
Cited by 32 | Viewed by 5694
Abstract
Underwater navigation presents crucial issues because of the rapid attenuation of electromagnetic waves. Conventional underwater navigation methods rely on acoustic equipment, such as ultra-short-baseline localisation systems and Doppler velocity logs. However, they suffer from low refresh rates, low bandwidth, environmental disturbance and high cost. In this paper, a novel underwater visual navigation method based on multiple ArUco markers is investigated. Unlike other underwater navigation approaches based on artificial markers, a noise model for the pose estimation of a single marker and an optimal fusion algorithm for multiple markers are developed to increase the precision of the method. Experimental tests were conducted in a towing tank. The results show that the proposed method is able to localise the underwater vehicle accurately.
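To make the multi-marker fusion idea concrete, here is a small illustrative sketch: per-marker position estimates are combined with inverse-variance weights from an assumed noise model that grows with range and viewing angle. The noise-model coefficients and example numbers are placeholders, not the paper's calibrated model.

```python
# Sketch: inverse-variance weighted fusion of simultaneous marker observations.
import numpy as np

def marker_variance(distance, view_angle_rad):
    """Assumed noise model: variance grows with range and obliqueness."""
    return (0.01 * distance) ** 2 + (0.05 * np.sin(view_angle_rad)) ** 2

def fuse_positions(positions, distances, view_angles):
    """Weighted fusion of per-marker vehicle position estimates."""
    positions = np.asarray(positions, dtype=float)        # shape (N, 3)
    w = 1.0 / np.array([marker_variance(d, a)
                        for d, a in zip(distances, view_angles)])
    w /= w.sum()
    return (w[:, None] * positions).sum(axis=0)

# Example: three markers yielding slightly different position estimates.
fused = fuse_positions(
    positions=[[1.02, 0.48, -1.50], [0.98, 0.52, -1.48], [1.10, 0.45, -1.55]],
    distances=[1.2, 1.5, 3.0],
    view_angles=[0.1, 0.3, 0.9],
)
print(fused)
```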

32 pages, 11489 KiB  
Article
Smart Artificial Markers for Accurate Visual Mapping and Localization
by Luis E. Ortiz-Fernandez, Elizabeth V. Cabrera-Avila, Bruno M. F. da Silva and Luiz M. G. Gonçalves
Sensors 2021, 21(2), 625; https://doi.org/10.3390/s21020625 - 18 Jan 2021
Cited by 22 | Viewed by 6785
Abstract
Artificial marker mapping is a useful tool for fast camera localization estimation with a certain degree of accuracy in large indoor and outdoor environments. Nonetheless, the level of accuracy can still be enhanced to allow the creation of applications such as the new Visual Odometry and SLAM datasets, low-cost systems for robot detection and tracking, and pose estimation. In this work, we propose to improve the accuracy of map construction using artificial markers (mapping method) and camera localization within this map (localization method) by introducing a new type of artificial marker that we call the smart marker. A smart marker consists of a square fiducial planar marker and a pose measurement system (PMS) unit. With a set of smart markers distributed throughout the environment, the proposed mapping method estimates the markers’ poses from a set of calibrated images and orientation/distance measurements gathered from the PMS unit. After this, the proposed localization method can localize a monocular camera with the correct scale, directly benefiting from the improved accuracy of the mapping method. We conducted several experiments to evaluate the accuracy of the proposed methods. The results show that our approach decreases the Relative Positioning Error (RPE) by 85% in the mapping stage and Absolute Trajectory Error (ATE) by 50% for the camera localization stage in comparison with the state-of-the-art methods present in the literature.
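A hedged sketch of the mapping refinement idea, under the assumption that the PMS unit supplies inter-marker distance measurements: image-based marker positions are adjusted by least squares so that they stay near their initial estimates while matching the measured distances. All values and weights are illustrative, not the paper's formulation.

```python
# Illustrative refinement of marker positions with auxiliary distance constraints.
import numpy as np
from scipy.optimize import least_squares

# Initial marker positions from image-based estimation (assumed values).
p0 = np.array([[0.0, 0.0, 0.0],
               [2.1, 0.0, 0.0],
               [2.0, 1.9, 0.0]])

# Auxiliary distance measurements between marker pairs (i, j, measured_distance).
dist_meas = [(0, 1, 2.00), (1, 2, 2.00), (0, 2, 2.83)]

def residuals(x):
    p = x.reshape(-1, 3)
    r = []
    # Stay close to the image-based estimates (weak prior).
    r.extend(0.1 * (p - p0).ravel())
    # Match the measured inter-marker distances (strong constraint).
    for i, j, d in dist_meas:
        r.append(np.linalg.norm(p[i] - p[j]) - d)
    return np.array(r)

sol = least_squares(residuals, p0.ravel())
print(sol.x.reshape(-1, 3))
```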

19 pages, 5358 KiB  
Article
Artificial Marker and MEMS IMU-Based Pose Estimation Method to Meet Multirotor UAV Landing Requirements
by Yibin Wu, Xiaoji Niu, Junwei Du, Le Chang, Hailiang Tang and Hongping Zhang
Sensors 2019, 19(24), 5428; https://doi.org/10.3390/s19245428 - 9 Dec 2019
Cited by 12 | Viewed by 4803
Abstract
The fully autonomous operation of multirotor unmanned air vehicles (UAVs) in many applications requires support for precision landing. Onboard cameras and fiducial markers have been widely used for this critical phase due to their low cost and high effectiveness. This paper proposes a six-degrees-of-freedom (DoF) pose estimation solution for UAV landing based on an artificial marker and a micro-electromechanical system (MEMS) inertial measurement unit (IMU). The position and orientation of the landing marker are measured in advance. The absolute position and heading of the UAV are estimated by detecting the marker and extracting corner points with the onboard monocular camera. To achieve continuous and reliable positioning when the marker is occasionally occluded, IMU data are fused by an extended Kalman filter (EKF). The error terms of the IMU sensor are modeled and estimated. Field experiments show that the positioning accuracy of the proposed system is at the centimeter level, and the heading error is less than 0.1 degrees. Compared to the marker-only approach, the roll and pitch angle errors decreased by 33% and 54% on average. Within five seconds of vision outage, the average drifts of the horizontal and vertical position were 0.41 and 0.09 m, respectively.
(This article belongs to the Collection Multi-Sensor Information Fusion)
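As a much-simplified, linear stand-in for the EKF fusion described above: a per-axis constant-velocity filter uses IMU acceleration in the prediction step and marker-based position fixes in the update step whenever the marker is visible. The rates and noise values are assumptions, and the paper's filter additionally estimates orientation and the IMU error terms.

```python
# Simplified Kalman-filter sketch of marker/IMU fusion for one axis.
import numpy as np

DT = 0.01                                # IMU rate: 100 Hz (assumed)
F = np.array([[1, DT], [0, 1]])          # state: [position, velocity]
B = np.array([[0.5 * DT**2], [DT]])      # acceleration input
Q = np.diag([1e-5, 1e-3])                # process noise (assumed)
H = np.array([[1.0, 0.0]])               # marker measures position only
R = np.array([[0.02**2]])                # 2 cm marker position noise (assumed)

def predict(x, P, accel):
    x = F @ x + B.flatten() * accel
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Usage: predict() on every IMU sample; update() only when the marker is detected.
x, P = np.zeros(2), np.eye(2)
for k in range(500):
    x, P = predict(x, P, accel=0.1)
    if k % 10 == 0:                      # camera at 10 Hz (assumed)
        z = np.array([0.5 * 0.1 * (k * DT) ** 2])   # synthetic position fix
        x, P = update(x, P, z)
print(x)
```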
