Search Results (27)

Search Parameters:
Keywords = offline map matching

22 pages, 45694 KB  
Article
Visual Localization for Deep-Sea Mining Vehicles During Operation
by Yangrui Cheng, Bingkun Wang, Xiaojun Zhuo, Kai Liu and Yingjie Guan
J. Mar. Sci. Eng. 2026, 14(8), 759; https://doi.org/10.3390/jmse14080759 - 21 Apr 2026
Viewed by 214
Abstract
Deep-sea mining operations demand continuous, drift-free positioning over multi-day missions, a requirement that traditional acoustic dead-reckoning systems struggle to meet due to cumulative error and frequent DVL bottom-lock loss in sediment plume environments. Inspired by Google Cartographer’s 2D grid mapping paradigm, we present a prior-map-based visual localization framework that decouples offline mapping from real-time localization, eliminating drift through absolute image registration against pre-built seabed mosaics. By integrating adaptive keyframe selection, Multi-Scale Retinex (MSR) enhancement, and the AD-LG deep feature matching architecture, our system constructs globally consistent seabed maps for absolute positioning. The framework leverages deformable convolutions and LightGlue to mitigate challenges such as low texture and non-rigid distortion. Quantitative validation on tank simulation datasets demonstrates significant superiority over IMU-only and standard fusion schemes; qualitative deployment on real Pacific CCZ imagery confirms near-real-time operational feasibility on an embedded Jetson Orin NX platform. This system establishes visual navigation as a viable backup to acoustic systems, addressing a critical gap in deep-sea mining vehicle autonomy.
(This article belongs to the Special Issue Advances in Underwater Positioning and Navigation Technology)
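The Multi-Scale Retinex enhancement mentioned in the abstract is a standard low-light/low-contrast technique: average the log-ratio between the image and Gaussian-blurred versions of itself at several scales. A minimal sketch using scipy (the scales and output stretching are illustrative assumptions, not the authors' settings):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(img, sigmas=(15, 80, 250), eps=1e-6):
    """Average log-ratios of the image against Gaussian-smoothed
    versions of itself at several scales, then stretch for display.
    Sigma values here are illustrative, not the paper's."""
    img = img.astype(np.float64) + eps
    msr = np.zeros_like(img)
    for sigma in sigmas:
        blurred = gaussian_filter(img, sigma) + eps
        msr += np.log(img) - np.log(blurred)
    msr /= len(sigmas)
    # Normalize to [0, 255] for an 8-bit output image.
    msr = (msr - msr.min()) / (msr.max() - msr.min() + eps)
    return (msr * 255).astype(np.uint8)
```
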

23 pages, 2873 KB  
Article
An Online Calibration Method for UAV Electro-Optical Pod Zoom Cameras Based on IMU-Vision Fusion
by Weiming Zhu, Zhangsong Shi, Huihui Xu, Qingping Hu, Wenjian Ying and Fan Gui
Drones 2026, 10(3), 224; https://doi.org/10.3390/drones10030224 - 22 Mar 2026
Viewed by 472
Abstract
To address the calibration challenge caused by the nonlinear variation in intrinsic parameters during continuous camera zooming in UAV electro-optical pods, this paper proposes an online calibration method based on IMU-vision fusion. Traditional offline calibration cannot adapt to dynamic scenarios, while existing self-calibration methods suffer from slow convergence and insufficient robustness. The proposed method aims to achieve real-time, accurate estimation of camera intrinsic parameters during zooming. Specifically, we first construct a unified state estimation framework that encodes the camera's intrinsic and extrinsic parameters and the 3D positions of scene feature points into a high-dimensional state vector. We then establish a camera motion model based on IMU data and build a visual observation model that combines the pinhole camera model with a second-order radial distortion model, yielding a nonlinear mapping from 3D feature points to 2D pixel coordinates. An improved ORB algorithm is adopted for feature extraction, and the LK optical flow method provides high-precision cross-frame feature matching to stabilize the visual observations. Most importantly, we design a tightly coupled fusion strategy based on the Extended Kalman Filter (EKF) prediction-update mechanism, which fuses high-frequency IMU motion constraints and visual geometric constraints in real time to suppress parameter drift induced by focal length changes. Finally, we recursively solve the state vector to complete the online dynamic estimation of intrinsic parameters. Monte Carlo simulations and real UAV flight experiments confirm that the method offers both high estimation accuracy and strong environmental adaptability, meets the high-precision calibration needs of UAVs in dynamic scenarios, and provides reliable technical support for accurate target positioning.
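The EKF prediction-update mechanism at the core of such a fusion strategy can be sketched generically. The state, motion model f, and measurement model h below are caller-supplied placeholders, not the paper's actual IMU/camera models:

```python
import numpy as np

class EKF:
    """Minimal Extended Kalman Filter predict/update loop.

    Illustrative only: the paper's state vector holds camera intrinsics,
    extrinsics, and 3D feature positions with an IMU-driven motion model;
    here f, h, and their Jacobians F, H are supplied by the caller."""
    def __init__(self, x0, P0, Q, R):
        self.x, self.P, self.Q, self.R = x0, P0, Q, R

    def predict(self, f, F):
        # Propagate the state through the motion model; inflate covariance.
        self.x = f(self.x)
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z, h, H):
        # Fuse a measurement z through the observation model.
        y = z - h(self.x)                    # innovation
        S = H @ self.P @ H.T + self.R        # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P
```
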

20 pages, 39023 KB  
Article
Lightweight Insulator Defect Detection in High-Resolution UAV Imagery via System-Level Co-Design
by Yujie Zhu, Guanhua Chen, Linghao Zhang, Jiajun Zhou, Junwei Kuang and Jiangxiong Zhu
Remote Sens. 2026, 18(6), 953; https://doi.org/10.3390/rs18060953 - 21 Mar 2026
Viewed by 458
Abstract
The inspection of minuscule insulator defects in high-resolution (HR) UAV imagery presents a significant algorithmic challenge. The severe scale mismatch between HR images and low-resolution model inputs often leads to feature distortion for sparsely distributed targets. To address these issues, this paper proposes an integrated data–model collaborative framework. At the data level, an offline label-guided optimal tiling (LGOT) strategy is introduced to alleviate scale mismatch by curating information-dense training tiles. At the model level, we design the semi-decoupled prior-driven detection head (SDPD-Head), which leverages evolutionary priors to stabilize the learning of microscopic spatial features. During inference, an online inference-time adaptive tiling (ITAT) strategy matches the spatial scale distribution between training and inference and reduces the feature loss caused by direct downscaling. Experiments on a real-world inspection dataset show that the proposed framework achieves an mAP@50 of 92.9% with 2.17 M parameters and 4.7 GFLOPs.
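Inference-time tiling of a high-resolution image can be illustrated as follows. The tile size, overlap, and border handling are generic assumptions; the paper's ITAT strategy additionally adapts the tiling to the scale statistics seen during training:

```python
import numpy as np

def tile_image(img, tile=640, overlap=0.2):
    """Split a high-resolution image into overlapping square tiles.

    Generic sketch: tile size and overlap are illustrative, and the
    right/bottom borders get an extra tile so no pixels are dropped."""
    step = max(1, int(tile * (1 - overlap)))
    h, w = img.shape[:2]
    ys = list(range(0, max(h - tile, 0) + 1, step))
    xs = list(range(0, max(w - tile, 0) + 1, step))
    if h > tile and ys[-1] != h - tile:
        ys.append(h - tile)  # cover the bottom border
    if w > tile and xs[-1] != w - tile:
        xs.append(w - tile)  # cover the right border
    return [(x, y, img[y:y + tile, x:x + tile]) for y in ys for x in xs]
```

Detections from each tile are then mapped back to full-image coordinates by adding the tile's (x, y) offset.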

30 pages, 16905 KB  
Article
Real-Time 2D Orthomosaic Mapping from UAV Video via Feature-Based Image Registration
by Se-Yun Hwang, Seunghoon Oh, Jae-Chul Lee, Soon-Sub Lee and Changsoo Ha
Appl. Sci. 2026, 16(4), 2133; https://doi.org/10.3390/app16042133 - 22 Feb 2026
Viewed by 708
Abstract
This study presents a real-time framework for generating two-dimensional (2D) orthomosaic maps directly from UAV video. The method targets operational scenarios in which a continuously updated 2D overview is required during flight or immediately after landing, without relying on time-consuming offline photogrammetry workflows such as structure-from-motion (SfM) and multi-view stereo (MVS). The proposed procedure incrementally registers sparsely sampled video frames on standard CPU hardware using classical feature-based image registration. Each selected frame is converted to grayscale and processed under a fixed keypoint budget to maintain predictable runtime. Tentative correspondences are obtained through descriptor matching with ratio-test filtering, and outliers are removed using random sample consensus (RANSAC) to ensure geometric consistency. Inter-frame motion is modeled by a planar homography, enabling the mapping process to jointly account for rotation, scale variation, skew, and translation that commonly occur in UAV video due to yaw maneuvers, mild altitude variation, and platform motion. Sequential homographies are accumulated to warp incoming frames into a global mosaic canvas, which is updated incrementally using lightweight blending suitable for real-time visualization. Experimental results on three UAV video sequences with different durations, flight patterns, and scene targets report representative orthomosaic-style outputs and per-step CPU runtime statistics (mean, 95th percentile, and maximum), illustrating typical operating behavior under the tested settings. The framework produces visually coherent orthomosaic-style maps in real time for approximately planar scenes with sufficient overlap and texture, while clarifying practical failure modes under weak texture, motion blur, and strong parallax. Limitations include potential drift over long sequences and the absence of ground-truth references for absolute registration-error evaluation.
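The accumulation of sequential homographies into a global canvas can be sketched in a few lines. This assumes each pairwise homography maps frame i+1 coordinates into frame i's plane, which is one common convention, not necessarily the paper's:

```python
import numpy as np

def accumulate_homographies(pairwise):
    """Chain pairwise homographies into global ones.

    pairwise[i] is assumed to map frame i+1 coordinates into frame i's
    plane; the returned H_global[k] then maps frame k onto frame 0,
    i.e., onto the mosaic canvas."""
    H_global = [np.eye(3)]
    for H in pairwise:
        H_global.append(H_global[-1] @ H)
    return H_global

def apply_h(H, pt):
    """Apply a 3x3 homography to a 2D point (homogeneous division)."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]
```

Chaining concatenates small inter-frame errors, which is exactly the drift over long sequences noted as a limitation in the abstract.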

21 pages, 10300 KB  
Article
Cross-Detector Visual Localization with Coplanarity Constraints for Indoor Environments
by Jose-Luis Matez-Bandera, Alberto Jaenal, Clara Gomez, Alejandra C. Hernandez, Javier Monroy, José Araújo and Javier Gonzalez-Jimenez
Sensors 2025, 25(24), 7593; https://doi.org/10.3390/s25247593 - 15 Dec 2025
Viewed by 600
Abstract
Most visual localization (VL) methods assume that keypoints in the query image are detected with the same algorithm as those stored in the reference map. This is a serious limitation: new and better detectors appear over time, and we would like cameras with heterogeneous detectors to interoperate and coexist within a single map representation. While rebuilding the map with each new detector might seem a solution, it is often impractical, as the original images may be unavailable or restricted by data privacy constraints. In this paper, we address this challenge with two main contributions. First, we introduce and formalize the problem of cross-detector VL, in which the inherent spatial discrepancies between keypoints from different detectors hinder establishing correct correspondences when relying strictly on descriptor similarity for matching. Second, we propose CoplaMatch, the first approach to solve this problem, which relaxes strict descriptor similarity and imposes geometric coplanarity constraints by leveraging 2D homographies between groups of query and map keypoints. This requires segmenting planar patches, performed once offline for the map and online for each query image; the latter adds computational overhead, but our experiments show it does not hinder online applicability. We extensively validate our proposal through experiments in indoor environments using real-world datasets, demonstrating its effectiveness against two state-of-the-art methods by enabling accurate localization in cross-detector scenarios. Our work thus validates the feasibility of cross-detector VL and opens a new direction for the long-term usability of feature-based maps.
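The homography fitting that underlies such coplanarity constraints is standard. A direct linear transform (DLT) sketch, shown to illustrate the geometric test, not the CoplaMatch implementation:

```python
import numpy as np

def homography_dlt(src, dst):
    """Fit a 3x3 homography from >= 4 point correspondences via the
    direct linear transform (DLT). Each correspondence contributes two
    linear equations; the homography is the null-space vector of the
    stacked system."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)   # smallest singular vector = flattened H
    return H / H[2, 2]
```

Coplanar keypoint groups fit such a homography with low residual; non-coplanar or mismatched groups do not, which is the screening signal.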

27 pages, 4420 KB  
Article
Real-Time Quarry Truck Monitoring with Deep Learning and License Plate Recognition: Weighbridge Reconciliation for Production Control
by Ibrahima Dia, Bocar Sy, Ousmane Diagne, Sidy Mané and Lamine Diouf
Mining 2025, 5(4), 84; https://doi.org/10.3390/mining5040084 - 14 Dec 2025
Viewed by 1077
Abstract
This paper presents a real-time quarry truck monitoring system that combines deep learning and license plate recognition (LPR) for operational monitoring and weighbridge reconciliation. Rather than estimating load volumes directly from imagery, the system ensures auditable matching between detected trucks and official weight records. Deployed at quarry checkpoints, fixed cameras stream to an edge stack that performs truck detection, line-crossing counts, and per-frame plate Optical Character Recognition (OCR); a temporal voting and format-constrained post-processing step consolidates plate strings for registry matching. The system exposes a dashboard with auditable session bundles (model/version hashes, Region of Interest (ROI)/line geometry, thresholds, logs) to ensure replay and traceability between offline evaluation and live operations. We evaluate detection (precision, recall, mAP@0.5, and mAP@0.5:0.95), tracking (ID metrics), and LPR usability, and we quantify operational validity by reconciling estimated shift-level tonnage T against weighbridge tonnage T* using Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), R², and Bland–Altman analysis. Results show stable convergence of the detection models, reliable plate usability under varied optics (day, dusk, night, and dust), low-latency processing suitable for commodity hardware, and close agreement with weighbridge references at the shift level. The study demonstrates that vision-based counting coupled with plate linkage can provide regulator-ready KPIs and auditable evidence for production control in quarry operations.
(This article belongs to the Special Issue Mine Management Optimization in the Era of AI and Advanced Analytics)
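Temporal voting over per-frame OCR reads can be illustrated with simple per-position majority voting. This sketch omits the format-constrained post-processing (country-specific plate patterns) that the deployed system applies:

```python
from collections import Counter

def consolidate_plate(reads):
    """Consolidate per-frame OCR reads of one plate by temporal voting:
    keep reads of the modal length, then take the most common character
    at each position. Format enforcement is intentionally omitted."""
    if not reads:
        return ""
    n = Counter(len(r) for r in reads).most_common(1)[0][0]
    same_len = [r for r in reads if len(r) == n]
    return "".join(Counter(chars).most_common(1)[0][0]
                   for chars in zip(*same_len))
```

A single misread frame ("A8123" for "AB123") is outvoted as long as most frames read the character correctly.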

22 pages, 12768 KB  
Article
Multi-Agent Coverage Path Planning Using Graph-Adapted K-Means in Road Network Digital Twin
by Haeseong Lee and Myungho Lee
Electronics 2025, 14(19), 3921; https://doi.org/10.3390/electronics14193921 - 1 Oct 2025
Cited by 2 | Viewed by 1323
Abstract
In this paper, we study multi-robot coverage path planning (MCPP), which generates paths for agents to visit all target areas or points. This problem arises in various fields, such as agriculture, rescue, 3D scanning, and data collection. Algorithms for MCPP are generally categorized into online and offline methods: online methods work in an unknown area, while offline methods generate paths for a known one. Recently, offline MCPP has been studied through various approaches, such as graph clustering, DARP, genetic algorithms, and deep learning models. However, many previous algorithms can only be applied to grid-like environments. This study therefore introduces an offline MCPP algorithm that applies graph-adapted K-means and spanning tree coverage for robust operation on non-grid-structured maps such as road networks. To achieve this, we adjust the reference clustering algorithm so that its cost function is based on travel distance. Moreover, we apply bipartite graph matching to account for the initial positions of agents, and we introduce a cluster-level graph to alleviate local minima during clustering updates. We compare the proposed algorithm with existing methods in a grid environment to validate its stability, and evaluation on a road network digital twin confirms its robustness across most environments.
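The bipartite matching of agents to clusters can be sketched with the Hungarian algorithm on a distance cost matrix. This is a generic stand-in using scipy: the paper's cost is travel distance over the road graph, not the straight-line distance used here:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_agents_to_clusters(agent_pos, cluster_centers):
    """Hungarian assignment of agents to clusters, minimizing total
    Euclidean distance. Returns {agent_index: cluster_index}."""
    cost = np.linalg.norm(
        agent_pos[:, None, :] - cluster_centers[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return dict(zip(rows.tolist(), cols.tolist()))
```
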

24 pages, 1651 KB  
Article
Attentive Neural Processes for Few-Shot Learning Anomaly-Based Vessel Localization Using Magnetic Sensor Data
by Luis Fernando Fernández-Salvador, Borja Vilallonga Tejela, Alejandro Almodóvar, Juan Parras and Santiago Zazo
J. Mar. Sci. Eng. 2025, 13(9), 1627; https://doi.org/10.3390/jmse13091627 - 26 Aug 2025
Cited by 1 | Viewed by 1648
Abstract
Underwater vessel localization using passive magnetic anomaly sensing is a challenging problem due to the variability in vessel magnetic signatures and operational conditions. Data-driven approaches may fail to generalize even to slightly different conditions. We therefore propose an Attentive Neural Process (ANP) approach, exploiting its few-shot generalization capabilities, for robust localization of underwater vessels from magnetic anomaly measurements. Our ANP models the mapping from multi-sensor magnetic readings to position as a stochastic function: it cross-attends to a variable-size set of context points and fuses these with a global latent code that captures trajectory-level factors. The decoder outputs a Gaussian over coordinates, providing both point estimates and well-calibrated predictive variance. We validate our approach using a comprehensive dataset of magnetic disturbance fields covering 64 distinct vessel configurations (combinations of hull size, submersion depth (water-column height over a seabed array), and number of available sensors). Six magnetometers in a fixed circular arrangement record the magnetic field perturbations as a vessel traverses sinusoidal trajectories. We compare the ANP against baseline multilayer perceptron (MLP) models: (1) base MLPs trained separately on each vessel configuration, and (2) a domain-randomized search (DRS) MLP trained on the aggregate of all configurations to evaluate generalization across domains. The results demonstrate that the ANP achieves superior generalization to new vessel conditions, matching the accuracy of configuration-specific MLPs while providing well-calibrated uncertainty quantification. This uncertainty-aware prediction capability is crucial for real-world deployments, as it can inform adaptive sensing and decision-making. Across various in-distribution scenarios, the ANP halves the mean absolute error versus the domain-randomized MLP (0.43 m vs. 0.84 m). The model even generalizes to out-of-distribution data, suggesting that our approach can facilitate transfer from offline training to real-world conditions.
(This article belongs to the Section Ocean Engineering)
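The cross-attention over a variable-size context set reduces to scaled dot-product attention. This numpy sketch shows only that mechanism for a single query; the full ANP adds a latent path and a Gaussian decoder, which are not shown:

```python
import numpy as np

def cross_attention(query, keys, values):
    """Scaled dot-product attention of one query (e.g., the target
    sensor reading) over a variable-size context set. Illustrative of
    the ANP's deterministic path only."""
    d = query.shape[-1]
    scores = query @ keys.T / np.sqrt(d)
    w = np.exp(scores - scores.max())   # numerically stable softmax
    w /= w.sum()
    return w @ values                   # attention-weighted context
```

Because the weights are computed per query, the context set can grow or shrink between episodes, which is what enables few-shot adaptation.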

19 pages, 3382 KB  
Article
LiDAR as a Geometric Prior: Enhancing Camera Pose Tracking Through High-Fidelity View Synthesis
by Rafael Muñoz-Salinas, Jianheng Liu, Francisco J. Romero-Ramirez, Manuel J. Marín-Jiménez and Fu Zhang
Appl. Sci. 2025, 15(15), 8743; https://doi.org/10.3390/app15158743 - 7 Aug 2025
Cited by 1 | Viewed by 1857
Abstract
This paper presents a robust framework for monocular camera pose estimation by leveraging high-fidelity, pre-built 3D LiDAR maps. The core of our approach is a render-and-match pipeline that synthesizes photorealistic views from a dense LiDAR point cloud. By detecting and matching keypoints between these synthetic images and the live camera feed, we establish reliable 3D–2D correspondences for accurate pose estimation. We evaluate two distinct strategies: an Online Rendering and Tracking method that renders views on the fly, and an Offline Keypoint-Map Tracking method that precomputes a keypoint map for known trajectories, optimizing for computational efficiency. Comprehensive experiments demonstrate that our framework significantly outperforms several state-of-the-art visual SLAM systems in both accuracy and tracking consistency. By anchoring localization to the stable geometric information from the LiDAR map, our method overcomes the reliance on photometric consistency that often causes failures in purely image-based systems, proving particularly effective in challenging real-world environments.
(This article belongs to the Special Issue Image Processing and Computer Vision Applications)
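The 3D–2D correspondences behind a render-and-match pipeline are ultimately scored by reprojecting map points through a pinhole model. A generic sketch (K, R, t below are illustrative values, not from the paper):

```python
import numpy as np

def project(points_3d, K, R, t):
    """Pinhole projection of 3D map points into the camera image.

    points_3d: (N, 3) world coordinates; K: 3x3 intrinsics;
    R, t: world-to-camera rotation and translation."""
    cam = (R @ points_3d.T).T + t    # world -> camera frame
    uv = (K @ cam.T).T               # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]    # perspective division
```

Pose estimation then seeks the (R, t) that minimizes the distance between these projections and the matched 2D keypoints (a PnP problem).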

19 pages, 3294 KB  
Article
Rotation- and Scale-Invariant Object Detection Using Compressed 2D Voting with Sparse Point-Pair Screening
by Chenbo Shi, Yue Yu, Gongwei Zhang, Shaojia Yan, Changsheng Zhu, Yanhong Cheng and Chun Zhang
Electronics 2025, 14(15), 3046; https://doi.org/10.3390/electronics14153046 - 30 Jul 2025
Viewed by 971
Abstract
The Generalized Hough Transform (GHT) is a powerful method for rigid shape detection under rotation, scaling, translation, and partial occlusion, but its four-dimensional accumulator incurs prohibitive computational and memory demands that prevent real-time deployment. To address this, we propose a framework that compresses the 4D search space into a concise 2D voting scheme by combining two-level sparse point-pair screening with an accelerated lookup. In the offline stage, template edges are extracted using an adaptive Canny operator with Otsu-determined thresholds, and gradient-direction differences for all point pairs are quantized to retain only those in the dominant bin, yielding rotation- and scale-invariant descriptors that populate a compact 2D reference table. In the online stage, an adaptive grid selects only the highest-gradient pixels per cell as base points, while a precomputed gradient-direction bucket table enables constant-time retrieval of compatible subpoints. Each valid base–subpoint pair is mapped to indices in the lookup table, and “fuzzy” votes are cast over a 3 × 3 neighborhood in the 2D accumulator, whose global peak determines the object center. Evaluation on 200 real industrial parts, augmented to 1000 samples with noise, blur, occlusion, and nonlinear illumination, demonstrates that our method maintains over 90% localization accuracy, matches the accuracy of the classical GHT, and achieves a ten-fold speedup, outperforming IGHT and LI-GHT variants by 2–3×, thereby delivering a robust, real-time solution for industrial rigid object localization.
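The Otsu-determined thresholds used for the adaptive Canny operator follow the standard Otsu method: pick the gray level that maximizes between-class variance of the histogram. A sketch of that standard algorithm (the authors' exact Canny wiring is not shown):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method on an 8-bit image: return the threshold t that
    maximizes between-class variance of the two classes {<= t, > t}."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    omega = np.cumsum(p)                 # class-0 probability
    mu = np.cumsum(p * np.arange(256))   # class-0 cumulative mean
    mu_t = mu[-1]                        # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0   # degenerate one-class splits
    return int(np.argmax(sigma_b))
```
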

38 pages, 98377 KB  
Article
FaSS-MVS: Fast Multi-View Stereo with Surface-Aware Semi-Global Matching from UAV-Borne Monocular Imagery
by Boitumelo Ruf, Martin Weinmann and Stefan Hinz
Sensors 2024, 24(19), 6397; https://doi.org/10.3390/s24196397 - 2 Oct 2024
Viewed by 2321
Abstract
With FaSS-MVS, we present a fast, surface-aware semi-global optimization approach for multi-view stereo that allows rapid depth and normal map estimation from monocular aerial video captured by unmanned aerial vehicles (UAVs). The data estimated by FaSS-MVS, in turn, facilitate online 3D mapping: a 3D map of the scene is generated immediately and incrementally as the image data are acquired or received. FaSS-MVS uses a hierarchical processing scheme in which depth and normal data, as well as corresponding confidence scores, are estimated in a coarse-to-fine manner, allowing efficient processing of the large scene depths inherent in oblique imagery acquired by UAVs flying at low altitudes. The depth estimation itself uses a plane-sweep algorithm for dense multi-image matching to produce depth hypotheses, from which the final depth map is extracted by a surface-aware semi-global optimization that reduces the fronto-parallel bias of Semi-Global Matching (SGM). Given the estimated depth map, pixel-wise surface normals are then computed by reprojecting the depth map into a point cloud and computing the normal vectors within a confined local neighborhood. In a thorough quantitative and ablative study, we show that the accuracy of the 3D information computed by FaSS-MVS is close to that of state-of-the-art offline multi-view stereo approaches, with errors less than an order of magnitude higher than those of COLMAP. At the same time, the average runtime of FaSS-MVS for estimating a single depth and normal map is less than 14% of COLMAP's, allowing online, incremental processing of full-HD images at 1–2 Hz.
(This article belongs to the Special Issue Advances on UAV-Based Sensing and Imaging)
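The normal-from-depth step (reproject the depth map to a point cloud, then estimate normals in a local neighborhood) can be reduced to cross products of neighboring tangent vectors. A simplified sketch assuming a pinhole intrinsic matrix K; the sign convention of the resulting normals depends on the chosen camera axes:

```python
import numpy as np

def normals_from_depth(depth, K):
    """Per-pixel surface normals from a depth map: backproject every
    pixel to 3D, then cross local tangent vectors along u and v.
    Returns an (h-1, w-1, 3) array of unit normals."""
    h, w = depth.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    X = (u - cx) / fx * depth
    Y = (v - cy) / fy * depth
    P = np.dstack([X, Y, depth])          # backprojected point cloud
    du = np.diff(P, axis=1)[:-1]          # tangent along image u
    dv = np.diff(P, axis=0)[:, :-1]       # tangent along image v
    n = np.cross(du, dv)
    n /= np.linalg.norm(n, axis=2, keepdims=True) + 1e-12
    return n
```
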

16 pages, 13027 KB  
Article
A Real-Time Global Re-Localization Framework for a 3D LiDAR-Based Navigation System
by Ziqi Chai, Chao Liu and Zhenhua Xiong
Sensors 2024, 24(19), 6288; https://doi.org/10.3390/s24196288 - 28 Sep 2024
Viewed by 3049
Abstract
Place recognition is widely used to re-localize robots in pre-built point cloud maps for navigation. However, current place recognition methods can only recognize previously visited places, they require the same types of sensors in the re-localization process, and the process is time-consuming. In this paper, a template-matching-based global re-localization framework is proposed to address these challenges. The proposed framework includes an offline building stage and an online matching stage. In the offline stage, virtual LiDAR scans are densely resampled from the map and rotation-invariant descriptors are extracted as templates, which are hierarchically clustered to build a template library. The map used to collect virtual LiDAR scans can be built either by the robot itself or by other heterogeneous sensors, so an important feature of the proposed framework is that it can be used in environments the robot has never visited before. In the online stage, a cascade coarse-to-fine template matching method is proposed for efficiency, balancing computational cost and accuracy. In simulation with 100 K templates, the proposed framework achieves a 99% success rate and a matching speed of around 11 Hz at a re-localization error threshold of 1.0 m. On The Newer College Dataset with 40 K templates, it achieves a 94.67% success rate and around 7 Hz at the same threshold. These results show that the proposed framework offers high accuracy, excellent efficiency, and the capability to achieve global re-localization in heterogeneous maps.
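The cascade coarse-to-fine matching can be schematized as a two-stage nearest-neighbor search: compare the query descriptor to cluster centroids first, then only to templates inside the winning cluster. The descriptors and clustering below are placeholders for the paper's rotation-invariant templates:

```python
import numpy as np

def cascade_match(query, centroids, clusters):
    """Two-stage template lookup: coarse match against cluster
    centroids, fine match within the selected cluster.
    clusters[i] is the (M_i, d) array of templates in cluster i."""
    coarse = int(np.argmin(np.linalg.norm(centroids - query, axis=1)))
    members = clusters[coarse]
    fine = int(np.argmin(np.linalg.norm(members - query, axis=1)))
    return coarse, fine
```

With C clusters of roughly N/C templates each, the cascade costs about C + N/C comparisons instead of N, which is how a 100 K-template library stays tractable at 11 Hz.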

18 pages, 20818 KB  
Article
A Visual Odometry Pipeline for Real-Time UAS Geopositioning
by Jianli Wei and Alper Yilmaz
Drones 2023, 7(9), 569; https://doi.org/10.3390/drones7090569 - 5 Sep 2023
Cited by 4 | Viewed by 4254
Abstract
The state of the art in geopositioning is the Global Navigation Satellite System (GNSS), which operates on satellite constellations providing positioning, navigation, and timing services. While the Global Positioning System (GPS) is widely used to position an Unmanned Aerial System (UAS), it is not always available and can be jammed, introducing operational liabilities. When the GPS signal is degraded or denied, the UAS navigation solution cannot rely on the incorrect positions GPS provides, risking loss of control. This paper presents a real-time pipeline that provides geopositioning using a down-facing monocular camera. The proposed approach is deployable with only a few initialization parameters, the most important of which is a map of the area covered by the UAS flight plan. Our pipeline consists of offline geospatial quad-tree generation for fast information retrieval, a choice from a selection of landmark detection and matching schemes, and an attitude control mechanism that improves the matching of acquired images to the reference. To evaluate our method, we collected several image sequences using various flight patterns across seasonal changes. The experiments demonstrate high accuracy and robustness to seasonal change.
(This article belongs to the Special Issue Advances in AI for Intelligent Autonomous Systems)
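The offline geospatial quad-tree exists to retrieve map landmarks near a predicted position quickly. A minimal point quadtree illustrates the data structure (a generic implementation, not the authors'):

```python
class QuadTree:
    """Minimal point quadtree: recursively split a 2D region into four
    quadrants once a node exceeds its capacity; range queries then only
    descend into quadrants that overlap the query box."""
    def __init__(self, x0, y0, x1, y1, cap=4):
        self.bounds = (x0, y0, x1, y1)
        self.cap, self.points, self.children = cap, [], None

    def insert(self, p):
        x0, y0, x1, y1 = self.bounds
        if not (x0 <= p[0] < x1 and y0 <= p[1] < y1):
            return False
        if self.children is None:
            if len(self.points) < self.cap:
                self.points.append(p)
                return True
            self._split()
        return any(c.insert(p) for c in self.children)

    def _split(self):
        x0, y0, x1, y1 = self.bounds
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        self.children = [QuadTree(x0, y0, mx, my, self.cap),
                         QuadTree(mx, y0, x1, my, self.cap),
                         QuadTree(x0, my, mx, y1, self.cap),
                         QuadTree(mx, my, x1, y1, self.cap)]
        for q in self.points:              # push stored points down
            any(c.insert(q) for c in self.children)
        self.points = []

    def query(self, x0, y0, x1, y1, out=None):
        out = [] if out is None else out
        bx0, by0, bx1, by1 = self.bounds
        if bx1 <= x0 or bx0 >= x1 or by1 <= y0 or by0 >= y1:
            return out                     # no overlap: prune subtree
        out += [p for p in self.points
                if x0 <= p[0] < x1 and y0 <= p[1] < y1]
        if self.children:
            for c in self.children:
                c.query(x0, y0, x1, y1, out)
        return out
```
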

14 pages, 3514 KB  
Article
Experimental Validation of a Micro-Extrusion Set-Up with In-Line Rheometry for the Production and Monitoring of Filaments for 3D-Printing
by João Sousa, Paulo F. Teixeira, Loïc Hilliou and José A. Covas
Micromachines 2023, 14(8), 1496; https://doi.org/10.3390/mi14081496 - 26 Jul 2023
Cited by 3 | Viewed by 2758
Abstract
The main objective of this work is to validate an in-line micro-slit rheometer and a micro-extrusion line, both designed for the in-line monitoring and production of filaments for 3D printing using small amounts of material. The micro-filament extrusion line is first presented and its operational window assessed. Throughputs ranged between 0.045 kg/h and 0.15 kg/h with a maximum 3% error and with melt temperature controlled within 1 °C under the processing conditions tested, for an average residence time of about 3 min. The rheological micro-slit is then presented and assessed using low-density polyethylene (LDPE) and cyclic olefin copolymer (COC). The excellent match between the in-line micro-rheological data and data measured with off-line rotational and capillary rheometers validates the in-line micro-slit rheometer. However, it is shown that the COC does not follow the Cox–Merz rule. The COC filaments produced with the micro-extrusion line were successfully used in the 3D printing of specimens for tensile testing. The quality of both the filaments (less than 6% variation in diameter along the filament's length) and the printed specimens validated the whole micro-set-up, which was eventually used to deliver a rheological mapping of COC printability.
(This article belongs to the Special Issue 3D Printing Technology and Its Applications)
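In-line slit rheometry reduces pressure-drop and flow-rate readings to wall shear stress and apparent shear rate via the standard slit relations (valid when the slit width is much greater than its height). The parameter values in the usage note are invented for illustration:

```python
def slit_rheometry(dP, Q, h, w, L):
    """Standard slit-die relations (w >> h assumed):
    wall shear stress tau_w = dP * h / (2 L),
    apparent shear rate  = 6 Q / (w h^2).
    All quantities in SI units (Pa, m^3/s, m)."""
    tau_w = dP * h / (2.0 * L)            # wall shear stress [Pa]
    gamma_app = 6.0 * Q / (w * h ** 2)    # apparent shear rate [1/s]
    return tau_w, gamma_app
```

For example, a pressure drop of 100 kPa over a 50 mm slit of 1 mm height and 10 mm width, at 1 cm³/s, gives a wall stress of 1000 Pa and an apparent rate of 600 s⁻¹.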

15 pages, 7403 KB  
Article
Method of 3D Voxel Prescription Map Construction in Digital Orchard Management Based on LiDAR-RTK Boarded on a UGV
by Leng Han, Shubo Wang, Zhichong Wang, Liujian Jin and Xiongkui He
Drones 2023, 7(4), 242; https://doi.org/10.3390/drones7040242 - 30 Mar 2023
Cited by 15 | Viewed by 3573
Abstract
Precision application of pesticides based on tree canopy characteristics such as tree height is more environmentally friendly and healthier for humans. Offline prescription maps can be used to achieve precise pesticide application at low cost. To obtain a complete point cloud with detailed tree canopy information in orchards, a LiDAR-RTK fusion information acquisition system was developed on an all-terrain vehicle (ATV) with an autonomous driving system. The point cloud was transformed into a geographic coordinate system for registration, and random sample consensus (RANSAC) was used to segment it into ground and canopy. A 3D voxel prescription map with a unit size of 0.25 m was constructed from the tree canopy point cloud. The heights of 20 trees were measured geometrically to evaluate the accuracy of the voxel prescription map. The results showed that the RMSE between tree height calculated from the LiDAR point cloud and the actual measured tree height was 0.42 m, the relative RMSE (rRMSE) was 10.86%, and the mean absolute percentage error (MAPE) was 8.16%. The developed LiDAR-RTK fusion acquisition system can thus autonomously construct 3D prescription maps that meet the requirements of precision pesticide application in digital orchard management.
(This article belongs to the Special Issue Recent Advances in Crop Protection Using UAV and UGV)
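Constructing a voxel prescription map from the canopy point cloud amounts to quantizing points into 0.25 m cells and recording which cells are occupied. A minimal sketch (not the authors' pipeline):

```python
import numpy as np

def voxelize(points, size=0.25):
    """Quantize (N, 3) canopy points into a voxel grid of the given
    cell size. Returns the unique occupied voxel indices and the
    number of points falling into each."""
    idx = np.floor(points / size).astype(np.int64)
    voxels, counts = np.unique(idx, axis=0, return_counts=True)
    return voxels, counts
```

Each occupied voxel can then carry a spray dose, which is what turns the grid into a prescription map.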
