Search Results (38)

Search Parameters:
Keywords = airsim

17 pages, 8413 KiB  
Article
Monocular Vision-Based Depth Estimation of Forward-Looking Scenes for Mobile Platforms
by Li Wei, Meng Ding and Shuai Li
Appl. Sci. 2025, 15(8), 4267; https://doi.org/10.3390/app15084267 - 12 Apr 2025
Cited by 1 | Viewed by 644
Abstract
The depth estimation of forward-looking scenes is one of the fundamental tasks for an Intelligent Mobile Platform to perceive its surrounding environment. In response to this requirement, this paper proposes a self-supervised monocular depth estimation method that can be utilized across various mobile platforms, including unmanned aerial vehicles (UAVs) and autonomous ground vehicles (AGVs). Building on the foundational framework of Monodepth2, we introduce an intermediate module between the encoder and decoder of the depth estimation network to facilitate multiscale fusion of feature maps. Additionally, we integrate the channel attention mechanism ECANet into the depth estimation network to enhance the significance of important channels. Consequently, the proposed method addresses the issue of losing critical features, which can lead to diminished accuracy and robustness. The experiments presented in this paper are conducted on two datasets: KITTI, a publicly available dataset collected from real-world environments used to evaluate depth estimation performance for AGV platforms, and AirSim, a custom dataset generated using simulation software to assess depth estimation performance for UAV platforms. The experimental results demonstrate that the proposed method can overcome the adverse effects of varying working conditions and accurately perceive detailed depth information in specific regions, such as object edges and targets of different scales. Furthermore, the depth predicted by the proposed method is quantitatively compared with the ground truth depth, and a variety of evaluation metrics confirm that our method exhibits superior inference capability and robustness. Full article
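The ECANet step described above can be illustrated with a minimal sketch: global average pooling produces one descriptor per channel, a small 1D convolution mixes neighbouring descriptors, and a sigmoid gate rescales each channel. This is plain Python with list-based tensors; the uniform kernel stands in for ECA's learned weights, and the function name is an illustration, not the authors' code.

```python
import math

def eca_attention(feature_maps, k=3):
    """Efficient Channel Attention (ECA) sketch, pure Python.

    feature_maps: list of channels, each a 2D list (H x W).
    k: kernel size of the 1D convolution over channel descriptors.
    Returns the channel-reweighted feature maps.
    """
    # 1) Global average pooling: one scalar descriptor per channel.
    desc = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
            for ch in feature_maps]
    # 2) 1D convolution across neighbouring channels; a uniform
    #    kernel stands in for the learned weights in this sketch.
    pad = k // 2
    padded = [0.0] * pad + desc + [0.0] * pad
    conv = [sum(padded[i + j] for j in range(k)) / k
            for i in range(len(desc))]
    # 3) Sigmoid gate, then rescale every value in each channel.
    gates = [1.0 / (1.0 + math.exp(-c)) for c in conv]
    return [[[v * g for v in row] for row in ch]
            for ch, g in zip(feature_maps, gates)]
```

The key design point ECA exploits is that the gate for each channel depends only on its k nearest channel descriptors, avoiding the fully connected bottleneck of earlier channel-attention blocks.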

30 pages, 71082 KiB  
Article
GTrXL-SAC-Based Path Planning and Obstacle-Aware Control Decision-Making for UAV Autonomous Control
by Jingyi Huang, Yujie Cui, Guipeng Xi, Shuangxia Bai, Bo Li, Geng Wang and Evgeny Neretin
Drones 2025, 9(4), 275; https://doi.org/10.3390/drones9040275 - 3 Apr 2025
Viewed by 975
Abstract
Research on UAV (unmanned aerial vehicle) path planning and obstacle avoidance control based on DRL (deep reinforcement learning) still faces limitations, as previous studies primarily utilized current perceptual inputs while neglecting the continuity of flight processes, resulting in low early-stage learning efficiency. To address these issues, this paper integrates DRL with the Transformer architecture to propose the GTrXL-SAC (gated Transformer-XL soft actor critic) algorithm. The algorithm performs positional embedding on multimodal data combining visual and sensor information. Leveraging the self-attention mechanism of GTrXL, it effectively focuses on different segments of multimodal data for encoding while capturing sequential relationships, significantly improving obstacle recognition accuracy and enhancing both learning efficiency and sample efficiency. Additionally, the algorithm capitalizes on GTrXL’s memory characteristics to generate current drone control decisions through the combined analysis of historical experiences and present states, effectively mitigating long-term dependency issues. Experimental results in the AirSim drone simulation environment demonstrate that compared to PPO and SAC algorithms, GTrXL-SAC achieves more precise policy exploration and optimization, enabling superior control of drone velocity and attitude for stabilized flight while accelerating convergence speed by nearly 20%. Full article
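The gating that gives GTrXL its name replaces each residual connection in Transformer-XL with a GRU-style unit whose update gate is biased so the layer starts out close to an identity map. A scalar sketch of that unit (toy scalar weights in place of learned matrices, following the published GTrXL formulation rather than this paper's code) might look like:

```python
import math

def gru_gate(x, y, w, u, b_g=2.0):
    """GRU-style gating layer used in GTrXL, scalar sketch.

    x: sublayer input (the skip path), y: sublayer output.
    w, u: dicts of scalar weights standing in for learned matrices.
    b_g: gate bias; a large value keeps the update gate near zero
    at initialization, so the layer initially passes x through.
    """
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    r = sig(w['r'] * y + u['r'] * x)              # reset gate
    z = sig(w['z'] * y + u['z'] * x - b_g)        # update gate (biased low)
    h = math.tanh(w['g'] * y + u['g'] * (r * x))  # candidate state
    return (1.0 - z) * x + z * h                  # gated mix of skip and output
```

With the bias large and the sublayer output near zero, the unit returns its input almost unchanged, which is what stabilizes early RL training in the gated-transformer setting.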

26 pages, 9462 KiB  
Article
A Framework for Autonomous UAV Navigation Based on Monocular Depth Estimation
by Jonas Gaigalas, Linas Perkauskas, Henrikas Gricius, Tomas Kanapickas and Andrius Kriščiūnas
Drones 2025, 9(4), 236; https://doi.org/10.3390/drones9040236 - 23 Mar 2025
Cited by 2 | Viewed by 2662
Abstract
UAVs are vastly used in practical applications such as reconnaissance and search and rescue or other missions which typically require experienced operators. Autonomous drone navigation could aid in situations where the environment is unknown, GPS or radio signals are unavailable, and there are no existing 3D models to preplan a trajectory. Traditional navigation methods employ multiple sensors: LiDAR, sonar, inertial measurement units (IMUs), and cameras. This increases the weight and cost of such drones. This work focuses on autonomous drone navigation from point A to point B using visual information obtained from a monocular camera in a simulator. The solution utilizes a depth image estimation model to create an occupancy grid map of the surrounding area and uses an A* path planning algorithm to find optimal paths to end goals while navigating around the obstacles. The simulation is conducted using AirSim in Unreal Engine. With this work, we propose a framework and scenarios in three different open-source virtual environments, varying in complexity, to test and compare autonomous UAV navigation methods based on vision. In this study, fine-tuned models using synthetic RGB and depth image data were used for each environment, demonstrating a noticeable improvement in depth estimation accuracy, with reductions in Mean Absolute Percentage Error (MAPE) from 120.45% to 33.41% in AirSimNH, from 70.09% to 8.04% in Blocks, and from 121.94% to 32.86% in MSBuild2018. While the proposed UAV autonomous navigation framework utilizing depth images directly from AirSim achieves 38.89%, 87.78%, and 13.33% success rates of reaching goals in AirSimNH, Blocks, and MSBuild2018 environments, respectively, the method with pre-trained depth estimation models fails to reach any end points of the scenarios. The fine-tuned depth estimation models enhance performance, increasing the number of reached goals by 3.33% for AirSimNH and 72.22% for Blocks. 
These findings highlight the benefits of adapting vision-based models to specific environments, improving UAV autonomy in visually guided navigation tasks. Full article
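The planning step this abstract describes, an occupancy grid searched with A*, can be sketched as follows; the grid encoding, 4-connected moves, and unit step costs are simplifying assumptions for illustration, not details taken from the paper.

```python
import heapq
import itertools

def astar(grid, start, goal):
    """A* on a 2D occupancy grid (0 = free, 1 = obstacle) with
    4-connected moves and a Manhattan-distance heuristic.
    Returns the list of cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda cell: abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
    tie = itertools.count()  # tie-breaker so the heap never compares cells
    frontier = [(h(start), next(tie), 0, start, None)]
    parent, best_g = {}, {start: 0}
    while frontier:
        _, _, g, cur, prev = heapq.heappop(frontier)
        if cur in parent:          # already expanded with a better cost
            continue
        parent[cur] = prev
        if cur == goal:            # walk the parent chain back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(
                        frontier,
                        (ng + h((nr, nc)), next(tie), ng, (nr, nc), cur))
    return None                    # goal unreachable
```

In a pipeline like the one described, the grid cells marked as obstacles would come from thresholding the estimated depth map around the drone.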

30 pages, 4038 KiB  
Article
Deep Reinforcement Learning for a Self-Driving Vehicle Operating Solely on Visual Information
by Robertas Audinys, Žygimantas Šlikas, Justas Radkevičius, Mantas Šutas and Armantas Ostreika
Electronics 2025, 14(5), 825; https://doi.org/10.3390/electronics14050825 - 20 Feb 2025
Cited by 2 | Viewed by 2739
Abstract
This study investigates the application of Vision Transformers (ViTs) in deep reinforcement learning (DRL) for autonomous driving systems that rely solely on visual input. While convolutional neural networks (CNNs) are widely used for visual processing, they have limitations in capturing global patterns and handling complex driving scenarios. To address these challenges, we developed a ViT-based DRL model and evaluated its performance through extensive training in the MetaDrive simulator and testing in the high-fidelity AirSim simulator. Results show that the ViT-based model significantly outperformed CNN baselines in MetaDrive, achieving nearly seven times the average distance traveled and an 87% increase in average speed. In AirSim, the model exhibited superior adaptability to realistic conditions, maintaining stability and safety in visually complex environments. These findings highlight the potential of ViTs to enhance the robustness and reliability of vision-based autonomous systems, offering a transformative approach to safe exploration in diverse driving scenarios. Full article
(This article belongs to the Section Electrical and Autonomous Vehicles)

24 pages, 51328 KiB  
Article
A Shortest Distance Priority UAV Path Planning Algorithm for Precision Agriculture
by Guoqing Zhang, Jiandong Liu, Wei Luo, Yongxiang Zhao, Ruiyin Tang, Keyu Mei and Penggang Wang
Sensors 2024, 24(23), 7514; https://doi.org/10.3390/s24237514 - 25 Nov 2024
Cited by 4 | Viewed by 1712
Abstract
Unmanned aerial vehicles (UAVs) have made significant advances in autonomous sensing, particularly in the field of precision agriculture. Effective path planning is critical for autonomous navigation in large orchards to ensure that UAVs are able to recognize the optimal route between the start and end points. When UAVs perform tasks such as crop protection, monitoring, and data collection in orchard environments, they must be able to adapt to dynamic conditions. To address these challenges, this study proposes an enhanced Q-learning algorithm designed to optimize UAV path planning by combining static and dynamic obstacle avoidance features. A shortest distance priority (SDP) strategy is integrated into the learning process to minimize the distance the UAV must travel to reach the target. In addition, the root mean square propagation (RMSP) method is used to dynamically adjust the learning rate according to gradient changes, which accelerates the learning process and improves path planning efficiency. In this study, firstly, the proposed method was compared with state-of-the-art path planning techniques (including A-star, Dijkstra, and traditional Q-learning) in terms of learning time and path length through a grid-based 2D simulation environment. The results showed that the proposed method significantly improved performance compared to existing methods. In addition, 3D simulation experiments were conducted in the AirSim virtual environment. Due to the complexity of the 3D state, a deep neural network was used to calculate the Q-value based on the proposed algorithm. The results indicate that the proposed method can achieve the shortest path planning and obstacle avoidance operations in an orchard 3D simulation environment. Therefore, drones equipped with this algorithm are expected to make outstanding contributions to the development of precision agriculture through intelligent navigation and obstacle avoidance. Full article
(This article belongs to the Special Issue Application of UAV and Sensing in Precision Agriculture)
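The core of the enhanced Q-learning idea, a tabular update with a shortest-distance-priority shaping term, can be sketched on a small grid. The reward values, grid size, and function name below are illustrative assumptions, and the RMSP learning-rate adaptation the paper adds is omitted for brevity.

```python
import random

def train_sdp_q(size=4, goal=(3, 3), episodes=300, alpha=0.5,
                gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a size x size grid. A shortest-distance
    priority (SDP) style shaping term rewards moves that reduce the
    Manhattan distance to the goal. Returns the Q-table dict keyed
    by ((row, col), action)."""
    rng = random.Random(seed)
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # down, up, right, left
    dist = lambda s: abs(s[0] - goal[0]) + abs(s[1] - goal[1])
    q = {}
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(4 * size * size):
            if s == goal:
                break
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(4)
            else:
                a = max(range(4), key=lambda i: q.get((s, i), 0.0))
            ns = (min(max(s[0] + moves[a][0], 0), size - 1),
                  min(max(s[1] + moves[a][1], 0), size - 1))
            # SDP shaping: bonus for getting closer, penalty otherwise
            r = 10.0 if ns == goal else (0.5 if dist(ns) < dist(s) else -0.5)
            best_next = max(q.get((ns, i), 0.0) for i in range(4))
            q[(s, a)] = q.get((s, a), 0.0) + alpha * (
                r + gamma * best_next - q.get((s, a), 0.0))
            s = ns
    return q
```

The shaping term is what distinguishes this from vanilla Q-learning: the agent is rewarded during the episode for shortening the remaining distance rather than only on reaching the target.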

25 pages, 7661 KiB  
Article
Application of Reinforcement Learning in Controlling Quadrotor UAV Flight Actions
by Shang-En Shen and Yi-Cheng Huang
Drones 2024, 8(11), 660; https://doi.org/10.3390/drones8110660 - 9 Nov 2024
Cited by 4 | Viewed by 3682
Abstract
Most literature has extensively discussed reinforcement learning (RL) for controlling rotorcraft drones during flight for traversal tasks. However, most studies lack adequate details regarding the design of reward and punishment mechanisms, and there is a limited exploration of the feasibility of applying reinforcement learning in actual flight control following simulation experiments. Consequently, this study focuses on the exploration of reward and punishment design and state input for RL. The simulation environment is constructed using AirSim and Unreal Engine, with onboard camera footage serving as the state input for reinforcement learning. The research investigates three RL algorithms suitable for discrete action training. The Deep Q Network (DQN), Advantage Actor–Critic (A2C), and Proximal Policy Optimization (PPO) were combined with three different reward and punishment design mechanisms for training and testing. The results indicate that employing the PPO algorithm along with a continuous return method as the reward mechanism allows for effective convergence during the training process, achieving a target traversal rate of 71% in the testing environment. Furthermore, this study proposes integrating the YOLOv7-tiny object detection (OD) system to assess the applicability of reinforcement learning in real-world settings. By unifying the state inputs of the simulated and OD environments and replacing the original simulated image inputs with a maximum dual-target approach, the experimental simulation ultimately achieved a target traversal rate of 52%. In summary, this research formulates a set of logical frameworks for RL reward and punishment design, deployed with a real-time YOLO OD implementation, as a useful aid for related RL studies. Full article

26 pages, 1405 KiB  
Review
Image Analysis in Autonomous Vehicles: A Review of the Latest AI Solutions and Their Comparison
by Michał Kozłowski, Szymon Racewicz and Sławomir Wierzbicki
Appl. Sci. 2024, 14(18), 8150; https://doi.org/10.3390/app14188150 - 11 Sep 2024
Cited by 9 | Viewed by 11186
Abstract
The integration of advanced image analysis using artificial intelligence (AI) is pivotal for the evolution of autonomous vehicles (AVs). This article provides a thorough review of the most significant datasets and latest state-of-the-art AI solutions employed in image analysis for AVs. Datasets such as Cityscapes, NuScenes, CARLA, and Talk2Car form the benchmarks for training and evaluating different AI models, with unique characteristics catering to various aspects of autonomous driving. Key AI methodologies, including Convolutional Neural Networks (CNNs), Transformer models, Generative Adversarial Networks (GANs), and Vision Language Models (VLMs), are discussed. The article also presents a comparative analysis of various AI techniques in real-world scenarios, focusing on semantic image segmentation, 3D object detection, vehicle control in virtual environments, and vehicle interaction using natural language. Simultaneously, the roles of multisensor datasets and simulation platforms like AirSim, TORCS, and SUMMIT in enriching the training data and testing environments for AVs are highlighted. By synthesizing information on datasets, AI solutions, and comparative performance evaluations, this article serves as a crucial resource for researchers, developers, and industry stakeholders, offering a clear view of the current landscape and future directions in autonomous vehicle image analysis technologies. Full article
(This article belongs to the Special Issue Future Autonomous Vehicles and Their Systems)

19 pages, 6394 KiB  
Review
Realistic 3D Simulators for Automotive: A Review of Main Applications and Features
by Ivo Silva, Hélder Silva, Fabricio Botelho and Cristiano Pendão
Sensors 2024, 24(18), 5880; https://doi.org/10.3390/s24185880 - 10 Sep 2024
Cited by 7 | Viewed by 5627
Abstract
Recent advancements in vehicle technology have stimulated innovation across the automotive sector, from Advanced Driver Assistance Systems (ADAS) to autonomous driving and motorsport applications. Modern vehicles, equipped with sensors for perception, localization, navigation, and actuators for autonomous driving, generate vast amounts of data used for training and evaluating autonomous systems. Real-world testing is essential for validation but is complex, expensive, and time-intensive, requiring multiple vehicles and reference systems. To address these challenges, computer graphics-based simulators offer a compelling solution by providing high-fidelity 3D environments to simulate vehicles and road users. These simulators are crucial for developing, validating, and testing ADAS, autonomous driving systems, and cooperative driving systems, and enhancing vehicle performance and driver training in motorsport. This paper reviews computer graphics-based simulators tailored for automotive applications. It begins with an overview of their applications and analyzes their key features. Additionally, this paper compares five open-source (CARLA, AirSim, LGSVL, AWSIM, and DeepDrive) and ten commercial simulators. Our findings indicate that open-source simulators are best for the research community, offering realistic 3D environments, multiple sensor support, APIs, co-simulation, and community support. Conversely, commercial simulators, while less extensible, provide a broader set of features and solutions. Full article
(This article belongs to the Special Issue Advances in Sensing, Imaging and Computing for Autonomous Driving)

20 pages, 9929 KiB  
Article
Application of Deep Reinforcement Learning to Defense and Intrusion Strategies Using Unmanned Aerial Vehicles in a Versus Game
by Chieh-Li Chen, Yu-Wen Huang and Ting-Ju Shen
Drones 2024, 8(8), 365; https://doi.org/10.3390/drones8080365 - 31 Jul 2024
Cited by 3 | Viewed by 2497
Abstract
Drones are used in complex scenes in different scenarios. Efficient and effective algorithms are required for drones to track targets of interest and protect allied targets in a versus game. This study used physical models of quadcopters and scene engines to investigate the resulting performance of attacker drones and defensive drones based on deep reinforcement learning. The deep reinforcement learning network soft actor-critic was applied in association with the proposed reward and penalty functions according to the design scenario. AirSim UAV physical modeling and mission scenarios based on Unreal Engine were used to simultaneously train attacking and defending gaming skills for both drones, such that the required combat strategies and flight skills could be improved through a series of competition episodes. After 500 episodes of practice experience, both drones could accelerate, detour, and evade to achieve reasonably good performance in a roughly tied situation. Validation scenarios also demonstrated that the attacker–defender winning ratio improved from 1:2 to 1.2:1, which is reasonable for drones with equal flight capabilities. Although this showed that the attacker may have an advantage in inexperienced scenarios, it revealed that the strategies generated by deep reinforcement learning networks are robust and feasible. Full article
(This article belongs to the Collection Drones for Security and Defense Applications)

20 pages, 21698 KiB  
Article
An Enhanced Aircraft Carrier Runway Detection Method Based on Image Dehazing
by Chenliang Li, Yunyang Wang, Yan Zhao, Cheng Yuan, Ruien Mao and Pin Lyu
Appl. Sci. 2024, 14(13), 5464; https://doi.org/10.3390/app14135464 - 24 Jun 2024
Cited by 1 | Viewed by 1250
Abstract
Carrier-based Unmanned Aerial Vehicle (CUAV) landing is an extremely critical link in the overall chain of CUAV operations on ships. Vision-based landing location methods have advantages such as low cost and high accuracy. However, when an aircraft carrier is at sea, it may encounter complex weather conditions such as haze, which could lead to vision-based landing failures. This paper proposes a runway line recognition and localization method based on haze removal enhancement to solve this problem. Firstly, a haze removal algorithm using a multi-mechanism, multi-architecture network model is introduced. Compared with traditional algorithms, the proposed model not only consumes less GPU memory but also achieves superior image restoration results. Based on this, we employed the random sample consensus method to reduce the error in runway line localization. Additionally, extensive experiments conducted in the AirSim simulation environment have shown that our pipeline effectively addresses the issue of decreased detection accuracy of runway line detection algorithms in hazy maritime conditions, improving the runway line localization accuracy by approximately 85%. Full article
(This article belongs to the Collection Advances in Automation and Robotics)
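The random sample consensus step mentioned in the abstract can be illustrated with a minimal 2D line-fitting sketch; reducing runway-line localization to a slope/intercept fit over point coordinates is a simplification for illustration, not the paper's pipeline.

```python
import random

def ransac_line(points, iters=200, tol=0.5, seed=0):
    """RANSAC line fit: repeatedly sample two points, fit the line
    through them, and keep the line with the most inliers (points
    within `tol` vertical distance). Returns (slope, intercept)."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # skip degenerate (vertical) samples in this sketch
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        inliers = sum(abs(y - (m * x + c)) <= tol for x, y in points)
        if inliers > best_inliers:
            best, best_inliers = (m, c), inliers
    return best
```

Because only two points are needed per hypothesis, outliers (hazy false detections, in the paper's setting) rarely dominate the consensus set, which is why RANSAC suppresses their influence on the final line.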

13 pages, 2819 KiB  
Article
E-DQN-Based Path Planning Method for Drones in Airsim Simulator under Unknown Environment
by Yixun Chao, Rüdiger Dillmann, Arne Roennau and Zhi Xiong
Biomimetics 2024, 9(4), 238; https://doi.org/10.3390/biomimetics9040238 - 16 Apr 2024
Cited by 5 | Viewed by 3523
Abstract
To improve the rapidity of path planning for drones in unknown environments, a new bio-inspired path planning method using E-DQN (event-based deep Q-network), which introduces an event stream into the reinforcement learning network, is proposed. Firstly, event data are collected through the AirSim simulator for environmental perception, and an auto-encoder is presented to extract data features and generate event weights. Then, event weights are input into a DQN (deep Q-network) to choose the action of the next step. Finally, simulation and verification experiments are conducted in a virtual obstacle environment built with Unreal Engine and AirSim. The experimental results show that the proposed algorithm is adaptable for drones to find the goal in unknown environments and can improve the rapidity of path planning compared with that of commonly used methods. Full article
(This article belongs to the Section Locomotion and Bioinspired Robotics)

22 pages, 26831 KiB  
Article
UAV-Borne Mapping Algorithms for Low-Altitude and High-Speed Drone Applications
by Jincheng Zhang, Artur Wolek and Andrew R. Willis
Sensors 2024, 24(7), 2204; https://doi.org/10.3390/s24072204 - 29 Mar 2024
Cited by 4 | Viewed by 2640
Abstract
This article presents an analysis of current state-of-the-art sensors and how these sensors work with several mapping algorithms for UAV (Unmanned Aerial Vehicle) applications, focusing on low-altitude and high-speed scenarios. A new experimental construct is created using highly realistic environments made possible by integrating the AirSim simulator with Google 3D maps models using the Cesium Tiles plugin. Experiments are conducted in this high-realism simulated environment to evaluate the performance of three distinct mapping algorithms: (1) Direct Sparse Odometry (DSO), (2) Stereo DSO (SDSO), and (3) DSO Lite (DSOL). Experimental results evaluate algorithms based on their measured geometric accuracy and computational speed. The results provide valuable insights into the strengths and limitations of each algorithm. Findings quantify compromises in UAV algorithm selection, allowing researchers to find the mapping solution best suited to their application, which often requires a compromise between computational performance and the density and accuracy of geometric map estimates. Results indicate that for UAVs with restrictive computing resources, DSOL is the best option. For systems with payload capacity and modest compute resources, SDSO is the best option. If only one camera is available, DSO is the option to choose for applications that require dense mapping results. Full article
(This article belongs to the Special Issue Sensors and Algorithms for 3D Visual Analysis and SLAM)

23 pages, 8365 KiB  
Article
Resilient Multi-Sensor UAV Navigation with a Hybrid Federated Fusion Architecture
by Sorin Andrei Negru, Patrick Geragersian, Ivan Petrunin and Weisi Guo
Sensors 2024, 24(3), 981; https://doi.org/10.3390/s24030981 - 2 Feb 2024
Cited by 9 | Viewed by 4229
Abstract
Future UAV (unmanned aerial vehicle) operations in urban environments demand a PNT (position, navigation, and timing) solution that is both robust and resilient. While a GNSS (global navigation satellite system) can provide an accurate position under open-sky assumptions, the complexity of urban operations leads to NLOS (non-line-of-sight) and multipath effects, which in turn impact the accuracy of the PNT data. A key research question within the research community pertains to determining the appropriate hybrid fusion architecture that can ensure the resilience and continuity of UAV operations in urban environments, minimizing significant degradations of PNT data. In this context, we present a novel federated fusion architecture that integrates data from the GNSS, the IMU (inertial measurement unit), a monocular camera, and a barometer to cope with the GNSS multipath and positioning performance degradation. Within the federated fusion architecture, local filters are implemented using EKFs (extended Kalman filters), while a master filter is used in the form of a GRU (gated recurrent unit) block. Data collection is performed by setting up a virtual environment in AirSim for the visual odometry aid and barometer data, while Spirent GSS7000 hardware is used to collect the GNSS and IMU data. The hybrid fusion architecture is compared to a classic federated architecture (formed only by EKFs) and tested under different light and weather conditions to assess its resilience, including multipath and GNSS outages. The proposed solution demonstrates improved resilience and robustness in a range of degraded conditions while maintaining a good level of positioning performance with a 95th percentile error of 0.54 m for the square scenario and 1.72 m for the survey scenario. Full article
(This article belongs to the Special Issue New Methods and Applications for UAVs)
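The federated pattern this abstract describes, local filters each fusing their own sensors and a master filter combining their estimates, can be sketched with scalar Kalman filters and the classical inverse-variance master step. The paper's local filters are full EKFs and its master filter is a GRU network, so everything below (class name, constant-position model, noise values) is a simplified illustration of the architecture, not the authors' implementation.

```python
class LocalKF:
    """Scalar Kalman filter: constant-position model with process
    noise q and measurement noise r. Stands in for one of the
    local EKFs in a federated architecture."""

    def __init__(self, q, r, x0=0.0, p0=10.0):
        self.q, self.r, self.x, self.p = q, r, x0, p0

    def step(self, z):
        self.p += self.q                 # predict: inflate uncertainty
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)       # update state toward measurement
        self.p *= (1.0 - k)              # shrink uncertainty
        return self.x, self.p

def master_fuse(estimates):
    """Inverse-variance weighted fusion of local (x, p) estimates:
    the classical master-filter step that the paper replaces with
    a learned GRU block."""
    w = [1.0 / p for _, p in estimates]
    x = sum(wi * xi for wi, (xi, _) in zip(w, estimates)) / sum(w)
    p = 1.0 / sum(w)
    return x, p
```

The fused variance is always smaller than the smallest local variance, which is the basic argument for running the master filter on top of independent local filters.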

19 pages, 5458 KiB  
Article
A Simulation Framework of Unmanned Aerial Vehicles Route Planning Design and Validation for Landslide Monitoring
by Dongmei Xie, Ruifeng Hu, Chisheng Wang, Chuanhua Zhu, Hui Xu and Qipei Li
Remote Sens. 2023, 15(24), 5758; https://doi.org/10.3390/rs15245758 - 16 Dec 2023
Cited by 7 | Viewed by 3096
Abstract
Unmanned aerial vehicles (UAVs) have emerged as a highly efficient means of monitoring landslide-prone regions, given the growing concern for urban safety and the increasing occurrence of landslides. Designing optimal UAV flight routes is crucial for effective landslide monitoring. However, in real-world scenarios, the testing and validating of flight path planning algorithms incur high cost and safety concerns, making overall flight operations challenging. Therefore, this paper proposes the use of the Unreal Engine simulation framework to design UAV flight path planning specifically for landslide monitoring. It aims to validate the authenticity of the simulated flight paths and the correctness of the algorithms. Under the proposed simulation framework, we then test a novel flight path planning algorithm. The simulation results demonstrate that the model reconstruction obtained using the novel flight path algorithm exhibits more detailed textures, with a 3D model simulation accuracy ranging from 10 to 14 cm. Among them, the RMSE value of the novel flight route algorithm falls within the range of 10 to 11 cm, exhibiting a 2 to 3 cm improvement in accuracy compared to the traditional flight path algorithm. Additionally, it effectively reduces the flight duration by 9.3% under the same flight path compared to conventional methods. The results confirm that the simulation framework developed in this paper meets the requirements for landslide damage monitoring and validates the feasibility and correctness of the UAV flight path planning algorithm. Full article

9 pages, 2558 KiB  
Proceeding Paper
Realism-Oriented Design, Verification, and Validation of Novel Robust Navigation Solutions
by Sorin Andrei Negru, Patrick Geragersian, Ivan Petrunin, Raphael Grech and Guy Buesnel
Eng. Proc. 2023, 54(1), 57; https://doi.org/10.3390/ENC2023-15424 - 29 Oct 2023
Cited by 1 | Viewed by 1036
Abstract
Urban environments are characterized by a set of conditions underpinning the degradation of Position, Navigation and Timing (PNT) signals, such as multipath and non-line of sight (NLOS) effects, negatively affecting the position and the navigation integrity during Uncrewed Aerial Vehicle (UAV) operations. Before the deployment of such uncrewed aerial platforms, a realistic simulation set-up is required, which should facilitate the identification and mitigation of the performance degradation that may appear during the actual mission. This paper presents the case study of the development of a robust Artificial Intelligence (AI)-based multi-sensor fusion framework using a federated architecture. The dataset for this development, comprising the outputs of a Global Navigation Satellite System (GNSS) receiver, an Inertial Measurement Unit (IMU) and a monocular camera, is generated in a high-fidelity simulation framework. The simulation framework is built around Spirent’s GSS7000 simulator, software tools from Spirent (SimGEN and SimSENSOR) and OKTAL-SE (Sim3D), where the realism for the vision sensor data generation is provided by a photorealistic environment generated using the AirSim software with the aid of Unreal Engine. To verify and validate the fusion framework, a hardware-in-the-loop (HIL) set-up has been implemented using the Pixhawk controller. The results obtained demonstrate that the presented HIL set-up is the essential component of a more robust navigation solution development framework, providing resilience under conditions of GNSS outages. Full article
(This article belongs to the Proceedings of European Navigation Conference ENC 2023)
