Search Results (11)

Search Parameters:
Keywords = first-person view drones

12 pages, 3238 KiB  
Article
Influence of Polymers Surface Roughness on Noise Emissions in 3D-Printed UAV Propellers
by Florin Popișter, Horea Ștefan Goia and Paul Ciudin
Polymers 2025, 17(8), 1015; https://doi.org/10.3390/polym17081015 - 9 Apr 2025
Viewed by 596
Abstract
Unmanned Aerial Vehicles (UAVs) have become increasingly popular with both domestic and professional users, in applications ranging from safety (e.g., surveillance drones) and terrain mapping (geo-scanning UAVs) to videography and the high-performance drones used in FPV (First Person View) competitions. At the same time, Fused Filament Fabrication (FFF) has become widely accessible, with recent 3D printers showing large gains in performance. Against this background, the present work examines the practice of fabricating UAV propellers by means of FFF, focusing on both the theoretical and the practical aspects of the roughness and quality of the resulting surfaces. The paper proposes a set of propeller configurations obtained by combining popular propeller geometries, such as the Gemfan 51466-3 three-bladed propeller and the novel Toroidal propeller model, with different fabrication materials, namely Polyethylene Terephthalate Glycol (PETG) and Polylactic Acid (PLA) filaments. The main aim of the study is to reveal the influence that surface quality has on the performance metrics of a propeller. The practical work develops a comparative study between two drone propeller geometries manufactured by a nonconventional process, 3D printing, and was carried out using low-cost equipment so that the results could be evaluated in a domestic setting. The study identifies the noise values produced by the two geometries as a result of the roughness of their propeller surfaces.
(This article belongs to the Special Issue 3D Printing and Molding Study in Polymeric Materials)
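
The comparison above hinges on quantifying surface roughness. As a minimal illustration (not the authors' procedure), the following Python sketch computes the arithmetic mean roughness Ra, the standard metric in such surface comparisons, from hypothetical profilometer traces:

```python
import numpy as np

def arithmetic_mean_roughness(profile_um: np.ndarray) -> float:
    """Ra: mean absolute deviation of the surface profile from its mean line (micrometres)."""
    mean_line = profile_um.mean()
    return float(np.abs(profile_um - mean_line).mean())

# Hypothetical profilometer traces for two printed blade surfaces
pla_profile = np.array([1.2, -0.8, 0.5, -1.1, 0.9, -0.4])   # µm
petg_profile = np.array([2.1, -1.9, 1.4, -2.3, 1.8, -1.0])  # µm

print(f"Ra (PLA):  {arithmetic_mean_roughness(pla_profile):.2f} µm")
print(f"Ra (PETG): {arithmetic_mean_roughness(petg_profile):.2f} µm")
```

A rougher trace yields a higher Ra; differences of this kind are what the study relates to propeller noise emissions.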

20 pages, 584 KiB  
Article
Cognitive Radar Waveform Selection for Low-Altitude Maneuvering-Target Tracking: A Robust Information-Aided Fusion Method
by Xiang Feng, Ping Sun, Lu Zhang, Guangle Jia, Jun Wang and Zhiquan Zhou
Remote Sens. 2024, 16(21), 3951; https://doi.org/10.3390/rs16213951 - 23 Oct 2024
Cited by 1 | Viewed by 1484
Abstract
In this paper, we introduce an innovative interacting multiple-criterion selection (IMCS) idea to design the optimal radar waveform, aiming to reduce tracking error and enhance tracking performance. This method integrates the multiple-hypothesis tracking (MHT) and Rao–Blackwellized particle filter (RBPF) algorithms to tackle maneuvering First-Person-View (FPV) drones in a three-dimensional low-altitude cluttered environment. A complex hybrid model, combining linear and nonlinear states, is constructed to describe the high maneuverability of the target. Based on the interacting multiple model (IMM) framework, our proposed IMCS method employs several waveform selection criteria as models and determines the optimal criterion with the highest probability to select waveform parameters. The simulation results indicate that the MHT–RBPF algorithm, using the IMCS method for adaptive parameter selection, exhibits high accuracy and robustness in tracking a low-altitude maneuvering target, resulting in a lower root mean square error (RMSE) compared with fixed- or single-waveform selection mechanisms.
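
As summarized above, the IMCS idea treats several waveform-selection criteria as models in an IMM-style framework and follows the criterion with the highest probability. Below is a minimal sketch of that selection loop, with hypothetical criterion names, likelihoods, and waveform parameters standing in for the paper's actual quantities:

```python
import numpy as np

rng = np.random.default_rng(0)

criteria = ["mutual_information", "crlb_trace", "validation_gate_volume"]  # hypothetical criterion set
probs = np.full(len(criteria), 1.0 / len(criteria))  # start with uniform criterion probabilities
waveforms = [(1e-6, 1e12), (2e-6, 5e11), (4e-6, 2e11)]  # hypothetical (pulse width, chirp rate) pairs

for step in range(10):
    # Hypothetical per-criterion likelihoods, e.g. derived from the tracker's innovation statistics
    likelihoods = rng.uniform(0.1, 1.0, size=len(criteria))
    probs = probs * likelihoods
    probs /= probs.sum()  # normalization, as in IMM model-probability updates

    best_criterion = int(np.argmax(probs))  # criterion with the highest probability wins
    # Hypothetical cost table: rows = criteria, columns = candidate waveforms
    costs = rng.uniform(size=(len(criteria), len(waveforms)))
    chosen = int(np.argmin(costs[best_criterion]))
    print(f"step {step}: criterion={criteria[best_criterion]}, waveform={waveforms[chosen]}")
```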

23 pages, 3775 KiB  
Article
Advanced Scale-Propeller Design Using a MATLAB Optimization Code
by Stephen D. Prior and Daniel Newman-Sanders
Appl. Sci. 2024, 14(14), 6296; https://doi.org/10.3390/app14146296 - 19 Jul 2024
Viewed by 2724
Abstract
This study investigated the efficiency of scale-propellers, typically used on small drones. A scale-propeller is accepted as having a diameter of 7 to 21 inches. Recent special operations have demonstrated the utility of relatively small, low-cost first-person view (FPV) drones, which are attritable. This investigation outlines the development of a MATLAB optimisation code, based on minimum induced loss propeller theory, which calculates the optimal chord and twist distribution for a chosen propeller operating in known flight conditions. The MATLAB code includes a minimum Reynolds number functionality, which provides the option to alter the chord distribution to ensure the entire propeller operates above a set threshold Reynolds number (>100,000), as this has been found to be a transition point between low and high section lift-to-drag ratios. Additional functions allow plotting of torque and thrust distributions along the blade. The results were validated against experimental data from an APC ‘Thin Electric’ 10” × 7” propeller, where it was found that both the chord and twist distributions were accurately modelled. The MATLAB code resulted in a 16% increase in the maximum propulsive efficiency. Further work will investigate a direct interface to SolidWorks to aid rapid propeller manufacturing capability.
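
The minimum Reynolds number functionality described above is straightforward to illustrate. Here is a minimal Python rendering (the paper's code is MATLAB) of the check Re = ρVc/μ over hypothetical blade stations, widening any chord that falls below the 100,000 threshold:

```python
import numpy as np

RHO = 1.225      # air density, kg/m^3 (sea level)
MU = 1.81e-5     # dynamic viscosity of air, kg/(m*s)
RE_MIN = 1.0e5   # threshold from the abstract: sections should stay above Re = 100,000

def section_reynolds(v_local: np.ndarray, chord: np.ndarray) -> np.ndarray:
    """Blade-section Reynolds number: Re = rho * V * c / mu."""
    return RHO * v_local * chord / MU

# Hypothetical radial stations: local flow speed (m/s) and chord (m) for a 10-inch propeller
v_local = np.array([20.0, 45.0, 70.0, 95.0])
chord = np.array([0.020, 0.018, 0.014, 0.009])

re = section_reynolds(v_local, chord)
# Widen any section whose Re falls below the threshold, mimicking the code's min-Re option
chord_adjusted = np.where(re < RE_MIN, chord * RE_MIN / re, chord)
print(re.astype(int), chord_adjusted.round(4))
```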

24 pages, 2009 KiB  
Article
Examining the Influence of Using First-Person View Drones as Auxiliary Devices in Matte Painting Courses on College Students’ Continuous Learning Intention
by Chao Gu, Jie Sun, Tong Chen, Wei Miao, Yunshuo Yang, Shuyuan Lin and Jiangjie Chen
J. Intell. 2022, 10(3), 40; https://doi.org/10.3390/jintelligence10030040 - 5 Jul 2022
Cited by 16 | Viewed by 4309
Abstract
In the teaching of matte painting, it is essential for students to develop a sound understanding of the relationship between virtual and physical environments. In this study, first-person view (FPV) drones are applied to matte painting courses to evaluate their effectiveness in teaching and to propose design suggestions for FPV drones better suited to instruction, providing students with a better learning environment within a digital education system. The results indicate that the flow experience, learning interest, and continuous learning intention of students who use FPV drones in matte painting are significantly greater than those of students taught only with traditional methods. Furthermore, the technology incentive model (TIM) developed in this study was verified using structural equation modeling. The results demonstrate that the second-order construct ‘technology incentive’, comprising perceived interactivity, perceived vividness, and novel experience, positively influences students’ learning interest and continuous learning intention under the mediation of flow experience.
(This article belongs to the Special Issue Learning and Instruction)

14 pages, 4816 KiB  
Article
Performance Comparison of H.264 and H.265 Encoders in a 4K FPV Drone Piloting System
by Jakov Benjak, Daniel Hofman, Josip Knezović and Martin Žagar
Appl. Sci. 2022, 12(13), 6386; https://doi.org/10.3390/app12136386 - 23 Jun 2022
Cited by 12 | Viewed by 5135
Abstract
With the rapid growth of video data traffic on the Internet and the development of new types of video transmission systems, the need for ad hoc video encoders has also increased. One such case involves Unmanned Aerial Vehicles (UAVs), widely known as drones, which are used in drone races, search and rescue efforts, capturing panoramic views, and so on. In this paper, we provide an efficiency comparison of the two most popular video encoders—H.264 and H.265—in a drone piloting system using first-person view (FPV). In this system, a drone is used to capture video, which is then transmitted to FPV goggles in real time. We examine the compression efficiency of 4K drone footage by varying parameters such as Group of Pictures (GOP) size, Quantization Parameter (QP), and target bitrate. The quality of the compressed footage is determined using four objective video quality measures: PSNR, SSIM, VMAF, and BRISQUE. Apart from video quality, encoding time and encoding energy consumption are also compared. The research was performed using numerous nodes on a supercomputer.
(This article belongs to the Special Issue High Performance Computing and Computer Architectures)
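
Of the four quality measures used in this comparison, PSNR is the simplest to reproduce. A minimal sketch, with toy frames standing in for decoded 4K footage:

```python
import numpy as np

def psnr(reference: np.ndarray, compressed: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-sized frames."""
    mse = np.mean((reference.astype(np.float64) - compressed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy 8-bit luma frames at 4K resolution; real use would compare source vs. decoded frames
rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(2160, 3840), dtype=np.uint8)
noisy = np.clip(ref.astype(np.int16) + rng.integers(-3, 4, size=ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(ref, noisy):.2f} dB")
```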

26 pages, 12018 KiB  
Article
Drone Control in AR: An Intuitive System for Single-Handed Gesture Control, Drone Tracking, and Contextualized Camera Feed Visualization in Augmented Reality
by Konstantinos Konstantoudakis, Kyriaki Christaki, Dimitrios Tsiakmakis, Dimitrios Sainidis, Georgios Albanis, Anastasios Dimou and Petros Daras
Drones 2022, 6(2), 43; https://doi.org/10.3390/drones6020043 - 10 Feb 2022
Cited by 18 | Viewed by 14994
Abstract
Traditional drone handheld remote controllers, although well-established and widely used, are not a particularly intuitive control method. At the same time, drone pilots normally watch the drone video feed on a smartphone or another small screen attached to the remote. This forces them to constantly shift their visual focus between the drone and the screen, which can be a tiring and stressful experience for both the eyes and the mind, as the eyes constantly change focus and the mind struggles to merge two different points of view. This paper presents a solution based on Microsoft’s HoloLens 2 headset that leverages augmented reality and gesture recognition to make drone piloting easier, more comfortable, and more intuitive. It describes a system for single-handed gesture control that can achieve all maneuvers possible with a traditional remote, including complex motions; a method for tracking a real drone in AR to improve flying beyond line of sight or at distances where the physical drone is hard to see; and the option to display the drone’s live video feed in AR, either in first-person-view mode or in context with the environment.
(This article belongs to the Special Issue Feature Papers of Drones)

21 pages, 22784 KiB  
Article
Multi-User Drone Flight Training in Mixed Reality
by Yong-Guk Go, Ho-San Kang, Jong-Won Lee, Mun-Su Yu and Soo-Mi Choi
Electronics 2021, 10(20), 2521; https://doi.org/10.3390/electronics10202521 - 15 Oct 2021
Cited by 6 | Viewed by 3614
Abstract
The development of services and applications involving drones is promoting the growth of the unmanned-aerial-vehicle industry. Moreover, the supply of low-cost compact drones has greatly contributed to the popularization of drone flying. However, flying first-person-view (FPV) drones requires considerable experience, because the remote pilot views a video transmitted from a camera mounted on the drone. In this paper, we propose a remote training system for FPV drone flying in mixed reality, which allows beginners who are inexperienced in FPV drone flight control to practice under the guidance of remote experts.
(This article belongs to the Special Issue LifeXR: Concepts, Technology and Design for Everyday XR)

25 pages, 17878 KiB  
Article
Multiple Drone Navigation and Formation Using Selective Target Tracking-Based Computer Vision
by Jatin Upadhyay, Abhishek Rawat and Dipankar Deb
Electronics 2021, 10(17), 2125; https://doi.org/10.3390/electronics10172125 - 1 Sep 2021
Cited by 24 | Viewed by 8904
Abstract
Autonomous unmanned aerial vehicles work seamlessly within GPS signal range, but their performance deteriorates in GPS-denied regions. This paper presents a unique collaborative computer-vision-based approach for tracking a target at a specific location of interest in the image. The proposed method tracks any object regardless of its properties, such as shape, color, size, or pattern, provided the target remains visible and within line of sight during tracking. The method lets the user select any target in the image and form a formation around it. For each drone, we calculate parameters such as the distance and angle from the image center to the object. Among all the drones, the one with the strongest GPS signal or nearest to the target is chosen as the master drone, which calculates the relative angle and distance between the object and the other drones from their approximate geo-locations. Compared with actual measurements, tests on a quadrotor UAV frame achieve 99% location accuracy in a robust environment within the same GPS longitude and latitude block as GPS-only navigation methods. The individual drones communicate with the ground station through a telemetry link, and the master drone calculates the parameters using data collected at the ground station. Various formation flying methods help escort the other drones to meet the desired objective with a single high-resolution first-person view (FPV) camera. The proposed method is tested on an Airborne Object Target Tracking (AOT) aerial vehicle model and achieves higher tracking accuracy.
(This article belongs to the Special Issue Autonomous Navigation Systems for Unmanned Aerial Vehicles)
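
The per-drone distance and angle from the image center to the object, mentioned above, follow from basic pinhole-camera geometry. A minimal sketch, assuming a known focal length in pixels (the paper's exact formulation is not given in the abstract):

```python
import math

def bearing_from_image(cx: float, cy: float, px: float, py: float, focal_px: float):
    """Horizontal/vertical angles (degrees) from the optical axis to a pixel, pinhole model."""
    yaw = math.degrees(math.atan2(px - cx, focal_px))
    pitch = math.degrees(math.atan2(py - cy, focal_px))
    offset_px = math.hypot(px - cx, py - cy)  # radial pixel distance to the image center
    return yaw, pitch, offset_px

# 1280x720 FPV frame, hypothetical focal length of 700 px, target detected at (900, 300)
print(bearing_from_image(640, 360, 900, 300, 700.0))
```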

17 pages, 3363 KiB  
Article
An Aerial Mixed-Reality Environment for First-Person-View Drone Flying
by Dong-Hyun Kim, Yong-Guk Go and Soo-Mi Choi
Appl. Sci. 2020, 10(16), 5436; https://doi.org/10.3390/app10165436 - 6 Aug 2020
Cited by 26 | Viewed by 5740
Abstract
A drone must be able to fly without colliding, both to preserve its surroundings and for its own safety. It must also offer numerous features of interest to drone users. In this paper, an aerial mixed-reality environment for first-person-view drone flying is proposed that provides an immersive experience and a safe environment for drone users by creating additional virtual obstacles when flying a drone in an open area. The proposed system is effective in perceiving the depth of obstacles and enables bidirectional interaction between the real and virtual worlds using a drone equipped with a stereo camera modeled on human binocular vision. In addition, it synchronizes the parameters of the real and virtual cameras to create virtual objects in real space effectively and naturally. Based on user studies that included both general and expert users, we confirm that the proposed system successfully creates a mixed-reality environment using a flying drone by quickly recognizing real objects and stably combining them with virtual objects.
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)
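
Depth perception from a stereo camera, as used in this system, classically reduces to the disparity relation Z = f·B/d for a rectified pair. A minimal sketch with assumed intrinsics (the authors' calibration is not given in the abstract):

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic stereo relation: depth Z = f * B / d for a rectified camera pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_m / disparity_px

# Assumed rig: 800 px focal length, 6 cm baseline (roughly eye-like), 16 px disparity
print(f"obstacle depth: {depth_from_disparity(800.0, 0.06, 16.0):.2f} m")  # -> 3.00 m
```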

17 pages, 5973 KiB  
Article
Sharkeye: Real-Time Autonomous Personal Shark Alerting via Aerial Surveillance
by Robert Gorkin, Kye Adams, Matthew J Berryman, Sam Aubin, Wanqing Li, Andrew R Davis and Johan Barthelemy
Drones 2020, 4(2), 18; https://doi.org/10.3390/drones4020018 - 4 May 2020
Cited by 34 | Viewed by 12347
Abstract
While aerial shark spotting has been a standard practice for beach safety for decades, new technologies offer enhanced opportunities, ranging from drones/unmanned aerial vehicles (UAVs) that provide new viewing capabilities to new apps that give beachgoers up-to-date risk analysis before entering the water. This report describes the Sharkeye platform, a first-of-its-kind project to demonstrate personal shark alerting for beachgoers in the water and on land, leveraging innovative UAV image collection, cloud-hosted machine learning detection algorithms, and reporting via smart wearables. To this end, our team developed a novel detection algorithm, trained via machine learning on aerial footage of real sharks and rays collected at local beaches, hosted and deployed it in the cloud, and integrated push alerts to beachgoers in the water via a shark app running on smartwatches. The project was successfully trialed in the field in Kiama, Australia: over 350 detection events were recorded, multiple smartwatches were alerted simultaneously both on land and in the water, and the analysis was capable of detecting shark analogues, rays, and surfers in average beach conditions, all based on roughly one hour of training data in total. Additional demonstrations showed the potential of the system to enable lifeguard-swimmer communication and the ability to create a network on demand. Our system was developed to provide swimmers and surfers with immediate information via smart apps, empowering lifeguards/lifesavers and beachgoers to prevent unwanted encounters with wildlife before they happen.
(This article belongs to the Special Issue Drone Technology for Wildlife and Human Management)

18 pages, 9091 KiB  
Article
Utilizing A Game Engine for Interactive 3D Topographic Data Visualization
by Dany Laksono and Trias Aditya
ISPRS Int. J. Geo-Inf. 2019, 8(8), 361; https://doi.org/10.3390/ijgi8080361 - 15 Aug 2019
Cited by 53 | Viewed by 13172
Abstract
Developers have long used game engines for visualizing virtual worlds for players to explore. However, using real-world data in a game engine has always been a challenging task, since most game engines have very little support for geospatial data. This paper presents our findings from exploring the Unity3D game engine for visualizing large-scale topographic data from mixed sources of terrestrial laser scanner models and topographic map data. Level of detail (LOD) 3 models of two buildings of the Universitas Gadjah Mada campus were obtained using a terrestrial laser scanner and converted into the FBX format. Mapbox for Unity was used to provide georeferencing support for the 3D models, and Unity3D also used road and place-name layers via Mapbox for Unity based on OpenStreetMap (OSM) data. LOD1 buildings were modeled from topographic map data using Mapbox, with two of them replaced by the 3D models from the terrestrial laser scanner. Building information and attributes, as well as visual appearances, were added to the 3D features. The Unity3D game engine provides a rich set of libraries and assets for user interaction, and custom C# scripts were used to provide a bird’s-eye-view mode with 3D zoom, pan, and orbital display. In addition to basic 3D navigation tools, a first-person view of the scene enables users to gain a walk-through experience while virtually inspecting objects on the ground; for a fly-through experience, a drone view helps users inspect objects from the air. The result is a multiplatform 3D visualization capable of displaying 3D models in LOD3 and providing user interfaces for exploring the scene through “on the ground” and “from the air” first-person interactions. Using the Unity3D game engine to visualize mixed sources of topographic data creates many opportunities to optimize the use of large-scale topographic data.
(This article belongs to the Special Issue Gaming and Geospatial Information)
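
The orbital display mentioned above comes down to placing the camera on a sphere around a target point. The authors used custom C# scripts inside Unity3D; the following Python sketch shows only the underlying spherical-coordinate math, with hypothetical values:

```python
import math

def orbit_camera(target, radius, azimuth_deg, elevation_deg):
    """Camera position on a sphere around `target` (x, y-up, z), looking inward."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = target[0] + radius * math.cos(el) * math.sin(az)
    y = target[1] + radius * math.sin(el)
    z = target[2] + radius * math.cos(el) * math.cos(az)
    return (x, y, z)

# Orbit a building centroid at 150 m radius, elevated 45 degrees, stepping around in azimuth
for az in range(0, 360, 90):
    print(az, tuple(round(c, 1) for c in orbit_camera((0.0, 0.0, 0.0), 150.0, az, 45.0)))
```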
