Search Results (11)

Search Parameters:
Keywords = human–drone interface

20 pages, 955 KiB  
Article
Natural Language Interfaces for Structured Query Generation in IoD Platforms
by Anıl Sezgin
Drones 2025, 9(6), 444; https://doi.org/10.3390/drones9060444 - 18 Jun 2025
Viewed by 464
Abstract
The increasing complexity of Internet of Drones (IoD) platforms demands more accessible ways for users to interact with unmanned aerial vehicle (UAV) data systems. Traditional methods requiring technical API knowledge create barriers for non-specialist users in dynamic operational environments. To address this challenge, we propose a retrieval-augmented generation (RAG) architecture that enables natural language querying over UAV telemetry, mission, and detection data. Our approach builds a semantic retrieval index from structured application programming interface (API) documentation and uses lightweight large language models to map user queries into executable API calls validated against platform schemas. This design minimizes fine-tuning needs, adapts to evolving APIs, and ensures schema conformity for operational safety. Evaluations conducted on a curated IoD dataset show 91.3% endpoint accuracy, 87.6% parameter match rate, and 95.2% schema conformity, confirming the system’s robustness and scalability. The results demonstrate that combining retrieval-augmented semantic grounding with structured validation bridges the gap between human intent and complex UAV data access, improving usability while maintaining a practical level of operational reliability. Full article
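To make the retrieve-then-validate pattern described above concrete, the Python sketch below retrieves the most relevant endpoint description for a query and checks a candidate call against a declared parameter schema. The endpoint names, parameters, and the toy lexical retriever are illustrative assumptions; they stand in for the paper's semantic index, lightweight LLM, and IoD API rather than reproduce them.

```python
# Illustrative sketch only: the endpoints, parameters, and the toy lexical retriever
# are assumptions standing in for the paper's semantic index and language model.
import re
from dataclasses import dataclass, field

@dataclass
class Endpoint:
    name: str                                     # e.g. "GET /telemetry"
    description: str                              # text indexed for retrieval
    params: dict = field(default_factory=dict)    # parameter name -> expected type

API_DOCS = [
    Endpoint("GET /telemetry", "latest UAV telemetry: position, altitude, battery",
             {"drone_id": str, "limit": int}),
    Endpoint("GET /missions", "planned or completed missions of a drone",
             {"drone_id": str, "status": str}),
    Endpoint("GET /detections", "object detections reported by onboard cameras",
             {"drone_id": str, "since": str}),
]

def tokens(text: str) -> set:
    return {t for t in re.findall(r"[a-z]+", text.lower()) if len(t) > 3}

def retrieve(query: str) -> Endpoint:
    """Stand-in for the semantic retrieval index: pick the best-overlapping doc."""
    return max(API_DOCS, key=lambda e: len(tokens(query) & tokens(e.description)))

def validate(endpoint: Endpoint, args: dict) -> dict:
    """Schema-conformity check: drop unknown parameters, enforce declared types."""
    return {k: v for k, v in args.items()
            if k in endpoint.params and isinstance(v, endpoint.params[k])}

# An LLM would normally fill `args` from the user query; here they are hard-coded.
query = "latest battery and altitude telemetry for drone alpha"
ep = retrieve(query)
print({"endpoint": ep.name, "args": validate(ep, {"drone_id": "alpha", "limit": 1})})
# -> {'endpoint': 'GET /telemetry', 'args': {'drone_id': 'alpha', 'limit': 1}}
```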

13 pages, 3458 KiB  
Article
Smart Glove: A Cost-Effective and Intuitive Interface for Advanced Drone Control
by Cristian Randieri, Andrea Pollina, Adriano Puglisi and Christian Napoli
Drones 2025, 9(2), 109; https://doi.org/10.3390/drones9020109 - 1 Feb 2025
Cited by 1 | Viewed by 1881
Abstract
Recent years have witnessed the development of human–unmanned aerial vehicle (UAV) interfaces to meet the growing demand for intuitive and efficient solutions in UAV piloting. In this paper, we propose a novel Smart Glove v1.0 prototype for advanced drone gesture control, built from key low-cost components: an Arduino Nano to process data, an MPU6050 to detect hand movements, flexible sensors for easy throttle control, and an nRF24L01 module for wireless communication. The paper presents the design methodology and reports flight tests together with simulation findings to demonstrate that Smart Glove v1.0 provides an intuitive, responsive, and hands-free gesture-based piloting interface. We aim to make the drone piloting experience more enjoyable and to improve ergonomics by adapting to the pilot’s preferred hand position. The overall research project serves as a seedbed for future solutions, with applications eventually extending to medicine, space, and the metaverse. Full article
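As a rough illustration of the gesture-control idea, the sketch below maps hand orientation and finger flexion to simple flight commands. The thresholds, axes, and command format are assumptions for illustration; the actual prototype runs this logic as firmware on the Arduino Nano and transmits commands over the nRF24L01 link.

```python
# Illustrative mapping from glove readings to flight commands; thresholds, axes,
# and the command format are assumptions, not the Smart Glove v1.0 firmware.

def glove_to_command(pitch_deg: float, roll_deg: float, flex: float,
                     dead_zone_deg: float = 10.0) -> dict:
    """Map MPU6050-style hand orientation and a 0..1 flex reading to a command."""
    cmd = {"throttle": max(0.0, min(1.0, flex)),   # finger flexion -> throttle
           "pitch": 0, "roll": 0}
    if abs(pitch_deg) > dead_zone_deg:             # tilt hand forward/backward
        cmd["pitch"] = 1 if pitch_deg > 0 else -1
    if abs(roll_deg) > dead_zone_deg:              # tilt hand left/right
        cmd["roll"] = 1 if roll_deg > 0 else -1
    return cmd

print(glove_to_command(pitch_deg=22.0, roll_deg=-4.0, flex=0.6))
# -> {'throttle': 0.6, 'pitch': 1, 'roll': 0}
```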

18 pages, 3156 KiB  
Article
FB-CCNN: A Filter Bank Complex Spectrum Convolutional Neural Network with Artificial Gradient Descent Optimization
by Dongcen Xu, Fengzhen Tang, Yiping Li, Qifeng Zhang and Xisheng Feng
Brain Sci. 2023, 13(5), 780; https://doi.org/10.3390/brainsci13050780 - 10 May 2023
Cited by 4 | Viewed by 2617
Abstract
The brain–computer interface (BCI) provides direct communication between human brains and machines, including robots, drones and wheelchairs, without the involvement of peripheral systems. BCI based on electroencephalography (EEG) has been applied in many fields, including aiding people with physical disabilities, rehabilitation, education and entertainment. Among the different EEG-based BCI paradigms, steady-state visual evoked potential (SSVEP)-based BCIs are known for their lower training requirements, high classification accuracy and high information transfer rate (ITR). In this article, a filter bank complex spectrum convolutional neural network (FB-CCNN) was proposed, and it achieved leading classification accuracies of 94.85 ± 6.18% and 80.58 ± 14.43%, respectively, on two open SSVEP datasets. An optimization algorithm named artificial gradient descent (AGD) was also proposed to generate and optimize the hyperparameters of the FB-CCNN. AGD also revealed correlations between different hyperparameters and their corresponding performances. It was experimentally demonstrated that FB-CCNN performed better when the hyperparameters were set to fixed values rather than scaled with the number of channels. In conclusion, a deep learning model named FB-CCNN and a hyperparameter-optimizing algorithm named AGD were proposed and experimentally demonstrated to be effective in classifying SSVEP signals. The hyperparameter design process and analysis were carried out using AGD, and advice on choosing hyperparameters for deep learning models in SSVEP classification was provided. Full article
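The sketch below illustrates one plausible reading of the filter-bank complex-spectrum input representation: each sub-band is band-pass filtered, transformed with an FFT, and the real and imaginary parts are stacked as network input features. The sampling rate, band edges, and filter order are assumptions, and the CNN itself and the AGD optimizer are not reproduced.

```python
# Sketch of a filter-bank complex-spectrum input; sampling rate, band edges, and
# filter order are assumptions, not the FB-CCNN configuration from the paper.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250                                    # assumed EEG sampling rate (Hz)
BANDS = [(8, 88), (16, 88), (24, 88)]       # example filter-bank sub-bands (Hz)

def complex_spectrum_features(eeg: np.ndarray) -> np.ndarray:
    """eeg: (channels, samples) -> features of shape (bands, channels, 2 * freq bins)."""
    feats = []
    for low, high in BANDS:
        b, a = butter(4, [low, high], btype="band", fs=FS)   # 4th-order band-pass
        sub = filtfilt(b, a, eeg, axis=-1)                   # zero-phase filtering
        spec = np.fft.rfft(sub, axis=-1)                     # complex spectrum
        feats.append(np.concatenate([spec.real, spec.imag], axis=-1))
    return np.stack(feats)

x = np.random.randn(8, 250)                 # 8 channels, 1 s of simulated EEG
print(complex_spectrum_features(x).shape)   # -> (3, 8, 252)
```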

19 pages, 9018 KiB  
Article
Wearable Drone Controller: Machine Learning-Based Hand Gesture Recognition and Vibrotactile Feedback
by Ji-Won Lee and Kee-Ho Yu
Sensors 2023, 23(5), 2666; https://doi.org/10.3390/s23052666 - 28 Feb 2023
Cited by 19 | Viewed by 9458
Abstract
We proposed a wearable drone controller with hand gesture recognition and vibrotactile feedback. The intended hand motions of the user are sensed by an inertial measurement unit (IMU) placed on the back of the hand, and the signals are analyzed and classified using machine learning models. The recognized hand gestures control the drone, and the obstacle information in the heading direction of the drone is fed back to the user by activating the vibration motor attached to the wrist. Simulation experiments for drone operation were performed, and the participants’ subjective evaluations regarding the controller’s convenience and effectiveness were investigated. Finally, experiments with a real drone were conducted and discussed to validate the proposed controller. Full article
(This article belongs to the Topic Human–Machine Interaction)
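A minimal sketch of the kind of IMU-window gesture classification pipeline the abstract describes is shown below, using scikit-learn on simulated accelerometer/gyroscope windows. The window length, features, gesture labels, and classifier choice are assumptions, not the authors' pipeline.

```python
# Simulated data, features, labels, and classifier are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
GESTURES = ["hover", "forward", "back", "left", "right"]

def window_features(window: np.ndarray) -> np.ndarray:
    """window: (samples, 6) accelerometer + gyroscope -> mean and std per axis."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

# Simulated dataset: 100 windows per gesture, offset per class so they are separable.
X, y = [], []
for label, offset in enumerate(np.linspace(-2.0, 2.0, len(GESTURES))):
    for _ in range(100):
        X.append(window_features(rng.normal(offset, 1.0, size=(50, 6))))
        y.append(label)

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(y), test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", round(clf.score(X_test, y_test), 2))

new_window = rng.normal(2.0, 1.0, size=(50, 6))    # resembles the last class
print("predicted gesture:", GESTURES[int(clf.predict([window_features(new_window)])[0])])
```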

19 pages, 2429 KiB  
Article
A Multi-Lingual Speech Recognition-Based Framework to Human-Drone Interaction
by Kheireddine Choutri, Mohand Lagha, Souham Meshoul, Mohamed Batouche, Yasmine Kacel and Nihad Mebarkia
Electronics 2022, 11(12), 1829; https://doi.org/10.3390/electronics11121829 - 9 Jun 2022
Cited by 13 | Viewed by 4661
Abstract
In recent years, human–drone interaction has received increasing interest from the scientific community. When interacting with a drone, humans assume a variety of roles, the nature of which is determined by the drone’s application and degree of autonomy. Common methods of controlling drone movements include RF remote controls and ground control stations. These devices are often difficult to manipulate and may even require some training. An alternative is to use innovative methods called natural user interfaces that allow users to interact with drones in an intuitive manner using speech. However, supporting only one language of interaction may limit the number of users, especially if different languages are spoken in the same region. Moreover, environmental and propeller noise makes speech recognition a complicated task. The goal of this work is to use a multilingual speech recognition system that includes English, Arabic, and Amazigh to control the movement of drones. The reason for selecting these languages is that they are widely spoken in many regions, particularly in the Middle East and North Africa (MENA) zone. To achieve this goal, a two-stage approach is proposed. During the first stage, a deep learning-based model for multilingual speech recognition is designed. Then, the developed model is deployed in real settings using a quadrotor UAV. The network was trained using 38,850 records including commands and unknown words mixed with noise to improve robustness. An average class accuracy of more than 93% has been achieved. After that, experiments were conducted involving 16 participants giving voice commands in order to test the efficiency of the designed system. The achieved accuracy is about 93.76% for English, and 88.55% and 82.31% for Arabic and Amazigh, respectively. Finally, the designed system was implemented in hardware on a quadrotor UAV. Real-time tests have shown that the approach is very promising as an alternative form of human–drone interaction while offering the benefit of control simplicity. Full article
(This article belongs to the Topic Machine and Deep Learning)
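One concrete piece of the training setup mentioned above is mixing command recordings with noise. The sketch below shows a standard way to mix speech with noise at a target signal-to-noise ratio; the sampling rate, SNR values, and placeholder signal are assumptions, since the authors' exact augmentation is not specified.

```python
# Noise augmentation sketch; sampling rate, SNR targets, and the placeholder tone
# are assumptions, not the paper's training data pipeline.
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`, then add it."""
    noise = noise[:len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12            # avoid division by zero
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

rng = np.random.default_rng(0)
fs = 16000                                           # assumed sampling rate (Hz)
command = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)   # 1 s placeholder "command"
augmented = [mix_at_snr(command, rng.normal(size=fs), snr) for snr in (20, 10, 5)]
print([round(float(np.mean(a ** 2)), 3) for a in augmented])  # power grows as SNR drops
```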

19 pages, 1419 KiB  
Article
How Can Autonomous Vehicles Convey Emotions to Pedestrians? A Review of Emotionally Expressive Non-Humanoid Robots
by Yiyuan Wang, Luke Hespanhol and Martin Tomitsch
Multimodal Technol. Interact. 2021, 5(12), 84; https://doi.org/10.3390/mti5120084 - 20 Dec 2021
Cited by 17 | Viewed by 6229
Abstract
In recent years, researchers and manufacturers have started to investigate ways to enable autonomous vehicles (AVs) to interact with nearby pedestrians in compensation for the absence of human drivers. The majority of these efforts focuses on external human–machine interfaces (eHMIs), using different modalities, such as light patterns or on-road projections, to communicate the AV’s intent and awareness. In this paper, we investigate the potential role of affective interfaces to convey emotions via eHMIs. To date, little is known about the role that affective interfaces can play in supporting AV–pedestrian interaction. However, emotions have been employed in many smaller social robots, from domestic companions to outdoor aerial robots in the form of drones. To develop a foundation for affective AV–pedestrian interfaces, we reviewed the emotional expressions of non-humanoid robots in 25 articles published between 2011 and 2021. Based on findings from the review, we present a set of considerations for designing affective AV–pedestrian interfaces and highlight avenues for investigating these opportunities in future studies. Full article
(This article belongs to the Special Issue Feature Papers of MTI in 2021)

16 pages, 2416 KiB  
Article
Improving Human Ground Control Performance in Unmanned Aerial Systems
by Marianna Di Gregorio, Marco Romano, Monica Sebillo, Giuliana Vitiello and Angela Vozella
Future Internet 2021, 13(8), 188; https://doi.org/10.3390/fi13080188 - 22 Jul 2021
Cited by 11 | Viewed by 3379
Abstract
The use of Unmanned Aerial Systems, commonly called drones, is growing enormously today. Applications that can benefit from the use of fleets of drones and a related human–machine interface are emerging to ensure better performance and reliability. In particular, a fleet of drones can become a valuable tool for monitoring a wide area and transmitting relevant information to the ground control station. We present a human–machine interface for a Ground Control Station used by a team of multiple operators to remotely operate a fleet of drones in a collaborative setting. In such a collaborative setting, a major interface design challenge has been to maximize the Team Situation Awareness, shifting the focus from the individual operator to the entire group of decision-makers. We were especially interested in testing the hypothesis that shared displays may improve the team situation awareness and hence the overall performance. The experimental study we present shows that, overall, there is no difference in performance between shared and non-shared displays. However, in trials when unexpected events occurred, teams using shared displays maintained good performance, whereas the performance of teams using non-shared displays declined. In particular, in unexpected situations, operators with shared displays were able to bring more drones home safely, maintaining a higher level of team situation awareness. Full article
(This article belongs to the Special Issue Interface Design Challenges for Smart Control Rooms)

30 pages, 9103 KiB  
Article
Web AR Solution for UAV Pilot Training and Usability Testing
by Roberto Ribeiro, João Ramos, David Safadinho, Arsénio Reis, Carlos Rabadão, João Barroso and António Pereira
Sensors 2021, 21(4), 1456; https://doi.org/10.3390/s21041456 - 19 Feb 2021
Cited by 21 | Viewed by 6758
Abstract
Data and services are available anywhere at any time thanks to the Internet and mobile devices. Nowadays, there are new ways of representing data through trendy technologies such as augmented reality (AR), which extends our perception of reality through the addition of a virtual layer on top of real-time images. The great potential of unmanned aerial vehicles (UAVs) for carrying out routine and professional tasks has encouraged their use in the creation of several services, such as package delivery or industrial maintenance. Unfortunately, drone piloting is difficult to learn and requires specific training. Since regular training is performed with virtual simulations, we decided to propose a multiplatform cloud-hosted solution based in Web AR for drone training and usability testing. This solution defines a configurable trajectory through virtual elements represented over barcode markers placed on a real environment. The main goal is to provide an inclusive and accessible training solution which could be used by anyone who wants to learn how to pilot or test research related to UAV control. For this paper, we reviewed drones, AR, and human–drone interaction (HDI) to propose an architecture and implement a prototype, which was built using a Raspberry Pi 3, a camera, and barcode markers. The validation was conducted using several test scenarios. The results show that a real-time AR experience for drone pilot training and usability testing is achievable through web technologies. Some of the advantages of this approach, compared to traditional methods, are its high availability by using the web and other ubiquitous devices; the minimization of technophobia related to crashes; and the development of cost-effective alternatives to train pilots and make the testing phase easier for drone researchers and developers through trendy technologies. Full article
(This article belongs to the Special Issue Unmanned Aerial Vehicle Control, Networks, System and Application)
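A minimal sketch of how a configurable trajectory over barcode markers might be represented is given below: each physical marker maps to a virtual checkpoint that the trainee passes in order. The marker IDs, checkpoint fields, and pass-through logic are hypothetical and do not reflect the authors' implementation.

```python
# Hypothetical data model for a marker-based training trajectory; IDs, fields,
# and ordering rules are assumptions for illustration only.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Checkpoint:
    marker_id: int      # ID of the barcode marker placed in the real environment
    label: str          # virtual element rendered over the marker (e.g. a gate)
    radius_m: float     # how close the drone must come for the checkpoint to count

TRAJECTORY = [
    Checkpoint(3, "gate", 1.0),
    Checkpoint(7, "ring", 0.8),
    Checkpoint(1, "landing pad", 0.5),
]

def next_checkpoint(passed_ids: List[int]) -> Optional[Checkpoint]:
    """Return the first checkpoint in the configured trajectory not yet passed."""
    for cp in TRAJECTORY:
        if cp.marker_id not in passed_ids:
            return cp
    return None   # trajectory complete

print(next_checkpoint([3]))   # -> Checkpoint(marker_id=7, label='ring', radius_m=0.8)
```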

15 pages, 2283 KiB  
Article
Unmanned Aerial Vehicle Control through Domain-Based Automatic Speech Recognition
by Ruben Contreras, Angel Ayala and Francisco Cruz
Computers 2020, 9(3), 75; https://doi.org/10.3390/computers9030075 - 19 Sep 2020
Cited by 18 | Viewed by 6568
Abstract
Currently, unmanned aerial vehicles, such as drones, are becoming a part of our lives and extend to many areas of society, including the industrialized world. A common alternative for controlling the movements and actions of the drone is through unwired tactile interfaces, for which different remote control devices are used. However, control through such devices is not a natural, human-like communication interface, which sometimes is difficult to master for some users. In this research, we experimented with a domain-based speech recognition architecture to effectively control an unmanned aerial vehicle such as a drone. The drone control was performed in a more natural, human-like way to communicate the instructions. Moreover, we implemented an algorithm for command interpretation using both Spanish and English languages, as well as to control the movements of the drone in a simulated domestic environment. We conducted experiments involving participants giving voice commands to the drone in both languages in order to compare the effectiveness of each, considering the mother tongue of the participants in the experiment. Additionally, different levels of distortion were applied to the voice commands to test the proposed approach when it encountered noisy input signals. The results obtained showed that the unmanned aerial vehicle was capable of interpreting user voice instructions. Speech-to-action recognition improved for both languages with phoneme matching in comparison to only using the cloud-based algorithm without domain-based instructions. Using raw audio inputs, the cloud-based approach achieves 74.81% and 97.04% accuracy for English and Spanish instructions, respectively. However, with our phoneme matching approach the results are improved, yielding 93.33% accuracy for English and 100.00% accuracy for Spanish. Full article
(This article belongs to the Special Issue Feature Paper in Computers)
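To illustrate the domain-based matching idea, the sketch below snaps a noisy transcription to the closest in-vocabulary command by sequence similarity. A character-level stand-in replaces a real grapheme-to-phoneme converter, and the bilingual command list is an assumption, not the authors' vocabulary or algorithm.

```python
# Illustrative take on domain-restricted matching: a noisy transcription is snapped
# to the closest in-vocabulary command. The command list and the character-level
# stand-in for grapheme-to-phoneme conversion are assumptions for this sketch.
from difflib import SequenceMatcher

COMMANDS = ["take off", "land", "go left", "go right", "despega", "aterriza"]

def rough_symbols(text: str) -> str:
    """Crude character-level stand-in for a grapheme-to-phoneme conversion."""
    return " ".join(text.lower())

def interpret(transcription: str) -> str:
    """Return the in-domain command whose symbol sequence is most similar."""
    heard = rough_symbols(transcription)
    return max(COMMANDS,
               key=lambda c: SequenceMatcher(None, heard, rough_symbols(c)).ratio())

print(interpret("go rite"))    # -> 'go right' despite the misrecognized spelling
print(interpret("aterisa"))    # -> 'aterriza' (Spanish "land")
```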

28 pages, 1948 KiB  
Review
Augmented Reality for Robotics: A Review
by Zhanat Makhataeva and Huseyin Atakan Varol
Robotics 2020, 9(2), 21; https://doi.org/10.3390/robotics9020021 - 2 Apr 2020
Cited by 221 | Viewed by 38755
Abstract
Augmented reality (AR) is used to enhance the perception of the real world by integrating virtual objects to an image sequence acquired from various camera technologies. Numerous AR applications in robotics have been developed in recent years. The aim of this paper is to provide an overview of AR research in robotics during the five year period from 2015 to 2019. We classified these works in terms of application areas into four categories: (1) Medical robotics: Robot-Assisted surgery (RAS), prosthetics, rehabilitation, and training systems; (2) Motion planning and control: trajectory generation, robot programming, simulation, and manipulation; (3) Human-robot interaction (HRI): teleoperation, collaborative interfaces, wearable robots, haptic interfaces, brain-computer interfaces (BCIs), and gaming; (4) Multi-agent systems: use of visual feedback to remotely control drones, robot swarms, and robots with shared workspace. Recent developments in AR technology are discussed followed by the challenges met in AR due to issues of camera localization, environment mapping, and registration. We explore AR applications in terms of how AR was integrated and which improvements it introduced to corresponding fields of robotics. In addition, we summarize the major limitations of the presented applications in each category. Finally, we conclude our review with future directions of AR research in robotics. The survey covers over 100 research works published over the last five years. Full article

13 pages, 1196 KiB  
Article
A Distributed Architecture for Human-Drone Teaming: Timing Challenges and Interaction Opportunities
by Karin Anna Hummel, Manuela Pollak and Johannes Krahofer
Sensors 2019, 19(6), 1379; https://doi.org/10.3390/s19061379 - 20 Mar 2019
Cited by 13 | Viewed by 5032
Abstract
Drones are expected to operate autonomously, yet they will also interact with humans to solve tasks together. To support civilian human-drone teams, we propose a distributed architecture where sophisticated operations such as image recognition, coordination with humans, and flight-control decisions are made, not on-board the drone, but remotely. The benefits of such an architecture are the increased computational power available for image recognition and the possibility to integrate interfaces for humans. On the downside, communication is necessary, resulting in the delayed reception of commands. In this article, we discuss the design considerations of the distributed approach, a sample implementation on a smartphone, and an application to the concrete use case of bookshelf inventory. Further, we report experimentally-derived first insights into messaging and command response delays with a custom drone connected through Wi-Fi. Full article
(This article belongs to the Special Issue Unmanned Aerial Vehicle Networks, Systems and Applications)
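The sketch below shows one simple way the reported command response delays could be measured: timing the round trip of a command over a UDP link, simulated here on the local loopback. The addresses, port, and message format are assumptions rather than the authors' smartphone–drone protocol.

```python
# Minimal sketch of measuring command round-trip delay over a Wi-Fi-style link,
# simulated with a local UDP echo server; addresses, port, and message format
# are assumptions, not the authors' protocol.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9999

def echo_server():
    """Stand-in for the remote controller: echo every command back immediately."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as srv:
        srv.bind((HOST, PORT))
        for _ in range(5):
            data, addr = srv.recvfrom(1024)
            srv.sendto(data, addr)

threading.Thread(target=echo_server, daemon=True).start()
time.sleep(0.1)   # give the server a moment to bind

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as cli:
    cli.settimeout(1.0)
    for i in range(5):
        t0 = time.perf_counter()
        cli.sendto(f"MOVE_FORWARD {i}".encode(), (HOST, PORT))
        cli.recvfrom(1024)
        print(f"command {i}: round-trip {1000 * (time.perf_counter() - t0):.2f} ms")
```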
