Open-Source Drone Programming Course for Distance Engineering Education

This article presents a full course on autonomous aerial robotics within the RoboticsAcademy framework. This "drone programming" course is open access and ready to use: any teacher or student may teach or learn drone programming with it for free. In this course, students program diverse drones on their own computers, with no physical presence required. Unmanned aerial vehicle (UAV) applications are essentially practical, as their intelligence resides in the software part. Therefore, the proposed course emphasizes drone programming through practical learning. It comprises a collection of exercises resembling real-life drone applications, such as following a road, visual landing, and people search and rescue, including their corresponding background theory. The course has been successfully taught for five years to students from several university engineering degrees. Some exercises from the course have also been validated in three aerial robotics competitions, including an international one. RoboticsAcademy is also briefly presented in the paper. It is an open framework for distance robotics learning in engineering degrees. It has been designed as a practical complement to the typical online videos of massive open online courses (MOOCs). Its educational contents are built upon the robot operating system (ROS) middleware (the de facto standard in robot programming), the powerful 3D Gazebo simulator, and the widely used Python programming language. Additionally, RoboticsAcademy is a suitable tool for gamified learning and online robotics competitions, as it includes several competitive exercises and automatic assessment tools.


Introduction
The impact of unmanned aerial vehicles (UAVs) on daily life has grown remarkably in the last decade. Beyond their classic use in aerial photography or geographic mapping, an increasing number of drone-based solutions hit the general market and make human life more comfortable every year. Drones are increasingly being used for video recordings, wildlife monitoring, precision agriculture, disaster management, entertainment, industrial inspections, etc.
UAV technology's growth has created the need to train professionals in the sector, pushing the current limits further and producing new drone applications to serve the general public. UAV design is a cross-disciplinary field involving many technologies: electronics, mechanics, computer science, telecommunications, etc. Drones are composed of hardware (sensors, actuators, processors) and software, where their intelligence resides.

Related Work
There are two trends in teaching robotics contents at university: subjects focused on industrial robots [1][2][3][4][5][6][7] and subjects based on mobile robots [8][9][10][11][12][13]. In the first case, manipulation control techniques, inverse kinematics, or trajectory calculations are usually addressed. In the second case, the techniques generally explained are local and global navigation, position control (avoiding obstacles), perception, self-location, etc. This dichotomy of contents is also reflected in the robotic platforms that are used in the exercises.
Both types of robotic subjects have a notably practical bias. Student interaction with robots facilitates the learning and assimilation of the theoretical concepts, algorithms, and techniques [14]. Exercises promote the paradigm of learning through doing, that is, active learning. They usually occur within a specific robotic framework, using a teaching tool or a teaching suite where robot behavior is programmed using a particular language. A great diversity of hardware platforms is used to support the experimental component of robotics courses.
Beyond being a widespread platform in robotics education for pre-university students, LEGO MINDSTORMS robots such as NXT or EV3 have also been used in many university courses [15][16][17].
As recent examples, LEGO robots are used to teach motion planning [18], physical manipulation [19], and control systems [20] in real robots. These works combine LEGO robots and MATLAB.
According to Esposito [21], MATLAB and C language remain the dominant choices in programming environments. A minority uses free, open-source packages such as the robot operating system (ROS) or OpenCV.

Distance Learning in Robotics
Digitalization is changing how knowledge is created, consumed, and taught, opening new educational possibilities such as e-learning, where universities increasingly offer massive open online courses (MOOCs). Universities such as Stanford, MIT, and Harvard have been leading initiatives in this direction since 2011. E-learning allows reaching a mass audience, gaining visibility, and capturing talent. This movement has also influenced the robotics field. There is an increasing number of robotic courses of this style [22], such as Artificial Intelligence for Robotics [23] from Stanford University (Udacity, Sebastian Thrun) or Autonomous Mobile Robots [24] from ETH Zurich. Another example is the MOOC on the Art of Grasping and Manipulation in Robotics [25] from the University of Siena.
Virtual [6,8,14] and remote laboratories are frequent in the literature. For instance, the SyRoTek teaching platform [26] provides web access to 13 small real mobile robots operating in an enclosed space, the arena. This remote laboratory was used by more than 80 students from the Czech Republic and Argentina in exercises about robot exploration, navigation, and map building. It uses the Player/Stage framework and ROS interfaces. Another interesting example of a remote laboratory is [27], where several physical robots, such as CoroBot, VEX, and Pioneer, and two robotic arms, Nao's arm and an RTX Scara, were programmed or commanded from a distance. It was successfully used in an undergraduate course on cyber-physical systems.
Another distance framework for robotic arms is RobUALab [6,14], programmed in Java and web technologies, which allows the teleoperation, experimentation, and programming of a manipulator. It provides a remote laboratory with a web 3D viewer connected to a real Scorbot ER-IX arm located at Alicante University. It also includes a virtual laboratory with an industrial cell built using augmented reality, including an equivalent simulated robotic arm and as many obstacles as desired. Using this, students can test different configurations, evaluate both Cartesian and joint-space trajectories, and program the arm with command lists.
This trend in web-based and practical educational frameworks also appears in other related fields. For instance, Google Colaboratory (https://colab.research.google.com) provides a simple-to-use infrastructure where students can learn deep learning using Jupyter notebooks from their web browser, which are run on Google servers. No installation by the students is required.
In robotics, this is complemented by using robotic simulators through a web interface. For example, one notable online teaching initiative is Robot Ignite Academy from TheConstruct [28]. It is based on ROS, the Gazebo simulator, and Jupyter, and provides several short courses. Another is RobotBenchmark (https://robotbenchmark.net) from Cyberbotics. Based on the Webots simulator, it is open and provides twelve Python exercises with several robots, such as the Thymio II and the Nao humanoid.
Another important ROS-based initiative is Robot Programming Network [29][30][31]. This action extends existing remote robot laboratories with the flexibility and power to write ROS code in a web browser and run it in the robot on the server-side with a single click. It uses Moodle, VirtualBox, HTML, and Robot Web Tools as its underlying infrastructure.
Recent Internet technologies open new possibilities for distance learning in robotics. For instance, in ROSLab [32], ROS has been combined with JupyterLab and Docker container technology to integrate documentation and robotics software. JupyterLab is very useful for teaching purposes, as lessons or exercises may be delivered as Jupyter notebooks. These notebooks are programmed and run locally by students, inside a container, without any dependency problems and on any operating system. Another relevant example is cloud computing. For instance, Amazon RoboMaker (https://aws.amazon.com/robomaker/) [33] enables cloud robotics, simulating and deploying robotic applications at cloud scale, which may then be available to distance students. It provides an integrated development environment, a simulation service, and fleet management capabilities.

Drone Programming Education
Beyond the courses based on industrial robots and mobile robots, there is an increasing number of drone/aerial robotics courses in higher education. Additionally, they also appear in the MOOC platforms. For instance, EdX offers the "Autonomous Navigation for Flying Robots" course [34], provided by the Technische Universität München. It is mainly focused on the camera-based navigation of a quadcopter [35,36], explaining all the underlying theoretical concepts such as linear algebra, 2D and 3D geometry, motor controllers, probabilistic state estimation, and visual odometry. It uses ROS as middleware and Python exercises. Outside the technical scope, EdX also offers the "Drones and Autonomous Systems" course [37], mainly focused on fundamentals and applications.
Coursera offers the "Aerial Robotics" course [38], provided by the University of Pennsylvania. It introduces 2D and 3D geometry and the mechanics of flight in micro aerial vehicles, and teaches how to develop dynamic models, derive controllers, and synthesize planners for operating in three-dimensional environments. It uses MATLAB as the programming middleware.
Udacity, a spin-out of Stanford University focused on online education, offers the "Flying Car and Autonomous Flight Engineering" nanodegree [39]. It emphasizes 3D motion planning, control algorithms, and the required estimation techniques, such as extended Kalman filters. For instance, it teaches how to fly a drone around a complex urban environment, load a real city map, plan a collision-free path between buildings, and watch the drone fly above city streets. It uses C++ as the programming language.
The Udemy platform offers the "Drone Programming Primer for Software Development" course [40]. It teaches the open-source flight stack based on the ArduPilot flight controller, MAVLink, and the DroneKit middleware. This course uses the ArduPilot software-in-the-loop (SITL) simulator, which simulates an ArduPilot-based autopilot and communicates with it using MAVLink over the local IP network. The drone is modeled as an object in a Python script, allowing the student to command a real or simulated drone using Python. DroneKit has been used in many drone-based research papers [41,42]. This open-source flight stack transcends its hobbyist roots and is branching out into business applications at a high rate. Additionally, some drone-based tutorials have begun to appear in outstanding conferences. For instance, the "Drone Vision and Deep Learning" tutorial [43] by Ioannis Pitas, at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2020), focuses on advanced computer vision techniques to be run onboard UAVs. These techniques deal with semantic world mapping, detection of targets/obstacles/crowds/points of interest, and 2D/3D target tracking. Moreover, they facilitate drone autonomy, as drone vision plays a pivotal role in drone perception/control while coping with the limited computing resources available onboard.
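To illustrate the programming style that such a DroneKit/SITL stack promotes, the following minimal sketch (not part of the course described above) connects to a locally running SITL instance, arms the vehicle, and commands a guided takeoff. The connection string, port, and target altitude are illustrative assumptions.

```python
# Minimal DroneKit sketch: connect to a local ArduPilot SITL instance,
# arm the vehicle, and take off in GUIDED mode. Connection string and
# target altitude are illustrative values, not course-specific settings.
import time
from dronekit import connect, VehicleMode

vehicle = connect("udp:127.0.0.1:14550", wait_ready=True)

vehicle.mode = VehicleMode("GUIDED")
vehicle.armed = True
while not vehicle.armed:          # wait until the autopilot confirms arming
    time.sleep(1)

TARGET_ALT = 10                   # meters
vehicle.simple_takeoff(TARGET_ALT)
while vehicle.location.global_relative_frame.alt < 0.95 * TARGET_ALT:
    time.sleep(1)                 # wait until the drone is near the target altitude

vehicle.mode = VehicleMode("LAND")
vehicle.close()
```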

RoboticsAcademy Learning Environment
RoboticsAcademy (http://jderobot.github.io/RoboticsAcademy/) is an open-access platform containing a collection of self-contained exercises to learn robotics and computer vision practically, oriented to engineering education. The proposed activities (http://jderobot.github.io/RoboticsAcademy/exercises/) cover a wide range of topics, including autonomous cars, mobile robots, industrial robotics, and drones. As an evolution from its initial design [44], RoboticsAcademy is periodically updated with new exercises and functionalities. Some of the exercises provided by the RoboticsAcademy educational platform are depicted in Figure 1. In each exercise, the student is challenged with a specific problem, which is expected to be solved by programming standard robotics algorithms (e.g., perception, planning, and control) to provide the robot with the necessary intelligence to complete the assigned task. Students shall program their solutions using the Python programming language, selected due to its gradual learning curve and growing popularity in robotics [45] and other fields.
The platform provides a simple application programming interface (API) to grant high-level access to robot sensors and actuators for each exercise. This simplicity allows students to focus on the algorithms' coding rather than battling with time-consuming hardware management details. From an architectural point of view, each exercise is divided into three different layers (see Figure 2):

1. Hardware layer: represents the robot itself, being a real robot or a simulated one.
2. Middleware layer: includes the software drivers required to read information from robot sensors and control robot actuators.
3. Application layer: contains the student's code (algorithms implemented by the student to solve the proposed problem) and a template (internal platform functions that provide two simple APIs to access the robot hardware and take care of graphical issues for students' code debugging purposes).

Hardware Layer
As opposed to other robotics teaching platforms [11], the exercises proposed in RoboticsAcademy include a wide variety of robots (drones, mobile robots with wheels, cars, humanoids, industrial manipulators) in real or simulated arrangements. Likewise, practical activities with popular sensors such as lasers, GPS and RGB-D cameras (Kinect, Xtion) can be performed.
The internal design of RoboticsAcademy allows the same student's code to be run in a physical robot (when available) or its simulated counterpart, with only minor configuration changes. For the 3D simulation of the different robots and environments, the open-source Gazebo simulator [46] has been used. The Gazebo simulator is widely recognized among the international robotics community.
Two different flight controllers are commonly used with drones: ArduPilot and PX4. They implement the low-level control and stabilization of the flying robot and enable the execution of flight missions as sequences of waypoints. They run onboard the drones themselves, communicating with ground control stations or controlling applications. Regarding the hardware of the presented course, PX4 is the flight controller supported in RoboticsAcademy, both on real drones and in the Gazebo simulator.

Middleware Layer
ROS [47] is the adopted middleware in RoboticsAcademy. ROS is the de facto standard middleware, with the largest community of users in the robotics field. It provides a comprehensive collection of drivers for a wide variety of robots, helping to shorten the programming development time and increase the robustness of final applications due to code reuse. It supports many programming languages, including Python. The current version used in RoboticsAcademy is ROS-Melodic.
Concerning the middleware needed in the presented course, the MAVLink protocol [48] connects the student's application with the drone flight controller. ROS supports MAVLink through its MAVROS package. For onboard drone cameras, regular ROS image drivers are used. In RoboticsAcademy, MAVROS has been extended to support speed regulation of the flight controller. In this way, the drone programming interface in RoboticsAcademy provides both the classic high-level waypoint missions and mid-level speed commands to rule drone movements. Speed commands allow finer-grained control of the drone, such as that required in visual control applications for flying robots.
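As a rough illustration of what such mid-level speed commands look like at the ROS level, the following rospy sketch publishes a constant velocity setpoint on the standard MAVROS topic. It assumes MAVROS is running under its default namespace and that the drone is already armed and in a mode that accepts velocity setpoints (e.g., OFFBOARD on PX4); in the course itself this plumbing is hidden behind the HAL API.

```python
#!/usr/bin/env python
# Minimal sketch: publish a constant velocity setpoint through MAVROS.
# Assumes the default /mavros namespace and a drone already armed and in a
# mode that accepts velocity setpoints (e.g., OFFBOARD on PX4).
import rospy
from geometry_msgs.msg import TwistStamped

rospy.init_node("speed_command_example")
pub = rospy.Publisher("/mavros/setpoint_velocity/cmd_vel",
                      TwistStamped, queue_size=1)

rate = rospy.Rate(20)            # PX4 expects a steady setpoint stream (>2 Hz)
cmd = TwistStamped()
cmd.twist.linear.x = 0.5         # 0.5 m/s forward
cmd.twist.angular.z = 0.1        # gentle yaw rate (rad/s)

while not rospy.is_shutdown():
    cmd.header.stamp = rospy.Time.now()
    pub.publish(cmd)
    rate.sleep()
```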

Application Layer
The application layer comprises the student's code and the exercise template, which acts as a container of the student's code and provides the following services:
• Hardware abstraction layer (HAL) API: a simple API to obtain processed data from robot sensors and command robot actuators at a high level.
• Graphical user interface (GUI) API: used for checking and debugging the student's code; it shows sensory data or partial processing in a visual way (for example, the images from the robot camera or the results of applying a color filter).
• Timing skeleton for implementing the reactive robot behavior: a continuous loop of iterations, executed several times per second. Each iteration performs four steps: collecting sensory data, processing them, deciding actions, and sending orders to the actuators.
The application layer is implemented as a ROS node. The template provided in each node supports access to the robot's sensors and actuators using the appropriate ROS methods and communication mechanisms (topics, subscriptions, publishing, etc.). The hardware complexity and details are hidden to grant a simple local API in Python for the student (HAL API), employing rospy (a pure Python client library for ROS). Figure 3 shows a typical execution of an exercise using a simulated robot.
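A conceptual sketch of how a student's solution typically fills this template loop is shown below. Only drone.set_cmd_vel() follows the signature documented later in this section; the image getter and GUI call are hypothetical placeholder names, since the complete HAL and GUI method lists are not reproduced here.

```python
# Conceptual sketch of a student's solution inside the exercise template.
# drone.set_cmd_vel() uses the signature documented in this section;
# get_frontal_image() and show_image() are hypothetical placeholders.
import time

def detect_target(image):
    """The student's own perception code would go here (e.g., a color filter)."""
    return False

def execute(drone, gui):
    while True:
        image = drone.get_frontal_image()        # 1. collect sensory data
        target_found = detect_target(image)      # 2. process them
        vx = 0.5 if target_found else 0.0        # 3. decide actions
        drone.set_cmd_vel(vx, 0.0, 0.0, 0.2)     # 4. send orders to the actuators
        gui.show_image(image)                    # visual debugging
        time.sleep(0.05)                         # ~20 iterations per second
```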

The drone HAL API used in all the exercises of the presented course includes methods for getting sensor measurements, images, and the drone state. Additionally, the HAL API provides methods for drone control, including:
• drone.set_cmd_vel(vx, vy, vz, yaw_rate): commands the linear velocity of the drone in the x, y, and z directions (in m/s) and the yaw rate (rad/s) in its body-fixed frame.
The RoboticsAcademy native release is an open-source platform, free and simple to use. All of its underlying infrastructure (Python, Gazebo, ROS) is provided as regular and standard official Debian packages, so it can be installed easily on Ubuntu Linux computers. Additionally, the required underlying resources which are not contained in traditional Gazebo or ROS packages are created and stored in several Debian packages from the JdeRobot organization.
The native release runs on Ubuntu Linux machines. Docker images have been created and are maintained for Windows and macOS users so that they can use the platform through a Docker container on their computers.

Drone Programming Course
An open-access drone programming distance course is proposed in this article, based on the RoboticsAcademy platform. This course is freely available and ready to use by anyone from any place with Internet access. It includes both theoretical and practical contents (focusing on the "learning by doing" concept). Although some previous programming experience is recommended to ease the practical exercises, the course has no other relevant prerequisites. These exercises help students to consolidate theoretical concepts, focusing on programming skills for UAVs.

Course Syllabus
Drone autonomous applications, as ground robot applications, may be seen as the onboard combination of perception and control. The course presents the most common drone sensors and actuators. It also describes position-based and vision-based control techniques, which are the main objectives of the course. Additionally, it introduces map-based global navigation in conjunction with relevant self-localization algorithms, allowing drone 3D navigation even in indoor and GPS-denied environments.
The theoretical content of the course is divided into six different units, as follows:

Course Practical Exercises
The practical content of the course is divided into several exercises, enumerated in Table 1 below. Two drones are mainly used: the ArDrone (with a frontal camera) and the 3DRobotics Iris (with two cameras, frontal and ventral). They are run in simulation, but real drones may also be programmed using RoboticsAcademy if they are available to the students.

DR1  Navigation by position
DR2  Following an object on the ground
DR3  Following a road with visual control
DR4  Cat and mouse
DR5  Escape from a maze following visual clues
DR6  Searching for people to rescue within a perimeter
DR7  Escape from a hangar with moving obstacles
DR8  Land on a beacon
CV1  Color filter for object detection
CV2  Visual odometry for self-localization

Some of the most representative exercises are detailed below:
• Navigation by position (http://jderobot.github.io/RoboticsAcademy/exercises/Drones/position_control). This exercise aims to implement an autopilot by using the GPS sensor, the IMU, and a position-based PID controller. For this exercise, a simulated 3D world has been designed that contains the quadrotor and five beacons arranged in a cross. The objective is to program the drone to follow a predetermined route visiting the five waypoints in a given sequence, as shown in Figure 4. It illustrates the algorithms typically included in commercial autopilots such as ArduPilot or PX4.

• Following an object on the ground (http://jderobot.github.io/RoboticsAcademy/exercises/Drones/follow_turtlebot). In this exercise, the objective is to implement the logic that allows a quadrotor to follow a moving object on the ground (an autonomous robot named Turtlebot with a green-colored top, see Figure 5), using a primary color filter in the images and a vision-based PID controller (a minimal sketch of this kind of processing is given after this list). The drone keeps its altitude and moves only in a 2D plane.
• Landing on a moving car (http://jderobot.github.io/RoboticsAcademy/exercises/Drones/visual_lander). In this exercise, the student needs to combine pattern recognition and vision-based control to land on a predefined beacon, a four-square chess pattern on the roof of a moving car (see Figure 6). The required image processing is slightly more complicated than a simple color filter, as the beacon may be only partially seen, and its center is the relevant feature. Likewise, the controller needs to command the vertical movement of the drone.
• Escape from a maze using visual clues (http://jderobot.github.io/RoboticsAcademy/exercises/Drones/labyrinth_escape). In this exercise, the student needs to combine local navigation and computer vision algorithms to escape from a labyrinth with the aid of visual clues. The clues are green arrows placed on the ground, indicating the direction to be followed (see Figure 7). Pattern recognition in real time is the focus here, as fast detection is essential for the drone.
• Searching for people to rescue within a perimeter (http://jderobot.github.io/RoboticsAcademy/exercises/Drones/rescue_people). The objective of this exercise is to implement the logic of a global navigation algorithm to sweep a specific area systematically and efficiently (foraging algorithm), in conjunction with visual face-recognition techniques, to report the location of people for subsequent rescue (Figure 8). The drone behavior is typically implemented as a finite state machine, with several states such as go-to-the-perimeter, explore-inside-the-perimeter, or go-back-home.
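As referenced in the "Following an object on the ground" item above, the following is a minimal sketch of the kind of perception-and-control step such exercises require: an HSV color filter locates the green top of the Turtlebot, and a proportional controller turns the pixel error into the vx and vy arguments of drone.set_cmd_vel(). The color thresholds, gain, camera choice, and sign conventions are illustrative assumptions, not the reference solution.

```python
# Illustrative perception-and-control step for a "follow the object" behavior:
# HSV color filter + centroid + proportional control. Thresholds, gain, and
# sign conventions are placeholder assumptions, not the reference solution.
import cv2
import numpy as np

KP = 0.005  # proportional gain (m/s per pixel of error)

def velocity_from_image(image):
    """Return (vx, vy) that steers the drone towards a green target."""
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([40, 50, 50]), np.array([80, 255, 255]))
    moments = cv2.moments(mask)
    if moments["m00"] == 0:                 # target not visible: hover
        return 0.0, 0.0
    cx = moments["m10"] / moments["m00"]    # centroid column (pixels)
    cy = moments["m01"] / moments["m00"]    # centroid row (pixels)
    h, w = mask.shape
    # Pixel error with respect to the image center (ventral camera assumed).
    vx = -KP * (cy - h / 2.0)               # move forward/backward
    vy = -KP * (cx - w / 2.0)               # move left/right
    return vx, vy

# Inside the exercise loop (sketch, hypothetical getter name):
#   vx, vy = velocity_from_image(drone.get_ventral_image())
#   drone.set_cmd_vel(vx, vy, 0.0, 0.0)
```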

Lessons Learned
The proposed course has already been taught successfully to students from several engineering degrees at Rey Juan Carlos University, testing the correct functioning of the platform and validating the exercises' content over five academic years. The course syllabus fits the usual curricula of engineering degrees, as well as the European higher education framework.
The course's objective is to introduce students to state-of-the-art drone programming: the essential components of drones (sensors and actuators), the fundamental problems of drone programming (navigation, flight control, localization, etc.), and the most successful techniques and algorithms for addressing them. The drones' mechanical and electronic aspects have been considered, with particular emphasis placed on their programming algorithms, since this is where their intelligence resides (for example, reactive control architectures to generate autonomous behavior). Computer vision in drones (based on artificial intelligence), a growing trend in drone applications, is also included.
Our impression is that using RoboticsAcademy in this course was very motivating for the students, taking as a reference the same subject taught in the 2017/2018 edition, when RoboticsAcademy was not yet used. Most comments extracted from student surveys indicate that the students remarkably welcomed a platform that facilitates distance drone programming. For instance, Figure 9 shows their evaluation of the practice environment in the 2018/2019 course in a survey with 21 students: 57.1% of the students had an excellent opinion of it, and 42.9% of them were more than satisfied.

As a result of the lessons learned during the first editions of the course, both the platform and the course itself have evolved substantially, currently integrating the following benefits:

• Easy and time-efficient installation, allowing the students to start programming drones earlier: installation previously consumed at least two weeks to prepare the programming environment, as it was based on Linux packages and virtual machines. With the current framework release, installation recipes use binary Debian packages (for students with a Linux operating system) and Docker containers (for students with Windows or macOS). These Docker containers come with the environment already pre-installed and are themselves installed quickly. In the 2018/2019 course, 85.7% of students used RoboticsAcademy in native Linux, while 14.3% used Microsoft Windows with Docker containers (Figure 10). As reported in the surveys, the user experience was pleasant on both systems.
• The same platform is used for programming simulated and real drones: previously, students had to learn different tools to program different drones (both simulated and real), leaving less time to learn programming and algorithms.
• Distance learning: the course was designed to be taught remotely; therefore, any student could learn and practice from home, anytime.
• The forum proved to be a valuable tool for communication and doubt resolution.
All the details of MAVROS, MAVLink, and PX4 have been hidden from students, who use the simple drone HAL API described in Section 3.3 for drone programming. In this way, students may focus on navigation and control algorithms, which are the course's core objective. Therefore, this practice environment is not suitable for learning the aforementioned tools, which can also be useful in drone engineering.


Drone Programming Competitions
Gamification is the term commonly used to refer to game-based learning systems. In these systems, specific aspects found in games, such as making competitions, establishing rankings and scoreboards, and giving rewards or badges, are implemented in the classroom to make the learning process more attractive to the student [49].
It is already known that gamified learning in higher education (and in engineering degrees in particular) enhances students' engagement and motivation and improves their attendance ratios and grades [50,51]. Positive reinforcement in both on-site and distance classrooms is promoted when managing the student load using challenges of progressive difficulty and gamified step-by-step tasks [52]. Additionally, gamification provides instant student feedback and grading, proven to help formative self-assessment and auto evaluation [53].
Through stepwise updates, RoboticsAcademy has been redesigned to support gamified learning in computer vision and robotics, including:
• Competitive exercises. Several exercises in the present collection are based on completing a task in a given time while competing against the fastest or most complete solution, increasing the student's engagement.
• Social interactions between students, teachers, and developers, promoted through a dedicated RoboticsAcademy web forum (https://developers.unibotics.org/) and several accounts on prime social media platforms since 2015, such as a YouTube channel (https://www.youtube.com/channel/UCgmUgpircYAv_QhLQziHJOQ) with more than 300 videos and 40,000 single visits, and a Twitter account (https://twitter.com/jderobot) with more than 200 tweets.
Beyond being used in university courses, as explained in Section 4.3, RoboticsAcademy and its drone exercises have also been used as a convenient tool for organizing both on-site and distance competitions in robotics, particularly within our Program-A-Robot event series. This championship was born in 2016 as a competition to foster robotics and computer vision, proposing challenging problems to the participants and offering the possibility to measure and automatically rank each solution against those of fellow competitors.
The targets of the first edition were undergraduate engineering students at Rey Juan Carlos University (https://jderobot.github.io/activities/competitions/2016). The second edition was organized as a national event inside the Spanish Jornadas Nacionales de Robotica (https://jderobot.github.io/activities/competitions/2017) in 2017. The third event was a distance international competition aimed at undergraduate, graduate, or Ph.D. engineering students worldwide. Its final round was held as part of the International Conference on Intelligent Robots and Systems (IROS 2018) (https://jderobot.github.io/activities/competitions/2018), which took place in Madrid in October 2018 and was live-streamed on YouTube.
Focusing on this last competition, two different challenges related to programming the autonomous intelligence of vision-assisted aerial robots were proposed.
In the first challenge, cat and mouse, a quadcopter called the "cat drone" had to be programmed to search for, chase, and stay at a given distance from another quadcopter, the "mouse drone" (see Figure 11).
The organizing committee programmed several different autonomous mouse drones to allow for three qualifying sessions, starting from more straightforward movement behaviors and transitioning to more complex ones. Some of them were released for training. An automatic assessment tool (referee) was also developed to score each 2 min round systematically. The referee measured the instantaneous distance from the cat drone to the mouse, showing its evolution along the whole game on the screen. When the distance was below a certain threshold (close-threshold), both the progress line and the background were painted green; when it was above, they were painted red. As long as the cat drone was near the mouse, the score increased. It was forbidden to touch the mouse, and each contact was penalized in the final round score.
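The scoring rule can be summarized with the following illustrative reconstruction; the close-threshold value, the sampling period, and the contact penalty are assumptions made for the sake of the example, not the official referee parameters.

```python
# Illustrative reconstruction of the cat-and-mouse scoring rule described above.
# The threshold, sampling period, and penalty value are assumptions,
# not the official referee parameters.
CLOSE_THRESHOLD = 2.0    # meters: "near the mouse"
CONTACT_PENALTY = 5.0    # points subtracted per contact
SAMPLE_PERIOD = 0.1      # seconds between distance samples

def score_round(distances, contacts):
    """distances: cat-mouse distances sampled over the 2 min round.
    contacts: number of times the cat touched the mouse."""
    time_near = sum(SAMPLE_PERIOD for d in distances if d < CLOSE_THRESHOLD)
    return time_near - CONTACT_PENALTY * contacts

# Example: 1 min spent at 1.5 m, 1 min at 4.0 m, with one contact.
print(score_round([1.5] * 600 + [4.0] * 600, contacts=1))   # 60.0 - 5.0 = 55.0
```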
In the second challenge, named escape from the hangar, a single aerial robot had to take off inside the hangar and find its way out in less than 1 min while avoiding collisions with six inner walls, which moved in a random pattern to increase the difficulty (see Figure 12). Again, an automatic referee was built to measure how many seconds each aerial robot took to leave the hangar. The initial score started from 60 points and decreased during the time each robot remained inside the hangar. It was forbidden to touch the walls, and each contact further reduced the score. Additionally, the drone's starting point was changed during the competition, so pure position-based control was not a proper strategy.
The video of the final round of the IROS 2018 Program-A-Robot competition is available online (https://www.youtube.com/watch?v=_0ZkciOHnmU). It shows the performance of the uploaded solutions of the best three participants in each challenge, together with their automatic evaluations for each 2 min round. In summary, it was an excellent acid test for the capability of RoboticsAcademy as an online platform for holding, among others, distance competitions in robotics and computer vision.
More than 40 students participated in the competitions (Figure 13). Their feedback was very positive in all three editions, which encouraged us to scale up in scope from a local competition to a national one, and finally to an international one. For instance, "Thank you for organizing this drone programming competition; this type of enterprise is really cool", from one participant in the 2017 edition, is an illustrative example. Another participant in the IROS 2018 edition even remained engaged with RoboticsAcademy after the competition and developed a new exercise for it, the "Visual Odometry with RGBD Camera" exercise (https://jderobot.github.io/RoboticsAcademy/exercises/ComputerVision/visual_odometry), as an open-source contribution to the project.

Summary and Conclusions
The course presented in this article is a free educational tool for teaching drone programming in higher engineering education, particularly suitable for distance learning. It comprises several independent exercises that mimic real-life practical drone applications, including visual landing, search-and-rescue missions, and road following, in real or simulated arrangements. These exercises focus on perception, planning, and control algorithms, emphasizing state-of-the-art topics such as computer vision and artificial intelligence. All activities are proposed to the students with a practical "learning by doing" approach.
Regarding the RoboticsAcademy platform, its main features are, in summary:
• Each exercise is divided into three layers: hardware, middleware, and application. This internal design makes it easier to run the same student's code on physical and simulated robots with only minor configuration changes. The open-source Gazebo simulator, widely recognized in the robotics community, has been used for the 3D simulation of all robots, sensors, and environments.
• The middleware layer is ROS-based, the de facto standard in service robotics, which supports many programming languages, including Python. Previously, RoboticsAcademy was separated from this widely used middleware, needing frequent maintenance and having little scalability and a small number of users (only those who already had a thorough background with the tool).
• The application layer contains the student's code. It includes a hardware abstraction layer (HAL) API to grant students high-level access to the robot sensors and actuators, and a graphical user interface (GUI) API to show sensor data or processed images and to debug code.
• RoboticsAcademy runs natively on Ubuntu Linux machines and can be installed using official Debian packages. Docker containers allow easy installation on Windows and macOS computers, making RoboticsAcademy particularly suited to the distance learning approach.
An open-access full distance course covering diverse drone programming aspects has been proposed, aimed at engineering students. Its syllabus comprises six academic units, ranging from sensors and actuators to navigation and computer vision. The course includes practical content, all covered with RoboticsAcademy exercises. It was successfully tested and validated with students of several engineering degrees at Rey Juan Carlos University over five academic years. In this course, any student could learn and practice from home, anytime. End-of-course surveys showed that using the platform was helpful, productive, and motivating for them.
RoboticsAcademy also includes additional functionalities such as automatic assessment tools and competitive goals, making it a well-adapted tool for gamified learning strategies. It has also proven to be a convenient platform for organizing online robotics competitions. The Program-A-Robot annual championship was born in 2016 and has already celebrated its third edition using RoboticsAcademy, proposing competitive challenges about programming the autonomous intelligence of different vision-assisted mobile robots. Future lines of work for enhancing RoboticsAcademy include developing more exercises, expanding the present full course, and proposing different specific learning paths to reach a broader range of educational stages. Furthermore, an in-depth experimental analysis is planned to provide quantitative evidence about the user experience, creating a complete questionnaire and collecting information from more students. It will provide a more solid methodological basis.