UAV-Based Smart Educational Mechatronics System Using a MoCap Laboratory and Hardware-in-the-Loop

Within Industry 4.0, drones appear as intelligent devices that have brought a new range of innovative applications to the industrial sector. The required knowledge and skills to manage and appropriate these technological devices are not being developed in most universities. This paper presents an unmanned aerial vehicle (UAV)-based smart educational mechatronics system that makes use of a motion capture (MoCap) laboratory and hardware-in-the-loop (HIL) to teach UAV knowledge and skills, within the Educational Mechatronics Conceptual Framework (EMCF). The macro-process learning construction of the EMCF includes concrete, graphic, and abstract levels. The system comprises a DJI Phantom 4, a MoCap laboratory giving the drone location, a Simulink drone model, and an embedded system for performing the HIL simulation. The smart educational mechatronics system strengthens the assimilation of the UAV waypoint navigation concept and the capacity for drone flight since it permits the validation of the physical drone model and testing of the trajectory tracking control. Moreover, it opens up a new range of possibilities in terms of knowledge construction through best practices, activities, and tasks, enriching the university courses.


Introduction
The next era of the industrial revolution is a reality, and many companies are integrating the concepts of Industry 4.0 into their processes. Industry 4.0 proposes the digitalization of companies through Artificial Intelligence (AI) and the Internet of Things (IoT). Incorporating new technologies from Information and Communications Technology (ICT) within the industrial environment has been changing business models as we know them. It is worth mentioning that the present work is based on [1]. In particular, a case study in developing the mechatronic concept of drone navigation by waypoints is presented, as it represents one of the significant challenges in the robotics and autonomous vehicle research fields. The proposal is based on a motion capture system to retrieve the drone's state during the development of the pedagogical experiments. The MoCap system permits obtaining measurements of the inertial position and attitude of the vehicle. This information is then used as input for the overall educational framework levels and their instructional design. Moreover, a HIL simulation is used to validate the physical UAV waypoint navigation. Compared to the reported scientific literature, the novelty of this work lies in the combination of educational tools such as the drone, the MoCap system, and the HIL simulation to construct UAV knowledge and skills in students within the EMCF.
The rest of the document is organized as follows: Section 2 describes the EMCF. Then, the materials and methods applied during the proposed activities are defined in Section 3. After this, the proposed instructional design and its levels are described in Section 4. Finally, a discussion and the main conclusions on the results of the presented work are outlined in Sections 5 and 6, respectively.

Educational Mechatronics Conceptual Framework
The EMCF aims to guide teachers in designing, implementing, and evaluating pedagogical activities to develop mechatronic thinking in students. The latter is understood as the capacity for designing and implementing production systems [20] under the principle of interdisciplinary collaboration. In addition, it is important to understand the concept of multidisciplinary provision of knowledge [21] in a flexible way [22,23], considering the high-level intelligence hierarchy as the backbone of the mechatronic system [24,25]. Educational mechatronics is intended to allow the student to understand the abstract concepts on which the applications we call mechatronics are built. Students will thus be able to face the speed of growth and exponential change of Industry 4.0, responding to the megatrends of the manufacturing industry and advanced manufacturing processes, and focusing on the development, application, or integration of a set of enablers and technologies in order to generate impact [26].
The EMCF is structured into three reference perspectives, process, application, and artifact [27], as shown in Figure 1. The first perspective is oriented to mechatronics' basic concepts as a process. The second perspective comprises all the applications (subdisciplines) built from the basic mechatronics concepts. Finally, the artifact perspective is oriented to obtaining artifacts related to the process and application construction. The macro-process learning construction of the EMCF is based on the structured teaching methodologies proposed by [28,29]. Figure 2 shows the three learning levels: concrete, graphic, and abstract. The first level involves the process of real object manipulation and experiences [30,31]. The second level relates the elements of reality (concrete level) to graphic or symbolic elements, enabling students to integrate this knowledge as a skill [32]. Finally, the third level represents the highest level of abstraction and focuses on learning outside of reality.

Materials and Methods
The materials and methods comprising the UAV-based smart educational mechatronics system are chosen based on the mechatronic prototypes and existing academic spaces at the Universidad del Valle de México: DJI Phantom 4, a MoCap laboratory giving the drone location, a Simulink drone model, and an electronic board for performing the HIL simulation. Moreover, the proposed instructional design is aligned with the EMCF.

DJI Phantom 4
The DJI Phantom 4 is a quadcopter equipped with a collision-avoidance system, called the Obstacle Sensing System, which uses two forward-facing cameras to detect obstacles as far as 49.5 ft (15 m) ahead of the drone. The drone mainly comes with a remote controller, camera, and gimbal (see Figure 3). It is worth mentioning that the drone flight phases involve takeoff, operational flight, and landing.
• Takeoff: this is the phase where the drone accelerates from zero speed to the speed necessary to rise to a certain altitude, at which point the takeoff is considered finished.
• Operational flight: in this phase, the drone can hover (hold a stationary position in the air) and maneuver in flight, where mixed movements to the left, right, forward, backward, up, and down are possible.
• Landing: this is the phase where the drone approaches the destination and the landing gear makes contact with the runway while the motors decelerate until reaching zero speed.
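The phase sequence above can be sketched as a small state machine. This is an illustrative sketch only; the altitude threshold and the `descending` cue are assumptions made for the exercise, not part of the Phantom 4 firmware or any DJI API.

```python
from enum import Enum, auto

class FlightPhase(Enum):
    """Flight phases described in the text, plus idle/landed endpoints."""
    IDLE = auto()
    TAKEOFF = auto()
    OPERATIONAL = auto()
    LANDING = auto()
    LANDED = auto()

def next_phase(phase, altitude, target_altitude=1.0, descending=False):
    """Advance the flight-phase machine from simple altitude cues.

    `target_altitude` and `descending` are hypothetical inputs used
    only for illustration.
    """
    if phase is FlightPhase.IDLE:
        return FlightPhase.TAKEOFF
    if phase is FlightPhase.TAKEOFF and altitude >= target_altitude:
        return FlightPhase.OPERATIONAL
    if phase is FlightPhase.OPERATIONAL and descending:
        return FlightPhase.LANDING
    if phase is FlightPhase.LANDING and altitude <= 0.0:
        return FlightPhase.LANDED
    return phase
```

Stepping the machine with the cues of a nominal flight walks it through takeoff, operational flight, and landing in order.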

Motion Capture System
The MoCap system installed in the Universidad del Valle de México is shown in Figure 4; this is a marker-based system that consists of eight cameras, a PoE switch, a server computer running the Vicon Tracker software, an active calibration wand, and reflective markers. To work with the MoCap system, first, it is necessary to locate all cameras properly in 3D space; then, the Vicon hardware must be calibrated. To do so, turn on the PoE switch and the server computer, and open the Vicon Tracker program. Then, select the "SYSTEM" tab and select the eight cameras. Go to the "CALIBRATE" tab and click "START". One person must take the active wand, turn it on with the solid red LEDs, go to the MoCap system workspace, and then start moving the wand in different directions and orientations in front of each camera. Once the process is finished, the Vicon Tracker software will report the calibration results; if everything is green, the process was carried out satisfactorily; otherwise, it will have to be done again. Finally, the active wand must be placed where the origin of the MoCap workspace is to be established (see Figure 5).
Next, to continue the setup and have the drone working in the MoCap system, the markers are attached to the drone frame, as shown in Figure 4, and an object representing the drone must be created using the Vicon Tracker software. Finally, the measurements from the MoCap system are collected with Simulink, a MATLAB-based graphical programming environment for modeling, simulating, and analyzing multidomain dynamical systems. In this case, the drone's 2D position and orientation graphs and a 3D graph of its absolute position are displayed to the participant on the TV monitor. It is worth mentioning that the 50-inch TV monitor plays a key role in the instructional design based on the EMCF.

Simulation Model of the Quadrotor in Simulink
The quadrotor dynamical model is the result of analyzing the gyroscopic effects on the rigid structure of the multirotor due to the thrust forces generated by four rotating propellers. These propellers are attached to the axes of four brushless DC motors. The whole dynamics of the aerial robot involve two main reference frames, the earth-fixed frame and body-fixed frame, whose origins are located in the origin defined by the wand in the MoCap system (see Figure 5) and the center of mass of the quadrotor defined by the markers in the drone with an offset (see Figure 4), respectively.
The absolute pose of the quadrotor must be expressed in the earth frame; it is composed of the Euclidean 3D position X_E = [x, y, z]^T and the attitude Θ = [φ, θ, ψ]^T, represented by the Euler roll, pitch, and yaw angles.
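As a companion to the pose definition, the following is a minimal sketch of the body-to-earth rotation matrix built from the Euler angles, assuming the common ZYX (roll-pitch-yaw) convention; the exact convention used by the model should be taken from [34].

```python
import math

def rotation_matrix(phi, theta, psi):
    """Body-to-earth rotation R = Rz(psi) @ Ry(theta) @ Rx(phi)
    for roll phi, pitch theta, yaw psi (ZYX convention, assumed)."""
    cf, sf = math.cos(phi), math.sin(phi)
    ct, st = math.cos(theta), math.sin(theta)
    cp, sp = math.cos(psi), math.sin(psi)
    return [
        [cp * ct, cp * st * sf - sp * cf, cp * st * cf + sp * sf],
        [sp * ct, sp * st * sf + cp * cf, sp * st * cf - cp * sf],
        [-st,     ct * sf,                ct * cf],
    ]
```

At zero attitude the matrix reduces to the identity, and for any angles it remains orthonormal, which is a quick sanity check when wiring it into a simulation.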
The dynamics of a quadrotor can be written in state-space form by defining the state vector X = [x, ẋ, y, ẏ, z, ż, φ, φ̇, θ, θ̇, ψ, ψ̇]^T = [x_1, x_2, ..., x_12]^T; the model is described by the following differential equations:

ẍ = (cos φ sin θ cos ψ + sin φ sin ψ) U_1/m + A_2
ÿ = (cos φ sin θ sin ψ − sin φ cos ψ) U_1/m + A_4
z̈ = −g + (cos φ cos θ) U_1/m + A_6
φ̈ = θ̇ψ̇ (I_y − I_z)/I_x − (J_r/I_x) θ̇ω + (d/I_x) U_2 + A_8
θ̈ = φ̇ψ̇ (I_z − I_x)/I_y + (J_r/I_y) φ̇ω + (d/I_y) U_3 + A_10
ψ̈ = φ̇θ̇ (I_x − I_y)/I_z + (1/I_z) U_4 + A_12

where A_2, A_4, A_6, A_8, A_10, A_12 are unknown but bounded perturbations; I_x, I_y, I_z are inertial terms; J_r is the rotor inertia; m is the mass of the drone; g is the gravitational acceleration; and ω = −ω_1 + ω_2 − ω_3 + ω_4, with ω_i the angular speed of the i-th rotor. Moreover, the input vector U = (U_1, U_2, U_3, U_4)^T is composed of

U_1 = b(ω_1^2 + ω_2^2 + ω_3^2 + ω_4^2), U_2 = b(ω_4^2 − ω_2^2), U_3 = b(ω_3^2 − ω_1^2), U_4 = c(−ω_1^2 + ω_2^2 − ω_3^2 + ω_4^2),

with b as the thrust factor, so that b ω_i^2 is the thrust generated by each rotor. Moreover, d is the distance from the center of mass to each rotor, and c is the drag factor. For a more comprehensive analysis of the modeling process, please refer to [34]. The Simulink model of the quadrotor mathematical model is shown in Figure 6. Moreover, the control subsystem block comprising the drone flight controller can be seen in Figure 6. For this trajectory tracking controller, the control objective is to design the control inputs U_1, U_2, U_3, U_4 such that the system's outputs x_1, x_3, x_5, x_11 track the desired references x_1r(t), x_3r(t), x_5r(t), x_11r(t). Figure 7 depicts the complete trajectory tracking control involving the position and rotational control. The position control has the reference position drone variables x_1r(t), x_3r(t), x_5r(t) as inputs. It generates the desired variables x_1d and x_3d, which serve as inputs for the rotational control along with the reference for the yaw angle x_11r. The complete control input vector is the output of this block.
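A minimal numerical sketch of the state-space model is given below, with the bounded perturbations A_i set to zero and illustrative physical parameters (not identified Phantom 4 values); it evaluates the twelve state derivatives and advances them with a forward Euler step.

```python
import math

# Illustrative parameters; NOT identified Phantom 4 values.
m, g = 1.0, 9.81                          # mass [kg], gravity [m/s^2]
Ix, Iy, Iz = 8.1e-3, 8.1e-3, 14.2e-3      # inertia terms [kg m^2]
Jr, d = 1.0e-4, 0.24                      # rotor inertia [kg m^2], arm length [m]

def f(X, U, omega):
    """Right-hand side of the 12-state quadrotor model with A_i = 0.
    X = [x, dx, y, dy, z, dz, phi, dphi, theta, dtheta, psi, dpsi]."""
    _, x2, _, x4, _, x6, x7, x8, x9, x10, x11, x12 = X
    U1, U2, U3, U4 = U
    cf, sf = math.cos(x7), math.sin(x7)     # roll
    ct, st = math.cos(x9), math.sin(x9)     # pitch
    cp, sp = math.cos(x11), math.sin(x11)   # yaw
    return [
        x2,  (cf * st * cp + sf * sp) * U1 / m,                      # x, x''
        x4,  (cf * st * sp - sf * cp) * U1 / m,                      # y, y''
        x6,  -g + cf * ct * U1 / m,                                  # z, z''
        x8,  x10 * x12 * (Iy - Iz) / Ix - Jr / Ix * x10 * omega + d / Ix * U2,
        x10, x8 * x12 * (Iz - Ix) / Iy + Jr / Iy * x8 * omega + d / Iy * U3,
        x12, x8 * x10 * (Ix - Iy) / Iz + U4 / Iz,
    ]

def euler_step(X, U, omega, dt=1e-3):
    """One forward-Euler integration step of the model."""
    return [xi + dt * dxi for xi, dxi in zip(X, f(X, U, omega))]
```

A quick sanity check is hover: at level attitude with U_1 = mg and U_2 = U_3 = U_4 = 0, every state derivative vanishes and the drone stays put.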

HIL Simulation in Simulink
The HIL simulation is commonly used to test controller design. It shows the controller's response in real time to realistic virtual stimuli. In addition, the HIL simulation can also be used to validate a physical system (plant) model.
In this HIL simulation, a real-time computer is used for the virtual representation of the UAV plant model and an embedded system as a real version of the UAV flight controller (see Figure 8). The embedded system (development hardware) is the RDDRONE-FMUK66 vehicle/flight management unit (FMU), which is supported by the business-friendly open source PX4.org (accessed on 1 July 2022) flight stack. It is worth mentioning that the embedded system is part of the NXP HoverGames drone kit (KIT-HGDRONEK66). The proposed HIL architecture is shown in Figure 9. HIL testing simulates the drone variables collected by the sensors and the reference signals and sends them to the FMU being tested, making it believe that it is reacting to real-world flight conditions. The HIL simulation contains all the relevant components of the drone. The HIL simulation approach supports the verification and validation activities.

Instructional Design for Drone Flight Basics within the EMCF
The quadrotor is an aerial robot useful when dealing with several concepts such as translation, rotation, line segmentation, and path planning, among other topics. This work considers the teaching case for which the instructional design is devoted to constructing the mechatronic concept of drone navigation by waypoints under the EMCF, involving the perspective entities: Dynamics (process) + Robotics (Application) + Drone (Artifact). Then, the pedagogical activities for the three levels with the selected perspective are developed in the following subsections. It is worthwhile to mention that the three basic movements when starting drone flight considered in this work are: 1. Forward-backward movement; 2. Plus sign movement; 3. Square array movement.
To start the practice, the instructor turns on the MoCap system, places the drone at the origin of the MoCap workspace with zero Euler angles, and turns on the drone and its remote controller. Then, they start tracking the drone object with Vicon Tracker and open Simulink to start plotting the 3D graph for the participant.

Concrete Level (First Learning Construction Level)
In this level, one must design activities oriented to perceptuo-motor characteristics.
Here, a drone, the DJI Phantom 4, is chosen in order to provide the participant with the experience of flying a drone, starting with a real flight in a real environment. The designed activities are possible thanks to the remote controller the drone comes with from the factory, which provides manual velocity control. If the participant does not move the remote controller sticks, the drone will remain in the same place, and it will only move when the sticks are moved in some direction.
First, the flight plan for the first movement is given to the participant (see Figure 10a); it includes the state diagram showing the sequence in which the pilot must reach each waypoint.
Then, the instructor must start recording the position and orientation data. The set of instructions for participants is the following; it is worthwhile to mention that the drone starts in its home position P_0 = (x, y, z) = (0, 0, 0).

• The takeoff phase involves two steps: turning the motors on and elevating the drone to a specific altitude:
1. Raise the left stick up slowly to take off until the drone reaches approximately 1 m; then, return the left stick to its center position slowly. Note: the left stick controls height (up-down) and heading (left-right). The drone reaches the waypoint P_1 = (0, 0, 1).
• The operational flight phase involves three steps: move forward, move backward, and repeat the process:
2. Raise the right stick up slowly to move the drone forward until it reaches approximately 2 m; then, return the right stick to its center position slowly. The drone reaches the waypoint P_2 = (0, 2, 1).
3. Lower the right stick down slowly to move the drone backward until it reaches approximately −2 m; then, return the right stick to its center position slowly.
4. Repeat the process twice and return to the center position, P_1 = (0, 0, 1), where the previous phase started; the landing phase can now begin. Note: the right stick controls forward, backward, left, and right movements.
• The landing phase involves one step:
5. Lower the left stick down slowly until the drone touches the ground and hold it for a few seconds to stop the motors. The drone then reaches its home position P_0 = (0, 0, 0) again.
(Instruction remark: the instructor stops recording the data. The MoCap system records the position and orientation measurements of the drone in an Excel file. This file contains the set of points that capture the real movement of the drone and can be found at https://acortar.link/Rb54SB (accessed on 1 July 2022).)
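The recorded file can then be post-processed programmatically. The column layout in the sketch below is an assumption made for illustration; the actual Excel export at the link above may use different column names and units.

```python
import csv, io

# Hypothetical export layout: one row per sample with time [s],
# position x, y, z [m], and roll, pitch, yaw [rad].
SAMPLE = """t,x,y,z,roll,pitch,yaw
0.0,0,0,0,0,0,0
3.5,0,0,1.02,0,0,0
6.0,0,1.98,1.01,0,0,0
"""

def load_trajectory(text):
    """Parse the recorded samples into (t, x, y, z) tuples."""
    rows = csv.DictReader(io.StringIO(text))
    return [(float(r["t"]), float(r["x"]), float(r["y"]), float(r["z"]))
            for r in rows]

def max_altitude(traj):
    """Highest z value reached during the flight."""
    return max(z for _, _, _, z in traj)
```

For the sample above, the maximum recorded altitude is slightly above the 1 m target, which is the kind of deviation the participant can discuss at the graphic level.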
Once the pilot finishes the first movement, he/she continues with the second and third movements. The flight plans, including the state diagram showing the sequence in which the pilot must reach each waypoint for these movements, are shown in Figure 10b,c, respectively.
In addition, Figure 11 shows the pilot performing the flights in the MoCap laboratory. Moreover, the video showing the drone pilot performing the flights according to the plans and the instructions in this level can be found at https://acortar.link/yE0nKw (accessed on 1 July 2022).

Graphic Level (Second Learning Construction Level)
In this level, one must design activities oriented to the graphic (symbolic) representation of the mechatronic concept, taking as a reference the concept previously developed at the concrete learning level; this allows the transition from concrete to abstract to be made gradually. The Excel file containing the recorded data and a Simulink program are given to the participant to plot the data. In addition, this level allows dynamic color changes of the virtual images (such as circles or squares in a dynamic manner), but without allowing further movement of the drone [35].
The set of instructions for participants is as follows.
• The takeoff phase involves two positions:
1. Draw an orange vertical dotted line in the position vector plot to represent the drone home position P_0, at time t = 0 s, and label it at the top of the graph.
2. Then, draw a blue round-dotted line representing the drone reference waypoint P_1r, at which the z position starts to increase, at time t = 3.5 s. Label it at the bottom of the graph.
The resulting graph when applying the graphic level is shown in Figure 12a. Once the pilot finishes the first movement, he/she continues with the second and third movements. The obtained graphs for these movements are shown in Figure 12b,c, respectively.

Abstract Level (Third Learning Construction Level)
This level involves designing activities oriented towards gradually transitioning from the graphic (symbolic) concepts to a more abstract representation. The drone navigation for the first movement defined by waypoints can be seen in Figure 12a. Reference waypoints appear at the bottom of the graph, which will be used in the simulation in Simulink to test the mathematical model and the trajectory control of the drone.
The set of instructions for the participant is as follows.
• Build the pseudocode of the waypoint generation for the 1st movement. First, define as inputs all the reference waypoints that appear in Figure 12a, i.e., P_0r = (0, 0, 0), P_1r = (0, 0, 1), P_2r = (0, 2, 1), P_3r = (0, −2, 1). Then, define the output as the waypoint reference vector wp_r = (x_r, y_r, z_r).
• Now, introduce a conditional if clause: from time greater than or equal to 0 s until the first reference waypoint P_1r occurs, i.e., t = 3.5 s, the reference waypoint vector is equal to P_0r, the drone's home position.
• If time is greater than or equal to 3.5 s and before the second reference waypoint P_2r occurs, i.e., t = 6 s, the reference waypoint vector is equal to P_1r.
• If time is greater than or equal to 6 s and before the third reference waypoint P_3r occurs, i.e., t = 9 s, the reference waypoint vector is equal to P_2r.
• If time is greater than or equal to 9 s and before the fourth reference waypoint P_2r occurs, i.e., t = 16 s, the reference waypoint vector is equal to P_3r.
• If time is greater than or equal to 16 s and before the fifth reference waypoint P_3r occurs, i.e., t = 21 s, the reference waypoint vector is equal to P_2r.
• If time is greater than or equal to 21 s and before the sixth reference waypoint P_1r occurs, i.e., t = 26 s, the reference waypoint vector is equal to P_3r.
• If time is greater than or equal to 26 s and before the seventh reference waypoint P_0r occurs, i.e., t = 29 s, the reference waypoint vector is equal to P_1r.
• Finally, in the else branch, the eighth reference waypoint P_0r applies.
(Instruction remark: the reference waypoint vector establishes the desired drone trajectory, which encompasses the desired waypoints in space that the drone needs to go through. It is worth mentioning that the actual waypoints P_i, i = 0, 1, 2, 3, are reached after the corresponding reference waypoint vector is supplied to the controller in the simulation.)
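The conditional structure described in the steps above can be sketched directly in code. The document's simulation uses a MATLAB implementation; Python is used here only for illustration, with the switching times taken from the text.

```python
# Reference waypoints for the 1st movement (from the text).
P0r = (0.0, 0.0, 0.0)
P1r = (0.0, 0.0, 1.0)
P2r = (0.0, 2.0, 1.0)
P3r = (0.0, -2.0, 1.0)

def waypoint_reference(t):
    """Piecewise-constant waypoint reference wp_r(t) for the 1st movement,
    switching at t = 3.5, 6, 9, 16, 21, 26, and 29 s."""
    if 0.0 <= t < 3.5:
        return P0r
    if 3.5 <= t < 6.0:
        return P1r
    if 6.0 <= t < 9.0:
        return P2r
    if 9.0 <= t < 16.0:
        return P3r
    if 16.0 <= t < 21.0:
        return P2r
    if 21.0 <= t < 26.0:
        return P3r
    if 26.0 <= t < 29.0:
        return P1r
    return P0r  # else branch: back at home
```

Sampling this function over time reproduces the staircase reference that the trajectory tracking controller follows.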
The complete pseudocode for the reference waypoint vector can be seen in Algorithm 1. Then, this pseudocode is programmed in a MATLAB file inside the waypoint reference block of the simulation in Figure 6. It is worth mentioning that the obtained behavior, shown in Figure 13, is similar to the graph in Figure 12; the participant can thus see the importance of the mathematical model for future work. Once the pilot finishes the first movement, he/she continues with the second and third movements. The obtained pseudocodes for these movements are shown in Algorithms 2 and 3, respectively. Moreover, the HIL simulation strengthens the assimilation of the UAV flight dynamics since it permits the interaction between a simulated model of the UAV and the digital implementation of its automatic control algorithm (see Algorithm 4 and Figure 7).
The obtained behavior is similar to the real flight and the simulation model of the UAV. Figure 14 depicts the overall proposed HIL simulation results.

Algorithm 2 Waypoint generation for the 2nd movement
Input: Waypoints: P_0r = (0, 0, 0), P_1r = (0, 0, 1), P_2r = (0, 2, 1), P_3r = (2, 0, 1), P_4r = (−2, 0, 1), P_5r = (0, −2, 1). Output: Waypoint reference vector: wp_r = (x_r, y_r, z_r).
This instructional design is devoted to boosting the development and construction of the mechatronic concept of drone navigation by waypoints. Here, a waypoint is an intermediate point on a drone's route or line of travel. Drone navigation by waypoints allows a drone to fly with its flying points preplanned; thus, we know exactly where the drone needs to go: it flies directly to its first point and proceeds to the next point until the preplanned sequence is completed. It is worthwhile to mention that developing the knowledge, skills, and attitudes of the new personnel responsible for the new jobs generated by Industry 4.0 is key, since new robot configurations, controllers, sensors, and devices will be required. Therefore, this drone flight educational mechatronics system based on the EMCF is oriented towards helping in this imminent technological transition.

Discussion
The application of UAVs in this work is not a coincidence, as recent studies present the need to integrate unmanned aerial vehicle (UAV) training into STEAM education [36,37]. UAVs have been widely used in the science, technology, engineering, arts, and mathematics (STEAM) areas, exposing students to a wide range of uses in the STEAM fields and teaching them a set of valuable skills and abilities, as in the work in [9].
As mentioned in the Introduction, hardware-in-the-loop (HIL) simulation is an essential approach in the fields of autonomous vehicles and robotics. It is widely used in the automotive industry, among others. HIL helps to accelerate the design and testing phases in engineering. This work has integrated a complete solution, the UAV-based smart educational mechatronics system using a MoCap laboratory and HIL, within an educational framework, namely the EMCF. This system represents a great advantage to the academic and industrial environment as students can quickly appropriate and integrate new technologies within extensively used mechatronic concepts. The developed system opens up a new range of possibilities in terms of knowledge construction through instructional designs for practices, activities, and tasks, enriching the university courses. Table 1 shows the context in which this work is developed and the trend toward including new and more educational tools in pursuing new educational experiences.
The market for robots and drones has increased exponentially in recent years. This will have a significant influence in the long term due to the transformation of industry features in many areas, such as agriculture, logistics, cleaning, and more. The market is estimated to grow by nearly 3 and 7 times in the next 10 and 20 years, respectively [38].
Drones/UAVs have seen many improvements in a number of aspects, such as geometric structure, flying mechanism, sensing and vision ability, aviation quality, path planning, intelligent behavior, and adaptability [39]. All these features are essential to understanding the importance of subsequent development. In particular, robot navigation remains a fundamental topic within the robotics research field. Although technological advances allow us to reduce the learning curve and the appropriation of these new technologies, it is crucial to increase the levels of abstraction, to more closely resemble how humans navigate and perform tasks in different environments. The presented instructional design represents a useful tool to introduce the actors of the engineering world to the technological transition of Industry 4.0. However, some important stages addressed in most current technological developments are still missing: model-based design and testing of algorithms, simulation alternatives, and digital implementations. These fields represent opportunities for future work on the current instructional design to enhance its capabilities, applicability, and versatility. On the other hand, comparing other educational methodologies and their respective performance and results in real-time experiments would also be valuable.

Conclusions
The instructional design for drone navigation by waypoints is intended to better prepare students for drone flight to acquire the necessary knowledge for using these smart devices for applications in Industry 4.0. The developed UAV-based smart educational mechatronics system using a MoCap laboratory and HIL represents an effort toward an educational concept focused on equipping students with the knowledge and skills required to meet the new demands of companies.
The system's main features include the use of all the components within the EMCF, including the HIL simulation. Its main functionalities are validating a physical drone model and testing existing or new control algorithms without putting the real drone at risk. These allow the development of UAV knowledge and skills.
We consider it vital to disseminate educational mechatronics to help countries to transform into relevant actors in the fourth industrial revolution. In future work, this instructional design is planned to be applied to engineering students. However, the pandemic has prevented us from implementing it; we hope that this will change shortly. Moreover, it is worthwhile to mention that evaluating the flights' performance and considering sensors for outdoor environments with obstacles are considered the next steps.