A Low-Cost Immersive Virtual Reality System for Teaching Robotic Manipulators Programming

Abstract: Laboratory tasks are a powerful pedagogical strategy for developing competences in science and engineering degrees, helping students understand in a practical way the theoretical topics explained in the classroom. However, performing experiments in real conditions is usually expensive in terms of time, money and energy, as it requires costly infrastructures that are generally difficult to maintain in good condition. To overcome this problem, virtual reality has proven to be a powerful tool for achieving sustainability, making it easy to update laboratories without the need to acquire new equipment. Moreover, the ability to introduce practical knowledge into classrooms without leaving them makes virtual laboratories capable of simulating typical operating environments as well as extreme situations in the operation of different devices. A typical subject in which students can benefit from the use of virtual laboratories is robotics. In this work we develop an immersive virtual reality (VR) pedagogical simulator of industrial robotic arms for engineering students. With the proposed system, students can observe the effects of their own designed trajectories on several different robotic arms and cell environments without having to buy all of them and without risking damage to the cell components. The simulation checks for collisions between the elements in the scene and alerts the student when they happen. This could be achieved with a conventional robotic simulator, but the integration with immersive VR is intended to help students better understand robotics. Moreover, even when a real robotic arm is available to students, with the proposed VR method every student has the opportunity to manage and learn from their own version of the robotic cell, without the waiting times generated by having fewer robotic arms than students in the classroom.


Introduction
Virtual reality technologies have long been available to support the teaching-learning process, as reported in the educational literature [1]. With newer, cheaper and more efficient technologies, their use has become widespread at all educational levels. Specifically, Merchant et al. [2] present a meta-analysis on the effectiveness of virtual reality-based learning tools on the performance of high school and university students: a set of 69 studies covering more than 8000 students was meta-analyzed. The conclusions of this research show an improvement in learning, with individual game-based resources providing the best results.
Although the advantages of this technology are numerous, there are also difficulties in its use. Petrakou [3] clarifies that, to obtain greater benefits, students should be more familiar with virtual world environments and improve their technical skills; in addition, it would be desirable to overcome the technical problems associated with these computer-generated environments. The results of Dalgarno et al. [4] agree with Petrakou that the use of immersive 3D virtual worlds requires levels of instructional and technical support that make them difficult to use.
The usefulness of VR in design and engineering is beyond doubt. The use of this type of tool fosters collaborative and interdisciplinary work, helps in the understanding of complex concepts and allows working in environments and conditions that, for various reasons, are not easy to recreate in reality [5][6][7][8][9][10][11][12][13][14].
In the same way, the usefulness that AR/VR environments are showing in the world of engineering has been transferred to university engineering education, where there are diverse works on virtual reality. Miyata [15] proposes the formation of interdisciplinary groups to promote creativity when developing VR applications. Divergent thinking is encouraged as one of the essential stages of the process. The evaluation of the teaching-learning process is carried out by means of a 13-dimension questionnaire that evaluates the collaborative process, together with a 5-dimension questionnaire oriented to the evaluation of the individual process. Precisely this creative side is addressed in other more recent works [16,17]. Despite its relevance in autonomous learning and in the capacities of future engineers and designers, the enhancement of this quality by virtual learning environments is not sufficiently contrasted. In fact, the majority of works focus on assessing the quality of immersion and interaction with the environment, as well as the convenience of its use [18]. Aspects such as the promotion of imagination and creativity usually remain in the background, especially since they are difficult to measure objectively [19].
Several research projects have been carried out to develop methodological proposals that can be applied in different areas of higher education, including those related to engineering education. Thus, Bell and Fogler [20] set out a series of recommendations to adapt educational methods to students based on the teaching-learning styles of the instructor and the student, which often do not coincide. Following this scheme, a Chemical Engineering course was designed, supported by the Vicher (Virtual Chemical Reaction Module) virtual reality environment. The objective of the course is to guide the teaching of chemical reactions. After the experience, the quality and effectiveness of these technologies in the teaching-learning process was evaluated. The results of the students who used the VR module were significantly better, showing a better acquisition of the evaluated competences. In addition, much of the benefit of the technology was motivational, since these students stated that the VR tool had enhanced their learning. Similarly, Hashemipour [21] presented a module-based virtual reality environment oriented to the field of mechanical engineering and industry. The evaluation of the usability of the system followed the SUMI method of Kirakowski [22], and five categories specific to virtual reality were added: clear entry and exit points, sense of presence, navigation and orientation, faithful viewpoints, and realistic feedback. Along the same lines, Sutcliffe and Gault [23] proposed a series of criteria for evaluating the usability of a VR application. The twelve criteria, which must be scored by the users of the environment, are fundamentally oriented to the user interface in each of its aspects.
Other experiences in the university world have been based on interdisciplinary projects [24]. In particular, Häfner [25] proposes the design of a course to learn how to create virtual worlds by proposing industrial projects among students of different degrees and promoting the development of diverse skills. The results, as concluded from the analysis of the projects carried out and a series of questionnaires, showed an improvement in the quality of learning.
The main objective of this study was to design, build and test a low-cost immersive virtual reality system [26,27] to enhance the teaching of robotics in engineering. The work focuses specifically on cells containing robotic arms. Immersion, real-time operation, low cost, the absence of delays and the educational approach were the factors taken into account in the proposed design.
This article is organized as follows: Section 2 gives a background on the software used in the literature and a brief insight into the issues present when teaching robotics in engineering courses. Section 3 describes the design and implementation of the proposed simulation software and how the connection between this program and the VR glasses is performed. In Section 4, a prototype of the proposed method is created and described, together with the experiments performed to ensure the correctness of the method. Conclusions are drawn in Section 5.

Background
With the rise of Industry 4.0, more students choose robotics because they want to be prepared for the near future. Robotics is a branch of engineering with a high degree of difficulty, which requires learning to be appropriate and conscientious. From an educational point of view, several problems exist when dealing with courses on this topic:

• Cost of robotic arms.
• Alternate use among students.
• Space required by robots.
• Courses not requiring attendance.
• Different robot models.
• Difficulty of evaluating students' programs.
• Safety problems due to inexperienced students.
Robotic arms are expensive pieces of hardware, as are other related components such as tools, safety barriers or controllers. Having a working and maintained robotic arm in an educational institution may require excessive funds. This cost limits the number of robots available to students. Another limiting factor is the large space required by these robots. Even if the institution has one or two robotic arms, the number of students will be higher, making it impossible for each student to use a robot exclusively. If the course does not require attendance, the impact of this problem is increased, as no student has an industrial robotic arm available at home.
There exist different robotic arm manufacturers, and each one has several models. In teaching, it is interesting to cover as many different options as possible, but this is very difficult to achieve, as it requires having different robotic arms available. This also makes it difficult to evaluate the robot programs generated by the students, as the teacher needs to test them on the corresponding robot.
Students are in the middle of a learning process, and this can lead to safety problems. Robotic arms are heavy machines with fast moving parts [28], and inexperienced students may harm objects and humans in the surroundings. Simulation helps to minimize the impact of the problems described above. There exist robotic toolboxes widely used in education [29,30]. MATLAB is a common tool used to develop such toolboxes and other custom educational resources used in engineering courses [31][32][33][34], but these are non-immersive and lack real-time rendering, as MATLAB is an interpreted language not designed to provide fast 3D rendering capabilities, and they tend to be used to teach only fundamental concepts of robotics. There is also simulation software that can be used for teaching [35][36][37][38][39]. The toolboxes and simulators found in the literature can be used to teach robotics, but they are very limited compared to industrial robotic simulation software with advanced capabilities such as collision detection, an easy-to-use cell design interface, trajectory visualisation and checking, connection to a physical robot, and generation of robot programs that work on the real robot. With such capabilities, students not only understand the insights of robotics better, but are also more prepared for work outside the academic world. These tools also lack immersive capabilities that may be useful in educational environments. However, with the rise of immersive virtual reality in recent years [40][41][42], it is interesting to consider its use to enhance the quality of teaching in the field of robotics. The use of this technology enhances the 3D spatial vision of the student and creates an immersive experience that helps the user to better understand the insights of robotics.
Given the lack of studies demonstrating the benefits of robotic simulators in immersive virtual reality scenarios for education, this research presents the implementation and testing of a robotic simulator that offers immersion and requires only low-cost devices. This industrial robotics simulation software has advanced features that, together with the immersive experience, will allow students to better assimilate the knowledge of robotics. In addition, they will be better prepared to work outside the academic world [43], where they will find similar simulation software and robotic arms.

Implementation
The proposed method consists of a custom simulator software specific for robotic arms and a cheap virtual reality cardboard device attached to a mobile phone to stream 3D rendered video from the computer and send head orientation information to the simulator. Interconnection of these parts is shown in Figure 1. The following subsections will explain each one of those parts in detail.

Simulation Software
Simulation in robotics allows users to design, visualize, monitor, perform safety checks and plan paths, among other things. It is also a powerful tool in areas such as research and development [44]. For this reason, custom simulation software for robotic arms has been developed [45] to allow users to design and test robotic cell environments in virtual scenarios. The simulation software has been designed to be as close as possible in capabilities to real industrial robotic simulation software, so that students have a better idea of how robotics works in real-life scenarios.
The simulator uses OpenGL [46] to render the models and the robotic arm of the scene in a three-dimensional environment.
In order to simulate the movement of the robotic arm, forward kinematics [47] was used. It allows the simulator to know the world position and orientation of each joint when rotations are applied to them locally.
The standard Denavit-Hartenberg (DH) convention [48] has been used to describe the kinematic chain of the robotic arms in the simulator. The parameters of the DH table vary for each model and manufacturer. Each DH matrix ${}^{i-1}T_i$ converts from the coordinate system $i-1$ of a joint into system $i$. The forward kinematics $T$ of the robotic arm are obtained by the successive multiplication of those matrices, as shown in Equation (1), allowing the transformation from an arbitrary joint's coordinate system into another:

$$T = {}^{0}T_1 \; {}^{1}T_2 \cdots {}^{n-1}T_n \qquad (1)$$
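As a minimal sketch of this kinematic chain, the following Python snippet builds the standard DH matrix for one joint and chains the per-joint matrices to obtain the end-effector pose. The two-link planar arm at the end is a hypothetical example for illustration, not one of the robot models in the paper.

```python
import math

def dh_matrix(theta, d, a, alpha):
    """Standard Denavit-Hartenberg transform from joint frame i-1 to frame i."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def matmul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(dh_rows):
    """Chain the per-joint DH matrices: T = T1 * T2 * ... * Tn."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for row in dh_rows:
        T = matmul(T, dh_matrix(*row))
    return T

# Hypothetical 2-link planar arm: both links 1 m long, joints at 0 and 90 deg.
T = forward_kinematics([(0.0, 0.0, 1.0, 0.0),
                        (math.pi / 2, 0.0, 1.0, 0.0)])
x, y, z = T[0][3], T[1][3], T[2][3]   # end-effector position in world frame
```

The last column of the accumulated matrix gives the world position of the end effector, which is exactly the information the simulator needs to pose each joint's model in the 3D scene.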
Software uses the value of the angles of robot joints to reproduce the actual pose of the robotic arm. It applies the described kinematic equations to obtain the pose. This allows users to apply all transformations allowed to the models and camera to freely modify and inspect the scene from any viewpoint.
Inverse kinematics has been used to allow the simulator to obtain the joint angles that move the robotic arm to a specific position and orientation [49]. This is needed to allow students to simulate trajectories or to check whether a position and orientation is reachable by the selected robotic arm.
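To illustrate the idea, here is a closed-form inverse kinematics sketch for a hypothetical planar two-link arm (the solver for a full 6-axis industrial arm [49] is considerably more involved); returning `None` when the target lies outside the workspace mirrors the reachability check described above.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form IK for a planar 2-link arm (elbow-down solution).

    Returns (theta1, theta2) in radians, or None if (x, y) is unreachable.
    Link lengths l1, l2 are hypothetical, not tied to any real robot model.
    """
    d2 = x * x + y * y                         # squared distance to target
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(c2) > 1.0:
        return None                            # target outside the workspace
    t2 = math.acos(c2)                         # elbow angle
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2),
                                       l1 + l2 * math.cos(t2))
    return t1, t2

# A reachable target returns joint angles; an unreachable one returns None.
reachable = two_link_ik(1.0, 1.0, 1.0, 1.0)
unreachable = two_link_ik(3.0, 0.0, 1.0, 1.0)
```

In the simulator this kind of check is what lets the software warn the student that a programmed trajectory point cannot be reached before any motion is simulated.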
Collision checks between simulated entities inside the virtual environment are detected with the ODE (Open Dynamics Engine) library [50]. There exist several methods for performing rigid-body collision checking; however, due to the real-time characteristics of the proposed VR simulation model, a combination of AABBs (Axis Aligned Bounding Boxes) and OBBs (Oriented Bounding Boxes) has been selected [51]. The selected method performs a fast check with AABBs and, if there is a possibility of a collision inside the Axis Aligned Bounding Box, a finer check is performed with OBBs to enhance the precision of the detections [52]. This two-step method of collision detection is intended to speed up detections and fulfil the real-time requirement. Other methods can be used depending on the requirements [53,54].

Figure 2a shows an example of the described two-step collision detection method. The 3D bolt models are surrounded by an AABB, shown in blue. If needed, the algorithm performs a collision check against the OBB, marked in green. Figure 2b shows the collision detection system working in a real scenario, checking collisions of robot joints against environment objects. To simplify and enhance visibility, this figure shows only OBBs, although the previous checks against AABBs are performed. Red boxes show the objects involved in the collision, in this case both the tool of the robotic arm and the controller. As a result, a simulation software with an integrated GUI (Graphical User Interface) is obtained, as shown in Figure 3.
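The two-phase idea can be sketched in 2D (this is an illustrative sketch, not the paper's ODE-based 3D implementation): a cheap axis-aligned bounding-box rejection runs first, and the precise separating-axis test on the oriented boxes runs only when the broad phase reports an overlap. Box parameters (center, half-extents, rotation angle) are hypothetical.

```python
import math

def obb_corners(cx, cy, hx, hy, angle):
    """Corners of an oriented box given center, half-extents and rotation."""
    c, s = math.cos(angle), math.sin(angle)
    return [(cx + c * dx * hx - s * dy * hy, cy + s * dx * hx + c * dy * hy)
            for dx, dy in ((1, 1), (1, -1), (-1, -1), (-1, 1))]

def aabb(corners):
    """Axis-aligned bounding box (min_x, min_y, max_x, max_y) of a point set."""
    xs = [p[0] for p in corners]
    ys = [p[1] for p in corners]
    return min(xs), min(ys), max(xs), max(ys)

def aabb_overlap(a, b):
    """Cheap broad-phase interval test on both axes."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def sat_overlap(ca, cb):
    """Separating Axis Theorem on the edge normals of both rectangles."""
    for corners in (ca, cb):
        for i in range(4):
            ex = corners[(i + 1) % 4][0] - corners[i][0]
            ey = corners[(i + 1) % 4][1] - corners[i][1]
            ax, ay = -ey, ex                          # edge normal (axis)
            pa = [p[0] * ax + p[1] * ay for p in ca]  # project both boxes
            pb = [p[0] * ax + p[1] * ay for p in cb]
            if max(pa) < min(pb) or max(pb) < min(pa):
                return False                          # separating axis found
    return True

def collide(box_a, box_b):
    """Two-step check: fast AABB rejection, then precise OBB (SAT) test."""
    ca, cb = obb_corners(*box_a), obb_corners(*box_b)
    if not aabb_overlap(aabb(ca), aabb(cb)):          # broad phase
        return False
    return sat_overlap(ca, cb)                        # narrow phase
```

The broad phase discards most pairs with four comparisons, so the more expensive projection test runs only on candidates that might actually touch, which is the property that keeps the check inside the real-time budget.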
This GUI allows users to design and test virtual robotic environments in 3D without the need of real physical hardware. As this method is intended to be used for educational purposes, it allows institutions to teach robotics without purchasing expensive robotic arms. It also increases safety, as it avoids inexperienced students to handle expensive and potentially dangerous machines.

Virtual Reality
Virtual reality allows users to create an immersive experience, allowing to navigate and explore a computer generated world without being in it [55].
It has been an interesting topic in different sectors such as footwear [56], automotive [57] or healthcare [58]. Virtual reality combined with robotics has already been studied by other authors, for example to interactively program robotic arms [59,60]. There exist various ways to achieve immersive virtual reality. One especially low-cost method is to use cardboard glasses, as shown in Figure 4a. Figure 4b shows a detail of the bay used to insert a smartphone and treat its screen as the VR display. There exist more expensive options that do not need an additional phone to work, as they include extra hardware and a pair of small LCD screens, such as the Oculus Rift or HTC Vive glasses. As nearly everyone now owns a smartphone, the first option is the cheapest one.

The simulator originally used a symmetric frustum to calculate the perspective. However, in order to achieve the stereo effect, asymmetric frustums [61] have been used. Figure 5 shows the different frustums used for each eye. Vertices between the near/far planes that are closer than the green convergence line will appear with negative parallax, while further ones will appear with positive parallax. The simulator updates the position and orientation of every object in the virtual scene once per frame, and then renders the same scene to two horizontally arranged viewports, each corresponding to a separate eye with a different frustum.

In order to achieve a correct virtual reality experience, the orientation of the student's head needs to be sent to the simulator software. An application on the phone constantly sends IMU (Inertial Measurement Unit) data to the simulator using a UDP network socket over Wi-Fi. This information is then used by the software on the computer to obtain an approximation of the orientation of the user's head [62]. The obtained approximation is applied to the 3D camera of the scene, affecting the rendered viewpoint.
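The per-eye frustum shift can be sketched as follows. This is an illustrative off-axis projection in the usual style; the parameter names (`fov_y_deg`, `convergence`, `eye_sep`) are assumptions, and the returned left/right/bottom/top bounds would feed a glFrustum-style projection call together with the near and far planes.

```python
import math

def stereo_frustum(fov_y_deg, aspect, near, convergence, eye_sep, eye):
    """Asymmetric (off-axis) frustum bounds at the near plane for one eye.

    eye = -1 for the left eye, +1 for the right eye. Points at the
    `convergence` distance project with zero parallax on screen.
    """
    top = near * math.tan(math.radians(fov_y_deg) / 2.0)
    half_w = top * aspect
    # Horizontal shift that makes both view volumes meet at `convergence`.
    shift = eye * (eye_sep / 2.0) * near / convergence
    return -half_w - shift, half_w - shift, -top, top  # left, right, bottom, top

# Example: 65 mm interocular distance, convergence plane 5 m from the camera.
left_eye = stereo_frustum(60.0, 16 / 9, 0.1, 5.0, 0.065, eye=-1)
right_eye = stereo_frustum(60.0, 16 / 9, 0.1, 5.0, 0.065, eye=+1)
```

Both frustums have the same width and are mirror images of each other; only the horizontal bounds differ between the eyes, which is what produces the parallax described above.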
Jitter generated by the noise of the incoming IMU data stream received from the phone has been reduced to obtain smooth movements of the camera controlling the scene inside the simulation software. A complementary filter has been used to fuse the sensor data and obtain a smoothed representation of the noisy input [63]. Figure 6a,b show a comparison of raw and filtered roll and pitch values captured over a short period of time. The results show that a smooth capture of head movements and its translation to the virtual camera can be achieved with the proposed method, which is an important step towards giving students an immersive experience not present in other educational software for robotics.

The proposed simulation software has been adapted to include VR support. The output of the asymmetric frustums applied to two parallel viewports in the rendering process can be seen in Figure 7. The last step of the process is to send the rendered 3D output of the simulator software [64], with both asymmetric frustums, to the phone with the virtual reality glasses attached. This has been solved by using the streaming software present on NVIDIA devices and a client on the phone, both using the NVIDIA Gamestream protocol [65]. The rate of frames per second needs to be constant and as high as the rendering device can achieve. This allows users to have a smooth motion experience without stuttering or lag.
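A complementary filter of the kind described above fits in a few lines: the gyroscope rate is integrated (smooth but prone to drift) and blended with the absolute but noisy accelerometer angle. The blend factor of 0.98 is a typical illustrative value, not necessarily the one used in the paper's implementation.

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One filter step: trust the integrated gyro in the short term and
    the accelerometer angle in the long term."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Demo: a stationary head (zero gyro rate) whose accelerometer reads 10 deg
# of pitch; the filtered estimate converges smoothly to the accelerometer
# angle instead of jumping, which is what removes the camera jitter.
pitch = 0.0
for _ in range(500):
    pitch = complementary_filter(pitch, gyro_rate=0.0,
                                 accel_angle=10.0, dt=0.01)
```

Because each step is a single weighted sum, the filter adds negligible latency per frame, which matters for the real-time budget discussed later.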

Prototype Preparation
A scene has been created with the simulator to recreate a real robotic cell inside the virtual environment. Figure 8 shows both the real cell, on the left side, and the simulated one, on the right side. The real cell is used to apply glue to shoe soles fed by a rotary table. This process is used in the footwear industry to join the sole to the rest of the shoe and is normally performed by hand using brushes. This prototype is a test bench, but the proposed method can be used for other real-life scenarios.

Measuring Real-Time Capabilities
The experiment measured the total time elapsed between the moment the sensors on the phone capture a movement of the user's head and the moment the processed and rendered 3D image with dual viewports is sent back to the VR cardboard glasses. This is needed to ensure that no noticeable delay or stuttering is passed to the glasses, minimizing motion sickness [66].
Lag may come from any of the different parts that compose the proposed method. Even though Wi-Fi is used to connect the smartphone of the cardboard glasses to the computer running the simulator, the average measured latency of the network is 4 ms. On top of this value, the collision checking process and the rendering of all objects within the simulated scene for two viewports must be taken into account. With the rendering engine running at 60 frames per second, the maximum delay that the processing part on the computer can add is 16.66 ms. Finally, the rendered image is sent back to the smartphone, adding further delay. Figure 9 shows the total number of milliseconds spent from the start of the IMU data transfer until the rendered frame is received back on the smartphone over a period of time, taking into account network, streaming and simulation processing time. The experiment captured a sample of 200 steps to obtain the data shown in the graph, where the value of each step was calculated as the time to complete a full communication cycle from the phone to the simulator and back to the phone. The average value of 11 ms for the whole test scenario confirms that the proposed method can achieve the desired immersive experience without noticeable lag. This ensures that key contributions of the paper, such as real-time operation and immersion, are fulfilled and can be used to enhance the teaching of robotics. If the quality of the network is high, the delay of the IMU and stream messages should remain stable and low, maintaining the real-time capabilities of the method. Performance problems on the computer can negatively affect the number of images per second that the simulator can process, which leads to greater lag and a negative VR experience. The small variance of the resulting data is also affected by the varying amount of time needed to calculate collisions and the positions of the elements in the scene, depending on the complexity of the robot pose.
This is directly affected by each point of the trajectory, leading to lower computation times if a collision is detected at an early stage of the detection process, as the system does not continue checking for further collisions.
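The round-trip measurement described above can be sketched with a plain UDP echo loop. In this sketch the simulator side is replaced by a local echo thread, so the numbers reflect only loopback latency, not the full render-and-stream pipeline of the paper; function names and the payload are illustrative.

```python
import socket
import statistics
import threading
import time

def _udp_echo(sock):
    """Echo every datagram back to its sender until a 'stop' message arrives."""
    while True:
        data, addr = sock.recvfrom(1024)
        if data == b"stop":
            return
        sock.sendto(data, addr)

def measure_round_trips(n=50):
    """Return (mean, max) round-trip time in ms over a local UDP echo loop."""
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))               # let the OS pick a free port
    port = server.getsockname()[1]
    threading.Thread(target=_udp_echo, args=(server,), daemon=True).start()

    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.settimeout(2.0)
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        client.sendto(b"imu-sample", ("127.0.0.1", port))
        client.recvfrom(1024)                    # wait for the echoed datagram
        samples.append((time.perf_counter() - start) * 1000.0)
    client.sendto(b"stop", ("127.0.0.1", port))
    return statistics.mean(samples), max(samples)

mean_ms, max_ms = measure_round_trips(20)
```

In the actual experiment, each sample would additionally include the simulation, rendering and streaming time on the computer, which is how the reported 11 ms average cycle was obtained.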

Conclusions
The positive impact of virtual reality on the teaching-learning process has been reviewed in this paper, especially in areas where practical teaching is an important focus, such as engineering courses.
To take advantage of this technology, custom simulation software has been expanded and adapted to not only design, visualize, monitor, perform safety checks and path planning, but also to use immersive virtual reality as a tool to improve the quality of teaching in robotics technology courses. This simulator has been designed to be used in educational institutions such as high schools or universities. The proposed method makes use of currently available and easily accessible technologies, such as simulation, immersion, virtual reality, Wi-Fi technology or streaming.
Other approaches in the literature regarding VR in education are designed as non-realistic VR systems or are based on interpreted programming tools like MATLAB that do not scale in speed to deliver immersive real-time VR experiences. Others focus on emulating the movements of a single robot, without taking into account other cell elements, or lack advanced features such as collision detection or reachability checks of paths, as they are used to teach only fundamental concepts of robotics. Those drawbacks justify the design, construction and testing of the proposed method.
The impact of this method focuses on increasing quality and reducing costs in education of robotics. This has been achieved through the use of cardboard glasses, which are inexpensive and allow students to use their own smartphone to interact with the simulation from their computer. It also prevents educational institutions from spending large amounts of money and space to buy, maintain and store robotic arms and other expensive materials.
Our approach aims not only to provide an immersive experience for students to improve the quality of teaching, but also to enable students to interact with an industrial robotics simulator with advanced capabilities such as collision detection, an easy-to-use cell design interface, path visualization and checking, connection to the physical robot and the ability to generate robot programs that work on the real robot. In this way, students will not only better understand the concepts associated with robotics, but will also be better prepared to work outside the academic world.
Safety is another key point of the method, as the simulation reduces the chance of accidents caused by inexperienced students experimenting with heavy and dangerous robotic arms. The collision detection system included in the simulator is intended to detect and prevent those accidents. Students are able to design cells by trial and error without generating collisions with real-life cell elements.
Experimentation showed that the proposed method is able to meet the real-time needs of an immersive virtual reality system, to create an enhanced realistic experience to the students, avoiding stuttering and lag in the process.
Future work will aim at testing the proposed virtual reality robotic simulator in educational institutions with real students to measure the impact on their learning. The user studies required to analyse and confirm that the proposed method is applicable and useful in real-world educational environments are planned to start during the academic year 2018/19 in the Degree in Robotic Engineering of the University of Alicante.