Article

Web-Based 3D Virtual Environments Utilization in Primary and Secondary Education of Children with Multiple Impairments

Department of Computers and Informatics, Faculty of Electrical Engineering and Informatics, Technical University of Košice, 042 00 Košice, Slovakia
* Authors to whom correspondence should be addressed.
Electronics 2023, 12(13), 2792; https://doi.org/10.3390/electronics12132792
Submission received: 9 April 2023 / Revised: 17 June 2023 / Accepted: 19 June 2023 / Published: 24 June 2023

Abstract

The technological advances we are witnessing today have stimulated the creation of many 3D virtual environments for various purposes, from entertainment to industry to education. While the majority of these environments are perfectly suited to the healthy population, we should not forget about the impaired people living among us. Regarding children’s education, one may wonder how impaired children handle such environments. Do they find them usable and attractive? How well do they handle basic activities in 3D environments, including orientation and interaction with objects? The experiment presented in this article provides answers to these questions within a specific setup. It used a custom web application with several 3D virtual environments in a desktop virtual reality setting. The participants were 12 children, aged 8–14, with multiple impairments, predominantly hearing impairment, a borderline to mild degree of mental retardation, and inferior communication skills. Based on the results, gathered in the form of completion times and System Usability Scale questionnaire scores, the answers can be regarded as positive. The article also reports on a significant relation found between the completion times and the questionnaire scores. Future research directions, including those related to the Metaverse concept, are discussed as well.

1. Introduction

Thanks to the recent developments in computer hardware and software, we have observed a significantly increased use of three-dimensional (3D) virtual learning environments (VLE), which utilize virtual reality (VR) technologies on various levels—from a desktop VR, with standard input and output devices, to an immersive one, with VR headsets.
Three-dimensional VLE help teach students various topics, including anatomy [1], nursing [2], or binary counting [3]. They can also be used to form virtual engineering laboratories [4] and to train surgeons [5,6], firefighters [7], or construction workers [8]. In addition, the ones utilizing immersive VR can help students to perceive and understand certain parts of an image better by means of stereoscopic 3D pictures [9], to learn about nursing from 360-degree videos [10], or to program in pre-designed graphical environments in an interactive way [11]. While numerous studies have dealt with 3D VLE utilization for the healthy population, the number of those focusing on pupils or students with impairments or disabilities is limited. This is especially true for research measuring their performance in and assessment of 3D VLE, where, to the best of our knowledge, only a handful ([12,13,14,15,16]) of recent studies exist.
The experiment described in this article focused on the basic practical usability of 3D VLE in the education of children with multiple impairments. The experiment used a custom web application, named LIRKIS-SpecEdu-VRSchool, with several 3D VLE covering selected topics taught at elementary and secondary schools. Twelve children with multiple impairments, aged 8–14, participated. The impairments and disabilities included hearing impairment (all participants), borderline and mild degree of mental retardation, and inferior communication skills. Within this limited setting, the experiment tried to find answers to the following research questions:
RQ.1 
Given written instructions, are the children with multiple impairments able to perform basic activities in 3D VLE, in a reasonable time (up to 15 min)?
RQ.2 
Do the children themselves find 3D VLE usable and attractive?
The basic activities in RQ.1 involve rotation, movement, and interaction with objects. During the experiment, the participants had to fulfill several tasks, focusing primarily on activities in 3D VLE. Some activities in the 2D parts of the user interface were carried out, too, to allow a comparison of the performance in both settings. To answer the first question, the completion times of the individual tasks were measured and notes about the participants’ performance were collected. To answer RQ.2, a modified version of the System Usability Scale (SUS) questionnaire, designed by C. Putnam et al. [17] especially for children, was used.
The experiment used a desktop VR setting. In accordance with other researchers, such as Lee et al. [18] and Makransky et al. [19], by a desktop VR setting we mean a standard computer with an LCD panel as an output device and a keyboard and mouse as input devices. To keep the results independent of particular learning content, the tasks performed in the 3D VLE were tutorial activities, focused on understanding written instructions and on turning, moving, and interacting with objects in 3D environments.
The uniqueness of the experiment and its results lies in the composition of the participants and in the presence of both completion times and a usability evaluation by means of a standardized questionnaire (SUS). On the other hand, the composition and number of the participants, together with the specific setting of the experiment, present limitations to the results. The participants of the experiment were pupils of the Pavol Sabadoš special boarding school in Prešov, Slovakia, which educates children with multiple impairments, both physical and mental.
The experiment was a part of a long-term cooperation between the authors and the school, involving the development of several software applications and hardware devices, most notably a school desk with an integrated computer and touch screen, and an educational application implementing an original pictogram-text interface and gesture visualization [20]. The experiment was carried out in one of the classrooms of the school.
The rest of the article is organized as follows. Section 2 deals in more detail with the aforementioned works on 3D VLE for impaired children and utilization of the SUS questionnaire in similar settings. Detailed information about the participants, the web application used and the tasks of the experiment is given in Section 3. Section 4 describes how the experiment proceeded and the modified SUS questionnaire. Section 5 presents the experiment results and Section 6 interprets them, primarily in the context of the research questions formulated. The article ends with concluding remarks on future perspectives in Section 7, including the Metaverse point of view.

2. Related Work

The work most related to ours is [12], as it involved participants with multiple impairments and focused on their performance in a 3D VLE. Rahmadiva et al. [12] created a game for improving children’s focus, designed for VR headsets and aimed at the most common issues in autism. The game is divided into four sections: subjects’ focus, focus with distraction, rules and signs, and eye gazing. To evaluate user performance in the game, the authors conducted an experiment involving two participants with autistic traits. In addition, one of the participants had a motor skills impairment and the other had difficulty focusing. The values measured were the accuracies in the individual sections of the game and the completion times. The results of the experiment were used to improve the game. Our work differs from [12] in the larger number of participants and a different mix of impairments, especially considering hearing.
Other similar studies, focusing on students’ performance in VLE in terms of completion times, can be found outside of the 3D realm. For example, in [21] Zhang et al. present an experiment with 2D collaborative games, involving individuals with autistic spectrum disorder (ASD) and typically developing (TD) ones, to establish system feasibility and tolerability, especially in the field of social communication. The games focused on sequential or parallel collaboration of two partners. There were 28 participants, but only 7 with ASD. In addition to the completion times, the performance was also measured by the success frequency, the collaborative action duration, and the collaborative movement ratio. The authors also used self-report questionnaires of their own design.
Returning to the 3D realm, another closely related group of studies comprises those evaluating participants’ skill or knowledge improvement after carrying out corresponding activities in 3D VLE. Here, pre- and post-measurements (tests) are the primary evaluation method. These studies include [13], where Adjorlu et al. evaluated a 3D VLE for teaching money skills to adolescents with ASD. Five male students aged 18–22 with IQs from 40 to 61 participated. The authors also listed their observations on every student in plain text form. In [14], Cecil et al. created 3D VLE covering several topics, such as the assembly of simple vehicles, a manufacturing process, or the properties of various materials. Four middle school and four high school students, all with ASD, were involved, and the VLE were used in two settings: fully immersive VR, with a VR headset, and desktop VR, with an additional haptic device. While the number of participants in [13,14] was smaller than in our study, Elfakki et al. [15] involved no fewer than 40. The 40 participants in [15] were all female students with learning disabilities, and the 3D VLE represented a virtual physics laboratory. The level of evaluation in [15] was higher than in [13,14], as it included cognitive tests and a division of the participants into experimental and control groups. Similarly to our approach, the experimentation in [15] took place in a traditional desktop VR setting and the 3D VLE were interconnected with a 2D user interface (2DUI) part, this time provided by the Moodle learning management system. Another similarity, not present in other works, is the utilization of web-based 3D VLE, albeit with a different software framework (SLOODLE in [15], A-Frame in our case).
One common feature of the works [13,14,15] is that they involved participants with a single impairment (ASD, in [13,14]) or a very limited range of impairments (learning disabilities, in [15]). The work featuring both 3D VLE and an impairment mix closest to ours is [16], albeit the setting and capabilities of the VLE are quite different. Cakir et al. [16] present an augmented reality (AR) application providing 3D VLE to individuals with various impairments. The VLE aimed at enriching the educational process and raising motivation and attention. The application was used on mobile devices and contained several scenarios (e.g., Day and Night, Important Vehicles) with animations for students to observe. The experiment performed involved six students: two with cerebral palsy and other impairments (including one case of hearing impairment), one with autistic spectrum disorder and mental retardation, one with mental and growth retardation, and two students with moderate mental deficiency (Down syndrome). The experiment lasted for eight weeks and each participant used the application for two lessons per week. Knowledge improvement and attention to the teaching material were measured, with positive results. In addition, the teachers involved prepared textual observations on student performance and user experience. However, no standardized questionnaire was utilized. The inclusion of AR is inspirational, but the 3D environments used provide limited interactivity only and do not seem to be walkable.

SUS Questionnaire Utilization

The SUS questionnaire [22,23] is a reliable and valid tool [23] and a de facto industry standard for usability measurement. It has been used multiple times for evaluating VLE, both 2D and 3D, for example, by Borsci et al. [24] for a 3D VLE for training car maintenance, by Huang et al. [25] for teaching 3D model creation, and by Lee et al. [26] for teaching aircraft maintenance.
The way in which the SUS questions are formulated may render them hard to understand for children. This issue is addressed in [17], where Putnam et al. introduced two different modified versions of the SUS questionnaire, one for the age group 7–8 and one for 9–11. They also addressed the validity and reliability of the modifications and suggested using icons instead of numbers for the questionnaire answers. Since their introduction, the modified questionnaires from [17] have been used in several studies, for example in [27] to evaluate the Novelette VLE for visual storytelling and in [28] for WordMelodies, a mobile application supporting inclusive teaching of literacy skills, evaluated by visually impaired children. For the usability evaluation, our work uses the 7–8 age group version of the modified SUS questionnaire from [17] with a graphical “smiley-o-meter” 5-point Likert scale, as in [27].

3. Experimental Setup

The experiment took place in a classroom at the Pavol Sabadoš special boarding school in Prešov, Slovakia. The participants were 12 pupils attending the school, aged 8–14, with multiple impairments, predominantly hearing impairment, a borderline to mild degree of mental retardation, and inferior communication skills. Hearing impairment was present in all participants. The medical diagnoses of the participants included ADHD (F90.X diagnoses according to ICD-10 [29]), dyslalia (F80.0), developmental dyspraxia (F82), developmental dysphasia (F80.1, F80.2), epilepsy (G40), Down syndrome (Q90), and coeliac disease (K90.0). The participants had no previous experience with 3D VLE. However, similar experiences, for example with 3D games, could not be excluded. The influence of potential previous experience with 3D games was partially minimized by the fact that turning around in our 3D VLE requires the left mouse button to be pressed, while in most games it is enough to move the mouse. During the experiment, the participants used a web application with multiple 3D VLE (Section 3.1), running on desktop computers in the maximized window mode. In the application, they carried out tasks according to prepared scenarios (Section 3.2). While they were fulfilling the tasks, their performance was measured, and after they had fulfilled them, their user experience was evaluated. This process followed the procedure described in Section 4. All the texts in the application and the questionnaires used were in Slovak. Here, we present their English equivalents with the same meaning.

3.1. Web-Based Virtual Learning Environments Application

The web application used is called LIRKIS-SpecEdu-VRSchool and was developed at the home institution of the authors. It was implemented using the React JavaScript front-end software framework [30] for the 2D user interface (2DUI) and the A-Frame framework [31] for the 3D VLE with various learning activities. Networked-AFrame [32], an extension of A-Frame, was used to allow 3D scene sharing by multiple users. A demo version of LIRKIS-SpecEdu-VRSchool is available at [33]. The demo version has some advanced features disabled, such as the creation of new learning activities, but it allows users to carry out all the involved tasks exactly as in the experiment.
The design of LIRKIS-SpecEdu-VRSchool was developed in consultation with special education teachers. This resulted in an increased usage of colors and icons in the user interface controls. The textual instructions are kept simple and rendered in relatively large capital letters. There was also a dilemma about whether information should be provided in a spoken or textual form. Considering the target audience, the textual form was chosen, both because of the hearing impairment and because people with certain mental impairments, for example Down syndrome, have problems perceiving speech and process written information better.
Based on the requirements of the teachers and our previous experience with similar applications, we excluded class management support and focused on learning activities. There are two types of activities, individual and group ones, and each of them occurs in a dedicated 3D VLE (scene). In the individual activities, the user follows written instructions to fulfill corresponding tasks. Examples of implemented individual activities include finding all conifer trees in a forest for the Nature subject or crossing a road for the Civics subject. In addition, there are tutorial activities, teaching the basics of 3D scene handling. The group activities occur in collaborative environments where multiple users can join and see each other’s avatars (Figure 1b). To distinguish between the roles, different avatars are used for students (pupils) and teachers. All 3D environments use a first-person perspective. The application also allows the addition of new scenes and the editing of existing ones by means of a modified A-Frame inspector.
The appearance of the application can be seen in Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6. Figure 1 presents how the application is used at the boarding school, while Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6 focus on the parts involved in the experiment.
It should be noted that the screenshots in Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6 have been captured in a minimalist mode, to maintain both a compact size and readability of the texts depicted. Therefore, the individual elements may be arranged in a slightly different way than during the experiment. To receive the same experience as the participants, it is recommended to try the demo version [33] of the application in the maximized window or full screen mode.
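For readers unfamiliar with A-Frame, a minimal sketch of a tutorial-style scene follows. The markup is illustrative only (all sizes, colors, and texts are our assumptions, not taken from the actual LIRKIS-SpecEdu-VRSchool scenes); it is shown as a JavaScript template string so it can be embedded in any web page that loads the A-Frame library.

```javascript
// A minimal A-Frame tutorial-style scene, shown as a markup string.
// All values (sizes, colors, text) are illustrative assumptions and are
// NOT taken from the actual LIRKIS-SpecEdu-VRSchool scenes.
const tutorialScene = `
<a-scene>
  <!-- flat floor and blue sky, as in the tutorial environments -->
  <a-plane rotation="-90 0 0" width="32.5" height="32.5" color="#DDDDDD"></a-plane>
  <a-sky color="#87CEEB"></a-sky>

  <!-- white panel with simple textual instructions in capital letters -->
  <a-plane position="0 1.6 -3" width="3" height="1.5" color="#FFFFFF">
    <a-text value="PRESS THE LEFT MOUSE BUTTON AND TURN"
            color="#000000" align="center" width="2.8"></a-text>
  </a-plane>

  <!-- desktop camera: look-controls turns the view only while the mouse
       button is pressed (a drag), unlike most 3D games -->
  <a-entity camera look-controls wasd-controls position="0 1.6 0">
    <a-cursor></a-cursor>
  </a-entity>
</a-scene>`;
```

Serving such markup from an ordinary HTML page that loads the A-Frame script is all that is needed for a walkable, web-based 3D scene, which is what allowed the application to run in a plain browser window during the experiment.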

3.2. Scenarios

Two scenarios were designed for the experiment, the first one focusing on the two-dimensional user interface (2DUI) and the second one on the three-dimensional UI (3DUI), i.e., the 3D VLE.
The first scenario, 2DS (Figure 2), focused on navigating the 2DUI of the application from the initial screen to one of the individual activities. It consisted of two subsequent tasks, 2DS.1 and 2DS.2:
2DS.1 
In the initial screen of the application (Figure 2a), choose the student role by clicking the blue “Student” button. Then, choose the subject “Nature” by clicking on the corresponding card (Figure 2b).
2DS.2 
Choose the “Forest” topic (Figure 2c) and then the “Find conifer trees” activity (Figure 2d).
Figure 2. LIRKIS-SpecEdu-VRSchool 2DUI screens for the tasks 2DS.1 (a,b) and 2DS.2 (c,d).
Figure 3. Layout of the 3D virtual environments, used in the scenario 3DS, from above. The positions are in meters.
The second scenario, 3DS, took place in the 3D virtual environments of the tutorial and was divided into three tasks:
3DS.1 
Perform the tutorial activity “Turning” (Figure 4).
3DS.2 
Perform the tutorial activity “Moving” (Figure 5).
3DS.3 
Perform the tutorial activity “Rules” (Figure 6).
The appearance of these three environments is almost identical. Their layout is shown in Figure 3, with the horizontal coordinates (x and z axes) of the right-handed coordinate system used by A-Frame. The components in the environments are a square fence with a side length of 32.5 m (the black square in Figure 3), a player (P), a star (S1 or S2), and a lollipop (L). The star object is placed at position S1 in 3DS.1 and at S2 in 3DS.2. The lollipop is at the same position L in both 3DS.1 and 3DS.2. The 3DS.3 environment displays neither the star nor the lollipop. The exact coordinates of these objects in meters are given in brackets; the first one is the horizontal coordinate of the figure (x) and the second one is the vertical (z).
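Since Figure 3 gives positions in A-Frame’s right-handed coordinate system, where the initial view direction (“north”) is along the negative z axis, the turn needed to face an object follows directly from the coordinates. The following sketch (the function name and the sample positions are ours, not taken from the application) computes the yaw angle, in degrees, from a player position to an object position:

```javascript
// Yaw angle (rotation about the y axis, counterclockwise when viewed from
// above, as in A-Frame's rotation.y) that a player at (px, pz) must adopt
// to face an object at (ox, oz). Yaw 0 looks along -z ("north").
function yawToObject(px, pz, ox, oz) {
  return (Math.atan2(px - ox, pz - oz) / Math.PI) * 180;
}

// An object straight "north" of the player requires no turning:
console.log(yawToObject(0, 0, 0, -10)); // 0
// An object to the west (-x) requires a 90-degree counterclockwise turn:
console.log(yawToObject(0, 0, -10, 0)); // 90
```

A computation of this kind is, for instance, what a scene script would need in order to check whether the user has pointed the camera at the star or the lollipop.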
Figure 4. 3D VLE screenshots capturing the instructions and significant moments of the task 3DS.1: the initial instructions (a,b) and the VLE appearance before the individual sub-tasks (c,d) and after completing the whole task (e).
In the first tutorial activity, “Turning” (3DS.1), the user finds him- or herself in the middle of the virtual environment, facing north (Figure 4a). He or she sees a floor with a checkerboard pattern, a blue sky, the fence in the distance, and a white panel with instructions. The panel dominates the view, as it is the object on which the user should focus first. Depending on the size of the browser window in which the application is running, the user can also see the star object on the left. Here, the star is visible in Figure 4b, where the user has already turned left a little. The goal of the activity is to teach the user how to turn and point at objects in the environment. How to do this is explained in the textual instructions in the first two steps of the activity (Figure 4a,b). After the user reads and understands the instructions, he or she clicks on the green “CONTINUE” button. The controls are always active in the environment, so the user may try to turn (or move) even while reading the instructions. After that, the user is asked to use the knowledge gained to point at the star (Figure 4c,d) and, subsequently, at the lollipop (Figure 4d,e).
Figure 5. 3D VLE screenshots capturing the instructions and significant moments of the task 3DS.2: the initial instructions (a,b) and the VLE appearance before the individual sub-tasks (c,d) and after completing the whole task (e).
The second activity, “Moving” (3DS.2), proceeds in a way similar to the first one. Again, the user reads the instructions (Figure 5a,b) first, this time about how to move in the 3D environment. Then, he or she learns about (Figure 5c,d) and performs the actual task, which is not only about pointing at but also about moving to the star (Figure 5d) and the lollipop (Figure 5e) objects. One may wonder why the instructions in Figure 5c start with “Turn until you find a star”, as the star is positioned right in front of the user. This is because the user may try to look around or move while reading the instructions, so the star may be out of sight when he or she gets to the actual task.
The last task, 3DS.3 (Figure 6), can be considered relaxing as its first steps are about reading information on how to behave in 3D environments (Figure 6a,b) without the need to apply the information immediately. The last step (Figure 6c), however, is of a different nature as here the user has to locate a 2D control, a green house icon in the top left corner of the environment, and click on it in order to get back to the initial screen of the application.
Figure 6. 3D VLE screenshots capturing the instructions of the task 3DS.3 (a,b) and the VLE appearance after completing 3DS.3 (c).

4. Experimental Procedure

The experiment took place in a computer-equipped classroom at the Pavol Sabadoš special boarding school. There were enough computers for all the participants, and the computers were powerful enough to run the LIRKIS-SpecEdu-VRSchool application smoothly in full HD. The participants used the application in a standard desktop VR setting. This means that the graphical output was displayed on a full-HD widescreen LCD panel and the participants controlled the application with a keyboard and mouse. The setting was similar to the one shown in Figure 1a, but with standalone LCD panels and the environments described in Section 3.2. The LCD panels used were 22 inches wide and the application ran in maximized window mode. The participants were familiar with the computers and knew how to use MS Windows and a web browser and how to navigate standard web pages on a basic level, provided that the user interface was localized (in Slovak). They had not used 3D environments in the educational process prior to the experiment.
Immediately before the experiment, a stopwatch application and a web browser with the initial screen of LIRKIS-SpecEdu-VRSchool (Figure 2a) had been started on every computer. After that, the experiment proceeded as follows:
  • The participants were seated at the computers by the personnel overseeing the experiment.
  • The personnel gave the participants instructions on how to accomplish the tasks of the first scenario (2DS).
  • The personnel answered questions from the participants to clarify the instructions.
  • The participants carried out the tasks of 2DS while the personnel used the stopwatch applications to measure completion times. The personnel also made notes about the performance of the participants.
  • For each participant, the personnel prepared LIRKIS-SpecEdu-VRSchool for the 3DS scenario by starting the tutorial activity “Turning”. This resulted in the scene shown in Figure 4a.
  • In order to accomplish the 3DS scenario tasks, the personnel told the participants to follow the instructions on the white panels.
  • The participants carried out the tasks of 3DS while the personnel used the stopwatch applications to measure completion times. The personnel also made notes about the performance of the participants.
  • After each participant had completed the 3DS scenario, the personnel told the participants that they would be given a questionnaire to express their opinions on the application.
  • The participants filled in the modified System Usability Scale (SUS) questionnaire, given in a paper form. While the participants were filling it in, the personnel answered their questions to make sure that the questionnaire was understood correctly.
The SUS questionnaire [22] is considered an industry standard for usability measurement; however, due to the relative complexity of its questions, its original form is not suitable for the participants in our experiment. Therefore, we took the modified wording of the SUS questions introduced in [17] for children 7–8 years old. The wording of the questions is given in Table 1, with the original ones on the left and the ones we used on the right. Our questions are identical to [17], with three exceptions. In Q1, we used “often” instead of “a lot more” for easier understanding. To reflect the fact that our participants were pupils of a boarding school, we replaced the word “friends” with “classmates” in Q7. For Q9, we used an alternate form; the one introduced in [17] is “I was proud of how I played the game”. This change was made to exclude the self-estimation of the participants from the answers. In all questions, we refer to the application as “the game”, because we presented it as a game during the experiment. Another modification of the questionnaire, inspired by [17], was to present the answers in a pictorial form. The SUS questionnaire uses a 5-point Likert scale, going from “I strongly disagree” (value 1) to “I strongly agree” (value 5). We replaced the numbers with the icons shown in Figure 7.
Table 1. The original questions of the SUS questionnaire and their modified versions, used in the experiment.
Question | Original | Modified (According to [17])
Q1 | I think that I would like to use this system frequently. | I would like to play this game often.
Q2 | I found the system unnecessarily complex. | The game was hard to play.
Q3 | I thought the system was easy to use. | I thought the game was easy to use.
Q4 | I think that I would need the support of a technical person to be able to use this system. | I would need help to play the game more.
Q5 | I found the various functions in this system were well integrated. | I knew what to do next when I played the game.
Q6 | I thought there was too much inconsistency in this system. | Some things in the game made no sense.
Q7 | I would imagine that most people would learn to use this system very quickly. | The game would be easy for my classmates to learn.
Q8 | I found the system very cumbersome to use. | To play the game I had to do some weird things.
Q9 | I felt very confident using the system. | I felt good about how I played the game.
Q10 | I needed to learn a lot of things before I could get going with this system. | There was a lot to learn to play the game.
Figure 7. A “smiley-o-meter” scale, composed of emoticons, used in the SUS questionnaire instead of numerical values.

5. Results

As follows from the implemented procedure, the experiment produced three sets of results: the completion times of the 2DS and 3DS scenario tasks and the usability evaluation, based on the modified SUS questionnaire.
The completion times for 2DS can be seen in Figure 8. For each participant, the completion time of task 2DS.1 is shown inside a dark blue bar, followed by the completion time of 2DS.2 in a light blue bar. The last number, to the right of the bars, is the total completion time of 2DS. In 2DS.1, we also intended to find out whether limiting the clickable area to a button with text, instead of the whole card including a graphical icon, presents an issue. Such an issue was indeed encountered by all the participants, as they tried to click on the icon first, instead of the button. Participants P6, P7, P8, and P10 also showed uncertainty when choosing the “Forest” topic in the first step of task 2DS.2. They were not sure whether they had really chosen the activity after clicking the “Forest” card. The utilization of the same icon (a conifer tree) for both the topic (Figure 2c) and most of its activities (Figure 2d) was identified as the source of the confusion.
The completion times of the scenario 3DS and its three tasks are presented in Figure 9, in the same way as in Figure 8. Regarding specific problems encountered in 3DS.3, participants P3, P5, P7, and P10 had problems realizing that the green house icon is a clickable button. The most common problems during the whole experiment were associated with reading the instructions (for P1, P3, P7, and P8) and manipulating the mouse (for P3, P7, P8, and P10). Participant P8 also had issues with overall movement coordination and P7 manifested chaotic behavior and relatively low motivation to accomplish the tasks. In addition, P3 and P7 found it hard to concentrate on the given tasks.
The usability evaluation results can be seen in Table 2. The columns Q1 to Q10 contain the numerical values of the answers chosen by the participants, while the last column lists the SUS score, a number from 0 to 100, computed in the standard way [22,23]. The resulting SUS score of the experiment, computed as the mean of the individual scores, is 81.67. Despite the fact that wording suitable for children was used, the majority of the participants needed additional explanation by the personnel when filling in the questionnaire.
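The “standard way” of computing a SUS score shifts each odd-numbered answer down by one, inverts each even-numbered answer against 5, and multiplies the sum of these contributions by 2.5 [22,23]. A small sketch of this computation (variable and function names are ours), checked against the rows of Table 2:

```javascript
// Standard SUS scoring: odd-numbered items (Q1, Q3, ...) contribute
// (answer - 1), even-numbered items (Q2, Q4, ...) contribute (5 - answer);
// the sum of contributions (0-40) is multiplied by 2.5 to give 0-100.
function susScore(answers) {
  if (answers.length !== 10) throw new Error("SUS needs exactly 10 answers");
  return (
    answers.reduce((acc, a, i) => acc + (i % 2 === 0 ? a - 1 : 5 - a), 0) * 2.5
  );
}

// Checked against Table 2:
console.log(susScore([3, 2, 5, 2, 5, 1, 5, 2, 5, 3])); // 82.5 (P1)
console.log(susScore([1, 3, 3, 3, 4, 3, 5, 3, 3, 2])); // 55 (P7)
```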
To reveal potential relations between the collected sets of results, we ordered the participants according to the completion times from longest to shortest and according to the SUS score from lowest to highest. The results of the comparison are shown in Figure 10. A very strong relation can be seen between the ordering of the total completion times of both the 2DS and 3DS tasks and the SUS scores (Figure 10b), where the placement of 10 out of 12 participants is identical or nearly identical (i.e., the difference in placement is 0 or 1). The participants with the same SUS score were ordered in accordance with the completion time ordering. The relation between the completion times of the 2DS and 3DS tasks (Figure 10a) is weaker; however, even here, more than half of the participants (7 of 12) have an identical or nearly identical placement.
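The placement comparison behind Figure 10 amounts to a simple rank-difference count. The sketch below illustrates the idea with hypothetical orderings (the participant lists are illustrative placeholders, not the measured ones):

```python
# Sketch of the placement comparison behind Figure 10: order the participants
# by two measures (worst to best) and count how many occupy identical or
# nearly identical positions (placement difference of 0 or 1).
def placement_differences(order_a, order_b):
    pos_b = {p: i for i, p in enumerate(order_b)}
    return [abs(i - pos_b[p]) for i, p in enumerate(order_a)]

# Hypothetical orderings for six participants (illustrative only):
by_time = ["P7", "P3", "P8", "P10", "P1", "P5"]   # by total completion time
by_sus  = ["P7", "P3", "P10", "P8", "P1", "P5"]   # by SUS score

diffs = placement_differences(by_time, by_sus)
near = sum(1 for d in diffs if d <= 1)
print(near)  # 6: all placements identical or nearly identical
```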
Figure 9. Stacked bar chart with the 3DS scenario task completion times of the experiment participants in minutes:seconds.
Table 2. SUS questionnaire answers and score of the participants.
Participant  Q1  Q2  Q3  Q4  Q5  Q6  Q7  Q8  Q9  Q10  SUS Score
P1            3   2   5   2   5   1   5   2   5    3       82.5
P2            3   2   5   2   5   1   5   2   5    3       82.5
P3            2   3   5   2   5   2   5   3   5    3       72.5
P4            5   2   4   2   5   1   5   2   5    2       87.5
P5            5   1   4   2   5   1   5   1   5    3       90
P6            5   1   4   2   4   2   5   1   5    2       87.5
P7            1   3   3   3   4   3   5   3   3    2       55
P8            4   2   3   2   4   2   5   2   5    3       75
P9            5   2   4   2   5   2   5   1   5    3       85
P10           3   3   3   2   4   2   5   1   5    2       75
P11           5   1   5   1   5   1   4   1   5    2       95
P12           5   2   5   1   5   1   4   1   5    2       92.5

6. Discussion

Within the limitations of the experiment carried out, we may consider the research questions RQ.1 and RQ.2, defined in Section 1, answered positively.
Regarding RQ.1, the tasks involved in the experiment covered all the basic activities expected from a utilization of 3D VLE within a lesson: navigation to a corresponding topic (2DS) and rotation, movement, and manipulation of objects in 3D VLE according to written instructions (3DS). The total time needed to complete both scenarios was 7:24 min on average (min = 5:53, max = 11:09). Therefore, even for the worst performer (P7), the time needed was significantly shorter than the 15 min limit. This limit was chosen because it represents one third of a standard, 45 min long, lesson. In addition, if the participants are able to successfully accomplish the tasks within this limit, it is possible to assume that the standard lesson provides enough room for topic-related activities, including explanation and specific tasks in 3D VLE. While further investigation is needed to fully support this assumption, a comparison of the 3DS.1 and 3DS.2 completion times speaks in its favor. On average, it took the participants 1.92 times longer (min = 1.47, max = 2.42) to complete 3DS.1 than 3DS.2, even though 3DS.1 only involved turning while 3DS.2 also required moving in the environment. This indicates that the participants adapted to the environment quickly. Performance increase associated with repeated use of a VLE was also observed by Rahmadiva et al. [12], albeit in their case, the task was the same in every repetition. In addition, Cecil et al. [14] expected such an effect, as they recommend multiple sessions for students to get involved and familiar with the environment. Compared to 2DS, 3DS took the participants 4.42 times longer on average (min = 3, max = 5.9). Besides the relatively more complex tasks, the need to read and understand written instructions also contributed to the difference. 
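The averages and ratios quoted above reduce to plain minutes:seconds arithmetic. A minimal Python sketch follows (the helper names and the sample list are ours; the real per-participant times are shown only in Figures 8 and 9):

```python
# Helpers for minutes:seconds arithmetic used when reporting completion
# times. The sample list below is an illustrative placeholder, not the
# measured data of the experiment.
def to_seconds(mmss):
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

def to_mmss(total):
    return f"{total // 60}:{total % 60:02d}"

times = ["5:53", "7:24", "11:09"]               # hypothetical sample
secs = [to_seconds(t) for t in times]
print(to_mmss(sum(secs) // len(secs)))          # 8:08 for this sample
```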
One may expect the performance in 2DS and 3DS to be similar; however, in our experiment, such similarity manifested for only 7 of the 12 participants, predominantly the better-performing ones (Figure 10a).
Regarding the 3DS tasks, we observed a significantly worse performance of two subjects (P3, P7), when compared to the rest of the group. Cakir et al. [16], who also experimented with participants with multiple impairments, reported a similar situation. In their case, two out of six participants were able to complete the tasks only with additional verbal support, due to language and speech problems. In our experiment, four participants had technical issues, related primarily to manipulating a computer mouse. Elfakki et al. [15], who also used 3D VLE in a desktop VR setting, noticed similar issues, albeit with only two out of forty participants. However, it should be noted that the participants in [15] were older, middle-class students with learning disabilities only.
The positive answer to RQ.2 follows from the SUS scores measured. The resulting average SUS score was 81.67, which is significantly above the average of 68 reported in [23]. This means that, despite the difficulties, most of the participants evaluated the application with 3D virtual environments positively, would like to use it again, and found it suitable for their classmates. While the application was tailored to this target audience, we find it very positive that their first encounter with it led to such positive responses. All the participants were positive about the suitability of the application for their class (Q7). While not using standardized questionnaires, a similarly positive impact was also observed by the teachers of the students in [16], and Adjorlu et al. [13] concluded that some of their students seemed extremely comfortable in 3D VLE. The self-report questionnaire used by Zhang et al. [21] also indicated a high level of enjoyment. As evident from Figure 10b, for almost all participants in our experiment (except P2 and P9), the order of the completion times exactly or nearly matches the order of the SUS scores. The lowest scores were given by the participants with the worst performance in the experiment, P7 and P3. However, even here, the score of P3 was above the average and that of P7 was still above 50. A similar observation was made by Ahmetovic et al. in [28], where the same modification of the SUS questionnaire was used with visually impaired children. As in our case, the worst performers in [28] gave their application the lowest SUS score, and even the value itself was similar (52.5 in [28], 55 in our case). However, the relation between the SUS scores and the performance was not investigated further in [28].
The experiment also revealed additional issues, which should be taken into account when creating and using VLE such as LIRKIS-SpecEdu-VRSchool with impaired pupils. The first one is the correct selection and positioning of the computer hardware, input and output devices in particular, which is sensitive to the impairments of the user. The second one is an appropriate presentation of crucial information to the users, namely the correct choice of icons (pictograms), the font type, size, and colors, and the balance between graphical and textual forms of presentation. During the experiment, this issue was primarily identified in the 2DS scenario (the preference of graphical icons over text and the confusion regarding the repeated usage of the conifer tree icon).

7. Conclusions

The utilization of computer hardware and software in education, including virtual reality and related technologies, is a relatively well-covered topic. How these technologies can be used for impaired children is a slightly different issue. The experiment presented here is one of the few that try to assess the utilization of some of these technologies, namely 3D virtual environments in a desktop VR setting.
While the answers to both research questions are, within the limitations of the conducted experiment, positive, several factors should be considered and subjected to future research and development. The first that naturally comes to mind is the medical one, as impaired people have various clinical diagnoses. Here, the authors do not have the ambition to interpret the results from this point of view: the number of participants was relatively low, no medical personnel were involved in the design and execution of the experiment, and the agreement with the boarding school does not allow us to reveal specific diagnoses of the participants. In this context, using such an application not only in a pupil–teacher mode but also in a pupil–therapist (physician) mode, where the clinical aspects can be addressed, should be considered. The second factor is educational and technical (information technologies), where, on the basis of the SUS scores, we can conclude that 3D VLE are considered usable and attractive by the pupils. In addition, despite the relatively long time needed to perform the tasks, there seems to be enough room for educational activities within such environments in the time frame of a single, 45 min long, lesson. The third factor is socioeconomic, as the affordability of the corresponding hardware is crucial for boarding schools and the families of impaired children. This point cannot be left out of consideration: such children often come from low-income families and/or excluded communities, and the corresponding educational institutions usually do not operate with appropriately high budgets. When desktop VR is utilized, as in our experiment, standard personal computer hardware is sufficient, and at least the educational institutions, if not the families, are usually equipped with computers of sufficient specifications to run 3D virtual environments.
Of course, the requirements on graphical rendering power should be considered when designing the environments to keep costs at bay. This is also the reason why relatively simple, cartoon-like graphics were used for the 3D VLE of the LIRKIS-SpecEdu-VRSchool application. Utilization of special hardware, such as VR headsets, is also becoming affordable thanks to the relatively low prices of dedicated devices (e.g., Meta Quest 2) and the availability of budget solutions, where a standard smartphone is combined with a lens-equipped holder costing about EUR 10–20.
In addition to the impaired children’s needs, another concept considered when developing the LIRKIS-SpecEdu-VRSchool application was that of the Metaverse, a shared 3D virtual world where users, represented by their avatars, learn, work, and play. Two features associated with an ideal utilization of the Metaverse are collaboration and immersion. Both of them are supported by the LIRKIS-SpecEdu-VRSchool application, but they have not been used in the experiment. Regarding collaboration, the basic capabilities are the same as those evaluated here, and the additional ones will be the subject of future experimentation. While the 3D VLE included in contemporary Metaverse-oriented studies [34,35,36] suggest that desktop VR is a legitimate setting from this perspective, the immersive one is clearly preferred. Here, it may be interesting to compare the performance and experience when carrying out the tasks with VR headsets (immersive) to those with desktop VR. With healthy children, one might expect a mostly positive impact of immersive VR; however, the situation may be completely different with impaired children. Our preliminary attempts indicate that a sudden immersion in a fully virtual 3D environment can be too confusing and that coordination problems may prevent the users from adequately controlling their movement and other activities inside the environment. Moreover, because of the effects of the impairments and corresponding clinical diagnoses, some children refuse to wear VR headsets at all, and the loss of visual contact with the teacher can be a serious issue for children with hearing impairments. On the other hand, the positive results of other studies [12,13,14] encourage us to explore this feature.
Future experimentation may also focus on how the 3D VLE improve skills and knowledge of the students, as in the studies [13,14,15]. A fuzzy logic-based approach to the evaluation, developed by the authors of [15] in [37], could also be considered. In addition, it is worth exploring whether the relation found between the performance and the SUS scores of the participants will manifest when carrying out similar experiments with larger groups.

Author Contributions

Conceptualization, B.S. and Š.K.; methodology, B.S. and Š.K.; validation, B.S., Š.K. and M.M.; formal analysis, B.S., Š.K. and M.M.; resources, B.S.; data curation, B.S. and Š.K.; writing—original draft preparation, Š.K., B.S. and M.M.; writing—review and editing, Š.K., B.S. and M.M.; visualization, Š.K.; supervision, B.S.; project administration, B.S.; and funding acquisition, B.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Cultural and Educational Grant Agency of the Ministry of Education, Science, Research and Sport of the Slovak Republic (KEGA, Kultúrna a edukačná grantová agentúra MŠVVaŠ SR) grant No. 048TUKE-4/2022: “Collaborative virtual reality technologies in the educational process” and the Slovak Research and Development Agency (Agentúra na podporu výskumu a vývoja) grant No. APVV-21-0105 “Trustworthy human–robot and therapist–patient interaction in virtual reality”.

Institutional Review Board Statement

Ethical review and approval were waived for this study, because the research presents no more than minimal risk of harm to subjects and involves no procedures for which written consent is normally required outside the research context.

Informed Consent Statement

Corresponding consent has been acquired from all the participants or their legal guardians. The data have been collected anonymously, without the possibility to identify the individual participants.

Data Availability Statement

The data presented in this study are available on request from the corresponding authors. A demo version of the LIRKIS-SpecEdu-VRSchool application, which allows the scenarios of the experiment to be replicated, is available at [33].

Acknowledgments

The authors would like to thank our alumni student Matej Masrna for his contribution to the design and implementation of the LIRKIS-SpecEdu-VRSchool application. We also thank Peter Borovský, Peter Guľaš, Darina Petríková, and other employees of the Pavol Sabadoš special boarding school in Prešov, Slovakia, for their participation in the practical realization of the experiment and their valuable remarks on the design of the application and the experiment. Last but not least, we thank the anonymous reviewers for their insightful comments and suggestions that helped to increase the quality of the article.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
2D      Two-Dimensional
2DUI    Two-Dimensional graphical User Interface
3D      Three-Dimensional
3DUI    Three-Dimensional graphical User Interface
AR      Augmented Reality
ASD     Autistic Spectrum Disorder
ICD     International Classification of Diseases
SUS     System Usability Scale
TD      Typically Developing
VLE     Virtual Learning Environment
VR      Virtual Reality

References

  1. Seo, J.H.; Smith, B.; Cook, M.; Pine, M.; Malone, E.; Leal, S.; Suh, J. Anatomy builder VR: Applying a constructive learning method in the virtual reality canine skeletal system. In Proceedings of the 2017 IEEE Virtual Reality (VR), Los Angeles, CA, USA, 18–22 March 2017; pp. 399–400. [Google Scholar] [CrossRef]
  2. Choi, K.S. Virtual Reality Wound Care Training for Clinical Nursing Education: An Initial User Study. In Proceedings of the 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, Japan, 23–27 March 2019; pp. 882–883. [Google Scholar] [CrossRef]
  3. Nersesian, E.; Vinnikov, M.; Ross-Nersesian, J.; Spryszynski, A.; Lee, M.J. Middle School Students Learn Binary Counting Using Virtual Reality. In Proceedings of the 2020 IEEE Integrated STEM Education Conference (ISEC), Princeton, NJ, USA, 1 August 2020; pp. 1–8. [Google Scholar] [CrossRef]
  4. Valdez, M.T.; Ferreira, C.M.; Barbosa, F.P.M. 3D virtual laboratory for teaching circuit theory—A virtual learning environment (VLE). In Proceedings of the 2016 51st International Universities Power Engineering Conference (UPEC), Coimbra, Portugal, 6–9 September 2016; pp. 1–4. [Google Scholar] [CrossRef]
  5. Mishra, K.; Boland, M.V.; Woreta, F.A. Incorporating a virtual curriculum into ophthalmology education in the coronavirus disease-2019 era. Curr. Opin. Ophthalmol. 2020, 31, 380–385. [Google Scholar] [CrossRef] [PubMed]
  6. Mohan, A.T.; Vyas, K.S.; Asaad, M.; Khajuria, A. Plastic Surgery Lockdown Learning during Coronavirus Disease 2019: Are Adaptations in Education Here to Stay? Plast. Reconstr. Surg. Glob. Open 2020, 8, e3064. [Google Scholar] [CrossRef] [PubMed]
  7. Wijkmark, C.H.; Fankvist, S.; Heldal, I.; Metallinou, M.M. Remote Virtual Simulation for Incident Commanders: Opportunities and Possibilities. In Proceedings of the 2020 11th IEEE International Conference on Cognitive Infocommunications (CogInfoCom), Mariehamn, Finland, 23–25 September 2020; pp. 445–452. [Google Scholar] [CrossRef]
  8. Luimula, M.; Linder, M.; Pieskä, S.; Laimio, E.; Lähde, T.; Porramo, P. Unlimited Safety Productivity - A Finnish Perspective Using Virtual Learning Methods to Improve Quality and Productivity in the Construction Industry. In Proceedings of the 2020 11th IEEE International Conference on Cognitive Infocommunications (CogInfoCom), Mariehamn, Finland, 23–25 September 2020; pp. 263–266. [Google Scholar] [CrossRef]
  9. Shibata, T. Virtual Reality in Education: How Schools Use VR in Classrooms. In Proceedings of the 20th Congress of the International Ergonomics Association (IEA 2018); Bagnara, S., Tartaglia, R., Albolino, S., Alexander, T., Fujita, Y., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 423–425. [Google Scholar]
  10. Seo, J.H.; Kicklighter, C.; Garcia, B.; Chun, S.W.; Wells-Beede, E. Work-in-Progress–Design and Evaluation of 360 VR Immersive Interactions in Nursing Education. In Proceedings of the 2021 7th International Conference of the Immersive Learning Research Network (iLRN), Eureka, CA, USA, 17 May–10 June 2021. [Google Scholar]
  11. Parmar, D.; Lin, L.; Dsouza, N.; Joerg, S.; Leonard, A.E.; Daily, S.B.; Babu, S. How Immersion and Self-Avatars in VR Affect Learning Programming and Computational Thinking in Middle School Education. IEEE Trans. Vis. Comput. Graph. 2022. [Google Scholar] [CrossRef] [PubMed]
  12. Rahmadiva, M.; Arifin, A.; Fatoni, M.H.; Halimah Baki, S.; Watanabe, T. A Design of Multipurpose Virtual Reality Game for Children with Autism Spectrum Disorder. In Proceedings of the 2019 International Biomedical Instrumentation and Technology Conference (IBITeC), Special Region of Yogyakarta, Indonesia, 23–24 October 2019; Volume 1, pp. 1–6. [Google Scholar] [CrossRef]
  13. Adjorlu, A.; Serafin, S. Head-Mounted Display-Based Virtual Reality as a Tool to Teach Money Skills to Adolescents Diagnosed with Autism Spectrum Disorder. In Proceedings of the Interactivity, Game Creation, Design, Learning, and Innovation; Brooks, A.L., Brooks, E., Sylla, C., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 450–461. [Google Scholar] [CrossRef]
  14. Cecil, J.; Sweet-Darter, M.; Gupta, A. Design and Assessment of Virtual Learning Environments to Support STEM Learning for Autistic Students. In Proceedings of the 2020 IEEE Frontiers in Education Conference (FIE), Uppsala, Sweden, 21–24 October 2020; pp. 1–9. [Google Scholar] [CrossRef]
  15. Elfakki, A.O.; Sghaier, S.; Alotaibi, A.A. An Efficient System Based on Experimental Laboratory in 3D Virtual Environment for Students with Learning Disabilities. Electronics 2023, 12, 989. [Google Scholar] [CrossRef]
  16. Cakir, R.; Korkmaz, O. The effectiveness of augmented reality environments on individuals with special education needs. Educ. Inf. Technol. 2019, 24, 1631–1659. [Google Scholar] [CrossRef]
  17. Putnam, C.; Puthenmadom, M.; Cuerdo, M.A.; Wang, W.; Paul, N. Adaptation of the System Usability Scale for User Testing with Children. In Proceedings of the Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems; Association for Computing Machinery: New York, NY, USA, 2020; pp. 1–7. [Google Scholar] [CrossRef]
  18. Lee, E.A.L.; Wong, K.W. Learning with desktop virtual reality: Low spatial ability learners are more positively affected. Comput. Educ. 2014, 79, 49–58. [Google Scholar] [CrossRef]
  19. Makransky, G.; Petersen, G.B. Investigating the process of learning with desktop virtual reality: A structural equation modeling approach. Comput. Educ. 2019, 134, 15–30. [Google Scholar] [CrossRef]
  20. Sobota, B.; Korečko, Š.; Hudák, M.; Ondra, F.; Sivý, M. Disabled People and Pictogram-Text Application with Gesture Recognition Extension. In Proceedings of the 2020 18th International Conference on Emerging eLearning Technologies and Applications (ICETA), Virtual, 12–13 November 2020; pp. 640–645. [Google Scholar] [CrossRef]
  21. Zhang, L.; Warren, Z.; Swanson, A.; Weitlauf, A.; Sarkar, N. Understanding Performance and Verbal-Communication of Children with ASD in a Collaborative Virtual Environment. J. Autism Dev. Disord. 2018, 48, 2779–2789. [Google Scholar] [CrossRef] [PubMed]
  22. Brooke, J. SUS: A ‘quick and dirty’ usability scale. In Usability Evaluation in Industry; Jordan, P.W., Thomas, B., McClelland, I.L., Weerdmeester, B., Eds.; Taylor and Francis Ltd.: Abingdon, UK, 1996; Chapter 21; pp. 189–194. [Google Scholar]
  23. Brooke, J. SUS: A retrospective. J. Usability Stud. 2013, 8, 29–40. [Google Scholar]
  24. Borsci, S.; Lawson, G.; Jha, B.; Burges, M.; Salanitri, D. Effectiveness of a multidevice 3D virtual environment application to train car service maintenance procedures. Virtual Real. 2016, 20, 41–55. [Google Scholar] [CrossRef]
  25. Huang, H.; Lee, C.F. Factors affecting usability of 3D model learning in a virtual reality environment. Interact. Learn. Environ. 2022, 30, 848–861. [Google Scholar] [CrossRef]
  26. Lee, H.; Woo, D.; Yu, S. Virtual Reality Metaverse System Supplementing Remote Education Methods: Based on Aircraft Maintenance Simulation. Appl. Sci. 2022, 12, 2667. [Google Scholar] [CrossRef]
  27. Addone, A.; De Donato, R.; Palmieri, G.; Pellegrino, M.A.; Petta, A.; Scarano, V.; Serra, L. Novelette, a Usable Visual Storytelling Digital Learning Environment. IEEE Access 2021, 9, 168850–168868. [Google Scholar] [CrossRef]
  28. Ahmetovic, D.; Bernareggi, C.; Leporini, B.; Mascetti, S. WordMelodies: Supporting the Acquisition of Literacy Skills by Children with Visual Impairment through a Mobile App. ACM Trans. Access. Comput. 2023, 16, 1–19. [Google Scholar] [CrossRef]
  29. International Statistical Classification of Diseases and Related Health Problems, 10th Revision, WHO Version for 2019. Available online: https://icd.who.int/browse10/2019/en (accessed on 18 June 2023).
  30. React Framework Homepage. Available online: https://react.dev/ (accessed on 18 June 2023).
  31. A-Frame Homepage. Available online: https://aframe.io/ (accessed on 18 June 2023).
  32. Networked-Aframe Homepage. Available online: https://github.com/networked-aframe (accessed on 18 June 2023).
  33. LIRKIS-SpecEdu-VRSchool Demo Version. Available online: https://lirkis-vrschool-specedu.glitch.me/ (accessed on 18 June 2023).
  34. Kye, B.; Han, N.; Kim, E.; Park, Y.; Jo, S.; Huh, S. Educational applications of metaverse: Possibilities and limitations. J. Educ. Eval. Health Prof. 2021, 18, 32. [Google Scholar] [CrossRef] [PubMed]
  35. Tlili, A.; Huang, R.; Shehata, B.; Liu, D.; Zhao, J.; Metwally, A.H.S.; Wang, H.; Denden, M.; Bozkurt, A.; Lee, L.H.; et al. Is Metaverse in education a blessing or a curse: A combined content and bibliometric analysis. Smart Learn. Environ. 2022, 9, 24. [Google Scholar] [CrossRef]
  36. Sghaier, S.; Elfakki, A.O.; Alotaibi, A.A. Development of an intelligent system based on metaverse learning for students with disabilities. Front. Robot. AI 2022, 9, 1006921. [Google Scholar] [CrossRef] [PubMed]
  37. Elfakki, A.O.; Sghaier, S.; Alotaibi, A.A. An Intelligent Tool Based on Fuzzy Logic and a 3D Virtual Learning Environment for Disabled Student Academic Performance Assessment. Appl. Sci. 2023, 13, 4865. [Google Scholar] [CrossRef]
Figure 1. Selected features of LIRKIS-SpecEdu-VRSchool not used in the experiment: a pupil of the Pavol Sabadoš special boarding school using a 3D VLE for the “Nature-Forest” topic in standalone mode (a) and the same VLE in collaborative mode with 2 pupils and 1 teacher (b). In (b), the second pupil avatar is not visible as the screenshot is taken from his point of view.
Figure 8. Stacked bar chart with the 2DS scenario tasks completion times of the experiment participants in minutes:seconds.
Figure 10. Scatter plots showing the relationship between the ordering of the 2DS and 3DS completion times (a) and of the total completion times and SUS scores (b) of the participants. The dotted lines are linear trend lines, the blue ones are computed from the data and the grey ones are hypothetical, for the case when both orderings match.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Sobota, B.; Korečko, Š.; Mattová, M. Web-Based 3D Virtual Environments Utilization in Primary and Secondary Education of Children with Multiple Impairments. Electronics 2023, 12, 2792. https://doi.org/10.3390/electronics12132792


