Article

Can Nonliterates Interact as Easily as Literates with a Virtual Reality System? A Usability Evaluation of VR Interaction Modalities

by Muhammad Ibtisam Gul 1, Iftikhar Ahmed Khan 1,*, Sajid Shah 2 and Mohammed El-Affendi 2

1 Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, University Road, Tobe Camp, Abbottabad 22060, Pakistan
2 EIAS Data Science and Blockchain Lab, College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
* Author to whom correspondence should be addressed.
Systems 2023, 11(2), 101; https://doi.org/10.3390/systems11020101
Submission received: 14 January 2023 / Revised: 8 February 2023 / Accepted: 9 February 2023 / Published: 11 February 2023

Abstract

The aim of the study is twofold: to assess the usability of a virtual reality (VR) interaction designed for nonliterate users in accordance with ISO Standard 9241-11 and to compare the feasibility of two interaction modalities (motion controllers and real hands) considering the impact of VR sickness. To accomplish these goals, two levels were designed for a VR prototype application. The System Usability Scale (SUS) was used for self-reported satisfaction, while effectiveness and efficiency were measured based on observations and logged data. These measures were then analyzed using exploratory factor analysis, and the ones with high factor loadings were selected. For this purpose, two studies were conducted. The first study investigated the effects of three independent variables on the interaction performance of a VR system, i.e., “User Type,” “Interaction Modality,” and “Use of New Technology.” The SUS results suggest that all the participants were satisfied with the application. The results of one-way ANOVA tests showed that there were no significant differences in the use of the VR application among the three selected user types. However, some measures, such as task completion time in level one, showed significant differences between user types, suggesting that nonliterate users had difficulty with the grab-and-move interaction. The results of a multivariate analysis using the statistically significant variables from both ANOVA tests were also reported to verify the effect of modern technology on interactivity. The second study evaluated the interaction performance of nonliterate adults in a VR application using two independent variables: “Interaction Modality” and “Years of Technological Experience.” The results of the study showed a high level of satisfaction with the VR application, with an average satisfaction score of 90.75. One-sample t-tests indicated that the nonliterate users had difficulty using their hands as the interaction modality. The study also revealed that nonliterates may struggle with the poses and gestures required for hand interaction. The results suggest that, until advancements in hand-tracking technology are made, controllers may be easier for nonliterate adults to use than their hands. The results underline the importance of designing VR applications that are usable and accessible for nonliterate adults and can serve as guidelines for creating VR learning experiences for this population.

1. Introduction

The ability to read and write is vital in today’s world, yet a global literacy crisis is affecting many nations, with 771 million adults lacking basic literacy skills [1]. Illiteracy contributes to various problems, such as poverty and unsustainable economic growth [2]. Thus, the elimination of illiteracy is considered a key goal in the United Nations Sustainable Development Goals 2030. There are various approaches to enhancing adult literacy, including traditional instructor-led literacy programs that use direct instructional strategies [3,4] and the increasingly popular use of Information and Communication Technologies (ICT) that follow a learner-centered, active learning approach [5,6,7]. The curriculum used should be appropriate for instructing adults, as they possess matured cognitive abilities, and their motivation to learn is shaped by their life experiences, which direct their method of acquiring knowledge [8]. Multiple studies suggest different techniques, such as using environmental print material to tutor the nonliterate adult population [6].
According to research, nonliterate adults tend to struggle with the cognitive processing of spoken language [9]. They have weaker abilities in retaining both verbal and visual information, and they possess lower visual-spatial abilities [10]. Katre et al. [11] discovered that these limitations stem from differences in cognitive development. Therefore, ICT should be designed according to the requirements of nonliterate adults. It is crucial to determine if the design principles that have been successful in conventional ICT can also be applied to create usable VR applications for nonliterates, considering the various interactivity and audio/visual differences in VR compared to traditional ICT. Furthermore, the design should be evaluated as per design standards such as ISO-9241-11 [12].
The interaction experience in traditional ICT applications involves hardware devices such as a mouse or a keyboard that provide an indirect manipulation experience to the users. Each device provides a distinct interaction experience, and both are often used to perform the same task in applications. These are thus the two communication channels, or modalities, engaged when interacting with traditional ICT applications, as defined by Bartneck et al. [13]. Multiple studies indicate that novice users face initial difficulties while using these devices [14,15,16,17,18]. One of the major difficulties is the extra time invested in learning to type on a keyboard or to move a mouse and click on interface elements. Current VR systems, e.g., the Oculus Quest 2 and HTC Vive, use head-mounted displays (HMDs) for the perception of vision and audio, and motion controllers or real hands are used as modalities for reality-based interaction. VR therefore provides unique experiences of interaction, a sense of presence, and immersion in three-dimensional (3D) virtual environments. Despite several studies aimed at assessing the interaction experiences of the general population with VR [19,20], there is a lack of research specifically focused on nonliterate users. This means there are no established guidelines for evaluating the VR interaction experiences of the nonliterate population.
This study aimed to evaluate and compare the usability of various interaction modalities of VR systems in the context of nonliterate users following ISO Standard 9241-11. The study also aimed to compare the modalities among different user groups, such as tech-literate, non-tech-literate, and nonliterate, through a designed VR educational application. Therefore, the targeted research questions are as follows:
  • RQ 1: Is the designed educational application usable for the nonliterate population?
  • H1: The designed educational application will be usable by the nonliterate population.
  • RQ 2: If yes, then how easy is the application for nonliterate users as compared to the two other groups?
  • H2: The designed VR application will be as easy to use for nonliterate users as it is for literate users.
  • RQ 3: Which interaction modality is more usable by nonliterate users?
  • H3: Nonliterate people will find hands to be more usable due to their intuitive and reality-based interaction styles as compared to controllers.
The rest of the paper is organized as follows: Section 2 discusses literature in the context of the objectives of this research. Section 3 explains the design of the VR prototype, and Section 4 explains the measures used in the research. Section 5 is about study 1 and the analysis of the data and the discussion of the results. Section 6 covers the analysis of results and discussions for study 2. Section 7 summarizes the results, highlights some of the limitations of the research, and provides suggestions and recommendations. Finally, Section 8 concludes the research.

2. Literature Review

In the context of the aims and objectives of this research, the following sub-sections of the literature review will discuss the modalities of VR, the interactivity of VR, VR-induced sickness problems, and finally the characteristics of the nonliterate adult population that must be taken into consideration when designing VR experiences.

2.1. Virtual Reality

Virtual reality refers to a computer-generated simulation of a 3D environment that can be interacted with in a seemingly real or physical way by a person using special electronic equipment, such as a headset with sensors [21]. This technology creates an immersive experience for the user, in which they can feel as though they are present in and a part of the artificial environment. With the availability of affordable consumer VR systems and rapid research and development in terms of vividness and interactivity, there is a growing trend toward its adoption in various fields of education and training.
The psychological aspects of VR encompass how the human mind processes and perceives the experience in a simulated 3D environment. These aspects include presence and immersion as the most important concepts that make the VR medium stand apart from others [22]. Presence refers to the experience of being physically present within the virtual environment rather than simply viewing it on a display. Immersion, on the other hand, is the sensation of being completely enveloped in the virtual world, with one’s attention fully devoted to it. These two concepts work together to create the perception of a convincing virtual environment through the interplay of sensory input and the brain’s processing of these stimuli [23,24]. Perception is an active system that uses inputs from our sensory system and information from our cognition. For example, if you see a hurdle blocking your path, your sensory system provides data related to the hurdle, and your cognition provides information about the type of hurdle and how to overcome it. This provides enough cues to our perceptual system to infer a model of our surroundings so we can act accordingly [25]. The same phenomenon occurs in VR. In VR, immersion experience is defined by its capability to support natural sensorimotor possibilities; this stimulates our perceptual system to generate an illusion of being there, called presence [26]. Figure 1 shows that flow is the ultimate objective of any VR experience.
Flow is an autotelic experience, which refers to a self-contained activity that is done for its own immediate reward without the anticipation of any future benefits [27]. Flow draws a person away from everyday reality into deep absorption, without rumination on after-effects. A person in a state of flow has a heightened sense of self and concentration due to greater control, involvement, and enjoyment [28]. Absorbed in this experience, the person loses track of time and becomes disconnected from their surroundings [29]. Consequently, this deep focus and enjoyment affect performance and its quality [30]. For users to be in a state of flow, they must experience a sense of presence and immersion [31,32], which are directly affected by the quality of engagement, vividness, and interactivity [21].
For an engaging flow, content is always the most important commodity [33]. The implications of content are so diverse that a single definition is hard to pin down. In VR, most development resources are allocated to the creation, management, and marketing of content [34]. Content is divided into audio/visual components such as environments, objects, visual effects, sound effects, and music. Simply displaying the content is not enough to provide an optimal flow experience in VR. A player’s interaction with the content and its response are equally important. Therefore, for the optimal flow experience to happen, one must be engaged in an activity that is aptly challenging for one’s skill level [35]. These activities should be designed to easily achieve the optimal flow experience by providing appropriate equipment and an appropriate environment, devising rules that require learning skills, setting achievable goals, providing consistent feedback, and ensuring control [36]. Moreover, virtual reality sickness is a characteristic that can negatively affect the VR experience [37]. The focus of this work is to measure the interaction usability of the VR system for the nonliterate population; therefore, the topics discussed in the following sub-sections are interactivity [21], VR sickness [37], and the characteristics of nonliterate adults that must be considered in the interaction design of VR applications.

2.2. Interactivity

Interactivity in VR refers to the ability of a user to actively engage with and control elements within a virtual environment. This can include actions such as moving objects, selecting items, and communicating with other users or virtual characters. Interactivity is a key aspect of VR, as it allows users to experience a sense of agency within the virtual world and helps to increase their sense of presence and immersion [38]. Interactivity is stimulated by the technological aspects of the VR system [21]. In consumer VR systems, interactivity is achieved through the HMD, handheld motion controllers, motion tracking of the HMD and controllers, and hand tracking. These can also be called the interaction modalities of VR systems. The presence of the head and hands is enabled through motion tracking of the HMD and controllers/hands within a specified room-scale play area. There are two types of motion tracking: outside-in and inside-out. In outside-in tracking, external motion sensors are used, whereas in inside-out tracking these sensors are integrated into the HMD [39]. The most common interactions enabled by this technology are simulation of hand presence, recognition of head and hand gestures, manipulation of 3D content, and facilitation of physical and artificial locomotion.
Current VR devices provide hand presence and interaction with the virtual world using wireless motion controllers and/or hand tracking. The handheld controllers have varied interactivity based on their designs. Figure 2 illustrates controllers for popular VR systems. These controllers are ergonomically designed to enable users to realistically interact with the VR environment. VR controllers have traditional action buttons and analog triggers, as found in many gaming controllers. Thumbsticks are provided for movement/locomotion, and internal motors are used for haptic feedback. These controllers track their position and send the data to the HMD to detect user hand movements and interactions. They can be programmed for several types of interactions, such as grab, pinch, poke, etc. These controllers provide a high degree of interaction fidelity and have a minimal temporal offset between real and virtual actions [40]. Hand tracking is also enabled in the current state-of-the-art VR systems. Instead of motion controllers, the user’s real hands are tracked by the cameras/sensors attached to the HMD and rendered in the virtual world. Users can use natural hand movements and gestures for interaction in VR. Compared to controllers, designing and implementing real-hand interactions and gestures can be more challenging. Errors in position tracking and gestures can also occur when hands are not visible or are angled awkwardly to the HMD’s camera/sensors [41]. A higher temporal offset between real and virtual actions can be experienced because the hand-tracking data must first be processed by the HMD [42]. Several locomotion techniques are also possible in the current generation of VR systems and are divided into two categories: artificial locomotion and physical locomotion. Physical locomotion in VR is controlled by the user’s movements in the real world using the motion tracking of the HMD. Artificial locomotion is controlled by the motion tracking of the HMD and controllers; examples are teleportation, walking in place, and world-pulling.
There can be countless possibilities for interaction in VR, but it is necessary to evaluate the contextual interactivity of a VR experience [43]. In one study [44], the interactivity of handheld controllers was measured, and interaction design guidelines were proposed. However, these guidelines are general and are not specific to a certain type of VR experience.

2.3. VR Sickness

VR sickness, also known as cybersickness or simulator sickness, is a phenomenon in which users of virtual reality (VR) systems experience discomfort or symptoms similar to motion sickness [37]. It is caused by a mismatch between what the user sees and what their body feels, leading to feelings of nausea, dizziness, and headaches. Factors contributing to VR sickness include high levels of motion, rapid changes in visual information, and a lack of stability in the virtual environment [45]. A person experiences motion sickness when there is a sensory conflict between visual stimuli and the vestibular system [46]. In VR, visual stimuli are the major sensory input and can therefore induce motion sickness and adversely affect the user experience. Eye movements and the vestibular system are predicted to be the major contributors to VR sickness, and it has been advised that reducing eye movements and incorporating motion simulation synchronized with visual stimuli may reduce it [37,45]. Since VR sickness (nausea and discomfort) may degrade the user experience, it is essential to measure it. The most widely used subjective method is the simulator sickness questionnaire (SSQ) [47]. Research has shown that, instead of administering the complete SSQ, the most common questions from the SSQ can be used to obtain a basic indication from the user [48]. Similarly, for objective measurement, postural sway can be observed [37].

2.4. Characteristics of Nonliterate Adults

Designing VR applications for nonliterate people requires considering certain specific characteristics of this population. For example, they may have limited or no experience with technology, which means that interfaces and interactions must be intuitive and simple to understand. They may also have difficulty processing visual information, so non-visual cues such as audio, haptic feedback, and gesture-based controls may be more effective. Additionally, it is important to consider cultural factors, such as whether the language used in the application is appropriate, and to keep in mind that literacy levels can vary widely even within a single population. Overall, the design of VR applications for nonliterate people requires careful consideration of their specific needs and abilities in order to create a successful and accessible experience. Accordingly, multiple studies recommend that ethnographic characteristics must also be considered when designing content for ICTs [49,50,51,52]. These characteristics include life experiences, sociocultural factors, gender disparity, etc. Based on these observations and recommendations, several well-known general guidelines were proposed. These guidelines focus on audio/visual and task elements, for example, using hand-drawn images with visual cues, short and explicit audio cues for complex concepts, and showing consecutive steps during a task [50,53]. These guidelines may influence the implementation of engagement and vividness elements, but not the interactivity aspect of VR systems. By contrast, the use of modern technology, for example, smartphones or computers, can be considered a factor of interactivity.
The design and development of VR applications in line with users’ characteristics demand an investigation of the users’ interaction behavior with the applications [49,50,51]. Such investigations are formalized as user-centered design (UCD). The UCD process should be conducted whenever the underlying technology changes, or when the technology remains the same but the group of users changes. For example, Rasmussen et al. [54] argued that whenever an interface changes shape, it should be re-evaluated with a group of users.

3. VR Prototype Design

The aim of the study was twofold. The first aim was to evaluate how well a certain interaction implemented in VR suits the nonliterate population in terms of effectiveness, efficiency, and satisfaction as per ISO Standard 9241-11. The second aim was to evaluate the usability of two interaction modalities, i.e., motion controllers and real hands, along with the negative impact of VR sickness. To achieve these aims, two VR application levels were designed. As the focus user group is nonliterate adults, the content for the VR application was designed considering the users’ characteristics described above. Section 3.1 explains the design of the VR application levels and the content, and Section 3.2 provides information about the tools and technologies used.

3.1. VR Environment and Level Design

The artifacts of engagement and vividness were designed and organized in a way that correlates with the cognitive differences and characteristics of nonliterate adult users. Elements of game-based learning were also incorporated, as it is considered an effective approach to transferring knowledge [55]. Wade et al. [56] found that prior knowledge-induced curiosity leads to higher learning. Therefore, we designed the game objects and environments following this concept while also considering the life experiences and sociocultural norms of nonliterate adults. Cognitive load is a major factor in increased error rates during gesture-based tasks [57]. Therefore, considering the lower visual-spatial skills of nonliterates, confined environments were designed to reduce cognitive load. In view of the lower language comprehension of nonliterates, easily comprehensible language was used to compose the audio instructions. Compliments are considered positive emotion-laden word types [58], and positive emotions facilitate learning [59]; therefore, complimentary remarks were also added. To create an all-encompassing atmosphere, relaxing background music along with atmospheric sounds and effects was added. Language learning techniques were selected that can be effectively implemented in VR. Interaction schemes were programmed for both motion controllers and hand tracking. Only physical locomotion was used, within a specified room-scale boundary, to mitigate VR sickness. Unintended accidents may occur while wearing the HMD; therefore, the real-world view was displayed on the HMD screens via pass-through cameras upon crossing the virtual boundary. The prototype was thoroughly used and tested by expert VR users to find any bugs or exceptions. The final iteration was installed on the Oculus Quest 2 VR system. Figure 3 illustrates the level design process.
Two levels were designed to evaluate the interaction possibilities of the VR system. Level one was used to test and evaluate the usability and interactivity of different kinds of basic interactions using motion controllers and hand tracking. These basic interactions include grabbing, pinching, and poking. These basic interactions were then used to create complex interactions, as shown in Table 1.
Level two introduced some more complex interactions and presented the user with a learning experience with the Urdu alphabet and their learning resources. The learning processes of writing, memorizing, and recognizing the alphabet were implemented through the three practice modes.
In the first level, the user was presented with an environment consisting of some basic game objects like small and large cupboards, a table, and a TV stand. Some electronics items, such as a TV and music system, were also placed. Everyday items were placed in such a way that the user required minimal locomotion to interact with them. Most of these items were grabbable, and some were pressable or usable. Some unrealistic objects based on real-world concepts were also placed in the environment, such as a floating TV remote with big buttons. The virtual environment and the interaction modes and possibilities are shown in Figure 4. In VR, spatial audio plays a vital role in affecting the sense of presence [60]. Therefore, several audio sources were placed in the environment to play music, ambiance sounds, interaction sounds, and instructions for the user.
Furthermore, this level comprised six tasks. The user had to complete all tasks to progress forward. Each task was designed to evaluate a specific type of interaction, as shown in Table 2. Periodic audio instructions were played to guide the user toward task completion. After the successful completion of a task, complimentary audio was played to motivate the user.
The user could progress to the next level by pressing a big red button that appeared only after every task in level one had been completed. The main objective of this level was to evaluate the basic interaction possibilities while training the users for more complex interactions in the next level.
The second level presented the user with an Urdu alphabet board with interactable alphabet cards. A new interaction type, “distance grab,” was introduced at this level. After grabbing an alphabet card, the user could press the button on that card to play an educational video associated with the alphabet. Three alphabet writing modes were also created at this level so that the user could practice the learned alphabet. The first mode presented a traditional blackboard-and-marker setup where the user could grab a marker and write on the board. The second mode was an Urdu keyboard with color-coded alphabetic keys; color-coding alphabet families has been shown to be effective in teaching nonliterates [61]. The third mode gave the user the ability to write in the air as a way to practice writing. Itaguchi et al. [62] demonstrated that this is an effective way to memorize language shapes and letters, both consciously and unconsciously. Figure 5 illustrates the user’s interaction with the VR environment using different interaction modalities.
Like level one, the user had to complete tasks; Table 3 shows the task interaction mapping of level two.

3.2. Tools and Technologies

Unity 2021.3.8 was used as the game engine with the Oculus Interaction SDK, and C# was used as the scripting language. Open-source tools such as Blender for 3D modeling, GIMP for textures, and Audacity for sound recording were used. Moreover, free assets from the Unity Asset Store and Quixel’s Megascans were also used. Informative videos on the Urdu alphabet and Pakistan were linked in the game from the YouTube channels “MUSE Lessons—Education Cartoons for Kids,” “Urdu Reading,” and “Discover Pakistan.”
This study used the Oculus Quest 2 VR system, which includes an HMD and two motion controllers. The Oculus Quest 2 is among the most affordable VR systems and provides several excellent features. It requires no external computer or sensors, as it has onboard computing and an inside-out tracking system running on Android. Users can interact with the VR environment using motion controllers or their hands. The portability, simplicity, and feature-rich characteristics of this system make it an ideal choice for this research.

4. Measures

Two studies were conducted to evaluate the usability of VR systems for nonliterate adult users. In the first study, we compared the usability of VR applications amongst three user groups, i.e., tech literates, non-tech literates, and nonliterates, and analyzed the differences based on the usability of interaction modalities and the effect of using technology. The second study was conducted only on nonliterate people. In this study, we used the data from study 1 as hypothesized values for tests in study 2. Furthermore, we added variables for technological experience and analyzed their effect on usability.
The interaction outcomes were measured as per ISO Standard 9241-11. The standard defines usability in terms of effectiveness, efficiency, and user satisfaction. Effectiveness is the capability of users to carry out tasks and the quality of the output of those tasks. Efficiency is the amount of resources consumed by the user in executing the tasks. Satisfaction is the personal response of the user to using the system. These three aspects of usability should be measured holistically to evaluate the interaction usability of the designed application. There are multiple examples of such ICT evaluations, such as designing ATM user interfaces [10], designing multimedia content [11], and developing a website for nonliterate people [50]. The personal reaction of satisfaction can be measured subjectively using the SUS [63]; thus, a survey was created using the 10 items of the SUS and a 5-point Likert scale. The questions are shown in Table 4. Based on previous research, a SUS score ≥ 68 is considered above average and < 68 below average. The SUS score was calculated by summing the score contributions of the ten items, where items 1, 3, 5, 7, and 9 contribute the item value minus 1, and items 2, 4, 6, 8, and 10 contribute 5 minus the item value; the sum is then multiplied by 2.5, giving the SUS score a range of 0 to 100. The objective measures of efficiency and effectiveness were obtained from logged gameplay data, video recordings of gameplay, and external videos, and observations were documented for the variables listed in Table 5. VR sickness was measured using a 2-item questionnaire selected from the SSQ [37]. All the measures for the dependent variables were verified using exploratory factor analysis.
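For readers who wish to reproduce the scoring rule, the short Python sketch below implements it; the function name and the example responses are illustrative and are not part of the study materials.

```python
def sus_score(responses):
    """Compute a SUS score (0-100) from ten 1-5 Likert responses (item 1 first).

    Odd-numbered items (1, 3, 5, 7, 9) contribute (value - 1); even-numbered
    items (2, 4, 6, 8, 10) contribute (5 - value). The summed contributions
    are multiplied by 2.5 to map the total onto a 0-100 range.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # i is 0-based,
                for i, r in enumerate(responses))    # so even i = odd-numbered item
    return total * 2.5

# Example: 4 on every odd (positive) item and 2 on every even (negative) item
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0, above the 68 benchmark
```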

Validity of Measures

The questions in the SUS have been shown to have both internal and external validity and have been used in numerous other studies [64,65]. The validity of the measures for effectiveness and efficiency was verified by exploratory factor analysis. The VR sickness measures were also verified, as VR sickness can negatively affect the other variables. The results in Table 6 and Table 7 show that the selected measures can be used as factors in this research. All components with an absolute factor loading greater than 0.5 were included as measures in this research.
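The study’s factor analysis was run in SPSS; purely as an illustration of the retention rule, the following Python sketch (with hypothetical file and column names) fits a factor model and keeps measures whose absolute loading exceeds 0.5.

```python
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Hypothetical file: rows = participants, columns = candidate usability measures
df = pd.read_csv("logged_measures.csv")

fa = FactorAnalysis(n_components=2, random_state=0).fit(df.values)

# fa.components_ has shape (n_components, n_measures); treat its entries as loadings
loadings = pd.DataFrame(fa.components_.T, index=df.columns,
                        columns=["factor_1", "factor_2"])

# Retain a measure if any of its absolute loadings exceeds 0.5
retained = loadings[(loadings.abs() > 0.5).any(axis=1)]
print(retained)
```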

5. Study 1

The main aim of this study was to evaluate the usability of the VR system and its interaction modalities for nonliterate adult users. Unfortunately, we did not find any similar research to compare our results against. Therefore, we decided to test our VR prototype on adult users belonging to the following user groups:
  • Tech-Literate: This group encompasses literate individuals with a high level of expertise and proficiency in utilizing computer systems, software applications, and digital devices. Participants in this group were mainly from computer science and software engineering backgrounds.
  • Non-tech-Literate: This group consists of literate individuals with a basic or limited familiarity with the use of technology. Participants in this group came from non-technical fields.
  • Nonliterate: This group comprises individuals who are not literate, regardless of their level of technology literacy. These individuals may have difficulties using digital devices and computer systems and may need support or training to effectively utilize technology. Participants in this group came from a variety of fields that did not require education or technical experience.
This study was conducted to answer the following research questions:
  • RQ 1: Is the designed educational application usable for the nonliterate population?
  • H1: The designed educational application will be usable by the nonliterate population.
  • RQ 2: If yes, then how easy is the application for nonliterate users as compared to the two other groups?
  • H2: The designed VR application will be as easy to use for nonliterate users as it is for literate users.

5.1. Procedure

All participants in the study provided their consent, either by completing a form (for literate participants) or giving verbal approval (for nonliterate participants), which was recorded by the experimenter. Some of the female participants refused external video capture, so the experimenter recorded the observations on paper. The experiment was continued only after the consent of the participant. The procedure of the experiment was duly approved by the COMSATS University Islamabad, Research and Evaluation Committee (CUI-REC). The experiments were conducted from 7–11 November 2022. The experiment was based on a between-subjects design. The experiments were conducted at separate locations as per users’ suitability and availability.
The VR prototype was designed in such a way that an average user took about 6 to 8 min to complete it. VR gameplay data were logged, and internal gameplay and external videos were recorded for objective evaluation. After the gameplay, the literate participants completed a concise survey based on the 10 items of the SUS and two items on VR sickness, which took 3 to 5 min. The experimenter read the same questions in the local language to the nonliterate participants, and their responses were recorded. Before using the VR devices, each participant was asked to sanitize their hands. Remedial counteractions were prepared in case VR sickness was experienced by the user, including a place for the user to lie down, the availability of drinking water, and a jar of lemon and orange sweets.

5.2. Participants

The participants for the study were all residents of Abbottabad, KPK, Pakistan. The participants were recruited from different departments at the COMSATS University Islamabad, Abbottabad Campus: tech literates (software engineering students and faculty) from the Computer Science department, non-tech literates (students and faculty from non-technical degree programs) from the Management Science department, and nonliterates from the Establishment department. Some nonliterate participants were also sourced from outside the university and came from a variety of backgrounds that did not require education or technical experience. They all practiced Islam as their religion and were fluent in Urdu, the national language of Pakistan. Due to cultural sensitivity, a female experimenter was arranged, as the female participants felt uncomfortable performing the experiment in front of male individuals. A total of 30 participants took part in the experiment, with ages ranging from 21 to 55 years. Of the 30 participants, 12 (11 males and one female) were from the tech-literate group, eight (five males and three females) were from the non-tech-literate group, and 10 (seven males and three females) were from the nonliterate group. Only one tech-literate participant had prior experience with VR. All the literate participants used modern ICT equipment, while some of the nonliterates had limited interaction with ICT, using only mobile phones (N = 3) or mostly using smartphones to watch multimedia content (N = 7). The nonliterate participants were aware of computers and smartphones but had never used VR before, yet they were eager to try it. Twelve participants used the VR learning environment (VRLE) prototype with motion controllers, and 18 participants used their hands. More detailed information on the participants is provided in Table 8.

5.3. Analysis of Results

Three independent variables were used in this experiment. The comparison between the interaction of distinct types of users with the VR system was evaluated using “User Type” with three classes (tech-literate, non-tech-literate, and nonliterate). Interaction performance between controllers and hands was evaluated using the variable “Interaction Modality” with two classes (controllers and hands). The effect of technology use on interactivity was measured using “Use of New Technology” with two classes (Yes, No). SPSS 20 was used to analyze the acquired data. Two one-way ANOVA tests were conducted. In the first test, “User Type” was used as a factor for the dependent variables listed in Table 5, and in the second test, “Interaction Modality” was used as a factor.
In our survey, the satisfaction scores reported by the participants ranged from 78 to 100 inclusive (nonliterate participants: 78 to 90; literate participants: 83 to 100), indicating that the users were satisfied with the designed application. The descriptive and ANOVA results are attached as Appendix A, Appendix B, Appendix C and Appendix D and Table A1, Table A2, Table A3 and Table A4. All the statistically significant results, along with their descriptive values, are shown in Table 9 and Table 10. Out of 30 measures (seven for effectiveness, 20 for efficiency, and three for VR sickness), only eight variables were found to be statistically significant with “User Type” as the independent variable and only four with “Interaction Modality.” The results suggest that, overall, there were no significant differences in the use of the designed VR application among the selected user types (tech-literate, non-tech-literate, and nonliterate).
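Although the analysis was carried out in SPSS 20, the same one-way ANOVA can be reproduced with open tools; the sketch below uses SciPy, with hypothetical file and column names, for a single dependent measure grouped by “User Type.”

```python
import pandas as pd
from scipy import stats

# Hypothetical long-format file: one row per participant
df = pd.read_csv("study1_measures.csv")   # columns: user_type, task2_time, ...

# One sample of 2nd-task completion times per user type
groups = [g["task2_time"].dropna().values for _, g in df.groupby("user_type")]

f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # p < 0.05 -> significant group difference
```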
The variables for which significant differences were found between the user types and interaction modalities are discussed in detail in the following sections. Furthermore, a multivariate analysis using the statistically significant variables from both ANOVA tests was conducted to verify the effect of the use of modern technology on interactivity; a sketch of such a model is shown below. The related data are attached as Appendix E and Table A5. All the measures having significant mean differences are displayed in Table 11. The results are elaborated in the later subsections.
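The multivariate step can be expressed in a similar way; the sketch below uses statsmodels, and the dependent measures and factor names in the formula are placeholders rather than the study’s exact variable names.

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("study1_measures.csv")   # hypothetical columns

# Two of the significant dependent measures modeled against the predictors
model = MANOVA.from_formula(
    "task2_time + distance_grab_errors ~ user_type + use_of_new_technology",
    data=df,
)
print(model.mv_test())   # Wilks' lambda, Pillai's trace, etc. for each effect
```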

5.3.1. Analysis of Results with “User Type” as Predictor

There were three user classes: tech-literate, non-tech-literate, and nonliterate. The dependent variables listed in Table 9 were found to be statistically significant with respect to user type. The first significant variable was the “2nd task completion time,” which was logged during level one gameplay. This task was created to evaluate the grab-and-move interaction. In this interaction, the user had to grab a blue pebble placed on the map of Pakistan and move it to the KPK province. The pebble’s movement was constrained to the x and y coordinates; therefore, the user could not grab and move it freely like other objects. It was observed that tech literates learned this interaction in three to four tries, yet the task proved difficult for the non-tech literates and nonliterates. This is also evident in Table 9, where the mean task completion time for non-tech-literate and nonliterate users is 54.25 and 58.80 s, respectively, while tech literates completed the task, on average, in 25.58 s. The results indicate that constrained movement in three-dimensional space was not comprehended properly by the non-tech-literate and nonliterate groups.
The next significant measure, i.e., the “4th Task completion time,” was also logged while playing level one. Grab, move, and place interactions were tested in this task. The user had to open a drawer in the small cupboard and put a gold bracelet inside. As in the 2nd task, the drawer was constrained to move only along the z coordinate, but this time, the difference was not due to the constrained movement. It was observed that nonliterate users were not grabbing the drawer by the handle; instead, they tried to open it from the sides. That may have been the reason nonliterates lagged behind the other groups.
Another variable from level one, i.e., the “6th Task completion time,” was also found to be significant. This task introduced interaction with usable objects. In this task, the user had to open a red box, grab the pistol inside, and shoot the target. This was among the most complex tasks because the user had to interact in multiple ways to complete it. The sequence of interactions started with opening the box lid; the user had to grab the lid handle and move it upwards to open it. The next interaction in the sequence was to grab the pistol and use it to destroy the target. The pistol had to be grabbed with a grip interaction and then fired with the index finger. Interestingly, the tech-literate and nonliterate users took longer to complete this task than the non-tech-literate ones. Some of the earlier experiments were conducted in a lecture room where the brightness levels were not adequate for hand tracking by the HMD, and all the users in those sessions were tech literate. Therefore, it can be assumed that this disparity resulted from the factor highlighted above. It was observed that all the above significant measures involved the grab-and-move interaction, which proved difficult for nonliterate users. Therefore, we decided not to implement this interaction in the later levels.
The next significant measure was the second task of level two, where the user had to press a small button on the alphabet card to watch an informative video on the TV. To complete this task, the user either had to watch the whole video (30 to 45 s) or press the button again to stop it. Despite knowing that the video could be stopped, most of the nonliterate users watched it in full while the others stopped it, which is reflected in the result that nonliterate users completed this task in roughly double the time; however, this also indicates their curiosity to learn.
The measure of “Errors in distance grab” was also related to level two. The observations revealed that nonliterate users tried to grab more than one card at a time. Another odd behavior by the nonliterates was uncovered during the observations: it seemed that they wanted to collect the alphabet cards. Therefore, after the experiment, they were asked about this behavior. Their responses disclosed that the reticle displayed while trying to grab a card was confusing. This observation was noted for a future iteration of the VRLE.
The next three statistically significant measures were related to behavioral observations. The first one, “User required external help,” means that the user sometimes relied on outside help to complete a task. For example, in one of the experiments, it was observed that a nonliterate user was not moving while wearing the HMD. The experimenter intervened and helped him move physically to get him started. Later investigation revealed that the user was waiting for some event to happen. Moreover, the user said that he had never seen such content and was confused but that, given another chance, he would do better. The second observed behavior was that the “user tried to interact with every object.” This means that the user was not just picking up the required object but also trying to grab other static or irrelevant objects, which were deliberately placed in the level to cater to the curiosity element. This observation was more prominent in level one. From the results, it was evident that most of the nonliterate users interacted only with the objects required for task completion, and therefore they have the highest mean for the next measure, i.e., “User follows in-app instructions.” It was also apparent from the results that prior knowledge-induced curiosity was most commonly observed in tech-literate users.

5.3.2. Analysis of Results with “Interaction Modality” as a Predictor

This analysis was made to compare the differences in interaction performance between controllers and hands as interaction modalities. We found four statistically significant measures. The first measure was the completion time of the sixth task in level one, which also relates to the measure of errors in grabbing and using the pistol. As discussed in the section above, in this task the user had to use a pistol to shoot a target. It was observed that shooting using the controllers was much quicker than using actual hands, because the controller has physical buttons for grip and trigger. Users could grab the gun using the grip button and shoot using the trigger button. The results in Table 10 reveal that the grab-and-use interaction performed better with the controllers.
The results also indicate that the temporal offset between real and simulated actions while using real hands as a modality can affect fast-paced interactions in VR. Users’ hands were tracked by the HMD, and if the real-world environment was not properly lit, there could be problems with hand tracking due to a greater temporal offset, whereas the controllers were self-tracked and were not affected by it. This is the most plausible explanation for the observed significance. Some unrecorded experiments were conducted under different lighting conditions to check this explanation. A reduction in errors was observed when an infrared illuminator was used, but more data are needed to confirm it.

5.3.3. Analysis of Results with “Use of New Technology” as a Predictor

A multivariate analysis was conducted to analyze the effect of the use of modern technology on the interactivity of VR systems by nonliterate users. Therefore, an independent variable called “Use of New Technology” was introduced along with the other independent variables used in the sections above. This analysis was made using only the statistically significant measures identified in the ANOVA tests. The variable “Use of New Technology” has two classes (“Yes” and “No”); the use of smartphones, computers, laptops, or any other state-of-the-art technology was categorized as “Yes.” Moreover, only the significant measures shown in Table 11 that have notable mean differences were considered in this analysis. It was observed that the use of modern technology did not affect the behavior of the user, yet it affected some interactions. For both interaction modalities, nonliterate users who did not use modern technology had a tough time completing task 2 of level one, which involved the constrained grab-and-move interaction. The other two measures were related to task two of level two, in which the user had to distance grab an alphabet card and press a button on it; the interaction types associated with this task were distance grab, pinch, and poke. Overall, prior use of modern technology may affect some of the complex two-handed interactions.

5.4. Discussion of Results

The hypothesis for research question 1 (H1) states that the designed educational application will be usable by nonliterate users. The results of the experiment showed that the satisfaction level reported by participants was between 78 and 100, indicating that users were satisfied with the application. The results of the two one-way ANOVA tests showed that there were no significant differences in the use of the VR application among the three user types (tech literate, non-tech literate, and nonliterate). However, a few variables were found to be statistically significant.
The hypothesis for research question 2 (H2) states that the educational application will be as easy for nonliterate users to use as for literate users, which is partially supported by the results of the experiment. The satisfaction levels reported by the participants indicate that all types of users were satisfied with the designed application. However, the results of the one-way ANOVA tests suggest that there are significant differences in task completion times between the user types, with non-tech-literate and nonliterate users taking longer to complete certain tasks compared to tech-literate users. The results also indicate that nonliterate users had difficulty with some of the interactions, such as the grab-and-move interaction and interactions with usable objects. The results suggest that the educational application is generally usable by nonliterate users, but that it may need to be improved to better cater to their needs in certain interactions.

6. Study 2

In our previous study, we evaluated the usability performance of a VR application among three user groups: tech-literate, non-tech-literate, and nonliterate. Our findings indicated that the tech-literate and non-tech-literate groups performed relatively better than the nonliterate group. As a result, we took the average values of the dependent variables obtained from the tech-literate and non-tech-literate user groups for each interaction modality and used them as test values. The main objective of this study was to use these test values, shown in Table 12, to analyze the differences in efficiency and effectiveness between the two modalities. Another predictor was added to the study to analyze the effect of years of technological experience on VR usability. This study was conducted to answer the following research question:
  • RQ 3: Which interaction modality is more usable by nonliterate users?
  • H3: Nonliterate users will find hands to be more usable due to their intuitive and reality-based interaction style as compared to controllers.

6.1. Procedure

The participants in the study voluntarily agreed to participate in the experiment after being fully informed by the experimenter. Some female participants declined to have video recordings taken, so their observations were recorded by the experimenter through written documentation. The study was approved by the CUI-REC at COMSATS University Islamabad and only carried out after obtaining the participants’ consent. The experiments took place from 5–10 January 2023 and employed a between-subjects design, conducted at various locations depending on the participants’ schedules and accessibility.
The VR prototype was designed to take approximately 6 to 8 min to complete, and data was recorded during the gameplay, including internal gameplay and external videos for evaluation. After completing the VR session, the questions for SUS were read to the participants and the responses recorded by the experimenter. Before using the VR device, each participant was required to sanitize their hands, and remedial measures such as a place to lie down, water, and lemon/orange sweets were available in case of VR sickness.

6.2. Participants

The participants in the experiment were all from Abbottabad, KPK, Pakistan, and were recruited through the Establishment department of the COMSATS University Islamabad, Abbottabad Campus. Additionally, some nonliterates were sourced from external locations. All the participants followed Islam as their religion, and all understood Urdu, the national language of Pakistan. Female participants were hesitant to perform the experiment in front of males; therefore, a female experimenter was arranged. A sample of 10 nonliterate adults (split evenly between males and females), ranging in age from 19 to 50, participated in the experiment. All the participants had smartphones except for one, and three of them also used computers. The participants who had access to modern technology, such as smartphones or computers, used these devices primarily for consuming multimedia content. During the experiment, half of the participants used the VRLE prototype with motion controllers, while the other half used their hands. A full list of participant information can be found in Table 13.

6.3. Analysis of Results

The study focuses on nonliterate users and aims to compare the interaction performance between controllers and hands when using a VR application. Additionally, the effect of technological experience on interactivity was also measured. In the experiment, two independent variables were employed. The “Interaction Modality” variable, which had two classes (controllers and hands), was used to assess the interaction performance between controllers and hands. The “Years of Technological Experience” variable, which had three classes (less than 1 year, 1 to 5 years, and over 5 years), was used to evaluate the impact of technological experience on interactivity. The analysis was conducted using SPSS 20. One-sample t-tests were performed on the measures of effectiveness and efficiency, using the hypothesized mean values from Table 12. The tests were conducted for both interaction modalities, and the results are shown in Appendix F, Table A6 and Appendix G, Table A7. The statistically significant results and their descriptive statistics are presented in Table 14 and Table 15.
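As an illustration of this procedure, the sketch below runs a one-sample t-test in SciPy for a single measure against an example test value; the file and column names are hypothetical, and the actual hypothesized means are those in Table 12.

```python
import pandas as pd
from scipy import stats

# Hypothetical file: nonliterate participants who used the controller modality
df = pd.read_csv("study2_controllers.csv")

test_value = 25.58   # example hypothesized mean (seconds) derived from study 1
t_stat, p_value = stats.ttest_1samp(df["task2_time"].dropna(), popmean=test_value)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p < 0.05 -> differs from the test value
```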
The participants reported satisfaction levels ranging from 70 to 100, with an average of 90.75, indicating a high level of satisfaction with the VR application. Out of 31 measures, only 6 (comprising 1 measure of effectiveness and 5 measures of efficiency) were statistically significant when controllers were employed as the interaction modality. However, when hands were used as the interaction modality, the number of statistically significant measures increased to 12 (3 measures of effectiveness and 9 measures of efficiency) out of 31. The findings suggest that nonliterate adults may have struggled when using their hands for interaction and that controllers may be a preferable option until hand-tracking technology improves. Detailed discussions are presented in the following subsections.

6.3.1. Analysis of Results with Controllers as an Interaction Modality

Controllers were found to be easier to use for nonliterate adults compared to using hands as an interaction modality. The controllers were equipped with triggers and buttons that the users used to perform the interactions. Our prototype implemented three basic interactions: grab, pinch, and poke. Before the start of the experiment, the users were informed about the controllers and how they worked. A 3D model of the controllers was also provided in VR for reference.
The first significant measure was “Level 1 2nd Task.” In this task, the user had to grab a blue pebble on the map and move it to a specific region. The pebble could only be positioned along the x- and y-axes, and the user had to continuously press the grip button to drag it along the map. On average, nonliterate users completed this task 21.11 s later than the test value. The second measure was “Level 1 4th Task,” in which the user had to open a drawer and place jewelry in it. This was a complex task that required multiple interactions, but it was observed that the nonliterates struggled to find the drawer handle, which was intentionally colored the same as the cupboard. That was the main reason this variable was significant for both interaction modalities. Moreover, the mean differences for the two interaction modalities were also similar, i.e., 14.74 and 13.25. The third and sixth measures were correlated and needed to be evaluated together. Both interaction modalities yielded significant findings for these measures, where the goal was to grab a sword and use it to cut hay sticks. The results showed significance for both interaction modalities; however, the values and causes of this significance varied. The mean difference from the test value was 4.0 s and 12.09 s for the controller and hand modalities, respectively. The results showed that, when using controllers, nonliterate users were quicker to grab the sword with the grip button but had difficulties holding onto it when trying to cut the hay sticks, resulting in more mistakes and longer completion times. On the other hand, the slower completion time while using hands was due to a higher temporal offset. The next measure was “Level 2 1st Task,” in which the user was required to grab an alphabet card using a distance grab interaction. The reasons for this significance were aligned with the ones observed in study 1 and discussed in Section 5.3.2. The next significant measure, which occurred in both analyses, was related to the user’s curiosity. The results indicate that nonliterates prioritized completing the task at hand and did not spend much time interacting with unmentioned objects, as evident from the lower mean difference.

6.3.2. Analysis of Results with Hands as Interaction Modality

It is evident from the results of the one-sample t-tests shown in Appendix F (Table A6), Appendix G (Table A7), and Table 15 that using real hands as an interaction modality proved challenging for nonliterate users. Although the same interactions were implemented for both modalities, interacting with objects by hand required the user to form the pose expected for that interaction, and users were observed to produce widely varying poses for the specified interactions. For example, to grab an object the user had to make a fist, but most of the nonliterates tried to grasp the object as they would in the real world. This proved to be the major cause of the increased errors and completion-time delays. Notably, although the reported satisfaction level was higher when hands were used as the interaction modality, the observed effectiveness and efficiency were better with controllers.
Measures 1, 2, 6, and 11 have already been discussed in the section above. The next three significant measures related to the three modes for practicing alphabet writing and recognition in level two. The first mode involved writing letters in the air, where the user had to hold a specific pinch pose to write; many participants struggled to do so because of the variation in the poses they produced. The second mode involved writing with a marker on a board; it was noted that nonliterate users may not know how to hold a marker properly, which made it difficult for them to form the pose needed to grab it. The third mode was a typewriter-style interface for identifying alphabets; here, too, users struggled to produce the correct pose, a difficulty that may have been compounded by the many buttons placed in close proximity. The next significant measure was related to compliance: nonliterate users were observed to be more compliant with instructions than the non-tech-literate ones. The following significant measure concerned the poses and gestures required for interaction; as noted several times above, the production of varied poses was the primary cause of the difficulties when hands were used as the interaction modality. The remaining significant measures dealt with errors in complex interactions. Tasks that required two-handed interaction had the highest error rates, such as grabbing an alphabet card with one hand while pressing the button on it with the other. Nonliterate users also struggled in unexpected situations; for instance, in level one, the user may have to adjust the speed of the sword swing to cut the hay stick because of the temporal offset, but nonliterate users were unable to do so.
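To make the pose problem concrete, the hedged sketch below illustrates how a hand-tracking layer might classify a tracked hand into grab, pinch, and poke poses; the finger-curl representation and thresholds are invented for illustration and do not describe the SDK used in the prototype.
```python
# Hypothetical illustration of why hand tracking was error-prone: an interaction
# fires only when the tracked hand matches the expected pose, so "almost" fists
# or loose pinches (common among the nonliterate participants) register as no
# interaction at all, which accumulates errors and delays.
def classify_pose(curl, pinch_distance_cm):
    """curl: per-finger flexion in [0, 1] ordered thumb..pinky;
    pinch_distance_cm: distance between thumb tip and index tip."""
    if all(c > 0.8 for c in curl):                          # full fist -> grab pose
        return "grab"
    if pinch_distance_cm < 1.5 and curl[1] > 0.5:           # thumb and index touching -> pinch pose
        return "pinch"
    if curl[1] < 0.2 and all(c > 0.7 for c in curl[2:]):    # extended index, others curled -> poke pose
        return "poke"
    return "none"  # unrecognised pose: the object is simply not grabbed

# A real-world-style grab attempt with half-open fingers is not recognised as a fist:
print(classify_pose(curl=[0.9, 0.55, 0.6, 0.6, 0.5], pinch_distance_cm=6.0))  # -> none
```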

6.3.3. Analysis of Results for Years of Technological Experience on Interaction

A Kruskal–Wallis H-test was also performed to evaluate the impact of technological experience on interaction. It was initially assumed that this factor could affect the effectiveness and efficiency of interactions, but the results in Appendix H (Table A8) show that the number of years of experience with technology had no significant impact on the usability of the VR application.
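For reference, such a test can be computed as in the minimal Python sketch below; the experience groupings and completion times are hypothetical, not the study’s data.
```python
# Sketch of a Kruskal-Wallis H-test checking whether years of technological
# experience affect one usability measure (hypothetical values).
from scipy import stats

# Completion times (s) grouped by years of technological experience.
low_exp  = [48, 52, 45]    # e.g., < 2 years (hypothetical grouping)
mid_exp  = [44, 50, 47]    # e.g., 2-5 years
high_exp = [46, 43, 49]    # e.g., > 5 years

h, p = stats.kruskal(low_exp, mid_exp, high_exp)
print(f"H = {h:.3f}, p = {p:.3f}")  # a p-value above 0.05 would indicate no significant effect
```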

6.4. Discussions of Results

The results of the experiment suggest that controllers are more usable by nonliterates than hands as an interaction modality, which goes against our initial assumption. Nonliterate users found the controllers, with their triggers and buttons, easier to use, whereas interacting with their hands proved difficult because they had trouble forming the correct pose for the specified interactions. Despite the higher reported satisfaction with hands, the observed effectiveness and efficiency were better with controllers.

7. Discussions

In this study, the usability of the VR prototype was analyzed by evaluating its effectiveness and efficiency. The users’ satisfaction with the experience was gauged, and all participants were found to be generally pleased. Task completion was considered an indicator of effectiveness, and all users were able to successfully complete every task in the VR prototype. The effectiveness aspect of productivity and the usability parameter of efficiency were further analyzed using statistical techniques, namely ANOVA (presented in Appendix A, Appendix B, Appendix C and Appendix D) and one-sample t-tests (Appendix F and Appendix G).
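As an illustration of the ANOVA step, the short Python sketch below compares the three user types on a single hypothetical completion-time measure; only the group sizes (12, 8, and 10) follow the study, the values themselves are invented.
```python
# Minimal one-way ANOVA sketch across the three user types for one measure.
from scipy import stats

tech_literate     = [40, 55, 38, 62, 47, 51, 35, 49, 58, 42, 46, 53]  # n = 12 (hypothetical times, s)
non_tech_literate = [50, 61, 44, 57, 48, 66, 39, 52]                   # n = 8
nonliterate       = [63, 58, 71, 55, 60, 66, 49, 57, 64, 52]           # n = 10

f, p = stats.f_oneway(tech_literate, non_tech_literate, nonliterate)
print(f"F(2, 27) = {f:.3f}, p = {p:.3f}")  # compare p to 0.05 to decide whether user type matters
```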

7.1. Summary of Results

The following summary of results relates to Study 1. For non-tech-literate and nonliterate users, moving objects in 3D proved challenging, as seen in the slow completion times for task 2. Nonliterate users faced difficulties opening the drawer in task 4, attempting to do so from the sides instead of the handle. Task 6, the most complex task, took longer for both non-tech-literate and nonliterate users to complete, which may have been due to the low lighting in the room where the experiment was conducted. The results also showed that nonliterate users were slower at completing task 2 of level 2, although their curiosity to learn was demonstrated by their watching the entire video. Nonliterate users struggled with some tasks requiring hands as an interaction modality, such as opening a drawer whose handle blended in with the cupboard. They sometimes sought external help, were confused by the reticle while trying to grab an object, interacted only with the required objects, and followed the in-app instructions, while non-tech-literate users showed a higher level of curiosity and tried to interact with all objects. Controllers were a better option for tasks requiring fast actions, such as shooting a target with a pistol, as they led to faster completion times and fewer errors; hand tracking introduced a delay between real and simulated actions that affected fast interactions, whereas controllers were self-tracked and not affected. The multivariate analysis showed that the use of modern technology had a significant impact on some VR interactions for nonliterate users, making task 2 of level 1 more challenging for those who did not use modern technology; it also influenced some complex two-handed interactions.
The following summarizes the findings of Study 2. The results showed that using controllers as an interaction mode was easier for nonliterate adults than using their hands. The controllers were equipped with triggers and buttons, making it easier for users to perform interactions. Nonliterate users were able to complete certain tasks faster and with fewer errors when using controllers, such as shooting a target. However, nonliterate users had difficulties with some tasks that required multiple interactions using controllers, such as cutting hay sticks. The study suggests that controllers were a more effective and efficient interaction mode for nonliterate users compared to using hands. The variations in hand poses caused difficulties in interaction when using hands as the interaction mode. Nonliterate users were found to be more compliant with instructions but struggled in unexpected situations. Tasks requiring two-handed interaction had the highest error rates, and nonliterate users struggled with understanding and performing the correct poses. The Kruskal–Wallis H-test performed in the study did not find a significant impact of the number of years of experience with technology on the usability of VR applications, suggesting that more experience with technology does not necessarily lead to better VR interaction performance.

7.2. Limitations

The study had some limitations that should be considered. One was the small sample size, which may limit the generalizability of the findings; further research with a larger sample is needed to make more robust comparisons. Another limitation was the variation in lighting conditions between experimental locations, which degraded hand tracking and increased the temporal offset. To mitigate this issue, an infrared illuminator was used and a decrease in errors was observed, but more data are needed to confirm this effect.

7.3. Suggestions and Recommendations

Based on the results of Study 1 and Study 2, the following suggestions and recommendations are proposed to direct future research and development of VR experiences for nonliterate users:
  • Consider using controllers as the interaction mode instead of hands: Results from both studies indicate that nonliterate users find controllers easier to use and perform interactions faster and with fewer errors compared to using their hands.
  • Enhance the design of the VR educational application: Based on the results of Study 1, it is recommended that the design of the application be modified to cater to the specific needs of nonliterate users, taking into consideration their behavior patterns and the difficulties they face.
  • Improve lighting conditions: Results from Study 1 indicate that low lighting levels can negatively impact the completion time of complex tasks. Hence, it is recommended to ensure adequate brightness levels in the room where the VR experience is conducted.
  • Provide clear instructions and a reticle: Results from Study 1 show that nonliterate users sometimes sought external help and were confused by the reticle while trying to grab an object. Hence, it is recommended to provide clear instructions and a visible reticle to help users with their interactions.
  • Consider alternative interactions: Results from Study 1 suggest that gestures could be a suitable alternative interaction mode for nonliterate users and warrant further investigation.
  • Consider user training and familiarization: Results from Study 1 indicate that user training and familiarization could impact the performance of nonliterate users in VR systems. Hence, it is recommended to explore the effect of training on VR interaction performance for nonliterate users.
  • Consider the impact of technology experience: Results from Study 2 suggest that more experience with technology does not necessarily lead to better VR interaction performance. Hence, it is recommended to examine the impact of technology experience on specific VR interaction tasks to better understand how experience affects performance.
  • Consider the impact of technology use: Results from Study 1 show that the use of modern technology can have a significant impact on VR interactions for nonliterate users. Hence, it is recommended to explore the impact of using different types of technology (e.g., smartphones, laptops, and computers) on VR interactivity.
  • Expand the study to include a wider range of tasks and interactions: Results from Study 1 indicate that nonliterate users struggled with some VR tasks and that the study could be expanded to include a wider range of tasks and interactions to gain a more comprehensive understanding of the abilities of nonliterate users in VR systems.

8. Conclusions

Our study found that the educational VR application is effective and efficient for nonliterate individuals owing to its simple interface and use of visual and auditory aids for learning. The results showed high levels of satisfaction among all types of users, with reported satisfaction scores ranging from 78 to 100. The data showed no significant differences in use of the VR application among tech-literate, non-tech-literate, and nonliterate individuals. The results also showed that controllers were more usable for nonliterate individuals than hands as an interaction modality and that experience with technology and familiarity with modern technology had little impact on the usability of the VR application.
The first research question stated that the educational VR application would be usable for nonliterate individuals. The results showed high levels of satisfaction among all types of users, with reported satisfaction scores ranging from 78 to 100. The two one-way ANOVA tests showed no significant differences in use of the VR application among tech-literate, non-tech-literate, and nonliterate individuals, although a few variables were found to have a significant impact.
The second research question speculated that the educational VR app would be easier for nonliterate individuals to use, and this was partially supported by the results. All user types reported a high level of satisfaction with the app. However, the one-way ANOVA tests indicated that there were significant differences in task completion times among the user types, with non-tech-literate and nonliterate individuals taking longer to finish certain tasks than tech-literate users. The results also revealed that nonliterate individuals had difficulty with certain interactions, such as grabbing and moving objects and interacting with usable objects. This suggests that the educational VR app may need to be improved to better cater to the needs of nonliterate users in certain areas.
The third research question focused on the interaction performance of controllers versus hands for nonliterate individuals. The results showed that controllers were more usable for nonliterate individuals than hands as an interaction modality, which contradicts our hypothesis: only 19.4% of the measures (6 of 31) were statistically significant when using controllers, compared to 38.7% (12 of 31) with hands. Nonliterate individuals found controllers easier to use because of their triggers and buttons, whereas using hands as an interaction modality was challenging because they struggled to create the correct pose for interactions. Despite a higher reported satisfaction level with hands, the observed effectiveness and efficiency were better with controllers.
The results also indicated that experience with technology and familiarity with modern technology had little impact on the usability of VR applications.

Author Contributions

Conceptualization, M.I.G. and I.A.K.; Data curation, M.I.G.; Formal analysis, I.A.K. and S.S.; Investigation, M.I.G.; Methodology, I.A.K.; Project administration, M.E.-A.; Resources, S.S.; Software, M.I.G.; Supervision, I.A.K.; Validation, I.A.K.; Visualization, M.I.G. and I.A.K.; Writing—original draft, M.I.G.; Writing—review and editing, S.S. and M.E.-A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the guidelines of the COMSATS University Islamabad Research and Evaluation Committee (CUI-REC).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Acknowledgments

This work has been supported by EIAS Data Science Lab, Prince Sultan University, KSA. The authors would like to thank EIAS Data Science Lab and Prince Sultan University for their encouragement, support and the facilitation of resources needed to complete the project.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Descriptive Values of ANOVA with user type as the independent variable. 1—Tech-Literate, 2—Non-Tech-Literate, 3—Nonliterate.
NMeanStd. Deviation
1st Task (Object Interaction—Grab Interaction) Completion Time11245.4222.236
2844.8819.853
31059.4012.756
Total3049.9319.483
ModelFixed Effects 18.917
Random Effects
2nd Task (Map—Grab & Move Interaction) Completion Time11225.5827.158
2854.2536.850
31058.8020.741
Total3044.3031.398
ModelFixed Effects 28.212
Random Effects
3rd Task (Change Music/TV Ch—Poke Interaction) Completion Time11233.4219.228
2824.6314.162
31041.6016.399
Total3033.8017.787
ModelFixed Effects 17.096
Random Effects
4th Task (Put Jewellery—Grab & Move + Grab & Place Interaction) Completion Time11221.5011.302
2823.3811.057
31038.6017.977
Total3027.7015.501
ModelFixed Effects 13.837
Random Effects
5th Task (Sword & Cut—Grab & Use Interaction) Completion Time11220.2511.871
2823.3811.488
31033.3015.720
Total3025.4313.987
ModelFixed Effects 13.191
Random Effects
6th Task (Pistol & Shoot—Grab & Move + Grab & Use Interaction) Completion Time11253.8321.294
2832.2510.553
31053.1016.231
Total3047.8319.289
ModelFixed Effects 17.361
Random Effects
1st Task (Alphabet Cards—Distance Grab + Pinch Interaction) Completion Time11230.3021.734
2849.0540.580
31021.364.527
Total3032.3226.520
ModelFixed Effects 25.024
Random Effects
2nd Task (Alphabet Cards—Two-handed, Distance Grab + Pinch + Poke Interaction) Completion Time11239.2222.222
2836.2213.764
31069.2137.228
Total3048.4229.804
ModelFixed Effects 26.688
Random Effects
Air Writing (Pinch + Move Interaction)11264.0115.376
2868.2726.401
31080.8811.873
Total3070.7718.909
ModelFixed Effects 18.001
Random Effects
Board Writing (Grab or Pinch + Move interaction)11280.0119.229
2885.3432.991
310101.1114.839
Total3088.4623.638
ModelFixed Effects 22.500
Random Effects
Typewriter (Poke Interaction)11296.0023.063
28102.3939.581
310121.3217.793
Total30106.1428.356
ModelFixed Effects 26.989
Random Effects
Errors in Grab Interaction1122.171.193
282.752.252
3104.202.251
Total303.002.034
ModelFixed Effects 1.893
Random Effects
Errors in Grab and Move Interaction1128.175.734
287.754.833
31011.406.004
Total309.135.655
ModelFixed Effects 5.609
Random Effects
Errors in Grab and Use Sword1124.584.420
282.881.727
3104.403.688
Total304.073.591
ModelFixed Effects 3.642
Random Effects
Errors in Grab and Use Pistol1128.756.510
284.254.590
3104.704.218
Total306.205.586
ModelFixed Effects 5.354
Random Effects
Errors in Distance Grab Interaction1121.001.954
281.752.435
3104.002.667
Total302.202.618
ModelFixed Effects 2.337
Random Effects
Errors in Poke Interaction on Alphabet Card112.831.115
281.632.200
3102.803.706
Total301.702.575
ModelFixed Effects 2.518
Random Effects
Errors in Pinch and Move Interaction to Write in Air1121.831.850
281.000.926
3102.502.321
Total301.831.877
ModelFixed Effects 1.848
Random Effects
Errors in Grab, Move, and Use Interaction to Write on Board1122.081.621
281.001.773
3103.102.514
Total302.132.097
ModelFixed Effects 1.998
Random Effects
Errors in Poke Interaction on Urdu Keyboard1121.672.807
282.252.659
3101.301.059
Total301.702.277
ModelFixed Effects 2.328
Random Effects
The user is confident while interacting with the VR.1124.750.622
284.880.354
3104.300.823
Total304.630.669
ModelFixed Effects 0.645
Random Effects
The user required external guidance to complete the task.1121.330.778
281.130.354
3102.300.483
Total301.600.770
ModelFixed Effects 0.598
Random Effects
The user tries to interact with every object.1123.081.165
282.881.458
3101.400.699
Total302.471.332
ModelFixed Effects 1.125
Random Effects
The user is following the in-app instructions.1123.580.900
284.001.195
3104.800.422
Total304.100.995
ModelFixed Effects 0.872
Random Effects
The user tried varied poses to interact with the objects.1122.331.371
281.880.835
3101.700.949
Total302.001.114
ModelFixed Effects 1.116
Random Effects
Total Interaction in Level 111227.677.820
2828.3810.446
31025.406.022
Total3027.107.897
ModelFixed Effects 8.080
Random Effects
Total Interactions in Level 21128.581.443
289.382.825
3109.102.132
Total308.972.059
ModelFixed Effects 2.105
Random Effects
I feel discomfort after using VR.1121.170.389
281.250.463
3101.500.707
Total301.300.535
ModelFixed Effects 0.533
Random Effects
I feel fatigued after using VR.1121.250.622
281.130.354
3101.000.000
Total301.130.434
ModelFixed Effects 0.436
Random Effects
The user is in postural sway while standing.1121.080.289
281.000.000
3101.300.483
Total301.130.346
ModelFixed Effects 0.334
Random Effects

Appendix B

Table A2. Descriptive Values of ANOVA with Interaction Modality as the independent variable. 1—Controllers, 2—Hands.
NMeanStd. Deviation
1st Task (Object Interaction—Grab Interaction) Completion Time11249.3320.219
21850.3319.560
Total3049.9319.483
ModelFixed Effects 19.821
Random Effects
2nd Task (Map—Grab & Move Interaction) Completion Time11235.5023.283
21850.1735.211
Total3044.3031.398
ModelFixed Effects 31.076
Random Effects
3rd Task (Change Music/TV Ch—Poke Interaction) Completion Time11234.8317.251
21833.1118.598
Total3033.8017.787
ModelFixed Effects 18.081
Random Effects
4th Task (Put Jewellery—Grab & Move + Grab & Place Interaction) Completion Time11222.179.916
21831.3917.614
Total3027.7015.501
ModelFixed Effects 15.066
Random Effects
5th Task (Sword & Cut—Grab & Use Interaction) Completion Time11224.5814.126
21826.0014.275
Total3025.4313.987
ModelFixed Effects 14.216
Random Effects
6th Task (Pistol & Shoot—Grab & Move + Grab & Use Interaction) Completion Time11235.7510.931
21855.8919.638
Total3047.8319.289
ModelFixed Effects 16.765
Random Effects
1st Task (Alphabet Cards—Distance Grab + Pinch Interaction) Completion Time11230.6125.809
21833.4627.665
Total3032.3226.520
ModelFixed Effects 26.951
Random Effects
2nd Task (Alphabet Cards—Two-handed, Distance Grab + Pinch + Poke Interaction) Completion Time11244.9626.247
21850.7232.488
Total3048.4229.804
ModelFixed Effects 30.190
Random Effects
Air Writing (Pinch + Move Interaction)11274.4217.347
21868.3419.990
Total3070.7718.909
ModelFixed Effects 18.996
Random Effects
Board Writing (Grab or Pinch + Move interaction)11293.0221.672
21885.4224.995
Total3088.4623.638
ModelFixed Effects 23.745
Random Effects
Typewriter (Poke Interaction)112111.6226.011
218102.4929.977
Total30106.1428.356
ModelFixed Effects 28.485
Random Effects
Errors in Grab Interaction1122.502.023
2183.332.029
Total303.002.034
ModelFixed Effects 2.027
Random Effects
Errors in Grab and Move Interaction1126.754.181
21810.726.047
Total309.135.655
ModelFixed Effects 5.391
Random Effects
Errors in Grab and Use Sword1122.251.603
2185.284.056
Total304.073.591
ModelFixed Effects 3.316
Random Effects
Errors in Grab and Use Pistol1122.421.165
2188.725.959
Total306.205.586
ModelFixed Effects 4.700
Random Effects
Errors in Distance Grab Interaction1122.002.256
2182.332.890
Total302.202.618
ModelFixed Effects 2.659
Random Effects
Errors in Poke Interaction on Alphabet Card1121.581.782
2181.783.040
Total301.702.575
ModelFixed Effects 2.619
Random Effects
Errors in Pinch and Move Interaction to Write in Air1121.421.621
2182.112.026
Total301.831.877
ModelFixed Effects 1.877
Random Effects
Errors in Grab, Move, and Use Interaction to Write on Board1121.171.403
2182.782.264
Total302.132.097
ModelFixed Effects 1.971
Random Effects
Errors in Poke Interaction on Urdu Keyboard1121.581.311
2181.782.777
Total301.702.277
ModelFixed Effects 2.315
Random Effects
The user is confident while interacting with the VR.1124.920.289
2184.440.784
Total304.630.669
ModelFixed Effects 0.637
Random Effects
The user required external guidance to complete the task.1121.500.522
2181.670.907
Total301.600.770
ModelFixed Effects 0.779
Random Effects
The user tries to interact with every object.1122.331.231
2182.561.423
Total302.471.332
ModelFixed Effects 1.351
Random Effects
The user is following the in-app instructions.1124.500.798
2183.831.043
Total304.100.995
ModelFixed Effects 0.954
Random Effects
The user tried varied poses to interact with the objects.1121.670.888
2182.221.215
Total302.001.114
ModelFixed Effects 1.098
Random Effects
Total Interaction in Level 111225.007.198
21828.508.227
Total3027.107.897
ModelFixed Effects 7.839
Random Effects
Total Interactions in Level 21129.252.527
2188.781.734
Total308.972.059
ModelFixed Effects 2.082
Random Effects
I feel discomfort after using VR.1121.420.669
2181.220.428
Total301.300.535
ModelFixed Effects 0.535
Random Effects
I feel fatigued after using VR.1121.170.389
2181.110.471
Total301.130.434
ModelFixed Effects 0.441
Random Effects
The user is in postural sway while standing.1121.080.289
2181.170.383
Total301.130.346
ModelFixed Effects 0.349
Random Effects

Appendix C

Table A3. ANOVA Results with user type as the independent variable. 1—Tech-Literate, 2—Non-tech-Literate, 3—Nonliterate.
Sum of SquaresdfMean SquareFSig.
1st Task (Object Interaction—Grab Interaction) Completion TimeBetween Groups1345.6752672.8381.8800.172
Within Groups9662.19227357.859
Total11,007.86729
2nd Task (Map—Grab & Move Interaction) Completion TimeBetween Groups7098.28323549.1424.4590.021
Within Groups21,490.01727795.927
Total28,588.30029
3rd Task (Change Music/TV Ch—Poke Interaction) Completion TimeBetween Groups1283.6082641.8042.1960.131
Within Groups7891.19227292.266
Total9174.80029
4th Task (Put Jewellery—Grab & Move + Grab & Place Interaction) Completion TimeBetween Groups1799.0252899.5134.6980.018
Within Groups5169.27527191.455
Total6968.30029
5th Task (Sword & Cut—Grab & Use Interaction) Completion TimeBetween Groups975.1422487.5712.8020.078
Within Groups4698.22527174.008
Total5673.36729
6th Task (Pistol & Shoot—Grab & Move + Grab & Use Interaction) Completion TimeBetween Groups2652.10021326.0504.3990.022
Within Groups8138.06727301.410
Total10,790.16729
1st Task (Alphabet Cards—Distance Grab + Pinch Interaction) Completion TimeBetween Groups3489.32421744.6622.7860.079
Within Groups16,907.40427626.200
Total20,396.72829
2nd Task (Alphabet Cards—Two-handed, Distance Grab + Pinch + Poke Interaction) Completion TimeBetween Groups6528.40123264.2014.5830.019
Within Groups19,231.38127712.273
Total25,759.78229
Air Writing (Pinch + Move Interaction)Between Groups1620.5632810.2812.5010.101
Within Groups8748.74027324.027
Total10,369.30329
Board Writing (Grab or Pinch + Move interaction)Between Groups2535.39321267.6962.5040.101
Within Groups13,668.25727506.232
Total16,203.65029
Typewriter (Poke Interaction)Between Groups3650.80921825.4042.5060.100
Within Groups19,666.82527728.401
Total23,317.63429
Errors in Grab InteractionBetween Groups23.233211.6173.2410.055
Within Groups96.767273.584
Total120.00029
Errors in Grab and Move InteractionBetween Groups77.900238.9501.2380.306
Within Groups849.5672731.465
Total927.46729
Errors in Grab and Use SwordBetween Groups15.67527.8380.5910.561
Within Groups358.1922713.266
Total373.86729
Errors in Grab and Use PistolBetween Groups130.950265.4752.2840.121
Within Groups773.8502728.661
Total904.80029
Errors in Distance Grab InteractionBetween Groups51.300225.6504.6950.018
Within Groups147.500275.463
Total198.80029
Errors in Poke Interaction on Alphabet CardBetween Groups21.158210.5791.6690.207
Within Groups171.142276.339
Total192.30029
Errors in Pinch and Move Interaction to Write in AirBetween Groups10.00025.0001.4650.249
Within Groups92.167273.414
Total102.16729
Errors in Grab, Move, and Use Interaction to Write on BoardBetween Groups19.65029.8252.4600.104
Within Groups107.817273.993
Total127.46729
Errors in Poke Interaction on Urdu KeyboardBetween Groups4.03322.0170.3720.693
Within Groups146.267275.417
Total150.30029
The user is confident while interacting with the VR.Between Groups1.74220.8712.0950.143
Within Groups11.225270.416
Total12.96729
The user required external guidance to complete the task.Between Groups7.55823.77910.5830.000
Within Groups9.642270.357
Total17.20029
The user tries to interact with every object.Between Groups17.27528.6386.8210.004
Within Groups34.192271.266
Total51.46729
The user is following the in-app instructions.Between Groups8.18324.0925.3850.011
Within Groups20.517270.760
Total28.70029
The user tried varied poses to interact with the objects.Between Groups2.35821.1790.9460.401
Within Groups33.642271.246
Total36.00029
Total Interaction in Level 1Between Groups45.758222.8790.3500.708
Within Groups1762.9422765.294
Total1808.70029
Total Interactions in Level 2Between Groups3.27521.6370.3690.695
Within Groups119.692274.433
Total122.96729
I feel discomfort after using VR.Between Groups0.63320.3171.1150.342
Within Groups7.667270.284
Total8.30029
I feel fatigued after using VR.Between Groups0.34220.1710.9000.418
Within Groups5.125270.190
Total5.46729
The user is in postural sway while standing.Between Groups0.45020.2252.0140.153
Within Groups3.017270.112
Total3.46729

Appendix D

Table A4. ANOVA Results with interaction modality as the independent variable. 1—Controllers, 2—Hands.
Sum of SquaresdfMean SquareFSig.
1st Task (Object Interaction—Grab Interaction) Completion TimeBetween Groups7.20017.2000.0180.893
Within Groups11,000.66728392.881
Total11,007.86729
2nd Task (Map—Grab & Move Interaction) Completion TimeBetween Groups1548.80011548.8001.6040.216
Within Groups27,039.50028965.696
Total28,588.30029
3rd Task (Change Music/TV Ch—Poke Interaction) Completion TimeBetween Groups21.356121.3560.0650.800
Within Groups9153.44428326.909
Total9174.80029
4th Task (Put Jewellery—Grab & Move + Grab & Place Interaction) Completion TimeBetween Groups612.3561612.3562.6980.112
Within Groups6355.94428226.998
Total6968.30029
5th Task (Sword & Cut—Grab & Use Interaction) Completion TimeBetween Groups14.450114.4500.0710.791
Within Groups5658.91728202.104
Total5673.36729
6th Task (Pistol & Shoot—Grab & Move + Grab & Use Interaction) Completion TimeBetween Groups2920.13912920.13910.3890.003
Within Groups7870.02828281.072
Total10,790.16729
1st Task (Alphabet Cards—Distance Grab + Pinch Interaction) Completion TimeBetween Groups58.596158.5960.0810.778
Within Groups20,338.13228726.362
Total20,396.72829
2nd Task (Alphabet Cards—Two-handed, Distance Grab + Pinch + Poke Interaction) Completion TimeBetween Groups239.2011239.2010.2620.612
Within Groups25,520.58028911.449
Total25,759.78229
Air Writing (Pinch + Move Interaction)Between Groups265.9641265.9640.7370.398
Within Groups10,103.33928360.834
Total10,369.30329
Board Writing (Grab or Pinch + Move interaction)Between Groups416.1761416.1760.7380.398
Within Groups15,787.47428563.838
Total16,203.65029
Typewriter (Poke Interaction)Between Groups599.1481599.1480.7380.397
Within Groups22,718.48628811.375
Total23,317.63429
Errors in Grab InteractionBetween Groups5.00015.0001.2170.279
Within Groups115.000284.107
Total120.00029
Errors in Grab and Move InteractionBetween Groups113.6061113.6063.9080.058
Within Groups813.8612829.066
Total927.46729
Errors in Grab and Use SwordBetween Groups66.006166.0066.0030.021
Within Groups307.8612810.995
Total373.86729
Errors in Grab and Use PistolBetween Groups286.2721286.27212.9590.001
Within Groups618.5282822.090
Total904.80029
Errors in Distance Grab InteractionBetween Groups0.80010.8000.1130.739
Within Groups198.000287.071
Total198.80029
Errors in Poke Interaction on Alphabet CardBetween Groups0.27210.2720.0400.844
Within Groups192.028286.858
Total192.30029
Errors in Pinch and Move Interaction to Write in AirBetween Groups3.47213.4720.9850.329
Within Groups98.694283.525
Total102.16729
Errors in Grab, Move, and Use Interaction to Write on BoardBetween Groups18.689118.6894.8110.037
Within Groups108.778283.885
Total127.46729
Errors in Poke Interaction on Urdu KeyboardBetween Groups0.27210.2720.0510.823
Within Groups150.028285.358
Total150.30029
The user is confident while interacting with the VR.Between Groups1.60611.6063.9570.057
Within Groups11.361280.406
Total12.96729
The user required external guidance to complete the task.Between Groups0.20010.2000.3290.571
Within Groups17.000280.607
Total17.20029
The user tries to interact with every object.Between Groups0.35610.3560.1950.662
Within Groups51.111281.825
Total51.46729
The user is following the in-app instructions.Between Groups3.20013.2003.5140.071
Within Groups25.500280.911
Total28.70029
The user tried varied poses to interact with the objects.Between Groups2.22212.2221.8420.186
Within Groups33.778281.206
Total36.00029
Total Interaction in Level 1Between Groups88.200188.2001.4350.241
Within Groups1720.5002861.446
Total1808.70029
Total Interactions in Level 2Between Groups1.60611.6060.3700.548
Within Groups121.361284.334
Total122.96729
I feel discomfort after using VR.Between Groups0.27210.2720.9490.338
Within Groups8.028280.287
Total8.30029
I feel fatigued after using VR.Between Groups0.02210.0220.1140.738
Within Groups5.444280.194
Total5.46729
The user is in postural sway while standing.Between Groups0.05010.0500.4100.527
Within Groups3.417280.122
Total3.46729

Appendix E

Table A5. Estimates of Multivariate Analysis.
Estimates
Dependent VariableUse of Recent TechnologyUser TypeInteraction ModalityMeanStd. Error95% Confidence Interval
Lower BoundUpper Bound
2nd Task (Map—Grab & Move Interaction) Completion TimeNoNonliterateControllers78.00022.33531.681124.319
Hands87.50015.79354.747120.253
LiterateControllers....
Hands....
Tech-LiterateControllers....
Hands....
YesNonliterateControllers48.00011.16724.84071.160
Hands47.66712.89520.92474.409
Non-Tech-LiterateControllers23.50011.1670.34046.660
Hands85.00011.16761.840108.160
Tech-LiterateControllers20.66712.895−6.07647.409
Hands27.2227.44511.78242.662
4th Task (Put Jewellery—Grab & Move + Grab & Place Interaction) Completion TimeNoNonliterateControllers31.00010.6708.87253.128
Hands68.0007.54552.35383.647
Non-Tech-LiterateControllers....
Hands....
Tech-LiterateControllers....
Hands....
YesNonliterateControllers29.2505.33518.18640.314
Hands34.0006.16021.22446.776
Non-Tech-LiterateControllers16.2505.3355.18627.314
Hands30.5005.33519.43641.564
Tech-LiterateControllers17.6676.1604.89130.443
Hands22.7783.55715.40230.154
6th Task (Pistol & Shoot—Grab & Move + Grab & Use Interaction) Completion TimeNoNonliterateControllers50.00013.59621.80478.196
Hands46.5009.61426.56266.438
Non-Tech-LiterateControllers....
Hands....
Tech-LiterateControllers....
Hands....
YesNonliterateControllers42.5006.79828.40256.598
Hands72.6677.85056.38788.946
Non-Tech-LiterateControllers30.0006.79815.90244.098
Hands34.5006.79820.40248.598
Tech-LiterateControllers29.6677.85013.38745.946
Hands61.8894.53252.49071.288
2nd Task (Alphabet Cards—Two-handed, Distance Grab + Pinch + Poke Interaction) Completion TimeNoNonliterateControllers78.60024.02228.782128.418
Hands117.80016.98682.574153.026
Non-Tech-LiterateControllers....
Hands....
Tech-LiterateControllers....
Hands....
YesNonliterateControllers52.35012.01127.44177.259
Hands56.16713.86927.40484.929
Non-Tech-LiterateControllers33.10012.0118.19158.009
Hands39.35012.01114.44164.259
Tech-LiterateControllers39.70013.86910.93868.462
Hands39.0568.00722.45055.661
The user required external guidance to complete the task.NoNonliterateControllers2.0000.5890.7783.222
Hands3.0000.4172.1363.864
Non-Tech-LiterateControllers....
Hands....
Tech-LiterateControllers....
Hands....
YesNonliterateControllers2.0000.2951.3892.611
Hands2.3330.3401.6283.039
Non-Tech-LiterateControllers1.2500.2950.6391.861
Hands1.0000.2950.3891.611
Tech-LiterateControllers1.0000.3400.2941.706
Hands1.444.1961.0371.852
The user tries to interact with every object.NoNonliterateControllers1.0001.180−1.4473.447
Hands1.0000.834−0.7312.731
Non-Tech-LiterateControllers....
Hands....
Tech-LiterateControllers....
Hands....
YesNonliterateControllers2.0000.5900.7763.224
Hands1.0000.681−0.4132.413
Non-Tech-LiterateControllers2.5000.5901.2763.724
Hands3.2500.5902.0264.474
Tech-LiterateControllers3.0000.6811.5874.413
Hands3.1110.3932.2953.927
The user is following the in-app instructions.NoNonliterateControllers4.0000.8702.1955.805
Hands4.5000.6153.2245.776
Non-Tech-LiterateControllers....
Hands....
Tech-LiterateControllers....
Hands....
YesNonliterateControllers5.0000.4354.0975.903
Hands5.0000.5033.9586.042
Non-Tech-LiterateControllers4.2500.4353.3475.153
Hands3.7500.4352.8474.653
Tech-LiterateControllers4.3330.5033.2915.375
Hands3.3330.2902.7323.935
Errors in Grab and Use SwordNoNonliterateControllers3.0003.656−4.58310.583
Hands5.5002.5860.13810.862
Non-Tech-LiterateControllers....
Hands....
Tech-LiterateControllers....
Hands....
YesNonliterateControllers3.0001.828-.7926.792
Hands6.0002.1111.62210.378
Non-Tech-LiterateControllers1.7501.828−2.0425.542
Hands4.0001.8280.2087.792
Tech-LiterateControllers1.6672.111−2.7116.045
Hands5.5561.2193.0288.083
Errors in Grab and Use PistolNoNonliterateControllers4.0004.529−5.39213.392
Hands2.0003.202−4.6418.641
Non-Tech-LiterateControllers....
Hands....
Tech-LiterateControllers....
Hands....
YesNonliterateControllers2.5002.264−2.1967.196
Hands9.6672.6154.24415.089
Non-Tech-LiterateControllers2.2502.264−2.4466.946
Hands6.2502.2641.55410.946
Tech-LiterateControllers2.0002.615−3.4227.422
Hands11.0001.5107.86914.131
Errors in Distance Grab InteractionNoNonliterateControllers5.0002.1260.5919.409
Hands7.5001.5034.38310.617
Non-Tech-LiterateControllers....
Hands....
Tech-LiterateControllers....
Hands....
YesNonliterateControllers1.7501.063−0.4543.954
Hands4.3331.2271.7886.879
Non-Tech-LiterateControllers2.2501.0630.0464.454
Hands1.2501.063−0.9543.454
Tech-LiterateControllers1.0001.227−1.5453.545
Hands1.0000.709−0.4702.470
Errors in Grab, Move, and Use Interaction to Write on BoardNoNonliterateControllers3.0001.3760.1465.854
Hands4.441 × 10−160.973−2.0182.018
Non-Tech-LiterateControllers....
Hands....
Tech-LiterateControllers....
Hands....
YesNonliterateControllers2.5000.6881.0733.927
Hands6.0000.7954.3527.648
Non-Tech-LiterateControllers−2.776 × 10−160.688−1.4271.427
Hands2.0000.6880.5733.427
Tech-LiterateControllers0.3330.795−1.3141.981
Hands2.6670.4591.7153.618

Appendix F

Table A6. Results of t-tests with controllers as an interaction modality.
Test Value = 24.14
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Total Interactions in Level 10.91340.4131.460−2.985.90
Test Value = 40
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 1 1st Task (Object Interaction—Grab Interaction) Completion Time2.45240.07014.200−1.8830.28
Test Value = 22.29
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 1 2nd Task (Map—Grab & Move Interaction) Completion Time4.53640.01121.1108.1934.03
Test Value = 27.14
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 1 3rd Task (Change Music/TV Ch—Poke Interaction) Completion Time1.60940.1832.060−1.505.62
Test Value = 16.86
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 1 4th Task (Put Jewelry—Grab & Move + Grab & Place Interaction) Completion Time3.61140.02314.7403.4126.07
Test Value = 20
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 1 5th Task (Sword & Cut—Grab & Use Interaction) Completion Time3.16240.0344.0000.497.51
Test Value = 29.86
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 1 6th Task (Pistol & Shoot—Grab & Move + Grab & Use Interaction) Completion Time2.20240.09211.540−3.0126.09
Test Value = 9
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Total Interactions in Level 20.12140.9100.200−4.404.80
Test Value = 38.54
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 2 1st Task (Alphabet Cards—Distance Grab + Pinch Interaction) Completion Time−7.20740.002−16.340−22.63−10.05
Test Value = 35.93
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 2 2nd Task (Alphabet Cards—Two-handed, Distance Grab + Pinch + Poke Interaction) Completion Time0.31140.7713.870−30.6838.42
Test Value = 72.81
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 2 3rd Task Air Writing (Pinch + Move Interaction)1.95040.12310.130−4.2924.55
Test Value = 91.01
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 2 3rd Task Board Writing (Grab or Pinch + Move interaction)1.94640.12412.650−5.4030.70
Test Value = 109.19
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 2 3rd Task Typewriter (Poke Interaction)1.95140.12315.210−6.4436.86
Test Value = 1.14
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
External Users required external guidance to complete the task.0.30040.7790.060−0.500.62
Test Value = 2.71
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Gameplay User tries to interact with every object.−5.34840.006−1.310−1.99−0.63
Test Value = 1.29
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 1 Errors in Grab Interaction1.85040.1381.110−0.562.78
Test Value = 4.71
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 1 Errors in Grab and Move Interaction1.09640.3350.890−1.373.15
Test Value = 4.0
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 1 Errors is Poke Interaction−1.37240.242−0.800−2.420.82
Test Value = 1.71
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 1 Errors in Grab and Use Sword−5.34840.006−1.310−1.99−0.63
Test Value = 2.14
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 1 Errors in Grab and Use Pistol−1.91940.127−0.940−2.300.42
Test Value = 1.71
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 2 Errors in Distance Grab Interaction−1.85840.137−0.910−2.270.45
Test Value = 0.86
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 2 Errors in Poke Interaction on Alphabet Card2.20540.0920.540−0.141.22
Test Value = 0.71
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 2 Errors in Pinch and Move Interaction to Write in Air−1.26640.274−0.310−0.990.37
Test Value = 0.14
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 2 Errors in Grab, Move and Use Interaction to Write on Board1.92340.1270.860−0.382.10
Test Value = 1.86
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 2 Errors in Poke Interaction on Urdu Keyboard−0.51040.637−0.260−1.681.16
Test Value = 1.29
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
VR Sickness I feel discomfort.−0.45040.676−0.090−0.650.47

Appendix G

Table A7. Results of t-tests with hands as interaction modality.
Test Value = 30
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Total Interactions in Level 1−2.26440.086−4.200−9.350.95
Test Value = 48
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 1 1st Task (Object Interaction—Grab Interaction) Completion Time−0.64940.552−2.400−12.667.86
Test Value = 45
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 1 2nd Task (Map—Grab & Move Interaction) Completion Time−0.67340.538−1.600−8.215.01
Test Value = 31.38
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 1 3rd Task (Change Music/TV Ch—Poke Interaction) Completion Time1.62540.1808.020−5.6821.72
Test Value = 25.15
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 1 4th Task (Put Jewelry—Grab & Move + Grab & Place Interaction) Completion Time6.74440.00313.2507.8018.70
Test Value = 22.31
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 1 5th Task (Sword & Cut—Grab & Use Interaction) Completion Time3.68640.02112.0902.9821.20
Test Value = 53.46
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 1 6th Task (Pistol & Shoot—Grab & Move + Grab & Use Interaction) Completion Time1.28340.2699.940−11.5831.46
Test Value = 8.85
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Total Interactions in Level 20.49540.6471.150−5.307.60
Test Value = 37.40
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 2 1st Task (Alphabet Cards—Distance Grab + Pinch Interaction) Completion Time−1.56440.193−5.800−16.104.50
Test Value = 39.15
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 2 2nd Task (Alphabet Cards—Two-handed, Distance Grab + Pinch + Poke Interaction) Completion Time1.83740.14029.250−14.9573.45
Test Value = 61.89
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 2 3rd Task Air Writing (Pinch + Move Interaction)4.86040.00818.8708.0929.65
Test Value = 77.36
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 2 3rd Task Board Writing (Grab or Pinch + Move interaction)4.83940.00823.56010.0437.08
Test Value = 92.83
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 2 3rd Task Typewriter (Poke Interaction)4.84940.00828.29012.0944.49
Test Value = 4.69
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
External User is confident while interacting with the VR.−1.00040.374−0.490−1.850.87
Test Value = 1.31
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
External Users required external guidance to complete the task.2.37940.0760.890−0.151.93
Test Value = 3.15
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Gameplay User tries to interact with every object.−3.63740.022−1.150−2.03−0.27
Test Value = 3.46
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Gameplay User is following the in-app instructions.6.70040.0031.3400.781.90
Test Value = 2.46
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Gameplay User tried varied poses to interact with the objects.−3.51140.025−0.860−1.54−0.18
Test Value = 3
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 1 Errors in Grab Interaction−0.66740.541−0.400−2.071.27
Test Value = 9.77
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 1 Errors in Grab and Move Interaction−4.54840.010−4.770−7.68−1.86
Test Value = 7.85
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 1 Errors is Poke Interaction−16.16940.000−6.050−7.09−5.01
Test Value = 5.08
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 1 Errors in Grab and Use Sword−10.94140.000−2.680−3.36−2.00
Test Value = 1.08
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 2 Errors in Distance Grab Interaction4.70440.0093.1201.284.96
Test Value = 1.31
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 2 Errors in Poke Interaction on Alphabet Card−0.98040.382−0.310−1.190.57
Test Value = 1.92
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 2 Errors in Pinch and Move Interaction to Write in Air−0.47240.662−0.320−2.201.56
Test Value = 2.46
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 2 Errors in Grab, Move and Use Interaction to Write on Board0.27540.7970.140−1.281.56
Test Value = 1.92
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
Level 2 Errors in Poke Interaction on Urdu Keyboard−0.80040.469−0.320−1.430.79
Test Value = 1.15
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
VR Sickness I feel discomfort.1.83740.1400.450−0.231.13
Test Value = 1.15
tdfSig. (2-tailed)Mean Difference95% Confidence Interval of the Difference
LowerUpper
VR Sickness I feel fatigued.1.83740.1400.450−0.231.13

Appendix H

Table A8. Results of the non-parametric Kruskal–Wallis H-test with years of technological experience as the grouping variable.
Measure | Chi-Square | df | Asymp. Sig.
Total Interactions in Level 1 | 0.048 | 2 | 0.976
Level 1 1st Task (Object Interaction—Grab Interaction) Completion Time | 2.091 | 2 | 0.351
Level 1 2nd Task (Map—Grab & Move Interaction) Completion Time | 4.960 | 2 | 0.084
Level 1 3rd Task (Change Music/TV Ch—Poke Interaction) Completion Time | 1.636 | 2 | 0.441
Level 1 4th Task (Put Jewelry—Grab & Move + Grab & Place Interaction) Completion Time | 1.729 | 2 | 0.421
Level 1 5th Task (Sword & Cut—Grab & Use Interaction) Completion Time | 1.401 | 2 | 0.496
Level 1 6th Task (Pistol & Shoot—Grab & Move + Grab & Use Interaction) Completion Time | 0.273 | 2 | 0.873
Total Interactions in Level 2 | 1.132 | 2 | 0.568
Level 2 1st Task (Alphabet Cards—Distance Grab + Pinch Interaction) Completion Time | 0.021 | 2 | 0.990
Level 2 2nd Task (Alphabet Cards—Two-handed, Distance Grab + Pinch + Poke Interaction) Completion Time | 0.491 | 2 | 0.782
Level 2 3rd Task Air Writing (Pinch + Move Interaction) | 2.106 | 2 | 0.349
Level 2 3rd Task Board Writing (Grab or Pinch + Move Interaction) | 2.106 | 2 | 0.349
Level 2 3rd Task Typewriter (Poke Interaction) | 2.106 | 2 | 0.349
External: User is confident while interacting with the VR. | 0.563 | 2 | 0.755
External: User required external guidance to complete the task. | 1.953 | 2 | 0.377
Gameplay: User tries to interact with every object. | 2.025 | 2 | 0.363
Gameplay: User is following the in-app instructions. | 4.000 | 2 | 0.135
Gameplay: User tried varied poses to interact with the objects. | 1.500 | 2 | 0.472
Level 1 Errors in Grab Interaction | 3.309 | 2 | 0.191
Level 1 Errors in Grab and Move Interaction | 1.174 | 2 | 0.556
Level 1 Errors in Poke Interaction | 2.559 | 2 | 0.278
Level 1 Errors in Grab and Use Sword | 0.109 | 2 | 0.947
Level 1 Errors in Grab and Use Pistol | 1.969 | 2 | 0.374
Level 2 Errors in Distance Grab Interaction | 0.519 | 2 | 0.771
Level 2 Errors in Poke Interaction on Alphabet Card | 0.375 | 2 | 0.829
Level 2 Errors in Pinch and Move Interaction to Write in Air | 2.030 | 2 | 0.362
Level 2 Errors in Grab, Move and Use Interaction to Write on Board | 1.047 | 2 | 0.593
Level 2 Errors in Poke Interaction on Urdu Keyboard | 0.300 | 2 | 0.861
VR Sickness: I feel discomfort. | 0.563 | 2 | 0.755
VR Sickness: I feel fatigued. | 0.429 | 2 | 0.807
VR Sickness: User is in postural sway while standing. | 0.000 | 2 | 1.000

Notes

1. https://www.oculus.com/ (accessed on 22 September 2022).
2. https://www.vive.com (accessed on 22 September 2022).
3. https://www.unity.com (accessed on 22 September 2022).
4.
5. https://www.blender.org (accessed on 3 October 2022).
6. https://www.gimp.org (accessed on 3 October 2022).
7. https://www.audacityteam.org (accessed on 3 October 2022).
8. https://assetstore.unity.com (accessed on 3 October 2022).
9. https://quixel.com/megascans (accessed on 3 October 2022).
10. https://www.meta.com/quest/products/quest-2/ (accessed on 3 October 2022).

References

  1. UNESCO-UIS. Literacy. Available online: http://uis.unesco.org/en/topic/literacy (accessed on 7 February 2021).
  2. Lal, B.S. The Economic and Social Cost of Illiteracy: An Overview. Int. J. Adv. Res. Innov. Ideas Educ. 2015, 1, 663–670. [Google Scholar]
  3. Literate Pakistan Foundation. Aagahi Adult Literacy Programme, Pakistan. Available online: https://uil.unesco.org/case-study/effective-practices-database-litbase-0/aagahi-adult-literacy-programme-pakistan (accessed on 2 July 2021).
  4. UIL. National Literacy Programme, Pakistan. Available online: https://uil.unesco.org/case-study/effective-practices-database-litbase-0/national-literacy-programme-pakistan (accessed on 2 July 2021).
  5. Iqbal, T.; Hammermüller, K.; Nussbaumer, A.; Tjoa, A.M. Towards Using Second Life for Supporting Illiterate Persons in Learning. In Proceedings of the World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education 2009, Vancouver, BC, Canada, 26–30 October 2009; pp. 2704–2709. [Google Scholar]
  6. Iqbal, T.; Iqbal, S.; Hussain, S.S.; Khan, I.A.; Khan, H.U.; Rehman, A. Fighting Adult Illiteracy with the Help of the Environmental Print Material. PLoS ONE 2018, 13, e0201902. [Google Scholar] [CrossRef] [PubMed]
  7. Ur-Rehman, I.; Shamim, A.; Khan, T.A.; Elahi, M.; Mohsin, S. Mobile Based User-Centered Learning Environment for Adult Absolute Illiterates. Mob. Inf. Syst. 2016, 2016, 1841287. [Google Scholar] [CrossRef]
  8. Knowles, M. The Adult Learner: A Neglected Species, 3rd ed.; Gulf Publishing: Houston, TX, USA, 1984. [Google Scholar]
  9. Pereira, A.; Ortiz, K.Z. Language Skills Differences between Adults without Formal Education and Low Formal Education. Psicol. Reflexão Crítica 2022, 35, 4. [Google Scholar] [CrossRef] [PubMed]
  10. van Linden, S.; Cremers, A.H.M. Cognitive Abilities of Functionally Illiterate Persons Relevant to ICT Use. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2008; Volume 5105 LNCS, pp. 705–712. [Google Scholar] [CrossRef]
  11. Katre, D.S. Unorganized Cognitive Structures of Illiterate as the Key Factor in Rural E-Learning Design. J. Educ. 2006, 2, 67–71. [Google Scholar] [CrossRef]
  12. ISO (International Organization for Standardization). Ergonomics of Human-System Interaction—Part 11: Usability: Definitions and Concepts. Available online: https://www.iso.org/standard/63500.html (accessed on 18 December 2022).
  13. Bartneck, C.; Forlizzi, J. A Design-Centred Framework for Social Human-Robot Interaction. In Proceedings of the RO-MAN 2004, 13th IEEE International Workshop on Robot and Human Interactive Communication (IEEE Catalog No.04TH8759), Kurashiki, Japan, 22 September 2004; IEEE: New York, NY, USA, 2004; pp. 591–594. [Google Scholar] [CrossRef]
  14. Lalji, Z.; Good, J. Designing New Technologies for Illiterate Populations: A Study in Mobile Phone Interface Design. Interact. Comput. 2008, 20, 574–586. [Google Scholar] [CrossRef]
  15. Carvalho, M.B. Designing for Low-Literacy Users: A Framework for Analysis of User-Centred Design Methods. Master’s Thesis, Tampere University, Tampere, Finland, 2011. [Google Scholar]
  16. Taoufik, I.; Kabaili, H.; Kettani, D. Designing an E-Government Portal Accessible to Illiterate Citizens. In Proceedings of the 1st International Conference on Theory and Practice of Electronic Governance—ICEGOV’07, Macau, China, 10–13 December 2007; p. 327. [Google Scholar] [CrossRef]
  17. Friscira, E.; Knoche, H.; Huang, J. Getting in Touch with Text: Designing a Mobile Phone Application for Illiterate Users to Harness SMS. In Proceedings of the 2nd ACM Symposium on Computing for Development—ACM DEV’12, Atlanta, GA, USA, 11–12 March 2012; p. 1. [Google Scholar] [CrossRef]
  18. Huenerfauth, M.P. Design Approaches for Developing User-Interfaces Accessible to Illiterate Users. Master’s Thesis, University College Dublin, Dublin, Ireland, 2002. [Google Scholar]
  19. Rashid, S.; Khattak, A.; Ashiq, M.; Ur Rehman, S.; Rashid Rasool, M. Educational Landscape of Virtual Reality in Higher Education: Bibliometric Evidences of Publishing Patterns and Emerging Trends. Publications 2021, 9, 17. [Google Scholar] [CrossRef]
  20. Huettig, F.; Mishra, R.K. How Literacy Acquisition Affects the Illiterate Mind—A Critical Examination of Theories and Evidence. Lang. Linguist. Compass 2014, 8, 401–427. [Google Scholar] [CrossRef]
  21. Steuer, J. Defining Virtual Reality: Dimensions Determining Telepresence. J. Commun. 1992, 42, 73–93. [Google Scholar] [CrossRef]
  22. Slater, M. Immersion and the Illusion of Presence in Virtual Reality. Br. J. Psychol. 2018, 109, 431–433. [Google Scholar] [CrossRef]
  23. Zube, E.H. Environmental Perception. In Environmental Geology; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1999; pp. 214–216. [Google Scholar] [CrossRef]
  24. Ohno, R. A Hypothetical Model of Environmental Perception. In Theoretical Perspectives in Environment-Behavior Research; Springer: Boston, MA, USA, 2000; pp. 149–156. [Google Scholar] [CrossRef]
  25. Wilson, G.I.; Holton, M.D.; Walker, J.; Jones, M.W.; Grundy, E.; Davies, I.M.; Clarke, D.; Luckman, A.; Russill, N.; Wilson, V.; et al. A New Perspective on How Humans Assess Their Surroundings; Derivation of Head Orientation and Its Role in ‘Framing’ the Environment. PeerJ 2015, 3, e908. [Google Scholar] [CrossRef]
  26. Slater, M.; Sanchez-Vives, M.V. Enhancing Our Lives with Immersive Virtual Reality. Front. Robot. AI 2016, 3, 74. [Google Scholar] [CrossRef]
  27. Csikszentmihalyi, M. Flow: The Psychology of Optimal Experience; Harper & Row: New York, NY, USA, 2008. [Google Scholar]
  28. Csikszentmihalyi, M.; LeFevre, J. Optimal Experience in Work and Leisure. J. Pers. Soc. Psychol. 1989, 56, 815–822. [Google Scholar] [CrossRef] [PubMed]
  29. Alexiou, A.; Schippers, M.; Oshri, I. Positive Psychology and Digital Games: The Role of Emotions and Psychological Flow in Serious Games Development. Psychology 2012, 3, 1243–1247. [Google Scholar] [CrossRef]
  30. Ruvimova, A.; Kim, J.; Fritz, T.; Hancock, M.; Shepherd, D.C. “Transport Me Away”: Fostering Flow in Open Offices through Virtual Reality. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020. [Google Scholar] [CrossRef]
  31. Jennett, C.; Cox, A.L.; Cairns, P.; Dhoparee, S.; Epps, A.; Tijs, T.; Walton, A. Measuring and Defining the Experience of Immersion in Games. Int. J. Hum. Comput. Stud. 2008, 66, 641–661. [Google Scholar] [CrossRef]
  32. Kim, D.; Ko, Y.J. The Impact of Virtual Reality (VR) Technology on Sport Spectators’ Flow Experience and Satisfaction. Comput. Hum. Behav. 2019, 93, 346–356. [Google Scholar] [CrossRef]
  33. Kiili, K. Content Creation Challenges and Flow Experience in Educational Games: The IT-Emperor Case. Internet High. Educ. 2005, 8, 183–198. [Google Scholar] [CrossRef]
  34. de Regt, A.; Barnes, S.J.; Plangger, K. The Virtual Reality Value Chain. Bus. Horiz. 2020, 63, 737–748. [Google Scholar] [CrossRef]
  35. Bodzin, A.; Robson, A., Jr.; Hammond, T.; Anastasio, D. Investigating Engagement and Flow with a Placed-Based Immersive Virtual Reality Game. J. Sci. Educ. Technol. 2020, 30, 347–360. [Google Scholar] [CrossRef]
  36. Csikszentmihalyi, M. Flow and Education. In Applications of Flow in Human Development and Education; Springer: Dordrecht, The Netherlands, 2014; pp. 129–151. [Google Scholar] [CrossRef]
  37. Chang, E.; Kim, H.T.; Yoo, B. Virtual Reality Sickness: A Review of Causes and Measurements. Int. J. Hum. Comput. Interact. 2020, 36, 1658–1682. [Google Scholar] [CrossRef]
  38. Bown, J.; White, E.; Boopalan, A. Looking for the Ultimate Display. In Boundaries of Self and Reality Online; Elsevier: Amsterdam, The Netherlands, 2017; pp. 239–259. [Google Scholar] [CrossRef]
  39. Meta Store. Apps That Support Hand Tracking on Meta Quest Headsets. Available online: https://www.meta.com/help/quest/articles/headsets-and-accessories/controllers-and-hand-tracking/#hand-tracking (accessed on 12 December 2022).
  40. Seibert, J.; Shafer, D.M. Control Mapping in Virtual Reality: Effects on Spatial Presence and Controller Naturalness. Virtual Real. 2018, 22, 79–88. [Google Scholar] [CrossRef]
  41. Masurovsky, A.; Chojecki, P.; Runde, D.; Lafci, M.; Przewozny, D.; Gaebler, M. Controller-Free Hand Tracking for Grab-and-Place Tasks in Immersive Virtual Reality: Design Elements and Their Empirical Study. Multimodal Technol. Interact. 2020, 4, 91. [Google Scholar] [CrossRef]
  42. Benda, B.; Esmaeili, S.; Ragan, E.D. Determining Detection Thresholds for Fixed Positional Offsets for Virtual Hand Remapping in Virtual Reality. In Proceedings of the 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Porto de Galinhas, Brazil, 9–13 November 2020; IEEE: New York, NY, USA, 2020; pp. 269–278. [Google Scholar] [CrossRef]
  43. Wang, A.; Thompson, M.; Uz-Bilgin, C.; Klopfer, E. Authenticity, Interactivity, and Collaboration in Virtual Reality Games: Best Practices and Lessons Learned. Front. Virtual Real. 2021, 2, 734083. [Google Scholar] [CrossRef]
  44. Nanjappan, V.; Liang, H.N.; Lu, F.; Papangelis, K.; Yue, Y.; Man, K.L. User-Elicited Dual-Hand Interactions for Manipulating 3D Objects in Virtual Reality Environments. Hum.-Cent. Comput. Inf. Sci. 2018, 8, 31. [Google Scholar] [CrossRef]
  45. Hemmerich, W.; Keshavarz, B.; Hecht, H. Visually Induced Motion Sickness on the Horizon. Front. Virtual Real. 2020, 1, 582095. [Google Scholar] [CrossRef]
  46. Yardley, L. Orientation Perception, Motion Sickness and Vertigo: Beyond the Sensory Conflict Approach. Br. J. Audiol. 1991, 25, 405–413. [Google Scholar] [CrossRef]
  47. Kennedy, R.S.; Lane, N.E.; Berbaum, K.S.; Lilienthal, M.G. Simulator Sickness Questionnaire: An Enhanced Method for Quantifying Simulator Sickness. Int. J. Aviat. Psychol. 1993, 3, 203–220. [Google Scholar] [CrossRef]
  48. Kwon, C. Verification of the Possibility and Effectiveness of Experiential Learning Using HMD-Based Immersive VR Technologies. Virtual Real. 2019, 23, 101–118. [Google Scholar] [CrossRef]
  49. Ghosh, K.; Parikh, T.S.; Chavan, A.L. Design Considerations for a Financial Management System for Rural, Semi-Literate Users. In Proceedings of the CHI’03 Extended Abstracts on Human Factors in Computer Systems, Fort Lauderdale, FL, USA, 5–10 April 2003; p. 824. [Google Scholar] [CrossRef]
  50. Huenerfauth, M.P. Developing Design Recommendations for Computer Interfaces Accessible to Illiterate Users. Master’s Thesis, University College Dublin, Dublin, Ireland, 2002. [Google Scholar]
  51. Parikh, T.; Ghosh, K.; Chavan, A. Design Studies for a Financial Management System for Micro-Credit Groups in Rural India. ACM SIGCAPH Comput. Phys. Handicap. 2003, 73–74, 15–22. [Google Scholar] [CrossRef]
  52. Zaman, S.K.U.; Khan, I.A.; Hussain, S.S.; Iqbal, T.; Shuja, J.; Ahmed, S.F.; Jararweh, Y.; Ko, K. PreDiKT-OnOff: A Complex Adaptive Approach to Study the Impact of Digital Social Networks on Pakistani Students’ Personal and Social Life. Concurr. Comput. 2020, 32, e5121. [Google Scholar] [CrossRef]
  53. Medhi Thies, I. User Interface Design for Low-Literate and Novice Users: Past, Present and Future. Found. Trends Hum.–Comput. Interact. 2015, 8, 1–72. [Google Scholar] [CrossRef]
  54. Rasmussen, M.K.; Pedersen, E.W.; Petersen, M.G.; Hornbæk, K. Shape-Changing Interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Austin, TX, USA, 5–10 May 2012; ACM: New York, NY, USA, 2012; pp. 735–744. [Google Scholar] [CrossRef]
  55. Saba, T. Intelligent Game-Based Learning: An Effective Learning Model Approach. Int. J. Comput. Appl. Technol. 2020, 64, 208. [Google Scholar] [CrossRef]
  56. Wade, S.; Kidd, C. The Role of Prior Knowledge and Curiosity in Learning. Psychon. Bull. Rev. 2019, 26, 1377–1387. [Google Scholar] [CrossRef] [PubMed]
  57. Ali, W.; Riaz, O.; Mumtaz, S.; Khan, A.R.; Saba, T.; Bahaj, S.A. Mobile Application Usability Evaluation: A Study Based on Demography. IEEE Access 2022, 10, 41512–41524. [Google Scholar] [CrossRef]
  58. El-Dakhs, D.A.S.; Altarriba, J. How Do Emotion Word Type and Valence Influence Language Processing? The Case of Arabic–English Bilinguals. J. Psycholinguist. Res. 2019, 48, 1063–1085. [Google Scholar] [CrossRef] [PubMed]
  59. Tyng, C.M.; Amin, H.U.; Saad, M.N.M.; Malik, A.S. The Influences of Emotion on Learning and Memory. Front. Psychol. 2017, 8, 1454. [Google Scholar] [CrossRef] [PubMed]
  60. Kern, A.C.; Ellermeier, W. Audio in VR: Effects of a Soundscape and Movement-Triggered Step Sounds on Presence. Front. Robot. AI 2020, 7, 20. [Google Scholar] [CrossRef] [PubMed]
  61. Stockton, B.A.G. How Color Coding Formulaic Writing Enhances Organization: A Qualitative Approach for Measuring Student Affect. Master’s Thesis, Humphreys College, Stockton, CA, USA, 2014. [Google Scholar]
  62. Itaguchi, Y.; Yamada, C.; Yoshihara, M.; Fukuzawa, K. Writing in the Air: A Visualization Tool for Written Languages. PLoS ONE 2017, 12, e0178735. [Google Scholar] [CrossRef]
  63. Brooke, J. SUS: A “Quick and Dirty” Usability Scale, 1st ed.; Taylor & Francis: Abingdon, UK, 1996. [Google Scholar]
  64. Kamińska, D.; Zwoliński, G.; Laska-Leśniewicz, A. Usability Testing of Virtual Reality Applications—The Pilot Study. Sensors 2022, 22, 1342. [Google Scholar] [CrossRef]
  65. Khundam, C.; Vorachart, V.; Preeyawongsakul, P.; Hosap, W.; Noël, F. A Comparative Study of Interaction Time and Usability of Using Controllers and Hand Tracking in Virtual Reality Training. Informatics 2021, 8, 60. [Google Scholar] [CrossRef]
Figure 1. Implementation elements influencing VR Characteristics.
Figure 2. Left: HTC Vive; middle: Oculus Rift; right: Oculus Quest 2.
Figure 3. Level Design Process.
Figure 4. Gameplay images of Level One, designed to test and evaluate several possible interactions using motion controllers and hand tracking. Images (a–c) show the user interacting with the objects using motion controllers, while (d–f) show interaction with hands.
Figure 5. Gameplay images of Level Two, designed as the application of the interaction types in Level One; a complex interaction type known as distance grab is also used. Images (a–c) show the user interacting with the objects using motion controllers, while (d–f) show interaction with hands.
Table 1. Implementation scheme of interaction modalities and interaction types.

| Interaction Modality | Grab | Pinch | Poke | Move | Haptics |
| --- | --- | --- | --- | --- | --- |
| Motion Controllers | Press the grip button. | Press the trigger button. | Press the grip button and point the index finger. | The controllers track translation and rotation. | Feedback on interaction |
| Hand Tracking | Make a grip. | Join the index finger and thumb. | Make a grip and point the index finger. | Translation and rotation are tracked by the HMD. | None |
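As a concrete illustration of the hand-tracking row in Table 1, the sketch below shows one common, engine-agnostic way a pinch or grab pose can be detected from tracked joint positions: fingertip-to-thumb or fingertip-to-palm distances are compared against small thresholds. The joint inputs, thresholds, and function names are illustrative assumptions, not the prototype's actual implementation.

```python
# Illustrative sketch only: detecting the "pinch" and "grab" hand poses of
# Table 1 from tracked 3D joint positions. Joint values and the distance
# thresholds are assumptions, not the authors' code.
import math

def distance(a, b):
    """Euclidean distance between two 3D points given as (x, y, z) tuples."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def is_pinching(index_tip, thumb_tip, threshold_m=0.02):
    """Pinch: index fingertip and thumb tip closer than roughly 2 cm."""
    return distance(index_tip, thumb_tip) < threshold_m

def is_grabbing(finger_tips, palm_center, threshold_m=0.05):
    """Grab: all fingertips curled to within roughly 5 cm of the palm centre."""
    return all(distance(tip, palm_center) < threshold_m for tip in finger_tips)

# Example with made-up joint positions (metres):
print(is_pinching((0.01, 0.0, 0.0), (0.02, 0.0, 0.0)))        # True
print(is_grabbing([(0.02, 0.0, 0.0)] * 4, (0.0, 0.0, 0.0)))   # True
```

In practice, a VR runtime's hand-tracking API usually exposes such pose flags directly; the point of the sketch is only to make the gesture definitions in Table 1 concrete.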
Table 2. Level One tasks and interactions mapping.

| # | Task | Interactions |
| --- | --- | --- |
| 1 | Grab the objects, 2 are required for the next task. | Grab and/or pinch |
| 2 | Grab the blue object on the map and move it to the appropriate province. | Grab and move (translate) |
| 3 | Press the buttons on the left of the table to change the music and/or on the right to change the TV channels. | Poke |
| 4 | Open the black drawer of the small cupboard and put the bracelet inside. | Grab and move (translate); grab and place |
| 5 | Grab the sword on the pedestal and cut the hay sticks. | Grab, move (translate and rotate), and use |
| 6 | Open the red box in the big cupboard and grab the pistol inside. Shoot the target. | Grab and move (rotate); grab, move, and use |
Table 3. Level Two tasks and interactions mapping.

| # | Task | Interactions |
| --- | --- | --- |
| 1 | Grab the alphabet card from the alphabet board by pointing your palm toward the board and pinching or grabbing it. | Distance grab and pinch |
| 2 | Press the button on the alphabet card to watch an informative video about the alphabet (Task 1 must be completed for this to work). | Poke |
| 3 | Use all three practice modes for writing the alphabets: air-writing, board writing, and the Urdu keyboard. | Pinch and move; grab and move; poke |
Table 4. SUS questions.

| Id | Question (English) | Question (Urdu) |
| --- | --- | --- |
| 1 | I think that I would like to use this system frequently. | مجھے لگتا ہے کہ میں اس سسٹم کو کثرت سے استعمال کرنا چاہوں گا۔ |
| 2 | I found the system unnecessarily complex. | میں نے سسٹم کو غیر ضروری طور پر پیچیدہ پایا۔ |
| 3 | I thought the system was easy to use. | میں نے سوچا تھا کہ سسٹم استعمال کرنا آسان ہوگا۔ |
| 4 | I think that I would need the support of a technical person to be able to use this system. | میں سمجھتا ہوں کہ اس سسٹم کو استعمال کرنے کے لیے مجھے کسی تکنیکی شخص کی مدد کی ضرورت ہوگی۔ |
| 5 | I found the various functions in this system were well integrated. | میں نے پایا کہ اس سسٹم میں مختلف طریقے اچھی طرح سے ضم تھے۔ |
| 6 | I thought there was too much inconsistency in this system. | میں نے سوچا کہ اس سسٹم میں بہت زیادہ عدم مطابقت ہے۔ |
| 7 | I would imagine that most people would learn to use this system very quickly. | میں تصور کرتا ہوں کہ زیادہ تر لوگ اس نظام کو بہت جلد استعمال کرنا سیکھ لیں گے۔ |
| 8 | I found the system very cumbersome to use. | میں نے سسٹم کو استعمال کرنے میں اپنے آپ کو بہت بوجھل پایا۔ |
| 9 | I felt very confident using the system. | میں نے سسٹم کا استعمال کرتے ہوئے بہت پر اعتماد محسوس کیا۔ |
| 10 | I needed to learn a lot of things before I could get going with this system. | اس سسٹم استعمال کرنے سے پہلے مجھے بہت سی چیزیں سیکھنے کی ضرورت تھی۔ |
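For reference, responses to the ten items in Table 4 are conventionally converted to the 0–100 SUS score by rescaling each 1–5 rating (odd items contribute rating minus 1, even items contribute 5 minus rating) and multiplying the 0–40 sum by 2.5. The minimal Python sketch below implements this standard scoring (Brooke, 1996); the sample responses are made up for illustration.

```python
# Minimal sketch of standard SUS scoring for the ten items in Table 4.
# Responses are 1-5 Likert ratings; odd items are positively worded,
# even items negatively worded.
def sus_score(responses):
    """responses: list of ten 1-5 ratings, in the order of Table 4."""
    if len(responses) != 10:
        raise ValueError("SUS needs exactly ten item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)  # odd: r - 1, even: 5 - r
    return total * 2.5  # scale the 0-40 sum to a 0-100 score

print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 5, 1]))  # 95.0
```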
Table 5. Variables and their measures.

| Variable | Id | Level 1 Measure | Level 2 Measure |
| --- | --- | --- | --- |
| Effectiveness | 1 | Total Interaction in Level 1 | The user is confident while interacting in VR. |
| Effectiveness | 2 | The user was confident while interacting in VR. | The user required external help. |
| Effectiveness | 3 | The user required external help. | The user followed in-app instructions. |
| Effectiveness | 4 | The user followed in-app instructions. | The user tried varied poses for interaction. |
| Effectiveness | 5 | The user tried varied poses for interaction. | The user tried to interact with every object. |
| Effectiveness | 6 | The user tried to interact with every object. | Total Interactions in Level 2 |
| Efficiency | 1 | 1st Task Completion Time | 1st Task Completion Time |
| Efficiency | 2 | 2nd Task Completion Time | 2nd Task Completion Time |
| Efficiency | 3 | 3rd Task Completion Time | 3rd-a Task Completion Time |
| Efficiency | 4 | 4th Task Completion Time | 3rd-b Task Completion Time |
| Efficiency | 5 | 5th Task Completion Time | 3rd-c Task Completion Time |
| Efficiency | 6 | 6th Task Completion Time | Errors in Distance Grab Interaction |
| Efficiency | 7 | Errors in Grab Interaction | Errors in Poke Interaction |
| Efficiency | 8 | Errors in Grab and Move Interaction | Errors in Pinch and Move Interaction |
| Efficiency | 9 | Errors in Grab and Use Sword | Errors in Grab, Move, and Use Interaction |
| Efficiency | 10 | Errors in Grab and Use Pistol | Errors in Poke and Use Interaction |
| VR Sickness | 1 | Discomfort reported. | Discomfort reported. |
| VR Sickness | 2 | Fatigue reported. | Fatigue reported. |
| VR Sickness | 3 | Postural sway observed. | Postural sway observed. |
Table 6. Results of Exploratory Factor Analysis—Level One.

| Factor | Component | Factor Loading | Communality | Eigenvalue | Explained Variance % | Cumulative Variance % |
| --- | --- | --- | --- | --- | --- | --- |
| Effectiveness | Effectiveness 1 | 0.530 | 0.331 | 2.994 | 37.422 | 37.422 |
| Effectiveness | Effectiveness 2 | −0.773 | 0.796 | 1.836 | 22.951 | 60.373 |
| Effectiveness | Effectiveness 3 | 0.880 | 0.778 | 0.943 | 11.791 | 72.164 |
| Effectiveness | Effectiveness 4 | −0.691 | 0.651 | 0.770 | 9.622 | 81.786 |
| Effectiveness | Effectiveness 5 | 0.634 | 0.408 | 0.600 | 7.495 | 89.282 |
| Effectiveness | Effectiveness 6 | −0.724 | 0.671 | 0.193 | 2.418 | 100.000 |
| Efficiency | Efficiency 1 | 0.855 | 0.765 | 2.856 | 25.964 | 25.964 |
| Efficiency | Efficiency 2 | 0.707 | 0.681 | 2.274 | 20.673 | 46.637 |
| Efficiency | Efficiency 3 | 0.581 | 0.709 | 1.273 | 11.572 | 58.209 |
| Efficiency | Efficiency 4 | 0.797 | 0.671 | 1.091 | 9.918 | 68.127 |
| Efficiency | Efficiency 5 | 0.808 | 0.738 | 0.908 | 8.258 | 76.386 |
| Efficiency | Efficiency 6 | 0.758 | 0.678 | 0.671 | 6.104 | 82.490 |
| Efficiency | Efficiency 7 | 0.735 | 0.634 | 0.624 | 5.675 | 88.165 |
| Efficiency | Efficiency 8 | 0.760 | 0.685 | 0.426 | 3.870 | 92.036 |
| Efficiency | Efficiency 9 | 0.767 | 0.775 | 0.378 | 3.433 | 95.469 |
| Efficiency | Efficiency 10 | 0.823 | 0.733 | 0.305 | 2.773 | 98.242 |
| VR Sickness | VR Sickness 1 | 0.926 | 0.877 | 2.517 | 27.964 | 27.964 |
| VR Sickness | VR Sickness 2 | 0.848 | 0.770 | 1.883 | 20.924 | 48.888 |
| VR Sickness | VR Sickness 3 | 0.644 | 0.641 | 1.224 | 13.604 | 62.492 |

Used Principal Component Analysis as an extraction method. Used Varimax as a rotation method, KMO = 0.610, p = 0.003.
Table 7. Results of Exploratory Factor Analysis—Level Two.

| Factor | Component | Factor Loading | Communality | Eigenvalue | Explained Variance % | Cumulative Variance % |
| --- | --- | --- | --- | --- | --- | --- |
| Effectiveness | Effectiveness 1 | −0.749 | 0.812 | 2.880 | 35.997 | 35.997 |
| Effectiveness | Effectiveness 2 | 0.881 | 0.788 | 1.821 | 22.761 | 58.758 |
| Effectiveness | Effectiveness 3 | 0.697 | 0.766 | 1.075 | 13.440 | 72.199 |
| Effectiveness | Effectiveness 4 | −0.601 | 0.377 | 0.843 | 10.543 | 82.742 |
| Effectiveness | Effectiveness 5 | −0.671 | 0.701 | 0.268 | 3.348 | 97.574 |
| Effectiveness | Effectiveness 6 | 0.947 | 0.908 | 0.194 | 2.426 | 100.000 |
| Efficiency | Efficiency 1 | 0.820 | 0.766 | 3.614 | 36.139 | 36.139 |
| Efficiency | Efficiency 2 | 0.904 | 0.834 | 1.859 | 18.591 | 54.730 |
| Efficiency | Efficiency 3 | 0.985 | 0.992 | 1.527 | 15.268 | 69.997 |
| Efficiency | Efficiency 4 | 0.985 | 0.992 | 1.046 | 10.464 | 80.461 |
| Efficiency | Efficiency 5 | 0.985 | 0.992 | 0.896 | 8.958 | 89.419 |
| Efficiency | Efficiency 6 | 0.850 | 0.755 | 0.716 | 7.156 | 96.575 |
| Efficiency | Efficiency 7 | 0.512 | 0.387 | 0.199 | 1.992 | 98.567 |
| Efficiency | Efficiency 8 | 0.911 | 0.866 | 0.143 | 1.433 | 100.000 |
| Efficiency | Efficiency 9 | 0.908 | 0.854 | 1.480 × 10−6 | 1.480 × 10−5 | 100.000 |
| Efficiency | Efficiency 10 | 0.696 | 0.607 | 7.838 × 10−7 | 7.838 × 10−6 | 100.000 |
| VR Sickness | VR Sickness 1 | 0.926 | 0.877 | 2.517 | 27.964 | 27.964 |
| VR Sickness | VR Sickness 2 | 0.848 | 0.770 | 1.883 | 20.924 | 48.888 |
| VR Sickness | VR Sickness 3 | 0.644 | 0.641 | 1.224 | 13.604 | 62.492 |

Used Principal Component Analysis as an extraction method. Used Varimax as a rotation method, KMO = 0.605, p = 0.003.
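The kind of output reported in Tables 6 and 7 (loadings, communalities, eigenvalues, KMO) can in principle be reproduced with an open-source workflow such as the one sketched below, which assumes the factor_analyzer Python package. The file name, column layout, and number of retained factors are hypothetical; the study's own tooling is not specified here.

```python
# Hedged sketch of an exploratory factor analysis in the style of Tables 6-7,
# using the factor_analyzer package; the CSV and column names are assumptions.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

data = pd.read_csv("level1_measures.csv")   # hypothetical file: one column per measure

kmo_per_item, kmo_total = calculate_kmo(data)            # sampling adequacy (cf. KMO = 0.610)
fa = FactorAnalyzer(n_factors=3, rotation="varimax", method="principal")
fa.fit(data)

loadings = pd.DataFrame(fa.loadings_, index=data.columns)  # factor loadings per measure
communalities = fa.get_communalities()                      # communality per measure
eigenvalues, _ = fa.get_eigenvalues()                        # eigenvalues for variance explained
print(kmo_total)
print(loadings.round(3))
```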
Table 8. Information about the participants.

Participant demography:
- Area: Abbottabad, KPK, Pakistan
- Number: 30 (23 male, 7 female)
- Age range: 21–55
- Occupation: Teaching/Training 17 (11 Tech-Literate, 6 Non-Tech-Literate); Labour/Worker 4 (all Nonliterate); Technical/Operational 3 (1 Tech-Literate, 2 Nonliterate); Supervisory/Managerial 3 (1 Non-Tech-Literate, 2 Nonliterate); Unemployed 3 (all Nonliterate)
- Technology use: Smartphone/computer 19 (12 Tech-Literate, 7 Non-Tech-Literate); only smartphone 8 (1 Non-Tech-Literate, 7 Nonliterate); only regular phone 3 (all Nonliterate)

Literacy statistics:
- No education: 7
- Less than 2 years: 3
- Higher education: 20

Interaction modalities used:
- Motion controllers: 12 (3 Tech-Literate, 4 Non-Tech-Literate, 5 Nonliterate)
- Hands: 18 (9 Tech-Literate, 4 Non-Tech-Literate, 5 Nonliterate)
Table 9. ANOVA Analysis Results—user type as the independent variable.

| Measure | p-Value | Class | N | Mean | Std. Deviation |
| --- | --- | --- | --- | --- | --- |
| Level 1 2nd Task Completion Time | 0.021 | Tech-Literate | 12 | 25.58 | 27.158 |
| | | Non-Tech-Literate | 8 | 54.25 | 36.850 |
| | | Nonliterate | 10 | 58.80 | 20.741 |
| Level 1 4th Task Completion Time | 0.018 | Tech-Literate | 12 | 21.50 | 11.302 |
| | | Non-Tech-Literate | 8 | 23.38 | 11.057 |
| | | Nonliterate | 10 | 38.60 | 17.977 |
| Level 1 6th Task Completion Time | 0.022 | Tech-Literate | 12 | 53.83 | 21.294 |
| | | Non-Tech-Literate | 8 | 32.25 | 10.553 |
| | | Nonliterate | 10 | 53.10 | 16.231 |
| Level 2 2nd Task Completion Time | 0.019 | Tech-Literate | 12 | 39.22 | 22.222 |
| | | Non-Tech-Literate | 8 | 36.22 | 13.764 |
| | | Nonliterate | 10 | 69.21 | 37.228 |
| Errors in distance grab | 0.018 | Tech-Literate | 12 | 1.00 | 1.954 |
| | | Non-Tech-Literate | 8 | 1.75 | 2.435 |
| | | Nonliterate | 10 | 4.00 | 2.667 |
| The user required external help | 0.000 | Tech-Literate | 12 | 1.33 | 0.778 |
| | | Non-Tech-Literate | 8 | 1.13 | 0.354 |
| | | Nonliterate | 10 | 2.30 | 0.483 |
| The user tried to interact with every object | 0.004 | Tech-Literate | 12 | 3.08 | 1.165 |
| | | Non-Tech-Literate | 8 | 2.88 | 1.458 |
| | | Nonliterate | 10 | 1.40 | 0.699 |
| The user followed in-app instructions | 0.011 | Tech-Literate | 12 | 3.58 | 0.900 |
| | | Non-Tech-Literate | 8 | 4.00 | 1.195 |
| | | Nonliterate | 10 | 4.80 | 0.422 |
Table 10. ANOVA Analysis Results—interaction modality as the independent variable.

| Measure | p-Value | Class | N | Mean | Std. Deviation |
| --- | --- | --- | --- | --- | --- |
| Level 1 6th Task Completion Time | 0.003 | Controllers | 12 | 35.75 | 10.931 |
| | | Hands | 18 | 55.89 | 19.638 |
| Errors in grab and use sword | 0.021 | Controllers | 12 | 2.25 | 1.603 |
| | | Hands | 18 | 5.28 | 4.056 |
| Errors in grab and use pistol | 0.001 | Controllers | 12 | 2.42 | 1.165 |
| | | Hands | 18 | 8.72 | 5.959 |
| Errors in grab, move, and use a marker | 0.037 | Controllers | 12 | 1.17 | 1.403 |
| | | Hands | 18 | 2.78 | 2.264 |
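For readers who want to replicate analyses like those in Tables 9 and 10, a one-way ANOVA can be run with SciPy as sketched below. The group arrays are placeholder values chosen only to mirror the table structure; they are not the study's raw data.

```python
# Minimal one-way ANOVA sketch in the style of Tables 9-10, using SciPy.
from scipy import stats

# Task completion times (seconds) grouped by user type -- illustrative values only.
tech_literate = [20, 25, 30, 22, 28]
non_tech_literate = [50, 55, 60, 52, 54]
nonliterate = [58, 60, 57, 61, 59]

f_stat, p_value = stats.f_oneway(tech_literate, non_tech_literate, nonliterate)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")  # p < 0.05 suggests the group means differ
```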
Table 11. Multivariate Analysis of Statistically Significant Variables.

| Dependent Variable | Use of New Technology | User Type | Interaction Modality | Mean |
| --- | --- | --- | --- | --- |
| Level 1 2nd Task Completion Time (Grab & Move Interaction) | No | Nonliterate | Controllers | 78.000 |
| | | | Hands | 87.500 |
| | Yes | Nonliterate | Controllers | 48.000 |
| | | | Hands | 47.667 |
| | | Non-Tech-Literate | Controllers | 23.500 |
| | | | Hands | 85.000 |
| | | Tech-Literate | Controllers | 20.667 |
| | | | Hands | 27.222 |
| Level 2 2nd Task Completion Time (Distance Grab, Pinch & Poke Interaction) | No | Nonliterate | Controllers | 78.600 |
| | | | Hands | 117.800 |
| | Yes | Nonliterate | Controllers | 52.350 |
| | | | Hands | 56.167 |
| | | Non-Tech-Literate | Controllers | 33.100 |
| | | | Hands | 39.350 |
| | | Tech-Literate | Controllers | 39.700 |
| | | | Hands | 39.056 |
| Errors in Distance Grab Interaction | No | Nonliterate | Controllers | 5.000 |
| | | | Hands | 7.500 |
| | Yes | Nonliterate | Controllers | 1.750 |
| | | | Hands | 4.333 |
| | | Non-Tech-Literate | Controllers | 2.250 |
| | | | Hands | 1.250 |
| | | Tech-Literate | Controllers | 1.000 |
| | | | Hands | 1.000 |
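Table 11 breaks each dependent variable down by three between-subject factors (use of new technology, user type, and interaction modality). A hedged sketch of how such a factorial breakdown could be examined with statsmodels is given below; the data frame, column names, and model formula are illustrative assumptions rather than the analysis actually reported.

```python
# Illustrative factorial-ANOVA sketch for a Table 11-style breakdown using
# statsmodels; the CSV and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical columns: time (seconds), new_tech (Yes/No), user_type, modality.
df = pd.read_csv("level1_task2_times.csv")

model = ols("time ~ C(new_tech) + C(user_type) + C(modality)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)  # Type II sums of squares
print(anova_table)

# Cell means corresponding to the rows of Table 11:
print(df.groupby(["new_tech", "user_type", "modality"])["time"].mean())
```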
Table 12. Test values.

| Variable | Measure | Average (Controllers) | Average (Hands) |
| --- | --- | --- | --- |
| Effectiveness | Total Interaction in Level 1 | 24.14 | 30.00 |
| | Total Interactions in Level 2 | 9.00 | 8.85 |
| | The user was confident while interacting in VR. | 5.00 | 4.69 |
| | The user required external help. | 1.14 | 1.31 |
| | The user followed in-app instructions. | 4.29 | 3.46 |
| | The user tried varied poses for interaction. | 1.57 | 2.46 |
| | The user tried to interact with every object. | 2.71 | 3.15 |
| Efficiency | Level 1 1st Task Completion Time | 40.00 | 48.00 |
| | Level 1 2nd Task Completion Time | 22.29 | 45.00 |
| | Level 1 3rd Task Completion Time | 27.14 | 31.38 |
| | Level 1 4th Task Completion Time | 16.86 | 25.15 |
| | Level 1 5th Task Completion Time | 20.00 | 22.31 |
| | Level 1 6th Task Completion Time | 29.86 | 53.46 |
| | Level 2 1st Task Completion Time | 38.54 | 37.40 |
| | Level 2 2nd Task Completion Time | 35.93 | 39.15 |
| | Level 2 3rd-a Task Completion Time | 72.81 | 61.89 |
| | Level 2 3rd-b Task Completion Time | 91.01 | 77.36 |
| | Level 2 3rd-c Task Completion Time | 109.19 | 92.83 |
| | Level 1 Errors in Grab Interaction | 1.29 | 3.00 |
| | Level 1 Errors in Grab and Move Interaction | 4.71 | 9.77 |
| | Level 1 Errors in Poke Interaction | 4.00 | 7.85 |
| | Level 1 Errors in Grab and Use Sword | 1.71 | 5.08 |
| | Level 1 Errors in Grab and Use Pistol | 2.14 | 9.54 |
| | Level 2 Errors in Distance Grab Interaction | 1.71 | 1.08 |
| | Level 2 Errors in Poke Interaction | 0.86 | 1.31 |
| | Level 2 Errors in Pinch and Move Interaction | 0.71 | 1.92 |
| | Level 2 Errors in Grab, Move, and Use Interaction | 0.14 | 2.46 |
| | Level 2 Errors in Poke and Use Interaction | 1.86 | 1.92 |
| VR Sickness | Discomfort reported. | 1.29 | 1.15 |
| | Fatigue reported. | 1.29 | 1.15 |
| | Postural sway observed. | 1.00 | 1.08 |
Table 13. Information about the participants.

Participant demography:
- Area: Abbottabad, KPK, Pakistan
- Number: 10 (5 males, 5 females)
- Age range: 19–50
- Occupation: Labour/Worker 5; Technical/Operational 1; Supervisory/Managerial 1; Unemployed 3
- Technology use: Smartphone/computer 5; only smartphone 4; only regular phone 1
- Technological experience: more than 5 years 4; 1 to 5 years 4; less than 1 year 2

Literacy statistics:
- No education: 8
- Less than 2 years: 2

Interaction modalities used:
- 5 participants used motion controllers.
- 5 participants used their hands.
Table 14. Significant measures of T-tests with controllers as the interaction modality.

| # | Variable | Test Value | t | df | Sig. (2-tailed) | Mean Difference |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | Level 1 2nd Task (Map—Grab & Move Interaction) Completion Time | 22.29 | 4.536 | 4 | 0.011 | 21.110 |
| 2 | Level 1 4th Task (Put Jewellery—Grab & Move + Grab & Place Interaction) Completion Time | 16.86 | 3.611 | 4 | 0.023 | 14.740 |
| 3 | Level 1 5th Task (Sword & Cut—Grab & Use Interaction) Completion Time | 20 | 3.162 | 4 | 0.034 | 4.000 |
| 4 | Level 2 1st Task (Alphabet Cards—Distance Grab + Pinch Interaction) Completion Time | 38.54 | −7.207 | 4 | 0.002 | −16.340 |
| 5 | Gameplay: the user tries to interact with every object. | 2.71 | −5.348 | 4 | 0.006 | −1.310 |
| 6 | Level 1 Errors in Grab and Use Sword | 1.71 | −5.348 | 4 | 0.006 | −1.310 |
Table 15. Significant measures of T-tests with hands as the interaction modality.

| # | Variable | Test Value | t | df | Sig. (2-tailed) | Mean Difference |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | Level 1 4th Task (Put Jewellery—Grab & Move + Grab & Place Interaction) Completion Time | 25.15 | 6.744 | 4 | 0.003 | 13.250 |
| 2 | Level 1 5th Task (Sword & Cut—Grab & Use Interaction) Completion Time | 22.31 | 3.686 | 4 | 0.021 | 12.090 |
| 3 | Level 2 3rd Task Air Writing (Pinch + Move Interaction) | 61.89 | 4.860 | 4 | 0.008 | 18.870 |
| 4 | Level 2 3rd Task Board Writing (Grab or Pinch + Move Interaction) | 77.36 | 4.839 | 4 | 0.008 | 23.560 |
| 5 | Level 2 3rd Task Typewriter (Poke Interaction) | 92.83 | 4.849 | 4 | 0.008 | 28.290 |
| 6 | Gameplay: the user tries to interact with every object. | 3.15 | −3.637 | 4 | 0.022 | −1.150 |
| 7 | Gameplay: the user follows the in-app instructions. | 3.46 | 6.700 | 4 | 0.003 | 1.340 |
| 8 | Gameplay: the user tried varied poses to interact with the objects. | 2.46 | −3.511 | 4 | 0.025 | −0.860 |
| 9 | Level 1 Errors in Grab and Move Interaction | 9.77 | −4.548 | 4 | 0.010 | −4.770 |
| 10 | Level 1 Errors in Poke Interaction | 7.85 | −16.169 | 4 | 0.000 | −6.050 |
| 11 | Level 1 Errors in Grab and Use Sword | 5.08 | −10.941 | 4 | 0.000 | −2.680 |
| 12 | Level 2 Errors in Distance Grab Interaction | 1.08 | 4.704 | 4 | 0.009 | 3.120 |
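The one-sample T-tests in Tables 14 and 15 compare the five nonliterate participants of the second study (hence df = 4) against the corresponding first-study averages from Table 12, used as test values. A minimal SciPy sketch of this comparison follows; the sample values are placeholders, not participant data.

```python
# Minimal one-sample t-test sketch in the style of Tables 14-15, using SciPy.
from scipy import stats

completion_times = [40, 46, 42, 48, 41]   # hypothetical Level 1 2nd-task times (n = 5, df = 4)
test_value = 22.29                        # first-study controller average from Table 12

t_stat, p_value = stats.ttest_1samp(completion_times, popmean=test_value)
print(f"t = {t_stat:.3f}, df = {len(completion_times) - 1}, p = {p_value:.3f}")
```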