Abstract
The aim of the study is twofold: to assess the usability of a virtual reality (VR) interaction designed for nonliterate users in accordance with ISO Standard 9241-11 and to compare the feasibility of two interaction modalities (motion controllers and real hands) while considering the impact of VR sickness. To accomplish these goals, two levels were designed for a VR prototype application. The System Usability Scale (SUS) was used for self-reported satisfaction, while effectiveness and efficiency were measured from observations and logged data. These measures were analyzed using exploratory factor analysis, and those with high factor loadings were selected. Two studies were conducted. The first study investigated the effects of three independent variables on the interaction performance of a VR system: “User Type,” “Interaction Modality,” and “Use of New Technology.” The SUS results suggest that all the participants were satisfied with the application. The results of one-way ANOVA tests showed no significant differences in the use of the VR application among the three selected user types overall. However, some measures, such as task completion time in level one, showed significant differences between user types, suggesting that nonliterate users had difficulty with the grab-and-move interaction. The results of a multivariate analysis using the statistically significant variables from both ANOVA tests are also reported to verify the effect of modern technology use on interactivity. The second study evaluated the interaction performance of nonliterate adults in a VR application using two independent variables: “Interaction Modality” and “Years of Technological Experience.” The results showed a high level of satisfaction with the VR application, with an average satisfaction score of 90.75. One-sample t-tests indicated that the nonliterate users had difficulty using their hands as the interaction modality, and the study revealed that nonliterates may struggle with the poses and gestures required for hand interaction. The results suggest that, until hand-tracking technology advances, controllers may be easier for nonliterate adults to use than their hands. These findings underline the importance of designing VR applications that are usable and accessible for nonliterate adults and can serve as guidelines for creating VR learning experiences for this population.
1. Introduction
The ability to read and write is vital in today’s world, yet a global literacy crisis affects many nations, with 771 million adults lacking basic literacy skills [1]. Illiteracy contributes to various problems, such as poverty and unsustainable economic growth [2]. Thus, the elimination of illiteracy is a key goal in the United Nations Sustainable Development Goals 2030. There are various approaches to enhancing adult literacy, including traditional instructor-led literacy programs that use direct instructional strategies [3,4] and the increasingly popular use of Information and Communication Technologies (ICT) that follow a learner-centered, active-learning approach [5,6,7]. The curriculum should be appropriate for instructing adults, as they possess matured cognitive abilities and their motivation to learn is shaped by life experiences, which direct their method of acquiring knowledge [8]. Multiple studies suggest different techniques, such as using environmental print material to tutor the nonliterate adult population [6].
According to research, nonliterate adults tend to struggle with the cognitive processing of spoken language [9]. They have weaker abilities in retaining both verbal and visual information, and they possess lower visual-spatial abilities [10]. Katre et al. [11] found that these limitations stem from differences in cognitive development. Therefore, ICT should be designed according to the requirements of nonliterate adults. It is crucial to determine whether the design principles that have been successful in conventional ICT can also be applied to create usable VR applications for nonliterates, considering the differences in interactivity and audio/visual presentation between VR and traditional ICT. Furthermore, the design should be evaluated against design standards such as ISO 9241-11 [12].
The interaction experience in traditional ICT applications involves hardware devices such as a mouse or a keyboard that provide an indirect manipulation experience to the users. Each device provides a distinct interaction experience, and both are often used to accomplish the same task in applications. These are therefore the two communication channels, or modalities, engaged to interact with traditional ICT applications, as defined by Bartneck et al. [13]. Multiple studies indicate that novice users face initial difficulties while using these devices [14,15,16,17,18]. One of the major difficulties is the extra time invested in learning keyboard input or moving a mouse to click interface elements. Current VR systems, e.g., the Oculus Quest 2 and HTC Vive, use head-mounted displays (HMDs) for the perception of vision and audio, and motion controllers or real hands are used as modalities for reality-based interaction. VR therefore provides unique experiences of interaction, a sense of presence, and immersion in three-dimensional (3D) virtual environments. Despite several studies aimed at assessing the interaction experiences of the general population with VR [19,20], there is a lack of research specifically focused on nonliterate users. Consequently, there are no established guidelines for evaluating the VR interaction experiences of the nonliterate population.
This study aimed to evaluate and compare the usability of various interaction modalities of VR systems in the context of nonliterate users following ISO Standard 9241-11. The study also aimed to compare the modalities among different user groups, such as tech-literate, non-tech-literate, and nonliterate, through a designed VR educational application. Therefore, the targeted research questions are as follows:
- RQ 1: Is the designed educational application usable for the nonliterate population?
- H1: The designed educational application will be usable by the nonliterate population.
- RQ 2: If yes, then how easy is the application for nonliterate users as compared to the two other groups?
- H2: The designed VR application will be as easy to use for nonliterate users as it is for literate users.
- RQ 3: Which interaction modality is more usable by nonliterate users?
- H3: Nonliterate people will find hands more usable than controllers due to their intuitive, reality-based interaction style.
The rest of the paper is organized as follows: Section 2 discusses literature in the context of the objectives of this research. Section 3 explains the design of the VR prototype, and Section 4 explains the measures used in the research. Section 5 is about study 1 and the analysis of the data and the discussion of the results. Section 6 covers the analysis of results and discussions for study 2. Section 7 summarizes the results, highlights some of the limitations of the research, and provides suggestions and recommendations. Finally, Section 8 concludes the research.
2. Literature Review
In the context of the aims and objectives of this research, the following sub-sections of the literature review will discuss the modalities of VR, the interactivity of VR, VR-induced sickness problems, and finally the characteristics of the nonliterate adult population that must be taken into consideration when designing VR experiences.
2.1. Virtual Reality
Virtual reality refers to a computer-generated simulation of a 3D environment that can be interacted with in a seemingly real or physical way by a person using special electronic equipment, such as a headset with sensors [21]. This technology creates an immersive experience for the user, in which they can feel as though they are present in and a part of the artificial environment. With the availability of affordable consumer VR systems and rapid research and development in terms of vividness and interactivity, there is a growing trend toward its adoption in various fields of education and training.
The psychological aspects of VR encompass how the human mind processes and perceives the experience in a simulated 3D environment. These aspects include presence and immersion as the most important concepts that make the VR medium stand apart from others [22]. Presence refers to the experience of being physically present within the virtual environment rather than simply viewing it on a display. Immersion, on the other hand, is the sensation of being completely enveloped in the virtual world, with one’s attention fully devoted to it. These two concepts work together to create the perception of a convincing virtual environment through the interplay of sensory input and the brain’s processing of these stimuli [23,24]. Perception is an active system that uses inputs from our sensory system and information from our cognition. For example, if you see a hurdle blocking your path, your sensory system provides data related to the hurdle, and your cognition provides information about the type of hurdle and how to overcome it. This provides enough cues to our perceptual system to infer a model of our surroundings so we can act accordingly [25]. The same phenomenon occurs in VR. In VR, immersion experience is defined by its capability to support natural sensorimotor possibilities; this stimulates our perceptual system to generate an illusion of being there, called presence [26]. Figure 1 shows that flow is the ultimate objective of any VR experience.
Figure 1.
Implementation elements influencing VR Characteristics.
Flow is an autotelic experience, i.e., a self-contained activity done for its own immediate reward without the anticipation of future benefits [27]. Flow carries a person out of ordinary reality into absorption, without rumination on after-effects. A person in the state of flow has a boosted sense of self and concentration due to greater control, involvement, and enjoyment [28]. In this state, the perception of time is transformed, and the person becomes disconnected from their surroundings [29]. Consequently, this deep focus and enjoyment affect performance and its quality [30]. For users to be in the state of flow, they must experience a sense of presence and immersion [31,32], which are directly affected by the quality of engagement, vividness, and interactivity [21].
For an engaging flow, content is always the most important commodity [33]. Content takes so many forms that it resists a single definition. In VR, most development resources are allocated to the creation, management, and marketing of content [34]. Content is divided into audio/visual components such as environments, objects, visual effects, sound effects, and music. Simply displaying the content is not enough to provide an optimal flow experience in VR; a player’s interaction with the content and its response are equally important. Therefore, for the optimal flow experience to happen, one must be engaged in an activity that is aptly challenging for one’s skill level [35]. These activities should be designed so that the optimal flow experience is easily achieved: by providing appropriate equipment and an appropriate environment, devising rules that require learning skills, setting achievable goals, providing consistent feedback, and ensuring control [36]. Moreover, virtual reality sickness is a characteristic that can negatively affect the VR experience [37]. The focus of this work is to measure the interaction usability of the VR system for the nonliterate population; therefore, the following sub-sections discuss interactivity [21], VR sickness [37], and the characteristics of nonliterate adults that must be considered in the interaction design of VR applications.
2.2. Interactivity
Interactivity in VR refers to the ability of a user to actively engage with and control elements within a virtual environment. This can include actions such as moving objects, selecting items, and communicating with other users or virtual characters. Interactivity is a key aspect of VR, as it allows users to experience a sense of agency within the virtual world and helps to increase their sense of presence and immersion [38]. Interactivity is stimulated by the technological aspects of the VR system [21]. In consumer VR systems, interactivity is achieved through the HMD, handheld motion controllers, motion tracking of the HMD and controllers, and hand tracking. These can also be called the interaction modalities of VR systems. The presence of the head and hands is enabled through motion tracking of the HMD and controllers/hands within a specified room-scale play area. There are two types of motion tracking: outside-in, where external motion sensors are used, and inside-out, where the sensors are integrated into the HMD [39]. The most common interactions enabled by this technology are simulation of hand presence, recognition of head and hand gestures, manipulation of 3D content, and facilitation of physical and artificial locomotion.
Current VR devices provide hand presence and interaction with the virtual world using wireless motion controllers and/or hand tracking. The handheld controllers have varied interactivity based on their designs. Figure 2 illustrates controllers for popular VR systems. These controllers are ergonomically designed to enable users to realistically interact with the VR environment. VR controllers have traditional action buttons and analog triggers, as found in many gaming controllers. Thumbsticks are provided for movement/locomotion, and internal motors are used for haptic feedback. The controllers track their position and send the data to the HMD to detect user hand movements and interactions. They can be programmed for several types of interactions, such as grab, pinch, and poke. These controllers provide a high degree of interaction fidelity and have a minimal temporal offset between real and virtual actions [40]. Hand tracking is also enabled in current state-of-the-art VR systems: instead of motion controllers, the user’s real hands are tracked by cameras/sensors attached to the HMD and rendered in the virtual world, so users can employ natural hand movements and gestures for interaction. Compared with controllers, designing and implementing real-hand interactions and gestures can be challenging. Errors in position tracking and gesture recognition can occur when the hands are not visible to, or are awkwardly angled relative to, the HMD’s cameras/sensors [41]. A higher temporal offset between real and virtual actions can also be experienced because the hand-tracking data must first be processed by the HMD [42]. Several locomotion techniques are possible in the current generation of VR systems, divided into two categories: physical locomotion, which is controlled by the user’s movements in the real world using the motion tracking of the HMD, and artificial locomotion, which is controlled through the motion tracking of the HMD and controllers. Examples of the latter are teleportation, walking in place, and world-pulling.
Figure 2.
Left: HTC Vive; middle: Oculus Rift; right: Oculus Quest 2.
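To make the modality comparison above concrete, the following simplified Unity C# sketch shows how a basic grab interaction can be wired up. It is illustrative only: the IsGripping flag stands in for the grip-button press (controllers) or fist-pose detection (hand tracking) supplied by the input layer, and none of these names come from the Oculus Interaction SDK’s actual API.

```csharp
using UnityEngine;

// Illustrative sketch only: a simplified grab interaction of the kind
// described above. The IsGripping flag stands in for the grip button
// (controllers) or a detected fist pose (hand tracking); real SDKs such
// as the Oculus Interaction SDK provide richer components, and none of
// these names come from that SDK's actual API.
public class SimpleGrabber : MonoBehaviour
{
    // Set by the input layer each frame (assumption, see above).
    public bool IsGripping;

    private Rigidbody heldObject;  // object currently held, if any
    private Collider hoverTarget;  // grabbable the hand currently overlaps

    private void OnTriggerEnter(Collider other)
    {
        // Grabbables are assumed to carry a "Grabbable" tag and a Rigidbody.
        if (other.CompareTag("Grabbable")) hoverTarget = other;
    }

    private void OnTriggerExit(Collider other)
    {
        if (other == hoverTarget) hoverTarget = null;
    }

    private void Update()
    {
        if (IsGripping && heldObject == null && hoverTarget != null)
        {
            // Grab: parent the object to the hand/controller anchor.
            heldObject = hoverTarget.attachedRigidbody;
            heldObject.isKinematic = true;
            heldObject.transform.SetParent(transform);
        }
        else if (!IsGripping && heldObject != null)
        {
            // Release: detach and hand the object back to physics.
            heldObject.transform.SetParent(null);
            heldObject.isKinematic = false;
            heldObject = null;
        }
    }
}
```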
There can be unimaginable possibilities for interactions in VR, but it is necessary to evaluate the contextual interactivity of a VR experience [43]. In [44], the interactivity of handheld controllers was measured and interaction design guidelines were proposed. However, these guidelines are general and are not specific to a certain type of VR experience.
2.3. VR Sickness
VR sickness, also known as cybersickness or simulator sickness, is a phenomenon where users of virtual reality (VR) systems experience discomfort or symptoms similar to motion sickness [37]. This is caused by a mismatch between what the user sees and what their body feels, leading to feelings of nausea, dizziness, and headaches. Factors contributing to VR sickness include high levels of motion, rapid changes in visual information, and a lack of stability in the virtual environment [45]. A person experiences motion sickness when there is a sensory conflict between visual stimuli and the vestibular system [46]. In VR, visual stimuli are the major sensory input and can therefore induce motion sickness and adversely affect the user experience. Eye movements and the vestibular system are predicted to be the major contributors to VR sickness, and it has been advised that reducing eye movements and incorporating motion simulation synchronized with visual stimuli may reduce it [37,45]. Since VR sickness (nausea and discomfort) may reduce the user experience, it is essential to measure it. The most widely used subjective method is the simulator sickness questionnaire (SSQ) [47]. Research has shown that, instead of administering the complete SSQ, the most common SSQ questions can be used to obtain a basic measure from the user [48]. Similarly, for objective measurement, postural sway can be observed [37].
2.4. Characteristics of Nonliterate Adults
Designing VR applications for nonliterate people requires considering certain specific characteristics of this population. For example, they may have limited or no experience with technology, which means that interfaces and interactions must be intuitive and simple to understand. They may also have difficulty processing visual information, so complementary cues such as audio, haptic feedback, and gesture-based controls may be more effective. Additionally, it is important to consider cultural factors, such as whether the language used in the application is appropriate, and to keep in mind that literacy levels can vary widely even within a single population. Overall, the design of VR applications for nonliterate people requires careful consideration of their specific needs and abilities in order to create a successful and accessible experience. Multiple studies therefore recommend that ethnographic characteristics also be considered when designing content for ICTs [49,50,51,52]. These characteristics include life experiences, sociocultural factors, gender disparity, etc. Based on these observations and recommendations, several well-known general guidelines have been proposed. These guidelines focus on audio/visual and task elements, for example, using hand-drawn images with visual cues, short and explicit audio cues for complex concepts, and showing consecutive steps during a task [50,53]. These guidelines might influence the implementation of engagement and vividness elements, but not the interactivity aspect of VR systems. In contrast, the use of modern technology, for example smartphones or computers, can be considered a factor in interactivity.
The design and development of VR applications according to users’ characteristics demand investigation of the users’ interaction behavior with the applications [49,50,51]. Such investigation is central to user-centered design (UCD). The UCD process should be conducted whenever the underlying technology changes, or when the technology remains the same but the group of users changes. For example, Rasmussen et al. [54] noted that whenever an interface changes, it should be evaluated again with the target group of users.
3. VR Prototype Design
The aim of the study was twofold: first, to evaluate how well a certain interaction implemented in VR suits the nonliterate population in terms of effectiveness, efficiency, and satisfaction as per ISO Standard 9241-11; second, to evaluate the usability of two interaction modalities, i.e., motion controllers and real hands, along with the negative impact of VR sickness. To achieve these aims, two VR application levels were designed. As the focus is on nonliterate adults, the content for the VR application was designed considering the users’ characteristics described above. Section 3.1 explains the design of the VR application levels and the content, and Section 3.2 provides information about the tools and technologies used.
3.1. VR Environment and Level Design
The artifacts of engagement and vividness were designed and organized in a way that corresponds with the cognitive differences and characteristics of nonliterate adult users. Elements of game-based learning were also incorporated, as it is considered an effective approach to transferring knowledge [55]. Wade et al. [56] found that prior-knowledge-induced curiosity leads to higher learning. Therefore, we designed the game objects and environments following this concept while also considering the life experiences and sociocultural norms of nonliterate adults. Cognitive load is a major factor in increased error rates during gesture-based tasks [57]. Therefore, considering the lower visual-spatial skills of nonliterates, confined environments were designed to reduce cognitive load. Given the lower language comprehension of nonliterates, easily comprehensible language was used for the audio instructions. Compliments are considered positive emotion-laden word types [58], and positive emotions facilitate learning [59]; therefore, complimentary remarks were also added. To invigorate an all-encompassing feeling, relaxing background music along with atmospheric sounds and effects was added. Language learning techniques were selected that can be effectively implemented in VR. Interaction schemes were programmed for both motion controllers and hand tracking. Only physical locomotion was used, within a specified room-scale boundary, to mitigate VR sickness. Unintended accidents may occur while wearing the HMD; therefore, the real-world view was displayed on the HMD screens via pass-through cameras upon crossing the virtual boundary. The prototype was thoroughly used and tested by expert VR users to find any bugs or exceptions. The final iteration was installed on the Oculus Quest 2 VR system. Figure 3 illustrates the level design process.
Figure 3.
Level Design Process.
Two levels were designed to evaluate the interaction possibilities of the VR system. Level one was used to test and evaluate the usability and interactivity of different kinds of basic interactions using motion controllers and hand tracking. These basic interactions include grabbing, pinching, and poking. These basic interactions were then used to create complex interactions, as shown in Table 1.
Table 1.
Implementation scheme of Interaction modalities and Interaction types.
Level two introduced some more complex interactions and presented the user with a learning experience involving the Urdu alphabet and its learning resources. The learning processes of writing, memorizing, and recognizing the alphabet were implemented through three practice modes.
In the first level, the user was presented with an environment consisting of basic game objects like small and large cupboards, a table, and a TV stand. Some electronic items, such as a TV and a music system, were also placed. Everyday items were arranged so that the user required minimal locomotion to interact with them. Most of these items were grabbable, and some were pressable or usable. Some unrealistic objects based on real-world concepts were also placed in the environment, such as a floating TV remote with big buttons. The virtual environment and the interaction modes and possibilities are shown in Figure 4. In VR, spatial audio plays a vital role in the sense of presence [60]. Therefore, several audio sources were placed in the environment to play music, ambiance sounds, interaction sounds, and instructions for the user.

Figure 4.
Gameplay Images of Level One, designed to test and evaluate several possible interactions using motion controllers and hand tracking. Images (a–c) show the user interacting with the objects using motion controllers, while (d–f) show interaction with hands.
Furthermore, this level comprised six tasks, and the user had to complete all of them to progress. Each task was designed to evaluate a specific type of interaction, as shown in Table 2. Periodic audio instructions were played to guide the user toward task completion. After the successful completion of a task, complimentary audio was played to motivate the user.
Table 2.
Level One Tasks and Interactions Mapping.
The user could progress to the next level by pressing a big red button that appeared only after every task in level one was completed. The main objective of this level was to evaluate the basic interaction possibilities while training the users for more complex interactions in the next level.
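For readers interested in how such task flows can be instrumented, the following minimal Unity C# sketch illustrates sequential task progression with per-task completion-time logging, the kind of logged gameplay data analyzed later; all names are illustrative, not the prototype’s actual code.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch only (names are ours, not the prototype's code):
// sequential task progression with per-task completion-time logging,
// the kind of logged gameplay data used for the efficiency measures.
public class TaskLogger : MonoBehaviour
{
    private readonly List<float> completionTimes = new List<float>();
    private int currentTask = 1;
    private float taskStartTime;

    private void Start() => taskStartTime = Time.time;

    // Called by a task's trigger when its goal condition is met.
    public void CompleteCurrentTask()
    {
        float elapsed = Time.time - taskStartTime;
        completionTimes.Add(elapsed);
        Debug.Log($"Task {currentTask} completed in {elapsed:F2} s");

        currentTask++;
        taskStartTime = Time.time;

        if (currentTask > 6)
            ShowNextLevelButton(); // the "big red button" appears here
    }

    private void ShowNextLevelButton()
    {
        // Placeholder: enable the level-progression button object.
    }
}
```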
The second level presented the user with an Urdu alphabet board with interactable alphabet cards. A new interaction type, “distance grab,” was introduced at this level. After grabbing an alphabet card, the user could press the button on the card to play an educational video associated with that alphabet. Three alphabet writing modes were also created at this level so that the user could practice the learned alphabet. The first mode presented a traditional blackboard-and-marker setup where the user could grab a marker and write on the board. The second mode was an Urdu keyboard with color-coded alphabetic keys; color-coding alphabet families has been shown to be effective in teaching nonliterates [61]. The third mode gave the user the ability to write in the air for writing practice; Itaguchi et al. [62] demonstrated that this is an effective way to memorize language shapes and letters both consciously and unconsciously. Figure 5 illustrates the user’s interaction with the VR environment using the different interaction modalities.
Figure 5.
Gameplay images of Level Two, designed as the application of the interaction types in Level One, and a complex interaction type known as distance grab is also used. Images (a–c) show the user interacting with the objects using motion controllers, while (d–f) show interaction with hands.
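The air-writing mode can be illustrated with a short Unity C# sketch that traces the fingertip into a LineRenderer stroke while a pinch is held. The pinch-detection source, the fingertip transform, and all names are assumptions for illustration rather than the prototype’s implementation.

```csharp
using UnityEngine;

// Illustrative sketch: "writing in the air" by tracing the fingertip
// while a pinch is held. A new LineRenderer stroke starts per pinch.
// IsPinching and fingertip are assumed to be fed by the hand-tracking
// layer; none of these names come from the prototype or the SDK.
public class AirWriter : MonoBehaviour
{
    public Transform fingertip;       // tracked index-finger tip
    public LineRenderer strokePrefab; // pre-configured line prefab
    public bool IsPinching;           // set by the hand-tracking layer

    private LineRenderer currentStroke;

    private void Update()
    {
        if (IsPinching)
        {
            if (currentStroke == null)
                currentStroke = Instantiate(strokePrefab);

            // Append the fingertip position to the current stroke.
            currentStroke.positionCount++;
            currentStroke.SetPosition(currentStroke.positionCount - 1,
                                      fingertip.position);
        }
        else
        {
            currentStroke = null; // end the stroke on pinch release
        }
    }
}
```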
Like level one, the user had to complete tasks; Table 3 shows the task interaction mapping of level two.
Table 3.
Level Two Tasks and Interactions Mapping.
3.2. Tools and Technologies
Unity 2021.3.8 was used as the game engine with the Oculus Interaction SDK, and C# was used as the scripting language. Open-source tools were used: Blender for 3D modeling, GIMP for textures, and Audacity for sound recording. Moreover, free assets from the Unity Asset Store and Quixel’s Megascans were also used. Informative videos on the Urdu alphabet and Pakistan were linked in the game from the YouTube channels “MUSE Lessons—Education Cartoons for Kids,” “Urdu Reading,” and “Discover Pakistan.”
This study used the Oculus Quest 2 VR system, which includes an HMD and two motion controllers. The Oculus Quest 2 is among the most affordable VR systems and provides several excellent features. It requires no external computer or sensors, as it has its own Android-based computer and inside-out tracking system. Users can interact with VR using the motion controllers or their hands. The portability, simplicity, and feature-rich characteristics of this system make it an ideal choice for this research.
4. Measures
Two studies were conducted to evaluate the usability of VR systems for nonliterate adult users. In the first study, we compared the usability of VR applications amongst three user groups, i.e., tech literates, non-tech literates, and nonliterates, and analyzed the differences based on the usability of interaction modalities and the effect of using technology. The second study was conducted only on nonliterate people. In this study, we used the data from study 1 as hypothesized values for tests in study 2. Furthermore, we added variables for technological experience and analyzed their effect on usability.
The interaction outcomes were measured as per ISO Standard 9241-11. The standard defines usability in terms of effectiveness, efficiency, and user satisfaction. Effectiveness is the capability of users to carry out tasks and the quality of the task output. Efficiency is the amount of resources the user consumes in executing the tasks. Satisfaction is the personal response of the user to using the system. These three aspects of usability should be measured holistically to evaluate the interaction usability of the designed application. There are multiple examples of such ICT evaluations, such as designing ATM user interfaces [10], designing multimedia content [11], and developing a website for nonliterate people [50]. The personal reaction of satisfaction can be measured subjectively using the SUS [63]; thus, a survey was created using the 10 SUS items and a 5-point Likert scale. The questions are shown in Table 4. Based on previous research, a SUS score ≥ 68 is considered above average and < 68 below average. The SUS score was calculated by summing the contribution of each item (for items 1, 3, 5, 7, and 9, the item value minus 1; for items 2, 4, 6, 8, and 10, 5 minus the item value) and multiplying the sum by 2.5, which gives the SUS score a range of 0 to 100. The objective measures of efficiency and effectiveness were derived from logged gameplay data, internal and external video recordings, and documented observations for the variables listed in Table 5. VR sickness was measured using a 2-item questionnaire selected from the SSQ [37]. All the measures for dependent variables were verified using exploratory factor analysis.
Table 4.
SUS Questions.
Table 5.
Variables and their measures.
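For clarity, the SUS scoring rule described above can be expressed as a short C# function (a direct transcription of the rule, not code used in the study):

```csharp
public static class Sus
{
    // Computes the SUS score (0-100) from ten 1-5 Likert responses,
    // following the scoring rule described above.
    // Example: ComputeScore(new[] { 5, 1, 5, 1, 5, 1, 5, 1, 5, 1 }) == 100.0
    public static double ComputeScore(int[] responses)
    {
        if (responses == null || responses.Length != 10)
            throw new System.ArgumentException("SUS requires exactly 10 items.");

        int sum = 0;
        for (int i = 0; i < 10; i++)
        {
            // Items 1, 3, 5, 7, 9 (even index): contribution = value - 1.
            // Items 2, 4, 6, 8, 10 (odd index): contribution = 5 - value.
            sum += (i % 2 == 0) ? responses[i] - 1 : 5 - responses[i];
        }
        return sum * 2.5;
    }
}
```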
Validity of Measures
The questions in the SUS are proven to have both internal and external validity and have been used in numerous other studies [64,65]. The validity of the measures for effectiveness and efficiency was verified by exploratory factor analysis. The VR sickness measures were also verified, as VR sickness can negatively affect other variables. The results in Table 6 and Table 7 show that the selected measures can be used as factors in this research. All components with an absolute factor loading greater than 0.5 were included as measures in this research.
Table 6.
Results of Exploratory Factor Analysis—Level One.
Table 7.
Results of Exploratory Factor Analysis—Level Two.
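The inclusion criterion amounts to a simple filter over the loadings; an illustrative C# sketch follows (the factor analysis itself was run in SPSS, and the names here are ours):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class MeasureSelection
{
    // Keeps the measures whose factor loading exceeds 0.5 in absolute
    // value, per the inclusion criterion stated above. The loadings
    // themselves come from the factor analysis (run in SPSS).
    public static List<string> SelectMeasures(Dictionary<string, double> loadings)
    {
        return loadings
            .Where(kv => Math.Abs(kv.Value) > 0.5)
            .Select(kv => kv.Key)
            .ToList();
    }
}
```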
5. Study 1
The main aim of this study was to evaluate the usability of the VR system and its interaction modalities for nonliterate adult users. We did not find any similar research to compare our results against. Therefore, we decided to test our VR prototype on adult users belonging to the following user groups:
- Tech-Literate: This group encompasses literate individuals with a high level of expertise and proficiency in utilizing computer systems, software applications, and digital devices. Participants in this group were mainly from computer science and software engineering backgrounds.
- Non-tech-Literate: This group consists of literate individuals with a basic or limited familiarity with the use of technology. Participants in this group came from non-technical fields.
- Nonliterate: This group comprises individuals who are not literate, regardless of their level of technology literacy. These individuals may have difficulties using digital devices and computer systems and may need support or training to effectively utilize technology. Participants in this group came from a variety of fields that did not require education or technical experience.
This study was conducted to answer the following research questions:
- RQ 1: Is the designed educational application usable for the nonliterate population?
- H1: The designed educational application will be usable by the nonliterate population.
- RQ 2: If yes, then how easy is the application for nonliterate users as compared to the two other groups?
- H2: The designed VR application will be as easy to use for nonliterate users as it is for literate users.
5.1. Procedure
All participants in the study provided their consent, either by completing a form (for literate participants) or giving verbal approval (for nonliterate participants), which was recorded by the experimenter. Some of the female participants refused external video capture, so the experimenter recorded the observations on paper. The experiment was continued only after the consent of the participant. The procedure of the experiment was duly approved by the COMSATS University Islamabad, Research and Evaluation Committee (CUI-REC). The experiments were conducted from 7–11 November 2022. The experiment was based on a between-subjects design. The experiments were conducted at separate locations as per users’ suitability and availability.
The VR prototype was designed in such a way that an average user took about 6 to 8 min to complete it. VR gameplay data was logged, and internal gameplay and external videos were recorded for objective evaluation. After the gameplay, the literate participants completed a concise survey based on the 10 SUS items and two VR sickness items, which took 3 to 5 min. The experimenter read the same questions in the local language to the nonliterate participants, and their responses were recorded. Before using the VR devices, each participant was asked to sanitize their hands. Remedial measures were prepared in case VR sickness was experienced by the user, including a place for the user to lie down, drinking water, and a jar of lemon and orange sweets.
5.2. Participants
The participants in the study were all residents of Abbottabad, KPK, Pakistan. They were recruited from different departments at the COMSATS University Islamabad, Abbottabad Campus: tech literates from the Computer Science department (software engineering students and faculty), non-tech literates from the Management Science department (students and faculty from non-technical degree programs), and nonliterates from the Establishment department. Some nonliterate participants, from various backgrounds lacking literacy skills, were also sourced from outside the university. All participants practiced Islam as their religion and were fluent in Urdu, the national language of Pakistan. Due to cultural sensitivity, a female experimenter was arranged, as the female participants felt uncomfortable performing the experiment in front of male individuals. A total of 30 participants took part in the experiment, with ages ranging from 21 to 55 years. Of the 30 participants, 12 (11 males and one female) were from the tech-literate group, eight (five males and three females) were from the non-tech-literate group, and 10 (seven males and three females) were from the nonliterate group. Only one tech-literate participant had prior experience with VR. All the literate participants used modern ICT equipment, while the nonliterates had limited interaction with ICT, using only basic mobile phones (N = 3) or mostly using smartphones to watch multimedia content (N = 7). The nonliterate participants were aware of computers and smartphones but had never used VR before, yet they were eager to try it. Twelve participants used the VR learning environment (VRLE) prototype with motion controllers, and 18 participants used their hands. More detailed information on the participants is provided in Table 8.
Table 8.
Information about the participants.
5.3. Analysis of Results
Three independent variables were used in this experiment. The comparison between the interactions of distinct types of users with the VR system was evaluated using “User Type,” with three classes (tech-literate, non-tech-literate, and nonliterate). Interaction performance between controllers and hands was evaluated using “Interaction Modality,” with two classes (controllers and hands). The effect of technology use on interactivity was measured using “Use of New Technology,” with two classes (yes and no). SPSS 20 was used to analyze the acquired data. Two one-way ANOVA tests were conducted: in the first, “User Type” was used as the factor for the dependent variables listed in Table 5, and in the second, “Interaction Modality” was used as the factor.
In our survey, the satisfaction level reported by the participants was between 78 and 100 inclusive (nonliterate participants: 78 to 90; literate participants: 83 to 100), which indicates that the users were satisfied with the designed application. The descriptive and ANOVA results are attached as Appendix A, Appendix B, Appendix C and Appendix D and Table A1, Table A2, Table A3 and Table A4. All the statistically significant results, along with their descriptive values, are shown in Table 9 and Table 10. Out of 30 measures (seven for effectiveness, 20 for efficiency, and three for VR sickness), only eight variables were found statistically significant with “User Type” as the independent variable and only four with “Interaction Modality.” Overall, the results suggest that there were no significant differences in the use of the designed VR application among the selected user types (tech-literate, non-tech-literate, and nonliterate).
Table 9.
ANOVA Analysis Results—user type as the independent variable.
Table 10.
ANOVA Analysis Results—interaction modality as the independent variable.
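For reference, the statistic behind these tests is the one-way ANOVA F ratio, sketched below in self-contained C# (the actual analysis was performed in SPSS 20; this is a transcription of the standard formula, not the study’s code):

```csharp
using System;
using System.Linq;

public static class Anova
{
    // One-way ANOVA F ratio: one inner array of observations per class
    // of the factor (e.g., the three "User Type" classes).
    public static double OneWayF(double[][] groups)
    {
        int k = groups.Length;                         // number of groups
        int n = groups.Sum(g => g.Length);             // total observations
        double grandMean = groups.SelectMany(g => g).Average();

        // Between-group sum of squares.
        double ssBetween = groups.Sum(g =>
            g.Length * Math.Pow(g.Average() - grandMean, 2));

        // Within-group sum of squares.
        double ssWithin = groups.Sum(g =>
            g.Sum(x => Math.Pow(x - g.Average(), 2)));

        double msBetween = ssBetween / (k - 1);        // df1 = k - 1
        double msWithin = ssWithin / (n - k);          // df2 = n - k
        return msBetween / msWithin;
    }
}
```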
The variables for which significant differences were found between the user types and interaction modalities are discussed in detail in the following sections. Furthermore, multivariate analysis using statistically significant variables from both ANOVA tests was conducted to verify the effect of the use of modern technology on interactivity. The related data is attached as Appendix E and Table A5. All the measures having significant mean differences are displayed in Table 11. The results are elaborated on in later subsections.
Table 11.
Multivariate Analysis of Statistically Significant Variables.
5.3.1. Analysis of Results with “User Type” as Predictor
There were three user classes: tech-literate, non-tech-literate, and nonliterate. The dependent variables listed in Table 9 were found to be statistically significant with respect to user type. The first significant variable was the “2nd task completion time,” which was logged during level one gameplay. This task was created to evaluate the grab-and-move interaction: the user had to grab a blue pebble placed on the map of Pakistan and move it to the KPK province. The pebble’s movement was constrained along the x- and y-axes; therefore, it could not be grabbed and moved freely like other objects. It was observed that tech literates learned this interaction in three to four tries, yet the task proved difficult for the non-tech literates and nonliterates. This is also evident in Table 9, where the mean task completion time for non-tech-literate and nonliterate users is 54.25 and 58.80 s, respectively, while tech literates completed the task, on average, in 25.58 s. The results indicate that constrained movement in three-dimensional space was not comprehended properly by the non-tech-literate and nonliterate groups.
The next significant measure, “4th task completion time,” was also logged during level one. Grab, move, and place interactions were tested in this task: the user had to open a drawer in the small cupboard and put a gold bracelet inside. As in the 2nd task, the drawer’s movement was constrained, here along the z-axis only, but this time the difficulty was not due to the constrained movement. It was observed that nonliterate users did not grab the drawer by its handle; instead, they tried to open it from the sides. That may be why the nonliterates lagged behind the other groups.
Another variable from level one, “6th task completion time,” was also found to be significant. This task introduced interaction with usable objects: the user had to open a red box, grab the pistol inside, and shoot a target. This task was among the most complex because the user had to interact in multiple ways to complete it. The sequence of interactions started with opening the box lid, which required grabbing the lid handle and moving it upwards. The next interaction was to grab the pistol and use it to destroy the target; the pistol had to be held with a grip interaction and fired with the index finger. Interestingly, the tech-literate and nonliterate users completed this task more slowly than the non-tech-literate ones. Some of the earlier experiments were conducted in a lecture room where the brightness levels were not adequate for hand tracking by the HMD, and all of those users were tech literate; it can therefore be assumed that this disparity resulted from that factor. Overall, all the significant measures above involved the grab-and-move interaction, which proved difficult for nonliterate users. Therefore, we decided not to implement this interaction in later levels.
The next significant measure was the completion time of the second task of level two, where the user had to press a small button on an alphabet card to watch an informative video on the TV. To complete this task, the user either had to watch the whole video (30 to 45 s) or press the button again to stop it. Despite knowing that the video could be stopped, most of the nonliterate users watched the whole video while the others stopped it, which is reflected in the result that nonliterate users completed this task in roughly double the time; it also indicates their curiosity to learn.
The measure “Errors in distance grab” was also related to level two. The observations revealed that nonliterate users tried to grab more than one card at a time. Another unexpected behavior by the nonliterates was uncovered during the observations: it seemed that they wanted to collect the alphabet cards. When asked about this after the experiment, their responses disclosed that the reticle displayed while trying to grab a card was confusing. This observation was noted for a future iteration of the VRLE.
The next three statistically significant measures were related to behavioral observations. The first, “User required external help,” means that the user sometimes relied on outside help to complete a task. For example, in one of the experiments, it was observed that a nonliterate user was not moving while wearing the HMD. The experimenter intervened and helped him move physically to get him started. Later investigation revealed that the user had been waiting for some event to happen; moreover, the user said that he had never seen such content and was confused, but that given another chance, he would do better. The second observed behavior was that the “user tried to interact with every object,” meaning that the user was not just picking up the required object but also trying to grab other static or irrelevant objects, which were deliberately placed in the level to cater to the curiosity element. This observation was more prominent in level one. From the results, it was evident that most of the nonliterate users interacted only with the objects required for task completion, and they therefore had the highest mean in the next measure, “User follows in-app instructions.” It was also apparent from the results that prior-knowledge-induced curiosity was most commonly observed in tech-literate users.
5.3.2. Analysis of Results with “Interaction Modality” as a Predictor
This analysis was made to compare the differences in interaction performance between controllers and hands as interaction modalities. We found four statistically significant measures. The first was the completion time of the sixth task in level one, which also relates to the measure of errors in grabbing and using the pistol. As discussed in the section above, in this task the user had to use a pistol to shoot a target. It was observed that shooting using the controllers was much quicker than using actual hands, because the controller had physical buttons for grip and trigger: users could grab the gun using the grip button and shoot using the trigger button. The results in Table 10 reveal that the grab-and-use interaction performed better with the controllers.
The results also indicated that the temporal offset between real and simulated actions while using real hands as a modality could affect fast-paced interactions in VR. Users’ hands were tracked by the HMD, and if the real-world environment was not properly lit, there could be problems with hand tracking due to a greater temporal offset, whereas the controllers were self-tracked and not affected by it. This seems the most plausible explanation for the observed significance. Some unrecorded experiments were conducted under different lighting conditions to check this phenomenon; a reduction in errors was observed when an infrared illuminator was used, but more data is needed for confirmation.
5.3.3. Analysis of Results with the “Use of the Technology” as a Predictor
A multivariate analysis was conducted to analyze the effect of the use of modern technology on the interactivity of VR systems by nonliterate users. An independent variable called “Use of New Technology” was therefore introduced alongside the other independent variables used in the sections above. This analysis used only the statistically significant measures identified in the ANOVA tests. The variable “Use of New Technology” has two classes (“Yes” and “No”); the use of smartphones, computers, laptops, or any other state-of-the-art technology was categorized as “Yes.” Moreover, only the significant measures shown in Table 11 that have notable mean differences were considered in this analysis. It was observed that the use of modern technology did not affect the behavior of the user, yet it affected some interactions. For both interaction modalities, nonliterate users who did not use modern technology had a tough time completing task 2 of level one, which involved the constrained grab-and-move interaction. The other two measures were related to task two of level two, in which the user had to distance grab an alphabet card and press a button on it; the interaction types associated with this task were distance grab, pinch, and poke. Overall, the use of modern technology may affect some of the complex two-handed interactions.
5.4. Discussion of Results
Research question 1 hypothesizes that the designed educational application will be usable by nonliterate users. The results of the experiment showed that the satisfaction level reported by participants was between 78 and 100, indicating that users were satisfied with the application. The results of the two one-way ANOVA tests showed that there were no significant differences in the use of the VR application among the three user types (tech-literate, non-tech-literate, and nonliterate), although a few variables were found to be statistically significant.
The hypothesis for research question 2 states that the application will be as easy for nonliterate users to use as for literate users, which is partially supported by the results of the experiment. The satisfaction levels reported by the participants indicate that all types of users were satisfied with the designed application. However, the results of the one-way ANOVA tests suggest that there are significant differences in task completion times between the user types, with non-tech-literate and nonliterate users taking longer to complete certain tasks compared to tech-literate users. The results also indicate that nonliterate users had difficulty with some of the interactions, such as the grab-and-move interaction and interactions with usable objects. Overall, the results suggest that the application is generally usable by nonliterate users but may need to be improved to better cater to their needs in certain interactions.
6. Study 2
In our previous study, we evaluated the usability performance of a VR application among three user groups: tech-literate, non-tech-literate, and nonliterate. Our findings indicated that the tech-literate and non-tech-literate groups performed relatively better than the nonliterate group. As a result, we took the average values of the dependent variables obtained from the tech-literate and non-tech-literate user groups for each interaction modality and used them as test values. The main objective of this study was to use these test values, shown in Table 12, to analyze the differences in efficiency and effectiveness between the two modalities. Another predictor was added to analyze the effect of years of technological experience on VR usability. This study was conducted to answer the following research question:
Table 12.
Test values.
- RQ 3: Which interaction modality is more usable by nonliterate users?
- H3: Nonliterate users will find hands more usable than controllers due to their intuitive, reality-based interaction style.
6.1. Procedure
The participants in the study voluntarily agreed to participate in the experiment after being fully informed by the experimenter. Some female participants declined to have video recordings taken, so their observations were recorded by the experimenter through written documentation. The study was approved by the CUI-REC at COMSATS University Islamabad and only carried out after obtaining the participants’ consent. The experiments took place from 5–10 January 2023 and employed a between-subjects design, conducted at various locations depending on the participants’ schedules and accessibility.
The VR prototype was designed to take approximately 6 to 8 min to complete, and data was recorded during the gameplay, including internal gameplay and external videos, for evaluation. After completing the VR session, the SUS questions were read to the participants, and their responses were recorded by the experimenter. Before using the VR device, each participant was required to sanitize their hands, and remedial measures such as a place to lie down, drinking water, and lemon/orange sweets were available in case of VR sickness.
6.2. Participants
The participants in the experiment were all from Abbottabad, KPK, Pakistan, and were recruited through the Establishment department of COMSATS University Islamabad, Abbottabad Campus. Some nonliterates were also sourced from external locations. All the participants followed Islam as their religion, and all understood Urdu, the national language of Pakistan. Female participants were hesitant to perform the experiment in front of males; therefore, a female experimenter was arranged. A sample of 10 nonliterate adults (split evenly between males and females), ranging in age from 19 to 50, participated in the experiment. All the participants had smartphones except for one, and three of them also used computers. The participants who had access to modern technology, such as smartphones or computers, used these devices primarily for consuming multimedia content. During the experiment, half of the participants used the VRLE prototype with motion controllers, while the other half used their hands. A full list of participant information can be found in Table 13.
Table 13.
Information about the participants.
6.3. Analysis of Results
The study focused on nonliterate users and aimed to compare the interaction performance between controllers and hands when using a VR application. Additionally, the effect of technological experience on interactivity was measured. In the experiment, two independent variables were employed. The “Interaction Modality” variable, with two classes (controllers and hands), was used to assess the interaction performance between controllers and hands. The “Years of Technological Experience” variable, with three classes (less than 1 year, 1 to 5 years, and over 5 years), was used to evaluate the impact of technological experience on interactivity. The analysis was conducted using SPSS 20. One-sample t-tests were performed on the measures of effectiveness and efficiency, using the hypothesized mean values from Table 12. The tests were conducted for both interaction modalities, and the results are shown in Appendix F, Table A6 and Appendix G, Table A7. The statistically significant results and their descriptive statistics are presented in Table 14 and Table 15.
Table 14.
Significant measures of T-tests with Controller as an interaction modality.
Table 15.
Significant measures of T-tests with hands as an interaction modality.
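For reference, the statistic behind these comparisons is the one-sample t value, sketched below in C# (the actual tests were run in SPSS 20; this is a transcription of the standard formula, not the study’s code):

```csharp
using System;
using System.Linq;

public static class TTest
{
    // One-sample t statistic: compares a group's mean on a measure with
    // a hypothesized test value (here, the Study 1 values in Table 12).
    // The p-value is then read from a t distribution with n - 1 df.
    public static double OneSampleT(double[] sample, double testValue)
    {
        int n = sample.Length;
        double mean = sample.Average();
        double variance = sample.Sum(x => Math.Pow(x - mean, 2)) / (n - 1);
        double standardError = Math.Sqrt(variance / n);
        return (mean - testValue) / standardError;
    }
}
```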
The participants reported a satisfaction level ranging from 70 to 100, with an average of 90.75, indicating a high level of satisfaction with the VR application. Out of 31 measures, only 6 (1 measure of effectiveness and 5 measures of efficiency) were statistically significant when controllers were employed as the interaction modality. However, when hands were used as the interaction modality, the number of statistically significant measures increased to 12 out of 31 (3 measures of effectiveness and 9 measures of efficiency). The findings suggest that nonliterate adults may have struggled when using their hands for interaction and that controllers may be a preferable option until hand-tracking technology improves. Detailed discussions are presented in the following subsections.
6.3.1. Analysis of Results with Controllers as an Interaction Modality
Controllers were found to be easier for nonliterate adults to use than hands as an interaction modality. The controllers were equipped with triggers and buttons that the users used to perform the interactions. Our prototype implemented three basic interactions: grab, pinch, and poke. Before the start of the experiment, the users were informed about the controllers and how they worked, and a 3D model of the controllers was provided in VR for reference.
The first significant measure was “Level 1 2nd Task,” in which the user had to grab a blue pebble on the map and move it to a specific region. The pebble could only be positioned along the x- and y-axes, and the user had to continuously press the grip button to drag it along the map. On average, nonliterate users completed this task 21.11 s slower than the test value. The second measure was “Level 1 4th Task,” in which the user had to open a drawer and place jewelry in it. This was a complex task requiring multiple interactions, but it was observed that the nonliterates struggled to find the drawer handle, which was intentionally colored the same as the cupboard. That was the main reason this variable was significant for both interaction modalities; moreover, the mean differences for the two modalities were similar, i.e., 14.74 and 13.25. The third and sixth measures were correlated and needed to be evaluated together. Both interaction modalities yielded significant findings for these measures, for which the goal was to grab a sword and use it to cut hay sticks; however, the values and causes of the significance varied. The mean difference from the test value was 4.0 s for controllers and 12.09 s for hands. The results showed that, when using controllers, nonliterate users were quick to grab the sword with the grip button but had difficulty holding onto it while cutting the hay sticks, resulting in more mistakes and longer completion times. The slower completion time while using hands, on the other hand, was due to a higher temporal offset. The next measure was “Level 2 1st Task,” in which the user was required to grab an alphabet card using the distance grab interaction. The reasons for this significance align with the ones observed in study 1 and discussed in Section 5.3.2. The last significant measure that occurred in both analyses was related to the user’s curiosity: the results indicated that nonliterates prioritized completing the task at hand and did not spend much time interacting with unmentioned objects, as evident from the lower mean difference.
6.3.2. Analysis of Results with Hands as an Interaction Modality
It is evident from the results of the one-sample t-tests shown in Appendix F, Table A6; Appendix G, Table A7; and Table 15 that using real hands as an interaction modality proved challenging for nonliterate users. Although the same interactions were implemented for both modalities, interacting with objects using hands required the user to form the pose associated with each interaction. It was observed that users created varied poses for the specified interactions. For example, to grab an object, the user had to make a fist, but most of the nonliterates tried to grab the object according to their perception of the real world. This proved to be the major cause of the increased errors and completion time delays. Notably, the reported satisfaction level was higher when hands were used as the interaction modality, yet the observed effectiveness and efficiency were better with controllers.
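To illustrate why pose variation breaks hand interaction, the sketch below shows a threshold-based grab-pose check of the kind commonly used with hand-tracking data; the finger-curl representation and the threshold are hypothetical simplifications of ours, not the prototype's actual implementation. A tight fist passes the check, while the looser, real-world grasp many nonliterate participants produced does not:

```python
from dataclasses import dataclass

@dataclass
class HandState:
    # Curl of each finger, normalized from 0.0 (fully extended)
    # to 1.0 (fully curled). Representation is hypothetical.
    thumb: float
    index: float
    middle: float
    ring: float
    pinky: float

# Illustrative threshold; real systems tune this per pose.
CURL_THRESHOLD = 0.8

def is_grab_pose(hand: HandState) -> bool:
    """Count a fist as a grab only if all four fingers are curled
    past the threshold (thumb ignored for simplicity)."""
    fingers = (hand.index, hand.middle, hand.ring, hand.pinky)
    return all(curl >= CURL_THRESHOLD for curl in fingers)

# A tight fist registers as a grab...
print(is_grab_pose(HandState(0.9, 0.95, 0.9, 0.88, 0.85)))  # True
# ...but a natural, slightly open grasp fails the check, even
# though the user clearly intends to grab the object.
print(is_grab_pose(HandState(0.6, 0.75, 0.7, 0.65, 0.5)))   # False
```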
We have already discussed measures 1, 2, 6, and 11 in the subsection above. The next three significant measures in the study were related to the three modes for practicing alphabet writing and recognition in level two. The first mode involved writing letters in the air, where the user was required to make a specific pinch pose to write; many participants struggled to do so because of the variations in the poses they created. The second mode involved writing with a marker on a board. It was noted that nonliterate users may not know how to hold a marker properly, leading to difficulties in creating the pose necessary to grab it. The third mode was a typewriter-style interface for identifying alphabets. Yet again, it was observed that the users struggled to form the correct pose, possibly because of the numerous buttons in close proximity. The next significant measure was related to human behavior in terms of compliance; it was observed that nonliterate users were more compliant with instructions than the non-tech-literate ones. The next significant measure was related to the poses and gestures required for interaction; as noted several times above, the creation of varied poses was the primary cause of the difficulties in interaction when using hands as the interaction modality. The remaining significant measures dealt with errors in complex interactions. It was noted that tasks that required two-handed interaction had the highest error rates, such as grabbing an alphabet card with one hand and pressing the button on it with the other. Nonliterate users were also found to struggle in unexpected situations. For instance, in level one, the user may have had to adjust the speed of their sword swing to cut the hay stick due to a temporal offset, but nonliterate users were unable to do so.
6.3.3. Analysis of the Effect of Years of Technological Experience on Interaction
A Kruskal–Wallis H-test was also performed to evaluate the impact of technological experience on interaction. Initially, it was assumed that this factor could affect the effectiveness and efficiency of interactions, but the results in Appendix H, Table A8 indicate that the number of years of experience with technology had no significant impact on the usability of the VR application.
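For reference, the Kruskal–Wallis H-test is a non-parametric alternative to one-way ANOVA that compares three or more independent groups without assuming normality, which suits the small per-group samples here. A minimal sketch, assuming hypothetical completion times and experience groupings of our own choosing (the actual results are in Appendix H, Table A8):

```python
from scipy import stats

# Hypothetical completion times (s) grouped by years of
# technological experience; groupings and values are illustrative.
exp_0_2_years = [58.1, 61.4, 55.0]
exp_3_5_years = [52.3, 60.2, 57.8]
exp_6_plus_years = [54.9, 59.5, 56.2]

h_stat, p_value = stats.kruskal(exp_0_2_years, exp_3_5_years, exp_6_plus_years)
print(f"H = {h_stat:.3f}, p = {p_value:.3f}")
# A p-value above 0.05 is consistent with the study's finding that
# years of technological experience had no significant effect.
```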
6.4. Discussions of Results
The results of the experiment suggest that controllers are more usable by nonliterates than hands as an interaction modality, which goes against our initial assumption. Nonliterates found controllers easier to use because they were equipped with triggers and buttons for performing interactions. In contrast, using hands proved difficult, as nonliterates had trouble creating the correct pose for the specified interactions. Despite the higher reported satisfaction level with hands, the observed effectiveness and efficiency were better with controllers.
7. Discussions
In this study, the usability of the VR prototype was analyzed by evaluating its effectiveness and efficiency parameters. The users’ satisfaction with the experience was gauged, and all participants were found to be generally pleased. Task completion was considered an indicator of effectiveness, and all users successfully completed each task in the VR prototype. The remaining measures of effectiveness and efficiency were further analyzed using statistical techniques such as ANOVA (as presented in Appendix A, Appendix B, Appendix C and Appendix D) and one-sample t-tests (in Appendix F and Appendix G).
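The appendix tables report standard one-way ANOVAs with user type or interaction modality as the factor. A minimal sketch of the user-type analysis in Python, with hypothetical completion times (group sizes mirror the study's 12/8/10 split, but the values are illustrative, not the logged data):

```python
from scipy import stats

# Hypothetical task completion times (s) per user type; sizes mirror
# the study (12 tech-literate, 8 non-tech-literate, 10 nonliterate).
tech_literate = [20, 25, 22, 30, 28, 24, 26, 21, 27, 23, 29, 25]
non_tech_literate = [48, 55, 52, 60, 50, 58, 54, 57]
nonliterate = [56, 62, 59, 61, 58, 63, 60, 57, 55, 64]

f_stat, p_value = stats.f_oneway(tech_literate, non_tech_literate, nonliterate)
print(f"F(2, 27) = {f_stat:.3f}, p = {p_value:.3f}")
# p < 0.05 indicates a significant difference in mean completion time
# across user types, as reported for several tasks in Appendix C.
```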
7.1. Summary of Results
The following summary of results relates to Study 1. The results showed that for non-tech-literate and nonliterate users, moving objects in 3D proved challenging, as seen in the slow completion times for task 2. Nonliterate users faced difficulties when trying to open the drawer in task 4, attempting to do so from the sides instead of the handle. Task 6, the most complex, took longer for both non-tech-literate and nonliterate users to complete, which may have been due to the low lighting in the room where the experiment was conducted. The results also showed that nonliterate users were slower at completing task 2 of level 2, but their curiosity to learn was evident in their watching the entire video. Nonliterate users struggled with some tasks requiring hands as an interaction method, such as opening a drawer, which required finding a handle that blended in with the cupboard. The results indicated that nonliterate users sometimes sought external help and were confused by the reticle while trying to grab an object. They interacted only with the required objects and followed the in-app instructions, while non-tech-literate users showed a higher level of curiosity and tried to interact with all objects. The study found that controllers were the better option for tasks requiring fast actions, such as shooting a target with a pistol, as they led to faster completion times and fewer errors. Hand tracking in VR, by contrast, introduced a delay between real and simulated actions that affected fast interactions, whereas the self-tracked controllers were not impacted. The multivariate analysis showed that the use of modern technology had a significant impact on some VR interactions for nonliterate users, making task 2 of level 1 more challenging for those who did not use modern technology. It also influenced some complex two-handed interactions.
The following summarizes the findings of Study 2. The results showed that controllers were an easier interaction mode for nonliterate adults than their hands: the controllers were equipped with triggers and buttons, making it easier for users to perform interactions, and nonliterate users completed certain tasks, such as shooting a target, faster and with fewer errors when using them. However, nonliterate users had difficulties with some tasks that required multiple interactions using controllers, such as cutting hay sticks. Overall, the study suggests that controllers were a more effective and efficient interaction mode for nonliterate users than hands, where variations in hand poses caused difficulties in interaction. Nonliterate users were found to be more compliant with instructions but struggled in unexpected situations. Tasks requiring two-handed interaction had the highest error rates, and nonliterate users struggled with understanding and performing the correct poses. The Kruskal–Wallis H-test performed in the study did not find a significant impact of the number of years of experience with technology on the usability of VR applications, suggesting that more experience with technology does not necessarily lead to better VR interaction performance.
7.2. Limitations
The study had some limitations that should be considered. One was the small sample size, which may limit the generalizability of the findings; further research with a larger sample is needed to make more robust comparisons. Another was the variation in lighting conditions between experimental locations, which caused difficulties with hand tracking due to the temporal offset. To mitigate this issue, an infrared illuminator was used, and a decrease in errors was observed, but more data are needed to confirm this effect.
7.3. Suggestions and Recommendations
Based on the results of Study 1 and Study 2, the following suggestions and recommendations are proposed to direct future research and development of VR experiences for nonliterate users:
- Consider using controllers as the interaction mode instead of hands: Results from both studies indicate that nonliterate users find controllers easier to use and perform interactions faster and with fewer errors compared to using their hands.
- Enhance the design of the VR educational application: Based on the results of Study 1, it is recommended that the design of the application be modified to cater to the specific needs of nonliterate users, taking into consideration their behavior patterns and the difficulties they face.
- Improve lighting conditions: Results from Study 1 indicate that low lighting levels can negatively impact the completion time of complex tasks. Hence, it is recommended to ensure adequate brightness levels in the room where the VR experience is conducted.
- Provide clear instructions and a reticle: Results from Study 1 show that nonliterate users sometimes sought external help and were confused by the reticle while trying to grab an object. Hence, it is recommended to provide clear instructions and a visible reticle to help users with their interactions.
- Consider alternative interactions: Results from Study 1 suggest that gestures could be a suitable alternative interaction mode for nonliterate users and warrant further investigation.
- Consider user training and familiarization: Results from Study 1 indicate that user training and familiarization could impact the performance of nonliterate users in VR systems. Hence, it is recommended to explore the effect of training on VR interaction performance for nonliterate users.
- Consider the impact of technology experience: Results from Study 2 suggest that more experience with technology does not necessarily lead to better VR interaction performance. Hence, it is recommended to examine the impact of technology experience on specific VR interaction tasks to better understand how experience affects performance.
- Consider the impact of technology use: Results from Study 1 show that the use of modern technology can have a significant impact on VR interactions for nonliterate users. Hence, it is recommended to explore the impact of using different types of technology (e.g., smartphones, laptops, and computers) on VR interactivity.
- Expand the study to include a wider range of tasks and interactions: Results from Study 1 indicate that nonliterate users struggled with some VR tasks and that the study could be expanded to include a wider range of tasks and interactions to gain a more comprehensive understanding of the abilities of nonliterate users in VR systems.
8. Conclusions
Our study found that the educational VR application is effective and efficient for nonliterate individuals due to its simple interface and use of visual and auditory aids for learning. The results showed high levels of satisfaction among all types of users, with satisfaction scores ranging from 78 to 100. The data showed no significant differences in the use of the VR application among tech-literate, non-tech-literate, and nonliterate individuals. The results also showed that controllers were more usable for nonliterate individuals than hands as an interaction modality and that experience with technology and familiarity with modern technology had little impact on the usability of VR applications.
The first research question asked whether the educational VR application would be usable for nonliterate individuals. The results showed high levels of satisfaction among all types of users, with satisfaction scores ranging from 78 to 100. The data from the two one-way ANOVA tests showed no significant differences in the use of the VR application among tech-literate, non-tech-literate, and nonliterate individuals, although a few variables were found to have a significant impact.
The second research question hypothesized that the educational VR app would be easy for nonliterate individuals to use, and this was partially supported by the results. All user types reported a high level of satisfaction with the app. However, the one-way ANOVA tests indicated significant differences in task completion times among the user types, with non-tech-literate and nonliterate individuals taking longer to finish certain tasks than tech-literate users. The results also revealed that nonliterate individuals had difficulty with certain interactions, such as grabbing and moving objects and interacting with usable objects. This suggests that the educational VR app may need to be improved in certain areas to better cater to the needs of nonliterate users.
The third research question focused on the interaction performance of controllers versus hands for nonliterate individuals. The results showed that controllers were more usable for nonliterate individuals than hands as an interaction modality, which contradicts our hypothesis: only 19.4% of measures were statistically significant when using controllers, compared to 38.7% with hands. Nonliterate individuals found controllers easier to use because of the triggers and buttons for performing interactions, whereas using hands was challenging, as they struggled to create the correct pose for interactions. Despite the higher reported satisfaction level with hands, the observed effectiveness and efficiency were better with controllers.
The results also indicated that experience with technology and familiarity with modern technology had little impact on the usability of VR applications.
Author Contributions
Conceptualization, M.I.G. and I.A.K.; Data curation, M.I.G.; Formal analysis, I.A.K. and S.S.; Investigation, M.I.G.; Methodology, I.A.K.; Project administration, M.E.-A.; Resources, S.S.; Software, M.I.G.; Supervision, I.A.K.; Validation, I.A.K.; Visualization, M.I.G. and I.A.K.; Writing—original draft, M.I.G.; Writing—review and editing, S.S. and M.E.-A. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
The study was conducted in accordance with the guidelines of the COMSATS University Islamabad Research and Evaluation Committee (CUI-REC).
Informed Consent Statement
Informed consent was obtained from all subjects involved in the study.
Data Availability Statement
Data can be found here: https://github.com/Ibtisam/ResearchPaper_MDPI_ID_systems-11-00101.
Acknowledgments
This work was supported by the EIAS Data Science Lab, Prince Sultan University, KSA. The authors would like to thank the EIAS Data Science Lab and Prince Sultan University for their encouragement, support, and facilitation of the resources needed to complete this project.
Conflicts of Interest
The authors declare no conflict of interest.
Appendix A
Table A1.
Descriptive Values of ANOVA with user type as the independent variable. 1—Tech-Literate, 2—Non-Tech-Literate, 3—Nonliterate.
| Measure | User Type | N | Mean | Std. Deviation | |
|---|---|---|---|---|---|
| 1st Task (Object Interaction—Grab Interaction) Completion Time | 1 | 12 | 45.42 | 22.236 | |
| 2 | 8 | 44.88 | 19.853 | ||
| 3 | 10 | 59.40 | 12.756 | ||
| Total | 30 | 49.93 | 19.483 | ||
| Model | Fixed Effects | 18.917 | |||
| Random Effects | |||||
| 2nd Task (Map—Grab & Move Interaction) Completion Time | 1 | 12 | 25.58 | 27.158 | |
| 2 | 8 | 54.25 | 36.850 | ||
| 3 | 10 | 58.80 | 20.741 | ||
| Total | 30 | 44.30 | 31.398 | ||
| Model | Fixed Effects | 28.212 | |||
| Random Effects | |||||
| 3rd Task (Change Music/TV Ch—Poke Interaction) Completion Time | 1 | 12 | 33.42 | 19.228 | |
| 2 | 8 | 24.63 | 14.162 | ||
| 3 | 10 | 41.60 | 16.399 | ||
| Total | 30 | 33.80 | 17.787 | ||
| Model | Fixed Effects | 17.096 | |||
| Random Effects | |||||
| 4th Task (Put Jewellery—Grab & Move + Grab & Place Interaction) Completion Time | 1 | 12 | 21.50 | 11.302 | |
| 2 | 8 | 23.38 | 11.057 | ||
| 3 | 10 | 38.60 | 17.977 | ||
| Total | 30 | 27.70 | 15.501 | ||
| Model | Fixed Effects | 13.837 | |||
| Random Effects | |||||
| 5th Task (Sword & Cut—Grab & Use Interaction) Completion Time | 1 | 12 | 20.25 | 11.871 | |
| 2 | 8 | 23.38 | 11.488 | ||
| 3 | 10 | 33.30 | 15.720 | ||
| Total | 30 | 25.43 | 13.987 | ||
| Model | Fixed Effects | 13.191 | |||
| Random Effects | |||||
| 6th Task (Pistol & Shoot—Grab & Move + Grab & Use Interaction) Completion Time | 1 | 12 | 53.83 | 21.294 | |
| 2 | 8 | 32.25 | 10.553 | ||
| 3 | 10 | 53.10 | 16.231 | ||
| Total | 30 | 47.83 | 19.289 | ||
| Model | Fixed Effects | 17.361 | |||
| Random Effects | |||||
| 1st Task (Alphabet Cards—Distance Grab + Pinch Interaction) Completion Time | 1 | 12 | 30.30 | 21.734 | |
| 2 | 8 | 49.05 | 40.580 | ||
| 3 | 10 | 21.36 | 4.527 | ||
| Total | 30 | 32.32 | 26.520 | ||
| Model | Fixed Effects | 25.024 | |||
| Random Effects | |||||
| 2nd Task (Alphabet Cards—Two-handed, Distance Grab + Pinch + Poke Interaction) Completion Time | 1 | 12 | 39.22 | 22.222 | |
| 2 | 8 | 36.22 | 13.764 | ||
| 3 | 10 | 69.21 | 37.228 | ||
| Total | 30 | 48.42 | 29.804 | ||
| Model | Fixed Effects | 26.688 | |||
| Random Effects | |||||
| Air Writing (Pinch + Move Interaction) | 1 | 12 | 64.01 | 15.376 | |
| 2 | 8 | 68.27 | 26.401 | ||
| 3 | 10 | 80.88 | 11.873 | ||
| Total | 30 | 70.77 | 18.909 | ||
| Model | Fixed Effects | 18.001 | |||
| Random Effects | |||||
| Board Writing (Grab or Pinch + Move interaction) | 1 | 12 | 80.01 | 19.229 | |
| 2 | 8 | 85.34 | 32.991 | ||
| 3 | 10 | 101.11 | 14.839 | ||
| Total | 30 | 88.46 | 23.638 | ||
| Model | Fixed Effects | 22.500 | |||
| Random Effects | |||||
| Typewriter (Poke Interaction) | 1 | 12 | 96.00 | 23.063 | |
| 2 | 8 | 102.39 | 39.581 | ||
| 3 | 10 | 121.32 | 17.793 | ||
| Total | 30 | 106.14 | 28.356 | ||
| Model | Fixed Effects | 26.989 | |||
| Random Effects | |||||
| Errors in Grab Interaction | 1 | 12 | 2.17 | 1.193 | |
| 2 | 8 | 2.75 | 2.252 | ||
| 3 | 10 | 4.20 | 2.251 | ||
| Total | 30 | 3.00 | 2.034 | ||
| Model | Fixed Effects | 1.893 | |||
| Random Effects | |||||
| Errors in Grab and Move Interaction | 1 | 12 | 8.17 | 5.734 | |
| 2 | 8 | 7.75 | 4.833 | ||
| 3 | 10 | 11.40 | 6.004 | ||
| Total | 30 | 9.13 | 5.655 | ||
| Model | Fixed Effects | 5.609 | |||
| Random Effects | |||||
| Errors in Grab and Use Sword | 1 | 12 | 4.58 | 4.420 | |
| 2 | 8 | 2.88 | 1.727 | ||
| 3 | 10 | 4.40 | 3.688 | ||
| Total | 30 | 4.07 | 3.591 | ||
| Model | Fixed Effects | 3.642 | |||
| Random Effects | |||||
| Errors in Grab and Use Pistol | 1 | 12 | 8.75 | 6.510 | |
| 2 | 8 | 4.25 | 4.590 | ||
| 3 | 10 | 4.70 | 4.218 | ||
| Total | 30 | 6.20 | 5.586 | ||
| Model | Fixed Effects | 5.354 | |||
| Random Effects | |||||
| Errors in Distance Grab Interaction | 1 | 12 | 1.00 | 1.954 | |
| 2 | 8 | 1.75 | 2.435 | ||
| 3 | 10 | 4.00 | 2.667 | ||
| Total | 30 | 2.20 | 2.618 | ||
| Model | Fixed Effects | 2.337 | |||
| Random Effects | |||||
| Errors in Poke Interaction on Alphabet Card | 1 | 12 | 0.83 | 1.115 | |
| 2 | 8 | 1.63 | 2.200 | ||
| 3 | 10 | 2.80 | 3.706 | ||
| Total | 30 | 1.70 | 2.575 | ||
| Model | Fixed Effects | 2.518 | |||
| Random Effects | |||||
| Errors in Pinch and Move Interaction to Write in Air | 1 | 12 | 1.83 | 1.850 | |
| 2 | 8 | 1.00 | 0.926 | ||
| 3 | 10 | 2.50 | 2.321 | ||
| Total | 30 | 1.83 | 1.877 | ||
| Model | Fixed Effects | 1.848 | |||
| Random Effects | |||||
| Errors in Grab, Move, and Use Interaction to Write on Board | 1 | 12 | 2.08 | 1.621 | |
| 2 | 8 | 1.00 | 1.773 | ||
| 3 | 10 | 3.10 | 2.514 | ||
| Total | 30 | 2.13 | 2.097 | ||
| Model | Fixed Effects | 1.998 | |||
| Random Effects | |||||
| Errors in Poke Interaction on Urdu Keyboard | 1 | 12 | 1.67 | 2.807 | |
| 2 | 8 | 2.25 | 2.659 | ||
| 3 | 10 | 1.30 | 1.059 | ||
| Total | 30 | 1.70 | 2.277 | ||
| Model | Fixed Effects | 2.328 | |||
| Random Effects | |||||
| The user is confident while interacting with the VR. | 1 | 12 | 4.75 | 0.622 | |
| 2 | 8 | 4.88 | 0.354 | ||
| 3 | 10 | 4.30 | 0.823 | ||
| Total | 30 | 4.63 | 0.669 | ||
| Model | Fixed Effects | 0.645 | |||
| Random Effects | |||||
| The user required external guidance to complete the task. | 1 | 12 | 1.33 | 0.778 | |
| 2 | 8 | 1.13 | 0.354 | ||
| 3 | 10 | 2.30 | 0.483 | ||
| Total | 30 | 1.60 | 0.770 | ||
| Model | Fixed Effects | 0.598 | |||
| Random Effects | |||||
| The user tries to interact with every object. | 1 | 12 | 3.08 | 1.165 | |
| 2 | 8 | 2.88 | 1.458 | ||
| 3 | 10 | 1.40 | 0.699 | ||
| Total | 30 | 2.47 | 1.332 | ||
| Model | Fixed Effects | 1.125 | |||
| Random Effects | |||||
| The user is following the in-app instructions. | 1 | 12 | 3.58 | 0.900 | |
| 2 | 8 | 4.00 | 1.195 | ||
| 3 | 10 | 4.80 | 0.422 | ||
| Total | 30 | 4.10 | 0.995 | ||
| Model | Fixed Effects | 0.872 | |||
| Random Effects | |||||
| The user tried varied poses to interact with the objects. | 1 | 12 | 2.33 | 1.371 | |
| 2 | 8 | 1.88 | 0.835 | ||
| 3 | 10 | 1.70 | 0.949 | ||
| Total | 30 | 2.00 | 1.114 | ||
| Model | Fixed Effects | 1.116 | |||
| Random Effects | |||||
| Total Interaction in Level 1 | 1 | 12 | 27.67 | 7.820 | |
| 2 | 8 | 28.38 | 10.446 | ||
| 3 | 10 | 25.40 | 6.022 | ||
| Total | 30 | 27.10 | 7.897 | ||
| Model | Fixed Effects | 8.080 | |||
| Random Effects | |||||
| Total Interactions in Level 2 | 1 | 12 | 8.58 | 1.443 | |
| 2 | 8 | 9.38 | 2.825 | ||
| 3 | 10 | 9.10 | 2.132 | ||
| Total | 30 | 8.97 | 2.059 | ||
| Model | Fixed Effects | 2.105 | |||
| Random Effects | |||||
| I feel discomfort after using VR. | 1 | 12 | 1.17 | 0.389 | |
| 2 | 8 | 1.25 | 0.463 | ||
| 3 | 10 | 1.50 | 0.707 | ||
| Total | 30 | 1.30 | 0.535 | ||
| Model | Fixed Effects | 0.533 | |||
| Random Effects | |||||
| I feel fatigued after using VR. | 1 | 12 | 1.25 | 0.622 | |
| 2 | 8 | 1.13 | 0.354 | ||
| 3 | 10 | 1.00 | 0.000 | ||
| Total | 30 | 1.13 | 0.434 | ||
| Model | Fixed Effects | 0.436 | |||
| Random Effects | |||||
| The user is in postural sway while standing. | 1 | 12 | 1.08 | 0.289 | |
| 2 | 8 | 1.00 | 0.000 | ||
| 3 | 10 | 1.30 | 0.483 | ||
| Total | 30 | 1.13 | 0.346 | ||
| Model | Fixed Effects | 0.334 | |||
| Random Effects | |||||
Appendix B
Table A2.
Descriptive Values of ANOVA with Interaction Modality as the independent variable. 1—Controllers, 2—Hands.
| Measure | Interaction Modality | N | Mean | Std. Deviation | |
|---|---|---|---|---|---|
| 1st Task (Object Interaction—Grab Interaction) Completion Time | 1 | 12 | 49.33 | 20.219 | |
| 2 | 18 | 50.33 | 19.560 | ||
| Total | 30 | 49.93 | 19.483 | ||
| Model | Fixed Effects | 19.821 | |||
| Random Effects | |||||
| 2nd Task (Map—Grab & Move Interaction) Completion Time | 1 | 12 | 35.50 | 23.283 | |
| 2 | 18 | 50.17 | 35.211 | ||
| Total | 30 | 44.30 | 31.398 | ||
| Model | Fixed Effects | 31.076 | |||
| Random Effects | |||||
| 3rd Task (Change Music/TV Ch—Poke Interaction) Completion Time | 1 | 12 | 34.83 | 17.251 | |
| 2 | 18 | 33.11 | 18.598 | ||
| Total | 30 | 33.80 | 17.787 | ||
| Model | Fixed Effects | 18.081 | |||
| Random Effects | |||||
| 4th Task (Put Jewellery—Grab & Move + Grab & Place Interaction) Completion Time | 1 | 12 | 22.17 | 9.916 | |
| 2 | 18 | 31.39 | 17.614 | ||
| Total | 30 | 27.70 | 15.501 | ||
| Model | Fixed Effects | 15.066 | |||
| Random Effects | |||||
| 5th Task (Sword & Cut—Grab & Use Interaction) Completion Time | 1 | 12 | 24.58 | 14.126 | |
| 2 | 18 | 26.00 | 14.275 | ||
| Total | 30 | 25.43 | 13.987 | ||
| Model | Fixed Effects | 14.216 | |||
| Random Effects | |||||
| 6th Task (Pistol & Shoot—Grab & Move + Grab & Use Interaction) Completion Time | 1 | 12 | 35.75 | 10.931 | |
| 2 | 18 | 55.89 | 19.638 | ||
| Total | 30 | 47.83 | 19.289 | ||
| Model | Fixed Effects | 16.765 | |||
| Random Effects | |||||
| 1st Task (Alphabet Cards—Distance Grab + Pinch Interaction) Completion Time | 1 | 12 | 30.61 | 25.809 | |
| 2 | 18 | 33.46 | 27.665 | ||
| Total | 30 | 32.32 | 26.520 | ||
| Model | Fixed Effects | 26.951 | |||
| Random Effects | |||||
| 2nd Task (Alphabet Cards—Two-handed, Distance Grab + Pinch + Poke Interaction) Completion Time | 1 | 12 | 44.96 | 26.247 | |
| 2 | 18 | 50.72 | 32.488 | ||
| Total | 30 | 48.42 | 29.804 | ||
| Model | Fixed Effects | 30.190 | |||
| Random Effects | |||||
| Air Writing (Pinch + Move Interaction) | 1 | 12 | 74.42 | 17.347 | |
| 2 | 18 | 68.34 | 19.990 | ||
| Total | 30 | 70.77 | 18.909 | ||
| Model | Fixed Effects | 18.996 | |||
| Random Effects | |||||
| Board Writing (Grab or Pinch + Move interaction) | 1 | 12 | 93.02 | 21.672 | |
| 2 | 18 | 85.42 | 24.995 | ||
| Total | 30 | 88.46 | 23.638 | ||
| Model | Fixed Effects | 23.745 | |||
| Random Effects | |||||
| Typewriter (Poke Interaction) | 1 | 12 | 111.62 | 26.011 | |
| 2 | 18 | 102.49 | 29.977 | ||
| Total | 30 | 106.14 | 28.356 | ||
| Model | Fixed Effects | 28.485 | |||
| Random Effects | |||||
| Errors in Grab Interaction | 1 | 12 | 2.50 | 2.023 | |
| 2 | 18 | 3.33 | 2.029 | ||
| Total | 30 | 3.00 | 2.034 | ||
| Model | Fixed Effects | 2.027 | |||
| Random Effects | |||||
| Errors in Grab and Move Interaction | 1 | 12 | 6.75 | 4.181 | |
| 2 | 18 | 10.72 | 6.047 | ||
| Total | 30 | 9.13 | 5.655 | ||
| Model | Fixed Effects | 5.391 | |||
| Random Effects | |||||
| Errors in Grab and Use Sword | 1 | 12 | 2.25 | 1.603 | |
| 2 | 18 | 5.28 | 4.056 | ||
| Total | 30 | 4.07 | 3.591 | ||
| Model | Fixed Effects | 3.316 | |||
| Random Effects | |||||
| Errors in Grab and Use Pistol | 1 | 12 | 2.42 | 1.165 | |
| 2 | 18 | 8.72 | 5.959 | ||
| Total | 30 | 6.20 | 5.586 | ||
| Model | Fixed Effects | 4.700 | |||
| Random Effects | |||||
| Errors in Distance Grab Interaction | 1 | 12 | 2.00 | 2.256 | |
| 2 | 18 | 2.33 | 2.890 | ||
| Total | 30 | 2.20 | 2.618 | ||
| Model | Fixed Effects | 2.659 | |||
| Random Effects | |||||
| Errors in Poke Interaction on Alphabet Card | 1 | 12 | 1.58 | 1.782 | |
| 2 | 18 | 1.78 | 3.040 | ||
| Total | 30 | 1.70 | 2.575 | ||
| Model | Fixed Effects | 2.619 | |||
| Random Effects | |||||
| Errors in Pinch and Move Interaction to Write in Air | 1 | 12 | 1.42 | 1.621 | |
| 2 | 18 | 2.11 | 2.026 | ||
| Total | 30 | 1.83 | 1.877 | ||
| Model | Fixed Effects | 1.877 | |||
| Random Effects | |||||
| Errors in Grab, Move, and Use Interaction to Write on Board | 1 | 12 | 1.17 | 1.403 | |
| 2 | 18 | 2.78 | 2.264 | ||
| Total | 30 | 2.13 | 2.097 | ||
| Model | Fixed Effects | 1.971 | |||
| Random Effects | |||||
| Errors in Poke Interaction on Urdu Keyboard | 1 | 12 | 1.58 | 1.311 | |
| 2 | 18 | 1.78 | 2.777 | ||
| Total | 30 | 1.70 | 2.277 | ||
| Model | Fixed Effects | 2.315 | |||
| Random Effects | |||||
| The user is confident while interacting with the VR. | 1 | 12 | 4.92 | 0.289 | |
| 2 | 18 | 4.44 | 0.784 | ||
| Total | 30 | 4.63 | 0.669 | ||
| Model | Fixed Effects | 0.637 | |||
| Random Effects | |||||
| The user required external guidance to complete the task. | 1 | 12 | 1.50 | 0.522 | |
| 2 | 18 | 1.67 | 0.907 | ||
| Total | 30 | 1.60 | 0.770 | ||
| Model | Fixed Effects | 0.779 | |||
| Random Effects | |||||
| The user tries to interact with every object. | 1 | 12 | 2.33 | 1.231 | |
| 2 | 18 | 2.56 | 1.423 | ||
| Total | 30 | 2.47 | 1.332 | ||
| Model | Fixed Effects | 1.351 | |||
| Random Effects | |||||
| The user is following the in-app instructions. | 1 | 12 | 4.50 | 0.798 | |
| 2 | 18 | 3.83 | 1.043 | ||
| Total | 30 | 4.10 | 0.995 | ||
| Model | Fixed Effects | 0.954 | |||
| Random Effects | |||||
| The user tried varied poses to interact with the objects. | 1 | 12 | 1.67 | 0.888 | |
| 2 | 18 | 2.22 | 1.215 | ||
| Total | 30 | 2.00 | 1.114 | ||
| Model | Fixed Effects | 1.098 | |||
| Random Effects | |||||
| Total Interaction in Level 1 | 1 | 12 | 25.00 | 7.198 | |
| 2 | 18 | 28.50 | 8.227 | ||
| Total | 30 | 27.10 | 7.897 | ||
| Model | Fixed Effects | 7.839 | |||
| Random Effects | |||||
| Total Interactions in Level 2 | 1 | 12 | 9.25 | 2.527 | |
| 2 | 18 | 8.78 | 1.734 | ||
| Total | 30 | 8.97 | 2.059 | ||
| Model | Fixed Effects | 2.082 | |||
| Random Effects | |||||
| I feel discomfort after using VR. | 1 | 12 | 1.42 | 0.669 | |
| 2 | 18 | 1.22 | 0.428 | ||
| Total | 30 | 1.30 | 0.535 | ||
| Model | Fixed Effects | 0.535 | |||
| Random Effects | |||||
| I feel fatigued after using VR. | 1 | 12 | 1.17 | 0.389 | |
| 2 | 18 | 1.11 | 0.471 | ||
| Total | 30 | 1.13 | 0.434 | ||
| Model | Fixed Effects | 0.441 | |||
| Random Effects | |||||
| The user is in postural sway while standing. | 1 | 12 | 1.08 | 0.289 | |
| 2 | 18 | 1.17 | 0.383 | ||
| Total | 30 | 1.13 | 0.346 | ||
| Model | Fixed Effects | 0.349 | |||
| Random Effects | |||||
Appendix C
Table A3.
ANOVA Results with user type as the independent variable. 1—Tech-Literate, 2—Non-tech-Literate, 3—Nonliterate.
| Measure | Source | Sum of Squares | df | Mean Square | F | Sig. |
|---|---|---|---|---|---|---|
| 1st Task (Object Interaction—Grab Interaction) Completion Time | Between Groups | 1345.675 | 2 | 672.838 | 1.880 | 0.172 |
| Within Groups | 9662.192 | 27 | 357.859 | |||
| Total | 11,007.867 | 29 | ||||
| 2nd Task (Map—Grab & Move Interaction) Completion Time | Between Groups | 7098.283 | 2 | 3549.142 | 4.459 | 0.021 |
| Within Groups | 21,490.017 | 27 | 795.927 | |||
| Total | 28,588.300 | 29 | ||||
| 3rd Task (Change Music/TV Ch—Poke Interaction) Completion Time | Between Groups | 1283.608 | 2 | 641.804 | 2.196 | 0.131 |
| Within Groups | 7891.192 | 27 | 292.266 | |||
| Total | 9174.800 | 29 | ||||
| 4th Task (Put Jewellery—Grab & Move + Grab & Place Interaction) Completion Time | Between Groups | 1799.025 | 2 | 899.513 | 4.698 | 0.018 |
| Within Groups | 5169.275 | 27 | 191.455 | |||
| Total | 6968.300 | 29 | ||||
| 5th Task (Sword & Cut—Grab & Use Interaction) Completion Time | Between Groups | 975.142 | 2 | 487.571 | 2.802 | 0.078 |
| Within Groups | 4698.225 | 27 | 174.008 | |||
| Total | 5673.367 | 29 | ||||
| 6th Task (Pistol & Shoot—Grab & Move + Grab & Use Interaction) Completion Time | Between Groups | 2652.100 | 2 | 1326.050 | 4.399 | 0.022 |
| Within Groups | 8138.067 | 27 | 301.410 | |||
| Total | 10,790.167 | 29 | ||||
| 1st Task (Alphabet Cards—Distance Grab + Pinch Interaction) Completion Time | Between Groups | 3489.324 | 2 | 1744.662 | 2.786 | 0.079 |
| Within Groups | 16,907.404 | 27 | 626.200 | |||
| Total | 20,396.728 | 29 | ||||
| 2nd Task (Alphabet Cards—Two-handed, Distance Grab + Pinch + Poke Interaction) Completion Time | Between Groups | 6528.401 | 2 | 3264.201 | 4.583 | 0.019 |
| Within Groups | 19,231.381 | 27 | 712.273 | |||
| Total | 25,759.782 | 29 | ||||
| Air Writing (Pinch + Move Interaction) | Between Groups | 1620.563 | 2 | 810.281 | 2.501 | 0.101 |
| Within Groups | 8748.740 | 27 | 324.027 | |||
| Total | 10,369.303 | 29 | ||||
| Board Writing (Grab or Pinch + Move interaction) | Between Groups | 2535.393 | 2 | 1267.696 | 2.504 | 0.101 |
| Within Groups | 13,668.257 | 27 | 506.232 | |||
| Total | 16,203.650 | 29 | ||||
| Typewriter (Poke Interaction) | Between Groups | 3650.809 | 2 | 1825.404 | 2.506 | 0.100 |
| Within Groups | 19,666.825 | 27 | 728.401 | |||
| Total | 23,317.634 | 29 | ||||
| Errors in Grab Interaction | Between Groups | 23.233 | 2 | 11.617 | 3.241 | 0.055 |
| Within Groups | 96.767 | 27 | 3.584 | |||
| Total | 120.000 | 29 | ||||
| Errors in Grab and Move Interaction | Between Groups | 77.900 | 2 | 38.950 | 1.238 | 0.306 |
| Within Groups | 849.567 | 27 | 31.465 | |||
| Total | 927.467 | 29 | ||||
| Errors in Grab and Use Sword | Between Groups | 15.675 | 2 | 7.838 | 0.591 | 0.561 |
| Within Groups | 358.192 | 27 | 13.266 | |||
| Total | 373.867 | 29 | ||||
| Errors in Grab and Use Pistol | Between Groups | 130.950 | 2 | 65.475 | 2.284 | 0.121 |
| Within Groups | 773.850 | 27 | 28.661 | |||
| Total | 904.800 | 29 | ||||
| Errors in Distance Grab Interaction | Between Groups | 51.300 | 2 | 25.650 | 4.695 | 0.018 |
| Within Groups | 147.500 | 27 | 5.463 | |||
| Total | 198.800 | 29 | ||||
| Errors in Poke Interaction on Alphabet Card | Between Groups | 21.158 | 2 | 10.579 | 1.669 | 0.207 |
| Within Groups | 171.142 | 27 | 6.339 | |||
| Total | 192.300 | 29 | ||||
| Errors in Pinch and Move Interaction to Write in Air | Between Groups | 10.000 | 2 | 5.000 | 1.465 | 0.249 |
| Within Groups | 92.167 | 27 | 3.414 | |||
| Total | 102.167 | 29 | ||||
| Errors in Grab, Move, and Use Interaction to Write on Board | Between Groups | 19.650 | 2 | 9.825 | 2.460 | 0.104 |
| Within Groups | 107.817 | 27 | 3.993 | |||
| Total | 127.467 | 29 | ||||
| Errors in Poke Interaction on Urdu Keyboard | Between Groups | 4.033 | 2 | 2.017 | 0.372 | 0.693 |
| Within Groups | 146.267 | 27 | 5.417 | |||
| Total | 150.300 | 29 | ||||
| The user is confident while interacting with the VR. | Between Groups | 1.742 | 2 | 0.871 | 2.095 | 0.143 |
| Within Groups | 11.225 | 27 | 0.416 | |||
| Total | 12.967 | 29 | ||||
| The user required external guidance to complete the task. | Between Groups | 7.558 | 2 | 3.779 | 10.583 | 0.000 |
| Within Groups | 9.642 | 27 | 0.357 | |||
| Total | 17.200 | 29 | ||||
| The user tries to interact with every object. | Between Groups | 17.275 | 2 | 8.638 | 6.821 | 0.004 |
| Within Groups | 34.192 | 27 | 1.266 | |||
| Total | 51.467 | 29 | ||||
| The user is following the in-app instructions. | Between Groups | 8.183 | 2 | 4.092 | 5.385 | 0.011 |
| Within Groups | 20.517 | 27 | 0.760 | |||
| Total | 28.700 | 29 | ||||
| The user tried varied poses to interact with the objects. | Between Groups | 2.358 | 2 | 1.179 | 0.946 | 0.401 |
| Within Groups | 33.642 | 27 | 1.246 | |||
| Total | 36.000 | 29 | ||||
| Total Interaction in Level 1 | Between Groups | 45.758 | 2 | 22.879 | 0.350 | 0.708 |
| Within Groups | 1762.942 | 27 | 65.294 | |||
| Total | 1808.700 | 29 | ||||
| Total Interactions in Level 2 | Between Groups | 3.275 | 2 | 1.637 | 0.369 | 0.695 |
| Within Groups | 119.692 | 27 | 4.433 | |||
| Total | 122.967 | 29 | ||||
| I feel discomfort after using VR. | Between Groups | 0.633 | 2 | 0.317 | 1.115 | 0.342 |
| Within Groups | 7.667 | 27 | 0.284 | |||
| Total | 8.300 | 29 | ||||
| I feel fatigued after using VR. | Between Groups | 0.342 | 2 | 0.171 | 0.900 | 0.418 |
| Within Groups | 5.125 | 27 | 0.190 | |||
| Total | 5.467 | 29 | ||||
| The user is in postural sway while standing. | Between Groups | 0.450 | 2 | 0.225 | 2.014 | 0.153 |
| Within Groups | 3.017 | 27 | 0.112 | |||
| Total | 3.467 | 29 | ||||
Appendix D
Table A4.
ANOVA Results with interaction modality as the independent variable. 1—Controllers, 2—Hands.
| Measure | Source | Sum of Squares | df | Mean Square | F | Sig. |
|---|---|---|---|---|---|---|
| 1st Task (Object Interaction—Grab Interaction) Completion Time | Between Groups | 7.200 | 1 | 7.200 | 0.018 | 0.893 |
| Within Groups | 11,000.667 | 28 | 392.881 | |||
| Total | 11,007.867 | 29 | ||||
| 2nd Task (Map—Grab & Move Interaction) Completion Time | Between Groups | 1548.800 | 1 | 1548.800 | 1.604 | 0.216 |
| Within Groups | 27,039.500 | 28 | 965.696 | |||
| Total | 28,588.300 | 29 | ||||
| 3rd Task (Change Music/TV Ch—Poke Interaction) Completion Time | Between Groups | 21.356 | 1 | 21.356 | 0.065 | 0.800 |
| Within Groups | 9153.444 | 28 | 326.909 | |||
| Total | 9174.800 | 29 | ||||
| 4th Task (Put Jewellery—Grab & Move + Grab & Place Interaction) Completion Time | Between Groups | 612.356 | 1 | 612.356 | 2.698 | 0.112 |
| Within Groups | 6355.944 | 28 | 226.998 | |||
| Total | 6968.300 | 29 | ||||
| 5th Task (Sword & Cut—Grab & Use Interaction) Completion Time | Between Groups | 14.450 | 1 | 14.450 | 0.071 | 0.791 |
| Within Groups | 5658.917 | 28 | 202.104 | |||
| Total | 5673.367 | 29 | ||||
| 6th Task (Pistol & Shoot—Grab & Move + Grab & Use Interaction) Completion Time | Between Groups | 2920.139 | 1 | 2920.139 | 10.389 | 0.003 |
| Within Groups | 7870.028 | 28 | 281.072 | |||
| Total | 10,790.167 | 29 | ||||
| 1st Task (Alphabet Cards—Distance Grab + Pinch Interaction) Completion Time | Between Groups | 58.596 | 1 | 58.596 | 0.081 | 0.778 |
| Within Groups | 20,338.132 | 28 | 726.362 | |||
| Total | 20,396.728 | 29 | ||||
| 2nd Task (Alphabet Cards—Two-handed, Distance Grab + Pinch + Poke Interaction) Completion Time | Between Groups | 239.201 | 1 | 239.201 | 0.262 | 0.612 |
| Within Groups | 25,520.580 | 28 | 911.449 | |||
| Total | 25,759.782 | 29 | ||||
| Air Writing (Pinch + Move Interaction) | Between Groups | 265.964 | 1 | 265.964 | 0.737 | 0.398 |
| Within Groups | 10,103.339 | 28 | 360.834 | |||
| Total | 10,369.303 | 29 | ||||
| Board Writing (Grab or Pinch + Move interaction) | Between Groups | 416.176 | 1 | 416.176 | 0.738 | 0.398 |
| Within Groups | 15,787.474 | 28 | 563.838 | |||
| Total | 16,203.650 | 29 | ||||
| Typewriter (Poke Interaction) | Between Groups | 599.148 | 1 | 599.148 | 0.738 | 0.397 |
| Within Groups | 22,718.486 | 28 | 811.375 | |||
| Total | 23,317.634 | 29 | ||||
| Errors in Grab Interaction | Between Groups | 5.000 | 1 | 5.000 | 1.217 | 0.279 |
| Within Groups | 115.000 | 28 | 4.107 | |||
| Total | 120.000 | 29 | ||||
| Errors in Grab and Move Interaction | Between Groups | 113.606 | 1 | 113.606 | 3.908 | 0.058 |
| Within Groups | 813.861 | 28 | 29.066 | |||
| Total | 927.467 | 29 | ||||
| Errors in Grab and Use Sword | Between Groups | 66.006 | 1 | 66.006 | 6.003 | 0.021 |
| Within Groups | 307.861 | 28 | 10.995 | |||
| Total | 373.867 | 29 | ||||
| Errors in Grab and Use Pistol | Between Groups | 286.272 | 1 | 286.272 | 12.959 | 0.001 |
| Within Groups | 618.528 | 28 | 22.090 | |||
| Total | 904.800 | 29 | ||||
| Errors in Distance Grab Interaction | Between Groups | 0.800 | 1 | 0.800 | 0.113 | 0.739 |
| Within Groups | 198.000 | 28 | 7.071 | |||
| Total | 198.800 | 29 | ||||
| Errors in Poke Interaction on Alphabet Card | Between Groups | 0.272 | 1 | 0.272 | 0.040 | 0.844 |
| Within Groups | 192.028 | 28 | 6.858 | |||
| Total | 192.300 | 29 | ||||
| Errors in Pinch and Move Interaction to Write in Air | Between Groups | 3.472 | 1 | 3.472 | 0.985 | 0.329 |
| Within Groups | 98.694 | 28 | 3.525 | |||
| Total | 102.167 | 29 | ||||
| Errors in Grab, Move, and Use Interaction to Write on Board | Between Groups | 18.689 | 1 | 18.689 | 4.811 | 0.037 |
| Within Groups | 108.778 | 28 | 3.885 | |||
| Total | 127.467 | 29 | ||||
| Errors in Poke Interaction on Urdu Keyboard | Between Groups | 0.272 | 1 | 0.272 | 0.051 | 0.823 |
| Within Groups | 150.028 | 28 | 5.358 | |||
| Total | 150.300 | 29 | ||||
| The user is confident while interacting with the VR. | Between Groups | 1.606 | 1 | 1.606 | 3.957 | 0.057 |
| Within Groups | 11.361 | 28 | 0.406 | |||
| Total | 12.967 | 29 | ||||
| The user required external guidance to complete the task. | Between Groups | 0.200 | 1 | 0.200 | 0.329 | 0.571 |
| Within Groups | 17.000 | 28 | 0.607 | |||
| Total | 17.200 | 29 | ||||
| The user tries to interact with every object. | Between Groups | 0.356 | 1 | 0.356 | 0.195 | 0.662 |
| Within Groups | 51.111 | 28 | 1.825 | |||
| Total | 51.467 | 29 | ||||
| The user is following the in-app instructions. | Between Groups | 3.200 | 1 | 3.200 | 3.514 | 0.071 |
| Within Groups | 25.500 | 28 | 0.911 | |||
| Total | 28.700 | 29 | ||||
| The user tried varied poses to interact with the objects. | Between Groups | 2.222 | 1 | 2.222 | 1.842 | 0.186 |
| Within Groups | 33.778 | 28 | 1.206 | |||
| Total | 36.000 | 29 | ||||
| Total Interaction in Level 1 | Between Groups | 88.200 | 1 | 88.200 | 1.435 | 0.241 |
| Within Groups | 1720.500 | 28 | 61.446 | |||
| Total | 1808.700 | 29 | ||||
| Total Interactions in Level 2 | Between Groups | 1.606 | 1 | 1.606 | 0.370 | 0.548 |
| Within Groups | 121.361 | 28 | 4.334 | |||
| Total | 122.967 | 29 | ||||
| I feel discomfort after using VR. | Between Groups | 0.272 | 1 | 0.272 | 0.949 | 0.338 |
| Within Groups | 8.028 | 28 | 0.287 | |||
| Total | 8.300 | 29 | ||||
| I feel fatigued after using VR. | Between Groups | 0.022 | 1 | 0.022 | 0.114 | 0.738 |
| Within Groups | 5.444 | 28 | 0.194 | |||
| Total | 5.467 | 29 | ||||
| The user is in postural sway while standing. | Between Groups | 0.050 | 1 | 0.050 | 0.410 | 0.527 |
| Within Groups | 3.417 | 28 | 0.122 | |||
| Total | 3.467 | 29 | ||||
Appendix E
Table A5.
Estimates of Multivariate Analysis.
| Dependent Variable | Use of Recent Technology | User Type | Interaction Modality | Mean | Std. Error | 95% CI Lower Bound | 95% CI Upper Bound |
|---|---|---|---|---|---|---|---|
| 2nd Task (Map—Grab & Move Interaction) Completion Time | No | Nonliterate | Controllers | 78.000 | 22.335 | 31.681 | 124.319 |
| Hands | 87.500 | 15.793 | 54.747 | 120.253 | |||
| Non-Tech-Literate | Controllers | . | . | . | . | ||
| Hands | . | . | . | . | |||
| Tech-Literate | Controllers | . | . | . | . | ||
| Hands | . | . | . | . | |||
| Yes | Nonliterate | Controllers | 48.000 | 11.167 | 24.840 | 71.160 | |
| Hands | 47.667 | 12.895 | 20.924 | 74.409 | |||
| Non-Tech-Literate | Controllers | 23.500 | 11.167 | 0.340 | 46.660 | ||
| Hands | 85.000 | 11.167 | 61.840 | 108.160 | |||
| Tech-Literate | Controllers | 20.667 | 12.895 | −6.076 | 47.409 | ||
| Hands | 27.222 | 7.445 | 11.782 | 42.662 | |||
| 4th Task (Put Jewellery—Grab & Move + Grab & Place Interaction) Completion Time | No | Nonliterate | Controllers | 31.000 | 10.670 | 8.872 | 53.128 |
| Hands | 68.000 | 7.545 | 52.353 | 83.647 | |||
| Non-Tech-Literate | Controllers | . | . | . | . | ||
| Hands | . | . | . | . | |||
| Tech-Literate | Controllers | . | . | . | . | ||
| Hands | . | . | . | . | |||
| Yes | Nonliterate | Controllers | 29.250 | 5.335 | 18.186 | 40.314 | |
| Hands | 34.000 | 6.160 | 21.224 | 46.776 | |||
| Non-Tech-Literate | Controllers | 16.250 | 5.335 | 5.186 | 27.314 | ||
| Hands | 30.500 | 5.335 | 19.436 | 41.564 | |||
| Tech-Literate | Controllers | 17.667 | 6.160 | 4.891 | 30.443 | ||
| Hands | 22.778 | 3.557 | 15.402 | 30.154 | |||
| 6th Task (Pistol & Shoot—Grab & Move + Grab & Use Interaction) Completion Time | No | Nonliterate | Controllers | 50.000 | 13.596 | 21.804 | 78.196 |
| Hands | 46.500 | 9.614 | 26.562 | 66.438 | |||
| Non-Tech-Literate | Controllers | . | . | . | . | ||
| Hands | . | . | . | . | |||
| Tech-Literate | Controllers | . | . | . | . | ||
| Hands | . | . | . | . | |||
| Yes | Nonliterate | Controllers | 42.500 | 6.798 | 28.402 | 56.598 | |
| Hands | 72.667 | 7.850 | 56.387 | 88.946 | |||
| Non-Tech-Literate | Controllers | 30.000 | 6.798 | 15.902 | 44.098 | ||
| Hands | 34.500 | 6.798 | 20.402 | 48.598 | |||
| Tech-Literate | Controllers | 29.667 | 7.850 | 13.387 | 45.946 | ||
| Hands | 61.889 | 4.532 | 52.490 | 71.288 | |||
| 2nd Task (Alphabet Cards—Two-handed, Distance Grab + Pinch + Poke Interaction) Completion Time | No | Nonliterate | Controllers | 78.600 | 24.022 | 28.782 | 128.418 |
| Hands | 117.800 | 16.986 | 82.574 | 153.026 | |||
| Non-Tech-Literate | Controllers | . | . | . | . | ||
| Hands | . | . | . | . | |||
| Tech-Literate | Controllers | . | . | . | . | ||
| Hands | . | . | . | . | |||
| Yes | Nonliterate | Controllers | 52.350 | 12.011 | 27.441 | 77.259 | |
| Hands | 56.167 | 13.869 | 27.404 | 84.929 | |||
| Non-Tech-Literate | Controllers | 33.100 | 12.011 | 8.191 | 58.009 | ||
| Hands | 39.350 | 12.011 | 14.441 | 64.259 | |||
| Tech-Literate | Controllers | 39.700 | 13.869 | 10.938 | 68.462 | ||
| Hands | 39.056 | 8.007 | 22.450 | 55.661 | |||
| The user required external guidance to complete the task. | No | Nonliterate | Controllers | 2.000 | 0.589 | 0.778 | 3.222 |
| Hands | 3.000 | 0.417 | 2.136 | 3.864 | |||
| Non-Tech-Literate | Controllers | . | . | . | . | ||
| Hands | . | . | . | . | |||
| Tech-Literate | Controllers | . | . | . | . | ||
| Hands | . | . | . | . | |||
| Yes | Nonliterate | Controllers | 2.000 | 0.295 | 1.389 | 2.611 | |
| Hands | 2.333 | 0.340 | 1.628 | 3.039 | |||
| Non-Tech-Literate | Controllers | 1.250 | 0.295 | 0.639 | 1.861 | ||
| Hands | 1.000 | 0.295 | 0.389 | 1.611 | |||
| Tech-Literate | Controllers | 1.000 | 0.340 | 0.294 | 1.706 | ||
| Hands | 1.444 | 0.196 | 1.037 | 1.852 | |||
| The user tries to interact with every object. | No | Nonliterate | Controllers | 1.000 | 1.180 | −1.447 | 3.447 |
| Hands | 1.000 | 0.834 | −0.731 | 2.731 | |||
| Non-Tech-Literate | Controllers | . | . | . | . | ||
| Hands | . | . | . | . | |||
| Tech-Literate | Controllers | . | . | . | . | ||
| Hands | . | . | . | . | |||
| Yes | Nonliterate | Controllers | 2.000 | 0.590 | 0.776 | 3.224 | |
| Hands | 1.000 | 0.681 | −0.413 | 2.413 | |||
| Non-Tech-Literate | Controllers | 2.500 | 0.590 | 1.276 | 3.724 | ||
| Hands | 3.250 | 0.590 | 2.026 | 4.474 | |||
| Tech-Literate | Controllers | 3.000 | 0.681 | 1.587 | 4.413 | ||
| Hands | 3.111 | 0.393 | 2.295 | 3.927 | |||
| The user is following the in-app instructions. | No | Nonliterate | Controllers | 4.000 | 0.870 | 2.195 | 5.805 |
| Hands | 4.500 | 0.615 | 3.224 | 5.776 | |||
| Non-Tech-Literate | Controllers | . | . | . | . | ||
| Hands | . | . | . | . | |||
| Tech-Literate | Controllers | . | . | . | . | ||
| Hands | . | . | . | . | |||
| Yes | Nonliterate | Controllers | 5.000 | 0.435 | 4.097 | 5.903 | |
| Hands | 5.000 | 0.503 | 3.958 | 6.042 | |||
| Non-Tech-Literate | Controllers | 4.250 | 0.435 | 3.347 | 5.153 | ||
| Hands | 3.750 | 0.435 | 2.847 | 4.653 | |||
| Tech-Literate | Controllers | 4.333 | 0.503 | 3.291 | 5.375 | ||
| Hands | 3.333 | 0.290 | 2.732 | 3.935 | |||
| Errors in Grab and Use Sword | No | Nonliterate | Controllers | 3.000 | 3.656 | −4.583 | 10.583 |
| Hands | 5.500 | 2.586 | 0.138 | 10.862 | |||
| Non-Tech-Literate | Controllers | . | . | . | . | ||
| Hands | . | . | . | . | |||
| Tech-Literate | Controllers | . | . | . | . | ||
| Hands | . | . | . | . | |||
| Yes | Nonliterate | Controllers | 3.000 | 1.828 | −0.792 | 6.792 | |
| Hands | 6.000 | 2.111 | 1.622 | 10.378 | |||
| Non-Tech-Literate | Controllers | 1.750 | 1.828 | −2.042 | 5.542 | ||
| Hands | 4.000 | 1.828 | 0.208 | 7.792 | |||
| Tech-Literate | Controllers | 1.667 | 2.111 | −2.711 | 6.045 | ||
| Hands | 5.556 | 1.219 | 3.028 | 8.083 | |||
| Errors in Grab and Use Pistol | No | Nonliterate | Controllers | 4.000 | 4.529 | −5.392 | 13.392 |
| Hands | 2.000 | 3.202 | −4.641 | 8.641 | |||
| Non-Tech-Literate | Controllers | . | . | . | . | ||
| Hands | . | . | . | . | |||
| Tech-Literate | Controllers | . | . | . | . | ||
| Hands | . | . | . | . | |||
| Yes | Nonliterate | Controllers | 2.500 | 2.264 | −2.196 | 7.196 | |
| Hands | 9.667 | 2.615 | 4.244 | 15.089 | |||
| Non-Tech-Literate | Controllers | 2.250 | 2.264 | −2.446 | 6.946 | ||
| Hands | 6.250 | 2.264 | 1.554 | 10.946 | |||
| Tech-Literate | Controllers | 2.000 | 2.615 | −3.422 | 7.422 | ||
| Hands | 11.000 | 1.510 | 7.869 | 14.131 | |||
| Errors in Distance Grab Interaction | No | Nonliterate | Controllers | 5.000 | 2.126 | 0.591 | 9.409 |
| Hands | 7.500 | 1.503 | 4.383 | 10.617 | |||
| Non-Tech-Literate | Controllers | . | . | . | . | ||
| Hands | . | . | . | . | |||
| Tech-Literate | Controllers | . | . | . | . | ||
| Hands | . | . | . | . | |||
| Yes | Nonliterate | Controllers | 1.750 | 1.063 | −0.454 | 3.954 | |
| Hands | 4.333 | 1.227 | 1.788 | 6.879 | |||
| Non-Tech-Literate | Controllers | 2.250 | 1.063 | 0.046 | 4.454 | ||
| Hands | 1.250 | 1.063 | −0.954 | 3.454 | |||
| Tech-Literate | Controllers | 1.000 | 1.227 | −1.545 | 3.545 | ||
| Hands | 1.000 | 0.709 | −0.470 | 2.470 | |||
| Errors in Grab, Move, and Use Interaction to Write on Board | No | Nonliterate | Controllers | 3.000 | 1.376 | 0.146 | 5.854 |
| Hands | 0.000 | 0.973 | −2.018 | 2.018 | |||
| Non-Tech-Literate | Controllers | . | . | . | . | ||
| Hands | . | . | . | . | |||
| Tech-Literate | Controllers | . | . | . | . | ||
| Hands | . | . | . | . | |||
| Yes | Nonliterate | Controllers | 2.500 | 0.688 | 1.073 | 3.927 | |
| Hands | 6.000 | 0.795 | 4.352 | 7.648 | |||
| Non-Tech-Literate | Controllers | 0.000 | 0.688 | −1.427 | 1.427 | ||
| Hands | 2.000 | 0.688 | 0.573 | 3.427 | |||
| Tech-Literate | Controllers | 0.333 | 0.795 | −1.314 | 1.981 | ||
| Hands | 2.667 | 0.459 | 1.715 | 3.618 | |||
Appendix F
Table A6.
Results of t-tests with controllers as an interaction modality.
| Test Value = 24.14 | ||||||
|---|---|---|---|---|---|---|
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Total Interactions in Level 1 | 0.913 | 4 | 0.413 | 1.460 | −2.98 | 5.90 |
| Test Value = 40 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 1 1st Task (Object Interaction—Grab Interaction) Completion Time | 2.452 | 4 | 0.070 | 14.200 | −1.88 | 30.28 |
| Test Value = 22.29 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 1 2nd Task (Map—Grab & Move Interaction) Completion Time | 4.536 | 4 | 0.011 | 21.110 | 8.19 | 34.03 |
| Test Value = 27.14 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 1 3rd Task (Change Music/TV Ch—Poke Interaction) Completion Time | 1.609 | 4 | 0.183 | 2.060 | −1.50 | 5.62 |
| Test Value = 16.86 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 1 4th Task (Put Jewelry—Grab & Move + Grab & Place Interaction) Completion Time | 3.611 | 4 | 0.023 | 14.740 | 3.41 | 26.07 |
| Test Value = 20 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 1 5th Task (Sword & Cut—Grab & Use Interaction) Completion Time | 3.162 | 4 | 0.034 | 4.000 | 0.49 | 7.51 |
| Test Value = 29.86 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 1 6th Task (Pistol & Shoot—Grab & Move + Grab & Use Interaction) Completion Time | 2.202 | 4 | 0.092 | 11.540 | −3.01 | 26.09 |
| Test Value = 9 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Total Interactions in Level 2 | 0.121 | 4 | 0.910 | 0.200 | −4.40 | 4.80 |
| Test Value = 38.54 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 2 1st Task (Alphabet Cards—Distance Grab + Pinch Interaction) Completion Time | −7.207 | 4 | 0.002 | −16.340 | −22.63 | −10.05 |
| Test Value = 35.93 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 2 2nd Task (Alphabet Cards—Two-handed, Distance Grab + Pinch + Poke Interaction) Completion Time | 0.311 | 4 | 0.771 | 3.870 | −30.68 | 38.42 |
| Test Value = 72.81 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 2 3rd Task Air Writing (Pinch + Move Interaction) | 1.950 | 4 | 0.123 | 10.130 | −4.29 | 24.55 |
| Test Value = 91.01 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 2 3rd Task Board Writing (Grab or Pinch + Move interaction) | 1.946 | 4 | 0.124 | 12.650 | −5.40 | 30.70 |
| Test Value = 109.19 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 2 3rd Task Typewriter (Poke Interaction) | 1.951 | 4 | 0.123 | 15.210 | −6.44 | 36.86 |
| Test Value = 1.14 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| External: User required external guidance to complete the task. | 0.300 | 4 | 0.779 | 0.060 | −0.50 | 0.62 |
| Test Value = 2.71 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Gameplay: User tries to interact with every object. | −5.348 | 4 | 0.006 | −1.310 | −1.99 | −0.63 |
| Test Value = 1.29 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 1 Errors in Grab Interaction | 1.850 | 4 | 0.138 | 1.110 | −0.56 | 2.78 |
| Test Value = 4.71 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 1 Errors in Grab and Move Interaction | 1.096 | 4 | 0.335 | 0.890 | −1.37 | 3.15 |
| Test Value = 4.0 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 1 Errors in Poke Interaction | −1.372 | 4 | 0.242 | −0.800 | −2.42 | 0.82 |
| Test Value = 1.71 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 1 Errors in Grab and Use Sword | −5.348 | 4 | 0.006 | −1.310 | −1.99 | −0.63 |
| Test Value = 2.14 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 1 Errors in Grab and Use Pistol | −1.919 | 4 | 0.127 | −0.940 | −2.30 | 0.42 |
| Test Value = 1.71 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 2 Errors in Distance Grab Interaction | −1.858 | 4 | 0.137 | −0.910 | −2.27 | 0.45 |
| Test Value = 0.86 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 2 Errors in Poke Interaction on Alphabet Card | 2.205 | 4 | 0.092 | 0.540 | −0.14 | 1.22 |
| Test Value = 0.71 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 2 Errors in Pinch and Move Interaction to Write in Air | −1.266 | 4 | 0.274 | −0.310 | −0.99 | 0.37 |
| Test Value = 0.14 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 2 Errors in Grab, Move and Use Interaction to Write on Board | 1.923 | 4 | 0.127 | 0.860 | −0.38 | 2.10 |
| Test Value = 1.86 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 2 Errors in Poke Interaction on Urdu Keyboard | −0.510 | 4 | 0.637 | −0.260 | −1.68 | 1.16 |
| Test Value = 1.29 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| VR Sickness: I feel discomfort. | −0.450 | 4 | 0.676 | −0.090 | −0.65 | 0.47 |
Appendix G
Table A7.
Results of t-tests with hands as interaction modality.
| Test Value = 30 | ||||||
|---|---|---|---|---|---|---|
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Total Interactions in Level 1 | −2.264 | 4 | 0.086 | −4.200 | −9.35 | 0.95 |
| Test Value = 48 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 1 1st Task (Object Interaction—Grab Interaction) Completion Time | −0.649 | 4 | 0.552 | −2.400 | −12.66 | 7.86 |
| Test Value = 45 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 1 2nd Task (Map—Grab & Move Interaction) Completion Time | −0.673 | 4 | 0.538 | −1.600 | −8.21 | 5.01 |
| Test Value = 31.38 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 1 3rd Task (Change Music/TV Ch—Poke Interaction) Completion Time | 1.625 | 4 | 0.180 | 8.020 | −5.68 | 21.72 |
| Test Value = 25.15 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 1 4th Task (Put Jewelry—Grab & Move + Grab & Place Interaction) Completion Time | 6.744 | 4 | 0.003 | 13.250 | 7.80 | 18.70 |
| Test Value = 22.31 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 1 5th Task (Sword & Cut—Grab & Use Interaction) Completion Time | 3.686 | 4 | 0.021 | 12.090 | 2.98 | 21.20 |
| Test Value = 53.46 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 1 6th Task (Pistol & Shoot—Grab & Move + Grab & Use Interaction) Completion Time | 1.283 | 4 | 0.269 | 9.940 | −11.58 | 31.46 |
| Test Value = 8.85 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Total Interactions in Level 2 | 0.495 | 4 | 0.647 | 1.150 | −5.30 | 7.60 |
| Test Value = 37.40 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 2 1st Task (Alphabet Cards—Distance Grab + Pinch Interaction) Completion Time | −1.564 | 4 | 0.193 | −5.800 | −16.10 | 4.50 |
| Test Value = 39.15 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 2 2nd Task (Alphabet Cards—Two-handed, Distance Grab + Pinch + Poke Interaction) Completion Time | 1.837 | 4 | 0.140 | 29.250 | −14.95 | 73.45 |
| Test Value = 61.89 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 2 3rd Task Air Writing (Pinch + Move Interaction) | 4.860 | 4 | 0.008 | 18.870 | 8.09 | 29.65 |
| Test Value = 77.36 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 2 3rd Task Board Writing (Grab or Pinch + Move interaction) | 4.839 | 4 | 0.008 | 23.560 | 10.04 | 37.08 |
| Test Value = 92.83 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 2 3rd Task Typewriter (Poke Interaction) | 4.849 | 4 | 0.008 | 28.290 | 12.09 | 44.49 |
| Test Value = 4.69 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| External User is confident while interacting with the VR. | −1.000 | 4 | 0.374 | −0.490 | −1.85 | 0.87 |
| Test Value = 1.31 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| External Users required external guidance to complete the task. | 2.379 | 4 | 0.076 | 0.890 | −0.15 | 1.93 |
| Test Value = 3.15 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Gameplay User tries to interact with every object. | −3.637 | 4 | 0.022 | −1.150 | −2.03 | −0.27 |
| Test Value = 3.46 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Gameplay User is following the in-app instructions. | 6.700 | 4 | 0.003 | 1.340 | 0.78 | 1.90 |
| Test Value = 2.46 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Gameplay User tried varied poses to interact with the objects. | −3.511 | 4 | 0.025 | −0.860 | −1.54 | −0.18 |
| Test Value = 3 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 1 Errors in Grab Interaction | −0.667 | 4 | 0.541 | −0.400 | −2.07 | 1.27 |
| Test Value = 9.77 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 1 Errors in Grab and Move Interaction | −4.548 | 4 | 0.010 | −4.770 | −7.68 | −1.86 |
| Test Value = 7.85 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 1 Errors in Poke Interaction | −16.169 | 4 | 0.000 | −6.050 | −7.09 | −5.01 |
| Test Value = 5.08 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 1 Errors in Grab and Use Sword | −10.941 | 4 | 0.000 | −2.680 | −3.36 | −2.00 |
| Test Value = 1.08 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 2 Errors in Distance Grab Interaction | 4.704 | 4 | 0.009 | 3.120 | 1.28 | 4.96 |
| Test Value = 1.31 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 2 Errors in Poke Interaction on Alphabet Card | −0.980 | 4 | 0.382 | −0.310 | −1.19 | 0.57 |
| Test Value = 1.92 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 2 Errors in Pinch and Move Interaction to Write in Air | −0.472 | 4 | 0.662 | −0.320 | −2.20 | 1.56 |
| Test Value = 2.46 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 2 Errors in Grab, Move and Use Interaction to Write on Board | 0.275 | 4 | 0.797 | 0.140 | −1.28 | 1.56 |
| Test Value = 1.92 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| Level 2 Errors in Poke Interaction on Urdu Keyboard | −0.800 | 4 | 0.469 | −0.320 | −1.43 | 0.79 |
| Test Value = 1.15 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| VR Sickness I feel discomfort. | 1.837 | 4 | 0.140 | 0.450 | −0.23 | 1.13 |
| Test Value = 1.15 | ||||||
| t | df | Sig. (2-tailed) | Mean Difference | 95% Confidence Interval of the Difference | ||
| Lower | Upper | |||||
| VR Sickness I feel fatigued. | 1.837 | 4 | 0.140 | 0.450 | −0.23 | 1.13 |
Appendix H
Table A8. Results of the non-parametric Kruskal–Wallis H-test with years of technological experience as the grouping variable.
| Chi-Square | df | Asymp. Sig. | |
|---|---|---|---|
| Total Interactions in Level 1 | 0.048 | 2 | 0.976 |
| Level 1 1st Task (Object Interaction—Grab Interaction) Completion Time | 2.091 | 2 | 0.351 |
| Level 1 2nd Task (Map—Grab & Move Interaction) Completion Time | 4.960 | 2 | 0.084 |
| Level 1 3rd Task (Change Music/TV Ch—Poke Interaction) Completion Time | 1.636 | 2 | 0.441 |
| Level 1 4th Task (Put Jewelry—Grab & Move + Grab & Place Interaction) Completion Time | 1.729 | 2 | 0.421 |
| Level 1 5th Task (Sword & Cut—Grab & Use Interaction) Completion Time | 1.401 | 2 | 0.496 |
| Level 1 6th Task (Pistol & Shoot—Grab & Move + Grab & Use Interaction) Completion Time | 0.273 | 2 | 0.873 |
| Total Interactions in Level 2 | 1.132 | 2 | 0.568 |
| Level 2 1st Task (Alphabet Cards—Distance Grab + Pinch Interaction) Completion Time | 0.021 | 2 | 0.990 |
| Level 2 2nd Task (Alphabet Cards—Two-handed, Distance Grab + Pinch + Poke Interaction) Completion Time | 0.491 | 2 | 0.782 |
| Level 2 3rd Task Air Writing (Pinch + Move Interaction) | 2.106 | 2 | 0.349 |
| Level 2 3rd Task Board Writing (Grab or Pinch + Move interaction) | 2.106 | 2 | 0.349 |
| Level 2 3rd Task Typewriter (Poke Interaction) | 2.106 | 2 | 0.349 |
| External User is confident while interacting with the VR. | 0.563 | 2 | 0.755 |
| External Users required external guidance to complete the task. | 1.953 | 2 | 0.377 |
| Gameplay User tries to interact with every object. | 2.025 | 2 | 0.363 |
| Gameplay User is following the in-app instructions. | 4.000 | 2 | 0.135 |
| Gameplay User tried varied poses to interact with the objects. | 1.500 | 2 | 0.472 |
| Level 1 Errors in Grab Interaction | 3.309 | 2 | 0.191 |
| Level 1 Errors in Grab and Move Interaction | 1.174 | 2 | 0.556 |
| Level 1 Errors in Poke Interaction | 2.559 | 2 | 0.278 |
| Level 1 Errors in Grab and Use Sword | 0.109 | 2 | 0.947 |
| Level 1 Errors in Grab and Use Pistol | 1.969 | 2 | 0.374 |
| Level 2 Errors in Distance Grab Interaction | 0.519 | 2 | 0.771 |
| Level 2 Errors in Poke Interaction on Alphabet Card | 0.375 | 2 | 0.829 |
| Level 2 Errors in Pinch and Move Interaction to Write in Air | 2.030 | 2 | 0.362 |
| Level 2 Errors in Grab, Move and Use Interaction to Write on Board | 1.047 | 2 | 0.593 |
| Level 2 Errors in Poke Interaction on Urdu Keyboard | 0.300 | 2 | 0.861 |
| VR Sickness I feel discomfort. | 0.563 | 2 | 0.755 |
| VR Sickness I feel fatigued. | 0.429 | 2 | 0.807 |
| VR Sickness User is in postural sway while standing. | 0.000 | 2 | 1.000 |
Notes
1. https://www.oculus.com/ (accessed on 22 September 2022).
2. https://www.vive.com (accessed on 22 September 2022).
3. https://www.unity.com (accessed on 22 September 2022).
4. https://developer.oculus.com/documentation/unity/unity-isdk-interaction-sdk-overview (accessed on 22 September 2022).
5. https://www.blender.org (accessed on 3 October 2022).
6. https://www.gimp.org (accessed on 3 October 2022).
7. https://www.audacityteam.org (accessed on 3 October 2022).
8. https://assetstore.unity.com (accessed on 3 October 2022).
9. https://quixel.com/megascans (accessed on 3 October 2022).
10. https://www.meta.com/quest/products/quest-2/ (accessed on 3 October 2022).
References
1. UNESCO-UIS. Literacy. Available online: http://uis.unesco.org/en/topic/literacy (accessed on 7 February 2021).
2. Lal, B.S. The Economic and Social Cost of Illiteracy: An Overview. Int. J. Adv. Res. Innov. Ideas Educ. 2015, 1, 663–670.
3. Literate Pakistan Foundation. Aagahi Adult Literacy Programme, Pakistan. Available online: https://uil.unesco.org/case-study/effective-practices-database-litbase-0/aagahi-adult-literacy-programme-pakistan (accessed on 2 July 2021).
4. UIL. National Literacy Programme, Pakistan. Available online: https://uil.unesco.org/case-study/effective-practices-database-litbase-0/national-literacy-programme-pakistan (accessed on 2 July 2021).
5. Iqbal, T.; Hammermüller, K.; Nussbaumer, A.; Tjoa, A.M. Towards Using Second Life for Supporting Illiterate Persons in Learning. In Proceedings of the World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education 2009, Vancouver, BC, Canada, 26–30 October 2009; pp. 2704–2709.
6. Iqbal, T.; Iqbal, S.; Hussain, S.S.; Khan, I.A.; Khan, H.U.; Rehman, A. Fighting Adult Illiteracy with the Help of the Environmental Print Material. PLoS ONE 2018, 13, e0201902.
7. Ur-Rehman, I.; Shamim, A.; Khan, T.A.; Elahi, M.; Mohsin, S. Mobile Based User-Centered Learning Environment for Adult Absolute Illiterates. Mob. Inf. Syst. 2016, 2016, 1841287.
8. Knowles, M. The Adult Learner: A Neglected Species, 3rd ed.; Gulf Publishing: Houston, TX, USA, 1984.
9. Pereira, A.; Ortiz, K.Z. Language Skills Differences between Adults without Formal Education and Low Formal Education. Psicol. Reflexão Crítica 2022, 35, 4.
10. van Linden, S.; Cremers, A.H.M. Cognitive Abilities of Functionally Illiterate Persons Relevant to ICT Use. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2008; Volume 5105 LNCS, pp. 705–712.
11. Katre, D.S. Unorganized Cognitive Structures of Illiterate as the Key Factor in Rural E-Learning Design. J. Educ. 2006, 2, 67–71.
12. ISO (International Organization for Standardization). Ergonomics of Human-System Interaction—Part 11: Usability: Definitions and Concepts. Available online: https://www.iso.org/standard/63500.html (accessed on 18 December 2022).
13. Bartneck, C.; Forlizzi, J. A Design-Centred Framework for Social Human-Robot Interaction. In Proceedings of the RO-MAN 2004, 13th IEEE International Workshop on Robot and Human Interactive Communication (IEEE Catalog No.04TH8759), Kurashiki, Japan, 22 September 2004; IEEE: New York, NY, USA, 2004; pp. 591–594.
14. Lalji, Z.; Good, J. Designing New Technologies for Illiterate Populations: A Study in Mobile Phone Interface Design. Interact. Comput. 2008, 20, 574–586.
15. Carvalho, M.B. Designing for Low-Literacy Users: A Framework for Analysis of User-Centred Design Methods. Master’s Thesis, Tampere University, Tampere, Finland, 2011.
16. Taoufik, I.; Kabaili, H.; Kettani, D. Designing an E-Government Portal Accessible to Illiterate Citizens. In Proceedings of the 1st International Conference on Theory and Practice of Electronic Governance—ICEGOV’07, Macau, China, 10–13 December 2007; p. 327.
17. Friscira, E.; Knoche, H.; Huang, J. Getting in Touch with Text: Designing a Mobile Phone Application for Illiterate Users to Harness SMS. In Proceedings of the 2nd ACM Symposium on Computing for Development—ACM DEV’12, Atlanta, GA, USA, 11–12 March 2012; p. 1.
18. Huenerfauth, M.P. Design Approaches for Developing User-Interfaces Accessible to Illiterate Users. Master’s Thesis, University College Dublin, Dublin, Ireland, 2002.
19. Rashid, S.; Khattak, A.; Ashiq, M.; Ur Rehman, S.; Rashid Rasool, M. Educational Landscape of Virtual Reality in Higher Education: Bibliometric Evidences of Publishing Patterns and Emerging Trends. Publications 2021, 9, 17.
20. Huettig, F.; Mishra, R.K. How Literacy Acquisition Affects the Illiterate Mind—A Critical Examination of Theories and Evidence. Lang. Linguist. Compass 2014, 8, 401–427.
21. Steuer, J. Defining Virtual Reality: Dimensions Determining Telepresence. J. Commun. 1992, 42, 73–93.
22. Slater, M. Immersion and the Illusion of Presence in Virtual Reality. Br. J. Psychol. 2018, 109, 431–433.
23. Zube, E.H. Environmental Perception. In Environmental Geology; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1999; pp. 214–216.
24. Ohno, R. A Hypothetical Model of Environmental Perception. In Theoretical Perspectives in Environment-Behavior Research; Springer: Boston, MA, USA, 2000; pp. 149–156.
25. Wilson, G.I.; Holton, M.D.; Walker, J.; Jones, M.W.; Grundy, E.; Davies, I.M.; Clarke, D.; Luckman, A.; Russill, N.; Wilson, V.; et al. A New Perspective on How Humans Assess Their Surroundings; Derivation of Head Orientation and Its Role in ‘Framing’ the Environment. PeerJ 2015, 3, e908.
26. Slater, M.; Sanchez-Vives, M.V. Enhancing Our Lives with Immersive Virtual Reality. Front. Robot. AI 2016, 3, 74.
27. Csikszentmihalyi, M. Flow: The Psychology of Optimal Experience; Harper & Row: New York, NY, USA, 2008.
28. Csikszentmihalyi, M.; LeFevre, J. Optimal Experience in Work and Leisure. J. Pers. Soc. Psychol. 1989, 56, 815–822.
29. Alexiou, A.; Schippers, M.; Oshri, I. Positive Psychology and Digital Games: The Role of Emotions and Psychological Flow in Serious Games Development. Psychology 2012, 3, 1243–1247.
30. Ruvimova, A.; Kim, J.; Fritz, T.; Hancock, M.; Shepherd, D.C. “Transport Me Away”: Fostering Flow in Open Offices through Virtual Reality. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020.
31. Jennett, C.; Cox, A.L.; Cairns, P.; Dhoparee, S.; Epps, A.; Tijs, T.; Walton, A. Measuring and Defining the Experience of Immersion in Games. Int. J. Hum. Comput. Stud. 2008, 66, 641–661.
32. Kim, D.; Ko, Y.J. The Impact of Virtual Reality (VR) Technology on Sport Spectators’ Flow Experience and Satisfaction. Comput. Hum. Behav. 2019, 93, 346–356.
33. Kiili, K. Content Creation Challenges and Flow Experience in Educational Games: The IT-Emperor Case. Internet High. Educ. 2005, 8, 183–198.
34. de Regt, A.; Barnes, S.J.; Plangger, K. The Virtual Reality Value Chain. Bus. Horiz. 2020, 63, 737–748.
35. Bodzin, A.; Robson, A., Jr.; Hammond, T.; Anastasio, D. Investigating Engagement and Flow with a Placed-Based Immersive Virtual Reality Game. J. Sci. Educ. Technol. 2020, 30, 347–360.
36. Csikszentmihalyi, M. Flow and Education. In Applications of Flow in Human Development and Education; Springer: Dordrecht, The Netherlands, 2014; pp. 129–151.
37. Chang, E.; Kim, H.T.; Yoo, B. Virtual Reality Sickness: A Review of Causes and Measurements. Int. J. Hum. Comput. Interact. 2020, 36, 1658–1682.
38. Bown, J.; White, E.; Boopalan, A. Looking for the Ultimate Display. In Boundaries of Self and Reality Online; Elsevier: Amsterdam, The Netherlands, 2017; pp. 239–259.
39. Meta Store. Apps That Support Hand Tracking on Meta Quest Headsets. Available online: https://www.meta.com/help/quest/articles/headsets-and-accessories/controllers-and-hand-tracking/#hand-tracking (accessed on 12 December 2022).
40. Seibert, J.; Shafer, D.M. Control Mapping in Virtual Reality: Effects on Spatial Presence and Controller Naturalness. Virtual Real. 2018, 22, 79–88.
41. Masurovsky, A.; Chojecki, P.; Runde, D.; Lafci, M.; Przewozny, D.; Gaebler, M. Controller-Free Hand Tracking for Grab-and-Place Tasks in Immersive Virtual Reality: Design Elements and Their Empirical Study. Multimodal Technol. Interact. 2020, 4, 91.
42. Benda, B.; Esmaeili, S.; Ragan, E.D. Determining Detection Thresholds for Fixed Positional Offsets for Virtual Hand Remapping in Virtual Reality. In Proceedings of the 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Porto de Galinhas, Brazil, 9–13 November 2020; IEEE: New York, NY, USA, 2020; pp. 269–278.
43. Wang, A.; Thompson, M.; Uz-Bilgin, C.; Klopfer, E. Authenticity, Interactivity, and Collaboration in Virtual Reality Games: Best Practices and Lessons Learned. Front. Virtual Real. 2021, 2, 734083.
44. Nanjappan, V.; Liang, H.N.; Lu, F.; Papangelis, K.; Yue, Y.; Man, K.L. User-Elicited Dual-Hand Interactions for Manipulating 3D Objects in Virtual Reality Environments. Hum.-Cent. Comput. Inf. Sci. 2018, 8, 31.
45. Hemmerich, W.; Keshavarz, B.; Hecht, H. Visually Induced Motion Sickness on the Horizon. Front. Virtual Real. 2020, 1, 582095.
46. Yardley, L. Orientation Perception, Motion Sickness and Vertigo: Beyond the Sensory Conflict Approach. Br. J. Audiol. 1991, 25, 405–413.
47. Kennedy, R.S.; Lane, N.E.; Berbaum, K.S.; Lilienthal, M.G. Simulator Sickness Questionnaire: An Enhanced Method for Quantifying Simulator Sickness. Int. J. Aviat. Psychol. 1993, 3, 203–220.
48. Kwon, C. Verification of the Possibility and Effectiveness of Experiential Learning Using HMD-Based Immersive VR Technologies. Virtual Real. 2019, 23, 101–118.
49. Ghosh, K.; Parikh, T.S.; Chavan, A.L. Design Considerations for a Financial Management System for Rural, Semi-Literate Users. In Proceedings of the CHI’03 Extended Abstracts on Human Factors in Computer Systems, Fort Lauderdale, FL, USA, 5–10 April 2003; p. 824.
50. Huenerfauth, M.P. Developing Design Recommendations for Computer Interfaces Accessible to Illiterate Users. Master’s Thesis, University College Dublin, Dublin, Ireland, 2002.
51. Parikh, T.; Ghosh, K.; Chavan, A. Design Studies for a Financial Management System for Micro-Credit Groups in Rural India. ACM SIGCAPH Comput. Phys. Handicap. 2003, 73–74, 15–22.
52. Zaman, S.K.U.; Khan, I.A.; Hussain, S.S.; Iqbal, T.; Shuja, J.; Ahmed, S.F.; Jararweh, Y.; Ko, K. PreDiKT-OnOff: A Complex Adaptive Approach to Study the Impact of Digital Social Networks on Pakistani Students’ Personal and Social Life. Concurr. Comput. 2020, 32, e5121.
53. Medhi Thies, I. User Interface Design for Low-Literate and Novice Users: Past, Present and Future. Found. Trends® Hum.–Comput. Interact. 2015, 8, 1–72.
54. Rasmussen, M.K.; Pedersen, E.W.; Petersen, M.G.; Hornbæk, K. Shape-Changing Interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Austin, TX, USA, 5–10 May 2012; ACM: New York, NY, USA, 2012; pp. 735–744.
55. Saba, T. Intelligent Game-Based Learning: An Effective Learning Model Approach. Int. J. Comput. Appl. Technol. 2020, 64, 208.
56. Wade, S.; Kidd, C. The Role of Prior Knowledge and Curiosity in Learning. Psychon. Bull. Rev. 2019, 26, 1377–1387.
57. Ali, W.; Riaz, O.; Mumtaz, S.; Khan, A.R.; Saba, T.; Bahaj, S.A. Mobile Application Usability Evaluation: A Study Based on Demography. IEEE Access 2022, 10, 41512–41524.
58. El-Dakhs, D.A.S.; Altarriba, J. How Do Emotion Word Type and Valence Influence Language Processing? The Case of Arabic–English Bilinguals. J. Psycholinguist. Res. 2019, 48, 1063–1085.
59. Tyng, C.M.; Amin, H.U.; Saad, M.N.M.; Malik, A.S. The Influences of Emotion on Learning and Memory. Front. Psychol. 2017, 8, 1454.
60. Kern, A.C.; Ellermeier, W. Audio in VR: Effects of a Soundscape and Movement-Triggered Step Sounds on Presence. Front. Robot. AI 2020, 7, 20.
61. Stockton, B.A.G. How Color Coding Formulaic Writing Enhances Organization: A Qualitative Approach for Measuring Student Affect. Master’s Thesis, Humphreys College, Stockton, CA, USA, 2014.
62. Itaguchi, Y.; Yamada, C.; Yoshihara, M.; Fukuzawa, K. Writing in the Air: A Visualization Tool for Written Languages. PLoS ONE 2017, 12, e0178735.
63. Brooke, J. SUS: A “Quick and Dirty” Usability Scale, 1st ed.; Taylor & Francis: Abingdon, UK, 1996.
64. Kamińska, D.; Zwoliński, G.; Laska-Leśniewicz, A. Usability Testing of Virtual Reality Applications—The Pilot Study. Sensors 2022, 22, 1342.
65. Khundam, C.; Vorachart, V.; Preeyawongsakul, P.; Hosap, W.; Noël, F. A Comparative Study of Interaction Time and Usability of Using Controllers and Hand Tracking in Virtual Reality Training. Informatics 2021, 8, 60.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).