
Multimodal Technologies Interact. 2017, 1(4), 30; https://doi.org/10.3390/mti1040030

Article
Exploring the Virtuality Continuum for Complex Rule-Set Education in the Context of Soccer Rule Comprehension
Department of Computer Science, University of Central Florida, 4000 Central Florida Blvd., Orlando, FL 32816, USA
* Author to whom correspondence should be addressed.
Received: 23 July 2017 / Accepted: 3 November 2017 / Published: 9 November 2017

Abstract

We present an exploratory study to assess the benefits of using Augmented Reality (AR) in training sports rule comprehension. Soccer is the chosen context for this study due to the wide range of complexity in its rules and regulations. Observers must understand and holistically evaluate the proximity of players in the game to the ball and other visual objects, such as the goal, penalty area, and other players. Grounded in previous literature investigating the effects of Virtual Reality (VR) scenarios on transfer of training (ToT), we explore how three different interfaces influence user perception using both qualitative and quantitative measures. To better understand how effective augmented reality technology is when combined with learning systems, we compare learning outcomes across three interface conditions: AR, VR, and a traditional Desktop interface. We also compare these interfaces as measured by user experience, engagement, and immersion. Results show no significant difference between the VR and AR conditions; however, participants in both outperformed the Desktop group, which required a higher number of adaptations to acquire the same knowledge.
Keywords:
augmented reality; sports training; soccer; intelligent tutoring systems; virtual reality

1. Introduction

Milgram’s virtuality continuum conceptualizes the composition of environments based upon the presence and absence of virtual and real objects [1]. Due to this conceptualization, the field of human-computer interaction has a growing interest in understanding how these forms of reality along the continuum influence learning and user experience. Beyond understanding how people perceive this mix of virtual and real objects, researchers must understand how the composition of the environment influences not only perceptual factors, but also how these factors contribute to higher cognitive processes. Virtual Reality (VR) has successfully provided an environment for training users on perception and action tasks [2]. For example, novice pilots often train in simulators that are designed to focus on cognitive tasks to prevent unnecessary distraction from irrelevant controls [3]. This environment allows them to rehearse procedures without the risks associated with performing tasks incorrectly in a real-world environment. However, virtual reality simulation-based trainers have limitations in interaction design: some control panels may require interactions that differ from those in the actual operational environment, such as pressing a virtual button, and may not reproduce the space constraints of a real cockpit. By extension, Augmented Reality (AR) may fill this gap by leveraging real-world cues to enhance performance and fidelity. The increasing availability of commercial, off-the-shelf technology, such as the Microsoft HoloLens, makes AR a viable tool for a variety of training domains. This is particularly applicable in a domain such as sports, where not only visual cues but also spatial awareness and environmental cues are required for successful performance [4].
The extant literature on ToT is primarily derived from the healthcare, aviation, and military domains [5]. ToT in AR-based simulations and applications has been studied in the context of assembly or manufacturing tasks [6,7]. Our study extends this literature by highlighting the need for further research in complex rule-set environments, namely in the domain of sports. Because sports rules are typically learned hands-on from a coach or other instructor, we are interested in exploring exposure effects that might help prior to experiential forms of learning. Exposing students to the rules and regulations before they play the game may lessen the burden on novice players of learning both physical and cognitive tasks at the same time. In addition to extending the transfer of training literature to the AR domain, we explore user experience across several interfaces, including an Augmented Reality interface and a Virtual Reality interface. We evaluated differences in perceived user experience and whether learning outcomes are influenced by the type of interface presented to the user. The experiment starts with an intervention followed by a post-test; this design is based on evaluation designs used with intelligent tutoring systems (ITS) [8]. Because an ITS presents personalized information to the end user, studies have demonstrated that this form of instruction may lead to better learning outcomes than reading a paper manual or watching a video tutorial alone [9].
Previous studies on referee behavior and decision making demonstrated that referees make an average of 44.4 decisions alone during a typical cup match [10]. Although our goal in this paper is to train novice players to better understand soccer rules, this information lends itself well to understanding the cognitive demands placed on observers of a soccer match who want to know what is occurring throughout the game and how the plays affect its outcome. Additionally, if experts are required to make this many decisions alone, we can better appreciate how overwhelming it is for a novice player to make calls about rule violations. With this as the motivation for the present work, we aim to lessen the cognitive demands novices face when learning soccer for the first time by isolating the rule-comprehension aspect of the game. To the best of our knowledge, no other paper has studied the differences in user perception of all three interface types while also exploring the utility of adding an ITS to provide feedback to the user. Therefore, the purpose of this study is to report whether there are differences in learning outcomes between VR, AR, and a traditional Desktop interface. For these reasons, we present the following hypotheses. First, we contend that participants will perform just as well, if not better, in the AR condition when compared to the other two study conditions, Desktop and VR. Because the content is the same across all three interface conditions, we claim that the AR environment benefits participants by letting them view examples in a real-world context, providing a more enjoyable user experience. The presence of the participants in the real-world environment allows them to focus on the most important visual aspects required for understanding the rules, which contributes to better learning outcomes.
Second, we suggest that the AR environment will result in a more enjoyable user experience due to the availability of visual cues and affordances in the real-world environment [11]. We point to these, as potential areas for further research. We also conducted a preliminary evaluation of user experience, to better understand which condition provided the most interesting and enjoyable experience for participants. Primarily, we are interested in understanding whether there are differences in reported user experiences between AR and VR.
We found that, after the experiment, participants achieved a similar overall understanding of the soccer rules under the three conditions. This is supported by (1) the fact that just one participant in each condition did not pass all quiz assessments after the third attempt, and (2) the significant perceived increase in knowledge that participants reported. However, acquiring this knowledge required more learning reinforcements in the Desktop condition than in the VR and AR conditions. We also found that the reported user experience favors AR and VR over Desktop. Interestingly, even though some users reported discomfort (due to device weight) from using the HoloLens for the duration of the study, they rated the experience as less frustrating and more interesting than did participants in the other two conditions.

2. Related Work

Increasingly, studies in sports training emphasize the importance of training and improving perceptual skills [12]. Typically, these studies are grounded in either expertise-performance based theories or frameworks that focus on decision making or other aspects of macrocognition [13]. Recent research has expanded this macrocognitive approach by using simulated environments for sports training. For example, VR has been implemented as an environment for training behavioral and physiological responses related to performance anxiety [14]. Thus, the application of VR in a variety of sports-related contexts lends itself well to extending this research into AR applications. AR for sports is a rapidly growing area of research due to the increasing availability of AR head-mounted displays and mobile AR interfaces. This broad, but relatively new, application of AR provides a rich area for exploration. For example, Kajastila et al. demonstrated the utility of AR in planning and navigating surfaces in the context of team rock climbing [15]. Additionally, Sano et al. have studied the effects of using AR as a training tool to help novice players visualize the velocity and trajectories of a soccer ball [16].
To better understand whether these results apply across multiple sports and team sports, we study the effects of three different interfaces combined with an ITS to support and facilitate learning the common rules of a soccer match. The ITS in this experiment is a product of the Army Research Laboratory’s Generalized Intelligent Framework for Tutoring (GIFT) [17]. This ITS is designed to be configurable for any domain. Due to the robust nature of GIFT, we explore this combination as a way of maximizing learning outcomes in the context of learning soccer rules and regulations [18]. The adaptive nature of the tool allows users to enhance their learning experience by practicing concepts and viewing more direct examples of plays (see Figure 1). Additional work involving AR has shown clear benefits in assembly tasks: for instance, [4] found that AR combined with an ITS improves student performance, and in [19] the AR-trained group had fewer unsolved errors compared to the group that followed a filmed demonstration.
We tested six of the most common concepts derived directly from the official International Football Association Board “Laws of the Game 2016/2017” documentation [20]. These six concepts were: offside rules, free kicks, penalty kicks, throw-ins, goal kicks, and corner kicks. We chose these concepts as they represent common scenarios encountered in soccer matches. Soccer provides a unique domain in the sense that the participant demographic we chose is not familiar with soccer tasks or rules. This prevents bias from skewing the results of the study, since it is not a task that would create a learning effect. Additionally, soccer represents a sport that relies heavily on visual cues. This allows us to explore some of the proxemic measurements across conditions and whether these measurements contribute to a positive user experience or to increased performance measured in terms of learning outcomes. Although other studies have demonstrated the utility of using video simulation to improve a novice player’s ability to anticipate opponent movements, those studies primarily emphasize skill-building and improvement [21].

3. Materials and Methods

We conducted a study in which we examined three interface types: a traditional desktop, a virtual reality condition, and an augmented reality condition.

3.1. Subjects and Apparatus

A total of 36 participants (23 male, 13 female) were randomly assigned to one of the three interface conditions. The participants were all university students between 18 and 35 years of age (Mean = 23.1, Median = 21). For the desktop and VR scenarios, we used a computer running Windows 10 with 16 GB of RAM and a 3.4 GHz i7 processor, with an Nvidia GeForce GTX 1080 using 8 GB VRAM. Specifically, for virtual reality we used the HTC Vive to immerse our participants and one Vive controller to navigate around the environment and interact with the GUI. For the AR scenario, we used a Microsoft HoloLens to display virtual overlays in the physical world, an Xbox One controller to interface with the GUI, and a Surface Pro 3 running Windows 10 with 8 GB of RAM and a 2.3 GHz i7 processor. Each condition made use of the same simulation, which was developed in Unity3D version 2017.1.0f3.

3.2. Software Description (ITS)

A tutoring system application was developed using the Generalized Intelligent Framework for Tutoring (GIFT) [17]. Figure 1 illustrates the course flow, starting with an overall description of the game and moving to concepts and terminology in the following sections. Each course node maps to supporting media elements from a Unity application. “Concept” and “Rule” (Figure 2) are visualized in a screen canvas for desktop and in a floating canvas in the VR and AR setups. The application loads the respective scene, identified by unique keywords defined for each scenario. The “Evaluation” component manages the adaptive strategy used for the experiment, based on the Engine for Management of Adaptive Pedagogy (EMAP) [22]. Finally, the “Feedback” object shows the student’s outcome after each concept, providing feedback for incorrect answers. The process repeats until all six concepts are completed.
The Unity application uses 3D models and animations purchased from the Unity Asset Store, while the logic for each concept and evaluation was implemented by us. A total of 45 scenes are divided as follows: Introduction (1), Offside (1) (see Figure 3), Free Kick (2), Penalty Kick (2), Throw-in (1), Goal Kick (1), Corner Kick (1), and six evaluation scenes for each concept (36). For each setup (desktop, VR, and AR), changes were made to adapt the scenarios to the device used. The screen canvas used for desktop was replaced by a floating canvas in VR and AR. For AR, due to the small field of view of the device, space was optimized to fit the content in a readable manner. The Unity application connected via XML-RPC to the local GIFT instance.
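The scene-loading handshake over XML-RPC can be illustrated with a small, self-contained Python sketch. The port, the `loadScene` method name, and the keyword are hypothetical stand-ins for illustration, not GIFT's actual API; here a stub server plays the role of the local GIFT instance, and the client plays the Unity side.

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Stand-in for the local GIFT instance: expose one RPC method that receives
# the unique scene keyword the Unity application reports for each scenario.
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
loaded_scenes = []

def load_scene(keyword):
    # Record the request and echo the keyword back as an acknowledgement.
    loaded_scenes.append(keyword)
    return keyword

server.register_function(load_scene, "loadScene")
threading.Thread(target=server.handle_request, daemon=True).start()

# The Unity side would issue an equivalent XML-RPC call per scenario.
port = server.server_address[1]
client = ServerProxy(f"http://localhost:{port}")
ack = client.loadScene("offside_intro")  # hypothetical keyword
```

Because XML-RPC is language-agnostic, the same call shape works from a C# client inside Unity, which is presumably why it was chosen to bridge the game engine and the Java-based GIFT instance.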
Figure 3 shows an example of how the content is presented. In this scene, players in yellow are attacking the goal area of the defending players in blue, with the goal to the left of the viewer. The user decides the correct moment at which an attacking player can legally pass the ball to his teammate, by considering both the teammate’s back-and-forth movements around the area and the defenders’ positions. If the user decides that the player can pass the ball to a teammate whose indicator line of action is green, it is a legal pass, since that teammate is not nearer to the goal line than both the second-last defender and the ball (the rule is explained in Figure 3b). This correct scenario is labeled in green bold lettering (‘NOT OFFSIDE POSITION’); otherwise, the player’s indicator line is red and labeled ‘OFFSIDE POSITION’ in red lettering.
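The offside-position judgment visualized in this scene reduces to a simple geometric test. The following sketch is our simplified one-dimensional model, not the study's actual Unity code: the defending goal line sits at x = 0, and a player is flagged offside when nearer to it than both the ball and the second-last defender.

```python
def is_offside_position(attacker_x, ball_x, defender_xs):
    """Return True if an attacker is in an offside position.

    Simplified model: distances are measured along one axis toward the
    defending goal line at x = 0. A player is in an offside position when
    nearer to that goal line than both the ball and the second-last
    defender (the last defender is usually the goalkeeper). The real law
    also requires the attacker to be in the opponents' half, which this
    sketch omits.
    """
    second_last_defender = sorted(defender_xs)[1]
    return attacker_x < ball_x and attacker_x < second_last_defender
```

For example, with defenders at x = [1.0, 5.0, 10.0] (goalkeeper at 1.0) and the ball at x = 8.0, an attacker at x = 3.0 is in an offside position, while one at x = 6.0 is not, matching the green/red indicator lines described above.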
Using EMAP guidelines [22], student learning outcomes are evaluated with a question bank of 36 questions covering the 6 concepts, each supported by learning content developed in Unity. For each concept, the six questions were divided into easy (2), medium (2), and hard (2). At evaluation time, the student is presented with three questions, one at each difficulty level. For each question, the student’s response is based on specific events happening on the soccer field. All questions are multiple-choice with either 2 (see Figure 4) or 5 options. The scoring system awards one point for an easy question, two points for a medium question, and three points for a hard question. A student is required to answer 2 questions correctly in order to be rated “above expectation” and pass on to the next concept; otherwise, the student is rated “at expectation” or “below expectation”, which starts the remediation loop for the failed concept [22]. Using intelligent feedback derived from the incorrect answers, the remediation loop reviews the content, highlighting the key failing aspects, and ends back in recall. Questions are not repeated in the first two recalls, but they may be in the third.
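The scoring and pass/fail rule described above can be sketched as follows; this is a reconstruction of the described behavior, not GIFT's or EMAP's actual implementation.

```python
# Points awarded per difficulty level, as described in the text.
POINTS = {"easy": 1, "medium": 2, "hard": 3}

def evaluate_concept(results):
    """Score one concept quiz of three questions (one per difficulty).

    `results` maps each difficulty level to whether the answer was correct.
    Returns (score, passed): passing ("above expectation") requires at
    least 2 of the 3 questions answered correctly; failing triggers the
    remediation loop for the concept.
    """
    score = sum(POINTS[level] for level, correct in results.items() if correct)
    passed = sum(results.values()) >= 2
    return score, passed
```

Note that passing depends on the count of correct answers, not on the weighted score: answering only the easy and medium questions correctly yields (3, True), while answering only the hard question correctly yields the same 3 points but (3, False), starting remediation.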

3.3. Interface Setup

Desktop: For the desktop setting, the experiment was performed in a lab environment. The virtual soccer field dimensions are 45 m × 90 m. The user views the scenarios on a flat-panel TV display while navigating and selecting with a keyboard and mouse. To move around the scene, keyboard inputs change the viewing position, while the mouse changes the viewing angle (see Figure 5). Both mouse and keyboard are used for navigating between learning content and attempting the quizzes.
Virtual Reality: Users performed the experiment in a lab environment within a space of 3 m × 4 m. The virtual soccer field dimensions are 45 m × 90 m. The content is viewed through a Head-Mounted Display (HMD), with interaction through an HTC Vive controller (Figure 6). The HTC Vive provides significantly greater immersion, but at the cost of the user’s spatial awareness of their physical surroundings. This is, however, mitigated by teleportation: the user points at a valid location and presses the trigger. At nearer distances, the user can choose to walk, so long as they remain within the configured safe zone before and after travel.
Augmented Reality: This experiment took place in an indoor soccer field facility with approximate dimensions of 40 m × 70 m. The virtual soccer field is scaled to 40 m × 70 m to fit the real environment. The scenarios were adapted for AR deployment: except for the players and the soccer ball, all remaining virtual objects, such as the stadium, soccer field, and goal posts, were removed from the scene. Virtual player placements are relative to the real goal, and users walk around the soccer field for better views when needed. Content is visualized through the HoloLens, and an Xbox One S controller was used as the input device (Figure 7).

3.4. Experimental Procedure

Participants were screened based upon their level of experience playing soccer on a formal team or club and their preferences regarding watching televised sports. A questionnaire was used to screen participants for these factors. After passing screening, participants completed a pre-test to rate their knowledge and familiarity with the scenario concepts using Likert-scale ratings from 1 to 7, with 1 representing “little knowledge” and 7 representing “very knowledgeable”. Following this questionnaire, users were assigned to their scenario and given instructions on how to complete the study, including a quick overview of the corresponding input device. In the virtual reality scenario, the Vive trigger button was used to teleport around the environment, and the touchpad was used to cycle through the GUI prompts: clicking right advanced to the next window, and clicking left returned to the previous one. In the AR scenario, the right and left directional buttons of the Xbox controller performed the same functions, respectively.
The scenario began with the user viewing two teams (yellow and blue) of computer-controlled soccer player agents. The participant’s starting position is at the center of the blue team’s goal, behind the goalkeeper. Throughout the experiment, all concepts are explained from the blue side, which represents the defensive side; the yellow team represents the attacking team. The avatars could be seen running, dribbling and kicking the ball, throwing in from out of bounds, and even tackling opponents. A floating GUI window conveyed information regarding each step of the simulation. This GUI was displayed in front of the user’s viewport and could be hidden at the user’s discretion. When the user clicked a button to go to the next or previous GUI prompt, the text changed and the avatars moved to a corresponding spot on the virtual field. The GUI cycled through a predetermined random order of lessons (called “concepts”) for the user to learn, including direct and indirect free kicks, offsides, throw-ins, goal kicks, penalty kicks, and corner kicks.
After the user reviewed all points for a given concept, recall was invoked and three questions were presented to the user. Using an input device (see Table 1), the user could select a multiple-choice answer. If more than one question was answered incorrectly, feedback was provided and the remediation loop started so that the user could reinforce their knowledge. We logged any mistakes made at each concept quiz. This process continued until all concepts were completed.
At the conclusion of the study, participants were asked to complete a questionnaire that quantified their understanding of soccer rules and required them to rate their experience with the system [23]. This questionnaire included the same initial knowledge questions asked in the pre-questionnaire, with some additional user experience questions added. Participants also had the option to write in any feedback regarding the system or their experience. See Table 2 for the questions asked in the questionnaire.

4. Results

We performed multiple analyses to interpret both our objective data and the qualitative user experience data. First, we analyzed how many adaptations were required overall for each experimental condition (Table 3). An adaptation is a reinforcement phase, triggered when two or more questions are answered incorrectly, followed by another quiz iteration. We wanted to understand which study conditions required multiple iterations of a concept. The Desktop condition required the most adaptations, at 25. The AR condition followed, with a total of 19 adaptations, and the VR condition required the fewest, at 16.
For the pre/post-test data, we performed a Wilcoxon Signed Rank Test to determine changes between the pre- and post-tests. All groups demonstrated statistically significant improvement for all concepts. This is not unexpected, since we would expect to see improvement following the intervention. Statistical values are presented in Table 4.
For each concept, Deliberation Time (see Figure 8) and the number of correct answers for each quiz attempt are recorded. Deliberation Time is recorded from the moment a participant commences learning a concept, and ends when the quiz begins. If remediation loops are required, Deliberation Time is calculated as the mean amount of time taken for all attempts. Additionally, the number of questions answered correctly at the first attempt is recorded for each concept, and its mean is shown in Figure 9a. Figure 9b shows the number of participants that required adaptations.
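As defined here, the per-concept Deliberation Time with remediation loops reduces to a mean over attempt durations; a minimal sketch of the computation:

```python
def deliberation_time(attempt_durations):
    """Mean time (seconds) across all learning attempts for one concept.

    One attempt's duration runs from the moment the participant starts
    learning the concept until the quiz begins; each remediation loop
    contributes an additional attempt.
    """
    return sum(attempt_durations) / len(attempt_durations)
```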

5. Discussion

Alternative methods for sports training have been investigated in a variety of settings and environments with varying levels of success. In this paper, we extend the existing literature to better understand the influence of both virtual and augmented reality in teaching soccer rules. From the results, we can see that participants in the desktop condition required the most adaptation overall by frequency count (25 adaptations total). We attribute this to the input device: we contend that the mapping of the keyboard and mouse inputs did not translate intuitively and required more training before participants became adept at using the interface. This is also supported by the fact that 9 out of the 12 desktop participants required adaptation on the first concept, regardless of the actual concept presented (Table 5). Additionally, we noticed that the throw-in concept also seemed to be challenging for participants assigned to the desktop condition. Interestingly, in the previous paper by Helsen et al., throw-ins were the most frequent decision referees had to make alone [10]. This is an interesting finding in relation to our work, since we can postulate that the visual cues afforded by the desktop environment may not be sufficient for decision making in the throw-in scenarios.
Overall, the VR environment required the least adaptation by frequency (16 total). This is interesting, since the VR participants tended to rate their prior knowledge of soccer before the intervention as lower than the other two groups (Table 4). From Table 6, we can see that AR outperformed the VR and Desktop conditions with regard to overall satisfaction with the feedback the system provided. Participants also perceived a higher overall knowledge improvement in AR. Moreover, users in the AR condition reported less stress and frustration while using the system; this question (Q7) was not reverse coded, so a lower number indicates less stress. We cannot be sure whether this is due to the learning curve for the Vive controller, which is slightly different from the more familiar Xbox One S control system. Additionally, both the VR and AR participants rated the system as more interesting to use than their desktop counterparts did.
In addition to the aforementioned factors, we want to note some limitations of the study that should be addressed in further investigation. First and foremost, we recognize that the indoor environment of the AR condition may not map directly to the outdoor VR and desktop conditions. Due to the inclement Florida weather and the risk of heat exhaustion, we were not able to test the AR condition on an outdoor soccer field and instead compromised by choosing an indoor field. While the content was held constant, such conditions limited direct comparison of some environmental factors: for instance, constant lighting conditions and the presence of people in the surroundings introduced variation into the study. Although we did not find statistical differences, it is still worth comparing these factors in more detail. Second, we recognize that some of the feedback the users provided would enhance the system. For example, a better way of differentiating the player’s and opponent’s teams may help scaffold the novice player’s understanding and improve their knowledge of what is happening in the scenario. Third, we acknowledge that the system’s feedback could be improved by tailoring it to the individual’s exact answer; in this study, if a participant answered incorrectly, the feedback did not explain exactly what they did wrong. Finally, with regard to the hardware we utilized, it is important to note that the Microsoft HoloLens field of view limits the user’s ability to view all aspects of the scene. We contend that other augmented reality head-mounted displays should be tested in these scenarios to better understand the implications of AR on learning. Although all participants were able to complete the experiment, two of them reported discomfort in the AR condition: one because of the weight of the equipment, and the other because of aching eyes due to sudden movements while trying to find the ball on the field. HTC Vive participants did not provide any feedback or express any discomfort.
It is interesting to note differences in participant navigation across the three conditions. All participants started at the goal center, behind the blue team’s goalkeeper. Generally, a user would look for the best point of view from which to visualize the scene. When the scenario changed, participants moved if a different point of view was needed; otherwise, they stayed in place. This behavior, however, differed across the interfaces. Desktop users stepped around freely. In virtual reality, participants did not walk significantly and mostly teleported around; most participants even requested a chair and performed the study seated. Users who were less familiar with the Vive controller tended to move less. In augmented reality, if the first scenario was Free Kick, Goal Kick, or Corner Kick, users walked minimally and stayed near the initial position until a change of position was required. Navigation was needed for scenarios such as Offside and Penalty Kick, which required a side view of the field, further from the goal, in order to fit as much of the rendering as possible into the HoloLens display. If the course started with Penalty Kick or Offside and users placed themselves on one side of the field, they would stay on that side and move around a small area. In general, we observed that AR participants were not as keen to move around constantly as those in the other two conditions, owing to the effortlessness of Desktop and VR navigation.
We found limited support for our hypothesis that AR/VR would outperform the desktop condition. While users tended to favor the AR condition according to our qualitative data, these ratings are very close to those of the VR condition. Additionally, the fact that only three more adaptations in total were required by the AR group lends support to the idea that both AR and VR presented the content in an engaging and interesting way. Participants expressed no discomfort with the VR equipment, which suggests that they perceived this interface favorably. Additionally, users in the VR condition subjectively rated their initial knowledge as lower than participants in the AR group (Table 4), yet rated their knowledge after the intervention as higher across almost all six concepts. Further investigation with a larger sample is necessary to determine whether this group really demonstrated less previous knowledge than the AR group. The AR users reported slightly higher satisfaction, leading to a more positive user experience.

6. Conclusions

In the field of education, researchers have previously found that using augmented reality coupled with constructivist and situated learning theories is positively correlated with students’ learning when compared to traditional methods [24]. Similarly, Wojciechowski et al. found that STEM students displayed more positive attitudes towards AR scenarios due to the perceived usefulness and enjoyment they provide [25]. We contend that this is a rich area of exploration and that sports training could point to future areas of STEM education, such as training abstract concepts or domains where full spatial information may not be available to the end user. Scenarios such as circuit layouts in rooms can benefit from the spatial relation of the environment to the educational content. The use of AR can scaffold novice players or learners in directing their visual attention in these cases, allowing them to focus on the most important takeaways from their training scenarios.
In this study, we found that though AR generally helped users learn favorably, this was not the case for all concepts. First, the number of participants that required adaptation per concept (Figure 9) shows that concepts involving player interaction in larger spaces were a problem for the AR condition. Specifically, the three concepts rated unfavorably require the user to be aware of the spatial positions not only of the soccer ball, but also of players dispersed further apart. We expect that a device with a wider field of view would yield better results for these concepts. On the other hand, the two concepts that favor AR do not necessitate a wide field of view; they require only that the user observe the trail and position of the soccer ball with respect to the field, such as positions relative to the goalposts and touch-line, and any direct ball contact with the affected players. Another important factor we found in the AR condition is depth perception: in a real-world environment, users have more depth-perception cues, which could lead to more positive results for these concepts. We also observed that Mean Deliberation Time under each experimental condition did not affect user learning or frustration, even though AR requires more time and ambulation to complete the study.
Finally, we hope to expand upon this area by integrating physiological measures in addition to subjective user feedback. These measures could help us better understand when the user feels frustrated or stressed while using the system and could enable more tailored, individualized feedback in these cases. Additionally, we hope to add audio and other sensory cues to test the effects of a multimodal learning system.

Acknowledgments

This work is supported in part by NSF Award IIS-1638060, Lockheed Martin, Office of Naval Research Award ONRBAA15001, Army RDECOM Award W911QX13C0052, and Coda Enterprises, LLC. We also thank the ISUE lab members at UCF for their support as well as the anonymous reviewers for their helpful feedback.

Author Contributions

The work is a product of the intellectual environment of the authors; all authors contributed equally to the research concept, the analytical methods used, and the experiment design.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Milgram, P.; Kishino, F. A taxonomy of mixed reality visual displays. IEICE Trans. Inf. Syst. 1994, 77, 1321–1329. [Google Scholar]
  2. Young, M.K.; Gaylor, G.B.; Andrus, S.M.; Bodenheimer, B. A comparison of two cost-differentiated virtual reality systems for perception and action tasks. In Proceedings of the ACM Symposium on Applied Perception, Vancouver, BC, Canada, 8–9 August 2014; pp. 83–90. [Google Scholar]
  3. Pool, D.M.; Harder, G.A.; Damveld, H.J.; van Paassen, M.M.; Mulder, M. Evaluating simulator-based training of skill-based control behavior using multimodal operator models. In Proceedings of the 2014 IEEE International Conference on Systems, Man and Cybernetics (SMC), San Diego, CA, USA, 5–8 October 2014; pp. 3132–3137. [Google Scholar]
  4. Westerfield, G.; Mitrovic, A.; Billinghurst, M. Intelligent augmented reality training for motherboard assembly. Int. J. Artif. Intell. Educ. 2015, 25, 157–172. [Google Scholar] [CrossRef]
  5. Hamstra, S.J.; Brydges, R.; Hatala, R.; Zendejas, B.; Cook, D.A. Reconsidering fidelity in simulation-based training. Acad. Med. 2014, 89, 387–392. [Google Scholar] [CrossRef] [PubMed]
  6. Henderson, S.J.; Feiner, S.K. Augmented reality in the psychomotor phase of a procedural task. In Proceedings of the 2011 10th IEEE International Symposium on Mixed and Augmented Reality, Basel, Switzerland, 26–29 October 2011; pp. 191–200. [Google Scholar]
  7. Henderson, S.; Feiner, S. Exploring the benefits of augmented reality documentation for maintenance and repair. IEEE Trans. Vis. Comput. Graph. 2011, 17, 1355–1368. [Google Scholar] [CrossRef] [PubMed]
  8. Woolf, B.P. Building Intelligent Interactive Tutors: Student-Centered Strategies for Revolutionizing E-Learning; Morgan Kaufmann: Burlington, MA, USA, 2010. [Google Scholar]
  9. Ma, W.; Adesope, O.O.; Nesbit, J.C.; Liu, Q. Intelligent tutoring systems and learning outcomes: A meta-analysis. J. Educ. Psychol. 2014, 106, 901–918. [Google Scholar] [CrossRef]
  10. Helsen, W.; Bultynck, J.B. Physical and perceptual-cognitive demands of top-class refereeing in association football. J. Sports Sci. 2004, 22, 179–189. [Google Scholar] [CrossRef] [PubMed]
  11. Ruffaldi, E.; Filippeschi, A.; Avizzano, C.A.; Bardy, B.; Gopher, D.; Bergamasco, M. Feedback, affordances, and accelerators for training sports in virtual environments. Presence 2011, 20, 33–46. [Google Scholar] [CrossRef]
  12. Williams, A.M.; Ward, P.; Chapman, C. Training perceptual skill in field hockey: Is there transfer from the laboratory to the field? Res. Q. Exerc. Sport 2003, 74, 98–103. [Google Scholar] [CrossRef] [PubMed]
  13. Ward, P.; Williams, A.M. Perceptual and cognitive skill development in soccer: The multidimensional nature of expert performance. J. Sport Exerc. Psychol. 2003, 25, 93–111. [Google Scholar] [CrossRef]
  14. Stinson, C.; Bowman, D.A. Feasibility of training athletes for high-pressure situations using virtual reality. IEEE Trans. Vis. Comput. Graph. 2014, 20, 606–615. [Google Scholar] [CrossRef] [PubMed]
  15. Kajastila, R.; Holsti, L.; Hämäläinen, P. The Augmented Climbing Wall: High-exertion proximity interaction on a wall-sized interactive surface. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, San Jose, CA, USA, 7–12 May 2016; pp. 758–769. [Google Scholar]
  16. Sano, Y.; Sato, K.; Shiraishi, R.; Otsuki, M. Sports support system: Augmented ball game for filling gap between player skill levels. In Proceedings of the 2016 ACM on Interactive Surfaces and Spaces, Niagara Falls, ON, Canada, 6–9 November 2016; pp. 361–366. [Google Scholar]
  17. Sottilare, R.; Brawner, K.; Goldberg, B.; Holden, H. The Generalized Intelligent Framework for Tutoring (GIFT); Concept Paper Released as Part of GIFT Software Documentation; US Army Research Laboratory–Human Research & Engineering Directorate (ARL-HRED): Orlando, FL, USA, 2012. [Google Scholar]
  18. Goodwin, G.A.; Niehaus, J.; Kim, J.W. Modeling Training Efficiency in GIFT. In Proceedings of the International Conference on Augmented Cognition, Vancouver, BC, Canada, 9–14 July 2017; Springer: Berlin, Germany, 2017; pp. 131–147. [Google Scholar]
  19. Gavish, N.; Gutiérrez, T.; Webel, S.; Rodríguez, J.; Peveri, M.; Bockholt, U.; Tecchia, F. Evaluating virtual reality and augmented reality training for industrial maintenance and assembly tasks. Interact. Learn. Environ. 2015, 23, 778–798. [Google Scholar] [CrossRef]
  20. Laws of the Game. Available online: http://www.theifab.com/laws (accessed on 6 November 2017).
  21. Williams, A.M.; Herron, K.; Ward, P.; Smeeton, N.J. Using Situational Probabilities to Train Perceptual and Cognitive Skill in Novice Soccer Players; Routledge: Abingdon, UK, 2008. [Google Scholar]
  22. Goldberg, B.; Hoffman, M. Adaptive course flow and sequencing through the engine for management of adaptive pedagogy (EMAP). In Proceedings of the AIED Workshops, Madrid, Spain, 22–26 June 2015. [Google Scholar]
  23. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef]
  24. Wasko, C. What teachers need to know about augmented reality enhanced learning environments. TechTrends 2013, 57, 17–21. [Google Scholar] [CrossRef]
  25. Wojciechowski, R.; Cellary, W. Evaluation of learners’ attitude toward learning in ARIES augmented reality environments. Comput. Educ. 2013, 68, 570–585. [Google Scholar] [CrossRef]
Figure 1. This figure represents an abstract layout for a concept in the soccer course we created. Each box represents an aspect of the course, with evaluation and feedback representing the intelligent portions of the tutoring system.
Figure 2. A picture representing the user’s perspective in the Augmented Reality study condition. Note that the user can see only a small rectangular area around the text canvas; the red rectangle shows a rough outline of the visible area.
Figure 3. (a) Screenshot taken from the HoloLens showing the offside rule; the visualization is presented to the user and explained with text, animations, and visual cues superimposed on the real environment. Users can see only a limited area of this screenshot; the red rectangle shows a rough outline of the visible area; (b) Image taken from the Vive application showing content about the offside position explained in a virtual soccer stadium.
Figure 4. A question presented to the user with two answer options; this question belongs to the easy category.
Figure 5. A user from the Desktop group performing the experiment, currently viewing the Penalty Kick procedures.
Figure 6. (a) This figure depicts the HTC Vive/VR Set Up; (b) This figure shows a detailed view of the HTC Vive controller, with the button names used in Table 1.
Figure 7. (a) This figure depicts the Microsoft Hololens/AR Set Up; (b) This figure shows a detailed view of the Xbox One S controller with button names as used in Table 1.
Figure 8. Box plots of the mean concept deliberation time for each environment: (a) the Desktop environment; (b) the Virtual Reality environment; and (c) the Augmented Reality environment.
Figure 9. (a) The average number of correct answers participants achieved on their first attempt; (b) The number of participants that needed remediation loops for each concept and interface (see Table 3).
Table 1. Different input methods used to navigate through the tutoring application.
Action | Desktop/Keyboard-Mouse | VR/HTC Vive Controller | AR/Xbox One S Controller
Navigate | A, W, S, D or arrow keys | Walking or lasing a location with trigger button | Walking
Cycle Previous | Mouse left-click on Previous button | Press touch-pad left | Press D-pad left
Cycle Next | Mouse left-click on Next button | Press touch-pad right | Press D-pad right
Play Scenario | Left Shift | Press grip button | Press A button
Reset Scenario | R | Press menu button | Press X button
Toggle Options | Mouse left-click on radio buttons | Press touch-pad up | Press right bumper
Show and Hide Canvas | Esc | Press touch-pad center | Press left bumper
Table 2. Qualitative questions asked during the Post Questionnaire.
Number | Question
Q1 | Rate your experience and knowledge of soccer rules after using the system
Q2 | Using the system improved my overall knowledge
Q3 | The system taught the concepts well
Q4 | The system was effective
Q5 | I was satisfied with the feedback the system provided me
Q6 | The system was interesting to use
Q7 | Please rate your level of frustration and stress when using the system
Table 3. Total Count of Required Adaptations per Experimental Condition.
Experimental Condition | Adaptations by Frequency
Desktop | 25
VR | 16
AR | 19
Table 4. Statistical results from the Wilcoxon Signed Rank Test comparing users’ self perception of soccer knowledge before using the system to their perceived increase in knowledge following use of the system. Statistical significance is indicated by bold text.
Desktop | Z | p | Median-Pre | Median-Post
Overall | −2.971 | 0.003 | 2 | 5
Offsides | −3.078 | 0.002 | 2 | 6
Free Kick | −2.568 | 0.010 | 3 | 5
Penalty Kick | −2.946 | 0.003 | 2 | 6
Throw-In | −2.966 | 0.003 | 1 | 6
Goal Kick | −2.726 | 0.006 | 1 | 6
Corner Kick | −2.679 | 0.007 | 2 | 6
Virtual Reality | Z | p | Median-Pre | Median-Post
Overall | −3.001 | 0.003 | 2 | 5
Offsides | −2.852 | 0.004 | 1 | 6
Free Kick | −2.968 | 0.003 | 1 | 6
Penalty Kick | −2.802 | 0.005 | 1 | 5
Throw-In | −3.071 | 0.002 | 2 | 6
Goal Kick | −2.817 | 0.005 | 2 | 6
Corner Kick | −3.071 | 0.002 | 1 | 6
Augmented Reality | Z | p | Median-Pre | Median-Post
Overall | −3.114 | 0.002 | 3 | 5
Offsides | −2.821 | 0.005 | 2 | 5
Free Kick | −2.952 | 0.003 | 3 | 6
Penalty Kick | −2.285 | 0.022 | 5 | 6
Throw-In | −2.708 | 0.007 | 2 | 6
Goal Kick | −2.662 | 0.008 | 2 | 7
Corner Kick | −2.740 | 0.006 | 2 | 5
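The pre/post comparisons in Table 4 use the Wilcoxon Signed Rank Test, a non-parametric test for paired ordinal data such as Likert-scale self-ratings. As a minimal, self-contained sketch of the computation (using the normal approximation and hypothetical ratings, not the study’s raw data, which the paper reports only as medians):

```python
import math

def wilcoxon_signed_rank(pre, post):
    """Two-sided Wilcoxon signed-rank test via the normal approximation.

    Zero differences are discarded; tied absolute differences receive
    average ranks. Returns (W, Z, p), where W is the smaller of the
    positive and negative rank sums.
    """
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    n = len(diffs)
    # Rank absolute differences, averaging ranks across ties.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    w = min(w_plus, w_minus)
    # Normal approximation, adequate for n >= ~10 participants.
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return w, z, p

# Hypothetical 7-point Likert self-ratings for 12 participants,
# before and after training (illustrative only, not the study's data).
pre  = [2, 3, 1, 2, 4, 2, 3, 1, 2, 2, 3, 2]
post = [5, 6, 4, 5, 6, 5, 6, 4, 5, 6, 5, 5]
w, z, p = wilcoxon_signed_rank(pre, post)
print(f"W = {w}, Z = {z:.3f}, p = {p:.3f}")
```

With every participant improving, the negative rank sum is zero (W = 0) and Z is around −3, comparable in magnitude to the values reported in Table 4.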
Table 5. Number of Participants that Repeated the First Presented Concept at Least Once.
Experimental Condition | Number of Participants
Desktop | 9/12
VR | 6/12
AR | 5/12
Table 6. Means and medians for each of our questionnaire responses by condition.
Question | D Mean | D Median | VR Mean | VR Median | AR Mean | AR Median
Q1 | 4.909 | 5 | 5.167 | 5 | 5.500 | 5
Q2 | 5.454 | 5 | 6.167 | 6 | 6.500 | 7
Q3 | 5.636 | 6 | 5.667 | 5.5 | 6.000 | 6
Q4 | 5.909 | 6 | 5.833 | 6 | 6.167 | 6.5
Q5 | 5.545 | 6 | 5.417 | 6 | 6.333 | 7
Q6 | 6.273 | 6 | 6.917 | 7 | 7.000 | 7
Q7 | 3.000 | 3 | 3.167 | 2.5 | 2.667 | 2

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).