Article

Robot-Assisted Language Learning: Integrating Artificial Intelligence and Virtual Reality into English Tour Guide Practice

1 Department of Applied Foreign Languages, Lunghwa University of Science and Technology, Taoyuan City 333326, Taiwan
2 Department of Multimedia and Game Science, Lunghwa University of Science and Technology, Taoyuan City 333326, Taiwan
* Author to whom correspondence should be addressed.
Educ. Sci. 2022, 12(7), 437; https://doi.org/10.3390/educsci12070437
Submission received: 5 April 2022 / Revised: 19 June 2022 / Accepted: 21 June 2022 / Published: 24 June 2022
(This article belongs to the Special Issue Participatory Pedagogy)

Abstract

This action research created an application system using robots as a tool for training English-language tour guides. It combined artificial intelligence (AI) and virtual reality (VR) technologies to develop content for tours and a 3D VR environment using the AI Unity plug-in for programming. Students learned to orally interact with the robot and act as a guide to various destinations. The qualitative methods included observation, interviews, and self-reporting of learning outcomes. Two students voluntarily participated in the study. The intervention lasted for ten weeks. The results indicated the teaching effectiveness of robot-assisted language learning (RALL). The students acknowledged the value of RALL and had positive attitudes toward it. The contextualized VR learning environment increased their motivation and engagement in learning, and students perceived that RALL could help develop autonomy, enhance interaction, and provide an active learning experience. The implications of the study are that RALL has potential and that it provides an alternative learning opportunity for students.

1. Introduction

Educational reforms and movements have been occurring and taking different forms worldwide. In the meantime, innovation has played a crucial role in the transformation of education, particularly as the advancement of technology has brought vigor to educational settings over the past decades [1]. Educators have striven to create new opportunities for innovative and alternative instruction by taking advantage of these technological developments (e.g., Carlson et al. [2]; Cheng et al. [3]; Freina and Ott [4]; Heflin et al. [5]; Howard et al. [6]; Huda et al. [7]; Lee and Wong [8]; Sural [9]; Xie et al. [10]). The development of technology has constantly provided opportunities to renew learning methods while posing challenges for educators. Since their advent, mixed reality and artificial intelligence (AI) have been of particular interest to researchers exploring their potential in educational contexts. Although the field of computer-assisted language learning (CALL) was established in the early 1960s, the recent CALL literature has focused on the potential effects of mobile-assisted language learning (MALL), or M-learning [11]. More recently, innovations such as robot-assisted language learning (RALL) and speech bots have emerged with the use of AI technology [12].
The 2019 New Media Consortium (NMC) higher education panel reached an agreement that six technology developments have the potential to make a real difference in education, particularly in the aspects of pedagogical and learning approaches, teachers' work organization, and the design, arrangement, and delivery of instruction. The six technology developments are mobile learning, analytics technologies, mixed reality, AI, blockchain, and virtual assistants. In other words, AI and mixed reality have entered the mainstream and become integral features of contemporary education [13].
AI has the ability to personalize individual experiences, decrease workloads, and assist with the analysis of large and complex data sets. According to the market analysis report [14] published in 2022 by Grand View Research, Inc., the global AI market size was valued at USD 93.5 billion in 2020 and is expected to expand at a compound annual growth rate (CAGR) of 38.1% from 2022 to 2030. Markets and Markets [15] forecast the global AI market size to grow from USD 58.3 billion in 2021 to USD 309.6 billion by 2026, at a CAGR of 39.7%.
The global virtual reality (VR) market was valued at USD 21.83 billion in 2021 and is expected to grow at a CAGR of 15.0% from 2022 to 2030 and to reach a value of USD 87 billion by 2030 [16]. The increasing usage of this technology in instructive training—such as for training mechanics, engineers, pilots, soldiers, field workers, and technicians—is driving the growth of the market. In addition to providing training and fulfilling educational purposes, the technology is widely accepted across industries for various other purposes, such as VR exposure therapy for patients and VR technology for tourism. VR technology has also experienced a surge in demand during the pandemic due to the necessity of companies to continue their business operations virtually.
Along with the advancement of technology, English-language professionals have striven to enhance students' English-language proficiency by continually innovating, e.g., with mobile technology [17,18], VR [19,20], augmented reality [21], and robots [22]. Some of the features of virtual learning environments fit with the instructional strategies of language learning [20,23]. For example, in a VR environment, learners can play the role of an avatar to interact with other avatars and practice speaking a language to communicate with others. Furthermore, the interactivity of robots facilitates language learning; role playing, for example, is a commonly used strategy in language teaching [24].
Many studies have emphasized the importance of designing English courses based on the real communicative needs of workers in a particular workplace [25,26,27]. As mentioned above, VR is a simulated environment, and the interactive feature of robots allows language learning and communication to take place. Learners are able to interact with robots and be immersed in simulated situations. However, since only a few research studies have integrated programmed AI robots in language learning (e.g., Hu [28]; Ji [29]; Malerba et al. [30]), more research on RALL should be conducted to provide various insights into how application systems and learning contents can be designed to motivate and engage students in learning.
Accordingly, the purpose of this action research study was to create an application system as a tool for training English-language tour guides using robots combined with AI and VR technologies. The English learning content, which is scenario-based, was developed mainly in the form of dialogues. Students interacted with the robot through dialogues to practice speaking English. They could play the role of either a tour guide leading tours or a traveler taking part in a guided tour. The learning contexts comprised VR environments depicting tourist attractions in Northern Taiwan. It was expected that students would feel sufficiently immersed in VR to easily understand the content. The intervention lasted for ten weeks in an 18-week semester course. The study aimed to examine the teaching effectiveness of RALL in VR contexts. After practicing with the learning system, students were assessed in a real-world situation to examine their speaking ability and relevant vocabulary. It was hoped that guiding tours of attractions in Taiwan through dialogues with a robot would not only improve students' English abilities and speaking fluency, but also enhance their motivation and interest and reduce their learning anxiety.

2. Literature Review

2.1. Robot-Assisted Language Learning

RALL is gradually becoming a commonly studied field of human–robot interaction. Research has clearly indicated that RALL can support both native and foreign language acquisition [31,32]. Randall [33] defined RALL as “the use of robots to teach people language expression or comprehension skills—such as speaking, writing, reading, or listening” (p. 1). Furthermore, a robot is intrinsically suitable for language learning, as it is capable of social interaction [33] and interaction is central to language learning. Social robots have proven effective at achieving cognitive and affective outcomes similar to those achieved by human interlocutors [34,35]. In addition, social robots enhance learning motivation and engagement in tasks and reduce learning anxiety, though the long-term advantages are still unknown [33].
Robots have benefits over other technologies, as they aid language production, and RALL is a viable, practical, and valuable approach for oral language development [35]. Lin et al. [36] invited two fourth-graders to participate in a two-phase study applying multimodal cues in a task-based learning system consisting of an educational robot and a 3D book supported by the Internet of Things (IoT) to enhance vocabulary learning in children learning English as a foreign language. The results indicated a significant improvement in oral production due to the elicitation cues in the learning environment. Lin et al. [37] also conducted a systematic review of 22 empirical studies published between 2010 and 2020 and, in particular, analyzed oral interaction design, including language teaching methods, interactive learning tasks, and interaction effects. Oral fluency, literacy, and communicative competence, rather than grammatical accuracy, were given prominence regarding the pedagogical implications of RALL instructional design. Currently, robots can be regarded as interactive instructional agents with multiple senses, which facilitate learners' oral communication development better than machines that merely carry out a series of programmed actions.
There are technical challenges when developing a robot as an instructor or a tutor, such as automatic speech recognition, social interaction, and non-verbal social signals. It is especially challenging and elusive to automate the social elements of interaction, though robots can autonomously operate in a restricted context [34]. Randall [33] argued that the effectiveness of robots at acting as teachers or tutors relies on the efficacy of information transfer. Hence, when designing the form, speech, and behaviors of robots, a specific domain must be considered.

2.2. Artificial Intelligence in Education

The term “artificial intelligence” first appeared in the 1956 Dartmouth Summer Research Project on Artificial Intelligence, and the study of AI was defined as “to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it” [38] (p. 17). Although the field of AI has been developed over decades, as Popenici and Kerr [39] contended, there is no commonly agreed upon definition of AI. Given that AI has been applied in various fields, such as medicine, customer service, transportation, energy, and education [40,41], it may not be easy to reach a consensus regarding the definition of AI. Nevertheless, with a focus on AI in education, Popenici and Kerr defined AI as “computing systems that are able to engage in human-like processes such as learning, adapting, synthesizing, self-correction and use of data for complex processing tasks” (p. 2). AI also has great potential for teaching and learning, as it can enhance the way that people “learn, remember, perceive, and make decisions” [41] (p. 3953). With the help of AI, it is possible to monitor the learning process to offer just-in-time support and facilitate personalized learning [41].
Roll and Wylie [42] argued that the developments in educational practices and theories over the past 25 years have transformed the contemporary educational environment into one that emphasizes “authentic practices using big problems in collaborative settings” (p. 583) and stresses the importance of personalization. Roll and Wylie further specified these developments in terms of educational goals, practices, and environments. Regarding educational goals, Roll and Wylie insisted that instead of providing a rigid body of knowledge, today’s education aims to train students to be “adaptive experts” and “on-the-job learners” by focusing on “knowledge application, collaboration, and self-regulated learning skills” (p. 591). Consequently, assessment has also changed from “a summative measure of performance” to “an ongoing formative measure that informs just-in-time support” [42] (p. 591). In terms of education practices, Roll and Wylie indicated that the current focus is on “authentic problems”, “experiential learning opportunities”, “group work” (p. 591) and personalization and asserted that AI has great potential to address significant problems by providing constructivist activities and just-in-time support simultaneously. Lastly, today’s education is not limited to the classroom and informal and workplace learning has become an important trend [42]. Informal and workplace learning are forms of self-directed learning that can happen at any point over an individual’s lifetime, so there is a strong emphasis on “life-long and life-wide learning” [42] (p. 591).
What is more, Timms [43] explained that it is unnecessary for AI systems to be delivered solely via computers and pads; that is to say, people should design technologies specifically for educational purposes so as to offer new possibilities for teaching and learning. There are two possible developmental directions that the field of AI in education can pursue [43]. First, the combination of AI and robotics can provide more social interaction, attract and keep students' attention, and offer personalized instruction in the classroom [43]. Second, with the use of sensors, the IoT, and AI, people can create “smart classrooms” that are better able to support student learning by “leveraging the big data from rich interactions with ‘smart’ technologies to produce value for the teacher and classroom” [43] (p. 702). Educators have long been striving for personalized education. Now that AI has demonstrated its significance in education and the use of big data helps to achieve teaching efficiency and effectiveness, AI has created value for education. Furthermore, interaction with robots that integrate AI brings vast possibilities to various fields, especially language learning.

2.3. Virtual Reality

Along with the development of technology in recent years, VR has been increasingly employed in various fields, such as vocational training, education, and entertainment [44,45]. Given the ever-increasing presence and importance of VR in daily life, one might wonder what VR is. Inoue [46] defined VR as “a ‘technology’ or an ‘environment’ that provides artificially generated sensory cues sufficient to engender in the user some willing suspension of disbelief” (p. 2). In a VR environment, users are made to believe that what they see and feel is real to a great extent [45]. Vesisenaho et al. [47] provided another definition of VR, deeming it “a high-end user interface involving real-time simulation of an environment that people can explore and interact with through multiple senses” (p. 2). What is more, Makransky and Petersen [48] contended that VR is “a computer-mediated simulation that is three-dimensional, multisensory, and interactive, so that the user’s experience is ‘as if’ inhabiting and acting within an external environment” (p. 15). Although there are various ways of defining VR, it can be seen that, in general, VR affords an immersive experience in which users can explore and interact with a simulated environment using multiple senses.
Moreover, Smedley and Higgins [49] argued that VR can be “anything from a simple simulation program to full immersion involving special equipment” (p. 115). Based on the level of immersion, VR can be categorized into two major types: immersive and non-immersive [8,46,48,50]. In a non-immersive VR environment, users interact with computer-generated 3D images on a computer screen by using equipment, such as a keyboard, a mouse, a joystick, or a data glove [8,46,49,50]. On the other hand, in an immersive VR environment, users experience the virtual world through a helmet-mounted display or other immersive display technologies [8,46,49,50]. Martín-Gutiérrez et al. [45] indicated that there is an in-between category: semi-immersive VR.
Freina and Ott [4] wrote that VR provides users with an opportunity to live and experience situations that “cannot be accessed physically” due to time constraints, physical inaccessibility, dangers in the real situation, and ethical issues (p. 139). Jang et al. [51] made a similar argument that VR “affords investigation of distant locations, exploration of hidden phenomena, and manipulation of otherwise immutable structures” (p. 151). In addition, Siegle [52] contended that VR allows users to “immerse themselves into environments in which they are not physically present, but they feel like they are experiencing the environments” (p. 46). Hence, VR has created opportunities for education, as certain issues that were considered difficult to solve in the past, such as the ethical issues of medical anatomy, the destruction of marine ecology, or the protection of historical heritage, can be taught in VR environments without causing any further and serious problems.
Chung [53] further indicated that VR has the following three features: “real-time interactivity”, “strong immersion”, and “high imagination” (p. 251). With these features, VR is capable of providing a learning environment that enables students to “learn from experiencing the context” in which they can “go beyond textbooks and develop more flexible and fitting learning strategies” [53] (p. 251). Among the three features, immersion and interactivity are essential to creating an experience of reality that facilitates learning [45,54]. Chiu [55] identified the major types of technologies adopted in chemical education between 2010 and 2021—AR and VR applications were the most extensively investigated.
Sukotjo et al. [56] applied VR to implant surgery and investigated students’ perceptions of the use of VR in dental education. Chen [19] developed a 3D, VR English learning platform and conducted an experiment to examine its learning effectiveness for technological university students in Taiwan. Based on Bloom’s taxonomy, students improved not only their phonological, morphological, grammatical, and syntactical knowledge but also their level of thinking. Hence, since VR has potential in education and language learning, the study integrated VR into RALL to enhance teaching and learning effectiveness.

2.4. Engagement Theory

Engaging students in learning activities is crucial, and for a long time a myriad of educators and researchers have sought to enhance student engagement with the purpose of increasing student motivation [57,58,59]. Recently, a variety of technology-enhanced instructional approaches (e.g., VR and wearable devices) have been applied in different disciplines with this simple but important purpose, because contemporary digital natives approach their daily activities differently, often online and relying heavily on technology.
When students are engaged in the process of learning, they are active and motivated. In other words, they apply cognitive strategies to facilitate and boost their understanding [60]. Engagement is not just fulfilling a task; the features of being engaged are being involved, energized, and active. People expend effort and make full use of their cognitive capability [61]. Jacobi et al. [62] maintained that student engagement is related to the time and physical energy spent on learning activities. Kuh [63] regarded engagement as the efforts that a student makes in studying, practicing, and obtaining feedback on analyzing and solving problems. Taylor and Parsons [64] suggested that research on student engagement has grown and that the most notable shift has been from a focus on disengaged students to a focus on engaged learners.
In their 17-item engagement inventory, Schaufeli et al. [65] defined engagement as “a positive, fulfilling, work-related state of mind that is characterized by vigor, dedication, and absorption” and “a more persistent and pervasive affective-cognitive state that is not focused on any particular object, event, individual, or behavior” (p. 74). There are three dimensions of engagement: vigor, dedication, and absorption. Vigor has the following characteristics: (1) “high levels of energy and mental resilience while working”, (2) “the willingness to invest effort in one’s work”, and (3) “persistence even in the face of difficulties”. Dedication refers to “a sense of significance, enthusiasm, inspiration, pride, and challenge”. Finally, absorption is defined as “being fully concentrated and deeply engrossed in one’s work, whereby time passes quickly and one has difficulties with detaching oneself from work” (pp. 74–75). Hence, the current study observed the learners’ behaviors with attention being paid to the features mentioned above.
Participants in Allcoat and von Mühlenen’s study [66] on learning in VR reported higher engagement than learners using traditional and video-learning methods. Leese [67] used a virtual learning environment (VLE) known as the Wolverhampton Online Learning Framework (WOLF) to encourage collaboration within learning sets. Students not only acquired skills in technology use, collaborative working, and presentation but also improved their performance. Kim et al. [58] tested an engagement motivation model with smartphone users. The results indicated that participants’ engagement motivations affected their value, satisfaction, and overall engagement intentions. Zhou et al. [68] proposed a model using VR natural interaction to enhance learning. This not only made learning interesting and fostered student engagement, it also improved knowledge construction in practice.
Rintjema et al. [24] used a social robot for one-to-one second-language tutoring. The results indicated that children improved their performance after spending more time with the robot, leading to a significant, positive change in the pattern of engagement across the interactions. Hence, the present study employed a robot to motivate students to learn English for guiding tours and to increase their engagement in formal classroom learning. While research has clearly shown that robots can support language acquisition and that learning with VR can motivate learners, the teaching effectiveness of using robots with VR in language learning remains unclear.
The research questions for this qualitative study are as follows:
  • How effective is RALL at enhancing students’ behavioral, affective, and cognitive engagement in terms of their learning motivation and performance?
  • Did the participating learners support RALL? If so, why and to what degree?
  • What are students’ perceptions of the advantages and disadvantages of RALL? What are their suggestions relating to them?

3. Methodology

3.1. Research Design

This study was primarily designed as action research. Action research approaches to educational research were adopted in the late 1960s and early 1970s by the teacher–researcher movement in the secondary education sector. Its combination of action and research has attracted researchers, teachers, and the academic community alike [69]. Action research adopts a methodical, iterative approach embracing problem identification, action planning, implementation, evaluation, and reflection. The insights gained from the initial cycle feed into the planning of the second cycle, for which the action plan is modified and the research process repeated. Kolb [70] extended this model to offer a conception of the action research cycle as a learning process whereby people learn and create knowledge by critically reflecting upon their own actions and experiences, forming abstract concepts, and testing the implications of these concepts in new situations. Practitioners can create their own knowledge and understanding of a situation and act upon it, thereby improving practice and advancing knowledge in the field. Likewise, throughout the present study, the researchers continuously reflected on and improved the teaching practice by examining the learning effectiveness of the students.
The intervention in this study involved several teaching cycles, and teaching effectiveness was evaluated after student learning. The intervention lasted for ten weeks, and the major concern was whether the learning system enhanced teaching effectiveness by integrating AI and VR into RALL for English tour guide practice. During the first phase of the intervention, student feedback and reflections from the teacher were collected to clarify some of the issues raised. The next phase involved the implementation of the improved teaching practice, and data were collected on the impact of the practice on specific areas, such as student performance, including oral proficiency and motivation. During the implementation of the intervention, the issues that the platform introduced into the teaching practice were explored, and the teaching practices were adjusted to address them.
In this study, the robot-assisted, VR-based tour-guide learning system design was an ongoing project; the researchers continued to create modules based on feedback from the pilot study, and then the actual implementation followed. The design stage included the programming and development of the learning content, ideas, and functions for leading tours. After the planning and design of one learning module, a pilot study was conducted in which the learning materials were modified following reflection upon these actions and experiences. The feedback obtained was thus used to improve the system. In the second cycle, more modules were written, and actual experiments followed that used the improved materials to analyze and evaluate student learning performance and engagement.
A qualitative approach was employed to gain an in-depth understanding of student learning effectiveness and engagement. Observations were made during the training process, and students self-reported their learning outcomes after each module. An interview with each participant followed after the training. The interviews and student self-reports were analyzed thematically. The analysis aimed to generate meaning from the collected data based on the topic and the research questions.

3.2. Participants

The two students participating in the study volunteered for the interviews. They were identified by the pseudonyms Jeffrey and Amanda, and their English proficiency level was high-intermediate. These two students were enrolled in the International Tourism and Meetings, Incentives, Conferences, Exhibitions (M.I.C.E.) Industry master’s degree program of the Department of Applied Foreign Languages at a university of science and technology in Taiwan. This program features a practice-based curriculum model with the aim of cultivating planning and management executives with linguistic competence. All of the courses are taught based on the philosophy of “learning by doing”. Three of the courses that these two students took were: (1) English Guidance, Interpretation, Practice and Studies; (2) Digital and Cultural Tourism; and (3) Research Methods. Through the in-person practice and guidance of these three courses, students come to understand how technology can be integrated into tourism and related fields and how research can be conducted. Students develop tour-guiding skills in English and learn the application system in the course English Guidance, Interpretation, Practice and Studies. Four students were enrolled in the course; however, Jeffrey and Amanda were the ones who volunteered for the interviews.

3.3. Content Design of English for Guiding Tours

The language-learning mechanism of the system was designed for social interaction. Thus, the content of this application system was focused on leading tours in English through interactive dialogues developed to provide learners with opportunities to practice oral skills. There were ten modules (destinations), each containing several scenes. In other words, there were several dialogues for each destination. Each dialogue had a scenario giving details of the plot (e.g., discussion of the history of an architectural attraction, route planning, or planning for the next stop) and individual scenes resembling the corresponding real-life situation. During the practice, the dialogues were facilitated by Robot Robert, and the learners and the robot played the roles of either tour-guide or traveler.
In addition, there were simple exercises involving multiple-choice items (e.g., proper wording, matching words and collocations) and sentence structure (i.e., grammar and syntax), as seen in Figure 1, Figure 2, Figure 3, Figure 4 and Figure 5. The robot gave feedback after each answer. Students were expected to improve their speaking proficiency and relevant vocabulary by practicing the dialogues with the robot.
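To make the content structure concrete, the following sketch shows one way the modules, scenes, dialogues, and exercises described above could be represented as data. It is a minimal, hypothetical Python illustration only; the actual system was authored in Unity, and all names and example sentences here are invented.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Exercise:
    prompt: str                 # a multiple-choice or sentence-structure task
    options: List[str]
    answer: str

@dataclass
class DialogueTurn:
    speaker: str                # "guide" or "traveler"; one role is played by Robot Robert
    line: str

@dataclass
class Scene:
    scenario: str               # plot details, e.g., history of an attraction or route planning
    turns: List[DialogueTurn] = field(default_factory=list)
    exercises: List[Exercise] = field(default_factory=list)

@dataclass
class Module:
    destination: str            # one of the ten tour destinations
    scenes: List[Scene] = field(default_factory=list)

# Hypothetical example: one scene of a Taipei 101 module
taipei_101 = Module(
    destination="Taipei 101",
    scenes=[Scene(
        scenario="The guide introduces the observatory and plans the next stop.",
        turns=[
            DialogueTurn("guide", "Welcome to Taipei 101, once the tallest building in the world."),
            DialogueTurn("traveler", "How long does the elevator ride take?"),
        ],
        exercises=[Exercise(
            prompt="Choose the correct word: The elevator is among the ___ in the world.",
            options=["fastest", "fast", "faster"],
            answer="fastest",
        )],
    )],
)
```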

3.4. Design of Robot Robert: 3D VR English Learning Interactive Application System

From a design perspective, this study also represents design-based research (DBR). The design of the interactive application system used Unity’s AI plug-in, which includes seven major functions: motion, touch, motor, LED, voice playback, speech recognition (speech to text), and object recognition. The design procedure comprised five steps: idea generation, design and definition, development and prototyping, post-production and debugging, and publishing.
VR was developed using the Unity game engine. All of the destinations were created as 3D models, with Google Street View used to obtain 360° panoramas (a choice informed by student feedback, as described below). The system was designed to enable situated learning with scenarios in the VR environments. The exercises were game-based language learning activities with feedback provided (e.g., “Yes, you’re right” and “Sorry! Try again”).
In the initial stage of the DBR cycles, students provided their opinions on the first module. Their feedback was used to improve the system. For example, students expected greater realism in the 3D environment, so 3D Google Street View was used. They also wanted the game to be more fun, so certain game elements were added to the game-based language learning exercises, such as a time limit for the challenges, a clear goal (sentence scrambling), and a reward (score).
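As an illustration of the exercise flow just described (a clear goal, a time limit, a score-based reward, and the feedback strings above), the following is a minimal Python sketch. It is hypothetical and simplified; the actual exercises were implemented as Unity mini-games, and the time limit and scoring values below are assumptions made for illustration.

```python
import random
import time

FEEDBACK_CORRECT = "Yes, you're right"
FEEDBACK_WRONG = "Sorry! Try again"

def sentence_scramble_round(sentence: str, time_limit_s: float = 30.0) -> int:
    """One hypothetical round: unscramble a tour-guide sentence before the time limit.
    Returns the score earned (the reward element described in the study)."""
    words = sentence.split()
    scrambled = random.sample(words, len(words))
    print("Rearrange into a sentence:", " ".join(scrambled))

    start = time.time()
    attempt = input("Your answer: ").strip()
    elapsed = time.time() - start

    if elapsed > time_limit_s:
        print("Time is up!")
        return 0
    if attempt == sentence:
        print(FEEDBACK_CORRECT)
        return 10  # assumed reward value; the real scoring scheme is not specified
    print(FEEDBACK_WRONG)
    return 0

# Example usage with a hypothetical sentence about a destination:
# score = sentence_scramble_round("The National Palace Museum houses ancient Chinese artifacts")
```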

Introducing the Interactive Robot

The interactive robot is named Robert, stands 35 cm tall, and weighs about 4.4 kg. His face is a 7-inch screen with 5 million pixels. The charging time is three hours, and the robot can be used for 2.5 hours on a single charge. Robot Robert is able to add expressions and body movements to the scenario, resembling someone telling a story. Through his multiple sensors (e.g., vision, voice, and touch) and expressive body movements, Robot Robert allowed learners to interact with him as if they were inside the story. Additionally, by using the control function, the designer could create interactions with Robot Robert directly, which differs from traditional input via a keyboard or touch screen.
The design concept for this interactive robot is a learning companion, as the learner must interact with Robot Robert. The more the learner engages with Robot Robert, the more feedback Robert will give. The feedback can be set up with text, motion (e.g., clapping or nodding of the head), LED lighting, or touch. When Robert performs an action, such as a free-style stroke, the learner can respond with the word “swimming” and Robert will provide feedback to the learner. It also has general camera functions and a smart voice feature, as well as various situation (i.e., scene) setup and development options.
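As a rough illustration of the companion-style loop described above (Robert performs an action, the learner names it, and Robert responds with text, motion, or LED feedback), the Python sketch below maps actions to expected words and multimodal feedback. All names and values are assumptions for illustration; the real interaction was built with the robot’s control functions and the Unity AI plug-in.

```python
from typing import NamedTuple, Optional

class Feedback(NamedTuple):
    text: str
    motion: str      # e.g., "clap" or "nod", as described for Robot Robert
    led: str         # hypothetical LED color cue

# Hypothetical mapping from a robot action to the word the learner should say
ACTION_VOCAB = {
    "free_style_stroke": "swimming",
    "steering_wheel_motion": "driving",
}

def respond(action: str, learner_word: str) -> Feedback:
    """Return multimodal feedback depending on whether the learner names the action correctly."""
    expected: Optional[str] = ACTION_VOCAB.get(action)
    if expected is not None and learner_word.strip().lower() == expected:
        return Feedback(text="Great, that's right!", motion="clap", led="green")
    return Feedback(text="Not quite. Try again!", motion="nod", led="blue")

# Example: Robert mimes a free-style stroke and the learner answers "swimming"
print(respond("free_style_stroke", "swimming"))
```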

3.5. The Instrument

Triangulation with multiple methods was employed in the study to gain a comprehensive understanding of RALL in the VR context. Observations were made during the training by the researchers, and students were requested to write a report about their learning experience and outcomes after each module. In addition, a semi-structured interview was conducted after a module was completed during the learning period. Student performance regarding listening, speaking, and vocabulary and student experience regarding motivation and engagement were examined. All of the collected data were integrated, synthesized, and analyzed, then categorized into various themes in order to answer the three research questions.

3.5.1. Student Self-Reported Learning Outcome

The study adopted student self-reported learning outcomes (SRLOs) to understand whether students perceived that they learned from RALL. SRLOs allowed students the opportunity to reflect on and consider their own learning. They also offered teachers systematic information on student perspectives and ratings of their learning [71]. Measurements of SRLOs had greater effects on learning outcomes than concrete exam scores, but the difference between the two was not statistically significant [72]. In this study, students’ self-reports mainly focused on their RALL and VR experience, feedback, reflection, and self-perceived learning outcomes. Guidelines for their writing included English learning, perceptions of RALL and VR, the benefits and drawbacks of using the system, and suggestions.

3.5.2. The Observations

Participant observations by the researcher were used to record student learning behavior, the data of which were cross-analyzed with those of the interviews. Marshall and Rossman [73] defined “observation” as systematic description of the events, behaviors, and artifacts of the social environment selected for research. Bernard [74] observed that during the participation process, the observer can establish good relations and have close interaction with the group in a community. In this way, its members will behave naturally. Therefore, the observer can understand the real situation and interpret the representative significance of the data.
The researchers in the study took notes while making observations. Motivation regarding students’ behavioral, affective, and cognitive engagement was observed. For behavioral engagement, it was observed whether students paid attention to the content and whether they persisted in learning. Their participation in activities and their efforts in learning were observed. For affective engagement, their attitudes and emotions toward such learning were the observable components. For cognitive engagement, comprehension, working memory, and application were used to understand student learning.

3.5.3. The Interview

The interview is one of the most widely used qualitative research tools in the social sciences. The purpose of an interview is to understand students’ and practitioners’ feedback and comments after the experiment. The Huberman and Miles [75] method for generating meaning from transcriptions and interview data was used for qualitative data analysis. Their methods of noting patterns and themes, clustering items into categories, building logical chains of evidence through noting causality and making inferences, and building conceptual coherence allowed the typically large amounts of qualitative data to be reduced [69].
Semi-structured interviews were conducted in this study. An interview protocol was prepared with questions (e.g., “What is your perception of RALL?” and “Would you like to be exposed to similar learning environments in other courses?”) based on the purpose statement and the research questions of the study. The interviews were recorded with the participants’ permission and then transcribed. For the coding process, all the researchers first read through the collected data and identified themes by labeling them with a noun or noun phrase, such as “active learning experience”. The coding process went through several iterations of coding and recoding, as well as grouping and regrouping, until the meanings and explanations were consolidated. The emergent themes were approved by all the researchers to ensure inter-rater reliability.
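For readers less familiar with this kind of thematic coding, the clustering step can be pictured with the minimal Python sketch below. The theme labels and excerpts are modeled on the quotes reported later in this article, but the sketch is purely illustrative; the actual coding was performed manually and iteratively by the researchers.

```python
from collections import defaultdict

# Hypothetical coded excerpts: (theme label, interview excerpt)
coded_excerpts = [
    ("active learning experience", "you have to actively interact with the robot"),
    ("autonomy", "I was able to control my own learning pace"),
    ("contextualized learning", "all the new words in the lessons are presented in a context"),
    ("autonomy", "learn the lessons based on your own learning pace"),
]

# Cluster excerpts under their themes, mirroring the grouping/regrouping iterations
themes = defaultdict(list)
for label, excerpt in coded_excerpts:
    themes[label].append(excerpt)

for label, excerpts in themes.items():
    print(f"{label}: {len(excerpts)} excerpt(s)")
```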

3.6. Data Analysis

The qualitative data were collected from student self-reported learning outcomes, interviews, and observations. All of the collected data were analyzed separately and then integrated to determine how effectively the students were trained with this type of learning. According to Yin [76], the analysis of qualitative data usually consists of five phases. The first analytic phase, compiling data into a formal database, calls for the careful and methodical organization of the original data. The second phase, disassembling the data in the database, involves a formal coding procedure. The third phase, reassembling, is less mechanical and benefits from a researcher’s insightfulness while noting emerging patterns. Creating data arrays helps reveal such patterns in this third phase. The fourth phase, interpreting, involves using the reassembled material to create a narrative with accompanying tables and graphics that becomes the key analytic portion. The final phase, concluding, calls for drawing conclusions from the entire study. Notably, the five phases form a recursive rather than linear relationship [76] (p. 179), as the researchers constantly revisit and revise in order to arrive at representative themes and interpret them appropriately.
In this study, thematic analysis was adopted for the interview data with the purpose of finding recurring themes. Content analysis was used to analyze the students’ SRLOs to determine whether their feedback and reflections were consistent with what they revealed in the interviews. Observation was employed to examine the students’ actual learning and performance. The interactive and observable components regarding student behavioral, affective, and cognitive engagement were considered. Finally, the results mainly came from the interviews, and the results from the other sources were used to corroborate the major findings.

4. Results and Discussion

The data from the interviews, observations, and student SRLOs were analyzed, integrated, interpreted, and synthesized in order to answer the three research questions. The themes that emerged are discussed below.
Research Question 1:
How effective is RALL at enhancing the students’ behavioral, affective, and cognitive engagement in terms of their learning motivation and performance?
Generally speaking, the teaching appeared to be effective based on the assessment of student performance, as the students improved their oral English abilities. They demonstrated oral skills and fluency in guiding tours of the attractions after interacting with the robot. Their relevant vocabulary also improved. As they had no prior experience with RALL, they showed interest and were willing to accept when invited to participate in the study. They had positive attitudes toward RALL and felt good about their ability to master new knowledge and skills by orally guiding tours of the attractions. Overall, the two students were highly motivated. They were active in learning and spent time and expended effort on it. Student engagement from the behavioral, affective, and cognitive perspectives is discussed below.
A. Behavioral Engagement
Behavioral engagement refers to a learner’s observable acts involved in learning. Student behavioral engagement is obtained from the observation and judgment of whether students complete their assignments and work through content. Since this was the first time that both students had experienced RALL, there was a sense of novelty at the beginning. However, they continued to pay attention to the learning content—in particular, the VR environment and the mini-game-based exercises.
Both Amanda and Jeffrey exhibited energy and dedicated effort to the performance of the tasks. They demonstrated a high level of behavioral engagement because they studied the learning content with intensity and persisted in doing their best to perform well on their assignments. Moreover, the two students were relatively active in the exchange of ideas about the content, the VR, and the RALL because they acknowledged the value of RALL. Most importantly, they showed that they could apply their oral skills to real-world situations when they were asked to lead tours at the destinations.
B. Affective Engagement
Learners’ affective engagement is related to their emotions during the learning process. In this study, RALL enhanced the students’ emotional engagement. They had fun and demonstrated interest in learning with Robot Robert, especially when seeing his movements in response. Amanda was constantly excited and energetic when she saw the robot’s responses and motions, and she enjoyed the fun features of VR. She was proud of completing the assignment that the researchers set and was enthusiastic while discussing RALL with the researchers. Jeffrey thought it was a less stressful experience compared to the traditional teaching method and that it helped develop autonomy, as he could decide how he wanted to learn. Furthermore, both students felt positive about RALL when they compared it to the traditional approach to learning. Amanda said:
“The most interesting experience was that instead of just listening to the teacher who often does not know if you understand the lesson or not in a traditional classroom, I could learn at my own pace because I could stop and check the new words at any time. Besides, while learning English, interacting with the robot was fun for me. Personally, I think learning with a robot is more effective than traditional ways of learning as the content of the lesson is well-designed and engaging. In a traditional classroom, sometimes you may feel intimidated, especially when the teacher is not so friendly to you, and your education will be greatly hampered. In contrast, the robot looks harmless to me and makes me feel relaxed. So I prefer to learn with the robot.”
Jeffrey said:
“The most important strength of RALL, in my opinion, is its interactivity and the learning experiences that it provides, which are closer to real situations. In addition, the presence of a teacher is often more stressful for a learner because teachers are usually seen as authority figures. In contrast, the appearance of the robot is cute and I think it looks friendly to many people, so learning with the robot might be a less stressful experience. In addition, learners have great autonomy when learning with the robot because it allows the user to decide how they want to learn.”
C. Cognitive Engagement
Both students thought that the VR context made it easier for them to understand the content. Judging from their performance, they improved their oral skills and fluency and retained the new vocabulary in the content well. Ultimately, they were able to apply their knowledge and skills to real life. Amanda said:
“The first time, I felt very curious about how to use this robot platform to improve how I learn a foreign language and what the differences between it and traditional methods teaching and learning were. Besides, it is very convenient because it can be set and used for multiple purposes without restriction. I can spend a lot of time listening and playing with this platform to see how my English ability improves every day.”
Both students spent some time at the beginning working out how to learn and how to interact with the robot. They comprehended the scenarios in the VR environment. Overall, they recognized the value of using this platform to learn how to guide tours of these attractions. The simulated environment allowed them to easily understand and memorize the new vocabulary and apply it to real situations.
Research Question 2:
Did the participating learners support RALL? If so, why and to what degree?
The two students thought that RALL was a unique experience for them. They thought that such teaching was effective because the learning materials were well-designed and properly used. They agreed that the robot could serve as a supplement to teaching and learning in many disciplines. Additionally, they would recommend such learning to other people, especially for language learning. Amanda said:
“I support the implementation of this robot platform to various learning circumstances because it could bring different feelings and unique user experiences. Users could play with it anytime and anywhere. Robots also replace numerous human roles, such as museum tour guide, information desk worker, hotel concierge, teaching assistant, and companies’ reception desk. This learning method can be popularized so that everyone can learn while playing and gain useful knowledge.”
Jeffrey said:
“I think it has lots of potential. It can make learning more fun for students. And for shy students, it may provide them with an opportunity to practice English without being watched by others, which can be very stressful and cause them to perform poorly and learn less effectively. Yet, I think it is very important that learning materials must be well-designed. Otherwise, users are only learning how to interact with the robot instead of the language itself. Maybe it can also be used as a supplement to the regular class so students can review what they have learned in the class and try to understand things that are not yet clear to them with the robot. In my opinion, it will be very helpful in teaching and learning in the near future. If it is used properly, learning can be more effective and fun. Anyhow, it is very suitable to be a foreign language teaching assistant, and it can be applied to almost all kinds of learning.”
Research Question 3:
What are the students’ perceptions of the advantages and disadvantages of RALL? What are their suggestions in relation to them?
The perceived advantages and disadvantages of RALL were presented in various themes regarding learning, content, and technical issues. Four themes were identified: contextualized learning, autonomy, interaction, and active learning experience. Students were satisfied with the content. In addition, students responded that the design of this robot was simple and clear enough that users would quickly become familiar with it once they experienced it. It was not complicated, and it was more understandable than other electronic devices that they had experienced. The drawbacks that the students raised were mostly technical issues, including the volume function.
A. RALL
a. Contextualized Learning
The VR context allowed the students to learn faster and better and retain vocabulary, knowledge, and information for longer. Since Jeffrey felt connected to the context, he was committed to learning. Jeffrey also regarded the VR environment as meaningful and relevant to him, so the learning made sense to him. Furthermore, he could take control of his learning.
“I think the lessons were interesting and relevant to me because they focused on famous places in Taiwan like the National Palace Museum and Taipei 101, which I have been to. Also, learning English in a game-like environment can improve learners’ motivation because without a teacher who firmly directs the lesson, learners can enjoy more autonomy than in a traditional classroom and have more control over their own learning. Another significant advantage is that it allows students to learn new words in a certain context. If you only try to memorize words without knowing how they are used in a specific context, you are very likely to forget about them after a few days. In contrast, all the new words in the lessons are presented in a context so it is much easier to remember them because they are strongly connected to what is happening to the characters and closely linked to the stories. When you encounter a similar context, it may be easy for you to think of the new word you just learned.”
Amanda had the same thought, as she said:
“Just like what I said, language learning is most effective when you are in the context. In a traditional classroom, most of the time you are just memorizing new vocabulary words, phrases, and sentences. Without knowing how they are used in real situations, you are likely to misuse them and make many mistakes. In contrast, RALL provides an opportunity for you to learn through real interactions in daily situations, which is much more engaging and more likely to leave a strong impression on the user. In this way, language learning should be more effective and possibly learners are less likely to misuse words because they are linked to specific contexts.”
b. Autonomy
As the students could choose the lessons they wanted to learn and repeat those lessons, they could control their learning process. Thus, the learning was more focused and personal for the learners. These students felt that they had the capacity and strength to regulate their learning activities. Jeffrey said, “I was able to control my own learning pace and so have more autonomy. This was another thing that made me feel motivated to learn with the robot”. Meanwhile, Amanda said:
“I think this platform is highly participatory because you have to choose which lesson you want to begin with and learn the lessons based on your own learning pace. In other words, you do not passively accept what is planned for you but actively participate in your own education by deciding on how you want to learn.”
c. Interaction
The interaction in this platform involved two parts: the VR and the robot. Amanda pointed out that interacting with Robert was more like a real situation and more immersive than the VR. She remarked:
“RALL is a quite an innovative idea. The learning materials are well-designed to capitalize on its strengths such as interactivity, that provide learning experiences that are much closer to our daily interactions than those enabled by computer games because interacting with a robot is much like interacting with a real human being. Also, compared to VR, I think it is more immersive because when you use VR, you are still in a virtual environment, but robot-assisted learning allows you to interact in a real situation. In my opinion, this is a more contextualized learning experience.”
However, Jeffrey thought that his learning experience could be improved if there were more interactions resembling daily interactions with other people in the learning process, as the dialogues were pre-programmed. He said:
“In terms of interactivity, I think it was a fun experience interacting with the robot. But this learning experience could be even better. I felt that I was just touching on the screen, reading the stories, playing the mini-games, and answering the questions most of the time. RALL should be able to do things more than that. I think this learning experience can be improved by creating lessons that include real interactions between the user and the robot. By real interactions I mean more human-like interactions. For example, the user can talk to the robot, and the robot can respond accordingly—just like our daily conversations. In this way, the user won’t feel that he or she is just playing a game but having a conversation in a real context. If the lessons can include more interactions like that, learning can be more immersive and motivating.”
d. Active Learning Experience
As for active learning experience, Amanda said, “you do not passively accept what is planned for you but actively participate in your own education by deciding on how you want to learn”. Jeffrey thought that in RALL learners must actively interact with the robot to create an active learning experience, which is different from the traditional approach. Furthermore, he posited that it would help shy students to learn a language comfortably and effectively. He said,
“Yes, especially students who do not feel comfortable learning with other people and those who are too shy to participate in the class. Robots are able to provide a private learning space and allow you to learn at your own pace. This might improve the effectiveness of language learning. Also, RALL provides an active learning experience; you have to actively interact with the robot instead of just passively listening to the teacher. I think this will make students who feel bored in a traditional classroom motivated to learn and have better learning outcomes.”
To conclude, in terms of learning motivation, interaction, participation, innovation, and language learning, the students strongly supported the implementation of robot platforms. Their responses across these dimensions provide foundational knowledge for understanding user expectations and experiences while learning a foreign language. Implementing robot platforms can increase learning opportunities and time, attract people, increase their motivation, and enhance their experience.
B. Content
As to the content, the two students were satisfied with it, and they gave the following positive feedback:
  • The introduction includes the historical and cultural background of the tourism destination, allowing users to gain useful knowledge. Chapters emphasize and explain important English words to help users remember easily. The content was mainly in the form of dialogue and students got to practice speaking.
  • The content provides detailed information, and the precautions are also covered very carefully. It introduces various famous tourism destinations in Taiwan with colorful photos that enhance the user experience. Those photos are of good quality and contain bright colors, and many are taken from different perspectives that provide a view of the entire landscape.
C. Technical Issues
Students spent only a little time becoming familiar with the operation of the system. It was not complicated, and it was more understandable than the other electronic devices that they had experienced.
a. Technical Problems
As the content was pre-programmed, the students’ errors could not be corrected. Jeffrey wished that the robot could clear up his confusion and that it was more user-friendly so that people would feel more comfortable using it. He said,
“But an obvious weakness of the robot is that it can’t correct the user right away if he or she makes a mistake. Also, if the user doesn’t understand something, it’s very likely that the robot can’t help them clear up their confusion because it’s programmed in advance and doesn’t have any flexibility in the learning process. Besides, for users who are not familiar with technology, RALL may be challenging for them and even make them less motivated to learn because of difficulties they may encounter when using the robot. Make these people comfortable with using the robot may be an urgent issue.”
The problem with the volume function was something that the researchers did not expect. If the volume could be adjusted automatically depending on the surroundings, this would be more convenient for learners.
“Another suggestion is that the volume function should be adjustable during the playing time. The user could control the volume button or use a shortcut in flexible ways, but there was no way to adjust it once they entered the section. It is easy to be too loud or too small depending on the robot’s surroundings. This suggestion is based on the environment when I tested learning apps. The environment is simply quiet, and I don’t realize that the volume is too loud before I log in to the section. For future applications, it is necessary to consider that each place has a different noise level.”
In addition to the above, Amanda suggested that the learning apps could use a short trailer or video in the introduction to make the operation clearer and more attractive to users.
b. Results Board
A result board appears when a unit is finished. It clearly shows the score which is based on the answers given by filling in the gaps and choosing the correct English words. Amanda said, “The results board is very attractive and clearly shows the results. However, if the user has played two different chapters, the results board only saves the result of the last chapter.”

Discussion

The students improved their oral skills and fluency and increased their vocabulary knowledge. The observations of and interviews with these two learners showed that they acknowledged the value of RALL. They were active in learning and eager to master the content and acquire new knowledge. They rose to challenges, remained committed to completing their assignments, and were driven toward higher achievement. Even though there were technical issues, the students were satisfied with the overall design and content of the learning platform.
Both students had a positive attitude toward RALL when they compared it to the traditional approach to learning. They had fun and remained interested while learning. The content was designed with VR technology to be scenario-based, and the students were excited to see the VR content design. Visualization aroused their interest and learning motivation; they were curious about the developed scenarios and felt enthusiastic and energetic while interacting with the robot through dialogue. As Timms [43] proposed, the social interaction created by combining AI and robotics attracts and keeps learners’ attention, and it can also offer personalized instruction because students can learn at their own pace.
The VR learning environment allowed the students to understand the content easily. They were able to explore and interact with the simulated environment using multiple senses. The features of VR allowed the students to immerse themselves, and they felt as if they were experiencing the situations, exactly as Makransky and Petersen [48] and Siegle [52] contended. The VR environment created an experience of reality that facilitated learning: students learned from experiencing the context and went beyond the textbook to develop flexible learning strategies, consistent with the results of Martín-Gutiérrez et al. [45] and Chung [53]. Contextualized learning makes content easier to grasp, and situated learning with scenarios in VR environments has clear advantages. In addition, game-based language-learning exercises drive learner engagement and facilitate knowledge retention, making them one of the most effective ways to engage and motivate students.
Both students expended effort and made use of their cognitive capabilities while performing the tasks, which is consistent with the theory of cognitive engagement of Guthrie et al. [61]. They used cognitive strategies such as repetition, organizing new language, and guessing meaning from context. In addition, they showed energy and sustained application, and their behaviors were associated with attention, concentration, and persistence, which predicted their learning outcomes. The results revealed a positive change in engagement across the interactions with the robot, consistent with the study by Rintjema et al. [24]. The students exhibited the features of engagement and were involved, energized, and active in learning. Interaction in RALL not only made the learning interesting and fostered student engagement but also improved knowledge construction, which is consistent with the results of Zhou et al. [68].
To conclude, this kind of learning brought meaning to the learners. While they were learning, they reflected on traditional learning methods and environments. RALL inspired them to think about how it surpasses the traditional approach, how they can benefit from it, and whether it assists and improves their learning. The students developed critical thinking through this comparison; hence, learning with the robot was meaningful for them and constituted a meaning-making process.
This study has limitations. The observers and content designers were the same people, so subjective judgments might have affected the analysis of the results; possible sources of bias in the qualitative analysis are therefore acknowledged. The study involved only two participants, so quantitative methods could not be applied and the results cannot be generalized to wider robot deployments. As for the design, the dialogues for the scenarios were pre-programmed, whereas the students expected real, human-like interaction. In addition, even though the destinations were all designed in VR, the experience differed from that of a fully immersive VR environment.

5. Conclusions

The learning system in this study helped the students enhance their oral skills and fluency and increased their vocabulary knowledge. RALL, combining AI and robotics, indeed provided interaction, attracted and maintained student attention, and offered more personalized instruction. However, the interaction with the robot was pre-programmed and limited in terms of context. Although the students acknowledged the value and significance of applying VR and robot technologies to language learning, they thought there was still potential to develop the platform further and make it more human-like. Future studies can use the students’ practice data as big data for improving and further developing the platform. As AI has great potential to address large problems by providing constructivist activities and just-in-time support simultaneously, future studies should consider using chatbots to give students feedback that helps them resolve confusion and gradually construct knowledge with self-efficacy in RALL.
The study implies that both VR and robot technologies have enormous potential to be integrated into language learning and teaching. Further improvement of the robot-assisted learning system for personalized learning of specific content is highly recommended. Employing the language robot in cross-disciplinary learning is also worth considering.
As the application system is in line with current and future developments of technology [77], educators, particularly language teachers, can gradually innovate and consider applying learner-centered approaches that suit the characteristics of this generation of students and guide them in becoming autonomous, lifelong learners.

Author Contributions

Conceptualization, Y.-L.C., C.-C.H., C.-Y.L. and H.-H.H.; methodology, Y.-L.C., C.-C.H., C.-Y.L. and H.-H.H.; software, Y.-L.C., C.-C.H. and C.-Y.L.; validation, Y.-L.C., C.-C.H., C.-Y.L. and H.-H.H.; formal analysis, Y.-L.C., C.-C.H. and H.-H.H.; investigation, Y.-L.C., C.-C.H., C.-Y.L. and H.-H.H.; resources, C.-C.H. and C.-Y.L.; data curation, Y.-L.C. and H.-H.H.; writing-original draft preparation, Y.-L.C.; writing-review and editing, Y.-L.C., C.-C.H., C.-Y.L. and H.-H.H.; visualization, C.-C.H. and C.-Y.L.; supervision, Y.-L.C.; project administration, Y.-L.C., C.-C.H., C.-Y.L. and H.-H.H.; funding acquisition, Y.-L.C. and C.-Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by two grants from Taiwan’s Ministry of Science and Technology: project nos. MOST 109-2622-H-262-001 and MOST 110-2410-H-262-003.

Institutional Review Board Statement

Ethical approval of the research was obtained from the Research Ethics Committee at the National Chengchi University in Taiwan. The approval number is NCCU-REC-202005-E041.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kou, Y.; Zhao, P. Reform and innovation in education: Perspective of technological development. Digital Educ. 2017, 5, 4. [Google Scholar]
  2. Carlson, C.S.; Aust, P.J.; Gainey, B.S.; McNeill, S.J.; Powell, T.; Witt, L. Which technology should I use to teach online? Online technology and communication course instruction. MERLOT J. Online Learn. Teach. 2012, 8, 334–347. [Google Scholar]
  3. Cheng, Y.W.; Sun, P.C.; Chen, N.S. The essential applications of educational robot: Requirement analysis from the perspectives of experts, researchers and instructors. Comput. Educ. 2018, 126, 399–416. [Google Scholar] [CrossRef]
  4. Freina, L.; Ott, M. A literature review on immersive virtual reality in education: State of the art and perspectives. In Proceedings of the International Scientific Conference Elearning and Software for Education, Bucharest, Romania, 25–26 April 2015; pp. 133–141. [Google Scholar]
  5. Heflin, H.; Shewmaker, J.; Nguyen, J. Impact of mobile technology on student attitudes, engagement, and learning. Comput. Educ. 2017, 107, 91–99. [Google Scholar] [CrossRef]
  6. Howard, S.; Serpanchy, K.; Lewin, K. Virtual reality content for higher education curriculum. In Proceedings of the 19th Biennial Conference and Exhibition, Melbourne, Australia, 13–15 February 2018. [Google Scholar]
  7. Huda, M.; Anshari, M.; Almunawar, M.N.; Shahrill, M.; Tan, A.; Jaidin, J.H.; Daud, S.; Masri, M. Innovative teaching in higher education: The big data approach. Turkish Online J. Educ. Technol. 2016, 15, 1210–1216. [Google Scholar]
  8. Lee, E.A.-L.; Wong, K.W. Learning with desktop virtual reality: Low spatial ability learners are more positively affected. Comput. Educ. 2014, 79, 49–58. [Google Scholar] [CrossRef] [Green Version]
  9. Sural, I. Augmented Reality Experience: Initial Perceptions of Higher Education Students. Int. J. Instr. 2018, 11, 565–576. [Google Scholar] [CrossRef]
  10. Xie, K.; Heddy, B.C.; Greene, B.A. Affordances of using mobile technology to support experience-sampling method in examining college students’ engagement. Comput. Educ. 2019, 128, 183–198. [Google Scholar] [CrossRef]
  11. Godwin-Jones, R. Mobile apps for language learning. Lang. Learn. Technol. 2011, 15, 2–11. [Google Scholar]
  12. Yamazaki, K. Future of technology in language teaching and learning. Technol. Lang. Teach. Learn. 2018, 1, 1–2. [Google Scholar] [CrossRef]
  13. Adams Becker, S.; Cummins, M.; Davis, A.; Freeman, A.; Hall Giesinger, C.; Ananthanarayanan, V. NMC Horizon Report: 2019 Higher Education Edition; The New Media Consortium: Austin, TX, USA, 2019. [Google Scholar]
  14. Grand View Research. Artificial Intelligence Market Size Analysis Report, 2022–2030 (Report ID: GVR-1-68038-955-5). Available online: https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-market (accessed on 4 April 2022).
  15. Markets and Markets. AI in Education Market by Technology (Deep Learning and ML, NLP), Application (Virtual Facilitators and Learning Environments, ITS, CDS, Fraud and Risk Management), Component (Solutions, Services), Deployment, End-User, and Region—Global Forecast to 2023; Markets and Markets: Northbrook, IL, USA, 2019. [Google Scholar]
  16. Grand View Research. Virtual Reality Market Size Analysis Report, 2022–2030 (Report ID: GVR-1-68038-831-2). Available online: https://www.grandviewresearch.com/industry-analysis/virtual-reality-vr-market (accessed on 4 April 2022).
  17. Kukulska-Hulme, A.; Viberg, O. Mobile collaborative language learning: State of the art. Br. J. Educ. Technol. 2018, 49, 207–218. [Google Scholar] [CrossRef]
  18. Lin, C.-C.; Lin, V.; Liu, G.-Z.; Kou, X.; Kulikova, A.; Lin, W. Mobile-assisted reading development: A review from the Activity Theory perspective. Comput. Assist. Lang. Learn. 2020, 33, 833–864. [Google Scholar] [CrossRef]
  19. Chen, Y.-L. The Effects of Virtual Reality Learning Environment on Student Cognitive and Linguistic Development. Asia Pacific Educ. Res. 2016, 25, 637–646. [Google Scholar] [CrossRef]
  20. Lin, T.J.; Lan, Y.J. Language learning in virtual reality environments: Past, present, and future. J. Educa. Technol. Soc. 2015, 18, 486–497. [Google Scholar]
  21. Bonner, E.; Reinders, H. Augmented and virtual reality in the language classroom: Practical ideas. Teach. Engl. Technol. 2018, 18, 33–53. [Google Scholar]
  22. van den Berghe, R.; Verhagen, J.; Oudgenoeg-Paz, O.; van der Ven, S.; Leseman, P. Social Robots for Language Learning: A Review. Rev. Educ. Res. 2019, 89, 259–295. [Google Scholar] [CrossRef] [Green Version]
  23. Penfold, P. Learning Through the World of Second Life—A Hospitality and Tourism Experience. J. Teach. Travel Tour. 2009, 8, 139–160. [Google Scholar] [CrossRef] [Green Version]
  24. Rintjema, E.; van den Berghe, R.; Kessels, A.; de Wit, J.; Vogt, P. A robot teaching young children a second language: The effect of multiple interactions on engagement and performance. In Proceedings of the Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; pp. 219–220. [Google Scholar]
  25. Malicka, A.; Guerrero, R.G.; Norris, J.M. From needs analysis to task design: Insights from an English for specific purposes context. Lang. Teach. Res. 2019, 23, 78–106. [Google Scholar] [CrossRef]
  26. Zahari, S.D.Z.; Zain, A.A.M.; Bakar, N.A.; Basri, I.S.; Omar, S. English language communication needs at workplace as perceived by students. JSET 2016, 3, 27–31. [Google Scholar]
  27. Simion, M.O. Communication needs for business English students in outcome based education. An. Univ. Constantin Brancusi Targu Jiu Serie Lit. Stiinte Soc. 2016, 4, 39–42. [Google Scholar]
  28. Hu, R. Functional Design and Implementation of Educational Robots in English Teaching. Ph.D. Thesis, Central China Normal University, Wuhan, China, 2018. [Google Scholar]
  29. Ji, H. A study of language learning paradigms from the perspective of emerging technologies. Jianghan Acad. 2019, 38, 111–119. [Google Scholar]
  30. Malerba, D.; Appice, A.; Buono, P.; Castellano, G.; De Carolis, B.; de Gemmis, M.; Polignano, M.; Rossano, V.; Rudd, L.M. Advanced programming of intelligent social robots. J. Learn. Knowl. Soc. 2019, 15, 13–26. [Google Scholar]
  31. Westlund, J.M.K.; Dickens, L.; Jeong, S.; Harris, P.L.; DeSteno, D.; Breazeal, C.L. Children use non-verbal cues to learn new words from robots as well as people. Int. J. Child Comput. Interact. 2017, 13, 1–9. [Google Scholar] [CrossRef]
  32. Wu, W.-C.V.; Wang, R.-J.; Chen, N.-S. Instructional design using an in-house built teaching assistant robot to enhance elementary school English-as-a-foreign-language learning. Interact. Learn. Environ. 2015, 23, 696–714. [Google Scholar] [CrossRef]
  33. Randall, N. A Survey of Robot-Assisted Language Learning (RALL). ACM Trans. Hum. Robot Interact. 2019, 9, 1–36. [Google Scholar] [CrossRef] [Green Version]
  34. Belpaeme, T.; Kennedy, J.; Ramachandran, A.; Scassellati, B.; Tanaka, F. Social robots for education: A review. Sci. Robot. 2018, 3, eaat5954. [Google Scholar] [CrossRef] [Green Version]
  35. Neumann, M.M. Social Robots and Young Children’s Early Language and Literacy Learning. Day Care Early Educ. 2019, 48, 157–170. [Google Scholar] [CrossRef]
  36. Lin, V.; Yeh, H.C.; Huang, H.H.; Chen, N.S. Enhancing EFL vocabulary learning with multimodal cues supported by an educational robot and an IoT-Based 3D book. System 2022, 104, 102691. [Google Scholar] [CrossRef]
  37. Lin, V.; Yeh, H.-C.; Chen, N.-S. A Systematic Review on Oral Interactions in Robot-Assisted Language Learning. Electronics 2022, 11, 290. [Google Scholar] [CrossRef]
  38. Russell, S.J.; Norvig, P. Artificial Intelligence: A Modern Approach, 3rd ed.; Pearson Education: Upper Saddle River, NJ, USA, 2010. [Google Scholar]
  39. Popenici, S.A.D.; Kerr, S. Exploring the impact of artificial intelligence on teaching and learning in higher education. Res. Pract. Technol. Enhanc. Learn. 2017, 12, 22. [Google Scholar] [CrossRef]
  40. Canbek, N.G.; Mutlu, M.E. On the track of Artificial Intelligence: Learning with Intelligent Personal Assistants. J. Hum. Sci. 2016, 13, 592–601. [Google Scholar] [CrossRef]
  41. Mozer, M.C.; Wiseheart, M.; Novikoff, T.P. Artificial intelligence to support human instruction. Proc. Natl. Acad. Sci. USA 2019, 116, 3953–3955. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  42. Roll, I.; Wylie, R. Evolution and revolution in artificial intelligence in education. Int. J. Artif. Int. Educ. 2016, 26, 582–599. [Google Scholar] [CrossRef] [Green Version]
  43. Timms, M.J. Letting artificial intelligence in education out of the box: Educational robots and smart classrooms. Int. J. Artif. Int. Educ. 2016, 26, 701–712. [Google Scholar] [CrossRef]
  44. Chen, P.-H. The Design of Applying Gamification in an Immersive Virtual Reality Virtual Laboratory for Powder-Bed Binder Jetting 3DP Training. Educ. Sci. 2020, 10, 172. [Google Scholar] [CrossRef]
  45. Martín-Gutiérrez, J.; Mora, C.E.; Añorbe-Díaz, B.; González-Marrero, A. Virtual technologies trends in education. EURASIA J. Math. Sci. Technol. Educ. 2017, 13, 469–486. [Google Scholar]
  46. Inoue, Y. Concepts, applications, and research of virtual reality learning environments. Int. J. Soc. Sci. 2007, 2, 1–7. [Google Scholar]
  47. Vesisenaho, M.; Juntunen, M.; Häkkinen, P.; Pöysä-Tarhonen, J.; Fagerlund, J.; Miakush, I.; Parviainen, T. Virtual Reality in Education: Focus on the Role of Emotions and Physiological Reactivity. J. Virtual Worlds Res. 2019, 12, 1–15. [Google Scholar] [CrossRef]
  48. Makransky, G.; Petersen, G.B. Investigating the process of learning with desktop virtual reality: A structural equation modeling approach. Comput. Educ. 2019, 134, 15–30. [Google Scholar] [CrossRef]
  49. Smedley, T.M.; Higgins, K. Virtual technology: Bringing the world into the special education classroom. Int. Sch. Clin. 2005, 41, 114–119. [Google Scholar]
  50. Parong, J.; Mayer, R.E. Learning science in immersive virtual reality. J. Educ. Psychol. 2018, 110, 785–797. [Google Scholar] [CrossRef]
  51. Jang, S.; Vitale, J.M.; Jyung, R.W.; Black, J.B. Direct manipulation is better than passive viewing for learning anatomy in a three-dimensional virtual reality environment. Comput. Educ. 2017, 106, 150–165. [Google Scholar] [CrossRef]
  52. Siegle, D. Seeing Is Believing: Using Virtual and Augmented Reality to Enhance Student Learning. Gift. Child Today 2019, 42, 46–52. [Google Scholar] [CrossRef]
  53. Chung, L.Y. Incorporating 3D-virtual reality into language learning. Int. J. Digit. Content Technol. Appl. 2012, 6, 249–255. [Google Scholar]
  54. Whyte, J. Virtual Reality and the Built Environment; Architectural Press: Oxford, UK, 2002. [Google Scholar]
  55. Chiu, W.K. Pedagogy of emerging technologies in chemical education during the era of digitalization and artificial intelligence: A systematic review. Educ. Sci. 2021, 11, 709. [Google Scholar] [CrossRef]
  56. Sukotjo, C.; Schreiber, S.; Li, J.; Zhang, M.; Yuan, J.C.-C.; Santoso, M. Development and Student Perception of Virtual Reality for Implant Surgery. Educ. Sci. 2021, 11, 176. [Google Scholar] [CrossRef]
  57. Becker, H.J. Pedagogical motivations for student computer use that lead to student engagement. Educ. Technol. 2000, 40, 5–17. [Google Scholar]
  58. Kim, Y.H.; Kim, D.J.; Wachter, K. A study of mobile user engagement (MoEN): Engagement motivations, perceived value, satisfaction, and continued engagement intention. Decis. Support Syst. 2013, 56, 361–370. [Google Scholar] [CrossRef]
  59. Saeed, S.; Zyngier, D. How Motivation Influences Student Engagement: A Qualitative Case Study. J. Educ. Learn. 2012, 1, 252–267. [Google Scholar] [CrossRef] [Green Version]
  60. Mollen, A.; Wilson, H. Engagement, telepresence and interactivity in online consumer experience: Reconciling scholastic and managerial perspectives. J. Bus. Res. 2010, 63, 919–925. [Google Scholar] [CrossRef] [Green Version]
  61. Guthrie, J.T.; Meter, P.; McCann, A.D.; Wigfield, A.; Bennett, L.; Poundstone, C.C.; Mitchell, A.M. Growth of literacy engagement: Changes in motivations and strategies during concept-oriented reading instruction. Read. Res. Q. 1996, 31, 306–332. [Google Scholar] [CrossRef]
  62. Jacobi, M.; Astin, A.; Ayala, F., Jr. College Student Outcomes Assessment; Clearinghouse on Higher Education: Washington, DC, USA, 1987. [Google Scholar]
  63. Kuh, G.D. What we’re learning about student engagement from NSSE. Change 2003, 35, 24–41. [Google Scholar] [CrossRef]
  64. Taylor, L.; Parsons, J. Improving student engagement. Curr. Issues Educ. 2011, 14, 1–33. [Google Scholar]
  65. Schaufeli, W.B.; Salanova, M.; González-Romá, V.; Bakker, A.B. The Measurement of Engagement and Burnout: A Two Sample Confirmatory Factor Analytic Approach. J. Happiness Stud. 2002, 3, 71–92. [Google Scholar] [CrossRef]
  66. Allcoat, D.; von Mühlenen, A. Learning in virtual reality: Effects on performance, emotion and engagement. Res. Learn. Technol. 2018, 26, 2140. [Google Scholar] [CrossRef] [Green Version]
  67. Leese, M. Out of class-out of mind? The use of a virtual learning environment to encourage student engagement in out of class activities. Br. J. Educ. Technol. 2009, 40, 70–77. [Google Scholar] [CrossRef]
  68. Zhou, Y.; Ji, S.; Xu, T.; Wang, Z. Promoting Knowledge Construction: A Model for Using Virtual Reality Interaction to Enhance Learning. Procedia Comput. Sci. 2018, 130, 239–246. [Google Scholar] [CrossRef]
  69. Cohen, L.; Manion, L.; Morrison, K. Research Methods in Education; Routledge: New York, NY, USA, 2007. [Google Scholar]
  70. Kolb, D.A. Experiential Learning: Experience as the Source of Learning and Development; Prentice-Hall: Hoboken, NJ, USA, 1984. [Google Scholar]
  71. Pike, G.R. Using college students’ self-reported learning outcomes in scholarly research. New Dir. Institut. Res. 2011, 2011, 41–58. [Google Scholar] [CrossRef]
  72. Warren, J.L. Does service-learning increase student learning? A meta-analysis. Mich. J. Commun. Serv. Learn. 2012, 18, 56–61. [Google Scholar]
  73. Marshall, C.; Rossman, G.B. Designing Qualitative Research; Sage: Thousand Oaks, CA, USA, 1995. [Google Scholar]
  74. Bernard, H.R. Research Methods in Anthropology: Qualitative and Quantitative Approaches; AltaMira Press: Walnut Creek, CA, USA, 1994. [Google Scholar]
  75. Miles, M.B.; Huberman, M. The Qualitative Researcher’s Companion; Sage: Thousand Oaks, CA, USA, 2002. [Google Scholar]
  76. Yin, R.K. Applications of Case Study Research; Sage: Thousand Oaks, CA, USA, 2011. [Google Scholar]
  77. Troussas, C.; Krouska, A.; Sgouropoulou, C. Collaboration and fuzzy-modeled personalization for mobile game-based learning in higher education. Comput. Educ. 2020, 144, 103698. [Google Scholar] [CrossRef]
Figure 1. The lessons of the 3D VR English Learning Interactive Application System.
Figure 2. Dialogues in text.
Figure 3. The learner learning the vocabulary.
Figure 4. Game-based language-learning exercises.
Figure 5. Street scene.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
