Article

X-Education: Education of All Things with AI and Edge Computing—One Case Study for EFL Learning

Graduate Institute of Network Learning Technology, National Central University, Zhongli District, Taoyuan City 32001, Taiwan
* Author to whom correspondence should be addressed.
Sustainability 2022, 14(19), 12533; https://doi.org/10.3390/su141912533
Submission received: 10 September 2022 / Revised: 28 September 2022 / Accepted: 28 September 2022 / Published: 1 October 2022
(This article belongs to the Section Sustainable Education and Approaches)

Abstract:

Education usually focuses only on how to educate human beings with pedagogical or technical support. However, with artificial intelligence (AI) and edge computing, education can be extended to cover not only human beings but also all things, whether physical or digital. In this study, all things are given the opportunity to learn more about themselves and build their knowledge through interactions with other things, people, and AI agents. Thus, the X-Education framework is proposed in this study for educating all things, including human beings, physical and digital objects, and AI agents. One preliminary study on EFL writing was conducted to investigate not only whether all things can speed up their knowledge acquisition but also whether EFL learners, as humans, can benefit from using X-Education. Further, forwarding mechanisms for questioning and answering (Q&A) were designed to speed up interactions among all things. In total, 22 learners were divided into two groups, the experimental group (EG) and the control group (CG), with and without the Q&A forwarding mechanisms, respectively. A mixed-method approach with two experimental phases was used in this study. The results showed that the knowledge of all things in the EG increased significantly more than that in the CG. Moreover, the EG received better EFL answers from the on-device AI with the forwarding mechanisms. The learners also felt that X-Education could help them to learn EFL writing better through Q&A. Furthermore, it was demonstrated that X-Education can accommodate not only humans but also all things to improve their knowledge.

1. Introduction

The growth of the internet of things (IoT) has significantly affected our society, especially education. IoT can be used not only to build comfortable learning environments but also to provide interactive learning content. For example, a classroom can be equipped with IoT to monitor and control its temperature through sensors [1]. Non-native English learners can learn with an IoT-based 3D book to enhance their EFL vocabulary [2]. However, these IoT implementations usually just follow the instructions or tasks given by humans [3]. Educational research on IoT with artificial intelligence (AI) has grown quickly and become more mature [4]; it has, therefore, become a promising research issue that could improve education quality [5]. However, few studies have addressed the implementation of AI in the IoT, since most IoT devices have too few computing resources to run AI algorithms efficiently [6]. Therefore, IoT devices usually rely on cloud AI services over the internet to obtain smart computational results [7]. However, these mechanisms still suffer serious delays in obtaining smart computational results because of the distance to the cloud service locations [8].
Due to the rapid development of mobile technology, Shin and Kim [9] argue that AI can be deployed on mobile devices and integrated with IoT devices that have low computing resources. For example, open-source AI technology such as TensorFlow Lite (TF-Lite) can efficiently run inference with an AI model on mobile devices, an approach called on-device AI [10]. On-device AI can not only recognize the objects surrounding learners but also recognize learners’ speech with natural language processing (NLP) to understand its semantic meaning. Hence, TF-Lite has the potential to support NLP on mobile devices as on-device AI that answers learners’ questions without connecting to cloud AI services.
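As a concrete illustration of this idea, the following minimal Python sketch loads a TF-Lite BERT-style question-answering model and runs it locally; the TFLite Task Library API, the model file name, and the example context are assumptions made here for illustration and are not taken from the study's implementation.

```python
# Minimal sketch of on-device Q&A with TensorFlow Lite.
# Assumptions: the TFLite Task Library's BertQuestionAnswerer is available,
# and "mobilebert_qa.tflite" is a hypothetical mobile-optimized Q&A model file.
from tflite_support.task import text

answerer = text.BertQuestionAnswerer.create_from_file("mobilebert_qa.tflite")

# The "knowledge base" of a smart thing can be passed as the reading context.
context = ("I am a smart trash bin. My cover opens automatically when a hand "
           "is detected by my outer ultrasonic sensor.")
question = "What is your function?"

# Inference happens entirely on the device; no cloud connection is required.
result = answerer.answer(context, question)
print(result.answers[0].text)
```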
The IoT and mobile devices can collaborate on computing locally to form edge computing [6,8]. Furthermore, edge computing can be implemented not only for the IoT but also for all physical forms in the world. For example, plants or furniture, empowered by edge computing with IoT and mobile devices, can understand their own conditions and properties [5]. Under these conditions, a human can ask a plant about its state and the plant will answer. Hence, edge computing can potentially help all things in the real world to have some knowledge about themselves and to interact with humans through question and answer (Q&A).
The things empowered by edge computing are not limited to physical objects, such as plants; they can also be digital objects, such as images or 3D models. In the digital era, digital objects also have their own identities with blockchain technology [11]. A digital object has a unique contract address that distinguishes it from other digital objects, making it a non-fungible token (NFT). For example, a 3D model uploaded to an NFT marketplace obtains a contract address for the NFT stored on the blockchain system. The contract address of an NFT is defined as the unique identity of the digital object, and the 3D model can be traded to another owner [11]. Hence, humans can interact with digital objects and ask them questions through edge computing.
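To make the notion of an on-chain identity concrete, the sketch below represents a smart digital object whose identity is the contract address and token ID of its NFT; the class name, field names, and placeholder values are hypothetical and only illustrate how such an identity could be attached to a digital 3D model.

```python
# Illustrative sketch (hypothetical fields and values): a digital object
# identified by its NFT contract address and token ID.
from dataclasses import dataclass

@dataclass(frozen=True)
class SmartDigitalObject:
    name: str               # human-readable name of the digital object
    contract_address: str   # NFT contract address on the blockchain (unique identity)
    token_id: int           # token ID within that contract
    asset_uri: str          # where the 3D model or image is stored

air_purifier_nft = SmartDigitalObject(
    name="Smart digital air purifier (3D model)",
    contract_address="0x0000000000000000000000000000000000000000",  # placeholder address
    token_id=1,
    asset_uri="ipfs://<model-cid>",  # placeholder URI
)
print(air_purifier_nft.contract_address, air_purifier_nft.token_id)
```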
Although all things can have knowledge through on-device AI, making them smart things, they still need to sustain their learning and improve their understanding [7]. Likewise, learners need teachers to enhance their understanding and improve their knowledge. One possible method to improve the limited knowledge of on-device AI is to use external AI on cloud services, because external cloud AI services learn from big data and acquire far more knowledge and resources than on-device AI [12]. External AI could teach smart things with on-device AI, and these smart things could also learn from each other simultaneously through forwarding mechanisms. Likewise, when a student asks a teacher questions, other students can also learn from that interaction.
Regarding EFL learning, learners can interact with all things to understand their properties and conditions in English through Q&A. For example, human learners can ask a general question in English of a trash bin, as a physical object, about its function, such as “What is your function?”, and the trash bin will then answer the question. Further, the learner benefits by enhancing their English ability through Q&A interactions with the smart things. Therefore, we propose X-Education as an innovative education framework that educates not only human learners but also all things, such as physical, digital, and AI things. We conducted one experiment as a case study of EFL writing in X-Education and investigated the influence on EFL writing with/without the Q&A forwarding mechanisms. With the forwarding mechanisms, the questions learners ask of one smart thing are forwarded to all other smart things, so the forwarding mechanisms are expected to speed up the knowledge learning of all things. Furthermore, improving the knowledge of all things will also benefit learners, since they obtain more and better answers in English when they ask questions of all things. The following are the research questions for X-Education about the effects of EFL writing with/without the forwarding mechanisms of Q&A.
  • Is there any significant difference in the knowledge levels of all things with/without the forwarding mechanisms of Q&A in X-Education?
  • Is there any significant difference in the interactions between humans and all things with/without the forwarding mechanisms of Q&A in X-Education?
  • Is there any significant difference in the learning behaviors of EFL learners with/without the forwarding mechanisms of Q&A and their influence on EFL writing in X-Education?
  • What are the EFL learners’ perceptions of X-Education?

2. Literature Review

2.1. The Shifting Perspective on Education

There are four elements that are key to education: learners, teachers, content, and contexts [13]. Learners can learn to gather information by themselves or collaborate with their peers to sustain and improve their knowledge [14]. Teachers can supervise and help learners gain knowledge correctly. Furthermore, a teacher can help to deliver knowledge content to learners with a suitable learning approach [15]. However, most teachers lack the time to teach the learners more deeply [16]. Previous studies have demonstrated that technology can help teachers overcome these time limitations [17,18].
These studies implemented technology to support educational settings because technology has become more mature and reliable. For example, recognition technologies, such as speech recognition supported by AI, can help learners practice pronouncing English texts independently [18]. A different study suggested that learners could learn geometry meaningfully in their surroundings, defined as authentic contexts, with the recognition of geometric objects through AI [19]. These studies also demonstrated that the implementation of technology in education can sustain learners’ motivation and enhance learning achievement [18,19].
Recently, researchers have trained NLP models with large datasets to improve their recognition accuracy through deep learning [20]. By doing so, NLP models can understand human language better with their acquired knowledge. For example, a large language model such as GPT-3, with 175 billion parameters, can take human language as input and generate new texts based on its knowledge [21]. This mechanism is similar to the educational setting in which a teacher educates learners and gradually increases their knowledge. Lu et al. [22] used GPT-2 in a question answering (Q&A) system for authoring questions and grading answers. They trained the AI to match their purpose before generating short questions for learners and later used it to grade the students’ answers. Doing so could help teachers to prepare questions and grade students’ answers. In this case, the AI also learns and gains knowledge. In other words, AI can act like a teacher, helping humans become more knowledgeable and helping other things become more knowledgeable as well. The next subsection describes how education could accommodate not only humans but also all things.

2.2. Education for All Things

All things in this world are physical objects, digital objects, or other forms, such as AI. Some of them are alive, such as plants, and some of them are inanimate, such as chairs. For example, the IoT can be formed of physical objects that control their hardware with microprocessors and sensors [23]. However, the implementation of IoT is usually only for data collection and for managing its hardware with task scheduling [24]. In addition, it is used remotely with a cloud server, which requires internet connectivity [3]. The IoT itself has limited computing resources [25], so it depends on the computing resources of a cloud server.
In daily life, most people interact with digital objects, such as pictures, videos, or music, on their mobile devices. Currently, digital objects can have their own identities and single authorship with NFTs [11]. A digital object uploaded to an NFT marketplace receives a unique address or token [26]. Another kind of thing is AI, which understands the human world through mathematical representations [24]. Humans cannot see AI directly, unlike physical or digital objects, but they can benefit from implementing it. However, recent research has shown that AI still has much room to improve its ability to help humans in different fields [6,24].
In contrast, human intelligence is the natural form of intelligence that distinguishes humans from other things in this world [6]. Human intelligence also commonly needs to be improved with help from others. For example, EFL learners can improve their English writing skills after being taught by an experienced English teacher [27,28]. However, learners cannot receive help from teachers at any time due to the limited learning time available in school. AI could tackle this limitation because it works 24 h a day to help learners learn, in and out of school, using ubiquitous technologies and cloud services. Several studies have addressed this issue and investigated how to fill the gap in learning time with AI support [28,29]. For example, EFL learners can practice writing at any time with the help of AI-based grammar checking. Through these mechanisms, EFL learners, as humans, can benefit from learning with AI [28].
Similarly, this AI mechanism can also be used to teach the IoT: AI can learn from physical objects and from pictures as digital objects, and help them increase their knowledge. Likewise, the IoT, as a learner, will learn from the AI as a teacher. Meanwhile, all things can connect and learn from each other without time limitations. All things, including people, can learn not only from AI but also from other smart things, such as digital objects. For example, EFL learners can learn from smart physical objects to understand their properties, and physical objects can also learn from digital objects through interaction.
The problem is that physical and digital objects cannot understand human language directly. In addition, the IoT requires the larger computing resources of a cloud server to understand human language with NLP [30]. This causes high latency, which implies long waits to receive computational results. To address this, previous studies suggested implementing edge computing, in which computing resources are placed near the user or the IoT to reduce the connection delay [3,7]. In addition, edge computing can collaborate with mobile computing to share computing resources. Hence, AI, especially NLP, can be deployed in mobile computing to build edge computing that understands human language with a low delay. The inferencing process then uses shared resources in edge computing, not in the cloud [6,7].
Implementing on-device AI at the edge using TF-Lite and a pre-trained model, such as mobileBERT, can enable Q&A interactions between on-device AI and humans [31,32]. Thus, physical and digital objects with on-device AI and edge computing can answer human questions quickly. Hence, physical objects become smart physical objects, because physical objects with on-device AI have the intelligence to understand human language. For this reason, smart physical objects are not limited to IoT devices but also include other objects, such as plants and furniture. Edge computing can be implemented with plants; thus, the plants could understand human language, and humans could ask questions of the plants. Like human intelligence, an on-device AI also needs an initial knowledge base related to its properties to answer questions from EFL learners. Later, on-device AI can improve its knowledge by learning from other things, such as external AI [22]. External AI refers to large language models like GPT-3 from OpenAI, which have more knowledge than on-device AI [21]. Hence, external AI can help to understand human language and improve the knowledge of on-device AI by generating answers that enhance its knowledge base.
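The division of labor between on-device AI and external AI can be read as a simple fallback: answer from the local knowledge base when possible, otherwise ask a large language model and keep the new answer. The sketch below is only an illustration of that flow; the legacy OpenAI completions call, the model name, and the confidence threshold are assumptions and are not the study's actual implementation.

```python
# Sketch of the on-device/external-AI fallback (assumptions: the legacy OpenAI
# Python client, a GPT-3 completions-style model, and a hypothetical threshold).
import openai

openai.api_key = "sk-..."        # placeholder; a real key must be configured
CONFIDENCE_THRESHOLD = 0.5       # hypothetical cut-off for trusting the on-device answer

def answer_question(question, on_device_answer, on_device_confidence, knowledge_base):
    """Return an answer, preferring on-device AI and falling back to external AI."""
    if on_device_answer and on_device_confidence >= CONFIDENCE_THRESHOLD:
        return on_device_answer, "on-device"

    # Fall back to external AI (GPT-3) and keep the answer as new knowledge.
    prompt = f"Answer in one sentence as a smart object.\nQuestion: {question}\nAnswer:"
    response = openai.Completion.create(
        model="text-davinci-002",   # hypothetical model choice
        prompt=prompt,
        max_tokens=60,
    )
    answer = response.choices[0].text.strip()
    if answer not in knowledge_base:
        knowledge_base.append(answer)   # new knowledge base entry, pending expert review
    return answer, "external"
```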
Therefore, EFL learners could learn from all things, such as smart physical objects, smart digital objects, and AI, to gather more English lexical resources and practice EFL learning. In addition to human beings, all things can learn from each other; for example, smart physical objects can learn from external AI to improve their knowledge about their properties, functions, and benefits.
In addition, the data of all things from the IoT, edge computing, mobile devices, and AI algorithms take different new and emerging forms [33]. Using these new and emerging forms of data for education requires careful consideration of the risks of using AI [34]. This is because the successful orchestration and thorough integration of data from different sources are key factors for successfully aiding education. Moreover, humans also need to consider the risk of AI support in education. Humans can monitor and evaluate the computational results from AI to reduce the risk, in addition to improving the security of AI algorithms.

3. X-Education and the Implementation of EFL Learning

3.1. X-Education Framework

The X-Education framework is a new perspective on education in which not only humans but also all things are educated to improve their knowledge. It consists of three layers adapted from educational perspectives, as shown in Figure 1.
In the first layer, the learning environment supported by technologies and educational theories is fundamental for X-Education. It is also critical that the infrastructure to support these technologies be ready, at least a communication network such as 5G.
The second layer comprises all things from different worlds (i.e., the real world and the digital world) that have the intelligence to think like humans. All things can be referred to as learners because they can interact and learn from other things to improve their knowledge and intelligence. This layer has four things, including two types of intelligence and two kinds of smart things. The four things are described below:
  • Human intelligence is the intelligence of human beings in the real world. For example, EFL learners who learn English writing at school enhance their English language knowledge.
  • AI is an agent, such as a neural network, that has the ability to think like a human. However, AI is a kind of computer algorithm in the digital world. For example, an NLP algorithm trained with a large dataset can understand human language in a specific domain.
  • Smart physical objects are the form of physical objects in the real world, such as home or office appliances with IoT devices.
  • Smart digital objects are the form of digital objects in the digital world. Digital objects also have their identities with NFTs. For example, multimedia resources such as pictures, videos, or 3D models are uploaded to an NFT marketplace and receive tokens as their unique addresses.
In X-Education, we added on-device AI that uses the memory, storage, and processor inside physical and digital objects, making them smart physical and smart digital objects. Through on-device AI, physical and digital objects can understand human language based on their knowledge of their roles in the targeted domain. In addition, their knowledge can be increased like humans’ knowledge. For example, a smart trash bin, as a smart physical object, has a knowledge base of its properties, functions, and benefits. Through Q&A interactions, humans can ask the smart trash bin about its benefits and learn from the answer, which contains English lexical resources (e.g., vocabulary, words, and sentences) related to the benefits of the trash bin. Furthermore, the smart trash bin can also increase its knowledge of its benefits by forwarding new questions to external AI and then obtaining answers from the external AI.
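As a rough illustration of how a smart thing's knowledge can be organized, the sketch below stores the knowledge base as a list of one-sentence entries (identity, function, location) and answers a question by choosing the entry with the most word overlap; the class name, entries, and matching rule are illustrative assumptions standing in for the on-device NLP model, not the study's implementation.

```python
# Illustrative sketch: a smart thing's knowledge base as one-sentence entries,
# with a naive word-overlap lookup standing in for on-device NLP.
class SmartThing:
    def __init__(self, name, initial_knowledge):
        self.name = name
        self.knowledge_base = list(initial_knowledge)  # one sentence per entry

    def answer_locally(self, question):
        """Return the knowledge base entry that best overlaps with the question, if any."""
        q_words = set(question.lower().split())
        best, best_overlap = None, 0
        for entry in self.knowledge_base:
            overlap = len(q_words & set(entry.lower().split()))
            if overlap > best_overlap:
                best, best_overlap = entry, overlap
        return best  # None means the question must be forwarded to external AI

trash_bin = SmartThing("smart trash bin", [
    "I am a smart trash bin.",                                            # identity (who)
    "My function is to collect trash and open my cover automatically.",   # function (what)
    "I am located in the smart office near the door.",                    # location (where)
])
print(trash_bin.answer_locally("What is your function?"))
```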
The third layer is the content and the contexts. The EFL learners can learn from interactions with smart physical and smart digital objects, which can give learners related lexical resources as their learning content. Further, learning occurs in shared content and contexts; in this case, the English language serves as the context.

3.2. System Design and Implementation

We developed and implemented X-Education in the real and digital worlds, with smart physical objects (i.e., a smart display, a smart alcohol spray, and a smart trash bin) and a smart digital object (i.e., a smart air purifier) as the experimental environment for EFL learning. EFL learners could conduct Q&A with the smart physical and smart digital objects with the help of on-device AI on mobile devices (see Appendix A).
In the experimental environment derived from the X-Education framework, all things have intelligence and knowledge for learning from each other, as shown in Figure 2. In the first step, EFL learners, as human beings, can learn from all things, including three smart physical objects and one smart digital object, by asking questions in English. Further, all things can also learn from others, such as external AI, to seek new answers and gain new knowledge if a thing cannot answer a question directly from its own knowledge. EFL learners then receive an answer containing English lexical resources as text and audio after they ask a question related to the smart things. In the second step, EFL learners learn from the lexical resources that they receive from the Q&A interactions. Further, they are assigned to write an essay to explain their understanding and apply the lexical resources in their writing.
In this study, edge computing is the combination of IoT and mobile computing to control the sensors and actuators in smart physical objects, for example, detecting a hand with the outer ultrasonic sensor to open the cover of a trash bin and checking its current trash amount with the inner ultrasonic sensor, as shown in Figure 3a. In addition, it is integrated with on-device AI on the mobile device to gather the knowledge and answer the questions from the EFL learners, as shown in Figure 3b. We used GPT-3 by OpenAI as the external AI [21]. TF-Lite was used to deploy the pre-trained mobileBERT model on mobile devices as on-device AI [32]. The function of on-device AI is to understand and answer human questions based on its internal knowledge base without any internet connection. However, on-device AI needs to improve its knowledge to satisfy the EFL learners’ questions. Hence, on-device AI can learn from other things, such as obtaining help from OpenAI as external AI to improve its knowledge base, as shown in Figure 3c. Since the external AI has the ability to generate new sentences and answer questions, it helps to answer a question if on-device AI cannot find the answer in its current knowledge base, and the answer is saved as a new knowledge base entry. Gradually, the knowledge base improves through these interactions.
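On the IoT side, the trash bin behavior described above can be approximated with two distance sensors, one outside to detect a hand and one inside to estimate the fill level. The sketch below assumes a Raspberry Pi with the gpiozero library, hypothetical GPIO pin numbers, and a servo-driven lid; it only mirrors the described behavior and is not the study's actual wiring or code.

```python
# Sketch of the trash bin's IoT logic (assumptions: Raspberry Pi, gpiozero,
# hypothetical GPIO pins, and a servo-driven lid).
from gpiozero import DistanceSensor, Servo
from time import sleep

outer_sensor = DistanceSensor(echo=17, trigger=4)    # detects a hand near the bin
inner_sensor = DistanceSensor(echo=27, trigger=22)   # measures distance to the trash surface
lid_servo = Servo(18)                                 # opens/closes the cover

BIN_DEPTH_M = 0.40  # hypothetical depth of the bin in meters

def trash_level():
    """Estimate how full the bin is from the inner sensor reading (0.0 to 1.0)."""
    free_space = min(inner_sensor.distance, BIN_DEPTH_M)
    return 1.0 - free_space / BIN_DEPTH_M

while True:
    if outer_sensor.distance < 0.10:   # a hand closer than 10 cm opens the lid
        lid_servo.max()                # open the cover
        sleep(3)
        lid_servo.min()                # close the cover
    print(f"Current trash level: {trash_level():.0%}")
    sleep(0.5)
```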
In the Q&A interaction, EFL learners can ask the six basic literal comprehension questions (i.e., 5W1H) to understand the properties of the smart physical objects and the smart digital objects by using a mobile app that includes on-device AI. For example, EFL learners can interact with the physical features of the smart trash bin, such as opening the cover without touching it through its sensor, as shown in Figure 3a. They can ask the smart trash bin about its properties or functions, as shown in Figure 3b. The smart trash bin answers directly with its on-device AI based on its knowledge base. Learners can also ask a similar question again if the answer they receive does not address their question. In that case, the smart trash bin seeks help from the external AI, since the external AI is more knowledgeable than the on-device AI. Therefore, EFL learners can obtain more meaningful answers, as shown in Figure 3c. The answers contain English lexical resources in the form of text and audio for learners. This interaction not only helps EFL learners learn new lexical resources through the answers; the smart trash bin, as a smart physical object, can also learn from the external AI that generates the answers and gradually improve the knowledge base in its on-device AI.
In addition to answers from the on-device AI and the external AI, we proposed the forwarding mechanisms to improve the knowledge bases of the smart things, as shown in Figure 4. This means that smart physical objects or smart digital objects not only learn from the AI thing but can also learn from others. For example, when a human asks a question of the smart digital air purifier, the question is forwarded to the other smart things. The other smart things then ask the external AI for help to generate an answer. Later, the answers are used to improve the knowledge base of each smart thing in a background process. In addition, an expert needs to carefully check for duplicated or unimportant knowledge before the knowledge base is used with new learners.
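In short, a question asked of one smart thing is broadcast to the others, each of which obtains an answer from external AI in the background and holds it for expert review before it enters the knowledge base. The sketch below is a simplified, hypothetical rendering of that flow; the ask_external_ai helper is only a stand-in for the GPT-3 call.

```python
# Simplified sketch of the question-forwarding mechanism (names and the
# ask_external_ai helper are hypothetical stand-ins, not the study's code).
def ask_external_ai(thing_name, question):
    """Placeholder for the GPT-3 call that generates a one-sentence answer."""
    return f"As the {thing_name}, here is a one-sentence answer to: {question}"

def forward_question(asked_name, smart_things, question):
    """Forward a learner's question from one smart thing to all the others.

    smart_things maps a thing's name to {"knowledge_base": [...], "pending": [...]}.
    """
    for name, state in smart_things.items():
        if name == asked_name:
            continue
        answer = ask_external_ai(name, question)        # background external-AI call
        if answer not in state["knowledge_base"] and answer not in state["pending"]:
            state["pending"].append(answer)             # held until the expert reviews it

smart_things = {
    "smart trash bin":     {"knowledge_base": [], "pending": []},
    "smart alcohol spray": {"knowledge_base": [], "pending": []},
    "smart display":       {"knowledge_base": [], "pending": []},
    "smart air purifier":  {"knowledge_base": [], "pending": []},
}
forward_question("smart air purifier", smart_things, "What is your benefit?")
print(smart_things["smart trash bin"]["pending"])
```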

4. Methods

4.1. Participants

The participants in this study were 22 graduate learners from a university, 1 independent expert, and 1 AI agent (bot) used to build the initial knowledge bases. The 22 learners were divided randomly into two groups to study the benefits of X-Education for EFL learning: 11 learners were assigned to the experimental group (EG) with the forwarding mechanisms, and 11 learners were assigned to the control group (CG) without the forwarding mechanisms. In addition, the expert was invited to monitor the gained knowledge bases before they were used with the next learners.

4.2. Experimental Procedure

The experimental design was divided into two phases, as shown in Figure 5. Both phases were designed to build the knowledge base of each smart thing. Each knowledge base entry is defined as one sentence that represents a unique concept related to a smart thing. In the process of Q&A, external AI helps the smart things to answer the questions with one sentence, and each unique answer becomes a new knowledge base entry (see Appendix B). In the beginning, each smart thing had six similar initial knowledge base entries based on Wikipedia and modified by the expert. In the first phase, we developed an AI agent that acted like a human to generate 50 random questions related to the smart things and then asked the smart things those questions. With the help of the AI agent, the smart things learned to answer the questions, and their initial knowledge bases then improved with or without the forwarding mechanisms. After that, the expert monitored the knowledge bases.
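The first-phase bootstrapping can be read as a loop: the AI agent samples questions, the smart thing answers (with external-AI help when needed), and only answers that are not duplicates of existing entries are kept for the expert to check. The sketch below illustrates this with a template-based question generator and exact-match duplicate filtering; both are assumptions made for illustration rather than the study's actual procedure.

```python
# Sketch of the first-phase bootstrapping loop (template questions and
# exact-match de-duplication are illustrative assumptions).
import random

TEMPLATES = [
    "Who are you?", "What is your function?", "Where are you located?",
    "When can you be used?", "Why are you useful?", "How do you work?",
]

def generate_questions(n=50, seed=0):
    """AI-agent stand-in: sample n random 5W1H-style questions."""
    rng = random.Random(seed)
    return [rng.choice(TEMPLATES) for _ in range(n)]

def bootstrap(knowledge_base, answer_fn, n_questions=50):
    """Ask n questions; keep only new, non-duplicate one-sentence answers."""
    gained = []
    for question in generate_questions(n_questions):
        answer = answer_fn(question)               # on-device or external-AI answer
        if answer and answer not in knowledge_base and answer not in gained:
            gained.append(answer)                  # candidate entries, pending expert review
    return gained
```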
In the second phase, the human learners helped to improve the knowledge bases of the smart things with or without the forwarding mechanisms. While the smart things learned from the human learners to improve their knowledge bases, the learners also learned from the smart things to enhance their EFL skills. A quasi-experiment was used to compare the differences between the two groups in the second phase. First, the learners were assigned a pre-test to write a short essay within 40 min based on the smart things that they saw. The pre-test was conducted to determine whether the learners had similar prior knowledge before the Q&A interactions with the smart things. The topic for the pre-test was clean environments in a smart office to prevent COVID-19. Afterward, the learners took turns sitting in the smart office for Q&A, as shown in Figure 6. In the Q&A with all the smart things, the learners interacted with each smart thing for 5 min by asking the 6 basic literal questions starting with who, where, why, what, when, and how, and then any related questions they liked. These questions helped learners to learn about the identities, attributes, functions, and benefits of the smart things in English and improve their EFL writing.
Regarding the experimental setting of X-Education for EFL writing, there were four smart objects: three smart physical objects (the smart alcohol spray, smart trash bin, and smart display) and one smart digital object (the smart air purifier), as shown in Figure 6 (more details on each smart thing are in Appendix A). After the Q&A, learners had a post-test within 40 min. Finally, interviews were conducted to understand the perceptions of the EG learners and the reasons behind those perceptions in depth.

4.3. Research Variables, Data Collection, and Measurement

In the first phase, the total number of gained knowledge bases was counted as a variable after the AI agent (bot) asked its questions. An expert checked whether the newly gained knowledge bases were related to the smart things and free of duplication.
In the second phase, the variables collected were the total number of gained knowledge bases for each smart thing and the total Q&A interactions that received answers from different sources, such as on-device AI, external AI, or answers generated by the forwarding mechanisms. Further, the total Q&A interactions were counted, and their quality was checked and rated as good or not good. A good-quality interaction received a score of 1 if the question was relevant to the targeted object and the answer addressed the question well; otherwise, the score was 0.
A mixed-method approach was used, analyzing the quantitative data from the interactions and the qualitative data from the interviews to support the statistical results. Further, the pre-test and the writing test were collected in our system and evaluated for quantity and quality. The quantity variable was the number of words in the essay. The quality variables were the measure of textual lexical diversity (MTLD) and clauses per T-unit. In addition, we collected the learners’ behaviors during the writing test, as shown in Figure 7, such as their use of the Q&A history as a citation, their use of the AI-generated summary as an alternative citation, their use of the grammar feedback, and the total number of writing revisions. In the end, the EG learners were interviewed with eight open-ended questions.
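MTLD measures lexical diversity as the mean length of word sequences that keep the type-token ratio above a threshold (conventionally 0.72). The study obtained MTLD scores with the Text Inspector tool; the sketch below is only a simplified, forward-only illustration of the underlying idea, not a replacement for that tool.

```python
# Simplified forward-only MTLD sketch (illustrative; the study used Text Inspector).
def mtld_forward(tokens, ttr_threshold=0.72):
    """Mean length of token runs whose type-token ratio stays above the threshold."""
    factors = 0.0
    types, count = set(), 0
    for tok in tokens:
        count += 1
        types.add(tok.lower())
        if len(types) / count <= ttr_threshold:
            factors += 1          # a full factor is completed
            types, count = set(), 0
    if count > 0:                 # partial factor for the leftover run
        ttr = len(types) / count
        factors += (1 - ttr) / (1 - ttr_threshold)
    return len(tokens) / factors if factors else 0.0

essay = "the air purifier removes dust and the purifier improves the air".split()
print(round(mtld_forward(essay), 2))
```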

4.4. Data Analysis

We used Python to visualize the time series of the total number of gained knowledge bases of each smart thing after the learners finished the Q&A. The statistical analysis used the Mann–Whitney test in SPSS to determine the differences between the two groups in the gained knowledge bases of each smart thing, the learners’ prior knowledge, and the writing. In addition, writing quality was evaluated using Text Inspector to obtain the MTLD score [35] and the L2SCA software to obtain clauses per T-unit [36]. A Pearson correlation was computed to check the correlation of the forwarding mechanisms with the gained knowledge bases. In addition, the interviews were coded based on themes from the EG’s answers, and each theme was given an interview (IV) code.
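For readers who prefer an open-source pipeline, the same statistics can be reproduced in Python with SciPy; the study itself used SPSS, so the sketch below is only an equivalent illustration with placeholder data rather than the study's dataset.

```python
# Illustrative re-creation of the statistics in Python (the study used SPSS);
# the example values are placeholders, not the study's data.
from scipy.stats import mannwhitneyu, pearsonr

eg_gained_kb = [9, 12, 7, 15, 8, 10, 6, 11, 13, 9, 10]   # placeholder per-learner counts (EG)
cg_gained_kb = [3, 4, 2, 5, 3, 4, 3, 4, 5, 3, 4]          # placeholder per-learner counts (CG)

u_stat, p_value = mannwhitneyu(eg_gained_kb, cg_gained_kb, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.3f}, p = {p_value:.3f}")

forwarded_answers = [2, 5, 1, 6, 3, 4, 2, 5, 6, 3, 4]      # placeholder EG forwarding counts
r, p_corr = pearsonr(forwarded_answers, eg_gained_kb)
print(f"Pearson r = {r:.3f}, p = {p_corr:.3f}")
```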

5. Results and Discussion

5.1. Analysis of Knowledge Increase in All Things between Two Groups

The total number of knowledge bases increased over time, as shown in Figure 8. In detail, each object started with six basic knowledge bases from Wikipedia. In the first phase, the AI agent (bot) asked random questions, so that the knowledge base of each object contained more than 20 newly gained knowledge bases after checking. This shows that the knowledge base of each smart thing can increase significantly after the AI agent asks several questions. This result is similar to the results of Gong et al. [7] and Zhang et al. [1], in which knowledge from AI cloud services or external AI could be transferred to on-device AI in edge computing. Hence, the limitation of computing resources in edge computing can be overcome by collaborating with external AI to provide knowledgeable intelligence. This also indicates that the implementation of intelligence could allow all things to teach and learn from each other in X-Education, so that they could be used further in the second phase with question forwarding.
The most important phase is the second phase, in which human learners participated in the experiment. Regarding the first research question, about the difference in the knowledge of all things with/without the forwarding mechanisms of Q&A, Figure 8 shows that the total number of knowledge bases of the smart physical objects (see Figure 8a) and the smart digital object (see Figure 8b) increased over time, with and without the forwarding mechanisms, from the first participant through the last participant after the Q&A interactions. Examples of the knowledge bases can be seen in Appendix B. Interestingly, the knowledge bases increased twofold or more when using our proposed question forwarding mechanisms in the EG. One possible reason is that there were many more interactions among learners, external AI, and all smart things in the EG than in the CG, thereby increasing the knowledge bases of all objects in the EG because of the proposed forwarding mechanisms.
Further, we combined the three smart physical objects into one variable for further analysis. The results of the Mann–Whitney test in Table 1 show that the gained knowledge bases differed significantly between the two groups for the smart digital object(s) (U = 16.500, Z = −2.929, p < 0.05) and the smart physical objects (U = 12.500, Z = −3.161, p < 0.05). This shows that the EG generated more new knowledge bases in the on-device AI than the CG, which might be useful for the learners during the Q&A interactions.
In contrast, we did not measure the gained knowledge base of the external AI because it used the cloud services provided by OpenAI. Similarly, we did not measure the knowledge gained by the human learners, since this is a preliminary study within a short time frame. In the next subsection, we assess the benefits for learners after they learned through Q&A with the smart things.

5.2. The Interactions between Humans and Smart Things

5.2.1. The Interactions between Humans and Smart Things

Regarding the second research question, about the difference in the interactions between humans and all things with/without the forwarding mechanisms of Q&A, the results of the Mann–Whitney test in Table 2 show that the total number of answers from the knowledge bases of the on-device AI was not significantly different between the two groups. However, significant differences were found in the answers coming from the external AI for the smart digital object(s) (U = 28.000, Z = −2.164, p < 0.05) and the smart physical objects (U = 6.500, Z = −3.566, p < 0.01). This indicates that the on-device AI in the CG could not satisfy the questions with its current knowledge base, causing the CG to need more help from the external AI to answer the questions (M = 4.64; M = 12.27). On the other hand, the EG needed less help from the external AI to answer the questions (M = 2.36; M = 4.64), since the answers from the on-device AI and its knowledge bases in the EG could satisfy the learners’ questions directly.
Regarding the forwarding mechanism in the EG, the pairwise correlation results in Table 3 show that the total answers generated by the forwarding mechanisms were significantly correlated with the total answers from the on-device AI. The answers generated by the forwarding mechanisms helped the smart digital object(s) gain knowledge bases in the on-device AI (r = 0.801, p < 0.01). Similarly, the answers generated by the forwarding mechanisms helped the smart physical objects gain knowledge bases in the on-device AI (r = 0.873, p < 0.01). This indicates that our forwarding mechanisms could help the smart things learn from each other with the help of external AI to improve their knowledge bases, and then provide answers from on-device AI to satisfy the learners’ questions. This result is in line with Table 2, which shows that the EG was able to answer the questions directly from the on-device AI because of the forwarding mechanisms.

5.2.2. The Quality of the Q&A Interactions

Further addressing the second research question about the difference in the interactions between the two groups, Table 4 shows the quality of the interactions as the percentage of the total score and the total score for the two groups, with and without the forwarding mechanisms, respectively. The EG had a higher percentage of good-quality Q&A and fewer not-good Q&A interactions than the CG. This indicates that the EG had better Q&A interactions with the smart things than the CG, because the EG gained more knowledge bases through the forwarding mechanisms, which enriched the on-device AI with the help of external AI and provided more good-quality answers for the EFL learners (see Appendix C). Further, the expert had more opportunities to choose knowledge bases since all the things produced more of them.

5.3. Learning Behaviors and Achievements of EFL Learners in X-Education

5.3.1. Prior Knowledge between the Two Groups

Regarding the third research question, about the difference in the learning behaviors of EFL learners with/without the forwarding mechanisms of Q&A, the results of the Mann–Whitney test in Table 5 show that the prior knowledge for writing an essay did not differ significantly between the two groups before the experiment. Neither the quantity variable, counted as the number of words (U = 44.500, Z = −1.051, p > 0.05), nor the quality variables of the essay, measured by MTLD (U = 53.000, Z = −0.492, p > 0.05) and clauses per T-unit (U = 55.500, Z = −0.329, p > 0.05), differed significantly between the two groups. This indicates that all EFL learners could participate in and interact with X-Education as human intelligence in this study.

5.3.2. The Writing Test and Associated Behaviors between the Two Groups

Further addressing the third research question about the influence of the learners’ behaviors on EFL writing in X-Education, the results of the Mann–Whitney test in Table 6 show that the writing test scores did not differ significantly between the two groups. However, there was a significant difference in the writing behaviors, particularly the total use of the Q&A history, between the two groups (U = 28.000, Z = −2.144, p < 0.05). This indicates that the Q&A interactions with the smart things helped the EFL learners to write meaningful essays. The learners could see the Q&A history and cite it if they were inspired, as shown in Figure 7. Similarly, Yang et al. [37] stated that EFL learners received help from reviewing Q&A to build their knowledge structures. Hence, the EG, which used the forwarding mechanisms, wrote longer essays than the CG. This is because the quality of the Q&A for the EG was better than for the CG, as shown in Table 4, which helped the EG to write comprehensively with more words based on the Q&A history (M = 245.54).
In addition to the EG essays having more words, the MTLD and clauses per T-unit, as measures of writing quality, were slightly higher in the EG than in the CG. The MTLD indicated that the EG was more productive because they used less repetitive vocabulary [35]. In addition, the EG had a higher ratio of clauses per T-unit than the CG, which indicates the EG’s ability to communicate complex ideas in the essay more fluently than the CG [38].
Further investigation of the EG, applying the Pearson correlation, found that the total use of the Q&A history was significantly correlated with the number of words (r = 0.666, p < 0.05). Similar to the previous analyses, this indicates that the Q&A history could have made the essays longer in the EG.

5.4. The Learners’ Perceptions of X-Education

The interviews were used to address the fourth research question about the EFL learners’ perceptions of X-Education. The interviews in the EG were coded based on the themes of their answers to understand the benefits of X-Education for EFL learning, as shown in Table 7. The EFL learners felt that X-Education was simple and easy to use for the Q&A and the writing (IV4). This is because they could recall the questions they had asked before to inspire them when writing the essays (IV2). Further, they felt that the answers from the smart things were useful (IV1), since the EG used the forwarding mechanisms, which improved the answers in the knowledge bases.
However, they thought the interaction would be more interesting with the smart physical objects than with the smart digital objects (IV6 and IV7), possibly because they could interact with the physical forms directly, for example, opening the cover or checking the status of the smart trash bin while asking questions. In addition to the use of the Q&A history in the writing test, the grammar feedback could also help learners enhance their essays (IV5). In summary, the learners felt that X-Education could help them to learn English, not only learning writing after the Q&A with the smart things but also learning to ask questions in English by speaking (IV8).

6. Conclusions

This preliminary study demonstrated that all the things in X-Education could educate each other by learning from and teaching other things, whether physical, digital, AI, or human. Hence, the X-Education framework can be used as a foundation for designing the pedagogical aspects of the future of education. This is because the human teacher not only teaches human learners but also teaches other things, such as smart physical objects, to improve their knowledge. In this study, we also demonstrated how to apply X-Education, with all things orchestrated, for EFL learners through Q&A interactions in English. Thus, the EFL learners could benefit from having Q&A interactions with all things. In addition, this study provided the details of all things and an example of a knowledge base that can be used by other studies to implement X-Education in future research.
Regarding the first research question, the knowledge bases of the smart things, such as the smart digital object(s) and the smart physical objects, increased over time through Q&A interactions with the human learners. The smart things received help from the external AI. Hence, the X-Education framework demonstrated that smart things can learn from and teach each other, which improves their knowledge bases. We conclude that education can occur not only for humans but also for all things in the real or digital world.
Regarding the second research question, the Q&A interactions differed between the two groups. Most smart things in the CG needed help from the external AI, since the knowledge bases in their on-device AI could not provide good answers or address the learners’ questions. On the other hand, the EG, with the forwarding mechanisms, could answer the learners’ questions directly from the knowledge bases in its on-device AI, since the on-device AIs learned from each other through the forwarding mechanisms.
Regarding the third research question, the EFL learners, as humans, benefited from interacting with the smart things. However, learners in the EG received more benefits than those in the CG because the quality of the answers from the smart things’ knowledge bases in on-device AI could address the learners’ questions. The learners could use the answers in the Q&A history for reference or inspiration when writing a meaningful essay. Regarding the fourth research question, the learners felt that X-Education could enhance their English writing and speaking skills because they interacted with the smart things in English.
This preliminary study has several limitations. The Q&A interaction time was limited and only accommodated short Q&A with a pre-trained mobileBERT model. We had only one expert to monitor the gained knowledge bases and manage the smart things before they were used with new learners; more experts are needed to check each smart thing’s knowledge base. Future studies should not only increase the knowledge bases but also improve the abilities of on-device AI and external AI with fine-tuning mechanisms. In addition, not only can AI agents or external AI help on-device AI to increase its knowledge base, but experts could also help build the knowledge base of a smart thing by asking several questions and then clarifying the answers before they are used by EFL learners. Hence, future studies of X-Education could use not only AI agents but also human experts to build the knowledge bases.

Author Contributions

Conceptualization, W.-Y.H. and R.N.; methodology, W.-Y.H. and R.N.; formal analysis, R.N.; investigation, R.N.; data curation, R.N.; writing—original draft preparation, R.N.; writing—review and editing, W.-Y.H.; visualization, R.N.; supervision, W.-Y.H.; funding acquisition, W.-Y.H. All authors have read and agreed to the published version of the manuscript.

Funding

This study is partly funded through 109-2511-H-008-009-MY3 and 111-2410-H-008-061-MY3, National Science and Technology Council, Taiwan.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data available upon request.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. The smart digital object(s) and the smart physical objects.
Each entry describes the internet of things implementation (left image in the original table) and mobile computing for Q&A (right image).
The smart digital object can show a 3D digital NFT version of the air purifier with augmented reality (left). The learner can ask questions through speech related to the air purifier (right).
The first smart physical object is an alcohol spray. It sprays alcohol automatically when a hand is near the sprayer (left). The learner can ask questions through speech related to the alcohol spray (right).
The second smart physical object is a smart trash bin. Its cover opens or closes automatically when a hand is near (left). The learner can ask questions through speech related to the trash bin (right).
The third smart physical object is a smart display. It can show related information about COVID-19 or other topics (left). The learner can ask questions through speech related to the smart display (right).

Appendix B

Table A2. The example of a knowledge base with forwarding mechanisms on the smart digital air purifier used by EG.
Phase: Initial knowledge base (derived from 5W1H)
  • I am a smart Air Purifier.
  • My functions are to reduce allergy sufferers, and asthmatics and to eliminate second-hand tobacco smoke.
  • Air purifier is a device which removes contaminants from the air in a room to improve indoor air quality.
  • I am located near your desk in your office.
  • I can be used daily when you are working in the office.
  • I have a blue color and three swing fans.
Phase: The gained knowledge base after the AI agent asked several questions in the first phase
  7. I remove contaminants from the air in a room to improve indoor air quality.
  8. I need to be cleaned regularly, and my filter should be changed depending on the type of air purifier and the environment in which it is used.
  9. I’m an air purifier and my job is to remove contaminants from the air in a room to improve indoor air quality.
  10. Depending on the model of air purifier, you may need to change the filter every few months or every year.
  11. I’m the best way is to use an air purifier.
  12. By using it daily, I reduce allergy sufferers, asthmatics and eliminate second-hand tobacco smoke.
  13. I’m an air purifier, and I can remove allergens, pollutants, and other particles from the air.
  14. This can improve indoor air quality and help reduce the symptoms of allergies and asthma.
  15. I’m an air purifier and I work by pulling in air from the room and passing it through a filter.
  16. The filter removes particles, such as dust, pollen, and smoke, from the air.
  17. This improves the air quality in the room and can help with allergies, asthma, and smoke.
  18. An air purifier helps to remove contaminants from the air in a room to improve indoor air quality.
  19. The filter in an air purifier should be replaced every 3–6 months.
  20. I clean the air by removing contaminants from the air in a room to improve indoor air quality.
Phase: The gained knowledge base after the first participant asked questions in the second phase
  21. I come in many different shapes, sizes, and types.
  22. The most common type of air purifier is the mechanical filter, which uses a fan to force air through a series of filters to remove contaminants.
  23. I should be replaced every 3 to 6 months.
  24. And so on…

Appendix C

The first EFL learner asked the question again because the answer was not relevant to the question, which caused a long Q&A interaction (see Figure A1, numbers a3–a4). However, the last EFL learner benefited from the previously increased knowledge base (see Figure A1, number b3).
Figure A1. The benchmark of the Q&A interaction quality for the six literal questions answered by on-device AI, as follows: (a) the Q&A of the first participant with the smart air purifier; (b) the Q&A of the last participant with the smart air purifier.

References

  1. Zhang, M.B.; Li, X. Design of Smart Classroom System Based on Internet of Things Technology and Smart Classroom. Mob. Inf. Syst. 2021, 2021, 1–9. [Google Scholar] [CrossRef]
  2. Lin, K.; Li, Y.; Zhang, Q.; Fortino, G. AI-Driven Collaborative Resource Allocation for Task Execution in 6G-Enabled Massive IoT. IEEE Internet Things J. 2021, 8, 5264–5273. [Google Scholar] [CrossRef]
  3. Baghban, H.; Rezapour, A.; Hsu, C.H.; Nuannimnoi, S.; Huang, C.Y. Edge-AI: IoT Request Service Provisioning in Federated Edge Computing Using Actor-Critic Reinforcement Learning. IEEE Trans. Eng. Manag. 2022, 1–10. [Google Scholar] [CrossRef]
  4. Dec, G.; Stadnicka, D.; Pasko, L.; Madziel, M.; Figlie, R.; Mazzei, D.; Tyrovolas, M.; Stylios, C.; Navarro, J.; Sole-Beteta, X. Role of Academics in Transferring Knowledge and Skills on Artificial Intelligence, Internet of Things and Edge Computing. Sensors 2022, 22, 2496. [Google Scholar] [CrossRef]
  5. Ramasamy, L.K.; Khan, F.; Shah, M.H.M.; Prasad, B.; Iwendi, C.; Biamba, C. Secure Smart Wearable Computing through Artificial Intelligence-Enabled Internet of Things and Cyber-Physical Systems for Health Monitoring. Sensors 2022, 22, 1076. [Google Scholar] [CrossRef]
  6. Gill, S.S.; Xu, M.; Ottaviani, C.; Patros, P.; Bahsoon, R.; Shaghaghi, A.; Golec, M.; Stankovski, V.; Wu, H.; Abraham, A.; et al. AI for next generation computing: Emerging trends and future directions. Internet Things 2022, 19, 100514. [Google Scholar] [CrossRef]
  7. Gong, C.; Lin, F.H.; Gong, X.W.; Lu, Y.M. Intelligent Cooperative Edge Computing in Internet of Things. IEEE Internet Things J. 2020, 7, 9372–9382. [Google Scholar] [CrossRef]
  8. Zhang, Q.; Zhong, H.; Shi, W.; Liu, L. A trusted and collaborative framework for deep learning in IoT. Comput. Netw. 2021, 193, 108055. [Google Scholar] [CrossRef]
  9. Shin, D.J.; Kim, J.J. A Deep Learning Framework Performance Evaluation to Use YOLO in Nvidia Jetson Platform. Appl. Sci. 2022, 12, 3734. [Google Scholar] [CrossRef]
  10. Lin, T.; Fan, Y.; Chung, J.; Cen, C. What’s New in TensorFlow Lite for NLP. Available online: https://blog.tensorflow.org/2020/09/whats-new-in-tensorflow-lite-for-nlp.html (accessed on 8 August 2022).
  11. Dowling, M. Is non-fungible token pricing driven by cryptocurrencies? Financ. Res. Lett. 2022, 44, 102097. [Google Scholar] [CrossRef]
  12. Jang, I.; Kim, H.; Lee, D.; Son, Y.S.; Kim, S. Knowledge Transfer for On-Device Deep Reinforcement Learning in Resource Constrained Edge Computing Systems. IEEE Access 2020, 8, 146588–146597. [Google Scholar] [CrossRef]
  13. Fernández-Río, J. Student-teacher-content-context: Indissoluble Ingredients in the Teaching-learning Process. J. Phys. Educ. Recreat. Danc. 2016, 87, 3–5. [Google Scholar] [CrossRef]
  14. Matuk, C.; Linn, M.C. Why and how do middle school students exchange ideas during science inquiry? Int. J. Comput-Support. Collab. Learn. 2018, 13, 263–299. [Google Scholar] [CrossRef]
Figure 1. The X-Education framework.
Figure 2. All things learn from each other.
Figure 3. An example of a physical object, a smart trash bin with edge computing, used for Q&A: (a) internet of things (IoT); (b) mobile computing; and (c) Q&A interactions.
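Figure 3 shows a smart object answering questions locally through edge computing. As a rough illustration only (not the authors' implementation), the sketch below shows how a compact extractive question-answering model such as MobileBERT could run on a resource-limited device and answer questions against the object's accumulated knowledge; the Hugging Face pipeline, the checkpoint name, and the example knowledge text are assumptions made for this sketch.

```python
# Minimal sketch: on-device extractive Q&A over a smart object's local knowledge.
# The model checkpoint and the knowledge text are illustrative assumptions.
from transformers import pipeline

# A compact MobileBERT-style QA model intended for resource-limited devices (assumed checkpoint).
qa = pipeline("question-answering", model="csarron/mobilebert-uncased-squad-v2")

# The object's local knowledge base, e.g., text accumulated from earlier Q&A turns.
knowledge = (
    "The smart trash bin separates recyclable and non-recyclable waste. "
    "Its lid opens automatically when the ultrasonic sensor detects a user nearby."
)

result = qa(question="How does the smart trash bin open its lid?", context=knowledge)
print(result["answer"], result["score"])
```

Keeping inference on the device itself avoids a network round trip for every question, which is the main motivation for pairing each object with an on-device AI.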
Figure 4. The forwarding mechanism.
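Figure 4 depicts the Q&A forwarding mechanism. The sketch below is a minimal reading of the idea under our own assumptions: the on-device AI answers when it is confident, and otherwise the question is forwarded to an external AI, whose answer is stored back into the object's knowledge base. The function names, the confidence threshold, and the interfaces are illustrative, not the system's actual API.

```python
# Minimal sketch of a confidence-based Q&A forwarding mechanism (illustrative only).
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class SmartObject:
    name: str
    knowledge: List[str] = field(default_factory=list)  # grows as the object "learns"

def answer_with_forwarding(
    obj: SmartObject,
    question: str,
    local_qa: Callable[[str, str], Tuple[str, float]],  # (question, context) -> (answer, confidence)
    external_qa: Callable[[str], str],                   # e.g., a cloud language-model service
    threshold: float = 0.5,                              # assumed confidence cut-off
) -> str:
    context = " ".join(obj.knowledge)
    answer, confidence = local_qa(question, context)
    if confidence < threshold:
        # The on-device AI is not confident: forward the question to the external AI
        # and keep its answer so the object's knowledge base increases over time.
        answer = external_qa(question)
        obj.knowledge.append(answer)
    return answer

# Example with stand-in callables:
bin_ = SmartObject("smart trash bin", ["The lid opens when a user is detected."])
print(answer_with_forwarding(bin_, "When does the lid open?",
                             local_qa=lambda q, c: (c, 0.9),
                             external_qa=lambda q: "It opens when the sensor detects a user."))
```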
Figure 5. The experimental procedure.
Figure 6. The X-Education experimental environment for EFL learning: (a) smart digital air purifier; (b) smart alcohol spray; (c) smart trash bin; and (d) smart display.
Figure 7. The user interface of the writing test.
Figure 8. Growth of the knowledge bases of the four smart objects over time: (a) the smart alcohol spray, smart trash bin, and smart display; (b) the smart digital air purifier.
Table 1. The gained knowledge bases between the two groups.
All Things | EG M | EG SD | CG M | CG SD | Total M | Total SD | Mann–Whitney U | Z | p
On-device AI
  1. Smart digital object(s) | 9.91 | 6.44 | 3.64 | 1.29 | 6.77 | 5.55 | 16.500 | −2.929 | 0.003 **
  2. Smart physical objects | 35.91 | 20.55 | 14.00 | 2.68 | 24.95 | 18.17 | 12.500 | −3.161 | 0.002 **
Notes. ** p < 0.01.
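Tables 1, 2, 5, and 6 report Mann–Whitney U comparisons between the two groups. As a hedged illustration of how such a comparison can be computed, the sketch below uses scipy with made-up per-learner placeholder counts, not the study's data.

```python
# Minimal sketch of a Mann-Whitney U test like those reported in Tables 1, 2, 5 and 6.
# The per-learner values are placeholders, not the study's data.
from scipy.stats import mannwhitneyu

eg_gain = [12, 8, 15, 3, 9, 20, 7, 11, 6, 10, 8]  # EG, n = 11 (illustrative)
cg_gain = [4, 3, 5, 2, 4, 6, 3, 4, 2, 5, 2]        # CG, n = 11 (illustrative)

u, p = mannwhitneyu(eg_gain, cg_gain, alternative="two-sided")
print(f"U = {u:.3f}, p = {p:.3f}")
```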
Table 2. The interactions between humans and the smart things in the two groups.
All Things | EG M | EG SD | CG M | CG SD | Total M | Total SD | Mann–Whitney U | Z | p
Answers from on-device AI
  1. Smart digital object(s) | 13.45 | 4.13 | 13.09 | 2.34 | 13.27 | 3.28 | 47.500 | −0.863 | 0.388
  2. Smart physical objects | 41.73 | 7.56 | 38.09 | 6.23 | 39.91 | 7.02 | 40.500 | −1.317 | 0.188
Answers from external AI
  1. Smart digital object(s) | 2.36 | 1.43 | 4.64 | 2.94 | 3.50 | 2.54 | 28.000 | −2.164 | 0.030 *
  2. Smart physical objects | 4.64 | 3.53 | 12.27 | 4.12 | 8.45 | 5.41 | 6.500 | −3.566 | 0.000 ***
Notes. * p < 0.05; *** p < 0.001.
Table 3. The Pearson correlations between the number of answers from the on-device AI and the number of answers generated by the forwarding mechanism in the EG.
Interaction | 1 | 2 | 3 | 4
Total number of answers from the on-device AI
  1. Smart digital object(s) | 1 | | |
  2. Smart physical objects | 0.823 ** | 1 | |
Total number of answers generated by the forwarding mechanism
  3. Smart digital object(s) | 0.801 ** | 0.935 ** | 1 |
  4. Smart physical objects | 0.879 ** | 0.873 ** | 0.881 ** | 1
Notes. ** Correlation is significant at the 0.01 level (two-tailed).
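The coefficients in Table 3 are standard Pearson correlations between per-learner answer counts. A minimal sketch of such a computation, again with placeholder values rather than the study's data:

```python
# Minimal sketch of the Pearson correlations reported in Table 3 (placeholder data).
from scipy.stats import pearsonr

on_device_answers = [13, 10, 18, 9, 15, 12, 14, 16, 11, 13, 17]  # per learner, illustrative
forwarded_answers = [3, 1, 6, 1, 4, 2, 3, 5, 2, 3, 6]            # per learner, illustrative

r, p = pearsonr(on_device_answers, forwarded_answers)
print(f"r = {r:.3f}, p = {p:.3f}")
```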
Table 4. The quality of the Q&A interactions.
All Things | EG Good | EG Not Good | CG Good | CG Not Good
Answers from on-device AI
  1. Smart digital object(s) | 70.27% (104) | 29.73% (44) | 66.67% (88) | 33.33% (44)
  2. Smart physical objects | 81.05% (372) | 18.95% (87) | 74.94% (314) | 25.06% (105)
Answers from external AI
  1. Smart digital object(s) | 80.77% (21) | 19.23% (5) | 66.67% (34) | 33.33% (17)
  2. Smart physical objects | 82.35% (42) | 17.65% (9) | 77.78% (105) | 22.22% (30)
Table 5. The prior knowledge of the two groups.
Variables | EG M | EG SD | CG M | CG SD | Total M | Total SD | Mann–Whitney U | Z | p
Prior knowledge
  1. Number of words | 166.82 | 73.62 | 174.82 | 30.15 | 170.81 | 55.04 | 44.500 | −1.051 | 0.293
  2. MTLD | 52.99 | 11.33 | 55.75 | 16.31 | 54.37 | 13.77 | 53.000 | −0.492 | 0.622
  3. Clauses per T-unit | 1.59 | 0.48 | 1.43 | 0.22 | 1.51 | 0.375 | 55.500 | −0.329 | 0.742
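Tables 5 and 6 characterise the essays by the number of words, MTLD (a length-insensitive lexical-diversity index), and clauses per T-unit (a syntactic-complexity measure commonly obtained with tools such as the L2 Syntactic Complexity Analyzer). The sketch below is our own simplified MTLD implementation for illustration, not the authors' toolchain; the naive tokenizer and the example sentence are assumptions.

```python
# Minimal sketch of the MTLD lexical-diversity measure (sequential TTR factors, threshold 0.72).
def _mtld_factors(tokens, ttr_threshold=0.72):
    factors, types, count = 0.0, set(), 0
    for tok in tokens:
        count += 1
        types.add(tok)
        if len(types) / count <= ttr_threshold:
            factors += 1              # a full factor is completed, reset the window
            types, count = set(), 0
    if count > 0:                     # partial factor for the leftover segment
        ttr = len(types) / count
        factors += (1 - ttr) / (1 - ttr_threshold)
    return len(tokens) / factors if factors > 0 else float(len(tokens))

def mtld(text):
    tokens = text.lower().split()     # naive whitespace tokenizer (assumption)
    return (_mtld_factors(tokens) + _mtld_factors(tokens[::-1])) / 2

print(round(mtld("the smart trash bin opens its lid when it detects a user near the bin"), 2))
```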
Table 6. The writing test results and writing behaviors of the two groups.
Variables | EG M | EG SD | CG M | CG SD | Total M | Total SD | Mann–Whitney U | Z | p
Writing test
  1. Number of words | 245.54 | 85.18 | 201.54 | 58.85 | 223.54 | 85.18 | 48.500 | −0.788 | 0.430
  2. MTLD | 59.31 | 13.79 | 56.16 | 12.41 | 57.74 | 12.90 | 51.000 | −0.624 | 0.533
  3. Clauses per T-unit | 1.83 | 0.62 | 1.71 | 0.48 | 1.77 | 0.55 | 52.000 | −0.558 | 0.577
Writing behaviors
  1. Total uses of the Q&A history (citations) | 13.09 | 7.04 | 6.27 | 6.63 | 9.68 | 7.53 | 28.000 | −2.144 | 0.032 *
  2. Total uses of the AI summary | 5.09 | 4.22 | 3.36 | 5.39 | 4.22 | 4.81 | 45.000 | −1.045 | 0.296
  3. Total uses of grammar feedback | 18.27 | 15.43 | 9.81 | 5.56 | 14.04 | 12.12 | 32.500 | −1.843 | 0.065
  4. Total revisions | 7.09 | 4.41 | 5.18 | 3.21 | 6.13 | 3.89 | 44.000 | −1.092 | 0.275
Notes. * p < 0.05.
Table 7. The interview results with data coding for the EG.
Code | Theme | Percentage (n = 11)
IV1 | The answers from the smart digital object(s) and physical objects were helpful for my essay writing. | 81% (9)
IV2 | The history of Q&A was useful because I could recall my questions and their answers and cite them when writing my essay. | 100% (11)
IV3 | The summary by AI was not useful because its meaning was almost the same as the history of Q&A. | 72% (8)
IV4 | The learning environment was simple and easy to use for Q&A and writing. | 100% (11)
IV5 | The history of Q&A and the grammar feedback were useful: I could recall the questions and answers through the history of Q&A and later check my writing with the grammar feedback when writing the essay. | 54% (6)
IV6 | I like the smart physical objects in X-Education because I can interact with real objects. | 54% (6)
IV7 | I will use X-Education in the future; in particular, I want to use the smart physical objects for Q&A. | 81% (9)
IV8 | X-Education can enhance my English ability: speaking, through Q&A with the smart digital or physical objects, and writing, through essay writing with grammar feedback. | 72% (8)
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
