
Sensors 2019, 19(14), 3226; https://doi.org/10.3390/s19143226

Article
Use of Computing Devices as Sensors to Measure Their Impact on Primary and Secondary Students’ Performance
Escuela Politécnica, Universidad Católica de Murcia, 30107 Guadalupe, Spain
Correspondence: [email protected]; Tel.: +34-968-278-821
These authors contributed equally to this work.
Received: 20 May 2019 / Accepted: 18 July 2019 / Published: 22 July 2019

Abstract:
The constant innovation in new technologies and the increasing use of computing devices in different areas of society have contributed to a digital transformation in almost every sector. This digital transformation has also reached the world of education, making it possible for members of the educational community to adopt Learning Management Systems (LMS), in which the digital contents replacing traditional textbooks are managed and exploited. This article studies the relationship between the type of computing device from which students access the LMS and how it affects their performance. To achieve this, the LMS accesses of students in a school spanning from elementary to bachelor's degree stages have been monitored, with the computing devices acting as sensors to gather data such as the type of device and the operating system used by the students. The main conclusions are that students who access the LMS significantly improve their performance and that the type of device and operating system have an influence on the number of passed subjects. Moreover, a predictive model has been generated to estimate the number of passed subjects according to these factors, showing promising results.
Keywords:
computing devices; primary and secondary education; students’ performance; Learning Management System; learning analytics

1. Introduction

As a consequence of the accelerated advance of digital transformation in today's society, different sectors and services have notably improved their efficiency, sustainability and innovation processes. This is the case in education, health, transport and farming, among others. This change amounts to a digital revolution, transforming the fundamental nature of organizations and highlighting the need to update their digital processes [1].
In the last decade, the Internet of Things (IoT) has brought us closer to an increasingly transparent integration between the real and computing worlds. The digitization of processes in different social sectors has been enhanced by the continued proliferation and impact of mobile devices in society. This has enabled institutions to base their workflows on software systems controlled by digital devices [2,3]. These factors have generated special interest in the educational community, which uses Learning Management Systems (LMS) as the main tool for managing academic content. As a consequence, both students and teachers use the LMS on a daily basis to access learning resources through different digital devices, such as computers, tablets or mobile phones [4]. In this context, it is of special interest to evaluate how teaching and learning processes improve depending on the type of device from which the access is performed [5].
A solution adopted in recent years for obtaining the data generated by digital devices is the integration of sensors, which are used to analyze the usability and benefits of new emerging technologies and allow us to accurately quantify the data obtained through digital devices in different areas [6,7,8,9]. In education, the use of sensors as measurement tools in various disciplines is noteworthy; for example, inertial sensors have been used to monitor and evaluate movement patterns in different sports activities in schools, with the aim of engaging students in sport disciplines [10].
The use of sensors is one of the most promising lines in digital transformation and Industry 4.0, and a growing field in the education sector. Sensors offer new learning opportunities, allow the acceptance of e-learning materials to be measured and enable new learning strategies for lecturers and students [11,12]. For example, the use of virtual technologies in education allows immersive experiences interacting with objects, concepts or processes, leading to new methodologies adaptable to any educational level, from primary school to higher education [13]. Another aspect of interest is quantifying the benefit of using mobile devices in the classroom through the sensors they integrate, with the purpose of evaluating the academic performance of students and narrowing the generational technological gap by adopting new educational methodologies [14,15]. Another research line is the manufacture of a type of paper equipped with electromagnetic sensors, which could interact with electronic devices and whose main destination would be the education field [16]. In [17], a fog computing model is proposed to overcome the limitations of LMS based on the analysis of streams of data collected from students' smartphones (GPS location and timestamp data) and heart rate bands while they are connected to the LMS. The main improvements aim at obtaining real-time predictions of students' performance and fatigue and at uncovering new didactic models according to the location and times of study.
The use of mobile devices in the classroom initially met with rejection from lecturers, since the devices were perceived as a distraction. The disorders associated with the use of this technology have even been classified as an addiction, generating a debate in the scientific community as to whether these devices should be excluded from the educational system [18]. However, it is important to progress towards a digital conversion of teaching methodologies and to take advantage of the benefits offered by digital tools [19,20]. Thanks to the use of new technologies, new opportunities are created in the education sector that challenge traditional methodologies. The regular use of digital technology and devices in classrooms has a positive effect on both digital skills and attitudes towards technology, which can benefit students' learning processes [21,22]. Therefore, it is a great challenge for schools to incorporate and adapt this digital revolution to their classrooms [23,24].
The results obtained in different works demonstrate that, thanks to the use of new technologies in the classroom, access to materials improves for both lecturers and students, as they can reach learning resources anytime and anywhere. Another factor that favors the use of technology in the classroom is the improvement of communication and feedback between students, lecturers and parents. Technology allows students to grow as an educational community extending beyond the walls of the classroom. As a result, it may bring students closer to multiple resources and perspectives, as well as to new educational levels and challenges not otherwise available [25,26].
The impact of mobile devices on different educational stages has also been investigated, concluding that such devices have become an essential learning tool with great potential both inside and outside the classroom. They enable an innovative and motivating educational process that allows students to take an active part in it. In general, the use of mobile devices in education performs better than when desktop computers are used or when no mobile devices are used at all in the teaching and learning process [27,28,29]. One of the problems detected in e-learning is the access to materials and activities from mobile devices. It is necessary to adapt learning environments and develop new applications that personalize and adapt digital content to digital devices, thus removing barriers and improving the quality of teaching, as discussed elsewhere [30,31,32].
It is essential to implement methodological changes in the teaching materials in order to adapt them to these devices. The integration of these technologies should be gradual and transparent from the earliest stages, with the aim of minimizing the digital barrier at the stage of higher education [33,34]. It is important to introduce new technologies taking into account the factors included in models of technology acceptance and use, so that digital devices gain greater acceptance among students and lecturers as a new learning tool [35,36]. Therefore, it is necessary to study new strategies for digital instruction that help lecturers promote the use of new teaching technologies benefiting teaching and learning processes, in order to improve the academic results of students.
The main objective of this work is to analyze the impact of the computing devices used to connect to the LMS on students' performance. These computing devices act as sensors providing data such as the type of accessing device (e.g., computer, tablet, phablet or smartphone), the type of operating system and the time when the student logs in to the LMS. Through this analysis we expect to identify the optimal patterns of connection to the LMS and to demonstrate that students' performance can be conditioned by factors such as the type of mobile device from which the student connects, the operating system used and the frequency of connections, among others.
The rest of the paper is structured as follows. The methodology followed in this work and the data used are explained in Section 2. Section 3 presents the results obtained in the case study, which are discussed in Section 4. Finally, Section 5 summarizes the findings of this work and outlines possible future lines of research.

2. Material and Methods

This section describes the experimental research method applied in this work, following the guidelines given in the book "Basics of Software Engineering Experimentation" [37]. First, the research problem is defined through several research questions addressing the topics considered in the experiment (Section 2.1). Then, the design of the experiment is explained (Section 2.2), as well as the specific techniques used to perform it (Section 2.3).

2.1. Research Problem

The aim of this experimental research is to measure the impact of the use of different computing devices on students' performance, considering these devices as sensors measuring students' activity in the LMS. To reach the main objective of this work, the following research questions (RQ) have been defined:
  • RQ1. Are there differences in the number of passed subjects between students who connect to the LMS and those who do not connect?
  • RQ2. Are there differences in the number of failed subjects between students who connect to the LMS and those who do not connect?
  • RQ3. For those students who do connect to the LMS, are there differences in the number of passed subjects depending on the computing device being used?
  • RQ4. For those students who do connect to the LMS, are there differences in the number of failed subjects depending on the computing device being used?
  • RQ5. For those students who do connect to the LMS, are there differences in the number of passed subjects depending on the operating system being used?
  • RQ6. For those students who do connect to the LMS, are there differences in the number of failed subjects depending on the operating system being used?
  • RQ7. Is it possible to generate a predictive model on the number of passed subjects taking into account the data from the computing device (e.g., type of device and OS)?
Observe that we study separately the possible effect of the device and the operating system (OS) on students' performance. The reason for this separation is that there is not a unique OS for each type of device (e.g., Windows, iOS and Android all run on tablets), and therefore we cannot assume that these two variables are dependent.

2.2. Constructing the Experiment

The experiment proposed in this paper has been performed in the Santo Domingo School network, which comprises several school centers in Alicante, Spain. This institution offers a complete educational plan from pre-school to high school. The majority of the students' families have an upper-middle socioeconomic status. For this study, the stages taken into consideration range from Fourth Grade (9-to-10-year-old students) to the last year of the bachelor's degree (17-to-18-year-old students). All the students in these stages are allowed to use computing devices (desktop or mobile) to connect to the LMS. In this manner, the computing devices work as a type of sensor collecting data from students every time they connect to the LMS. It is important to note that students are not restricted to any kind of device and that they can connect to the LMS anytime and anywhere. Likewise, the use of a specific type of device (i.e., smartphone, tablet, PC, etc.) does not imply that all students use the same OS on that type of device.
The data related to the study correspond to the academic year 2017/18 and have been gathered from two sources: (1) the student's file, providing gender, number of passed subjects and number of failed subjects in the whole academic year; (2) the logs of the student's connections, composed of the timestamp (in the format "dd-mm-yyyy hh:mm:ss"), device type and operating system. From these logs we have also obtained the following data for each student: total number of logins in the LMS in the whole academic year, average number of logins per week, maximum and minimum number of logins per week, most frequent connection day, and most frequent connection time slot (hours 0–23). Other variables, such as family problems (divorced parents and/or problems with siblings) or financial problems (related to the payment of school fees), were considered to verify their relationship to students' performance. However, the preliminary results do not show a significant relationship between students' performance and possible economic and/or family problems, so these variables have not been included in this study.
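The derivation of these per-student variables from the raw connection logs can be sketched as follows. This is an illustrative stdlib-only sketch, not the study's actual pipeline: the function name, the log tuple layout and the choice to average over active weeks only (the study may average over the full academic year) are all our assumptions.

```python
from collections import Counter
from datetime import datetime

def connection_features(log_entries):
    """Derive the per-student variables listed above from raw LMS log rows
    of the form (timestamp "dd-mm-yyyy hh:mm:ss", device, os)."""
    stamps = [datetime.strptime(ts, "%d-%m-%Y %H:%M:%S") for ts, _, _ in log_entries]
    # Logins per ISO (year, week); here we average over active weeks only,
    # whereas the study may divide by all weeks of the academic year.
    weeks = Counter(ts.isocalendar()[:2] for ts in stamps)
    return {
        "total_logins": len(stamps),
        "avg_logins_per_week": len(stamps) / len(weeks),
        "max_logins_per_week": max(weeks.values()),
        "min_logins_per_week": min(weeks.values()),
        # Weekday encoded 0 = Monday .. 6 = Sunday; hour in 0-23.
        "most_frequent_day": Counter(ts.weekday() for ts in stamps).most_common(1)[0][0],
        "most_frequent_hour": Counter(ts.hour for ts in stamps).most_common(1)[0][0],
    }

# Three hypothetical log rows for one student:
logs = [("02-10-2017 16:05:00", "tablet", "iOS"),
        ("04-10-2017 16:30:00", "tablet", "iOS"),
        ("09-10-2017 17:10:00", "smartphone", "Android")]
feats = connection_features(logs)
```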
A stratified sampling method has been used first to construct the experiment. We have grouped the students' data following the Spanish education system, namely Elementary (in this study comprising Fourth, Fifth and Sixth grades), Secondary (four years) and Bachelor's degree (two years). The number of participants in each group has been calculated in proportion to the total number of students in those stages. Then, for each group, a cluster sampling method has been employed to divide students into those who have ever used a computing device to connect to the LMS and those who have never connected. Finally, simple random sampling has been applied to select students from each cluster in each stage.
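The three-step sampling pipeline described above can be sketched as follows. The function name, the stage sizes and the total sample size are ours for illustration; the real quotas come from the school's enrolment figures.

```python
import random

def stratified_sample(students_by_stage, sample_size, seed=0):
    """Allocate the sample proportionally per stage, split each stage into
    'ever connected' / 'never connected' clusters, then sample randomly."""
    rng = random.Random(seed)
    total = sum(len(s) for s in students_by_stage.values())
    sample = {}
    for stage, students in students_by_stage.items():
        # Proportional allocation per educational stage (stratification).
        quota = round(sample_size * len(students) / total)
        connected = [s for s in students if s["logins"] > 0]
        never = [s for s in students if s["logins"] == 0]
        # Draw from each cluster in proportion to its share of the stage.
        k_conn = round(quota * len(connected) / len(students))
        sample[stage] = (rng.sample(connected, min(k_conn, len(connected)))
                         + rng.sample(never, min(quota - k_conn, len(never))))
    return sample

# Hypothetical population: 60 elementary and 40 secondary students,
# alternating between connected (logins=1) and never connected (logins=0).
stages = {"Elementary": [{"logins": i % 2} for i in range(60)],
          "Secondary": [{"logins": i % 2} for i in range(40)]}
sample = stratified_sample(stages, sample_size=10)
```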
It is worth mentioning that a control group could not be created for this study due to legal constraints raised by the Santo Domingo School. Instead, students volunteered to participate in the experiment, and the school faculty ensured that the contents to be evaluated were the same for those students who did not want to participate in the study. Likewise, there has been no homogeneous distribution of device types among students, as this would oblige them to purchase a specific device. The solution adopted has been to allow the use of any type of device included in the study or to borrow a device already available in the school. For this study specifically, the school lent 192 iPads (tablets).

2.3. Conducting the Experiment

A total of 1938 students have been analyzed, of which 881 (45.45%) are male and 1057 (54.55%) are female. Of these, 1011 students (52.17%) have connected to the LMS at some point, whereas 927 students (47.83%) have never logged on. Table 1 shows the distribution of the sample according to the education stage.
Table 2 shows the number of students using each device and operating system to connect to the LMS. In general, the most used OS is Windows on computers, iOS on tablets, and Android and iOS on smartphones. Note that there are two students using the Android OS on computers in the Bachelor's degree stage. This is because these two students are using a Smart TV box device running Android, according to the recorded logs. We have considered this kind of device as "Desktop Computer" since it is normally fixed in a room. It is also noteworthy that 12 students in this stage and another 3 in secondary are using an iOS emulator on their PC. When asked about this, they stated that they wanted to use some iOS apps recommended by their teachers on their own PC while accessing the LMS.
The analysis of the data has been carried out in three steps. First, a descriptive analysis is performed on the variables available from the connection logs (type of device and OS). Second, an inferential analysis is done to answer RQ1-RQ6. The Kolmogorov-Smirnov normality test is employed to examine whether the variables are normally distributed. The Mann-Whitney test and the Kruskal-Wallis test with Bonferroni adjustment are then used to compare two means and more than two means, respectively [38]. In all the tests we consider a p-value < 0.05 as significant. Finally, predictive tests are performed to evaluate RQ7. To this end, the M5Rules algorithm is applied [39]. In a nutshell, this technique generates a list of decision trees for regression problems using the separate-and-conquer technique. It returns a list of rules that calculate a value for the output variable according to the values of the input variables.
The SPSS 24.0 software has been employed to execute the first two stages and the Weka suite for the third one.
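The Bonferroni adjustment applied to the pairwise p-values (here computed by SPSS) is a simple multiplication and cap. A minimal sketch, with made-up raw p-values for three pairwise device comparisons:

```python
def bonferroni_adjust(p_values):
    """Bonferroni correction: multiply each raw p-value by the number of
    comparisons made, capping the result at 1.0."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Hypothetical raw p-values for three pairwise comparisons:
adjusted = bonferroni_adjust([0.003, 0.012, 0.2])
```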

3. Results

In this section we present the results obtained for the study proposed in Section 2. First, the results of the descriptive analysis of the sensor (computing device) variables are shown (Section 3.1). Then, the results of the inferential analysis relating these variables to students' performance are described (Section 3.2). Finally, the results of applying the M5Rules technique to predict the number of subjects passed by the students according to their LMS connection behavior are shown (Section 3.3).

3.1. Descriptive Statistics

Initially, we analyze the data from a global point of view, without distinguishing the different educational stages. Table 3 shows the maximum, mean and standard deviation (SD) of the number of passed and failed subjects for students who have ever logged on to the LMS and for those who have never logged on. Observe that the total number of subjects is 9 in elementary, 13 in secondary and 10 in Bachelor's degree. Table 3 shows that students who have ever logged on have an average of 9.27 passed subjects and 1.1 failed subjects, with standard deviations of 2.26 and 1.75, respectively. In addition, the maximum number of failed subjects is 8, whereas the maximum number of passed subjects is 13. The maximum number of connections registered for a single student is 498 logins, whereas the average number of logins per student is 135.2 (SD 88.76). Students who have never logged on to the LMS have an average of 7.65 passed subjects and 2.25 failed subjects, with standard deviations of 2.8 and 2.64, respectively. In addition, the maximum number of passed subjects for a student without connections to the system is 13 and the maximum number of failed subjects is 12.
Next, the analysis is broken down by educational stage. Table 4 shows, for each educational stage, the number of failed and passed subjects taking into account whether or not the student has ever logged into the LMS. The maximum number of connections registered for a single student is 415 logins in elementary, 498 in secondary and 372 in Bachelor's degree, whereas the average number of logins per student is 138.44 (SD 84.43) in elementary, 153.39 (SD 94.10) in secondary and 96.61 (SD 67.87) in Bachelor's degree.
Table 5 shows, for each educational stage and for students who have ever logged into the LMS, the number of passed and failed subjects by the type of device used to log in. The frequency column shows the number of students using a device in a particular stage, with the percentage that device represents of all students who have ever connected in that stage given in brackets. Note that phablets are used by very few students, so their results are not conclusive. In the bachelor's degree stage, the average number of passed subjects (8.02) is highest when students use a tablet as the connecting device. The tablet is also the device with which students in this stage obtain the fewest failed subjects on average (1.30). In the secondary stage, students who log into the LMS using a computer pass more subjects on average (10.82), while those who log in using tablets fail the fewest subjects on average (1.06). In the elementary stage, the results are very similar across devices, although students connecting by smartphone obtain on average a better result in the number of passed subjects.
Table 6 shows the number of failed and passed subjects for students in each of the three educational stages and for each operating system used to log into the LMS. The frequency column shows the number of students using each operating system in a particular stage, with the percentage it represents of the total for that stage given in brackets. In general, students using Windows have a higher average number of passed subjects and a lower average number of failed ones in all stages. Observe that the ChromeOS operating system is only used by students in the secondary stage.

3.2. Inferential Statistics

This section shows the results of the different statistical tests for RQ1-RQ6. First, we must determine whether the data allow the use of parametric or non-parametric tests according to the distribution of the variables. The Kolmogorov-Smirnov normality test returns a p-value of 0.0 for the variables representing both the number of passed subjects and the number of failed subjects. Therefore, non-parametric tests are used for the hypotheses defined in the research questions.
The Mann-Whitney test is performed to evaluate RQ1, namely whether there are significant differences in the number of subjects passed between students who have ever logged into the LMS and those who have not. Four tests have been performed, considering all the education stages at the same time (Figure 1a) and the elementary (Figure 1b), secondary (Figure 1c) and Bachelor's degree (Figure 1d) stages separately. The p-value obtained for the four tests is 0.0, so we conclude that there are significant differences, with a 95% confidence level, in the number of subjects passed between students who have ever logged into the LMS and those who have not. Figure 1 shows these differences in graphical form and also indicates the sample size (N) and the mean rank ("Mean Rank") of each test.
Similarly, RQ2 is evaluated through the Mann-Whitney test to determine whether there are significant differences in the number of subjects failed between students who have ever logged into the LMS and those who have not. Four tests have been performed, considering all the education stages (Figure 2a) and the elementary (Figure 2b), secondary (Figure 2c) and Bachelor's degree (Figure 2d) stages separately. The results indicate a p-value of 0.0 for the first three tests, showing that there are significant differences, with a 95% confidence level, in the number of failed subjects between students who ever connect to the LMS and those who do not, both overall and for the elementary and secondary stages. Contrarily, for the bachelor's degree stage the p-value is 0.5; thus, there are no significant differences for this stage related to RQ2.
RQ3 is evaluated by means of the Kruskal-Wallis test, comparing samples to look for significant differences in the number of passed subjects depending on the type of computing device used when connecting to the LMS. The p-value obtained is 0.0, indicating that there are significant differences with a 95% confidence level between the number of passed subjects and the type of device used by the student. Analyzing the p-values for each pair of device types and adjusting them with the Bonferroni test, there are significant differences with a 95% confidence level between smartphones and PCs, between smartphones and tablets, and between PCs and tablets. These p-values are shown in Table 7, where the last column ("Adj. P-value") shows the p-value adjusted by Bonferroni's post-hoc test.
RQ4 follows RQ3 but considers failed subjects instead. Table 8 shows the results of the Kruskal-Wallis test with the p-value obtained for each pair of compared devices, adjusted with Bonferroni's post-hoc test. There are significant differences, with a 95% confidence level, in the number of failed subjects when comparing tablets and PCs, smartphones and tablets, and smartphones and PCs.
Finally, RQ5 and RQ6 are evaluated through the Kruskal-Wallis test for differences between the number of passed/failed subjects and the operating system used by students to connect to the LMS.
Table 9 shows the values for RQ5. The adjusted p-values in this table indicate that there are differences in the number of passed subjects when comparing the use of Windows, MacOS and Android; when comparing Android with iOS, MacOS and ChromeOS; and when comparing iOS and ChromeOS.
Likewise, Table 10 shows the values for RQ6. The adjusted p-values in this table indicate that there are differences in the number of failed subjects when comparing the use of Windows with MacOS and iOS, and when comparing the use of Android with MacOS and iOS.

3.3. Predicting the Number of Passed Subjects Based on Computing Device Data

The last result is related to RQ7, namely the generation of a predictive model of the number of passed subjects using data from the computing devices. To create this initial predictive model, we have first built a dataset composed of the following attributes:
  • Gender
  • Educational stage
  • Type of device
  • Type of operating system
  • Total number of connections during the academic year
  • Average number of connections per week
  • Maximum number of connections per week
  • Minimum number of connections per week
  • Most frequent connection day
  • Most frequent connection time slot
  • Number of passed subjects
The first ten variables are input attributes and the last variable, "number of passed subjects", is the target attribute. The technique used both to create the model and to assess its fitness is the M5Rules algorithm, described in Section 2.3. One advantage of this technique is the readability of its results, which include easy-to-understand explanations. The experiment has been performed by dividing the original dataset in two: 80% has been used for the training phase and 20% to test the model. To ensure the robustness of the model, instead of performing a cross-validation, we have chosen to repeat the experiment 5 times, taking different training and test sets each time. Thus, if the results obtained each time do not vary, we can affirm that the model is stable and robust. The measures used to assess the model fitness are the root mean squared error (RMSE), the mean absolute error (MAE) and the correlation coefficient (CC) between the input attributes and the output variable. In addition, each of these measures has an associated standard deviation (SD) value obtained from the 5 repetitions. Table 11 shows the mean results of the models created along with their standard deviations.
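The evaluation protocol above (five random 80/20 splits, reporting the mean and SD of RMSE, MAE and CC) can be sketched as follows. This is an illustrative stdlib-only sketch: the function names and the toy perfect predictor are ours, since the actual model is M5Rules as implemented in the Weka suite.

```python
import math
import random
import statistics

def rmse(y, yhat):
    """Root mean squared error."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def mae(y, yhat):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def corr(y, yhat):
    """Pearson correlation coefficient between targets and predictions."""
    my, mp = statistics.mean(y), statistics.mean(yhat)
    num = sum((a - my) * (b - mp) for a, b in zip(y, yhat))
    den = math.sqrt(sum((a - my) ** 2 for a in y)
                    * sum((b - mp) ** 2 for b in yhat))
    return num / den

def repeated_holdout(data, fit, predict, repeats=5, test_frac=0.2, seed=0):
    """Repeat a random 80/20 split `repeats` times and collect the mean and
    standard deviation of each fitness measure, as reported in Table 11."""
    rng = random.Random(seed)
    scores = {"RMSE": [], "MAE": [], "CC": []}
    for _ in range(repeats):
        rows = data[:]
        rng.shuffle(rows)
        cut = int(len(rows) * (1 - test_frac))
        train, test = rows[:cut], rows[cut:]
        model = fit(train)
        y = [row[-1] for row in test]
        yhat = [predict(model, row[:-1]) for row in test]
        for name, fn in (("RMSE", rmse), ("MAE", mae), ("CC", corr)):
            scores[name].append(fn(y, yhat))
    return {k: (statistics.mean(v), statistics.stdev(v)) for k, v in scores.items()}

# Toy dataset (feature x, target 2x + 1) with a perfect stand-in predictor:
data = [(x, 2 * x + 1) for x in range(50)]
results = repeated_holdout(data, fit=lambda train: None,
                           predict=lambda model, feats: 2 * feats[0] + 1)
```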
In total, the model creates 206 rules. The following two rules are examples of standard rules created by the M5Rules algorithm to determine the number of passed subjects per student. The variable E.Stage indicates the educational stage, with code 1 referring to "Elementary" and code 3 to "Bachelor's degree". The other variables involved are OSystem, CodDevice, MaxconnectionsWeek, NconnectionsWeek and NpassedSubject, which represent the type of operating system, the type of device, the maximum number of connections per week, the average number of connections per week and the number of passed subjects, respectively.
IF (E.Stage = 3) AND (OSystem = 5) AND (CodDevice = 0) AND (MaxconnectionsWeek >= 7.245) AND (NconnectionsWeek > 4.63)
THEN NpassedSubject = 10.4463
IF (E.Stage = 1) AND (CodDevice = 1) AND (MaxconnectionsWeek <= 3.845) AND (NconnectionsWeek < 2.29)
THEN NpassedSubject = 6.3052
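Rules like these are applied as an ordered, first-match list. A minimal sketch of that mechanism using the two example rules; the dictionary encoding and the default value for students matched by no rule are our assumptions for illustration.

```python
# First-match rule list mirroring the two example rules above.
rules = [
    (lambda s: s["E.Stage"] == 3 and s["OSystem"] == 5 and s["CodDevice"] == 0
               and s["MaxconnectionsWeek"] >= 7.245 and s["NconnectionsWeek"] > 4.63,
     10.4463),
    (lambda s: s["E.Stage"] == 1 and s["CodDevice"] == 1
               and s["MaxconnectionsWeek"] <= 3.845 and s["NconnectionsWeek"] < 2.29,
     6.3052),
]

def predict_passed(student, default=8.0):
    """Return the consequent of the first rule whose conditions hold;
    the default value is hypothetical."""
    for condition, value in rules:
        if condition(student):
            return value
    return default

# An elementary-stage student matched by the second rule:
student = {"E.Stage": 1, "CodDevice": 1,
           "MaxconnectionsWeek": 3.0, "NconnectionsWeek": 2.0}
```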
In addition to the prediction model, we have also obtained a selection of the most relevant variables used in its creation. We consider a variable relevant if it has been used in at least 90% of the rules obtained in the predictive model. The most relevant variables are (in brackets, the percentage of rules in which each variable occurs): average number of connections per week (92.3%), maximum number of connections per week (95.4%), and educational stage and type of device (98.1% each).
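The relevance criterion above reduces to counting, for each variable, the fraction of rules that test it. A small sketch with a made-up three-rule model (the real model has 206 rules):

```python
from collections import Counter

# Hypothetical rule list: each rule is the set of variable names it tests.
rule_vars = [
    {"E.Stage", "CodDevice", "MaxconnectionsWeek", "NconnectionsWeek"},
    {"E.Stage", "OSystem", "MaxconnectionsWeek"},
    {"E.Stage", "CodDevice", "NconnectionsWeek"},
]

def relevant_variables(rule_vars, threshold=0.9):
    """Return the variables used in at least `threshold` of all rules,
    mapped to their fraction of occurrence."""
    counts = Counter(v for variables in rule_vars for v in variables)
    n = len(rule_vars)
    return {v: c / n for v, c in counts.items() if c / n >= threshold}

relevant = relevant_variables(rule_vars)
```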

4. Discussion

The descriptive statistics applied to the data set indicate that, on average, students who log into the LMS obtain better figures for both passed and failed subjects, even taking into account the standard deviations. Thus, on average, students with connections to the system pass almost two more subjects and fail one subject fewer than students who never connect. The most common devices used to log into the LMS are tablets, followed by PCs and smartphones. Regarding the operating system, the most used is iOS, followed by Windows and Android. It must be taken into account that phablets are rarely used and their data are not representative. After consulting the school about the low use of this device, we conclude that it is not effective in terms of performance and cost.
The analysis of the proposed RQs leads us to the following conclusions:
  • RQ1. There are significant differences in the number of subjects passed by a student depending on whether or not the student has ever logged into the LMS. Analyzing the values in Table 4 along with the statistical test demonstrates that a student passes more subjects if he or she has ever logged into the LMS. This conclusion holds for each educational stage considered separately.
  • RQ2. There are significant differences in the number of subjects failed by a student depending on whether or not the student has ever logged into the LMS. Analyzing the values in Table 4 along with the statistical test, it is concluded that a student fails fewer subjects if he or she ever logs into the LMS. This conclusion holds for each educational stage considered separately, except for the bachelor's degree stage, where there is no significant relationship between whether students have ever accessed the LMS and the number of failed subjects.
  • RQ3. There are significant differences in the number of subjects passed by a student depending on the type of device used to log into the LMS. Specifically, the results of the statistical test and Table 5 show that students who use tablets pass the most subjects, followed by students who use PCs and, finally, students who use smartphones.
  • RQ4. There are significant differences in the number of subjects failed by a student depending on the type of device used to log into the LMS. The results of the statistical test and Table 5 show that students who use tablets fail the fewest subjects, followed by students who use PCs and, finally, students who use smartphones.
  • RQ5. There are significant differences in the number of subjects passed by a student depending on the operating system used to log into the LMS. According to the results of the statistical test and Table 6, students who use macOS and Chrome OS pass the most subjects, followed by students who use iOS, then students who use Windows and, finally, students who use Android.
  • RQ6. There are significant differences in the number of subjects failed by a student depending on the operating system used to log into the LMS. In this case, students using macOS, iOS and Chrome OS obtain a similar number of failed subjects, with no significant differences among them. This number is lower than that of students using Windows and Android.
  • RQ7. The initial rule model created with the M5Rules algorithm is robust: after five random repetitions it produces similar results with a low standard deviation. Although the results can be improved, the model predicts the number of passed subjects with an average error of almost two subjects and obtains an average correlation of 0.72 between the input attributes and the target attribute. Some variables, such as the time slot or the most frequent day of connection, were not included in this first model; they will be analyzed in a more complex model that may provide more accurate results.
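The tests behind RQ1–RQ6 can be sketched with SciPy on synthetic data (the distributions below are invented purely for illustration): a Mann-Whitney U test for the two-group comparisons (ever logged in vs. never), and a Kruskal-Wallis test, followed by Bonferroni-adjusted pairwise comparisons, for the multi-group comparisons by device or operating system.

```python
import numpy as np
from scipy.stats import mannwhitneyu, kruskal

rng = np.random.default_rng(0)

# Synthetic passed-subject counts (illustrative, not the study's data).
passed_logged = rng.integers(7, 14, size=200)   # students who access the LMS
passed_never = rng.integers(4, 12, size=200)    # students who never do

# RQ1-style comparison: two independent groups -> Mann-Whitney U test.
u_stat, p_mw = mannwhitneyu(passed_logged, passed_never, alternative="two-sided")

# RQ3-style comparison: three device groups -> Kruskal-Wallis test.
passed_tablet = rng.integers(8, 14, size=150)
passed_pc = rng.integers(6, 12, size=100)
passed_phone = rng.integers(4, 11, size=80)
h_stat, p_kw = kruskal(passed_tablet, passed_pc, passed_phone)

# A Bonferroni adjustment for the 3 pairwise post-hoc comparisons
# would multiply each pairwise p-value by 3 (capped at 1).
print(f"Mann-Whitney p = {p_mw:.2e}, Kruskal-Wallis p = {p_kw:.2e}")
```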
We can therefore conclude that the use of computing devices at any time for studying positively influences students in the elementary, secondary and bachelor's degree stages: it both reduces the number of failed subjects and increases the number of passed subjects.
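Finally, the robustness protocol from RQ7 — rebuilding the model on several random splits and averaging the correlation coefficient (CC) and mean absolute error (MAE), as reported in Table 11 — can be sketched as follows. A plain linear fit stands in for the M5Rules learner here, and the synthetic data are for illustration only:

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic data: x = average connections per week, y = passed subjects.
x = rng.random(500) * 10
y = 5.0 + 0.6 * x + rng.normal(0.0, 1.5, 500)

cc_scores, mae_scores = [], []
for seed in range(5):                              # five random repetitions
    idx = np.random.default_rng(seed).permutation(500)
    train, test = idx[:350], idx[350:]             # 70/30 random split
    slope, intercept = np.polyfit(x[train], y[train], 1)   # stand-in learner
    pred = slope * x[test] + intercept
    cc_scores.append(np.corrcoef(y[test], pred)[0, 1])     # correlation coeff.
    mae_scores.append(np.mean(np.abs(y[test] - pred)))

print(f"CC  mean={np.mean(cc_scores):.3f} sd={np.std(cc_scores):.3f}")
print(f"MAE mean={np.mean(mae_scores):.3f} sd={np.std(mae_scores):.3f}")
```

A low standard deviation of CC and MAE across the repetitions is what the paper interprets as robustness of the model.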

5. Conclusions and Future Work

The main objective of this work is to analyze the impact on students' performance of connecting to an LMS (Learning Management System) through different digital devices (PCs, smartphones, tablets and phablets). These devices act as sensors that collect data about students' habits when accessing the LMS, taking into account variables such as the type of device, the operating system and the number of connections per student. Three educational stages have been considered in this study, namely elementary, secondary and bachelor's degree.
The results of this work show that students who connect to the LMS pass a greater number of subjects in all stages. In addition, and taking into account the limitations of the study explained below, the use of a specific computing device may help students obtain better educational performance. Thus, the number of passed subjects increases for students who use tablets as their preferred computing device. It has also been shown that the operating system used is an influential factor in the number of passed subjects, since students using macOS get the best results. Moreover, a rule model has been built to predict the number of passed subjects, with the number of connections to the LMS and the type of device as the most relevant factors.
According to our study, the relevance of the operating system as a factor in students' performance should be highlighted. In this sense, several studies in this line show that aspects such as the usability and the interface design of the OS foster greater engagement among students [40,41]. The operating systems considered in this study (Windows, Chrome OS, Android, iOS and macOS) can all integrate a variety of e-learning applications. In this line, the findings of this study show that students who use macOS as their operating system have a better success rate on average. As a result, the school associated with this study will recommend, as the main option, those devices that allow the use of such an operating system.
There are some limitations to this study. One of the most relevant is that the results obtained may be partially conditioned by the predominance of tablets, since the tablet is the device recommended by the school. Another limiting factor is the notable use of iPads, in this case due to the school's loan policy: for this study, the school lent 192 iPads, almost 19% of the devices used in the study. However, we believe that the rest of the devices and operating systems are well represented in the sample, except for phablets and Chrome OS, which can be considered marginal elements in the study.
Future work in this research line includes the study of data from more schools and educational stages. New variables will also be considered to help identify new patterns of behaviour, such as the duration of the connection and the specific academic term. Moreover, the predictive model will be extended and improved, not only to better predict the number of passed subjects but also to find patterns of student behaviour based on intermediate grades in the same academic year. Finally, it would be interesting to integrate new sensors, such as cameras, that can provide more information on students' behaviour and emotions.

Author Contributions

The authors M.C. and F.L.F.-S. wrote Section 1, the introduction and contextualization of the problem. B.L., A.M. and R.M.-E. prepared Section 2 and Section 3, the materials and methods and the analysis of the results. All the authors carried out the discussion (Section 4) and the conclusions and future work (Section 5), as well as reviewing and preparing the data for evaluation.

Funding

This work is supported by the Fundación Séneca del Centro de Coordinación de la Investigación de la Región de Murcia under Project 20813/PI/18, and by the Spanish Ministry of Science, Innovation and Universities under grant TIN2016-78799-P (AEI/FEDER, UE).

Acknowledgments

Thanks to the Santo Domingo School network for its trust and for providing the data to carry out the study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Demirkan, H.; Spohrer, J.C.; Welser, J.J. Digital innovation and strategic transformation. IT Prof. 2016, 18, 14–18. [Google Scholar] [CrossRef]
  2. Nespoli, P.; Useche Pelaez, D.; Díaz López, D.; Gómez Mármol, F. COSMOS: Collaborative, Seamless and Adaptive Sentinel for the Internet of Things. Sensors 2019, 19, 1492. [Google Scholar] [CrossRef] [PubMed]
  3. Wang, H.; Xie, S.; Li, K.; Ahmad, M.O. Big Data-Driven Cellular Information Detection and Coverage Identification. Sensors 2019, 19, 937. [Google Scholar] [CrossRef] [PubMed]
  4. Osimani, F.; Stecanella, B.; Capdehourat, G.; Etcheverry, L.; Grampín, E. Managing Devices of a One-to-One Computing Educational Program Using an IoT Infrastructure. Sensors 2019, 19, 70. [Google Scholar] [CrossRef] [PubMed]
  5. Gikas, J.; Grant, M.M. Mobile computing devices in higher education: Student perspectives on learning with cellphones, smartphones & social media. Internet High. Educ. 2013, 19, 18–26. [Google Scholar]
  6. Al-Turjman, F.M. Information-centric sensor networks for cognitive IoT: an overview. Ann. Telecommun. 2017, 72, 3–18. [Google Scholar] [CrossRef]
  7. Shenoy, J.; Pingle, Y. IOT in agriculture. In Proceedings of the 2016 3rd International Conference on Computing for Sustainable Global Development (INDIACom), New Delhi, India, 16–18 March 2016; pp. 1456–1458. [Google Scholar]
  8. Jagüey, J.G.; Villa-Medina, J.F.; López-Guzmán, A.; Porta-Gándara, M.Á. Smartphone irrigation sensor. IEEE Sens. J. 2015, 15, 5122–5127. [Google Scholar] [CrossRef]
  9. Seedhouse, P.; Knight, D. Applying digital sensor technology: A problem-solving approach. Appl. Linguist. 2016, 37, 7–32. [Google Scholar] [CrossRef]
  10. Espinosa, H.G.; Lee, J.; Keogh, J.; Grigg, J.; James, D.A. On the use of inertial sensors in educational engagement activities. Procedia Eng. 2015, 112, 262–266. [Google Scholar] [CrossRef]
  11. Blikstein, P.; Worsley, M. Multimodal Learning Analytics and Education Data Mining: using computational technologies to measure complex learning tasks. J. Learn. Anal. 2016, 3, 220–238. [Google Scholar] [CrossRef]
  12. Baygin, M.; Yetis, H.; Karakose, M.; Akin, E. An effect analysis of industry 4.0 to higher education. In Proceedings of the 2016 15th International Conference on Information Technology Based Higher Education and Training (ITHET), Istanbul, Turkey, 8–10 September 2016; pp. 1–4. [Google Scholar]
  13. Martín-Gutiérrez, J.; Mora, C.E.; Añorbe-Díaz, B.; González-Marrero, A. Virtual technologies trends in education. Eurasia J. Math. Sci. Technol. Educ. 2017, 13, 469–486. [Google Scholar] [CrossRef]
  14. Parsons, D.; Thomas, H.; Wishart, J. Exploring Mobile Affordances in the Digital Classroom. In Proceedings of the 12th International Conference on Mobile Learning; Arnedillo-Sanchez, I., Isias, P., Eds.; IADIS Press: Lisbon, Portugal, 2016; pp. 43–50. [Google Scholar]
  15. Abdel-Basset, M.; Manogaran, G.; Mohamed, M.; Rushdy, E. Internet of things in smart education environment: Supportive framework in the decision-making process. Concurr. Comput. Pract. Exp. 2019, 31, e4515. [Google Scholar] [CrossRef]
  16. Akin, M.; Pratt, A.; Blackburn, J.; Dietzel, A. Paper-Based Magneto-Resistive Sensor: Modeling, Fabrication, Characterization, and Application. Sensors 2018, 18, 4392. [Google Scholar] [CrossRef] [PubMed]
  17. Pecori, R. A virtual learning architecture enhanced by fog computing and big data streams. Future Internet 2018, 10, 4. [Google Scholar] [CrossRef]
  18. Chang, F.C.; Chiu, C.H.; Chen, P.H.; Chiang, J.T.; Miao, N.F.; Chuang, H.Y.; Liu, S. Children’s use of mobile devices, smartphone addiction and parental mediation in Taiwan. Comput. Hum. Behav. 2019, 93, 25–32. [Google Scholar] [CrossRef]
  19. McCoy, B.R. Digital distractions in the classroom phase II: Student classroom use of digital devices for non-class related purposes. J. Media Educ. 2016, 7, 5–32. [Google Scholar]
  20. Wollscheid, S.; Sjaastad, J.; Tømte, C. The impact of digital devices vs. Pen (cil) and paper on primary school students’ writing skills—A research review. Comput. Educ. 2016, 95, 19–35. [Google Scholar] [CrossRef]
  21. Collins, A.; Halverson, R. Rethinking Education in the Age of Technology: The Digital Revolution and Schooling in America; Teachers College Press: New York, NY, USA, 2018. [Google Scholar]
  22. Crompton, H.; Burke, D. The use of mobile learning in higher education: A systematic review. Comput. Educ. 2018, 123, 53–64. [Google Scholar] [CrossRef]
  23. Watson, S.L.; Watson, W.R. Principles for personalized instruction. In Instructional-Design Theories and Models; Routledge: Abingdon, UK, 2016; Volume IV, pp. 109–136. [Google Scholar]
  24. Reigeluth, C.M.; Beatty, B.J.; Myers, R.D. Instructional-Design Theories and Models, Volume IV: The Learner-Centered Paradigm of Education; Routledge: Abingdon, UK, 2016. [Google Scholar]
  25. McKnight, K.; O’Malley, K.; Ruzic, R.; Horsley, M.K.; Franey, J.J.; Bassett, K. Teaching in a digital age: How educators use technology to improve student learning. J. Res. Technol. Educ. 2016, 48, 194–211. [Google Scholar] [CrossRef]
  26. Shamir-Inbal, T.; Blau, I. Developing digital wisdom by students and teachers: The impact of integrating tablet computers on learning and pedagogy in an elementary school. J. Educ. Comput. Res. 2016, 54, 967–996. [Google Scholar] [CrossRef]
  27. Sung, Y.T.; Chang, K.E.; Liu, T.C. The effects of integrating mobile devices with teaching and learning on students’ learning performance: A meta-analysis and research synthesis. Comput. Educ. 2016, 94, 252–275. [Google Scholar] [CrossRef]
  28. Sevillano-Garcia, M.L.; Vázquez-Cano, E. The impact of digital mobile devices in higher education. J. Educ. Technol. Soc. 2015, 18, 106–118. [Google Scholar]
  29. Farley, H.; Murphy, A.; Johnson, C.; Carter, B.; Lane, M.; Midgley, W.; Hafeez-Baig, A.; Dekeyser, S.; Koronios, A. How do students use their mobile devices to support learning? A case study from an Australian regional university. J. Interact. Media Educ. 2015, 2015, 13. [Google Scholar] [CrossRef]
  30. Garcia-Cabot, A.; de Marcos, L.; Garcia-Lopez, E. An empirical study on m-learning adaptation: Learning performance and learning contexts. Comput. Educ. 2015, 82, 450–459. [Google Scholar] [CrossRef]
  31. Premlatha, K.; Geetha, T. Learning content design and learner adaptation for adaptive e-learning environment: A survey. Artif. Intell. Rev. 2015, 44, 443–465. [Google Scholar] [CrossRef]
  32. Klašnja-Milićević, A.; Vesin, B.; Ivanović, M.; Budimac, Z.; Jain, L.C. Personalization and adaptation in e-learning systems. In E-Learning Systems; Springer: Berlin, Germany, 2017; pp. 21–25. [Google Scholar]
  33. Nikolopoulou, K. Mobile learning usage and acceptance: perceptions of secondary school students. J. Comput. Educ. 2018, 5, 499–519. [Google Scholar] [CrossRef]
  34. Sahlin, J.S.; Tsertsidis, A.; Islam, M.S. Usages and impacts of the integration of information and communication technologies (ICTs) in elementary classrooms: Case study of Swedish municipality schools. Interact. Learn. Environ. 2017, 25, 561–579. [Google Scholar] [CrossRef]
  35. Aliaño, Á.M.; Hueros, A.D.; Franco, M.G.; Aguaded, I. Mobile Learning in University Contexts Based on the Unified Theory of Acceptance and Use of Technology (UTAUT). J. New Approaches Educ. Res. (NAER J.) 2019, 8, 7–17. [Google Scholar] [CrossRef]
  36. Cacciamani, S.; Villani, D.; Bonanomi, A.; Carissoli, C.; Olivari, M.G.; Morganti, L.; Riva, G.; Confalonieri, E. Factors affecting students’ acceptance of tablet PCs: A study in Italian high schools. J. Res. Technol. Educ. 2018, 50, 120–133. [Google Scholar] [CrossRef]
  37. Juristo, N.; Moreno, A.M. Basics of Software Engineering Experimentation; Springer Science & Business Media: Berlin, Germany, 2013. [Google Scholar]
  38. McKight, P.E.; Najab, J. Kruskal-Wallis test. Corsini Encycl. Psychol. 2010. [Google Scholar] [CrossRef]
  39. Holmes, G.; Hall, M.; Frank, E. Generating rule sets from model trees. In Australasian Joint Conference on Artificial Intelligence; Springer: Berlin, Germany, 1999; pp. 1–12. [Google Scholar]
  40. Harrati, N.; Bouchrika, I.; Tari, A.; Ladjailia, A. Exploring user satisfaction for e-learning systems via usage-based metrics and system usability scale analysis. Comput. Hum. Behav. 2016, 61, 463–471. [Google Scholar] [CrossRef]
  41. Paz, F.; Pow-Sang, J.A. A systematic mapping review of usability evaluation methods for software development process. Int. J. Softw. Eng. Its Appl. 2016, 10, 165–178. [Google Scholar] [CrossRef]
Figure 1. Statistical differences according to the Mann-Whitney test considering if there is any relation between accessing or not accessing the LMS and the number of passed subjects. The right side of the subfigures (in green color) represents the students who have logged into the LMS and the left side (blue color) represents the students who have not logged into the LMS. (a) Differences between the number of passed subjects for students accessing and not accessing the LMS considering all the educational stages. (b) Differences between the number of passed subjects for students accessing and not accessing the LMS considering the elementary stage. (c) Differences between the number of passed subjects for students accessing and not accessing the LMS considering the secondary stage. (d) Differences between the number of passed subjects for students accessing and not accessing the LMS considering the bachelor's degree stage.
Figure 2. Statistical differences according to the Mann-Whitney test considering if there is any relation between accessing or not accessing the LMS and the number of failed subjects. The right side of the subfigures (in green color) represents the students who have logged into the LMS and the left side (blue color) represents the students who have not logged into the LMS. (a) Differences between the number of failed subjects for students accessing and not accessing the LMS considering all the educational stages. (b) Differences between the number of failed subjects for students accessing and not accessing the LMS considering the elementary stage. (c) Differences between the number of failed subjects for students accessing and not accessing the LMS considering the secondary stage. (d) Differences between the number of failed subjects for students accessing and not accessing the LMS considering the bachelor's degree stage.
Table 1. Number of students involved in the study grouped by education stage, including gender information and the number of students who have ever logged on.
Education Stage | # of Students | Gender | Ever Logged on
Elementary | 499 | M: 262 (52.5%), F: 237 (47.5%) | Y: 250 (50%), N: 249 (50%)
Secondary | 992 | M: 463 (46.7%), F: 529 (53.3%) | Y: 503 (51%), N: 489 (49%)
Bachelor's degree | 447 | M: 156 (34.9%), F: 291 (65.1%) | Y: 258 (58.8%), N: 189 (41.2%)
Table 2. Number of students per device and operating system disaggregated by different educational stages (Bachelor’s degree, Secondary and Elementary).
Educational Stage | Device | OS | N of Students
Bachelor's degree | Computer | Android | 2
Bachelor's degree | Computer | iOS | 12
Bachelor's degree | Computer | macOS | 29
Bachelor's degree | Computer | Windows | 88
Bachelor's degree | Smartphone | Android | 30
Bachelor's degree | Smartphone | iOS | 42
Bachelor's degree | Tablet | Android | 2
Bachelor's degree | Tablet | iOS | 51
Bachelor's degree | Phablets | Android | 2
Secondary | Computer | Chrome OS | 17
Secondary | Computer | iOS | 3
Secondary | Computer | macOS | 32
Secondary | Computer | Windows | 30
Secondary | Smartphone | Android | 17
Secondary | Smartphone | iOS | 13
Secondary | Tablet | Android | 11
Secondary | Tablet | iOS | 378
Secondary | Phablets | Android | 2
Elementary | Computer | macOS | 1
Elementary | Computer | Windows | 15
Elementary | Smartphone | Android | 8
Elementary | Smartphone | iOS | 2
Elementary | Tablet | Android | 2
Elementary | Tablet | iOS | 221
Elementary | Phablets | Android | 1
Table 3. Maximum, mean and standard deviation (SD) of the number of subjects failed and passed by students of all stages considering whether they have ever connected to the LMS.
Ever Logged on | Attributes | Max | Mean | SD
YES | Num_Failed_Subjects | 8 | 1.1 | 1.75
YES | Num_Passed_Subjects | 13 | 9.27 | 2.26
NO | Num_Failed_Subjects | 12 | 2.29 | 2.64
NO | Num_Passed_Subjects | 13 | 7.65 | 2.8
Table 4. Descriptive analysis of the number of subjects failed and passed by students divided by educational stages.
Educational Stage | Ever Logged on | Attributes | Max | Mean | SD
Bachelor's degree | YES | Num_Failed_Subjects | 8 | 1.72 | 2.064
Bachelor's degree | YES | Num_Passed_Subjects | 10 | 7.29 | 2.298
Bachelor's degree | NO | Num_Failed_Subjects | 8 | 1.89 | 2.390
Bachelor's degree | NO | Num_Passed_Subjects | 8 | 4.96 | 2.363
Secondary | YES | Num_Failed_Subjects | 7 | 1.24 | 1.766
Secondary | YES | Num_Passed_Subjects | 13 | 10.51 | 1.910
Secondary | NO | Num_Failed_Subjects | 12 | 3.30 | 2.790
Secondary | NO | Num_Passed_Subjects | 13 | 8.31 | 2.910
Elementary | YES | Num_Failed_Subjects | 4 | 0.20 | 0.651
Elementary | YES | Num_Passed_Subjects | 9 | 8.80 | 0.651
Elementary | NO | Num_Failed_Subjects | 7 | 0.60 | 1.188
Elementary | NO | Num_Passed_Subjects | 9 | 8.40 | 1.188
Table 5. Number of students, percentage, maximum, mean and standard deviation of the number of passed and failed subjects for students who have ever logged into the LMS, according to the educational stage and connecting device. The best results considering the mean number of subjects passed for each educational stage are highlighted in bold.
Educational Stage | Device | N. of Students (%) | Attributes | Max | Mean | SD
Bachelor's degree | Computer | 131 (50.8%) | Num_Failed_Subjects | 7 | 1.51 | 1.951
Bachelor's degree | Computer | 131 (50.8%) | Num_Passed_Subjects | 10 | 7.42 | 2.201
Bachelor's degree | Smartphone | 72 (27.9%) | Num_Failed_Subjects | 8 | 2.39 | 2.323
Bachelor's degree | Smartphone | 72 (27.9%) | Num_Passed_Subjects | 10 | 6.53 | 2.584
Bachelor's degree | Tablet | 53 (20.5%) | Num_Failed_Subjects | 5 | 1.30 | 1.761
Bachelor's degree | Tablet | 53 (20.5%) | Num_Passed_Subjects | 10 | 8.02 | 1.855
Bachelor's degree | Phablets | 2 (0.8%) | Num_Failed_Subjects | 4 | 2.00 | 2.828
Bachelor's degree | Phablets | 2 (0.8%) | Num_Passed_Subjects | 8 | 7.00 | 1.414
Secondary | Computer | 82 (16.3%) | Num_Failed_Subjects | 7 | 1.55 | 2.044
Secondary | Computer | 82 (16.3%) | Num_Passed_Subjects | 13 | 10.82 | 2.363
Secondary | Smartphone | 30 (6.0%) | Num_Failed_Subjects | 7 | 2.80 | 2.265
Secondary | Smartphone | 30 (6.0%) | Num_Passed_Subjects | 12 | 8.50 | 2.271
Secondary | Tablet | 389 (77.3%) | Num_Failed_Subjects | 6 | 1.06 | 1.592
Secondary | Tablet | 389 (77.3%) | Num_Passed_Subjects | 13 | 10.60 | 1.681
Secondary | Phablets | 2 (0.4%) | Num_Failed_Subjects | 2 | 1.00 | 1.414
Secondary | Phablets | 2 (0.4%) | Num_Passed_Subjects | 11 | 11.00 | 0.0
Elementary | Computer | 16 (6.4%) | Num_Failed_Subjects | 4 | 0.31 | 1.014
Elementary | Computer | 16 (6.4%) | Num_Passed_Subjects | 9 | 8.69 | 1.014
Elementary | Smartphone | 10 (4.0%) | Num_Failed_Subjects | 0 | 0.0 | 0.0
Elementary | Smartphone | 10 (4.0%) | Num_Passed_Subjects | 9 | 9.00 | 0.0
Elementary | Tablet | 223 (89.2%) | Num_Failed_Subjects | 4 | 0.2 | 0.634
Elementary | Tablet | 223 (89.2%) | Num_Passed_Subjects | 9 | 8.80 | 0.634
Elementary | Phablets | 1 (0.4%) | Num_Failed_Subjects | 0 | 0.0 | 0.0
Elementary | Phablets | 1 (0.4%) | Num_Passed_Subjects | 9 | 9.00 | 0.0
Table 6. Number of students, percentage, maximum, mean and standard deviation of the number of passed and failed subjects for students who have ever logged into the LMS, according to the educational stage and the operating system used for the connection. The best results considering the mean number of subjects passed for each educational stage are highlighted in bold.
Educational Stage | OS | N. of Students (%) | Attributes | Max | Mean | SD
Bachelor's degree | Android | 36 (14.0%) | Num_Failed_Subjects | 8 | 2.19 | 2.505
Bachelor's degree | Android | 36 (14.0%) | Num_Passed_Subjects | 10 | 6.86 | 2.587
Bachelor's degree | iOS | 105 (40.7%) | Num_Failed_Subjects | 6 | 1.61 | 1.959
Bachelor's degree | iOS | 105 (40.7%) | Num_Passed_Subjects | 10 | 7.42 | 2.227
Bachelor's degree | macOS | 29 (11.2%) | Num_Failed_Subjects | 6 | 1.14 | 1.726
Bachelor's degree | macOS | 29 (11.2%) | Num_Passed_Subjects | 10 | 7.83 | 2.172
Bachelor's degree | Windows | 88 (34.1%) | Num_Failed_Subjects | 7 | 1.84 | 2.067
Bachelor's degree | Windows | 88 (34.1%) | Num_Passed_Subjects | 10 | 7.14 | 2.290
Secondary | Android | 30 (6.0%) | Num_Failed_Subjects | 6 | 2.27 | 2.116
Secondary | Android | 30 (6.0%) | Num_Passed_Subjects | 12 | 9.07 | 2.149
Secondary | Chrome OS | 17 (3.4%) | Num_Failed_Subjects | 5 | 1.71 | 1.724
Secondary | Chrome OS | 17 (3.4%) | Num_Passed_Subjects | 13 | 11.29 | 1.724
Secondary | iOS | 394 (78.3%) | Num_Failed_Subjects | 7 | 1.11 | 1.649
Secondary | iOS | 394 (78.3%) | Num_Passed_Subjects | 13 | 10.55 | 1.747
Secondary | macOS | 32 (6.4%) | Num_Failed_Subjects | 5 | 0.69 | 1.378
Secondary | macOS | 32 (6.4%) | Num_Passed_Subjects | 13 | 12.13 | 1.581
Secondary | Windows | 30 (6.0%) | Num_Failed_Subjects | 7 | 2.27 | 2.477
Secondary | Windows | 30 (6.0%) | Num_Passed_Subjects | 13 | 9.40 | 2.486
Elementary | Android | 11 (4.4%) | Num_Failed_Subjects | 3 | 0.27 | 0.905
Elementary | Android | 11 (4.4%) | Num_Passed_Subjects | 9 | 8.73 | 0.905
Elementary | iOS | 223 (89.2%) | Num_Failed_Subjects | 4 | 0.18 | 0.606
Elementary | iOS | 223 (89.2%) | Num_Passed_Subjects | 9 | 8.82 | 0.606
Elementary | macOS | 1 (0.4%) | Num_Failed_Subjects | 0 | 0.0 | 0.0
Elementary | macOS | 1 (0.4%) | Num_Passed_Subjects | 9 | 9.00 | 0.0
Elementary | Windows | 15 (6.0%) | Num_Failed_Subjects | 4 | 0.33 | 1.047
Elementary | Windows | 15 (6.0%) | Num_Passed_Subjects | 9 | 8.67 | 1.047
Table 7. Values of the Kruskal-Wallis statistical test and the Bonferroni post-hoc test related to the differences between the number of passed subjects and the type of device used to access the LMS.
Device 1 - Device 2 | Test Statistic | Std. Error | Std. Test Statistic | p-Value | Adj. p-Value
Smartphone - Computer | 153.14 | 33.19 | 4.61 | 0.00 | 0.00
Smartphone - Tablet | −281.443 | 29.40 | −9.57 | 0.00 | 0.00
Computer - Tablet | −128.29 | 22.05 | −5.81 | 0.00 | 0.00
Table 8. Values of the Kruskal-Wallis statistical test and the Bonferroni post-hoc test related to the differences between the number of failed subjects and the type of device used to access the LMS.
Device 1 - Device 2 | Test Statistic | Std. Error | Std. Test Statistic | p-Value | Adj. p-Value
Tablet - Computer | 95.41 | 19.685 | 4.84 | 0.00 | 0.00
Tablet - Smartphone | 195.34 | 26.24 | 7.44 | 0.00 | 0.00
Computer - Smartphone | −99.93 | 29.62 | −3.37 | 0.01 | 0.04
Table 9. Values of the Kruskal-Wallis statistical test and the Bonferroni post-hoc test showing significant differences between the number of passed subjects and the OS used for connections.
OS 1 - OS 2 | Test Statistic | Std. Error | Std. Test Statistic | p-Value | Adj. p-Value
Windows - iOS | 207.11 | 27.16 | 7.62 | 0.00 | 0.00
Windows - macOS | 272.01 | 44.26 | 6.14 | 0.00 | 0.00
Windows - Chrome OS | 439.01 | 74.14 | 5.92 | 0.00 | 0.00
Android - iOS | −184.96 | 34.51 | −5.36 | 0.00 | 0.00
Android - macOS | −249.86 | 49.11 | −5.08 | 0.00 | 0.00
Android - Chrome OS | −416.85 | 77.13 | −5.41 | 0.00 | 0.00
iOS - Chrome OS | 231.899 | 70.63 | 3.28 | 0.01 | 0.10
Table 10. Values of the Kruskal-Wallis statistical test and the Bonferroni post-hoc test showing significant differences between the number of failed subjects and the OS used for connections.
OS 1 - OS 2 | Test Statistic | Std. Error | Std. Test Statistic | p-Value | Adj. p-Value
iOS - Windows | −120.72 | 24.24 | −4.98 | 0.00 | 0.00
iOS - Android | 130.697 | 30.80 | 4.24 | 0.00 | 0.00
macOS - Windows | −114.68 | 39.51 | −2.90 | 0.04 | 0.04
macOS - Android | 124.66 | 43.85 | 2.84 | 0.04 | 0.05
Table 11. Mean of evaluation results of the M5Rules model after five repetitions to predict the number of passed subjects.
 | CC | MAE | RMSE
Mean | 0.7279 | 1.3718 | 1.86967
SD | 0.0116 | 0.0311 | 0.0446

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).