Article

Estimation of Interaction Time for Students with Vision and Motor Problems when Using Computers and E-Learning Technology

by
Concepción Batanero-Ochaíta
1,*,
Luis Fernández-Sanz
2,
Luis Felipe Rivera-Galicia
3,
María José Rueda-Bernao
2 and
Inés López-Baldominos
2
1
Computer Engineering Department, Polytechnics School, University of Alcalá, 28871 Alcalá de Henares, Spain
2
Computer Science Department, Polytechnics School, University of Alcalá, 28871 Alcalá de Henares, Spain
3
Department of Economics, Faculty of Economics, Business and Tourism, University of Alcalá, 28802 Alcalá de Henares, Spain
*
Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(19), 10978; https://doi.org/10.3390/app131910978
Submission received: 23 August 2023 / Revised: 29 September 2023 / Accepted: 2 October 2023 / Published: 5 October 2023
(This article belongs to the Special Issue Information and Communication Technology (ICT) in Education)

Featured Application

Instructors now know how much extra time they should allocate to students with disabilities to complete basic e-learning tasks such as answering questionnaires.

Abstract

Students with disabilities can attend online education using virtual learning platforms and assistive technology adapted to their personal needs. However, access alone is not enough to avoid difficulties, as these students tend to require more time to interact with learning resources. Analysis of the literature suggests that there is relevant interest among researchers in exploring the interaction time required by students with disabilities. The aim of this paper is to explore the average time required by students with disabilities to interact with questionnaires, the most typical e-learning resource, in comparison to students without disabilities. This is especially relevant for computer and telecommunication engineering students, since all of their teaching activities are computer-related. The average interaction time is estimated through empirical testing with 60 students filling out a questionnaire while attending two courses on digital technology in a total of four editions. The sample included students with three types of disability as well as non-disabled students as a control group, with ages ranging from 22 to 58. Results showed time ratios of 2.92, 1.88, and 1.58, respectively, for blind, partially sighted, and reduced motor capability students, compared to students without disabilities. Although the results are robust, the small sample of students with reduced motor capability and the variability of capabilities within this group recommend further research with additional samples for this type of disability. It is also recommended to continue experimentation with additional types of e-learning resources.

1. Introduction

All of our professional and personal activity is governed by computers and information technology, so ensuring that all people have reasonable access to them is a moral obligation. The preferences and special needs of users are becoming more and more important in the digital world [1,2,3]. Information providers are progressively addressing the interaction needs of many different population groups, ranging from blind people, who number 285 million in the world and are expected to increase in the coming years [4], to people with other types of disabilities, older people with declining capacities, etc. Advances in accessibility benefit not only people with permanent conditions but also those who need to interact with systems in adverse situations (e.g., noisy places, places with poor illumination, and unstable workplaces). Terms like “accessibility” and “user modeling” pursue the same objective: the personalization and adaptation of systems to the specific needs of users to achieve an interaction adapted to the user. ISO 9241-171:2008 [5], which provides ergonomics guidance and specifications for the design of accessible software, uses the term “different capabilities” rather than “disability”. We also use this term throughout this article to align with this ISO standard.
International organizations have been working for decades on accessibility guidelines for providing adapted general interaction options to people with disabilities [6,7] and specifically in the educational area [8]. Standardization organizations have developed standards that guide the application of accessibility principles to digital systems. For example, IMS AFA 3.0 [9] and ISO/IEC 24751-1-2-3 [10] provide technical requirements to adapt learning platforms to the personal needs and preferences of students.
People with different eyesight and/or motor abilities in the arms or hands need assistive technology [11], such as screen readers, Braille devices, or personalized keyboards and mice, to access information. They also need transformations and adaptations for the presentation of data in different formats or different ways to access the data. For example, some tests are adapted for students with different visual capabilities. However, even these adapted tests cannot always satisfy the needs of students because of the short amount of time allowed to complete them [12]. Assistive technologies help blind people and people with motor problems to access information, but the effort of interaction is higher than that of those not needing such technologies; they generally require a longer interaction time. This extra time is independent of both the previous knowledge of users and their motivation, as it concerns mere interaction tasks: e.g., increasing the font size with the keyboard or mouse requires more frequent use of the scroll bar. Using a Braille device or a screen reader also takes longer, as it requires compensating for the loss of perspective on the information.
This extra time has a greater importance in education as students have limited allocated time for the completion of educational tasks. Therefore, students with different visual and motor capabilities actually spend a good part of the time during their learning activities interacting with assistive technology [13]. This might have a negative impact on their education, especially if teachers and instructors are not aware of the amount of extra time they require for the tasks.
The characteristic of being computer-mediated is relevant because e-learning is frequently the only possible way to access education for people with different capabilities. It enables distance interaction and allows the presentation of information in different formats adapted to their needs, e.g., audio descriptions, subtitles, alternative options to non-textual content, etc.
The main objective of this article is to empirically analyze how much interaction time is required by people with different capabilities (blindness, reduced vision, and reduced motor skills) to complete a typical basic activity in e-learning like filling out a questionnaire in a context of adapted learning platforms. This objective leads to the two research questions as explained at the beginning of Section 3. This is a task that students must frequently do in this type of environment. Questionnaires are a very frequent resource in e-learning for different purposes, such as assessment and self-assessment, feedback and opinion, etc. A study [14] confirmed the use of questionnaires as an e-innovative ICT resource that allows teachers to use them with pedagogical criteria for improvement. This study highlighted varied advantages of questionnaires, e.g., objectivity, rigor, reliability in assessment and sustainability of monitoring and grading by teachers.
Studying the differences in the time required to answer questionnaires by people with different types of disability would help teachers estimate the extra effort and time students need to complete learning tasks. This contribution represents an additional advance over previous research efforts like [15], which already empirically analyzed the interaction time required by blind people, in comparison to sighted people, in an e-learning context.

2. Related Works

People with different capabilities do not have the same opportunities as non-impaired persons in fields such as education. Research work has aimed at providing solutions for equality of access to learning resources for impaired students, increasing the quality of education, and allowing society to benefit from the contributions of this group. There is increasing research interest in this area, as expressed in recent contributions: e.g., medical education echoed this problem [16], proposing a new educational system based on accessibility and the inclusion of impaired people using the stages of the Deming cycle. Alcaraz et al. [17] extended accessibility requirements to new resources like statistical graphs, facilitating their use by people with low-vision and color-vision conditions. Moreover, that study also addresses the future analysis of the feasibility of the evaluation methods by measuring the impact of the factor “time spent in each evaluation” on the learning results. Another work [18] studied the engagement in STEM of middle- and high-school visually impaired students through an outdoor immersion educational program focused on learning, collaboration, accessibility, and independence.
While some research works in e-learning have assessed the accessibility of websites [19,20,21], researchers have also devoted a great deal of effort to developing new methods to improve and implement web accessibility [22,23]. As mentioned above, standardization organizations have developed accessibility guidelines such as the WCAG (Web Content Accessibility Guidelines) [24], presented as a set of recommendations for accessible web content. They are widely used in the field of web accessibility, and they also represent the main reference for national and international regulation. The ISO (International Organization for Standardization) standardized the WCAG [25], which includes 12 guidelines and 61 success criteria, classified into three conformance levels: A (minimum), AA, and AAA (highest) [26].
Accessibility in online educational environments has evolved slower than web accessibility, maybe because learning systems appeared a decade later. Relevant efforts in the area include several works on the adaptation of learning platforms to the preferences and needs of students, e.g., aTutor [27] and Moodle [28] according to the IMS specification v2.0 [29] and the adaptation of Moodle according to the latest version of the IMS specification [9,30].
A good number of e-learning platforms still show a low level of accessibility, as observed in an analysis of websites in Jordan and the Arab region [31]. An assessment of web content accessibility levels, using the international guidelines, in 21 Spanish official online educational environments [32] found low accessibility levels despite the increasing number of legal and regulatory measures on accessibility: 97.62% and 54.76% of the websites in education portals evidenced non-conformance issues for accessibility levels A and AA, respectively. For example, the point “Text Resize” failed in 54.8% of the analyzed websites. However, a more recent study [33] showed that technology is mature enough to develop educational systems that support students with different capabilities by combining institutional repositories and learning management systems.
Accessibility problems in educational contexts have an impact on results and the dedication of time and effort by students with different capabilities, e.g., one study also showed that bigger font size did not reduce reading time [34] due to time spent scrolling [35]. Students with motor problems also spend more time operating hardware devices. These findings suggest that a balanced learning process requires that teachers and tutors allocate more time to students with different capabilities to complete tasks or to read content.
The analysis of the impact of time spent on interaction within the educational context is not surprising, as international recommendations also mention time as an essential aspect of accessibility. For example, two guidelines of the WCAG recommendation [24] emphasize the time factor as an important component of accessibility: guideline 1.2, “Time-based Media: provide alternatives for time-based media”, and guideline 2.2, “Enough time: provide users enough time to read and use content”. This recommendation directly stresses the need to extend the time slot allocated to people with different capabilities in educational activities.
The literature has considered the allocation of time in accessible learning environments. Roig-Vila et al. [32] confirmed problems related to respecting accessibility guidelines such as allocating enough time to read and use content, something especially critical for students with different capabilities, e.g., blind users, low-vision users, deaf users, or users with cognitive or language limitations. A work in the e-learning context for visually impaired students [36] and another one on the use of extended time [37] highlighted the adaptation of reading time as a key factor for task completion by the different types of students. Evans and Douglas [15] measured the time spent completing different online learning tasks by 10 blind participants and 10 sighted participants and determined the ratio of extra time required by blind students compared to sighted ones. Sloan et al. [38] requested a holistic approach for multimedia to achieve more accessible learning environments and confirmed that time is a characteristic needed to be considered from the beginning. Another study [39] presented the action plan of 52 European universities to integrate students with different capabilities and highlighted modifications of the predetermined timeframe of curricular objectives as part of the teaching and evaluation activities in academic programs.
Different studies went further into the analysis of the effects of a lack of time on the speed of reading [40] and also on the accuracy of results and comprehension [41,42,43]. These works confirmed that providing enough time to students with different capabilities enables them to achieve a performance equivalent to that of students with no disability. For example, an analysis of 299 students compared the reading and comprehension rates of visually impaired and normally sighted school children [43]. Results showed a statistical difference for the visually impaired students, concluding that the blind students took approximately two to three times longer to complete the tasks than the sighted learners: increasing the allocated time depending on the personal characteristics of each student works, although it is very difficult to implement. Another study [40] established a slightly shorter time to be added: between 1.5 and 2 times longer than what sighted students need.
Some countries have policies for deciding how much additional time should be allowed for students with different capabilities, but it is usually the same fixed amount for all types of disabilities [44]. Atkins [45] also studied the advantages and disadvantages of adding fixed extra time for reading, completing tasks, and taking tests to solve some accessibility problems; the final recommendation was using time adapted to each individual depending on their level of visual impairment and their experience in using the adapted format. The big problem is how to estimate the recommended extra time, or at least how to provide guidelines that help teachers decide the amount of extra time required.
Some studies referring to time spent on learning activities provide some indications, but they are not conclusive, as they were not supported by relevant experimentation. McNear and Torres [46] established that the time allocated to users with low vision should be 1.5 times longer than that allocated to sighted ones. Wetzel and Knowlton [47] considered an increment of 50% in the time allocated to Braille readers to compensate for differences in reading rate and considered the possibility of adding more time for other activities. Packer [48] suggested a proportion of nearly two times for users with visual impairment using large fonts and more than two times for Braille users. Morris [49] determined a ratio for people with different capabilities ranging from 1.5 times longer for users of large fonts to 2.5 times longer for Braille learners.
The most promising measurement study was the one in ref. [15]. They used a strict method to measure the time required by one group of blind participants and one group of sighted participants while carrying out different learning activities, e.g., reading, listening, answering questions, etc. They determined that blind students needed twice as long as the group without visual problems. However, they explained that blind students did not reach the same learning performance as the sighted group, since they did not have the reference points to memorize content and needed to read more times to keep everything in mind. They therefore extrapolated that the additional time required for understanding and answering questions was between 2 and 3 times the time of sighted participants and, for listening and reading, between 1 and 1.25 times.
We can conclude from all these studies that there is a general trend of allocating 1.5 to 3 times more time to blind students than to normally sighted students. However, this does not solve the problem of determining specific times for each type of student according to their type of capability: there are no more specific guidelines for partially sighted students or, e.g., students with motor problems. Together with the need to work with larger samples, this is the main motivation of our contribution: estimating the time for blind students to compare it with the time described in the literature and extending the study to groups of students with low vision and motor problems, collecting data from a more varied sample in specific online courses.

3. Method

3.1. Research Objectives

The objective of this study was to determine the amount of additional time needed by students with different capabilities when using computers. We needed this information as a guide to be able to apply it in computer engineering and telecommunications degrees, where the computer is a key tool in daily tasks. The experiment was conducted in a continuing education course on digital technology, in a project where people with different capabilities were working in a virtual educational environment in contrast to a control group of students without visual or motor problems. Therefore, we formulated the following research questions:
RQ1. 
How much additional time do students with vision and motor problems need to interact with hardware technology and typical e-learning resources, such as questionnaires and tests, compared to those without impairment?
RQ2. 
Are these times aligned with the ones reported in the existing literature?
We answered the research questions through the analysis of the measured times when the students filled out a questionnaire. As a result, we configured the objective with the subsequent specific conditions:
  • We used the Moodle learning platform that has good accessibility features and is widely used in the educational context around the world, ensuring its correct use by learners and providing a reliable measure of the total time taken to complete a task;
  • We conducted the experiment with a generic questionnaire, ensuring freedom from bias in answering questions, as previous knowledge or experience of a specific topic was not required, nor did it depend on age, gender, or educational level. We used a questionnaire since questionnaires are common and efficient resources with pedagogical value for learning improvement [14];
  • We worked with students with different types of capabilities: on one hand, we worked with groups of blind students in order to compare and validate the study with the above-mentioned research studies (it is the most advanced group in terms of accessibility and the one with the most related research in the literature); on the other hand, we extended the study, once validated, to groups of students with low vision and with reduced motor ability.

3.2. Participants in Experiment (Sample)

The sample of the experiment comprised 60 adult learners with different accessibility skills enrolled in four editions of online digital technology skills courses: the average age was 40 years, 36 of them were women, and 17 were men. Although limited in size due to the difficulty of recruiting and involving enough students with disabilities in e-learning, the data analysis showed that the results from this sample were good enough to extract some initial conclusions. We recruited groups of students for the e-learning courses and were able to attract groups who were blind, had low vision, or had reduced motor ability. We observed in previous courses that they experienced more difficulties than those without apparent limitations when interacting with the learning platform, requiring more time to complete activities. A small number of deaf students also attended the course, but we did not include them in this experiment since, in previous courses, they did not experience more relevant problems than the control group when accessing the learning content. There were two tutors in charge of the online courses. We categorized the students into the following four groups according to the type of different capability (their personal and professional data are synthesized in Table 1):
  • Group 1 or control group: students without visual or motor problems;
  • Group 2: blind students;
  • Group 3: partially sighted students;
  • Group 4: reduced motor skill students.
The groups of students were homogeneous in professional experience as well as in educational background, as all of them had at least passed the secondary education level (European Qualifications Framework level EQF 4): the WCAG establishes this educational level as a reference for content adaptation. Moreover, they also showed a similar level of technical interest in computer skills, as demonstrated by the answers to the initial questionnaire of the courses. All blind students declared the use of the JAWS tool as a screen reader, and the partially sighted students used a screen magnifier. All students with motor impairment had a limited capacity of the arms and hands to use the mouse and/or keyboard.

3.3. Environment for the Experiment

Two different educational courses (with four editions in total) were the basis of the experiment. One focused on basic digital skills to train the students, following the ECDL (European Computer Driving License) certification syllabus. The topic of the second course was effective professional digital writing. We examined all the editions of the courses: three editions of the first one and one of the second. Both courses were in Spanish and were tutored following a lifelong learning style. There was a wide international representation of students, as they came not only from Spain but also from several countries of Latin America (Colombia, Ecuador, Guatemala, etc.).
The courses used a Moodle learning management system with content designed to be accessible; the process described in Section 3.4 confirmed its accessibility. The process took place in three phases. First, the environment was checked using the WAVE plugin for both the interaction and the learning platform, and the learning content files were checked with PDF and office accessibility checkers. Secondly, a manual inspection was performed with a checklist to cover additional aspects not covered by the tools. Finally, a blind expert tested the system. The students worked with their usual computers and, for the purposes of the experiment, only needed a browser to access the platform. Before the beginning of the course, tutors specialized in accessibility confirmed with the students with disabilities the assistive technology environment they would use during the course. Tutors reviewed and validated its adequacy for the specific online training context.
Students answered a questionnaire on their personal profile at the beginning of the courses (see Appendix A). The questions were typical multiple-choice ones: difficult to answer for people with visual problems as students must read and memorize every option before being able to answer correctly. This characteristic guaranteed that we were in unfavorable conditions in terms of time, so the measurement can be considered close to the upper limit of necessary time.
We also checked the satisfaction of students with the learning experience (see Appendix B) using another questionnaire at the end of the course (not used for the time measurement experiment). This questionnaire enabled us to compile their satisfaction to discard any possible impact on their time performance: an unsatisfactory course might cause a lack of interest in students when requested to complete tasks, possibly leading to longer times spent answering questions.

3.4. Procedure for Measurement and Data Collection

We considered the measurement of the additional time that students with different capabilities require to interact with computers to be particularly important for computer engineering and telecommunications students since computers are involved in most of their teaching activities, both at a theoretical level through the use of the virtual learning platform and at a practical level through the development of computer programs, computer networks, simulators, etc.
We investigated the usability of the Moodle learning platform in the literature and found satisfactory usability regarding factors such as efficiency, memorability, ease of use, and satisfaction [50]. Ivanovic et al. [51] described the good characteristics of the Moodle platform as well as its functionality as expressed by students and teachers, while they also reported barriers such as the time needed to prepare resources and use the platform and the fact that few students used the resources offered by the platform. In our case, a blind computing engineer, a specialist in accessibility, usability, and online learning, conducted an expert review of the usability of our Moodle platform to confirm its accessibility. The expert followed some of the methods described in [52], such as heuristic evaluation, cognitive walkthroughs, and feature or standard inspection, corroborating the good results. Two tutors, also specialized in accessibility, guided the courses using debate fora, video conferencing, and e-mail as communication mechanisms.
The students completed a first questionnaire referring to their personal information prior to the effective start of the course. We measured the time that students took to answer this first questionnaire. We calculated the statistical results by comparing the times used by the three groups of students with different capabilities with that used by the control group (students without any limiting condition). Working with two different types of courses with several editions also contributed to greater soundness in conclusions.
Before starting the courses, students were informed of our research work and our goal of improving e-learning design for people with different capabilities. However, we did not disclose the relevance of the initial questionnaire for time measurement, avoiding a Hawthorne-like effect. All students participated in the study by taking the initial test before starting the course.
The time taken by students to answer the initial questionnaire was calculated by the Moodle platform using the time at which they began to answer the questionnaire and the time at which they completed it. Students did not have a time limit for the task. This may represent a risk, as some students took breaks while answering the questionnaire. However, after reviewing the duration data, we discarded clear outliers. Furthermore, the data for blind students were aligned with the times for blind people shown in ref. [15], measured with a very precise manual time measurement method. This gave us confidence in the data and the reliability of the sample.
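The duration recorded by the platform is simply the difference between an attempt's start and submission timestamps. A minimal sketch of this computation (the timestamp format here is an assumption for illustration, not Moodle's internal representation):

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S"  # assumed timestamp format for this sketch

def response_seconds(started: str, finished: str) -> float:
    """Elapsed seconds between the start and submission times of an attempt."""
    t0 = datetime.strptime(started, FMT)
    t1 = datetime.strptime(finished, FMT)
    return (t1 - t0).total_seconds()

# An attempt started at 10:00:00 and submitted at 10:04:12 took 252 s.
print(response_seconds("2023-05-10 10:00:00", "2023-05-10 10:04:12"))  # 252.0
```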
After talking with those students, we assigned them an informally determined amount of extra time, as the results of our research were not yet available to use as a guideline. The final satisfaction questionnaire at the end of the course showed positive results: we could not infer that students were demotivated or unsatisfied with the course, so we considered the recorded times as representative of the normal interaction of students during an online course.
The two main differences between our approach and previous contributions in the literature are (a) the measurement of time through the system, avoiding manual methods that would add human error, and (b) the use of a questionnaire on students’ personal profiles to avoid possible bias due to the need for previous knowledge. Our approach also added a group of students with reduced motor capacity, which represents an additional contribution in comparison to previous works.

4. Data Analysis and Results

The collection of data resulted in a sample of 64 students who participated in the four online courses. In a first analysis to detect outliers, we found that three values in the control group (group 1) (28 min, 14 s; 12 days, 9 h; 16 min, 10 s) and one value in group 2 (2 h, 35 min) were much larger than the median, so they were removed from the dataset. These figures probably represent cases where the students did not answer the questionnaire in a single sitting and took breaks during the activity. Therefore, the final dataset comprised a total of 60 students.
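The screening above was done by inspecting the duration data; a programmatic equivalent would flag durations far beyond the group median (the factor of 5 below is an illustrative assumption, not the criterion actually applied in the study):

```python
from statistics import median

def drop_extreme(durations, factor=5.0):
    """Remove durations more than `factor` times the group median,
    which typically indicates a break taken mid-questionnaire."""
    med = median(durations)
    return [d for d in durations if d <= factor * med]

# The 12-day, 9-hour value (in seconds) is clearly a multi-sitting
# attempt and is dropped; the others are kept.
times = [150.0, 170.0, 185.0, 160.0, 1_069_200.0]
print(drop_extreme(times))  # [150.0, 170.0, 185.0, 160.0]
```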
This section shows the results of the statistical analysis of the quantitative data. Table 2 displays the main descriptive statistical elements for each group. One can observe differences between means and variability of the different groups. Group 2 presents the biggest mean (496.53 s) and SD (281.22 s) in contrast with group 1, where the mean (170.27 s) and SD (68.56 s) are the lowest.
The same applies to the maximum and minimum values. A comparison of the means of the control group (group 1) and groups 2, 3, and 4 yields the following ratios: 2.92, 1.88, and 1.58, respectively, indicating the amount of supplementary time that students of groups 2, 3, and 4 needed to finish the questionnaire compared to students of the control group.
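Each ratio is simply a group mean divided by the control-group mean; e.g., with the two means reported in Table 2:

```python
control_mean = 170.27  # group 1 mean response time in seconds (Table 2)
blind_mean = 496.53    # group 2 mean response time in seconds (Table 2)

ratio = blind_mean / control_mean
print(round(ratio, 2))  # 2.92
```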
Figure 1 shows the variability and extreme values of the sample. Group 4 includes two outliers that possibly provide real information given the small sample for this group (n = 6); these values are within the general range of the other observations. Therefore, we decided to keep them.
The standard statistical analysis when a continuous variable is measured across independent groups is the ANOVA test, which compares group means. However, ANOVA assumes a normally distributed sample and homogeneity of variance, i.e., approximately equal variances among the independent groups. A Shapiro–Wilk test (Table 3) shows a significance larger than the 0.05 benchmark for each group. Therefore, the null hypothesis was not rejected, indicating that the samples of all groups followed a normal distribution.
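The per-group normality check can be reproduced with `scipy.stats.shapiro`; the sample below is synthetic, generated only to illustrate the call, and is not the study data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic response times (seconds) for one group, illustrative only:
# drawn from a normal distribution with the control group's rough scale.
group = rng.normal(loc=170, scale=68, size=17)

stat, p = stats.shapiro(group)
# If p > 0.05, the null hypothesis of normality is not rejected for this group.
print(round(stat, 3), round(p, 3))
```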
The error bar chart (Figure 2) shows the large variability of groups 2, 3, and 4. A Levene test (Table 4) confirmed this impression for the four groups, yielding a significance below 0.05. We therefore rejected the null hypothesis of equal variances: the variances of at least two groups differ, so the ANOVA test cannot be applied.
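The homogeneity-of-variance check can be sketched as below, again with synthetic samples whose unequal spread mimics the pattern of Figure 2 (group parameters and sizes are assumptions; only group 4's size of six is stated in the paper):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic samples in seconds with clearly unequal spread.
g1 = rng.normal(170, 69, 23)
g2 = rng.normal(497, 281, 15)
g3 = rng.normal(320, 180, 16)
g4 = rng.normal(270, 120, 6)

stat, p = stats.levene(g1, g2, g3, g4)
# A p-value below 0.05 rejects equal variances, ruling out ANOVA.
print(f"Levene: W = {stat:.3f}, p = {p:.4f}")
```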
The lack of homogeneity of variance among the groups led us to apply non-parametric tests, so we used the Kruskal–Wallis test to check for significant differences between the times of the groups. The null hypothesis states that the distribution of response times is the same for every group of individuals. A p-value of 0.000 (<0.05) in the Kruskal–Wallis test (Table 5) led us to reject the null hypothesis: there are significant differences in the distribution of response times between the groups.
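The omnibus non-parametric comparison can be sketched as follows, with synthetic stand-ins for the four groups (parameters and sizes are assumptions; the real data are in the Zenodo dataset):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Synthetic group times in seconds (illustrative only).
g1 = rng.normal(170, 69, 23)
g2 = rng.normal(497, 281, 15)
g3 = rng.normal(320, 180, 16)
g4 = rng.normal(270, 120, 6)

h, p = stats.kruskal(g1, g2, g3, g4)
# p < 0.05 rejects identical response-time distributions.
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")
```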
Inspection of Figure 2 suggests differences between group 1 and groups 2 and 3. The Wilcoxon–Mann–Whitney test allows a more in-depth study through pairwise comparisons of the groups to statistically confirm this conjecture (Table 6).
The p-value (0.000) for groups 1 and 2 is below both 0.05 and 0.01, i.e., significant at both the 5% and 1% thresholds, indicating clear differences between these groups. Significant differences also appear between groups 1 and 3, whose p-value is 0.001.
Groups 1 and 4 present a p-value of 0.030, lower than 0.05 but larger than 0.01: they differ at the 5% level of significance but not at the 1% level. This indicates that the differences between groups 1 and 4 are smaller than those between group 1 and groups 2 and 3. The p-values of 0.105, 0.080, and 0.481 evidence the absence of differences, at both the 5% and 1% levels, between groups 2 and 3, 2 and 4, and 3 and 4, respectively. Table 6 summarizes all pairwise tests: group 1 differs from groups 2 and 3 (at both the 5% and 1% levels), while the difference between groups 1 and 4 holds at the 5% level but not at the 1% level. Figure 2 reflects the differences between group 1 and the other groups, since the bar of group 1 does not overlap with those of the other groups; no differences appear between groups 2 and 3 or groups 3 and 4, whose bars overlap. Although the bars of groups 2 and 4 do not overlap, suggesting a difference, the Mann–Whitney test establishes that this difference is not statistically significant (p-value = 0.080).
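The pairwise testing procedure can be sketched as below; the loop runs a two-sided Mann–Whitney U test on every pair of groups, mirroring the structure of Table 6 (the samples are synthetic stand-ins, so the p-values will not reproduce the reported ones):

```python
from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Synthetic group times in seconds (illustrative assumptions).
groups = {
    "g1": rng.normal(170, 69, 23),
    "g2": rng.normal(497, 281, 15),
    "g3": rng.normal(320, 180, 16),
    "g4": rng.normal(270, 120, 6),
}

results = {}
for (na, a), (nb, b) in combinations(groups.items(), 2):
    u, p = stats.mannwhitneyu(a, b, alternative="two-sided")
    results[(na, nb)] = p
    flag = "significant" if p < 0.05 else "not significant"
    print(f"{na} vs {nb}: p = {p:.4f} ({flag} at 5%)")
```

In practice, a multiple-comparison correction (e.g., Bonferroni) could also be applied to these six p-values; the paper reports the uncorrected values.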

5. Discussion

5.1. Research Question RQ1

As a first step in answering RQ1, we need to discuss the validity and representativeness of the available data sample. Working with two different courses and data from four editions minimizes possible sampling errors due to specific circumstances of data collection. This supports the view that the sample of participants is robust and the results homogeneous, suggesting that the times measured in our experiment represent a solid trend. Moreover, the satisfaction questionnaire results were good in all courses, so we can assume there were no extra factors in the courses that influenced time either positively or negatively. Inspection of the outliers (removed in the previous phase of the results analysis) showed that they included, among others, all three blind students with secondary education who did not work. In contrast, of the twelve students with university education, all except one could be included in the study, as their times fell within the time range of the sample. This suggests a possible threshold in training and educational level above which blind students can integrate with the rest of the adult population on equal terms, provided they receive extra time for activities.
We chose an initial questionnaire on personal profiles as the task to be measured in order to avoid bias, since the activity did not depend on previous knowledge. Students completed the questionnaire at the beginning of the course, so their experience during the course had no influence. Students had no time limit to respond to the questionnaire, although the results confirmed (after discarding outliers) that they answered without taking breaks, as they had been instructed to do.
We compared the time taken by the blind students (group 2) with the time taken by blind students in [15] on similar tasks measured with a detailed manual procedure: the similarity of the times, expressed as ratios, suggests that our measurement method is consistent with reality and applicable to other groups. However, we cannot claim that these results are definitive, due to the relatively small number of students in those groups and the possibly wider variability in the degree of vision or motor skills of the participants.
The statistical study provided a pairwise comparison between groups by applying the Mann–Whitney U test, as the groups were large enough. Table 6 summarizes these results, evidencing that the control group (group 1) presented significant differences (p-value < 0.05) with groups 2, 3, and 4, while groups 2, 3, and 4 did not present significant differences with one another. The differences between group 1 and groups 2 (p-value = 0.000) and 3 (p-value = 0.001) are greater than those between group 1 and group 4 (p-value = 0.030) (Table 6), which led us to reflect on two possible causes: visual impairment causes more problems than motor limitations, and/or the motor capacity level of the students of group 4 was not very restrictive. Another significant finding was the existence of two outliers out of six elements in group 4, pointing to a large variation in operability among students with motor problems. These results, together with the conclusions reached in [30] and the lack of studies on this type of impairment in the literature, suggest the need for a specific study of students with reduced motor ability.
A reflection on the experiment is that increasing the time allocated for tasks is an effective solution to allow people with different capabilities to access information under conditions equivalent to those of people without limiting conditions. Personalizing time individually for each student depending on their special needs [45] would guarantee that every student gets a suitable time for the allocated tasks, avoiding the overload of redesigning courses when students with different types of capabilities are enrolled. However, it is not feasible to study each student individually for a personal allocation of time, where further personal variables may also influence the result. In our opinion, it would be more sensible and less expensive to add the same amount of extra time for each group of students with different capabilities, based on the statistical comparison between the students of each group and those without apparent limitations.
To meet our objective of determining the amount of extra time needed by students with different capabilities versus students without disability, we analyzed the means of the control group (group 1) and groups 2, 3, and 4.
The previous consideration, together with the comparative statistical analysis, provides an answer to RQ1: “How much additional time do students with vision and motor problems need to interact with typical e-learning resources, such as questionnaires and tests, compared to those without impairment?” The ratio of extra time interaction needed by blind students is 2.92; for the students with low vision, it is 1.88; and for the students with reduced motor skills, it is 1.58.
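These ratios can be packaged as a simple lookup for course planning; `allocate_time` is a hypothetical helper built on the study's reported ratios, not a tool from the study itself:

```python
# Extra-time multipliers taken from the study's answer to RQ1.
EXTRA_TIME_RATIO = {
    "blind": 2.92,
    "low_vision": 1.88,
    "reduced_motor": 1.58,
    "none": 1.00,
}

def allocate_time(base_minutes, group):
    """Scale a task's base duration by the group's ratio.

    Hypothetical helper: maps a disability group to the time an
    instructor could allocate for an e-learning questionnaire.
    """
    return base_minutes * EXTRA_TIME_RATIO[group]

# A 10-minute questionnaire would be allocated ~29.2 minutes
# for blind students.
print(allocate_time(10, "blind"))
```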

5.2. Research Question RQ2

The first point of this section is the comparison between our results and those found in the literature on the evaluation of interaction time for people with disabilities. The times determined by previous related studies are summarized in Table 7.
An exhaustive review of the research revealed that the work of Evans and Douglas [15] is the most complete study regarding blind students, measuring times with a detailed manual procedure for different online learning tasks in two groups: one of 10 blind participants and another of 10 non-disabled participants. While that study measured time manually, we quantified time using the Moodle platform, obtaining similar results for blind people.
Contrasting our results with those provided by the literature enables us to answer RQ2: “Are these times aligned with the ones reported by existing literature?” Our ratio of 2.92 is within the range of 2 to 3 times longer than the control group determined by Evans and Douglas in 2008 [15]. It is also within the margins proposed by Packer in 1989 [48] (a little less than 2 times for visual impairment) and by Gompel, Van Bon, and Schreuder in 2004 [40] (children with low vision need between 1.5 and 2 times more time than sighted children). However, it is far from the study of Wetzel and Knowlton in 2000 [47], who proposed providing 50% more time when using Braille readers and increasing the additional time to accommodate different types of answers. Our results are also not in line with those of Mohammed and Omar published in 2011 [43] (2 times for visual impairment and 3 times for blind students). McNear and Torres in 2002 [46] and Morris in 1974 [49] suggested a lower ratio (1.5 for visual impairment) than the one reflected by our study. All studies analyzing both reduced vision and blindness agree on giving more time to blind students who use Braille than to partially sighted students. Allman reported in 2013 a set of authors’ insights, included in this paper, and concluded that extending time could result in a better measure of students’ abilities by helping them to reduce anxiety [37].
Results showed that most of the students with reduced vision used the screen magnifier instead of the screen reader: their average time was considerably lower than that of blind students, as shown by the ratio between groups 3 and 1 (1.88) versus the ratio between groups 2 and 1 (2.92). We deduced from these ratios that blind students took 55% more time than low-vision students. These figures, together with the greater variability of the blind sample, suggest that the time spent by group 3 (increasing the font size, scrolling to reach the end of a document, and going backwards due to loss of perspective when the font size is increased) is less than the time spent by blind students using a screen reader. This is probably caused by the loss of perspective of blind students, who must listen to all the information before being able to discern and select what they need, and must remember what they have heard to make a decision at the right moment. Furthermore, they are unable to quickly discard irrelevant information around the activity to be performed, such as decorative frames, other menus, etc., as these elements are also narrated by the screen reader. Participants with motor problems required additional time to operate the hardware, position the mouse in the right place, and press buttons and keys when typing text or using the keyboard to interact with the user interface. Although the extra time for this group is the lowest of all (ratio 1.58), we cannot fully confirm it as the group requiring the shortest extra time, due to the variability shown in Figure 2. Possible reasons for these results are the wide variety of degrees of motor ability and the limited size of our sample.
It would be reasonable to think that the time required by each student in this group could be influenced by more personal details, such as their ability to move the mouse or press the keyboard. We did not find any similar work in the field of reduced motor capability with which to compare our results.
Although the variability among low-vision and reduced motor skill students could be high, we can confirm the results for blind students given the minimal variability in that sample. Moreover, all low-vision students used a screen magnifier, which homogenized their way of working and reduced variability. The reduced motor capacity students in the sample were those who declared problems in arms and hands but were able to use the mouse and keyboard by themselves or through assistive technology, which also reduced variability. Therefore, we consider our results relevant for the estimation of time in accessibility, although deeper research is needed to consolidate the conclusions.
The study was limited to six students with reduced motor ability; further research with a larger sample of this group is needed to complete and consolidate the current results. Questionnaires are a tool frequently used in e-learning for assessment, self-assessment, and improving learning [14], so the results of our study cover a relevant share of e-learning situations. However, this first experiment does not address all types of e-learning activities, such as reading content or solving exercises; these need to be studied in more specific time analyses to cover the whole range of learning activities performed by students.

6. Conclusions and Future Work

Accessibility in virtual education is essential to ensure all people can benefit from it, whatever their capabilities. Although the technical work on contents, formats, and assistive technologies still needs improvement in terms of practical real-world application, learning and evaluation processes also need additional consideration to ensure that students with different capabilities can properly complete learning tasks. As detected in several studies, learners with different capabilities require extra time. The results obtained can be applied to all students who interact with computers, but they have a more direct impact on computer engineering and telecommunications students, who need to use the computer for most of their tasks, which widens the gap between students with disabilities and their peers. Ideally, time should be adapted to the specific conditions of each user; the problem is that teachers lack clear guidelines to adapt the given time to each type of student.
Our work aimed to provide more information to solve this problem: we measured the time required by 60 students with different conditions (blind, visually impaired, and motor impaired students, plus students without any apparent limitation acting as a control group) to complete a task in four different online courses using an accessible Moodle platform. The main difference between our approach and precedents in the literature is the measurement of time through the automated functionality of the platform, avoiding the possible human errors of manual methods. We also used as the reference task the completion of a questionnaire on the user’s personal profile, to avoid possible bias due to the need for previous knowledge to answer questions. Our approach additionally includes a group of students with reduced motor capacity, a further contribution in comparison to previous works. The sample of students was homogeneous in professional experience and educational background, and students expressed homogeneous satisfaction and motivation with the courses, so there is no reason to suggest that measured times were affected by demotivation or low interest in completing the task.
Comparing the times taken by the different groups of students to complete a questionnaire on their personal profile with the time spent by the control group, we found that blind students required 192% more time, low-vision students 88% more time, and reduced motor skill students 58% more time. This suggests that low-vision students, who required less than half the additional time of blind students, are more agile using the magnifier and screen scrolling than blind students using a screen reader. It is also probable that the students with reduced motor capacity who participated in the study had a relatively good degree of motor skill, as the time they required for the task was only moderately higher (58%) than that of the control group.
The results for blind students were similar to those of previous studies, which found that blind students require up to three times longer to complete tasks. This suggests that our method for analyzing time is consistent with more effort-intensive methods, such as those previously used by Evans and Douglas [15]. This consistency in the data for blind students supports the value of the results for the other groups, i.e., low-vision and reduced motor capability students. The extra time resulting from our study for students with low vision (1.88 times) is also aligned with the results of Packer [48] and Gompel, Van Bon, and Schreuder [40]. Teachers can therefore now rely on guidelines for time allocation to blind and low-vision groups. This enables better planning of learning processes when working with students with different capabilities, not only in terms of allocated time in e-learning but also regarding the impact on the whole learning process. As a suggestion, classical measures of student effort such as the ECTS (European Credit Transfer and Accumulation System) could be adapted for each group of students.
Pairwise comparison of the groups using the Mann–Whitney statistical method revealed significant differences between the control group (group 1) and the rest of the groups. Groups 2, 3, and 4 presented high variability, probably due to the differences in interaction ability among students and the different levels of disability in groups 3 and 4. The sample of group 4 was small and also presented high variability, which suggests, together with the absence of references in this field, that motor impairment is a less studied disability in the educational area and requires additional work to better determine the allocation of extra time.
Although our results already provide a solid estimate of the average extra time required by blind, low-vision, and reduced motor skill students, the topic deserves deeper exploration of the different degrees of vision and motor impairment, with a more detailed classification of their impact on digital interaction. This would enable new experiments with larger samples from all disability groups (especially those with motor impairment) to determine more precise guidelines for the allocation of extra time. It is, of course, essential to avoid background effects in these experiments so as to isolate the measurement of time from the impact of non-homogeneous profiles of knowledge or skills.

Author Contributions

Conceptualization, C.B.-O. and L.F.-S.; methodology, C.B.-O. and L.F.-S.; software, C.B.-O. and I.L.-B.; validation, I.L.-B. and M.J.R.-B.; formal analysis, M.J.R.-B. and L.F.R.-G.; investigation, L.F.R.-G., C.B.-O. and L.F.-S.; resources, L.F.R.-G. and C.B.-O.; data curation, M.J.R.-B. and I.L.-B.; writing—original draft preparation, C.B.-O.; writing—review and editing, L.F.-S. and C.B.-O.; visualization, C.B.-O.; supervision, L.F.-S. All authors have read and agreed to the published version of the manuscript.

Funding

This work has received no external funding.

Institutional Review Board Statement

Participants were informed of the purpose of the research. The experiment was exempt from approval by the Ethics Committee because the data compilation for this type of non-medical/non-health study took place prior to 2019, when the regulation code was established at UAH: https://www.uah.es/export/sites/uah/es/investigacion/.galleries/Investigacion/Codigo-Etico-de-Buenas-Practicas-en-investigacion.pdf (accessed on 27 August 2023).

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are openly available in Zenodo at https://doi.org/10.5281/zenodo.7630194 (accessed on 27 August 2023).

Acknowledgments

This paper has been produced within the framework of the WAMDIA project (2017-1-ES01-KA202-038673) and the EduTech project (609785-EPP-1-2019-1-ES-EPPKA2-CBHE-JP), both co-funded by the Erasmus+ Programme of the European Union.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Questions of the Initial Questionnaire

  • Gender:
    • Male, female
  • Type of disability
    • Total visual
    • Partial visual
    • Total auditory
    • Partial auditory
    • Motor that directly affects interaction with the computer or device
    • Motor that does not directly affect interaction with the computer or device
    • Other disability
    • None
  • Please tell us if you are currently employed or in a professional activity
    • Not currently but I have no interest in seeking employment
    • Not currently but I have worked before and I am looking for a new job or activity.
    • I have not yet had a job or professional activity but I am looking for a job or activity
    • Yes
  • Total number of years of experience in your working or professional life
    • Integer number
  • Educational level
    • Primary education
    • Secondary education
    • Secondary vocational education
    • University degree
    • University master’s degree
    • Doctorate
  • Please indicate how often you use a computer or similar device (tablet, etc.).
    • Daily
    • A few times a week
    • Occasionally, hardly once a week
    • Never
  • Please indicate how often you surf the internet or write e-mails with a computer or similar device (tablet, etc.).
    • Daily
    • A few times a week
    • Occasionally, less than once a week
    • Never
  • Please indicate how often you create documents with a word processor using a computer or similar device (tablet, etc.).
    • Daily
    • A few times a week
    • Occasionally, less than once a week
    • Never

Appendix B. Questions of the Final Questionnaire

Answers used a Likert scale with the following items:
(a) very unsatisfactory
(b) unsatisfactory
(c) acceptable
(d) satisfactory
(e) very satisfactory
1. Fulfilment of planned objectives
2. Personal achievement of planned objectives
3. Usefulness of available material to achieve the objectives
4. Usefulness of the learned contents
5. Duration of the course in relation to the objectives
6. Mentoring to achieve the objectives
7. Overall satisfaction with the development of the course
8. Overall satisfaction with the usefulness of the course
9. Use of the Learning Management System

References

  1. Loitsch, C.; Weber, G.; Kaklanis, N.; Votis, K.; Tzovaras, D. A knowledge-based approach to user interface adaptation from preferences and for special needs. User Model. User Adapt. Interact. 2017, 27, 445–491. [Google Scholar] [CrossRef]
  2. Batanero, C.; de-Marcos, L.; Holvikivi, J.; Hilera, J.R.; Otón, S. Effects of New Supportive Technologies for Blind and Deaf Engineering Students in Online Learning. IEEE Trans. Educ. 2019, 62, 270–277. [Google Scholar] [CrossRef]
  3. Santos, O.C.; Boticario, J.G. Requirements for semantic educational recommender systems in formal e-learning scenarios. Algorithms 2011, 4, 131–154. [Google Scholar] [CrossRef]
  4. Jafri, R.; Khan, M.M. User-centered design of a depth data based obstacle detection and avoidance system for the visually impaired. Hum.-Centric Comput. Inf. Sci. 2018, 8, 14. [Google Scholar] [CrossRef]
  5. ISO 9241-171:2008; Ergonomics of Human-System Interaction—Part 171 Guidance on Software Accessibility. International Standard Organization: Geneve, Switzerland, 2018.
  6. W3C. Web Accessibility Initiative (WAI). Available online: https://www.w3.org/WAI/ (accessed on 13 September 2018).
  7. G3ict. Global Initiative for Inclusive Information and Communication Technologies. Available online: https://dig.watch/actors/global-initiative-inclusive-information-and-communication-technologies (accessed on 20 September 2018).
  8. IMS Global Learning Consortium Accessibility. Available online: http://www.imsglobal.org/activity/accessibility (accessed on 13 September 2018).
  9. IMS Global Learning Consortium. IMS AccessForAll v3.0 Public Draft Specification. 2012. Available online: http://www.imsglobal.org/accessibility/#afav3 (accessed on 17 November 2018).
  10. ISO/IEC 24751-1-2-3 (2008); Information Technology—Individualized Adaptability and Accessibility in e-Learning Education and Training. International Standard Organization: Geneve, Switzerland, 2018.
  11. W3C. Media Accessibility User Requirements. Available online: http://w3c.github.io/apa/media-accessibility-reqs/ (accessed on 3 February 2020).
  12. Hong, S. An Extended Time in a Testing Situation for People Who are Blind and Visually Impaired: Time Variable and Other Variables Affecting an Outcome of Students with Visual Impairment. Vis. Impair. 2001, 17, 123–136. [Google Scholar]
  13. Verma, P.; Singh, R.; Singh, A.K. A framework to integrate speech based interface for blind web users on the websites of public interest. Hum.-Centric Comput. Inf. Sci. 2013, 3, 21. [Google Scholar] [CrossRef]
  14. Remesal, A.; Colomina, R.M.; Mauri, T.; Rochera, M.J. Online Questionnaires Use with Automatic Feedback for e-Innovation in University Students. Comunicar. Media Educ. Res. J. 2017, 25, 51–60. [Google Scholar]
  15. Evans, S.; Douglas, G. E-learning and blindness: A comparative study of the quality of an e-learning experience. J. Vis. Impair. Blind. 2008, 102, 77. [Google Scholar] [CrossRef]
  16. Singh, S.; Meeks, L.M. Disability inclusion in medical education: Towards a quality improvement approach. Med. Educ. 2023, 57, 102–107. [Google Scholar] [CrossRef] [PubMed]
  17. Alcaraz Martínez, R.; Turró, M.R.; Granollers Saltiveri, T. Methodology for heuristic evaluation of the accessibility of statistical charts for people with low vision and color vision deficiency. Univers. Access Inf. Soc. 2022, 21, 863–894. [Google Scholar] [CrossRef]
  18. Tsinajinie, G.; Kirboyun, S.; Hong, S. An Outdoor Project-Based Learning Program: Strategic Support and the Roles of Students with Visual Impairments Interested in STEM. J. Sci. Educ. Technol. 2021, 30, 74–86. [Google Scholar] [CrossRef]
  19. Williams, R.; Rattray, R. An assessment of Web accessibility of UK accountancy firms. Manag. Audit. J. 2003, 18, 710–716. [Google Scholar] [CrossRef]
  20. Wisdom, J.R.; White, N.A.; Goldsmith, K.A.; Bielavitz, S.; Davis, C.E.; Drum, C. An assessment of web accessibility knowledge and needs at Oregon Community Colleges. Community Coll. Rev. 2006, 33, 19–37. [Google Scholar] [CrossRef]
  21. Aizpurua, A.; Harper, S.; Vigo, M. Exploring the relationship between web accessibility and user experience. Int. J. Hum. Comput. Stud. 2016, 91, 13–23. [Google Scholar] [CrossRef]
  22. Lorca, P.; de Andrés, J.; Martínez, A.B. Does Web accessibility differ among banks? World Wide Web 2016, 19, 351–373. [Google Scholar] [CrossRef]
  23. Cho, D.J. A Study on Web Accessibility Improvement Using QR-Code. Am. J. Eng. Res. 2016, 1, 3. [Google Scholar]
  24. W3C. Web Content Accessibility Guidelines (WCAG) 3.0. Available online: https://www.w3.org/TR/2021/WD-wcag-3.0-20210121/ (accessed on 24 June 2021).
  25. ISO/IEC 40500; Information Technology—W3C Web Content Accessibility Guidelines (WCAG) 2.0. International Standard Organization: Geneve, Switzerland, 2012.
  26. Batanero, C.; Karhu, M.; Holvikivi, J.; Otón, S.; Amado-Salvatierra, H.R. A method to evaluate accessibility in e-learning education systems. In Proceedings of the 14th International Conference on Advanced Learning Technologies, Athens, Greece, 7–10 July 2014; pp. 556–560. [Google Scholar]
  27. Mirri, S.; Salomoni, P.; Roccetti, M.; Gay, G. Beyond Standards: Unleashing Accessibility on a Learning Content Management System. In Transactions on Edutainment V; Springer: Berlin/Heidelberg, Germany, 2011; Volume 6530, pp. 35–49. [Google Scholar]
  28. Laabidi, M.; Jemni, M.; Jemni Ben Ayed, L.; Ben Brahim, H.; Ben Jemaa, A. Learning technologies for people with disabilities. J. King Saud Univ.-Comput. Inf. Sci. 2014, 26, 29–45. [Google Scholar] [CrossRef]
  29. IMS Global Learning Consortium. IMS AccessForAll v2.0 Final Specification. 2009. Available online: http://www.imsglobal.org/accessibility/#accDRD (accessed on 10 November 2018).
  30. Batanero, C.; Fernández-Sanz, L.; Piironen, A.K.; Holvikivi, J.; Hilera, J.R.; Otón, S.; Alonso, J. Accessible platforms for e-learning: A case study. Comput. Appl. Eng. Educ. 2017, 25, 1018–1037. [Google Scholar] [CrossRef]
  31. Shawar, B.A. Evaluating Web Accessibility of Educational Websites. Int. J. Emerg. Technol. Learn. 2015, 10, 4. [Google Scholar] [CrossRef]
  32. Roig-Vila, R.; Ferrández, S.; Ferri-Miralles, I. Assessment of Web content accessibility levels in Spanish official online education environments. Int. Educ. Stud. 2014, 7, 31. [Google Scholar] [CrossRef]
  33. Skourlas, C.; Tsolakidis, A.; Belsis, P.; Vassis, D.; Kampouraki, A.; Kakoulidis, P.; Giannakopoulos, G.A. Integration of institutional repositories and e-learning platforms for supporting disabled students in the higher education context. Libr. Rev. 2016, 65, 136–159. [Google Scholar] [CrossRef]
  34. Kamei-Hannan, C. Examining the accessibility of a computerized adapted test using assistive technology. J. Vis. Impair. Blind. 2008, 102, 261. [Google Scholar] [CrossRef]
35. Macik, M.; Cerny, T.; Basek, J.; Slavik, P. Platform-aware rich-form generation for adaptive systems through code-inspection. In Human Factors in Computing and Informatics, Lecture Notes in Computer Science; Springer: Berlin, Germany, 2013; Volume 7946.
36. Permvattana, R.; Armstrong, H.; Murray, I. E-learning for the vision impaired: A holistic perspective. Int. J. Cyber Soc. Educ. 2013, 6, 15.
37. Allman, C. Position Paper: Use of Extended Time. 2013. Available online: http://www.gadoe.org/Curriculum-Instruction-and-Assessment/Special-Education-Services/Documents/Vision/Extended%20Time.pdf (accessed on 20 July 2023).
38. Sloan, D.; Stratford, J.; Gregor, P. Using multimedia to enhance the accessibility of the learning environment for disabled students: Reflections from the Skills for Access project. ALT-J 2006, 14, 39–54.
39. Galán Mañas, A.; Gairín Sallán, J.; Fernández Rodríguez, M.; Sanahuja Gavaldà, J.M.; Muñoz Moreno, J.L. Tutoring students with disabilities. Pulso. Rev. de Educ. 2014, 13, 13–33.
40. Gompel, M.; van Bon, W.H.J.; Schreuder, R. Reading by children with low vision. J. Vis. Impair. Blind. 2004, 98, 77–89.
41. Douglas, G.; Grimley, M.; McLinden, M.; Watson, L. Reading errors made by children with low vision. Ophthalmic Physiol. Opt. 2004, 24, 319–322.
42. Evans, S. E-Learning and Blindness: Evaluating the Quality of the Learning Experience to Inform Policy and Practice. Ph.D. Thesis, University of Birmingham, Birmingham, UK, 2009.
43. Mohammed, Z.; Omar, R. Comparison of reading performance between visually impaired and normally sighted students in Malaysia. Br. J. Vis. Impair. 2011, 29, 196–207.
44. Pepper, D. Assessment for Disabled Students: An International Comparison. Qualifications and Curriculum Authority—Gov. UK, 2007. Available online: https://dera.ioe.ac.uk/7174/1/Assessment_disabled_international_briefing.pdf (accessed on 20 July 2023).
45. Atkins, S. Assessing the Ability of Blind and Partially Sighted People: Are Psychometric Tests Fair? RNIB Centre for Accessible Information: Birmingham, UK, 2012.
46. McNear, D.; Torres, I. When You Have a Visually Impaired Student in Your Classroom: A Guide for Teachers; American Foundation for the Blind: Arlington, VA, USA, 2002.
47. Wetzel, R.; Knowlton, M. A comparison of print and braille reading rates on three reading tasks. J. Vis. Impair. Blind. 2000, 94, 146–154.
48. Packer, J. How much extra time do visually impaired people need to take examinations—The case of the SAT. J. Vis. Impair. Blind. 1989, 83, 358–360.
49. Morris, J.E. The 1973 Stanford Achievement Test series as adapted for use by the visually handicapped. Educ. Vis. Handicap. 1974, 6, 33–40.
50. Kakasevski, G.; Mihajlov, M.; Arsenovski, S.; Chungurski, S. Evaluating usability in learning management system Moodle. In Proceedings of the ITI 2008—30th International Conference on Information Technology Interfaces, Cavtat, Croatia, 23–26 June 2008; pp. 613–618.
51. Ivanovic, M.; Putnik, Z.; Komlenov, Z.; Welzer, T.; Hölbl, M.; Schweighofer, T. Usability and privacy aspects of Moodle: Students' and teachers' perspective. Informatica 2013, 37, 221.
52. Nielsen, J. Usability inspection methods. In Proceedings of the Conference Companion on Human Factors in Computing Systems, Boston, MA, USA, 24–28 April 1994; pp. 413–414.
Figure 1. Boxplots for the four groups.
Figure 2. Error bars for the groups (the last group comprises students with reduced motor capability, with outliers removed).
Table 1. Detailed information on the groups of participants.

| Group | Number of Students | Female | Male |
|---|---|---|---|
| Group 1 | 26 | 20 | 6 |
| Group 2 | 17 | 9 | 8 |
| Group 3 | 11 | 5 | 6 |
| Group 4 | 6 | 5 | 1 |
| Total | 60 | 39 | 21 |
Table 2. Descriptive statistical values for the groups (time in seconds).

| Group | N | Mean | SD | Median | Max. | Min. |
|---|---|---|---|---|---|---|
| Group 1 | 26 | 170.27 | 68.56 | 165.00 | 340.00 | 47.00 |
| Group 2 | 17 | 496.53 | 281.22 | 410.00 | 1076.00 | 148.00 |
| Group 3 | 11 | 320.09 | 130.51 | 331.00 | 564.00 | 132.00 |
| Group 4 | 6 | 268.33 | 143.77 | 256.00 | 507.00 | 60.00 |
| Total | 60 | 299.98 | 216.33 | 236.00 | 1076.00 | 47.00 |
Table 3. Normality test—Shapiro–Wilk test (seconds used in completing the activity).

| Group | SW Statistic | p-Value |
|---|---|---|
| Group 1 | 0.966 | 0.521 |
| Group 2 | 0.913 | 0.113 |
| Group 3 | 0.975 | 0.933 |
| Group 4 | 0.917 | 0.486 |
Table 4. Levene test results—variance homogeneity test (seconds used in completing the activity).

| Basis | Statistic | p-Value |
|---|---|---|
| Based on mean | 13.13 | 0.000 |
| Based on median | 8.69 | 0.000 |
| Based on median with adjusted df | 8.69 | 0.000 |
| Based on trimmed mean | 12.26 | 0.000 |
Table 5. Kruskal–Wallis test results.

|  | Time |
|---|---|
| Statistic | 26.636 |
| df | 3 |
| p-value | 0.000 |
Table 6. Comparative study of differences between groups (Wilcoxon–Mann–Whitney test; * significance at 5%, ** significance at 1%).

| Comparison | U | W | Z | p-Value |
|---|---|---|---|---|
| Group 1 (26, control) vs. Group 2 (17) | 35.500 | 386.500 | −4.608 | 0.000 ** |
| Group 1 (26, control) vs. Group 3 (11) | 46.000 | 397.000 | −3.223 | 0.001 ** |
| Group 1 (26, control) vs. Group 4 (6) | 33.000 | 384.000 | −2.173 | 0.030 * |
| Group 2 (17) vs. Group 3 (11) | 59.000 | 125.000 | −1.623 | 0.105 |
| Group 2 (17) vs. Group 4 (6) | 26.000 | 47.000 | −1.715 | 0.080 |
| Group 3 (11) vs. Group 4 (6) | 26.000 | 47.000 | −0.704 | 0.481 |
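The testing sequence reported in Tables 3–6 (Shapiro–Wilk normality checks per group, a Levene test that rejects variance homogeneity, the non-parametric Kruskal–Wallis omnibus test, and pairwise Mann–Whitney comparisons against the control group) can be reproduced with standard tools. The sketch below uses `scipy.stats`; the timing data are illustrative placeholders, not the study's measurements:

```python
# Minimal sketch of the reported test pipeline, assuming scipy is available.
# Group sizes and values here are illustrative; the study used N = 26, 17, 11, 6.
from scipy import stats

g1 = [120, 150, 165, 180, 200, 210]   # control group (no disability)
g2 = [300, 410, 520, 640, 800, 1000]  # blind students
g3 = [150, 280, 331, 400, 480, 560]   # partially sighted students
g4 = [60, 200, 256, 300, 420, 500]    # reduced motor capability

# 1. Shapiro-Wilk normality test per group (cf. Table 3)
for name, g in [("G1", g1), ("G2", g2), ("G3", g3), ("G4", g4)]:
    w, p = stats.shapiro(g)
    print(f"{name}: W={w:.3f}, p={p:.3f}")

# 2. Levene test for homogeneity of variances (cf. Table 4)
stat, p = stats.levene(g1, g2, g3, g4, center="mean")
print(f"Levene (mean-based): {stat:.2f}, p={p:.3f}")

# 3. Kruskal-Wallis omnibus test, chosen because variances differ (cf. Table 5)
h, p = stats.kruskal(g1, g2, g3, g4)
print(f"Kruskal-Wallis: H={h:.3f}, p={p:.3f}")

# 4. Pairwise Mann-Whitney U tests against the control group (cf. Table 6)
for name, g in [("G2", g2), ("G3", g3), ("G4", g4)]:
    u, p = stats.mannwhitneyu(g1, g, alternative="two-sided")
    print(f"G1 vs {name}: U={u:.1f}, p={p:.4f}")
```

The non-parametric route (steps 3–4) follows from step 2: once the Levene test rejects equal variances, a classical one-way ANOVA is not appropriate, which matches the analysis reported in the tables.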
Table 7. Comparison of extra time proposed in different studies (increased time rate percentage).

| Researchers | Year | Customized | Low Vision | Blind |
|---|---|---|---|---|
| Morris [49] | 1974 | - | 50% (1.5 times) | 150% (2.5 times) |
| Packer [48] | 1989 | - | Little less than 2 times | More than 2 times |
| Wetzel and Knowlton [47] | 2000 | - | - | 50% + extra time |
| McNear and Torres [46] | 2002 | - | 50% (1.5 times) | - |
| Gompel, van Bon and Schreuder [40] | 2004 | - | From 50% to 100% (1.5 to 2 times) | - |
| Evans and Douglas [15] | 2008 | - | - | 2 to 3 times longer |
| Mohammed and Omar [43] | 2011 | - | 100% (2 times) | 200% (3 times) |
| Atkins [45] | 2012 | yes | - | - |
| Allman [37] | 2013 | yes | - | - |
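The parenthetical conversions in Table 7 follow the simple rule that an extra-time allowance of p% corresponds to a total-time multiplier of 1 + p/100. A quick check (the helper name is illustrative):

```python
def extra_time_multiplier(extra_percent: float) -> float:
    """Convert an extra-time allowance (percent) into a total-time multiplier."""
    return 1.0 + extra_percent / 100.0

# The conversions quoted in Table 7:
assert extra_time_multiplier(50) == 1.5    # Morris; McNear and Torres (low vision)
assert extra_time_multiplier(150) == 2.5   # Morris (blind)
assert extra_time_multiplier(100) == 2.0   # Mohammed and Omar (low vision)
assert extra_time_multiplier(200) == 3.0   # Mohammed and Omar (blind)
```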