Article

Development of Digital Competence for Research

by Adrián Sánchez 1,*, Rosa María Woo 1, Roberto Carlos Salas 1, Francisco López 1,*, Esther Guadalupe Narvaez 1, Agustín Lagunes 2 and Carlos Arturo Torres 3

1 Facultad de Ingeniería Eléctrica y Electrónica, Universidad Veracruzana, Calzada Ruiz Cortines 455, Boca del Río, Veracruz 94294, Mexico
2 Facultad de Negocios y Tecnologías, Campus Ixtac, Carretera a Dos Rios Km. 1, Ixtaczoquitlán, Veracruz 94452, Mexico
3 Facultad de Administración, Universidad Veracruzana, Puesta del Sol S/N, Veracruz 91780, Mexico
* Authors to whom correspondence should be addressed.
Appl. Syst. Innov. 2022, 5(4), 77; https://doi.org/10.3390/asi5040077
Submission received: 30 June 2022 / Revised: 1 August 2022 / Accepted: 2 August 2022 / Published: 8 August 2022
(This article belongs to the Special Issue Applied Systems on Emerging Technologies and Educational Innovations)

Abstract

Several studies conclude that the digital competence of students is oriented mainly toward their daily activities. This paper therefore seeks to determine the impact of implementing a blended learning course designed to improve the digital competence for research (DCR) of a group of undergraduate engineering students. A quasi-experimental explanatory methodology with a causal-comparative scope was applied: results were analyzed before and after applying a specially designed course to the experimental group, comparing it with a passive control group, with data collected using three previously validated instruments. For the data analysis, Student's t-tests and two-way ANOVA (Analysis of Variance) were used, with effect sizes estimated following Cohen's conventions. The results show a statistically significant improvement (p ≤ 0.05) in the students' skills and attitudes, but not in their knowledge, with a large effect size obtained only in the procedural dimension (f = 0.41 and η2 = 0.142). The implementation of the course in the blended learning modality is therefore considered to significantly improve the DCR of a group of undergraduate engineering students, although the results should be evaluated with due reservations.

1. Introduction

Digital competence is a transversal skill [1] whose analysis first started in Europe, where a recommendation of the European Parliament was published in 2006 in the Official Journal of the European Union [2]. In that publication, it is recognized as a key competence for the personal development of citizens, and the knowledge, skills, and attitudes required for digital competence emanate from it.
As a result of the recommendation of the European Parliament, the DIGCOMP project was launched in the European community in 2010, with the goal of establishing a reference framework for digital competence. In the following year, Ala-Mutka [3] presented a report on the theoretical-conceptual mapping of digital competence, detailing all related concepts. Subsequently, Ferrari [4] presented a second report that provided a better understanding and development of the digital competence previously proposed by Ala-Mutka [3]. In the same year, Janssen and Stoyanov [5] presented a research report compiling the opinions of experts on the development of European citizens' digital competence. A year later, Ferrari [6] presented the results of this stage of the DIGCOMP project.
On the other hand, the conceptualization and development of digital competence has undergone a transformation since its introduction in the European Parliament in 2006. Initially, Martin and Grudziecki [7] conceptualized digital literacy (digital competence) as divided into three levels: level I, referred to as digital competence; level II, referred to as digital use; and level III, called digital transformation. This hierarchical ranking was taken up by Ala-Mutka [3] to guide the structure of the digital competence model and served as a basis for Ferrari's [4] digital competence framework, called the DigComp model.
While there are other models of digital competencies or skills [8,9,10,11,12], the DigComp model is the most internationally recognized. It has served as the basis for multiple investigations, including some of the models above, and its purpose is similar to the one proposed in this study. It is therefore used as the reference for the operationalization of this construct.
On the other hand, to broaden classical access to education, there are massive open online courses (MOOC), which offer open access to educational resources with unlimited participation, as cited by Yousef and Sumner [13]. Although this type of course has developed over the last decade, its evolution depends on the level of accessibility that students have, and most MOOC are taken advantage of by professionals to keep themselves up to date. However, Yousef and Sumner [13] found that there is still a need to improve strategies to regulate self-access to these courses, since satisfactory results depend on it.
The following is a list of the main findings of some research that sought to develop digital competence:
  • Hernández et al. [14] verified that, in a MOOC environment, OER (Open Educational Resources) contribute to the development of digital competence. Consequently, they conclude that MOOC are scenarios that foster learning and collaborative interaction, as well as problem-solving.
  • Something similar was obtained in the research of Moreira et al. [15], who concluded that collaborative work led participants to a reflective process, which added to their previous theoretical knowledge.
  • In the research of Napal et al. [16], the expected results were not obtained: of the 21 sub-competencies, most students showed only a basic level in 12 of them, and in none was there a majority at an advanced level. They therefore concluded that competencies could be improved in the dimensions where participants had previously been trained as students, or through informal experiences and self-learning.
  • In Olivares' doctoral thesis [17], the students' self-perceived mastery of digital competence was found to be at a basic level, with failing grades in the practical and knowledge parts; in addition, the strategy used did not achieve a significant difference in any of the dimensions of digital competence.
  • González et al. [18], found statistical differences in the five competency areas between pre-test and post-test.
  • In the doctoral research of Pérez [19], it was concluded that children adopt the use of ICTs (Information and Communications Technology) in their daily lives at a high level, but not so much at school, since their main use is for leisure and recreational activities. Their level of digital competence drops drastically when they try to carry out homework activities with the Internet and ICT.
  • Lagunes [20] identifies the need to develop the digital competence that university students require when doing research. His proposal is based on the recommendations made by the European Parliament [2] regarding the development of the knowledge, skills, and attitudes required for the development of individuals.
Therefore, it is necessary to adapt the construct of digital competence that university students need in order to conduct research, adapting its knowledge, skills, and attitudes to the academic environment. According to the hierarchy of digital competence proposed by Martin and Grudziecki [7] and taken up by Ala-Mutka [3], digital competence for research can be placed as a level II digital competence, in which the competences applied to a specific activity such as research are grouped together, starting from the generic digital competence with its skills, knowledge, attitudes, and values. On the other hand, from Ferrari's [6] perspective, the DCR is a digital competence; however, following his recommendations, each of its competencies must be adapted to the specific needs of the environment or application. As an example, scientific research is a sector of society directly affected by digital technologies, according to the agreement of 14 May 2020 presented in the official state gazette [21] by the Ministry of Education and Vocational Training of the Spanish government.
Digital competence for research is composed of three dimensions: (a) digital information management, (b) communication and collaboration in virtual environments, and (c) digital content creation. It can be operationally defined as the ability to search, filter, evaluate, and manage data, information, and digital content, as well as to communicate and collaborate in virtual environments, creating digital content for research purposes.
As a result of the above, the research question arises: what is the difference in digital competence for research among undergraduate engineering students with the implementation of a blended learning course? The research hypothesis proposed is that there are significant differences in digital competence for research among undergraduate engineering students with the implementation of a course designed in the blended learning modality.

2. Materials and Methods

This investigation seeks to measure the level of digital competence for research of university students, as well as to determine whether it is possible to improve it significantly through an intervention process. For this purpose, it is necessary to use a method in which the researcher maintains a relationship independent of the process, with a neutral stance, objectively explaining the phenomenon and verifying the hypothesis from a positivist paradigm. According to Bisquerra et al. [22] (p. 71), the positivist paradigm “focuses on explaining, predicting and controlling the phenomena under study”.
So, following the recommendation of Hernández et al. [23], this research is based on a quantitative approach. Given the nature of the study, the research was designed under a quasi-experimental explanatory method with pre-test, post-test, and passive control group, implementing a causal-comparative study [22]. It was hypothesized that there is a cause–effect relationship between the implementation of a course designed in the blended learning modality and the digital competence for research of university students.

2.1. Population and Sample

The population of this study consisted of 248 students from a Naval Engineering University in Veracruz, Mexico, from which a non-probabilistic sample [22,23] of 32 students (18.7% female and 81.3% male) from six educational programs was drawn, meeting certain inclusion criteria and following recommendations stipulated by the campus authorities in the academic area. The inclusion and exclusion criteria are listed below:
(a)
Inclusion criteria:
  • Class-groups with the lowest failure rate and the best behavioral record, an express request of the campus authorities intended to reduce the possibility of a negative impact due to the reduction in free time entailed by participating in the project.
  • Be an enrolled student (regardless of gender or age) of the second semester of any major, corresponding to the academic year 2019–2020.
  • Not have failed subjects in the first half of the current semester.
  • Be interested in being part of the research project.
  • Have an assigned or own personal computer.
(b)
Exclusion criteria:
  • Those who, at any point during the course, are at academic risk.
  • Students who do not actively participate, showing a lack of interest.
  • Students who have requested to leave the project.

2.2. Data Collection Techniques and Instruments

For the pre-test and post-test data collection, three closed instruments were designed and validated: a typical performance instrument [24] to measure the attitudinal dimension, and two maximum performance instruments [25,26] to measure the level of the procedural and cognitive dimensions of digital competence for research. These meet the evidence of psychometric quality (see Table 1), reliability (see Table 2), and validity (see Table 3) required for their use in the research work.
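For reference, the KR-20 coefficient reported in Table 2 for the two dichotomously scored instruments is the standard Kuder–Richardson formula (a general psychometric fact, not a detail reported by the authors):

$$\mathrm{KR20} = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i q_i}{\sigma_X^2}\right),$$

where k is the number of items, p_i is the proportion of correct answers to item i, q_i = 1 − p_i, and σ_X² is the variance of the total scores.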

2.3. Evaluation and Feedback

During the intervention process, 2 in-person and 15 virtual activities were carried out, in which the teacher and the students participated actively, either individually or collaboratively. To assess each student's level of commitment in collaborative work, a co-evaluation was made based on a rubric describing four criteria: assigned work, quality of work, contribution to the team, and group integration. Each criterion had a score ranging from 0 to 25%, depending on the performance level: 1. Very competent; 2. Competent; 3. Work in progress; and 4. Needs improvement.
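To illustrate how such a rubric aggregates into a grade, the following minimal Python sketch computes a co-evaluation score; the mapping from performance level to percentage is a hypothetical assumption, since the text only states that each criterion ranges from 0 to 25%.

```python
# Minimal sketch of the co-evaluation rubric described above.
# LEVEL_SCORES is a hypothetical mapping: the source only says each
# criterion is worth between 0 and 25% of the total score.

CRITERIA = ("assigned work", "quality of work",
            "contribution to the team", "group integration")

LEVEL_SCORES = {                 # assumed percentages per level
    "very competent": 25.0,
    "competent": 18.0,
    "work in progress": 10.0,
    "needs improvement": 3.0,
}

def coevaluation_score(ratings: dict) -> float:
    """Sum the percentage earned on each of the four rubric criteria."""
    return sum(LEVEL_SCORES[ratings[c]] for c in CRITERIA)

# Example: a student rated 'competent' on all four criteria earns 72%.
print(coevaluation_score({c: "competent" for c in CRITERIA}))
```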
On the other hand, to evaluate how well the individual or group activities met the established requirements, an activity rubric was designed. Each activity was assessed by the teacher, and feedback was given in due time to each student through comments and recommendations attached to each activity, whose evidence of development was uploaded to the Moodle educational platform.

2.4. Evaluation of the Impact of the Course

Following the end of the intervention phase with the experimental group, the post-test was carried out by applying the three test instruments again to both groups, in what Olaz [29] calls a test–retest design. Because of the effect produced by memory and practice, there could be an improvement in the performance of the groups; to rule out this effect, we assessed how the performance of the experimental and passive control groups changed between the pre-test and post-test.
Subsequently, the results of both groups were analyzed between the pre-test and the post-test in order to validate the hypotheses raised. For this purpose, a two-way ANOVA test was performed. In this analysis, the dependent variables were the results of the cognitive, procedural, and attitudinal dimensions, while the independent variables were two categorical variables corresponding to the treatment (type of group) and the type of session. The treatment (group type) variable was coded 0 for participants who did not receive treatment (passive control group) and 1 for those who did (experimental group); the session variable was coded 0 for the pre-test results and 1 for the post-test results.
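The analysis itself was run in SPSS (see Section 2.5); purely as an illustration of the coding just described, a two-way ANOVA with this 0/1 scheme could be reproduced as follows, where the column names and input file are assumptions:

```python
# Illustrative reproduction of the 2 x 2 coding described above using
# statsmodels rather than the SPSS procedure the authors actually used.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format file: one row per student per session, with
# group (0 = passive control, 1 = experimental), session (0 = pre-test,
# 1 = post-test), and the score on one DCR instrument.
df = pd.read_csv("dcr_scores.csv")

model = ols("score ~ C(group) * C(session)", data=df).fit()
table = sm.stats.anova_lm(model, typ=2)

# The C(group):C(session) row tests the interaction of interest:
# whether the pre-to-post change differs between the two groups.
print(table)
```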
Table 4 shows the comparison of the characteristics of the proposed approach with our research work.

2.5. Procedure for Analysis of Results

The results obtained by the experimental and passive control groups in the previous stages were subjected to statistical analysis using Student's t-tests and two-way ANOVA. In addition, the effect size was estimated in each case with Cohen's d. In the hypothesis tests, p ≤ 0.05 was used as the criterion for rejecting H0.
Procedure: the following is a list of the activities carried out to evaluate the strategy used.
  • The pre-test was carried out by applying the three instruments of the DCR test to the passive control and experimental groups. The responses of both groups were evaluated against the answer key, and their database was created.
  • The course designed in the blended learning modality was implemented for the experimental group.
  • The post-test was carried out, applying the three instruments of the DCR test to the control and experimental groups.
  • The responses of both groups were evaluated based on the key and their database was created.
  • The pre-test-post-test databases were unified and analyzed using IBM SPSS® software.
  • The results of both groups obtained in the pre-test were compared for each of the instruments, applying the Student’s t-test for independent samples.
  • The results of both groups obtained in the post-test were compared for each of the instruments, applying the Student’s t-test for independent samples.
  • The results of the passive control group obtained in the pre-test and post-test were compared for each of the instruments, applying the Student’s t-test for related samples.
  • The results of the experimental group obtained in the pre-test and post-test were compared for each of the instruments, applying the Student’s t-test for related samples.
  • The results obtained from the experimental and passive control groups were compared between the pre-test and post-test for each of the instruments, applying the two-way ANOVA test and looking for the interaction between group and session. In the cases where a p-value of less than 0.05 was obtained, the effect size was estimated with Cohen's d, as well as its respective statistical power with the G*Power software. A code sketch of the core comparisons follows this list.
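A compact sketch of these comparisons, using SciPy in place of SPSS and synthetic stand-in scores (the real data are the instrument scores of the participants), might look as follows:

```python
# Sketch of the core pre/post comparisons listed above; SciPy replaces
# SPSS, and the score vectors are synthetic stand-ins for real data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
exp_pre   = rng.normal(4.6, 1.9, 10)   # experimental group, pre-test
exp_post  = rng.normal(7.4, 1.3, 10)   # experimental group, post-test
ctrl_pre  = rng.normal(6.6, 2.1, 10)   # passive control, pre-test
ctrl_post = rng.normal(6.4, 2.3, 10)   # passive control, post-test

def cohens_d(x, y):
    """Cohen's d for two independent samples, using the pooled SD."""
    pooled = np.sqrt(((len(x) - 1) * np.var(x, ddof=1)
                      + (len(y) - 1) * np.var(y, ddof=1))
                     / (len(x) + len(y) - 2))
    return (np.mean(x) - np.mean(y)) / pooled

# Independent-samples comparisons between groups (pre, then post).
print(stats.ttest_ind(exp_pre, ctrl_pre), cohens_d(exp_pre, ctrl_pre))
print(stats.ttest_ind(exp_post, ctrl_post), cohens_d(exp_post, ctrl_post))

# Related-samples comparisons within each group (pre vs. post).
print(stats.ttest_rel(ctrl_pre, ctrl_post))
print(stats.ttest_rel(exp_pre, exp_post))
```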

3. Results

3.1. Diagnostic Stage

First, it was necessary to determine the students' level in each aspect of digital competence for research, based on the score achieved in the instruments that make up the test, as well as their general level based on the overall score. Since the scores lie in the interval from zero to ten, the following ranges were assigned to establish the students' level in each case: from 0 to 4.99, basic; from 5 to 7.99, intermediate; and from 8 to 10, advanced.
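A literal transcription of this banding into code, shown only to make the cut-offs explicit:

```python
# Score-to-level banding used for the DCR instruments (0-10 scale).
def dcr_level(score: float) -> str:
    if score < 5.0:
        return "basic"         # 0 to 4.99
    if score < 8.0:
        return "intermediate"  # 5 to 7.99
    return "advanced"          # 8 to 10

assert dcr_level(4.99) == "basic"
assert dcr_level(7.99) == "intermediate"
assert dcr_level(8.00) == "advanced"
```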
Table 5 shows the levels obtained in the pre-test by the students of the experimental and control groups in each dimension of the DCR. The initial level of the experimental group in the cognitive dimension of the DCR was higher than that of the passive control group. However, this is an isolated case, since in the other dimensions (procedural and attitudinal) the percentage of students demonstrating a higher level of mastery is greater in the passive control group.
On the other hand, in order to better assess and compare performance in the pre-test, prior to the intervention, a Student's t-test was performed and analyzed.
The results in Table 6 show that the average performance of the passive control group was superior in all aspects of the DCR. However, this difference was only significant in the procedural dimension, so only in this case could the null hypothesis be rejected. That is, it is the only case in which the two groups can be identified as distinct, based on the marked differences in the procedural dimension of the DCR. According to Cohen's conventions, the contrast obtained in the procedural dimension is estimated to be of a large size (d ≥ 0.8).
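For clarity, the independent-samples Cohen's d reported in Tables 6 and 7 follows the usual pooled-SD definition,

$$d = \frac{M_1 - M_2}{s_p}, \qquad s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}.$$

Applied to the procedural row of Table 6 (n1 = n2 = 10): s_p = √((9·1.8974² + 9·2.1187²)/18) ≈ 2.011, so d = (4.600 − 6.600)/2.011 ≈ −0.99, matching the tabulated −0.994.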

3.2. Intervention Stage and Assessment of Course Impact

In the first instance, the grades of each participant throughout the course are broken down, followed by the results obtained in the analyses comparing the performance of both groups before and after the intervention, in order to validate the research hypothesis.

3.2.1. Intervention Stage

During the application of the course, more than 13 effective hours of sessions were recorded, contained in 24 videos that include all the didactic material. Likewise, support videos were attached to the platform as alternative sources of information for the virtual sessions, as well as videos that illustrate cases related to the activities to be carried out.
During the intervention, the students also carried out a total of 17 evaluative activities previously presented in the didactic planning, of which ten were individual and seven were group activities. In each of them, the teacher assessed their knowledge and skills based on a rubric of activities. In addition, the students carried out a co-evaluation to estimate the general performance of their classmates in the activities, based on the rubric criteria: assigned work, quality of work, contribution to the team, and group integration.

3.2.2. Course Impact Assessment Stage

First, it was considered relevant to assess the performance of the participants in the experimental group in the post-test relative to those in the passive control group. For this purpose, the Student's t-test for independent samples was repeated, with results included in Table 7. The table shows that, unlike in the pre-test, the mean (M) of the experimental group is now higher than that of the passive control group in all aspects of digital competence for research, with particularly significant differences in the cognitive and attitudinal dimensions (p ≤ 0.05). Following Cohen's d, the effect produced by the intervention is of medium size in the procedural dimension (d ≥ 0.5) and of large size in the overall test score (d ≥ 0.8).
On the other hand, following Olaz [29] regarding possible confounding variables in the test–retest, a Student's t-test was performed comparing the performance of the passive control group between the pre-test and the post-test (see Table 8). The only aspect that showed an improvement was the cognitive one, although by a minimal value. Furthermore, supported by Cohen's conventions, it can be inferred that the second application of the test produced a large negative effect (d ≥ 0.8) in the attitudinal aspect, which had a medium impact on the overall score (d ≥ 0.5).
Likewise, when comparing the level of DCR between both test applications (see Table 9), there is a change in the attitude level in the post-test: a decrease on the order of 30% at the advanced level, which translated into a similar increase at the basic level. This was the main cause of the decrease in the overall level of the test.
Subsequently, a Student’s t-test for related samples was performed on the experimental group, estimating Cohen's d in each case in order to determine the effect for each aspect of the DCR. The results are presented in Table 10, which shows that all aspects of digital competence for research improved notably, based on the increase in their averages.
As for the cognitive and attitudinal dimensions, it is observed that their average values increased, although not significantly, since the results of the analysis show high p-values (p > 0.05). On the other hand, in the procedural aspect the increase in performance is significant, given that its p-value is low enough (p ≤ 0.05). It should be emphasized that this is consistent with Cohen's d, which estimates a large effect of the course on the procedural aspect (d ≥ 0.8), enough to statistically separate the pre-test and post-test averages, with a calculated statistical power of 0.983 at a 95% confidence level.
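The reported power was obtained with G*Power; a rough cross-check (an approximation, not the authors' exact configuration) can be run with statsmodels:

```python
# Approximate post-hoc power for the paired procedural comparison
# (|d| = 1.302, n = 10); the authors report 0.983 from G*Power, and
# the result here may differ slightly depending on the options assumed.
from statsmodels.stats.power import TTestPower

power = TTestPower().power(effect_size=1.302, nobs=10,
                           alpha=0.05, alternative="two-sided")
print(round(power, 3))
```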
Table 11 shows the results for the research hypothesis using the two-way ANOVA test, presenting the interaction values of the group type and session variables. The improvement obtained is statistically significant (p ≤ 0.05) in the procedural and attitudinal aspects, but not in the cognitive aspect. It should be noted that a large effect size was estimated only for the procedural aspect (f > 0.4 and η2 > 0.14), while the cognitive and attitudinal aspects had a medium effect size (f > 0.25 and η2 > 0.06). Nevertheless, low statistical power was obtained in all three cases, with the maximum in the procedural dimension.
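The two effect-size measures in Table 11 are related by Cohen's standard conversion,

$$f = \sqrt{\frac{\eta^2}{1 - \eta^2}},$$

so, for the procedural dimension, f = √(0.142/0.858) ≈ 0.41, consistent with the tabulated value.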
Finally, it is important to determine the impact of the intervention process on the DCR level of the students in the experimental group, the results of which are shown in Table 12.

4. Discussion

In the present study, the general objective was to develop a proposal that would significantly improve the level of digital competence for research in undergraduate engineering students. To achieve this, the following research hypotheses were tested:
Hypothesis Hi1.
There are significant differences in the cognitive dimension of digital competence for research among university engineering students with the implementation of a course designed in the blended learning modality.
Hypothesis Hi2.
There are significant differences in the procedural dimension of digital competence for research among engineering university students with the implementation of a course designed in the blended learning modality.
Hypothesis Hi3.
There are significant differences in the attitudinal dimension of digital competence for research among engineering university students with the implementation of a course designed in the blended learning modality.

4.1. Data Collection Systems

To achieve this, the first specific objective was to apply a validated performance instrument to measure the digital competence for research of a group of university engineering students in each of the dimensions of competence. Therefore, the three instruments of the DCR test were designed and validated, the results of which were presented in the methodology section.
Like the present study, different investigations seek to develop instruments that measure the cognitive dimension of the digital competency of university students, as in Olivares' [17] case. His instrument consisted of a diagnostic exam whose psychometric parameters were not calculated under either classical test theory or item response theory. Moreover, it lacks validity evidence, as it was validated only by expert judgment.
In general terms, the present research differentiates itself from other investigations that use typical performance instruments to measure the digital competency of university students, such as those of González et al. [18] and Gutiérrez et al. [30], as well as the master's study of Ambriz [1] and the doctoral studies of Ascencio [31], Marín [32], Olivares [17], and Pérez [19]. Notable among them is the work of Marín [32], whose main research objective was the design of an instrument that evaluates digital competency by means of a self-perception questionnaire, forgoing the opportunity to measure the participants' maximum performance.
Also notable are investigations that use an instrument measuring maximum performance without reporting psychometric standards or validity evidence [17]. The test proposed in the present study follows a high-quality technique [25,27]; moreover, the maximum and typical performance instruments it contains were validated by meeting goodness-of-fit criteria [28]. It therefore accomplishes the first objective, which corresponds to validating a performance test that measures digital competency for research in each of its dimensions for a group of engineering university students.

4.2. Course Design and Implementation

The second and third specific objectives consisted of designing and implementing a blended learning course that would improve the level of digital competency for research of a group of engineering university students. To this end, a blended learning course with five modules was designed on the Moodle platform, through which a total of 18 sessions, some in-person and some virtual, were carried out over a period of three months, with 17 individual and collaborative activities and a total of 24 support videos, obtaining satisfactory results. The results of this study are in line with those obtained by Dafonte et al. [33], whose techno-pedagogical flipped learning strategy, aimed at developing digital competency in a group of university students, resulted in a high level of satisfaction and positive assessment.
The results of the application coincide with comments from the participants: a high percentage thought that this methodology allowed them to keep up to date in their subjects and to have better communication and interaction in the classroom. Likewise, they expressed that the classes were more interesting and dynamic, and they therefore preferred this methodology over the traditional one.
The results are likewise in line with Arcos' [34] research, which designed and implemented a massive open online course (MOOC), hosted on the Moodle platform, to develop the digital competency of mathematics teachers. That study allowed technological skills to be built in a cooperative environment, considered ideal for the development of digital competency, thanks to the implementation of activities and participation in discussion forums. The flexibility of the MOOC was also valued as responsible for the results achieved, but was in turn weighed as a factor in participant dropout. This flexibility is similar to that afforded by blended learning, which is also in line with what was obtained in the application of the proposed course.
On the other hand, this study contrasts with the doctoral research of Olivares [17], who sought to strengthen digital competency in university students but planned a techno-educational strategy of only seven sessions, which proved insufficient in light of the results: contrary to the hypotheses, it was not possible to develop digital competency in any of its areas. With the 80 h proposed for this blended learning course, it was possible to significantly improve the procedural dimension of digital competency and, moreover, to completely homogenize the cognitive dimension of the participants at an advanced level. The planning of this course's design, as an attributable variable, can also be contrasted with that of González et al. [18], who proposed developing the digital competency of university students by means of assignments; moreover, their data were collected with a self-perception instrument, with which they concluded that development was significant in all dimensions.
It is worth noting that, for the application of the proposed course, it was ensured that the infrastructure was optimal, guaranteeing adequate network access during both the in-person and the virtual sessions, since all the participants had an assigned or personal computer. This arrangement was intended to avoid negatively affecting the interest and motivation of the participants in a way that could alter the process, as happened in Llorente's [35] research, which lacked adequate infrastructure: a single computing lab complicated the development of the intervention, which, he affirms, could have influenced the results obtained. Although in some cases there was a lack of connectivity due to drawbacks from the contingency, it did not seem to affect interest or motivation, as reflected in the results of the attitudinal dimension in the post-test.
Consequently, we consider that the objectives of designing and implementing a course in the blended learning modality to improve the level of digital competence for research of a group of university engineering students were achieved.

4.3. Hypothesis Contrast

The fourth and last specific objective consisted of determining the effect that the implementation of a course designed in the blended learning modality would have on each of the dimensions of digital competency for research for a group of engineering university students. This objective, derived from the other research objectives, serves to contrast the research hypotheses cited above. The specially designed instruments for the cognitive, procedural, and attitudinal dimensions of digital competence for research were applied to the passive control and experimental groups, before and after the intervention, according to the methodological design. It is worth pointing out that both groups integrated the same number of participants so that, as in Alducin and Vázquez [36], the possibility that a size difference would alter the results was eliminated, as may be supposed of the research of Aguado et al. [37] and Marín [32].
Initially, the results of the pre-test (Table 6) demonstrated that, before the intervention, the passive control group had greater development of its digital competency for research, based on the mean performance in each of the instruments of the DCR test. The average performance of the experimental group was inferior in all aspects of digital competency for research, significantly so in the procedural dimension (p = 0.039), with a large effect that also produced a medium-sized effect on the global score (d = −0.677). In this same vein, in agreement with what Olivares [17] carried out, we decided to select the group with the lowest performance as the experimental group so as not to favor the intervention, avoiding results altered by prior knowledge and abilities.
This contrasts with the studies of Vázquez and Alducin [38] and Aguado et al. [37], which omitted control of this effect; such an omission can cause confusion or misinterpretation of the results by attributing the measured impact solely to the intervention.
At the end of the implementation of the course designed in the blended learning modality with the experimental group, the post-test was again administered to both groups, and diametrically opposed results were obtained. The mean of the experimental group was superior to that of the passive control group in the three instruments that measure digital competence for research. Even though in the pre-test the experimental group's mean in the cognitive dimension was close to that of the passive control group (−0.03), in the post-test its development achieved a significant difference (p = 0.044) as a result of the intervention, with a large effect size (d = 0.969).
In the attitudinal dimension of the DCR, we obtained similar results. While the experimental group's mean prior to the intervention was slightly inferior to that of the passive control group (−0.13), once the application concluded, the difference in means increased overwhelmingly (2.33), with the same order of significance (p = 0.045) as in the cognitive dimension and an equally large effect size (d = 0.957), due to the significant decrease in the attitudinal dimension in the passive control group (p = 0.031). This last result concurs with Olaz [29] regarding a possible unfavorable disposition of the participants in the second application of the test, which requires the strategy to maintain their interest.
On the other hand, the procedural dimension showed a difference with a medium effect size (d = 0.535), which turned out not to be significant (p = 0.247), although the growth of its mean relative to the passive control group (1.0) was greater than that obtained in the cognitive dimension (0.83), where it was considered significant. It should be noted that before the intervention the difference was very wide (−2.0) in favor of the passive control group.
Subsequently, before validating the research hypothesis, a Student's t-test for related samples was carried out on the pre-test and post-test data of the passive control group. The results corroborated that applying the DCR test to the same group at two different times did not produce any considerable improvement: only a minute increase in the cognitive dimension (0.04) was achieved, while the scores worsened in the procedural (−0.2) and attitudinal (−1.65) dimensions. Therefore, we ruled out that memory and practice affected the results obtained with the passive control and experimental groups, as Olaz [29] posits.
To determine the impact of the intervention process on the digital competency for research of the university students, a Student's t-test for related samples was carried out on the pre-test and post-test data of the experimental group. In the cognitive dimension instrument, an increase in the mean was obtained (0.9), but the effect of the intervention was of medium size (−0.516), and the change was not significant (p = 0.137).
In what corresponds to the procedural dimension instrument, it should be noted that there was large growth thanks to the intervention. In the pre-test, the experimental group had a mean much lower than that of the passive control group (−2.0), and in the post-test it was able to outperform it (1.0). Consequently, based on the test results for the validation of the hypothesis, it can be established that the intervention had a large effect (d = −1.302) on the students' procedural dimension.
From the resulting means of the typical performance instrument, it can be verified that there was development in the attitudinal dimension (0.81). However, this growth was not significant (p = 0.128). In terms of the effect of the intervention course, we can conclude that it was positive and of medium size (d = −0.530), differing from Olaz's [29] observations regarding a probable attitude decline during the post-test. It should be noted that while the passive control group had a drastic decline, the experimental group had only a slight, non-generalized decrease: even if some participants were unmotivated, a large majority kept a positive attitude, since their scores were widely dispersed (SD = 1.6720) around a high mean.
Finally, with the objective of validating the research hypotheses, the two-way ANOVA test was carried out on the group type and session variables. Analyzing the results for the first research hypothesis, corresponding to the cognitive dimension, we observed that the improvement produced was not decisive, given that its significance value (p = 0.221) was not small enough (p ≤ 0.05) to reject the null hypothesis. Despite this, the improvement achieved was estimated to have a medium effect size due to the intervention with the course (f = 0.25 and η2 = 0.04), which coincides with Cohen's d for the experimental group. Consequently, the null hypothesis H01 is accepted, establishing that there are no significant differences in the cognitive dimension of digital competence for research among engineering university students with the implementation of a course designed in the blended learning modality.
However, in light of the results of the Student's t-test for the experimental group, we observed that, thanks to the intervention, all the students reached an advanced level in the cognitive dimension, which could weigh against the above. This is understandable, since this aspect had the smallest margin for improvement, thus impeding significant growth. It should also be mentioned that the course homogenized the cognitive dimension of digital competency for research: the post-test grades were mostly concentrated around the mean (SD = 0.6173), in contrast with the pre-test, whose results were more than twice as dispersed (SD = 1.5191).
Moreover, from the results for the validation of the second research hypothesis, we observed that the improvement in the procedural dimension was significant (p = 0.02), together with the estimate that the course had a large effect size (f = 0.41 and η2 = 0.142). We therefore rejected the null hypothesis H02 and accepted the research hypothesis Hi2, which posits that there are significant differences in the procedural dimension of digital competency for research among engineering university students with the implementation of the course designed in the blended learning modality.
Likewise, regarding the contrast of the third research hypothesis, corresponding to the attitudinal dimension, we estimate that the course had a medium effect size (f = 0.35 and η2 = 0.107); despite this, the improvement in performance is considered significant (p = 0.045). We therefore rejected the null hypothesis H03 and accepted the research hypothesis Hi3, which posits that there are significant differences in the attitudinal dimension of digital competency for research among engineering university students with the implementation of the course designed in the blended learning modality.
However, despite the satisfactory results in the hypothesis contrasts for the procedural and attitudinal dimensions, they cannot be generalized to the whole study population and should be taken with the necessary reservations, given that they are not backed by their statistical power, which was very low in both cases. In the procedural dimension, significant development was achieved with a power of 0.662, which represents a 33.8% probability of committing a Type II error, that is, of accepting a null hypothesis that is in fact false [39,40]. In the attitudinal dimension, a power of 0.523 was obtained, with a 47.7% probability of committing the same error.
This shows that power depends on several factors, chief among them the sample size, which was very small in this investigation. To obtain a statistical power of 0.95 with a 5% probability of committing a Type II error, a sample size of 114 would be required, consistent with the a priori estimates carried out with the G*Power software. This is a great area of opportunity for this research.
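The exact G*Power configuration behind the figure of 114 is not reported; the sketch below shows how such an a priori estimate is typically obtained, with the effect size and design options marked as assumptions:

```python
# A priori sample-size estimate analogous to the G*Power computation
# mentioned above. The effect size (Cohen's f) and number of cells are
# assumptions; the authors' own estimate was a total N of 114, and the
# output here will vary with the options chosen.
from statsmodels.stats.power import FTestAnovaPower

n_total = FTestAnovaPower().solve_power(effect_size=0.35,  # assumed f
                                        alpha=0.05,
                                        power=0.95,
                                        k_groups=4)        # 2 x 2 cells
print(round(n_total))
```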
Likewise, it should be mentioned that, if the hypothesis tests are based on the adjusted p-values of Table 11 obtained from the Bonferroni correction, it would follow that none of the hypotheses is fulfilled, so the conclusions of this research should be directed towards analyzing these areas of opportunity.

5. Conclusions

The present study concludes that the implementation of a course designed in the blended learning modality improves digital competence for research (DCR). This is verified by the development of the DCR dimensions in a group of undergraduate engineering students when comparing their performance against that of a passive control group.
Regarding the contributions, the proposed operationalization of the DCR is based on the DigComp model, and the results allow verifying that its dimensions adequately express digital competence. The proposal also supports the DigComp 2.0 model [41] because, unlike other works that address students' digital competence in general terms, this one focuses on the digital competence that students require to carry out research. As expressed by López and Sevillano [42], the level of students' digital competence is not necessarily reflected in their academic performance, basically because they tend to apply it more to leisure and recreation activities.
Likewise, the design of the course was well accepted. Initially, the participants considered themselves to have an acceptable level of ICT (Information and Communication Technology) abilities, but their perception changed when they realized that they made only basic use of ICT, not centered on research.
However, the results of this research should be taken with reservations due to the small sample size. Better results could therefore be obtained if the intervention time were increased and the sample size exceeded 114 participants, which would significantly increase the statistical power and provide more certainty for generalizing to the entire study population.

Author Contributions

A.S., A.L. and C.A.T. designed and validated the questionnaires; A.S., A.L., R.M.W., R.C.S. and E.G.N. implemented the questionnaires; A.S., A.L., R.M.W., R.C.S., E.G.N. and C.A.T. prepared and analyzed the results; A.S., A.L. and F.L. wrote and corrected the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

CVU number 98428.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ambriz, C. Digital Skin of Students Who Enter Higher Education: Case Study. Master’s Thesis, Instituto Politécnico Nacional, Mexico City, Mexico, October 2014.
  2. Recommendation of the European Parliament and of the Council of 18 December 2006 on Key Competences for Lifelong Learning. Available online: http://eur-lex.europa.eu/legal-content/ES/TXT/PDF/?uri=CELEX:32006H0962&from=ES (accessed on 15 October 2017).
  3. Ala-Mutka, K. Mapping Digital Competence: Towards a Conceptual Understanding, 1st ed.; Publications Office of the European Union: Luxembourg, 2011.
  4. Ferrari, A. Digital Competence in Practice: An Analysis of Frameworks, 1st ed.; Publications Office of the European Union: Luxembourg, 2012.
  5. Janssen, J.; Stoyanov, S. Online Consultation on Experts’ Views on Digital Competence, 1st ed.; Publications Office of the European Union: Luxembourg, 2012.
  6. Ferrari, A. DIGCOMP: A Framework for Developing and Understanding Digital Competence in Europe, 1st ed.; Publications Office of the European Union: Luxembourg, 2013.
  7. Martin, A.; Grudziecki, J. DigEuLit: Concepts and tools for digital literacy development. Innov. Teach. Learn. Inf. Comput. Sci. 2006, 5, 249–267.
  8. Matrix of ICT Skills for Learning. Available online: http://eduteka.icesi.edu.co/pdfdir/CHILE_Matriz_Habilidades_TIC_para_el_Aprendizaje.pdf (accessed on 28 May 2018).
  9. Digital Transformation: A Framework for ICT Literacy. A Report of the International ICT Literacy Panel. Available online: https://www.ets.org/Media/Research/pdf/ICTREPORT.pdf (accessed on 5 June 2018).
  10. Common Digital Competence Framework for Teachers. Available online: https://aprende.intef.es/sites/default/files/2018-05/2017_1020_Marco-Com%C3%BAn-de-Competencia-Digital-Docente.pdf (accessed on 14 April 2018).
  11. Redecker, C.; Punie, Y. European Framework for the Digital Competence of Educators: DigCompEdu, 1st ed.; Publications Office of the European Union: Luxembourg, 2017.
  12. Digital Skills Framework. Available online: https://www.gob.mx/cms/uploads/attachment/file/444450/Marco_de_habilidades_digitales_vf.pdf (accessed on 21 July 2019).
  13. Yousef, A.M.F.; Sumner, T. Reflections on the last decade of MOOC research. Comput. Appl. Eng. Educ. 2021, 29, 648–665.
  14. Hernández, E.E.; Romero, S.I.; Ramírez, M.S. Evaluation of digital didactic skills in massive open online courses: A contribution to the Latin American movement. Comunicar Rev. Cient. de Comun. y Educ. 2015, 22, 81–90.
  15. Moreira, A.; Alberto, B.; Pereira, I.; Teixeira, M.C. MOOC “Digital competences for teachers”: An innovative training practice. RIED 2018, 21, 243–261.
  16. Napal, M.; Peñalva, A.; Mendióroz, A.M. Development of Digital Competence in Secondary Education Teachers’ Training. Educ. Sci. 2018, 8, 104.
  17. Olivares, K.M. Development of a Techno-Educational Strategy to Strengthen Digital Competence in University Students. Doctoral Thesis, Instituto Tecnológico de Sonora, Ciudad Obregón, Mexico, October 2017.
  18. González, V.; Román, M.; Prendes, M.P. Training in digital competences for university students based on the DigComp model. Rev. Electrón. de Tecnol. Educ. 2018, 65, 1–15.
  19. Pérez, A. Alfabetización Digital y Competencias Digitales en el Marco de la Evaluación Educativa: Estudio en Docentes y Alumnos de Educación Primaria en Castilla y León. Doctoral Thesis, Universidad de Salamanca, Salamanca, Spain, July 2015.
  20. Lagunes, A. La Competencia Investigadora en Universitarios Mediante el Blended Learning y Flipped Classroom. In Estrategias de Investigación Socioeducativas: Propuestas para la Educación Superior, 1st ed.; Ramírez, M.A., Ed.; Cenid: Puebla, Mexico, 2016; Volume 1, pp. 95–112.
  21. Boletín Oficial del Estado. Ministerio de Educación y Formación Profesional. Available online: https://www.boe.es/boe/dias/2020/07/13/pdfs/BOE-A-2020-7775.pdf (accessed on 26 July 2020).
  22. Bisquerra, R.; Dorio, I.; Gómez, J.; Latorre, A.; Martínez, F.; Massot, I.; Mateo, J.; Sabariego, M.; Sans, A.; Torrado, M.; et al. Metodología de la Investigación Educativa, 1st ed.; La Muralla: Barcelona, Spain, 2004.
  23. Hernández, R.; Fernández, C.; Baptista, P. Metodología de la Investigación, 6th ed.; McGraw Hill: Mexico City, Mexico, 2014.
  24. Figueroa, C. Los test educativos y sus aportes a la educación. Una mirada a algunos países de Europa, América y Colombia. Rev. Interacción 2015, 14, 157–173.
  25. Bonillo, A. Análisis de los Ítems. In Psicometría, 1st ed.; Meneses, J., Barrios, M., Bonillo, A., Cosculluela, A., Lozano, L.M., Turbany, J., Valero, S., Eds.; UOC: Barcelona, Spain, 2013; Volume 1, pp. 231–258.
  26. Hair, J.F.; Black, W.C.; Babin, B.J.; Anderson, R.E. Multivariate Data Analysis, 7th ed.; Pearson Education Limited: London, UK, 2014.
  27. Pérez, J.C. Evaluación Criterial del Área Metodológica de la Carrera de Psicología de la UABC. Doctoral Thesis, Universidad Autónoma de Baja California, Ensenada, Mexico, February 2014.
  28. Hu, L.T.; Bentler, P.M. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Struct. Equ. Modeling 1999, 6, 1–55.
  29. Olaz, F.O. Confiabilidad. In Introducción a la Psicometría, 1st ed.; Tornimbeni, S., Olaz, F., Pérez, E., Eds.; Paidós SAICF: Buenos Aires, Argentina, 2008; Volume 1, pp. 71–99.
  30. Gutiérrez, J.J.; Cabero, J.; Estrada, L.I. Diseño y validación de un instrumento de evaluación de la competencia digital del estudiante universitario. Rev. Espac. 2017, 38, 1–27.
  31. Ascencio, P. Estándar de Competencia Digital para Estudiantes de Educación Superior de la Universidad de Magallanes, Chile. Doctoral Thesis, Universitat de Barcelona, Barcelona, Spain, 2017.
  32. Marín, R. Diseño y Validación de un Instrumento de Evaluación de la Competencia Digital Docente. Doctoral Thesis, Universitat de les Illes Balears, Illes Balears, Spain, 2017.
  33. Dafonte, A.; García, O.; Ramahí, D. Flipped learning and digital competence: Techno-pedagogical design and university students’ perception. Index.Comun. 2018, 8, 275–294.
  34. Arcos, R.F. Elaboración de un MOOC para el Desarrollo de la Competencia Digital en Docentes de Matemáticas. Master’s Thesis, Universidad Casa Grande, Guayaquil, Ecuador, June 2019.
  35. Llorente, M.C. Blended Learning para el Aprendizaje en Nuevas Tecnologías Aplicadas a la Educación: Un Estudio de Caso. Doctoral Thesis, Universidad de Sevilla, Seville, Spain, 2018.
  36. Alducin, J.M.; Vázquez, A.I. Mejora del rendimiento en ingeniería a través de blended-learning. Digit. Educ. Rev. 2014, 25, 87–107.
  37. Aguado, D.; Arranz, V.; Valera, A.; Marín, S. Evaluación de un programa blended-learning para el desarrollo de la competencia trabajar en equipo. Psicothema 2011, 23, 356–361.
  38. Vázquez, A.I.; Alducin, J.M. Blended-learning e ingeniería: Nivel de uso, rendimiento académico y valoración de los alumnos. Educ. Knowl. Soc. 2014, 15, 120–148.
  39. Lipsey, M.W. Design Sensitivity: Statistical Power for Experimental Research, 1st ed.; Sage Publications: Newbury Park, CA, USA, 1990.
  40. Lipsey, M.W.; Hurley, S.M. Design Sensitivity: Statistical Power for Experimental Research. In The SAGE Handbook of Applied Social Research Methods, 2nd ed.; Bickman, L., Rog, D.J., Eds.; SAGE Publications: Thousand Oaks, CA, USA, 2009; Volume 1, pp. 44–76.
  41. Vuorikari, R.; Punie, Y.; Carretero, S.; Van Den Brande, G. DigComp 2.0: The Digital Competence Framework for Citizens. Update Phase 1: The Conceptual Reference Model, 1st ed.; Publications Office of the European Union: Luxembourg, 2016.
  42. López-Gil, K.S.; Sevillano, M.L. Desarrollo de competencias digitales de estudiantes universitarios en contextos informales de aprendizaje. Educ. Siglo XXI 2020, 38, 53–78.
Table 1. Technical quality parameters of the three instruments.

| DCR Dimension | Pmin < P < Pmax | P | D | Rbis |
| Knowledge | 0.20 < P < 0.79 | 0.483 | 0.469 | 0.381 |
| Skills | 0.21 < P < 0.46 | 0.318 | 0.708 | 0.606 |
| Attitudes | – | – | – | 0.420 |

Obtained with classical test theory [25,27].
Table 2. Reliability of the three instruments.

| DCR Dimension | KR-20 | Composite Reliability | Cronbach’s Alpha |
| Knowledge | 0.85 | 0.829 | – |
| Skills | 0.85 | 0.820 | – |
| Attitudes | 0.85 | 0.831, 0.824 and 0.833 | 0.872 |
Table 3. Evidence of validity obtained from the EFA (exploratory factor analysis) and CFA (confirmatory factor analysis).

| Parameter | Criterion | Value |

Knowledge
| χ2/df | >1 (Excellent) | 397/230 = 1.72 |
| SRMR | <0.08 (Excellent) | 0.0677 |
| RMSEA | >0.06 (Acceptable) | 0.0609 |

Skills
| χ2/df | >1 (Excellent) | 8.41/5 = 1.682 |
| PClose | >0.05 (Excellent) | 0.135 |

Attitudes
| χ2/df | >1 (Excellent) | 241.236/149 = 1.62 |
| CFI | >0.90 (Acceptable) | 0.905 |
| SRMR | <0.08 (Excellent) | 0.064 |
| RMSEA | >0.06 (Acceptable) | 0.063 |
| PClose | >0.01 (Excellent) | 0.070 |

Applying the goodness-of-fit criteria of Hu and Bentler [28].
Table 4. Comparison of characteristics of the proposed approach and our research work.

Paradigm type: Positivist paradigm.
Characteristics of the research approach: A method in which the researcher maintains an independent relationship with the process, with a neutral posture, objectively explaining the phenomenon and verifying the hypothesis.
Research characteristics:
  • It follows the hypothetical-deductive model, so hypotheses that establish causal relationships are formulated, contrasted, and verified.
  • It is quantitative research, so the researcher maintains an independent relationship with the process and a neutral stance, since objective tests, scored automatically online, are used for data collection.
  • The method is explanatory and quasi-experimental, with pre-test, post-test, and passive control group, given that the sample is made up of intact and non-random groups, thus guaranteeing experimental control.
  • One variable (the blended learning course) is manipulated to observe its effect on another (the DCR level).
  • The study is causal-comparative, so comparisons are made between the experimental group and the passive control group.
  • It seeks the cause–effect relationship based on the differences between groups.
Table 5. Comparison of the level of the DCR between the experimental and passive control groups in the pre-test.

| DCR Dimension | Group 1 Level (Basic / Intermediate / Advanced) | Group 2 Level (Basic / Intermediate / Advanced) |
| Knowledge | 10% / 20% / 70% | 0% / 50% / 50% |
| Skills | 60% / 30% / 10% | 10% / 50% / 40% |
| Attitudes | 0% / 60% / 40% | 0% / 40% / 60% |

Group 1 = experimental group, group 2 = passive control group.
Table 6. Comparative performance between groups in the pre-test.

| DCR Dimension | Group 1 M | Group 1 SD | Group 2 M | Group 2 SD | df | t | p | Cohen’s d |
| Knowledge | 7.990 | 1.5191 | 8.020 | 0.9931 | 18 | −0.052 | 0.959 | −0.023 |
| Skills | 4.600 | 1.8974 | 6.600 | 2.1187 | 18 | −2.224 | 0.039 | −0.994 |
| Attitudes | 7.720 | 0.6909 | 7.850 | 1.5168 | 18 | −0.247 | 0.808 | −0.110 |

Group 1 = experimental group, group 2 = passive control group.
Table 7. Comparison of the performance between groups in the post-test.

| DCR Dimension | Group 1 M | Group 1 SD | Group 2 M | Group 2 SD | df | t | p | Cohen’s d |
| Knowledge | 8.890 | 0.6173 | 8.060 | 1.0416 | 18 | 2.168 | 0.044 | 0.969 |
| Skills | 7.400 | 1.3499 | 6.400 | 2.2706 | 18 | 1.197 | 0.247 | 0.535 |
| Attitudes | 8.530 | 1.6720 | 6.200 | 2.9181 | 14.334 | 2.191 | 0.045 | 0.957 |

Group 1 = experimental group, group 2 = passive control group.
Table 8. Comparative performance of the passive control group between the pre-test and post-test.

| DCR Dimension | Pre-Test M | Pre-Test SD | Post-Test M | Post-Test SD | df | t | p | Cohen’s d |
| Knowledge | 8.020 | 0.9931 | 8.060 | 1.0416 | 9 | −0.172 | 0.867 | −0.055 |
| Skills | 6.600 | 2.1187 | 6.400 | 2.2706 | 9 | 0.557 | 0.591 | 0.176 |
| Attitudes | 7.850 | 1.5168 | 6.200 | 2.9181 | 9 | 2.557 | 0.031 | 0.808 |

Positive values of Cohen’s d indicate that the mean decreased in the post-test.
Table 9. Comparison of the DCR level of the passive control group between the pre-test and post-test.

| DCR Dimension | Pre-Test Level (Basic / Intermediate / Advanced) | Post-Test Level (Basic / Intermediate / Advanced) |
| Knowledge | 0% / 50% / 50% | 0% / 50% / 50% |
| Skills | 10% / 50% / 40% | 20% / 40% / 40% |
| Attitudes | 0% / 40% / 60% | 30% / 40% / 30% |

The overall score level corresponds to the overall weighting of all aspects of the DCR.
Table 10. Comparison of the performance of the experimental group between the pre-test and post-test.

| DCR Dimension | Pre-Test M | Pre-Test SD | Post-Test M | Post-Test SD | df | t | p | Cohen’s d |
| Knowledge | 7.990 | 1.5191 | 8.890 | 0.6173 | 9 | −1.633 | 0.137 | −0.516 |
| Skills | 4.600 | 1.8974 | 7.400 | 1.3499 | 9 | −4.118 | 0.003 | −1.302 |
| Attitudes | 7.720 | 0.6909 | 8.530 | 1.6720 | 9 | −1.677 | 0.128 | −0.530 |

Negative values of Cohen’s d indicate that the mean increased in the post-test.
Table 11. Results of the ANOVA test for the interaction of the independent variables.

| DCR Dimension | df | F | p | p Adjusted | η2 | Effect Size f | Observed Power |
| Knowledge | 1 | 1.554 | 0.221 | 0.050 | 0.041 | 0.25 | 0.228 |
| Skills | 1 | 5.973 | 0.020 | 0.016 | 0.142 | 0.41 | 0.662 |
| Attitudes | 1 | 4.295 | 0.045 | 0.025 | 0.107 | 0.35 | 0.523 |
Table 12. Comparison of the DCR level of the experimental group between the pre-test and post-test.

| DCR Dimension | Pre-Test Level (Basic / Intermediate / Advanced) | Post-Test Level (Basic / Intermediate / Advanced) |
| Knowledge | 10% / 20% / 70% | 0% / 0% / 100% |
| Skills | 60% / 30% / 10% | 0% / 40% / 60% |
| Attitudes | 0% / 60% / 40% | 10% / 20% / 70% |

The level of the overall score corresponds to the weighting of all aspects of the DCR.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
