Article

Validation Using Structural Equations of the “Cursa-T” Scale to Measure Research and Digital Competencies in Undergraduate Students

by Rocío Elizabeth Duarte Ayala 1, Antonio Palacios-Rodríguez 2,*, Yunuen Ixchel Guzmán-Cedillo 3 and Leticia Rodríguez Segura 4

1 School of Health Sciences, University of the Valley of Mexico (UVM), Mexico City 06060, Mexico
2 Faculty of Educational Sciences, Universidad de Sevilla, 41013 Seville, Spain
3 Faculty of Psychology, Universidad Nacional Autónoma de México, Mexico City 04510, Mexico
4 Institutional Directorate of Educational Innovation and Research, University of the Valley of Mexico (UVM), Mexico City 06060, Mexico
* Author to whom correspondence should be addressed.
Societies 2024, 14(2), 22; https://doi.org/10.3390/soc14020022
Submission received: 27 November 2023 / Revised: 6 February 2024 / Accepted: 7 February 2024 / Published: 12 February 2024

Abstract:
Research competencies are considered essential in fields such as science, academia, and technology, and this research seeks to provide a reliable tool to evaluate them. Therefore, the main objective of this study is to validate the “Cursa-T” scale through an exploratory and confirmatory factor analysis, as well as through structural equations, to ensure that the data collected fit the proposed theoretical model. The study sample consists of 1104 university students, mostly female, and a questionnaire based on previous studies is used. The most important results include the validation of the “Cursa-T” scale through advanced statistical methods, namely exploratory and confirmatory factor analysis. The scale is found to be reliable and valid for measuring undergraduate research and digital competencies, and the data collected fit the proposed theoretical model. The discussion highlights the importance of technology, devices, software, and the use of platforms in the development of research and digital competencies in Health Sciences students. It also reflects on the role of social networks in these competencies, as they can facilitate participation in academic communities. Ultimately, the research underlines the relevance of preparing undergraduate students in health areas for research in an increasingly digitized environment.

1. Introduction

1.1. Research Skills in Undergraduate Students

Research competencies encompass the set of skills and knowledge essential for effectively conducting research endeavors [1]. Essentially, they denote the proficiency in utilizing scientific knowledge, formulating pertinent questions, and deriving conclusions based on evidence, thereby contributing to a comprehensive understanding of the world or the transformations occurring within it. Ramírez-Armenta et al. [2] underscore the significance of these competencies in facilitating the research activities of individuals, emphasizing the need to acknowledge variations in mastery and to delineate the thresholds between levels of acquisition, such as at the bachelor’s, undergraduate, or postgraduate levels.
These competencies hold paramount importance across various domains, including but not limited to science, academia, and technology, where the generation of new knowledge or the resolution of intricate problems is imperative. Fostering research skills in undergraduate students serves a dual purpose: not only does it prepare them for advanced studies at the postgraduate level, but it also provides them with valuable research experiences that can lead to groundbreaking discoveries, innovations, or projects.
Moreover, recognizing the pivotal role of recent, high-quality research results in informed decision-making within professional settings, it becomes imperative to instill the methodology of utilizing scientific evidence throughout the entire span of university education [1,3,4,5,6]. The ability to critically evaluate and apply research findings is an asset that enhances professional competence and contributes to advancements in diverse fields.
In the realm of health, specific competencies in research are identified as foundational for the training of healthcare professionals, as highlighted by various studies [4,7,8]. These competencies equip health practitioners with the necessary tools to navigate the complex landscape of healthcare research, ensuring that they can contribute meaningfully to advancements in medical knowledge and healthcare practices. As the healthcare field continually evolves, the cultivation of research competencies becomes indispensable for professionals striving to stay abreast of the latest developments and contribute to the improvement of health outcomes.

1.2. The Importance of Technology for Health Research

Technology is already part of the daily routine of teachers and students, but using it well requires skill: when applied properly, it supports and facilitates academic work at university [9].
In the context of research competencies, several technological skills are considered relevant: (a) information search and management, based on the use of specialized databases, online search engines, and other tools for accessing relevant scientific literature and efficiently managing the information obtained; (b) use of software for analysis, visualization, interpretation, and presentation of health research results; (c) use of online communication and collaboration technologies, such as videoconferencing, email, and platforms that allow students to interact with other researchers, share results, and collaborate on research projects; (d) use of technologies such as mobile devices, sensors, and specialized applications to collect information accurately and efficiently; and (e) privacy considerations when using technology in health research, in compliance with established ethical standards [10,11,12].
All of these skills are fostered in both graduate and undergraduate students, at varying levels of proficiency. It is important to note that the technological skills required may vary depending on the specific field of health.

1.3. Measuring Competencies

In the review of the literature, few works were found that share valid and reliable instruments measuring investigative competencies in undergraduate training; even fewer are specific to the health areas, and a fundamental dimension that is missing is the inclusion of technology as a basis for carrying out research nowadays [13,14]. Digital competencies have been measured and studied as a construct separate from the ability to carry out research, or as an activity far removed from undergraduate students, with psychometrically validated instruments dedicated to assessing skills in postgraduate students [15,16,17,18,19].
To address this gap, it is imperative to examine competency models from various international contexts. This section delves into models employed in Europe, Australia, and the United States, offering historical context and scrutinizing the reasons behind the design of these competency frameworks. Understanding the evolution of these models is vital for assessing their relevance and effectiveness in measuring research skills at the undergraduate level.
European countries have been at the forefront of developing frameworks to evaluate research skills among undergraduate students. The European Higher Education Area (EHEA) has witnessed a proliferation of competency-based assessment models, such as the European Qualifications Framework (EQF) and the Tuning Educational Structures in Europe project. These initiatives were born out of the Bologna Process, aimed at harmonizing higher education systems across Europe.
The EQF, for instance, was designed to facilitate comparability and transparency of qualifications across European countries, fostering a competitive environment that encourages the development of research skills. The Bologna Process sought to create a standardized and competitive landscape to enhance the quality and relevance of education, aligning it with the demands of the knowledge-based economy [16].
In Australia, the Quality Indicators for Learning and Teaching (QILT) framework has been instrumental in measuring research skills at the undergraduate level. Originating from a need to enhance the accountability and transparency of higher education institutions, the QILT framework integrates student feedback, graduate outcomes, and employer satisfaction to assess the overall quality of education.
The competitive nature of this model stems from Australia’s commitment to providing students with a world-class education that equips them with research skills applicable in a global context. The QILT framework reflects a historical shift towards outcome-oriented education, emphasizing the practical relevance of research skills for students in the Australian higher education landscape [16,17].
In the United States, the National Survey of Student Engagement (NSSE) and the Council for Accreditation of Educator Preparation (CAEP) are examples of models assessing undergraduate research skills. The historical context of these models lies in the U.S. commitment to promoting innovation, critical thinking, and research prowess among its students.
The competitive aspect of these models is rooted in the U.S. emphasis on global leadership in research and development. The models aim to ensure that undergraduate education aligns with the evolving needs of a knowledge-driven economy, fostering a culture of competitiveness and excellence [18].
While these international competency models have undoubtedly contributed to shaping undergraduate research skills assessment, a critical examination is necessary. The design of these models often reflects regional priorities, and their applicability to diverse contexts may be limited. Additionally, a potential criticism lies in the competitive dynamics these models foster, potentially overshadowing collaborative aspects of research, which are equally crucial in today's interconnected world.
However, a nuanced critique is essential to ensure that these models effectively address the multifaceted dimensions of research competencies, including the integration of technology and collaborative approaches.
In light of the above, the main objective of this research is to design and validate a scale to measure the inquiry and digital competencies of undergraduate students.

2. Methodology

2.1. Sample

The sample consists of 1104 health science students from a Mexican university: 837 women and 267 men, with an average age of 26 years. All of them agreed to participate in the research after being informed of its main objectives (informed consent). With 1104 participants from a population of 16,000 people, a maximum margin of sampling error of ±2.85% is obtained at a confidence level of 95%.
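The reported sampling error can be reproduced with the standard finite-population formula for a proportion, e = z·sqrt(p(1 − p)/n)·sqrt((N − n)/(N − 1)). The following minimal sketch (Python, assuming the conservative worst case p = 0.5) verifies the ±2.85% figure:

```python
# Minimal check of the reported sampling error (+/-2.85%, 95% confidence)
# for n = 1104 respondents drawn from a population of N = 16,000.
import math

N = 16_000   # population size
n = 1_104    # sample size
z = 1.96     # critical value for a 95% confidence level
p = 0.5      # worst-case proportion (maximizes the margin of error)

# Standard error of a proportion with the finite-population correction
se = math.sqrt(p * (1 - p) / n) * math.sqrt((N - n) / (N - 1))
print(f"Maximum margin of error: +/-{z * se:.2%}")  # -> +/-2.85%
```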

2.2. Data Collection Instrument

To collect the data, a questionnaire called “Cursa-T” was developed on the basis of similar studies [20,21,22,23,24]. Each item is measured on a 5-point Likert scale, where 0 represents the minimum value and 4 the maximum.
The questionnaire has a short heading in which the main objective of the research is presented. It then proceeds to collect the socio-demographic characteristics of the participants. Next, 51 items are presented to assess the students’ self-perception of research competencies (Table 1).
The dimensions of the instrument are intricately linked to the Research Competencies Development Program, a framework grounded in the four foundational pillars of education—knowing, knowing how, doing, and being—within the specific institutional context where the evaluation took place. These pillars serve as the guiding principles for shaping a comprehensive approach to research competencies, acknowledging the multifaceted nature of education and skill development.
Simultaneously, the theoretical framework advanced by Juárez and Torres [9], emphasizing investigative competencies at varying levels—basic, intermediate, and advanced—was thoughtfully incorporated into the evaluative process. This theoretical perspective provides a nuanced understanding of research competencies, recognizing the evolution and progression of these skills across different proficiency levels. By considering these hierarchical levels, the evaluation endeavors to capture the diverse spectrum of competencies individuals may possess, ensuring a comprehensive and nuanced assessment.
This dual consideration of the institution’s educational pillars and the tiered framework proposed by Juárez and Torres not only enriches the evaluation process but also aligns it with a broader educational philosophy. It acknowledges the dynamic interplay between foundational knowledge, practical application, and the evolving nature of research competencies. As such, the evaluation instrument aims to reflect a holistic perspective that encompasses both the institution-specific educational principles and the overarching theoretical framework, fostering a more nuanced and contextually relevant understanding of research competencies.

2.3. Data Collection and Analysis Procedure

The questionnaire was administered in digital format through the “Microsoft Forms” platform. Data collection took place between October 2022 and January 2023. The purposes of the study were explained and the collaboration of the students was requested. The anonymity of the participants was ensured at all times.
The data matrix was modified for operational reasons: new “dimension” variables (Dim. A, Dim. B, Dim. C, Dim. D, and Dim. E) were created, calculated as the average of the items that compose them, with a minimum value of 0 and a maximum of 4.

The reliability, discriminant validity, and convergent validity of the questionnaire were calculated using the following coefficients: Cronbach's alpha, Composite Reliability (CR), Average Variance Extracted (AVE), and Maximum Shared Variance (MSV). To complement these results, an inferential analysis between items and dimensions was carried out using bivariate correlations, specifically Spearman's correlation coefficient ρ. Construct validity was obtained through an exploratory factor analysis (EFA): factors were extracted with the principal components method and rotated orthogonally using the Varimax method with Kaiser normalization.

Once the number of factors was determined, a confirmatory factor analysis (CFA) was performed to check whether the theoretical measures of the model are consistent, through modeling diagrams and structural equations [25]; that is, whether the data fit the hypothetical measurement model revealed by the EFA. The estimation method used to contrast the theoretical model was weighted least squares (WLS), which provides consistent estimates in samples that do not meet normality criteria [25]. This procedure was carried out with the AMOS program, which can model complex hypothetical relationships between variables using structural equation modeling (SEM). The non-normality of the data was established through a descriptive study of skewness and kurtosis and confirmed by the Kolmogorov–Smirnov goodness-of-fit test, with significance (p-value) equal to 0.000 for all items (non-normal distribution).
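As an illustration only, the sketch below reproduces the main steps of this pipeline in Python with the pandas, scipy, and factor_analyzer libraries. The original analyses were run in AMOS and a standard statistics package; the file name and column labels here are assumptions for the example.

```python
# Illustrative re-implementation of the analysis pipeline (not the authors'
# scripts). Assumes the 51 item responses are columns "A1".."E9" of a
# DataFrame, with values 0-4, stored in a hypothetical CSV file.
import pandas as pd
from scipy import stats
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

df = pd.read_csv("cursa_t_responses.csv")  # hypothetical file name

# Dimension scores: the mean of each dimension's items (range 0-4),
# mirroring the "Dim. A".."Dim. E" variables created in the data matrix.
dims = {d: [c for c in df.columns if c.startswith(d)] for d in "ABCDE"}
scores = pd.DataFrame(
    {f"Dim. {d}": df[cols].mean(axis=1) for d, cols in dims.items()}
)

# Normality screening: Kolmogorov-Smirnov test per standardized item;
# p < 0.05 on every item indicates non-normal distributions (as reported).
for item in df.columns:
    z = (df[item] - df[item].mean()) / df[item].std()
    print(item, f"{stats.kstest(z, 'norm').pvalue:.3f}")

# Suitability of factoring: Bartlett's sphericity test and KMO index.
chi2, p_value = calculate_bartlett_sphericity(df)
_, kmo_total = calculate_kmo(df)
print(f"Bartlett p = {p_value:.3f}, KMO = {kmo_total:.3f}")

# EFA: principal components extraction with orthogonal Varimax rotation,
# retaining the five theoretical factors.
efa = FactorAnalyzer(n_factors=5, method="principal", rotation="varimax")
efa.fit(df)
loadings = pd.DataFrame(efa.loadings_, index=df.columns).round(3)
print(loadings)
```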

3. Results

The assessment of the reliability of the “Cursa-T” questionnaire involved the calculation of Cronbach’s alpha coefficient, both in its entirety and for each of its distinct dimensions. The comprehensive analysis yielded an impressive Cronbach’s alpha index of 0.983, signifying an exceptionally high level of reliability for the questionnaire [26]. This index, surpassing the threshold of 0.9, unequivocally underscores the robustness and dependability of the instrument.
Furthermore, a granular examination of the individual dimensions within the questionnaire reveals equally commendable reliability coefficients for each. Specifically, the reliability coefficients for the dimensions are as follows: A. Knowledge of research methodology (0.960); B. Use of statistics for research (0.960); C. Scientific report (0.967); D. Oral support of projects (0.959); and E. Technological applications in research (0.954). Notably, all these dimensions exceed the recommended threshold of 0.7, thereby affirming their high reliability and indicating their efficacy in capturing the intended constructs [27].
To provide a more comprehensive evaluation, additional statistical metrics were considered, including the Composite Reliability (CR), Average Variance Extracted (AVE), and Maximum Shared Variance (MSV). The outcomes of these calculations, along with reference values for model adjustment, are detailed in Table 2. These supplementary analyses contribute to a more nuanced understanding of the questionnaire's reliability by examining factors such as convergent validity, shared variance, and overall model fit [28].
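For readers reproducing Table 2, these coefficients follow standard formulas: CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) and AVE = Σλ² / k over a dimension's standardized loadings λ, while MSV is the largest squared correlation between that factor and any other. The sketch below uses illustrative numbers, not values from the study:

```python
# Standard formulas behind the Table 2 coefficients; loadings and
# inter-factor correlations below are illustrative, not from the study.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals);
    computed from raw item scores, unlike CR/AVE which use loadings."""
    k = items.shape[1]
    return k / (k - 1) * (
        1 - items.var(axis=0, ddof=1).sum() / items.sum(axis=1).var(ddof=1)
    )

def composite_reliability(lam: np.ndarray) -> float:
    """CR = (sum(lam))^2 / ((sum(lam))^2 + sum(1 - lam^2)); target > 0.7."""
    s = lam.sum()
    return s**2 / (s**2 + (1 - lam**2).sum())

def average_variance_extracted(lam: np.ndarray) -> float:
    """AVE = mean(lam^2); convergent validity when AVE > 0.5."""
    return (lam**2).mean()

lam = np.array([0.77, 0.81, 0.84, 0.80])   # illustrative loadings
print(composite_reliability(lam))           # ~0.88
print(average_variance_extracted(lam))      # ~0.65
# MSV: square of the largest inter-factor correlation; discriminant
# validity requires MSV < AVE for each dimension.
print(max(r**2 for r in [0.55, 0.68, 0.61]))
```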
In summary, the rigorous examination of the “Cursa-T” questionnaire, encompassing both global and dimensional perspectives, underscores its robust reliability, positioning it as a sound and trustworthy tool for assessing research competencies. The combination of Cronbach’s alpha coefficients and additional statistical parameters enhances the overall confidence in the questionnaire’s ability to yield reliable and valid results in the evaluation of research-related skills and knowledge.
All figures obtained adjust to the reference values. Therefore, the reliability of the model (CR) as well as its convergent (AVE) and discriminant validity (MSV) are demonstrated. Next, we proceed to analyze the simple correlations of each item with the theoretical dimension or construct. The results are shown in Table 3. Values that do not correspond to the corresponding item–dimension relationship have been eliminated to facilitate reading the table.
Any value surpassing the threshold of 0.707 is deemed acceptable for an item to be included as a constituent of its respective dimension, since at that level the item shares at least half of its variance with the construct (0.707² ≈ 0.5) [29]. Consequently, all items met this criterion and were incorporated into their designated dimensions.
The assessment of construct validity for the test was conducted using an exploratory factor analysis, the results of which are detailed in Table 4. Prior to this analysis, the suitability of factor analysis was verified through the Kaiser–Meyer–Olkin (KMO) test, which yielded a very high coefficient of 0.983. Additionally, Bartlett's sphericity test confirmed the applicability of factor analysis, with a significance level (p-value) of 0.000.
These preliminary tests not only justified the utilization of factor analysis but also underscored the robustness of the dataset for such an analytical approach. The high KMO coefficient signifies the adequacy of the sample for factor analysis, while the significant Bartlett’s sphericity test implies that the variables within the dataset are interrelated, justifying their inclusion in the factor analysis.
The subsequent exploratory factor analysis, as elaborated in Table 4, delves into the underlying structure of the test, unraveling the latent factors that contribute to the observed patterns of responses. This analysis not only facilitates a deeper understanding of the construct validity of the test but also sheds light on the relationships between variables and the overarching dimensions they collectively represent. Values that do not correspond to the corresponding item–dimension relationship have been eliminated to facilitate reading the table.
The outcomes, explaining 73.78% of the variance, establish the existence of the five proposed theoretical factors: Knowledge of research methodology (Dim. A), Use of statistics for research (Dim. B), Scientific report (Dim. C), Oral support of projects (Dim. D), and Technological applications in research (Dim. E). These factors encapsulate the fundamental dimensions underpinning the observed patterns of responses.
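Continuing the illustrative EFA sketch from the methodology section, the variance explained by the retained factors can be read directly from the fitted `efa` object (here it would be expected to approach the reported 73.78%):

```python
# factor_analyzer exposes the variance accounted for by each factor;
# cum_var[-1] is the cumulative proportion (expected ~0.7378 here).
variance, prop_var, cum_var = efa.get_factor_variance()
print(f"Cumulative variance explained: {cum_var[-1]:.2%}")
```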
To validate and refine the theoretical model derived from exploratory factor analysis (EFA), a confirmatory factor analysis (CFA) is employed. Figure 1 portrays the structural diagram outlining the item–dimension relationships and dimension–dimension correlation indices, providing a visual representation of the hypothesized model.
The CFA serves as a critical step in corroborating the theoretical underpinnings identified through EFA. By examining the structural relationships between the observed variables and their respective latent factors, CFA evaluates the goodness-of-fit of the proposed model. The graphical representation in Figure 1 allows for a clear and concise depiction of how each item aligns with its designated dimension and the interconnections between different dimensions.
This iterative process, from EFA to CFA, contributes to the refinement and validation of the theoretical model, enhancing its accuracy in capturing the complex interplay between variables and dimensions. Through this analytical journey, the study aims to establish a robust and reliable framework that aligns with the underlying constructs of interest, ultimately enhancing the credibility of the assessment tool.
The factor loadings within the dimensions range from 0.747 to 0.886, indicating robust levels of correlation between the observed items and their respective latent factors. These values signify a high degree of association, underscoring the strength of the relationship between the measured variables and the underlying dimensions. Additionally, the interrelationships among different factors (dimensions) also exhibit predominantly high levels, with an average correlation coefficient of 0.694. This collective pattern of strong associations reinforces the coherence and internal consistency of the proposed model.
To further assess the model's fit, Table 5 presents both the obtained values and reference benchmarks based on Lévy Mangin and collaborators [27]. Key indicators, including the Chi-Square statistic (CMIN), goodness-of-fit index (GFI), parsimonious goodness-of-fit index (PGFI), normed fit index (NFI), and parsimonious normed fit index (PNFI), are scrutinized to evaluate the model's overall fit to the observed data. This comprehensive evaluation helps ascertain the adequacy of the proposed model in capturing the underlying structure of the dimensions and their interrelationships.
These statistical metrics serve as valuable tools for gauging the quality of the model, providing insights into its goodness of fit, parsimony, and overall suitability for representing the intricate relationships within the data. The examination of both factor loadings and model fit parameters collectively contributes to a comprehensive understanding of the model’s reliability and effectiveness in capturing the intended constructs.
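Although the study's CFA was run in AMOS, the same measurement model can be sketched in an open-source SEM library such as semopy; the specification below mirrors the item–dimension structure of Table 1 and requests weighted least squares estimation, as described in the methodology. It is an illustrative alternative, not the authors' workflow, and reuses the hypothetical DataFrame `df` from the earlier sketch.

```python
# Hedged semopy sketch of the five-factor CFA (lavaan-style syntax).
import semopy

MODEL = """
DimA =~ A1 + A2 + A3 + A4 + A5 + A6 + A7 + A8 + A9 + A10 + A11 + A12 + A13
DimB =~ B1 + B2 + B3 + B4 + B5 + B6 + B7 + B8 + B9
DimC =~ C1 + C2 + C3 + C4 + C5 + C6 + C7 + C8 + C9 + C10 + C11
DimD =~ D1 + D2 + D3 + D4 + D5 + D6 + D7 + D8 + D9
DimE =~ E1 + E2 + E3 + E4 + E5 + E6 + E7 + E8 + E9
"""

cfa = semopy.Model(MODEL)
cfa.fit(df, obj="WLS")             # WLS estimation, robust to non-normality
print(semopy.calc_stats(cfa).T)    # chi-square, GFI, NFI, RMSEA, ...
print(cfa.inspect(std_est=True))   # standardized loadings and correlations
```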
All the obtained results align with the reference values. Consequently, these findings validate and affirm the proposed theoretical model.

4. Discussion and Conclusions

As early as 2007, Llanos and Dáger [30] underscored the significance of cultivating research competencies among undergraduate students, particularly for addressing challenges within Latin America, given the limited scientific production of universities at this level of higher education. Aliaga-Pacora et al. [31] underline the necessity of identifying and making explicit how the knowledge and abilities that together form investigative competency allow professionals to master the practical actions that generate knowledge. The presented instrument serves as a pivotal step, allowing for diagnostic assessments to inform decisions about training programs, particularly within the high school framework that shapes the university's educational model [4,11].
Contrary to the misconception that undergraduate education does not aim to produce researchers, studies by Ramos et al. [3] and Chávez-Ayala et al. [12] assert that it fosters positive attitudes toward research at this level, with some students adopting scientific research practices in their professional endeavors. Consequently, the necessity arises for valid and reliable instruments that can gauge the evolution of these competencies across different training levels.
While all investigative competencies integrated into the dimensions of the instrument are desirable in the context of health professionals’ education, it is crucial to recognize the potential for specific approaches based on the distinct needs or objectives of various health areas, such as medicine, nursing, pharmacy, among others [32,33,34,35]. Thus, a thorough analysis is essential, considering the unique profiles of each training program to determine advances or modifications in study plans [36,37].
At the same time, it is necessary to develop explanatory models of the use of technologies when carrying out research. The framework and results of this study underscore the pivotal role of technology, encompassing devices, software, and platforms. However, there is a recognized need to explore the influence of social networks on research skills: acknowledging social networks as valuable resources, given their capacity to facilitate participation in academic communities, becomes imperative. Teaching these skills to health-focused undergraduate students is crucial, preparing them for an increasingly digitized environment and bolstering their ability to contribute to research advancements in the health domain [38].
In conclusion, this research contributes an instrument that enables periodic evaluations of the effectiveness of teaching research competencies. It serves as a catalyst for decision-making based on evidence, the promotion of innovation, and, most importantly, the preparation of students for the academic and professional challenges awaiting them in the dynamic field of health.

Author Contributions

Conceptualization, R.E.D.A. and Y.I.G.-C.; methodology, A.P.-R.; software, A.P.-R.; validation, L.R.S., R.E.D.A. and Y.I.G.-C.; formal analysis, A.P.-R.; investigation, A.P.-R.; resources, L.R.S.; data curation, A.P.-R.; writing—original draft preparation, R.E.D.A.; writing—review and editing, Y.I.G.-C.; visualization, A.P.-R.; supervision, L.R.S.; project administration, L.R.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Available online: https://grupotecnologiaeducativa.es/ accessed on 5 February 2024.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Márquez-Valdés, A.; Delgado Farfán, S.; Fernández Cáceres, M.; Acosta Bandomo, R. Formación de competencias investigativas en pregrado: Su diagnóstico. InterCambios. Dilemas Transic. Educ. Super. 2018, 5, 44–51. Available online: https://ojs.intercambios.cse.udelar.edu.uy/index.php/ic/article/view/161 (accessed on 1 October 2023).
  2. Ramírez-Armenta, M.O.; García-López, R.I.; Edel-Navarro, R. Validation of a scale to measure digital competence in graduate students. Form. Univ. 2021, 14, 115–126. [Google Scholar] [CrossRef]
  3. Ramos Vargas, L.F.; Escobar Cornejo, G.S. La formación investigativa en pregrado: El estado actual y consideraciones hacia el futuro. Rev. Psicol. 2020, 10, 101–116. [Google Scholar] [CrossRef]
  4. Castro-Rodríguez, Y. Iniciativas curriculares orientadas a la formación de competencias investigativas en los programas de las ciencias de la salud. Iatreia 2023, 36, 562–577. [Google Scholar] [CrossRef]
  5. Hueso-Montoro, C.; Aguilar-Ferrándiz, M.E.; Cambil-Martín, J.; García-Martínez, O.; Serrano-Guzmán, M.; Cañadas-De la Fuente, G.A. Efecto de un programa de capacitación en competencias de investigación en estudiantes de ciencias de la salud. Enfermería Glob. 2016, 15, 141–161. [Google Scholar] [CrossRef]
  6. Ramírez-Armenta, M.O.; García López, R.I.; Edel Navarro, R. Validación de escala que mide competencia metodológica en posgrado. Revista ProPulsión 2022, 5, 48–62. [Google Scholar] [CrossRef]
  7. Labrador Falero, D.M.; González Crespo, E.; Prado Tejido, D.; Fundora Sosa, A.; Vinent González, R. Estrategia para la formación de competencias investigativas en pregrado. Rev. Cienc. Médicas 2020, 24, e4414. Available online: http://revcmpinar.sld.cu/index.php/publicaciones/article/view/4414 (accessed on 1 October 2023).
  8. Nóbile, C.I.; Gutiérrez, I. Dimensiones e instrumentos para medir la competencia digital en estudiantes universitarios: Una revisión sistemática. Edutec. Rev. Electrónica Tecnol. Educ. 2022, 81, 88–104. [Google Scholar] [CrossRef]
  9. Juárez Popoca, D.; Torres Gastelú, C.A. La competencia investigativa básica. Una estrategia didáctica para la era digital. Sinéctica 2022, 58, e1302. Available online: https://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1665-109X2022000100202 (accessed on 1 October 2023). [CrossRef]
  10. Albornoz-Ocampo, J.A. Validación de instrumentos para medir la autogestión del aprendizaje y competencias informacionales en un sistema de clases en línea. Rev. Electrónica Educ. Pedagog. 2022, 6, 184–196. [Google Scholar] [CrossRef]
  11. Abbott, D. Game-based learning for postgraduates: An empirical study of an educational game to teach research skills. Higher Educ. Pedagog. 2019, 4, 80–104. [Google Scholar] [CrossRef]
  12. Chávez-Ayala, C.; Farfán-Córdova, N.; San Lucas-Poveda, H.; Falquez-Jaramillo, J. Construcción y validación de una escala de habilidades investigativas para universitarios. Rev. Innova Educ. 2023, 5, 62–78. [Google Scholar] [CrossRef]
  13. Arias Marín, L.; García Restrepo, G.; Cardona-Arias, J.A. Impacto de las prácticas profesionales sobre las competencias de investigación formativa en estudiantes de Microbiología de la Universidad de Antioquia-Colombia. Rev. Virtual Univ. Católica Norte 2019, 56, 2–15. Available online: https://dialnet.unirioja.es/servlet/articulo?codigo=7795742 (accessed on 1 October 2023).
  14. Carrillo-Larco, R.M.; Carnero, A.M. Autoevaluación de habilidades investigativas e intención de dedicarse a la investigación en estudiantes de primer año de medicina de una universidad privada en Lima, Perú. Rev. Médica Hered. 2013, 24, 17–25. Available online: http://www.scielo.org.pe/scielo.php?script=sci_arttext&pid=S1018-130X2013000100004&lng=es&tlng=es (accessed on 1 October 2023). [CrossRef]
  15. Casillas Martín, S.; Cabezas González, M.; Sanches-Ferrerira, M.; Teixeira Diogo, F.L. Estudio psicométrico de un cuestionario para medir la competencia digital de estudiantes universitarios (CODIEU). Educ. Knowl. Soc. 2018, 19, 69–81. [Google Scholar] [CrossRef]
  16. Paye-Huanca, O.; Mejia-Alarcón, C. Validez de constructo y fiabilidad de una escala de autopercepción de habilidades en investigación científica y estrategias de aprendizaje autónomo. Rev. Cient. Mem. Posgrado 2022, 3, 21–28. Available online: http://postgrado.fment.umsa.bo/memoriadelposgrado/wp-content/uploads/2022/09/PAYE-HUANCA-ERICK.pdf (accessed on 1 October 2023). [CrossRef]
  17. González-Rivera, J.A.; Dominguez-Lara, S.; Torres-Rivera, N.; Ortiz-Santiago, T.; Sepúlveda-López, V.; Tirado de Alba, M.; González-Malavé, C.M. Análisis estructural de la Escala de Autoeficacia para Investigar en estudiantes de posgrado. Rev. Evaluar 2022, 22, 17–27. [Google Scholar] [CrossRef]
  18. Silva-Quiroz, J.E.; Abricot-Marchant, N.; Aranda-Faúndez, G.; Rioseco-País, M. Diseño y validación de un instrumento para evaluar competencia digital en estudiantes de primer año de las carreras de educación de tres universidades públicas de Chile. Edutec. Rev. Electrónica Tecnol. Educ. 2022, 79, 319–335. [Google Scholar] [CrossRef]
  19. Hernández-Suárez, C.A.; Gamboa Suárez, A.A.; Avendaño Castro, W.R. Validación de una escala para evaluar competencias investigativas en docente de básica y media. Rev. Boletín Redipe 2021, 1, 393–406. [Google Scholar] [CrossRef]
  20. Cobos, H. Lectura crítica de investigación en educación médica. Investig. Educ. Médica 2016, 5, 115–120. Available online: https://www.redalyc.org/pdf/3497/349745408008.pdf (accessed on 1 October 2023). [CrossRef]
  21. Veytia Bucheli, M.G.; Lara Villanueva, R.S.; García, O. Objetos Virtuales de Aprendizaje en Educación Superior. Eikasía: Rev. Filos. 2018, 207–224. Available online: https://dialnet.unirioja.es/servlet/articulo?codigo=6632246 (accessed on 1 October 2023).
  22. Sari, S.M.; Rahayu, G.R.; Prabandari, Y. The correlation between written and practical assessments of communication skills among the first year medical students. Southeast Asian J. Med. Educ. 2014, 8, 48–53. Available online: http://seajme.md.chula.ac.th/articleVol8No2/9_OR6_Sari.pdf (accessed on 1 October 2023). [CrossRef]
  23. Peralta-Heredia, I.C.; Espinosa-Alarcón, P.A. ¿El dominio de la lectura crítica va de la mano con la proximidad a la investigación en salud? Rev. Investig. Clínica 2005, 57, 775–782. Available online: https://www.scielo.org.mx/scielo.php?pid=S0034-83762005000600003&script=sci_abstract&tlng=pt (accessed on 1 October 2023).
  24. Rocher, L.Y.; Rodríguez, S.F.; Lau, J. Fundamentación, diseño y validación de un cuestionario: “Perfil del estudiante universitario en formación investigativa”. Campus Virtuales 2019, 8, 85–102. Available online: http://www.uajournals.com/ojs/index.php/campusvirtuales/article/view/501 (accessed on 1 October 2023).
  25. Ruiz, M.A.; Pardo, A.; San Martín, R. Modelos de ecuaciones estructurales. Papeles Psicólogo 2010, 31, 34–45. [Google Scholar]
  26. O’Dwyer, L.; Bernauer, J. Quantitative Research for the Qualitative Researcher; SAGE: Newcastle upon Tyne, UK, 2014. [Google Scholar] [CrossRef]
  27. Lévy Mangin, J.P.; Varela Mallou, J.; Abad González, J. Modelización con Estructuras de Covarianzas en Ciencias Sociales: Temas Esenciales, Avanzados y Aportaciones Especiales; Netbiblo: A Coruña, Spain, 2006. [Google Scholar]
  28. Hair, J.; Black, W.; Babin, B.; Anderson, R. Multivariate Data Analysis; Prentice-Hall: Hoboken, NJ, USA, 2010. [Google Scholar]
  29. Carmines, E.; Zeller, R. Reliability and Validity Assessment; SAGE: Newcastle upon Tyne, UK, 1979. [Google Scholar] [CrossRef]
  30. Llanos, R.A.; Dáger, Y.B. Estrategia de formación investigativa en jóvenes universitarios: Caso Universidad del Norte. Studiositas 2007, 2, 5–12. Available online: https://dialnet.unirioja.es/descarga/articulo/2719634.pdf (accessed on 1 October 2023).
  31. Aliaga-Pacora, A.A.; Juárez-Hernández, L.G.; Sumarriva-Bustinza, L.A. Validity of a rubric to assess graduate research skills. Rev. Cienc. Soc. 2023, 29, 415–427. [Google Scholar] [CrossRef]
  32. Palacios-Rodríguez, A.; Guillén-Gámez, F.D.; Cabero-Almenara, J.; Gutiérrez-Castillo, J.J. Teacher Digital Competence in the stages of Compulsory Education according to DigCompEdu: The impact of demographic predictors on its development. Interact. Des. Archit. 2023, 57, 115–132. [Google Scholar] [CrossRef]
  33. Mejia Corredor, C.; Ortega Ferreira, S.; Maldonado Currea, A.; Silva Monsalve, A. Adaptación del cuestionario para el estudio de la competencia digital de estudiantes de educación superior (CDAES) a la población colombiana. Pixel-Bit. Rev. Medios Educ. 2023, 68, 43–85. [Google Scholar] [CrossRef]
  34. Mallidou, A.; Deltsidou, A.; Nanou, C.; Vlachioti, E. Psychometric properties of the research competencies assessment instrument for nurses (RCAIN) in Greece. Heliyon 2023, 9, e19259. [Google Scholar] [CrossRef]
  35. Pabico, C.; Warshawsky, N.E.; Park, S.H. Psychometric Properties of the Nurse Manager Competency Instrument for Research. J. Nurs. Meas. 2023, 31, 273–283. [Google Scholar] [CrossRef]
  36. Pinto Santos, A.R.; Pérez-Garcias, A.; Darder Mesquida, A. Training in Teaching Digital Competence: Functional Validation of the TEP Model. Innoeduca. Int. J. Technol. Educ. Innov. 2023, 9, 39–52. [Google Scholar] [CrossRef]
  37. Swank, J.M.; Lambie, G.W. Development of the Research Competencies Scale. Meas. Eval. Couns. Dev. 2016, 49, 91–108. [Google Scholar] [CrossRef]
  38. Martínez-Salas, A.M.; Alemany-Martínez, D. Redes sociales educativas para la adquisición de competencias digitales en educación superior. Rev. Mex. Investig. Educ. 2022, 27, 209–234. Available online: http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-66662022000100209&lng=es&tlng=es (accessed on 1 October 2023).
Figure 1. Structural diagram of the “Cursa-T” questionnaire.
Table 1. Description of the questionnaire items.

A. Knowledge of the research methodology (Dim. A)
A1. I can build an innovative and relevant contribution into a research project.
A2. I make, formulate or identify variables from the title, research question, hypothesis and research objectives.
A3. I recognise the types of methodology applied in a study, such as qualitative, quantitative or mixed.
A4. I identify the types of sampling (probability and non-probability).
A5. I am aware of the differences between inclusion, exclusion and elimination criteria.
A6. I use original and relevant ideas from scientific articles in the argumentation of theoretical frameworks or state of the art.
A7. I distinguish the techniques and instruments of data collection in research.
A8. I compare the results of my research with those of other authors.
A9. I can reflect on and draw conclusions from research based on the results and confirmation of the researcher's hypothesis.
A10. I can distinguish between exploratory, descriptive, correlational or explanatory phases of research.
A11. I formulate a research question.
A12. I define a research problem on the basis of background information.
A13. I integrate information from primary sources to propose a possible solution to a problem.

B. Use of statistics for research (Dim. B)
B1. I can recognise probability and non-probability samples in the results.
B2. I interpret the results of statistical tests.
B3. I can distinguish in some results whether parametric or non-parametric statistics are used.
B4. I can conclude the main research variables of a scientific paper.
B5. I identify measures of central tendency and dispersion in an article.
B6. I use descriptive statistics for reporting.
B7. I identify inferential statistics in research.
B8. I statistically estimate the sample size of my research project.
B9. I can predict the results of a research project by reviewing the theoretical and methodological framework.

C. Scientific report (Dim. C)
C1. I identify popular science magazines or materials.
C2. I draft clear and precise objectives with their basic elements.
C3. I write research reports.
C4. I identify the parts of a research report.
C5. I can present the introduction, theoretical framework, results, analysis and discussion of a research topic.
C6. I know the parts that make up a research project.
C7. I detect experimental and non-experimental research.
C8. I develop a full research report.
C9. I write a proper research report.
C10. I recognise spelling rules.
C11. I draft documents identifying proper spelling and grammar.

D. Oral presentation of projects (Dim. D)
D1. I participate in scientific poster exhibitions.
D2. I advocate research topics in forums.
D3. I make presentations of free papers.
D4. I present popular science topics.
D5. I know spaces for feedback on research topics.
D6. I present topics considering the order of the scientific method.
D7. I have the oral skills to present a research poster in synthesis.
D8. I use my written skills to present scientific papers.
D9. I have the skills to write a research poster.

E. Technological applications in research (Dim. E)
E1. I am familiar with programmes for building references or quotations.
E2. I identify digital tools.
E3. I recognise the technological tools to develop a research project.
E4. I am familiar with platforms for searching scientific information.
E5. I know software for statistical analysis of data.
E6. I easily find scientific articles on the Internet.
E7. I am familiar with the process of strategic search for scientific information.
E8. I am familiar with and apply standardised search terms.
E9. I recognise different programmes or digital platforms to form a research project.
Table 2. Convergent and discriminant validity of the model.

Dimension   CR      AVE     MSV
A           0.960   0.650   0.469
B           0.961   0.730   0.553
C           0.967   0.726   0.585
D           0.960   0.726   0.584
E           0.955   0.704   0.585

Reference values for model fit: CR > 0.7; AVE > 0.5; MSV < AVE.
Table 3. Correlations of the items with the associated dimensions (Spearman's ρ; each item is shown with its own dimension, cross-dimension values omitted for readability).

Dim. A: A1 0.772; A2 0.779; A3 0.812; A4 0.770; A5 0.795; A6 0.833; A7 0.844; A8 0.791; A9 0.835; A10 0.819; A11 0.823; A12 0.848; A13 0.842
Dim. B: B1 0.830; B2 0.824; B3 0.849; B4 0.853; B5 0.867; B6 0.880; B7 0.886; B8 0.870; B9 0.833
Dim. C: C1 0.817; C2 0.862; C3 0.842; C4 0.875; C5 0.869; C6 0.877; C7 0.855; C8 0.875; C9 0.863; C10 0.807; C11 0.816
Dim. D: D1 0.836; D2 0.872; D3 0.844; D4 0.881; D5 0.872; D6 0.879; D7 0.862; D8 0.876; D9 0.855
Dim. E: E1 0.812; E2 0.845; E3 0.876; E4 0.863; E5 0.809; E6 0.843; E7 0.867; E8 0.856; E9 0.847
Table 4. Rotated component matrix (loading of each item on its own factor; cross-loadings omitted for readability).

Dim. A: A1 0.681; A2 0.720; A3 0.733; A4 0.667; A5 0.698; A6 0.744; A7 0.772; A8 0.723; A9 0.768; A10 0.721; A11 0.788; A12 0.790; A13 0.788
Dim. B: B1 0.706; B2 0.644; B3 0.773; B4 0.673; B5 0.736; B6 0.768; B7 0.783; B8 0.714; B9 0.640
Dim. C: C1 0.679; C2 0.706; C3 0.668; C4 0.704; C5 0.728; C6 0.724; C7 0.675; C8 0.688; C9 0.678; C10 0.698; C11 0.661
Dim. D: D1 0.680; D2 0.749; D3 0.712; D4 0.762; D5 0.748; D6 0.730; D7 0.731; D8 0.700; D9 0.688
Dim. E: E1 0.684; E2 0.753; E3 0.772; E4 0.754; E5 0.662; E6 0.691; E7 0.710; E8 0.673; E9 0.680
Table 5. Model adjustment.

Index   Result    Reference
CMIN    414.535   CMIN < 500
GFI     0.929     GFI > 0.7
PGFI    0.885     PGFI > 0.7
NFI     0.911     NFI > 0.7
PNFI    0.867     PNFI > 0.7