Article

Moderating Factors in University Students’ Self-Evaluation for Sustainability

by Samuel P. León 1, José María Augusto-Landa 2,* and Inmaculada García-Martínez 3

1 Department of Pedagogy, University of Jaén, 23071 Jaén, Spain
2 Department of Psychology, University of Jaén, 23071 Jaén, Spain
3 Department of Didactics and School Organization, University of Granada, 18071 Granada, Spain

* Author to whom correspondence should be addressed.
Sustainability 2021, 13(8), 4199; https://doi.org/10.3390/su13084199
Submission received: 8 March 2021 / Revised: 1 April 2021 / Accepted: 7 April 2021 / Published: 9 April 2021
(This article belongs to the Special Issue Advances in Evaluation of Sustainable Educational Programs)

Abstract
Background: Self-evaluation is a multidimensional construct that has attracted increasing interest in educational research across different educational stages. Several studies have pointed out the important role that student self-assessment plays in improving student learning and ensuring sustainability in instructional and evaluation processes. Method: The aim of this study, conducted with 630 university students, is to analyze how engagement profiles and study strategies (measured by questionnaire) predict the accuracy of students’ self-assessment of their achievements. The UWES-9 questionnaire was used to evaluate engagement, the Study Techniques Questionnaire scale to measure study strategies, and a content-based test to evaluate performance, along with a self-assessment test in which students had to estimate, once the test had been completed, the level of achievement obtained in the content-based test. Results: The results show that both academic engagement and the study strategies undertaken by students are factors that may influence different aspects of learning in the educational context. Students with higher performance and greater engagement tend to show stronger student self-assessment (SSA) skills, and students with better study habits tend to obtain better scores, show greater confidence in the SSAs delivered, and display better self-assessment skills. Conclusions: The findings suggest that providing opportunities for students to become more involved in the construction of their learning and in its evaluation fosters positive attitudes, which results in increased performance and greater sustainability in the assessment of the learning process.

1. Introduction

Self-assessment (SA) has been widely investigated in recent years. So far, there seems to be no consensus on its conceptualization and scope, as suggested by recent reviews [1,2,3], which has led to different approaches to its implementation [4]. In general terms, student self-assessment (SSA) can be defined as a person’s perception of the quality of their work and their academic skills [5]. In the educational field specifically, this multidimensional construct has been widely used as an evaluation mechanism for students, justified by the positive effects it has on their learning, owing to its potential to produce feedback during its construction [1]. A substantial body of research has therefore focused on student self-evaluation: its conceptualization and typology [3], practical implications [6,7,8], formative role [9], and the factors that may modulate it [2]. In general terms, SSA is characterized by the development and application of a set of more or less directed strategies in order to assess not only a portion of the work carried out, but also the action as a whole [3]. This assessment has to be contrasted with external assessment in order to determine its validity and accuracy [10]. In this contrast between self-assessment and expert assessment, the teaching staff tends to be considered the expert agent with the highest level of competence, although some studies involve the researchers themselves, and peer group assessment is sometimes included. However, authors such as Andrade [1] argue for the term “consistency” to describe this contrast, considering, on the basis of classroom research, that the teacher’s perception is not completely objective or precise and can be biased. In this sense, she defines consistency as “the degree of alignment between students’ and expert raters’ evaluations” (p. 5).
Coherence between student and teacher evaluations facilitates sustainability in instructional processes, as it makes it possible to establish a rationale that guides students towards a global, personal learning process connected to their own real needs [11], overcoming more traditional modalities of evaluation, which are unable to consider all the factors involved in learning. The variability of SSA, in both its conceptual and practical scope, means that it takes on multiple forms. These range from the use of explicit self-assessment criteria to the absence of such criteria. The intention behind SSA, based on students’ interests or motivations, establishes a further distinction between verification of what has been learned, development of communication skills and verbalization of the learning process, and awareness of and reflection on the progress of learning [12]. Variations are also found in the instrument used to carry out such assessments, with versions employing checklists, criteria and their consequent grading through examples, rubrics, or scripts in question format [2]. Other classifications focus on the role of the teacher in the self-assessment process [13], indicating that the teacher’s involvement is inversely related to the student’s: teachers’ participation in the SSA process is gradually reduced as students become more involved and familiar with it, until the students become autonomous.
The present study is based on this approach, taking into account the diversity found in the literature on self-assessment and paying special attention to ensuring that students achieve consistency in their self-assessments through processes of self-reflection aimed at sustainable learning connected to their needs and their environment. In this regard, the United Nations [14] has set out different educational proposals within the Sustainable Development Goals, for which educational programs are needed in order to achieve sustainability after the COVID-19 pandemic. Among the 17 priority Sustainable Development Goals to be achieved in the next 10 years [14,15], we would like to highlight Quality Education (Goal 4), as it is committed to the power of education to overcome inequalities. In relation to this study, learning forms of self-assessment involves increased student empowerment, which directly affects students’ ability to self-regulate their own learning, thereby increasing the quality and quantity of what they learn. At the same time, the implementation of these assessment modalities advances what is promoted by the United Nations, where the focus is placed on people in order to achieve changes in institutions. In this regard, empowering students by placing them as the main actors in their learning process will contribute to improving educational institutions [15]. This investigation focuses on two factors closely related to metacognitive skills, SSA and self-regulation: engagement and study strategies. Specifically, the main objective of this paper is to analyze the relationship of engagement and study strategies with students’ academic outcomes and their ability to assess their own learning achievements (SSA).
The use of SSA is justified by the importance of empowering students in their own learning process, contributing to greater involvement in the construction of their own learning, where they are given the possibility of becoming aware of their findings and their mistakes. This involvement is related to self-regulated learning, in terms of the deployment of processes of self-awareness and self-reflection. In this respect, SSA is related to Self-Regulated Learning (SRL) [7,16], where the former is used with the intention of evoking in students a consistent discourse about their learning progress through self-reflection [3], drawing on the feedback that emerges from the act of learning itself and from the contrast between what they have learned and the path they still have to follow [1] to acquire sustainable knowledge. The aim is not only to attribute grades to the work done. Rather, its purpose is to seek an extended description, in metacognitive terms, of the potentialities, errors and limitations of the work done, so that learners can go back when necessary and strengthen their learning appropriately [17]. Conversely, there are also studies in the literature that advocate the inverse position, considering SSA to be part of self-regulated learning [2], since it occurs throughout the whole learning process [18]. In any case, there is evidence pointing to a relationship between SRL and performance [19,20] and, consequently, between SSA and performance.
Thus, it is considered that if students know the purpose of the SSA and are familiar with the criteria by which they will be assessed, as well as with the SSA procedure, they will be able to regulate their efforts in the acquisition of such learning, achieving higher rates of performance and understanding (reflection), through the use of more complex learning and more accurate strategies [21,22].
In this regard, authors have established a difference between summative and formative SSA [23]; the difference lies in the purpose. In the summative type, the purpose of SSA is to find out whether learners are able to evaluate or measure their performance on a specific test similarly to how the teacher does (hence some authors call this “self-grading”). In formative SSA, the act of self-assessment takes “a learning-oriented purpose”, and these strategies are used to self-regulate the learning process, internalizing what has been learned, including mistakes and emerging procedures of construction and reconstruction [9]. In the formative type, it is not common for students to be aware of the assessment criteria, nor do they usually have experience in self-assessment or receive feedback on their own self-assessment [1,10]. Consequently, there is some agreement that SSA’s formative approach impacts on the summative one, because of its capacity to contribute positively to learning [1].
The conditions under which SSA is developed and incorporated into instructional processes are another factor to consider since, depending on how SSA is articulated, it can either favor learning or have the opposite effect. Before delving into the proper ways of including SSA within learning processes, it is necessary to highlight the pedagogical and formative importance of involving students in the construction of their learning, favoring their participation in this progress and their self-critical development [24]. These two issues are, without doubt, essential incentives that give meaning to the importance of engagement and the generation of proper study routines by students, aimed at improving their performance [25]. Authors such as Dearnley and Meddings [26] point out that self-assessment fosters greater teacher–student dialogue, which results in improvements in the critical thinking and reflectiveness necessary for the development of metacognitive processes and, as a result, transforms the way in which students learn (study strategies), taking a lifelong learning approach. Other authors [27] support SSA and its important role in the development of metacognitive and self-regulation skills, which are essential to achieve suitable assessment and self-regulated learning [11,28].
The promotion of metacognitive skills in students can be acquired through learning tools such as SSA, which are associated with certain study habits, where goals are established [29] and the focus is placed on those aspects in which the student consciously recognizes that he or she still needs to study in depth or needs further study [30].
Several pedagogical conditions will contribute to the successful and effective implementation of SSA within the training process. First, the teachers’ role and expectations when designing the educational act and promoting suitable spaces and opportunities to introduce SSA as a strategy will make it easier for students to gradually acquire the skill and incorporate it into their own study habits, even serving as an instrument of self-regulation [13]; SSA will thus become an effective feedback strategy in the achievement of educational objectives. Secondly, the point at which students find themselves in their learning is also an aspect to consider. This point can be related to their engagement in learning. In this regard, a distinction is made between students who are just starting out in the field and so-called experts, because the number of cognitive resources aimed at solving certain tasks requiring some automatism will differ considerably [31]. Thirdly, students’ perception of the quality of their work, together with the existence or not of feedback, will also affect the accuracy and effectiveness of the SSA. These conditions are related to the metacognitive processes that students display in carrying out and assessing their actions. In this manner, not only are awareness and knowledge of one’s actions promoted, but skills are developed to monitor and evaluate one’s progress [27]. Fourthly, we consider the students’ experience with SSA. This experience will affect students’ accuracy in evaluating their own work and their ability to set up a formative, rather than a summative, self-evaluation process.
Recently, a call has been made to investigate the cognitive and affective mechanisms of students that intervene when they are being assessed [1,32], although there is not enough evidence to explain to what extent they influence self-assessment.
Based on the findings reported in the literature, we hypothesized that those students who showed high levels of academic engagement and study strategies would have higher academic achievement than those who showed low levels of these factors. Likewise, given that students who are more accurate at self-assessment tend to be more competent students, we expect that higher levels of the measured factors would be related to better self-assessment skills.

2. Materials and Methods

2.1. Participants

The study involved 630 students from kindergarten education (57%) and primary education (43%) at the University of Jaén. Of the total number of participants, 83.96% were female and 16.03% male. These percentages are consistent with the distribution of males and females in the total population of students in Spain [33]. The mean age of the participants was 22.36 years (SD = 3.82).

2.2. Instruments

The Spanish version of the UWES-9 scale [34] was used to evaluate the students’ academic engagement. The scale consists of nine questions divided into three dimensions associated with engagement: vigor, dedication and absorption. Agreement with each item was rated on a 7-point Likert scale, where 1 indicates “never” and 7 “always”. An adaptation of the Study Techniques Questionnaire scale was used to evaluate the students’ study strategies [35]. The scale consists of 12 items addressing three dimensions: study organization strategies, pre-study strategies, and study strategies. The frequency of compliance with each item was rated on a 7-point Likert scale, where 1 is “never” and 7 “always”. The scales are available in Appendix A. Table 1 shows a summary of the items and subfactors that make up each of the two factors measured by the scales.
The examination consisted of a multiple-choice test of 24 questions with three alternatives each, only one of which was correct. To mark the test, the score was corrected for guessing using the formula grade = correct answers − (errors/(k − 1)), where k is the number of response alternatives. On the 0-to-10 grading scale, each correct answer therefore adds 0.42 points and each incorrect answer subtracts 0.21; blank questions neither add to nor subtract from the final grade. At the end of the exam, the students had to write their answers to the questions using a template. Together with this template came the self-assessment question (SSA), where students were asked to estimate, according to the established assessment criteria and based on their performance in the exam, the grade they thought they would get on a scale of 0 to 10 (the same scale used by the teacher). In addition, a confidence judgment (CJ) on this self-assessment estimate was requested, on a five-point scale where 1 meant “not at all sure” and 5 “completely sure”.
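The guessing correction above can be sketched as a short function; the function name and the clamping of negative raw scores to zero are our illustrative choices, not specified in the original scoring description:

```python
def corrected_grade(correct, errors, n_items=24, k=3, max_grade=10.0):
    """Guessing-corrected exam grade.

    raw = correct - errors/(k - 1); blanks neither add nor subtract.
    The raw score is then rescaled to the 0..max_grade interval.
    """
    raw = correct - errors / (k - 1)   # each error cancels 1/(k-1) of a correct answer
    return raw * max_grade / n_items   # 10/24 ~= 0.42 points per correct answer

# A perfect paper gets the maximum grade; 12 correct and 6 wrong (6 blank)
# yields (12 - 3) * 10/24 = 3.75.
print(corrected_grade(24, 0))   # -> 10.0
print(corrected_grade(12, 6))   # -> 3.75
```

With k = 3 alternatives, two wrong answers cost as much as one correct answer earns, which is exactly the 0.42/0.21 ratio stated above.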

2.3. Procedure

During the course, in the middle of the semester, students were asked to access a questionnaire, placed in the subject folder of the University’s virtual teaching platform, using an electronic device (computer, tablet, or mobile phone). The questionnaire was implemented in Google Forms. Once the students accessed the questionnaire, they were presented with an informed consent screen, which explained the purpose of the questionnaire, its duration, and its ethical characteristics in accordance with the ethical guidelines proposed in the Declaration of Helsinki [36]. The students provided the last five digits of their ID card, which served to associate the answers given in the questionnaire with the scores obtained on the other variables. Once this information was collected, any information that could identify participants was deleted, thus making the resulting data anonymous.
Before the examination, the students were given information about the examination procedure. They were informed that they could volunteer to indicate, once they had taken the exam and given their answers, their estimation of how well they would perform on the exam, and were asked to indicate on a scale their judgement of confidence in their estimation. The percentage of participation in the self-assessment test was 97.61% of total students.

2.4. Data Analysis

To analyze the psychometric characteristics of the scales used, a Confirmatory Factor Analysis (CFA) was performed, and Cronbach’s reliability α and McDonald’s ω were calculated. These analyses were performed with the R lavaan package [37]. The semTools package was used to calculate Composite Reliability (CR) and Average Variance Extracted (AVE). Because the data showed a multivariate non-normal distribution (Mardia’s test, Zkurtosis = 18.91, p < 0.001), diagonally weighted least squares (DWLS) was used as the estimator [38]. The purpose of this analysis was to test the psychometric properties of the scales in this population and, more significantly, to obtain the factor loadings of the scale items in order to scale the students’ raw scores and thus have a more accurate measure of the factors measured [39].
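For readers less familiar with the reliability coefficients mentioned above, Cronbach’s α can be computed directly from a respondents-by-items score matrix using its standard definition, α = k/(k − 1) · (1 − Σ var(item) / var(total)). This is a generic sketch, not the authors’ lavaan/semTools pipeline:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a 2-D array: rows = respondents, columns = items."""
    X = np.asarray(scores, dtype=float)
    k = X.shape[1]                              # number of items
    item_variances = X.var(axis=0, ddof=1)      # sample variance of each item
    total_variance = X.sum(axis=1).var(ddof=1)  # variance of the total score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
```

As a sanity check, perfectly parallel items (every respondent answers both items identically) give α = 1, and weakly related items push α toward 0.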
After the factorial treatment of the scales and the latent variable scaling, the hypothesized theoretical model was analyzed through Structural Equation Modeling (SEM). For the SEM analysis, maximum likelihood estimation with robust standard errors and the Satorra–Bentler scaled test statistic (MLM estimator) were used. The variables involved in the proposed model are the two scaled variables obtained through the questionnaires, engagement (ENG) and study strategies (STR), and the variables obtained through the academic assessment: the students’ test score (EXA), the student self-assessment (SSA), the students’ confidence judgment on their SSA (CJ), and the students’ self-assessment skill (SSAS). The EXA score was the result obtained after evaluating the students’ tests (range 0 to 10). SSA and CJ were obtained through a self-report question at the end of the exam (SSA range 0 to 10; CJ range 1 to 5). SSAS was calculated by subtracting the expert teacher’s assessment (the exam grade) from the student’s SSA score. Scores close to 0 on this variable indicate good adjustment in self-assessment, positive scores indicate overestimation, and negative scores indicate underestimation.
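The SSAS computation and its sign convention can be made concrete in a few lines; the function names and the ±0.5-point band used to call an estimate “accurate” are illustrative, not thresholds from the study:

```python
def ssas(ssa, exa):
    """Self-assessment skill score: SSA minus the teacher's exam grade.

    Positive values mean the student overestimated; negative values,
    underestimation; values near 0, accurate self-assessment.
    """
    return ssa - exa

def interpret(score, tolerance=0.5):
    """Classify an SSAS value (tolerance band is an illustrative choice)."""
    if abs(score) <= tolerance:
        return "accurate"
    return "overestimates" if score > 0 else "underestimates"

# A student who predicts an 8 but earns a 6 overestimates by 2 points.
print(ssas(8, 6), interpret(ssas(8, 6)))   # -> 2 overestimates
```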

3. Results

3.1. Factorial Treatment of the Scale Variables

To analyze the validity and internal consistency of the scales used to measure the engagement and study strategies factors, a Confirmatory Factor Analysis was performed for each scale. Both scales showed an excellent fit [38]. For the engagement scale, χ2 (24) = 61.353, p < 0.001, with Comparative Fit Index (CFI) = 0.980, Tucker–Lewis Index (TLI) = 0.970, Standardized Root Mean Square Residual (SRMR) = 0.054, Root Mean Square Error of Approximation (RMSEA) = 0.050 (RMSEA 90% CI [0.035, 0.065]), with Cronbach’s α = 0.79 and McDonald’s ω = 0.81. For the study strategies scale, χ2 (32) = 116.203, p < 0.001, with CFI = 0.933, TLI = 0.906, SRMR = 0.065, RMSEA = 0.065 (RMSEA 90% CI [0.052, 0.078]), with Cronbach’s α = 0.70 and McDonald’s ω = 0.71. Table 2 shows the factor loadings of the observed variables for each scale.
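As a consistency check on these figures, the RMSEA point estimate can be recomputed from χ², its degrees of freedom, and the sample size with the standard formula RMSEA = √(max(χ² − df, 0) / (df·(N − 1))). Assuming N = 630 as reported in the Participants section, this reproduces both reported RMSEA values:

```python
import math

def rmsea(chi2, df, n):
    """RMSEA point estimate from a chi-square model test statistic."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

print(round(rmsea(61.353, 24, 630), 3))    # engagement scale -> 0.05
print(round(rmsea(116.203, 32, 630), 3))   # study strategies scale -> 0.065
```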

3.2. Structural Equation Model

The model proposed to understand the relationships between the personal factors (ENG and STR) and the academic variables (EXA, SSA, CJ and SSAS) is presented in Figure 1. In the figure, the latent variables are represented by rectangular shapes, with one-way arrows indicating regression relationships between variables and two-way arrows representing correlations between variables.
The SEM analysis result showed an excellent fit [40], χ2 (15) = 2287.43, p < 0.001, with CFI = 0.999, TLI = 0.999, SRMR = 0.001, RMSEA = 0.001 (RMSEA 90% CI [0.000, 0.001]). Table 3 shows the detailed results of the SEM analysis for the proposed model.
The results show how the ENG and STR factors significantly predict the variance of the scores derived from the assessment modalities (EXA and SSA), as well as the students’ ability to self-assess. In the case of STR, it also significantly predicts the confidence judgements on the students’ self-assessments (CJ), although this is not the case for the ENG variable. As expected, ENG and STR showed a significant correlation. Regarding the correlations between the evaluation variables, we can observe the absence of correlation between EXA and SSA, and between CJ and SSAS, while the remaining correlations were significant.

4. Discussion

This study aimed to analyze how engagement profiles and study strategies can predict students’ accuracy in self-assessing their achievements within the university population. This analysis is closely related to goal 4 of the Agenda established by the United Nations [14], which pursues Quality Education in order to achieve sustainable development among countries. It is also in accordance with the aim of achieving greater student ownership in order to achieve the Development of Educational Institutions [15]. Both issues are related to the changes that have taken place in recent years in Higher Education, where reforms and transformations have been implemented in order to achieve greater student autonomy, strengthen student employability and build meaningful learning that is transferable to different contexts. These changes have also had an impact on the role of teachers, turning them into guides and supports for students in acquiring and consolidating such learning. In particular, assessment is positioned as an important factor in the instructional processes, so in this study it has been analysed through the incidence of other related factors.
In this regard, academic engagement has been shown to be an important predictor of performance, and even more so of SSA. This finding is consistent with research relating performance and engagement [41] and with studies on SSA [3]. It suggests that providing opportunities for students to have greater involvement in the construction of their learning and in its evaluation [4] raises positive attitudes, materialized in greater involvement in their study, which results in increased performance [42]. Relatedly, study strategies are positioned as another factor to consider in predicting academic performance [43]. In fact, findings from our study show that engagement and study strategies are predictors of SSA. In any case, both can be important factors in determining different aspects of learning in the educational context, because of their close link to self-regulation, self-efficacy and the use of different learning strategies [16], motivation and engagement [44], and critical thinking [45]. These qualities have not only been widely supported as predictors of academic success [46], but are positioning themselves as the basis on which to restructure current instructional processes, adding cognitive and emotional factors into the equation [32]. In this regard, some studies consider sustainable assessment to be assessment that responds to students’ needs, taking a formative rather than summative approach [1,23]. Thus, this type of assessment aims to reduce the gap between non-personalized assessments and student learning [11], through SRL, among other mechanisms.
The findings of this study show positive relationships between the ENG and STR factors and the assessment modalities, as well as students’ ability to self-assess accurately. Specifically, the ENG and STR factors significantly predict the variance of the scores derived from the assessment modalities (EXA and SSA), as well as the students’ ability to self-assess (SSAS). These findings are consistent with previous research [47,48]. Furthermore, ENG predicts EXA more strongly (0.54) than STR does (0.27). As for the SSA variable, both ENG and STR predict it moderately, with coefficients of 0.20 and 0.30, respectively. These results can be interpreted as meaning that students with high scores in ENG and STR will show high scores in EXA and SSA, so we could infer that these factors are associated with high levels of academic achievement.
Regarding the analysis of confidence judgments on the assessments (CJ), the high predictive ability of STR stands out, with a score of 0.60, in contrast to the low predictive ability of ENG, with a score of 0.03. These results suggest that students with better study strategies will be more confident about assessing their own achievement (SSA) [49,50]. However, this confidence is not found in students with high ENG scores. In this regard, the literature suggests that confidence in judgements may also be related to the development of metacognitive skills linked to SRL [27,29].
In contrast, ENG has been found to have a higher predictive value for students’ self-assessment skills (SSAS) (0.41) than STR (0.28). This could be related to the importance of student involvement in self-regulating their own learning [51] and their awareness of where they stand in their learning, identifying the gains they have achieved and the path they still have to follow [52]. In terms of correlations, the correlation between STR and ENG is not surprising. However, among the academic evaluation scores, the high negative correlation between EXA and SSAS is noteworthy. It indicates that high test scores are related to low SSAS scores, bearing in mind that SSAS indicates accurate self-assessment when scores are close to 0 (since SSAS is the result of SSA − EXA). This is consistent with the literature indicating that academically proficient students tend to be more accurate in self-assessment and, furthermore, that the most outstanding students even tend to under-rate themselves (negative SSAS scores) [53]. In turn, the high correlation between SSA and SSAS also supports this hypothesis, since high SSA scores will be related to high SSAS scores, indicating over-evaluation.
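The mechanism behind this negative EXA–SSAS correlation can be illustrated with purely synthetic data (not the study’s data): if self-assessments regress toward the class mean, so that strong students under-predict and weak students over-predict, then SSAS = SSA − EXA necessarily correlates negatively with EXA. The simulation parameters below are arbitrary choices for illustration:

```python
import random
import statistics

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

random.seed(1)
# Simulated exam grades on 0..10 and self-assessments pulled halfway
# toward the class mean of 5, plus noise (regression to the mean).
exa = [random.uniform(0, 10) for _ in range(2000)]
ssa = [5 + 0.5 * (e - 5) + random.gauss(0, 1) for e in exa]
ssas = [s - e for s, e in zip(ssa, exa)]

print(round(pearson(exa, ssas), 2))  # strongly negative
```

The stronger the pull toward the mean, the more negative the correlation, which is one plausible reading of the pattern reported above.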
Furthermore, our research reports the interaction effects between engagement and study strategies as essential aspects of metacognitive skills [27]. This relationship emerges when we assert the role of metacognitive and self-regulatory skills as a guarantee of students’ academic success [24], while at the same time they are closely related to self-regulated learning [30]. As we have seen, self-regulated learning and student SSAs have a close relationship [54], and therefore they are placed in a significant position within any formative learning approach [55], where the active role of students when they are willing to learn something can be articulated and demanded [56]. In this regard, our findings have shown how students with high scores on these factors were able to predict high exam scores and have high confidence in the judgements made about the estimation of SSA, in line with Boud et al. [50].
On the other hand, students with better study strategies have been shown to obtain better exam scores, show more confidence in the SSAs delivered, and display better self-assessment ability. If we consider the close link between SSA and SRL [7], and the increasing interest in considering SSA as a formative learning strategy which, through the feedback it provides during progress, favors students’ self-regulation [41], this finding is not surprising [27]. There are already empirical investigations [57] and recent review studies [1,2,3] pointing out the importance of using SSA for formative purposes, beyond summative evaluation, due to the multiple benefits it produces for student learning. In this regard, there is evidence arguing for the importance of implementing SSA in educational processes but, moreover, certain difficulties have been pointed out when it comes to unifying criteria for its application. Perhaps the direction to follow is to determine which factors best predict SSA, in order to direct efforts towards training specific skills that improve the instructional processes themselves and the quality of the learning derived from them.
This study has several important limitations that must be mentioned. First, it has a cross-sectional design, so it does not allow us to establish causal effects between the study variables. Secondly, our study shows the relationship between two factors, students’ engagement and study strategies, and both academic performance and students’ ability and confidence in self-assessing their achievements. These factors were measured over the whole academic course, so the measure we obtained is the students’ self-report at the time of measurement. This study cannot determine how changing these factors, e.g., through an intervention to improve these competences, would affect academic performance and/or self-assessment. Future research could analyze this issue with a pre-post study with a control group. Another possible limitation is that our sample was not distributed homogeneously across the sex variable. If there were any variance of the model across this variable, part of the effect found could be due to this unequal distribution. Further research could analyze this issue with a more balanced sample of men and women. It would also be interesting for further research to consider and evaluate mediational models using a longitudinal design, to gain a better understanding of the associations between these variables.
Finally, this paper advances the goal of achieving quality education in Higher Education. Given the importance of assessment in the teaching and learning process and its close link with self-regulated learning, this study explores the impact of introducing self-assessment modalities and how these can be conditioned by study strategies and engagement.

Author Contributions

Conceptualization, S.P.L., J.M.A.-L. and I.G.-M.; methodology, S.P.L. and J.M.A.-L.; software, S.P.L.; writing—original draft preparation, S.P.L. and I.G.-M.; writing—review and editing, S.P.L., J.M.A.-L. and I.G.-M.; supervision, S.P.L., J.M.A.-L. and I.G.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Ethics Committee of University of Jaén (Reference: OCT.20/1.TES).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Questionnaire Used to Measure Engagement and Study Strategies

  • eng1.—My tasks as a university student make me feel full of energy.
  • eng2.—I feel strong and vigorous when I am studying or going to classes.
  • eng3.—When the day starts I feel like going to class or studying.
  • eng4.—I am enthused about my studies.
  • eng5.—My studies inspire me to do new things.
  • eng6.—I am proud of doing this degree.
  • eng7.—I am happy when I am doing tasks related to my studies.
  • eng8.—I am involved in my studies.
  • eng9.—I “go with the flow” when I am doing my tasks as a student.
  • str1.—I tend to plan the time I am going to spend studying.
  • str2.—I start studying from the beginning of the course.
  • str3.—I take notes of the teachers’ explanations.
  • str4.—I take notes from a classmate’s notes.
  • str5.—When I take notes, I copy what the teacher says literally.
  • str6.—I expand the information with complementary bibliography.
  • str7.—I have difficulties in following the teacher’s explanations in class.
  • str8.—Before studying in depth I do a superficial reading.
  • str9.—I make outlines of the material I am going to study.
  • str10.—My way of studying changes depending on whether the exam of the subject is multiple-choice or essay.
  • str11.—When I study for an exam, I think of questions that could be included in it.
  • str12.—I memorize the notes for the exam day.
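The items above group into the subfactors listed in Table 1 (vigor, dedication, absorption; organization, pre-study, study). As an illustration only (the paper does not report its scoring code), the following sketch computes mean subscale scores from the Table 1 item-to-subfactor mapping, assuming each item is answered on a numeric Likert scale; the `score_subscales` function and the `responses` format are hypothetical.

```python
# Hypothetical scoring sketch (not taken from the paper): mean subscale
# scores from the Table 1 item-subfactor mapping, assuming numeric
# (e.g., 1-5 Likert) item responses keyed by the appendix item names.
SUBSCALES = {
    "vigor": ["eng1", "eng2", "eng3"],
    "dedication": ["eng4", "eng5", "eng6"],
    "absorption": ["eng7", "eng8", "eng9"],
    "organization": ["str1", "str2"],
    "pre-study": ["str3", "str4", "str5", "str6"],
    "study": ["str8", "str9", "str10", "str11", "str12"],
}

def score_subscales(responses):
    """Return the mean item score for each subscale in SUBSCALES."""
    return {
        name: sum(responses[item] for item in items) / len(items)
        for name, items in SUBSCALES.items()
    }
```

Note that the mapping follows Table 1 verbatim; str7 is not assigned to any subfactor there, and Table 2 additionally reports no loading for str10, so a faithful replication might drop those items.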

References

  1. Andrade, H.L. A critical review of research on student self-assessment. Front. Educ. 2019, 27. [Google Scholar] [CrossRef] [Green Version]
  2. Brown, G.T.; Harris, L.R. The Future of Self-Assessment in Classroom Practice: Reframing Self-Assessment as a Core Competency. Front. Learn. Res. 2014, 2, 22–30. [Google Scholar] [CrossRef] [Green Version]
  3. Panadero, E.; Brown, G.T.; Strijbos, J.W. The future of student self-assessment: A review of known unknowns and potential directions. Educ. Psychol. Rev. 2016, 28, 803–830. [Google Scholar] [CrossRef] [Green Version]
  4. Tan, K.H.K. Student Self-Assessment: Assessment, Learning and Empowerment; Research Publishing: Singapore, 2012. [Google Scholar]
  5. Fletcher, A. Australia’s National Assessment Programme rubrics: An impetus for self-assessment? Educ. Res. 2020, 1–22. [Google Scholar] [CrossRef]
  6. Panadero, E.; Jonsson, A.; Botella, J. Effects of self-assessment on self-regulated learning and self-efficacy: Four meta-analyses. Educ. Res. Rev. 2017, 22, 74–98. [Google Scholar] [CrossRef]
  7. Jonsson, A.; Panadero, E. Facilitating students’ active engagement with feedback. In The Cambridge Handbook of Instructional Feedback; Cambridge Handbooks in Psychology; Lipnevich, A., Smith, J., Eds.; Cambridge University Press: Cambridge, UK, 2018; pp. 531–553. [Google Scholar] [CrossRef]
  8. Van der Kleij, F.M.; Lipnevich, A.A. Student perceptions of assessment feedback: A critical scoping review and call for research. Educ. Assess. Eval. Account. 2020, 1–29. [Google Scholar] [CrossRef]
  9. Wanner, T.; Palmer, E. Formative self-and peer assessment for improved student learning: The crucial factors of design, teacher participation and feedback. Assess. Eval. High. Educ. 2018, 43, 1032–1047. [Google Scholar] [CrossRef]
  10. Snead, L.O.; Freiberg, H.J. Rethinking student teacher feedback: Using a self-assessment resource with student teachers. J. Teach. Educ. 2019, 70, 155–168. [Google Scholar] [CrossRef]
  11. Boud, D.; Soler, R. Sustainable assessment revisited. Assess. Eval. High. Educ. 2016, 41, 400–413. [Google Scholar] [CrossRef] [Green Version]
  12. Boud, D.; Brew, A. Developing a typology for learner self-assessment practices. Res. Dev. High. Educ. 1995, 18, 130–135. [Google Scholar]
  13. Tan, K.H. Meanings and practices of power in academics’ conceptions of student self-assessment. Teach. High. Educ. 2009, 14, 361–373. [Google Scholar] [CrossRef]
  14. United Nations. Sustainable Development Goals; UN: New York, NY, USA, 2021; Available online: https://www.un.org/sustainabledevelopment/ (accessed on 30 March 2021).
  15. Ayuso, A. Naciones Unidas: Revisando los retos de la agenda de desarrollo. CIBOB Rep. 2020, 6, 9–17. Available online: https://www.cidob.org/articulos/cidob_report/n1_6/naciones_unidas_revisando_los_retos_de_la_agenda_de_desarrollo (accessed on 30 March 2021).
  16. Yan, Z. Self-assessment in the process of self-regulated learning and its relationship with academic achievement. Assess. Eval. High. Educ. 2020, 45, 224–238. [Google Scholar] [CrossRef]
  17. Yan, Z.; Brown, G.T. A cyclical self-assessment process: Towards a model of how students engage in self-assessment. Assess. Eval. High. Educ. 2017, 42, 1247–1262. [Google Scholar] [CrossRef]
  18. Cheng, G.; Chau, J. Exploring the relationship between students’ self-regulated learning ability and their ePortfolio achievement. Internet High. Educ. 2013, 17, 9–15. [Google Scholar] [CrossRef]
  19. Andrade, H.L. Students as the definitive source of formative assessment: Academic self-assessment and the self-regulation of learning. In Handbook of Formative Assessment; Andrade, H.L., Cizek, G.J., Eds.; Routledge: New York, NY, USA, 2010; pp. 90–105. [Google Scholar]
  20. Hudesman, J.; Crosby, S.; Flugman, B.; Issac, S.; Everson, H.; Clay, D.B. Using formative assessment and metacognition to improve student achievement. J. Dev. Educ. 2013, 37, 1–13. [Google Scholar]
  21. Chang, C.-C.; Liang, C.; Chen, Y.-H. Is learner self-assessment reliable and valid in a Web-based portfolio environment for high school students? Comput. Educ. 2013, 60, 325–334. [Google Scholar] [CrossRef]
  22. Van Reybroeck, M.; Penneman, J.; Vidick, C.; Galand, B. Progressive treatment and self-assessment: Effects on students’ automatisation of grammatical spelling and self-efficacy beliefs. Read. Writ. 2017, 30, 1965–1985. [Google Scholar] [CrossRef]
  23. Hartmeyer, R.; Stevenson, M.P.; Bentsen, P. Evaluating design-based formative assessment practices in outdoor science teaching. Educ. Res. 2016, 58, 420–441. [Google Scholar] [CrossRef]
  24. Pastor, V.M.L.; Pascual, M.G.; Martín, J.J.B. La participación del alumnado en la evaluación: La autoevaluación, la coevaluación y la evaluación compartida. Tándem Didáct. Educ. Física 2005, 17, 21–37. [Google Scholar]
  25. Bingham, G.; Holbrook, T.; Meyers, L.E. Using self-assessments in elementary classrooms. Phi Delta Kappa Inter. 2010, 91, 59–61. [Google Scholar] [CrossRef]
  26. Dearnley, C.A.; Meddings, F.S. Student self-assessment and its impact on learning–A pilot study. Nurse Educ. Today 2007, 27, 333–340. [Google Scholar] [CrossRef]
  27. Schuster, C.; Stebner, F.; Leutner, D.; Wirth, J. Transfer of metacognitive skills in self-regulated learning: An experimental training study. Metacog. Learn. 2020, 15, 455–477. [Google Scholar] [CrossRef]
  28. Braund, H.; DeLuca, C. Elementary students as active agents in their learning: An empirical study of the connections between assessment practices and student metacognition. Aust. Educ. Res. 2018, 45, 65–85. [Google Scholar] [CrossRef]
  29. Cagasan, L.; Care, E.; Robertson, P.; Luo, R. Developing a formative assessment protocol to examine formative assessment practices in the Philippines. Educ. Assess. 2020, 1–17. [Google Scholar] [CrossRef]
  30. Rhodes, M.G. Metacognition. Teach. Psychol. 2019, 46, 168–175. [Google Scholar] [CrossRef]
  31. Kostons, D.; van Gog, T.; Paas, F. Self-assessment and task selection in learner-controlled instruction: Differences between effective and ineffective learners. Comput. Educ. 2010, 54, 932–940. [Google Scholar] [CrossRef]
  32. Lui, A.M. Validity of the Responses to Feedback Survey: Operationalizing and Measuring Students’ Cognitive and Affective Responses to Teacher Feedback. Ph.D. Thesis, University at Albany, State University of New York, Albany, NY, USA, 2020. [Google Scholar]
  33. Spanish National Institute of Statistics. Females in the Teaching Body According to the Grade Level They Teach. Available online: http://www.ine.es/ss/Satellite?L=es_ES&c=INESeccion_C&cid=1259925481851&p=1254735110672&pagename=ProductosYServicios%2FPYSLayout&param3=1259924822888 (accessed on 7 February 2021).
  34. Schaufeli, W.; Bakker, A.; Salanova, M. The measurement of work engagement with a short questionnaire. A cross-national study. Educ. Psychol. Meas. 2006, 66, 701–716. [Google Scholar] [CrossRef]
  35. Herrera-Torres, L.; Lorenzo-Quiles, O. Estrategias de aprendizaje en estudiantes universitarios. Un aporte a la construcción del Espacio Europeo de Educación Superior. Educ. Educad. 2009, 12, 75–98. [Google Scholar]
  36. Asociación Médica Mundial (AMM). Declaración de Helsinki. 64ª Asamblea General, Fortaleza, Brasil, Octubre. In Principios Éticos para las Investigaciones Médicas en Seres Humanos; Asociación Médica Mundial: Ferney-Voltaire, France, 2013. [Google Scholar]
  37. Rosseel, Y. Lavaan: An R Package for Structural Equation Modeling. J. Stat. Softw. 2012, 48, 1–36. [Google Scholar] [CrossRef] [Green Version]
  38. Finney, S.J.; DiStefano, C. Non normal and categorical data in structural equation modeling. In Structural Equation Modeling: A Second Course, 2nd ed.; Hancock, G.R., Mueller, R.O., Eds.; Information Age: Charlotte, NC, USA, 2013; pp. 439–492. [Google Scholar]
  39. Cano-Lozano, M.C.; Rodríguez-Díaz, F.J.; León, S.P.; Contreras, L. Analyzing the relationship between child-to-parent violence and perceived parental warmth. Front. Psychol. 2020, 11. [Google Scholar] [CrossRef] [PubMed]
  40. Pineda-Marín, C.; Muñoz-Sastre, M.T.; Villamarín, D.G.; Espitia, M.C.; Mullet, E. Colombian People’s Willingness to Forgive Offenses against Women Perpetrated during the Armed Conflict. Rev. Latinoame. Psicol. 2019, 51, 226–235. [Google Scholar] [CrossRef] [Green Version]
  41. Hair, J.F.; Black, W.C.; Babin, B.J.; Anderson, R.E. Multivariate Data Analysis: Global Edition; Pearson Education Limited: London, UK, 2020. [Google Scholar]
  42. Sheard, M. Hardiness commitment, gender, and age differentiate university academic performance. Br. J. Educ. Psychol. 2009, 79, 189–204. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  43. Brown, G.T.; Peterson, E.R.; Yao, E.S. Student conceptions of feedback: Impact on self-regulation, self-efficacy, and academic achievement. Br. J. Educ. Psychol. 2016, 86, 606–629. [Google Scholar] [CrossRef]
  44. Capdevilla-Seder, A.C.; Bellmunt-Villalonga, H.B. Importance of study habits on adolescents’ academic achievement: Gender differences. Educ. Siglo XXI 2016, 34, 157–172. [Google Scholar] [CrossRef] [Green Version]
  45. Martin, A.J.; Ginns, P.; Papworth, B. Motivation and engagement: Same or different? Does it matter? Learn. Individ. Differ. 2017, 55, 150–162. [Google Scholar] [CrossRef]
  46. Siddiq, F.; Gochyyev, P.; Valls, O. The role of engagement and academic behavioral skills on young students’ academic performance—A validation across four countries. Stud. Educ. Eval. 2020, 66. [Google Scholar] [CrossRef]
  47. Diseth, Å. Self-efficacy, goal orientations and learning strategies as mediators between preceding and subsequent academic achievement. Learn. Individ. Differ. 2011, 21, 191–195. [Google Scholar] [CrossRef]
  48. Munns, G.; Woodward, H. Student engagement and student self-assessment: The REAL framework. Assess. Educ. Princ. Policy Pract. 2006, 13, 193–213. [Google Scholar] [CrossRef]
  49. Dunning, D.; Johnson, K.; Ehrlinger, J.; Kruger, J. Why people fail to recognize their own incompetence. Curr. Dir. Psychol. Sci. 2003, 12, 83–87. [Google Scholar] [CrossRef]
  50. Boud, D.; Lawson, R.; Thompson, D.G. Does student engagement in self-assessment calibrate their judgement over time? Assess. Eval. High. Educ. 2013, 38, 941–956. [Google Scholar] [CrossRef]
  51. Nugteren, M.L.; Jarodzka, H.; Kester, L.; Van Merriënboer, J.J. Self-regulation of secondary school students: Self-assessments are inaccurate and insufficiently used for learning-task selection. Instr. Sci. 2018, 46, 357–381. [Google Scholar] [CrossRef] [Green Version]
  52. Ehrlinger, J.; Shain, E.A. How accuracy in students’ self perceptions relates to success in learning. In Applying Science of Learning in Education: Infusing Psychological Science into the Curriculum; Benassi, V.A., Overson, C.E., Hakala, C.M., Eds.; American Psychological Association: Washington, DC, USA, 2014; pp. 142–151. [Google Scholar]
  53. Van der Zanden, P.J.; Denessen, E.; Cillessen, A.H.; Meijer, P.C. Domains and predictors of first-year student success: A systematic review. Educ. Res. Rev. 2018, 23, 57–77. [Google Scholar] [CrossRef]
  54. Cain, K.M.; Wilkowski, B.M.; Barlett, C.P.; Boyle, C.D.; Meier, B.P. Do we see eye to eye? Moderators of correspondence between student and faculty evaluations of day-to-day teaching. Teach. Psychol. 2018, 45, 107–114. [Google Scholar] [CrossRef]
  55. Siegesmund, A. Using self-assessment to develop metacognition and self-regulated learners. FEMS Microbiol. Lett. 2017, 364. [Google Scholar] [CrossRef]
  56. Siegesmund, A. Increasing student metacognition and learning through classroom-based learning communities and self-assessment. J. Microbiol. Biol. Educ. 2016, 17, 204. [Google Scholar] [CrossRef]
  57. Zamora, Á.; Súarez, J.M.; Ardura, D. A model of the role of error detection and self-regulation in academic performance. J. Educ. Res. 2018, 111, 595–602. [Google Scholar] [CrossRef]
Figure 1. SEM model proposed.
Table 1. Structure of the scales used.
Factors (Acronym)    Subfactor       Items
Engagement (eng)     vigor           eng1 to eng3
                     dedication      eng4 to eng6
                     absorption      eng7 to eng9
Strategy (str)       organization    str1 to str2
                     pre-study       str3 to str6
                     study           str8 to str12
Table 2. Factor loading of the latent variables in Engagement and Strategies scales.
Latent Factor            Subfactor     Indicator   B        SE      Z        p        β       R²      CR      AVE
Engagement (ENG)         vigor         eng1        0.822    0.040   20.449   <0.001   0.663   0.439
                         vigor         eng2        0.756    0.038   19.769   <0.001   0.611   0.373
                         vigor         eng3        0.858    0.042   20.602   <0.001   0.625   0.390
                         vigor         (overall)                                                     0.668   0.401
                         dedication    eng4        0.512    0.031   16.625   <0.001   0.665   0.442
                         dedication    eng5        0.772    0.044   17.386   <0.001   0.777   0.604
                         dedication    eng6        0.193    0.015   12.852   <0.001   0.493   0.243
                         dedication    (overall)                                                     0.722   0.516
                         absorption    eng7        0.521    0.038   13.891   <0.001   0.549   0.301
                         absorption    eng8        0.508    0.037   13.679   <0.001   0.494   0.244
                         absorption    eng9        0.507    0.042   11.971   <0.001   0.350   0.123
                         absorption    (overall)                                                     0.454   0.223
Study strategies (STR)   organization  str1        0.846    0.072   11.806   <0.001   0.548   0.300
                         organization  str2        0.724    0.061   11.806   <0.001   0.484   0.234
                         organization  (overall)                                                     0.421   0.268
                         pre-study     str3        0.505    0.040   12.606   <0.001   0.396   0.156
                         pre-study     str4        0.349    0.053   6.551    <0.001   0.202   0.041
                         pre-study     str5        0.710    0.051   13.966   <0.001   0.474   0.225
                         pre-study     str6        1.127    0.068   16.646   <0.001   0.679   0.461
                         pre-study     (overall)                                                     0.493   0.223
                         study         str8        0.641    0.061   10.546   <0.001   0.421   0.177
                         study         str9        0.635    0.062   10.176   <0.001   0.368   0.135
                         study         str11       0.375    0.047   7.923    <0.001   0.258   0.067
                         study         str12       0.776    0.069   11.211   <0.001   0.459   0.210
                         study         (overall)                                                     0.403   0.151
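The CR and AVE columns in Table 2 appear to follow the standard composite reliability and average variance extracted formulas computed from the standardized loadings (β). As a check and illustration (the paper does not show this computation), the sketch below applies those standard formulas to the reported "vigor" loadings; both function names are our own.

```python
# Sketch of the standard composite reliability (CR) and average variance
# extracted (AVE) formulas, applied to the standardized loadings (beta)
# that Table 2 reports for the "vigor" subfactor of the Engagement scale.
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)  # error variance per indicator
    return s ** 2 / (s ** 2 + error)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

vigor = [0.663, 0.611, 0.625]  # beta for eng1-eng3 in Table 2
print(round(composite_reliability(vigor), 3))       # 0.667 (table: 0.668, from unrounded loadings)
print(round(average_variance_extracted(vigor), 3))  # 0.401, matching the table
```

The recovered values match Table 2 to rounding, which supports reading those columns as conventional Fornell–Larcker CR and AVE statistics.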
Table 3. Regression and correlation factors from structural equation modeling.
Latent Variables     Estimate    SE      Z         p        Std. Estimate
ENG ~
  EXA                0.201       0.033   6.048     <0.001   0.201
  SSA                0.540       0.029   18.400    <0.001   0.540
  CJ                 0.031       0.031   1.004     0.316    0.031
  SSAS               11.555      0.861   13.418    <0.001   0.410
STR ~
  EXA                0.303       0.029   10.428    <0.001   0.303
  SSA                0.270       0.029   9.458     <0.001   0.270
  CJ                 0.604       0.039   15.574    <0.001   0.604
  SSAS               7.930       0.697   11.374    <0.001   0.281
ENG ~~
  STR                0.322       0.040   7.970     <0.001   0.322
EXA ~~
  SSA                0.051       0.027   1.895     0.058    0.077
  CJ                 0.193       0.029   6.569     <0.001   0.269
  SSAS               −15.941     0.869   −18.347   <0.001   −0.756
SSA ~~
  CJ                 0.072       0.026   2.770     0.006    0.124
  SSAS               8.299       0.777   10.681    <0.001   0.487
CJ ~~
  SSAS               −1.317      0.676   −1.947    0.051    −0.072
Notes. EXA = Exam assessment; SSA = Student Self-assessment; CJ = Confidence judgment; SSAS = Student self-assessment skill; ENG = engagement; STR = Study strategies; ~ indicates regression relationship; ~~ indicates correlation relationship.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

León, S.P.; Augusto-Landa, J.M.; García-Martínez, I. Moderating Factors in University Students’ Self-Evaluation for Sustainability. Sustainability 2021, 13, 4199. https://doi.org/10.3390/su13084199

